Highlights:

  • The goal involves creating guidelines for red teaming, safety and capability assessments, security, trustworthiness, and watermarking AI-generated content.
  • As per the announcement, the consortium comprises the most extensive gathering of test and evaluation teams ever brought together in the nation.

The U.S. Department of Commerce recently revealed the establishment of the U.S. AI Institute Consortium. This initiative aims to unite leading technology firms, government researchers, and academics to foster the creation of reliable and secure standards for artificial intelligence.

The freshly formed consortium within the U.S. Artificial Intelligence (AI) Safety Institute (USAISI) aligns with President Biden’s October executive order. This order outlines directives for secure AI development, encompassing industry regulations, security standards, and consumer protections.

Over 200 tech companies have joined the AI and safety related consortium, featuring prominent AI firms like OpenAI, Anthropic PBC, Google LLC, Microsoft Corp., Amazon Inc., Meta Platforms Inc., and AI chipmaker Nvidia Corp. Notable industry giants, such as Apple Inc., IBM Corp., Cisco Systems Inc., Intel Corp., and Qualcomm Inc., have also become part of the organization.

Gina Raimondo, U.S. Secretary of Commerce, said, “The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

As per the announcement, the consortium comprises the most extensive gathering of test and evaluation teams ever assembled in the country. It also encompasses members from state and local governments, along with non-profits, and is responsible for collaborating with teams from other nations to achieve its objectives.

The objective involves creating guidelines for red teaming, safety and capability assessments, trustworthiness, security, and the watermarking of AI-generated content.

Red teaming involves an ethical security risk assessment where a team tries to penetrate security systems established by another team, functioning as an “enemy” or “red” team to expose potential flaws in defenses. In AI safety research, red teaming extends to efforts that prompt the AI to hallucinate, challenging it to simulate actions, produce inaccurate results, or generate potentially harmful content. This process aids researchers in enhancing the reliability and trustworthiness of AI systems.

Dr. Richard Searle, Vice President of Confidential Computing at Fortanix Inc., a provider of encrypted and trusted computing and an inaugural participant in the consortium, expressed, “As adoption of AI systems increases across different industry domains, it is vital that appropriate attention is given to individual data privacy, systemic safety and security, and the interoperability of data, models, and infrastructure.”

The announcement follows closely after OpenAI and Meta revealed their intention to label AI-generated images with metadata. Users and fact-checkers will find it simpler to recognize media produced by AI sources thanks to this action. Both companies had voluntarily committed to the White House AI for safety initiative in July, pledging to label AI-created content. Additionally, Google introduced its digital watermarking capability, SynthID, for AI-generated content.