Highlights:

  • The four companies formed the Frontier Model Forum to advance AI research, collaborate with stakeholders, and ensure trustworthy and safe AI.
  • Only companies developing large-scale machine learning models that surpass existing capabilities are eligible to join the forum.

Google LLC, Microsoft Corp., ChatGPT developer OpenAI LP, and Anthropic, an artificial intelligence research startup, recently announced the formation of a new industry body to ensure the development of safe and responsible AI models.

In a joint statement recently posted on Google’s blog, the four companies described the Frontier Model Forum, which will advance AI research, identify best practices, and work with governments, policymakers, and academics to pave the way for reliable and secure AI.

Brad Smith, Vice Chair and President of Microsoft, said, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Since its debut in November, OpenAI’s ChatGPT AI chatbot has exploded in popularity due to its ability to understand natural language and respond with convincing, human-like conversation. Since then, AI models have advanced significantly, enabling them to handle tasks such as research, writing lengthy essays, playing logic games, and more. Other models can produce convincing images and videos by drawing on existing material, raising concerns about privacy and impersonation.

The forum’s membership is exclusively reserved for companies developing large-scale machine-learning models that aim to surpass the capabilities of existing models. According to the statement, these members will be responsible for creating and implementing innovative “frontier models” while prioritizing safety and collaborating on joint initiatives focused on trust, community outreach, and overall well-being.

Over the next few months, the Frontier Model Forum plans to form an advisory board to steer its priorities. The founding companies will take a central role in making decisions and consulting governments and the community on responsibly overseeing AI model advancement.

According to the statement, the founding companies aim to emulate the success of other initiatives such as the Partnership on AI and MLCommons, which have been instrumental in promoting ethical and responsible AI development.

The forum’s establishment comes as governments worldwide increasingly recognize the immense potential of AI models and the rapidly growing industry around them. As part of this development, all forum members recently made voluntary commitments in conjunction with the White House, aimed at addressing mounting concerns about the risks associated with AI models. President Joe Biden discussed the matter with his science and technology advisors in April, focusing on the impact of AI technology.

The Federal Trade Commission recently opened an investigation into OpenAI’s business practices after an artificial intelligence ethics group requested that the commission look into the startup.

Simultaneously, the European Union has been actively advancing its plans for the “AI Act,” which is set to be the world’s first comprehensive AI law regulating the use of AI within the EU. The legislation has been under discussion since April. It aims to strike a delicate balance between promoting the growth of the AI industry and safeguarding the interests of consumers and human rights, including privacy, surveillance, copyright, and other related concerns.