Highlights:

  • Policy questions that applicants can select or create include the level of personalization of AI assistants, how AI should respond to questions about the opinions of public figures, whether AI should help provide medical, financial, or legal advice, and potential restrictions on AI-generated content.
  • OpenAI announces a program that will award ten USD 100,000 grants to support research into creating a democratic process for determining the rules AI systems should abide by “within the bounds defined by the law.”

OpenAI Inc., the nonprofit parent of the artificial intelligence company OpenAI LP, recently unveiled a new program that will provide grants to support research into how to set up a democratic process for deciding what rules AI systems should adhere to.

The newly launched “Democratic Inputs to AI” program offers ten USD 100,000 grants to any group or individual looking to develop a method for addressing one or more of the policy questions on a provided list. Participants need no prior background in AI or social science.

Candidates are encouraged to consider policy questions that elicit more nuanced responses than a simple “yes” or “no,” with an emphasis on model behavior. OpenAI acknowledged that many other aspects of AI, such as usage guidelines and economic impact, could also benefit from democratic oversight, even though the grant focuses solely on model behavior.

Applicants can select or create policy questions, such as the level of personalization of AI assistants, AI responses to inquiries about the opinions of public figures, AI support for providing medical, financial, or legal advice, or potential restrictions on AI content. While the answers to these questions are important, OpenAI says the grant’s primary goal is to foster innovation in democratic methods for governing AI behavior, with the emphasis on improving the decision-making process itself.

The program may have been in development for some time, but the timing is notable: it comes after OpenAI CEO Sam Altman told a U.S. Senate hearing on May 16 that lawmakers should consider enacting new regulations to ensure the safety of AI systems.

During the hearing, Altman put forth several suggestions for AI regulation, stressing that companies need to build AI models that meet safety standards. He recommended internal and external testing of such systems before release, with the results disclosed, and suggested licensing or registration requirements scaled to the sophistication of the AI models involved.

AI regulation had already made headlines earlier, when Altman said a proposed AI law in the European Union could force OpenAI to shut down its operations in the region. The proposed EU AI Act would impose “design, information, and environmental” requirements on advanced models and require developers of foundation AI models to identify and address potential risks related to their products.