- French, German, and Italian lawmakers oppose direct regulation of generative AI models, arguing that the companies developing them should instead self-regulate through government-introduced codes of conduct.
- According to the EU, creators of “high-impact” general-purpose AI systems that fulfill specific benchmarks will be subject to “obligations.”
The European Union has reached a provisional agreement on new rules governing the development and use of artificial intelligence technologies, including systems such as ChatGPT.
The newly introduced EU AI Act, regarded as the world’s first comprehensive regulation of artificial intelligence, was reportedly hammered out through lengthy negotiations among officials from EU member states. Among other points of contention, negotiators disagreed over how to regulate generative AI models and biometric identification technologies such as fingerprint scanning and facial recognition.
Whether to directly regulate large language models, or generative AI models, was a point of debate: French, German, and Italian lawmakers argued that the companies building these models should instead self-regulate through government-mandated codes of conduct. According to reports, they worried that overly strict regulation would hamper Europe’s AI industry, which is racing to compete with American and Chinese companies. Some of the most promising generative AI startups, including Mistral AI and DeepL GmbH, are based in France and Germany.
According to Reuters, the EU’s AI Act marks the first regulation of its kind with a specific focus on AI technology. The development of this law spans several years, originating in 2021 when the European Commission initially suggested establishing a common legal framework for AI. As reported by the New York Times, the act categorizes AI into different levels of risk, ranging from “unacceptable,” warranting a ban, to “high,” “medium,” and “low-risk.”
EU Commissioner Thierry Breton called the deal “historic” in a post on X, formerly known as Twitter, adding that with it, “the EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook – it’s a launchpad for EU startups and researchers to lead the global AI race.”
AI regulation has gained prominence since OpenAI launched ChatGPT in late 2022. The chatbot’s ability to hold humanlike conversations, generate original software code, and perform a range of other tasks prompted technology firms to develop similar AI models. Generative AI is widely expected to reshape fields such as internet search, email composition, and image generation, and to significantly boost the productivity of business professionals.
Lawmakers have been caught off guard by the rapid rise of ChatGPT and other generative AI systems, including the Stable Diffusion image generator, Bard from Google LLC, and Claude from Anthropic PBC. They worry that these platforms could spread hate speech and disinformation, displace jobs, and violate people’s privacy and copyrights.
Breton claims that the EU’s AI Act will mandate that AI firms reveal the inner workings of their models and assess any “systemic risk.”
According to an official statement, the EU outlines “obligations” for developers of “high-impact” general-purpose AI systems that satisfy specific criteria. These obligations include undergoing risk assessments, engaging in adversarial testing, providing incident reports, and more. The legislation also sets transparency requirements, obliging creators of AI systems to furnish technical documentation containing comprehensive summaries of the data used in their training — a practice that certain U.S. companies, including Google and OpenAI, have consistently declined to adopt.
Moreover, the legislation includes a provision stating that EU residents should be afforded a mechanism to file complaints about AI systems and be provided with explanations of how “high-risk” systems may influence their rights.
The announcement offered little detail about the specific benchmarks, and it also remained silent on how the rules will be enforced. It did, however, lay out a structure for imposing fines on companies found to have violated the rules. Penalties will depend on the size of the company and the severity of the violation, ranging from 7.5 million euros or 1.5% of global revenue up to 35 million euros (about $37.6 million) or 7% of global revenue.
Specific applications and activities are subject to prohibition. For example, it will be illegal to scrape facial images from CCTV footage or to classify individuals based on sensitive characteristics such as race, sexual orientation, and political beliefs.
Furthermore, the use of emotion recognition systems will be prohibited in workplaces and educational institutions. The development of “social scoring systems,” akin to China’s social credit system, will also face restrictions. Additionally, AI systems with the potential to “manipulate human behavior to circumvent free will” and “exploit the vulnerabilities of individuals” will be banned. Analysts anticipate that these broadly defined regulations will empower lawmakers to take action against individuals attempting to manipulate government elections using AI systems.
Certain exceptions apply to the regulations. For instance, law enforcement agencies will retain the authorization to utilize AI-powered biometric technologies to search for evidence in recordings or in real time.
Enacting regulations tailored to the governance of AI represents a significant milestone: as Holger Mueller of Constellation Research Inc. noted, the EU is set to become the first to establish comprehensive rules for AI development and use. Mueller contends that the rules will streamline compliance for technology companies working on AI, particularly startups that may lack the resources to build large compliance teams. The analyst said, “The EU appears to be on track to regulate data usage too, specifically with regards to personally identifiable information and biometric data, which will also make things easier. But it’s too early to tell if the EU has struck the right balance to ensure the safe adoption of AI without stifling innovation.”
While legislators have reached a tentative agreement, several specifics still require finalization. Even once complete, the act is unlikely to take effect before 2025.
Enza Iannopollo, an analyst at Forrester Research Inc., told Reuters that the EU’s AI Act is “good news” for both businesses and society, though she anticipates it will inevitably draw some criticism. “For businesses, it starts providing companies with a solid framework for the assessment and mitigation of risks, that – if unchecked – could hurt customers and curtail businesses’ ability to benefit from their investments in the technology. And for society, it helps protect people from potential, detrimental outcomes,” the analyst said.