EU agrees world's first AI rulebook
The regulations aim to strike a balance between innovation and protection
EU negotiators have announced the world's first comprehensive set of rules to regulate artificial intelligence, described as a "launch pad" to lead the global AI race.
The agreement, known as the AI Act, is set to establish legal oversight for AI technology, which promises to revolutionise everyday life - but also raises concerns about existential threats to humanity.
Negotiators from the European Parliament and the 27 EU member states reached a provisional political agreement after nearly 38 hours of marathon closed-door talks.
Over the last year, generative AI systems like ChatGPT have garnered attention for their ability to produce human-like text, photos, and songs. However, concerns about the potential risks to jobs, privacy, copyright protection and even human life have accompanied their rapid development.
The AI Act acknowledges and regulates these diverse applications, aiming to strike a balance between fostering innovation and ensuring ethical and responsible AI development.
The AI Act, initially proposed in 2021, is slated for a parliamentary vote early next year, with implementation expected no earlier than 2025.
EU commissioner Thierry Breton hailed the agreement as "historic," noting that it establishes "clear rules for the use of AI." He underscored that the AI Act goes beyond being a mere rulebook, describing it as a "launch pad for EU start-ups and researchers to lead the global AI race."
"Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. This is yes, I believe, a historical day," Breton told a press conference.
European Commission president Ursula von der Leyen echoed Breton's sentiments, saying the AI Act aims to foster the development of technology that safeguards people's safety and rights.
"Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act," she said.
What's covered?
The new agreement outlines stringent obligations for both foundation models, including ChatGPT, and general-purpose AI systems. These systems must comply with certain transparency requirements before entering the market, which means the creation of technical documentation, adherence to EU copyright law and the dissemination of detailed summaries about the content used for training.
Tech companies must also thoroughly test their AI systems, especially for high-risk applications like self-driving vehicles and healthcare.
The law prohibits indiscriminate image scraping for facial recognition databases, while allowing exemptions for "real-time" facial recognition by law enforcement in the investigation of terrorism and serious crimes.
Companies violating the law could face fines of up to €35m, or seven per cent of their global turnover.
While the deal has been lauded as a landmark achievement, some campaign groups expressed reservations.
DigitalEurope, a business advocacy group, criticised the rules, claiming that the attempt to regulate foundation models added an unnecessary burden to companies.
Director General Cecilia Bonefeld-Dahl said, "We have a deal, but at what cost?"
Privacy rights group European Digital Rights also raised concerns about the legislation, particularly around live public facial recognition across the EU.
Senior policy advisor Ella Jakubowska said, "It's hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc."
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."