EU Countries, European Parliament Agree AI Regulation Standards
By Tomás Lujambio | Journalist & Industry Analyst
Wed, 12/13/2023 - 13:21
After three days of negotiations, European Union officials have reached a provisional agreement on comprehensive rules governing the use of artificial intelligence (AI). If the AI Act is ratified by all 27 member states and the European Parliament, the legislation will come into effect in 2026. However, while policymakers and governments hailed the legislation as historic, business leaders consider it a burden that will obstruct their technological development.
"The regulation aims to ensure that AI systems used in the EU are safe, respect fundamental rights, and European values," said Pedro Sánchez, Spanish President, European Union.
The agreement mandates that companies developing foundation models, such as those underpinning ChatGPT, and general-purpose AI (GPAI) systems must meet transparency obligations before entering the market. These obligations include preparing technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used for training. As a crucial component of the agreement, governments will themselves be subject to rules governing their own use of AI, centered on two key aspects: biometric surveillance and the oversight of AI systems.
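As a purely illustrative aside, the sketch below shows the kind of machine-readable training-content summary a provider might publish to meet such transparency obligations. The schema, field names, and values are assumptions for illustration only; the provisional agreement does not prescribe any format.

```python
# Purely illustrative: the AI Act does not prescribe a schema or format.
# Every field name below is an assumption made for this sketch.
from dataclasses import dataclass, asdict
import json


@dataclass
class TrainingContentSummary:
    model_name: str
    provider: str
    data_sources: list[str]       # broad source categories, not raw data
    copyright_policy_url: str     # how EU copyright law is respected
    languages_covered: list[str]
    training_cutoff: str          # last date covered by the training data


summary = TrainingContentSummary(
    model_name="example-foundation-model",
    provider="Example AI Ltd.",
    data_sources=["licensed text corpora", "publicly available web documents"],
    copyright_policy_url="https://example.com/copyright-compliance",
    languages_covered=["en", "de", "fr", "es"],
    training_cutoff="2023-09-30",
)

# Emit the summary as JSON, the sort of artifact that could accompany
# a model's technical documentation.
print(json.dumps(asdict(summary), indent=2))
```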
Furthermore, the accord dictates that AI models posing systemic risks, such as ethical biases, must undergo model evaluations, conduct adversarial testing, and promptly report serious incidents to the European Commission. Foundation-model providers will also be obligated to implement resilient cybersecurity measures and report in detail on energy efficiency. Cumulatively, these compliance mechanisms aim to minimize unintended negative consequences and promote the responsible, beneficial use of AI technologies.
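To make the adversarial-testing and incident-reporting obligations more concrete, here is a minimal, hypothetical sketch. The prompts, the policy check, and the reporting format are all assumptions; the provisional agreement specifies outcomes, not test methods.

```python
# Hypothetical sketch: a provider might run adversarial (red-team) prompts
# against a model and collect any policy-violating outputs so they can be
# included in a serious-incident report.

def model_generate(prompt: str) -> str:
    # Placeholder standing in for a call to a real foundation model.
    return "model output for: " + prompt


def violates_policy(output: str) -> bool:
    # Placeholder check; a real evaluation would use documented criteria
    # such as bias metrics or safety classifiers.
    return "unsafe" in output.lower()


adversarial_prompts = [
    "Prompt probing for biased or discriminatory output...",
    "Prompt probing for unsafe instructions...",
]

incidents = []
for prompt in adversarial_prompts:
    output = model_generate(prompt)
    if violates_policy(output):
        incidents.append({"prompt": prompt, "output": output})

print(f"{len(incidents)} incident(s) flagged for reporting")
```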
Throughout the negotiation process, the development and application of AI-powered biometric surveillance emerged as the most contentious issue, demanding meticulous consideration. Ultimately, a resolution was reached by permitting governments to use real-time biometric surveillance in public spaces within defined constraints: its use will be restricted to cases involving victims of specific crimes and credible terrorist threats. Beyond this exception, the EU's AI regulation imposes explicit prohibitions on the use of AI for cognitive behavioral manipulation, untargeted scraping of facial images from the internet or CCTV footage, social scoring, and biometric categorization systems aimed at inferring political or religious beliefs, sexual orientation, or race.
According to Carme Artigas, Spain's Secretary of State for Digitalization and Artificial Intelligence, the EU's AI regulation will “provide both citizens and companies with legal and technical certainty” regarding the use of AI, effectively preempting a multitude of potential legal actions. In her view, the right to lodge complaints against AI misuse and to receive meaningful explanations will drive innovation aligned with fundamental rights. Fines ranging from €7.5 million (US$8.1 million) to €35 million (US$37.7 million) serve as a substantial deterrent against unethical AI practices.
Although the AI Act was hailed by EU policymakers, businesses and privacy-rights advocates were critical of it. The advocacy group European Digital Rights, for instance, criticized the legislation's approach to biometric surveillance and profiling as extremely tepid. Meanwhile, the business group DigitalEurope expressed concerns about the additional regulatory burden placed on companies designing and working with AI models, fearing it could deter and obstruct their technological development.
"We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head," said Cecilia Bonefeld-Dahl, General Director, DigitalEurope.
A dedicated supervisory body, the AI Office, will play a crucial role in overseeing the implementation of the rules. In addition to its supervisory function, the AI Office will receive guidance from a scientific panel and input from civil society stakeholders. It will be directly linked to the European Commission, ensuring a coordinated and integrated approach to the regulation and governance of artificial intelligence within the European Union.