EU's First AI Law: Here's the Implementation Timeline
By Diego Valverde | Journalist & Industry Analyst
Tue, 05/28/2024 - 09:08
The Council of the European Union has approved the world's first law regulating the development, production, and use of artificial intelligence (AI) systems. Approved by the European Parliament in March and now officially enacted, this legislation sets a global benchmark for AI regulation and aims to foster the creation of safe and reliable systems throughout the EU market.
The initiative began in April 2021, when the European Commission, led by Commissioner for the Internal Market Thierry Breton, proposed the AI law to promote the development and adoption of safe and lawful AI across the single market, ensuring respect for fundamental rights.
In March 2024, the European Commission published the EU Regulation on Artificial Intelligence (AI), with the goal of improving the internal market’s functionality and promoting the adoption of human-centered, trustworthy AI. This regulation aims to ensure a high level of protection for health, safety, fundamental rights, democracy, the rule of law, and the environment against the impact of AI systems, while also supporting innovation.
On May 21, 2024, the European Union officially enacted the AI Act, acknowledging the increasing integration of AI into various aspects of society and the economy. Institutions like the International Monetary Fund have highlighted the ethical and societal challenges posed by AI.
"It is crucial to regulate the development of AI to avoid potential abuses and ensure its benefit to society," said Margrethe Vestager, Executive Vice President of the European Commission.
According to a press release from the Council of the European Union, the law will enter into force 20 days after its enactment, with a phased implementation of its requirements. Within six months, certain AI applications will be banned in the EU, including those that exploit people's vulnerabilities or involve the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, as reported by Wired. This initial phase aims to curb potentially harmful uses of AI technology and protect individuals' privacy and rights.
One year after enactment, in the second quarter of 2025, the requirements for general-purpose AI (GPAI) models will come into force. GPAI model vendors will need to ensure compliance with the stipulated regulations, marking a shift towards greater transparency and accountability in AI system development and deployment.
By the second quarter of 2026, most rules governing high-risk AI systems and AI systems with specific transparency risks will start to be implemented. This phase will focus on enhancing the safety and reliability of AI applications, especially those that may cause significant harm or affect societal welfare. By the second quarter of 2027, these obligations will extend to the remaining categories of high-risk AI systems.
A grace period will be provided for AI systems and GPAI models already on the market at the time the relevant requirements come into force, extending compliance until approximately the second quarter of 2027.
Operators of high-risk AI systems offered in the EU before the relevant requirements start to apply will need to comply with the AI Act only in the event of a significant design change. However, for high-risk AI systems intended for use by public authorities, suppliers and implementers must comply within six years of the AI Act coming into force, likely around the second quarter of 2030.
The AI Office, integrated into the European Commission, will oversee the proper implementation of the law.
The adoption of this law has generated mixed reactions among stakeholders. Some, such as Microsoft President Brad Smith, have praised it as "an example for the world" in regulating emerging technologies. Others, however, including Meta and Amazon, have expressed concerns that the law could affect Europe's technological sovereignty and hinder innovation.
"While we recognize the importance of regulating AI to protect citizens' rights, we believe a balance needs to be struck that does not stifle innovation," said Yann LeCun, Chief AI Scientist, Meta.
"There are a whole range of areas where I think the risks are minimal and we should let innovation run that way," said Werner Vogels, Amazon's chief technology officer, "In other areas where mistakes can have a greater impact on people's lives I understand that risks should be managed, but in a way that is unique to that particular area."
According to the European Commission's press release, the enactment of this law marks a turning point in the global AI landscape. "Europe is sending a clear signal to the world that AI must be developed in a safe, ethical and transparent way," said Mariya Gabriel, European Commissioner for Innovation, Research, Culture, Education and Youth.
This regulation is expected to foster the growth of European startups and ventures in AI by providing clarity and transparency in regulatory standards, creating a level playing field to compete with large corporations. Incentives for research and facilitated access to funding are intended to stimulate the creation and expansion of new AI companies in Europe.