European lawmakers have passed the world’s first major act to regulate Artificial Intelligence (AI), after the European Parliament on Wednesday approved the ground rules that will govern the technology.
The legislation is a comprehensive framework designed to ensure AI is developed and used in a responsible and ethical manner.
Also, the act is poised to reshape how businesses and organizations in Europe use AI for everything from healthcare decisions to policing. It imposes blanket bans on some “unacceptable” uses of the technology while enacting stiff guardrails for other applications deemed “high-risk.”
The president of the European Parliament, Roberta Metsola, described the act as trailblazing, saying it would enable innovation while safeguarding fundamental rights.
In a social media post, she wrote:
“Artificial Intelligence is already very much a part of our lives. Now, it will be part of our legislation too.”
Also commenting on the act, the European Commissioner for the Internal Market, Thierry Breton, wrote on X: “Europe is now a global standard-setter in AI.”
Legal professionals described the act as a major milestone for international AI regulation, noting that it could pave the way for other countries to follow suit.
Meanwhile, Dragos Tudorache, a lawmaker who oversaw the EU negotiations on the agreement, lauded it but noted that the biggest hurdle remains implementation. He described the AI Act as not the end of the journey but rather the starting point for a new model of governance built around technology.
The new act imposes a ban on certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Also, AI that manipulates human behavior or exploits people’s vulnerabilities will be forbidden. Companies such as OpenAI that produce powerful, complex, and widely used AI models will also be subject to new disclosure requirements under the law.
Reports reveal that the regulation is expected to enter into force before the end of the current legislative term in May, after passing final checks and receiving endorsement from the European Council. Implementation will then be staggered from 2025 onward.
Following concerns around bias, discrimination, and data privacy, among others, in the use of AI technology, the governments of several European nations had already rolled out regulatory measures to guide the use of AI.
With major elections set to take place across the globe this year, these governments fear that deepfakes, AI-generated photos and videos depicting false events, could be deployed in the lead-up to a swathe of key votes.
In a bid to curb misuse of the technology, some AI backers are already self-regulating to stem disinformation. Google announced earlier this week that it will limit the type of election-related queries that can be asked of its Gemini chatbot, saying it has already implemented the changes in the U.S. and India.