Christian Klein, Chief Executive Officer (CEO) of software giant SAP, has warned that Europe risks falling behind the U.S. and China over its plans to regulate the Artificial Intelligence (AI) industry.
In an interview with CNBC, Klein advised Europe to avoid overregulating AI and instead focus on the technology’s outcomes. He warned that excessive regulation could hinder the continent’s ability to compete with other top regions in the AI sector.
In his words,
“If you only regulate technology in Europe, how can our startups here in Europe compete against the other startups in China, Asia, and the US?”
While acknowledging the importance of addressing AI’s risks, Klein argued that regulating the technology in its early stages would be a mistake. He emphasized the need for AI use cases to deliver positive outcomes for employees and society.
“It’s very important that how we train our algorithms, the AI use cases we embed into the businesses of our customers, they need to deliver the right outcome for the employees, for the society. If you only regulate technology in Europe, how can our startups here in Europe, how can they compete against the other startups in China, in Asia, and the U.S.? Especially for the startup scene here in Europe, it’s very important to think about the outcome of the technology but not to regulate the AI technology itself,” he added.
Klein’s comments come amid ongoing efforts to regulate the AI industry in Europe. Recall that on May 21, 2024, the Council of the EU approved the AI Act, the first European regulation on AI, marking the conclusion of a legislative journey that began in 2021.
The AI Act employs a risk-based approach, categorizing AI systems into four risk levels, with strict regulations for high-risk systems and bans on AI practices that violate EU values. The Act covers both EU and non-EU organizations producing or distributing AI in Europe, excluding military and research applications.
The AI Act aims to balance the protection of rights and freedoms with the facilitation of a “space” conducive to technological innovation. Its primary goal is to ensure the safe deployment of AI systems in Europe, aligning their use with the fundamental values and rights of the EU while encouraging investment and innovation within the continent.
Additionally, the Act includes provisions for General-Purpose AI (GPAI) models, defined as “computer models that, through training on a vast amount of data, can be used for a variety of tasks, either singly or included as components in an AI system.” Due to their broad applicability and potential systemic risks, GPAI models are subject to stricter requirements regarding effectiveness, interoperability, transparency, and compliance.
The core of Klein’s concern revolves around the balance between ensuring safety and promoting innovation. Recent developments in AI have created tools supporters say can be used in applications from medical diagnostics to writing legal briefs, but this has sparked fears the technology could lead to privacy violations, power misinformation campaigns, and issues with “smart machines” thinking for themselves.
Overly stringent regulations may impede technological advancement and limit the benefits AI can bring; insufficient regulation, on the other hand, could open the door to unethical uses of AI or unintended consequences.