Sam Altman, OpenAI’s cofounder and CEO, appears to be at odds with the EU’s proposed legislation on artificial intelligence, prompting his threat to withdraw OpenAI’s operations from Europe if the company finds it too hard to comply with the law.
Altman told reporters during his tour of some European capital cities that “the details [of the EU AI Act] really matter,” and “we will try to comply, but if we can’t comply, we will cease operating.”
The EU is ahead of the rest of the world in making rules that will guide artificial intelligence – enacting “the first law on AI by a major regulator anywhere,” according to its website.
The EU AI Act is designed to address the risks arising from the use of AI, which it sorts into three categories – focusing on protecting European users and regulating the technology. Having won majority backing in the European Parliament, the AI Act is now slated for final adoption, with June 14 as the tentative date.
But Altman is said to be concerned that the AI Act could designate OpenAI’s systems, such as ChatGPT and GPT-4, as “high risk.” He is also concerned that the proposed legislation would force generative AI companies to reveal which copyrighted material had been used to train their systems to create text and images.
In March, when OpenAI released GPT-4, some in the AI community were disappointed that it did not publish details such as the data used to train the model, how much it cost, and how it was created.
AI companies have been accused of using the work of artists, musicians and actors to train systems to imitate their work.
Under the proposed legislation, OpenAI would be required to comply with safety and transparency requirements that Altman described as “over-regulating.” The CEO is worried that it would be technically impossible for OpenAI to meet some of them.
It was based on these concerns that Altman threatened to halt OpenAI’s operations in Europe if the law went into effect. But he backtracked on Friday as his threat went viral, saying in a tweet that the company has no plans to leave the bloc.
“Very productive week of conversations in Europe about how to best regulate AI! We are excited to continue to operate here and of course have no plans to leave,” he said.
— Sam Altman (@sama) May 26, 2023
Altman’s U-turn is seen as an indication of his dilemma as governments struggle to regulate AI amid growing safety concerns about the burgeoning technology. Any government regulation is likely to impact the profitability of OpenAI, among others.
Ilya Sutskever, OpenAI’s cofounder and chief scientist, told The Verge that the company didn’t share information on the data used to train its language model due to competition and safety.
“It took pretty much all of OpenAI working together for a very long time to produce this thing,” Sutskever said. “And there are many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field.”
With OpenAI focused on profit – especially as its major investor, Microsoft, which has pumped billions of dollars into the AI language models, awaits its returns – Altman appears to be pushing for regulation that will not jeopardize OpenAI’s potential fortune.
In his recent appearance before US lawmakers on AI regulation, Altman advocated for a government agency to oversee AI projects that perform “above a certain scale of capabilities.” His testimony at the Senate hearing indicates that he wants the government to create an agency that would grant licenses to AI companies, and withdraw them if they overstep safety rules.