Sam Altman reverses threat to ‘leave Europe’ over AI regulations

Sam Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law
(Image credit: Win McNamee/Getty Images)

OpenAI chief executive Sam Altman has walked back threats that the generative AI company could “leave Europe” in response to an impending regulatory crackdown. 

Earlier this week at an event in London, Altman suggested the company may pull out of Europe if it could not comply with upcoming AI regulations being considered by EU lawmakers.

While participating in a panel discussion at University College London (UCL), Altman said the company will “try to comply” with the pending regulations. However, he later told Reuters that, in their current state, the proposals would amount to “over-regulating”. 

“Either we’ll be able to solve those requirements or not,” Altman told attendees. “If we can comply, we will, and if we can’t, we’ll cease operating. We will try. But there are technical limits to what’s possible.”

The threat to pull out of Europe sparked criticism from lawmakers across the bloc. Thierry Breton, European Commissioner for the Internal Market, hit back at the comments, stating that rules on AI development “cannot be bargained”. 

“Let’s be clear, our rules are put in place for the security and well-being of our citizens and this cannot be bargained,” he told Reuters on Thursday. 

“Europe has been ahead of the curve designing a solid and balanced regulatory framework for AI which tackles risks related to fundamental rights or safety, but also enables innovation for Europe to become a frontrunner in trustworthy AI,” Breton added. 

The backlash appears to have prompted Altman to retract his comments. In a tweet sent on 26 May, Altman confirmed the company has no plans to leave Europe. 

“Very productive week of conversations in Europe about how to best regulate AI,” he said. “We are excited to continue to operate here, and of course have no plans to leave.”

What prompted the pull-out threats?

The EU is currently working on a standardized set of rules to govern and regulate the development and use of generative AI technologies. 

Under the current proposals, the regulations would see OpenAI systems such as GPT-4 designated as “high risk”, meaning they would be forced to comply with additional safety requirements. 

Other proposals would require companies such as OpenAI to disclose whether copyrighted materials have been used to train systems such as ChatGPT. 

These aspects of the regulations have sparked concern at OpenAI, with critics arguing they would severely inhibit innovation and cause significant long-term issues. 

The company has repeatedly argued that systems such as ChatGPT are not inherently high risk. 

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
