Why are AI innovators pushing so hard for regulation?

The OpenAI CEO Sam Altman in a darkly lit room
(Image credit: Getty Images)

With lawmakers on both sides of the Atlantic closing in, generative AI leaders look ready to deliver themselves into the loving arms of regulators – or so they’d have us think. 

OpenAI CEO Sam Altman embarked on a whistlestop public relations tour of Europe last week, meeting with lawmakers and industry figures, and flying the flag for the organization amid heightened regulatory scrutiny. 

This tour saw him stop at Downing Street to meet with UK Prime Minister Rishi Sunak alongside several industry execs to discuss the acceleration of generative AI, how regulation may shape the evolution of the industry, and how AI itself may shape the future of industry. 

The tour bore similarities to a royal visit in many ways: here, there, and everywhere, with a whole lot of schmoozing, yet without the crowds, pomp, and lavish splendor. It was a case of talking a good game and smiling for the cameras, giving the impression of an industry leader open to discussing the simmering undercurrent of concern over what generative AI means for the future. 

Toying with pulling the plug

While there’s no denying Altman has been willing to communicate on the issue of AI regulation – after all, he turned up to Congress last month – it’s part of a pattern of cozying up to regulators prior to crackdowns. 

At an event in London last Wednesday, Altman told attendees during a panel session that the company could “leave Europe” if it were unable to meet regulatory requirements, describing pending proposals as “over-regulating”. 

“Either we’ll be able to solve those requirements or not,” Altman told attendees. “If we can comply, we will, and if we can’t, we’ll cease operating. We will try. But there are technical limits to what’s possible.”

This pungent whiff of absolutist, childish rhetoric was swiftly snuffed out by a U-turn on Friday in which Altman declared the firm was “excited to continue to operate here, and of course have no plans to leave”. 

A sigh of relief for many across Europe, no doubt. But the breakneck speed at which Altman was willing to threaten pulling out of Europe and then renege on the statement should raise concerns. Perhaps some industry leaders in the AI space just aren’t too keen on regulation – who would’ve thought?

Another regulatory Groundhog Day

We’ve seen this before. In 2020, Meta (then Facebook) spat the dummy out over a potential ban on sharing EU citizens’ data in the US due to privacy concerns. In a court filing, Facebook’s associate general counsel suggested enforcing a ban on trans-Atlantic data sharing would leave the company unable to operate.

“In the event that [Facebook] were subject to a complete suspension of the transfer of users’ data to the US,” Yvonne Cunnane wrote, “it is not clear how, in those circumstances, it could continue to provide the Facebook and Instagram services in the EU.”

Meta’s recent spats with EU regulators have also ended in swift U-turns after bold declarations. Calling EU regulators’ bluff has, historically, been an abysmal tactic, and Altman’s recent outburst highlighted this perfectly, prompting a strongly-worded backlash from lawmakers in the union. 

Thierry Breton, European commissioner for the internal market, hit back at Altman’s comments, stating that rules on AI development “cannot be bargained”. 

“Let’s be clear, our rules are put in place for the security and well-being of our citizens and this cannot be bargained,” he told Reuters. 

“Europe has been ahead of the curve designing a solid and balanced regulatory framework for AI which tackles risks related to fundamental rights or safety, but also enables innovation for Europe to become a frontrunner in trustworthy AI.”

Talking the industry into the good books

Although aggressive in nature, Altman’s recent statements are an interesting tactic in the cat-and-mouse game innovators play with regulators. 

An outright crackdown from lawmakers would be disastrous and, frankly, counterproductive to the future of the industry. But if innovators can keep lawmakers talking and engaged for long enough, they may be able to temper the outcome and reduce the long-term hit. 

While Altman was wrapping up his Eurotrip, Microsoft president Brad Smith outlined the company’s future goals for AI governance, transparency, and responsible use. 

Detailing “five key principles” for ensuring responsible AI development at the tech giant, Smith appears very much focused on currying favor with regulators on both sides of the Atlantic. 

“We are committed and determined as a company to develop and deploy AI in a safe and responsible way,” he wrote in a blog post. “We also recognize, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.”

A key talking point in the blog post was Smith’s call to pursue “public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology”. 

Smith asserted the “key to success” moving forward will be to develop closer ties between government, “respected companies”, and NGOs to ensure transparency and sound regulation, and to prevent misuse. It's a bold statement, and one that will likely be welcomed by regulators in both the US and EU. 

At the same time, however, acknowledging the “inevitable societal challenges” that will arise due to AI advances hardly fills one with confidence, and underlines how urgent proper regulations are.

AI innovators, it seems, have a choice: spit the dummy out and buckle in for a battle, or cozy up to regulators and hope the gamble pays off. Either way, the “move fast and break things” honeymoon period is over, and regulation is coming.

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.
