Microsoft, Amazon, Meta, and Google just promised to halt AI development if models are too dangerous — but will they stick to their promise?


AI companies have signed up to a safe development pledge which could see them pull the plug on some of their own AI models if they cannot be built or deployed safely.

Amazon, Anthropic, Google, IBM, Meta, Microsoft, and OpenAI are among the companies that have signed up to the Frontier AI Safety Commitments at the AI Seoul Summit.

The companies said they would assess the risks posed by their frontier models or systems across the AI lifecycle, including before and during training, and when deploying them.

The firms also agreed to set thresholds beyond which the risks posed by a model “would be deemed intolerable”. If these limits were breached, the companies said they would cease further development.

“In the extreme, organizations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds,” the document said.

In broader terms of AI safety, the companies also agreed to stick to best practices, such as internal and external red-teaming of frontier AI models for novel threats, to “work toward” information sharing, to develop ways to help users identify AI-generated audio or visual content, and to publicly report model capabilities and limitations.

The commitments are voluntary and come with plenty of caveats, however. The document noted that “given the evolving state of the science in this area, the undersigned organizations’ approaches…may evolve in the future”.

Alec Christie, a partner at global law firm Clyde & Co, said the rapid pace of innovation in the generative AI space makes it very difficult to introduce legislation, given the fluid, ever-evolving nature of the technology.

“The newly signed Seoul Declaration is a step in the right direction to establishing global fundamentals on AI regulation, but there is a time limit on all of this,” Christie said.

“AI development is happening at such a rate that the regulation just can’t keep up, as it could become outdated as soon as it is implemented.”

The companies also agreed to provide “public transparency” on the implementation of their AI safety strategies, so long as that doesn’t reveal sensitive commercial information “to a degree disproportionate to the societal benefit”. They agreed, however, to share more detailed information with governments.

Generously, the organizations also promised “to develop and deploy frontier AI models and systems to help address the world’s greatest challenges”.

AI safety in the spotlight once again

The Seoul summit follows on from the first AI Safety Summit, held at Bletchley Park in the UK last year, which set out some of the big concerns about AI.

While AI can bring big advances, it can also create big risks: new cyber security threats, novel biotechnology dangers, and the rise of disinformation all got a mention.

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” the Bletchley declaration said.

For now, many countries, apart from China and the EU, have stopped short of creating specific legislation around AI risks and have instead preferred a broadly voluntary approach.


Part of the problem is that nobody can quite agree on what AI safety should cover, and many countries also worry that, if they put stringent regulation in place, they may lose out economically to regions with a more relaxed approach.

The UK’s approach, for example, has been to set up an AI Safety Institute (AISI) to evaluate some of these models.

However, researchers at the Ada Lovelace Institute have warned that the voluntary approach being followed right now has some big flaws.

Existing methods of evaluating AI models are easy to manipulate, they warn, and small changes to the models can have big consequences. And looking at a foundation model may shed very little light on the risks and impacts of the apps built on it.

They contrast this with other sectors such as pharmaceuticals, where companies have to meet standards set by regulators, and regulators have the final say over whether a product can be released.

The researchers are calling for powers to compel companies to provide AISI and regulators with access to AI models, their training data and documentation.

“These access requirements could be unilaterally imposed as a prerequisite for operating in the UK – a market of nearly 70 million people,” they said in a blog post.

The researchers said the AISI and other regulators should have the power to block the release of models or products that appear too unsafe, such as voice cloning systems, which can enable fraud.

“Conducting evaluations and assessments is meaningless without the necessary enforcement powers to block the release of dangerous or high-risk models, or to remove unsafe products from the market,” they said.

The rate at which AI is being deployed has outpaced the ability of governments and even companies themselves to test these systems and manage their use, they argue.

“This is bad for the public, who are already being affected by the misuse of these technologies and their failure to act as intended. It’s bad for businesses, who are being told to integrate the latest forms of AI into their products and services without meaningful assurance,” they warned.

“And given how little we know about the longer-term societal impacts of these systems, it risks unleashing longer-term systemic harms in much the same way as the unchecked rollout of social media platforms did more than a decade ago.”

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.