Tech pioneers call for six-month pause of "out-of-control" AI development


Scores of tech pioneers have signed an open letter calling for a halt to advanced artificial intelligence (AI) development.

The open letter, published by the Future of Life Institute, has attracted more than 1,100 signatories, including Elon Musk, Turing Award winner Yoshua Bengio, and Apple co-founder Steve Wozniak.

It calls for an “immediate pause” on the “training of AI systems more powerful than GPT-4” for at least six months.

According to the institute, the call to action comes in direct response to the rapid acceleration of generative AI technologies currently being rolled out by major industry players such as Microsoft and Google.

The letter argued that while this wave of innovation continues, few corporate or regulatory safeguards are in place to moderate generative AI development, which poses significant risks to society.

“As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources,” the letter read.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The recommended six-month pause on development should be “public and verifiable”, the institute said, and include collaboration between “key actors” in the tech industry to jointly develop and implement a “set of shared safety protocols for advanced AI design”.

In addition, once implemented, the safeguards should be “rigorously audited and overseen by independent outside experts”.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

Responsible AI development

The publication of this open letter follows calls from a range of industry stakeholders highlighting the growing importance of responsible and ethical AI development.


Last week, Mozilla announced the launch of a new open-source AI startup, dubbed Mozilla AI, which aims to bring together developers, researchers, and policymakers to encourage the development of ‘trustworthy AI’.

Mozilla executive Mark Surman told IT Pro the organisation hopes to position the startup as a counterweight to the dominance of Microsoft and Google, both of which are investing heavily in generative AI.

Surman suggested that the acceleration of generative AI products in recent months poses serious risks to businesses and society alike, with developers often failing to consider the potential negative impact these technologies can have on individuals and public discourse.

Alongside industry action, the open letter called for AI developers to work with policymakers to “dramatically accelerate [the] development of robust AI governance systems”.

These governance systems should “at a minimum” include:

  • New and capable regulatory authorities dedicated to AI
  • Oversight and tracking of highly capable AI systems and large pools of computational capability
  • Provenance and watermarking systems to help distinguish real from synthetic and to track model leaks
  • A robust auditing and certification ecosystem
  • Liability for AI-caused harm
  • Robust public funding for technical AI safety research
  • Well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause

The topic of AI regulation has been debated repeatedly in recent months. In the UK, policymakers are weighing how to strike a balanced approach to generative AI, one that emphasises responsibility while enabling innovation and growth.

The UK government’s AI white paper, published today, outlined plans to guide the responsible development of AI while taking a pro-innovation stance.

The government said it will avoid implementing “heavy-handed legislation” that could stifle innovation, instead opting for an “adaptable approach to regulating AI”.

“This approach will mean the UK’s rules can adapt as this fast-moving technology develops, ensuring protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs, and bold new discoveries that radically improve people’s lives,” the government said in a statement.

Signatories deny involvement

The publication of the open letter has sparked controversy among purported signatories, with some revealing that their names had been added without consent.

Yann LeCun, chief AI scientist at Meta, denied claims that he had signed the open letter in a tweet, and others have since followed suit.

“Nope. I did not sign this letter,” he said. “I disagree with its premise.”


Since its publication, the names of other signatories have begun “disappearing” from the letter, according to analysis by Semafor journalist Louise Matsakis. However, prominent names including Elon Musk, Steve Wozniak, and Yoshua Bengio remain, prompting confusion.
