What would pausing AI development actually achieve?

Are we blindly moving toward an apocalyptic future, in which artificial intelligence (AI) development spins out of control and risks humanity’s survival?

Many, including the authors of an open letter published earlier this year, believe the pace at which generative AI is advancing is unsafe, with the unintended consequences of tools like ChatGPT barely considered and regulation unable to keep up.

The letter points to OpenAI’s own conclusion that “it may be important to get an independent review before starting to train future systems” and, for those who’ve made an early start, to “limit the rate of growth” of the technology.

As with all untested technologies, there will always be unforeseen consequences, and at this early stage it might already be too late to enforce a pause, even if the industry, customers, and regulators unanimously agree that’s what’s needed.

Stopping the runaway train of AI advancement

Technology has always run ahead of regulation. Many feel, though, that if AI systems are allowed to develop unchecked, the consequences could be profoundly damaging for society, putting the safety and privacy of individuals at risk.

“Assuming this pause was achieved, it would create a breathing space where the policy and regulation required to safeguard AI development can be developed,” says Mark Rodseth, VP of technology for EMEA at CI&T. 

“This is an immense challenge due to the speed of AI advancement and the need for the policy to be future-proofed, internationally applicable, and practically enforceable. This also needs to be rolled out at a time when political relationships between superpowers are teetering on the edge of global conflict.

“Again, six months is equivalent to throwing a brick into the Grand Canyon. Our only option is to build safety features into the rocket after its launch.”

The amoral use of technology has a long history, too. While tools and systems are created for legitimate purposes, they can be corrupted, and governments want to get ahead of the curve before industrial AI becomes unmanageable. For the signatories of the open letter, the risk of abuse is alarming if, say, cyber criminals or authoritarian states were to deploy advanced AI models.

"Putting a pause on AI development would force providers to address their duty of care,” says Filip van Elsen, a university lecturer, AI expert, and global co-head of technology at Allen and Overy. 

“In an ideal world, it would give providers time to consolidate their approaches to software deployment, initiate more rigorous testing phases, seek legal advice, and adjust strategy accordingly. With a boom in any new industry, there can often be a lack of regulatory guard rails as lawmakers play catch-up.”

Regulators have often been reactive when introducing rules. GDPR, for example, was a response to unclear data protection principles and repeated privacy infringements. Pausing AI development would be more proactive, as we have yet to collectively experience privacy or data protection violations connected to AI. There are already areas of concern, though, such as deepfakes and the claim that generative AI has been trained on copyrighted material.

"The recent call to pause development on generative AI is rooted in the implication that we may not be able to control the impact of wider large language model (LLM) deployment, especially in the context of critical societal functions such as banking and healthcare,” adds Greg Benson, chief scientist at SnapLogic and professor of computer science at the University of San Francisco. “Some issues include the possibility of generating confidently incorrect or biased responses.”

The AI arms race is gaining speed

In the US, the National Institute of Standards and Technology (NIST) already has a framework for AI risk assessment, though the framework is voluntary. The UK government also recently published a whitepaper that speaks to many of the issues the open letter highlights; the five principles it outlines are intended to guide the development of AI systems and ensure they are safe, transparent, trustworthy, and fair.

“AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases,” says DeepMind COO and UK AI Council member Lila Ibrahim.

“This transformative technology can only reach its full potential if it’s trusted, which requires public and private partnership in the spirit of pioneering responsibly. The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks.”

Could AI development even be paused? Many of the systems in development are proprietary. The compliance environment they’re evolving within has yet to come into focus. It’s also unlikely any mandate to pause would be heeded by a company that’s made significant investments and wants to protect potential future revenue.

For SnapLogic’s Greg Benson, it’s already too late. “A pause is virtually impossible at this point,” he says. “The technology is moving too rapidly and becoming increasingly accessible, so individuals can build and run their own LLMs. So how would such a pause be enforced? Instead of a pause, we could restrict the types of industries and applications where LLMs can be used. However, the advancement and use of LLM-based technology will not stop at this point.”

Businesses are increasingly drawn to generative AI, while the appeal for developers of being first to market and shaping the AI industry means regulatory hurdles and unintended consequences become afterthoughts.

“The challenge is for governments, AI providers, and businesses to come together to assess the ways risk impacts them and the wider world,” concludes van Elsen. “With a short-term pause, the downside is that revenue streams run dry. However, a lack of regulation will likely prove the biggest risk for AI users, if down the line there is litigation around misuse or damage caused by AI – and this could prove very costly.”

David Howell

David Howell is a freelance writer, journalist, broadcaster and content creator helping enterprises communicate.

Focussing on business and technology, he has a particular interest in how enterprises are using technology to connect with their customers using AI, VR and mobile innovation.

His work over the past 30 years has appeared in the national press and a diverse range of business and technology publications. You can follow David on LinkedIn.