OpenAI's Sam Altman: Hallucinations are part of the “magic” of generative AI


AI hallucinations are a fundamental part of the “magic” of systems such as ChatGPT, and one that users have come to enjoy, according to OpenAI CEO Sam Altman. 

Altman’s comments came during an on-stage chat with Salesforce CEO Marc Benioff at Dreamforce 2023 in San Francisco, in which the pair discussed the current state of generative AI and Altman’s future plans. 

When asked by Benioff about how OpenAI is approaching the technical challenges posed by hallucinations, Altman said there is value to be gleaned from them. 

“One of the sort of non-obvious things is that a lot of value from these systems is heavily related to the fact that they do hallucinate,” he told Benioff. “If you want to look something up in a database, we already have good stuff for that.

“But the fact that these AI systems can come up with new ideas and be creative, that’s a lot of the power. Now, you want them to be creative when you want, and factual when you want. That’s what we’re working on.”

Altman went on to claim that ensuring platforms only generate content when they’re absolutely sure would be “naive” and at odds with the fundamental nature of the systems in question. 


“If you just do the naive thing and say ‘never say anything that you’re not 100% sure about’, you can get them all to do that. But it won’t have the magic that people like so much.”

The topic of hallucinations, whereby an AI presents incorrect information as fact, has been a recurring talking point over the last year amid the global surge in generative AI tools. 

It’s a pressing topic given the propensity of some systems to present false information as fact at a time when misinformation remains a contentious and highly sensitive subject. 

The issue of hallucinations has even led to OpenAI being sued in the last year, with a US radio host filing a defamation lawsuit against the company over false claims, generated by ChatGPT, that he had embezzled funds. 

Similarly, Google was left red-faced during its highly publicized launch event for Bard when the chatbot produced an incorrect answer to a question. 


Google shrugged off the incident, citing it as an example of why testing, and the quality of that testing, is a critical factor in the development and learning process for generative AI models. 

Industry stakeholders and critics alike have raised repeated concerns about AI hallucinations of late, with Marc Benioff describing the term as a euphemism for “lies” during a keynote speech at the annual conference. 

“I don’t call them hallucinations, I call them lies,” he told attendees. 

OpenAI is by no means disregarding the severity of the situation, however. In June, the firm published details of a new training process that it said would improve the accuracy and transparency of AI models. 

The company used “process supervision” techniques to train a model to solve mathematical problems, it explained in a blog post at the time. Rather than rewarding only a correct final answer, this method rewards the system for each individual accurate step taken while generating an answer to a query. 
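For illustration, here is a minimal Python sketch of that distinction, not OpenAI's actual training code: the function names, the toy reasoning trace, and the step labels are all hypothetical stand-ins for judgments that would, in practice, come from a trained reward model or human labelers.

    from typing import List

    # Hypothetical reasoning trace: each intermediate step is paired with a
    # label saying whether that individual step is correct.
    steps = [
        ("48 = 24 * 2", True),
        ("24 * 2 + 1 = 49", True),
        ("49 = 7 * 7, so 49 is not prime", True),
    ]

    def outcome_reward(final_answer_correct: bool) -> float:
        """Outcome supervision: a single all-or-nothing reward based
        only on whether the final answer is correct."""
        return 1.0 if final_answer_correct else 0.0

    def process_reward(step_labels: List[bool]) -> float:
        """Process supervision: credit each correct intermediate step,
        so flawed reasoning is penalized even if the answer is right."""
        return sum(1.0 for ok in step_labels if ok) / len(step_labels)

    print(outcome_reward(True))                      # 1.0, regardless of the steps
    print(process_reward([ok for _, ok in steps]))   # 1.0, earned step by step

The intended effect is that a model which reaches a correct answer through faulty reasoning receives less credit under process supervision, discouraging the confidently wrong chains of reasoning that produce hallucinations.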

OpenAI said the technique should help train models that produce fewer confidently incorrect answers. 

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.