OpenAI's Sam Altman: Hallucinations are part of the “magic” of generative AI
The OpenAI chief said there is value to be gleaned from hallucinations
AI hallucinations are a fundamental part of the “magic” of systems such as ChatGPT which users have come to enjoy, according to OpenAI CEO Sam Altman.
Altman’s comments came during a fireside chat with Salesforce CEO Marc Benioff at Dreamforce 2023 in San Francisco, in which the pair discussed the current state of generative AI and Altman’s future plans.
When asked by Benioff about how OpenAI is approaching the technical challenges posed by hallucinations, Altman said there is value to be gleaned from them.
“One of the sort of non-obvious things is that a lot of value from these systems is heavily related to the fact that they do hallucinate,” he told Benioff. “If you want to look something up in a database, we already have good stuff for that.
“But the fact that these AI systems can come up with new ideas and be creative, that’s a lot of the power. Now, you want them to be creative when you want, and factual when you want. That’s what we’re working on.”
Altman went on to claim that forcing platforms to generate content only when they’re absolutely sure would be “naive” and run counter to the fundamental nature of the systems in question.
“If you just do the naive thing and say ‘never say anything that you’re not 100% sure about’, you can get them all to do that. But it won’t have the magic that people like so much.”
The topic of hallucinations, whereby an AI asserts or frames incorrect information as factually correct, has been a recurring talking point over the last year amid the surge in generative AI tools globally.
It’s a pressing topic given the propensity of some systems to present false information as fact, at a time when misinformation remains a divisive and highly sensitive subject.
The issue of hallucinations has even led to OpenAI being sued in the last year, with a US radio host launching a defamation lawsuit against the firm after ChatGPT falsely claimed he had embezzled funds.
Similarly, Google was left red-faced during its highly publicized launch event for Bard when the chatbot produced an incorrect answer to a question.
Google downplayed the incident, framing it as an example of why rigorous testing is a critical factor in the development and learning process for generative AI models.
Industry stakeholders and critics alike have raised repeated concerns about AI hallucinations of late, with Marc Benioff describing the term as a buzzword for “lies” during a keynote speech at the annual conference.
“I don’t call them hallucinations, I call them lies,” he told attendees.
OpenAI is by no means disregarding the severity of the situation, however. In June, the firm published details of a new training process that it said will improve the accuracy and transparency of AI models.
The company used “process supervision” techniques to train a model for solving mathematical problems, it explained in a blog post at the time. This method rewards systems for each individual accurate step taken while generating an answer to a query.
OpenAI said the technique will help train models that produce fewer confidently incorrect answers.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.