OpenAI's Sam Altman: Hallucinations are part of the “magic” of generative AI
The OpenAI chief said there is value to be gleaned from hallucinations
AI hallucinations are a fundamental part of the “magic” of systems such as ChatGPT, and one that users have come to enjoy, according to OpenAI CEO Sam Altman.
Altman’s comments came during a conversation with Salesforce CEO Marc Benioff at Dreamforce 2023 in San Francisco, in which the pair discussed the current state of generative AI and Altman’s future plans.
When asked by Benioff about how OpenAI is approaching the technical challenges posed by hallucinations, Altman said there is value to be gleaned from them.
“One of the sort of non-obvious things is that a lot of value from these systems is heavily related to the fact that they do hallucinate,” he told Benioff. “If you want to look something up in a database, we already have good stuff for that.
“But the fact that these AI systems can come up with new ideas and be creative, that’s a lot of the power. Now, you want them to be creative when you want, and factual when you want. That’s what we’re working on.”
Altman went on to claim that ensuring that platforms only generate content when they’re absolutely sure would be “naive” and counterintuitive to the fundamental nature of the systems in question.
“If you just do the naive thing and say ‘never say anything that you’re not 100% sure about’, you can get them all to do that. But it won’t have the magic that people like so much.”
The topic of hallucinations, whereby an AI asserts or frames incorrect information as factually correct, has been a recurring talking point over the last year amid the surge in generative AI tools globally.
It’s a pressing topic given the propensity of some systems to frame false information as fact at a time when misinformation remains a contentious and highly sensitive subject.
The issue of hallucinations has even landed OpenAI in court in the last year, with a US radio host filing a defamation lawsuit against the company over false claims that he had embezzled funds.
Similarly, Google was left red-faced during the highly publicized launch event for Bard when the chatbot produced an incorrect answer to a question it was posed.
Google shrugged the incident off, framing it as an example of why rigorous testing is a critical part of the development and learning process for generative AI models.
Industry stakeholders and critics alike have raised repeated concerns about AI hallucinations of late, with Marc Benioff describing the term as a buzzword for “lies” during a keynote speech at the annual conference.
“I don’t call them hallucinations, I call them lies,” he told attendees.
OpenAI is by no means disregarding the severity of the situation, however. In June, the firm published details of a new training process that it said would improve the accuracy and transparency of AI models.
The company used “process supervision” techniques to train a model for solving mathematical problems, it explained in a blog post at the time. This method rewards systems for each individual accurate step taken while generating an answer to a query.
OpenAI said the technique should help train models that produce fewer confidently incorrect answers.
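To make the distinction concrete: under conventional "outcome supervision" a model is rewarded only for the final answer, whereas process supervision scores each intermediate reasoning step. The sketch below is a loose illustration of that difference only; the function names, the per-step labels, and the averaging scheme are hypothetical and are not OpenAI's actual implementation.

```python
# Illustrative contrast between outcome supervision and process supervision.
# All names and reward schemes here are hypothetical, for explanation only.

def outcome_reward(step_labels, final_correct):
    """Outcome supervision: one reward based solely on the final answer."""
    return 1.0 if final_correct else 0.0

def process_reward(step_labels):
    """Process supervision: reward each individually correct reasoning step.

    step_labels: list of booleans, one per intermediate step, True if a
    (hypothetical) verifier judged that step to be correct.
    """
    if not step_labels:
        return 0.0
    return sum(step_labels) / len(step_labels)

# A reasoning chain with one flawed middle step but a final answer that
# happens to be right: outcome supervision rewards it in full, while
# process supervision penalizes the incorrect step.
labels = [True, True, False, True]
print(outcome_reward(labels, final_correct=True))  # 1.0
print(process_reward(labels))                      # 0.75
```

The point of the per-step signal is that a confidently wrong intermediate claim is penalized even when the final answer looks plausible, which is how this style of training targets confidently incorrect outputs.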

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.