OpenAI aims to reduce generative AI 'hallucinations' with new training method
An AI that shows its workings could be more comprehensible to engineers
OpenAI has published details of a new training method that it hopes will improve the accuracy and transparency of AI models.
The AI firm used ‘process supervision’, a method in which systems are rewarded for each accurate step taken toward an answer, to train a model for solving mathematical problems.
This allows models to be trained to produce comprehensible outputs and to give fewer confidently incorrect answers, or ‘hallucinations’.
This contrasts with the more traditional ‘outcome supervision’ method, in which a model is given positive or negative feedback based solely on its final answer, with little consideration of its working.
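To make the distinction concrete, the sketch below contrasts the two feedback schemes in Python. It is a minimal illustration only: the per-step checker, the function names, and the 0/1 reward values are hypothetical assumptions for this example, not OpenAI's training code, which used human feedback on each step of a model's reasoning.

    # Minimal sketch of the two feedback schemes. All names here
    # (check_step, outcome_supervision_reward, the 0/1 rewards) are
    # hypothetical and for illustration; this is not OpenAI's code.

    def outcome_supervision_reward(final_answer: str, correct_answer: str) -> float:
        """Feedback depends only on the final answer; the working is ignored."""
        return 1.0 if final_answer == correct_answer else 0.0

    def process_supervision_rewards(steps: list[str], check_step) -> list[float]:
        """Each reasoning step earns its own reward, so the model is told
        how it reached the answer, not just whether the answer was right."""
        return [1.0 if check_step(step) else 0.0 for step in steps]

    # A worked solution whose final step is wrong: outcome supervision
    # scores the whole attempt zero, while process supervision pinpoints
    # exactly where the reasoning went astray.
    steps = ["2x + 4 = 10", "2x = 6", "x = 4"]       # last step is incorrect
    check = lambda step: step != "x = 4"             # stand-in step checker
    print(process_supervision_rewards(steps, check)) # [1.0, 1.0, 0.0]
    print(outcome_supervision_reward("x = 4", "x = 3"))  # 0.0

Under outcome supervision the attempt above simply scores zero, whereas the per-step rewards show where the reasoning went wrong, the property OpenAI credits with producing more comprehensible, self-correcting outputs.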
Greater use of safety measures can reduce the performance of AI systems, a trade-off that researchers call the ‘alignment tax’.
The results of OpenAI’s testing showed that the process supervision method carried a negative alignment tax, meaning it improved the performance of the test model alongside its safety.
As a result, the method was found to improve both the accuracy and effectiveness of the system, with outputs showing the AI was capable of correcting itself after an inaccurate step to eventually produce a correct answer.
In this way, the training method may prove useful for reducing hallucinatory output across a range of AI systems, such as customer-facing chatbots or image generation programs.
“It is unknown how broadly these results will generalize beyond the domain of math, and we consider it important for future work to explore the impact of process supervision in other domains,” wrote OpenAI.
“If these results generalize, we may find that process supervision gives us the best of both worlds – a method that is both more performant and more aligned than outcome supervision.”
Hallucinations are a phenomenon in which an AI system provides confident but inaccurate responses, and are present in many popular services such as ChatGPT.
They are considered a foundational problem for generative AI, particularly for large language models (LLMs), and some experts consider them an inherent trait of LLMs that will never be fully stamped out.
Risk analysts have highlighted hallucinations as one of many threats associated with AI, and businesses already using ChatGPT have had to take these system limitations into account.
Google executives have warned against fully trusting AI systems while hallucinations remain an issue, and the company has aimed to reduce their prevalence in recent updates to its Bard chatbot.
Experts have called for greater human involvement in the individual decisions made by generative AI models, an approach often known as ‘human-in-the-loop’.
“The beauty and allure of the technology is its impressive ability to sound smart, which is dangerous in a business situation,” said Aaron Kalb, chief strategy officer and co-founder at Alation.
“You could be unknowingly introducing inaccuracies into decision-making. To be really effective, generative models should be fine-tuned on domain-specific data catalogs and their output should be reviewed by humans."
In recent months, prominent figures in the AI ecosystem have called for measures such as a six-month pause on AI development to give time for adequate safeguards to be put in place.
Sam Altman, CEO at OpenAI, has also stated that the firm will not begin work on GPT-5, the successor to GPT-4, until GPT-4’s safety issues have been addressed.


