OpenAI aims to reduce generative AI 'hallucinations' with new training method


OpenAI has published details of a new training method that it hopes will improve the accuracy and transparency of AI models.

The AI firm used ‘process supervision’, a method in which systems are rewarded for each accurate step taken toward an answer, to train a model for solving mathematical problems.

This allows models to be trained to produce comprehensible outputs and fewer confidently incorrect answers, or ‘hallucinations’.

This contrasts with the more traditional ‘outcome supervision’ method, in which a model is given positive or negative feedback based solely on its final answer, with little consideration of its working.
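
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the two reward schemes. The `Solution` structure, the per-step correctness labels, and the reward values are hypothetical examples chosen for this article, not OpenAI's implementation.

```python
# Illustrative sketch only: contrasting outcome supervision with process
# supervision as described above. Data structures and rewards are hypothetical.
from dataclasses import dataclass

@dataclass
class Solution:
    steps_correct: list[bool]   # per-step correctness labels from a grader (assumed)
    final_answer_correct: bool  # whether the final answer matches the reference

def outcome_reward(sol: Solution) -> float:
    """Outcome supervision: feedback depends only on the final answer."""
    return 1.0 if sol.final_answer_correct else 0.0

def process_reward(sol: Solution) -> float:
    """Process supervision: each accurate intermediate step earns credit,
    so sound working is rewarded, not just the end result."""
    if not sol.steps_correct:
        return 0.0
    return sum(sol.steps_correct) / len(sol.steps_correct)

if __name__ == "__main__":
    # A solution that recovers after one bad step and still reaches the right answer.
    recovered = Solution(steps_correct=[True, False, True, True], final_answer_correct=True)
    # A lucky guess: faulty working, correct final answer.
    lucky = Solution(steps_correct=[False, False], final_answer_correct=True)

    print(outcome_reward(recovered), process_reward(recovered))  # 1.0 0.75
    print(outcome_reward(lucky), process_reward(lucky))          # 1.0 0.0
```

Under outcome supervision both solutions score equally, while process supervision distinguishes sound working from a lucky guess.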


Greater use of safety measures can reduce the performance of AI systems, a trade-off that researchers call the ‘alignment tax’.

The results of OpenAI’s testing showed that the process supervision method carried a negative alignment tax, meaning that it improved the performance of the test model alongside its safety. 

As a result, the method was found to improve both the accuracy and effectiveness of the system, with outputs demonstrating that the AI could correct itself after an inaccurate step and still produce the right answer.
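
As a rough, hypothetical illustration of what a negative alignment tax means in practice (the accuracy figures below are invented for illustration, not OpenAI's results):

```python
# Hypothetical accuracy figures to illustrate the sign of an alignment tax.
baseline_accuracy = 0.72  # model trained without the safety method (invented number)
aligned_accuracy = 0.78   # model trained with process supervision (invented number)

# A positive tax means the safety measure costs performance;
# a negative tax means performance improved alongside safety.
alignment_tax = baseline_accuracy - aligned_accuracy
print(f"alignment tax: {alignment_tax:+.2f}")  # -0.06 -> negative tax
```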

In this way, the training method may prove useful for reducing hallucinatory output across a range of AI systems, such as customer-facing chatbots or image generation programs.

“It is unknown how broadly these results will generalize beyond the domain of math, and we consider it important for future work to explore the impact of process supervision in other domains,” wrote OpenAI.

“If these results generalize, we may find that process supervision gives us the best of both worlds – a method that is both more performant and more aligned than outcome supervision.”

Hallucinations are a phenomenon in which an AI system provides confident but inaccurate responses, and are present in many popular services such as ChatGPT.

They are considered a foundational problem for generative AI, particularly for large language models (LLMs), and some experts regard them as an inherent trait of LLMs that will never be fully stamped out.

Risk analysts have highlighted hallucinations as one of many threats associated with AI, and businesses already using ChatGPT have had to take these system limitations into account.

Google executives have warned against fully trusting AI systems while hallucinations remain an issue, and the company has aimed to reduce their prevalence in recent updates to its Bard chatbot.

Experts have called for greater human involvement in the individual decisions made by generative AI models, an approach often known as ‘human-in-the-loop’.

“The beauty and allure of the technology is its impressive ability to sound smart, which is dangerous in a business situation,” said Aaron Kalb, chief strategy officer and co-founder at Alation.

“You could be unknowingly introducing inaccuracies into decision-making. To be really effective, generative models should be fine-tuned on domain-specific data catalogs and their output should be reviewed by humans." 

In recent months, prominent figures in the AI ecosystem have called for measures such as a six-month pause on AI development to give time for adequate safeguards to be put in place.

Sam Altman, CEO of OpenAI, has also stated that the firm will not begin work on GPT-5, the successor to GPT-4, until GPT-4’s inherent safety issues have been addressed.

Rory Bathgate
Features and Multimedia Editor
