Huawei executive says ‘we need to embrace AI hallucinations’

Businesses need to understand that AI hallucinations are part and parcel of how the technology works

Hallucinations have commonly been considered a problem for generative AI, with chatbots such as ChatGPT, Claude, or Gemini prone to producing ‘confidently incorrect’ answers in response to queries.

This can pose a serious problem for users. Lawyers, for example, have in several instances cited non-existent cases as precedent, or presented the wrong conclusions and outcomes from cases that really do exist.

These instances only come to light because they are embarrassingly public, which is unfortunate for the lawyers involved, but it’s an experience almost every user will have had at some point.

For enterprises, there are hopes that LLMs trained on proprietary data – and only proprietary data – may be less prone to hallucinations than public chatbots, but outputs still need to be checked manually in cases where accuracy is key.

At the 2025 Huawei Connect conference, however, Tao Jingwen, director of Huawei’s quality, business process & IT management department, suggested that this is a wrongheaded way to look at the technology. Instead, Tao said that businesses should embrace hallucinations as part and parcel of generative AI.

Speaking at the manufacturing summit within the conference, Tao outlined some of the challenges facing the sector when it comes to using AI.

“AI hallucinations and the black box nature of AI make it challenging for businesses and enterprises, especially businesses from the manufacturing sector, to trust and control, raising new issues around predictability and explainability,” he told delegates.

“From my point of view, well first of all we need to embrace AI hallucinations,” Tao added. “Without hallucinations, AI wouldn’t be what it is today. But there’s still a need to find effective ways to control and mitigate hallucinations.”

Tao also spoke of the challenges facing manufacturers that want to integrate AI into their businesses successfully and usefully when they already have decades of digitalization, automation, and refined processes in place.

“AI implementation requires collaboration across business, IT, data, and other teams for effective implementation, but such a magic box from AI has brought new challenges to us,” Tao said.

“Especially in manufacturing after years of digitalization and process, how can we better integrate AI with good results?”

AI hallucinations aren’t going anywhere

Tao’s comments on hallucinations come amid an increasing resignation to – rather than embrace of – the fact that these erroneous outputs are unavoidable.

In a paper published on 4 September 2025, OpenAI researchers said that hallucinations are a consequence of how most LLMs are trained, and that they are “inevitable” for base models.

More specialized training, with a more focused data set and penalties for incorrect outputs, can help mitigate this, the researchers noted.
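The intuition behind penalizing incorrect outputs can be made concrete with a toy expected-score calculation. The Python sketch below uses hypothetical numbers and is not taken from the OpenAI paper itself; it simply shows why a model graded on a binary right-or-wrong basis always does better by guessing, while one that loses points for wrong answers does better by admitting it doesn’t know when its confidence is low.

```python
# A toy comparison of two grading schemes (hypothetical numbers).
# Scoring: a correct answer earns 1 point, abstaining earns 0,
# and a wrong answer earns `wrong_penalty` points.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score for attempting an answer, given the chance it's correct."""
    return p_correct * 1.0 + (1.0 - p_correct) * wrong_penalty

p = 0.3        # hypothetical chance the model's answer is correct
abstain = 0.0  # "I don't know" scores zero under both schemes

# Binary grading: wrong answers cost nothing, so guessing always beats abstaining.
print(expected_score(p, wrong_penalty=0.0))   # 0.3 -> guessing wins

# Penalized grading: wrong answers lose a point, so abstaining wins here.
print(expected_score(p, wrong_penalty=-1.0))  # -0.4 -> abstaining wins
```

Training against the second kind of scoring is what nudges a model toward abstaining rather than hallucinating.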

Ultimately, though, businesses will need to decide whether it’s worth the effort of training an LLM in this way, or whether other workarounds could be used to, in Tao’s words, “control and mitigate” the problem.

Jane McCallion
Managing Editor

Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers while continuing to specialize in enterprise IT infrastructure and business strategy.

Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.