Huawei executive says 'we need to embrace AI hallucinations'
Businesses need to understand that AI hallucinations are part-and-parcel of how the technology works
Hallucinations have commonly been considered a problem for generative AI, with chatbots such as ChatGPT, Claude, or Gemini prone to producing ‘confidently incorrect’ answers in response to queries.
This can pose a serious problem for users. There are several cases of lawyers, for example, citing non-existent cases as precedent or presenting the wrong conclusions and outcomes from cases that really do exist.
Unfortunately for the lawyers involved, these instances only came to light because they were embarrassingly public – but it's an experience most users of generative AI will have had at some point.
For enterprises, there are hopes that LLMs trained on proprietary data – and only proprietary data – may be less prone to these hallucinations than public chatbots, but the need to manually check outcomes remains in cases where accuracy is key.
At the 2025 Huawei Connect conference, however, Tao Jingwen, director of Huawei’s quality, business process & IT management department, suggested that this is a wrongheaded way to look at the technology. Instead, Tao said that businesses should embrace hallucinations as part and parcel of generative AI.
Speaking at the manufacturing summit within the conference, Tao spoke about some of the challenges facing the sector when it comes to using AI.
“AI hallucinations and the black box nature of AI make it challenging for businesses and enterprises, especially businesses from the manufacturing sector, to trust and control, raising new issues around predictability and explainability,” he told delegates.
“From my point of view, well first of all we need to embrace AI hallucinations,” Tao added. “Without hallucinations, AI wouldn’t be what it is today. But there’s still a need to find effective ways to control and mitigate hallucinations.”
Tao also spoke of the challenges facing manufacturers that want to integrate AI into their businesses successfully and usefully when they already have decades of digitalization, automation, and refined processes in place.
“AI implementation requires collaboration across business, IT, data, and other teams for effective implementation, but such a magic box from AI has brought new challenges to us,” Tao said.
“Especially in manufacturing after years of digitalization and process, how can we better integrate AI with good results?”
AI hallucinations aren’t going anywhere
Tao’s comments on hallucinations come amid an increasing resignation to – rather than embrace of – the fact these erroneous outputs are unavoidable.
In a paper published on 4 September 2025, OpenAI researchers said that hallucinations were a fact of how most LLMs have been trained and that they are “inevitable” for base models.
More specialist training that has a more focused data set and that penalizes incorrect outputs can help mitigate this, researchers noted.
Ultimately, though, businesses would need to decide if it’s worth the effort of training an LLM in this way or if there are other workarounds that could be used to, in the words of Tao, “control and mitigate” the problem.

Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers, while continuing to specialize in enterprise IT infrastructure, and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.