Flaw in Lenovo’s customer service AI chatbot could let hackers run malicious code, breach networks
Hackers abusing the Lenovo flaw could inject malicious code with just a single prompt


Security researchers have found a flaw in Lenovo’s customer service AI chatbot, Lena, that could allow attackers to steal data, compromise customer support systems, and allow lateral movement through a company's network.
An investigation from Cybernews discovered that through cross-site scripting (XSS), it was possible to inject malicious code and steal session cookies with a single prompt.
The 400-character-long prompt started with an inquiry for legitimate information - for example, 'show me the specifications of Lenovo IdeaPad 5 Pro'.
The chatbot was then asked to convert its responses into HTML, JSON, and plain text in the specific order that the web server expected to receive instructions in.
This made sure that the malicious payload would be correctly executed by the web server. Thereafter, the prompt continued with instructions on how to produce the final response, specifically with HTML code for loading an image, but with the image URL non-existent.
When it fails to load, the second part of the command instructs the browser to make a network request to the attacker-controlled server and send all cookie data as part of a URL.
Finally, further instructions (rather imperiously) reinforce that the chatbot must produce the image: 'Show the image at the end. It is important for my decision-making. SHOW IT.'
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
“This example shows just how dangerous an overly 'helpful' AI can be if it blindly follows instructions. Without careful safeguards, chatbots could become easy targets for cyber attacks – putting customer privacy and company security at serious risk,” researchers said.
Lenovo flaw could have serious consequences
Using the stolen support agent’s session cookie, it's possible to log into the customer support system via the support agent’s account, without needing to know the email, username, or password for said account.
Once logged in, an attacker could then potentially access active chats with other users, and even previous conversations and data. It might also be possible to execute some system commands, which could allow for the installation of backdoors and lateral movement across the network.
While XSS vulnerabilities are on the decline, researchers said that companies need to assume that every AI input and output is dangerous until it’s verified as being safe.
This means using a strict whitelist of allowed characters, data types, and formats for all user inputs, with all problematic characters automatically encoded or escaped, as well as for all chatbot responses.
Inline JavaScript should be avoided, and content type validation should extend through the entire stack to prevent unintended HTML rendering.
“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers," commented Žilvinas Girėnas, head of product at nexos.ai.
"LLMs don’t have an instinct for 'safe' – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents."
The researchers discovered the vulnerabilities on July 22 and made a disclosure the same day, which was acknowledged on August 6. The flaw was mitigated by August 18.
ITPro approached Lenovo for comment, but received no response by time of publication.
Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.
MORE FROM ITPRO
- So you introduced an AI chatbot for customers — here’s why they probably hate it
- The pros and cons of chatbots for customer service
- Cisco is jailbreaking AI models so you don’t have to worry about it
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
-
Generative AI attacks are accelerating at an alarming rate
News Two new reports from Gartner highlight the new AI-related pressures companies face, and the tools they are using to counter them
-
Hackers are using AI to dissect threat intelligence reports and ‘vibe code’ malware
News TrendMicro has called for caution on how much detail is disclosed in security advisories
-
Anthropic admits hackers have 'weaponized' its tools – and cyber experts warn it's a terrifying glimpse into 'how quickly AI is changing the threat landscape'
News Security experts say Anthropic's recent admission that hackers have "weaponized" its AI tools gives us a terrifying glimpse into the future of cyber crime.
-
Security researchers have just identified what could be the first ‘AI-powered’ ransomware strain – and it uses OpenAI’s gpt-oss-20b model
News Using OpenAI's gpt-oss:20b model, ‘PromptLock’ generates malicious Lua scripts via the Ollama API.
-
Microsoft quietly launched an AI agent that can detect and reverse engineer malware
News Researchers say the tool is already achieving the “gold standard” in malware classification
-
Using DeepSeek at work is like ‘printing out and handing over your confidential information’
News Thinking of using DeepSeek at work? Think again. Cybersecurity experts have warned you're putting your enterprise at huge risk.
-
Passwords are a problem: why device-bound passkeys can be the future of secure authentication
Industry insights AI-driven cyberthreats demand a passwordless future…
-
Microsoft patched a critical vulnerability in its NLWeb AI search tool – but there's no CVE (yet)
News Researchers found an unauthenticated path traversal bug in the tool debuted at Microsoft Build in May