Flaw in Lenovo’s customer service AI chatbot could let hackers run malicious code, breach networks
Hackers abusing the Lenovo flaw could inject malicious code with just a single prompt


Security researchers have found a flaw in Lenovo’s customer service AI chatbot, Lena, that could allow attackers to steal data, compromise customer support systems, and allow lateral movement through a company's network.
An investigation from Cybernews discovered that through cross-site scripting (XSS), it was possible to inject malicious code and steal session cookies with a single prompt.
The 400-character-long prompt started with an inquiry for legitimate information - for example, 'show me the specifications of Lenovo IdeaPad 5 Pro'.
The chatbot was then asked to convert its responses into HTML, JSON, and plain text in the specific order that the web server expected to receive instructions in.
This made sure that the malicious payload would be correctly executed by the web server. Thereafter, the prompt continued with instructions on how to produce the final response, specifically with HTML code for loading an image, but with the image URL non-existent.
When it fails to load, the second part of the command instructs the browser to make a network request to the attacker-controlled server and send all cookie data as part of a URL.
Finally, further instructions (rather imperiously) reinforce that the chatbot must produce the image: 'Show the image at the end. It is important for my decision-making. SHOW IT.'
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
“This example shows just how dangerous an overly 'helpful' AI can be if it blindly follows instructions. Without careful safeguards, chatbots could become easy targets for cyber attacks – putting customer privacy and company security at serious risk,” researchers said.
Lenovo flaw could have serious consequences
Using the stolen support agent’s session cookie, it's possible to log into the customer support system via the support agent’s account, without needing to know the email, username, or password for said account.
Once logged in, an attacker could then potentially access active chats with other users, and even previous conversations and data. It might also be possible to execute some system commands, which could allow for the installation of backdoors and lateral movement across the network.
While XSS vulnerabilities are on the decline, researchers said that companies need to assume that every AI input and output is dangerous until it’s verified as being safe.
This means using a strict whitelist of allowed characters, data types, and formats for all user inputs, with all problematic characters automatically encoded or escaped, as well as for all chatbot responses.
Inline JavaScript should be avoided, and content type validation should extend through the entire stack to prevent unintended HTML rendering.
“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers," commented Žilvinas Girėnas, head of product at nexos.ai.
"LLMs don’t have an instinct for 'safe' – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents."
The researchers discovered the vulnerabilities on July 22 and made a disclosure the same day, which was acknowledged on August 6. The flaw was mitigated by August 18.
ITPro approached Lenovo for comment, but received no response by time of publication.
Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.
MORE FROM ITPRO
- So you introduced an AI chatbot for customers — here’s why they probably hate it
- The pros and cons of chatbots for customer service
- Cisco is jailbreaking AI models so you don’t have to worry about it
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
-
Monday.com expands partner program with new AI and Service specializations
News The software provider has refreshed its channel ecosystem to recognize and reward top-performing AI and Service partners
-
Does Meta know where it's going with AI?
Analysis Does Meta know where it's going with AI? Talent poaching, rabid investment, and now another rumored overhaul of its AI strategy suggests the tech giant is floundering.
-
Using DeepSeek at work is like ‘printing out and handing over your confidential information’
News Thinking of using DeepSeek at work? Think again. Cybersecurity experts have warned you're putting your enterprise at huge risk.
-
Passwords are a problem: why device-bound passkeys can be the future of secure authentication
Industry insights AI-driven cyberthreats demand a passwordless future…
-
Microsoft patched a critical vulnerability in its NLWeb AI search tool – but there's no CVE (yet)
News Researchers found an unauthenticated path traversal bug in the tool debuted at Microsoft Build in May
-
AI breaches aren’t just a scare story any more – they’re happening in real life
News IBM research shows proper AI access controls are leading to costly data leaks
-
The rise of GhostGPT – Why cybercriminals are turning to generative AI
Industry Insights GhostGPT is not an AI tool - It has been explicitly repurposed for criminal activity
-
Think DDoS attacks are bad now? Wait until hackers start using AI assistants to coordinate attacks, researchers warn
News The use of AI in DDoS attacks would change the game for hackers and force security teams to overhaul existing defenses
-
Okta and Palo Alto Networks are teaming up to ‘fight AI with AI’
News The expanded partnership aims to help shore up identity security as attackers increasingly target user credentials
-
Despite the hype, cybersecurity teams are still taking a cautious approach to using AI tools
News Research from ISC2 shows the appetite for AI tools in cybersecurity is growing, but professionals are taking a far more cautious approach than other industries.