NCSC issues urgent warning over growing AI prompt injection risks – here’s what you need to know

Many organizations see prompt injection as just another version of SQL injection - but this is a mistake

Malware concept image showing a laptop with skull and crossbones on screen, symbolizing a cyber attack.
(Image credit: Getty Images)

The National Cyber Security Centre (NCSC) is warning security teams to be on the lookout for AI prompt injection attacks.

These involve an attacker creating apparently innocent inputs to large language models (LLMs) which take advantage of the model's inability to distinguish between developer-defined prompts and user inputs to cause unintended behaviour.

Prompt injection attacks are often seen as just another version of SQL injection attacks, said NCSC technical director for platforms research David C, with data and instructions being handled incorrectly - but this is a mistake.

In SQL, instructions are something the database engine does and data is something that is stored or used in a query; much the same is true in cross-site scripting and buffer overflows.

Mitigations for these issues enforce this separation between data and instructions. For example, the use of parameterized queries in SQL means the database engine can never interpret it as an instruction, regardless of the input. The right mitigation solves the data/instruction conflation at its root, David C pointed out.

"Under the hood of an LLM, there’s no distinction made between ‘data' or ‘instructions'; there is only ever ‘next token’. When you provide an LLM prompt, it doesn’t understand the text it in the way a person does. It is simply predicting the most likely next token from the text so far," he said.

"As there is no inherent distinction between ‘data’ and ‘instruction’, it’s very possible that prompt injection attacks may never be totally mitigated in the way that SQL injection attacks can be."

Security teams should stop treating prompt injection as a form of code injection, but instead view it as an exploitation of an ‘inherently confusable deputy’.

This is where a system can be coerced to perform a function that benefits the attacker, typically where a privileged component is coerced into making a request on behalf of a less-privileged attacker.

"Crucially, a classical confused deputy vulnerability can be mitigated, whilst I’d argue LLMs are ‘inherently confusable’ as the risk can’t be mitigated," said David C.

"Rather than hoping we can apply a mitigation that fixes prompt injection, we instead need to approach it by seeking to reduce the risk and the impact. If the system’s security cannot tolerate the remaining risk, it may not be a good use case for LLMs."

AI prompt injection attacks are rising

Prompt injection attacks have become a recurring talking point over the last three years, with security experts warning about the potential for threat actors to manipulate AI models into producing malicious outputs.

Pete Luban, field CISO at AttackIQ, said the NCSC advice should be taken seriously by enterprises using the technology.

However, just because AI prompt injection attacks can't be mitigated in the same way as SQL injection attacks, Luban said this doesn't mean that lessons can't be learned from SQL injection defense.

"Developers need to build systems around LLMs with the awareness that prompt injection attacks are a threatening class of vulnerability. Since these attacks cannot be handled with a single product or appliance, careful design and operation is paramount to preventing exploitation," he said.

"Security teams should understand that all known methods of prompt injection prevention can only reduce chances of an attack or breach. Updating and strengthening their overall security posture, including continuously monitoring systems for irregularities and testing against common adversarial tactics, can help systems identify earlier stages of an attack and quickly move to mitigate it."

Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.

MORE FROM ITPRO

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.