NCSC issues urgent warning over growing AI prompt injection risks – here’s what you need to know
Many organizations see prompt injection as just another version of SQL injection - but this is a mistake
The National Cyber Security Centre (NCSC) is warning security teams to be on the lookout for AI prompt injection attacks.
These involve an attacker creating apparently innocent inputs to large language models (LLMs) which take advantage of the model's inability to distinguish between developer-defined prompts and user inputs to cause unintended behaviour.
Prompt injection attacks are often seen as just another version of SQL injection attacks, said NCSC technical director for platforms research David C, with data and instructions being handled incorrectly - but this is a mistake.
In SQL, instructions are something the database engine does and data is something that is stored or used in a query; much the same is true in cross-site scripting and buffer overflows.
Mitigations for these issues enforce this separation between data and instructions. For example, the use of parameterized queries in SQL means the database engine can never interpret it as an instruction, regardless of the input. The right mitigation solves the data/instruction conflation at its root, David C pointed out.
"Under the hood of an LLM, there’s no distinction made between ‘data' or ‘instructions'; there is only ever ‘next token’. When you provide an LLM prompt, it doesn’t understand the text it in the way a person does. It is simply predicting the most likely next token from the text so far," he said.
"As there is no inherent distinction between ‘data’ and ‘instruction’, it’s very possible that prompt injection attacks may never be totally mitigated in the way that SQL injection attacks can be."
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
Security teams should stop treating prompt injection as a form of code injection, but instead view it as an exploitation of an ‘inherently confusable deputy’.
This is where a system can be coerced to perform a function that benefits the attacker, typically where a privileged component is coerced into making a request on behalf of a less-privileged attacker.
"Crucially, a classical confused deputy vulnerability can be mitigated, whilst I’d argue LLMs are ‘inherently confusable’ as the risk can’t be mitigated," said David C.
"Rather than hoping we can apply a mitigation that fixes prompt injection, we instead need to approach it by seeking to reduce the risk and the impact. If the system’s security cannot tolerate the remaining risk, it may not be a good use case for LLMs."
AI prompt injection attacks are rising
Prompt injection attacks have become a recurring talking point over the last three years, with security experts warning about the potential for threat actors to manipulate AI models into producing malicious outputs.
Pete Luban, field CISO at AttackIQ, said the NCSC advice should be taken seriously by enterprises using the technology.
However, just because AI prompt injection attacks can't be mitigated in the same way as SQL injection attacks, Luban said this doesn't mean that lessons can't be learned from SQL injection defense.
"Developers need to build systems around LLMs with the awareness that prompt injection attacks are a threatening class of vulnerability. Since these attacks cannot be handled with a single product or appliance, careful design and operation is paramount to preventing exploitation," he said.
"Security teams should understand that all known methods of prompt injection prevention can only reduce chances of an attack or breach. Updating and strengthening their overall security posture, including continuously monitoring systems for irregularities and testing against common adversarial tactics, can help systems identify earlier stages of an attack and quickly move to mitigate it."
Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.
MORE FROM ITPRO
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
-
What is Microsoft Maia?Explainer Microsoft's in-house chip is planned to a core aspect of Microsoft Copilot and future Azure AI offerings
-
If Satya Nadella wants us to take AI seriously, let’s forget about mass adoption and start with a return on investment for those already using itOpinion If Satya Nadella wants us to take AI seriously, let's start with ROI for businesses
-
90% of companies are woefully unprepared for quantum security threats – analysts say they need to get a move onNews Quantum security threats are coming, but a Bain & Company survey shows systems aren't yet in place to prevent widespread chaos
-
LastPass issues alert as customers targeted in new phishing campaignNews LastPass has urged customers to be on the alert for phishing emails amidst an ongoing scam campaign that encourages users to backup vaults.
-
NCSC names and shames pro-Russia hacktivist group amid escalating DDoS attacks on UK public servicesNews Russia-linked hacktivists are increasingly trying to cause chaos for UK organizations
-
An AWS CodeBuild vulnerability could’ve caused supply chain chaos – luckily a fix was applied before disaster struckNews A single misconfiguration could have allowed attackers to inject malicious code to launch a platform-wide compromise
-
There’s a dangerous new ransomware variant on the block – and cyber experts warn it’s flying under the radarNews The new DeadLock ransomware family is taking off in the wild, researchers warn
-
Supply chain and AI security in the spotlight for cyber leaders in 2026News Organizations are sharpening their focus on supply chain security and shoring up AI systems
-
Veeam patches Backup & Replication vulnerabilities, urges users to updateNews The vulnerabilities affect Veeam Backup & Replication 13.0.1.180 and all earlier version 13 builds – but not previous versions.
-
NHS supplier DXS International confirms cyber attack – here’s what we know so farNews The NHS supplier says front-line clinical services are unaffected
