Five Eyes agencies sound alarm over risky agentic AI deployments

Security agencies have urged organizations to establish clear boundaries and guardrails for AI agents

Agentic AI concept image showing digitized human eye monitoring digital interfaces and software.
(Image credit: Getty Images)

Agentic AI is so risky that it may be worth finding other ways to automate tasks, reserving the technology for non-sensitive work only.

That's according to the Five Eyes group of intelligence agencies, made up of cybersecurity centers from Australia, the US, Canada, the UK, and New Zealand.

The agencies have published a report warning about the risks of using agentic AI in businesses. The short version: only use it where strictly necessary, and do so with caution. Indeed, they even pointed out that there are safer ways to manage work.

"Where possible, organizations should also consider a full spectrum of solutions for repetitive tasks, including reducing or eliminating low-value processes, which may be lower risk compared to agentic AI solutions," the report said.

"Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritizing resilience, reversibility and risk containment over efficiency gains,” the report added.

Industry experts have long been concerned about the security and operational risks of using AI, and earlier this month PocketOS offered a timely example when an AI agent wiped out the startup's production database in just nine seconds.

Is agentic AI risky?

Generally, introducing agentic AI comes with several concerns, including the potential for increased attack surfaces and greater complexity, as well as data protection risks.

But those challenges are difficult to address. Models may evolve during deployment, or simply take a different route to a result than the one previously tested, while the inability to see which part of a system made a decision raises accountability issues.

"Gaps in agentic AI cyber security tooling and the immaturity of relevant standards further amplify these risks," the report added. "Governance mechanisms designed for human actors do not always translate effectively to autonomous AI agents."

More specifically, the report noted that agentic AI raises risks around privileges. If an organization gives an agent broad access to do its job, for example, any malicious actor that takes the agent over inherits its "excessive privileges".
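
One common way to avoid handing an agent that kind of blanket access is a least-privilege tool gate. The Python sketch below is a minimal illustration; the agent names, scopes, and ToolRequest type are invented for the example, not drawn from the report.

```python
# A minimal sketch of least-privilege tool scoping for an agent.
# All names here (ToolRequest, PolicyError, AGENT_SCOPES) are
# hypothetical, not from the Five Eyes report or any framework.

from dataclasses import dataclass

# Each agent gets only the narrow set of tools its task requires,
# so a hijacked agent cannot inherit broad, "excessive" privileges.
AGENT_SCOPES = {
    "invoice-reader": {"read_invoice", "extract_totals"},
    "report-writer": {"read_summary", "write_draft"},
}

class PolicyError(Exception):
    """Raised when an agent requests a tool outside its scope."""

@dataclass
class ToolRequest:
    agent_id: str
    tool: str

def authorize(request: ToolRequest) -> None:
    allowed = AGENT_SCOPES.get(request.agent_id, set())
    if request.tool not in allowed:
        # Deny by default: unknown agents and unscoped tools are blocked.
        raise PolicyError(f"{request.agent_id} may not call {request.tool}")

authorize(ToolRequest("invoice-reader", "read_invoice"))   # permitted
# authorize(ToolRequest("invoice-reader", "drop_table"))   # raises PolicyError
```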

The AI's erratic behavior is another risk: an agent may meet requests in unexpected ways or exploit technical loopholes. One example, the report noted, is specification gaming.

"For example, an AI agent tasked with maximizing system uptime might disable security updates to avoid reboots."

The report also warned that agents can over-optimize and push boundaries, misinterpret requests, and be outright deceptive.

"As AI systems become more sophisticated, they may develop capabilities that designers did not explicitly program or anticipate," the report added.

"Complex AI models interacting with real-world systems can display behaviours that even their creators did not foresee. This unpredictability makes it difficult to assess security risks fully before deployment."

Beyond that, agents are a key target for hackers because of their access to data as well as corporate systems.

Security best practices are critical

For those businesses using agentic AI, precautions are necessary, the report warned.

"Agentic AI developers, vendors and operators should implement a layered defence and strict access controls to reduce the likelihood of compromise," the report noted.

Defense should, as ever, begin at the design stage rather than be crammed in just before implementation, the report noted.

"Careful consideration of the system architecture, including security controls and tooling, is necessary," the report said.

"Practitioners should understand threats, anticipate risks to agentic AI systems and proactively integrate mitigations into system design before development and deployment."

In particular, the Five Eyes agencies recommended controlling prompt contexts to limit the damage of prompt injection, and building in oversight mechanisms with a human in the loop to catch issues and boost trust.
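
One way to build that oversight in is to gate sensitive actions behind explicit human approval. The minimal Python sketch below is illustrative only; the action names and approval callback are hypothetical, not prescribed by the report.

```python
# A minimal human-in-the-loop gate: one way to implement the oversight
# the agencies call for. Action names and the approval flow are invented.

SENSITIVE_ACTIONS = {"delete_records", "send_payment", "modify_schema"}

def run_action(action: str, payload: dict, approve) -> str:
    # Low-risk actions run autonomously; sensitive ones pause for a human.
    if action in SENSITIVE_ACTIONS and not approve(action, payload):
        return f"blocked: {action} denied by human reviewer"
    return f"executed: {action}"

# In a real deployment `approve` would page an operator; here it's a stub
# that always says no, so only the non-sensitive action goes through.
print(run_action("summarize_report", {}, approve=lambda a, p: False))
print(run_action("delete_records", {"table": "users"}, approve=lambda a, p: False))
```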

Strong identity management is also necessary, with the report recommending developers construct each agent as its own "distinct principal" with its own unique keys and certificates, blocking access to any unknown agent or key.
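
In practice, that can be as simple as issuing each agent its own credential and rejecting anything unsigned or unknown. The sketch below uses per-agent HMAC keys from the Python standard library as a stand-in; a production system would use proper PKI with per-agent certificates, as the report suggests.

```python
# A minimal sketch of treating each agent as a distinct principal with
# its own secret key, using only the standard library. Real deployments
# would use per-agent certificates and PKI, not raw HMAC keys.

import hashlib
import hmac
import secrets

# Issue a unique key per agent at provisioning time.
agent_keys = {agent: secrets.token_bytes(32)
              for agent in ("scheduler", "summarizer")}

def sign(agent_id: str, message: bytes) -> bytes:
    return hmac.new(agent_keys[agent_id], message, hashlib.sha256).digest()

def verify(agent_id: str, message: bytes, tag: bytes) -> bool:
    # Unknown agents have no key on file, so they are rejected outright.
    key = agent_keys.get(agent_id)
    if key is None:
        return False
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = sign("scheduler", b"run nightly backup")
assert verify("scheduler", b"run nightly backup", tag)        # known principal
assert not verify("summarizer", b"run nightly backup", tag)   # wrong key
assert not verify("intruder", b"run nightly backup", tag)     # unknown agent
```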

Elsewhere, the report reminded businesses of the need for defense in depth, noting that any single precaution may fail, whether the attack vector is AI or otherwise.

Beyond those recommendations, the report called for AI developers to harden agents against attacks and weaknesses through comprehensive testing and evaluation, using adversarial testing and simulated environments, with an eye to future risks.

"AI agents operate autonomously in complex environments and therefore require more thorough evaluations than LLMs," the report said.


Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.

Nicole is the author of a book about the history of technology, The Long History of the Future.