AI security blunders have cyber professionals scrambling
Growing AI security incidents have cyber teams fending off an array of threats
Generative AI is no longer a novelty for businesses, but an essential utility - and that means headaches for cybersecurity professionals.
According to a new report from Palo Alto Networks, generative AI traffic rocketed in 2024, rising by more than 890%. Analysis by the security firm found the technology is mostly being used as a writing assistant, accounting for 34% of use cases, followed by conversational agents at 29% and enterprise search at 11%.
Popular apps identified in the study include ChatGPT, Microsoft 365 Copilot, and Microsoft Power Apps.
While adoption of the technology continues apace, the boom is also giving rise to significant security issues, with cyber professionals reporting a sharp increase in data security incidents.
Data loss prevention (DLP) incidents related to generative AI more than doubled in early 2025. Meanwhile, the average monthly number of generative AI-related data security incidents rose by two-and-a-half times, and these now account for 14% of all data security incidents across SaaS traffic, the company found.
"Organizations are grappling with the unfettered proliferation of GenAI applications within their environments. On average, organizations have about 66 GenAI applications in use," the researchers said.
"More importantly, 10% of these were classified as high risk," researchers added. "The widespread use of unsanctioned GenAI tools, coupled with a lack of clear AI policies and the pressure for rapid AI adoption, can expose organizations to new risks."
Researchers said a key problem here lies in a lack of visibility into AI usage, with shadow AI making it hard for security teams to monitor and control how tools are being used across the organization.
It's also hard to control unauthorized access to data, the study noted, raising further concerns.
Jailbroken or manipulated AI models can respond with malicious links and malware, or be turned to unintended purposes, while the proliferation of plugins, copilots, and AI agents is creating an overlooked 'side door'.
Heightening the risk is a rapidly evolving regulatory landscape where non-compliance with emerging AI and data laws can land organizations with severe penalties.
"The uncomfortable truth is that for all its productivity gains, there are many growing concerns – including data loss from sensitive trade secrets or source code shared on unapproved AI platforms," the researchers said.
"There’s also the risk in using unvetted GenAI tools that are vulnerable to poisoned outputs, phishing scams, and malware disguised as legitimate AI responses."
How to address AI security risks
Organizations need to tighten up their processes, according to Palo Alto Networks.
The firm recommends implementing conditional access management to limit access to generative AI platforms, apps, and plugins, and using real-time content inspection to guard sensitive data against unauthorized access and leakage.
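To illustrate what real-time content inspection of outbound prompts might involve, here is a minimal, purely hypothetical sketch. The report does not prescribe any implementation; the patterns, function names, and verdict scheme below are assumptions for illustration only, and a production DLP engine would use far richer detection methods (classifiers, data fingerprinting, exact-data matching) than simple regular expressions.

```python
import re

# Illustrative patterns for sensitive content an employee might paste into a
# GenAI prompt. These are placeholder examples, not a real DLP rule set.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.internal\b"),
}

def inspect_prompt(prompt: str) -> dict:
    """Return a verdict for an outbound GenAI prompt.

    If any sensitive pattern matches, the prompt is blocked; otherwise
    it is allowed. Matched pattern names are returned for audit logging.
    """
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return {"verdict": "block" if hits else "allow", "matches": hits}
```

For example, `inspect_prompt("Summarise this press release")` would be allowed, while a prompt containing a credential-like string or an internal hostname would be blocked before reaching the external service.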
Similarly, the study advised implementing a zero trust security framework to identify and block what is often highly sophisticated, evasive, and stealthy malware, as well as threats hidden within generative AI responses.
"The explosive growth of GenAI has fundamentally altered the digital landscape for enterprise organizations," said the team.
"While GenAI unlocks innovation and accelerates competition, the proliferation of unauthorized AI tools is exposing organizations to greater risk of data leakage, compliance failures and security challenges."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
