AI security blunders have cyber professionals scrambling
A growing number of AI security incidents has cyber teams fending off an array of threats


Generative AI is no longer a novelty for businesses, but an essential utility - and that means headaches for cybersecurity professionals.
According to a new report from Palo Alto Networks, generative AI traffic rocketed in 2024, rising by more than 890%. Analysis by the security firm found the technology is mostly being used as a writing assistant, accounting for 34% of use cases, followed by conversational agents at 29% and enterprise search at 11%.
Popular apps identified in the study include ChatGPT, Microsoft 365 Copilot, and Microsoft Power Apps.
While adoption of the technology continues apace, the boom is also giving rise to significant security issues, with cyber professionals reporting a sharp increase in data security incidents.
Data loss prevention (DLP) incidents related to generative AI more than doubled in early 2025, the company found, with the average monthly number of generative AI-related data security incidents rising two-and-a-half times to account for 14% of all data security incidents across SaaS traffic.
"Organizations are grappling with the unfettered proliferation of GenAI applications within their environments. On average, organizations have about 66 GenAI applications in use," the researchers said.
"More importantly, 10% of these were classified as high risk,” researchers added. “The widespread use of unsanctioned GenAI tools, coupled with a lack of clear AI policies and the pressure for rapid AI adoption, can expose organizations to new risks."
Researchers said a key problem lies in a lack of visibility into AI usage: shadow AI makes it hard for security teams to monitor and control how tools are being used across the organization.
Unauthorized access to data is also difficult to control, the study noted, raising further concerns.
Jailbroken or manipulated AI models can respond with malicious links and malware, or be put to unintended uses, while the proliferation of plugins, copilots, and AI agents is creating an overlooked 'side door'.
Heightening the risk is a rapidly evolving regulatory landscape where non-compliance with emerging AI and data laws can land organizations with severe penalties.
"The uncomfortable truth is that for all its productivity gains, there are many growing concerns – including data loss from sensitive trade secrets or source code shared on unapproved AI platforms," the researchers said.
"There’s also the risk in using unvetted GenAI tools that are vulnerable to poisoned outputs, phishing scams, and malware disguised as legitimate AI responses."
How to address AI security risks
Organizations need to tighten up their processes, according to Palo Alto Networks.
They should implement conditional access management to limit access to generative AI platforms, apps, and plugins, and use real-time content inspection to guard sensitive data against unauthorized access and leakage.
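As a purely illustrative sketch (none of this comes from the Palo Alto Networks report), the Python below shows what conditional access combined with real-time content inspection might look like at a forward proxy. The app list, group policy, and DLP patterns are all hypothetical placeholders that an organization would replace with its own policy and threat data.

```python
import re

# Hypothetical access policy: which user groups may reach which GenAI apps.
GENAI_ACCESS_POLICY = {
    "chat.openai.com": {"engineering", "marketing"},
    "copilot.microsoft.com": {"engineering"},
}

# Hypothetical DLP patterns for real-time inspection of outbound prompts.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-like identifiers
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),    # leaked key material
    re.compile(r"\binternal[-_ ]only\b", re.IGNORECASE),  # labelled documents
]

def allow_request(user_groups: set[str], host: str, prompt: str) -> bool:
    """Allow only sanctioned apps, permitted groups, and clean prompts."""
    allowed = GENAI_ACCESS_POLICY.get(host)
    if allowed is None or not (user_groups & allowed):
        return False  # unsanctioned app, or the user's group lacks access
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return False  # prompt appears to carry sensitive data
    return True

print(allow_request({"engineering"}, "chat.openai.com", "Summarize this memo"))  # True
print(allow_request({"finance"}, "chat.openai.com", "Summarize this memo"))      # False
print(allow_request({"engineering"}, "copilot.microsoft.com",
                    "-----BEGIN PRIVATE KEY-----"))                              # False
```

In practice these checks would sit in a secure web gateway or CASB rather than in application code, but the shape of the decision is the same: deny by default, then inspect content before it leaves.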
Similarly, the study advised implementing a zero trust security framework to identify and block the often highly sophisticated, evasive, and stealthy malware and other threats hidden within generative AI responses.
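In the same zero trust spirit, model output is treated as untrusted input. The sketch below (again illustrative only; the blocklist is a hypothetical stand-in for a real threat intelligence feed) screens a generative AI response for links to known-bad domains before it reaches the user.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

# Hypothetical stand-in for a threat intelligence feed of known-bad domains.
BLOCKED_DOMAINS = {"malware-updates.example", "free-crypto.example"}

def screen_response(text: str) -> tuple[bool, list[str]]:
    """Never trust model output: flag links pointing at known-bad domains."""
    findings = [
        url for url in URL_RE.findall(text)
        if (urlparse(url).hostname or "") in BLOCKED_DOMAINS
    ]
    return (not findings, findings)

ok, hits = screen_response(
    "Download the patch from http://malware-updates.example/setup.exe"
)
print(ok, hits)  # False ['http://malware-updates.example/setup.exe']
```

A production deployment would add reputation scoring, sandbox detonation, and inline blocking, but the principle holds: a response from a GenAI tool gets no more implicit trust than any other untrusted web content.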
"The explosive growth of GenAI has fundamentally altered the digital landscape for enterprise organizations," said the team.
"While GenAI unlocks innovation and accelerates competition, the proliferation of unauthorized AI tools is exposing organizations to greater risk of data leakage, compliance failures and security challenges."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.