AI security blunders have cyber professionals scrambling
Growing AI security incidents have cyber teams fending off an array of threats


Generative AI is no longer a novelty for businesses, but an essential utility - and that means headaches for cybersecurity professionals.
According to a new report from Palo Alto Networks, generative AI traffic surged by more than 890% in 2024. Analysis by the security firm found the technology is mostly being used as a writing assistant, accounting for 34% of use cases, followed by conversational agents at 29% and enterprise search at 11%.
Popular apps identified in the study include ChatGPT, Microsoft 365 Copilot, and Microsoft Power Apps.
While adoption continues apace, the boom is also giving rise to significant security issues, with cyber professionals reporting a sharp increase in data security incidents.
Data loss prevention (DLP) incidents related to generative AI more than doubled in early 2025. Meanwhile, the average monthly number of generative AI-related data security incidents rose two-and-a-half-fold and now accounts for 14% of all data security incidents across SaaS traffic, the company found.
"Organizations are grappling with the unfettered proliferation of GenAI applications within their environments. On average, organizations have about 66 GenAI applications in use," the researchers said.
"More importantly, 10% of these were classified as high risk,” researchers added. “The widespread use of unsanctioned GenAI tools, coupled with a lack of clear AI policies and the pressure for rapid AI adoption, can expose organizations to new risks."
Researchers said a key problem here lies in a lack of visibility into AI usage, with shadow AI making it hard for security teams to monitor and control how tools are being used across the organization.
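As a rough illustration of how teams might begin to recover that visibility, the Python sketch below scans egress proxy logs for traffic to well-known generative AI domains. The log path, column names, and domain watchlist are assumptions for the example, not details from the Palo Alto Networks report.

```python
# Minimal shadow-AI discovery sketch: count requests to known GenAI
# services in an egress proxy log. The log format and domain list are
# illustrative assumptions, not details from the report.
import csv
from collections import Counter

# Hypothetical watchlist of generative AI service domains.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "copilot.microsoft.com",
    "gemini.google.com",
    "claude.ai",
}

def find_genai_usage(log_path: str) -> Counter:
    """Count requests per (user, domain) pair for watched GenAI services."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        # Assumed columns: timestamp,user,destination_host,bytes_out
        for row in csv.DictReader(f):
            host = row["destination_host"].strip().lower()
            if host in GENAI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_genai_usage("proxy.csv").most_common(10):
        print(f"{user} -> {domain}: {count} requests")
```

Even a crude report like this gives a security team a starting inventory of who is using which tools, something the researchers suggest most organizations currently lack.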
It's also hard to control unauthorized access to data, the study noted, raising further concerns.
Jailbroken or manipulated AI models can respond with malicious links and malware, or be turned to unintended purposes, while the proliferation of plugins, copilots, and AI agents is creating an overlooked 'side door'.
Heightening the risk is a rapidly evolving regulatory landscape where non-compliance with emerging AI and data laws can land organizations with severe penalties.
"The uncomfortable truth is that for all its productivity gains, there are many growing concerns – including data loss from sensitive trade secrets or source code shared on unapproved AI platforms," the researchers said.
"There’s also the risk in using unvetted GenAI tools that are vulnerable to poisoned outputs, phishing scams, and malware disguised as legitimate AI responses."
How to address AI security risks
Organizations need to tighten up their processes, according to Palo Alto Networks.
They should implement conditional access management to limit access to generative AI platforms, apps, and plugins, and guard sensitive data against unauthorized access and leakage using real-time content inspection.
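At its simplest, real-time content inspection means screening outbound prompts before they reach a GenAI service. The Python sketch below blocks text matching a handful of sensitive-data patterns; the patterns and blocking policy are assumptions for illustration, and production DLP relies on far richer classification.

```python
# Minimal outbound-prompt inspection sketch: block prompts that match
# simple sensitive-data patterns before they leave for a GenAI app.
# Patterns and policy are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_prompt(text: str) -> bool:
    """Block the prompt if inspection finds anything sensitive."""
    findings = inspect_prompt(text)
    if findings:
        print(f"Blocked: prompt matched {findings}")
        return False
    return True

# Example: this prompt would be stopped before reaching the GenAI service.
allow_prompt("Debug this script, my key is sk-abcdefghijklmnopqrstuvwx")
```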
Similarly, the study advised implementing a zero trust security framework to identify and block the often highly sophisticated, evasive, and stealthy malware and other threats hidden within generative AI responses.
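Applied to model output, the same "never trust, always verify" principle means treating a GenAI response like any other untrusted content. The sketch below extracts URLs from a response and flags anything outside a domain allowlist; the allowlist and URL pattern are assumptions for the example, and real zero trust controls go much further (sandboxing, file scanning, identity checks).

```python
# Minimal response-vetting sketch: flag URLs in a GenAI response that
# fall outside a trusted-domain allowlist. The allowlist is an
# illustrative assumption.
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s)\"'<>]+")
TRUSTED_DOMAINS = {"docs.python.org", "learn.microsoft.com"}  # assumed allowlist

def vet_response(text: str) -> tuple[bool, list[str]]:
    """Return (safe, suspicious_urls) for a model response."""
    suspicious = [
        url for url in URL_RE.findall(text)
        if (urlparse(url).hostname or "") not in TRUSTED_DOMAINS
    ]
    return (not suspicious, suspicious)

safe, flagged = vet_response("Try the patch at https://evil.example/fix.exe first.")
print(safe, flagged)  # -> False ['https://evil.example/fix.exe']
```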
"The explosive growth of GenAI has fundamentally altered the digital landscape for enterprise organizations," said the team.
"While GenAI unlocks innovation and accelerates competition, the proliferation of unauthorized AI tools is exposing organizations to greater risk of data leakage, compliance failures and security challenges."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.