AI security blunders have cyber professionals scrambling
Growing AI security incidents have cyber teams fending off an array of threats
Generative AI is no longer a novelty for businesses, but an essential utility - and that means headaches for cybersecurity professionals.
According to a new report from Palo Alto Networks, generative AI traffic rocketed in 2024, rising by more than 890%. Analysis by the security firm found the technology is mostly being used as a writing assistant, accounting for 34% of use cases, followed by conversational agents at 29% and enterprise search at 11%.
Popular apps identified in the study include ChatGPT, Microsoft 365 Copilot, and Microsoft Power Apps.
While use of the technology continues at pace, this boom is also giving rise to significant security issues, with cyber professionals reporting a sharp increase in data security incidents.
Data loss prevention (DLP) incidents related to generative AI more than doubled in early 2025. The average monthly number of generative AI-related data security incidents rose two-and-a-half times, and such incidents now account for 14% of all data security incidents across SaaS traffic, the company found.
"Organizations are grappling with the unfettered proliferation of GenAI applications within their environments. On average, organizations have about 66 GenAI applications in use," the researchers said.
"More importantly, 10% of these were classified as high risk,” researchers added. “The widespread use of unsanctioned GenAI tools, coupled with a lack of clear AI policies and the pressure for rapid AI adoption, can expose organizations to new risks."
Researchers said a key problem here lies in a lack of visibility into AI usage, with shadow AI making it hard for security teams to monitor and control how tools are being used across the organization.
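As a rough illustration of what gaining that visibility might involve, the minimal Python sketch below flags traffic to known generative AI services in web proxy logs. The domain list, log schema, and file name are illustrative assumptions, not details drawn from the Palo Alto Networks report.

```python
# Minimal sketch: surfacing shadow AI usage from web proxy logs.
# The GenAI domain list and the log format are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical list of GenAI service domains to watch for
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "copilot.microsoft.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_genai_traffic(log_path: str) -> Counter:
    """Count requests to known GenAI domains, grouped by user and host."""
    hits = Counter()
    with open(log_path, newline="") as f:
        # Assumed log schema: user,timestamp,destination_host,bytes
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    # "proxy.csv" is a placeholder path for an exported proxy log
    for (user, host), count in flag_genai_traffic("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Even a crude inventory like this gives security teams a starting point for deciding which tools to sanction, monitor, or block.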
Controlling unauthorized access to data is also difficult, the study noted, compounding these concerns.
Jailbroken or manipulated AI models can respond with malicious links and malware, or be turned to unintended purposes, while the proliferation of plugins, copilots, and AI agents is creating an overlooked 'side door'.
Heightening the risk is a rapidly evolving regulatory landscape where non-compliance with emerging AI and data laws can land organizations with severe penalties.
"The uncomfortable truth is that for all its productivity gains, there are many growing concerns – including data loss from sensitive trade secrets or source code shared on unapproved AI platforms," the researchers said.
"There’s also the risk in using unvetted GenAI tools that are vulnerable to poisoned outputs, phishing scams, and malware disguised as legitimate AI responses."
How to address AI security risks
Organizations need to tighten up their processes, according to Palo Alto Networks.
These include implementing conditional access management to limit access to generative AI platforms, apps, and plugins, and using real-time content inspection to guard sensitive data against unauthorized access and leakage, as sketched below.
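For a sense of what real-time content inspection can look like, the sketch below screens outbound prompts for common sensitive-data shapes before they reach a generative AI endpoint. The patterns and blocking policy are hypothetical examples; a production DLP engine would be considerably more sophisticated.

```python
# Minimal sketch of inspecting a prompt before it is forwarded to a
# GenAI service. Patterns and the block policy are illustrative
# assumptions, not a vendor's actual DLP ruleset.
import re

# Hypothetical detectors for common sensitive-data shapes
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def forward_if_clean(prompt: str) -> bool:
    """Block the request if inspection flags anything; otherwise allow it."""
    findings = inspect_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matched {', '.join(findings)}")
        return False
    return True  # hand off to the sanctioned GenAI endpoint here

if __name__ == "__main__":
    forward_if_clean("Summarise this config: AKIA" + "A" * 16)
```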
Similarly, the study advised implementing a zero trust security framework to identify and block the often highly sophisticated, evasive, and stealthy malware and threats that can lurk within generative AI responses.
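In that spirit, model output can be treated as untrusted and vetted before it is rendered to users. The sketch below extracts URLs from a generative AI response and checks them against a local blocklist; the blocklist and domains are invented for illustration and stand in for whatever threat intelligence feed an organization actually uses.

```python
# Minimal sketch of treating GenAI output as untrusted: extract URLs
# from a model response and check them against a local blocklist
# before rendering. The blocklist entries are illustrative assumptions.
import re
from urllib.parse import urlparse

URL_RX = re.compile(r"https?://[^\s\"'<>]+")

# Hypothetical blocklist of known-bad domains
BLOCKED_DOMAINS = {"evil-updates.example", "free-models.example"}

def vet_response(text: str) -> list[str]:
    """Return URLs in a model response whose domains are blocklisted."""
    bad = []
    for url in URL_RX.findall(text):
        host = (urlparse(url).hostname or "").lower()
        if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
            bad.append(url)
    return bad

if __name__ == "__main__":
    reply = "Download the patch from https://evil-updates.example/agent.exe"
    print(vet_response(reply))  # ['https://evil-updates.example/agent.exe']
```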
"The explosive growth of GenAI has fundamentally altered the digital landscape for enterprise organizations," said the team.
"While GenAI unlocks innovation and accelerates competition, the proliferation of unauthorized AI tools is exposing organizations to greater risk of data leakage, compliance failures and security challenges."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
