Generative AI data violations more than doubled last year
Shadow AI is preventing business leaders from keeping a lid on sensitive data
The increasing use of generative AI is leading to a surge in data policy violations, with the number more than doubling year-on-year, according to a new report.
The average organization is now logging 223 incidents per month of users sending sensitive data to AI apps, according to Netskope Threat Labs’ Cloud and Threat Report: 2026, with the figure reaching 2,100 among organizations in the top 25%.
User uploads of regulated data, including personal, financial, or healthcare information, represented the biggest category of policy violations, at 54%.
Much of the problem comes down to the continued prevalence of shadow AI, said Netskope: while reliance on personal generative AI accounts has declined, 47% of generative AI users are still accessing tools via personal, unmanaged accounts, either exclusively or alongside company-approved tools.
Personal apps pose a significant insider threat, featuring in six in ten insider threat incidents, with regulated data, intellectual property, source code, and credentials frequently sent to personal app instances in violation of an organization's policies.
“Enterprise security teams exist in a constant state of change and new risks as organizations evolve and adversaries innovate,” said Ray Canzanese, director of Netskope Threat Labs.
“However, genAI adoption has shifted the goal posts. It represents a risk profile that has taken many teams by surprise in its scope and complexity, so much so that it feels like they are struggling to keep pace and losing sight of some security basics."
The overall number of generative AI app users grew 200% over the past year, the researchers found, while the total volume of prompts leapt 500%, rising from an average of 3,000 to 18,000 per organization per month.
Nine in ten organizations are now actively blocking at least one generative AI application, up from 80% last year, with the average organization blocking ten tools.
Teams should map where sensitive information travels, Netskope advised, including through personal app usage. They should also implement controls that log and manage user activity across all cloud services, tracking data movements and applying consistent policies across managed and unmanaged services alike.
Detailed logs built on this tracking infrastructure are crucial to ensuring staff adhere to data protection standards.
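As a rough illustration of that advice, the sketch below shows how a logging-and-policy layer might record every data movement and flag regulated data bound for an unmanaged AI app. The detectors, the MANAGED_APPS allowlist, and the check_prompt helper are hypothetical assumptions for the example, not drawn from Netskope's report or any specific DLP product.

```python
# Illustrative sketch only: log each prompt sent to an AI app and block
# regulated data headed for unmanaged (personal) services.
import re
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# Hypothetical detectors; real DLP engines use far richer pattern libraries.
REGULATED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

# Hypothetical allowlist of company-managed AI services.
MANAGED_APPS = {"chat.company-approved-ai.example"}

def check_prompt(user: str, app_host: str, prompt: str) -> bool:
    """Log the data movement; return True if the prompt may be sent."""
    hits = [name for name, pattern in REGULATED_PATTERNS.items() if pattern.search(prompt)]
    managed = app_host in MANAGED_APPS
    # One consistent log line for managed and unmanaged services alike.
    logging.info("user=%s app=%s managed=%s detectors=%s", user, app_host, managed, hits or "none")
    if hits and not managed:
        logging.warning("BLOCKED: regulated data (%s) bound for unmanaged app %s",
                        ", ".join(hits), app_host)
        return False
    return True

# Example: an SSN headed for a personal, unmanaged AI account is logged and blocked.
check_prompt("asmith", "personal-chatbot.example", "My SSN is 123-45-6789, draft a letter")
```

The point of the sketch is the shape of the control, not the detectors themselves: every movement is logged in one place, and the same policy applies whether the destination is a sanctioned tool or a personal account.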
The researchers said the increasing use of agentic AI systems is creating a vast new attack surface, one that demands a fundamental re-evaluation of security perimeters and trust models.
To tackle this threat, organizations should incorporate agentic AI monitoring into their risk assessments, they said, including mapping the tasks these systems perform and making sure that they operate within approved governance frameworks.
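To make that recommendation concrete, here is a minimal, hypothetical sketch of an agent wrapper that checks each task an agent attempts against an approved governance policy and keeps an audit log. The GovernancePolicy and MonitoredAgent classes, and the task names, are illustrative assumptions rather than anything specified in the report.

```python
# Illustrative sketch only: map an agent's tasks against an approved
# governance framework and audit every attempt.
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    # Hypothetical task taxonomy approved for this agent.
    approved_tasks: set[str] = field(default_factory=lambda: {"summarize_document", "draft_email"})

    def permits(self, task: str) -> bool:
        return task in self.approved_tasks

@dataclass
class MonitoredAgent:
    name: str
    policy: GovernancePolicy
    audit_log: list[str] = field(default_factory=list)

    def perform(self, task: str) -> None:
        allowed = self.policy.permits(task)
        # Every attempted task is recorded, allowed or not.
        self.audit_log.append(f"{self.name}: task={task} allowed={allowed}")
        if not allowed:
            raise PermissionError(f"Task '{task}' is outside the approved governance framework")
        # ...dispatch to the underlying model or tooling here...

agent = MonitoredAgent("report-bot", GovernancePolicy())
agent.perform("summarize_document")        # permitted and logged
try:
    agent.perform("exfiltrate_database")   # denied and logged for review
except PermissionError as err:
    print(err)
```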
"Security teams need to expand their security posture to be ‘AI-aware’, evolving policy and expanding the scope of existing tools like DLP, to foster a balance between innovation and security at all levels,” said Canzanese.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
