UK firms left in the dark over what workers are sharing with AI
Security teams can’t keep track of what workers are sharing with AI applications, regardless of whether they’re approved or unauthorized
Enterprises across the UK are contending with “critical blind spots” over what workers are sharing with AI applications, according to new research.
A survey from SailPoint found more than two-thirds (67%) of organizations can’t account for the information staff are sharing with AI platforms and large language models (LLMs).
Worse still, the study noted that 35% of respondents admitted to sharing data through external tools, rather than approved internal applications, which is creating an array of risks for enterprises.
The rise of ‘shadow AI’ has become a recurring pain point for organizations over the last two years. Workers using unauthorized applications risk exposing sensitive company data, research shows – and there’s no sign of the trend slowing down.
Gartner research published in November 2025 predicts that 40% of enterprises will suffer a data breach caused by shadow AI by 2030.
SailPoint noted that the growing shadow AI trend comes in spite of the fact many enterprises are investing heavily in data management and AI capabilities for staff.
More than four-in-five respondents (82%) said they have invested in additional staff and skills training to help workers better manage AI applications, while 41% have brought on dedicated AI and analytics personnel.
Notably, nearly half (45%) of IT leaders said they still lack visibility into where information is being shared, and how.
Agentic AI poses new governance challenges
Mark McClain, CEO and Founder at SailPoint, said the findings show AI can often represent a catch-22 for organizations. While these tools are helping staff, they’re now creating additional risk surfaces for security teams.
“AI tools can enhance productivity, but they also create serious risk when they operate outside an organization’s visibility and governance,” he said.
“When sensitive information is entered into unapproved models, it can be exposed, mishandled, or even amplified through errors and hallucinations.”
McClain warned that with the rise of agentic AI, poor data management practices could be further amplified and put businesses at greater risk.
SailPoint noted that the need for greater visibility and oversight is now a priority for many enterprises on account of growing risks. In a previous study from SailPoint, four-in-five organizations (80%) revealed that AI agents had performed “unintended actions” such as accessing or sharing inappropriate data.
UK businesses are adding as many as 10,000 agents and machine identities each month, the company noted, meaning security teams could quickly become overwhelmed.
“As use of AI systems becomes more widespread, the situation is only going to get more out of control if organizations fail to put the right guardrails in place – compounded by other tools flying under the radar,” McClain commented.
“Organizations need to stop workarounds and regain control. That takes a combination of skills and awareness, but it also fundamentally boils down to a challenge around identity.”

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.