Gartner says 40% of enterprises will experience ‘shadow AI’ breaches by 2030 — educating staff is the key to avoiding disaster
Staff need to be educated on the risks of shadow AI to prevent costly breaches
Two-fifths of enterprises could face serious security or compliance incidents as a result of shadow AI by 2030, prompting calls for more robust governance practices.
Analysis from Gartner shows 40% of businesses could suffer incidents stemming from unauthorized AI usage as employees continue to use tools not monitored or cleared by security teams.
The findings from Gartner come in the wake of a survey of cybersecurity leaders which underlined growing concerns about the rise of shadow AI. More than two-thirds (69%) of respondents said their organization either suspects – or has evidence to prove – that employees are using prohibited tools.
These tools, the consultancy said, can increase the risk of IP loss and data exposure, as well as cause other security and compliance issues.
Gartner said the trend will require a concerted effort to educate staff on the use of these tools, clearer guidelines, and more detailed monitoring.
“To address these risks, CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran, distinguished VP analyst at Gartner.
How to tackle shadow AI
The Gartner report is the latest in a string of industry studies warning about the use of unauthorized AI solutions.
A recent Microsoft study, for example, found nearly three-quarters (71%) of UK-based workers admitted to using shadow AI tools rather than those offered by their employer.
Notably, the report found that 22% of workers had used unauthorized tools for risky finance-related tasks, leaving their employers significantly exposed.
Guidance from the British Computer Society (BCS) on shadow AI aligns closely with the advice from Gartner. The professional body said organizations should adopt a comprehensive approach to tackling the problem, one that combines policy development, employee education, and technological oversight.
Policies should cover all aspects of AI use, from data input to output, and be flexible enough to respond to advancements in AI technology and regulatory changes.
Similarly, reviews should be carried out regularly, while blacklists of websites and tools that organizations don't want employees to use can help, alongside continuous monitoring.
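As a rough illustration of what that kind of monitoring might look like in practice, the minimal Python sketch below scans a proxy log export for well-known generative AI domains that aren't on an approved list. The log format, domain list, and allow-list are assumptions made for the example, not part of Gartner's or the BCS's guidance, and any real deployment would hook into an organization's own proxy, DNS, or CASB tooling instead.

```python
# Illustrative sketch only: flag potential shadow AI usage by scanning a
# proxy log export for known generative AI domains that are not on an
# organization's approved list. The CSV format and both domain lists are
# assumptions for this example.
import csv
from collections import defaultdict

# Hypothetical watch list of well-known generative AI domains.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

# Hypothetical allow-list of tools the organization has sanctioned.
APPROVED = {"copilot.microsoft.com"}


def flag_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> unapproved AI domains seen in the log.

    Expects a CSV with 'user' and 'domain' columns (an assumed format).
    """
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS and domain not in APPROVED:
                hits[row["user"]].add(domain)
    return hits


if __name__ == "__main__":
    # Print each user alongside the unapproved AI domains they accessed.
    for user, domains in flag_shadow_ai("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```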
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.