Gartner says 40% of enterprises will experience ‘shadow AI’ breaches by 2030 — educating staff is the key to avoiding disaster
Staff need to be educated on the risks of shadow AI to prevent costly breaches
Two in five enterprises could face serious security or compliance incidents as a result of shadow AI by 2030, prompting calls for more robust governance practices.
Analysis from Gartner shows 40% of businesses could fall foul of unauthorized AI usage as employees continue to use tools not monitored or cleared by security teams.
The findings from Gartner come in the wake of a survey of cybersecurity leaders which underlined growing concerns about the rise of shadow AI. More than two-thirds (69%) of respondents said their organization either suspects – or has evidence to prove – that employees are using prohibited tools.
These tools, the consultancy said, can increase the risk of IP loss and data exposure, as well as cause other security and compliance issues.
Gartner said the trend will require a concerted effort to educate staff on the use of these tools, clearer guidelines, and more detailed monitoring.
“To address these risks, CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran, distinguished VP analyst at Gartner.
How to tackle shadow AI
The Gartner report is the latest in a string of industry studies warning about the use of unauthorized AI solutions.
A recent Microsoft study, for example, found nearly three-quarters (71%) of UK-based workers admitted to using shadow AI tools rather than those offered by their employer.
Notably, the report found that 22% of workers had used unauthorized tools for risky finance-related tasks, placing their employers at significant risk.
Guidance from the British Computer Society (BCS) on shadow AI aligns closely with the advice from Gartner. The body said organizations should adopt a comprehensive approach to tackling the problem, one that combines policy development, employee education, and technological oversight.
Policies should cover all aspects of AI use, from data input to output, and be flexible enough to respond to advancements in AI technology and regulatory changes.
Similarly, reviews should be carried out regularly, while blocklists of websites and tools that organizations don't want employees to use can help, alongside continuous monitoring of network activity.
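The blocklist-plus-monitoring approach described above can be illustrated with a minimal sketch. The domain list, log format, and function name here are purely hypothetical assumptions for illustration, not a real enterprise configuration or a specific vendor's tooling:

```python
# Minimal sketch: flag proxy log entries whose destination host appears on a
# blocklist of unapproved AI tools. All domains and log lines are invented
# examples.
from urllib.parse import urlparse

# Hypothetical blocklist of AI tool domains not cleared by the security team
BLOCKED_AI_DOMAINS = {"chat.example-ai.com", "api.example-llm.net"}

def flag_shadow_ai(log_lines):
    """Return requested URLs whose hostname is on the blocklist."""
    flagged = []
    for line in log_lines:
        # Assumes each log line ends with the requested URL
        url = line.strip().split()[-1]
        host = urlparse(url).hostname
        if host in BLOCKED_AI_DOMAINS:
            flagged.append(url)
    return flagged

logs = [
    "2025-05-01T09:12:03 user42 GET https://chat.example-ai.com/session",
    "2025-05-01T09:12:09 user42 GET https://intranet.corp/home",
]
print(flag_shadow_ai(logs))  # → ['https://chat.example-ai.com/session']
```

In practice this kind of check would run inside a secure web gateway or CASB product rather than a standalone script, but the principle, matching outbound traffic against a maintained list of unapproved AI services, is the same.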
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
