Gartner says 40% of enterprises will experience ‘shadow AI’ breaches by 2030 — educating staff is the key to avoiding disaster

Staff need to be educated on the risks of shadow AI to prevent costly breaches


Two in five enterprises could face serious security or compliance-related incidents as a result of shadow AI by 2030, prompting calls for more robust governance practices.

Analysis from Gartner shows 40% of businesses could be hit by unauthorized AI usage as employees continue to use tools not monitored or cleared by security teams.

The findings from Gartner come in the wake of a survey of cybersecurity leaders which underlined growing concerns about the rise of shadow AI. More than two-thirds (69%) of respondents said their organization either suspects – or has evidence to prove – that employees are using prohibited tools.

These tools, the consultancy said, can increase the risk of IP loss and data exposure, and cause other security and compliance issues.

Gartner said countering the trend will require a concerted effort to educate staff on the use of these tools, along with clearer guidelines and more detailed monitoring.

“To address these risks, CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran, distinguished VP analyst at Gartner.

How to tackle shadow AI

The Gartner report is the latest in a string of industry studies warning about the use of unauthorized AI solutions.

A recent Microsoft study, for example, found nearly three-quarters (71%) of UK-based workers admitted to using shadow AI tools rather than those offered by their employer.

Notably, the report found that 22% of workers had used unauthorized tools for sensitive finance-related tasks, exposing their employers to significant risk.

Guidance from the British Computer Society (BCS) on shadow AI aligns closely with the advice from Gartner. The institute said organizations should adopt a comprehensive approach to tackling the problem, one that combines policy development, employee education, and technological oversight.

Policies should cover all aspects of AI use, from data input to output, and be flexible enough to respond to advancements in AI technology and regulatory changes.

Reviews should also be carried out regularly, while blacklists of websites and tools that organizations don't want employees to use can help, alongside continuous monitoring.
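
To make that oversight piece concrete: at its simplest, continuous monitoring means checking outbound traffic against a list of unapproved destinations. The sketch below is purely illustrative, assuming a hypothetical denylist of AI tool domains and a made-up proxy log format; it is not drawn from Gartner's or the BCS's guidance, and real deployments would typically rely on a secure web gateway or similar tooling rather than a standalone script.

# Illustrative sketch only: flag outbound requests to unapproved AI tools.
# The domains and log format here are hypothetical examples.

import csv
import io

# Hypothetical denylist of AI tool domains the organization has not approved.
DENYLIST = {
    "chat.example-ai.com",
    "api.example-llm.net",
}

# Hypothetical proxy log: timestamp, user, destination domain.
SAMPLE_LOG = """\
2025-01-15T09:02:11,alice,chat.example-ai.com
2025-01-15T09:05:42,bob,intranet.corp.local
2025-01-15T09:07:03,carol,api.example-llm.net
"""

def flag_shadow_ai(log_text: str) -> list[tuple[str, str, str]]:
    """Return (timestamp, user, domain) rows that hit the denylist."""
    hits = []
    for timestamp, user, domain in csv.reader(io.StringIO(log_text)):
        if domain.strip().lower() in DENYLIST:
            hits.append((timestamp, user, domain))
    return hits

if __name__ == "__main__":
    for timestamp, user, domain in flag_shadow_ai(SAMPLE_LOG):
        print(f"{timestamp}: {user} reached unapproved AI tool {domain}")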



Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.