Shadow AI can be a tool for AI innovation with the right controls, say Gartner analysts
Data-driven messaging and a supportive approach to securing AI tools are necessary for security staff looking to balance AI risks and unlock better funding
Security leaders need to embrace AI hype to get the attention of board-level executives and improve staff engagement, according to experts at Gartner.
Christine Lee, VP of research at Gartner, and Leigh McMullen, distinguished VP analyst at the firm, told attendees of the Gartner Security and Risk Management Summit 2025, held in London, that organizations need to uphold security while encouraging AI tool use.
Shadow AI, the pair said, can be both a challenge and an opportunity for security staff, as they look to get a grip on AI risks without stifling staff innovation.
“Having a good AI discovery process is the foundation of a versatile AI cyber security program. Use existing tools like web proxies and log management systems to discover what employees and app developers are already doing with AI,” said Lee.
“Once you've discovered this shadow AI, your job is to offer guidance better than ‘nope, not approved’, because shadow AI is very quickly becoming ambient AI, embedded in everything.”
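In practice, that discovery step can start with data security teams already hold. The minimal Python sketch below shows one way to surface shadow AI usage from a web proxy log; the log columns, file name, and list of AI domains are illustrative assumptions rather than any specific product's schema.

```python
# Hypothetical sketch: surfacing shadow AI usage from a web proxy log.
# Assumes a CSV export with "user" and "host" columns; the domain list
# is illustrative and would need to be maintained in practice.
import csv
from collections import Counter

AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

def discover_ai_usage(log_path: str) -> Counter:
    """Count proxy requests per (user, AI domain) pair."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in discover_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user:<20} {host:<25} {count}")
```

A report like this is only the starting point for the triage Lee described, but it turns “what are employees already doing with AI?” into an answerable question.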
They gave the example of the mobile games company Playtika, which analyzed the AI tools its staff were already using. For each tool, security teams assessed how beneficial it was to the business, whether it was irreplaceable, and if its potential risks such as data exposure could be managed.
If it passed these checks, the team allowed the tool to be used, then reviewed why proper adoption channels hadn’t been used in the first place.
The pair stressed that chief information security officers (CISOs) need to encourage, rather than punish, staff for using AI wherever possible, or risk further burnout and employee disengagement.
Gartner’s 2022 Workforce Change Fatigue Survey showed enterprise change initiatives had increased 400% since 2016, while employee support for change dropped 36% over the same period.
McMullen noted that this period predates generative AI adoption, which has only added to the scale and speed with which organizations are changing.
Gartner’s recent Talent Marketing Survey found 36% of workers in environments with high uncertainty reported burnout, while 37% reported high intent to leave their roles, compared to 2% and 0% in environments with more certainty.
To combat this uncertainty, leaders should look to create more stable environments for staff and lean into the opportunities AI presents when it comes to automating boring tasks. This could include answering policy questions or fixing vulnerabilities in code.
Lee suggested that leaders could look to empower staff already using AI by giving them more autonomy and agency, improving staff buy-in, AI skills, and resilience.
In this way, McMullen said, cybersecurity leaders can “harness and rebound off hype’s change energy”, with enthusiasm at the employee and leadership level acting as an opportunity to drive organizational change with little to no pushback.
AI expansion necessitates proper controls
All of this is being put to the test as AI adoption ramps up. Lee warned that tension between cybersecurity leaders and the rest of their organization “gets worse in a hype-driven world where everyone’s bombarded by the promise of new technology” and employees jump at the opportunity to automate workloads.
“CISOs, you folks are in a unique position because unlike any other role in the organization, you must help protect the enterprise’s investment in AI, while protecting it from AI, which of course you’re not going to be able to do without AI,” said McMullen.
In Gartner’s latest AI Survey, 53% of respondents said they are in the process of building custom AI tools, and Lee said this is an opportunity for leaders to unlock “quick wins” such as running tests and supporting security use cases.
“When AI gets to production, consider specialized AI runtime controls,” said Lee.
“There is massive room for improvement here: our data shows that only 23% of you have implemented AI runtime controls. The market is maturing quickly, and emerging tools can inspect and validate queries and responses within an AI pipeline. They can also prevent sensitive data leakage with real-time data masking.”
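To illustrate the kind of inline inspection and masking Lee described, here is a minimal, hedged Python sketch of a runtime control wrapping a model call. The regex patterns and the call_model stub are assumptions for illustration; commercial tools use far more sophisticated detection.

```python
# Hypothetical sketch of an AI runtime control: inspect and mask sensitive
# data in a prompt before it reaches the model, and again in the response.
# The patterns and call_model() stub are illustrative, not a vendor's API.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text: str) -> str:
    """Replace detected sensitive values with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def guarded_query(prompt: str, call_model) -> str:
    """Mask the prompt on the way in and the response on the way out."""
    response = call_model(mask(prompt))  # stub for the real model call
    return mask(response)

if __name__ == "__main__":
    echo = lambda p: f"model saw: {p}"
    print(guarded_query("Email jane@example.com about card 4111 1111 1111 1111", echo))
```

Real runtime control products sit at this same choke point in the pipeline, validating both queries and responses rather than relying on users to sanitize their own inputs.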
Organizations will also need to pair new AI tools with new security processes that encompass harmful outputs, hallucinations, and intellectual property risks.
Gartner data shows 51% of security leaders are planning to establish incident response plans unique to their custom-built AI, including remediation and isolation procedures, McMullen explained, as well as risk assessments that update data retention policies to protect AI inputs and outputs.
“When Leigh and I say become literate about how AI works, its risks, and its limitations, we don't mean that as an injunction not to use AI,” said Lee.
“In fact, the opposite: we must experiment in order to develop practical insight into which use cases are effective, safe, and secure. So be bold with AI, designate AI champions in the way that you've done security champions, spend the next 18 to 24 months collecting use cases, practising with them in cybersecurity first.”
Focus on metrics, not scare tactics
From AI adoption to new ransomware trends, CISOs have a number of priorities to bring to their boards. But when it comes to pitching executives, particularly for more investment, the speakers urged attendees to lean into enablement rather than fear.
McMullen gave an example in which, following a ransomware attack on a competitor, a CEO named Sarah asks her CISO for strategic advice and whether a similar attack could happen against her firm.
“You have Sarah’s undivided attention – but now you also have a couple of choices. You could go in the direction of fear, uncertainty, and doubt, start slipping her brochures, or rant about protection tools, hoping to get some more budget.
“Lee and I hope you don’t do that, because that would erode your credibility as a trusted advisor. Or you could take a different path and show Sarah how your cybersecurity team is already making targeted investments that not only protect the enterprise today, but also future-proof it for the coming new product lines in AI automation.”
To provide that certainty, Lee and McMullen urged attendees to establish protection level agreements (PLAs), defined by McMullen as a “formal agreement on the amount of money the enterprise is willing to spend to deliver a desired level of cybersecurity protection”.
Turning back to the example of Sarah the CEO, the pair asked the audience to imagine 20% of her organization’s critical systems in production had procedures to keep them operational in the event of a ransomware attack.
This would be the organization’s current ‘implicit protection level’, which could spark a board-level conversation about how much risk leadership is comfortable with and what level of protection it is willing to fund.
In their example, the speakers showed how this hypothetical business could look to increase that 20% figure to 70% by spending $1 million – or choose to go even further to 80% for $1.5 million.
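The arithmetic behind that trade-off is deliberately simple, which is what makes it board-friendly. A quick back-of-the-envelope sketch, using only the hypothetical figures above, shows the marginal cost per percentage point of protection rising sharply at the top end:

```python
# Back-of-the-envelope arithmetic on the hypothetical PLA figures above.
def cost_per_point(spend: float, from_pct: int, to_pct: int) -> float:
    """Dollar cost per percentage point of protection gained."""
    return spend / (to_pct - from_pct)

print(cost_per_point(1_000_000, 20, 70))  # $20,000 per point for 20% -> 70%
print(cost_per_point(500_000, 70, 80))    # $50,000 per point for 70% -> 80%
```

Framed this way, the board isn't asked whether cybersecurity is ‘worth it’ in the abstract, but whether the last ten points of coverage justify two and a half times the marginal cost.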
McMullen said this data-driven debate is essential, as it’s nearly impossible to establish an objective return on investment (ROI) for cybersecurity spending:
“Let’s be real here, have you ever tried to calculate the ROI of an anti-ransomware tool? What is it, existence? This is nonsense.
“Instead, framing the discussion in terms of PLAs makes the conversation one of cost-benefit analysis and trade-offs, much more intuitive and understandable.”
The Institute of Cancer Research has measured a 37% increase in cybersecurity budget since introducing PLAs, with its executive committee voting on areas of cybersecurity most in need of investment based on detailed data.