In from the shadows: AI’s uncontrolled growth

AI is now part of our day-to-day language, but the chatbots and generative AI tools spreading through the enterprise are often unauthorised and uncontrolled

Artificial intelligence is one of the fastest-growing technologies of all time.

The United Nations’ trade and development agency, UNCTAD, predicts that the global AI market will be worth $4.8 trillion by 2033, or 25 times its value just three years ago.

ChatGPT, which popularised generative AI and large language models (LLMs), reached 100m active users within just two months of its launch in 2022, making it the fastest-growing consumer app ever.

As ChatGPT’s growth suggests, however, the initial uptake of generative AI has been largely outside corporate IT, with CIOs and enterprises playing catch-up ever since.

Fast-growing, free-to-use large language models and generative AI tools have created a new layer of technology, largely outside the control of corporate IT. This so-called shadow AI has grown, even as enterprise software vendors have added AI functions to their applications.

While shadow AI has parallels to the shadow IT challenges of the last two decades, uncontrolled use of AI is bringing its own significant challenges.

These include breaches of privacy and confidentiality, loss of intellectual property, security issues, and even incorrect and damaging decisions made by shadow AI.

Hiding in plain sight

According to industry analysts Gartner, IT leaders in 69% of organizations suspect or know that staff are using “unauthorised AI”. The firm also predicts that 40% of businesses will see “security or compliance incidents” resulting from unauthorised AI.

Security and privacy experts are increasingly worried about the downsides of unauthorised AI, especially as it spreads from simple chatbots to more critical areas such as code development (vibe coding), or “agentic” systems that operate with little in the way of human oversight.

Current IT and data security systems are not well-placed to detect and block AI tools, experts warn.

“AI has been available for about three years now, and as such, people are getting used to which platforms they get good results from,” James Gillies, head of cyber security at Logicalis UKI, tells ITPro.

“It’s become ‘easy’ to get a quick answer to almost any question, but oftentimes the consideration of asking the question in the first place gets forgotten, or worse still, ignored,” he warns.

This sees employees putting confidential and sensitive information into AI tools, with little or no consideration of where that data goes and how it might be used or reused.

Employees will continue to use shadow AI unless CIOs and CISOs implement strict controls. Even then, enterprises need to balance the dangers of AI against the risk of stifling innovation.

“Unlike traditional shadow IT, which typically involves unauthorised apps or cloud services, shadow AI introduces unique risks because of the way AI tools operate and interact with sensitive data,” warns Findlay Whitelaw, a cybersecurity strategist and researcher at security company Exabeam.

“Information shared with AI tools may be stored, reused or transferred across jurisdictions, often outside organizational visibility or control,” he tells ITPro. “What adds to the complexity of shadow AI is just how widespread its consequences can be.”

Inside the machine

For CIOs and those in the enterprise tasked with data security and privacy, there is an urgent need to break down the problem posed by shadow AI. This is not easy.

“Shadow AI is a bigger concern than shadow IT as it has identity, access and infrastructure impact,” Jon Collins, field CTO at analysts GigaOm, tells ITPro.

“Shadow IT came with risks: accessing corporate data on unsecured personal devices for example, or using Dropbox and the like for file exchange with data leakage potential, or SaaS systems for project and data management. Each had an operational overhead and was – and remains – a source of pain,” he says.

“Shadow AI, when confined to the LLM or chatbot world, has similar risks. Additional risks come from agentic approaches, which operate autonomously and at scale. [It’s] no longer a single person causing issues or creating risk, but a single person, potentially inadvertently, launching a flotilla of uncontrolled, autonomous operators into an infrastructure environment which has not been set up to protect against them.

“Our internal infrastructures have gaping holes of misconfigured systems, poor deprovisioning and access practices, open APIs and mismanaged secrets, which agents can catalogue, navigate and exploit.”

As Collins describes it, many of the risks from shadow AI are similar to those posed by any unauthorised and uncontrolled technology. For CIOs, this is not a new challenge. Adopting consumer IT in the enterprise has always been a trade-off between control and usability, but AI amplifies the problem, both through its complexity and because humans tend to trust machines.

Issues of trust

“Shadow AI exploits what is often described as ‘machine trust’, the assumption that algorithms are inherently accurate and reliable,” suggests Exabeam’s Whitelaw. “Despite this default trust, AI agents are only as good as the data they’re trained on and can carry biases, errors, or vulnerabilities that can be inadvertently, or even intentionally, exploited.”

Even where individual users trust personal AI tools, and perhaps prefer them to corporate technology, they are unlikely to have insight into AI’s algorithms or the data that vendors use to train their models.

This, in turn, can lead to unexpected and unwelcome results, for which the enterprise can be held liable. Controlling shadow AI requires monitoring tools, mapping AI use to data and business risk, and workforce education.
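
None of this demands exotic tooling; much of the monitoring can start from data the enterprise already collects. As a minimal sketch, assuming a CSV export of web proxy logs with user and domain columns and a hand-maintained watch list of public AI endpoints (both are illustrative assumptions, not a definitive inventory), a few lines of Python can show who is sending traffic to generative AI services:

```python
# Minimal sketch: flag outbound requests to well-known generative AI
# endpoints in an exported proxy log. The log format (a CSV with 'user'
# and 'domain' columns) and the watch list below are illustrative
# assumptions, not a complete inventory of AI services.
import csv
from collections import Counter

# Hypothetical watch list; a real deployment would maintain this centrally.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}


def shadow_ai_hits(log_path: str) -> Counter:
    """Count requests per user to domains on the AI watch list."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits


if __name__ == "__main__":
    for user, count in shadow_ai_hits("proxy_log.csv").most_common(10):
        print(f"{user}: {count} requests to AI services")
```

A crude report like this will not catch every tool, but it gives security teams a factual starting point for the mapping and education work that follows.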

“Organizations will have to look into the usage of AI models. To get more visibility into the models they use, they will have to manage usage and make sure they’re not introducing malicious models into their infrastructure,” Ron Minis, senior security researcher at JFrog, tells ITPro.
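
Minis doesn’t prescribe a particular mechanism, but one common building block for that kind of visibility is provenance checking: confirming that any model artefact pulled into the environment matches a checksum on an internally approved list before it is loaded. The sketch below assumes a JSON allowlist mapping model names to SHA-256 hashes; the file names and format are illustrative rather than part of any vendor’s tooling:

```python
# Minimal sketch: verify a downloaded model artefact against an internal
# allowlist of approved SHA-256 checksums before it is used. The allowlist
# file and its format are assumptions made for illustration.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 rather than reading it into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_approved(model_path: Path, allowlist_path: Path) -> bool:
    """Return True only if the model's checksum appears on the allowlist."""
    approved = set(json.loads(allowlist_path.read_text()).values())
    return sha256_of(model_path) in approved


if __name__ == "__main__":
    model = Path("models/sentiment-classifier.onnx")  # hypothetical artefact
    allowlist = Path("approved_models.json")  # e.g. {"model name": "<sha256>", ...}
    if not is_approved(model, allowlist):
        raise SystemExit(f"{model} is not on the approved model list")
    print(f"{model} verified against the internal allowlist")
```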

However, banning AI won’t work, says Minis. “If companies think they can block or somehow prevent the usage of AI in the company, I think they’re just going to make their employees find other creative ways of using AI without telling them,” he warns.

Instead, enterprises will want to give employees access to AI tools that help them do their jobs more efficiently, integrate well with existing systems and, ideally, are easy to use.

As with shadow IT before it, shadow AI will continue to grow if approved alternatives are too limited, too restrictive, or even just too dull.