The risks of shadow AI and what leaders can do to prevent it


Amid all the excitement around generative AI and chatbots, companies may be overlooking how their employees are using the technology. 

‘Shadow AI’ is a term used to describe the phenomenon of employees using generative AI tools, like ChatGPT, to assist them in their work without the knowledge or permission of their employer. The intention behind it is generally well-meaning – be it streamlining repetitive and mundane tasks, batch-writing emails, or responding to customer queries more quickly – but it can lead to cyber security and data risks.

“Shadow AI will take place when companies don’t engage in conversations around generative AI early enough and don’t provide teams with the tools they need to succeed,” says Dom Couldwell, head of field engineering for the EMEA region at DataStax, a platform for real-time application development.

“They will risk their staff using large language models (LLMs) without at least informing IT.”

According to a new report from Veritas Technologies, 49% of UK office workers use generative AI at least once a week and 19% use it every working day, per TechRadar. The report also found that 38% of respondents said they or a colleague had fed sensitive information, such as customer financial data, into an LLM, while 60% admitted they were unaware that sharing such data could put their employer at risk. Moreover, 44% of UK employees currently receive no guidance on generative AI from their employer.

It’s clear that companies should be doing more to address the issue of shadow AI: a C-suite survey published by Kaspersky in October 2023 found 95% of executive respondents were worried about shadow AI in their workforce. But the question is: where do you start?

Generative AI bans are futile 

Generative AI comes with risks, which can be compounded by shadow AI. For example, if leaders don’t have oversight of the information their workforce is passing to closed AI tools, they could be teeing themselves up for a data breach. In 2023, Apple restricted internal ChatGPT use due to fears employees might accidentally expose sensitive company information through the popular tool, while TechRadar Pro reported on a case in which Samsung engineers accidentally leaked source code by using ChatGPT for code suggestions.

But while you may think the best move is to implement a full or partial ban on generative AI, this approach is likely to push even more use of the technology into the shadows. “The reality is the technology is there and people will use it, even if it’s forbidden. If people are going to use it, then you want to know about it, and it benefits businesses to foster a culture where people can be open,” says Graham Glass, CEO of education platform Cypher Learning.

In order to establish this culture, senior leaders need to have bought into the value and benefits of generative AI and how it can improve productivity and even reduce burnout.

“Adoption needs to have a robust structure and processes built around it. This starts with executive sponsorship to ensure the strategy has backing from the senior leaders,” argues Sara Portell, vice president of user experience at enterprise cloud application provider Unit4. The strategy should be overseen by a team made up of employees from across the business, including HR and marketing, “so there’s consensus around how it’s adopted and messages are communicated consistently to everyone,” she adds. 

Education can prevent shadow AI becoming a habit

Once senior leaders are on board, then they’ll be better placed to make employees aware of the lurking threat of shadow AI. 

“Leaders should educate teams on what safe generative AI practice looks like. They should also provide clear guidance on when ChatGPT can and can’t be used safely at work,” advises Steve Salvin, founder and CEO of data insights company Aiimi. 

The guidance defining acceptable use should be set out in your company’s AI policy. The wording should be crystal clear – any gray areas that are open to interpretation could bring generative AI use in through the back door, even if employees aren’t acting maliciously. This is also true for AI solutions that may already be in a firm’s tech stack.

For example, GitHub Copilot and GitHub Copilot Enterprise – which users were already relying on to generate 46% of their code as of February 2023 – can produce code containing dangerous vulnerabilities.
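
To make the risk concrete, here is a hypothetical illustration – an assumption about the kind of suggestion an assistant might autocomplete, not an example drawn from any report. The first function builds an SQL query by string interpolation, a classic injection flaw, while the second shows the parameterized version a reviewer should insist on:

    import sqlite3

    # Hypothetical assistant-style suggestion: interpolating user input
    # directly into SQL. A user_id of "1 OR 1=1" would return every row.
    def get_user_unsafe(conn: sqlite3.Connection, user_id: str):
        query = f"SELECT * FROM users WHERE id = {user_id}"  # UNSAFE
        return conn.execute(query).fetchall()

    # The safer pattern: pass values as parameters so the driver escapes them.
    def get_user_safe(conn: sqlite3.Connection, user_id: str):
        return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()

Generated code, in other words, needs the same review and static analysis that human-written code gets.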

As for how the education should be delivered, Glass recommends using “engaging, memorable, story-based training that can grab employees’ attention to bring them on side and help them see the implications of generative AI gone wrong”. But this shouldn’t be seen as a one-and-done exercise. Employees’ understanding of shadow AI needs to be assessed on a regular basis to ensure they really are aware of what is expected of them.

“This can go a long way in helping drive employee confidence, accountability, and compliance regarding generative AI. And with it out of the shadows and responsibly controlled, innovation might even be encouraged. If employees find new ways to use generative AI, then this could be something that benefits the business,” Glass adds.

Don’t forget about data governance 

A mix of education and stronger AI policies can help you put in place guardrails against shadow AI. However, equipping employees with the knowledge and tools they need to ensure they’re using it safely and sensibly is only truly effective if you’re implementing good data governance.

“It’s not enough to train employees to keep data safe if that data is being poorly governed in the first place. Poorly governed data doesn't make for accurate or relevant outputs by generative AI tools, either,” says Salvin. 

You should run an audit to understand what data your company holds and update the relevant permissions to ensure that only authorized employees have access to certain information. Any sensitive data needs to be labeled as such and encrypted to prevent it from being submitted to an LLM, Salvin adds. Specialist tools such as Microsoft’s Azure OpenAI Service can help you automate these processes.
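
As a rough sketch of what automating that last step might look like – the patterns and names below are illustrative assumptions, not a substitute for dedicated data loss prevention tooling – a pre-submission filter could scan outbound prompts and redact anything that looks sensitive before it ever reaches an LLM:

    import re

    # Illustrative patterns only; a real deployment would rely on proper
    # data classification and DLP tooling rather than ad-hoc regexes.
    SENSITIVE_PATTERNS = {
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def redact(prompt: str) -> tuple[str, list[str]]:
        """Redact anything matching a sensitive pattern and report what was found."""
        findings = []
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                findings.append(label)
                prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
        return prompt, findings

    clean, found = redact("Summarise this: card 4111 1111 1111 1111, contact jo@example.com")
    if found:
        print("Redacted before submission:", found)
    print(clean)

The specific patterns matter less than the architecture: the check happens before data leaves the business, not after.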

"Tech leaders must take control of the generative AI agenda in their business and they need to do this before their employees do it for them,” he concludes. 

Rich McEachran

Rich is a freelance journalist writing about business and technology for national, B2B and trade publications. While his specialist areas are digital transformation and leadership and workplace issues, he’s also covered everything from how AI can be used to manage inventory levels during stock shortages to how digital twins can transform healthcare. You can follow Rich on LinkedIn.