Almost a third of workers are covertly using AI at work – here’s why that’s a terrible idea
Employers need to get wise to the use of unauthorized AI tools and tighten up policies


Almost half of office workers say they're using AI tools that aren't provided by their employer, with nearly a third keeping it a secret.
For 36%, the reason is that they feel it gives them a secret advantage, while three in ten worry their job may be cut. More than a quarter report AI-fueled imposter syndrome, saying they don't want people to question their ability.
Findings from Ivanti's 2025 Technology at Work Report: Reshaping Flexible Work show that the use of AI at work is rising, with 42% of employees using the technology in their daily workflow in 2025.
This, the study noted, marks a significant increase on the year prior, when just a quarter said they used AI in their role.
IT professionals are even keener on AI, with three-quarters using it. But even though they'd be expected to be more aware of the security risks, 38% are still using unauthorized tools.
This growing trend of covert AI use is a serious cause for concern, Ivanti noted, and employers need to start cracking down on the practice.
"Employees are using AI tools without their bosses' knowledge to boost productivity. It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards," said Brooke Johnson, Ivanti chief legal counsel and SVP of HR and security.
"Employees adopting this technology without proper guidelines or approval could be fueling threat actors, violating company contracts, and risking valuable company IP."
Shadow AI could cause a security disaster
Ivanti warned that the use of unauthorized AI tools at work is putting many organizations at risk, and its report isn't the only study in the past year to emphasize the dangers.
Research from Veritas Technologies, for example, found that 38% of UK office workers said that they or a colleague had fed an LLM sensitive information such as customer financial data.
However, six in ten failed to realize that doing so could leak confidential information and breach data privacy regulations.
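Much of this risk can be blunted before a prompt ever leaves the corporate network. As a minimal sketch, assuming a hypothetical redaction layer sitting in front of chatbot traffic (the pattern names and regular expressions are illustrative placeholders, not any vendor's actual detection logic), a first pass in Python might look like this:

```python
import re

# Illustrative patterns only; a production DLP (data loss prevention)
# filter would use far more robust detection than these two regexes.
SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt is sent to an LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Customer jane@example.com paid with card 4111 1111 1111 1111"))
# -> Customer [REDACTED email] paid with card [REDACTED card number]
```

A regex pass like this is necessarily leaky, which is why the guidance that follows pairs technical controls with policy and training.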
Meanwhile, analysis from BCS last year warned that staff using non-approved tools risk breaching data privacy rules, exposing themselves to security vulnerabilities, and even infringing intellectual property rights.
"To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications," said Johnson.
"By fostering an open dialogue, employers can encourage transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively."
A raft of major firms have already cracked down on the use of AI at work, most notably Apple, which implemented strict controls on the use of ChatGPT not long after the chatbot launched in late 2022.
Amazon and JP Morgan implemented similar policies, while Samsung took drastic action after an engineer accidentally leaked sensitive information by uploading code to the popular chatbot.
But it's not just a question of policies, said Johnson: organizations also need to monitor whether those policies are actually being followed.
"Employees are using AI tools without their bosses' knowledge to boost productivity," she said.
"It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.