Using DeepSeek at work is like ‘printing out and handing over your confidential information’

The majority of UK security professionals are concerned DeepSeek will open their organizations up to cyber attacks


Workers using DeepSeek for business tasks are putting sensitive information at serious risk, according to a cybersecurity expert.

Andy Ward, SVP International at Absolute Security, told ITPro that businesses should be extremely cautious about using applications such as DeepSeek, and should offer staff more hands-on training to promote safe AI use in the workplace.

“I think predominantly it's [about] data sovereignty and data governance: where is your data, and who has access to it?” he said.

DeepSeek caused major upheaval in the AI space at the start of 2025 by offering a free alternative to Western AI models and is still freely available in the UK and US.

“The services, we believe, are obviously in mainland China, governed by the Chinese authorities, and therefore any content you share or upload, even inadvertently, is ultimately going to mainland China," Ward said.

“So I guess the concern for anyone running a company is that intellectual property, your data, however innocent a user is, just uploading a file or trying to corroborate a spreadsheet or whatever they're using it for, that data is effectively leaving your organization and going straight into mainland China, where compliance, governance, everything's gone out the window at that point.”

DeepSeek sparked immediate security concerns

In the wake of its rise in popularity, Cisco researchers warned that DeepSeek R1 contained “critical security flaws” that leave it vulnerable to misuse by attackers.

According to Absolute Security’s UK Resilience Risk Index 2025, 60% of senior security professionals believe the use of AI tools such as DeepSeek will increase cyber attacks on their organization.

Absolute Security’s report gathered responses from 250 senior security leaders at medium-to-large UK companies throughout May 2025.

The vast majority of respondents also said that the UK government had a more active role to play in protecting against the potential risks associated with DeepSeek, with 80% insisting that regulation of the model was the government’s responsibility.

Countries such as Germany, South Korea, and Australia have cracked down on use of the app within their borders, and the US government is reportedly mulling its own ban.

“I do think that there is room and a need for oversight, for recommendations from the government, at the very least, saying ‘Look, if you're going to be using AI in the workplace, or even socially, here are the ones that we believe are vetted, are secure, and are robust with data privacy in place’,” Ward said.

This would include recommending that DeepSeek and similar models not be used for commercial purposes, he added.

HR and compliance teams could then formalize this advice as company policy, Ward continued, making it part of company conduct and enforcing a ban internally.

“If you're going to use these in the workplace, it's as good as printing out and handing over confidential information,” he explained.

“Which no one would do in their right minds, but using the likes of DeepSeek – or others, it doesn't have to be DeepSeek – has the same effect.”
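Ward did not detail how such a ban would be enforced technically, but a common first step is blocking the relevant domains at the DNS or web proxy layer. The sketch below is a minimal illustration of that idea, not a method described by Ward or Absolute Security; the domain list and sinkhole address are assumptions and almost certainly incomplete.

```python
# Illustrative sketch only: generate hosts-file-style sinkhole entries
# for AI services an organization has decided to block. The domain list
# is an assumption for demonstration purposes, not an official blocklist.

BLOCKED_AI_DOMAINS = [
    "deepseek.com",
    "chat.deepseek.com",
    "api.deepseek.com",
]

SINKHOLE_IP = "0.0.0.0"  # resolve blocked domains to a non-routable address


def hosts_entries(domains: list[str], sinkhole: str = SINKHOLE_IP) -> str:
    """Return hosts-file lines that sinkhole each blocked domain."""
    return "\n".join(f"{sinkhole} {domain}" for domain in domains)


if __name__ == "__main__":
    # An admin would feed this output into a DNS filter, web proxy
    # blocklist, or endpoint hosts file, alongside the written policy.
    print(hosts_entries(BLOCKED_AI_DOMAINS))
```

In practice this would sit alongside, not replace, the policy and training measures Ward describes, since determined users can route around simple domain blocks.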

Beyond DeepSeek, data from Absolute Security has revealed widespread distrust of AI tools in general among security professionals. Over a third (34%) of surveyed CISOs stated they have banned AI within their workplaces altogether.

Ward told ITPro that this was likely to be a temporary measure while security teams get their AI strategies in order – and warned that in the meantime, staff are unlikely to abstain from AI altogether.

“What I would say about the 34% is that I would guarantee, and lay money on the fact, that irrespective of a ban, people are going to use it anyway to some extent, because of the inherent advantages.”

To tackle this potential shadow AI use, he said security leaders need to offer staff a clear list of approved AI tools, from trusted companies such as Microsoft and Google, as well as other known brands with a track record of strong data governance for enterprise customers.
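One lightweight way to make such an approved list actionable, rather than a document staff never read, is to publish it in machine-readable form that endpoint tooling or a helpdesk script can consult. The following is a hypothetical sketch: the tool identifiers, metadata fields, and policy format are all assumptions for illustration, not a scheme proposed by Ward.

```python
# Illustrative sketch: check a requested AI tool against a company
# allowlist. Tool names and the policy structure are hypothetical.

APPROVED_AI_TOOLS = {
    "microsoft-copilot": {"vendor": "Microsoft", "data_residency": "EU/UK"},
    "google-gemini": {"vendor": "Google", "data_residency": "EU/UK"},
}


def is_approved(tool_id: str) -> bool:
    """Return True if the tool appears on the approved list."""
    return tool_id in APPROVED_AI_TOOLS


def request_tool(tool_id: str) -> str:
    """Simulate the check an endpoint agent or staff member might run."""
    if is_approved(tool_id):
        vendor = APPROVED_AI_TOOLS[tool_id]["vendor"]
        return f"{tool_id} is approved (vendor: {vendor})."
    return f"{tool_id} is not approved; raise a request with the security team."


if __name__ == "__main__":
    print(request_tool("google-gemini"))  # on the allowlist
    print(request_tool("deepseek-chat"))  # not on the allowlist
```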

AI is still in its infancy

Though there are signs that AI is beginning to show return on investment (ROI) in the UK, this is not yet a standard experience and some leaders are yet to see meaningful improvements from their AI spending.

Ward told ITPro that despite the hype of the past 18 months and the rush by some companies to assemble dedicated AI teams, organizations continue to be held back by the quality and preparation of their underlying data.

“I think we're still in the infancy of really seeing it mature,” he said.

“Definitely, the AI models have really come on in leaps and bounds in the last 18 months. So I think there are some really good, tangible use cases out there today, but maybe one of the inhibitors is the lack of quality, centralized data in most legacy organizations.”

He also underlined the major AI skills gap across all regions, with CISOs currently allocating budgets to address this issue. Ward said that while having the right infrastructure is crucial, having skilled humans in the loop is an absolute necessity:

“I think the key to all of this, though, which underpins it all, is having the right talent and maintaining that talent, and making sure that you're not only training your existing teams but in some cases bringing in experts who have that fresh set of skills around AI to truly run it.”

To encourage safe AI uptake among staff, Ward also recommended running internal challenges and competitions, in which workers can use approved AI models to propose improvements to their workflow.

“I think there are many benefits out there, and a lot of them are probably going undiscovered, because [workers are] not feeling comfortable that they should even be using the AI.”


Rory Bathgate
Features and Multimedia Editor
