Deepfake business risks are growing – here's what leaders need to know

As the risk of being targeted by deepfakes increases, what should businesses be looking out for?


Deepfake attacks once seemed a distant threat, but there are now numerous examples of the technology being used successfully by adversaries.

Generative AI attacks are accelerating at an alarming rate, as hackers deploy tailored AI tools and abuse public-facing models. For example, Google has warned that APTs are already using its Gemini models for malicious purposes. In a survey from September 2025, Gartner found that 62% of organizations had experienced a deepfake attack involving social engineering or the exploitation of automated processes.

Others estimate an even bigger impact from deepfakes. According to enterprise security provider Ironscales, 85% of IT leaders at mid-sized and large enterprises have experienced at least one deepfake attack in the last year.

As the risk of being targeted by an attack of this kind grows, what should businesses be looking out for and how should they respond?

Types of deepfake attacks

Deepfakes are often delivered via AI-enhanced voice and video to impersonate company executives and persuade employees to transfer cash.

A famous example is the deepfake incident that affected the Hong Kong branch of Arup, where hackers used deepfake technology to impersonate the firm’s CFO and trick a finance worker into remitting $25 million directly to the criminals.

“The scam was so elaborate that at one point, the unsuspecting worker attended a video call with deepfake recreations of several coworkers, which he later said looked and sounded just like his real colleagues,” says Sergei Serdyuk, VP of product management at NAKIVO.

Alexia Konstantinidi, vice president of DFIR at Kroll, describes how the firm has observed a range of cyber attacks leveraging deepfakes. In several incidents, attackers impersonated employees by combining voice-cloned messages with follow-up phishing emails designed to prompt the target into taking action. “In one such case, a voice note message on WhatsApp impersonating an executive led to the targeted employee downloading information-stealer malware, resulting in credential theft, data exfiltration and ransomware.”

At the same time, adversaries are using the technology to disguise their identities when applying for jobs, such as in North Korean fake IT worker schemes. “People are applying for jobs with false identities – a trend that’s catching on in an age of remote working,” says Mark McClain, CEO and founder at SailPoint.

US security firm KnowBe4 fell victim to this type of attack after hiring what it believed was a remote software engineer, complete with successful video interviews and background checks. In reality, the "employee" was a North Korean adversary using a stolen identity and AI-generated imagery, says Luke Cooper, lead research and development analyst at SYTECH. “The ruse was only exposed after the individual began installing malware on company devices.”

Deepfakes are usually delivered via social engineering, says McClain. “Attackers use emails, phone calls and video chats to impersonate executives or contractors.”

Adversaries typically use a “hybrid approach”, bringing in deepfake content to augment social engineering attacks, says Konstantinidi. “Typically, a voice-cloned audio message will be shared before or after a customised email has been sent to the target, who might be a helpdesk or security engineer.”

The risk for businesses

The risk of deepfake attacks appears to be growing as the technology becomes more accessible. The threat from deepfakes has escalated from a “niche concern” to a “mainstream cybersecurity priority” at “remarkable speed”, says Cooper. “The barrier to entry has lowered dramatically thanks to open source software and automated creation tools. Even low-skilled threat actors can launch highly convincing attacks.”

The target pool is also expanding, says Cooper. “As larger corporations invest in advanced mitigation strategies, threat actors are turning their attention to small and medium-sized businesses, which often lack the resources and dedicated cybersecurity teams to combat these threats effectively.”

The technology itself is also improving. Deepfakes have already improved “a staggering amount” – even in the past six months, says McClain. “The tech is internalising human mannerisms all the time. It is already widely accessible at a consumer level, even used as a form of entertainment via face swap apps.”

Adding to the risk, generative AI is only going to get “more accessible and easier to use”, Cooper warns. “As models become more powerful, the realism of synthetic voices, faces and behaviors will continue to improve, making deepfakes increasingly difficult to detect and trivially easy to deploy.”

However, at the moment, there are still limitations to deepfake content generation. Most voice-cloning models struggle to precisely reproduce regional accents and tone, says Konstantinidi. “That said, we have observed large improvements since last year with models now supporting more languages and accents than before. A recording of five minutes or less from a podcast, video interview or phone call can be enough to produce a very convincing clone.”

How to mitigate deepfake attacks

For now, there are steps you can take to protect your business against deepfake attacks.

Firms can begin by identifying publicly available audio and video material featuring executives and other high-value targets, Konstantinidi says. “Once this exposure is understood, organizations should determine which procedures involve those most likely to be impersonated and remove any reliance on voice-only or single-channel verification. Where possible, attempt to remove public media of employees to reduce what is available.”

Going forward, restrict public voice and video exposure where possible, adds Cooper. “Every podcast appearance, conference recording and video message provides raw material for voice and image cloning. Be strategic about what gets recorded and published.”

Employee awareness is also key in stopping deepfake attacks. Company-wide phishing and social engineering training should be updated to include the growing threat of AI-generated content and detection mechanisms, Konstantinidi says. “Update incident response plans to explicitly include impersonation scenarios involving audio or video deepfakes, ensuring that escalation paths and verification steps are clearly defined.”

Meanwhile, technology can be helpful in mitigating deepfake attack risks. Cooper recommends deepfake detection tools that use AI to analyse facial movements, voice patterns and metadata in emails, calls and video conferences. “While not foolproof, these tools can flag suspicious content for human review.”
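
To make the "flag for human review" pattern Cooper describes more concrete, here is a minimal sketch of how detection output might be routed. It is purely illustrative: the score names, thresholds and function are assumptions, not the API of any real detection product, which will expose its own interfaces.

```python
from dataclasses import dataclass

# Hypothetical detection scores (0.0 = likely genuine, 1.0 = likely synthetic)
# produced by whichever deepfake detection tool an organization deploys.
@dataclass
class MediaScores:
    face_motion: float    # consistency of facial movements
    voice_pattern: float  # signs of voice cloning
    metadata: float       # anomalies in headers, codecs or timestamps

REVIEW_THRESHOLD = 0.6  # assumption: tuned per organization and tool

def route_media(scores: MediaScores) -> str:
    """Decide whether content is released, held for human review, or blocked."""
    worst = max(scores.face_motion, scores.voice_pattern, scores.metadata)
    if worst >= 0.9:
        return "block"            # high confidence the content is synthetic
    if worst >= REVIEW_THRESHOLD:
        return "hold_for_review"  # not foolproof, so a human makes the call
    return "release"

# Example: a call recording with suspicious voice characteristics is held back.
print(route_media(MediaScores(face_motion=0.2, voice_pattern=0.7, metadata=0.1)))
```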

With the risks in mind, it also makes sense to implement multi-factor authentication for sensitive requests. “Require secondary confirmation through a separate, pre-established channel,” says Cooper. “If your CFO sends an urgent email requesting a wire transfer, call them back on their known mobile number before proceeding.”
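
As a rough illustration of that out-of-band check, the sketch below models a payment request that stays blocked until it is confirmed over a separate, pre-established channel. The contact list, function and figures are hypothetical; the point is simply that the incoming message is never sufficient authorization on its own.

```python
# Minimal sketch of out-of-band confirmation for high-risk requests.
# All names here are illustrative; a real implementation would sit inside
# an organization's payment or ticketing workflow.

PRE_REGISTERED_CHANNELS = {
    # Contact details captured in advance, never taken from the request itself.
    "cfo@example.com": {"callback_number": "+44 20 7946 0000"},
}

def authorise_transfer(requester_email: str, amount: float,
                       confirmed_via_callback: bool) -> bool:
    """Approve a wire transfer only after confirmation on a known channel."""
    contact = PRE_REGISTERED_CHANNELS.get(requester_email)
    if contact is None:
        return False  # unknown requester: escalate rather than pay
    if not confirmed_via_callback:
        print(f"Hold transfer of {amount}: call back on {contact['callback_number']}")
        return False
    return True  # confirmation received on the pre-established channel

# An 'urgent' email alone is not enough to release funds.
print(authorise_transfer("cfo@example.com", 250_000.00, confirmed_via_callback=False))
```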

Kate O'Flaherty is a freelance journalist with well over a decade's experience covering cyber security and privacy for publications including Wired, Forbes, the Guardian, the Observer, Infosecurity Magazine and the Times. Within cyber security and privacy, her specialist areas include critical national infrastructure security, cyber warfare, application security and regulation in the UK and the US amid increasing data collection by big tech firms such as Facebook and Google. You can follow Kate on Twitter.