The FBI says hackers are using AI voice clones to impersonate US government officials
The campaign uses AI voice generation to send messages pretending to be from high-ranking figures


Had a call from a senior US official? It probably wasn't real. The FBI has issued a warning about an ongoing malicious text and voice messaging campaign in which scammers use AI-generated voices to target victims.
As part of the campaign, threat actors claim to be senior US officials in a bid to access victims' personal accounts. The campaign began in April, according to the law enforcement agency, which hasn't said which senior US officials are being impersonated.
AI-generated voice calls have been used in a few high-profile attacks. Last year, an executive at Ferrari stymied a similar attack by asking the caller about a book recommended by the person being impersonated.
Similarly, British engineering company Arup paid out $25 million to scammers who set up a fake video call meeting to trick an employee, while back in 2019 a British energy firm was targeted using AI-generated calls at a cost of more than £200,000.
In its advisory, the FBI said the "smishing" (SMS phishing) and "vishing" (voice phishing) attacks may be using AI tools to generate the voices.
"One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform," the FBI said in a statement.
Once the account of one person is compromised, it can be used in future attacks.
"Access to personal or official accounts operated by US officials could be used to target other government officials, or their associates and contacts, by using trusted contact information they obtain," the FBI added.
"Contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds."
The warning comes as 68% of businesses say they've developed a "deepfake" response plan amid the rise in social engineering attacks, while separate research found that nearly two-thirds of finance professionals have been targeted by deepfake fraud.
Avoiding AI scams
The FBI warning noted that the scammers are using software to generate a phone number that isn't attributed to a specific phone. As such, anyone unsure of a message should verify the identity of the person calling with a bit of research, independently verify their correct number, and check that any information shared is correct.
However, Max Gannon, intelligence manager at Cofense, noted that threat actors can also spoof known phone numbers of trusted individuals or organizations. This, he said, adds another layer of risk for potential victims.
“Phone filtering does not typically detect when the number is being spoofed, giving a false sense of security to users who rely on their phones to tell them when something is a scam call,” he said.
When examining a video or image for signs of AI, the FBI suggested looking for subtle imperfections such as distorted hands or feet, indistinct faces, inaccurate shadows, voices that don't match facial movements, and other unnatural movements.
These practices could be the difference between avoiding a disaster and falling victim, the agency added. However, it warned that AI-generated content has now "advanced to the point that it is often difficult to identify".
As such, the FBI suggested people create a secret word or phrase to prove their identity, as well as the usual security advice of not trusting links or email attachments that haven't been verified. Additionally, individuals and enterprises should never send money, gift cards, or cryptocurrency to someone via the internet or phone.
"Both smishing and vishing techniques rely on social engineering to manipulate recipients, often by instilling a sense of urgency or fear," Gannon added.
"Threat actors are increasingly turning to AI to execute phishing attacks, making these scams more convincing and nearly indistinguishable from legitimate communication. For traditional phishing alone, Cofense has observed a 70% increase in BEC attacks from 2023 to 2024, which can be attributed to the increasing use of AI."
MORE FROM ITPRO
- Ransomware attacks are rising — but quiet payouts could mean there are more than actually reported
- Preventing deepfake attacks: How businesses can stay protected
- FBI issues guidance for enterprises as fake North Korean IT workers wreak havoc
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.