Cyber criminals have used artificial intelligence (AI) and voice technology to impersonate a UK business owner, resulting in the fraudulent transfer of $243,000 (£201,000).
In March this year, an unknown hacker group is believed to have exploited AI-powered software to mimic a prominent business leader's voice and fool his subordinate, the CEO of a UK-based energy subsidiary, according to the Wall Street Journal (WSJ).
The hackers were then able to convince the CEO to carry out transactions under the guise of an urgent request for funds from the firm's German parent company.
It's believed that the fraudsters phoned the UK-based CEO to demand a transfer to a Hungarian supplier. They contacted him again, still impersonating the parent company's chief executive, to reassure him the transfer would be reimbursed.
The CEO was then contacted a third time, before any reimbursement funds had appeared, to request a second urgent transfer. It was at this point the CEO became suspicious and declined to make any further payments.
The funds that were transferred to Hungary, however, were soon moved on to Mexico and various other locations, with law enforcement still looking for suspects.
This social engineering attack could be a sign of things to come, according to ESET cyber security specialist Jake Moore, who expects to see a huge rise in machine-learned cyber crime in the near future.
"We have already seen DeepFakes imitate celebrities and public figures in video format, but these have taken around 17 hours of footage to create convincingly," he said.
"Being able to fake voices takes fewer recordings to produce. As computing power increases, we are starting to see these become even easier to create, which paints a scary picture ahead."
With enterprise security practices becoming more robust over time, criminals may increasingly look to staff as the most easily exploitable gaps in an organisation's defences.
Social engineering has indeed grown far more sophisticated in recent years, with employees facing slicker phishing campaigns and highly targeted attempts at deception.
"To reduce risks it is imperative not only to make people aware that such imitations are possible now, but also to include verification techniques before any money is transferred," Moore added.
Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.