Mandiant says generative AI will empower new breed of information operations, social engineering


Researchers have warned that generative AI will give small-scale hackers the wherewithal to usher in a new wave of personalized social engineering alongside scattershot disinformation. 

Mandiant researchers divided potential attacks into four categories: AI-generated images, video, text, and audio. Each was associated with a distinct emotional response on the part of victims and a different level of technological maturity.

It’s feared that large language models (LLMs) could be used to produce a large volume of finely tailored ‘lure’ content rooted in an analysis of a victim’s social media presence or contacts.

As attackers launch large and automated phishing attacks, enhanced with personalized information, security teams may struggle to keep up with the evolving threat landscape. 

The range of AI formats could also cause fatigue among users and experts as digital content comes under increasing scrutiny. Existing advice on how to identify phishing content may become outdated as attackers produce increasingly specific and sophisticated attacks.

Researchers cited ZeroFox’s Social Network Automated Phishing with Reconnaissance (SNAP_R), a tool demonstrated at Black Hat 2016 that could produce phishing emails based solely on a victim’s tweets.

There is limited evidence that AI has been used for cyber attacks such as breaches, as opposed to information campaigns, but researchers expect such attacks to rise over time.

“While we expect the adversary to make use of generative AI, and there are already adversaries doing so, adoption is still limited and primarily focused on social engineering,” said John Hultquist, chief analyst at Mandiant Intelligence, Google Cloud.

“There’s no doubt that criminals and state actors will find value in this technology, but many estimates of how this tool will be used are speculative and not grounded in observation.”


Audio and video were also highlighted as carrying the potential for future campaigns, with open source software allowing hackers to supplement text with persuasive media such as a voicemail purportedly left by a co-worker or a fake video call with a business partner.

For example, Microsoft’s VALL-E allows users to synthesize believable human speech using text and just a few seconds of sample audio and can replicate specific intonations and emotions.

The use of audio deepfake attacks has been feared for some time, with researchers placing the threat in their future-threat predictions as far back as 2019.

Security experts also recently told ITPro they believe highly convincing AI voice-based phishing will become a serious threat ‘within months’.

However, the voice of a UK business owner was impersonated using AI technology in 2020, in a case that convinced a CEO to transfer $243,000 to an unknown group of hackers.

GAN-generated profile pictures, created for free on the website thispersondoesnotexist. (Image credit: thispersondoesnotexist)

Images created through AI diffusion models such as DALL-E 2 or Stable Diffusion are often visibly artificial, but the technology has improved rapidly, and outputs become harder to recognize as fake with each update.

Some have expressed concern that AI images could be used for disinformation campaigns, or to damage the reputation of key individuals such as company CEOs or politicians.

Public erosion of trust in media could also be a side effect of the widespread dissemination of AI images.

Mandiant has recorded the use of generative adversarial networks (GANs), machine learning (ML) models that pit two neural networks against each other to produce synthetic images based on training data, to create artificial profile pictures for social engineering.


The potential for AI to create malware is currently limited and being actively hampered by AI developers through guardrails and commitments to ethical AI.

In the near future, it’s feared that unsophisticated threat actors could use AI to augment and improve their attacks, with researchers comparing the technology to the exploit framework Cobalt Strike.

Darktrace researchers saw novel social engineering attacks soar 135% in the first two months of 2023, as attackers adopted LLMs to produce believable scam emails.

At AWS Summit London, Darktrace researchers also presented a proof of concept in which they used the AI agent Auto-GPT, which operates using the GPT-4 API, to automate a spear phishing campaign.

Within the parameters given to it by the researchers, the Auto-GPT client searched LinkedIn for a high-value Darktrace target and found one in the form of Marcus Fowler, CEO of Darktrace Federal. It then created a phishing email aimed at another Darktrace executive, informed by content on Fowler’s profile.

At present, businesses have limited access to tools that can reliably detect AI content.

There is demand for tools of this nature across sectors, including academia, which has struggled with AI-driven plagiarism in written work, but the most intense development in this field is focused on countering threat actors trying to disguise AI content as legitimate.

Intel’s FakeCatcher is intended to remedy the problem of real-time deepfakes, which could be used to perform extortion or social engineering via video calls.

It uses deep learning techniques to analyze the color variation in subdermal blood vessels, which current deepfake technology cannot accurately replicate. 

Rory Bathgate
Features and Multimedia Editor
