Google says you shouldn't worry about AI malware – but that won’t last long as hackers refine techniques

While some strains are still in an experimental phase, researchers warn they could be a sign of what's to come

(Image credit: Getty Images)

Google has uncovered a new type of malware that can use AI to adapt to its environment mid-execution – and while some strains are currently ineffective, the company is warning this could change.

The Google Threat Intelligence Group (GTIG) said it has identified several malware families – including Promptflux and Promptsteal – that use large language models (LLMs) during execution, the first time this behavior has been observed.

"These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware," researchers said.

"While still nascent, this represents a significant step toward more autonomous and adaptive malware."

Promptflux, for example, is a dropper written in VBScript that decodes and executes an embedded decoy installer to mask its activity. Its main capability is regeneration, which it achieves by using the Google Gemini API.

It prompts the LLM to rewrite its own source code, saving the new, obfuscated version to the Startup folder to establish persistence. Promptflux also attempts to spread by copying itself to removable drives and mapped network shares.

Promptsteal, meanwhile, is a data miner written in Python and packaged with PyInstaller that has been observed in active operations.

"It contains a compiled script that uses the Hugging Face API to query the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands," said the team.

"Prompts used to generate the commands indicate that it aims to collect system information and documents in specific folders. Promptsteal then executes the commands and sends the collected data to an adversary-controlled server."

AI malware strains will get better

So far, researchers said these new techniques appear to be experimental and don't have the ability to compromise a victim network or device.

However, researchers warned this will change as threat actors hone and refine their techniques to create more potent, effective strains.

"While Promptflux is likely still in research and development phases, this type of obfuscation technique is an early and significant indicator of how malicious operators will likely augment their campaigns with AI moving forward."

Other malware identified by the researchers includes Fruitshell, which has been spotted in the wild. It's a publicly available reverse shell written in PowerShell that establishes a remote connection to a configured command-and-control server and allows a threat actor to execute arbitrary commands on a compromised system.

"Notably, this code family contains hard-coded prompts meant to bypass detection or analysis by LLM-powered security systems," said the team.

Promptlock – also apparently experimental – is cross-platform ransomware written in Go that leverages an LLM to dynamically generate and execute malicious Lua scripts at runtime.

Its capabilities include filesystem reconnaissance, data exfiltration, and file encryption on both Windows and Linux systems, the researchers said.

Security industry sounds the alarm

The findings are sending ripples through the security industry, with Nick Tausek, lead security automation architect at Swimlane, warning that the study is a sign of things to come for security teams.

"Cybersecurity teams have been wondering what the next evolution of threat actors would look like, and it appears it's starting to reveal itself," he said.

"Utilizing malware that can use LLMs to dynamically adapt its behavior to the environment it finds itself in creates massive problems for security teams, as the ability to detect, predict, and respond to threats becomes significantly harder."

Tausek added that because there have previously been no recorded instances of malware using AI to adapt during attacks, there's "nothing for security teams to look to as guidance".

This is a particular cause for concern, he noted, as teams will essentially be flying blind for some time before the industry can react.

Kevin Kirkwood, CISO at Exabeam, described the discovery as "a science fiction film that has come to life".

"Since the malware is adapting and changing dynamically, the only way to detect and manage the activity is to ensure that anomaly detection is working in high gear, as a signature-based approach to detection may struggle to follow the changes and protect the organization," he said.

"More ‘fun’ tactics are going to start coming to the fore sooner than folks thought. Defenders who aren’t keeping up with the new tactics and vectors are going to be in the headlines soon as the latest victims of these new attacks."
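Kirkwood's point about signature-based detection is straightforward to illustrate: a signature keyed to a file's hash stops matching the moment the code is rewritten, even trivially. The sketch below is a minimal, benign analogy – the two "variants" are hypothetical stand-ins for code an LLM has re-obfuscated, not anything from the actual malware:

```python
import hashlib

# Two functionally identical scripts that differ only in variable names –
# a crude stand-in for LLM-driven rewriting between infections.
variant_a = b"x = 1\nprint(x + 1)\n"
variant_b = b"counter = 1\nprint(counter + 1)\n"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

# A signature built from variant_a's hash no longer matches variant_b,
# even though both scripts behave identically when run.
print(sig_a == sig_b)  # False
```

This is why defenders like Kirkwood point to behavioral anomaly detection: the *behavior* of the two variants is identical even when every byte-level signature has changed.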


Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.