Google says AI is now being used to build zero-days – and we just narrowly avoided a 'mass exploitation event'

Google cyber researchers think they’ve found the first AI-generated zero-day exploit

Zero-day exploit concept image showing red-colored binary code on a computer screen, blurred code and user alerts.
(Image credit: Getty Images)

Cyber criminals have been observed using AI to build a working zero-day exploit in a case Google researchers say is the first of its kind.

According to new research from Google Threat Intelligence Group (GTIG), a threat actor planned to deploy the zero-day in a “mass exploitation event” but was thwarted.

The company said its “proactive counter discovery” efforts may have prevented the exploit from being used.

The zero-day in question bears all the hallmarks of an AI-generated exploit, researchers noted, largely because the script contained an “abundance of educational docstrings” as well as a hallucinated CVSS score.

Other tell-tale signs, such as a “textbook” Python format which is characteristic of LLM training data, also gave the game away for the threat actor.

Google noted that it does not believe its Gemini model was used to develop the zero-day exploit, but said it has “high confidence” another AI model was used.

John Hultquist, chief analyst at GTIG, said the discovery marks a significant moment in the use of AI for nefarious purposes.

“There’s a misconception that the AI vulnerability race is imminent,” he said. “The reality is that it’s already begun.”

Under the hood of the zero-day

According to GTIG, the vulnerability is classified as a two-factor authentication (2FA) bypass. However, researchers noted it requires valid user credentials.

What made the flaw stand out is that it stems from a “high-level semantic logic flaw where the developer hardcoded a trust assumption”.

Simply put, this is the type of flaw that large language models can easily identify, and researchers deduced that a model’s reasoning capabilities allowed it to infer the developer’s intent from the code.

“While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies,” researchers noted.

“Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer's intent to correlate the 2FA enforcement logic with the contradictions of its hardcoded exceptions.”
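To make the idea concrete, here is a minimal, hypothetical sketch (not the actual vulnerability GTIG found; the function and client names are invented for illustration) of what a “hardcoded trust assumption” in 2FA enforcement logic can look like. A fuzzer would see no crash here, but an LLM reading the code’s intent could spot the contradiction between the 2FA policy and its hardcoded exception:

```python
# Hypothetical example of a high-level semantic logic flaw: a 2FA check
# with a hardcoded trust assumption. Names are illustrative only.

TRUSTED_INTERNAL_CLIENTS = {"legacy-sync-agent"}  # hardcoded exception

def requires_second_factor(user: str, client_id: str, password_ok: bool) -> bool:
    """Decide whether to enforce the second factor after a valid password."""
    if not password_ok:
        raise PermissionError("invalid credentials")
    # Flaw: the developer assumed only internal services send this client_id,
    # but it is attacker-controllable in the request. Anyone with valid stolen
    # credentials can skip 2FA simply by claiming to be the trusted client.
    if client_id in TRUSTED_INTERNAL_CLIENTS:
        return False  # 2FA silently bypassed
    return True

# An attacker with stolen credentials sets the magic client_id:
print(requires_second_factor("alice", "legacy-sync-agent", password_ok=True))  # False
```

Note that, consistent with GTIG’s description, this bypass still requires valid user credentials; the flaw lies in the contradiction between the enforcement rule and its hardcoded exception rather than in any memory-safety bug.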

Nation-state activity accelerates

The discovery by GTIG highlights the growing appeal of AI-based tools for cyber criminal groups and nation-state-backed threat actors.

“Threat actors are leveraging AI to augment various phases of the attack lifecycle,” researchers said.

“This includes supporting the development of vulnerability exploits and malware, facilitating autonomous execution of commands, enabling more targeted and well-researched reconnaissance, and improving the efficacy of social engineering and information operations.”

Indeed, the Google threat intelligence wing said groups in China, North Korea, and Russia in particular are flocking to AI for vulnerability research and exploit development.

“These actors have leveraged sophisticated approaches toward AI-augmented vulnerability discovery and exploitation, beginning with persona-driven jailbreaking attempts and the integration of specialized, high-fidelity security datasets to augment their vulnerability discovery and exploitation workflows,” researchers said.

One threat group, tracked as UNC2814, was observed using expert persona prompting in Gemini to research remote code execution flaws in TP-Link router firmware and Odette File Transfer Protocol (OFTP) implementations.

Another group, APT45, was observed sending thousands of repetitive prompts to analyze different CVEs and validate PoC exploits, according to researchers.

This, GTIG added, is helping them build a “more robust arsenal of exploit capabilities” that will be difficult for defenders to counter.


Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.