Google says AI is now being used to build zero-days – and we just narrowly avoided a 'mass exploitation event'
Google cyber researchers think they’ve found the first AI-generated zero-day exploit
Cyber criminals have been observed using AI to build a working zero-day exploit in a case Google researchers say is the first of its kind.
According to new research from Google Threat Intelligence Group (GTIG), a threat actor planned to deploy the zero-day in a “mass exploitation event” but was thwarted.
The company said that its “proactive counter discovery” may have prevented its use.
The zero-day in question bears all the hallmarks of an AI-generated exploit, researchers noted, largely because the script contained an “abundance of educational docstrings” as well as a hallucinated CVSS score.
Other tell-tale signs, such as a “textbook” Python format characteristic of LLM training data, also gave the game away.
Google noted that it does not believe its Gemini model was used to develop the zero-day exploit, but said it has “high confidence” another AI model was used.
John Hultquist, chief analyst at GTIG, said the discovery marks a significant moment in the use of AI for nefarious purposes.
“There’s a misconception that the AI vulnerability race is imminent,” he said. “The reality is that it’s already begun.”
Under the hood of the zero-day
According to GTIG, the vulnerability is classified as a two-factor authentication (2FA) bypass. However, researchers noted it requires valid user credentials.
What made the flaw stand out is that it stems from a “high-level semantic logic flaw where the developer hardcoded a trust assumption”.
Simply put, this particular flaw is the type that could be easily identified by large language models, and researchers deduced that the model’s reasoning capabilities allowed it to read the developer’s intent during development.
“While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies,” researchers noted.
“Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer's intent to correlate the 2FA enforcement logic with the contradictions of its hardcoded exceptions.”
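The researchers did not publish the vulnerable code, but the pattern they describe can be sketched in broad strokes. The snippet below is a hypothetical illustration of a “hardcoded trust assumption” in 2FA enforcement logic: every name in it is invented, and the real flaw is not known to look like this. It shows why the class of bug is legible to an LLM but invisible to a fuzzer, since nothing ever crashes and the contradiction only exists at the level of intent.

```python
# Hypothetical sketch of a hardcoded trust assumption that bypasses 2FA.
# All identifiers here are invented for illustration; the actual
# vulnerable code described by GTIG has not been published.

def requires_2fa(user: dict, client_id: str) -> bool:
    """Decide whether a login attempt must complete a 2FA challenge."""
    # Intended policy: every 2FA-enrolled user must pass the challenge.
    if not user.get("2fa_enrolled"):
        return False  # user has not set up 2FA yet

    # The flaw: a hardcoded exception that contradicts the policy above.
    # A "trusted" legacy integration is silently exempted, so anyone
    # presenting valid credentials plus this client_id skips the
    # second factor entirely. The code never crashes, so fuzzers and
    # sink-oriented static analysis have nothing to flag.
    TRUSTED_LEGACY_CLIENTS = {"legacy-sync-agent"}
    if client_id in TRUSTED_LEGACY_CLIENTS:
        return False

    return True


def login(user: dict, password_ok: bool, client_id: str) -> str:
    """Simplified login flow: credentials first, then (maybe) 2FA."""
    if not password_ok:
        return "denied"
    if requires_2fa(user, client_id):
        return "2fa_challenge"
    return "authenticated"  # reached without 2FA for exempt clients
```

With valid stolen credentials and the exempt `client_id`, an enrolled user lands directly in the `"authenticated"` state, which matches the article's point that the bypass still requires valid credentials. Spotting it means correlating the enforcement policy with the exception that contradicts it, which is contextual reasoning rather than crash detection.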
Nation-state activity accelerates
The discovery by GTIG highlights the growing appeal of AI-based tools for cyber criminal groups and nation-state-backed threat actors.
“Threat actors are leveraging AI to augment various phases of the attack lifecycle,” researchers said.
“This includes supporting the development of vulnerability exploits and malware, facilitating autonomous execution of commands, enabling more targeted and well-researched reconnaissance, and improving the efficacy of social engineering and information operations.”
Indeed, the Google threat intelligence wing said groups in China, North Korea, and Russia in particular are flocking to AI for vulnerability research and exploit development.
“These actors have leveraged sophisticated approaches toward AI-augmented vulnerability discovery and exploitation, beginning with persona-driven jailbreaking attempts and the integration of specialized, high-fidelity security datasets to augment their vulnerability discovery and exploitation workflows,” researchers said.
One threat group, tracked as UNC2814, was observed using expert persona prompting in Gemini to research remote code execution flaws in TP-Link router firmware and Odette File Transfer Protocol (OFTP) implementations.
Another group, APT45, was observed sending thousands of repetitive prompts to analyze different CVEs and validate PoC exploits, according to researchers.
This, GTIG added, is helping them to create a “more robust arsenal of exploit capabilities” that would be difficult to tackle for defenders.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.