Google says hacker groups are using Gemini to augment attacks – and companies are even ‘stealing’ its models
Google Threat Intelligence Group has shut down repeated attempts to misuse the Gemini model family
State-backed threat actors from CRINK nations (China, Russia, Iran, and North Korea) have come to rely on large language models (LLMs) as “essential tools” for researching and targeting victims, according to a new report.
The latest AI Threat Tracker report from Google Threat Intelligence Group (GTIG), produced in collaboration with Google DeepMind, details the numerous ways threat groups are already using AI to plan and carry out attacks.
Advanced persistent threat (APT) groups were tracked using Google’s own Gemini family of models to conduct targeted research on potential victims, probe vulnerabilities, and create tailored code and scripts.
For example, the China-based APT Temp.HEX was found using Gemini to gather information on individual targets in Pakistan.
The as-yet-unattributed APT UNC6148 also used Gemini to seek out sensitive information tied to victims, such as email addresses and account details, as the first step in a targeted phishing campaign against Ukraine and the wider defense sector.
In response, Google disabled the assets associated with both groups. Other incidents saw attackers use public AI models to more directly fuel attack campaigns.
Iranian-backed groups such as APT42 were observed using Gemini and other AI models to research potential victims, then craft convincing phishing emails based on target biographies.
The same group was also observed using Gemini to translate local languages and to interpret regional references and phrases.
North Korea-backed groups made headlines throughout 2024 and 2025 as operatives used fake identities and addresses to infiltrate the IT departments of major organizations, including KnowBe4.
In the report, the North Korea-backed group UNC2970 was found using Gemini to plan attacks on cybersecurity and defense companies and to research job specifications.
AI-enhanced malware is gathering steam
The report also noted the growing risk posed by malware that uses AI to achieve novel capabilities, such as evading network detection.
HONESTCUE malware, for example, has been found to make API calls to Gemini to generate ‘stage two’ code. This code downloads and executes additional malware directly in the memory of target systems using CSharpCodeProvider, a legitimate .NET class that compiles C# code at runtime.
Because the Gemini-produced code executes the secondary malware directly in memory, HONESTCUE infects target systems without leaving telltale artifacts on the victim’s disk.
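To illustrate the general pattern, rather than HONESTCUE’s actual .NET implementation, the sketch below shows in Python how source code received as a string can be compiled and executed entirely in memory; the `generated_source` string and `stage_two` function are harmless stand-ins for a real LLM-generated payload.

```python
# Illustrative sketch only: the generic "compile and run code from a string
# in memory" pattern the report attributes to HONESTCUE via CSharpCodeProvider.
# Python is used purely for illustration; the payload is a harmless stand-in.

# Stand-in for code that, in the real attack chain, would be returned by an
# API call to an LLM.
generated_source = """
def stage_two():
    return "payload executed in memory"
"""

# compile() turns the source string into a code object without touching disk...
code_object = compile(generated_source, "<in-memory>", "exec")

# ...and exec() runs it inside an in-memory namespace, leaving no file artifact.
namespace = {}
exec(code_object, namespace)
print(namespace["stage_two"]())  # -> "payload executed in memory"
```

Because nothing is ever written to the filesystem, traditional disk-based scanning has no artifact to inspect, which is what makes the fileless approach attractive to attackers.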
Though the malware hasn’t been linked to specific attack campaigns to date, GTIG researchers said they believe its developer is a single threat actor or small group testing the waters for future attacks. This is backed up by evidence HONESTCUE has been tested on Discord.
Another example is COINBAIT, a phishing kit attributed to the APT UNC5356 that shows signs of having been built using the vibe coding platform Lovable.
GTIG has previously warned that while AI malware is still nascent, it’s developing quickly. In the latest report, authors noted that while no “paradigm shift” has yet been unlocked by APTs, their exploration of malicious AI is ongoing and the technology will play a growing role in every stage of the attack lifecycle.
Elsewhere, researchers discovered that threat actors are passing off jailbroken public AI models as custom-built offensive tools.
For example, ‘Xantharox’, a dark web toolkit advertised as a tailor-made offensive AI toolset, is actually powered by open source AI tools such as Crush and Hexstrike AI via the Model Context Protocol (MCP), as well as public AI models like Gemini.
Threat actors are stealing API keys to enable this hidden activity, and GTIG warned that organizations with cloud and AI resources are at risk. Users of platforms such as One API and New API, often in countries with regional AI censorship, are also targeted for API key harvesting.
Model extraction puts AI developers at risk
Researchers also observed instances of APTs performing ‘model extraction’, in which attackers use legitimate access to frontier models such as Gemini to help train new AI and machine learning (ML) models.
Generally, attackers use an approach known as knowledge distillation (KD), in which a ‘student’ AI model is trained to reproduce the exemplar answers a pre-existing ‘teacher’ model gives to specific questions.
This can result in models with advanced capabilities, such as frontier-level reasoning, but none of the guardrails present in public AI models like Gemini. Threat actors could then use these unrestricted derivatives to power future attack campaigns.
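As a rough illustration of the distillation idea, the sketch below shows the classic logit-based KD loss, assuming a PyTorch setup; all names here are illustrative placeholders, not anything from the GTIG report. In the API-based extraction the report describes, attackers see only the teacher’s text outputs rather than its logits, so in practice the student is fine-tuned directly on prompt/response pairs sampled from the teacher.

```python
# A minimal sketch of classic knowledge distillation, assuming a PyTorch
# setup. Names (`student`, `teacher`) are hypothetical placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with a temperature, then push the
    # student toward the teacher by minimizing their KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable
    # across temperature settings.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2

# Toy usage: random logits over a 10-token vocabulary for a batch of 4.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
distillation_loss(student, teacher).backward()
```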
GTIG tracked over 100,000 prompts intended to expose and replicate Gemini’s reasoning capabilities in non-English languages, which were automatically counteracted by Google’s systems.
“Google’s latest AI Threat Tracker marks a specific turning point: we are no longer just worried about bad prompts, but the industrial-scale extraction of the models themselves,” wrote Jamie Collier, lead advisor in Europe at Google Threat Intelligence Group, in a LinkedIn post marking the launch of the report.
Google DeepMind and GTIG blocked attempts at model extraction throughout 2025, noting that the attacks were launched by private companies and researchers around the world rather than APTs.
Distilling secondary models from Gemini is a violation of Google’s terms of service and is considered theft of intellectual property (IP). The hyperscaler recommended that organizations providing AI models as a service closely monitor API access for signs of model extraction.
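A hypothetical first pass at that kind of monitoring might simply flag API keys issuing very high volumes of near-identical prompts, since extraction campaigns, like the 100,000-prompt effort described above, tend to be both voluminous and highly templated. Every name and threshold in this Python sketch is an illustrative assumption, not Google tooling.

```python
# Hypothetical first-pass extraction monitor: flag API keys that combine
# high query volume with one dominant, templated prompt pattern.
from collections import Counter, defaultdict

def flag_suspicious_keys(request_log, min_volume=5_000, repeat_ratio=0.5):
    """request_log: iterable of (api_key, prompt) pairs from access logs."""
    prompts_by_key = defaultdict(Counter)
    for api_key, prompt in request_log:
        # Bucket prompts by a crude prefix to catch templated variations.
        prompts_by_key[api_key][prompt[:64]] += 1

    flagged = []
    for key, buckets in prompts_by_key.items():
        total = sum(buckets.values())
        top_share = buckets.most_common(1)[0][1] / total
        # High volume combined with a dominant prompt template is a crude
        # but cheap signal of systematic model extraction.
        if total >= min_volume and top_share >= repeat_ratio:
            flagged.append(key)
    return flagged
```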