Microsoft files suit against threat actors abusing AI services
Cyber criminals are accused of using stolen credentials to run an illegal hacking-as-a-service operation


Microsoft has filed a lawsuit against 10 foreign threat actors, accusing the group of stealing API keys for its Azure OpenAI service and using them to run a hacking-as-a-service operation.
According to the complaint, filed in December 2024, Microsoft discovered in late July of that year that customer API keys were being used to generate illicit content.
After investigating the incident, it found the credentials had been stolen after being scraped from public websites.
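The scraping detail underlines how often keys leak by being committed to public code or pasted into public pages. As a purely illustrative, defensive sketch (the 32-character hex key pattern, the scan_for_keys helper, and the directory layout are assumptions for the example, not details from the filing), a short script like the following could flag key-shaped strings before they ever reach a public repository:

```python
# Illustrative sketch: scan a directory for strings shaped like Azure
# resource keys before the code is pushed anywhere public. The 32-character
# hexadecimal pattern below is an assumption about key format, not an
# official specification; tighten it for the credential types you use.
import re
import sys
from pathlib import Path

KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b", re.IGNORECASE)

def scan_for_keys(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matched string) for every suspect hit."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for match in KEY_PATTERN.finditer(line):
                hits.append((str(path), lineno, match.group()))
    return hits

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for file, lineno, value in scan_for_keys(target):
        # Print only a truncated prefix so the scan itself doesn't leak keys.
        print(f"{file}:{lineno}: possible exposed key {value[:6]}...")
```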
In a blog post publicly announcing the details of the legal action, Steven Masada, assistant general counsel at Microsoft’s Digital Crimes Unit (DCU), said the group identified and unlawfully accessed accounts with ‘certain AI services’ and intentionally reconfigured these capabilities for malicious purposes.
“Cyber criminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content,” he reported.
In particular, it appears the group had bypassed internal guardrails to use the DALL-E AI image generation system to create thousands of harmful images.
“Upon discovery, Microsoft revoked cyber criminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future,” Masada noted.
He added that Microsoft was able to seize a website that was instrumental to the group’s operation, a move that will allow the DCU to gather further evidence on those responsible and on how these services were monetized.
Threat actors consistently look to jailbreak legitimate generative AI services
Masada warned that generative AI systems are continually being probed by cyber criminals looking for ways to corrupt the tools for use in threat campaigns.
“Every day, individuals leverage generative AI tools to enhance their creative expression and productivity. Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes,” he explained.
“Microsoft recognizes the role we play in protecting against the abuse and misuse of our tools as we and others across the sector introduce new capabilities. Last year, we committed to continuing to innovate on new ways to keep users safe and outlined a comprehensive approach to combat abusive AI-generated content and protect people and communities. This most recent legal action builds on that promise.”
OpenAI, a key partner of Microsoft for the frontier models driving its AI services, reported in October 2024 that it had disrupted more than 20 attempts to use its models for malicious purposes since the start of the year.
The company said it observed threat actors trying to use its flagship ChatGPT service to debug malware, generate fake social media accounts, and produce general disinformation.
In July 2024, Microsoft also warned users about a new prompt engineering method that could manipulate AI models into disclosing harmful information.
The jailbreaking technique, labelled Skeleton Key, involves asking the model to augment its behavior guidelines so that, when given a request for illicit content, it complies rather than refusing, simply prefixing the response with a warning.
Earlier that year, researchers published a paper on a weakness in OpenAI’s GPT-4 in which attackers could jailbreak the model by translating their prompts into rarer or ‘low-resource’ languages.
By translating their prompts into languages such as Scots Gaelic, Hmong, or Guarani, on which the model has had less rigorous training, the researchers were far more likely to get the system to generate harmful outputs.

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.