OpenAI is clamping down on ChatGPT accounts used to spread malware
Tools like ChatGPT are being used by threat actors to automate and amplify campaigns
OpenAI has taken down a host of ChatGPT accounts linked to state-sponsored threat actors as it continues to tackle malicious use of its AI tools.
The banned accounts, which have links to groups in China, Russia, and Iran, were used to support ten separate cyber crime and influence campaigns, the company revealed late last week.
"By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt, and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams," OpenAI said in a blog post detailing the takedown.
Four of the campaigns appear to have originated in China, generating posts in English, Chinese, and Urdu that were then posted on social media sites including TikTok, X, Reddit, and Facebook.
Topics included Taiwan (specifically Reversed Front, a video and board game that depicts resistance against the Chinese Communist Party), Pakistani activist Mahrang Baloch, who has publicly criticized China’s investments in Balochistan, and the closure of the US Agency for International Development (USAID).
Meanwhile, OpenAI banned a group of ChatGPT accounts apparently operated by a Russian-speaking threat actor. The company said these were being used to develop and refine malware strains aimed at Windows devices.
Threat actors also used the chatbot to debug code in multiple languages and to set up their command-and-control infrastructure.
Other China-linked accounts, part of an operation dubbed Uncle Spam, were used to create social media posts on US politics, particularly tariffs.
"We banned ChatGPT accounts that were generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole," the company said. "These messages offered recipients high salaries for trivial tasks — such as liking social media posts — and encouraged them to recruit others."
Sam Rubin, SVP of Unit 42 at Palo Alto Networks, said the report aligned with what the firm's own cybersecurity specialists have been seeing in recent months.
Threat actors are increasingly turning to AI tools to support and scale their operations, he noted.
"Attacker use of LLMs is accelerating, and as these models become more advanced, we can expect attacks to increase in speed, scale, and sophistication. It’s no surprise that threat actors — from profit-driven cybercriminals to state-sponsored groups like those aligned with China — are embracing LLMs,” Rubin commented.
“They lower the barrier to entry and dramatically improve the believability of malicious content. In one model we tested, 51 out of 123 malicious prompts slipped past safety filters — a 41% failure rate that makes it clear today’s guardrails aren’t holding the line."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
