OpenAI is clamping down on ChatGPT accounts used to spread malware
Tools like ChatGPT are being used by threat actors to automate and amplify campaigns


OpenAI has taken down a host of ChatGPT accounts linked to state-sponsored threat actors as it continues to tackle malicious use of its AI tools.
The ten banned accounts, which have links to groups in China, Russia, and Iran, were used to support cyber crime campaigns, the company revealed late last week.
"By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt, and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams," OpenAI said in a blog post detailing the takedown.
Four of the campaigns appear to have originated in China, generating posts in English, Chinese, and Urdu that were then posted on social media sites including TikTok, X, Reddit, and Facebook.
Topics included Taiwan, with posts targeting Reversed Front, a video and board game that depicts resistance against the Chinese Communist Party. Other posts focused on Pakistani activist Mahrang Baloch, who has publicly criticized China’s investments in Balochistan, and on the closure of the US Agency for International Development (USAID).
Meanwhile, OpenAI banned a group of ChatGPT accounts apparently operated by a Russian-speaking threat actor, which the company said were being used to develop and refine malware strains targeting Windows devices.
The threat actor also used the chatbot to debug code in multiple languages and to help set up command-and-control infrastructure.
Other China-linked accounts, part of an operation dubbed Uncle Spam, were used to create social media posts on US politics, particularly tariffs.
"We banned ChatGPT accounts that were generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole.” the company said. “These messages offered recipients high salaries for trivial tasks — such as liking social media posts —and encouraged them to recruit others."
Sam Rubin, SVP of Unit 42 at Palo Alto Networks, said the report aligned with what the company’s own cybersecurity specialists have been seeing in recent months.
Threat actors are increasingly turning to AI tools to support and scale their operations, he noted.
"Attacker use of LLMs is accelerating, and as these models become more advanced, we can expect attacks to increase in speed, scale, and sophistication. It’s no surprise that threat actors — from profit-driven cybercriminals to state-sponsored groups like those aligned with China — are embracing LLMs,” Rubin commented.
“They lower the barrier to entry and dramatically improve the believability of malicious content. In one model we tested, 51 out of 123 malicious prompts slipped past safety filters — a 41% failure rate that makes it clear today’s guardrails aren’t holding the line."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.