OpenAI is clamping down on ChatGPT accounts used to spread malware

Tools like ChatGPT are being used by threat actors to automate and amplify campaigns


OpenAI has taken down a host of ChatGPT accounts linked to state-sponsored threat actors as it continues to tackle malicious use of its AI tools.

The ten banned accounts, which have links to groups in China, Russia, and Iran, were used to support cyber crime campaigns, the company revealed late last week.

"By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt, and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams," OpenAI said in a blog post detailing the takedown.

Four of the campaigns appear to have originated in China, generating posts in English, Chinese, and Urdu that were then posted on social media sites including TikTok, X, Reddit, and Facebook.

Topics included Taiwan, specifically targeting Reversed Front, a video and board game that depicts resistance against the Chinese Communist Party; Pakistani activist Mahrang Baloch, who has publicly criticized China's investments in Balochistan; and the closure of the US Agency for International Development (USAID).

Meanwhile, a group of ChatGPT accounts apparently operated by a Russian-speaking threat actor was also banned. OpenAI said these were being used to develop and refine malware strains targeting Windows devices.

Threat actors also used the chatbot to debug code in multiple languages and to set up their command-and-control infrastructure.

Other China-linked accounts, dubbed Uncle Spam, were used to create social media posts on US politics, particularly tariffs.

"We banned ChatGPT accounts that were generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole.” the company said. “These messages offered recipients high salaries for trivial tasks — such as liking social media posts —and encouraged them to recruit others."

Sam Rubin, SVP of Unit 42 at Palo Alto Networks, said the report aligned with what the firm's own cybersecurity specialists have been seeing in recent months.

Threat actors are increasingly flocking to AI tools to support and ramp up their operations, he noted.

"Attacker use of LLMs is accelerating, and as these models become more advanced, we can expect attacks to increase in speed, scale, and sophistication. It’s no surprise that threat actors — from profit-driven cybercriminals to state-sponsored groups like those aligned with China — are embracing LLMs,” Rubin commented.

“They lower the barrier to entry and dramatically improve the believability of malicious content. In one model we tested, 51 out of 123 malicious prompts slipped past safety filters — a 41% failure rate that makes it clear today’s guardrails aren’t holding the line."

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.