OpenAI is clamping down on ChatGPT accounts used to spread malware
Tools like ChatGPT are being used by threat actors to automate and amplify campaigns
OpenAI has taken down a host of ChatGPT accounts linked to state-sponsored threat actors as it continues to tackle malicious use of its AI tools.
The ten banned accounts, which have links to groups in China, Russia, and Iran, were used to support cyber crime campaigns, the company revealed late last week.
"By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt, and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams," OpenAI said in a blog post detailing the takedown.
Four of the campaigns appear to have originated in China, generating posts in English, Chinese, and Urdu that were then posted on social media sites including TikTok, X, Reddit, and Facebook.
Topics included Taiwan, with posts targeting Reversed Front, a video and board game that depicts resistance against the Chinese Communist Party, as well as Pakistani activist Mahrang Baloch, who has publicly criticized China's investments in Balochistan, and the closure of the US Agency for International Development (USAID).
Meanwhile, OpenAI banned a group of ChatGPT accounts apparently operated by a Russian-speaking threat actor. The company said these were being used to develop and refine malware strains aimed at Windows devices.
Threat actors also used the chatbot to debug code in multiple languages and to set up their command-and-control infrastructure.
Other China-linked accounts, part of an operation dubbed 'Uncle Spam', were used to create social media posts on US politics, particularly tariffs.
"We banned ChatGPT accounts that were generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole," the company said. "These messages offered recipients high salaries for trivial tasks — such as liking social media posts — and encouraged them to recruit others."
Sam Rubin, SVP of Unit 42 at Palo Alto Networks, said the report aligned with what its own cybersecurity specialists have been seeing in recent months.
Threat actors are increasingly flocking to AI tools to support and ramp up operations and activities, he noted.
"Attacker use of LLMs is accelerating, and as these models become more advanced, we can expect attacks to increase in speed, scale, and sophistication. It’s no surprise that threat actors — from profit-driven cybercriminals to state-sponsored groups like those aligned with China — are embracing LLMs,” Rubin commented.
“They lower the barrier to entry and dramatically improve the believability of malicious content. In one model we tested, 51 out of 123 malicious prompts slipped past safety filters — a 41% failure rate that makes it clear today’s guardrails aren’t holding the line."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
