Anthropic admits hackers have 'weaponized' its tools – and cyber experts warn it's a terrifying glimpse into 'how quickly AI is changing the threat landscape'
Anthropic researchers discovered ransomware made via vibe coding and North Korean operatives using chatbots to land jobs in the US


Anthropic admits its AI tools have been "weaponized" by hackers to conduct serious attacks against organizations – and security experts warn it's a sign of things to come as cyber criminal groups flock to the technology.
The AI developer revealed the details as part of a trio of case studies in its Threat Intelligence report, highlighting an employment scam by fake North Korean IT workers, as well as "large-scale extortion" using Claude Code and vibe-coded ransomware for sale on the dark web.
"Agentic AI has been weaponized," the company said in a blog post. "AI models are now being used to perform sophisticated cyber attacks, not just advise on how to carry them out."
Anthropic said AI is being used throughout hacking operations, from finding victims and analyzing stolen data to creating personas to hide behind, as well as building ransomware.
Crucially, the post warned that the technology is lowering the barrier to entry for up-and-coming hackers, enabling criminals with few or even no technical skills to conduct major operations, create dangerous ransomware strains, or simply get a job at an American company.
Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, said the admission from Anthropic shows “just how quickly AI is changing the threat landscape”.
"It is already speeding up the process of turning proof-of-concepts – often shared for research or testing – into weaponized tools, shrinking the gap between disclosure and attack,” he said.
"The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks," Curran added.
"At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats."
AI has sparked a cyber crime renaissance
Anthropic laid out details of three different instances of its systems being used in cyber criminal activities.
The first saw 17 organizations targeted across healthcare, emergency services, and government, with criminals threatening to leak stolen data if a ransom wasn't paid.
This particular hacker used Claude Code to "an unprecedented degree", allowing them to automate reconnaissance practices, harvest victims' credentials, and penetrate networks.
"Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands," the blog post noted.
“Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines."
While the Anthropic post sounds almost impressed with the efforts, the company was quick not only to ban the accounts used in the attacks, but also to develop ways to prevent similar use in the future, rolling out screening tools and a new detection method.
It's no surprise that attackers are turning to AI and automation to improve the success of their criminal endeavors, noted Nivedita Murthy, senior security consultant at Black Duck.
"In this case, it is interesting to note that Claude Code had a wealth of information on which organizations were vulnerable and where," Murthy said. "It also freely gave away this information in the form of an attack vector."
That suggests companies may be feeding too much internal data into the AI tools they're currently using.
"What organizations need to really look into is how much the AI tools they use know about their company and where that information goes," Murthy said.
"While AI usage has been highly beneficial to all, organisations need to understand that AI is a repository of confidential information that requires protection, just like any other form of storage system.
"Accountability and compliance are core requirements of doing business. While embracing AI at scale, these two factors need to be kept in mind."
AI is hard at work...for North Korea
Alongside large-scale automated attacks, Anthropic also detailed an operation run by North Korean hackers, who used Claude to secure roles at Fortune 500 companies in the US, working as front-end developers and in programming more widely.
"This involved using our models to create elaborate false identities with convincing professional backgrounds, complete technical and coding assessments during the application process, and deliver actual technical work once hired," the post said.
The chatbot was used to conduct mock interviews and answer questions in real interviews, as well as to create personas and complete assignments once hired.
This particular scam wasn't designed to compromise those companies, however. Instead, it was designed to earn money by doing the work.
"These employment schemes were designed to generate profit for the North Korean regime, in defiance of international sanctions," the post said. "This is a long-running operation that began before the adoption of LLMs, and has been reported by the FBI."
The report noted that such work is worth hundreds of millions of dollars for North Korea annually. Previously, North Koreans hoping to get jobs overseas needed to actually train to do the technical work, a bottleneck that limited the scale of the sanctions-dodging scheme.
However, AI has "eliminated this constraint", the blog post added.
"Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions. This represents a fundamentally new phase for these employment scams."
Anthropic has since banned the accounts and improved how it spots such scams.
Vibe-coded ransomware
In another incident, a criminal turned to Claude to create Ransomware as a Service (RaaS) variants complete with evasion capabilities, encryption, and anti-recovery tools, selling them on the dark web for $100 to $1,200 each.
"This actor appears to have been dependent on AI to develop functional malware," Anthropic said in its blog post. "Without Claude’s assistance, they could not implement or troubleshoot core malware components, like encryption algorithms, anti-analysis techniques, or Windows internals manipulation."
Anthropic has since banned the account and implemented new ways to detect malware generation.
"While specific to Claude, the case studies presented below likely reflect consistent patterns of behaviour across all frontier AI models," the report noted, adding further reports on the topic would be forthcoming.
Indeed, OpenAI has released a similar report, laying out how cyber criminals are making use of its AI — and how the company is proactively stopping such attacks.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.