‘Slopsquatting’ is a new risk for vibe coding developers – but it can be solved by focusing on the fundamentals
Malicious packages in public code repositories can be given a sheen of authenticity via AI tools
AI tools offer great potential to free up time for developers and software engineers – but they can also be manipulated to introduce malware and other threats through a new attack vector known as ‘slopsquatting’.
Slopsquatting is an attack method in which hackers exploit common AI hallucinations to trick engineers into mistakenly installing malicious packages.
In short, hackers track non-existent packages hallucinated by AI coding tools and then publish malicious packages under these names on public repositories such as PyPI. The seemingly legitimate packages are then installed by victims who trust their AI code suggestions.
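To make the mechanics concrete, here is a minimal sketch – not any vendor’s actual tooling – of the gap the attack exploits. It asks PyPI’s public JSON API whether a suggested package name exists at all: a name that returns a 404 today is exactly the kind of hallucinated suggestion an attacker can register tomorrow. The package name in the example is hypothetical.

```python
# Minimal sketch: check whether an AI-suggested package name actually
# exists on PyPI before anyone runs "pip install" on it.
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows this package name, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # confirm the response is well-formed metadata
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # unclaimed name: free for an attacker to register
        raise

# "requests-auth-toolkit" is a hypothetical hallucinated suggestion.
print(package_exists_on_pypi("requests-auth-toolkit"))
```

An existence check alone is not a defence – once the attacker has published under the hallucinated name, the lookup succeeds – but it shows why hallucinated names are such cheap real estate for attackers.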
ITPro spoke to Dustin Kirkland, SVP of Engineering at Chainguard, to learn more about slopsquatting and where it sits in the wider issue of risky AI code.
The term ‘slopsquatting’ combines ‘typosquatting’ and ‘AI slop’, a pejorative term for low-effort content generated with AI, Kirkland said.
“It’s kind of a modern twist on typosquatting – we’ve seen that for many years, all the way down to simple URLs. You accidentally leave one letter out of ‘Google’ and end up on a malicious site, or something like that,” Kirkland added.
“For at least ten years, typosquatting has been a problem in the Python and Java universes, where it’s quite easy for almost anyone to register a Python package on PyPI and install it with pip.”
Kirkland told ITPro that porting this attack methodology into AI comes with risky implications, as developers are increasingly engaged in vibe coding to produce business-critical code at scale.
“I would say an old school human coder, especially in the open source world where I've spent my entire career, every line of code was typically reviewed and approved by a human maintainer.
“And now the ability of an AI to generate literally gigabytes of code, millions, if not billions of lines of code, starts to put this out of the reach of even some of the most prolific maintainers.
“And so the scary risk is that these sorts of problems leak into systems that don't get as much human verification.”
But Kirkland sees potential for AI assistants to police one another, with designated agents taught to identify signs of slopsquatting, typosquatting, and other common attacks using predefined algorithms.
“[W]hen using something like AI here, one of the real advantages over the human way of doing it is that when a new way of a malicious actor slopsquatting is identified, we can create one algorithm and roll it out comprehensively by updating a single model,” he explained.
AI risks aren’t top of mind for leaders
Chainguard’s 2026 Engineering Reality Report surveyed 1,200 software engineers and senior technology leaders across the US, UK, France, and Germany.
The report found widespread enthusiasm for using AI tools to mostly or entirely automate tasks in the engineering workflow.
For example, over two thirds (68%) of respondents said testing, monitoring, and quality assurance had been automated in this way, with similar figures for security patching and vulnerability remediation (67%) and code review (65%).
However, the report also shed light on the core concerns holding software engineers back from full AI adoption.
The top concern among respondents, cited by 17%, was a lack of security and privacy associated with AI, while other core concerns included accountability and trust in code, as well as the use of shadow AI.
Kirkland said these concerns can be eased as firms set out AI usage policies, with Chainguard having rolled its own out at the start of 2025.
This “living document” clearly sets out trusted AI tools employees can use, which reduces operational risk and helps provide oversight for the packages and libraries developers install.
To prevent slopsquatting in the short term, Kirkland suggested firms could look to methods such as more closely verifying package registries and signatures to prevent those from unknown sources being installed.
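One widely available version of this is pip’s hash-checking mode, where every requirement carries a pinned digest and `pip install --require-hashes -r requirements.txt` refuses anything that does not match. The sketch below illustrates the same idea in plain Python; the file name and digest are placeholders rather than real values.

```python
# Sketch of hash pinning: refuse an artifact unless its SHA-256 digest
# matches the value recorded when the dependency was originally vetted.
import hashlib
from pathlib import Path

# Hypothetical vetted artifacts; both keys and digests are placeholders.
PINNED_HASHES = {
    "example_pkg-1.0.0-py3-none-any.whl": "<sha256-recorded-at-review-time>",
}

def verify_artifact(path: Path) -> None:
    """Raise unless the artifact matches its pinned SHA-256 digest."""
    expected = PINNED_HASHES.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not on the vetted list")
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    if actual != expected:
        raise RuntimeError(f"{path.name} does not match its pinned digest")

# verify_artifact(Path("example_pkg-1.0.0-py3-none-any.whl"))
```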
In the long term, Kirkland told ITPro he sees slopsquatting and cybersquatting as solvable problems. He added that with AI security tools in place, it’s increasingly likely that only the most sophisticated attacks will get through.
“Just being somewhat contemporary and topical here, when was the last time you heard of an art museum having jewels stolen, right? That’s typically the story of movies and blockbuster hits, and yet here we are in 2025 and the world’s most famous art museum had some of its most famous jewels stolen.”
In the same way, Kirkland is confident that automated security algorithms that check the popularity, age, and author of packages will ensure that slopsquatting and similar attacks become the exception rather than the rule.
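As a rough illustration of what such checks might look like, a script can pull a package’s metadata from PyPI’s JSON API and flag projects that are very young, thinly released, or missing author information. The thresholds below are illustrative assumptions, not any vendor’s published policy.

```python
# Hedged sketch of popularity/age/author heuristics for a package.
import json
import urllib.request
from datetime import datetime, timezone

def risk_flags(name: str) -> list[str]:
    """Flag packages that look young, thinly released, or anonymous."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)

    flags = []
    releases = data.get("releases", {})
    if len(releases) < 3:
        flags.append("fewer than 3 releases")

    # The oldest upload time across all files approximates project age.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if uploads and (datetime.now(timezone.utc) - min(uploads)).days < 90:
        flags.append("younger than 90 days")

    info = data.get("info", {})
    if not (info.get("author") or info.get("author_email")):
        flags.append("no author metadata")
    return flags

print(risk_flags("requests"))  # an established package; expect few flags
```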
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.