Developers are slacking on AI-generated code safety – here's why it could come back to haunt them
While organizations are aware of the risks, many are spending little time or effort on tracking artifact versions, origins, and security attestations
Organizations are taking a slapdash approach to AI-generated code, with many spending far too little time on oversight, new research suggests.
The vast majority (93%) of respondents to Cloudsmith's 2026 Artifact Management Report said their organization was using AI-generated code, more than twice as many as last year.
Yet despite this sharp increase, around one-third (31%) spend 10 hours or less per month validating, auditing, or securing that code. Just 58% spend at least 11 hours per month on this front, while one in twenty said they don't audit AI-generated code at all.
While AI models have become a leading artifact type, only 12% of organizations are managing them using the same security policies and provenance tracking as traditional binaries, such as language packages and operating system libraries.
This is despite the fact that organizations are largely aware of the risks: just 17% said they were very confident that AI is not introducing new vulnerabilities into their codebase.
“We are at a huge inflection point in the history of software development. In a matter of months, we’ve gone from, ‘How can AI help me write better code?’ to, ‘How can I help AI write better code?’”, said Glenn Weinstein, CEO of Cloudsmith.
"But at the same time, AI tools are expanding the attack surface, introducing more open source dependencies. And those same tools are being used by malicious actors to find more vulnerabilities in existing libraries, leading to more CVEs.”
Sloppy practices could come back to haunt devs
Poor security practices on this front could have wide-reaching regulatory implications for enterprises, the study warned.
Under the EU’s Cyber Resilience Act (CRA), organizations are required to provide a detailed assessment 48 hours after becoming aware of a breach – and this includes providing provenance data.
More than half (53%) of respondents told Cloudsmith they'd need to put in a significant amount of manual effort or time to produce a comprehensive report of artifact versions, origins, and security attestations.
Only a quarter of engineering teams automatically generate and verify Software Bills of Materials (SBOMs) at every build, with the rest doing it manually, reactively, or only when an auditor asks.
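The report doesn't prescribe tooling, but automating SBOM generation at every build typically means adding it as a pipeline step. As a minimal sketch, assuming GitHub Actions and the open source Syft and Grype actions (neither is named in the report, and version pins are illustrative):

```yaml
# Hypothetical CI job: generate a CycloneDX SBOM on every push
# and fail the build if known-vulnerable dependencies are found.
name: sbom
on: [push]

jobs:
  sbom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Generate an SBOM for the repository with Syft
      - uses: anchore/sbom-action@v0
        with:
          path: .
          format: cyclonedx-json
          output-file: sbom.cdx.json

      # Scan the SBOM for known CVEs with Grype
      - uses: anchore/scan-action@v3
        with:
          sbom: sbom.cdx.json
          fail-build: true

      # Keep the SBOM alongside build artifacts for audits
      - uses: actions/upload-artifact@v4
        with:
          name: sbom
          path: sbom.cdx.json
```

Generating the SBOM at build time, rather than reactively when an auditor asks, is what makes the provenance report the CRA-style deadlines require cheap to produce.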
Notably, nearly three-quarters (74%) said they'd struggle to produce a complete report quickly if they were hit with a surprise audit tomorrow.
The majority (83%) run outdated artifact management systems, often because they're worried that upgrading is risky or painful.
Software supply chain threats are growing
Weak software supply chain security has become a high-profile issue over the last year, not least with the Axios npm compromise that hit earlier this month.
With threat campaigns including Shai Hulud 2.0 and SANDWORM_MODE specifically targeting the software supply chain via upstream repositories, 44% of respondents said they'd experienced a security incident caused by a third-party dependency.
The same number said their organization spent over 50 hours per month investigating potential security issues linked to third-party dependencies, whether or not they resulted in a breach.
“Agentic development is an incredibly powerful way to build software, and teams will be far more productive and write even more software as a result. That is a good thing, because the world certainly needs more software and more automation," said Weinstein.
"For enterprises to manage this new velocity and productivity, automated guardrails and context are the new keys to unlock the production of safer, more efficient code.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.