AI-generated code is fast becoming the biggest enterprise security risk as teams struggle with the ‘illusion of correctness’
Security teams are scrambling to catch AI-generated flaws that appear correct before disaster strikes
AI has overtaken all other factors in reshaping security priorities, with teams now forced to deal with AI-generated code that appears correct, professional, and production-ready – but that quietly introduces security risks.
That’s according to a new survey from Black Duck, which recorded a 12% rise last year in the number of teams actively risk-ranking where LLM-generated code can and can’t be deployed.
Meanwhile, there was a 10% increase in custom security rules designed specifically to catch AI-generated flaws.
“The real risk of AI-generated code isn’t obvious breakage; it’s the illusion of correctness. Code that looks polished can still conceal serious security flaws, and developers are increasingly trusting it,” said Black Duck CEO Jason Schmitt.
“We’re witnessing a dangerous paradox: developers increasingly trust AI-produced code that lacks the security instincts of seasoned experts.”
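The “illusion of correctness” Schmitt describes is easiest to see in a concrete case. The sketch below is hypothetical and not drawn from the Black Duck survey: both functions read as tidy, production-ready Python, but the first interpolates user input directly into SQL and is injectable, while the second binds the value as a query parameter.

```python
import sqlite3

# Hypothetical "polished but unsafe" code: it reads cleanly and looks
# production-ready, yet it interpolates user input straight into SQL,
# so a crafted username can rewrite the query logic.
def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# The secure equivalent: identical behaviour for benign input, but the
# driver binds the value as data, so it can never alter the query.
def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    payload = "' OR '1'='1"  # classic injection payload
    print(len(find_user_unsafe(conn, payload)))  # every row leaks
    print(len(find_user_safe(conn, payload)))    # no rows match
```

A reviewer skimming only the unsafe version sees well-named functions, type hints, and clean structure; nothing visually signals the flaw, which is precisely the failure mode the survey's custom security rules are meant to catch.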
Regulation is driving most security investment, researchers found, with the use of Software Bills of Materials (SBOMs) up nearly 30% and automated infrastructure verification surging by more than 50%.
Similarly, respondents reported an increase of more than 40% in streamlining responsible vulnerability disclosure, driven by the EU Cyber Resilience Act (CRA) and evolving US government demands.
“The surge in SBOM adoption reported in BSIMM16 is critical, since it gives organizations the transparency to understand exactly what’s in their software — whether written by humans, AI, or third parties — and the visibility to respond quickly when vulnerabilities surface,” said Schmitt.
“As regulatory mandates expand, SBOMs are moving beyond compliance — they’re becoming foundational infrastructure for managing risk in an AI-driven development landscape.”
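The response-speed argument can be made concrete. Below is a minimal, hypothetical Python sketch — component names, versions, and the advisory list are all invented — that matches a CycloneDX-style component inventory against newly disclosed vulnerable packages:

```python
import json

# A hypothetical CycloneDX-style SBOM fragment: a flat list of the
# components a product ships, regardless of who or what wrote them.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "requests", "version": "2.31.0"},
    {"name": "examplelib", "version": "1.4.2"}
  ]
}
"""

def affected_components(sbom: dict, advisories: dict) -> list:
    """Return components whose (name, version) matches an advisory.

    `advisories` maps package name -> vulnerable version, standing in
    for a feed of newly disclosed CVEs.
    """
    return [
        c for c in sbom.get("components", [])
        if advisories.get(c["name"]) == c["version"]
    ]

if __name__ == "__main__":
    advisories = {"examplelib": "1.4.2"}  # invented advisory
    hits = affected_components(json.loads(sbom_json), advisories)
    print([c["name"] for c in hits])
```

The point of the sketch is the lookup itself: with a machine-readable inventory, "are we exposed?" becomes a join between two lists rather than a manual audit of every codebase.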
A sharp focus on first- and third-party code
Organizations are rapidly standardizing tech stacks, the survey noted. Black Duck found teams are now expanding visibility beyond first-party code as third-party and AI-assisted development explodes.
Security training is also being reinvented, researchers revealed, with multi-day courses now being replaced by just-in-time, on-demand guidance embedded directly into developer workflows in bite-sized chunks.
The use of open collaboration channels increased 29% year over year, giving teams instant access to security guidance on the fly.
AI-generated code is in vogue
Recent research from Aikido found that AI coding tools now write 24% of production code globally as enterprises across Europe and the US ramp up adoption.
The trend has been gaining momentum for several years now, with major tech providers such as Google and Microsoft revealing that significant portions of their source code are now written using the technology.
However, while this is speeding up production, research shows it also carries huge risks.
Aikido found that AI-generated code is now the cause of one in five breaches, with 69% of security leaders, engineers, and developers on both sides of the Atlantic reporting that they have found serious vulnerabilities in it.
These risks are exacerbated by the fact that many developers place too much faith in the technology when coding. A separate survey from Sonar found nearly half of developers fail to check AI-generated code, placing their organizations at huge risk.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.