AI-generated code is fast becoming the biggest enterprise security risk as teams struggle with the ‘illusion of correctness’

Security teams are scrambling to catch AI-generated flaws that appear correct before disaster strikes

A visualization of agentic AI layers, shown as blue nodes of code connected by blue strands against a dark background.
(Image credit: Getty Images)

AI has overtaken all other factors in reshaping security priorities, with teams now forced to deal with AI-generated code that appears correct, professional, and production-ready – but that quietly introduces security risks.

That’s according to a new survey from Black Duck, which recorded a 12% rise last year in the number of teams actively risk-ranking where LLM-generated code can and can’t be deployed.

Meanwhile, there was a 10% increase in custom security rules designed specifically to catch AI-generated flaws.
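
What such a rule looks like varies by toolchain, but as a minimal, hypothetical sketch (not a rule taken from the Black Duck report), the check below walks a Python file's syntax tree and flags f-strings passed straight to a database execute() call, a pattern that tends to slip through review when generated code looks complete:

```python
import ast
import sys


def flag_fstring_sql(path: str) -> list[int]:
    """Return line numbers where an f-string is passed directly to .execute()."""
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "execute"
            and node.args
            and isinstance(node.args[0], ast.JoinedStr)  # an f-string literal
        ):
            findings.append(node.lineno)
    return findings


if __name__ == "__main__":
    for filename in sys.argv[1:]:
        for lineno in flag_fstring_sql(filename):
            print(f"{filename}:{lineno}: f-string passed to execute(), possible SQL injection")
```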

“The real risk of AI-generated code isn’t obvious breakage; it’s the illusion of correctness. Code that looks polished can still conceal serious security flaws, and developers are increasingly trusting it,” said Black Duck CEO Jason Schmitt.

“We’re witnessing a dangerous paradox: developers increasingly trust AI-produced code that lacks the security instincts of seasoned experts.”
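
To make that “illusion of correctness” concrete, here is a purely illustrative snippet (not taken from the Black Duck research) of the kind of code reviewers describe: it reads as tidy, production-ready Python, yet it builds a SQL query from raw user input and generates password-reset tokens with a non-cryptographic random generator.

```python
import random
import sqlite3
import string


def get_user(conn: sqlite3.Connection, username: str):
    # Looks clean, but interpolating user input into SQL enables injection;
    # a safe version would pass the value as a query parameter instead.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()


def make_reset_token(length: int = 32) -> str:
    # Looks reasonable, but random.choices() is predictable; security-sensitive
    # tokens should come from the secrets module.
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choices(alphabet, k=length))
```

Both functions would pass a casual review and a green test suite, which is exactly the gap that custom rules like the checker sketched above are meant to close.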

It’s regulation that’s driving most security investment, researchers found, with the use of Software Bills of Materials (SBOMs) up nearly 30% and automated infrastructure verification surging by more than 50%.

Similarly, respondents reported a rise of more than 40% in efforts to streamline responsible vulnerability disclosure, driven by the EU Cyber Resilience Act (CRA) and evolving US government requirements.

"The surge in SBOM adoption reported in BSIMM16 is so critical, since it gives organizations the transparency to understand exactly what’s in their software — whether written by humans, AI, or third parties — and the visibility to respond quickly when vulnerabilities surface," said Schmitt.

"As regulatory mandates expand, SBOMs are moving beyond compliance — they’re becoming foundational infrastructure for managing risk in an AI-driven development landscape.”

A sharp focus on first- and third-party code

Organizations are rapidly standardizing tech stacks, the survey noted. Black Duck found teams are now expanding visibility beyond first-party code as third-party and AI-assisted development explodes.

Security training is also being reinvented, researchers revealed, with multi-day courses now being replaced by just-in-time, on-demand guidance embedded directly into developer workflows in bite-sized chunks.

The use of open collaboration channels increased 29% year over year, giving teams access to security guidance on the fly.

AI-generated code is in vogue

Recent research from Aikido found that AI coding tools now write 24% of production code globally as enterprises across Europe and the US ramp up adoption.

The trend has been gaining momentum for several years now, with a host of major tech providers such as Google and Microsoft revealing that significant portions of their source code are now written using the technology.

However, while this is speeding up production, research shows it also carries huge risks.

Aikido found that AI-generated code is now the cause of one in five breaches, with 69% of security leaders, engineers, and developers on both sides of the Atlantic reporting that they have found serious vulnerabilities in it.

These risks are further exacerbated by the fact that many developers place too much faith in the technology when coding. A separate survey from Sonar found that nearly half of developers fail to check AI-generated code, placing their organizations at huge risk.


Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.