AI-generated code is fast becoming the biggest enterprise security risk as teams struggle with the ‘illusion of correctness’
Security teams are scrambling to catch AI-generated flaws that appear correct before disaster strikes
AI has overtaken all other factors in reshaping security priorities, with teams now forced to deal with AI-generated code that appears correct, professional, and production-ready – but that quietly introduces security risks.
That’s according to a new survey from Black Duck, which recorded a 12% rise last year in the number of teams actively risk-ranking where LLM-generated code can and can’t be deployed.
Meanwhile, there was a 10% increase in custom security rules designed specifically to catch AI-generated flaws.
“The real risk of AI-generated code isn’t obvious breakage; it’s the illusion of correctness. Code that looks polished can still conceal serious security flaws, and developers are increasingly trusting it,” said Black Duck CEO Jason Schmitt.
“We’re witnessing a dangerous paradox: developers increasingly trust AI-produced code that lacks the security instincts of seasoned experts.”
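To make Schmitt’s point concrete, consider a minimal, hypothetical Python sketch (not drawn from Black Duck’s research): the function below reads as tidy and production-ready, yet it assembles its SQL query through string interpolation, a classic injection flaw that a casual review can easily miss.

```python
import sqlite3

def get_user(db_path: str, username: str):
    """Fetch a user record by username (deliberately flawed illustration)."""
    conn = sqlite3.connect(db_path)
    try:
        # Reads cleanly, but interpolating user input straight into SQL
        # makes the query injectable: username = "x' OR '1'='1" matches
        # every row in the table.
        query = f"SELECT id, username, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchone()
    finally:
        conn.close()

# The safe version binds the value as a parameter instead:
#     conn.execute("SELECT id, username, email FROM users WHERE username = ?",
#                  (username,))
```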
Regulation is driving most security investment, researchers found, with use of Software Bills of Materials (SBOMs) up nearly 30% and automated infrastructure verification surging by more than 50%.
Similarly, respondents reported an increase of more than 40% in efforts to streamline responsible vulnerability disclosure, driven by the EU Cyber Resilience Act (CRA) and evolving US government demands.
"The surge in SBOM adoption reported in BSIMM16 is so critical, since it gives organizations the transparency to understand exactly what’s in their software — whether written by humans, AI, or third parties — and the visibility to respond quickly when vulnerabilities surface," said Schmitt.
"As regulatory mandates expand, SBOMs are moving beyond compliance — they’re becoming foundational infrastructure for managing risk in an AI-driven development landscape.”
A sharp focus on first and third-party code
Organizations are rapidly standardizing tech stacks, the survey noted. Black Duck found teams are now expanding visibility beyond first-party code as third-party and AI-assisted development explodes.
Security training is also being reinvented, researchers revealed, with multi-day courses replaced by just-in-time, on-demand guidance embedded directly into developer workflows in bite-sized chunks.
The use of open collaboration channels increased 29% year over year, giving teams instant access to security guidance on the fly.
AI-generated code is in vogue
Recent research from Aikido found that AI coding tools now write 24% of production code globally as enterprises across Europe and the US ramp up adoption.
The trend has been gaining momentum for several years, with major tech providers such as Google and Microsoft revealing that significant portions of their source code are now written using the technology.
However, while this is speeding up production, research shows it also carries huge risks.
Aikido found that AI-generated code is now the cause of one in five breaches, while 69% of security leaders, engineers, and developers on both sides of the Atlantic report having found serious vulnerabilities in it.
These risks are further exacerbated by the fact that many developers place too much faith in the technology. A separate survey from Sonar found nearly half of developers fail to check AI-generated code, placing their organizations at serious risk.
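With so much generated code going unreviewed, automated checks such as the custom security rules described above become the backstop. As a minimal, hypothetical sketch of the idea, the Python snippet below uses the standard ast module to flag two patterns that often look plausible in generated code: calls to eval() and SQL assembled with f-strings.

```python
import ast

SQL_KEYWORDS = ("SELECT", "INSERT", "UPDATE", "DELETE")

def flag_risky_patterns(source: str, filename: str = "<generated>"):
    """Report eval() calls and f-string-built SQL in Python source (illustrative)."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        # Pattern 1: any bare call to eval().
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append((node.lineno, "call to eval()"))
        # Pattern 2: f-strings whose literal text looks like SQL.
        if isinstance(node, ast.JoinedStr):
            literal_text = "".join(
                part.value for part in node.values
                if isinstance(part, ast.Constant) and isinstance(part.value, str)
            )
            if any(kw in literal_text.upper() for kw in SQL_KEYWORDS):
                findings.append((node.lineno, "f-string used to build SQL"))
    return findings

# Example: flags the f-string SQL on line 1 of the snippet.
print(flag_risky_patterns("query = f\"SELECT * FROM users WHERE name = '{name}'\""))
```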
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
