Using AI to code? Watch your security debt

Black Duck research shows faster development may be causing risks for companies


AI is helping to speed up code deployment, but security isn't keeping up.

That's one of the key findings from a report by security firm Black Duck, with 81% of 1,000 security professionals surveyed saying application security testing is now slowing down development and delivery.

The report comes as AI is starting to have an impact on coding and development, with 84% of developers surveyed by Stack Overflow over the summer saying they've used the technology in the last year or plan to imminently, even though trust in the end product remains an issue. Indeed, a survey of security leaders last year found that the vast majority – some 92% of those polled – were concerned that AI-generated code could cause a security incident in their organization.

The Black Duck research found nearly 60% of those polled are deploying code every day – if not more frequently – but security is getting in the way of that AI-accelerated speed.

That's because 46% of those surveyed still rely on manual security processes, and that friction can lead to security debt, with unresolved vulnerabilities piling up release after release.

The report noted that companies have successfully built "high-velocity development pipelines," thanks to AI as well as other coding tools, but automation of security lags behind.

"This automation gap means many businesses are simply unaware of their vulnerabilities, with 61.64% of organizations testing less than 60% of their own applications," the report noted. "The result is that you're accumulating a massive security debt with every single release."

AI vs workflows

Because of that, 27% of developers polled said improving development workflow integration is a top priority.

"The findings paint a clear picture: the old ways of doing application security aren't working, and speed without integrated security creates risk for companies," said Jason Schmitt, CEO of Black Duck. "To navigate this new world, development teams must shift from a reactive, tool-centric model to a proactive, platform-based strategy that integrates security directly into developer workflows to achieve true scale application security."

The challenges are exacerbated by tool sprawl: 71% of respondents said security alerts are largely noise, with false positives and duplicate findings from multiple tools eating into any benefits.

Risk vs reward for AI coding

More generally, two-thirds of respondents said AI helps them write more secure code, but 57% agreed it can also introduce new risks.

"This report nails the core dilemma facing modern DevSecOps: the tradeoff between velocity and visibility," said Mayur Upadhyaya, CEO at APIContext. "We're seeing a shift away from adding more tools toward simplifying and integrating the ones teams already have."

Upadhyaya added: "As AI becomes both a productivity multiplier and an attack vector, governance must evolve, but so must observability. If you can't baseline normal behavior and detect deviation in real time, you're playing catch-up. And in AI-native pipelines, catch-up is too late."

Of course, security debt is a problem even without AI. A report by Veracode earlier this year showed remediation times for security flaws had grown by 47% over the last five years, from an average 171 days to fix a vulnerability in 2020 to 252 days in 2025.


Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.

Nicole is the author of a book about the history of technology, The Long History of the Future.