AI-generated code is now behind major security incidents at one in five organizations – but developers and security leaders alike are convinced the technology will come good eventually

Most security leaders still think AI tools will eventually write secure, reliable code


AI coding tools are creating serious security risks in production, with one-in-five CISOs saying they've suffered major incidents because of AI-generated code.

AI coding tools now write 24% of production code – 21% in Europe and 29% in the US – according to a new report from Aikido. But it's risky, with 69% of security leaders, security engineers, and developers across Europe and the US revealing they'd found serious vulnerabilities in AI-written code.

US-based respondents were among the worst hit by AI-related flaws, with 43% of organizations reporting serious incidents, compared with just 20% in Europe.

This disparity, the study noted, appears to be down to better prevention and oversight in Europe. EU-based firms reported more “near misses” with AI-generated code than their US counterparts, for example, potentially highlighting more robust testing practices.

Adding more tools to address the issue isn’t helping, Aikido found. Indeed, organizations with more security tools report more incidents, with more overhead and slower remediation.

Nearly two-thirds (64%) of those with just one or two tools had an incident, while the figure rose to 90% for those with between six and nine tools.

All-in-one security tools are helping bridge gaps

Notably, teams using tools designed for both developers and security teams were more than twice as likely to report zero incidents as those using tools made for only one of the two groups.

“Giving developers the right security tool that works with existing tools and workflows allows teams to implement security best practices and improve their posture,” commented Walid Mahmoud, DevSecOps lead at the UK Cabinet Office.

Teams using separate AppSec and CloudSec tools were 50% more likely to face incidents, and 93% of those with separate tools reported integration headaches such as duplicate alerts or inconsistent data.

The security blame game is heating up

The blame for incidents caused by AI-generated code is becoming a serious point of contention within enterprises, the report noted. For example, 53% of respondents blamed security teams for failing to address issues, while 45% blamed developers for failing to spot issues before pushing to production.

Meanwhile, 42% pointed toward whoever merged the code. This blame game is expected to keep escalating, according to Aikido: half of developers reckoned they'd be blamed if AI-generated code they shipped introduced a vulnerability, a higher share than expected the blame to fall on the security team.

“There's clearly a lack of clarity among respondents over where accountability should sit for good risk management,” commented Andy Boura, CISO at Rothesay.

Despite concerns across the board, enterprises are expected to keep driving ahead with adoption of AI coding tools, the study noted. Nine-in-ten said they expect AI to take over penetration testing within the next five years, for example.

Meanwhile, 96% believe AI will write secure, reliable code at some point, with the biggest proportion (44%) thinking it will happen in the next three to five years.

Only 21% think this will be achieved without human oversight, however, underlining the importance of keeping humans in the loop.



Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.