AI-generated code is now behind one in five major security incidents – but developers and security leaders alike are convinced the technology will come good eventually
Most security leaders still think AI tools will eventually write secure, reliable code
AI coding tools are creating serious security risks in production, with one in five CISOs saying they've suffered major incidents because of AI-generated code.
AI coding tools now write 24% of production code – 21% in Europe and 29% in the US – according to a new report from Aikido. But it's risky, with 69% of security leaders, security engineers, and developers across Europe and the US revealing they'd found serious vulnerabilities in AI-written code.
US-based respondents were among the worst hit by AI-related flaws, with 43% of organizations reporting serious incidents, compared with just 20% in Europe.
Europe's lower incident rate appears to be down to better prevention and oversight, the study noted. For example, EU-based firms reported more "near misses" with AI-generated code than their US counterparts, potentially highlighting more robust testing practices.
Adding more tools to address the issue isn’t helping, Aikido found. Indeed, organizations with more security tools report more incidents, with more overhead and slower remediation.
Nearly two-thirds (64%) of those with just one or two tools had an incident, while the figure rose to 90% for those with between six and nine tools.
All-in-one AI coding tools are helping bridge gaps
Notably, teams using tools designed for both developers and security teams were more than twice as likely to report zero incidents as those using tools made for only one specific group.
“Giving developers the right security tool that works with existing tools and workflows allows teams to implement security best practices and improve their posture,” commented Walid Mahmoud, DevSecOps lead at the UK Cabinet Office.
Teams using separate AppSec and CloudSec tools were 50% more likely to face incidents, and 93% of those with separate tools reported integration headaches such as duplicate alerts or inconsistent data.
The security blame game is heating up
The blame for incidents caused by AI code is now becoming a serious point of contention within enterprises, the report noted. For example, 53% of respondents blamed security teams for failing to address issues, while 45% blamed developers who failed to spot issues before pushing to production.
Meanwhile, 42% pointed toward whoever merged the code. This blame game is expected to keep escalating, according to Aikido: half of developers reckoned they'd be blamed if AI-generated code they committed introduced a vulnerability – a higher proportion than pointed to the security team itself.
“There's clearly a lack of clarity among respondents over where accountability should sit for good risk management,” commented Andy Boura, CISO at Rothesay.
Despite concerns across the board, enterprises are expected to continue driving ahead with adoption of AI coding tools, the study noted. Nine in ten said they expect AI to take over penetration testing within the next five years, for example.
Meanwhile, 96% believe AI will write secure, reliable code at some point, with the biggest proportion (44%) thinking it will happen in the next three to five years.
Only 21% think this will be achieved without human oversight, however, underlining the importance of keeping humans in the loop.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
