So much for ‘trust but verify’: Nearly half of software developers don’t check AI-generated code – and 38% say it's because it takes longer than reviewing code produced by colleagues

A concerning number of developers are failing to check AI-generated code, exposing enterprises to huge security threats


A majority of developers are using AI to create code, and even though most don't trust the output, many are failing to take steps to verify it.

That's according to a survey from code review company Sonar, which found that 72% of developers use AI tools every day, with the technology helping to write up to 42% of committed code.

Notably, 96% of developers surveyed said they don't fully trust that AI-generated code is functionally correct – but fewer than half say they review it before committing.

Sonar said this leads to "verification debt", a term used by AWS CTO Werner Vogels while discussing the use of AI in software development at the company's annual re:Invent conference in December.

Tariq Shaukat, CEO of Sonar, said the research highlights a "fundamental shift" in software development, whereby value is no longer simply defined by the speed at which code can be written, but by the "confidence in deploying it".

"While AI has made code generation nearly effortless, it has created a critical trust gap between output and deployment,” he said. "To realize the full potential of AI, we must close this gap."

Why devs are slacking on AI-generated code

There may be a good reason for the failure to check AI-generated code, the study noted: reviewing it typically takes more time than reviewing human-written code.

"While AI is supposed to save time, developers are spending a significant portion of that saved time on review," the Sonar report said, adding: "In fact, 38% of developers say reviewing AI-generated code requires more effort than reviewing code written by their human colleagues."

One reason for that is that AI often produces code that looks correct but isn't reliable, a statement 61% of respondents agreed with.

"That's a critical finding — it means AI code can introduce subtle bugs that are harder to spot than typical human errors," the report noted. "The same percentage (61%) agree that it 'requires a lot of effort to get good code from AI' through prompting and fixing."

How developers are using AI

The survey found the most common use for AI by developers was for proofs of concept and prototypes (88%), followed by the creation of production software for internal, non-critical workflows (83%), customer-facing applications (73%), and business-critical internal software (58%).

Those surveyed said AI was most effective at writing documentation, explaining existing code, and vibe coding. Just 55% of those polled said such tools were effective for assisting with the development of new code, yet that task had the highest adoption rate at 90%.

"Developers have embraced AI as a daily partner, but they're finding it's a much better 'explainer' and 'prototyper' than it is a 'maintainer' or 'refactorer' — at least for now," the report states.

"It's highly effective at generating new things (docs, tests, new projects) but struggles more with the complex, nuanced work of modifying and optimizing existing, mission-critical code."

Too much trust in AI tools

The Sonar report is the latest in a string of studies highlighting both the benefits of AI tools for developers and a prevailing lack of trust in their outputs.

Best practices have also slipped among many developers since the influx of these tools into the profession, research shows. In a survey from Cloudsmith last year, for example, 42% of developers said their codebases are now largely AI-generated.

Respondents specifically highlighted productivity and efficiency gains from using the technology, yet only 67% said they actively review code before deployment.

Cloudsmith warned this lax approach to code testing and reviews could have dire consequences for enterprises, leaving them open to an array of security risks and vulnerabilities.


Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.

Nicole is the author of a book about the history of technology, The Long History of the Future.