UK government programmers trialed AI coding assistants from Microsoft, GitHub, and Google, reporting huge time savings and productivity gains – but questions remain over security and code quality

Software developers are reporting big efficiency gains from AI coding tools

Female software developer using AI coding tools on a desktop computer while sitting in a dimly lit office space.
(Image credit: Getty Images)

The UK government has revealed its push to encourage AI use is delivering marked benefits for developers.

As part of an AI trial across government, more than 1,000 tech workers across 50 different departments trialed AI coding assistants from Microsoft, GitHub, and Google – including GitHub Copilot and Gemini Code Assist – between November 2024 and February this year.

Figures published following a review of the scheme show developers are saving around one hour each day, equivalent to roughly 28 working days a year.
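That headline figure can be sanity-checked with some back-of-envelope arithmetic. The working-day and calendar assumptions below are illustrative, not taken from the government's review: an eight-hour working day and roughly 224 working days per year.

```python
# Hypothetical sanity check of the reported saving, assuming an
# 8-hour working day and ~224 working days per year (illustrative values).
hours_saved_per_day = 1
working_days_per_year = 224
hours_per_working_day = 8

# Total hours saved across the year, converted back into working days.
total_hours_saved = hours_saved_per_day * working_days_per_year
working_days_saved = total_hours_saved / hours_per_working_day
print(working_days_saved)  # → 28.0
```

Under those assumptions, an hour a day does indeed add up to about 28 working days over a year.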

Technology minister Kanishka Narayan said the trial scheme highlights the benefits of rolling the technology out across government.

"For too long, essential public services have been slow to use new technology – we have a lot of catching up to do," said Narayan.

"These results show that our engineers are hungry to use AI to get that work done more quickly, and know how to use it safely."

Most of the time savings from the AI assistants came from using them to write first drafts of code that experts then edit, or using them to review existing code.

With only 15% of code generated by AI assistants being used without any edits, engineers have been taking care to check and correct outputs where needed, the government said.

The trial appears to have been popular with government coders, with 72% of users agreeing they offered good value for their organization. Nearly two-thirds (65%) reported that they were completing tasks more quickly and 56% said they could solve problems more efficiently.

Notably, more than half (58%) of participants said they would prefer not to return to working without these solutions.

Tara Brady, president of Google Cloud EMEA, said the tech giant is “thrilled to see the positive impact” its AI coding tool delivered for government workers.

"This landmark trial, the largest of its kind for Gemini Code Assist in the UK public sector, underscores the transformative potential of AI in enhancing productivity and problem-solving for coding professionals, and highlights the successful collaboration stemming from Google Cloud’s Strategic Partnership Agreement with the UK government."

Prime minister Keir Starmer has been highly vocal about the government’s plans to roll out AI across government and public services since taking office. Downing Street hopes to save taxpayers more than £45 billion through the use of AI.

Questions remain over government AI coding gains

Martin Reynolds, field CTO at software delivery platform Harness, welcomed the move but questioned whether the plans go far enough. A key factor here, he noted, lies in the volume of manual remediation required by developers using AI-generated code.

"While AI is creating an initial velocity boost, 85% of government AI-generated code still needs to be manually edited by engineers,” he said.

“That's before it enters the more manual downstream stages of delivery, such as testing, security scanning, deployment, and continuous verification, which are essential to getting code into production safely and reliably," Reynolds added.

The quality of AI-generated code has become a recurring talking point in recent months. A recent study from Fastly, for example, found developers often find themselves manually remediating faulty code, which ultimately negates the time savings delivered by the technology.

Nigel Douglas, head of developer relations at software supply chain security firm Cloudsmith, also voiced concerns about potential security issues, saying there's not much evidence of secure-by-design thinking.

Given the critical nature of the work conducted by developers in key government departments, this should be a key focus moving forward.

“Without security-aware tooling or policy enforcement, you can easily see over-enthusiastic use of AI coding assistants unknowingly introducing vulnerabilities into one of this country’s most critical software ecosystems," he said.

"We’re getting past the point where it’s acceptable for software development teams to ‘hope for the best’ - you’ve got to be able to verify the provenance of the ingredients flowing through your software supply chain and into production systems, and you need tools to help respond to newly emerging threats that may impact what you’ve already deployed.”



Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.