Google's new Jules coding agent is free to use for anyone – and it just got a big update to prevent bad code output
Devs using Jules will be able to leverage more autonomous code generation and completion, within certain limits


Google publicly released its Jules coding agent earlier this month, and it’s already had a big update aimed at shoring up code quality.
Jules is intended as an ‘asynchronous’ coding agent, capable of working in the background alongside developers to write tests, fix bugs, and apply version updates across a codebase.
Users working with the agent can delegate complex coding tasks based on project goals, in contrast to manually operated vibe coding tools and AI coding assistants such as GitHub Copilot or Google’s Gemini Code Assist.
Once Jules has access to an organization’s repositories, it clones the code contained within to a Google Cloud virtual machine (VM) and makes changes based on user prompts.
As a safety precaution, Jules does not make permanent changes to code until a human has manually approved them, a safeguard against the kind of unreviewed changes that have caused serious problems for vibe coding users in recent weeks.
When its outputs are complete, Jules presents them as a pull request to the relevant developer, with a breakdown of its reasoning for the changes and a diff between the code in the existing branch and its proposed code.
Under the hood of the Jules coding agent
Jules is powered by Gemini 2.5 Pro, Google’s flagship LLM, which is capable of ‘reasoning’ through complex problems before acting. The model’s one million token context window means Jules can use large swathes of an organization’s existing code for context.
Google has stated that Jules runs privately by default and is not trained on the private code to which it’s exposed.
Although Jules is free to use, the entry-level tier allows just 15 tasks per day and three concurrent tasks at any given moment.
Enterprises looking to use it at scale can contact their Google Cloud sales representative to add Google AI Ultra to their Workspace plan, which provides access to Jules for 300 daily tasks and 60 concurrent tasks.
Independent developers and other interested users can pay £18.99 ($19.99) per month for AI Pro, which gets them 100 daily and 15 concurrent tasks, or £234.99 ($249.99) per month for AI Ultra.
Those on paid plans also have priority access to Google’s latest model. In practice, this means all plans currently use Gemini 2.5 Pro, but in the near future subscribers will be able to use more advanced models to power their Jules outputs.
Jules comes with constructive self-criticism
AI-powered coding is becoming increasingly popular, with more than 25% of Google’s own new code now AI-generated and a recent Stack Overflow study finding that 84% of software developers now use AI.
But nearly half (46%) of respondents to the same study said they don’t trust AI accuracy, while three-quarters (75.3%) said they would turn to co-workers when they don’t trust AI answers.
AI-generated code also comes with inherent risks and limitations. Almost half of the code generated with leading models contains vulnerabilities, according to recent research by Veracode, including cross-site scripting (XSS) and SQL injection flaws.
To address these concerns, Google has also released a ‘critic’ feature within Jules, which subjects all proposed changes to an adversarial review at the point of generation, in an attempt to ensure the code the agent produces is as robust and efficient as possible.
When the critic detects code that needs to be redone, it sends it back to Jules to be improved. Google gave the example of code that includes a subtle logic error being handed back to Jules with the explanation: “Output matches expected cases but fails on unseen inputs”.
Google differentiated the critic from linters, the common static analysis tools found in coding agents from competitors such as OpenAI’s Codex, on the grounds that the critic judges code against the user’s context and intent rather than a preset rulebook.
It added that in the future the critic will become an agent in its own right, able to draw on tools such as a code interpreter or search engines to gain more context on code, and to trigger criticism at multiple points throughout Jules’ reasoning process. These ambitions are due to be realized in future versions, with no release timeframe yet announced.
“Our goal is fewer slop PRs, better test coverage, stronger security,” Google wrote in its blog post announcing the feature.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.