Mistral targets security-conscious developers with new AI coding assistant
The coding assistant, available now in private preview, will be fully customizable
Mistral AI has announced a new coding assistant designed for secure enterprise deployment, with a focus on customizable models and observable decision making.
The assistant runs on a bundle of four Mistral models, each specializing in one aspect of its broader capabilities. It generates code using Codestral, Mistral's code generation model fluent in over 80 programming languages, while drawing on Codestral Embed, a model built for code search and retrieval augmented generation (RAG), for coding context.
Additionally, the assistant includes Devstral, Mistral's agentic software engineering LLM, which the company claims solves real-world software engineering problems more accurately, as well as its general-purpose model Mistral Medium 3 for handling user chats.
The French AI giant said it spoke with CISOs, platform leads, and VPs of engineering and found there are four common hurdles to adopting coding copilots.
These were restrictions on connecting to internal code repositories, limited customizability, surface-level task completion with no visible reasoning process, and SLAs split across multiple vendors for copilot models, infrastructure, and software.
Mistral claims to address all of these pain points with Mistral Code, which combines the models, inference infrastructure, compliance, and 24/7 support in a single offering.
Mistral emphasized that Mistral Code is built on the open source project Continue, which it praised for its transparency, adding that it has made key changes to improve capabilities in areas such as user chats, audit logging, multi-line editing, and issue resolution.
Mistral Code is currently in private beta for JetBrains IDEs and VS Code. The firm has promised general availability in the near future, once it has incorporated customer feedback.
Will Mistral AI's coding gambit work?
As Anthropic continues to dominate coding benchmark leaderboards and the likes of OpenAI and Google continue to roll out more powerful code generation models, Mistral will find it ever harder to carve a space for itself in the market.
Leaning into open source, customizable, locally run tools such as Mistral Code could be a path for the firm to maintain relevance and win over a core developer community. AI code generation is still far from ubiquitous and, to me, it seems developers are willing to give all tools a fair shot if their organizational policies allow for it.
The potential benefits of running AI coding tools entirely on premises could prove especially attractive for firms within Europe looking for more control and transparency in AI, in the wake of the EU AI Act. It could also be an enticing offer for organizations in controlled sectors such as finance, government, healthcare, or defense.
But ultimately, it comes down to whether developers enjoy using Mistral Code and find it meets their needs. The lower costs and latency of running on premises are important, but if the tool still lags behind rivals in sophistication, leaders may be willing to fork out the extra cash to go elsewhere.
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.