Open source AI models are cheaper than closed source competitors and perform on par, so why aren’t enterprises flocking to them?
A new research paper touts the benefits of open source AI models for business, but the ecosystem needs to do more to encourage adoption
Open source AI models often perform on par with closed source options and could save enterprises billions of dollars each year, new research suggests, yet uptake remains limited.
In a new working paper, The Latent Role of Open Models in the AI Economy, researchers Frank Nagle and Daniel Yue found closed models from major providers naturally dominate the market.
Drawing on data from OpenRouter, the researchers found that closed models account for around 80% of overall usage globally while generating roughly 96% of revenue. This dominance isn’t driven by a “substantial performance gap”, however.
In fact, open models “routinely achieve 90% or more” of the performance of closed counterparts. These models also benefit from “significantly lower prices” compared to closed models, researchers found, with operational costs up to 84% lower.
“If open models offer comparable performance at substantially lower prices, why do closed models continue to dominate?” the paper notes.
Simply put, there’s more to open source AI use than performance and cost factors, particularly for enterprises balancing these aspects alongside security and regulatory considerations.
Open source considerations
In a blog post detailing the research, Nagle said developers typically opted for closed options “even when an open model both performs better and costs less” and identified a range of factors that influenced this decision process.
Switching costs, for example, were among those highlighted as a barrier for teams as many have “optimized workflows around specific model behaviors”.
“Changing models creates friction,” he noted.
Brand trust and “perceived safety” were also key factors, with enterprises typically more comfortable opting for models from household names, regardless of whether the benefits of an open model outweigh the costs.
Elsewhere, regulatory considerations were a lingering issue. Nagle noted that closed model providers can often provide “contractual assurances” that open alternatives cannot.
But are closed source models truly more secure? Some industry stakeholders raised concerns about the security implications of open source AI in the wake of the DeepSeek launch in early 2025.
Speaking to ITPro earlier this year, Andy Ward, SVP International at Absolute Security, equated using the Chinese open source model to essentially “printing out and handing over your confidential information”.
John Smith, EMEA chief technology officer (CTO) at Veracode, echoed these concerns on the security front.
“Open source models such as DeepSeek and Llama are built on complex ecosystems of external libraries and dependencies,” he told ITPro.
“While these are essential for functionality, they can introduce significant vulnerabilities, with over 70% of applications containing open source flaws that often go undetected. Hidden backdoors, outdated code and insufficient patching practices are just some of the issues that can arise and are free for attackers to exploit,” Smith added.
“As we saw with the XZ Utils backdoor incident in Linux systems, these types of flaws can have catastrophic consequences, affecting not only individual systems but potentially global networks.”
Despite these concerns, Amanda Brock, CEO of OpenUK, argued that the so-called security discrepancies associated with open source models aren’t too dissimilar to those found in closed options.
“I am yet to be shown how opening this up is worse than black box technology in the hands of a few,” she told ITPro. “Bad actors are equally able to hack into this as we have seen many times”.
Smith, meanwhile, noted that the context of open source AI use – particularly in terms of control over data – is crucial here when calculating risk.
“When self-hosting an open source model the business will have more control over where their data resides and how it is used,” he explained.
“With proprietary models hosted by the provider, businesses will need to thoroughly understand where their data will be stored and how it may be used by the provider in order to make an informed decision about the approach to take.”
With this in mind, “black box” options from major providers raise the same regulatory compliance and security considerations. According to Tom Finch, engineering leader at Chainguard, organizations in heavily regulated industries are using open source AI tools due to the transparency they afford from a regulatory perspective.
“When it comes to enterprise AI, open source needs cost and flexibility but also trust. We’re seeing a clear shift: highly regulated industries are turning to open source models because they need transparency and auditability,” he explained.
“You simply can’t meet compliance standards or anticipate risks if you’re dealing with a black box model. Open source makes it possible to inspect every dependency, which is essential for responsible AI adoption.”
The cost of not choosing open source
Notably, the research paper found that hesitancy on the part of enterprises not only means many are missing out on lucrative savings, but raises serious questions for the open source community over how it can emphasize the potential benefits.
Nagle and Yue found that the global “AI economy” could save anywhere between $20 billion and $48 billion each year if users chose models based solely on price and performance, a comparison that tips in favor of the open source ecosystem.
“Our preferred estimate is $24.8 billion in annual unrealized value, based on an extrapolation of Menlo Ventures’ 2025 estimate of the LLM inference market size and our observed underutilization rates,” Nagle wrote.
“For Linux Foundation stakeholders, including enterprises considering open model AI adoption, policymakers evaluating market competitiveness, and engineers building tooling atop open ecosystems, this is a critical insight,” he added.
“Open models are not just philosophically important, they are economically indispensable.”
Analysis from IBM and Morning Consult in January this year found enterprises opting for open source AI models typically record a stronger return on investment than those working with proprietary models.
According to the study, 51% of firms using open source options recorded returns, whereas just 41% of those working with proprietary models saw positive gains.
IBM noted that these positive returns showcase the appeal of open source for enterprises, with two in five firms not yet using these models planning to adopt them to unlock financial gains.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.