Open source AI models are cheaper than closed source competitors and perform on par, so why aren’t enterprises flocking to them?
A new research paper touts the benefits of open source AI models for business, but the ecosystem needs to do more to encourage adoption
Open source AI models often perform on par with closed source options and could save enterprises billions of dollars each year, new research suggests, yet uptake remains limited.
In a new working paper, The Latent Role of Open Models in the AI Economy, researchers Frank Nagle and Daniel Yue found closed models from major providers naturally dominate the market.
Drawing on data from OpenRouter, the researchers found that closed models account for around 80% of overall usage globally while also generating roughly 96% of revenue. This dominance isn’t driven by a “substantial performance gap”, however.
In fact, open models “routinely achieve 90% or more” of the performance of their closed counterparts. These models also benefit from “significantly lower prices” compared to closed models, researchers found, with operational costs up to 84% lower.
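Those two headline ratios compound into a large gap in value for money. A minimal sketch of the arithmetic, using hypothetical prices and benchmark scores chosen only to match the paper's stated ratios (90% of performance, up to 84% lower cost):

```python
# Illustrative only: hypothetical figures matching the paper's headline ratios,
# not actual prices or benchmark scores for any specific model.
closed_perf, closed_price = 100.0, 10.00   # benchmark score, $ per M tokens
open_perf = closed_perf * 0.90             # "90% or more" of closed performance
open_price = closed_price * (1 - 0.84)     # "up to 84% lower" operational cost

closed_value = closed_perf / closed_price  # performance per dollar spent
open_value = open_perf / open_price

print(f"closed: {closed_value:.1f} perf/$, open: {open_value:.1f} perf/$")
print(f"open delivers {open_value / closed_value:.1f}x the performance per dollar")
```

Under these assumptions, an open model delivering 90% of the quality at 16% of the price works out to several times the performance per dollar, which is the tension the paper's central question turns on.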
“If open models offer comparable performance at substantially lower prices, why do closed models continue to dominate?” the paper notes.
Simply put, there’s more to open source AI use than performance and cost factors, particularly for enterprises balancing these aspects alongside security and regulatory considerations.
Open source considerations
In a blog post detailing the research, Nagle said developers typically opted for closed options “even when an open model both performs better and costs less” and identified a range of factors that influenced this decision process.
Switching costs, for example, were among those highlighted as a barrier for teams as many have “optimized workflows around specific model behaviors”.
“Changing models creates friction,” he noted.
Brand trust and “perceived safety” were also key factors, with enterprises typically more comfortable opting for models from household names, even when the numbers favor an open alternative.
Elsewhere, regulatory considerations were a lingering issue. Nagle noted that closed model providers can often offer “contractual assurances” that open alternatives cannot.
But are closed source models truly more secure? Some industry stakeholders raised concerns about the security implications of open source AI in the wake of the DeepSeek launch in early 2025.
Speaking to ITPro earlier this year, Andy Ward, SVP International at Absolute Security, equated using the Chinese open source model to essentially “printing out and handing over your confidential information”.
John Smith, EMEA chief technology officer (CTO) at Veracode, echoed these concerns on the security front.
“Open source models such as DeepSeek and Llama are built on complex ecosystems of external libraries and dependencies,” he told ITPro.
“While these are essential for functionality, they can introduce significant vulnerabilities, with over 70% of applications containing open source flaws that often go undetected. Hidden backdoors, outdated code and insufficient patching practices are just some of the issues that can arise and are free for attackers to exploit,” Smith added.
“As we saw with the XZ Utils backdoor incident in Linux systems, these types of flaws can have catastrophic consequences, affecting not only individual systems but potentially global networks.”
Despite these concerns, Amanda Brock, CEO of OpenUK, argued that the security discrepancies often attributed to open source models aren’t too dissimilar to those found in closed options.
“I am yet to be shown how opening this up is worse than black box technology in the hands of a few,” she told ITPro. “Bad actors are equally able to hack into this as we have seen many times”.
Smith, meanwhile, noted that the context of open source AI use – particularly in terms of control over data – is crucial here when calculating risk.
“When self-hosting an open source model the business will have more control over where their data resides and how it is used,” he explained.
“With proprietary models hosted by the provider, businesses will need to thoroughly understand where their data will be stored and how it may be used by the provider in order to make an informed decision about the approach to take.”
With this in mind, “black box” options from major providers raise regulatory compliance and security considerations of their own. According to Tom Finch, engineering leader at Chainguard, organizations in heavily regulated industries are adopting open source AI tools precisely because of the transparency they afford from a regulatory perspective.
“When it comes to enterprise AI, open source needs cost and flexibility but also trust. We’re seeing a clear shift: highly regulated industries are turning to open source models because they need transparency and auditability,” he explained.
“You simply can’t meet compliance standards or anticipate risks if you’re dealing with a black box model. Open source makes it possible to inspect every dependency, which is essential for responsible AI adoption.”
The cost of not choosing open source
Notably, the research paper found that hesitancy on the part of enterprises not only means many are missing out on lucrative savings, but also raises serious questions for the open source community over how it can better communicate the potential benefits.
Nagle and Yue found that the global “AI economy” could save anywhere between $20 billion and $48 billion each year if users chose models based solely on price and performance, a calculus that tips in favor of the open source ecosystem.
“Our preferred estimate is $24.8 billion in annual unrealized value, based on an extrapolation of Menlo Ventures’ 2025 estimate of the LLM inference market size and our observed underutilization rates,” Nagle wrote.
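A back-of-envelope version of that kind of extrapolation can be sketched as follows. All inputs here are hypothetical placeholders, not the paper's or Menlo Ventures' actual figures or methodology:

```python
# Back-of-envelope sketch of the extrapolation described above.
# Every input is a hypothetical placeholder, not a figure from the paper.
def unrealized_value(market_size_usd, closed_share, open_savings_rate):
    """Annual spend that could be saved if traffic currently served by
    closed models moved to cheaper open models of comparable quality."""
    return market_size_usd * closed_share * open_savings_rate

# e.g. a $50B inference market, 80% of which runs on closed models,
# with open alternatives ~60% cheaper on the substitutable workloads
savings = unrealized_value(50e9, 0.80, 0.60)
print(f"${savings / 1e9:.1f}B per year in unrealized value")
```

The point of the sketch is that even modest assumptions about market size, closed-model share, and the open-model discount multiply out to savings in the tens of billions, which is the order of magnitude the paper reports.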
“For Linux Foundation stakeholders, including enterprises considering open model AI adoption, policymakers evaluating market competitiveness, and engineers building tooling atop open ecosystems, this is a critical insight,” he added.
“Open models are not just philosophically important, they are economically indispensable.”
Analysis from IBM and Morning Consult in January this year found enterprises opting for open source AI models typically record a stronger return on investment than those working with proprietary models.
According to the study, 51% of firms using open source options recorded returns, whereas just 41% of those working with proprietary models saw positive gains.
IBM noted that these positive returns showcase the appeal of open source for enterprises, with two in five firms not yet using these models planning to adopt them to unlock financial gains.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.