The pros and cons of open source AI for business
Leaders face a choice between frontier freedom and cloud lock-in without the right adoption strategy
Discussion is intensifying around whether open source AI can offer more flexibility and resilience compared to proprietary AI – but the conversation is muddied by confusion around what makes an AI solution ‘open’.
Thanks to the 30-year-old Open Source Definition (OSD) there’s a clear understanding of what’s considered open source software, but the same cannot be said for AI, says Amanda Brock, CEO at OpenUK. She believes trying to define open source AI as a whole hasn’t worked, and that a better approach is disaggregation: looking at what the components of a form of AI are, and how they’re licensed.
This is because most so-called open models are open weights only, says Nell Watson, IEEE senior member, author and AI ethics engineer at Singularity University. “You get the trained parameters, but not the training data, methodology or reproducibility guarantees. Llama is not open source in the way Linux is. That distinction matters, because the value proposition of openness – auditability, sovereignty and independent verification – depends on which layers are actually open.”
“Equally important is what license it’s shared on. This is where we can see ‘open washing’, where someone uses the term ‘open source’ to describe their project when the license has some form of restriction,” says Brock. She points to Meta’s Llama 2 as an example.
“There’s a commercial restriction and an acceptable use policy in the Llama software license, meaning it’s neither open source nor open model. Coming up to its launch, Meta was calling it open innovation behind the scenes and I think that’s right. There are degrees of openness around AI, and we have to be careful to understand what these distinctions and definitions are.”
The case for open source AI
Once enterprises have clarity on what they're actually adopting, they can then weigh up the pros and cons. One advantage of open-source AI is retaining more control.
While choosing a proprietary application programming interface (API) can make things easier at the start, it can create compounding dependencies including pricing you don’t control, availability you can’t enforce and data handling you can’t verify.
The question every CIO should ask is what happens when your vendor changes terms, raises prices or discontinues the model you’ve built around, says Watson. “Enterprises that went all-in on single-vendor cloud stacks in the 2010s learned this lesson the hard way. AI is heading for the same reckoning, only faster,” she warns.
Regulated industries are leading the way, with Nvidia reporting that financial services firms deploy open weight models at higher rates than most other sectors. Some 84% of respondents surveyed by the chip giant stated that open source models are important to their overall AI strategy.
"Open models that have been vetted and hardened allow enterprises to maintain full ownership and control of both models and training data," says IDC research manager Dr Michele Rosen – with half of respondents in regulated sectors calling this "critical."
Digital sovereignty is sharpening this pressure further, as enterprises serving international clients can no longer treat data residency and compliance as an afterthought.
Can open source compete with the hyperscalers?
The control argument is all well and good, but counts for nothing if the technology can't do the job. Belief that open source AI simply can't keep up with hyperscalers is widespread, but that’s only partly true.
The reality is that most enterprises don’t need ‘frontier’ capabilities. Instead, they’re after reliable, auditable and cost-predictable performance on specific workflows, and for those use cases, a fine-tuned open model will often outperform a general-purpose frontier model, “because specificity beats scale,” says Watson.
For organizations like Bloomberg, the open versus proprietary question is largely beside the point. "In a production environment, it's less about open weight vs proprietary and more about performance metrics. A smaller model, whether open weight or proprietary, might actually give you better speed and accuracy in a given situation,” says Amanda Stent, head of AI Strategy and Research in the Office of the CTO at Bloomberg.
“The gap at the absolute frontier still exists for the most demanding general reasoning tasks, but optimizing for benchmarks that don’t match your workload is an expensive distraction,” continues Watson. “The enterprises getting the most value from AI right now are those asking ‘what do we actually need?’ rather than ‘what model tops the leaderboard?’.”
In the next few years, a hybrid approach to model licensing could become the dominant approach.
“Proprietary APIs will remain the standard for general purpose ‘daily assistant’ copilots, while open source will become the backbone of internal, mission-critical AI platforms,” says Mark Scrivens, CEO of FPT UK. “We’re already seeing this in highly regulated sectors such as healthcare and banking, financial services and insurance, where we’re seeing a surge in open-source adoption for large-scale transformation projects.”
What to consider before choosing open source AI
The case for open-source AI is building, but so is the to-do list, as it demands significantly more from an enterprise than just signing up for an API key. Brock points to the parallel with early open source software. The assumption then was that free to use meant free to run, and that simply wasn’t the case.
In the case of open-source AI, models often require more hands-on oversight. “Unlike proprietary models, open-source deployments place the responsibility squarely on internal teams,” says Rosen. “Therefore, moving beyond proprietary APIs requires a higher level of operational maturity.”
“It requires a shift from AI user to operator,” adds Scrivens. “Organizations need GPU-ready clusters, whether on-premise or via the cloud. This transition also requires talent proficient in machine learning operations (MLOps), model evaluation and prompt engineering. Organizations should also be prepared to monitor model drift, performance and security in real time.”
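The model drift monitoring Scrivens mentions can be made concrete with a standard statistical check. The sketch below is a minimal, illustrative implementation of the population stability index (PSI), a common way to compare a model's current output distribution against a baseline; the function name and thresholds in the comments are conventional rules of thumb, not prescriptions from anyone quoted in this article.

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions with the population stability index.

    Rule of thumb often used in practice: PSI below 0.1 suggests the
    distribution is stable, while PSI above 0.25 suggests significant drift
    worth investigating.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant data

    def bucket_shares(values):
        # Assign each value to a histogram bin, then return each bin's share,
        # floored at a tiny epsilon so the log term below is always defined.
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        total = len(values)
        return [max(counts.get(i, 0) / total, 1e-6) for i in range(bins)]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a production setting, `expected` would be scores captured at deployment time and `actual` a recent window of the same metric, with an alert wired to the 0.25 threshold.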
Not all open-source projects are equal either. Some have thriving contributor ecosystems, while others are effectively a single corporate sponsor with a permissive license, notes Brock, and enterprises need to know the difference before they commit.
Building a strategy that works
For those weighing up their options, Scrivens recommends organizations begin with pilot projects that use open source models to test performance, cost and operational requirements. The next step, he says, is building a flexible architecture that supports both proprietary and open source options.
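The "flexible architecture" Scrivens describes usually comes down to putting an abstraction layer between application code and any single model provider. The sketch below is a minimal illustration of that idea in Python; the class and method names are hypothetical, and the local backend is stubbed rather than wired to a real model.

```python
from typing import Protocol

class ChatBackend(Protocol):
    """Interface the application codes against, regardless of provider."""
    def complete(self, prompt: str) -> str: ...

class ProprietaryAPIBackend:
    """Would wrap a vendor SDK; the call is intentionally left as a stub."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the vendor's SDK here")

class LocalOpenModelBackend:
    """Would wrap a self-hosted open-weight model; stubbed for illustration."""
    def complete(self, prompt: str) -> str:
        return f"[local-model] echo: {prompt}"

def answer(backend: ChatBackend, prompt: str) -> str:
    # Application logic depends only on the interface, so swapping a
    # proprietary API for a self-hosted open model (or back) is a
    # configuration change rather than a rewrite.
    return backend.complete(prompt)
```

With this shape, a pilot can route one workflow to an open model while the rest stays on the incumbent API, which is exactly the kind of optionality the hybrid approach depends on.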
According to Watson, the enterprises making the smartest moves right now are treating this as a “constitutional moment”.
“They’re putting governance structures and technical architectures in place that preserve real optionality before the market consolidates further. Sovereign AI capability – the ability to inspect, modify, govern and, if necessary, walk away from your AI systems – is something you either build now or negotiate for later from a position of weakness,” she advises.
“The window is open, but won’t remain so indefinitely,” she concludes.
Keri Allan is a freelancer with 20 years of experience writing about technology and has written for publications including the Guardian, the Sunday Times, CIO, E&T and Arabian Computer News. She specialises in areas including the cloud, IoT, AI, machine learning and digital transformation.