OpenAI could fast become a money pit for investors

OpenAI CEO Sam Altman speaking at the firm's inaugural developer day in November 2023
(Image credit: Getty Images)

OpenAI has become one of the most recognized names in AI, and the company’s valuation has only grown since the release of ChatGPT nearly one year ago. But it’s clear that the company has a long way to go before it can become self-sufficient, let alone turn a profit.

As one of the first firms to make generative AI models publicly available, OpenAI has enjoyed a year in the sun in which it secured a reported $10 billion investment from Microsoft and released a number of popular generative AI services.

In October, The New York Times reported that OpenAI was in active talks on a deal that could balloon the company’s value to upwards of $80 billion.

Yet for all its success so far, OpenAI continues to run at a loss, even with the billions of dollars Microsoft has poured into the company going toward the hardware used to train LLMs such as GPT-4.

Sam Altman, CEO of OpenAI, has previously complained about the immense cost of running services such as ChatGPT. In the past year, the company has released a number of subscription tiers for its services, such as ChatGPT for Business and access to GPT-4 for ChatGPT Plus subscribers, to limit usage and recoup some of its computing costs.

Building generative AI platforms is costly, and this has been a key factor behind the relatively slow pace of on-prem AI uptake compared to the meteoric rise of AI platforms in the public cloud. Hyperscalers can afford the immense investment needed to stockpile the GPUs and CPUs that large AI workloads demand.

Nvidia’s newest GH200 Grace Hopper chips were designed with tomorrow’s generative AI models in mind, but they will be a costly investment to say the least. Though Nvidia hasn’t disclosed pricing, the chip vastly outperforms Nvidia’s H100 range, which costs $40,000 apiece.

While Nvidia continues to cash in on demand for AI chips, threatening the dominance of AWS in the process, this upward trend in model size isn’t guaranteed to continue. If we’re headed into an era in which trillion-parameter LLMs consistently outperform those we have today, OpenAI will be locked into investment with Nvidia or a competitor in the space such as AMD.

Reports have suggested that OpenAI is investing in its own silicon in an attempt to reduce its dependence on volatile chip supplies, but this is a long-term solution to the problem and would put the firm in an uncertain position against those with decades of experience in the semiconductor sector. 

AI setup is a pain point for businesses looking to take advantage of the technology, with the costs described as “prohibitive” by analysts. It could be that after the boom of the past year, firms look to reassess their stake in AI, and either stick with public AI or double down and invest more heavily in private solutions.

OpenAI could make either work, but in its current shape it would much prefer customer loyalty in the public cloud. Its models are a centerpiece of Azure’s AI infrastructure, and their current appeal helps Microsoft almost as much as Microsoft’s funding plugs the holes in OpenAI’s business model.

Smaller models deployed at the edge may well be the future, and OpenAI would certainly be able to capitalize on selling digestible access to its own models to run in this off-the-shelf manner. But this would also force it to compete more aggressively with the likes of Meta’s Llama 2 and forfeit its significant Azure advantage.

Altman’s ambition of creating artificial general intelligence (AGI), a system that could realistically rival human intelligence, is so far little more than an expensive pipe dream. If he carries on down that path, other firms with substantial hyperscaler backing, such as Anthropic (into which AWS has poured $4 billion), could seize the lead in enterprise AI.

None of this points to certain failure for the firm. OpenAI could, and in all likelihood will, continue to grow in valuation, given Microsoft’s heavy backing and its reliance on the firm’s models for AI assistants such as Microsoft 365 Copilot, Copilot for Windows, and GitHub Copilot Enterprise.

Google DeepMind turned its first profit in 2020, a decade after it was founded.

At the time, the firm’s main expense was staff salaries. But as the AI field has become more focused on intensive training of foundation models, computing and hardware investment has become a more serious drain on finances.

If it’s not careful, OpenAI could quickly become something of a passion project for Microsoft. 

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.