Why OpenAI is fighting a losing battle with in-house chips
OpenAI could be set for a showdown with the likes of Nvidia with its in-house chips move


A move from OpenAI to develop in-house chips could present significant logistical and financial challenges for the generative AI firm, according to industry experts.
The firm has reportedly been assessing its options when it comes to chip supplies as part of a combined approach to support expansion plans and alleviate the pressure of global chip supply issues.
Reuters reported that the firm, known for the chatbot ChatGPT as well as generative AI models such as GPT-4 and DALL-E, has gone as far as considering an acquisition of an existing semiconductor company to jump-start production efforts.
Alex White, GM EMEA at AI firm SambaNova Systems, said that while OpenAI could benefit from decreased reliance on Nvidia if it had its own chips, manufacturing is an uphill process that takes many years.
“There’s a clear advantage to owning the whole stack from hardware to software - including the models that run on top. But designing and manufacturing chips doesn’t happen overnight; it requires huge levels of expertise and resources that are in increasingly short supply.
“It took OpenAI over five years to develop GPT-4, which may be too long to wait for customers. I wouldn’t be surprised if hardware took a similar amount of time.”
OpenAI’s operational costs are extremely high, with CEO Sam Altman having once described them as “eye watering”. Although it is a leader in the rapidly growing generative AI market, it has also been forced to bear the rising costs of building generative AI platforms.
The firm has released a number of premium tiers over the past year, including ChatGPT Plus and its business API, to generate revenue. Running inference on in-house chips could slash those costs substantially and open up AI chip revenue down the line.
“OpenAI is trying to reinvent itself as an enterprise business, and that requires the ability to be able to fine-tune or build bespoke large language models - and we all know that training models require vastly more compute power than running the models,” White added.
A number of big tech firms use Nvidia’s chips, which are among the most powerful on the market and have acquired a reputation as reliable hardware backing for AI systems.
In recent months, Nvidia has partnered with Dell on its managed AI platform, worked to integrate its NeMo framework with Snowflake Data Cloud, and will collaborate with VMware on its private AI offering.
Microsoft used an array of Nvidia’s H100 chips to help OpenAI train its most recent AI models, and in 2022 both firms announced plans to create one of the world’s most powerful supercomputers, specifically for AI training.
OpenAI is not the first big name in AI to mull a move to in-house silicon. In May, a Bloomberg report stated that Microsoft was looking into producing its own chip, codenamed ‘Athena’, to cut operational costs.
White noted that OpenAI could benefit from similar savings through its own chips, but that the road to getting there would be far from easy.
Rival chip designers have laid out plans to compete with Nvidia in the coming years, with Intel targeting AI hardware dominance by 2025 and Google claiming its AI chips are faster and more energy efficient.
ITPro has approached OpenAI for comment.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.