DeepSeek’s R1 model training costs pour cold water on big tech’s massive AI spending
Chinese AI developer DeepSeek says it created an industry-leading model on a pittance
In mid-2024, Anthropic CEO Dario Amodei projected AI training costs to soar to such an extent that building a new model could cost upwards of $100 billion.
Amodei’s lofty claims appear to have been shot down by a new research paper from DeepSeek. In the paper, published in the academic journal Nature, the Chinese AI developer claims it spent a comparatively paltry sum training its flagship R1 model.
All told, training costs amounted to $294,000, with the company using 512 Nvidia H800 chips to build the model that had US companies sweating earlier this year.
It’s worth noting that these costs come in addition to around $6 million spent by the firm to create the base LLM R1 is built on. Regardless, the results are impressive given the far higher training costs associated with competing models.
So what makes DeepSeek R1 tick?
Under the hood of DeepSeek R1
DeepSeek R1 is a ‘reasoning model’, meaning it’s designed specifically to excel at tasks such as mathematics and coding. It’s also an ‘open weight’ model, so it’s freely available for anyone to download.
As ITPro reported earlier this year, the decision to offer R1 as an open weight model was welcomed by industry stakeholders, and also gave competitors a vital insight into how the model performed compared to others out there on the market.
R1 is among the most popular models currently available on the AI community platform Hugging Face, having been downloaded over 10 million times.
AI reasoning models are purposefully trained on real-world data, enabling them to “learn” how to solve specific problems. It’s a costly, long-winded process, but it has been a key focus for AI providers over the last 18 months as they look to offer users more intuitive tools.
According to this latest research paper, DeepSeek was able to reduce training costs through a carrot-and-stick approach to reinforcement learning. Researchers essentially incentivized and rewarded the model for producing correct answers to user queries.
In a blog post dissecting the paper, researchers at Carnegie Mellon University noted that this was akin to a child playing video games, constantly learning new ways to progress.
“As the child navigates their avatar through the game world, they learn through trial and error that some actions (such as collecting gold coins) earn points, whereas others (such as running into enemies) set their score back to zero,” researchers explained.
“In a similar vein, DeepSeek-R1 was awarded a high score when it answered questions correctly and a low score when it gave wrong answers.”
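The scoring mechanism the researchers describe can be illustrated with a toy sketch. This is not DeepSeek’s actual training code; it’s a minimal bandit-style learner (hypothetical question, candidates, and learning rate) that receives a positive reward for a correct answer and a negative one otherwise, nudging its preferences toward correct behaviour over repeated trials:

```python
import random

# Toy illustration of reward-based learning, not DeepSeek's method:
# the "model" samples an answer, is scored on correctness, and its
# preference for that answer is adjusted by the reward.

QUESTION = "2 + 2"
CANDIDATES = ["3", "4", "5"]  # possible answers the toy model can emit
CORRECT = "4"

def train(steps=2000, lr=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Preference score per candidate answer; starts uniform at zero.
    prefs = {a: 0.0 for a in CANDIDATES}
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the best-scoring answer,
        # occasionally explore a random one.
        if rng.random() < epsilon:
            answer = rng.choice(CANDIDATES)
        else:
            answer = max(prefs, key=prefs.get)
        # Reward signal: high score for a correct answer, low for a wrong one.
        reward = 1.0 if answer == CORRECT else -1.0
        prefs[answer] += lr * reward
    return prefs

prefs = train()
best_answer = max(prefs, key=prefs.get)
```

After training, `best_answer` settles on `"4"`: wrong answers accumulate negative preference and stop being selected, which is the trial-and-error dynamic the Carnegie Mellon researchers compare to a child learning a video game.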
So much for huge training costs
DeepSeek has previously hinted that its training processes were highly cost efficient. A preprint release of the study, published in January, pointed toward far lower costs compared to US competitors.
Some Silicon Valley figures questioned the veracity of DeepSeek’s claims at the time, but with R1 becoming the first LLM to undergo formal peer review, the company appears vindicated.
AI training has long been lamented as a costly, compute-intensive process for big tech providers. Amodei isn’t alone in highlighting the huge costs associated with this, either.
In 2023, OpenAI CEO Sam Altman hinted that foundation model training had cost upwards of $100 million.
He appeared to confirm those figures at a 2024 MIT event, saying costs were “more than that”, per reports from Wired.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.