Former Google CEO Eric Schmidt rejects claims AI scaling has peaked
AI doesn't have a scaling problem, despite reports to the contrary
How can the large language models (LLMs) driving the generative AI boom keep getting better? That's the question driving a debate around so-called scaling laws — and former Google CEO Eric Schmidt isn't concerned.
Scaling laws refer to how the accuracy and quality of a deep-learning model improves with size — bigger is better when it comes to the model itself, the amount of data it's fed, and the computing that powers it.
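The "bigger is better" relationship these laws describe is typically modeled as a power law between model size and test loss. The sketch below is illustrative only: the constants are loosely in the range of published fits, but they are not authoritative values, and real scaling laws also factor in data and compute.

```python
# Illustrative power-law scaling: loss falls as parameter count grows,
# but by ever-smaller amounts. Constants are for demonstration only.
def loss(n_params: float, n_critical: float = 8.8e13, alpha: float = 0.076) -> float:
    """Approximate test loss as a function of model parameter count."""
    return (n_critical / n_params) ** alpha

# Bigger models always do better under this law...
assert loss(1e12) < loss(1e9)

# ...but each successive 10x jump in size buys a smaller improvement,
# which is the "diminishing returns" at the heart of the debate.
assert (loss(1e9) - loss(1e10)) > (loss(1e11) - loss(1e12))
```

The second assertion captures why scaling could stall in practice even if the law itself never breaks: each gain costs more than the last.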
However, these scaling laws may have limits. Eventually, LLMs face diminishing returns: improvements become more expensive, or stop being possible altogether, whether because of a limitation in the technology itself or for more practical reasons, such as a shortage of training data.
Reports earlier this month suggested the next models from the major AI developers may reflect this, with smaller leaps forward than with previous releases.
First, a report from The Information suggested OpenAI was concerned about the next version of its model, dubbed Orion, with improvements more limited than in previous releases.
Because of that, the report claimed, the company is leaning on techniques beyond the model itself to improve performance, including post-training tweaks.
A Bloomberg report noted similar challenges at both Google and Anthropic: the next iteration of Google's Gemini model has reportedly failed to meet expectations, while difficulties with the next version of Anthropic's Claude model have reportedly delayed its release.
Is there a wall?
Not everyone agrees that there is a limit to these LLMs, however. Following the reports, OpenAI chief executive Sam Altman posted online insisting "there is no wall."
Altman's succinct argument was expanded on by the former leader of one of OpenAI's rivals, ex-Google CEO Eric Schmidt.
Speaking on the Diary of a CEO podcast, host Steven Bartlett asked Schmidt what worried him about LLMs in the near future, and he began his answer by explaining that he believes these AI models won't face a scaling wall within the next five years.
"In five years, you'll have two or three more turns of the crank of these large language models," Schmidt said.
"These large models are scaling with an ability that is unprecedented; there's no evidence that the scaling laws, as they're called, have begun to stop. They will eventually stop, but we're not there yet,” he added.
"Each one of these cranks looks like it's a factor of two, factor of three, factor of four of capability, so let's just say turning the crank all of these systems get 50 times or 100 times more powerful."
Continued scaling a "fantasy"
Given that ever-increasing power, Schmidt echoed AI safety concerns by discussing how LLMs could plan cyber attacks, create viruses, and exacerbate conflicts.
Others disagree, including long-time AI skeptic and New York University professor Gary Marcus, who wrote on his Substack that AI would hit a wall, and that continued progress at the same rate until reaching Artificial General Intelligence (AGI) was "a fantasy".
"The truth is that scaling is running out, and that truth is, at last coming out," he wrote, adding that venture capitalist Marc Andreesen had publicly said that on a podcast "we're not getting the intelligent improvements at all out of it" — which Marcus interpreted as "basically VC-ese for 'deep learning is hitting a wall'."
That will mean troubling times for companies that have bet big on LLMs, such as OpenAI and Microsoft. Marcus predicted that LLMs are here to stay, but the increasing costs will change the market.
"LLMs such as they are, will become a commodity; price wars will keep revenue low," he wrote.
"Given the cost of chips, profits will be elusive. When everyone realizes this, the financial bubble may burst quickly; even Nvidia might take a hit, when people realize the extent to which its valuation was based on a false premise."
The costs associated with LLMs have also been flagged as a major concern by other industry stakeholders, such as Anthropic CEO Dario Amodei.
In July, Amodei noted that it already costs hundreds of millions of dollars to train models in some instances, and there are some “in training today” that cost closer to $1 billion.
These numbers pale in comparison with what is expected in the coming years, however: Amodei predicted that AI training costs will hit the $10 billion and $100 billion marks over the course of “2025, 2026, maybe 2027”.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
