Former Google CEO Eric Schmidt rejects claims AI scaling has peaked
AI doesn't have a scaling problem, despite reports to the contrary
How can the large language models (LLMs) driving the generative AI boom keep getting better? That's the question driving a debate around so-called scaling laws — and former Google CEO Eric Schmidt isn't concerned.
Scaling laws refer to how the accuracy and quality of a deep-learning model improves with size — bigger is better when it comes to the model itself, the amount of data it's fed, and the computing that powers it.
However, these scaling laws may have limits. Eventually, LLMs face diminishing returns: improvements become more expensive, or stop being possible altogether, perhaps reflecting a limitation in the technology itself, or more practical constraints such as a shortage of training data.
Reports earlier this month suggested the next models from the major AI developers may reflect this, with smaller leaps forward than with previous releases.
First, a report from The Information suggested OpenAI was concerned about the next version of its model, dubbed Orion, with improvements more limited than in previous releases.
Because of that, the report claimed the company was using additional techniques for better performance beyond the model itself, including post-training tweaks.
Similar challenges were noted in a report by Bloomberg at both Google and Anthropic, with the next iteration of the former's Gemini model failing to meet expectations, while Anthropic’s challenges with the next version of its Claude model have reportedly led to delays to its release.
Is there a wall?
Not everyone agrees that there is a limit to these LLMs, however. Following the reports, OpenAI chief executive Sam Altman posted online insisting “there is no wall."
Altman's succinct argument was expanded on by the former leader of one of OpenAI's rivals, ex-Google CEO Eric Schmidt.
Speaking on the Diary of a CEO podcast, host Steven Bartlett asked Schmidt what worried him about LLMs in the near future, and he began his answer by explaining that he believes these AI models won't face a scaling wall within the next five years.
"In five years, you'll have two or three more turns of the crank of these large language models," Schmidt said.
"These large models are scaling with an ability that is unprecedented; there's no evidence that the scaling laws, as they're called, have begun to stop. They will eventually stop, but we're not there yet,” he added.
"Each one of these cranks looks like it's a factor of two, factor of three, factor of four of capability, so let's just say turning the crank all of these systems get 50 times or 100 times more powerful."
Continued scaling a "fantasy"
Given that ever increasing power, Schmidt echoed AI safety concerns by discussing how LLMs could plan cyber attacks, create viruses, and exacerbate conflicts.
Others disagree, including long-time AI skeptic Gary Marcus, a New York University professor, who wrote on his Substack that AI would hit a wall and that continued progress at the same rate until reaching Artificial General Intelligence (AGI) was "a fantasy".
"The truth is that scaling is running out, and that truth is, at last, coming out," he wrote, adding that venture capitalist Marc Andreessen had said publicly on a podcast that "we're not getting the intelligent improvements at all out of it", which Marcus interpreted as "basically VC-ese for 'deep learning is hitting a wall'."
That will mean troubling times for companies that have bet big on LLMs, such as OpenAI and Microsoft. Marcus predicted that LLMs are here to stay, but the increasing costs will change the market.
"LLMs such as they are, will become a commodity; price wars will keep revenue low," he wrote.
"Given the cost of chips, profits will be elusive. When everyone realizes this, the financial bubble may burst quickly; even Nvidia might take a hit, when people realize the extent to which its valuation was based on a false premise."
The costs associated with LLMs have also been flagged as a major concern by other industry stakeholders, such as Anthropic CEO Dario Amodei.
In July, Amodei noted that it already costs hundreds of millions of dollars to train models in some instances, and there are some “in training today” that cost closer to $1 billion.
These numbers pale in comparison to the coming years, however, with Amodei predicting that AI training costs will hit the $10 billion and $100 billion mark over the course of “2025, 2026, maybe 2027”.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
