Google boasts that a single Gemini prompt uses roughly the same energy as a basic search – but that doesn't paint the full picture
Research by Google fails to highlight the broader impact of AI infrastructure and training


How much energy does it take for Gemini to respond to a user prompt? Google says it’s less than you might think.
One of the challenges of generative AI is how much energy is required to power the data centers needed to run a model to answer a question, generate some text, and so on.
Unpicking that isn't easy, but early reports suggested that using OpenAI's ChatGPT for search was ten times more energy intensive than using plain old Google, though CEO Sam Altman has since said an average query uses about 0.34 watt-hours (Wh) of energy.
That said, the question is easier for Google's researchers to answer than for their academic peers, given their access to the company's own systems. Researchers at the tech giant looked at the full stack of infrastructure that underpins its Gemini AI range, detailing their findings in a recent technical paper.
The wide-ranging study examined “active AI accelerator power, host system energy, idle machine capacity, and data center energy overhead".
All told, the average text prompt in Gemini Apps uses 0.24 watt-hours of energy – a figure the company is keen to point out is "substantially lower than many public estimates".
Elsewhere, a single prompt emits 0.03 grams of carbon dioxide equivalent and consumes 0.26 milliliters of water, about five drops.
This, researchers said in a separate blog post, is equivalent to “watching TV for less than nine seconds".
So how does Gemini stack up to a traditional Google search? According to MIT Technology Review, 58% of the energy demand of running a prompt comes from AI chips, with the CPU and memory of the host server accounting for another 25%.
A further 8% comes from cooling, power conversion, and other data center overheads, with the remaining 10% attributed to backup machines kept idle.
In comparison, running a Google search uses an average of 0.0003 kWh, or 0.3 Wh, which suggests that running a Gemini text prompt is on par with – or even slightly more energy efficient than – a basic search, and roughly in line with OpenAI's figures.
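For readers who want to sanity-check those comparisons, the short sketch below runs the arithmetic using the figures cited in this piece. The roughly 100 W television power draw and the 0.34 Wh ChatGPT figure are assumptions introduced here for illustration, not numbers from Google's paper.

```python
# Back-of-the-envelope check of the per-query energy figures cited above.
# Assumed values (not from Google's paper): a television drawing roughly
# 100 W, and the ~0.34 Wh-per-query figure attributed to Sam Altman.

GEMINI_PROMPT_WH = 0.24      # Google: average Gemini Apps text prompt
GOOGLE_SEARCH_KWH = 0.0003   # Google's long-standing per-search estimate
CHATGPT_QUERY_WH = 0.34      # Altman's stated average per query (assumption)
TV_POWER_W = 100             # assumed TV power draw (assumption)

google_search_wh = GOOGLE_SEARCH_KWH * 1000        # 0.0003 kWh -> 0.3 Wh
tv_seconds = GEMINI_PROMPT_WH / TV_POWER_W * 3600  # Wh / W = hours -> seconds

print(f"Google search:      {google_search_wh:.2f} Wh per query")
print(f"Gemini text prompt: {GEMINI_PROMPT_WH:.2f} Wh "
      f"({GEMINI_PROMPT_WH / google_search_wh:.0%} of a search)")
print(f"ChatGPT query:      {CHATGPT_QUERY_WH:.2f} Wh")
print(f"{GEMINI_PROMPT_WH} Wh is roughly {tv_seconds:.1f} seconds of a {TV_POWER_W} W TV")
```

On those numbers, Google's "watching TV for less than nine seconds" framing and the rough parity with a standard search both hold up.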
Google study comes with a catch
It's worth noting these figures only cover using an AI model. Training AI models also consumes huge amounts of energy, as does building and running the data centers needed to host AI models and assorted other workloads.
That's why Google's emissions are up 51% over the last five years: the company posted a 22% rise in Scope 3 emissions, specifically highlighting data center capacity delivery as a major hurdle to curbing them.
The tech giant isn't alone in assessing the environmental impact of its AI models, either. Last month, Mistral unveiled its own sustainability auditing tool that included insights on model training as well as 18 months of use.
The French AI company found that the training phase alone consumed as much water as 5,100 people use in a year and emitted 20.4 kilotons of carbon dioxide.
By leaving out the training phase, Google isn't including a huge part of the broader impact of these systems. Plus, as Google admitted, the data hasn't been independently verified and could change as new models are added.
Beyond that, it's also notable that the figure specifically refers to a text prompt in the Gemini app, not more energy-intensive queries such as deep research using more advanced models or image generation.
Working to slash AI's impact on the environment
Google said that improvements to hardware and software had helped reduce the energy required by Gemini systems, pointing to improvements in the Transformer model architecture, more efficient algorithms, and custom-built hardware.
"We continuously refine the algorithms that power our models with methods like Accurate Quantized Training (AQT) to maximize efficiency and reduce energy consumption for serving, without compromising response quality," the post said, adding:
"Our latest-generation TPU, Ironwood, is 30x more energy-efficient than our first publicly-available TPU and far more power-efficient than general-purpose CPUs for inference."
Because of such efforts, Google said the energy demand of the average Gemini Apps text prompt fell 33-fold, and its carbon footprint 44-fold, over the past year.
"While the impact of a single prompt is low compared to many daily activities, the immense scale of user adoption globally means that continued focus on reducing the environmental cost of AI is imperative," the report noted, calling for other AI companies to follow suit with similar data sets.
"We advocate for the widespread adoption of this or similarly comprehensive measurement frameworks to ensure that as the capabilities of AI advance, their environmental efficiency does as well."
The figures may be better than expected, but the heavy use of these AI tools, not to mention the environmental cost of building data centers and training models, still represents a threat to power grids.
Data center energy demands are growing rapidly worldwide, with the International Energy Agency warning that AI will use the same amount of power annually as Japan by 2030.
Behavioral shifts on the part of consumers – and more information from providers – could be key to reducing the broader energy impact of AI.
Earlier this year, OpenAI CEO Sam Altman revealed that users are needlessly wasting compute power just by being polite to the chatbot.
In a thread on X, Altman revealed that users saying ‘please’ and ‘thank you’ to ChatGPT costs the company millions of dollars a year.
Altman added that it’s “tens of millions of dollars well spent”.
MORE FROM ITPRO
- Data centers are growing in size and number as AI prompts widespread global expansion
- Gas-powered data centers: what's behind the boom?
- Google is worried about AI power failures – so it wants to train electricians
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.