How HPE plans to combat generative AI’s 'dirty secret'

A sign saying "announcing HPE GreenLake for Large Language Models
(Image credit: Jane McCallion/Future)

In 2020, the academic journal Science estimated that, in 2018, data centers accounted for approximately 1% of global electricity consumption – in the region of 205 terawatt-hours (TWh).

While advances in energy efficiency have meant the share of global electricity consumption attributed to data centers has stayed roughly steady since 2011, this is still a colossal energy draw.

However, given that most of the world’s energy (80%, according to the US Environmental and Energy Study Institute) is generated from fossil fuel sources such as coal and gas, data centers are a significant contributor to global carbon emissions.

With the explosion of complex artificial intelligence (AI) technology such as generative AI and large language models, the volume of carbon emissions generated by data centers is in very real danger of rising rapidly.

Not your average workload

These advanced AI systems require heavy-duty hardware to run on, whether top-spec traditional servers or, for some of the most intense workloads, specialist infrastructure such as supercomputers.

These are hungry beasts, drawing a significant amount of energy when carrying out intense workloads. A supercomputer, running several processes in parallel, can use over 10 megawatts of electricity, the equivalent of tens of thousands of households.
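To put the household comparison in context, here is a minimal back-of-the-envelope sketch. The 10 megawatt figure is the one used above; the average household consumption values (roughly 10,500 kWh a year for a US home, 3,700 kWh for a European one) are broad assumptions for illustration rather than figures from HPE.

```python
# Back-of-the-envelope check: how many households does a supercomputer
# drawing 10 MW continuously correspond to? Household figures below are
# rough assumptions, not numbers from the article.
SUPERCOMPUTER_MW = 10
HOURS_PER_YEAR = 24 * 365

# Approximate average household electricity use (kWh per year)
HOUSEHOLDS_KWH_PER_YEAR = {
    "US-style household": 10_500,
    "European-style household": 3_700,
}

supercomputer_kwh = SUPERCOMPUTER_MW * 1_000 * HOURS_PER_YEAR  # MW -> kWh/yr

for label, kwh in HOUSEHOLDS_KWH_PER_YEAR.items():
    print(f"{label}: ~{supercomputer_kwh / kwh:,.0f} homes")
# Prints roughly 8,300 US-style and 23,700 European-style households
```

On those assumptions, a 10 megawatt system sits somewhere between around 8,000 and 24,000 homes, which is broadly consistent with the comparison above.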

It’s not just the energy required to run the infrastructure that needs to be taken into account, either, but also other elements – notably cooling.

With this technology resurgent, is it possible to balance the need for progress against the need for sustainability and environmental protection?

This is a question ITPro put to several executives at HPE Discover 2023, after the firm revealed its big high performance computing (HPC)-powered AI play, GreenLake for Large Language Models (LLMs).

Sustainability is a core principle

The company clearly isn’t blind to this problem, with almost everyone who spoke about GreenLake for LLMs talking up the green credentials of the facility housing it – it’s carbon neutral and runs on almost 100% renewable energy.

Asked when HPE started thinking about the importance of sustainability in the project, CEO Antonio Neri said it was one of the organization’s “core principles” and forms part of its environmental, social, and governance (ESG) goals.

“We laid out a very ambitious agenda to achieve net zero by 2040,” Neri said. Nevertheless, he recognized the problems inherent in HPC and AI’s energy demands.

“Remember crypto? Crypto was a massive consumption of energy everywhere. And this is not that different in many ways,” he said. “If you think about the Frontier system for a moment … just one rack of GPUs and accelerators and everything we have [put] into that traditional data center, right, 19in, actually consumes 450 kilowatts. That system has 75 racks just for the compute.”

Even though Frontier uses water cooling, which is much more efficient than air cooling, the pump associated with that system also requires power – about 35 megawatts, according to Neri.
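Taking those quoted figures at face value, a quick arithmetic sketch gives a sense of the scale involved. The rack count and per-rack draw below are simply the numbers Neri cites; the calculation is an illustration, not an official HPE figure.

```python
# Scale check using the Frontier figures quoted above: 75 compute racks,
# each drawing 450 kilowatts. Simple arithmetic, not additional reporting.
RACKS = 75           # "75 racks just for the compute"
KW_PER_RACK = 450    # per-rack draw, as quoted

compute_mw = RACKS * KW_PER_RACK / 1_000  # convert kW to MW
print(f"Compute draw alone: ~{compute_mw:.2f} MW")  # ~33.75 MW
```

On those numbers, the compute alone draws roughly 34 megawatts, before the power needed for cooling is taken into account.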

“That's why we thought about sustainable AI from the beginning [for GreenLake for LLMs], because if you're going to build these massive AI clouds which are capability clouds, it has to be sustainable by design,” he said.

Wastewater recycling and secondary possibilities

ITPro asked Justin Hotard, executive vice president and general manager of HPC at HPE, about the risk of the AI explosion turning into something environmentally damaging.

“I think this is why we are so fundamentally committed to delivering a carbon neutral service,” he said. “Because there's no question, if you've looked at the compute cycles for HPC or AI and you apply any kind of model out to some of the potential demand for these services both in a commercial sense or in a research sense, there's going to be a significant amount of incremental consumption.

“If that incremental consumption doesn't start off with a sustainable premise, it actually will become a massive problem.”

HPE, Hotard said, is therefore starting with the “fundamental principle” of carbon neutrality for its supercomputing clusters. He also pointed to some facilities it’s involved in that go beyond that goal by reusing wastewater from the cooling system. One such example is the LUMI supercomputer in Kajaani, Finland, which puts the hot wastewater from its cooling system to use heating the nearby municipal area.

Other motivators

While HPE says there’s a lot of interest in energy efficient AI and HPC infrastructure as customers seek to fulfill their own sustainability goals, that’s not the only reason an organization might be looking for this kind of offering. Notably, with the cost of energy on the rise, companies are looking to reduce their energy consumption.

CTO Fidelma Russo told ITPro: “Let's be honest, some people are driven by the carbon footprint and the planet, and some people are driven by energy prices. I think there's dramatically more interest, since the price of energy went up, in sustainability and knowing how much each of your IT assets is using and then aligning that with the power usage equivalency of your colo or your data center and trying to manage those bills.”

“It's not all altruistic,” she continued. “A lot of it is determined by, you know, power consumption and electricity costs, especially in Europe. That's where we see the bulk really leaning into wanting to know what their IT infrastructure is pulling.”
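Russo's point about aligning asset-level consumption with the power usage of a colo or data center is, in effect, a power usage effectiveness (PUE) calculation – total facility energy divided by IT equipment energy. The sketch below is a minimal, hypothetical illustration of that idea; the PUE value, energy figure, and tariff are assumptions for the example, not numbers from HPE or the interview.

```python
# Hypothetical illustration: estimate the facility-level energy and cost
# behind a single IT asset, given the site's power usage effectiveness
# (PUE = total facility energy / IT equipment energy). All values are
# made up for the example.
ASSET_IT_KWH_PER_MONTH = 2_000   # metered draw of one server or rack
SITE_PUE = 1.5                   # assumed colo/data center PUE
PRICE_PER_KWH = 0.30             # assumed electricity tariff

facility_kwh = ASSET_IT_KWH_PER_MONTH * SITE_PUE  # includes cooling and losses
monthly_cost = facility_kwh * PRICE_PER_KWH

print(f"Facility energy attributable to the asset: {facility_kwh:,.0f} kWh")
print(f"Estimated monthly energy cost for that asset: {monthly_cost:,.2f}")
```

Multiplying each asset's metered draw by the site's PUE and the local tariff is one simple way to connect IT-level consumption to the bills Russo describes.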

Whatever the motivation, the need to balance the use of HPC and growth of AI is clear. The genie can’t be put back in the bottle, and nor should it be, so organizations must be proactive in mitigating any potential negative impacts before they take root.

Jane McCallion
Managing Editor

Jane McCallion is ITPro's Managing Editor, specializing in data centers and enterprise IT infrastructure. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers while continuing to specialize in enterprise IT infrastructure and business strategy.

Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.
