HPE unveils Mod Pod AI ‘data center-in-a-box’ at Nvidia GTC
Water-cooled containers will improve access to HPC and AI hardware, the company claimed
Recent research carried out by ITPro showed that investment in AI is a key priority for businesses globally. However, following through on that ambition remains a challenge that looms large for many organizations.
At Nvidia’s GTC conference, Hewlett Packard Enterprise (HPE) unveiled Mod Pod, a product intended to answer at least some elements of this conundrum.
Mod Pod is a liquid-cooled modular data center optimized for AI and HPC workloads. It’s built into a container and can, the company says, be easily deployed on a business’s premises without the need to completely overhaul its existing data center.
“A lot of data center space that does exist, does not have the capabilities for liquid cooling, which means you don't have the density in your racks, and you also don't have the PUE (power usage effectiveness),” said HPE CTO Fidelma Russo. “So [Mod Pod] gives you a lower total cost of ownership.”
“We have examples of our customers siting these in parking lots where they used to have employees, but with the work from home from COVID, they have the space,” she added. “So again it's easy, [you’ve] just got to level some space and you can have a data center in your backyard up and running in months.”
Mod Pod comes in 6m and 12m configurations, and supports up to 1.5MW per unit with a PUE of under 1.1. While HPE is keen to highlight its liquid-cooling credentials, the Adaptive Cascade Cooling technology can be adapted to use either air or liquid cooling depending on customer need and preference.
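PUE, the efficiency figure Russo and HPE cite, is simply a facility's total power draw divided by the power delivered to its IT equipment. A minimal sketch of what a sub-1.1 PUE implies (the kilowatt figures below are illustrative assumptions, not HPE's published numbers):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt goes to compute; anything above
    that is cooling, power conversion, and other overhead.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1,500 kW of IT load with ~150 kW of cooling
# and overhead gives a PUE of 1.1.
print(round(pue(total_facility_kw=1650, it_equipment_kw=1500), 2))  # 1.1
```

At 1.5MW per unit, shaving PUE from a typical air-cooled 1.5 down toward 1.1 translates directly into hundreds of kilowatts of avoided overhead, which is the total-cost-of-ownership argument Russo is making.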
HPE expands Private Cloud capabilities
In addition to Mod Pod, HPE also announced several new features in Private Cloud AI, the flagship – and, thus far, only – product of Nvidia AI Computing by HPE, its partnership with the chipmaker.
The first is support for the newly announced Nvidia AI Data Platform, which allows Nvidia-Certified Storage providers, of which HPE is one, to build AI query agents into their hardware using Nvidia AI Enterprise software, such as NIM and Llama Nemotron models, as well as AI-Q Blueprint.
It also revealed a new developer system that adds an “instant AI development environment”, powered by Nvidia accelerated computing, HPE Data Fabric, and support for rapid deployment of Nvidia blueprints.
There were also a number of server announcements, including the HPE ProLiant Compute DL380a Gen12 and HPE ProLiant Compute DL384b Gen12, which feature Nvidia RTX Pro 6000 Blackwell GPUs and Nvidia GB200 Grace Blackwell NVL4 Superchips respectively.
HPE ProLiant Compute XD servers, meanwhile, will support the Nvidia HGX B300 platform, launched at GTC. The company says this technology will allow customers to “train, fine-tune and run large AI models for the most complex workloads, including agentic AI and test-time reasoning inference”.
Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers, while continuing to specialize in enterprise IT infrastructure, and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.