Nvidia to power Argonne National Laboratory's largest supercomputer
Polaris will feature Nvidia’s A100 Tensor Core GPUs
Nvidia has announced that its accelerated computing platform will power Argonne National Laboratory's largest supercomputer, Polaris.
The HPE-built Polaris will have 560 nodes, each equipped with four Nvidia A100 Tensor Core GPUs. Up to 20 times faster than its predecessor, the A100 can be partitioned into as many as seven GPU instances on demand.
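That on-demand partitioning refers to Nvidia's Multi-Instance GPU (MIG) feature. The sketch below, a hypothetical illustration rather than anything specific to Polaris, shows how an administrator might split a single 40GB A100 into seven instances using Nvidia's standard nvidia-smi tooling; the exact profile IDs depend on the GPU model and driver version.

```python
import subprocess

def run(cmd):
    """Run a command and print whatever it writes to stdout."""
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

# Enable MIG mode on GPU 0 (requires admin privileges and a GPU reset).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles this device supports.
run(["nvidia-smi", "mig", "-lgip"])

# Create seven 1g.5gb GPU instances (profile 19 on a 40GB A100),
# along with their default compute instances (-C).
run(["nvidia-smi", "mig", "-cgi", "19,19,19,19,19,19,19", "-C"])
```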
With 2,240 Nvidia A100 Tensor Core GPUs in total, Polaris can reach nearly 1.4 exaflops of theoretical artificial intelligence (AI) performance and approximately 44 petaflops of peak double-precision performance. By pairing simulation with machine learning, the supercomputer can tackle some of the most demanding data-intensive AI workloads.
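Those headline figures line up with a back-of-the-envelope check, assuming Nvidia's published per-A100 peaks of roughly 19.5 teraflops for FP64 Tensor Core work and 624 teraflops for FP16/BF16 Tensor Core work with structured sparsity:

```python
# Rough sanity check of the quoted Polaris totals from per-GPU peaks.
nodes, gpus_per_node = 560, 4
total_gpus = nodes * gpus_per_node              # 2,240 A100s

fp64_pflops = total_gpus * 19.5 / 1_000         # ~43.7 petaflops peak FP64
ai_eflops = total_gpus * 624 / 1_000_000        # ~1.4 exaflops "AI" performance

print(total_gpus, round(fp64_pflops, 1), round(ai_eflops, 2))
# 2240 43.7 1.4
```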
According to the US Department of Energy, Polaris will supercharge Argonne Leadership Computing Facility’s (ALCF’s) scientific research and algorithms.
In addition to government agencies, Polaris can be accessed by academic researchers and industry experts, thanks to ALCF's peer-reviewed application and allocation programs.
“Polaris is a powerful platform that will allow our users to enter the era of exascale AI,” stated Michael Papka, director of ALCF.
Papka added, “Harnessing the huge number of Nvidia A100 GPUs will have an immediate impact on our data-intensive and AI HPC workloads, allowing Polaris to tackle some of the world’s most complex scientific problems.”
Finally, Polaris will give researchers a platform to prepare their workloads for Aurora, Argonne's upcoming exascale system.
“The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society,” said Ian Buck, vice president and general manager of accelerated computing at Nvidia.
“Nvidia’s GPU-accelerated computing platform provides pioneers like the ALCF breakthrough performance for next-generation supercomputers such as Polaris that let researchers push the boundaries of scientific exploration.”