Intel targets AI hardware dominance by 2025
The chip giant's diverse range of CPUs, GPUs, and AI accelerators complements its commitment to an open AI ecosystem
Intel has laid out a roadmap for establishing product leadership in the processor market by 2025, alongside a goal of democratising AI under a consolidated range of AI-optimised hardware and software.
Core to its proposition is a diverse range of products including central processing units (CPUs), graphics processing units (GPUs), and dedicated AI architecture alongside open-source software improvements.
Businesses can expect to benefit from fourth-generation ‘Sapphire Rapids’ Xeon CPUs immediately, with the fifth-generation Xeon codenamed ‘Emerald Rapids’ set for a Q4 2023 release. This will be followed in 2024 by two processors known as Granite Rapids and Sierra Forest.
Sapphire Rapids can deliver up to ten times greater performance than previous generations. Internal test results also showed that a 48-core, fourth-generation Xeon delivered four times better performance than a 48-core AMD EPYC for a range of AI imaging and language benchmarks.
With Granite Rapids and Sierra Forest, Intel will address current limitations for AI and high-performance computing workloads such as memory bandwidth, promising 1.5TB/s of memory bandwidth and an 83% increase in peak bandwidth over current generations.
Separately, Intel is also focusing development of GPUs and FPGAs (field-programmable gate arrays) to meet the demands of large language model training, largely through its Intel Max and Gaudi chips.
It stated that Gaudi 2 has demonstrated two times higher deep learning inference and training performance than the most popular GPUs.
Training at this level is key for large language models (LLMs), and demand has surged with the meteoric rise of generative AI models such as ChatGPT.
Around 15 FPGA products will launch this calendar year, adding to Intel’s compute product range for deep learning, artificial intelligence, and other high-performance computing needs.
Over time, Intel intends to draw together its GPU and Gaudi AI accelerator portfolios to allow developers to run software across both architectures.
A plan for an open AI ecosystem
In addition to its achievements and plans for hardware, the firm said it aims to capture and democratise the AI market through software development and collaboration.
With 6.2 million active developers in its community, and 64% of AI developers using Intel tools, its ecosystem already has strong foundations for further AI development.
Intel cited its recent work with Hugging Face, enabling the 176 billion-parameter LLM BLOOMZ through its Gaudi2 architecture. This is a refined version of BLOOM, a text model that can process 46 languages and 13 programming languages, and is also available in a lightweight 7-billion-parameter model.
“For the 176-billion-parameter checkpoint, Gaudi2 is 1.2 times faster than A100 80GB,” wrote Régis Pierrard, machine learning engineer at Hugging Face.
“Smaller checkpoints present interesting results too. Gaudi2 is 3x faster than A100 for BLOOMZ-7B! It is also interesting to note that it manages to benefit from model parallelism whereas A100 is faster on a single device.”
Hugging Face noted that the first-generation Gaudi accelerator also offers a better price proposition than the A100, with a Gaudi AWS instance costing $13 per hour compared with Nvidia’s $30 per hour.
Intel did not provide benchmarks comparing Gaudi against the H100, the successor to the A100 and part of the reason big tech continues to choose Nvidia for AI.
But lining up its own chips against Nvidia’s GPUs, long considered the best on the market, shows Intel is confident that it can deliver and exceed shareholder expectations when it comes to market dominance by 2025.
As part of the same collaboration, fourth-generation Xeon was used to speed up the open source image generation model Stable Diffusion by more than three times.
The company affirmed its commitment to keep contributing upstream software optimisations to frameworks like TensorFlow and PyTorch, as one of the top three contributors to the latter.
To further open the AI ecosystem, Intel is adding more features to oneAPI, its cross-architecture programming model that offers an alternative to Nvidia’s CUDA software layer.
One of these improves access to SYCL, an open source, royalty-free programming model based on C++ that is heavily used to access hardware accelerators.
Intel’s SYCLomatic can be used to migrate CUDA source code automatically, freeing programmers from time constraints that could otherwise lock them into Nvidia’s architecture.
“We believe that the industry will benefit from an open, standardised programming language that everyone can contribute to, collaborate on, and which is not locked into a particular vendor so it can evolve organically based on its community and public requirements,” said Greg Lavender, CTO and GM of the software and advanced technology group at Intel.
“The desire for an open, multi-vendor, multi-architectural alternative to CUDA is not diminishing. Fundamentally, we believe that innovation will flourish the most in an open field, rather than in the shadows of a walled garden.”
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.