AI chip revenue to reach $53 billion in 2023, Gartner predicts

Graphic of a CPU on a multi-coloured computerised motherboard
(Image credit: Getty Images)

Chips specially made for AI work will generate $53.4 billion in revenue in 2023, according to new research from Gartner.

That 20.9% growth over 2022 reflects a rapidly expanding market led by some of the world's largest hardware companies, including Nvidia and Intel.

According to Gartner’s projections, AI chip revenue will grow a further 25.6% in 2024 to $67.1 billion, and top $119.4 billion by 2027.
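
Those headline figures are easy to sanity-check. The short Python sketch below (the variable names and the implied-growth calculation are my own illustration, not Gartner's) confirms that the 2024 forecast follows from the 2023 total and the stated 25.6% growth, and works out the compound annual growth rate implied by the jump to $119.4 billion in 2027.

    # Sanity check on Gartner's forecast figures (illustrative calculation, not Gartner's own)
    revenue_2023 = 53.4   # $bn, Gartner's 2023 forecast
    growth_2024 = 0.256   # forecast growth rate for 2024
    revenue_2024 = 67.1   # $bn, Gartner's 2024 forecast
    revenue_2027 = 119.4  # $bn, Gartner's 2027 forecast

    # 53.4 * 1.256 is roughly 67.1, so the 2024 figure matches the stated growth rate
    print(f"Implied 2024 revenue: ${revenue_2023 * (1 + growth_2024):.1f}bn")

    # Compound annual growth rate needed to reach $119.4bn by 2027 from the 2024 base
    cagr = (revenue_2027 / revenue_2024) ** (1 / 3) - 1
    print(f"Implied 2024-2027 CAGR: {cagr:.1%}")  # roughly 21%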

While publicly accessible generative AI models such as ChatGPT have captured the attention of consumers, enterprises have sought more specialized hardware to meet their specific use cases efficiently and at the lowest possible cost.

“The developments in generative AI and the increasing use of a wide range of AI-based applications in data centers, edge infrastructure, and endpoint devices require the deployment of high-performance graphics processing units (GPUs) and optimized semiconductor devices. This is driving the production and deployment of AI chips,” said Alan Priestley, VP analyst at Gartner.

“For many organizations, large-scale deployments of custom AI chips will replace the current predominant chip architecture – discrete GPUs – for a wide range of AI-based workloads, especially those based on generative AI techniques,” he added.

The state of AI hardware

Big tech has flocked to Nvidia, as the firm boasts some of the most powerful dedicated AI hardware on the market as well as a range of cloud architectures tailored for AI models. 

The firm recently announced its next-generation GH200 Grace Hopper chip, which can be doubled up to form a supercomputer configuration called the DGX GH200. Nvidia says this will tackle future AI workloads such as trillion-parameter large language models (LLMs).

VMware announced a new partnership with Nvidia at its VMware Explore conference on 22 August. The firms will collaborate on VMware Private AI Foundation from 2024 to help customers train private AI models in the cloud.

To make this possible, Nvidia has allocated dedicated AI hardware to around 100 servers built in partnership with Dell, Lenovo, and Hewlett Packard Enterprise (HPE), each packed with systems to accelerate LLM training.

The backbone of these servers is new Nvidia hardware such as the L40S GPU, which the company claimed will train LLMs 1.7x faster than its A100 chip. The A100 was used to train the base model for ChatGPT, GPT-3.5.


Chip powerhouse Intel is also keen to carve out its share of the AI market and has publicly committed to AI dominance by 2025.

The firm laid out a roadmap for AI chips in March 2023, encompassing its central processing units (CPUs), GPUs, and dedicated AI architecture. 

This includes the Gaudi 2 processor, which Intel claimed can run deep learning inference workloads twice as fast as the competition, and next-gen Xeon CPUs such as ‘Sapphire Rapids’, which Intel says delivers a tenfold performance boost over previous CPU generations.

In all, Intel aims to release 15 field-programmable gate arrays (FPGAs), chips that can be heavily customized for enterprise AI inference, in 2023.

Google has also thrown its hat into the ring with its tensor processing unit (TPU) AI chips, comparable in performance to Nvidia’s L40S. The firm claims these are greener than competitors’ offerings, with the TPU using between 1.3 and 1.9 times less energy than Nvidia’s A100 on average.

Google used TPUs in a supercomputer cluster to train its own LLMs, PaLM and PaLM 2.

While overall spend on AI has increased, Gartner research from July found that to date AI has not had a significant impact on IT spending. The 4.3% rise in IT spending expected across the year was mainly driven by software and IT services, according to the report.

Gartner distinguished VP analyst John-David Lovelock told ITPro that AI will likely be added to software without a price increase and that AI spending will be hard to track at the end-user level, as AI will use all forms of technology as a channel to market.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.