AMD Advancing AI 2025: Racks, openness, and the spectre of Nvidia
Can the chipmaker really step out of the market leader's shadow?


AMD's Advancing AI conference is one of the shortest flagship events I've attended, certainly this year and possibly ever, yet it still managed to be fairly action-packed.
The one-day event centered on CEO Lisa Su's two-hour keynote presentation, in which she was joined on stage by numerous special guests, including a surprise appearance from OpenAI CEO Sam Altman.
I'll return to these guests in a moment, but first let's dig into the news announcements. The biggest was Helios, a double-wide, rack-scale offering from AMD that will be available next year, and which Su described as a "game changer" for the company.
To say there was a lot of pointing to graphs where AMD's devices either match or exceed Nvidia's comparable offerings would be an extraordinary understatement. In Su's keynote we got to see a slide claiming that the Instinct MI355X chip – also announced at the event – is 1.2x faster at DeepSeek R1 throughput and 1.3x faster at Llama 3.1 405B throughput than Nvidia's B200, and offers the same throughput as the Nvidia GB200 on Llama 3.1 405B. This was followed by a claim that the MI355X offers 'up to' 40% more tokens per dollar than the B200.
Similar slides followed showing parity or slight gains in LLM training and fine-tuning, while in the press conference we were treated to further slides claiming greater memory capacity, memory bandwidth, and peak performance than the Nvidia GB200 or B200.
As we're not able to independently verify these claims, I'm not going to repeat them all verbatim, but needless to say there was a strong "better than Nvidia, actually" theme.
In many ways this is to be expected. Like it or not, AMD and Intel are both playing catch-up with Nvidia, a company led by one of the most charismatic CEOs in the industry.
As Forrest Norrod, EVP and GM of AMD's data center solutions business unit, told a press round table: "Nvidia has done a great job of convincing the industry that we're living in the future, and so that whatever... they announce is actually here today, so we have to get a little bit more aggressive."
Norrod also admitted that "Nvidia is the de facto standard right now," but added that Helios will be a "good solution" for hyperscalers, tier-two cloud providers, and neoclouds alike. He was also at pains to point out that Helios is "not the only thing [AMD is] doing", however, and while he didn't expand on this, previous slides had shown plans for a future rack-scale architecture going into 2027, as well as an annual chip release cadence running up to the Instinct MI500X range. Watch this space, I suppose.
Embracing the open ecosystem
The final key talking point for Su and other members of the AMD leadership was the importance of openness. Su said during her keynote that AMD is "investing heavily in an open, developer-first ecosystem," adding that the company is "really supporting every major framework, every library, and every model to bring the industry together in open standards so that everyone can contribute to AI innovation".
Su drew a parallel between the self-declared open approach that AMD is taking and Linux surpassing Unix as the data center operating system of choice "when global collaboration was unlocked". She also pointed to the open source Android operating system as one of the drivers behind the growth in smartphone ownership. Indeed, the latest figures from Statista at the time of writing show Android with a 72.7% market share to iOS's 26.9%, despite the iPhone being the single most popular device.
As mentioned above, many of the graphs in Su's presentation leaned on the throughput performance of Llama – Meta's ostensibly open source AI platform – and Vamsi Boppana, SVP of AI at AMD, pointed out that 1.8 million Hugging Face models now run on the company's ROCm platform.
What's next for AMD
It's hard to say when the next AMD Advancing AI event will take place – companies this size typically run their conferences annually, but only nine months passed between the 2024 event and this most recent one. Nevertheless, the company has given a clear indication of its plans for the next few years at least.
Su committed AMD to an annual release cycle for its Instinct GPU range, a cadence that is unlikely to be broken. We can expect Helios to ship in 2026 and a second rack-scale product to follow in 2027. What the full specs of those upcoming products will be – and how much interest there is in Helios in particular – remains to be seen. One thing is certain, though: Nvidia may be the "de facto standard", but AMD is spoiling to take its crown.

Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers, while continuing to specialize in enterprise IT infrastructure, and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.