Google Cloud Next 2026 is a chance to demonstrate Google’s unique advantages

Across hardware, security, and AI optimization, Google Cloud can use its annual event to market itself as the best all-round choice for enterprise AI

The Google Cloud Next banner on site at the conference floor of the Mandalay Bay, Las Vegas.
(Image credit: Future)

Google Cloud Next 2026 is just days away, and by the end of the week IT leaders will have a huge list of new products and offerings across the hyperscaler’s cloud, AI, and security portfolios to pore over.

This year’s event finds Google Cloud in a strong position. In Alphabet’s Q4 earnings, it reported a 48% year-on-year increase in Google Cloud revenue, driven primarily by demand for its AI platform and core cloud products.

Much of this spending to date has come from enterprises embarking on AI adoption and looking to use embedded Gemini features in their workplace tools. The value proposition going forward centers on far more radical change, with AI agents automating more roles and delivering firm return on investment for IT leaders.

Businesses have had a year of hearing about the changes agents can usher in, and the value they can deliver. If they’re sold on that vision, they need to know which platforms deserve their investment now – which makes this year’s event especially pivotal for Google Cloud.

The title of this year’s opening keynote is particularly vague: ‘The agentic cloud’. Needless to say, AI agents will be a key focus, as at every major tech event right now. But Google Cloud has a unique opportunity here to explain why it’s the best vendor for a truly AI-native cloud, and how its agents can deliver better performance at a lower cost than its competitors’ offerings.

Neither Microsoft nor Amazon creates frontier, trillion-parameter AI models. If Google plays its cards right, it can capitalize on this first-party offerings gap to entice enterprise customers further into its cloud ecosystem. Google Cloud Next 2026 must answer two key questions: why Gemini? And why Google Cloud?

Infrastructure as a USP

The biggest bottleneck for AI training and inference at scale continues to be compute. Nvidia and AMD are going head to head trying to service an explosion in demand for hardware that can run AI workloads, and even Arm has entered the first-party chip market with a CPU designed for running AI agents at scale.

What makes Google stand out here is its global cloud infrastructure, which smoothly handles an unimaginable amount of data in the form of the world’s searches, YouTube videos, emails, and more. Beyond scale, Google has also spent years specializing its chips for AI workloads, resulting in its tensor processing units (TPUs).

Last year, we heard about Ironwood, Google’s TPU v7 designed to provide the raw computational power needed to train and run frontier AI models at the enterprise level.

I wouldn’t be at all surprised if this year’s TPU upgrades, which I’ll call TPU v8 for ease, come in the form of two distinct offerings. One, like last year’s Ironwood, would push the boundaries of compute in direct competition with the likes of Nvidia Rubin. The other could be fully optimized for AI inference, to efficiently meet the massive inference demands of AI agents.

Microsoft announced such a chip in January, the Maia 200, which it said offers better performance per dollar than Google TPUs and AWS Inferentia.

Rival labs can claim their models are better than Gemini, and credibly so in the case of the latest GPT and Claude Opus releases. But none train and run their models on in-house hardware that was designed in collaboration with their developers.

This has a knock-on effect for the cost of running agents. If Google Cloud can demonstrate that TPUs are the best choice for low-cost deployment of AI agents – and potentially low-effort, through platforms like Vertex AI Agent Builder – it will win hefty business investment.

Anthropic recently signed a deal for 3.5GW of extra TPU capacity, on top of an October 2025 deal to take control of up to a million TPUs in 2026. This is good news for Google, but it also raises questions: which chips will Anthropic use, and what, if any, are the benefits of choosing Gemini over Claude on Google hardware?

There’s a twist in the tail this year in the form of OpenAI’s repeated infrastructure cancellations, first dropping its plans for Stargate UK and then for Stargate Norway. Not only do these roadblocks for OpenAI free up room for Google Cloud to shout about its infrastructure successes, but Google itself is reportedly looking to rent the newly vacant capacity at Nscale’s Stargate UK cluster.

This could provide headroom for more regional expansion in the UK, as well as diversify Google’s hardware stack in the region further (Nscale will use Nvidia chips in the facility).

If Google Cloud can knit all of this together, it can make a strong argument for the resilience, reach, and power of its AI infrastructure.

All eyes on Wiz

Last year’s event took place just before Google completed its $32 billion acquisition of Wiz, which will be rolled into Google Cloud while retaining its brand. Nearly one year on, I’m expecting to hear a lot about how Wiz has been integrated into the Google Cloud Platform (GCP) and where it’s already delivering results.

The addition of Wiz seriously expands Google Cloud’s ability to detect threats to and protect cloud assets, with the two now combining their proprietary AI approaches in a unified security platform. But the other major benefit for Google is what Wiz brings in the multi-cloud domain.

In its announcement of the acquisition, Google highlighted that Wiz will continue to work on “all major clouds,” and that by combining forces Google Cloud customers will be better equipped to protect themselves across all their cloud environments.

With threat actors now using AI as a standard tool for attacks, enterprises and small businesses alike could be won over by the promise of automated cloud security that extends across their entire cloud estate.

Leaders could also be more willing to increase their Google Cloud spend in the knowledge that they can still protect assets in other cloud environments, which would allow Google to undermine the USPs of competitors such as AWS and Microsoft Azure.

I’ll be listening closely to what those within Wiz have to say about the deal, and where Google Cloud thinks the firm will best slot into its operations.

Will Gemini 4 be announced?

A question that’s undoubtedly on the minds of many attendees ahead of Google Cloud Next 2026 is whether a surprise release for Gemini 4 is on the cards.

While model release cycles are undoubtedly speeding up – Anthropic has announced five frontier models in as many months, including its gated cybersecurity model Claude Mythos – I think it’s quite unlikely that we’ll see a fully-fledged Gemini 4 at the event.

Google has a track record here. Its frontier Gemini models tend to release around February or March, and talks at Google Cloud Next focus heavily on the enterprise capabilities of the latest model rather than treating the event as an opportunity to announce a brand new one.

Gemini 1.5 Pro was the focus at Google Cloud Next 2024; Gemini 2.5 Pro was the focus at Google Cloud Next 2025. This year, we’re just a few months on from the launch of Gemini 3.1 Pro and, unless Google DeepMind breaks form to speed up releases, technical demos will likely focus on practical uses for this latest model.

Sorry AI fanatics, I just don’t see Gemini 4 being on the roster.

Expect lots of mentions for Google DeepMind’s less enterprise-focused models, though, such as image generator Nano Banana 2, music generator Lyria 3, and video generator Veo 3.1. Is the link between garish AI images and enterprise bottom lines tenuous at best? Absolutely. Will I be posting photos of said images being shown off at the opening keynote? Without a shadow of a doubt.

Nevertheless, Gemini will once again be the star of the show at Google Cloud Next 2026. Expect to hear about how it powers many of the features announced in the opening keynotes, as well as how businesses can use it to build more automation into their daily activities.

Rory Bathgate will be covering Google Cloud Next live from Mandalay Bay, Las Vegas between 22-24 April. To stay up-to-date with the latest news and announcements from the conference, follow our live blog and subscribe to the ITPro newsletter.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.