The reinvention of Intel
Why the chip pioneer’s strategic pivot mirrors the shifting values of the IT industry as a whole
For about as long as computers have existed, conversations around IT have been almost entirely based on specifications: how much network throughput a switch can handle, how many megahertz a processor runs at, how much storage capacity a server has. These so-called ‘speeds and feeds’ are often the primary factor by which a product is judged.
Intel had a big part in driving this paradigm. As an early innovator in the field of semiconductors, the company was one of the key forces behind the PC boom in the 1980s, providing the processor for the seminal IBM PC. From there, the company’s engineers continued to push the technical sophistication of its components forward, popularising multi-core processors and pioneering simultaneous multi-threading. To this day, core count, clock speed, TDP and other technical metrics remain at the centre of many conversations between technologists, engineers and assorted geeks.
So what does it mean when Intel puts on an entire conference - including several product launches - and spends almost no time discussing technical details?
This week, the company gathered a selection of partners and customers in Dallas, Texas for Intel Vision 2022, the first iteration of a new annual conference focused on Intel’s enterprise and business offerings. However, rather than focusing on its Xeon server portfolio or its range of 12th-generation vPro-enabled processors, Intel CEO Pat Gelsinger spent most of his time emphasising the ways in which its technologies could be used as part of a wider IT stack to accelerate transformation. It should be noted that this event is geared towards customers and partners, rather than specifically targeting developers and sysadmins, but this is still telling; rather than hardware capabilities, Intel has determined that business outcomes are what attendees want to hear about.
A chip on the shoulder
What makes this all the more surprising is that Intel’s engineers have been busy, and some interesting new hardware is rolling off the production lines, not least of which are the company’s first dedicated GPUs in more than two decades. The new Arc A-series graphics chips (codenamed ‘Alchemist’) are just starting to come to market; mobile SoCs have started shipping as part of systems from Intel’s OEM partners, while the fully-fledged desktop card will be coming later this summer. This represents a significant evolution for the silicon manufacturer. Although the integrated GPUs built into its Intel Core processors mean it’s the largest manufacturer of graphics chips by volume, this is the first time in a generation that Intel has seriously explored the discrete graphics market.
“We're launching a major business in the accelerated computing and graphics area; one that we were derelict in not participating in for a decade plus,” Gelsinger says. “Customers want a viable alternative to Nvidia in that space - we're going to create it.”
While this characterisation of the market does somewhat overlook the presence of AMD’s Radeon GPU range, Gelsinger is correct in his assessment that Nvidia has a notable stranglehold on the discrete GPU market. The company has cemented its position as the de facto standard for graphics cards - but Intel may have an ace up its sleeve here. With control over both CPU and GPU elements, Intel can introduce more engineering innovation, getting more out of both compute elements by managing the interplay between them.
The first SKUs of its new 4th-gen Xeon Scalable server processors have also now started rolling out to OEMs and system builders, with the first 4th-gen infrastructure pencilled in for release in Q3 or Q4. This server chip refresh promises big things, with built-in accelerators for AI, cryptography, networking and database workloads, but scant information is available on the forthcoming SKUs, and with such major developments in motion, the fact that hardware specifications weren’t a main focus is remarkable.
It’s possible that Intel is making a conscious effort to avoid direct comparisons with some of its key rivals; the company is facing increasing competition on almost every front, with AMD and Nvidia making particular inroads into Intel’s vital data centre business. Specifically calling out performance details or specifications leaves the company open to being one-upped on those numbers, and as Gelsinger himself admits, the company has ceded ground in that area over the last few years.
“Due to our capacity limitations, in many cases, we didn’t lose market share; we gave market share [away], where we just didn't have the capacity to supply against what our customers wanted. We also had our own execution issues, where in some markets we were not delivering the best products.”
On the other hand, this move away from shouting about core speeds and thread counts is a reflection of a wider shift within the IT industry: the rise of ‘outcome-driven IT’. Business and technology leaders now care less and less about the tin in their data centre racks, and much more about the applications that are running on it; they want to know that their infrastructure is going to meet their needs, and it doesn’t really matter what configuration it’s using to do that.
Indeed, many CTOs are no longer using physical data centres at all - as Gelsinger is well aware. After spending many years as Intel’s CTO - helping to drive innovations including Wi-Fi, USB, and both the Intel Core and Intel Xeon ranges - he became president of EMC, and then CEO of VMware, where he witnessed first-hand the explosion of cloud computing and the impact it had on the way businesses consume technology. Under his leadership, VMware went from being focused largely on on-premise data centres to developing a broad suite of cloud-enabled software platforms.
Cards on the table
Evidently, Gelsinger is hoping to repeat the trick, because there was one area where Intel’s keynote presentation did highlight the technical capabilities of its products: a series of hardware units aimed squarely at cloud service providers. The headline announcement concerned the company’s Arctic Sound M data centre GPUs, first unveiled back in February.
This PCIe 4.0 card will come in two versions: a 150W unit aimed at maximising performance and a 75W model designed for high-density, multipurpose deployments. Both are optimised to deliver compute-intensive server-side capabilities such as cloud gaming, media streaming, VDI, and machine learning training and inference - tasks which have become popular use cases for GPU compute.
The new cards are coming in Q3, with partner-built systems from Dell, HPE, Cisco, Supermicro and other key OEMs, and Intel also announced that it was extending its roadmap for IPUs through to 2026, with 400Gb and 800Gb models coming in later generations.
IPUs - or Infrastructure Processing Units - are hardware modules which allow the offloading of various processing tasks involved with running a large, complex data centre, such as storage management, virtual switching and security monitoring. They’re particularly useful for cloud providers managing a large multi-tenant environment, as they can be used to free up CPU cycles which can then be sold to customers.
These products are underpinned by what Intel calls oneAPI - a consistent underlying platform across its enterprise CPU and GPU portfolio that provides a unified programming environment spanning the kernel as well as performance libraries. It’s built to integrate with open source standards and tools, and is designed to ensure that customers can use the same code across both on-premise Xeon servers and Intel instances running in the public cloud.
Both Arctic Sound M and the company’s IPU portfolio are interesting developments for cloud service providers, telcos and any organisation managing highly complex systems - but they’re not as relevant outside that ecosystem. This, combined with the emphasis on supporting multi-cloud deployments, reads as an acknowledgement that although there will always be a need for physical hardware, it’s not necessarily a growth market in the long term.
Spirit in the sky
Software is a different story, though, and another lesson Gelsinger has taken from his time with VMware is that software has enormous power - not just from a commercial perspective, but as “the soul of the data centre”. To hear the devoutly religious CEO tell it, this insight is one of the most valuable things he’s bringing back with him to Intel.
“Sometimes I ask myself that question philosophically: why did God have me lead a software company for eight years? Right? And now I’m back at a silicon company.”
This divine plan has led Gelsinger - together with Intel’s CTO Greg Lavender - to implement a “software-first strategy” that will see the company pushing deeper into the IT stack. To do this, Intel will capitalise on its existing data centre footprint and combine this with a range of software products designed to speed up or improve management, administration and optimisation of server infrastructure.
Gelsinger points out that despite ostensibly running a hardware manufacturer, he now has more software developers under his command than he did when he was in charge of VMware, and this mirrors the shifting priorities of the IT landscape.
“If you think over the last 40 years of our careers in the industry, we went from hardware being two to one to software revenue; now it's three to one software revenue to hardware. The role of software has become dramatically more important, and we don't believe that changes.”
One early example of this strategy in practice is Granulate, a small company whose software automatically analyses and optimises Linux-based server processes. Intel acquired Granulate last month, and according to Gelsinger, that’s just the start.
“We're going to do more SaaS; more SaaS acquisitions as well,” he says. “You'll see us pulling silicon differentiation through SaaS services that we're making broadly available to the industry. And my simple formula is ‘silicon plus software, [or] SaaS, equals solutions’ - and you're going to see us doing a lot more of that solutioning.”
Taken together, what all these elements make clear is that Intel is a company in transition. Its intention to focus on the business market is writ large, but rather than falling back on pushing iron, the goal appears to be to reinvent itself as a full-stack platform provider. Regardless of whether customers’ workloads are running on-prem, in the cloud or at the edge, Intel is planning to use its insight into the silicon they’re running on to approach their infrastructure challenges from a holistic, end-to-end perspective.
As a company that has historically been known first and foremost for the physical engineering of its hardware, Intel’s newfound enthusiasm for the wider world of software and cloud solutions may feel a little like turkeys buying shares in a Christmas tree farm - but Pat Gelsinger is nothing if not an astute observer of which way the winds of change are blowing. Intel may be the house that chips built, but if he thinks the future lies beyond pure silicon, then there’s a good chance he’s right.