From the desktop to the datacenter: what does success look like in the AI era?

Success in the AI era requires a holistic hardware strategy, from empowering employees with the new generation of AI PCs to building a powerful, AI-ready datacenter


The age of artificial intelligence (AI) has firmly arrived, shifting from a futuristic concept to a present-day business imperative.

The appetite for AI within enterprise organizations is immense. Indeed, in March this year, IDC Research suggested that eight in ten IT decision makers plan to invest in AI PCs across 2025. The analyst firm also predicts that demand for AI PCs will only increase, with shipments of these devices set to dominate the market, accounting for 93.9% of all PC shipments, by 2028.

"The AI PC era is here, and whether you're fully ready to embrace it or not, the fact is that the PC purchases you make now will be in your installed base for many years to come," wrote Tom Mainelli, IDC's group vice president, device and consumer research and Linn Huang, research vice president, devices and displays, in a research note.

Yet, this enthusiasm is tempered by the challenge of converting investment into tangible success. True success in the AI era requires a holistic strategy, one that thoughtfully integrates optimized hardware from the employee's desktop all the way to the core of the datacenter.

The AI PC as a new locus of productivity

The traditional personal computer, a staple of the modern workplace for decades, is being fundamentally reimagined for the age of AI. An AI PC is a computer specifically designed with an integrated neural processing unit (NPU) that works in concert with the central processing unit (CPU) and graphics processing unit (GPU) to accelerate AI tasks directly on the device. This specialized hardware is the key differentiator, enabling AI-powered software features to run more quickly, efficiently, and securely than ever before.

The NPU is engineered to handle the sustained, parallel processing required by AI workloads with far greater power efficiency than a CPU or GPU alone. This architectural advantage delivers a cascade of benefits. By offloading AI tasks, the NPU frees up the CPU and GPU to focus on other operations, boosting overall system performance and responsiveness. Processing data locally on the PC, rather than sending it to the cloud, significantly enhances security and privacy, a critical consideration for enterprises handling sensitive information. This on-device processing also reduces latency, providing the real-time responsiveness needed for dynamic AI applications.
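To make that concrete, the sketch below shows what on-device inference can look like in practice, using the open source ONNX Runtime to prefer an NPU-backed execution provider and fall back to the CPU. It is illustrative only: the model file name is a placeholder, and which provider names are available (such as VitisAIExecutionProvider on AMD Ryzen AI machines) depends on the vendor and the runtime build installed.

```python
import numpy as np
import onnxruntime as ort

# Prefer NPU-backed execution providers where present; CPU is the fallback.
preferred = ["VitisAIExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

# "model.onnx" is a placeholder for any locally stored ONNX model.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])

# Build a dummy float32 input matching the model's first input
# (dynamic dimensions are set to 1 for illustration).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
```

Because the data never leaves the machine, the same pattern delivers the privacy and latency benefits described above without any cloud round trip.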

The real-world business gains are already compelling. For everyday productivity, NPUs accelerate features in collaboration tools like Microsoft Teams and Zoom, such as real-time background blurs, noise suppression, and live transcriptions, without draining battery life.

In creative fields, Adobe Premiere Pro and Photoshop use the NPU to speed up tasks like generative fill, intelligent masking, and automatic subtitling. For data analysts, an AI PC can process machine learning models on local datasets in a fraction of the time, while security software can leverage the NPU to run real-time threat detection without impacting system performance.

Knowing whether your business needs AI PCs hinges on understanding the local versus cloud AI trade-off. Cloud AI is powerful for training large, complex models on massive datasets. However, for the instant, constant, and often private AI interactions that define modern workflows, on-device processing is superior. If your employees are engaged in content creation, data analysis, or frequent collaboration, the performance and security benefits of an AI PC will be immediate.

When choosing the right AI PC, businesses should look beyond traditional specs. Key considerations include the performance of the NPU, often measured in trillions of operations per second (TOPS). Processors like the AMD Ryzen AI 300 Series deliver over 50 TOPS, providing substantial power for AI-driven workflows. A minimum of 16GB of RAM is essential for multitasking with AI tools, and a fast solid-state drive (SSD) with at least 512GB ensures quick access to AI models and data.
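For IT teams auditing an existing fleet against those baselines, a short script along the following lines can flag machines that fall short. It is a sketch under stated assumptions: RAM and storage can be read programmatically (here via the third-party psutil package), while NPU performance in TOPS cannot be queried portably and must be taken from vendor documentation.

```python
import shutil
import psutil  # third-party: pip install psutil

MIN_RAM_GB = 16    # baseline for multitasking with AI tools
MIN_DISK_GB = 512  # baseline for local AI models and data

ram_gb = psutil.virtual_memory().total / 1e9
disk_gb = shutil.disk_usage("/").total / 1e9

print(f"RAM:     {ram_gb:6.0f} GB -> {'ok' if ram_gb >= MIN_RAM_GB else 'below baseline'}")
print(f"Storage: {disk_gb:6.0f} GB -> {'ok' if disk_gb >= MIN_DISK_GB else 'below baseline'}")
# NPU throughput (e.g. 50+ TOPS on AMD Ryzen AI 300 Series) has to be
# checked against vendor documentation; there is no portable query for it.
```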

The engine room that’s powering AI from the datacenter

In the shift to an AI-driven enterprise, individual users may harness the power of AI PCs, but it is the datacenter that fuels large-scale, organization-wide intelligence. Yet most traditional datacenters were never built for the extraordinary demands of modern AI: the relentless processing of huge datasets and the need to run complex algorithms at unprecedented speed. Meeting these demands calls for a deliberate, strategic refresh of core hardware.

At the heart of this transformation lies high-performance compute. Training AI models is an intensely computational process that relies on servers packed with powerful, multi-core CPUs such as AMD EPYC processors, bolstered by GPU-based accelerators engineered for massive parallelism. Equal attention must go to storage: AI models require rapid, continuous access to enormous volumes of data, which makes high-speed NVMe-based SSDs essential for avoiding latency that would otherwise idle costly compute resources.
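A quick way to sanity-check the storage tier is to time sequential reads of a large file and compare the result with what the compute tier consumes. The sketch below is illustrative: the file path is a placeholder, and on a real system it would point at a dataset shard on the NVMe volume under evaluation.

```python
import time

CHUNK = 8 * 1024 * 1024   # 8 MiB sequential reads
PATH = "dataset.bin"      # placeholder: any large local file

total = 0
start = time.perf_counter()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {total / 1e9:.2f} GB at {total / elapsed / 1e9:.2f} GB/s")
# Note: repeat runs may hit the OS page cache and overstate raw drive speed.
# If the figure is well below what the accelerators can ingest, storage,
# not compute, becomes the bottleneck.
```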

Networking, too, becomes mission-critical. Training a single AI model can generate rivers of data between servers, demanding high-bandwidth, low-latency connectivity so workloads can flow without interruption. Meanwhile, the physical realities of AI computing (its power density and thermal footprint) mean that robust power delivery and advanced cooling solutions, including liquid cooling systems, are indispensable to prevent hardware stress and downtime.
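A back-of-the-envelope calculation illustrates the scale involved. The figures below are assumptions chosen for illustration (a 7 billion-parameter model, FP16 gradients, eight workers, a 100Gb/s link), but they show why gradient synchronization in data-parallel training can saturate an under-provisioned fabric.

```python
# Illustrative assumptions: a 7B-parameter model, FP16 gradients, 8 workers.
params = 7e9
bytes_per_grad = 2
workers = 8

grad_bytes = params * bytes_per_grad
# A ring all-reduce moves roughly 2 * (N - 1) / N of the gradient volume
# per worker on every training step.
per_worker = 2 * (workers - 1) / workers * grad_bytes

link_gbps = 100  # usable network bandwidth in gigabits per second
seconds = per_worker * 8 / (link_gbps * 1e9)

print(f"Gradients per step:      {grad_bytes / 1e9:.1f} GB")
print(f"Traffic per worker/step: {per_worker / 1e9:.1f} GB")
print(f"Sync time at {link_gbps} Gb/s:   {seconds:.1f} s")
```

Under these assumptions each worker exchanges roughly 24.5GB per step, taking around two seconds on a 100Gb/s link, which is time the accelerators spend waiting rather than computing.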

Forward-thinking organizations approach these decisions with modular, open-standard architectures that guard against vendor lock-in and create space for future scalability. The most common missteps include underestimating the massive power and cooling requirements of AI deployments or neglecting to build storage and networking foundations capable of keeping pace with today’s powerful compute resources. Such oversights can leave even the most advanced AI accelerators underutilized, turning potential innovation into costly inefficiency.

A blueprint for continued success

Deploying AI-ready hardware is a foundational step, not the destination. To ensure continued success now and in the future, businesses must create a supportive ecosystem. This involves investing in training and upskilling to empower employees to leverage new AI tools effectively. Establishing a robust data governance framework is also paramount to managing data quality, security, and ethical use.

Finally, success in the AI era is a dynamic process. Organizations must build a flexible and scalable infrastructure that can adapt and evolve to emerging AI models and changing business needs, ensuring their hardware investments continue to deliver real value for years to come.

As Naveen Chhabra, principal analyst at Forrester Research, notes: "CIOs and enterprise architects must now view AI infrastructure as a core business capability. Choosing the right partners, selecting deployment regions, and securing computing capacity will shape competitive advantage for years."

Rene Millman

Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.