How direct-to-chip cooling is helping MSPs meet AI demand
MSPs must make careful, strategic choices now to position themselves - and their customers - for future success
Managed Service Providers (MSPs) face growing pressure from surging AI workloads in the data centers across their networks. They have grown accustomed to predictable planning and forecasting, delivering infrastructure as required.
But if the current pressures from AI highlight anything, it is that many data centers are ill-equipped, operating with the wrong infrastructure for these workloads.
This leaves MSPs at risk of losing revenue, and it means operators cannot confidently adapt to changing demands at the speed required.
Now, unpredictable demand and sudden spikes are creating a significant dilemma: MSPs who overcompensate could lose revenue, whereas those who are insufficiently prepared may jeopardize customer retention.
With AI infrastructure requiring new operational expertise, MSPs cannot close these gaps by optimizing capacity alone. The shift also pushes them towards heavier asset models, such as advanced cooling systems and GPU fleets.
MSPs who continue to employ older GPUs will see reduced performance and processing capacity, which in turn can hurt client retention and future revenue. Ultimately, high-density workloads are not merely testing whether data centers can meet the new demand; they are testing whether existing models can survive this transition, and the next.
Where cooling comes into it
Traditional cooling approaches, in particular, are being pushed beyond their limits. Not only do AI workloads operate at extremely high power densities, but they are characterized by highly concentrated heat generation, which air cooling cannot handle.
Direct-to-chip cooling was once a niche engineering solution, but it’s now a key consideration for MSPs as GPU manufacturer demands grow and the need to increase processing density becomes more prevalent.
There are varying levels of compromise when updating existing cooling infrastructure. Rear door cooling and liquid-to-chip 'sidecars', for example, are two different approaches, though both fall short of true direct-to-chip cooling.
Sidecars, for instance, deliver coolant directly to the chip via a cold plate, but reject that heat back into the room air to be handled by conventional CRAC or CRAH units, making them dependent on the very air-cooling infrastructure they are meant to supplement.
Cooling requirements now impact site selection and speed to deployment, so these decisions clearly extend beyond the data center. MSPs can no longer treat direct-to-chip cooling as a mere technical upgrade; it has become a strategic business decision that directly impacts growth, risk, and long-term viability.
On average, direct-to-chip can support 60-120+ kW per rack, far beyond what traditional air-cooling methods can achieve. MSPs that don't make this change will significantly limit their densities and overall output.
In basic terms, the direct-to-chip method delivers coolant directly to processors, removing heat more efficiently than traditional cooling frameworks. In the past, direct-to-chip cooling was exclusively used in high-performance computing and specialist environments.
The high up-front costs and complexity meant it wasn’t suitable for traditional data center environments. Even today, the decision to adopt direct-to-chip cooling is not straightforward, with in-rack and in-row CDU options to consider, as well as how the technical loop is configured, which directly affects integration risk.
AI workloads continue to make this cooling method an invaluable and core aspect of data center infrastructure and a resource MSPs must strive to be familiar with.
Traditional cooling methods cannot keep pace with growing AI workloads because airflow alone struggles to carry away the dramatic increase in heat.
Air cooling was not designed for the high rack densities and energy consumption that come with today's level of AI usage. Data center operators and MSPs that do not upgrade their cooling infrastructure put their facilities at risk of thermal throttling and failures, which not only reduce performance but also increase energy costs over time.
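A back-of-envelope calculation illustrates the physics behind this. Using the standard heat-transfer relation Q = ṁ · cp · ΔT, the sketch below (a hypothetical 100 kW rack and a 10 K coolant temperature rise are illustrative assumptions, not figures from the article) compares the airflow versus water flow needed to remove a rack's heat:

```python
# Coolant flow needed to remove rack heat: Q = m_dot * cp * delta_T.
# Material constants are standard room-condition values.

AIR_CP = 1005.0      # J/(kg*K), specific heat of air
AIR_RHO = 1.2        # kg/m^3, air density
WATER_CP = 4186.0    # J/(kg*K), specific heat of water
WATER_RHO = 1000.0   # kg/m^3, water density

def air_flow_m3s(q_watts: float, delta_t: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry q_watts at a delta_t rise."""
    return q_watts / (AIR_CP * delta_t) / AIR_RHO

def water_flow_lpm(q_watts: float, delta_t: float) -> float:
    """Water flow (litres/min) needed to carry q_watts at a delta_t rise."""
    return q_watts / (WATER_CP * delta_t) / WATER_RHO * 1000 * 60

rack_watts, dt = 100_000, 10  # hypothetical 100 kW AI rack, 10 K rise
print(f"Air:   {air_flow_m3s(rack_watts, dt):.1f} m^3/s")   # ~8.3 m^3/s
print(f"Water: {water_flow_lpm(rack_watts, dt):.0f} L/min")  # ~143 L/min
```

Water's far higher heat capacity per unit volume means a modest pumped flow does the work of an enormous (and often physically impractical) volume of air, which is why liquid reaches densities that air cannot.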
Addressing cooling throughout the lifecycle
Cooling decisions have a big impact across multiple stages of the data center lifecycle. Direct-to-chip cooling mitigates common operator problems by providing stable thermal environments, reducing component degradation, and offering a range of sustainability benefits.
This cooling system operates at higher coolant temperatures, reducing the need for mechanical cooling, while maximizing free cooling and opening up 'heat reuse' opportunities with local communities. Poor cooling decisions can shape the financial lifespan of a data center, so it is paramount to get them right early on.
Aside from operational benefits, external factors are pushing MSPs towards adopting direct-to-chip cooling for their data centers. Regulatory pressures are intensifying reporting requirements concerning energy consumption, water usage, and carbon emissions, which highlights a clear strategic implication.
Across EMEA, direct-to-chip's market share is much smaller than in the US, and there is still high demand for air-cooled densities up to 75kW per rack, so MSPs and operators need a 'Liquid Flex' or hybrid model to address customer demand. These models combine direct-to-chip cooling with a mix of other cooling types, supporting different workloads and providing flexibility when scaling.
Cooling methods now also determine where data centers are built in relation to power and water sources, meaning some regions will be unsuitable for AI workloads. Speed-to-market can be impacted by the complexity of cooling infrastructure, influencing build times and the suitability of retrofits.
Within the last few years, changing regulatory standards have raised questions about the environmental impact of cooling systems. Poor cooling choices will only increase the regulatory pressure MSPs face.
Site selection is also driven by cost, energy, and carbon impact. Direct-to-chip cooling requires less white space but more grey space to support the increase in capacity. It is clear how cooling now shapes where and when data centers can be built and whether they can meet demand.
Adapt to survive the shift
Cooling is becoming a pressing consideration, and there is no room for error. When advanced cooling is not considered or implemented, MSPs will struggle to operate their centers at full capacity and face thermal bottlenecks that compound operational instability.
However, risks extend far beyond performance alone. Inadequate cooling systems can introduce operational exposures, including downtime and chemical imbalance within the technical loop, which can cause damage in the long run.
Without robust isolation strategies, these risks can disrupt the entire data center lifecycle. From a business perspective, failure to adopt direct-to-chip cooling may not only impact revenue and capacity but also increase the risks of system compromise and regulatory non-compliance. Now more than ever, cooling is a critical point for resilience, long-term viability, and growth.
As AI workloads inevitably increase, traditional cooling systems will fail to sustain performance, meet sustainability guidelines, and contain unnecessary risks.
The MSPs that make the right decision early on and adopt direct-to-chip cooling will remain competitive and ensure compliance and performance are upheld. Those leading the way will undoubtedly be best positioned to scale AI infrastructure efficiently and sustainably.

Rich Clifford is vice president of sales and solutions, EMEA at Salute.
He's known for his visionary, problem-solving approach across diverse data center environments.
He leads integrated go-to-market strategies, strengthens client relationships through Salute’s lifecycle services, and oversees solution design for customers spanning colocation, enterprise, AI, cloud, and hyperscale sectors.

