
What Is Liquid Cooling for Data Centers? How Immersion and Direct-to-Chip Systems Work

Why Data Centers Need Liquid Cooling

Traditional data centers use air cooling, where fans push cold air through server racks to absorb heat and carry it away. This approach worked well when server racks consumed 5 to 10 kilowatts of power. But AI computing has pushed rack power densities to 40, 70, and even 120 kilowatts per rack. At these power levels, air simply cannot remove heat fast enough; the physics of air cooling hits a practical ceiling around 30 to 40 kilowatts per rack.
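The ceiling follows directly from the heat-transport equation: the airflow needed to carry away a rack's heat is Q = P / (ρ · cp · ΔT). A quick sketch shows how fast the required airflow escalates; the air properties are textbook room-temperature figures, and the 12 °C allowed air-temperature rise is an illustrative assumption, not a standard.

```python
# Back-of-envelope sketch: volumetric airflow needed to remove a rack's
# heat, from Q = P / (rho * cp * delta_T). Property values are textbook
# figures for air near room temperature; the 12 C allowed temperature
# rise is an illustrative assumption.
RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg*K)

def airflow_m3_per_s(rack_kw, delta_t_c=12.0):
    """Air volume flow (m^3/s) required to carry away rack_kw of heat."""
    return rack_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_c)

for kw in (10, 40, 80, 120):
    q = airflow_m3_per_s(kw)
    print(f"{kw:>4} kW rack -> {q:5.2f} m^3/s (~{q * 2118.88:,.0f} CFM)")
```

Under these assumptions an 80 kW rack needs roughly 5.5 m³/s of air, about 11,700 CFM through a single rack, which is why pushing more fans at the problem stops being practical.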

Liquid cooling uses water or specialized dielectric fluids to remove heat directly from computing hardware. Liquids conduct heat far better than air and, per unit volume, can carry orders of magnitude more of it. A liquid cooling system can therefore remove the same amount of heat using a fraction of the energy that air cooling requires, reducing the data center's total power consumption and improving its Power Usage Effectiveness (PUE).
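The advantage can be quantified with the same relation, ṁ = P / (cp · ΔT), comparing the coolant flow each medium needs for the same load. The rack power and temperature rise below are illustrative assumptions; the fluid properties are standard textbook values.

```python
# Sketch comparing coolant mass flow for the same heat load, from
# m_dot = P / (cp * delta_T). Assumed textbook properties:
# water cp ~4186 J/(kg*K), density ~997 kg/m^3;
# air   cp ~1005 J/(kg*K), density ~1.2 kg/m^3.
def mass_flow_kg_s(power_w, cp_j_per_kg_k, delta_t_c):
    """Coolant mass flow (kg/s) to absorb power_w with a delta_t_c rise."""
    return power_w / (cp_j_per_kg_k * delta_t_c)

POWER_W, DT = 80_000, 10.0  # 80 kW rack, 10 C coolant temperature rise
water = mass_flow_kg_s(POWER_W, 4186.0, DT)
air = mass_flow_kg_s(POWER_W, 1005.0, DT)
print(f"water: {water:.2f} kg/s (~{water / 997 * 1000:.1f} L/s)")
print(f"air:   {air:.2f} kg/s (~{air / 1.2:.1f} m^3/s)")
```

Although water's specific heat is only about four times air's, its density advantage means roughly 1.9 litres per second of water does the work of about 6.6 cubic metres per second of air, and pumping a small water flow costs far less energy than moving that much air.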

Direct-to-Chip Cooling

Direct-to-chip cooling, also called cold plate cooling, circulates liquid through metal plates mounted directly on the hottest components, primarily CPUs and GPUs. The liquid absorbs heat from the chips and carries it to a heat exchanger where the heat is rejected outside the data center. The rest of the server, including memory, storage, and power supplies, may still be air-cooled.

This approach is the most widely adopted form of liquid cooling because it can be retrofitted into existing data centers with relatively modest modifications. Major chip manufacturers, including Nvidia and Intel, now design their highest-performance processors with direct liquid cooling support. The liquid used is typically treated water flowing through a closed loop.
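To get a feel for the plumbing a cold plate involves, here is a rough per-chip flow sizing sketch from ΔT = P / (ṁ · cp). The 700 W chip power and 8 °C loop temperature rise are illustrative assumptions, not figures from any vendor.

```python
# Rough cold-plate flow sizing: the water flow through a plate needed to
# hold the loop's temperature rise to a target. Chip power and allowed
# rise are illustrative assumptions.
CP_WATER = 4186.0   # J/(kg*K)
RHO_WATER = 997.0   # kg/m^3

def flow_l_per_min(chip_w, delta_t_c):
    """Coolant flow (L/min) that limits the temperature rise to delta_t_c."""
    kg_per_s = chip_w / (CP_WATER * delta_t_c)
    return kg_per_s / RHO_WATER * 1000.0 * 60.0

print(f"{flow_l_per_min(700, 8):.2f} L/min for a 700 W chip at an 8 C rise")
```

That works out to roughly 1.3 litres per minute per chip in this scenario; a rack full of such chips multiplies the flow accordingly, which is what the rack manifolds and coolant distribution units are sized for.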

Immersion Cooling

Immersion cooling submerges entire servers in a tank of dielectric fluid, a non-conductive liquid that can safely make direct contact with electronic components. Single-phase immersion systems circulate the fluid through an external heat exchanger. Two-phase immersion systems use a fluid that boils at a low temperature, absorbing heat through phase change and condensing on a cooling coil above the tank.
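The phase-change mechanism can be sketched with the latent-heat relation: the mass of fluid that must vaporize per second is the heat load divided by the fluid's heat of vaporization. The 90 kJ/kg figure below is a rough assumed value for an engineered dielectric fluid, not any product's datasheet number.

```python
# Two-phase immersion sketch: boil-off rate = heat load / latent heat.
# H_FG is an assumed, order-of-magnitude latent heat for a dielectric fluid.
H_FG = 90_000.0  # J/kg

def boil_rate_kg_s(heat_w, h_fg=H_FG):
    """Fluid mass vaporized per second while absorbing heat_w of heat."""
    return heat_w / h_fg

print(f"{boil_rate_kg_s(80_000):.2f} kg/s vaporized for an 80 kW tank")
```

The vapor condenses on the coil above the tank and drips back down, so the fluid inventory circulates continuously rather than being consumed.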

Immersion cooling offers several advantages beyond thermal performance. It eliminates the need for fans entirely, reducing server power consumption and noise. It protects components from dust, humidity, and corrosion. And it enables extremely dense rack configurations because there are no airflow constraints. A single immersion tank can accommodate computing hardware that would require multiple traditional air-cooled racks.

Energy Savings and PUE Impact

The energy savings from liquid cooling are substantial. Air-cooled data centers typically achieve PUE values of 1.3 to 1.6, meaning cooling and other infrastructure overhead add 30% to 60% on top of the power delivered to the IT equipment itself. Liquid-cooled facilities can achieve PUE values of 1.02 to 1.1, approaching the theoretical minimum of 1.0, where virtually all electricity goes directly to computing.
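The metric itself is simple: PUE is total facility power divided by the power delivered to IT equipment. A minimal sketch, using made-up facility numbers rather than measurements from any real site:

```python
# PUE = total facility power / IT power. The kW figures below are
# illustrative assumptions, not measurements from a real facility.
def pue(it_kw, cooling_kw, other_overhead_kw):
    total_kw = it_kw + cooling_kw + other_overhead_kw
    return total_kw / it_kw

air_cooled = pue(it_kw=1000, cooling_kw=350, other_overhead_kw=100)
liquid_cooled = pue(it_kw=1000, cooling_kw=40, other_overhead_kw=30)
print(f"air-cooled PUE:    {air_cooled:.2f}")    # 1.45
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")  # 1.07
```

For the same 1 MW of computing, the hypothetical air-cooled facility draws 1.45 MW from the grid while the liquid-cooled one draws 1.07 MW; the difference is pure overhead.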

The waste heat captured by liquid cooling systems is also higher quality than heat captured by air systems. Water leaving a direct-to-chip system might be 45 to 60 degrees Celsius, warm enough to be useful for building heating, district heating, or industrial processes. Some data centers in Nordic countries already capture and sell their waste heat, turning a cost center into a revenue stream.
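How much heat is actually on offer can be estimated with Q = ṁ · cp · ΔT. The flow rate and loop temperatures below are illustrative assumptions for a mid-size liquid-cooled hall, not data from any real facility.

```python
# Recoverable heat from a direct-to-chip water loop: Q = m_dot * cp * dT.
# Flow rate and loop temperatures are illustrative assumptions.
CP_WATER = 4186.0  # J/(kg*K)

def recovered_heat_kw(flow_kg_s, hot_c, cold_c):
    """Thermal power (kW) available between the loop's hot and cold sides."""
    return flow_kg_s * CP_WATER * (hot_c - cold_c) / 1000.0

q_kw = recovered_heat_kw(flow_kg_s=20.0, hot_c=55.0, cold_c=40.0)
print(f"{q_kw:.0f} kW of ~55 C water")
print(f"~{q_kw * 8760 / 1000:,.0f} MWh of heat per year if run continuously")
```

In this scenario a modest 20 kg/s loop yields over a megawatt of low-grade heat around the clock, which is the scale that makes district-heating connections worth the plumbing.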

Adoption Challenges and Outlook

Despite its advantages, liquid cooling adoption has been slower than the technology merits. The upfront cost of liquid cooling infrastructure is higher than air cooling. Facility staff must be trained on new maintenance procedures. Supply chains for dielectric fluids and specialized plumbing components are still maturing.

The surge in AI workloads is forcing the issue. Nvidia’s latest GPU systems are designed for liquid cooling, and data center operators deploying these systems have no choice but to adopt some form of liquid thermal management. Industry analysts expect liquid cooling to be standard for new high-density deployments within the next three to five years, with hybrid approaches combining direct-to-chip liquid cooling for compute hardware and air cooling for less power-dense equipment.
