Generative AI must run using liquid cooling!

In our two preceding blog entries, "Does Generative AI Operate on Thin Air?" and "Generative AI Does Not Operate on Thin Air!", we explored the challenges and substantial energy consumption involved in training and using large AI models such as ChatGPT. These developments point to a growing demand for energy, new microprocessors, novel servers, new cooling methods, and data center floor space.

In this blog post, we go deeper into thermal management strategies for handling the heat generated by the advanced microprocessors (XPUs) used to train and run inference on large language models. The conventional approach, which uses air to cool servers and dissipate heat in data centers, is being challenged by the escalating heat fluxes of the latest generation of microprocessors. This blog post is based on a presentation by Jon Summers at DCD London 2023.

Another looming challenge is powering the racks that house these energy-intensive servers, but that discussion is reserved for a forthcoming blog post.

Switching energy

Our first consideration is the switching energy of the transistors in CMOS microprocessors. Today, the energy dissipated per switching event is approximately 10 aJ (1 aJ = 1000 zJ = 10^-18 J). According to past reports from the International Technology Roadmap for Semiconductors (ITRS), this is projected to reach 1 aJ by 2030. These figures become crucial when calculating the power requirements of a microprocessor.

The thermal design power (TDP)

A quick thermal design power calculation using the formula Power (W) = Switching Energy (J) x Switching Rate (s^-1) lets us estimate the maximum power and, dividing by die area, the maximum heat flux.

Consider the A100 SXM4 Nvidia GPU as an example. Based on an estimated dissipation of approximately 15 aJ per transistor, with 54.2 billion transistors (7 nm) switching at 1.4 GHz, the formula gives Power = (54 x 10^9) x (1.4 x 10^9) x (15 x 10^-18) = 1134 W, a rough estimate of the maximum power if all transistors were switching simultaneously. Since the GPU's known TDP is 400 W, a 1134 W maximum suggests that up to 65% of the silicon is "dark" (idle at any given moment). The heat flux (HF) is then TDP/area; with a die size of 826 mm2, this yields a maximum heat flux of 484 kW/m2.

Now compare this with the H100 SXM5 Nvidia GPU, where the estimated dissipation is around 10 aJ per transistor. With 80 billion transistors (4 nm) switching at 1.8 GHz, the power works out to roughly 1440 W. The specified TDP is 700 W, suggesting about 51% dark silicon. With a die size of 814 mm2, the maximum heat flux evaluates to HF = 860 kW/m2.
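For readers who want to reproduce these back-of-the-envelope numbers, here is a minimal sketch in Python. The function names are ours, and the inputs are the blog's estimates rather than vendor specifications:

```python
# Back-of-the-envelope TDP and heat flux estimates, as used in the text.
# All inputs (transistor counts, clocks, switching energies, die areas)
# are the blog's estimates, not vendor specifications.

def max_power_w(transistors, clock_hz, switch_energy_j):
    """Upper-bound power if every transistor switched every cycle."""
    return transistors * clock_hz * switch_energy_j

def dark_silicon_fraction(tdp_w, p_max_w):
    """Fraction of the silicon that must stay idle to hold the TDP."""
    return 1 - tdp_w / p_max_w

def heat_flux_w_per_m2(power_w, die_area_mm2):
    """Power spread over the die area, in W/m2."""
    return power_w / (die_area_mm2 * 1e-6)

chips = [
    # name,       transistors, clock,  J/switch, TDP (W), die (mm2)
    ("A100 SXM4", 54e9,        1.4e9,  15e-18,   400,     826),
    ("H100 SXM5", 80e9,        1.8e9,  10e-18,   700,     814),
]

for name, n, f, e, tdp, area in chips:
    p_max = max_power_w(n, f, e)
    print(f"{name}: max {p_max:.0f} W, "
          f"dark silicon {dark_silicon_fraction(tdp, p_max):.0%}, "
          f"heat flux {heat_flux_w_per_m2(tdp, area) / 1e3:.0f} kW/m2")
```

Running this reproduces the figures above: 1134 W and 484 kW/m2 for the A100, and 1440 W and 860 kW/m2 for the H100.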

Where are TDPs heading?

Intel has projected that by 2030, microprocessors will contain an impressive 1 trillion transistors. As noted above, ITRS anticipates a switching energy of 1 aJ per transistor. Assuming a clock speed of, say, 4 GHz, the maximum power would reach 4000 W; factoring in a presumed 40% dark silicon brings this down to 2400 W. With a die size of 1000 mm2, for example, the resulting heat flux (HF) is 2.4 MW/m2. Even with 40% of the silicon dark, a future microprocessor at this heat flux would need to be kept below 60°C to function properly.
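Reusing the helper functions from the sketch above, the same arithmetic reproduces this 2030 projection (all inputs are the assumed figures from the text):

```python
# 2030 projection from the text: 1e12 transistors at 1 aJ per switch,
# an assumed 4 GHz clock, 40% dark silicon, and a 1000 mm2 die.
p_max = max_power_w(1e12, 4e9, 1e-18)   # 4000 W
p_tdp = p_max * (1 - 0.40)              # 2400 W after 40% dark silicon
hf = heat_flux_w_per_m2(p_tdp, 1000)    # 2.4e6 W/m2
print(f"max {p_max:.0f} W, TDP {p_tdp:.0f} W, heat flux {hf / 1e6:.1f} MW/m2")
```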

Now, let's compare these heat flux values with established benchmarks. Levels exceeding 1 MW/m2 are comparable to those at the core surfaces of nuclear power stations. The coming escalation in maximum heat flux therefore poses a substantial challenge to the thermal management of microprocessors: how can we effectively remove the heat emanating from the microchip surfaces?

Fluids are needed to remove the heat

Drawing on Tummala, R.R., "Fundamentals of Microsystems Packaging," the temperature difference (Tc - Ta) between the microprocessor case temperature (Tc) and the ambient coolant temperature (Ta) needed to dissipate heat from the heat sink is the product of the convective thermal resistance (Rconv, quoted per 10 cm2 of area) and the power spread across the convection area measured in units of 10 cm2. Mathematically, Tc - Ta = Rconv x P/(area/10).

Tummala's book gives the convective thermal resistance of different fluids: Rconv per 10 cm2 is about 1 K/W for forced convection with water, fluorochemical liquids, or transformer oils (the hydrocarbons traditionally used in single-phase immersion cooling today), as well as for boiling liquids, while forced air has a convective thermal resistance per 10 cm2 of 5 K/W or higher.
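As a small sketch of this relation (the names are ours, and the resistance values are the orders of magnitude cited from Tummala's book):

```python
# Tc - Ta = Rconv * P / (area / 10), with Rconv quoted per 10 cm2.
# Resistance values are the orders of magnitude cited from Tummala's book.

R_CONV_PER_10CM2 = {           # K/W per 10 cm2 of convection area
    "forced air": 5.0,         # "or higher"
    "forced water": 1.0,
    "fluorochemical liquid": 1.0,
    "transformer oil": 1.0,
    "boiling liquid": 1.0,
}

def required_delta_t(power_w, area_cm2, r_conv_per_10cm2):
    """Tc - Ta (K) needed to push power_w through area_cm2 of heat sink."""
    return r_conv_per_10cm2 * power_w / (area_cm2 / 10)
```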

Let's take the H100 as an illustrative example. With 700 W distributed over a 500 cm2 convection area, the required temperature difference between the case and the ambient coolant is 14 (700/50) times the convective thermal resistance. Given that the H100 has a maximum case temperature of 86°C, cooling it with forced air convection requires an ambient temperature of 16°C (Ta = 86 - 5 x 14). In other words, an H100 can only just operate on "thin air", unless the heat sink surface area is increased beyond 500 cm2, taking up more rack volume.

However, for a future microprocessor with 2400 W spread across a 500 cm2 convection area, a case temperature of 60°C, and an ambient coolant temperature of, say, 27°C, we need a convective thermal resistance per 10 cm2 of 1 K/W or below: Rconv < (Tc - Ta)/(2400/50) = (60 - 27)/48 ≈ 0.7 K/W. "Thin air" is no longer sufficient; forced convection of liquids will be necessary, likely involving targeted liquid flows with optimized convection heat sinks or cooling heads.
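Plugging both cases into the sketch above (reusing required_delta_t and R_CONV_PER_10CM2) gives:

```python
# H100 on forced air: 700 W over a 500 cm2 heat sink, case limit 86 degC.
dt_air = required_delta_t(700, 500, R_CONV_PER_10CM2["forced air"])
print(f"H100 on air needs Ta <= {86 - dt_air:.0f} degC")    # 16 degC

# Future chip: 2400 W over 500 cm2, case limit 60 degC, coolant at 27 degC.
# Solving the same relation for the resistance instead:
r_max = (60 - 27) / (2400 / (500 / 10))
print(f"requires Rconv <= {r_max:.2f} K/W per 10 cm2")      # ~0.69
```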

And RISE is on the case

We at ICE data center are happy to help if more questions about data centers and liquid cooling come up. Please get in touch!
