
Reducing Data Centre Cooling by 40%
by Daniel Payne on 07-22-2016 at 12:00 pm

Living in Oregon has many benefits, including access to cheap electricity thanks to the plentiful river systems that provide hydro power and a growing green power business fueled by wind and sun. Many of the world’s largest data centers are located in Oregon for access to this cheap electricity, and Google has a sizable investment in The Dalles, Oregon.

I’ve learned that the racks of servers found in a data center generate a lot of heat, so keeping all of that electronics cool consumes a lot of energy in itself. The bright engineers at Google decided that one way to reduce cooling costs would be to analyze the cooling data using an AI-based system from DeepMind. That yielded a surprising 40% reduction in cooling energy.

Google has also chosen to use renewable energy for its data centers as another way to reduce emissions. Cooling a data center relies on big industrial equipment:

  • Pumps
  • Chillers
  • Cooling towers

What’s so difficult about cooling a large data center? It turns out that the data center responds dynamically to requests by users, so the servers don’t have a static profile; the power draw, and therefore the cooling demand, bounces around a lot. The interactions between servers, cooling, and user demand are complex and non-linear, so a traditional engineering formula or plain common sense doesn’t really help to optimize the cooling challenge. Cooling systems also don’t respond instantly: there is a lag to get started, reach a level, or ramp down. And the physical plant at one data center may be quite different from another’s, so the approach must be tailored to each unique location.
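To see why that lag matters, here is a minimal, purely illustrative sketch: a first-order lag model in which a chiller’s cooling output only closes a fraction of the gap to its setpoint each minute. The model and all of its numbers are assumptions for demonstration, not a description of Google’s actual plant.

```python
# Illustrative only: a toy first-order lag model of a chiller's cooling
# output responding to a setpoint step. All numbers are invented for
# demonstration; real plant dynamics are far more complex and non-linear.

def simulate_chiller(setpoint_kw, tau_minutes, steps, dt=1.0):
    """Return the cooling output trace for a constant setpoint.

    tau_minutes models the lag: each minute the output moves only a
    fraction (dt / tau_minutes) of the remaining gap toward the setpoint.
    """
    output = 0.0
    history = []
    for _ in range(steps):
        output += (dt / tau_minutes) * (setpoint_kw - output)
        history.append(output)
    return history

# A chiller with a 15-minute time constant needs roughly 45 minutes
# (three time constants) to deliver ~95% of a new 500 kW setpoint,
# which is why a controller can't treat cooling as instantaneous.
trace = simulate_chiller(setpoint_kw=500.0, tau_minutes=15.0, steps=60)
```

The point of the sketch is the shape of the response, not the numbers: any control strategy has to act well before the demand it is reacting to has peaked.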

For the past couple of years Google has used a machine-learning-based approach to operate its data centers more efficiently than before. Researchers used the DeepMind system to improve cooling efficiency by building neural networks that could capture the various operating scenarios and parameters that characterize a data center. This adaptive framework helped Google learn the data center’s interactions.

Past data was already available for analysis from all of the sensors inside a data center:

  • Temperatures
  • Power
  • Pump speeds
  • Setpoints

This data was then used to train a collection of deep neural networks. Optimization focused on Power Usage Effectiveness (PUE), which is the ratio of total building energy to IT energy usage. Two additional neural networks were trained to predict the temperature and pressure over the next hour of data center operation. These predictions made it possible to simulate the actions recommended by the PUE model and confirm that the cooling system would stay within its operating specifications.
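As a rough illustration of that training setup, the sketch below pairs each historical sensor snapshot with the PUE observed one step later, the kind of (features, target) pairs a supervised model would learn from. The field names and schema are invented for this example; the blog does not describe Google’s actual data format.

```python
# Hypothetical sketch: turning historical sensor snapshots into
# (features, target) training pairs for a PUE-prediction model.
# All field names are invented for illustration.

from typing import List, Tuple

def make_training_pairs(snapshots: List[dict]) -> List[Tuple[list, float]]:
    """Pair each snapshot's sensor readings with the PUE seen one step later."""
    pairs = []
    for now, later in zip(snapshots, snapshots[1:]):
        features = [
            now["temp_c"],              # temperature sensor
            now["it_power_kw"],         # IT power draw
            now["pump_speed_rpm"],      # pump speed
            now["chiller_setpoint_c"],  # setpoint
        ]
        pairs.append((features, later["pue"]))
    return pairs
```

With pairs like these, a network learns to predict the PUE that follows a given combination of temperatures, power, pump speeds, and setpoints, which is what lets the system evaluate candidate actions before applying them.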

Here’s a plot showing the PUE value as a function of time, where a lower number is better because it saves power:

When the machine learning (ML) control starts, the PUE drops quickly, reflecting the 40% savings in energy used for cooling.
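The PUE metric tracked in that plot is straightforward to compute from the definition above; the figures in this sketch are hypothetical.

```python
# Power Usage Effectiveness (PUE) as defined above: total facility
# energy divided by IT equipment energy. A PUE of 1.0 would mean every
# joule goes to the servers; the numbers below are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total building energy / IT energy (lower is better)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: 1,200 kWh total with 1,000 kWh going to servers gives a
# PUE of 1.2, i.e. 20% overhead for cooling, power distribution, etc.
print(pue(1200.0, 1000.0))  # 1.2
```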

Summary
Google data centers are becoming even more energy efficient by using machine learning approaches and neural network modeling to reduce power consumption for cooling by 40%.

Read the full blog about Google DeepMind and saving 40% on cooling costs here.
