As the demand for data consumption and storage surges, so does the pace at which data centers, computer rooms, and telecom rooms are planned and constructed. While cooling requirements are a primary consideration, too often the wrong cooling systems are implemented. This costly mistake can come from not understanding the difference between comfort cooling and precision cooling.
Comfort cooling systems are engineered to manage temperature and humidity for the primary purpose of creating a comfortable environment for people. These systems typically serve facilities with a moderate flow of in-and-out traffic, such as office buildings. When people are not present, comfort cooling systems, such as building HVAC systems, can be dialed down to reduce energy consumption and cost. Thus, they are designed to operate intermittently throughout the day, and seasonally throughout the year, as temperature and humidity levels fluctuate. While these systems can effectively maintain acceptable conditions for people, they are not designed to regulate humidity and temperature within precise margins, as precision computer room air conditioners (CRACs) are. Also, compared to precision cooling systems, comfort cooling systems are designed to offset a lower heat load density: the heat generated by people, lighting, solar loads, and office equipment is significantly less than that produced by the computer servers, switchgear, and uninterruptible power supplies that are the primary heat sources in today’s mission critical facilities.
Precision cooling systems, on the other hand, are specifically engineered for facilities that require year-round, constant cooling, precise humidity control, and a higher cooling capacity per square foot. This type of mission critical environment is found in data centers, computer rooms, and telecom spaces. These sensitive environments require reliable operation no matter the time of day and no matter the season. Precision cooling systems are designed for 24-hour-a-day, 365-day-a-year operation to offset the higher density heat loads produced by IT equipment.
Meeting Data Center Demands
It is critical, as well as cost-effective, to properly control the temperature and humidity of mission critical spaces. When a comfort air conditioning system fails, people may face temporary discomfort; when cooling fails in a data center, the result can be equipment failure and shutdown. Without proper cooling equipment that controls temperature and humidity precisely, sensitive IT equipment will compromise uptime, reliability, and efficiency, and ultimately affect profitability. In today’s world of e-commerce, downtime is the antithesis of black ink.
Challenged by such precise needs, engineers have been pushed to research and develop reliable, energy-efficient cooling systems. And, over the years, there have been many changes in requirements and capabilities, such as an increase in the density of new IT equipment and the resulting higher heat loads. This significantly changes the demands faced by the cooling system. To better understand these demands, it is important to note that there are two types of cooling: latent and sensible.
Latent Cooling vs Sensible Cooling
In short, sensible cooling removes heat energy, lowering the dry bulb temperature of the controlled space, while latent cooling primarily removes moisture from the air. Latent cooling is an important component of comfort cooling applications, which are designed to balance temperature and humidity for maximum comfort. Comfort cooling systems typically have a sensible heat ratio (SHR) of 0.60 to 0.80, meaning that 60 to 80 percent of their capacity is dedicated to lowering temperature and 20 to 40 percent to lowering humidity. This is fitting for a room or building with a moderate number of people, but it is not fit to cool a room full of electronics, which may require no moisture stripping at all to maintain the proper conditions. Unwanted moisture stripping could create the need for humidification, which would lead to higher operating costs.
Sensible cooling – strictly removing heat – is needed in spaces with high density heat loads and little need for dehumidification. Precision cooling systems are designed to achieve an SHR of 0.90 to 1.0, with 90 to 100 percent of their capacity devoted to lowering temperature and only 0 to 10 percent to lowering humidity. In data room applications, temperature swings and high or low humidity can have negative effects on the room’s electronics and their intended output. High, low, or fluctuating temperatures are capable of corrupting and even shutting down entire data systems. For precision air systems with an SHR lower than 1 on a design day, a humidifier is often included to put moisture back into the room.
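The SHR arithmetic above can be sketched in a few lines of code. This is an illustrative sketch only: the capacity figures are hypothetical numbers chosen for the example, not values from any particular unit.

```python
# Sketch: splitting a cooling unit's total capacity into sensible and
# latent portions using its sensible heat ratio.
# SHR = sensible capacity / total capacity.

def split_capacity(total_kw: float, shr: float) -> tuple[float, float]:
    """Return (sensible_kw, latent_kw) for a unit with the given SHR."""
    sensible = total_kw * shr
    latent = total_kw - sensible
    return sensible, latent

# Hypothetical 100 kW comfort unit at SHR 0.70:
# roughly 70 kW removes heat, 30 kW strips moisture.
comfort = split_capacity(100.0, 0.70)

# Hypothetical 100 kW precision unit at SHR 0.95:
# roughly 95 kW removes heat, only 5 kW strips moisture.
precision = split_capacity(100.0, 0.95)

print("comfort (sensible, latent):", comfort)
print("precision (sensible, latent):", precision)
```

The point the numbers make: at the same nameplate capacity, the comfort unit delivers far less of the heat removal a data room actually needs, and spends the rest stripping moisture the room never produced.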
The Bottom Line
If a data center’s environment requires 90 to 100 percent effort to reduce temperature, and comfort cooling systems provide only 60 to 80 percent, you would need to buy more comfort cooling capacity to do the same job as a precision air cooling system. Also, because comfort cooling systems devote 20 to 40 percent of their capacity to pulling down humidity, you would need to invest in a humidification system to put moisture back into the air, to avoid the buildup of static electricity, which can lead to electrostatic discharge failures of the IT equipment as well as other issues caused by low humidity.
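The oversizing penalty described above follows directly from the SHR: to deliver a given sensible load, a unit must be sized at that load divided by its SHR. The sketch below uses assumed, illustrative SHR values (0.70 comfort, 0.95 precision) and a hypothetical 100 kW IT heat load.

```python
# Sketch: total unit capacity required to deliver a given sensible
# (heat-removal) load, for units with different sensible heat ratios.

def total_capacity_needed(sensible_load_kw: float, shr: float) -> float:
    """Total nameplate capacity required to meet a sensible load at a given SHR."""
    return sensible_load_kw / shr

load_kw = 100.0  # hypothetical sensible heat load from IT equipment

precision_kw = total_capacity_needed(load_kw, 0.95)  # roughly 105 kW
comfort_kw = total_capacity_needed(load_kw, 0.70)    # roughly 143 kW

print(f"precision unit sizing: {precision_kw:.1f} kW")
print(f"comfort unit sizing:   {comfort_kw:.1f} kW")
```

Under these assumptions the comfort system must be oversized by roughly a third relative to the precision system, before accounting for the added cost of re-humidifying the air it dried out.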
At first glance, it might seem as though comfort cooling systems designed for buildings can be repurposed for other applications quickly and cost-efficiently. However, the needs of data centers are different. Critical IT equipment may experience a variety of issues if it is not situated in a properly maintained environment. Although a precision cooling solution has a higher upfront cost, the OPEX savings add up in the end.
Author: Dave Meadows
Dave Meadows is the Director of Technology at STULZ USA. He has over 20 years of experience in mission critical technologies and participates in numerous committees and panels throughout the year. Dave has a BS in Mechanical Engineering from the University of Maryland, Baltimore County. He is also a graduate of the United States Navy Nuclear Power School. Dave is a voting member on these ASHRAE committees: 1. ASHRAE Technical Committee 9.9: “Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment”. 2. ASHRAE SSPC 90.4: “Energy Standard for Data Centers”. 3. ASHRAE SPC 127: “Method of Testing for Rating Computer and Data Processing Room Unitary Air Conditioners”.