Join us as we take a deep dive into the data center industry with topics including emerging technologies and industry regulations.
Before we get into the Q&A, let's learn a little bit about our data center expert, Dave Meadows.
Dave has over 25 years of experience in mission-critical technologies and participates in numerous industry committees and panels throughout the year. He is a voting member on several ASHRAE committees and also participates in the AHRI datacom standards committee. He holds multiple US patents for data center cooling devices. Dave earned his BS in Mechanical Engineering from the University of Maryland, Baltimore County, and is a proud graduate of the United States Navy Nuclear Power School. While in the Navy, Dave became a shellback with dolphins and a member of the "Order of the Ditch". Dave has held several key positions at STULZ, including leadership roles on the Engineering and Application Engineering teams. He is currently the Director of Technology at STULZ USA and travels extensively to events and customer sites.
Now that you know a little more about Dave's background, let's dive into the good stuff!
Q: Twenty-five years is a long time to watch a dynamic industry like data centers evolve. What are the biggest changes you have seen since you started in 1999?
Dave: I’m not even sure where to start with that one. The industry has changed in so many ways. I guess the biggest changes have to do with scale. Data centers have grown significantly in size and power needs; they are also hotter and denser in heat load than ever before. That growth has driven significant changes in the cooling equipment: when I started, the largest CRAH we offered was rated at 175 kW; our largest perimeter unit today is rated at over a megawatt of cooling, and our fan walls can be rated even higher. Now we are seeing the advent of the AI-specific data center, designed to support high-watt-density GPU servers.
Q: Other than the equipment itself, what else has changed the data center world in the last 25 years?
Dave: One of the biggest changes I have seen on the cooling side is the introduction and slow adoption of liquid-cooled servers, primarily direct liquid-to-chip and immersion technologies. Liquid cooling is not new; many early mainframes were liquid-cooled, but the fear of water in the data center and the capital expense associated with liquid cooling ensured that air cooling was king. New high-powered chips have watt densities that exceed the capabilities of air cooling and are driving some new data centers to be designed with the flexibility to be air-, liquid-, or hybrid-cooled.
Q: How have data center regulations changed over time?
Dave: I guess the easy answer is that we have them now and we didn’t back then. When I started out, data centers were exempted from the energy efficiency standards and regulations in ASHRAE 90.1 and the International Energy Conservation Code through the process cooling clauses. CRACs had no minimum energy efficiency standard such as EER or SEER. That all changed in 2010, when we first had minimum sensible coefficient of performance (SCOP) requirements in ASHRAE 90.1-2010, followed shortly thereafter by a federal Title 10 requirement for the same. Since then, we have seen new maximum GWP requirements for CRACs and the adoption of the newish ASHRAE 90.4 standard (energy efficiency for data centers) in some states. We have recently moved from the SCOP metric to NsenCOP (net sensible coefficient of performance). Due to the large power consumption of data centers, I see more regulation coming our way.
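(For readers unfamiliar with these metrics: broadly speaking, both SCOP and NsenCOP express how much sensible cooling a unit delivers per unit of electrical power it consumes. As a simplified sketch, NsenCOP = net sensible cooling capacity (kW) ÷ total unit power input (kW), where "net" means the heat added by the unit's own fans is deducted from the measured capacity. The exact test conditions and adjustments are defined in the applicable ASHRAE and AHRI test standards.)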
Q: How is STULZ working to meet the new regulations coming in 2027?
Dave: I believe you are asking about the new GWP requirements. The EPA has mandated that computer room air conditioners use a refrigerant with a GWP of 700 or less by January 1, 2027. The STULZ engineering team is working hard to design four individual DX product lines to ensure compliance. This requires a tremendous amount of design work and is probably the most ambitious design project that I have witnessed here at STULZ. We have chosen R-454B as our refrigerant; it is an A2L refrigerant, has a GWP of 466, and has the thermodynamic efficiency that we need to meet the new Title 10 NsenCOP requirements.
Q: What has changed about the environmental envelope being maintained in data centers today as opposed to what was the norm back in the 2000s? I know temperatures have been rising over time; do you see this as a good thing?
Dave: Over the last several iterations of ASHRAE’s thermal guidelines for data processing environments, the envelope has continued to expand. My sense is that for the time being, we have seen the limit on temperature and moisture content, and we are probably not going to move much further unless there is some disruptive technology in the server world.
Q: What emerging trends in data center cooling have surprised you? Has anything caught you off guard, maybe something that you didn’t anticipate?
Dave: I actually expected liquid cooling, especially direct liquid-to-chip systems, to take off much sooner than they have. Physics and economics tell us that this is the direction we need to go to address the ever-increasing power demand of the industry. I think the comfort level and flexibility associated with air cooling have slowed down the adoption rate to date, but I think we will see wide-scale adoption of liquid cooling in the next few years.
Q: We’ve seen a shift to new technologies. What do you see as the future of liquid cooling? Is it direct-to-chip or immersion?
Dave: I see value in both approaches; I don’t think this needs to be an either/or conversation. I believe that for the North American market, direct liquid-to-chip will be the more widely adopted of the two, at least for the next decade. I can see a future where the choice you make is based more on application and predominant server type than anything else. I think colocation providers will find that direct liquid-to-chip gives them the flexibility to serve their customers, while industrial providers such as crypto miners may lean more toward immersion. That’s just my gut feeling based on what I’m hearing from consulting engineers and customers. STULZ will be prepared to support both technologies with CDUs, heat rejection, and air cooling as required.
Q: AI is a hot topic these days. How do you envision it will continue to impact the data center industry?
Dave: AI is throwing gasoline on an already roaring market. The challenge will not be cooling the large thermal loads or the greatly increased rack watt densities; we have solutions for that in our toolbox now. The real challenge will be finding the required power and transmitting it to the data center. I hope for a more holistic approach to site planning in the future, where every new power plant is surrounded by new data centers. We would greatly reduce electrical line losses and transmission challenges, and we could recycle some of the data center's waste heat at the power plant. We have to be honest with ourselves: going forward, we will be using more data, not less. Planning for that “more data” future is in all our best interests.
We appreciate Dave's insight into the data center industry and, more specifically, the technologies and regulations that directly affect our customers throughout North America.