Modius Data Center Blog

Data Center Cooling Computational Fluid Dynamics… on Steroids

Posted by Donald Klein on Mon, Sep 27, 2010 @ 03:37 PM

Computational Fluid Dynamics (CFD) software provides modeling of data center airflow and quick identification of hot spots. A CFD system's three-dimensional, multi-colored thermal maps are downright sexy, and, if you'll pardon the pun, extremely cool. When changes are made to the data center intentionally, CFD analysis can be repeated to detect the introduction of new thermal problems. So far, so good.

But what happens when the data center changes unintentionally? Today, CFD users need real-time thermal imaging of hot spots that could result from contingencies like equipment failure, airflow blockage, or cabinet overloading. Furthermore, users want more than just problem visualization; they want recommendations for problem mitigation. They want a CFD model with some muscle: in effect, a CFD on steroids.

 

What is a CFD on Steroids, and more importantly, why do we need it?

The CFD on steroids works in real time by collecting and synthesizing all available sensor data within the data center. It leverages wireless, wired, server-based, and return/discharge air-temperature readings to determine not only the immediate problem but also its immediate impact. This high-fidelity monitoring system renders a thermal topology map and sends immediate notification to operations personnel stating what temperature has been registered, where it is located, and that urgent action is needed.
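As a rough illustration of that alerting flow (a minimal sketch, not the Modius implementation; the sample readings and the notify_operations hook are hypothetical), the logic boils down to aggregating readings per location and paging operations when any reading crosses a threshold:

```python
# Illustrative sketch only: aggregate temperature readings from multiple
# sensor sources and raise an alert for any location above a threshold.
# The readings and the notification hook are hypothetical placeholders.
from dataclasses import dataclass

ALERT_THRESHOLD_F = 80.6  # e.g., the ASHRAE recommended upper limit

@dataclass
class Reading:
    location: str   # e.g., "Row B, Rack 12, inlet"
    source: str     # wireless, wired, server-based, return/discharge
    temp_f: float

def find_hot_spots(readings, threshold=ALERT_THRESHOLD_F):
    """Return the hottest reading per location that exceeds the threshold."""
    hottest = {}
    for r in readings:
        if r.temp_f > threshold:
            current = hottest.get(r.location)
            if current is None or r.temp_f > current.temp_f:
                hottest[r.location] = r
    return list(hottest.values())

def notify_operations(hot_spots):
    """Hypothetical notification hook: state what, where, and that action is needed."""
    for r in hot_spots:
        print(f"URGENT: {r.temp_f:.1f} F at {r.location} (via {r.source}) -- action required")

readings = [
    Reading("Row B, Rack 12, inlet", "wireless", 84.2),
    Reading("Row B, Rack 12, inlet", "server-based", 86.0),
    Reading("Row C, Rack 03, inlet", "wired", 72.5),
]
notify_operations(find_hot_spots(readings))
```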

Really pumping you up

The next level of growth in temperature control is temperature-based reaction. Data center operators are now looking not only at identification but also at action automation through demand-driven cooling delivered directly to the cabinet. By leveraging Variable Frequency Drives (VFDs) in cooling units, remote commands can adjust cooling at the point of demand. This can reduce power costs substantially and can prevent a cabinet meltdown. Automated actions can be taken with the existing Building Management System (BMS) without having to rip out and replace the entire system. Integration of CFD can make the BMS smarter: processing and synthesizing a vast array of data, encoding commands in building-management language, and passing reliable information to the appropriate destination so that the secure communication infrastructure can be fully maintained. Customers are already leveraging Modius OpenData to pump up their BMS, make the most of their current infrastructure, prevent cooling-related outages, and save money on cooling-related power.
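To make the demand-driven idea concrete, here is a minimal sketch of the control loop, assuming a simple proportional rule; the setpoint, gain, and set_vfd_speed() call are illustrative placeholders, not a real BMS or BACnet API:

```python
# Rough sketch of demand-driven cooling: map rack inlet temperature to a
# CRAH fan VFD speed command. set_vfd_speed() stands in for whatever
# BMS/BACnet write the site actually uses; it is not a real API.

TARGET_INLET_F = 75.0    # desired rack inlet temperature (assumed setpoint)
MIN_SPEED_PCT = 30.0     # keep some minimum airflow
MAX_SPEED_PCT = 100.0
GAIN_PCT_PER_DEG = 10.0  # proportional gain: % speed per degree F of error

def vfd_speed_for(inlet_temp_f, current_speed_pct):
    """Simple proportional adjustment toward the target inlet temperature."""
    error = inlet_temp_f - TARGET_INLET_F
    new_speed = current_speed_pct + GAIN_PCT_PER_DEG * error
    return max(MIN_SPEED_PCT, min(MAX_SPEED_PCT, new_speed))

def set_vfd_speed(unit_id, speed_pct):
    # Placeholder for the actual BMS write (e.g., a BACnet analog output).
    print(f"Commanding CRAH {unit_id} fan to {speed_pct:.0f}% speed")

# Example: the rack inlet is running hot, so the fan ramps up from 60% to 90%.
set_vfd_speed("CRAH-07", vfd_speed_for(inlet_temp_f=78.0, current_speed_pct=60.0))
```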

Topics: data center monitoring, data center cooling, data center analysis, data center management, BACnet, data center temperature sensors, Cooling-Airflow, Energy Analysis

Data Center Economics 101: Cool when it's cheap!

Posted by Mark Harris on Wed, Jul 07, 2010 @ 11:47 AM

OK, I have been doing a bunch of reading about the highly innovative approaches to energy management being tested in places like Arizona. Phoenix, as you can imagine, sees temperature extremes like few other places in the country. (I remember stepping off a plane at Sky Harbor Airport in June 1979 and seeing an air temperature of 118 degrees.) HEAT is a major topic in Phoenix. And as they say, "It's a dry heat." That said, it is a great place for people, academia, and technology. Lots of land, lots of great people, lots of sunshine.


So it was no wonder that a 'master plan' was created when revitalizing the city's economy over the past decade. New headquarters locations, businesses, downtown campuses, and sprawling data centers have all sprung up and are considered some of the best in the nation. (Look at I/O Data Center's North 48th Street facility as an example of a BIG new data center, with HALF A MILLION square feet coming online.)

For the downtown area, an innovative approach to cooling was taken around 2002, one I had not seen at this scale before. The local public utility (APS) created a commercial partnership called Northwind to provide cooling to the new sports stadium being built. Traditional approaches to cooling an open-air pro sports stadium of this size in 120-degree heat proved to be an interesting challenge, so innovative new ways of doing so were solicited. The challenge: provide a comfortable environment for tens of thousands of sports fans during a ball game played on a hot July or August afternoon. Somehow exploit the fact that lots of extra POWER capacity was widely available in the middle of the night when kilowatt rates were low, and be prepared for massive cooling needs the next DAY (about 12 hours later). The opportunity: figure out how to buy energy 12 hours before it was needed, and how to effectively STORE the energy required for cooling at this scale. They needed a massive 'energy battery'.


So, how did they solve this energy management task? ICE. It occurred to their engineers that ICE is a great medium for storing energy. It can be created at any time and used in the future as demand requires. Ultimately they built a highly efficient ICE plant that each night manufactured 3 MILLION POUNDS of solid ICE when power rates were at their lowest. As each subsequent day progressed and business demands required cooling, the ICE absorbed heat from water, and the newly chilled water was distributed as needed. The water was then recaptured and used to create ICE again the next night in a closed-loop system.
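For a rough sense of scale (a back-of-the-envelope estimate of my own, not a published Northwind figure), the latent heat of melting ice puts 3 million pounds at roughly 126 MWh of stored cooling, or about 36,000 ton-hours:

```python
# Back-of-the-envelope estimate of the cooling stored in 3 million pounds of ice.
# Uses the latent heat of fusion only (ignores sensible heat of the chilled water).
LB_TO_KG = 0.4536
LATENT_HEAT_KJ_PER_KG = 334   # heat absorbed as ice melts
KWH_PER_KJ = 1 / 3600.0
KWH_PER_TON_HOUR = 3.517      # 1 ton of refrigeration = 12,000 BTU/h

ice_kg = 3_000_000 * LB_TO_KG                       # roughly 1.36 million kg
cooling_kwh = ice_kg * LATENT_HEAT_KJ_PER_KG * KWH_PER_KJ
print(f"~{cooling_kwh/1000:.0f} MWh of cooling")          # roughly 126 MWh thermal
print(f"~{cooling_kwh/KWH_PER_TON_HOUR:,.0f} ton-hours")  # roughly 36,000 ton-hours
```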

This approach worked SO well that within the first few years, Northwind had sold the downtown plant's entire cooling capacity to other commercial properties in the area. It turns out it really is quite easy to transport this chilled water at relatively low cost in an area such as downtown Phoenix. Economies of scale play out nicely here.

Who would have thought? Make lots of ICE at midnight when power is cheap, then use it to cool as needed the next day, and do so on a REALLY GRAND scale, to the tune of MILLIONS of pounds each day. There are actually a number of projects just like this now in operation, including several in New York City and Toronto.

Energy management innovation is key. Look past the traditional. Investments were made in well-thought-out new technologies that could capture savings and return value to customers. Everybody wins!

 

Topics: Energy Efficiency, data center monitoring, Cooling-Airflow, Energy Management

ASHRAE raises (and lowers) the bar for Data Center Cooling!

Posted by Mark Harris on Wed, Jun 23, 2010 @ 12:54 PM

It's finally here: ASHRAE Technical Committee 9.9 has released new recommendations for the ideal temperature and humidity ranges for data centers.

In a nutshell, the recommended dry-bulb temperature range now extends DOWN to 64.4 degrees F and UP to 80.6 degrees F, and the humidity range is also expanded at both ends.

Both of these are VERY realistic in today's real world. Extending the LOWER limit down to 64.4 degrees F eliminates much of the HOT and COLD air mixing previously required to maintain the old low limit of 68 degrees F. I could never really get a handle on why the 68-degree recommendation was imposed; it seems counter-intuitive that a data center manager who mainly has a heat problem should be required to add heat back into the precious cooling stream... With the lower value, the DC manager will have to do this mixing LESS often. Nice!

Perhaps more important for the majority of data center operators is the official sanction to extend the UPPER limit to 80.6 degrees F. Touché! We all know that IT gear is spec'd well above these figures, and raising data center temperatures by even a single degree has a significant impact on cooling costs. Immediately apparent is the ability to use economizer technologies for a much higher percentage of the hours each year.
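As a quick, illustrative way to see what the wider band buys you (my own sketch, not part of the TC 9.9 guideline; humidity limits and supply-air margins are ignored, and the sample data is made up), you can count how many hours outdoor dry-bulb temperature already falls inside the recommended 64.4-80.6 degree F band as a rough proxy for air-side economizer opportunity:

```python
# Illustrative only: estimate how many hours outside air falls within the new
# ASHRAE recommended dry-bulb band (64.4-80.6 F), a rough proxy for air-side
# economizer opportunity. Humidity limits are ignored in this sketch.
RECOMMENDED_LOW_F = 64.4
RECOMMENDED_HIGH_F = 80.6

def economizer_hours(hourly_outdoor_temps_f):
    """Count hours where outdoor dry-bulb temperature sits inside the band."""
    return sum(RECOMMENDED_LOW_F <= t <= RECOMMENDED_HIGH_F
               for t in hourly_outdoor_temps_f)

# Hypothetical example: one week (168 hours) of mild weather with a daily swing.
sample_week = [58 + (h % 24) for h in range(168)]   # 58-81 F over each day
print(f"{economizer_hours(sample_week)} of 168 hours inside the recommended band")
```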

The TC 9.9 guideline also shows some real thought for Moisture, with the UPPER and LOWER limits tuned to today's conditions and technologies.

The changes to the relative humidity guideline address the risks associated with electrostatic discharge (too low) and Conductive Anodic Filament (CAF) growth (too high). CAF basically occurs in dense PC-board laminate dielectrics when tiny filaments of copper grow out due to moisture and sometimes cause semiconductor-like connectivity between adjacent traces and vias (holes).

 

(Here is some light reading on CAF:  http://www.parkelectro.com/parkelectro/images/CAF%20Article.pdf)

So what does this all mean to you? It means that the operation of a data center using the 'best practices' recommended by ASHRAE will be much more manageable and potentially much more economical. We no longer have to 'baby' the IT gear and treat it with kid gloves. Intel, Seagate, Infineon, and a slew of other IT component makers have gone to great lengths to design their component-level devices to work hard in a wide range of environments, and we have barely even approached those limits by any analysis. We have played it very safe for a very long time...

We can now feel empowered to stretch a bit: push a little faster, a little deeper, and with a bit less rigor in environmental control. A little common sense goes a long way...

Topics: Energy Efficiency, Cooling-Airflow
