Modius Data Center Blog

Data Center Economics 101: Cool when it's cheap!

Posted by Mark Harris on Wed, Jul 07, 2010 @ 11:47 AM

OK, I have been doing a bunch of reading about the highly innovative approaches to energy management being tested in places like Arizona. Phoenix, as you can imagine, sees temperature extremes like few other places in the country. (I remember stepping off a plane at Sky Harbor Airport in June 1979 and seeing an air temperature of 118 degrees.) HEAT is a major topic in Phoenix. And as they say, "It's a dry heat." That said, it is a great place for people, academia, and technology. Lots of land, lots of great people, lots of sunshine.


So it was no wonder that a 'master plan' was created when revitalizing the region's economy over the past decade. New headquarters locations, businesses, downtown campuses, and sprawling data centers have all sprung up and are considered some of the best in the nation. (Look at I/O Data Center's North 48th Street facility as an example of a BIG new data center, with HALF A MILLION square feet coming online.)

For the downtown area, an innovative approach to cooling was taken in the 2002 timeframe, at a scale I had not seen before. The local public utility (APS) created a commercial partnership called Northwind to provide cooling to the new sports stadium being built. Traditional approaches to cooling an open-air pro sports stadium of this size in 120-degree heat proved to be an interesting challenge, so innovative new ways of doing so were solicited. The challenge: provide a comfortable environment for tens of thousands of sports fans during ball games played on hot July and August afternoons, and somehow exploit the fact that lots of spare POWER capacity was available in the middle of the night, when kilowatt-hour rates were low, to be prepared for massive cooling needs the next DAY (about 12 hours later). The opportunity: figure out how to spend money on energy 12 hours before it was needed, which really means figuring out how to STORE energy for cooling at this scale effectively. They needed a massive 'energy battery'.
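Just to make the 'buy it at midnight' idea concrete, here is a minimal back-of-envelope sketch in Python. The cooling load, chiller COP, and tariff rates below are purely hypothetical placeholders (the post doesn't quote actual APS/Northwind numbers); only the structure of the arbitrage comes from the story.

```python
# Back-of-envelope sketch of the off-peak arbitrage idea described above.
# All numbers here are hypothetical assumptions for illustration, not APS/Northwind figures.

COOLING_LOAD_KWH_THERMAL = 100_000   # assumed daily cooling demand (thermal kWh)
CHILLER_COP = 3.0                    # assumed coefficient of performance when making chilled water/ice
OFF_PEAK_RATE = 0.05                 # assumed $/kWh at midnight
ON_PEAK_RATE = 0.15                  # assumed $/kWh on a summer afternoon

electric_kwh = COOLING_LOAD_KWH_THERMAL / CHILLER_COP

cost_on_peak = electric_kwh * ON_PEAK_RATE     # cool in real time, at afternoon rates
cost_off_peak = electric_kwh * OFF_PEAK_RATE   # buy the same energy at midnight, store it

print(f"Electric energy needed: {electric_kwh:,.0f} kWh")
print(f"Cooling on-peak:  ${cost_on_peak:,.0f}")
print(f"Cooling off-peak: ${cost_off_peak:,.0f}")
print(f"Savings from shifting the purchase 12 hours earlier: ${cost_on_peak - cost_off_peak:,.0f}")
```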


So, how did they solve this energy management task? ICE. It occurred to their engineers that ICE is a great medium for storing energy: it can be created at any time and used later as demand requires. Ultimately they built a highly efficient ICE plant that each night, when power rates were at their lowest, was able to manufacture 3 MILLION POUNDS of solid ICE. As each subsequent day progressed and business demands required cooling, the ICE absorbed heat from water, and the newly chilled water was distributed as needed. The water was then recaptured and used to make ICE again the next night, in a closed-loop system.
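For a rough sense of scale, ice stores about 144 BTU per pound as latent heat of fusion, so you can estimate what 3 million pounds buys you. The sketch below assumes the charge is drawn down over a 12-hour afternoon; the 3-million-pound figure comes from the story, while the constants and the draw-down window are standard physics and my own assumption, respectively.

```python
# Rough cooling capacity of the nightly ice charge described above.
# The 3,000,000 lb figure comes from the post; the 12-hour draw-down is an assumption.

LATENT_HEAT_BTU_PER_LB = 144      # latent heat of fusion of ice, ~144 BTU/lb
ICE_LB = 3_000_000                # pounds of ice made each night (from the post)
BTU_PER_TON_HOUR = 12_000         # 1 ton of refrigeration = 12,000 BTU/h

storage_btu = ICE_LB * LATENT_HEAT_BTU_PER_LB
ton_hours = storage_btu / BTU_PER_TON_HOUR
avg_tons_over_12h = ton_hours / 12

print(f"Stored cooling: {storage_btu / 1e6:,.0f} million BTU")
print(f"             ~ {ton_hours:,.0f} ton-hours of cooling")
print(f"             ~ {avg_tons_over_12h:,.0f} tons sustained over a 12-hour afternoon")
```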

This approach worked SO well that within the first few years, Northwind had sold out the downtown plant's entire capacity to other commercial properties in the area. It turns out chilled water can be piped around an area like downtown Phoenix at a relatively low cost, and economies of scale play out nicely here.

Who would have thought? Make lots of ICE at midnight when power is cheap, and then use it to cool as needed the next day. And do so on a REALLY GRAND scale, to the tune of MILLIONS of pounds each day. There are actually a number of projects just like this now in operation, including several in New York City and Toronto.

Energy management innovation is key. Look past the traditional. Here, investments were made in well-thought-out new technologies that could capture savings and return value to customers. Everybody wins!

 

Topics: Energy Efficiency, data center monitoring, Cooling-Airflow, Energy Management

ASHRAE raises (and lowers) the bar for Data Center Cooling!

Posted by Mark Harris on Wed, Jun 23, 2010 @ 12:54 PM

It's finally here: the ASHRAE Technical Committee 9.9 has released new recommendations for the temperature and humidity conditions most ideal for data centers.

In a nutshell, the dry-bulb temperature recommendation now extends down to 64.4 degrees F and UP to 80.6 degrees F, and the humidity range is also expanded at both ends.
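If you want to put the new numbers to work in monitoring, a trivial check against the recommended dry-bulb range might look like the sketch below. The 64.4F and 80.6F limits are the ones quoted above; the humidity limits are also expanded, but since exact figures aren't quoted here, they are left out of this sketch, and the sample readings are hypothetical.

```python
# Minimal sketch: flag server-inlet readings against the new ASHRAE TC 9.9
# recommended dry-bulb range quoted above (64.4 F to 80.6 F). Humidity limits
# are also expanded, but no numbers are quoted in the post, so they are omitted.

RECOMMENDED_LOW_F = 64.4
RECOMMENDED_HIGH_F = 80.6

def inlet_status(dry_bulb_f: float) -> str:
    """Classify a dry-bulb inlet temperature against the recommended range."""
    if dry_bulb_f < RECOMMENDED_LOW_F:
        return "below recommended range (over-cooling, wasted energy)"
    if dry_bulb_f > RECOMMENDED_HIGH_F:
        return "above recommended range"
    return "within recommended range"

# Example readings (hypothetical sensor values):
for reading in (62.0, 68.0, 78.5, 82.1):
    print(f"{reading:5.1f} F -> {inlet_status(reading)}")
```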

Both of these are VERY realistic in today's real world. Extending the LOWER limit down to 64.4F eliminates a great deal of the mixing of HOT and COLD air previously required to maintain the old low limit of 68 degrees F. I could never really get a handle on why the 68-degree recommendation was imposed; it seems counter-intuitive that a data center manager who mainly has a heat problem would be required to add heat back into the precious cooling stream... With the lower value, the DC manager will have to do this mixing LESS often. Nice!

Perhaps more important for the majority of data center operators is the official sanction to extend the UPPER limit to 80.6 degrees F. Touché!!!! We all know that IT gear is spec'd well above these figures, and raising data center temperatures by even a single degree makes a significant impact on cooling costs. Immediately apparent is the ability to use economizer technologies for a much higher percentage of the hours each year.
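To illustrate that last point, here is a rough sketch that counts how many hours in a year would allow airside economization under the old upper limit versus the new one. I'm assuming 77 degrees F as a stand-in for the previous upper recommendation, and the hourly temperatures are a synthetic profile rather than real weather data; only the 80.6F figure comes from the new guideline.

```python
import math
import random

# Count economizer-eligible hours at two supply-air limits.
# ASSUMPTIONS: 77 F stands in for the previous upper recommendation, and the
# outdoor temperatures below are a synthetic profile, not real weather data.
OLD_LIMIT_F = 77.0
NEW_LIMIT_F = 80.6

random.seed(42)
# Crude synthetic year: a seasonal sine wave plus daily noise, hourly resolution.
outdoor_f = [
    65 + 20 * math.sin(2 * math.pi * h / 8760) + random.uniform(-10, 10)
    for h in range(8760)
]

old_hours = sum(t <= OLD_LIMIT_F for t in outdoor_f)
new_hours = sum(t <= NEW_LIMIT_F for t in outdoor_f)

print(f"Economizer-eligible hours at {OLD_LIMIT_F} F: {old_hours}")
print(f"Economizer-eligible hours at {NEW_LIMIT_F} F: {new_hours}")
print(f"Additional free-cooling hours gained: {new_hours - old_hours}")
```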

The TC 9.9 guideline also shows some real thought about moisture, with the UPPER and LOWER limits tuned to today's conditions and technologies.

The changes to the relative humidity guideline address the risks associated with electrostatic discharge (too low) and Conductive Anodic Filament (CAF) growth (too high). CAF growth basically occurs in dense PC board laminate dielectrics when tiny filaments of copper spring out due to moisture, sometimes causing semiconductor-like connectivity between adjacent routes and vias (holes).

 

(Here is some light reading on CAF:  http://www.parkelectro.com/parkelectro/images/CAF%20Article.pdf)

So what does this all mean to you??? It means that operating a data center using 'best practices' as recommended by ASHRAE will be much more manageable and potentially much more economical. We no longer have to 'baby' the IT gear and treat it with kid gloves. Intel, Seagate, Infineon, and a slew of other IT component makers have gone to great lengths to design their component-level devices to work hard in a wide range of environments, and by any analysis we have barely even approached those limits. We have played it very safe for a very long time...

We can now feel empowered to stretch a bit: push a little faster, a little deeper, and with a bit less rigor about the environment. A little common sense goes a long way...

Topics: Energy Efficiency, Cooling-Airflow

How to Win the Shell Game? Don't Play It!

Posted by Mark Harris on Wed, Jun 09, 2010 @ 02:19 PM

So there I was, sitting in New York City a couple of weeks ago at The 451 Group's Uptime Institute Symposium, spending a little time listening to Dean Nelson, the Sr. Director of eBay's Data Center Services. He spoke about what eBay is doing with its new Salt Lake City data center and how it was paid for through the company's active cost-savings initiatives. It sounds like the kind of data center we all dream about, with a management structure that understands a long-term winning strategy...

One of the most intriguing comments he made was about who pays the bill for power. Apparently, as soon as eBay moved the cost of power into the budget managed by the CIO, decisions were made in a much different manner. In fact, after the power bill was added to the CIO's bottom line, he immediately ramped up his efforts to reduce power consumption. Surprising? Not really.

So the question bounced back to the top of my brain stack: why don't we all just bite the bullet and add the power bill to the CIO's budget? Wouldn't that create the same catalyst for change that eBay saw? Wouldn't that shift efforts to reduce carbon, reduce cost, and become a Green corporate citizen into 5th gear everywhere? IT WOULD!!!! Oh sure, there are some logistics, measurement, and data center monitoring issues, and some economic G/L mechanics involved in implementing the process, but for heaven's sake, we should encourage the proper behavior and stop hiding the problem. Hiding the power bill as a 'burdened' cost, buried...

 

Frankly, it is very much like the shell game: keep moving the money around so that no one knows where the issue really belongs. Sure, the CEO and CFO 'own' the power bills, but wouldn't it make sense to push the responsibility down a bit? Down to the teams that can actually DO SOMETHING CONSTRUCTIVE to lower these costs? Very few CIOs today pay (or are even aware of the detail of) the power bills for their data centers. My suggestion: follow eBay's lead, shift the G/L line items to the CIO, and watch the rapid progress that will ensue... (And when this higher level of interest takes hold, Modius will be there to help establish metric and measurement baselines by which to steer these cost improvements in very tangible ways!)

Topics: Energy Efficiency, Data Center Metrics, Data Center PUE, PUE
