Modius Data Center Blog

Data Center Economics 101: Cool when it's cheap!

Posted by Mark Harris on Wed, Jul 07, 2010 @ 11:47 AM

OK, I have been doing a bunch of reading about the highly innovative approaches to energy management being tested in places like Arizona. Phoenix, as you can imagine, sees temperature extremes like few other places in the country. (I remember stepping off a plane at Sky Harbor Airport in June 1979 into an air temperature of 118 degrees.) HEAT is a major topic in Phoenix. And as they say, "It's a dry heat." That said, it is a great place for people, academia, and technology. Lots of land, lots of great people, lots of sunshine.


So it was no wonder that a 'master plan' was created when Phoenix set out to revitalize its economy over the past decade. New headquarters locations, businesses, downtown campuses, and sprawling data centers have all sprung up and are considered some of the best in the nation. (Look at I/O Data Center's North 48th Street facility as an example of a BIG new data center, with HALF A MILLION square feet coming online.)

For the downtown area, an innovative approach to cooling was taken around 2002, at a scale I had not seen before. The local public utility (APS) created a commercial partnership called Northwind to provide cooling to the new sports stadium being built. Traditional approaches to cooling an open-air pro sports stadium of this size in 120-degree heat proved to be an interesting challenge, so innovative new ways of doing so were solicited. The challenge: provide a comfortable environment for tens of thousands of sports fans during ball games played on hot July and August afternoons. Somehow exploit the fact that lots of extra POWER capacity was available in the middle of the night when kilowatt rates were low, and be prepared for massive cooling needs during the next DAY (about 12 hours later). The opportunity: figure out how to spend money on energy 12 hours before it was needed, and how to effectively STORE energy for cooling at this scale. They needed a massive 'energy battery'.
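Just to put some rough numbers on the idea, here is a tiny Python sketch of the rate arbitrage. The rates and the daily cooling load below are purely illustrative assumptions, not APS or Northwind figures.

    # Hypothetical illustration of the off-peak "energy battery" idea.
    # Rates and cooling load are made-up assumptions for illustration only.
    OFF_PEAK_RATE = 0.06          # $/kWh overnight (assumed)
    ON_PEAK_RATE = 0.14           # $/kWh on a hot summer afternoon (assumed)
    COOLING_ENERGY_KWH = 50_000   # electricity for one day's cooling load (assumed)

    cost_if_chilled_on_demand = COOLING_ENERGY_KWH * ON_PEAK_RATE
    cost_if_shifted_overnight = COOLING_ENERGY_KWH * OFF_PEAK_RATE

    print(f"Chill during the afternoon peak: ${cost_if_chilled_on_demand:,.0f}")
    print(f"Make ice overnight instead:      ${cost_if_shifted_overnight:,.0f}")
    print(f"Savings per day:                 ${cost_if_chilled_on_demand - cost_if_shifted_overnight:,.0f}")

Same kilowatt-hours, bought 12 hours earlier at the cheaper rate. That is the whole opportunity in a nutshell.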


So, how did they solve this energy management task? ICE. It occurred to their engineers that ICE is a great medium for storing energy. It can be created at any time and used later as demand requires. Ultimately they built a highly efficient ICE plant that each night, when power rates were at their lowest, was able to manufacture 3 MILLION POUNDS of solid ICE. As each subsequent day progressed and business demands required cooling, the ICE was used to absorb heat from water, and the newly chilled water was distributed as needed. The water was then recaptured and used to create ICE the next night, in a closed-loop system.
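As a back-of-the-envelope check (using standard physical constants, not Northwind's published figures), 3 million pounds of ice is an enormous amount of banked cooling:

    # Back-of-the-envelope capacity of the ice "battery" described above.
    # The 3,000,000 lb figure is from the post; the constants are standard.
    ICE_LB = 3_000_000
    LATENT_HEAT_BTU_PER_LB = 144   # heat absorbed as ice melts at 32 F
    TON_HOUR_BTU = 12_000          # one ton of refrigeration for one hour

    stored_btu = ICE_LB * LATENT_HEAT_BTU_PER_LB
    ton_hours = stored_btu / TON_HOUR_BTU

    print(f"Stored cooling: {stored_btu:,.0f} BTU (~{ton_hours:,.0f} ton-hours)")

That works out to roughly 432 million BTU, or about 36,000 ton-hours of cooling stored every single night, before accounting for distribution and melt losses.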

This approach worked SO well that within the first few years, Northwind had sold the downtown plant's entire cooling capacity to other commercial properties in the area. It turns out that chilled water can be transported at relatively low cost in an area like downtown Phoenix. Economies of scale play out nicely here.

Who would have thought? Make lots of ICE at midnight when power is cheap, and then use it to cool as needed the next day. And do so on a REALLY GRAND scale, to the tune of MILLIONS of pounds each day. In fact, a number of projects just like this are now in operation, including several in New York City and Toronto.

Energy management innovation is key. Look past the traditional. Investments were made in well-thought-out new technologies that could capture savings and return value to customers. Everybody Wins!

 

Topics: Energy Efficiency, data center monitoring, Cooling-Airflow, Energy Management

Why has it been so hard to deploy Data Center Monitoring?

Posted by Mark Harris on Tue, Jul 06, 2010 @ 09:24 AM

For those of you following my writings, you'll know that I've "been around the block" a few times in the IT world. In 30 years I've seen a lot of technologies come and go. Technologies always seem to sound great at first (to some people), but how they play out over time is a measure of capabilities, costs, and timing, and, more importantly, a bit of 'chance'. Sometimes it just doesn't add up or make any sense, and yet certain solutions thrive while others fail. Data Center Management has always been considered an "ART" rather than a science. Emotions, previous experiences, and personal focal points drive investments within the data center. The ART of data center design varies widely from company to company.


That background is a good point of reference for the task at hand today: explaining just WHY it has been so hard to deploy data center monitoring that includes both IT gear AND facilities equipment in the standard IT planning and performance processes. As it turns out, IT gear vendors have done a fairly good job of standardizing management protocols and access mechanisms, but there have simply been too many incompatible facilities gear management systems over the years. Many are still very proprietary and/or undocumented or poorly documented. Additionally, the equipment manufacturers have been in NO hurry to make their devices communicate any better with the outside world. "Their equipment, their tools" has been the way of life for facilities gear vendors. (I call it "Vendor-Lock".)
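To make that concrete, here is a minimal sketch of what a monitoring layer ends up doing: writing a per-vendor adapter for every incompatible device and normalizing the readings into one common shape. The device payloads and field names below are hypothetical, not real vendor formats.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # One common schema for every reading, regardless of the source device.
    @dataclass
    class Reading:
        device: str
        metric: str       # e.g. "supply_temp_c"
        value: float
        timestamp: datetime

    def from_it_gear(row: dict) -> Reading:
        # IT gear: relatively standardized poll, already in Celsius (hypothetical payload)
        return Reading(row["sysName"], "supply_temp_c",
                       float(row["tempC"]), datetime.now(timezone.utc))

    def from_crac_unit(raw: str) -> Reading:
        # Facilities gear: proprietary text dump in Fahrenheit, needs parsing (hypothetical format)
        name, temp_f = raw.split("|")
        return Reading(name, "supply_temp_c",
                       (float(temp_f) - 32) * 5 / 9, datetime.now(timezone.utc))

    readings = [
        from_it_gear({"sysName": "rack12-switch", "tempC": "24.5"}),
        from_crac_unit("CRAC-03|68.0"),
    ]
    for r in readings:
        print(f"{r.device}: {r.metric} = {r.value:.1f}")

Multiply that adapter effort by dozens of proprietary, poorly documented facilities systems and you can see why so many monitoring projects stalled.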

Ironically, these "facilities" sub-systems (like power and cooling) would likely be considered today THE most mission-critical part of running IT cost-effectively. We need to find an answer....

Interoperability

So, we have two factors to consider:

1. Data Center Design is considered by many to be an ART rather than a science. Depending on the leadership, differing levels of emphasis are placed on different technology areas.

2. Data Center Monitoring has historically been viewed as difficult to deploy across the full field of devices, and the resulting limited reports and views as insignificant, with little impact on the bigger picture.

Well, times have changed. Senior leadership across corporations is asking the IT organization to behave more like a science. The days of 'art' are drawing to a close. Accountability, effectiveness, ROI, and efficiency are all part of the new daily mantra within IT. Management needs repeatable, defendable investments that can withstand any challenge and yet allow for any change.

Additionally, with the cost of the energy a piece of gear consumes over 3 years now surpassing its initial acquisition price, the most innovative Data Center Managers are taking a fresh look at deploying active, strategic Data Center Monitoring as part of their baseline efforts. How else would a data center manager know where to make new investments in energy efficiency technologies without some means of establishing baselines and continuously measuring results? How would they know if they succeeded?
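Here is a simple illustration of that "baseline, then measure" idea. PUE (total facility power divided by IT power) is used only as one common example of an efficiency metric, and the numbers are invented for the sketch.

    # Sketch of "baseline, then measure": without a recorded baseline there is
    # no way to show whether an efficiency investment actually worked.
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    baseline = pue(total_facility_kw=1800.0, it_load_kw=1000.0)   # before the project (assumed)
    current  = pue(total_facility_kw=1550.0, it_load_kw=1000.0)   # after, same IT load (assumed)

    print(f"Baseline PUE: {baseline:.2f}")
    print(f"Current PUE:  {current:.2f}")
    print(f"Improvement:  {100 * (baseline - current) / baseline:.1f}%")

The monitoring system's job is to keep feeding those two numbers continuously, so the improvement claim can withstand any challenge.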

Data Center Monitoring can be easily deployed today, accounting for all of a company's geographically distributed sites, leveraging the existing instrumentation that has shipped in newly purchased IT gear for the past few years, and topping it off with a whole slew of wireless sensor systems to augment it all.

Today, integrated data center monitoring across IT gear and facilities equipment is not only possible, but quite EASY to deploy for any company that chooses to do so.

You just Gotta-Wanna!

Topics: data center monitoring, data center analysis
