Modius Data Center Blog

How much better can it get? Data Center Energy Efficiency

Posted by Mark Harris on Fri, Jun 04, 2010 @ 11:34 AM

I was flipping through the 2007 EPA report to Congress ("Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431"), whose underlying analysis was led by Jonathan Koomey, and on page 10 came across a very easy-to-read but impactful diagram. It provides some great insight into the future of the IT industry, and it can be discussed in terms of individual end users as well.

I suspect that this chart could be applied, more or less, to ANY individual company in its quest for energy efficiency. If there is some level of 'greening' at play in a corporation, then this chart can be a crystal ball into its five possible futures.

The diagram shows the varying impacts on energy consumption, going (starting at the top) from taking NO NEW ACTION all the way through DOING EVERYTHING POSSIBLE. I would suggest that today most companies are somewhere approaching the "Improved Operations Scenario". If you look above, you'll see this green curve essentially takes the overhead out of operations, but it does very little to have any significant long-term effect on the SLOPE of the curve.

In the chart, the "State of the Art Scenerio" is a good depiction of what is POSSIBLE (expected) if all business processes are tuned and all equipment is refreshed with the latest. This would create a real-time infrastructure ("RTI" as defined by Gartner) that self-tunes itself based upon demand. Most importantly... It would also lower the most basic cost per transaction. A CPU cycle would actually cost less!

These are very exciting times ahead...

Topics: Data-Center-Best-Practices, Energy Efficiency, data center monitoring, data center analysis, data center energy monitoring, Energy-Efficiency-and-Sustainability, data center energy efficiency

Zombies are afoot! Data Center Monitoring is the weapon!

Posted by Mark Harris on Wed, May 05, 2010 @ 07:00 AM

Having walked through my share of data centers, it is always interesting to see such a heterogeneous amalgamation of IT gear that has accumulated since the data center itself was commissioned. While every data center designer and manager starts out with wild, fanciful ideals about the pristine architecture of the data center, its actual complexion changes dramatically over time, and we are left with rows and rows of assorted gear, all happily consuming power and blinking LEDs, with perhaps 20%-30% of these devices no longer in use... Zombies abound!

Perhaps Zombies is a harsh word, but the concept is the same. A non-trivial portion of the devices in the data center are powered, generating heat, and consuming precious IP addresses, yet performing NO actual work. Why? Their intended application changed over time, the project was never completed, the original workload was shifted elsewhere, a test bed was never dismantled... a dozen other reasons exist for large quantities of machines entering the Zombie realm. But there we have it: machine after machine in the living-dead state, and WORSE THAN THAT, we do not have enough information about these devices to TURN THEM OFF. So they sit, consuming resources in the safety of the data center, avoiding decommissioning... And here's the rub: a server just idling along, running nothing but the operating system, consumes 60%-70% of its total power before any workload is applied. A server doing NO work is wasting almost two-thirds of its maximum rated power! Note to self: this is a real issue and not something we can choose to overlook any longer. With the price of power at record highs, and rising by 7% per year as far as we can see into the future, WE HAVE to find these Zombies and kill them.
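
To make the waste concrete, here is a quick calculation using the figures above (idle draw at roughly two-thirds of rated power, power costs rising about 7% per year). The rated wattage and the starting electricity price are assumptions for illustration only.

```python
# Rough annual waste from ONE zombie server, using the figures above:
# idle draw at ~65% of rated power, and power costs rising ~7% per year.
# The rated wattage and starting electricity price are assumptions, and
# the result excludes the extra cooling overhead the zombie also creates.

rated_watts = 500        # assumed nameplate power of the server (W)
idle_fraction = 0.65     # roughly two-thirds of rated power drawn while idle
price_per_kwh = 0.10     # assumed starting electricity price ($/kWh)
hours_per_year = 24 * 365

idle_kwh_per_year = rated_watts * idle_fraction / 1000.0 * hours_per_year

for year in range(1, 4):
    cost = idle_kwh_per_year * price_per_kwh
    print(f"Year {year}: ~{idle_kwh_per_year:,.0f} kWh wasted, ~${cost:,.0f}")
    price_per_kwh *= 1.07   # the ~7% annual increase in power cost
```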

How can we reclaim the resources being consumed by these Zombies? We have to build designs that intelligently monitor power consumption and proactively, continually test whether those resources are efficiently doing work. We need to observe power consumption either directly, using embedded sensors (such as those in ENERGY STAR-compliant servers), or with intelligent power distribution devices (ideally with per-outlet metrics). Here is the secret: Zombies all share a similar trait... they stay fairly constant in their power consumption. A server will likely consume almost two-thirds of its maximum power before any workload is applied. A Zombie server will therefore continue to consume the same two-thirds of its rated value every time you look at it.

Creating new IT best practices that identify the need for per-device power monitoring is the first step. The second step is deploying an intelligent monitoring tool with the ability to look, over longer periods of time, at the energy being consumed on a per-device basis. Some simple standard deviation math will result in servers that can no longer hide their 'walking dead' status. Proactive monitoring will identify Zombies and allow you to reclaim power, space, and cooling quite easily!
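
As a minimal sketch of that "simple standard deviation math": flag any server whose long-window power readings are both nearly flat and parked near the roughly two-thirds-of-rated idle draw described above. The thresholds and sample data below are illustrative assumptions, not values from any particular product.

```python
# Minimal zombie-detection sketch: a device is a candidate when its power
# readings barely vary over a long window AND sit near typical idle draw.
# Thresholds and sample data are illustrative assumptions.

from statistics import mean, stdev

def looks_like_zombie(readings_watts, rated_watts,
                      max_cv=0.05, idle_band=(0.55, 0.75)):
    """readings_watts: per-device power samples collected over days or weeks."""
    avg = mean(readings_watts)
    cv = stdev(readings_watts) / avg           # coefficient of variation
    near_idle = idle_band[0] <= avg / rated_watts <= idle_band[1]
    return cv < max_cv and near_idle           # flat AND stuck at idle draw

# Example: a week of hourly samples from a per-outlet metered PDU
samples = [318, 320, 319, 321, 317, 320, 319] * 24
print(looks_like_zombie(samples, rated_watts=500))   # True -> investigate
```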

 

Topics: Data-Center-Best-Practices, data center monitoring, data center analysis, data center management, data center operations, Energy-Efficiency-and-Sustainability

Data Center Monitoring: Instrumentation Now!

Posted by Mark Harris on Sun, Feb 28, 2010 @ 07:00 AM

Over the past year, many customers have found themselves in the midst of very real ‘Sustainability’, ‘Eco-efficiency’ or ‘Green’ initiatives. The core requirement of these initiatives is to establish energy and efficiency baselines to ultimately determine how energy is being used and where optimizations can be made to improve performance. These are very visible, corporate governance-style initiatives which tend to appear in quarterly reports. Both the CIO and CFO are very serious about taking proactive steps to demonstrate and report where investments are being made to get this skyrocketing cost under control.

One of the areas being investigated deals with the various types of instrumentation available within the modern data center. More specifically, the CIO/CFO are looking for their IT management team to take advantage of available tools, setting up a well-defined means to monitor all available energy-related data points in real time and building an ITIL-inspired run-book of "continuous optimization," more commonly referred to as "operational intelligence."

Modern data centers are complex systems with a tremendous quantity of physical infrastructure devices already in place: some components have monitoring capabilities built in, some offer monitoring features as options, and others have no monitoring capabilities whatsoever. IT managers are now realizing that more real-time monitoring helps them make better-informed decisions in support of these 'Greening' initiatives. Granular, concise, real-time information will allow trends to be seen, thresholds to be set, and plans to be made. Tactically, there are various approaches to device instrumentation, and most IT situations will actually require a combination of several instrumentation technologies working together to provide a complete picture of status, availability, capacity, and efficiency.
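
As a minimal sketch of how granular, real-time readings let "trends be seen and thresholds be set", the snippet below fits a simple slope to a series of rack power readings and estimates when an assumed capacity threshold would be crossed. The readings, threshold, and weekly time scale are illustrative assumptions.

```python
# Minimal trend-and-threshold sketch over granular power readings.
# Readings, threshold, and time scale are illustrative assumptions.

def trend_slope(readings):
    """Least-squares slope of a series: positive means consumption is rising."""
    n = len(readings)
    x_bar = (n - 1) / 2.0
    y_bar = sum(readings) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(readings))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

rack_kw = [4.1, 4.2, 4.3, 4.5, 4.6, 4.8, 5.0]   # e.g. weekly rack power readings
threshold_kw = 5.5                               # assumed capacity threshold

slope = trend_slope(rack_kw)
if slope > 0:
    weeks_left = (threshold_kw - rack_kw[-1]) / slope
    print(f"Trend: +{slope:.2f} kW/week; ~{weeks_left:.1f} weeks until threshold")
```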

It is critically important to note that the current less-than-optimal state of active energy monitoring within the modern data center is a direct result of how complex it has historically been to instrument. There has been a lack of comprehensive, distributed, enterprise-class solutions to gather and analyze the litany of raw data sources and turn them into informed energy-management decisions.

Ultimately, the technology to provide continuous monitoring of vast numbers of discrete data points is now available to be deployed and consumed at will. In support of these corporate initiatives, looking forward is critical because the game has changed, dramatically. The stakes are higher. The players have stepped up.

Topics: Data-Center-Best-Practices, data center monitoring, Energy-Efficiency-and-Sustainability, data center reporting, device interfaces
