Modius Data Center Blog

Data Center Analysis: Monitoring may not always be the first step...

Posted by Mark Harris on Fri, May 28, 2010 @ 02:54 PM

While I've seen my share of pristine new data centers over the past few years, as well as a huge number of large-scale retrofit projects turning old centers into usable data center space, I have also seen an alarming number of older 'house of cards' data centers that are still in production and appear to be strictly 'hands-off'.

These data centers are typically chock full of older devices and interconnects passed down from one generation of IT managers to the next, each realizing that what they inherited was unmanageable. While it is true that these data centers will ultimately go extinct in a world focused on operational efficiency, pro-active management, and best practices, we can all feel the pain involved when we encounter one.

The photo above shows one of the most interesting centers I've seen, and it appears to have conflicting priorities about what is required to move forward. While I don't have a comprehensive sequence of steps for migrating to a highly supportable, efficient, and monitored data center, let me suggest one step that will help tremendously... Find the YELLOW patch cord and disconnect it.

Seriously, when I saw this photo I had to laugh and take a second look. Was it some new thermal blanketing technology? Or a way to eliminate blanking panels? The reason I make light here is that countless data centers are running similarly out-of-spec designs and would benefit from adopting new data center technologies (new power distribution, cooling, and monitoring solutions), but are challenged by WHERE TO BEGIN and the magnitude of the task at hand.

In the monitoring world, for instance, where Modius delivers value, we regularly find data centers with NO VISIBILITY into their energy usage, and we can easily identify hundreds or thousands of monitorable data points that would help get energy usage under control. We are ready, willing, and able to take on chaos and make sense of it.

Topics: Energy Efficiency, data center analysis, data center management, real-time metrics, data center temperature sensors, data center infrastructure

Fine Corinthian Leather... or Data Center Analysis?

Posted by Mark Harris on Tue, May 25, 2010 @ 09:23 AM

 

Think back to the last time you purchased a new car. I would bet that within the first 30 minutes of actually looking at the brochures or sitting in the car, the attention turned to the leather seats, body color, stereo system, and electronics package.

By inference, the consumer (you) had already assumed that the car's foundation was as stated in the data sheet and that the design engineers had done their job building a functional car. It had a chassis, it had an engine of a certain size, and it was as speedy and efficient as the TV commercial showed. No need to be concerned that the physical layer had any issues. Somehow the car would perform.

Instead, your attention turned to the 'soft' details. There you are, buying a $30,000 car, and most of the sales configuration and cost discussion was about the $3,000-$4,000 worth of options. Most people don't even know how big the gas tank is when they drive home in the car!

The data center is much the same. The underpinnings of most data centers were specified by the building design engineers of record, built per spec, and typically installed far from view. The mechanical and electrical structures were designed and installed based on equipment resource requirements and assumptions at the time, and at the end of the day, the IT organization ultimately 'inherited' what was installed. How many watts per square foot were really possible? What is the redundant cooling capacity? None of these critical available capacities, or their real-time usage, is actually well understood or even visible to the IT organization over time. (And until lately, there was not even much concern about it.)

This situation is compounded by the fact that all of the major IT vendors are now selling boxes that consume 2-4 times the power, in the same space, as the units they shipped just two years ago. The data center is a VERY dynamic system, and the most valuable ongoing data center analysis and KPIs must be based upon its real-time aspects.

While IT as a whole has focused for years on its own 'Fine Corinthian Leather' (virtualization, operating systems, storage, and networks), the real challenge today is to better understand the real-time performance of the chassis: the amount of fuel in the gas tank and its current efficiency, the engine performance, the available redundancy systems, etc.

Don't get me wrong, I am a huge fan of Fine Corinthian Leather, but I think it's prudent to understand the bigger picture before claiming victory...

Topics: data center monitoring, Data Center Metrics, data center analysis

Uptime Institute Data Center Symposium in New York was amazing!

Posted by Mark Harris on Fri, May 21, 2010 @ 04:32 PM

New York City 

Just got back from the Uptime Institute's latest data center conference, held in New York City. In a nutshell, it covered everything that is interesting in the data center, and in the year 2010 that really means the physical layer!

"Physical Layer" you say? Yes, the physical side of the data center is driving all of the column inches in the press these days. That is where the chaos and panic around power and cooling AND costs and carbon all come together. In 2010, THIS is where the challenges are. This is where the opportunities exist to demonstrate thought leadership once again, and based on what I saw, people are rising to the challenge!

Frankly, the data center has for far too long been a 'mysterious black box' where Intel meets Microsoft meets Linux meets storage and networks. This logical stuff USED TO BE the hard part; physical resources were ASSUMED to be under control. Those days are gone. Today, the CIO takes it as a GIVEN that the technologists in the crowd will figure out how to string together all the logical data processing stuff, since modern servers and routers and everything in between provide a fairly similar set of functions and compatibility today. It is the PHYSICAL LAYER which is driving everyone CRAZY!

So in New York, we had over a thousand people, gathered with more than a hundred vendors, talking about cooling strategies, powering data centers with high-voltage DC, monitoring practices, and advances in Computational Fluid Dynamics. There were the cable managers, PDUs, batteries, and floor-tile systems. Layer-0 physical-layer stuff is SO TANGIBLE, and it comes in a wide range of colors: TAN, GREY, and BLACK!

All in all, it was a great use of time: a place to make or renew relationships across the industry, and a suitable challenge to each attendee's traditional ways of thinking about building and operating the data centers required today....

Topics: data center monitoring, Data Center Metrics, Data Center Power, Uptime Institute, data center infrastructure

Data Center Monitoring from Modius Summarized in Video

Posted by Mark Harris on Thu, May 20, 2010 @ 09:26 PM

Modius CEO Craig Compiano explains the Modius approach to data center monitoring in this video posted on YouTube. 

Originally featured on DataCenterKnowledge.com, this video provides a short introduction to Modius OpenData.

To see the original article, please go to:

http://www.datacenterknowledge.com/archives/2010/03/22/data-center-monitoring-with-modius/

From Data Center Knowledge:

"At Data Center World we had the chance to speak with Craig Compiano, CEO of Modius, a San Francisco company that makes monitoring software for IT infrastructure. Modius’ motto is “measuring more things in more places more easily,” with the ability to integrate power usage and environmental readings from data centers, server rooms, branch offices, and IDF closets. In this video, Compiano provides an overview of Modius and the landscape for monitoring software. This video runs about 2 minutes, 30 seconds."

 

Topics: data center monitoring, Data-Collection-and-Analysis, Sensors-Meters-and-Monitoring, data center operations, data center infrastructure

Data Center Metrics: Many Metrics live OUTSIDE of the Data Center!

Posted by Mark Harris on Fri, May 14, 2010 @ 10:14 AM

With the new focus on EFFICIENCY within the data center, it is quite clear that TWO major sub-systems actually comprise the 'data center': the IT/infrastructure equipment INSIDE the rooms (i.e., on the raised floor), and the facilities equipment distributed through the building and outside in the back yard. For years, the IT and Facilities groups were autonomous, and each simply expected the other to exist.

Today, it is very short-sighted to look at data center efficiency as ONLY a function of the metrics available in the rooms themselves. We must not ignore the metrics from the power generation and distribution systems and the cooling plants. Remember, every WATT going into a room had to COME FROM somewhere, and it had to be COOLED in some way!

Modius recognized the need for a SINGLE system that gathers performance metrics from both IT and Facilities, and created the only distributed technology with no limitations of scale or geography. It's pretty cool stuff (in my unbiased view! LOL)

Imagine being able to ask, "How much POWER is my Fortune 100 company using right now?" I bet the CIO and CFO would love to know. "How much is power costing me this month?" "What is the PUE of each of my data centers?" "What is my carbon footprint?" Etc., etc.

These answers are possible today, and Modius customers are getting them TODAY! Data center metrics live across the data center and the facility, and they can be treated as parts of a single system... if you want to!
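For readers new to the metric mentioned above, PUE (Power Usage Effectiveness) is simply total facility power divided by the power delivered to the IT equipment. A minimal sketch of the calculation, with made-up kW figures (not actual Modius customer data):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power.

    A PUE of 1.00 would mean every watt entering the facility reaches
    the IT load, with nothing spent on cooling, distribution losses, etc.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical site: 1,800 kW at the utility meter, 1,200 kW at the racks
print(round(pue(1800.0, 1200.0), 2))  # 1.5
```

The same two meter readings, trended in real time, also answer the cost and carbon questions: multiply total facility kW by the utility rate or by the grid's emissions factor.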

Topics: Data Center Metrics, Data Center PUE, Data Center Power

Zombies are afoot! Data Center Monitoring is the weapon!

Posted by Mark Harris on Wed, May 05, 2010 @ 07:00 AM

Having walked through my share of data centers, it is always interesting to see the heterogeneous amalgamation of IT gear that has accumulated since the data center was commissioned. While every data center designer and manager starts out with wild, fanciful ideals about the pristine architecture of the data center, its actual complexion changes dramatically over time, and we are left with rows and rows of assorted gear, all happily consuming power and blinking LEDs, with perhaps 20%-30% of these devices no longer in use... Zombies abound!

Perhaps 'Zombies' is a harsh word, but the concept is the same. A non-trivial portion of the devices in the data center are powered, generating heat, consuming precious IP addresses, and yet performing NO actual work. Why? Their intended application changed over time, the project was never completed, their original workload was shifted elsewhere, a test bed was never dismantled... a dozen reasons exist for large quantities of machines entering the Zombie realm. But there we have it: machine after machine in the living-dead state, and WORSE THAN THAT, we do not have enough information about these devices to TURN THEM OFF. So they sit, consuming resources in the safety of the data center, avoiding decommissioning.

And here's the myth/rub: a server just idling along, running only the operating system, consumes 60%-70% of its total power before any workload is applied. A server doing NO work is wasting almost two-thirds of its maximum rated power! Note to self: this is a real issue and not something we can choose to overlook any longer. With the price of power at record highs, and consumption rising by 7% per year as far into the future as we can see, WE HAVE to find these Zombies and kill them.

How can we reclaim the resources these Zombies consume? We have to build designs that intelligently monitor power consumption and pro-actively, continually test whether those resources are doing useful work. We can observe power consumption either directly, using embedded sensors (such as those in ENERGY STAR-compliant servers), or with intelligent power distribution devices (ideally with per-outlet metrics). Here is the secret: Zombies all share a similar trait... their power consumption stays fairly constant. A server will likely consume almost two-thirds of its maximum power before any load or work is applied, so a Zombie server will continue to draw that same two-thirds of its rated value every time you look at it.

Creating new IT best practices that call for per-device power monitoring is the first step. The second step is deploying an intelligent monitoring tool that can look at the energy consumed, on a per-device basis, over longer periods of time. Some simple standard-deviation math will reveal servers that can no longer hide their 'walking dead' status. Pro-active monitoring will identify Zombies and let you reclaim power, space, and cooling quite easily!
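To illustrate that standard-deviation test, here is a minimal sketch, not a Modius algorithm; the server names, readings, and thresholds are hypothetical. It flags servers whose power draw sits near the idle fraction of their rated power with almost no variation over the observation window:

```python
import statistics

def find_zombies(power_log, idle_fraction=0.6, cv_threshold=0.02):
    """Flag servers whose power draw is both near idle and nearly flat.

    power_log: {server_name: (rated_watts, [watt_readings_over_time])}
    A reading series with a tiny coefficient of variation (stdev/mean),
    hovering around ~60% of rated power, suggests a 'zombie' doing no work.
    """
    zombies = []
    for server, (rated_watts, readings) in power_log.items():
        mean = statistics.mean(readings)
        cv = statistics.pstdev(readings) / mean  # relative variability
        near_idle = abs(mean / rated_watts - idle_fraction) < 0.1
        if cv < cv_threshold and near_idle:
            zombies.append(server)
    return zombies

# Hypothetical per-outlet readings from an intelligent PDU
log = {
    "web01": (400, [242, 240, 241, 243, 240]),   # flat at ~60% of rating
    "db01":  (400, [250, 310, 280, 355, 260]),   # varies with real workload
}
print(find_zombies(log))  # ['web01']
```

In practice you would run this over days or weeks of readings rather than a handful of samples, and treat a flagged server as a candidate for investigation, not automatic shutdown.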

 

Topics: Data-Center-Best-Practices, data center monitoring, data center analysis, data center management, data center operations, Energy-Efficiency-and-Sustainability

Silicon Carbide Chips could Make Data Center Cooling Obsolete!

Posted by Mark Harris on Mon, May 03, 2010 @ 05:05 AM

Imagine a computer processing chip that could run at any speed, without any cooling. Imagine that this processor could be mass produced using existing technologies, and using off-the-shelf substrate materials. Well, this is not fantasy and I was reminded the other day about the work NASA has been doing for a few years...

It is true! NASA has been demonstrating a set of chip technologies able to operate at over 1,000 degrees F for extended periods of time. While this is remarkable for NASA, now adjust your focal point to using this technology for standard IT purposes. Cooling as we know it today would be a thing of the past. We might have cooling just for the human-occupied areas, and perhaps some filtering would still be required, but we'd see data centers running happily at over 100 degrees F.

http://arstechnica.com/hardware/news/2007/09/nasa-designs-new-ultra-high-temperature-chips.ars

Finally, a PUE of 1.00! Curious reading, to be sure...

Topics: data center cooling, Data Center PUE, Cooling-Airflow, data center energy efficiency
