Modius Data Center Blog

Data Center Cooling Computational Fluid Dynamics… on Steroids

Posted by Donald Klein on Mon, Sep 27, 2010 @ 03:37 PM

Computational Fluid Dynamics (CFD) software provides modeling of data center airflow and quick identification of hot spots.  A CFD system’s three-dimensional, multi-colored thermal maps are downright sexy, and, if you’ll pardon the pun, extremely cool.  When changes are made to the data center intentionally, CFD analysis can be repeated to detect the introduction of new thermal problems.  So far, so good.

But what happens when the data center changes unintentionally?  Today, CFD users require real-time thermal imaging of hot spots that could result from contingencies like equipment failure, blockage or cabinet overloading.  Furthermore, users want more than just problem visualization – they want recommendations for problem mitigation.  They want a CFD model with some muscle – in effect, a CFD on steroids.

 

What is a CFD on Steroids, and more importantly, why do we need it?

The CFD on steroids works in real-time by collecting and synthesizing all available sensor data within the data center.  It leverages wireless, wired, server-based and return/discharge air-temperature readings to determine not only the immediate problem, but also the immediate impact.  This high-fidelity monitoring system renders a thermal topology map and also sends immediate notification to operations personnel stating what temperature has been registered, where it is located, and that urgent action is needed.
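The monitoring-and-notification loop described above can be sketched in a few lines. This is purely illustrative: the sensor names, the 32 °C threshold, and the notify() hook are assumptions for the example, not part of any Modius product interface.

```python
# Minimal sketch of threshold-based thermal alerting across mixed sensor
# feeds. Sensor names, the threshold, and notify() are illustrative only.

ALERT_THRESHOLD_C = 32.0  # example limit; real deployments would tune this


def notify(message):
    # Stand-in for an e-mail, SNMP-trap, or paging integration
    print("ALERT:", message)


def check_readings(readings):
    """readings: list of (location, source, temp_c) tuples drawn from
    wireless, wired, server-based, and return/discharge sensors."""
    alerts = []
    for location, source, temp_c in readings:
        if temp_c > ALERT_THRESHOLD_C:
            alerts.append((location, source, temp_c))
            notify(f"{temp_c:.1f} C at {location} ({source}) - urgent action needed")
    return alerts


readings = [
    ("rack A3 inlet", "wireless", 24.5),
    ("rack B7 inlet", "server-based", 35.2),
    ("CRAC 2 return", "wired", 29.8),
]
hot = check_readings(readings)
```

A production system would of course render these same readings onto the thermal topology map rather than just printing them.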

Really pumping you up

The next level of growth in temperature control is temperature-based reaction.  Data center operators are now looking not only at identification but also at action automation through demand-driven cooling delivered directly to the cabinet.  By leveraging Variable Frequency Drives (VFDs) in cooling units, remote commands can adjust cooling at the point of demand.  This can reduce power costs substantially and can prevent a cabinet meltdown.  Automated actions can be taken with the existing Building Management System (BMS) without having to rip out and replace the entire system.  Integration of CFD can make the BMS smarter - processing and synthesizing a vast array of data, encoding commands in building-management language, and passing reliable information to the appropriate destination so that the secure communication infrastructure can be fully maintained.  Modius OpenData is currently being leveraged by customers to pump up their BMS, leverage the current infrastructure, prevent cooling-related outages, and save money on power-related cooling.
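The demand-driven control idea above can be sketched as a simple proportional loop that maps a cabinet inlet temperature to a VFD speed command. The setpoint, gain, and limits below are invented for illustration, and the printed command stands in for what a BMS would encode as, say, a BACnet analog-output write.

```python
# Illustrative proportional control for demand-driven cooling:
# map a cabinet inlet temperature to a VFD fan-speed command.
# Setpoint, gain, and limits are hypothetical example values.

SETPOINT_C = 24.0   # target cabinet inlet temperature
MIN_SPEED = 30.0    # keep fans above a floor for airflow distribution
MAX_SPEED = 100.0
GAIN = 10.0         # percent of fan speed per degree C of error


def vfd_speed_percent(inlet_temp_c):
    error = inlet_temp_c - SETPOINT_C
    speed = MIN_SPEED + GAIN * error
    return max(MIN_SPEED, min(MAX_SPEED, speed))


# In a BMS integration this command would be written to the cooling
# unit; here we just print it for three sample inlet temperatures.
for temp in (22.0, 26.5, 33.0):
    print(f"inlet {temp:.1f} C -> VFD {vfd_speed_percent(temp):.0f}%")
```

Real installations would use a tuned PID loop and respect unit staging, but the point is the same: cooling effort follows measured demand at the cabinet.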

Topics: data center monitoring, data center cooling, data center analysis, data center management, BACnet, data center temperature sensors, Cooling-Airflow, Energy Analysis

Why has it been so hard to deploy Data Center Monitoring?

Posted by Mark Harris on Tue, Jul 06, 2010 @ 09:24 AM

For those of you following my writings, you'll know that I've "been around the block" a few times in the IT world. For 30 years I've seen a lot of technologies come and go. Various technologies always seem to sound great at first (to some people), but how they play out over time is a measure of capabilities, costs, and timing BUT, more importantly, a bit of 'chance'. Sometimes it just doesn't add up or make any sense, and yet certain solutions thrive while others fail. Data Center Management has always been considered an "ART" rather than a science. Emotions, previous experiences, and personal focal points drive investments within the data center. The ART of data center design varies widely from company to company.


That background is a good point of reference when considering the task at hand for today: explaining just WHY it has been so hard to deploy data center monitoring and to include both IT gear AND facilities equipment in the standard IT planning and performance processes. As it turns out, IT gear vendors have done a fairly good job of standardizing management protocols and access mechanisms, but there have been simply too many incompatible facilities gear management systems over the years. Many are still very proprietary and/or undocumented or poorly documented. Additionally, the equipment manufacturers have been in NO hurry to make their devices communicate any better with anything in the outside world. "Their equipment, their tools" has been the way of life for facilities gear vendors. (I call it "Vendor-Lock".)

Ironically, these "facilities" sub-systems (like power and cooling) would likely be considered today as THE most mission-critical part of running IT cost-effectively. We need to find an answer.

Interoperability

So, we have two factors to consider:

1. Data Center Design is considered by many to be an ART rather than a science. Depending on the leadership, varying levels of emphasis are paid to different technology areas.

2. Data Center Monitoring has historically been viewed as difficult to deploy across the full field of devices, and the resulting limited reports and views as insignificant and non-impactful to the bigger picture.

Well, times have changed. Senior leadership across corporations is asking the IT organization to behave more like a science. The days of 'art' are drawing to a close. Accountability, effectiveness, ROI, and efficiency are all part of the new daily mantra within IT. Management needs repeatable, defendable investments that can withstand any challenge, and yet allow for any change.

Additionally, with the cost of powering gear over 3 years now surpassing that gear's initial acquisition price, the most innovative Data Center Managers are taking a fresh new look at deploying active, strategic Data Center Monitoring as part of their baseline efforts. How else would a data center manager know where to make new investments in energy efficiency technologies without some means to establish baselines and continuously measure progress towards results? How would they know if they succeeded?

Data Center Monitoring can be easily deployed today, accounting for all of any company's geographically distributed sites, leveraging all of their existing instrumentation (shipping in newly purchased IT gear for the past few years), and topping it off with a whole slew of amazing wireless sensor systems to augment it all.

Today, integrated data center monitoring across IT gear and facilities equipment is not only possible, but quite EASY to deploy for any company that chooses to do so.

You just Gotta-Wanna!

Topics: data center monitoring, data center analysis

Data Center Monitoring - MUST be Enterprise in Scale!

Posted by Mark Harris on Tue, Jun 22, 2010 @ 03:15 PM

Over the course of meeting with perhaps 100 customers over the last 6 months, it has become painfully clear to me that there is widescale and growing confusion about Real-Time Data Center Monitoring.

I would suggest that Real-Time monitoring which answers MOST customers' needs MUST have a number of specific capabilities which the vast majority of what's available today do NOT:

1. Scale. Most shipping Data Center Management and Monitoring solutions fail to realize that SCALE is a big deal. Monitoring 100 devices on a trade show floor demo is entirely different from deploying true monitoring across 20 sites, each with thousands of devices. You simply can't use the same ARCHITECTURE, and all the marketing fluff in the world won't solve this fundamental structural issue. The ONLY way to scale this is using a DISTRIBUTED architecture.

2. Device Coverage. These same vendors will tell you that they speak SNMP and that everything you need to monitor speaks SNMP. Nonsense! Firstly, there are many protocols, including Modbus, SNMP, BACnet, WMI, Serial, etc. Secondly, just supporting the protocol doesn't get you much closer to the device knowledge. Each device has to be specifically understood to read the required values. In most vendors' proposals, this shows up as "Professional Services", which means 'We'll figure it out on the job, on your dime'.
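That gap between "speaking the protocol" and "knowing the device" can be made concrete with a sketch: each device type still needs its own map from raw addresses (SNMP OIDs, Modbus registers, BACnet objects) to named, scaled points. The device names, OIDs, and register layouts below are invented examples, not real vendor maps.

```python
# Sketch of why protocol support alone is not device knowledge:
# every device type needs a map from raw addresses to meaningful
# points. These device maps are invented for illustration.

DEVICE_MAPS = {
    "acme-pdu-v2": {               # hypothetical SNMP-based PDU
        "protocol": "snmp",
        "points": {".1.3.6.1.4.1.99999.1.1": ("total_power", "kW", 0.001)},
    },
    "frosty-crac-8": {             # hypothetical Modbus CRAC unit
        "protocol": "modbus",
        "points": {40021: ("return_temp", "C", 0.1)},
    },
}


def normalize(device_type, raw_readings):
    """Convert raw protocol readings into (name, value, unit) metrics."""
    points = DEVICE_MAPS[device_type]["points"]
    metrics = []
    for address, raw in raw_readings.items():
        name, unit, scale = points[address]
        metrics.append((name, raw * scale, unit))
    return metrics


pdu_metrics = normalize("acme-pdu-v2", {".1.3.6.1.4.1.99999.1.1": 350000})
crac_metrics = normalize("frosty-crac-8", {40021: 298})
print(pdu_metrics, crac_metrics)
```

Building and maintaining those maps for thousands of device models is exactly the work that otherwise lands in the "Professional Services" line item.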

3. Real-Time Monitoring MUST store observed metrics and KPIs over long periods of time. I would suggest that while there are many reasons why most customers want to see real-time monitoring, the vast majority of these reasons are TIME-BASED. The monitored values or metrics need to be collected, time-stamped, stored, and openly available to run analysis upon. While customers may want to know that the data center is consuming 350kW this instant, what they REALLY want to know is that the data center WAS consuming 275kW 3 months ago, 310kW last month, 350kW today, and then PROJECT the date when they will hit the wall of the 500kW feed from the power utility.
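The projection in that example is just a trend line through stored readings. Here is a minimal sketch using the post's own figures (275kW, 310kW, 350kW) with an ordinary least-squares fit; a real tool would fit many more samples and model seasonality.

```python
# Minimal sketch of the trend analysis described above: fit a line
# through stored kW readings and project when a 500 kW utility feed
# would be exhausted. Readings are the figures from the post.

history = [(-3, 275.0), (-1, 310.0), (0, 350.0)]  # (months ago, kW)

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Ordinary least-squares slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
        sum((x - mean_x) ** 2 for x, _ in history)
intercept = mean_y - slope * mean_x

months_to_limit = (500.0 - intercept) / slope
print(f"growing ~{slope:.0f} kW/month; 500 kW feed reached in "
      f"~{months_to_limit:.1f} months")
```

On these three points the load is growing roughly 24kW per month, which puts the 500kW wall only about half a year out - exactly the kind of answer an instantaneous reading can never give you.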

The road ahead will continue to be littered with failed deployments of real-time management solutions which do NOT realize the dream of Data Center Monitoring. Customers should challenge their vendors to answer ALL of the tough questions. Consider the old-school 'Get it in Writing' approach, and then be very specific about your expectations, needs, and acceptance criteria...

Let's ALL win this GREEN game!

Topics: data center monitoring, Data-Collection-and-Analysis, Sensors-Meters-and-Monitoring, data center analysis, IT Asset Management
