Modius Data Center Blog

Illuminating DCIM tools: Asset Management vs. Real-time Monitoring

Posted by Donald Klein on Wed, Dec 15, 2010 @ 11:26 AM

In the news recently, there has been a lot of discussion around a new category of software tools focused on unified facilities and IT management in the data center. Gartner has labeled these tools Data Center Infrastructure Management (DCIM), and, according to Gartner, Modius OpenData is a leading example of the category.

In reality, there are multiple types of tools in this category: Asset Management systems and Real-time Monitoring systems like Modius. The easiest way to understand the differences is to reflect on two key questions:

  • How do the tools get their data?
  • How time-critical is that data?

Generally speaking, data center Asset Management systems such as nlyte, Vista, Asset-Point, Alphapoint, etc. rely on third-party sources: they either facilitate manual data entry of IT device 'face plate' specs, or they are fed collected data for post-processing and integration.

The data processing part is what these systems do very effectively: they can build a virtual model of the data center and can often predict what will happen to that model based on equipment moves, adds, or changes (MAC). These products are also strong at using that model to build capacity plans for physical infrastructure, specifically power, cooling, space, ports, and weight.

To ensure that the data used is as reliable as possible, the higher-priced systems include full workflow and ticketing engines. The theory is that by putting repeatable processes in place and adhering to them, every MAC will be entered correctly into the system. To this day, I have not seen a single deployed system that is 100% accurate. But for the purposes they are designed for (capacity and change management), these systems work quite well.

However, these systems are typically not used for real-time alarm processing and notification, because they are neither 1) real-time nor 2) always accurate.

Modius takes a different approach. Compared with Asset Management tools, Modius gets its data DIRECTLY from the source (i.e., the device) by communicating in its native protocol (such as Modbus, BACnet, or SNMP), rather than relying on theoretical 'face plate' data from third-party sources. The frequency of data collection can vary from one poll per minute, to four polls per minute (standard), all the way down to half-second intervals (a simplified polling sketch appears after the list below). This data is then collected, correlated, alarmed, stored, and reported over minutes, hours, days, weeks, months, or years. The main outputs of this data are twofold:

  • Centralized alarm management across all categories of equipment (power, cooling, environmental sensors, IT devices, etc.)
  • Correlated performance measurement and reporting across various categories (e.g., rack, row, zone, site, business unit, etc.)
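For illustration only, here is a minimal sketch of what a fixed-interval polling loop of this kind might look like. The device names, point names, and the read_point helper are hypothetical placeholders, not the OpenData API or any particular protocol library.

```python
import random
import time

# Hypothetical helper: in a real collector this would speak Modbus, BACnet,
# or SNMP to the device; here it just returns a simulated reading.
def read_point(device, point):
    return round(random.uniform(20.0, 30.0), 1)

DEVICES = ["crac-01", "pdu-03", "ups-02"]      # example device names
POINTS = ["inlet_temp_c", "output_kw"]         # example point names
POLL_INTERVAL_S = 15                           # four polls per minute

def poll_once():
    """Collect one sample from every device/point pair."""
    return {(d, p): read_point(d, p) for d in DEVICES for p in POINTS}

def poll_forever(store):
    """Poll on a fixed interval, handing each sample to a store/alarm stage."""
    while True:
        started = time.monotonic()
        store(poll_once())
        # Sleep only for the remainder of the interval to limit drift.
        time.sleep(max(0.0, POLL_INTERVAL_S - (time.monotonic() - started)))

# Example usage: print each sample (replace print with storage/alarming logic).
# poll_forever(print)
```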

Modius has pioneered real-time, multi-protocol data collection because the system has to be accurate 100% of the time. Any issue in data center infrastructure performance could lead to a failure that affects the entire infrastructure. This data is also essential for optimizing the infrastructure in order to lower cooling costs, increase capacity, and better manage equipment.

Both types of tools -- Asset Management tools and Real-time Monitoring systems -- offer high value to data center operators through different capabilities. The Asset tools are great for planning, documenting, and determining the impacts of changes in the data center. Modius real-time monitoring interrogates the critical infrastructure to make sure systems are operating correctly, within environmental tolerances, and with established redundancies intact. Both are complementary tools for maintaining optimal data center performance.

Because of this inherent synergy, Modius actively integrates with as many Asset Management tools as possible, and supports a robust web services interface for bi-directional data integration. To find out more, please feel free to contact Modius directly at info@modius.com.

Topics: Data-Collection-and-Analysis, data center capacity, data center operations, real-time metrics, Data-Collection-Processing, data center infrastructure, IT Asset Management

Do I really need $1M to make my Data Center HVAC system smarter? ...

Posted by Donald Klein on Wed, Sep 22, 2010 @ 01:03 PM

... Or is there a cheaper alternative?

The latest advent in data center cooling is intelligent, networked HVAC systems. These systems are intelligently managed so that remote sensors provide feedback, allowing the HVAC system to tune cooling to the dynamic demand of the IT infrastructure. The systems are “intelligent” in that they can change the speed/frequency of the fans (via VFDs) to provide more or less air to the cooling zones and cabinets supported by the cooling system. Further, they can auto-engage the economizer (for ambient cooling) and control water valves to run the air-conditioning units more efficiently. They are also networked, so they can be controlled as a whole rather than only independently, with one unit turning up while another throttles down.

All very, very cool stuff, and it can greatly influence one of the largest data center costs: powering the cooling. OK, now the downside. Wow... is it really $1M to do it? In most cases, the answer is yes. The cooling system manufacturers are hoping you will replace your existing system and then sign a services engagement that has them spend the next year turning up and tuning it.

So here is the question … Is there any way to make my existing HVAC smarter and NOT spend the $1M? Glad you asked, and yes there is. Before spending that cash, there are three steps you can take to make your existing system more efficient:

  1. Install variable frequency drives
  2. Unify data from temperature/humidity monitoring at the cabinet
  3. Compute, measure, and integrate into the BMS


Step 1. Install Variable Frequency Drives for controlling airflow

As discussed in earlier blogs, VFDs provide the throttle necessary to achieve energy efficiency. Several states, including California, offer rebates for installing VFDs that pay for nearly 60% of the cost of the equipment (for more information on this topic, contact us at info@modius.com, and we can help put you in touch with the right people). But remember … VFDs are only as good as the control procedures you put in place to modulate the cooling as required at the rack level.
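As a rough illustration of the kind of control procedure meant here, the sketch below nudges a VFD speed setpoint up or down based on the hottest rack-inlet temperature in a zone. The temperature source, gain, and limits are invented for the example; a real loop would be tuned against your own airflow measurements and whatever interface your drives actually expose.

```python
# A minimal proportional-control sketch, assuming you can read rack-inlet
# temperatures and write a VFD speed setpoint (both are stand-ins here).

TARGET_INLET_C = 25.0      # desired worst-case rack inlet temperature
GAIN_PCT_PER_C = 4.0       # % speed change per degree C of error
MIN_SPEED_PCT, MAX_SPEED_PCT = 30.0, 100.0

def next_vfd_speed(current_speed_pct, rack_inlet_temps_c):
    """Return an adjusted VFD speed based on the hottest rack inlet."""
    hottest = max(rack_inlet_temps_c)
    error = hottest - TARGET_INLET_C          # positive = too hot
    proposed = current_speed_pct + GAIN_PCT_PER_C * error
    return min(MAX_SPEED_PCT, max(MIN_SPEED_PCT, proposed))

# Example: a zone running at 60% with inlets of 24.1-26.3 C nudges upward.
print(next_vfd_speed(60.0, [24.1, 25.2, 26.3]))   # -> 65.2
```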

Step 2. Unify data from a broad cross-section of temperature and humidity instrumentation points

In order to get the best possible data about what is actually happening at the rack level, there are several practical ways to extend your temperature and humidity instrumentation across your environment. This may include not only deploying the latest generation of inexpensive wire-free environmental sensors, but also unifying data that is already being captured by existing wired, wireless, power-strip-based, or server-based instrumentation.

The most cost-effective way is to leverage the environmental data that new servers are already collecting (often referred to as chassis-level instrumentation). The newer servers from the three leading vendors register both server inlet and exhaust temperatures. Depending on the deployment architecture, this can give you a lot of fidelity, including front/rear, minimum, maximum, average, and standard deviation at the bottom, middle, and top of the cabinet.

In most cases, this is enough information to establish equipment demand for direct cooling. Where you don't have newer servers that report temperature, wireless sensors are the next best option. Several vendors make these products, and they are nice in that they are easy to set up and can be placed just about anywhere. If you have data being generated by power strips or wired sensors, incorporate those as well (the more information, the better).
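To make the fidelity point above concrete, here is a small sketch of how inlet/exhaust readings might be rolled up per cabinet into min, max, average, and standard deviation at the bottom, middle, and top positions. The reading layout is an assumption for the example, not a vendor or Modius schema.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Each reading: (cabinet, position, side, temp_c), where position is
# "bottom" / "middle" / "top" and side is "inlet" or "exhaust".
# These field names are illustrative only.
readings = [
    ("rack-A1", "bottom", "inlet",   22.4),
    ("rack-A1", "middle", "inlet",   23.1),
    ("rack-A1", "top",    "inlet",   24.8),
    ("rack-A1", "top",    "exhaust", 35.2),
]

def cabinet_summary(samples):
    """Group by (cabinet, position, side) and compute summary statistics."""
    groups = defaultdict(list)
    for cabinet, position, side, temp in samples:
        groups[(cabinet, position, side)].append(temp)
    return {
        key: {
            "min": min(temps),
            "max": max(temps),
            "avg": round(mean(temps), 2),
            "stdev": round(pstdev(temps), 2),
        }
        for key, temps in groups.items()
    }

print(cabinet_summary(readings))
```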

Step 3. Compute, measure, and integrate into the BMS

Building management systems are traditionally very good at controlling systems such as VFDs and recognizing critical alarms. What they are not good at is being easy to configure, integrate, or extend across the network. This is where you need a way to boost how data is collected and synthesized.

Modius OpenData is used to collect real-time data across the network from potentially hundreds of new devices and thousands of newly collected points. Once the data is collected from servers, wireless sensors, PDUs, and wired sensors, it is correlated against key performance metrics and then fed to the building management system so that it can adjust the VFDs, water flow, and economizer. Example metrics might be (a simplified computation sketch follows the list):

  • Rack-by-rack temperature averages for inlet and outlet
  • Row-by-row averages with alarm thresholds for any racks which exceed the row average by a particular margin
  • Delta-T with alarms for specific thresholds
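Here is the simplified computation sketch referred to above. The input structure, thresholds, and output format are invented for illustration; they are not the OpenData interface.

```python
from statistics import mean

# Illustrative inputs: per-rack inlet/outlet averages, keyed by (row, rack).
rack_temps = {
    ("row-1", "rack-01"): {"inlet": 23.0, "outlet": 34.0},
    ("row-1", "rack-02"): {"inlet": 26.5, "outlet": 39.5},
    ("row-1", "rack-03"): {"inlet": 22.8, "outlet": 33.1},
}
ROW_MARGIN_C = 2.0       # alarm if a rack exceeds its row average by this much
DELTA_T_ALARM_C = 14.0   # alarm if outlet - inlet exceeds this value

def evaluate(temps):
    alarms = []
    # Row-by-row inlet averages.
    rows = {}
    for (row, _), t in temps.items():
        rows.setdefault(row, []).append(t["inlet"])
    row_avg = {row: mean(vals) for row, vals in rows.items()}
    # Per-rack checks against the row average and the Delta-T threshold.
    for (row, rack), t in temps.items():
        if t["inlet"] > row_avg[row] + ROW_MARGIN_C:
            alarms.append(f"{rack}: inlet {t['inlet']} C exceeds row average "
                          f"{row_avg[row]:.1f} C by more than {ROW_MARGIN_C} C")
        delta_t = t["outlet"] - t["inlet"]
        if delta_t > DELTA_T_ALARM_C:
            alarms.append(f"{rack}: Delta-T {delta_t:.1f} C above threshold")
    return row_avg, alarms

print(evaluate(rack_temps))
```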

These types of computations can be based on unified data from a variety of sources (sensors, strips, servers, etc.), all of which can be used to make your existing HVAC system smarter. The most important point is to continually measure as you go and make a series of small, incremental optimizations based on verified data. The best news is that this architecture costs a fraction of what new HVAC infrastructure costs and leverages your existing building management system.

Topics: data center cooling, Data-Collection-and-Analysis, Data Center PUE, data center operations, BACnet, data center temperature sensors, Data-Collection-Processing, data center infrastructure

What You Really, Really Need: The Mother of all Data Center Monitors!

Posted by Donald Klein on Tue, Aug 31, 2010 @ 11:26 AM

You may have asked yourself, “Why do I need another monitoring and reporting product if I already have five?”  True, you most likely don’t need another monitoring product, but rather what you really, really need is a system to link these systems together. 

Why? Because several different monitoring systems operating in their own silos don't help you improve your business. Instead, what you need to do is build business logic for optimization and capacity-expansion strategies, as well as decrease the time spent repairing problems.

To do this effectively, you need a super system: what we call the “mother of all monitors”. This is a system that can not only collect a superset of monitoring data from different point solutions, but also connect directly to other devices that may not currently be monitored (e.g., generators, transfer switches, breaker panels, etc.). And it needs to do this with the kind of scalability, analytics, and ability to integrate with other management systems that you would expect from an enterprise-class tool.

Here at Modius, we are already seeing this happen in the field. There is a current trend among data center managers to link their monitoring platforms together so that they have one common central platform from which to view and navigate their distributed monitoring systems. We have designed our application, OpenData, with a “Monitor of Monitors” architecture in order to provide operators with a single pane of glass into both the facilities infrastructure (including the power chain, cooling, and redundancies) and IT system-level information.


The key problems solved are:

  1. System-level metrics - Link system-level IT metrics to facilities capacities
  2. Troubleshooting - Accelerate troubleshooting and fault-dependency mapping
  3. Alarm management - Reduction in “noise-level” alarms
  4. Analytics - Building business-level metrics (BI) for capacity, efficiency, etc.
  5. Controls-based integrations - Improved automation based on broad data capture

Here is some more detail on each of these benefit areas …

1) System-level metrics

Typically, IT system-level metrics are collected by system management tools, which provide logical properties based on MIB-2 or the Host Resources MIB (RFC 1514). This gives IT managers data on the operating health of the equipment and on capacity related to CPU, disk, I/O, and memory. What these management systems typically do not provide, however, is how facilities (power, cooling, etc.) impact the cost of operations and how much cooling is actually optimal.

By linking IT system-level metrics with unified facilities monitoring through a single portal, higher-level business and operating metrics can be formulated to reduce the cost of operations by tuning available cooling resources to the actual needs of each server instance or other piece of IT gear.
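As a sketch of what this linking can mean in practice, the example below joins per-server CPU utilization (the kind of data a Host Resources MIB walk would supply) with per-rack power readings to flag racks drawing significant power at low utilization. All values and field names are hypothetical.

```python
# Hypothetical snapshots: CPU utilization per server (from IT tooling)
# and measured power per rack (from facilities monitoring).
cpu_util = {"web-01": 0.12, "web-02": 0.08, "db-01": 0.71}
server_rack = {"web-01": "rack-A1", "web-02": "rack-A1", "db-01": "rack-B2"}
rack_kw = {"rack-A1": 6.4, "rack-B2": 5.1}

def low_utilization_racks(util_threshold=0.15, kw_threshold=4.0):
    """Flag racks whose servers are mostly idle but still draw real power."""
    by_rack = {}
    for server, util in cpu_util.items():
        by_rack.setdefault(server_rack[server], []).append(util)
    flagged = []
    for rack, utils in by_rack.items():
        avg_util = sum(utils) / len(utils)
        if avg_util < util_threshold and rack_kw[rack] > kw_threshold:
            flagged.append((rack, round(avg_util, 2), rack_kw[rack]))
    return flagged

print(low_utilization_racks())   # -> [('rack-A1', 0.1, 6.4)]
```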

2) Troubleshooting

By consolidating event and performance data into a single view, you can quickly determine the cascade of failures, with the visibility to see the impact of facility equipment on downstream devices. An example would be a PDU failure and the question of which devices sit in the path of the affected circuit. In redundant environments there will be a fail-over to the second PDU, but in most cases a successful hand-off is difficult to guarantee. By linking the facilities side (BMS, PDUs, UPSs, gensets) with system-level IT information, these relationships are documented, visualized, correlated, and actively monitored.
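A simple way to picture the dependency mapping described above is a power-path graph; the sketch below walks a hypothetical PDU-to-device mapping to list everything in the path of a failed circuit. The topology data is invented for the example.

```python
# Illustrative power-path topology: upstream feed -> downstream loads.
power_path = {
    "ups-A":   ["pdu-01", "pdu-02"],
    "pdu-01":  ["rack-A1", "rack-A2"],
    "pdu-02":  ["rack-B1"],
    "rack-A1": ["web-01", "web-02"],
    "rack-A2": ["db-01"],
    "rack-B1": ["backup-01"],
}

def affected_by(failure, topology):
    """Return every downstream device in the path of a failed component."""
    affected, queue = [], list(topology.get(failure, []))
    while queue:
        node = queue.pop(0)
        affected.append(node)
        queue.extend(topology.get(node, []))
    return affected

print(affected_by("pdu-01", power_path))
# -> ['rack-A1', 'rack-A2', 'web-01', 'web-02', 'db-01']
```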

3) Reduction in rogue alarms

By linking point solutions and consolidating event-level data, a complete historical view can be achieved. Through this historical view, alarm flows can be optimized and operationally reduced. An example would be a BMS that receives alarms at such a rate that they become noise, because they are not easily tuned. It is also very difficult to understand what a typical operating condition looks like without enough (or broad enough) history to proactively set truly meaningful thresholds or deviations.
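One concrete way to set such thresholds from history is to derive them statistically from a long baseline rather than hard-coding them. The sketch below shows one simple approach (mean plus a multiple of the standard deviation) using invented sample data; it is not how any particular BMS or Modius product computes thresholds.

```python
from statistics import mean, pstdev

# Hypothetical baseline: hourly inlet-temperature samples for one rack.
baseline_c = [22.8, 23.1, 22.9, 23.4, 23.0, 23.6, 22.7, 23.2, 23.3, 23.1]

def derived_threshold(history, sigmas=3.0):
    """Alarm threshold = historical mean + N standard deviations."""
    return round(mean(history) + sigmas * pstdev(history), 2)

threshold = derived_threshold(baseline_c)
print(threshold)            # ~23.9 C for the data above
print(24.5 > threshold)     # a 24.5 C reading would alarm; 23.4 C would not
```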

4) BI-based business metrics

With a single point of consolidation, you can quickly build reports and dashboards across platforms. An example would be a stock-chart-style view in which you can visualize a period of time and spot deviations from the norm that might cause downtime or affect operational performance. With several independent systems, it becomes impossible to correlate based on time or to carry enough history to gain the insight necessary to prevent a potential outage.

5) Single application launch point

The “Monitor of Monitors” architecture brings a unified structure for gaining access to operational and control systems. An example use case would be to identify cooling requirements based on broad data capture (e.g., an array of environmental sensors at the rack level, or real-time server-inlet temperatures taken directly from the servers themselves) and then tie the resulting performance metrics into the building control systems to tune VFDs and cooling output. Integrating the BMS application directly with the monitoring system lets the BMS use the real-time data it requires and provides the feedback mechanism to optimize cooling and cost without overheating the IT equipment.
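To sketch the feedback loop described here: computed metrics flow out of the monitoring layer, and a setpoint flows back into the BMS. The write_bms_setpoint call below is a hypothetical placeholder for whatever interface (BACnet, web services, etc.) a given BMS exposes, and the control constants are illustration values only.

```python
# A minimal sketch of the monitoring -> BMS feedback path described above.

TARGET_INLET_C = 25.0

def write_bms_setpoint(object_name, value):
    """Placeholder for a BACnet or web-services write to the BMS."""
    print(f"BMS write: {object_name} = {value}")

def push_cooling_demand(zone, rack_inlet_temps_c, current_setpoint_pct):
    """Derive a cooling-demand setpoint from unified inlet temperatures."""
    hottest = max(rack_inlet_temps_c)
    # Raise output 5% per degree over target, lower 2% per degree under it.
    if hottest > TARGET_INLET_C:
        setpoint = current_setpoint_pct + 5.0 * (hottest - TARGET_INLET_C)
    else:
        setpoint = current_setpoint_pct - 2.0 * (TARGET_INLET_C - hottest)
    setpoint = min(100.0, max(30.0, setpoint))
    write_bms_setpoint(f"{zone}.crah_fan_speed_pct", round(setpoint, 1))

push_cooling_demand("zone-1", [23.9, 24.6, 26.1], current_setpoint_pct=70.0)
# -> BMS write: zone-1.crah_fan_speed_pct = 75.5
```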

Conclusion

If you would like more detail on how Modius can help with any of the above topic areas, please reach out directly at info@modius.com, and we will be happy to set up an appointment.

Topics: data center monitoring, Data-Collection-and-Analysis, Data Center Metrics, Data Center PUE, data center energy monitoring, real-time metrics, Data-Collection-Processing, data center alarming
