Modius Data Center Blog

Using the Right Tool for the Job: Modius OpenData vs. a BMS

Posted by Marina Thiry on Thu, Jul 21, 2011 @ 12:24 PM


We are sometimes asked how Modius OpenData differs from a building management system (BMS). “Why should I consider Modius OpenData when I already have a BMS?”

In short, the answer comes down to using the right tool for the job. A BMS is installed in a large building to monitor and control the environment within that building, for example: lighting, ventilation, and fire systems. It helps facility managers better manage the building’s physical space and environmental conditions, including safety compliance. As concerns about energy conservation have gained critical mass, BMS feature sets have evolved to become more attuned to energy efficiency and sustainability. However, this no more makes a BMS a good tool for data center optimization than scissors can substitute for a scalpel.

Unlike a BMS, OpenData software by Modius was designed to uncover the true state of the data center by continually measuring all data points from all equipment and providing the incisive decision support required to continually optimize infrastructure performance. Both facility and IT managers use OpenData to gain visibility across their data center operations, to arrive at an energy consumption baseline, and then to continually optimize the critical infrastructure of the data center—from racks to CRACs. The effectiveness of the tool used for this purpose is determined by its:

  • operational intelligence enabled by the reach and granularity of data capture, accuracy of the analytics, and the extensibility of the feature set to utilize the latest data center metrics
  • unified alarm system to mitigate operational risk
  • ease-of-use and flexibility of the tool to simplify the job

To illustrate, here are the top three differences between OpenData and a typical BMS that make OpenData the right tool for managing and optimizing data center performance.

  1. OpenData provides the operational intelligence, enabled by the reach and granularity of data capture, accuracy of the analytics, and the extensibility of the feature set, to utilize the latest data center metrics. Modius understands that data center managers don’t know what type of analysis they will need to solve a future problem. Thus, OpenData provides all data points from all devices, enabling data center managers to run any calculation and create new dashboards and reports whenever needed. This broad and granular data capture enables managers to confidently assess their XUE[1], available redundant capacity, and any other data center metric required for analysis. Moreover, because all of the data points provided can be computed at will, the latest data center metrics can be implemented at any time. In contrast, a BMS requires identifying a set of data points upon its installation. Subsequent changes to that data set require a service request (and service fee), which means that even if the data is collected in real-time, it may not be available to you when needed. Thus, the difficulty and expense of enabling the networked communications and reporting for real-time optimization from a BMS is far beyond what most would consider a “reasonable effort” to achieve.


  2. OpenData provides a unified alarm system to mitigate operational risk. With OpenData, end-users can easily set thresholds on any data point, on any device, and edit thresholds at any time. Alarms can be configured with multiple levels of escalation, each with a unique action. Alarms can be managed independently or in bulk, and the user interface displays different alarm states at a glance. In contrast, a typical BMS integration reports only the alarms native to its own devices—i.e., it does not have access to alarms from anything other than its own mechanical equipment. When data center managers take the extra steps to implement unified alarming (e.g., by feeding into the BMS the relay outputs or OPC server-to-server connections from the various subcomponents), they often get only the summary alarms, a consequence of the cost charged per point and/or the expense of additional hardware modules and programming services to integrate with third-party equipment. Thus, when personnel receive an alarm, they have to turn to the console of the monitoring system that “owns” the alarming device to understand what is happening.
    [Image: Alarm monitoring in a typical BMS]
  3. OpenData offers ease of use and the flexibility to simplify the job. OpenData is designed to be user-driven: it is completely configurable by the end-user and no coding is required, period. Learning how to use OpenData takes approximately a day. For example, OpenData enables users to add new calculations, adjust thresholds, add and remove equipment, and even add new sites. In contrast, making proactive changes with a BMS is virtually impossible to do independently. Because the BMS is typically one component of a vendor’s total environmental control solution, the notion of “flexibility” is constrained to what is compatible with the rest of the vendor’s solution offerings. Consequently, a BMS adheres to rigid programming and calculations that frequently require a specialist to implement changes to the configuration, data sets, and thresholds.

In summary, the only thing constant in data centers is flux. Getting the information you need—when you need it—is crucial for data center up-time and optimization. For performance monitoring and optimization, a BMS is more problematic and ultimately more expensive because it is not designed for broad and granular data capture, analysis, and user configuration. Ask yourself: what would it take to generate an accurate PUE report using only a BMS?
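To make that question concrete, here is a minimal sketch of the arithmetic behind a PUE report, written in Python with hypothetical point names (it illustrates the data such a report consumes; it is not Modius’s actual API):

    # Hypothetical metered points, in kW. A real report needs each of these
    # streams captured continuously, not as a one-time snapshot.
    utility_feeds = {"feed_a": 610.0, "feed_b": 595.0}          # total facility power
    it_loads = {"ups_1_output": 480.0, "ups_2_output": 470.0}   # IT equipment power

    total_kw = sum(utility_feeds.values())
    it_kw = sum(it_loads.values())
    pue = total_kw / it_kw  # PUE = total facility power / IT power
    print(f"PUE = {pue:.2f}")  # (610 + 595) / (480 + 470) = 1.27

If any one of those streams sits behind a per-point service fee, the report is stale the moment the facility changes.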

The following table summarizes key differences between OpenData and a BMS, including the impact to data center managers.

[Table: BMS vs. OpenData DCIM]



[1] The “X” refers to the usage effectiveness metric du jour, whether it is PUE, pPUE, CUE, WUE, or something new.

Topics: data center monitoring, BMS, DCIM, monitoring, optimization

Measuring PUE with Shared Resources, Part 2 of 2

Posted by Jay Hartley, PhD on Wed, May 25, 2011 @ 05:02 PM

PUE in an Imperfect World

Last week I started discussing the instrumentation and measurement of PUE when the data center shares resources with other facilities. The most common shared resource is chilled water, such as from a common campus or building mechanical yard. We looked at the simple way to allocate a portion of the mechanical equipment’s power consumption to the overall power consumed by the data center.

The approach there assumed perfect sub-metering of both the power and chilled water, for both the data center and the mechanical yard. Lovely situation if you have it or can afford to quickly achieve it, but not terribly common out in the hard, cold (but not always cold enough for servers) world. Thus, we must turn to estimates and approximations.

Of course, any approximations made will degrade the ability to compare PUEs across facilities—already a tricky task. The primary goal is to provide a metric to measure improvement. Here are a few scenarios that fall short of the ideal, but will give you something to work with:

  • Can’t measure data-center heat load, but have good electrical sub-metering. Use electrical power as a substitute for cooling load. Every watt going in ends up as heat, and there usually aren’t too many people in the space routinely. This works best if you’re also measuring the power to all other non-data-center cooled space; the ratio of the two will get you close to the ratio of their cooling loads. If there are people in a space routinely, add 1 kWh of load per head per 8-hour day of light office work.
  • Water temperature is easy, but you can’t install a flow meter. Many CRAHs control their cooling power through a variable valve, and the reported “Cooling Load” is actually the percentage opening of that valve. Get the valve characteristic curve from the manufacturer; your monitoring system can then convert the reported valve position to an estimated flow (see the sketch after this list). Add up the flows from all CRAHs to get the total.
  • Have the heat loads, but don’t know the mechanical yard’s electrical power. Use a clamp-on hand meter to take some spot measurements. From these you can calculate a Coefficient of Performance (COP) for the mechanical yard, i.e., the cooling power delivered per unit of electrical power consumed. Try to measure it at a couple of different load levels, as the real COP will depend on the % load.
  • I’ve got no information about the mechanical yard. Not true. The control system knows the overall load on the mechanical yard. It knows which pumps are on, how many compressor stages are operating, and whether the cooling-tower fan is running. If you have variable-speed drives, it knows what speed they’re running. You should be able to get from the manufacturer at least a nominal COP curve for the tower and chiller and nominal power curves for pumps and fans. Somebody had all these numbers when they designed the system, after all.
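As a concrete illustration of the second and third scenarios, here is a minimal sketch in Python. All numbers, point names, and the valve curve are hypothetical; substitute your manufacturer’s actual characteristic curve and your own spot measurements. It uses the h = 500·q·ΔT water heat-load relation covered in Part 1:

    # Manufacturer's valve characteristic: % open -> flow in gal/min (assumed curve).
    VALVE_CURVE = [(0, 0.0), (25, 40.0), (50, 95.0), (75, 160.0), (100, 210.0)]

    def flow_from_valve(pct_open):
        """Linearly interpolate the valve curve to estimate flow in gal/min."""
        for (x0, y0), (x1, y1) in zip(VALVE_CURVE, VALVE_CURVE[1:]):
            if x0 <= pct_open <= x1:
                return y0 + (y1 - y0) * (pct_open - x0) / (x1 - x0)
        raise ValueError("valve position out of range")

    def heat_load_btu_h(flow_gpm, delta_t_f):
        """h = C * q * dT, with C = 500 for water, gal/min, and deg F."""
        return 500.0 * flow_gpm * delta_t_f

    # Per-CRAH readings: (valve % open, supply/return delta-T in deg F).
    crah_readings = [(62, 11.5), (48, 10.8), (71, 12.1)]
    h_dc = sum(heat_load_btu_h(flow_from_valve(v), dt) for v, dt in crah_readings)

    # Spot-measured COP: cooling power delivered per electrical power consumed.
    COP = 4.2
    h_dc_kw = h_dc / 3412.0        # BTU/h -> kW of cooling
    p_mech_dc_kw = h_dc_kw / COP   # yard power attributable to the data center
    print(f"Cooling load: {h_dc_kw:.0f} kW, allocated yard power: {p_mech_dc_kw:.0f} kW")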

Whatever number you come up with, perform a sanity check against the DOE’s DCPro online tool. Are you in the ballpark? Heads up, DCPro will ask you many questions about your facility that you may or may not be prepared to answer. For that reason alone, it’s an excellent exercise.

It’s interesting to note that even the Perfect World of absolute instrumentation can expose some unexpected inter-dependencies. Since the efficiency of the mechanical yard depends on its overall load level, the value of the data-center PUE can be affected by the load level in the rest of the facility. During off hours, when the overall load drops in the office space, the data center will take a larger share of the chilled-water resource, and the chiller and/or cooling-tower efficiency will decline at the same time. The resulting increase in instantaneous data center PUE does not reflect a sudden problem in the data center’s operations, though it might point to opportunities for efficiency improvements in the overall control strategy.

PUE is a very simple metric, just a ratio of two power measurements, but depending on your specific facility configuration and level of instrumentation, it can be remarkably tricky to “get it right.” Thus, the ever-expanding array of tier levels and partial alternative measurements. Relatively small incremental investments can steadily improve the quality of your estimates. When reporting to management, don’t hide the fact that you are providing an estimated value. You’ll only buy yourself more grief later when the reported PUE changes significantly due to an improvement in the calculation itself, instead of any real operational changes.

The trade-off in coming to a reasonable overall PUE is between investing in instrumentation and investing in a bit of research about your equipment and the associated estimation calculations. In either case, studying the resulting number as it varies over the hours, days, and seasons can provide excellent insight into the operational behavior of your data center.

Topics: BMS, Dr-Jay, PUE, instrumentation, Measurements-Metrics, pPUE

Measuring PUE with Shared Resources, Part 1 of 2

Posted by Jay Hartley, PhD on Wed, May 18, 2011 @ 09:02 AM

Last week I wrote a little about measuring the total power in a data center when all facility infrastructure is dedicated to supporting the data center. Another common situation is a data center in a mixed environment, such as a corporate campus or an office tower, where facility resources are shared. The most common shared resource is the chilled-water system, often referred to as the “mechanical yard.” As difficult as it sometimes can be to set up continuous power monitoring for a stand-alone data center, it is considerably trickier when the mechanical yard is shared. Again, simple in principle, but often surprisingly painful in practice.

[Image: Mixed-use facility]

One way to address this problem is to use The Green Grid’s partial PUE, or pPUE. While the number should not be used as a comparison against other data centers, it provides a metric to use for tracking improvements within the data center.

This isn’t always a satisfactory approach, however. Given that there is a mechanical yard, it’s pretty much guaranteed to be a major component of the overall non-IT power overhead. Using a partial PUE (pPUE) of the remaining system and not measuring, or at least estimating, the mechanical yard’s contribution masks both the overall impact of the data center and the impact of any efficiency improvements you make.
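To see the size of the masking effect, consider a toy comparison with hypothetical numbers (pPUE here follows The Green Grid’s definition: total energy within a measurement boundary divided by the IT energy within it):

    # Hypothetical loads, in kW.
    it_kw = 500.0        # IT equipment
    ups_dist_kw = 60.0   # UPS and distribution losses inside the boundary
    crah_fans_kw = 40.0  # floor-level cooling fans inside the boundary
    yard_kw = 150.0      # chilled-water plant power attributable to the data center

    ppue = (it_kw + ups_dist_kw + crah_fans_kw) / it_kw            # yard excluded: 1.20
    pue = (it_kw + ups_dist_kw + crah_fans_kw + yard_kw) / it_kw   # yard included: 1.50

A 20% efficiency improvement in the mechanical yard moves the second number and not the first, which is exactly the visibility at stake here.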

There are a number of ways to incorporate the mechanical yard in the PUE calculations. Full instrumentation is always nice to have, but most of us have to fall back on approximations. Fundamentally, you want to know how much energy the mechanical yard consumes and what portion of the cooling load is allocated to the data center.

[Image: Data center mechanical plant]

The Perfect World

In an ideal situation, you have the mechanical yard’s power continuously sub-metered—chillers, cooling towers, and all associated pumps and fans. It is not unusual to have a single distribution point where the measurement can be made, perhaps even a dedicated ATS. Then, for the ideal solution, all you need is sub-metering of the chilled water going into the data center.

The heat load, h, of any fluid cooling system can be calculated from the temperature change, ΔT, and the overall flow rate, q: h = C·q·ΔT, where C is a constant that depends on the type of fluid and the units used. As much as I dislike non-metric units, it is easy to remember that C = 500 when temperature is in °F and flow rate is in gal/min, giving heat load in BTU/h. (Please don’t tell my physics instructors I used BTUs in public.) Regardless of units, the total power to allocate to your data center overhead is P_dc = P_mech × (h_dc / h_mech). Since what matters is the ratio, the constant C cancels out and you have P_dc = P_mech × ((q·ΔT)_dc / (q·ΔT)_mech).
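As a quick worked example of that allocation (all flows, temperatures, and power numbers hypothetical):

    # Hypothetical sub-metered readings.
    C = 500.0  # water, with flow in gal/min and delta-T in deg F -> BTU/h

    def heat_load(q_gpm, dt_f):
        return C * q_gpm * dt_f  # h = C * q * dT

    h_dc = heat_load(400.0, 12.0)     # data-center loop: 400 gal/min, 12 degF rise
    h_mech = heat_load(1500.0, 10.0)  # entire chilled-water plant: 1500 gal/min, 10 degF

    p_mech_kw = 220.0                      # metered mechanical-yard power, kW
    p_dc_kw = p_mech_kw * (h_dc / h_mech)  # = 220 * (2.4e6 / 7.5e6) = 70.4 kW

So 70.4 kW of the yard’s consumption would count toward this data center’s overhead.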

You’re pretty much guaranteed to already have the overall temperature and flow data for the main chilled-water loop in the BMS, so you have (q·ΔT)_mech. You’re much less likely to have the same data for just the pipes going in and out of your data center. If you do, hurrah: you’re in The Perfect World, you’re probably already monitoring your full PUE, and you didn’t need to read this article at all.

Perfect and You Don’t Even Know It

Don’t forget to check the information from your floor-level cooling equipment as well. Some of them do measure and report their own chilled-water statistics, in which case no additional instrumentation is needed. In the interest of brand neutrality, I won’t go into specific names and models in this article, but feel free to contact me with questions about the information available from different equipment.

Perfect Retrofit

If you’re not already sub-metered, but you have access to a straight stretch of pipe at least a couple of feet long, consider installing an ultrasonic flow meter. You’ll need to strap a transmitter and a receiver to the pipe, under the insulation, typically at least a foot apart along the pipe. There is no need to stop the flow or interrupt operation in any way. Either inflow or outflow is fine. If they’re not the same, get a mop; you have other, more pressing problems. Focus on leak detection, not energy monitoring.

If the pipe is metal, place surface temperature sensors directly on the outside of the inflow and outflow pipes, and insulate them well from the outside air. The reading might not be exactly the water temperature, but you can get very close, and you’re really most concerned about the temperature difference anyway. For non-metal pipes, you will have to insert probes into the water flow. You might have access ports available, if you’re lucky.

The Rest of Us

Next week I’ll discuss some of the options available for the large population of data centers that don’t have perfect instrumentation, and can’t afford the time and/or money to purchase and install it right now.

Topics: BMS, Dr-Jay, PUE, instrumentation, Measurements-Metrics, pPUE
