Modius Data Center Blog

Jay Hartley, PhD


Measuring PUE with Shared Resources, Part 1 of 2

Posted by Jay Hartley, PhD on Wed, May 18, 2011 @ 09:02 AM

Last week I wrote a little about measuring the total power in a data center when all of the facility infrastructure is dedicated to supporting the data center. Another common situation is a data center in a mixed environment, such as a corporate campus or an office tower, where facility resources are shared. The most common shared resource is the chilled-water system, often referred to as the “mechanical yard.” As difficult as it can sometimes be to set up continuous power monitoring for a stand-alone data center, it is considerably trickier when the mechanical yard is shared. Again, simple in principle, but often surprisingly painful in practice.

Mixed Use Facility

One way to address this problem is to use The Green Grid's partial PUE, or pPUE. While that number should not be used to compare against other data centers, it does provide a metric for tracking improvements within your own data center.

This isn't always a satisfactory approach, however. Given that there is a mechanical yard, it is pretty much guaranteed to be a major component of the overall non-IT power overhead. Using a pPUE that covers only the remaining systems, without measuring or at least estimating the mechanical yard's contribution, masks both the overall impact of the data center and the impact of any efficiency improvements you make.

There are a number of ways to incorporate the mechanical yard in the PUE calculations. Full instrumentation is always nice to have, but most of us have to fall back on approximations. Fundamentally, you want to know how much energy the mechanical yard consumes and what portion of the cooling load is allocated to the data center.

Data Center Mechanical Plant

The Perfect World

In an ideal situation, you have the mechanical yard's power continuously sub-metered: chillers, cooling towers, and all associated pumps and fans. It's not unusual to have a single distribution point where that measurement can be made, perhaps even a dedicated ATS. Then, for the ideal solution, all you need is sub-metering of the chilled water going into the data center.

The heat load, h, of any fluid cooling system can be calculated from the temperature change, ∆T, and the overall flow rate, q: h = C·q·∆T, where C is a constant that depends on the type of fluid and the units used. As much as I dislike non-metric units, it is easy to remember that C = 500 when temperature is in °F and flow rate is in gal/min, giving heat load in BTU/h. (Please don't tell my physics instructors I used BTUs in public.) Regardless of units, the total power to allocate to your data center overhead is P_dc = P_mech × (h_dc / h_mech). Since what matters is the ratio, the constant C cancels out and you have P_dc = P_mech × (q·∆T)_dc / (q·∆T)_mech.

You're pretty much guaranteed to have the overall temperature and flow data for the main chilled-water loop in the BMS already, so you have (q·∆T)_mech. You're much less likely to have the same data for just the pipes going in and out of your data center. If you do, hurrah: you're in The Perfect World, and you're probably already monitoring your full PUE and didn't need to read this article at all.
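
If you want to sanity-check the arithmetic, here is a minimal sketch of the allocation calculation in Python, assuming US units and the C = 500 rule of thumb; every reading in it is made up for illustration.

```python
# Minimal sketch of the allocation math above, using the C = 500 rule of
# thumb (temperature in deg F, flow in gal/min, heat load in BTU/h).
# All of the example readings are hypothetical.

def heat_load_btu_per_hr(flow_gpm, delta_t_f):
    """Heat load h = C * q * dT, with C = 500 for water in US units."""
    return 500.0 * flow_gpm * delta_t_f

def allocated_mech_power_kw(p_mech_kw, dc_flow_gpm, dc_delta_t_f,
                            plant_flow_gpm, plant_delta_t_f):
    """P_dc = P_mech * (q*dT)_dc / (q*dT)_mech; the constant C cancels out."""
    ratio = (dc_flow_gpm * dc_delta_t_f) / (plant_flow_gpm * plant_delta_t_f)
    return p_mech_kw * ratio

# Hypothetical readings: plant loop at 1,200 gpm with a 10 F delta-T,
# data center branch at 400 gpm with a 9 F delta-T, and a mechanical yard
# drawing 350 kW.
print(allocated_mech_power_kw(350.0, 400.0, 9.0, 1200.0, 10.0))  # 105.0 kW
```

In practice you would pull the flow and delta-T values from the BMS points on the same interval as the mechanical-yard power readings, so the allocation trends along with everything else.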

Perfect and You Don’t Even Know It

Don’t forget to check the information from your floor-level cooling equipment as well. Some of them do measure and report their own chilled-water statistics, in which case no additional instrumentation is needed. In the interest of brand neutrality, I won’t go into specific names and models in this article, but feel free to contact me with questions about the information available from different equipment.

Perfect Retrofit

If you're not already sub-metered, but you have access to a straight stretch of pipe at least a couple of feet long, then consider installing an ultrasonic flow meter. You'll need to strap a transmitter and a receiver to the pipe, under the insulation, typically at least a foot apart along its length. There's no need to stop the flow or interrupt operation in any way. Either the inflow or the outflow pipe is fine; if their flows aren't the same, get a mop; you have other, more pressing problems. Focus on leak detection, not energy monitoring.

If the pipe is metal, place surface temperature sensors directly on the outside of the inflow and outflow pipes, and insulate them well from the outside air. The surface may not be at exactly the same temperature as the water, but you can get very close, and you're really most concerned with the temperature difference anyway. For non-metal pipes, you will have to insert probes into the water flow. If you're lucky, you might have access ports available.

The Rest of Us

Next week I’ll discuss some of the options available for the large population of data centers that don’t have perfect instrumentation, and can’t afford the time and/or money to purchase and install it right now.

Topics: BMS, Dr-Jay, PUE, instrumentation, Measurements-Metrics, pPUE

Monitoring Total Energy for PUE

Posted by Jay Hartley, PhD on Mon, May 09, 2011 @ 02:34 PM

I am routinely surprised at how difficult it can be to determine the total energy consumption for many data centers. Stand-alone data centers can at least look at the monthly bill from the utility, but as the Green Grid points out when discussing PUE metrics, continuous monitoring is preferred whenever possible. Measurement in an environment where resources, such as chilled water, are shared with non-data center facilities can be even more complex. I’ll discuss that topic in the coming weeks. For now, I want to look just at the stand-alone data center.


In general, the choices are pretty simple for a green-field installation. The only real requirement is committing to buy the instrumentation. Since the conductors can be routed through the CTs before anything is energized, solid-core CTs work fine; they are cheaper, and generally smaller than split-core units for the same current range, and wiring in the voltage taps is easy. Retrofits are more interesting. Nobody likes to work on a hot electrical system, but shutting down a main power feed is a risky process, even with redundant systems.

One logical metering point is the output of the main transfer switches. Many folks assume they already have power metering on their ATS; it has an LCD panel showing various electrical readings, after all. Unfortunately, more often than not, only voltage is measured. That's all the switch needs to do its job. It seems the advanced metering option is either overlooked or is the first thing to go when trimming the budget.

Retrofitting the advanced option into an ATS is not trivial. Clamping on a few CTs might not seem tough, but the metering module itself generally has to be completely swapped out. Full shut-down time.

A separate revenue-grade power meter is not terribly expensive these days. In some cases it may even be competitive with the advanced metering option from your ATS manufacturer. Meters that include power-quality metrics such as THD can be found for less than $3K, CTs included. Such a meter could be installed directly on the output of the ATS, but the input of the main distribution panel is generally a better option.

Clamping on the CTs is relatively straightforward, even on a live system, though it can be tricky if the cabling is wired too tightly. Slim, flexible Rogowski coils are an excellent option in this case. A bit pricier, but ease of installation can make back the difference in labor pretty quickly.

For voltage sensing, distribution panels often have spare output terminals available. This is ideal in a retrofit situation, and desirable even in a new install. Odds are the breaker rating is higher than the meter can handle, so don't forget to include protection fusing. If no spare circuit is available, you can perhaps find one that is at least non-critical, such as a lighting circuit, and can be shut down long enough to tie in the voltage.

In the worst-case retrofit scenario, you have no local voltage connections available. CTs alone are better than nothing. A good monitoring system can combine those readings with nominal voltages, or voltages from the ATS, to provide at least apparent power. Most meters can be powered from a single-phase supply, even 110 V wall power, so I recommend springing for the full power meter even in this case. At some point you'll likely have some downtime on this circuit, hopefully scheduled, and you can complete the proper voltage wiring then.
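
As a rough illustration of that fallback, here is a short sketch (in Python, with made-up numbers) of estimating three-phase apparent power from CT currents and a nominal line-to-line voltage:

```python
import math

def apparent_power_kva(line_currents_a, v_line_to_line=480.0):
    """Balanced three-phase estimate: S = sqrt(3) * V_LL * I_avg."""
    i_avg = sum(line_currents_a) / len(line_currents_a)
    return math.sqrt(3) * v_line_to_line * i_avg / 1000.0

# Example: three CTs reading roughly 210 A per phase on a nominal 480 V feed.
print(round(apparent_power_kva([208.0, 212.0, 211.0])))  # ~175 kVA
```

Without the voltage reference you have no real power or power factor, of course, which is exactly why the full meter is worth the money once you can wire it properly.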

The final decision about your meter is whether to get the display. If your goal is continuous measurement (i.e., monitoring), the meter should be communicating with a monitoring system anyway, so the LED or LCD display will at best provide a secondary check on the readings. The display option also complicates the installation, because you need some kind of panel mounting to hold it and keep it visible; it can become more of a space issue than you might expect for a 25-square-inch display. Skipping the display saves on the cost of the meter, and saves even more on the installation labor.

Look for a meter with simple LEDs or some other indicator to help identify wiring problems like mis-matched current and voltage phases. If the meter is a transducer only, have the monitoring system up and running, and communication wiring run, before installing the meter, so you can use its readings to troubleshoot the wiring. Nobody wants to open that panel twice!
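
Once the meter is reporting into the monitoring system, even a simple per-phase sanity check will flag the most common wiring mistakes. The thresholds and field names below are illustrative assumptions, not specific to any particular meter:

```python
def check_phase_wiring(per_phase):
    """per_phase: one dict per phase (A, B, C) with 'kw' and 'pf' readings."""
    problems = []
    for name, reading in zip("ABC", per_phase):
        if reading["kw"] < 0:
            # Negative real power usually means a reversed CT, or a CT
            # paired with the wrong voltage phase.
            problems.append(f"Phase {name}: negative kW; check CT orientation and phasing")
        elif reading["pf"] < 0.5:
            # An implausibly low power factor on one phase is a classic
            # symptom of a 120-degree mismatch between CT and voltage tap.
            problems.append(f"Phase {name}: power factor {reading['pf']:.2f}; likely phase mismatch")
    return problems or ["Wiring looks plausible"]

# Example readings taken right after energizing the meter.
print(check_phase_wiring([
    {"kw": 42.1, "pf": 0.93},
    {"kw": -40.8, "pf": 0.91},   # reversed CT on phase B
    {"kw": 41.5, "pf": 0.31},    # current and voltage phases crossed on C
]))
```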

Continuous monitoring of total power is critical to managing the capacity and efficiency of a data center. Whether your concern is PUE, carbon footprint, or simply reducing the energy bill, the monthly report from the utility won’t always provide the information you need to identify specific opportunities for improvement. Even smart meters might not be granular enough to identify short-term surges, and won’t allow you to correlate the data with that from other equipment in your facility. It’s hard to justify skimping on one or two meters for a new data center. Even in a retrofit situation, consider a dedicated meter as an early step in your efficiency efforts.

Topics: Energy Efficiency, Dr-Jay, PUE, data center energy monitoring, monitoring

The Water Cooler as Critical Facility Infrastructure

Posted by Jay Hartley, PhD on Mon, May 02, 2011 @ 04:31 PM

Any data center manager can rattle off the standard list of critical facility equipment in the data center: generator, transfer switch, UPS, PDU, CRAC, fire system, etc. At times, however, one must take a step back and broaden one's view when determining what is critical. Unfortunately, too often we don't realize we're missing something important until after disaster strikes. In the hopes of heading off some future disasters, I share with you the following cautionary tale. I'll give you the take-away message in advance: "Look up!"

Scene:  A corporate office tower in Anytown, USA. A data center consumes the bulk of one floor. It is an efficient, well-maintained data center, with dual, dedicated utility feeds supplying a 2N-redundant power system, backup generator, and redundant chillers. It also boasts a years-long history of non-stop 100% reliable operation.

The office floors above the data center all have essentially identical layouts, consisting of conference rooms, cube farms, and the occasional honest-to-goodness office. Centrally located on each floor is an efficient, well-maintained kitchenette. In each kitchenette is a water cooler. Like many of its kind where the tap water is potable, this water cooler is plumbed directly to the sink. The ¼-inch white plastic tubing is anchored in place with small brass ferrules. This system has been doing yeoman's work for years, reliably delivering chilled, filtered drinking water to the employees with better than 99% up time, allowing for scheduled maintenance.

Action:  Disaster strikes, in accordance with Murphy's Law, late one weekend night. The water cooler’s plastic plumbing finally succumbs to age and stress. Water streams onto the floor unchecked, quickly covering the linoleum surface and finding its way into the wall. There it heads in water's favorite direction, down, passing easily through the matching kitchenette walls in the identical floor plans below.

The water continues until reaching a floor with a dramatically different layout. Temporarily stopped in its pursuit of gravity, the water gathers its forces, soaking into the obstruction until eventually, like the plastic tube, the ceiling tile succumbs. The next obstruction happens to be a PDU and a couple of neighboring server racks in the data center. They too succumb, we assume rather spectacularly.

Meanwhile, back in the kitchenette, the leak is discovered during a security sweep and the flow is cut off, but human intervention has come too late for the electronics down below. Power redundancy saved all servers that were not directly water-damaged, so only a few internal business applications took an uptime hit, along with the kitchenette. Over $100,000 of damage, thanks to the failure of a few pennies of plastic tubing in a “non-critical” part of the facility.

 

Solution:  One could easily focus on the data center itself and protecting its equipment: place catch basins in the ceiling and extend the raised-floor leak detection system into them. That would help, and perhaps give a bit more warning. Not a bad idea in any case, if you have the time and money. A better solution? Inexpensive, off-the-shelf floor leak detectors come in kits with automatic shut-off valves, available online or at your local hardware store for home use in laundry rooms. An audible alarm is nice, but does an alarm make a noise if no one is there to hear it? Definitely get one with a second, normally-closed contact closure to link into your monitoring system. (You do have one, don't you? Consider OpenData ME, SE, or EE!) Stop the leak early, and get advance notice.
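
As a hypothetical sketch of that last point, here is roughly what watching a normally-closed leak-detector contact looks like; read_contact() stands in for however your monitoring hardware actually exposes the input (a BMS point, a Modbus discrete input, and so on):

```python
import time

def read_contact():
    """Return True while the normally-closed circuit is intact (no leak)."""
    raise NotImplementedError("wire this to your monitoring system's digital input")

def watch_leak_contact(poll_seconds=5.0):
    while True:
        if not read_contact():
            # An open circuit means water was detected or the wiring was cut;
            # either way, it deserves an alarm, not just a log entry.
            print("ALARM: leak-detector contact open")
        time.sleep(poll_seconds)
```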

While you're at it, pick one up for that efficient, well-maintained, and oh-so-convenient second-floor laundry room in your home!

I hope you've enjoyed this tale. In the coming weeks, I'll share additional stories from the field as well as my musings on monitoring, instrumentation, and metrics. Visit my blog next week for insights on metering total energy for PUE, including a tip about the ATS.

Topics: Data-Center-Best-Practices, critical facility, leak detection, Dr-Jay, Data-Collection-and-Analysis, Sensors-Meters-and-Monitoring, Uptime-Assurance, monitoring
