Modius Data Center Blog

Data Center Energy Monitoring: The 4 Most Common Approaches

Posted by Mark Harris on Thu, Apr 29, 2010 @ 05:09 AM

Understanding the power consumption of any given discrete device in the data center can be accomplished in a number of ways, including both measurement and modeling technologies. While many approaches have been tried over the years, today there are four main ways to determine the power being consumed.

  • Faceplate Values. Each manufacturer affixes a service value ‘plate’ which identifies things like model and serial numbers, the manufacturer’s contact information, safety certifications and power requirements. The power requirements are usually listed as the voltage range acceptable to the included power supplies, as well as the maximum current to be drawn under any configuration and working condition of the device. For a complex device, this faceplate power consumption value is listed as the maximum possible and may be 4 or 5 times the actual power being drawn in normal operating conditions. Since this is printed information required on every device, it is essentially available at no additional administrative cost.

  • iPDU Monitoring per outlet. Newer environments have begun to deploy measured or metered power distribution devices within each rack. These iPDUs have enough intelligence to respond to network inquiries made of the iPDU itself, with the most granular of these devices offering discrete values for the power being consumed PER OUTLET. These per-outlet iPDUs make ideal sources of raw power consumption values, although they tend to be the most costly way to get them (a polling sketch appears below).

  • Monitoring via operating system services. Most modern telco, server and switch hardware designs and their associated operating systems include what are known as ‘system services’ or ‘daemons’, which are intended to allow access to granular operating information. In most modern cases, device drivers included in the standard software builds enable power consumption metrics to be read from the actual power supply unit, assuming that the power supply was instrumented in hardware when the device was manufactured. In cases where this hardware instrumentation exists, there is no additional cost to gaining access to the power consumption of these devices across an IT infrastructure.

  • Modeling the device. It could be argued that a tremendous portion of the installed IT equipment purchased more than 3 years ago has little or no instrumentation capability in hardware. In these cases it is impossible to read power consumption metrics programmatically. Instead, one approach has been to estimate the power consumed based upon a model of the device’s hardware configuration. For servers especially, a good approximation can be calculated from an inventory of the components inside each device and the power consumption of each of those components; coupled with some workload information, a fair assessment of consumption can be derived (a minimal sketch follows this list).
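To make the modeling approach concrete, here is a minimal sketch in Python. The per-component wattages and the linear idle-to-peak workload scaling are assumptions for illustration only; real models are calibrated against vendor data for each specific configuration.

```python
# Minimal sketch of configuration-based power modeling. The component
# wattages below are hypothetical illustration values, not vendor data.

COMPONENT_IDLE_WATTS = {          # assumed draw at idle
    "xeon_quad_core": 45.0,
    "dimm_4gb": 4.0,
    "sas_disk_10k": 8.0,
    "psu_overhead": 30.0,
}
COMPONENT_PEAK_WATTS = {          # assumed draw at full utilization
    "xeon_quad_core": 95.0,
    "dimm_4gb": 7.0,
    "sas_disk_10k": 12.0,
    "psu_overhead": 30.0,
}

def estimate_power(inventory, utilization):
    """Estimate draw for a device from its parts list and a 0..1 workload.

    inventory   -- dict of component name -> quantity
    utilization -- fraction of peak workload (e.g. from CPU counters)
    """
    idle = sum(COMPONENT_IDLE_WATTS[c] * n for c, n in inventory.items())
    peak = sum(COMPONENT_PEAK_WATTS[c] * n for c, n in inventory.items())
    # Assume consumption scales linearly between idle and peak.
    return idle + (peak - idle) * utilization

# Example: a 1U server with 2 CPUs, 8 DIMMs, 2 disks at 30% load
server = {"xeon_quad_core": 2, "dimm_4gb": 8, "sas_disk_10k": 2,
          "psu_overhead": 1}
print(round(estimate_power(server, 0.30), 1), "watts")   # -> 207.6 watts
```

The same shape of calculation extends to other device classes once a parts inventory and per-component figures are available.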
It should be noted that each and every enterprise will likely find itself dealing with MULTIPLE approaches (from the above list) in determining power consumption. Some devices and configurations will lend themselves to highly granular network inquiry, while other, older devices may need to be modeled to determine power. It is these sources of power consumption that will need to be gathered, normalized and then ultimately fed into some form of higher-value asset or resource management suite.
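As an example of the network-inquiry end of that spectrum, here is a minimal Python sketch of polling a per-outlet iPDU over SNMP and normalizing the reading into a common record shape that downstream suites can consume. The OID shown is a placeholder, since every iPDU vendor publishes its own enterprise MIB; the sketch assumes the pysnmp library and SNMP v2c community access.

```python
# Minimal sketch of polling a per-outlet iPDU over SNMP. The OID below
# is a hypothetical placeholder; consult your vendor's MIB for the real
# per-outlet power OID. Requires pysnmp (pip install pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

OUTLET_POWER_OID = "1.3.6.1.4.1.99999.1.2.1"   # hypothetical vendor OID

def read_outlet_watts(host, outlet, community="public"):
    """Fetch one outlet's power reading and return a normalized record."""
    error, status, _, varbinds = next(getCmd(
        SnmpEngine(),
        CommunityData(community),
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(f"{OUTLET_POWER_OID}.{outlet}"))))
    if error or status:
        raise RuntimeError(f"SNMP failure for {host} outlet {outlet}")
    # Normalize to a common record so downstream tools see one shape,
    # whether the value came from an iPDU, an OS daemon, or a model.
    return {"source": "ipdu", "host": host, "outlet": outlet,
            "watts": int(varbinds[0][1])}   # units/scaling depend on the MIB
```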

 

Topics: data center energy monitoring, data center energy efficiency, Measurements-Metrics, data center infrastructure

Data Center Environmental Monitoring: Think Beyond Wireless Sensors!

Posted by Mark Harris on Thu, Apr 29, 2010 @ 05:00 AM

It is funny how many times I have recently visited larger data centers considering 'Green IT' or other efficiency initiatives and found high-priority, funded projects for wireless temperature sensors. Truly, wireless environmental technology is some of the coolest 'tangible' stuff I have seen in a long time. It is a high-tech version of the kind of technology we all grew up with, things we all just inherently 'get'. Temperature and humidity sensors. What could be simpler?

Here it is in 2010, and it is important to realize that there are finally a number of great choices for wireless sensor solutions out there, using either Active RFID or 802.15.4 (ZigBee) technologies. A customer today really can deploy a fairly granular 'mesh' of sensors in data centers and related facilities areas without much difficulty. The sensors are simple, small, low-cost, and have long battery lives (more than 3 years each). All of the solutions come in easy-to-install packaging with double-sided tape or Velcro. How easy is that?

Well, I would argue that the REAL VALUE of wireless temperature and humidity environmental sensors is NOT the sensors themselves, nor the data derived from each individual sensor, but the aggregation of all of the data from all of the devices, rolled together with the metric data from the co-located IT gear and the facility's HVAC gear, all normalized and easily accessible using ordinary tools. Excel, anyone? (Or for the web-bies in the crowd, "Xcelsius, anyone?") Imagine being able to plot the PUE of your data center as a function of outside temperature (see the sketch below), or the total power consumption as a function of actual CPU processing (IT load). Remember, sensors can be found everywhere in your data center: as discrete wired and wireless boxes, embedded in every IT device purchased in the past 3 years (your servers, routers, firewalls and storage directors), and in every PDU or iPDU (power strip). Sensors are everywhere, just waiting to be queried for their metrics!
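As a quick illustration of that kind of roll-up, here is a minimal Python sketch that plots PUE against outside air temperature. The CSV file and its column names (timestamp, facility_kw, it_kw, outside_temp_c) are assumptions standing in for whatever normalized repository the collected metrics land in; it uses pandas and matplotlib rather than Excel, but the idea is the same.

```python
# Minimal sketch: plot PUE as a function of outside air temperature
# from a CSV of normalized readings. File name and column names are
# assumed for illustration. Requires pandas and matplotlib.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("datacenter_readings.csv", parse_dates=["timestamp"])

# PUE = total facility power / IT equipment power
df["pue"] = df["facility_kw"] / df["it_kw"]

df.plot.scatter(x="outside_temp_c", y="pue", alpha=0.4)
plt.title("PUE vs. outside air temperature")
plt.xlabel("Outside temperature (°C)")
plt.ylabel("PUE")
plt.savefig("pue_vs_temp.png")
```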

Customers should think BIGGER. Push the envelope and think PAST the wireless sensors (which ARE very cool), think PAST the pretty pictures that any one of the wireless vendors can draw, and focus on how to transform ALL of the data that you can get your hands on into actionable, cost-saving information that can be directly applied to the BIGGER picture of running the IT infrastructure at the lowest possible cost, supporting SLAs, etc.

Topics: Data-Collection-and-Analysis, Sensors-Meters-and-Monitoring, data center analysis, data center temperature sensors, data center energy efficiency

Visual Asset Management - How about some Real-Time metrics with that?

Posted by Mark Harris on Sun, Apr 04, 2010 @ 05:13 AM

The granular management of all assets being placed or moved within a data center has become highly desirable over the past several years. It is important to note that most major companies will claim to have already solved their asset management needs with an array of typically disconnected and often complex tabular asset management products. These same companies are now quietly looking for 'something else' to help get them to where they 'really' need to be...

The newest generation of asset management suites is focused on visually representing assets, with a drag-and-drop approach to adds, moves and changes. These new lifecycle management suites allow equipment to be added, moved or changed in existing facilities in a highly predictable and efficient manner. Examples of these modern suites include Aperture, Altima/Netzoom, Rackwise, nLyte, Avocent, ShowRack, APC, VisualDatacenter, Raritan/dcTrack, FieldView and a handful of others. Each of these management software suites has been crafted to allow complex data centers to be visually articulated with a high degree of fidelity, identifying everything from the manufacturer, model and serial number to the purchase date, PO number, owner’s name and physical location.

In a typical scenario, the user graphically navigates using a drill-down tool which mimics the ‘Google Earth’ model… starting with very macro views and then selectively drilling down to progressively more detailed views of smaller areas. In each view, various operational metrics are constantly reported, such as the power being consumed within the current view. Ultimately, single discrete values can be displayed.
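The math behind those view-level metrics is a simple recursive roll-up: the power for any node in the location hierarchy is the sum over its children, down to individual device readings at the leaves. A minimal sketch, with an assumed dictionary shape for the tree:

```python
# Minimal sketch of the roll-up behind drill-down views. The tree shape
# (nested dicts with "children", leaves carrying "watts") is assumed.
def power_watts(node):
    """Recursively total power for a site/room/rack/device tree node."""
    if "watts" in node:                       # leaf: a measured device
        return node["watts"]
    return sum(power_watts(c) for c in node["children"])

site = {"children": [
    {"children": [{"watts": 150.0}, {"watts": 420.0}]},   # rack A
    {"children": [{"watts": 310.0}]},                     # rack B
]}
print(power_watts(site))   # -> 880.0
```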

Historically, these suites have relied on ‘faceplate’ information. This faceplate information is based upon the manufacturer’s published specification for a given device, and it is usually the maximum value. A 1U web server, for instance, may have a published faceplate power consumption of 450 watts, but the actual power draw in normal operation may be a much lower 150 watts or less. This discrepancy creates the potential for huge errors and inefficiencies when planning for overall capacity and expansion opportunities.
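The planning impact is easy to quantify. A quick back-of-the-envelope sketch using the wattages from the example above (the 5 kW rack power budget is an assumed figure):

```python
# How many 1U servers "fit" in a rack power budget under faceplate
# planning versus measured draw. Wattages mirror the example in the
# text; the 5 kW budget is an assumed figure for illustration.
RACK_BUDGET_W = 5000
FACEPLATE_W = 450     # manufacturer's maximum rating
MEASURED_W = 150      # typical observed draw

print("Servers per rack (faceplate):", RACK_BUDGET_W // FACEPLATE_W)  # 11
print("Servers per rack (measured): ", RACK_BUDGET_W // MEASURED_W)   # 33
```

In this example, planning against the faceplate value strands roughly two thirds of the rack's usable capacity.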

Consequently, one of the newest customer requirements needing to be addressed by EACH of the asset management suite vendors is the addition of real-time metric data. The desired metric data will obviously include power consumption, but may also include less intuitive values such as fan speeds, inlet and CPU temperatures, CPU and RAM utilization, available disk space, etc. While these values are relatively easy to come by as an individual user of each system, many different technologies must be exercised to retrieve these values programmatically and remotely in real time.
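One example of those technologies is IPMI/DCMI, which many server baseboard management controllers expose for exactly this purpose. A minimal sketch driving the stock ipmitool CLI from Python; the host and credentials are placeholders, and it assumes the platform actually implements DCMI power readings:

```python
# Minimal sketch: pull the instantaneous power reading from a server's
# BMC over IPMI/DCMI using the stock ipmitool CLI. Host and credentials
# below are placeholders; the platform must support DCMI power readings.
import subprocess

def ipmi_power_watts(bmc_host, user, password):
    """Parse 'Instantaneous power reading' from ipmitool dcmi output."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", user, "-P", password, "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Instantaneous power reading" in line:
            # Line looks like: "Instantaneous power reading:   152 Watts"
            return float(line.split(":")[1].split()[0])
    raise ValueError("no power reading in ipmitool output")

print(ipmi_power_watts("10.0.0.42", "admin", "secret"))
```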

This is currently where many of the latest generation of visual asset managers struggle. While their systems are amazing at handling the visual manipulation of IT assets, moving racks and routers across floor plans and data centers, the systems are simply not built with a large enterprise in mind when it comes to gathering real-time metric data. Gathering metric data for 12 servers at a trade show is very appealing, but doing the same type of metric gathering in production against 12,000 or 112,000 servers is a bigger fish to fry. Doing so requires a distributed collection architecture that is purpose-built to collect any and all data from any device which is network-addressable.
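At that scale the collection itself becomes the engineering problem. A real distributed collector shards work across dedicated collector nodes near the devices; the sketch below shows only the single-node core of the idea, fanning an I/O-bound poll across a worker pool, with read_device() as a hypothetical stand-in for any per-device reader (SNMP, IPMI, or a model lookup).

```python
# Minimal single-node sketch of fan-out polling; a production collector
# would distribute this across many collector nodes and stream results
# to a central store.
from concurrent.futures import ThreadPoolExecutor

def read_device(host):
    # Hypothetical stand-in for any per-device reader (SNMP, IPMI, ...).
    return {"host": host, "watts": None}

def poll_fleet(hosts, workers=64):
    """Poll every host concurrently; the work is I/O-bound, so threads
    scale well up to the point where a single node saturates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_device, hosts))

# Example: fan out across a (synthetic) fleet of 12,000 addresses.
readings = poll_fleet([f"10.0.{i // 250}.{i % 250 + 1}"
                       for i in range(12000)])
```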

Real-time monitoring with OpenData is the technology that will support the replacement of these ‘theoretical’ faceplate values with actual observed values… allowing a significantly more accurate view for planning purposes. Modius’ OpenData® is built on a fully distributed bus architecture, is firewall-friendly, and can be deployed easily to meet any asset management tool’s need for real-time monitoring. OpenData SUPPORTS rather than replaces asset management suites, and has been crafted with APIs and Web Services interfaces that allow the metric data OpenData gathers to be CONSUMED by any number of other applications, including the current crop of visual asset management solutions. The combination of a best-of-breed visual asset management tool with a highly granular metric monitoring solution like Modius OpenData allows business costs to be much better understood, and ultimately will allow existing data centers to provide significantly more capacity while increasing the lifespan of the data center itself.

Topics: data center monitoring, real-time metrics, Measurements-Metrics, IT Asset Management
