The granular management of assets being placed or moved within a data center has become highly desirable over the past several years. It is important to note that most major companies will claim to have already solved their asset management needs with an array of typically disconnected, and often complex, tabular asset manager products. These same companies are now quietly looking for 'something else' to help get them to where they 'really' need to be...
The newest generation of asset management suites are focused on visually representing assets with a drag-and-drop approach to adds, moves and changes. These new lifecycle management suites allow equipment to be added, moved or changed in existing facilities in a highly predictable and efficient manner. Examples of these modern suites include Aperture, Altima/Netzoom, Rackwise, nLyte, Avocent, ShowRack, APC, VisualDatacenter, Raritan/dcTrack, FieldView and a handful of others. Each of these management software suites has been crafted to allow complex data centers to be visually articulated with a high degree of fidelity, identifying everything from the manufacturer, model and serial number, to the purchase date, PO number, owner’s name and physical location.
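As a sketch of what one of these asset records might look like in practice, the snippet below models the fields listed above as a simple data structure. The field names and values are illustrative only, not any particular vendor's schema.

```python
# Hypothetical asset record for a visual asset management suite.
# Every field name and value here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Asset:
    manufacturer: str
    model: str
    serial_number: str
    purchase_date: str
    po_number: str
    owner: str
    location: str  # e.g. "DC-1 / Room A / Rack A1 / U17"

web01 = Asset("ExampleCorp", "WS-1U", "SN-0001", "2010-01-15",
              "PO-7731", "J. Smith", "DC-1 / Room A / Rack A1 / U17")
print(web01.model)  # WS-1U
```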
In a typical scenario, the user graphically navigates using a drill-down tool that mimics the ‘Google Earth’ model… starting with very macro views and then selectively drilling down to progressively more detailed views of smaller areas. In each view, various operational metrics are continuously reported, such as the power being consumed within the current view. Ultimately, single discrete values can be displayed.
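The drill-down model described above amounts to rolling a metric up a location hierarchy. The following sketch, with invented names and wattages, shows how a 'power being consumed' figure for the current view could be computed by summing over everything beneath it:

```python
# Illustrative only: a site -> room -> rack hierarchy with per-rack wattages.
def aggregate_power(node):
    """Return total watts for a node and all of its children."""
    total = node.get("watts", 0)
    for child in node.get("children", []):
        total += aggregate_power(child)
    return total

datacenter = {
    "name": "DC-1",
    "children": [
        {"name": "Room A", "children": [
            {"name": "Rack A1", "watts": 3200},
            {"name": "Rack A2", "watts": 2750},
        ]},
        {"name": "Room B", "children": [
            {"name": "Rack B1", "watts": 4100},
        ]},
    ],
}

print(aggregate_power(datacenter))  # 10050
```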
Historically, these suites have relied on ‘faceplate’ information, based upon the manufacturer’s published specification for a given device. It is usually the maximum value. A 1U web server, for instance, may have a published faceplate power consumption of 450 Watts, but the actual power draw in normal operation may be a much lower 150 Watts or less. This discrepancy creates the potential for huge errors and inefficiencies when planning for overall capacity and expansion opportunities.
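The planning impact of that discrepancy is easy to quantify. The arithmetic below uses the 450 W faceplate and 150 W observed figures from the example, plus an assumed 5 kW per-rack power budget, to show how many 1U servers each number would allow per rack:

```python
# Illustrative arithmetic only; the 5 kW rack budget is an assumption.
RACK_BUDGET_W = 5000   # assumed per-rack power budget
FACEPLATE_W = 450      # manufacturer's published maximum for the 1U server
MEASURED_W = 150       # typical observed draw for the same server

servers_by_faceplate = RACK_BUDGET_W // FACEPLATE_W  # 11 servers per rack
servers_by_measured = RACK_BUDGET_W // MEASURED_W    # 33 servers per rack

print(servers_by_faceplate, servers_by_measured)  # 11 33
```

Planning against the faceplate figure strands roughly two-thirds of the rack's usable capacity in this example.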
Consequently, one of the newest customer requirements needing to be addressed by EACH of the asset management suite vendors is to add real-time metric data. The desired metric data will obviously include power consumption, but may also include less intuitive values such as fan speeds, inlet and CPU temperatures, CPU and RAM utilization, available disk space, etc. While these values are relatively easy to come by as an individual user of each system, many different technologies must be exercised to retrieve them programmatically, remotely, and in real time.
This is currently where many of the latest generation of visual Asset Managers struggle. While these systems are amazing at the visual manipulation of IT assets, moving racks and routers around floorplans and data centers, they are simply not built with a large enterprise in mind when it comes to gathering real-time metric data. Gathering metric data for 12 servers at a trade show is very appealing, but doing the same type of metric gathering in production against 12,000 or 112,000 servers is a much bigger fish to fry. To do so requires a distributed collection architecture that is purpose-built to collect any and all data from any device that is network addressable.
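One way to picture such a collection architecture is as a fan-out of pollers across many devices. The sketch below is purely illustrative: `poll_device` stands in for a real SNMP or serial query and simply fabricates a reading so the example is self-contained.

```python
# Minimal fan-out collection sketch; not any real product's architecture.
from concurrent.futures import ThreadPoolExecutor

def poll_device(address):
    # In production this would issue an SNMP GET or a serial command;
    # here it fabricates a reading so the sketch runs anywhere.
    return {"device": address, "watts": 150}

def collect(addresses, workers=32):
    """Poll every device concurrently and return one reading per device."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(poll_device, addresses))

readings = collect([f"10.0.0.{i}" for i in range(1, 101)])
print(len(readings))  # 100
```

At real scale the worker pool would be replaced by distributed collectors feeding a central bus, but the shape of the problem — many concurrent polls funneled into one stream of readings — is the same.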
Real-Time monitoring with OpenData is the technology that will support the replacement of these theoretical ‘faceplate’ values with actual observed values… allowing a significantly more accurate view for planning purposes. Modius' OpenData® is built on a fully distributed bus architecture, is firewall friendly, and can be deployed easily to provide the Real-Time monitoring that any asset management tool needs. OpenData SUPPORTS rather than replaces Asset Management suites, and has been crafted with APIs and Web Services interfaces that allow the metric data OpenData gathers to be CONSUMED by any number of other applications, including the current crop of Visual Asset Management solutions. The combination of a best-of-breed visual asset management tool with a highly granular metric monitoring solution like Modius OpenData allows business costs to be much better understood, and ultimately will allow existing data centers to provide significantly more capacity while increasing the lifespan of the data center itself.
Modius Data Center Blog
Visual Asset Management - How about some Real-Time metrics with that?
Posted by Mark Harris on Sun, Apr 04, 2010 @ 05:13 AM
Topics: data center monitoring, real-time metrics, Measurements-Metrics, IT Asset Management
Over the past year, many customers have found themselves in the midst of very real ‘Sustainability’, ‘Eco-efficiency’ or ‘Green’ initiatives. The core requirement of these initiatives is to establish energy and efficiency baselines to ultimately determine how energy is being used and where optimizations can be made to improve performance. These are very visible, corporate governance-style initiatives which tend to appear in quarterly reports. Both the CIO and CFO are very serious about taking proactive steps to demonstrate and report where investments are being made to get this skyrocketing cost under control.
One of the areas being investigated deals with the various types of instrumentation available within the modern data center. More specifically, the CIO/CFO are looking for their IT management team to take advantage of available tools, setting up a well-defined means to monitor all available energy-related data points in real-time and building an ITIL-inspired run-book of “continuous optimization,” more commonly referred to as “operational intelligence.”
Modern data centers are complex systems with a tremendous quantity of physical infrastructure devices already in place: some components have monitoring capabilities built in, some offer monitoring features as an option, and others have no monitoring capabilities whatsoever. IT Managers are now realizing that more real-time monitoring helps them make better-informed decisions in support of these ‘Greening’ initiatives. Granular, concise, real-time information will allow trends to be seen, thresholds to be set, and plans to be made. Tactically, there are various approaches to device instrumentation, and most IT situations will actually require a combination of several instrumentation technologies working together to allow a complete picture of status, availability, capacity and efficiency.
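As a minimal illustration of "thresholds to be set," the snippet below flags any inlet-temperature reading above an assumed limit. The rack names, readings, and the 27 °C limit are all invented for the example (27 °C happens to match the ASHRAE-recommended upper bound for inlet air, but nothing here depends on that).

```python
# Toy threshold check over a batch of invented inlet-temperature readings.
INLET_LIMIT_C = 27.0  # assumed upper bound for this example

readings = [("rack-a1", 24.5), ("rack-a2", 28.1), ("rack-b1", 26.9)]

# Collect the names of racks whose inlet temperature exceeds the limit.
alerts = [name for name, temp in readings if temp > INLET_LIMIT_C]
print(alerts)  # ['rack-a2']
```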
It is critically important to note that the current less-than-optimal state of active energy monitoring through instrumentation within the modern data center is a direct result of the historical complexity of doing so. There has been a complete lack of comprehensive, distributed, enterprise-class solutions to gather and analyze the litany of raw data sources and make informed energy management decisions across them.
Ultimately, the technology to provide continuous monitoring of these vast numbers of discrete data points is now available to be deployed and consumed at will. In support of these corporate initiatives, looking forward is critical because the game has changed, dramatically. The stakes are higher. The players have stepped up.
Topics: Data-Center-Best-Practices, data center monitoring, Energy-Efficiency-and-Sustainability, data center reporting, device interfaces
Data Center Infrastructure: Monitoring via LAN or Serial Interfaces
Posted by Mark Harris on Wed, Feb 24, 2010 @ 07:00 PM
When the topic of data center infrastructure comes up, there exists some confusion regarding how the two technologies, Serial and LAN, relate. Let me start by saying that nearly every piece of equipment built in the last 20 years includes at least one form of core controller interface. In fact, the engineering teams that build this type of equipment will tell you that one of the very first portions of a control system developed is the console/monitor access interface because it is this interface that typically is used to help continue to develop and debug the controller itself (as well as check it along the way for proper operation). Hence, every server, switch, router, and firewall, as well as every PDU, UPS, CRAC and Generator, has one – some form of interface exists in all Enterprise-class devices!
That said, the technology for these interfaces has changed over time. RS232 (and the very similar RS485) were all the rage for connectivity (due to simplicity and low costs) in the 70s and 80s. With the advent of true ‘networking’, Ethernet became popular (ironically for the very same reasons) in the early 90s and continues to be widely deployed as the interface standard to this day. However, the mechanisms to interact with these two types of interface are vastly different.
With Serial interfaces, the most common protocol is an ASCII-based ‘Text’ command-line protocol. Commands are built using strings of characters, and the results are returned as strings of characters. For instance, a user could build a text command (of 18 characters) such as “SHOW SYSTEM UPTIME”, which may return the 9-character reply “1D 23H10M” to show 1 day, 23 hours and 10 minutes. The key point with regard to a command-line protocol is that it is specific to each and every vendor, and in many cases to each model number within that vendor’s catalog. This ultimately requires very model-specific device awareness in order to communicate with this type of serial interface. Ultimately, the information being retrieved from these interfaces is going to be consumed by network-attached servers and monitoring applications. Consequently, two steps are needed to deal with serial interfaces: 1) a physical conversion to get the information into a format suitable for the network to transport; and 2) a logical translation of ASCII commands and responses to networked packet values in tables. This is done using a device sometimes referred to as a “gateway.” While these two conversions could be separated, they are typically included in a single vendor-supplied gateway device with an RS232 or RS485 port on one side, a small conversion processor inside, and a LAN port on the other side.
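To make the logical-translation step concrete, here is a hedged sketch of what a gateway has to do with such a reply: parse the hypothetical 'xD yHzM' uptime string from the example above into a numeric value a monitoring application could store. The format matches the example in the text, not any real device.

```python
import re

def parse_uptime(reply):
    """Parse a vendor-specific 'xD yHzM' uptime reply into total minutes.

    The format matches the hypothetical reply in the text ("1D 23H10M"),
    not any real device's output.
    """
    m = re.fullmatch(r"(\d+)D (\d+)H(\d+)M", reply)
    if m is None:
        raise ValueError(f"unrecognized uptime reply: {reply!r}")
    days, hours, minutes = map(int, m.groups())
    return (days * 24 + hours) * 60 + minutes

print(parse_uptime("1D 23H10M"))  # 2830
```

A real gateway carries dozens of such vendor-specific parsers, which is exactly why model-specific device awareness is unavoidable on the serial side.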
With LAN-based (Ethernet) interfaces it is much easier. Many standardized protocols exist to communicate natively with the device over the network, the most common being SNMP with its MIBs, which describe how the informational packets are organized. SNMP (and the MIB) allows a network inquiry to be made against a table of operational values within the target device, with the results formatted as expected values within the returned data packet. While there are some peculiarities in the details, network-based protocols are in general much more standardized and widely accepted as the modern means.
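The table-of-values idea can be illustrated without touching a network. In the toy agent below, values are keyed by OID exactly as a MIB organizes them; the sysUpTime and sysName OIDs are the standard MIB-II ones, but the stored values are invented.

```python
# Toy model of an SNMP agent's value table; no network access involved.
AGENT_MIB = {
    "1.3.6.1.2.1.1.3.0": 169810,    # sysUpTime.0, in hundredths of a second
    "1.3.6.1.2.1.1.5.0": "pdu-42",  # sysName.0, an invented device name
}

def snmp_get(oid):
    """Return the value bound to an OID, as an SNMP GET would."""
    return AGENT_MIB[oid]

print(snmp_get("1.3.6.1.2.1.1.5.0"))  # pdu-42
```

The crucial contrast with the serial case is that the OID layout is published in the MIB, so any manager that speaks SNMP can interpret the reply without vendor-specific parsing code.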
What does this mean to you? If a device has a network interface, then a high probability exists that you’d be able to easily access and understand the performance values without any conversions whatsoever. Any modern intelligent iPDU (or Power Strip) is a great example of a device like this. It has a LAN connection and can report (in a known format) the power at each outlet and the temperature of the unit itself in response to a simple SNMP inquiry. Devices like these have IP addresses and appear on the corporate network just like any other component. Conversely, if a particular device has ONLY a Serial interface, then look for a physical and logical gateway solution to do the conversion. These gateways are very specific (purpose-built) for each device model and are usually supplied by the application provider that intends to consume the performance information.
Topics: data center monitoring, BACnet, Protocols-Phystical-Layer-Interfaces, device interfaces, modbus