Modius Data Center Blog

Data Center Management must include continuous real-time monitoring.

Posted by Mark Harris on Fri, Jun 25, 2010 @ 09:40 AM

I spend a great deal of time talking about data center efficiency and the technologies available to help drive efficiency up. Additionally, a great deal of my time is spent discussing how to determine success in the process(es). What I find is that there is still a fundamental lack of appreciation for the need for 'continuous' real-time monitoring to measure success using industry norms such as PUE, DCIE, TCE and SWaP. I can't tell you how many times someone will tell me that their PUE is a given value, and then look at me oddly when I ask 'WHEN was that?'. It would be like me saying 'I remember that I was hungry sometime this year'. The first response would clearly be 'WHEN was that?'


Most best-practice guidelines and the organizations involved here (such as The Green Grid and ITIL) are very clear that the improvement process must be continuous, and therefore the monitoring in support of that goal must be as well. PUE, for instance, WILL vary from moment to moment based upon time of day and day of year; it is greatly affected by IT loads AND the weather, for example. PUE therefore needs to be a running figure, and ideally monitored regularly enough that the business IT folks can determine trending and the other impacts of new business applications, infrastructure investments, and operational changes as they affect the bottom line.
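To make this concrete, here is a minimal sketch (Python, with hypothetical meter readings and field names of my own choosing) of treating PUE as a time-stamped running series rather than a single spot value:

```python
# Minimal sketch: PUE as a running, time-stamped series rather than a
# one-off number. The readings below are illustrative placeholders.
from datetime import datetime
from statistics import mean

# (timestamp, total facility kW, IT load kW) -- hypothetical meter data
readings = [
    (datetime(2010, 6, 25, 9, 0),  950.0, 500.0),
    (datetime(2010, 6, 25, 13, 0), 990.0, 510.0),  # afternoon heat raises cooling load
    (datetime(2010, 6, 25, 21, 0), 900.0, 495.0),
]

def pue(facility_kw, it_kw):
    """PUE = total facility power / IT equipment power."""
    return facility_kw / it_kw

series = [(ts, pue(fac, it)) for ts, fac, it in readings]
for ts, value in series:
    print(f"{ts:%Y-%m-%d %H:%M}  PUE = {value:.2f}")

# The 'continuous' figure: an average over the collected window, which can
# then be trended month over month instead of quoted as a single number.
print(f"Average PUE over window: {mean(v for _, v in series):.2f}")
```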

Monitoring technologies should be deployed as permanent installations. In general, 'more is better' for data center monitoring. The more meters, values, sensors and instrumentation you can find and monitor, the more likely you are to have the raw information needed to analyze the data center's performance. Remember, PUE is just ONE KPI that has enough backing to be considered an indicator of success or progress. There will surely be many other KPIs determined internally which will require various sets of raw data points. More *IS* better!

We all get hungry every four hours; why would we monitor our precious data centers any less often?

Topics: Data Center PUE, data center management, real-time metrics

ASHRAE raises (and lowers) the bar for Data Center Cooling!

Posted by Mark Harris on Wed, Jun 23, 2010 @ 12:54 PM

It's finally here: ASHRAE Technical Committee 9.9 has released new recommendations for the ideal temperature and humidity ranges for data centers.

In a nutshell, the dry-bulb temperature recommendation now extends DOWN to 64.4 degrees F and UP to 80.6 degrees F, and the humidity range is also expanded at both ends.

Both of these are VERY realistic in today's real world. Extending the LOWER limit down to 64.4 degrees F eliminates a great deal of the need to mix HOT and COLD air previously required to maintain the old low limit of 68 degrees F. I could never really get a handle on why the 68-degree recommendation was imposed. It seems counter-intuitive that a data center manager who mainly has a heat issue would be required to add heat back into the precious cooling stream... with the lower value, the DC manager will have to do this mixing LESS often. Nice!

Perhaps more important for the majority of data center operators is the official sanction to extend the UPPER limit to 80.6 degrees F. Touche'!!!!  We all know that IT gear is spec'd well above these figures, and raising data center temperatures by even a single degree makes a significant impact on cooling costs. Immediately apparent is the ability to use economizer technologies for a much higher percentage of the hours each year.

The TC 9.9 guideline also shows some real thought for Moisture, with the UPPER and LOWER limits tuned to today's conditions and technologies.

The changes to the relative humidity guideline address the risks associated with electrostatic discharge (too low) and Conductive Anodic Filament growth (too high). CAF basically occurs in dense PC-board laminate dielectrics when tiny filaments of copper grow out due to moisture and sometimes cause semiconductor-like connectivity between adjacent traces and vias (plated holes).


(Here is some light reading on CAF:  http://www.parkelectro.com/parkelectro/images/CAF%20Article.pdf)
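As a side note, checking live sensor readings against the new envelope is straightforward to automate. Below is a minimal sketch (Python) using the dry-bulb figures quoted above; the thresholds are parameters, and the humidity bounds would be checked the same way once pulled from the actual TC 9.9 tables:

```python
# Minimal sketch: flag inlet temperatures that fall outside the expanded
# ASHRAE TC 9.9 recommended dry-bulb range quoted above (64.4-80.6 F).
RECOMMENDED_DRY_BULB_F = (64.4, 80.6)

def within_recommended(dry_bulb_f, bounds=RECOMMENDED_DRY_BULB_F):
    low, high = bounds
    return low <= dry_bulb_f <= high

# Hypothetical rack-inlet readings
for inlet_f in (62.0, 75.5, 82.1):
    status = "OK" if within_recommended(inlet_f) else "OUTSIDE recommended range"
    print(f"Inlet {inlet_f:.1f} F: {status}")
```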

So what does this all mean to you??? It means that the operation of a data center using 'best practices' as recommended by ASHRAE will be much more manageable and potentially much more economical. We no longer have to 'baby' the IT gear and treat it with kid gloves. Intel, Seagate, Infineon and a slew of other IT component makers have gone to great lengths to design their individual component-level devices to work hard in a wide range of environments, and we have barely even approached those limits by any analysis. We have played it very safe for a very long time...

We can now feel empowered to stretch a bit: push a little faster, a little deeper, and with a bit less rigidity in how we control the environment. A little common sense goes a long way...

Topics: Energy Efficiency, Cooling-Airflow

Data Center Monitoring - MUST be Enterprise in Scale!

Posted by Mark Harris on Tue, Jun 22, 2010 @ 03:15 PM

Over the course of meeting with perhaps 100 customers over the last 6 months, it has become painfully clear to me that there is widespread and growing confusion about Real-Time Data Center Monitoring.

I would suggest that Real-Time monitoring that answers MOST customers' needs MUST have a number of specific capabilities which the vast majority of offerings available today do NOT:

1. Scale. Most shipping Data Center Management and Monitoring solutions fail to realize that SCALE is a big deal. Monitoring 100 devices on a trade show floor demo is entirely different than deploying true monitoring across 20 sites, each with thousands of devices. You simply can't use the same ARCHITECTURE, and all the marketing fluff in the world won't solve this fundamental structural issue. The ONLY way to scale this is with a DISTRIBUTED architecture.

2. Device Coverage. These same vendors will tell you that they speak SNMP and that everything you need to monitor speaks SNMP. Nonsense! Firstly, there are many protocols, including Modbus, SNMP, BACnet, WMI, Serial, etc. Secondly, just supporting the protocol doesn't get you much closer to the device knowledge. Each device has to be specifically understood to read the required values (see the device-template sketch after this list). In most vendors' proposals, this shows up as "Professional Services", which means 'We'll figure it out on the job, on your dime'.

3. Real-Time Monitoring MUST store observed metrics and KPIs over long periods of time. I would suggest that while there are many reasons why most customers want real-time monitoring, the vast majority of those reasons are TIME-BASED. The monitored values or metrics need to be collected, time-stamped, stored, and openly available to run analysis upon. While customers may want to know that the data center is consuming 350kW this instant, what they REALLY want to know is that the data center WAS consuming 275kW 3 months ago, 310kW last month, and 350kW today, and then to PROJECT the future date when they will hit the wall of their 500kW feed from the power utility (sketched below).
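On the device-coverage point, here is a minimal sketch (Python) of why protocol support alone is not enough: each device model still needs a template mapping its registers or OIDs to named, scaled metrics. The register numbers, OIDs and scale factors below are hypothetical placeholders, not real device maps:

```python
# Minimal sketch: device templates on top of protocol support. Knowing how
# to "speak Modbus" still leaves you needing a per-device map like this.
DEVICE_TEMPLATES = {
    "example-pdu-modbus": {
        "protocol": "modbus",
        "points": {
            "input_kw":    {"register": 100, "scale": 0.1},
            "input_volts": {"register": 102, "scale": 1.0},
        },
    },
    "example-crac-snmp": {
        "protocol": "snmp",
        "points": {
            "supply_temp_f": {"oid": "1.3.6.1.4.1.99999.1.1", "scale": 0.1},
        },
    },
}

def to_engineering_units(raw_value, point_def):
    """Apply the device-specific scaling so a raw register/OID value
    becomes a usable engineering unit."""
    return raw_value * point_def["scale"]

# e.g. raw Modbus register 1234 with a 0.1 scale -> 123.4 kW
pdu = DEVICE_TEMPLATES["example-pdu-modbus"]
print(to_engineering_units(1234, pdu["points"]["input_kw"]))
```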
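And on the time-based point, here is a minimal sketch (Python) of the kind of projection described above: fit a simple linear trend to stored, time-stamped kW readings and estimate when the 500kW feed would be reached. The sample numbers mirror the example in the post; a real deployment would use far more data points and a more careful model:

```python
# Minimal sketch: project when a growing load hits the utility-feed limit,
# using a least-squares line through historical, time-stamped readings.
from datetime import date, timedelta

history = [                # (date, measured kW) -- illustrative values
    (date(2010, 3, 22), 275.0),
    (date(2010, 5, 22), 310.0),
    (date(2010, 6, 22), 350.0),
]
CAPACITY_KW = 500.0

xs = [(d - history[0][0]).days for d, _ in history]
ys = [kw for _, kw in history]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)          # kW per day

days_to_wall = (CAPACITY_KW - ys[-1]) / slope
wall_date = history[-1][0] + timedelta(days=round(days_to_wall))
print(f"Growing ~{slope:.2f} kW/day; ~{days_to_wall:.0f} days to the "
      f"{CAPACITY_KW:.0f} kW wall (around {wall_date})")
```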

The road ahead will continue to be littered with failed deployments of real-time management solutions which do NOT realize the dream of Data Center Monitoring. Customers should challenge their vendors to answer ALL of the tough questions. Consider the old-school 'Get it in Writing' approach, and then be very specific about your expectations, needs, and acceptance criteria...

Let's ALL win this GREEN game!

Topics: data center monitoring, Data-Collection-and-Analysis, Sensors-Meters-and-Monitoring, data center analysis, IT Asset Management

Going GREEN does NOT mean Going CHEAP!

Posted by Mark Harris on Mon, Jun 14, 2010 @ 11:32 AM

The IT industry has focused a tremendous amount of attention on the concept of 'GREEN' over the past 5 years. Many of the players, IT vendors and consumers of IT gear alike, have created GREEN Officers or Sustainability Czars, and even whole organizations that focus on 'greening' a company or a product strategy. Green is timely and exciting, and viewed as the good-corporate-citizen thing to do. However, the realities of COSTS are now beginning to materialize.

While it is very exciting to stand up in front of your shareholders and articulate all of the GREEN initiatives in progress, a number of 'green' projects conceived a few years ago have recently been put 'on hold' pending funding. The reality of GREEN is that it COSTS money! It may cost money in the short term, or over the long term. Green is not cheap. In some cases an ROI can be calculated to show savings over longer periods of time; in some cases new technologies must be invented to make any difference in costs.

Consider the grocery store analogy. An organic pear may cost 40% more than a 'generic' pear. Yes, everybody knows that organic is healthier, but how many people are willing to spend 20%-40% MORE for the organic versions of their groceries? Oh sure, at first you tried a few, but the likelihood is that many of us switched back and continue to buy non-organic groceries due to cost.

Another gem... I priced a 5kW solar system for my house a year or two ago, and with a total cost of over $50K, I calculated the break-even point (after rebates!) to be 9 years! Hummm, so I would have to write a check for $50K, and then over the next 9 years would get my $50K back, and THEN I might start saving money...

In the world of IT, we have the same thing happening today. Many of the biggest companies jumped into 'GREEN' early because they thought it was a good corporate-citizen move and believed it would somehow save them money; they are now finding that 'going green' COSTS money. REAL money! It may be an upfront cost with a 3-5 year payback, or it could be a permanent ongoing cost. The fact is that TODAY, a kilowatt-hour generated by Wind or Solar costs 5-10 TIMES as much as one generated from fossil fuel. (See the URL: http://greenecon.net/understanding-the-cost-of-solar-energy/energy_economics.html).

Our best bet today is to use advanced monitoring to determine WHERE energy is being used, and exactly how much is used by each application. This will set the stage for future investments in green technologies to be deployed. And remember, "Going Green" does NOT mean your energy efficiency is going to be better. You could be running your entire data center on renewable power, but do so with a horrible PUE due to process and architecture problems. Wasting a watt is wasting a watt, regardless of where the watt came from.

We have the opportunity to push each other towards data center innovation, the creation of new power and cooling technologies, and regulatory reforms to spur investment even further, and above all, to demand accountability across the board.

Topics: Data-Center-Best-Practices, data center monitoring, PUE, Energy-Efficiency-and-Sustainability, data center energy efficiency, Energy Analysis

Modius Teams with GroundWork for Unified Data Center Monitoring

Posted by Donald Klein on Fri, Jun 11, 2010 @ 03:24 PM

One project that we have been working on at Modius is teaming with our friends at GroundWork Open Source (GWOS) on unifying their comprehensive IT monitoring with Modius facilities infrastructure monitoring.

Here is our recent webcast on the integration between our two products.  GWOS hosted this webinar from their offices, and many of the people in the audience were IT Operations professionals. 

To watch the webinar, please go here:

Unified Infrastructure Monitoring with Modius & GroundWork


Topics: data center monitoring, Data-Collection-and-Analysis, Sensors-Meters-and-Monitoring, data center operations, data center infrastructure, IT Asset Management

How to Win the Shell game. Don't Play It!

Posted by Mark Harris on Wed, Jun 09, 2010 @ 02:19 PM

So there I was, sitting in New York City a couple of weeks ago at The 451 Group's Uptime Institute Symposium, and I spent a little time listening to Dean Nelson, the Sr. Director of eBay's Data Center Services. He spoke about what eBay is doing with their new Salt Lake City data center and how it was paid for with their active cost-savings initiatives. It sounds like the kind of data center we all dream about, and a management structure that understands a long-term winning strategy...

One of the most intriguing comments he made was about who pays the bill for power. Apparently, as soon as eBay moved the cost of power to the budget managed by the CIO, decisions were made in a much different manner. In fact, after the power bill was added to the CIO's bottom line, he immediately ramped up efforts to reduce power consumption. Surprising? Not really.

So the question bounced back to the top of my brain stack: Why don't we all just bite the bullet and add the power bill to the CIO's budget? Wouldn't that create the same catalyst for change that eBay saw? Wouldn't that shift efforts to reduce carbon, reduce cost, and become a Green corporate citizen into 5th gear everywhere? IT WOULD!!!!  Oh sure, there are some logistical, measurement, and data center monitoring issues, and some economic G/L mechanics involved to implement the process, but for heaven's sake, we should encourage the proper behaviour and stop hiding the problem by burying the budget as a 'burdened' cost...


Frankly, it is very much like the Shell Game. Keep hiding the money so that no one knows where the money issue really belongs. Sure, the CEO and CFO 'own' the power bills, but wouldn't it make sense to push the responsibility down a bit? To the teams that can actually DO SOMETHING CONSTRUCTIVE to lower these costs? Very few CIOs today pay (or are even aware of the detail of) the power bills for their data centers. My suggestion: follow eBay's lead, shift the G/L line items to the CIO, and watch the rapid progress that will ensue...  (and when this higher level of interest takes hold, Modius will be there to help establish metric and measurement baselines by which to steer these cost improvements in very tangible ways!)

Topics: Energy Efficiency, Data Center Metrics, Data Center PUE, PUE

American Clean Energy and Security Act of 2009 - Waxman-Markey

Posted by Mark Harris on Tue, Jun 08, 2010 @ 02:52 PM


With all of the efforts to get energy under control, it is not surprising that there are a number of new energy bills making their way through Congress. One of the most 'spectacular' in-process bills, with wide-ranging energy implications, is the Waxman-Markey Bill, or "HR2454". Officially it is called the "American Clean Energy and Security Act of 2009", and it has three basic parts. Its summary, provided by the Congressional Research Service, is as follows:

"American Clean Energy and Security Act of 2009 - Sets forth provisions concerning clean energy, energy efficiency, reducing global warming pollution, transitioning to a clean energy economy, and providing for agriculture and forestry related offsets. Includes provisions: (1) creating a combined energy efficiency and renewable electricity standard and requiring retail electricity suppliers to meet 20% of their demand through renewable electricity and electricity savings by 2020; (2) setting a goal of, and requiring a strategic plan for, improving overall U.S. energy productivity by at least 2.5% per year by 2012 and maintaining that improvement rate through 2030; and (3) establishing a cap-and-trade system for greenhouse gas (GHG) emissions and setting goals for reducing such emissions from covered sources by 83% of 2005 levels by 2050."

So what does this mean for us? Well, the first point is a good one: energy suppliers will have to create at least 20% of their power from renewable sources over the next 10 years, like Solar and Wind power. Sounds good, huh? Green as it gets. The only drawback for you and me is cost. Green is expensive. Using today's technologies, green power will increase the price per kilowatt-hour for residential, commercial and industrial users. Greening is good for the environment, but it will increase the rate at which power bills go UP. Nothing in life is FREE.

The second point is where we can all get more actively involved. Personally. For the next 20 years, we are all expected to help the nation become 2.5% (year over year) more efficient in our use of power at home and at work. Every year, 2.5% more efficient, compounded. To do so, we'll all be buying CFLs and LED lights, using more microwave ovens, and during the summer at work we'll all enjoy the same 76-degree office temperatures that our data centers will be driven to. Energy Efficiency is the name of the game! The car makers will also step up and happily sell us hybrid and BEV vehicles to help do their part. (Have you seen the new CODA Automotive BEV cars? Cool.)
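For a sense of what "2.5% per year, compounded" actually adds up to, here is a minimal sketch (Python); it simply compounds the yearly improvement, under the simplifying assumption that the workload stays constant:

```python
# Minimal sketch: compounding a 2.5% yearly efficiency improvement.
rate = 0.025
for years in (5, 10, 20):
    improvement = 1 - (1 - rate) ** years
    print(f"After {years:2d} years: ~{improvement:.0%} less energy for the same work")
# Roughly 12% after 5 years, 22% after 10, and about 40% after 20.
```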

Lastly, the bill states that we (the country) need to reduce total carbon emissions by more than 80% from the level that we saw in 2005... but we have 40 years to do so. Hummm... Imagine the amount of change required over these 40 years to reduce carbon emissions by 80%, but then again, consider what life was like 40 years ago in 1970: the commercial cell phone didn't yet exist, nor did iPods and x86 computers; we hadn't seen Disco or Star Wars yet; color TVs were just catching on; and all the cars had V-8 engines!

Actually, I am a huge proponent of this and similar bills. It significantly raises the awareness that we ALL need to do something, NOW! Every light left on, every old server left spinning, every little piece counts. If we can all just get into the normal mode of saving energy because it's the right thing to do, then we will all relish the long-term rewards from doing so...

Topics: Data-Center-Best-Practices, Energy Efficiency, PUE, Energy-Efficiency-and-Sustainability, data center energy efficiency, Energy Analysis

Energy Star for Data Centers - It's a GOOD thing!

Posted by Mark Harris on Mon, Jun 07, 2010 @ 01:56 PM

OK, we have heard about the 'Greening' world around us, the price of power, the costs of cooling, the need for energy efficiency and ultimately The Green Grid's "PUE" KPI for a few years now. What originally sounded like a great way to definitively calculate the energy efficiency of getting IT work done still seems like a great way to do so, but it also seems like just the START of the journey...

Remembering that a lot of work went into the creation of PUE, it is considered by many to be a great place to start TODAY towards the goal of optimizing energy usage. Remember, you can't optimize what you don't understand. That said, PUE may not be viewed down the road as the single best metric, but for now, it is MUCH better than what we had just a few years ago: nothing. PUE is a metric that is well understood and can be determined by ANY END-USER that chooses to calculate it. It can be calculated in real-time using a fairly small investment in time and resources.


Today the EPA took the next step, allowing end-users to compare their energy conservation and efficiency efforts to those of their peers. Basically, any company that wishes to can audit their PUE, document their findings, hire a PROFESSIONAL (recognized audit partner) to verify their claims, and then submit to the EPA. Those data centers that rank in the top 25% of their peer group will be considered as having an 'Energy Star' compliant data center (and the bragging rights that go with the star).

So what does this mean to the industry? Well, I think we'll hear a lot of companies applaud the move by the EPA for Energy Star data center recognition. Many companies have worked hard to eliminate energy inefficiencies and love telling the world about their successes. The new Energy Star rating will allow this message to be even louder, since it will provide some apples-to-apples comparison. It supports the ROI measurements for these efforts. Peers will get a sense of what is POSSIBLE from people running similar environments. Some CIOs and CFOs will stand up and say, "Why is my closest competitor X% more energy efficient making the same type of widget?"

We will also see a bunch of complaining about the use of 'PUE' as the main KPI used in the determination for Energy Star. The more vocal opponents will argue that PUE as a KPI is flawed from the start, or meaningless, and can be manipulated or contrived by the unscrupulous. In turn, we'll see a resurgence of pushes for "DCeP" (or one of the 10 proposed proxies) as a better KPI from these nay-sayers. I say it's good to see more energy around KPIs like DCeP, but we need some forcing function, NOW! Remember, the goal is to get companies to ACT NOW... mid-course corrections welcome!

I think PUE was a great first step. I think Energy Star for Servers and then Energy Star for Data Centers are great SECOND steps, but why would we be naive enough to think all of this would stop there?

Energy Star for Data Centers is circa 2010. Perhaps the folks at the EPA will have an Energy-Star-PLUS recognition in 2012 (they could call it "Energy Star for Data Centers 2012" or similar nomenclature) based upon an agreed-upon proxy for DCeP. Or perhaps they would use a different metric/KPI? Not sure. But what I am sure of is that we need to force ENERGY EFFICIENCY PROGRESS NOW. For companies to stand up, articulate their best practices and be tested and challenged by their constituents. We all need to LISTEN and LEARN from each other.

Status quo will no longer work. As an industry we need to push the design and re-architecture of existing space to be highly efficient. There was too much waste in the past, and nobody really understood it. We need to do the hard work: build containment aisles, modify air flow based on inlet temperature or overall pressure, install sensors and monitoring, install spot cooling, refresh older hardware servers, etc.

The energy efficiency work has just started, and it's a very long road ahead. Let's stay on track and work towards a common goal: doing more with less, making every kilowatt count, reducing the cost of doing business. Remember, we are all on the same planet, using the same resources.

The EPA's "Energy Star for Data Centers" 2010 is a GOOD thing...

Topics: Data-Center-Best-Practices, Energy Efficiency, PUE, data center energy monitoring, Energy-Efficiency-and-Sustainability, data center energy efficiency

How much better can it get? Data Center Energy Efficiency

Posted by Mark Harris on Fri, Jun 04, 2010 @ 11:34 AM

I was flipping through the 2007 report to Congress issued by Jonathan Koomey ("Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431"), and on page 10 I came across a very easy-to-read but impactful diagram which provides some great insight into the future of the IT industry, and can be discussed in terms of end-users as well.

I suspect that this chart could be applied more or less to ANY individual company in their quest for energy efficiency. If there is some level of 'greening' at play in a corporation, then this chart can be a crystal ball into your 5 possible futures.

You can see from the diagram varying impacts on energy consumption, going (starting at the top) from taking NO NEW ACTION all the way through DOING EVERYTHING POSSIBLE. I would suggest that today most companies are somewhere approaching the "Improved Operations Scenario". If you look at the chart, you'll see this green curve essentially takes the overhead out of operations, but does very little to have any significant long-term effect on the SLOPE of the curve.

In the chart, the "State of the Art Scenario" is a good depiction of what is POSSIBLE (expected) if all business processes are tuned and all equipment is refreshed with the latest. This would create a real-time infrastructure ("RTI" as defined by Gartner) that tunes itself based upon demand. Most importantly... it would also lower the most basic cost per transaction. A CPU cycle would actually cost less!

These are very exciting times ahead...

Topics: Data-Center-Best-Practices, Energy Efficiency, data center monitoring, data center analysis, data center energy monitoring, Energy-Efficiency-and-Sustainability, data center energy efficiency

Data Center Cooling has many components...

Posted by Mark Harris on Thu, Jun 03, 2010 @ 03:37 PM

I just read about an innovative new way to address the cooling requirements within the data center that is worthy of mention here. Not surprisingly, the data center energy management challenge has many parts to it, and as we are all seeing, MANY different new solutions will be required and combined over time to fully embrace the REALM OF WHAT'S POSSIBLE. Oh sure, everyone will have their favorite 'energy saver' technology. We saw this happen with virtualization, and we saw it happen with Variable Frequency Drive controllers for data center fans.

Well, what if we take a look WITHIN the servers themselves and consider the opportunities there? Does the WHOLE server generate heat? NO. Key parts do, like the CPU, chipset, VGA chip, and memory and its controllers. So why do we have to BLOW SO MUCH air across the entire motherboard, using bigger, expensive-to-operate fans? Wouldn't it be better to SPOT COOL just where the heat is? Reminder: the goal is simply to move the heat away from the chips that generate it. We don't need to move large volumes of air just for the thrill of air handling...

I have seen two competing advances in this space. One maturing approach has been adopted in 'trials' by some of the biggest server vendors. They offer versions of some of their commercial server product lines equipped with liquid-based micro heat exchangers. This means these special servers have PLUMBING/cooling pipes built into the server chassis themselves, and the circulating fluid moves the heat away from the server's heat-generating chips. Take a look right next to the LAN port and power plug in the back, and you'll see an inlet/outlet fitting for liquid! Basically, fluid-based heat removal. Humm, it harkens back to the days when big IBM mainframes used water cooling while everyone else went to air. (As a note, liquid cooling is making a resurgence and becoming popular once again...)

So now I see a new approach... 'solid state' air jets. Air jets? Yes, really small air movers that are essentially silent, have no moving parts, and consume tiny bits of power. It turns out at least one vendor has created really small 'jets' which have proven that you can move LOTS of air without any moving parts, magically creating large amounts of air movement in really small spaces. Using this technology, you can target just the chips that need cooling with relative 'hurricanes', and then simply use small standard fans to carry this (now easily accessible) hot air out of the box.

What savings do the spot jets achieve? In their published test, they reduced the standard high-power fan speed from 9000 rpm to 6500 rpm, going from 108 watts originally to only 62 watts. Add back in an estimated 10% energy cost for the air jets themselves, and the net savings for fans inside the box is about 30%. Remember, fans account for nearly 47% of a data center's entire cooling energy consumption, so reducing fan speeds inside AND outside the boxes is critical to long-term power savings.
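For the record, the arithmetic behind that "about 30%" works out as follows; this is a minimal sketch (Python) using only the wattages and the 10% jet-overhead estimate quoted above:

```python
# Minimal sketch: net fan savings once the air jets' own draw is charged back.
baseline_fan_w = 108.0                    # fans at 9000 rpm
reduced_fan_w = 62.0                      # fans at 6500 rpm, with spot jets assisting
jet_overhead_w = 0.10 * baseline_fan_w    # estimated draw of the jets themselves

gross_saving = (baseline_fan_w - reduced_fan_w) / baseline_fan_w
net_saving = (baseline_fan_w - reduced_fan_w - jet_overhead_w) / baseline_fan_w
print(f"Gross fan saving: {gross_saving:.0%}")              # ~43%
print(f"Net saving after jet overhead: {net_saving:.0%}")   # ~33%, i.e. "about 30%"
```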

Lastly, how do you know all your effort has paid off??? Monitor FAN speeds! I'll say it a million times: monitoring FAN speeds is very important. The slower they run, the less they consume. Monitor, Monitor, Monitor!!!

Topics: Energy Efficiency, data center monitoring, data center cooling, Cooling-Airflow

Data Center Monitoring: Out-of-Band versus In-Band.

Posted by Mark Harris on Wed, Jun 02, 2010 @ 12:02 PM

There was a time when x86 hardware systems, and the applications and operating systems chosen to be installed upon them, were considered good, but not 'bet your business' great. Reliability was less than ideal. Early deployments saw smaller numbers of servers, and each and every server counted. The applications themselves were not decomposed well enough to share the transaction processing, so the failure of any server impacted actual production. Candidly, I am not sure if it was the hardware or software that was mostly at fault, or a combination of both, but the concept of server system failures was a very real topic. High Availability or "HA" configurations were considered standard operating procedure for most applications.

The server vendors responded to this challenge by upping their game, designing much more robust server platforms and using higher quality components, connectors, designs, etc. The operating system vendors rose to the challenge by segmenting their offerings into industrial-strength 'server' distributions and 'certified platform' hardware compatibility programs. This made a huge difference, and TODAY modern servers rarely fail. They run, they run hard, and they are perceived to be rock solid if provisioned properly.

Why the history? Because in those early times for servers, their less-than-favorable reliability characteristics required some form of auxiliary, bare-metal 'out-of-band' access to correct operational failures at the hardware level. Technologies such as Intel's IPMI and HP's iLO became commonplace in discussions when looking to build data center solutions with remote remediation capabilities. This was provided by an additional small controller chip called a BMC that required no host operating system, nothing but standby power, to communicate sensor and status data with the outside world. The ability to reboot a server in the middle of the night over the internet from the sys admin's house was all the rage. Technologies like Serial Console and KVM were the starting point, followed by these out-of-band tools (iLO & IPMI).

Move the clock forward to today, and you'll see that KVM, IPMI & iLO are still interesting technologies, and critical for specific devices that remain vital to core business, but they are mostly applicable when a server is NOT running any operating system, or has halted and is no longer 'on the net'. At most other times, when the operating system itself IS running and the servers are on the network and accessible, server makers have supplied standard drivers to access all of the sensors and other hardware features of the motherboard, allowing in-band remote access with technologies such as SSH and RDP.


Today, it makes very little difference whether a monitoring system uses operating system calls or out-of-band access tools. The same sensor and status information is available through both sets of technologies, and the choice depends more on how the servers are physically deployed and connected. Remember, a huge percentage of out-of-band ports remain unconnected on the backs of production servers. Many customers consider the second OOB connection to be costly and redundant in all but the worst/extreme failure conditions (BUT it is critically important for certain types of equipment, such as any in-house DNS servers, or perhaps a SAN storage director).
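To illustrate how similar the two paths look in practice, here is a minimal sketch (Python) using the common ipmitool CLI; the BMC hostname and credentials are placeholders, and this is just one of several ways to reach the same sensors:

```python
# Minimal sketch: the same IPMI sensor data reached in-band (through the
# running OS) or out-of-band (straight to the BMC over the LAN).
import subprocess

def read_sensors_in_band():
    # Runs on the server itself, via the OS's local IPMI interface.
    return subprocess.run(["ipmitool", "sensor"],
                          capture_output=True, text=True).stdout

def read_sensors_out_of_band(bmc_host, user, password):
    # Talks directly to the BMC; works even if the host OS is down.
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
           "-U", user, "-P", password, "sensor"]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Either path returns the same temperature/fan/voltage table; which one you
# use depends mostly on how the servers are deployed and cabled.
```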

Topics: data center monitoring, data center temperature sensors, Protocols-Phystical-Layer-Interfaces
