Modius Data Center Blog

Data Center Monitoring in the Cloud

Posted by Jay Hartley, PhD on Tue, Jun 21, 2011 @ 11:24 AM

Modius OpenData has recently reached an intriguing milestone. Over half of our customers are currently running the OpenData® Enterprise Edition server software on virtual machines (VMs). Most new installations start out virtualized, and a number of existing customers have successfully migrated from dedicated hardware to a virtual server.

In many cases, some or all of the Collector modules are also virtualized “in the cloud,” at least when gathering data from networked equipment and network-connected power and building management systems. It’s of course challenging to implement a serial connection or tie into a relay from a virtual machine. It will be some time before all possible sensor inputs are network-enabled, so 100% virtual data collection is a ways off. Nonetheless, we consider greater than 50% head-end virtualization to be an important achievement.

This does not mean that all those virtual installations are running in the capital-C Cloud, on the capital-I Internet. Modius has hosted trial proof-of-concept systems for prospective customers on public virtual machines, and a small number of customers have chosen to host their servers “in the wild.” The vast majority of our installations, both hardware and virtual, run inside the corporate firewall.

Many enterprise IT departments are moving to a virtualized environment internally. In many cases, it has been made very difficult for a department to purchase new physical hardware. The internal “cloud” infrastructure allows for more efficient usage of resources such as memory, CPU cycles, and storage. Ultimately, this translates to more efficient use of electrical power and better capacity management. These same goals are a big part of OpenData’s fundamental purpose, so it only makes sense that the software would play well with a virtualized IT infrastructure.

There are two additional benefits of virtualization. One is availability. Whether hardware or virtual, OpenData Collectors can be configured to fail-over to a secondary server. The database can be installed separately as part of the enterprise SAN. If desired, the servers can be clustered through the usual high-availability (HA) configurations. All of these capabilities are only enhanced in a highly distributed virtual environment, where the VM infrastructure may be able to dynamically re-deploy software or activate cluster nodes in a number of possible physical locations, depending on the nature of the outage.

Even without an HA configuration, routine backups can be made of the entire virtual machine, not simply the data and configurations. In the event of an outage or corruption, the backed-up VM can be restored to production operation almost instantly.

The second advantage is scalability. Virtual machines can be incrementally upgraded in CPU, memory, and storage capabilities. With a hardware installation, incremental expansion is a time-consuming, risky, and therefore costly process. It is usually more cost-effective to simply purchase hardware that is already scaled to support the largest planned installation. In the meantime, you have inefficient unused capacity taking up space and power, possibly for years. On a virtual machine, the environment can be “right-sized” for the system in its initial scope.

Overall, the advantages of virtualization apply to OpenData as to any other enterprise software: lower up-front costs, lower long-term TCO, increased reliability, and reduced environmental impact. All terms that we at Modius, and our customers, love to hear.

Topics: Energy Efficiency, DCIM, monitoring, optimization, Energy Management, Energy Analysis, instrumentation

Data Center Economics 101: Cool when it's cheap!

Posted by Mark Harris on Wed, Jul 07, 2010 @ 11:47 AM

OK, I have been doing a bunch of reading about the highly innovative approaches to energy management being tested in places like Arizona. Phoenix, as you can imagine, sees temperature extremes like few other places in the country. (I remember stepping off a plane at Sky Harbor Airport in June 1979 and seeing an air temperature of 118 degrees.) HEAT is a major topic in Phoenix. And as they say, "It's a dry heat." That said, it is a great place for people and academia and technology. Lots of land, lots of great people, lots of sunshine.


So it was no wonder that a 'master plan' was created when revitalizing the local economy over the past decade. New headquarters locations, businesses, downtown campuses, and sprawling data centers have all sprung up and are considered some of the best in the nation. (Look at I/O Data Center's North 48th Street facility as an example of a BIG new data center, with HALF A MILLION square feet coming online.)

For the downtown area, an innovative approach to cooling was taken around 2002, at a scale I had not seen before. The local public utility (APS) created a commercial partnership called Northwind to provide cooling to the new sports stadium being built. Traditional approaches to cooling an open-air pro sports stadium of this size in 120-degree heat proved to be an interesting challenge, so innovative new methods were solicited. The challenge: provide a comfortable environment for tens of thousands of sports fans during ball games played on hot July and August afternoons, and somehow exploit the fact that plenty of extra POWER capacity was widely available in the middle of the night, when kilowatt rates were low, to meet massive cooling needs the next DAY (about 12 hours later). The opportunity: figure out how to spend money on energy 12 hours before it was needed, and how to STORE energy for cooling at this scale effectively. They needed a massive 'energy battery.'
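The load-shifting economics are easy to sketch. The numbers below are hypothetical round figures for illustration, not actual APS rates or the plant's real load:

```python
# Back-of-envelope sketch of the off-peak arbitrage described above.
# All rates and the daily cooling load are hypothetical round numbers.

off_peak_rate = 0.04   # $/kWh overnight (hypothetical)
on_peak_rate = 0.12    # $/kWh on a summer afternoon (hypothetical)

chiller_energy_kwh = 100_000  # electricity to produce one day's cooling (hypothetical)

cost_on_peak = chiller_energy_kwh * on_peak_rate
cost_off_peak = chiller_energy_kwh * off_peak_rate
savings = cost_on_peak - cost_off_peak

print(f"Chilling on peak:  ${cost_on_peak:,.0f}")
print(f"Chilling off peak: ${cost_off_peak:,.0f}")
print(f"Daily savings from shifting the load: ${savings:,.0f}")
```

With a 3:1 ratio between peak and off-peak rates, two-thirds of the daily chiller bill disappears simply by running the plant at night, which is the entire business case for the 'energy battery.'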


So, how did they solve this energy management task? ICE. It occurred to their engineers that ICE is a great medium for storing energy: it can be created at any time and used later as demand requires. Ultimately they built a highly efficient ICE plant that each night, when power rates were lowest, manufactured 3 MILLION POUNDS of solid ICE. As each subsequent day progressed and business demands required cooling, the ICE absorbed heat from water, and the newly chilled water was distributed as needed. The water was then recaptured and used to make ICE again the next night, in a closed-loop system.
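How much cooling does 3 million pounds of ice actually hold? A rough physics check using the latent heat of fusion of water gives a sense of the scale; the plant's actual ratings are not from the post:

```python
# Rough cooling capacity of 3 million pounds of ice, via latent heat of fusion.

LB_TO_KG = 0.4536            # kilograms per pound
LATENT_HEAT_KJ_KG = 334.0    # energy absorbed melting ice at 0 °C, kJ/kg
KJ_PER_TON_HOUR = 12_660.0   # 1 ton of refrigeration ~= 12,000 BTU/h ~= 12,660 kJ/h

ice_kg = 3_000_000 * LB_TO_KG
energy_kj = ice_kg * LATENT_HEAT_KJ_KG        # heat absorbed as the ice melts
ton_hours = energy_kj / KJ_PER_TON_HOUR       # cooling delivered, in ton-hours

print(f"Stored cooling: {ton_hours:,.0f} ton-hours")
print(f"Average over a 12-hour day: {ton_hours / 12:,.0f} tons of cooling")
```

That works out to roughly 36,000 ton-hours per night's production, or about 3,000 tons of continuous cooling across a 12-hour afternoon, which is comfortably in district-cooling-plant territory.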

This approach worked SO well that within the first few years, Northwind had sold the downtown plant's entire capacity to other commercial properties in the area. It turns out chilled water is quite easy to transport at relatively low cost in an area like downtown Phoenix. Economies of scale play out nicely here.

Who would have thought? Make lots of ICE at midnight when power is cheap, then use it for cooling as needed the next day, and do so on a REALLY GRAND scale, to the tune of MILLIONS of pounds each day. A number of projects just like this are now in operation, including several in New York City and Toronto.

Energy management innovation is key. Look past the traditional. Here, investments were made in well-thought-out new technologies that captured savings and returned value to customers. Everybody wins!


Topics: Energy Efficiency, data center monitoring, Cooling-Airflow, Energy Management
