You may believe, in the current environment of cloud migration, that the onsite data center has passed into oblivion, or at least been severely diminished, and that managing power and cooling is therefore also a thing of the past. You may believe this, but you’d be wrong.

Cloud Computing
As more and more businesses migrate to third-party or in-house clouds, you would expect the power and cooling demands in the in-house data center to diminish, but this is not the case. The demands have simply shifted: where the attention was once on the servers and storage racks, it has now moved to the wiring closets.

For businesses that have moved their applications and data to a third-party cloud, the network has suddenly become the lifeline; without connectivity to the cloud, the business shuts down until that connection is restored. So the wiring closet has moved into the catbird seat: once an afterthought, it is now the number-one concern. Keeping the network up still requires managing power. If the building loses power, the network, with its routers and switches, must stay up and running. Cooling still comes into play as well; overheated equipment still fails, whether it’s the largest server in the building or the switch out to the internet.

In-House Clouds and Virtualization
Many large businesses have moved to cloud computing, as it facilitates access to their data from the mobile world. In-house clouds also mean servers, racked storage devices, a wiring closet and lots of heat. Beyond managing power, keeping the equipment cool is a primary concern.
Some businesses choose to maximize their machines’ processing power. They do this through virtual machines, which can give applications their own operating system and database without the expense of building and maintaining a separate box. Virtual machines are easy to create, and they solve a variety of needs that used to require more hardware, software, cabling, networking and cooling.

While getting the most out of an existing machine makes economic sense, it does cause the machine to put out significantly more heat. This means managing your hardware so that hot machines run together while cooler machines are kept separate. Separating your hardware in this manner requires planning; you want to investigate your options before you jump in and lock yourself into a strategy that will cost you more money down the road.
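To make the point concrete, here is a minimal sketch of that kind of planning in Python, assuming you have rough per-rack heat-load estimates in watts. The rack names, loads and threshold below are illustrative placeholders, not figures from any real facility.

    # Illustrative only: group racks into hot and cool zones based on
    # rough heat-load estimates in watts. The names, loads and threshold
    # are made-up placeholders, not measurements from a real site.
    heat_load_watts = {
        "rack-a1": 6500,   # virtualization hosts
        "rack-a2": 5800,
        "rack-b1": 1200,   # network gear / wiring closet
        "rack-b2": 900,
    }

    HOT_ZONE_THRESHOLD = 4000  # watts; set this from your actual cooling capacity

    hot_zone = sorted(r for r, w in heat_load_watts.items() if w >= HOT_ZONE_THRESHOLD)
    cool_zone = sorted(r for r, w in heat_load_watts.items() if w < HOT_ZONE_THRESHOLD)

    print("Hot zone (group together, concentrate cooling):", hot_zone)
    print("Cool zone (can share ambient cooling):", cool_zone)

Even a back-of-the-envelope grouping like this helps you see where concentrated cooling will be needed before any equipment is bolted into place.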

Data Center Infrastructure Management
Far from alleviating power and cooling concerns, the current IT trends are making power and cooling more critical than ever. If your business’s critical applications and data are in the cloud, you had better be able to reach them, 24/7, 365 days a year. This means battery backups, uninterruptible power supplies and power distribution units. It also means being able to manage your power needs from wherever you are. The same goes for cooling your equipment: if your network is critical, the wiring closet had better not overheat, and neither should the bridges, routers and switches that keep you connected to your data.

To address these concerns, data center infrastructure management (DCIM) applications have been developed. These tools give the data center manager control over the power and cooling needs of the computer room. The consoles also provide data for planning purposes, allowing a manager to map out hot and cool zones with future expansion in mind. A business that takes its computing needs seriously will want to take advantage of a DCIM package.
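As a rough illustration of what such a console automates behind the scenes, here is a minimal Python sketch of a temperature-threshold check. The locations, readings and limits are placeholder values; in a real deployment the readings would come from the DCIM tool’s own sensors rather than a hard-coded dictionary.

    # Illustrative sketch of the kind of check a DCIM console runs constantly:
    # compare temperature readings against per-location limits and flag breaches.
    # All values here are placeholders; real readings would come from sensors.
    readings_celsius = {
        "wiring-closet-1": 31.5,
        "server-room-hot-aisle": 27.0,
        "server-room-cold-aisle": 21.0,
    }

    limits_celsius = {
        "wiring-closet-1": 28.0,
        "server-room-hot-aisle": 32.0,
        "server-room-cold-aisle": 24.0,
    }

    def check_temperatures(readings, limits):
        """Return (location, reading, limit) for every location over its limit."""
        return [
            (loc, temp, limits[loc])
            for loc, temp in readings.items()
            if temp > limits.get(loc, float("inf"))
        ]

    for location, temp, limit in check_temperatures(readings_celsius, limits_celsius):
        # A full DCIM package would page the on-call technician or shift load here.
        print(f"ALERT: {location} is at {temp:.1f} C, over its {limit:.1f} C limit")

A commercial DCIM console does far more than this, of course, but the principle is the same: continuous measurement against known limits, with alerts the moment something drifts out of range.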
The more things change, the more they stay the same. In computing this certainly holds: however the trends evolve, the basic needs stay the same, and power management and cooling are chief among them.

Used with permission from Article Aggregator