Software-Defined Power: the Last Piece to the Software-Defined Data Center

Virtualization is becoming pervasive in data centers. Most physical servers now operate multiple Virtual Machines (VMs). Data is stored in Storage Area Networks (SANs) and as Network-attached Storage (NAS). Networks are becoming software-defined. And “The Cloud” has taken virtualization into Cyberspace with software (applications) and even infrastructure (servers and storage) now being available as services in managed SaaS and IaaS offerings.

With virtualization now end-to-end in the IT assets, the next step involves making it top-to-bottom in the data center, and this idea is embodied in the all-encompassing “Software-Defined Data Center” concept. Most vendors and analysts seem to be taking too narrow a perspective, however, on the potential of Software-Defined Data Centers. Everyone recognizes the obvious storage, server and network elements of SDDC, of course, but very few consider the power and cooling infrastructures required for these IT assets and the facility itself.

Perhaps those taking such a narrow view of SDDC question whether it is even possible to virtualize power and cooling. Or they might wonder whether the benefits make it worthwhile. Power Assure believes the answer to both questions is Yes.

The ability to define and, therefore, control something in software requires creating a layer of abstraction between the virtual (or logical) and the physical resources. With servers, the hypervisor creates the Virtual Machines that share the CPU, memory and input/output resources of the physical server. With storage, that layer is (logically enough) the Logical Unit Number or LUN for sharing the physical disk drives.

The requisite logical or virtual layer of abstraction already exists in most data centers in the power and cooling infrastructures through a variety of means, including industry standard protocols, management applications for the power distribution units and computer room air conditioners, a Building Management System (BMS) or a Data Center Infrastructure Management (DCIM) system. Some of this software also interfaces with various IT management systems, putting within reach the end-to-end and top-to-bottom control envisioned with a Software-Defined Data Center.
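The abstraction layer described above can be pictured as a thin shim that hides the physical protocols (SNMP, Modbus, a DCIM API) behind one logical interface. The sketch below is purely illustrative; the class names, fields and the stubbed backend are hypothetical, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class PowerReading:
    """Snapshot of one feed's electrical state, as a DCIM or BMS might report it."""
    feed_id: str
    volts: float
    amps: float
    on_utility: bool  # False while running on UPS/generator power

class PowerAbstraction:
    """Logical layer over heterogeneous PDU/CRAC/BMS sources.

    Each backend registers a callable that returns a PowerReading;
    consumers of the layer never touch the physical protocol.
    """
    def __init__(self):
        self._sources = {}

    def register(self, feed_id, reader):
        self._sources[feed_id] = reader

    def snapshot(self):
        """Poll every registered source once and return the logical view."""
        return {fid: read() for fid, read in self._sources.items()}

# Example: a stubbed backend standing in for a real SNMP or Modbus poller.
pa = PowerAbstraction()
pa.register("pdu-a1", lambda: PowerReading("pdu-a1", 228.0, 14.2, True))
state = pa.snapshot()
print(state["pdu-a1"].on_utility)  # True
```

Higher-level software (a workload scheduler, for instance) would consume only `snapshot()`, which is exactly the decoupling that makes power "software-defined."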

While the foundation may exist for fulfilling the all-encompassing view of SDDC, very few vendors are pursuing such a strategy today. That is expected to change, however, because the benefits of including Software-Defined Power in a Software-Defined Data Center are both real and compelling.

The Benefits of Software-Defined Power

The benefits of Software-Defined Power derive from the ability to take into account the availability, dependability and quality of electricity when determining the best and most reliable data center to support the service level guarantee for a given application workload. This requires interaction with the systems actually managing or load-balancing the virtual clusters and/or physical servers, which in turn requires close cooperation between IT and Facilities personnel—still difficult to achieve today. The effort is worth it, though, because it yields two significant benefits that cannot be realized without such integration.

The first benefit is maximizing application uptime. More than half of all application downtime in data centers today is caused by power problems, and that percentage is expected to increase as the electric grid struggles to meet a growing demand on an aging infrastructure, resulting in even more brownouts, blackouts and other power quality problems.

A major reason power is now the cause of so much downtime is, of course, that virtualization of the physical IT infrastructure has minimized or eliminated most other single points of failure. Power remains exposed by contrast: continuous operation during a power outage cannot be guaranteed by the typical data center’s uninterruptible power supply (UPS), transfer switch and backup generator(s).

As part of a business continuity or disaster recovery strategy, most organizations now operate multiple, geographically-dispersed data centers, or use cloud-based services for their BC/DR needs. This investment is justified to protect against catastrophic events caused by major natural disasters, but the arrangement can also afford greater immunity from more mundane (and increasingly routine) power problems on the grid.

Even in a dual Tier 4 data center configuration, however, power-related issues can bring down a complete application, and adding a third Tier 4 data center is rarely cost-justifiable. The reason: two Tier 4 data centers together normally experience less than 10 seconds of downtime per year. Is eliminating those final 10 seconds really worth the cost of another data center?

Instead, implementing Software-Defined Power minimizes power-related downtime by shifting the application workload to the data center with the most available and dependable power at any given time. Including power as a software-defined element of the application environment (along with servers, storage and networks) makes it possible to abstract applications fully from all physical resources within any individual data center, and that in turn enables application workloads to be shifted and shed more intelligently between or among data centers.
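A workload-placement policy of this kind can be sketched as a simple scoring function over per-site power metrics. Everything here is illustrative—the metric names, weights and site names are hypothetical, not Power Assure's actual algorithm; a real policy would be tuned to the SLA terms.

```python
def pick_site(sites):
    """Choose the data center with the most available and dependable power
    right now. `sites` maps a site name to a dict of power metrics."""
    def score(m):
        s = 0.0
        s += 100.0 if m["on_utility"] else 0.0  # heavily favor sites not on backup power
        s += 10.0 * m["redundancy_level"]       # e.g. 1 for N+1, 2 for 2N
        s -= 50.0 * m["grid_alerts"]            # active brownout/blackout warnings
        return s
    return max(sites, key=lambda name: score(sites[name]))

# dc-east is riding through a grid event on generators; dc-west is healthy.
sites = {
    "dc-east": {"on_utility": False, "redundancy_level": 2, "grid_alerts": 1},
    "dc-west": {"on_utility": True,  "redundancy_level": 2, "grid_alerts": 0},
}
print(pick_site(sites))  # dc-west
```

The output of such a function would feed the cluster manager or global load balancer that actually performs the shift, which is why the IT/Facilities integration described earlier is a prerequisite.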

In addition to increasing availability by affording greater immunity from unplanned downtime caused by undependable power, shifting application workloads across data centers also makes it easier to schedule the planned downtime needed for routine maintenance and upgrades within each data center. While the cost savings that result from avoiding downtime are always difficult to quantify in a general way (and are therefore ignored here), these improvements have the effect of minimizing downtime (whatever the cost) with absolutely no adverse impact on service levels or quality of service.

The holistic allocation of IT resources within and across data centers leads to the second major benefit of Software-Defined Power: a reduction of up to 50 percent in the energy needed to operate and cool those IT resources. The reason for the savings is that by shifting load to a distant data center, it becomes possible to shed that load locally, enabling some or all of those servers to be powered down until needed again. This same ability to de- and re-activate servers using automated runbooks can also be used to match capacity to the application workload within a single data center, affording additional savings.
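One pass of such an automated runbook can be sketched as follows. The capacity numbers, headroom factor and function names are hypothetical placeholders, not a real runbook—the point is only the control loop: compute the servers needed for the current workload, then power the surplus down (or the shortfall up).

```python
import math

def servers_needed(workload_rps, capacity_rps_per_server, headroom=0.25):
    """Servers that must stay powered on for the current workload,
    keeping a 25% safety headroom. All figures are illustrative."""
    required = workload_rps * (1.0 + headroom) / capacity_rps_per_server
    return max(1, math.ceil(required))

def runbook_step(active, workload_rps, cap=500):
    """One pass of the runbook: decide how many servers to power down
    or back up so that active capacity tracks the workload."""
    target = servers_needed(workload_rps, cap)
    if target < active:
        return ("power_down", active - target)
    if target > active:
        return ("power_up", target - active)
    return ("hold", 0)

# 6,000 req/s at 500 req/s per server needs 15 servers (with headroom),
# so 25 of the 40 active servers can be shed until demand returns.
print(runbook_step(active=40, workload_rps=6000))  # ('power_down', 25)
```

The headroom term matters: re-activating a physical server takes minutes, so the loop must leave enough powered-on slack to absorb demand spikes between passes.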

Another factor in the cost savings is that when power is the most available and dependable, it is also the most affordable. This typically occurs at night, so shifting the application workload to “follow the moon” ensures that the organization is always paying the lowest possible rate for electricity—and should need less of it by being able to use more outside air for cooling.
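“Follow the moon” scheduling can be illustrated with a few lines of timezone arithmetic: prefer the sites where it is currently night. The site names, UTC offsets and the fixed 22:00–06:00 night window are assumptions for the sake of the example; a real scheduler would use actual tariff and weather data rather than local clock time alone.

```python
from datetime import datetime, timedelta, timezone

def is_night(utc_now, utc_offset_hours, night=(22, 6)):
    """True if local time at a site falls in the night window (22:00-06:00)."""
    local = utc_now + timedelta(hours=utc_offset_hours)
    start, end = night
    return local.hour >= start or local.hour < end

def follow_the_moon(utc_now, sites):
    """Prefer sites where it is night (cheaper tariffs, cooler outside air);
    fall back to all sites if nowhere is currently dark."""
    dark = [name for name, offset in sites.items() if is_night(utc_now, offset)]
    return dark or list(sites)

sites = {"dc-virginia": -5, "dc-frankfurt": 1, "dc-singapore": 8}
now = datetime(2024, 1, 15, 3, 0, tzinfo=timezone.utc)  # 03:00 UTC
print(follow_the_moon(now, sites))  # ['dc-virginia', 'dc-frankfurt']
```

At 03:00 UTC it is 22:00 in Virginia and 04:00 in Frankfurt but midday in Singapore, so the scheduler would steer workload toward the first two sites.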

This combined benefit of Software-Defined Power—maximizing application-level reliability within the constraints of the service level guarantees—is what Power Assure calls ultimate application availability. And perhaps best of all: the annual savings can be as much as three times the cost of implementing Software-Defined Power, resulting in an extraordinarily compelling return on the investment, even without including the savings from not needing to build another data center.
