What is the Optimal Efficiency Target for Your System Design?

The focus on efficiency and green operation has grown immensely over the years, particularly in the data center environment. For system designers, power and efficiency have gone from a low-priority topic to the top of the list of design constraints, as OEMs and their customers have identified them as areas of major concern. This editorial examines the conflicting opinions that stakeholders working on the very same solution can hold about what the efficiency curve should look like, and aims to help a team of power supply design stakeholders navigate that mire and answer the question “What is the optimal efficiency target for your system design?”

So the first question that seems common is “What is the peak efficiency target for a power supply design that will meet the system power budget needs?” This is one of the most overrated questions in all of power design. For one, peak efficiency is just that: the peak. Rarely is there a practical application with a completely predictable, steady-state load that sits at that ideal point on the load curve. A better initial question might be “What is my power supply really trying to achieve for my end customer?” The answer nearly always boils down to cost, but it can still take on several conflicting goals. In data center hardware applications, it typically involves one of three design objectives:

1. Allow the customer to increase performance within the same power footprint (for example, when the site is power-capped by the utility company).
2. Allow the customer to reduce their power footprint while maintaining performance (equivalently, increasing data center performance density).
3. Match the actual system load as closely as possible to the Safety-label rated maximum, to minimize infrastructure overdesign driven by legal building requirements.

Of course, one may say the true goal is a hybrid of these options, but they have been intentionally separated to make the point that they are often in conflict, and keeping them separate also simplifies the mathematical trends.

Once you have decided what your application’s motivation is, it is time to think about the most appropriate power topologies and implementations to enable that goal. We will not delve into topologies here, but rather focus on the reasoning process that justifies one particular topology over another. Following this process lets you optimize each portion of the load range on the efficiency curve, and optimize the integrated power usage across those portions, to get as close as possible to an ideal, flat efficiency curve.

“What is the ideal load portion to optimize my design for?” Is it 100%? 50%? 20%? (By the way, it is no coincidence that these are the key points at which standards such as the 80 PLUS® certification define their targets.) This can be a trick question if you have not first worked through the thought experiment above and carefully considered the true needs of the customer.

To make the point, let us do some quick math. Most data center hardware solutions have redundant front-end supplies that are required to share current equally (within a reasonable tolerance). Assuming these are always in a share condition, this automatically makes the maximum load point for a given unit 100/n, where n is the total number of supplies that must share. So if your system requires four units to share, the maximum operating point on the load curve for a single unit is 25%, which makes the relevant load portion 0-25%. In this scenario, anything above 25% on the load curve is irrelevant, and it does not matter what the efficiency curve looks like beyond that point. Your efficiency could be 30% or 110%, but if you never operate at those points on the load curve, they serve no purpose except as an empty spec on a datasheet. For this scenario, optimizing the design for the 15-20% load range may have the biggest impact at the end of the day. In a single-supply configuration, optimizing for the 60-80% load range is probably most pragmatic once typical load and system derating needs are taken into account.
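As a rough illustration of that arithmetic, here is a minimal sketch in Python, with hypothetical numbers, that computes the per-unit maximum load point for an n-way current-sharing front end and reports the load band worth optimizing. It assumes the units always share equally and is not tied to any particular product.

```python
# Minimal sketch (hypothetical numbers): for a front end of n power supplies
# sharing current equally, the maximum load seen by any one unit is 100/n
# percent of its rating, so only the 0..100/n band of the efficiency curve
# matters while all units are sharing.

def per_unit_max_load(n_supplies: int) -> float:
    """Highest point on a single unit's load curve, in percent of rating."""
    return 100.0 / n_supplies

def relevant_band(n_supplies: int) -> tuple[float, float]:
    """Load band (in percent) worth optimizing for an always-sharing system."""
    return (0.0, per_unit_max_load(n_supplies))

if __name__ == "__main__":
    for n in (1, 2, 4):
        lo, hi = relevant_band(n)
        print(f"{n} sharing supplies -> optimize the {lo:.0f}-{hi:.0f}% load band")
    # 4 sharing supplies -> optimize the 0-25% load band, as in the text.
```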

Of course, the exception is a solution that must support alternate configurations (N+1, 2N+1, 2N+2, etc.). And if you truly want a good prediction of actual power performance (for example, to help the customer meet their expected improvement in their utility bill), be sure to integrate over that load portion to find the true power usage. A single point on the efficiency curve will not accurately predict this.
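To show what that integration looks like in practice, here is a minimal sketch, again in Python, that time-weights input power over a load profile rather than quoting a single efficiency number. The rating, efficiency points, and load profile below are illustrative assumptions, not measured data.

```python
# Minimal sketch, with made-up numbers: estimate real power usage by
# integrating over the load profile the unit will actually see.
# `load_profile` maps output power (W) to the fraction of time spent there;
# `eff_points` is an illustrative (load fraction -> efficiency) table,
# interpolated linearly.

import numpy as np

RATED_W = 1600.0  # hypothetical unit rating

# (load fraction of rating, efficiency) - illustrative points only
eff_points = np.array([[0.10, 0.88], [0.20, 0.92], [0.50, 0.94], [1.00, 0.91]])

def efficiency(load_w: float) -> float:
    """Interpolated efficiency at a given output load."""
    frac = load_w / RATED_W
    return float(np.interp(frac, eff_points[:, 0], eff_points[:, 1]))

def average_input_power(load_profile: dict[float, float]) -> float:
    """Time-weighted input power (W): sum over the profile of P_out / eta."""
    return sum(t * (p / efficiency(p)) for p, t in load_profile.items())

if __name__ == "__main__":
    # Hypothetical profile for one unit in a 4-way sharing system:
    # it never sees more than 25% of its rating.
    profile = {80.0: 0.3, 240.0: 0.5, 400.0: 0.2}  # watts -> fraction of time
    p_in = average_input_power(profile)
    p_out = sum(t * p for p, t in profile.items())
    print(f"average output {p_out:.0f} W, average input {p_in:.0f} W, "
          f"effective efficiency {p_out / p_in:.1%}")
```

The single "effective efficiency" that falls out of this integration is the number that actually shows up on the utility bill, and it can differ noticeably from the peak value on the datasheet.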

Now that we have tackled the efficiency curve, let us focus on something far more important for actually reducing the overall number of watts used: utilization. The most efficient power supplies and systems in the world are the ones that are turned off. This topic is more in the hands of system software/firmware architects, but it takes a major push from power engineering to make these folks realize that EVERYONE is a stakeholder in the power supply design, not just the Power Engineer. The rapidly maturing tools for virtualization and cloud computing provide all the hardware/software hooks and infrastructure needed to free workloads from the constraints of any one rack and thereby allow maximum consolidation of loads onto as few pieces of physical hardware as possible. As an added bonus, consolidating to increase utilization yields a load profile that is less dynamic and more predictable, which enables better use of the efficiency curve. Any hardware that is not being utilized should be put in standby (i.e., a low-power state) or even turned off, based on the demands of the dynamic support model for a given application.
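As a toy illustration of that consolidation idea, the following Python sketch greedily packs hypothetical workload demands onto as few servers as possible and counts how many machines become candidates for standby. It is a simple first-fit-decreasing heuristic with invented numbers, not a real orchestration tool.

```python
# Minimal consolidation sketch: pack workload demands (expressed in watts of
# server capacity) onto as few servers as possible, then treat the remaining
# machines as candidates for standby. Numbers are hypothetical.

def consolidate(demands_w: list[float], server_capacity_w: float) -> list[list[float]]:
    """First-fit-decreasing packing of demands onto servers."""
    servers: list[list[float]] = []
    for d in sorted(demands_w, reverse=True):
        for s in servers:
            if sum(s) + d <= server_capacity_w:
                s.append(d)
                break
        else:
            servers.append([d])  # no existing server fits; power up another
    return servers

if __name__ == "__main__":
    demands = [120, 90, 60, 300, 150, 40, 200]   # per-workload demand, W
    total_servers = 8
    packed = consolidate(demands, server_capacity_w=500)
    active = len(packed)
    print(f"{active} servers active, {total_servers - active} candidates for standby")
    for i, s in enumerate(packed):
        print(f"  server {i}: {sum(s):.0f} W ({sum(s) / 500:.0%} utilized)")
```

The point of the toy example is that the remaining active servers run at high, steady utilization, which is exactly the predictable load profile that makes the efficiency-curve optimization above meaningful.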

In summary, the overall point here is that utilization and mitigating the customers’ headaches are the TRUE figures of merit in shaping that efficiency curve. The technology to enable this has been around for nearly a decade now and is beyond a state of infancy. So the next time that Marketing person or System Design Manager tells you what the peak efficiency of your next design needs to be, ask them: Is it more important to put a single number on a spec sheet or provide a better product for the customer? Also, be sure to remind them of the importance of integrals!
