In the age of power consciousness and efforts to reduce the power footprint of the data center, an interesting trend has started to emerge. We've heard of more and more people removing the second power supply from their Dell™ PowerEdge™ servers to reduce power consumption. Many seem to assume that removing half the power supplies cuts power consumption in half. This is an interesting theory to explore, and since we have some of the older servers and power monitoring equipment on hand, we can easily measure the savings. So let's see exactly what the power difference is, across two generations of servers, from removing the redundant power supply.
Measurements were taken with an EXTECH 380803 Power Analyzer.
A CPU load generator was used to produce 50% and 100% CPU loads on each system. Power was measured on each system at idle (near 0% CPU utilization), at 50% load, and at 100% load. Sixty samples were taken and averaged for each result. Each test was run with both power supplies installed, and then again with one power supply removed. The results were then compared to get the percentage savings in wattage between running the same test with a single power supply and with two power supplies.
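The arithmetic behind the comparison is simple, and a minimal sketch may make the methodology concrete. The wattage readings below are made up for illustration; only the averaging of the 60 samples and the percentage calculation reflect the procedure described above.

```python
# Sketch of the measurement math: average 60 wattage samples per test,
# then compute the percentage savings between the dual-PSU and
# single-PSU runs. The sample values here are hypothetical.

def average_watts(samples):
    """Mean of a list of wattage readings."""
    return sum(samples) / len(samples)

def percent_savings(dual_psu_watts, single_psu_watts):
    """Percentage reduction in power from removing the second PSU."""
    return (dual_psu_watts - single_psu_watts) / dual_psu_watts * 100

# Illustrative (fabricated) readings; a real run collects 60 samples each.
dual_samples = [212.0] * 60
single_samples = [201.4] * 60

saving = percent_savings(average_watts(dual_samples),
                         average_watts(single_samples))
print(f"{saving:.1f}% savings")  # → 5.0% savings
```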
Not 50% savings, but some savings. You can see that in our configurations we are nowhere near the maximum power ratings stamped on the power supplies. Power supplies "supply" only the power that the components in the server demand, and with two power supplies in the system that demand is spread equally across them. Two major conclusions emerge from the results. First, the savings are larger on older servers. Second, as the load on the server goes up, the savings go down. Are the savings worth the loss of a redundant power supply? I don't know; you tell me. Please weigh in with your arguments by starting a thread below.