In the age of power consciousness and efforts to reduce the power footprint of the data center, an interesting trend has started to emerge. We've heard of more and more people removing the second power supply from their Dell™ PowerEdge™ servers to reduce power consumption. It seems that many people think they have cut power consumption in half because they have removed half the power supplies.

This is an interesting theory to explore, and since we have some of the older servers and power monitoring equipment, we can easily measure the savings. So let's see exactly what the power differences are across two generations of servers from removing the redundant power supply.

Lab Configuration

Dell PowerEdge 1850
  2 x Xeon 3.6GHz
  4 x 2GB DIMMs
  PERC 4 RAID Controller
  2 x 550 Watt PS

Dell PowerEdge 2850
  2 x Xeon 3.6GHz
  4 x 2GB DIMMs
  Qlogic Fibre HBA
  PERC 4 RAID Controller
  2 x 700 Watt PS

Dell PowerEdge 2950
  2 x Xeon X5460 3.16GHz
  8 x 4GB DIMMs
  PERC 5E RAID Controller
  4 x 132GB SAS HDD
  2 x 750 Watt PS

EXTECH 380803 Power Analyzer

Test Methodology

A CPU load generator was used to produce 50% and 100% CPU loads on each system. Power was measured on each system at idle (near 0% CPU utilization), at 50%, and at 100%. Sixty samples were taken for each measurement and averaged to produce each result. Each test was run with both power supplies installed, and then again with one power supply removed.

The results were then compared to calculate the percentage of wattage saved by running the same test with a single power supply instead of two.
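As a rough sketch of that calculation (the function names are our own, not from any tool we used; the example values are the PE 1850 idle measurements from the results below):

```python
def average_watts(samples):
    """Average a set of power-analyzer readings (we took 60 per test)."""
    return sum(samples) / len(samples)

def percent_savings(single_ps_watts, dual_ps_watts):
    """Wattage saved by one supply, as a percentage of the dual-supply draw."""
    return (dual_ps_watts - single_ps_watts) / dual_ps_watts * 100

# PE 1850 at idle: 208 W with one supply, 235 W with two
print(round(percent_savings(208, 235), 2))  # → 11.49
```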

Test Results

Dell PE 1850   Single PS Watts   Dual PS Watts   % Difference
Idle CPU       208               235             11.49%
50% CPU        336               359             6.41%
100% CPU       354               374             5.35%

Dell PE 2850   Single PS Watts   Dual PS Watts   % Difference
Idle CPU       230               249             7.63%
50% CPU        365               380             3.95%
100% CPU       385               400             3.75%

Dell PE 2950   Single PS Watts   Dual PS Watts   % Difference
Idle CPU       288               299             3.68%
50% CPU        394               408             3.43%
100% CPU       438               446             1.79%


Not 50% savings, but some savings.

You can see that in our configurations we are nowhere near the maximum power ratings stamped on the power supplies. A power supply delivers only the power that the server's components are demanding, and with two supplies installed, that demand is shared equally between them.
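To illustrate how far below the nameplate rating each supply runs, here is a small sketch using our PE 2950 numbers; the perfect 50/50 split is an idealizing assumption:

```python
# Hedged illustration of equal load sharing across two supplies.
rated_watts = 750            # nameplate rating of each PE 2950 supply
demand_watts = 408           # our measured draw at 50% CPU, dual supplies
per_supply = demand_watts / 2
utilization = per_supply / rated_watts * 100
print(f"{per_supply:.0f} W per supply, {utilization:.0f}% of rating")
```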

Two major conclusions stand out from the results. First, the savings are larger on the older servers.

Second, as the load on the server goes up, the savings go down.

Are the savings worth the loss of power-supply redundancy? I don't know; you tell me. Please weigh in with your arguments by starting a thread below.