Does anyone know of a way to get a powerful GPU (Graphics Processing Unit) into a Dell R910 or R710 server?
The R710 and R910 both have the capacity for an x16 slot (the R710 with an expansion module), but the x16 slot is restricted to 25W of power, and the specs specifically state that the 150W ATX specification is not supported - this means only tiny video cards with a low power draw...
- HP DL380s can support the bigger 150W cards out of the box, but we're a Dell shop and buying different vendor kit isn't really an option.
- I was also looking at the NVidia Quadro 2200 D2, which uses PCIe x8 expansion cards to offload the GPU processing to a dedicated 1U appliance, but that's a bit overkill for our needs (and likely over our budget).
Ideally I'm looking for a way to stuff a fairly decent GPU into a R710 or R910 for proper 3D acceleration.
Any suggestions appreciated.
Hello, has anyone found a solution to this?
I have an ATI FirePro V7800 for use with RemoteFX. I need to find power for the 6-pin plug on the GPU.
Tesla S Series:
From what I have been able to determine, the PCIe spec allows slot power requirements to be negotiated at system start-up (system reset). A card's initial power draw depends on the sense pins detected on the card and the PEG connector configuration. Note that the initial power draw of a 300W graphics card is 25W from the PCIe slot, plus 50W from the 2x3 PEG connector, plus another 100W from the 2x4 PEG connector (when both sense signals are grounded). That is, a 300W graphics card has a total initial power draw of 175W available at start-up.

When system reset is released, the Slot_Power_Limit (SPL) signal is sent to the card. If the SPL value is larger than the card's initial power draw (which will not be our case), the card can draw up to the slot power limit (75W for a graphics-capable slot). If the SPL value is smaller than the initial power draw (which is our case), the card can ignore the signal and continue to draw the initial power limit (25W for non-graphics slots). This means we have a total of 175W available for a 300W graphics card (as compared to a fully powered 300W card, which gets 75W from the PCIe slot, 75W from the 2x3 PEG connector and 150W from the 2x4 PEG connector).
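The budget arithmetic above can be sketched as a quick sanity check. This is a sketch only - the wattage figures are the ones quoted in this post (my reading of the PCIe spec and PEG sense-pin behaviour), not values taken from an official spec table:

```python
# Rough power-budget arithmetic for a 300W graphics card in a server whose
# PCIe slot only grants 25W. All figures are assumptions from the post above.

SLOT_NON_GRAPHICS = 25   # W: non-graphics-capable PCIe slot
SLOT_GRAPHICS     = 75   # W: graphics-capable PCIe slot
PEG_2X3_INITIAL   = 50   # W: 2x3 (6-pin) PEG connector, initial draw
PEG_2X3_FULL      = 75   # W: 6-pin PEG after negotiation
PEG_2X4_INITIAL   = 100  # W: 2x4 (8-pin) PEG, both sense signals grounded
PEG_2X4_FULL      = 150  # W: 8-pin PEG after negotiation

def initial_budget():
    """Power available at system reset, before Slot_Power_Limit applies."""
    return SLOT_NON_GRAPHICS + PEG_2X3_INITIAL + PEG_2X4_INITIAL

def full_budget():
    """Power available to a fully powered 300W card in a graphics slot."""
    return SLOT_GRAPHICS + PEG_2X3_FULL + PEG_2X4_FULL

print(initial_budget())  # 175
print(full_budget())     # 300
```

So even with both PEG connectors wired up, a 300W card stuck at the 25W initial slot limit only sees 175W of its 300W budget under these assumptions.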
The PCIe specs also indicate that power rails from a graphics PEG connector and the PCIe slot must be treated as coming from independent rails, but one assumes the card has some smarts to apportion and isolate between allowed PCIe power and PEG power (more so than the above initial power draw limits)? Categorical statements as to how graphics cards cope with power apportioning, or how a graphics card can throttle graphics throughput based on power limitations, are very difficult or impossible to get (most L1 tech support read off scripts). I have tried to get clarification on the specifics of these issues, but getting hold of Dell and graphics card design experts is impossible. The official line from Dell and the graphics card manufacturers is always 'not supported'.
Anyway, we see the Dell PCIe power limit of 25W is intact, but you still need to find somewhere to steal 150W of power for the 2x3 and 2x4 PEG connectors. In my case, I was looking at hacking into the connector between the power supply distribution board and the motherboard and/or backplane (of my T610), but I have not done so yet. Note that this has its own issues, as you will not be able to fully expand your server with all drive bays, memory, etc.
The other consideration is whether the enclosure can actually handle the heat that a graphics card can produce. Given that these are servers with noisy iDRAC fan control, one wonders how much of a problem this can be. Server cards don't have fans, and a card with its own fan exhausting through the back connector should, if anything, provide more cooling to surrounding cards due to increased airflow in the region (unless it is fighting against the system fans and creating dead zones).
Finally, there are the mechanical considerations of fitting a card, remembering that 300W graphics cards can be 55mm thick and occupy 3 slots; as I haven't seen an R910/R710, I don't know if this is an issue (the T610 has plenty of room).
So, if you want a powerful graphics card in your server, you will have to decide whether to go the supported route for $$$ or the 'it's my fault if it fails' route and spend much less. If it's the latter, consider the mechanical fit of the graphics card when choosing one, but also consider the required PEG power cabling and cooling before going down this route. (You may also consider buying a 'PEX16IX PCI Express x16 bus isolation extender' for testing purposes, as this will allow you to take PCIe current measurements and verify that PCIe/PEG power issues aren't a problem under load.)
You would also want to verify that the OS doesn't suffer issues under graphics load, considering the card is not running with full power (and may not cope with brownouts, so to speak). In essence, and to minimize risk and blame, you need to self-certify the chosen card for mechanical fit, power and cooling. This, in itself, has costs, but if you go down the 'not supported' path, it's better to do the legwork yourself rather than say 'xxx on the support forum said it should work' :)
Having said that, I have successfully installed an old 32W XFX GeForce 7600 GS, which I modified to be an x8 card, for use in my T610. As yet I have not bought a new powerful graphics card, nor hacked into the power feed to the motherboard to steal a couple of PEG connector power feeds.
Oh, and if you are thinking of SLI/CrossFire, well, that's another kettle of fish and requires an even longer walk down the unsupported path.
I have been pondering this problem for a few weeks.
Generally I'm the first in line to cut and splice wiring harnesses to give me the extra plugs I require. However, keeping power consumption in mind, I didn't want to overdraw the power rails and supplies.
So I could either:
a: add another power supply (there's a great thread by the bitcoiners on this), or
b: remove 4 drives and change the remaining 2 drives from LFF to SFF or SSD, using a Dell adapter bracket:
4 of these - http://www.scsi4me.com/hp-398291-001-sas-sata-adapter-for-sas-hard-drives.html
2 of these - http://www.amazon.com/StarTech-com-SATPCIEX8ADP-6-Inch-Express-Adapter/dp/B007Y8FSMQ/ref=pd_sim_e_3
1 of these - http://www.aliexpress.com/item/Free-Shipping-Cheap-Dual-6-Pin-PCI-Express-Female-to-8-Pin-PCI-Express-Male-Y/543860007.html
1 of these - http://www.cwc-group.com/8pin6pin.html
The reduced power load from the reduction in HDD power consumption should equal the power required for the card.
Will update once tested; not sure when I will be doing this.
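Whether the drive savings actually balance the card's PEG requirement depends on real per-drive wattage, which is worth checking before cutting any harnesses. A rough sketch of the check, using assumed per-drive figures (typical 3.5" SAS vs 2.5" SSD draws, not measured values for any specific Dell model):

```python
# Back-of-the-envelope check of the "remove drives to free power" plan.
# LFF_SAS_W and SSD_W are rough assumptions, not measured figures.

LFF_SAS_W = 12  # assumed draw of one 3.5" SAS drive under load (W)
SSD_W     = 3   # assumed draw of one 2.5" SSD (W)

def freed_power(drives_removed, drives_converted):
    """Watts freed by removing drives outright plus swapping LFF for SSD."""
    removed   = drives_removed * LFF_SAS_W
    converted = drives_converted * (LFF_SAS_W - SSD_W)
    return removed + converted

gpu_peg_need = 150          # assumed extra PEG power a big card wants
budget = freed_power(4, 2)  # the plan above: pull 4 drives, convert 2
print(budget, gpu_peg_need, budget >= gpu_peg_need)  # 66 150 False
```

With these assumed figures the freed power falls well short of a 150W PEG requirement, which is exactly why the numbers deserve measuring against the actual drive specs rather than being assumed to cancel out.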
Did you find a solution to this problem? I would like to put an AMD Firepro in a Dell R815 server.
I will be following this as well, for RemoteFX. I was looking at the V7900, W7000, W600, and a slew more. The slot is capable of holding a card 9.5" long and 4.376" tall. I found the NVIDIA GTX 760 is just the right size, to a T. Zotac USA ZT-70406-10P is the model, and it is much faster than the FirePro counterparts. I am running into the same issue with power, though. The card requires 170W of power. I verified the ports on the R715, and it will support 2 cards at 25W or 4 at 15W. Wow, what power! :P Anyway, if anyone figures this out, please post - it would help us all!
Oh, I wanted to mention that the AMD FirePro W5000 only requires 75 watts; I was looking at it as well. It's very close compared to the GTX 760. The performance difference is nuts:
W5000 Benchmark: 2959
GTX 760 Benchmark: 5004
Almost twice as fast and MUCH cheaper. Half the price for twice the performance! But power is our enemy.
Sorry, I forgot that in a couple of our servers (R815, R515, R415) we used a Quadro NVS 440 with no problems; it greatly improved the console performance and it pulls 31 watts. Works great. Performance is weak compared to the other units, though.
@thegadget, PCIe slots come in two variants: those that are meant for graphics cards and can thus provide 75W, and all other slots, which can provide 25W. The variant is not dependent on the number of lanes the slot actually provides, though in desktop cases many slots are graphics-capable and can thus provide 75W. In desktops, x16 slots are always graphics-capable, but in older servers, which are a different beast, they are not graphics-capable and thus we are limited to 25W...
Only the BIOS/board designer really knows what happens during power negotiation at boot time, but in any case the server documentation is clear: you are limited to 25W...
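On a Linux install you can at least read the limit the slot advertises, from the Slot Capabilities register shown in `lspci -vv` output (look at the root-port entry for the slot in question). A small sketch that parses the relevant field - the sample text below is illustrative, not captured from an actual R715:

```python
import re

# Illustrative fragment of `lspci -vv` output for a PCIe slot; on a real
# machine, run `sudo lspci -vv` and find the SltCap block for your slot.
SAMPLE = """\
    SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
        Slot #2, PowerLimit 25.000W; Interlock- NoCompl+
"""

def slot_power_limit(lspci_text):
    """Return the advertised slot power limit in watts, or None if absent."""
    m = re.search(r"PowerLimit\s+([\d.]+)W", lspci_text)
    return float(m.group(1)) if m else None

print(slot_power_limit(SAMPLE))  # 25.0
```

This only tells you what the slot advertises, not how the card will behave when it wants more - but it confirms whether a given slot is a 25W or 75W one without digging through the server manual.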
Like you, I have had success with a low-end graphics card (in my T610). My card uses around 30W as a maximum, but as it is never stressed by intensive graphics tasks it likely uses much less, and as such it has been rock stable for a few years now...
The difficulty is that information on how the BIOS/boot process/motherboard design/card firmware/card design actually impacts the power negotiation at start-up is unknown. As such, whether we can mod the card firmware/hardware to re-apportion power between the PCIe slot and the 2x3/2x4 PEG connectors to meet the 25W limit is unknown. Likely this can be determined, but one would need access to some test equipment and some time to reverse engineer the designs, or access to the designers for some guidance.
Without such design help, which doesn't seem to be forthcoming, it's a difficult battle. I simply sidestepped the problem with the XFX GF7600, which, when not stressed, remains within the 25W power budget, and such an arrangement meets my needs.
There is no simple way around the power issue. So if you need graphics grunt, the simplest way forward is a new server/workstation... unless you enjoy the reverse engineering challenge, that is.
I don't disagree with the power issues from the PCIe x16 port. My server has two 1100 watt power supplies, so I don't think the power supply is my problem. The issue is that Dell didn't design the R815 to accommodate a GPU with a 6-pin power connector.
I found a couple of these on Newegg.com. Going to try them out and see if they will work.
@rflannary, most servers have an excess of available power if they are not fully loaded with hungry components. In such lightly configured situations there are ways to steal PEG connector power from the various power lines (though you'd need some knowledge to do this safely). As such, resorting to add-ons like the E-Power EP-450CD to provide the missing PEG power should not be needed (and should really be avoided, since it will likely cause other power problems).
However, the real issue is one of server mobo/BIOS design, as the PCIe slots in these older servers are not designed to supply 75W (they are not graphics-capable slots and are thus limited to 25W). Frankly, trying to pull 75W from a 25W PCIe slot may cause unknown problems, since the function responsible for the PCIe power negotiation that occurs at start-up is itself unknown to me, as I am not a mobo/BIOS designer. Problems could manifest as errors flagged by the server, up to sudden system shutdown.
Further compounding this uncertainty is that we also know nothing about graphics card design - specifically, what happens if a card is limited to 25W from the PCIe slot at start-up but needs more power. We really don't know whether a card can make up such a PCIe power shortfall by pulling more power from the PEG (2x3 or 2x4) connectors (if they exist on the card). Problems could manifest as graphics instability, lockups, or sudden system shutdown/restart.
Adding a supplementary power supply, like the E-Power EP-450CD, does nothing to resolve such uncertainty, since you're not addressing the issue of the PCIe power shortfall.
Unfortunately, only the server mobo/BIOS designers really know what will happen if a card tries to pull more than 25W from the PCIe slot, and only the card designer knows how this will impact the function provided by the card. And since no server BIOS/mobo designer or graphics card designer has stepped up to the plate to clarify such uncertainty, if you want reliability you must do lots of testing yourself (which requires specialist tools and knowledge). Anything else is simply half-baked. Simply relying on some plug-and-play power addition and the OS coming up as your only 'test' will give you a false sense that all is OK, and this may cost you later (hardware failure, software instability, etc.)...
If only a BIOS/mobo/graphics card designer would come to the party and fess up some knowledge...
After adding one of these external power supplies, and adding the GPU and drivers to the server, Hyper-V reports the GPU present and working.