Graphics GPU for PowerEdge R710 or R910


Guys,

Does anyone know of a way to get a powerful GPU (Graphics Processing Unit) into a Dell R910 or R710 server?

The R710 and R910 both have the capacity for an x16 slot (the R710 with an expansion module), but the x16 slot is restricted to 25 W of power, and the specs specifically state that the 150 W ATX specification is not supported. This means only tiny video cards with low power draw...

- HP DL380s can support the bigger 150 W cards out of the box, but we're a Dell shop and buying another vendor's kit isn't really an option.
- I was also looking at the NVIDIA Quadro Plex 2200 D2, which uses a PCIe x8 expansion card to offload GPU processing to a dedicated 1U appliance, but that's a bit of overkill for our needs (and likely over our budget).

Ideally I'm looking for a way to stuff a fairly decent GPU into a R710 or R910 for proper 3D acceleration.

Any suggestions appreciated.

All Replies
  • Hello, has anyone found a solution to this?

    I have an ATI FirePro V7800 for use with RemoteFX. I need to find power for the 6-pin plug on the GPU.

  • Stuart, your best bet is going to be to use an external GPU appliance such as the NVIDIA Quadroplex for visualization, or the Tesla S series appliance if you are doing GPU computing. These appliances connect to the host server's PCIe bus through a HIC card and external PCIe cable. Feel free to email me for more info or pricing: <ADMIN NOTE: Email id removed per privacy policy>

    Quadroplex:

    http://www.nvidia.com/object/product_quadroplex_2200_s4_us.html

    Tesla S Series:

    http://www.nvidia.com/object/product_quadroplex_2200_s4_us.html

    Dave

    http://www.advancedhpc.com/

    From what I have been able to determine, the PCIe spec allows slot power requirements to be negotiated at system start-up (system reset). A card's initial power draw depends on the sense pins detected on the card and the PEG connector configuration. Note that the initial power draw of a 300 W graphics card is 25 W from the PCIe slot, plus 50 W from the 2x3 PEG connector, plus another 100 W from the 2x4 PEG connector (when both sense signals are grounded). That is, a 300 W graphics card has an initial power draw of 175 W total available at start-up. When system reset is released, the Slot_Power_Limit (SPL) signal is sent to the card. If the SPL value is larger than the card's initial power draw (which will not be our case), the card can draw up to the slot power limit (75 W for a graphics-capable slot). If the SPL value is smaller than the initial power draw (which is our case), the card can ignore the signal and continue to draw the initial power limit (25 W for non-graphics slots). This means we have a total of 175 W available for a 300 W graphics card (compared to a fully powered 300 W card, which gets 75 W from the PCIe slot, 75 W from the 2x3 PEG connector and 150 W from the 2x4 PEG connector).
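    To make the arithmetic above concrete, here is a minimal sketch in Python. The wattage figures are taken from my reading of the spec as described above; treat them as assumptions, not authoritative values.

```python
# Start-up power budget sketch for a PCIe graphics card, per the
# negotiation described above. All figures are assumptions based on
# my reading of the PCIe spec, not an official calculator.

PCIE_INITIAL = 25      # W from the slot before SPL is applied
PEG_2X3_INITIAL = 50   # W from the 6-pin (2x3) PEG connector at start-up
PEG_2X4_INITIAL = 100  # W from the 8-pin (2x4) PEG connector
                       # (both sense pins grounded)

def initial_power_budget(has_2x3: bool, has_2x4: bool) -> int:
    """Total power a card may draw at system reset, before (or
    ignoring) the Slot_Power_Limit signal."""
    budget = PCIE_INITIAL
    if has_2x3:
        budget += PEG_2X3_INITIAL
    if has_2x4:
        budget += PEG_2X4_INITIAL
    return budget

# A 300 W card with both PEG connectors fitted:
print(initial_power_budget(True, True))   # 175 W, as described above

# Fully powered comparison: 75 W slot + 75 W 2x3 + 150 W 2x4
print(75 + 75 + 150)                      # 300 W
```

    In other words, even with the 25 W slot limit intact, a card with both PEG connectors powered could in principle see 175 W of its 300 W budget at start-up.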

    The PCIe specs also indicate that power rails from a graphics PEG connector and the PCIe slot must be treated as coming from independent rails, but one assumes the card has some smarts to apportion and isolate between allowed PCIe power and PEG power (beyond the initial power draw limits above)? Categorical statements about how graphics cards cope with power apportioning, or how a card can throttle graphics throughput based on power limitations, are very difficult or impossible to get (most L1 tech support read off scripts). I have tried to get clarification on the specifics of these issues, but getting hold of Dell and graphics card design experts is impossible. The official line from Dell and the graphics card manufacturers is always 'not supported'.

    Anyway, we see the Dell PCIe power limit of 25 W is intact, but you still need to find somewhere to steal 150 W of power for the 2x3 and 2x4 PEG connectors. In my case, I was looking at hacking into the connector between the power supply distribution board and the motherboard and/or backplane (of my T610), but I have not done so yet. Note that this has its own issues, as you will not be able to fully expand your server with all drive bays, memory, etc.

    The other consideration is whether the enclosure can actually handle the heat a graphics card can produce. Given that these are servers with noisy iDRAC fan control, one wonders how much of a problem this would be. Server cards don't have fans, and a card with its own fan exhausting through the back connector should, if anything, provide more cooling to surrounding cards due to increased airflow in the region (unless it is fighting against the system fans and creating dead zones).

    Finally, there are the mechanical considerations of fitting a card, remembering that 300 W graphics cards can be 55 mm thick and occupy three slots. As I haven't seen an R910/R710, I don't know if this is an issue (the T610 has plenty of room).

    So, if you want a powerful graphics card in your server, you will have to decide whether to go the supported route for $$$ or the 'it's my fault if it fails' route and spend much less. If it's the latter, consider the mechanical issues of card fit when choosing a card, as well as the necessary PEG power cabling and cooling requirements, before going down this route. (You might also consider buying a 'PEX16IX PCI Express x16 bus isolation extender' for testing purposes, as this will let you take PCIe current measurements and verify that PCIe/PEG power issues aren't a problem under load.)

    You would also want to verify that the OS doesn't suffer issues under graphics load, considering the card is not running at full power (and may not cope with brownouts, so to speak). In essence, to minimize risk and blame, you need to self-certify the chosen card for mechanical fit, power and cooling. This, in itself, has costs, but if you go down the 'not supported' path, it's better to do the legwork yourself rather than say 'xxx on the support forum said it should work' :)

    Having said that, I have successfully installed an old 32 W XFX GeForce 7600 GS, which I modified to be an x8 card, in my T610. As yet I have not bought a new powerful graphics card, nor hacked into the power feed to the motherboard to steal a couple of PEG connector power feeds.

    Oh, and if you are thinking of SLI/CrossFire, well that's another kettle of fish and requires an even longer walk down the unsupported path.

  • I have been pondering this problem for a few weeks.

    Generally I'm the first in line to cut and splice wiring harnesses to give me the extra plugs I require. However, keeping power consumption in mind, I didn't want to overdraw the power rails and supplies.

    So I could either:

    a) add another power supply (a great thread by the bitcoiners):

    - https://bitcointalk.org/index.php?topic=379677.0;all

    or

    b) remove four drives, changing the remaining two from LFF to SFF or SSD using the Dell adapter bracket,

    then use:

    4 of these -  http://www.scsi4me.com/hp-398291-001-sas-sata-adapter-for-sas-hard-drives.html

    2 of these - http://www.amazon.com/StarTech-com-SATPCIEX8ADP-6-Inch-Express-Adapter/dp/B007Y8FSMQ/ref=pd_sim_e_3

    1 of these - http://www.aliexpress.com/item/Free-Shipping-Cheap-Dual-6-Pin-PCI-Express-Female-to-8-Pin-PCI-Express-Male-Y/543860007.html

    1 of these - http://www.cwc-group.com/8pin6pin.html

    The reduced power load from the drop in HDD power consumption should cover the power required for the card.

    Will update once tested; not sure when I will be doing this.
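    A rough back-of-envelope check of that claim, sketched in Python. All wattages here are my own assumed figures, not measured values; check the actual drive and card datasheets before relying on this.

```python
# Headroom check for the drive-removal plan above. The wattages are
# assumed placeholder figures (typical-looking, not from datasheets).

LFF_DRIVE_W = 15   # assumed draw of a 3.5" (LFF) spinning drive
SSD_W = 3          # assumed draw of a 2.5" SSD replacement
CARD_PEG_W = 75    # assumed extra power the card needs via PEG,
                   # beyond the 25 W slot limit

removed_drives = 4
converted_drives = 2   # remaining LFF drives swapped to SSD

# Power freed = drives removed outright + savings from LFF -> SSD swaps
freed = (removed_drives * LFF_DRIVE_W
         + converted_drives * (LFF_DRIVE_W - SSD_W))

print(f"freed: {freed} W, needed: {CARD_PEG_W} W, "
      f"ok: {freed >= CARD_PEG_W}")
```

    With these assumed figures the freed power (84 W) just covers a 75 W PEG feed, but the margin is thin enough that real datasheet numbers matter.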