vSphere 4.1 upgrade - Hypervisors and Solutions - Virtualization - Dell Community

  • Hi there,

    I am upgrading from 4.0 U1 to 4.1, and I am wondering whether the hardware iSCSI initiator is better than or equal to the software iSCSI solution. For background, we have 6 x R610 servers (2 processors, 32 GB RAM, and 4 x Broadcom 5709 1Gb iSCSI NICs) and a PS4000X. During the initial setup on 4.0 I read that I should be using jumbo frames for maximum performance. While setting up the hardware HBAs for the 5709 NICs on 4.1, I found that they won't accept the jumbo frame configuration and don't mount the datastores.

    Can anyone confirm that this is correct, i.e. that the 5709 doesn't support jumbo frames when used as a hardware HBA?
    Secondly, can anyone tell me whether the hardware HBA at an MTU of 1500 will keep up with the software HBA at an MTU of 9000?
  • Hi @aseniuk,

    To answer your 1st question, the 5709 doesn't support jumbo frames or IPv6 as a hardware iSCSI initiator. Please see the excerpt below from VMware's vSphere 4.1 Release Notes. As for your 2nd question, if you use the software iSCSI initiator, you should be able to put end-to-end jumbo frames in place. Use "vmkping -d -s 8972 " from your ESX hosts to validate that you have end-to-end jumbo frames (8972 bytes of payload plus 28 bytes of ICMP/IP headers makes one full 9000-byte frame, and -d disables fragmentation so the test fails if any hop can't pass it).

    Per the vSphere 4.1 release notes at http://www.vmware.com/support/vsphere4/doc/vsp_esx41_vc41_rel_notes.html under the "Known Issues" section:

    Functionality Caveats

    IPv6 Disabled by Default. IPv6 is disabled by default when installing ESX 4.1.

    Hardware iSCSI. Broadcom Hardware iSCSI does not support Jumbo Frames or IPv6. Dependent hardware iSCSI does not support iSCSI access to the same LUN when a host uses dependent and independent hardware iSCSI adapters simultaneously.
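    As a rough sketch, the end-to-end jumbo frame check could look like this from the ESX console (the target IP below is a placeholder for your PS4000 group address):

    ```shell
    # List the vmkernel NICs and confirm the iSCSI ports show MTU 9000
    esxcfg-vmknic -l

    # Send an 8972-byte payload with fragmentation disabled (-d);
    # 8972 bytes + 28 bytes of ICMP/IP headers = one full 9000-byte frame.
    # 10.0.0.10 is a placeholder for the PS4000 group IP.
    vmkping -d -s 8972 10.0.0.10
    ```

    If the ping fails with fragmentation disabled, some hop (vSwitch, physical switch, or array port) is still at a smaller MTU.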

  • Do you have to enable something in the BIOS for the 5709 to be properly detected as an HBA?
  • KongY, just ran into this thread with the obvious next question: given this limitation (here on a bunch of M805's & R805's with PS4000E/PS5000E/PS5000XV's), which setup would be 'best'?
    Should we use the 5709's as HBAs with offload and without jumbo frames, or would the sw-iSCSI initiator with jumbo frames and extra CPU load be the way to go?

    I can imagine the answer may depend on average CPU and / or storage array load, but I do hope you can point me in the right direction.
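    One way to ground that decision in your own numbers is to capture esxtop data in batch mode while a representative storage workload runs, once per initiator setup, and compare host CPU cost (the sample interval and count below are arbitrary):

    ```shell
    # Record 60 samples at 5-second intervals (~5 minutes) to a CSV,
    # once with the SW initiator and once with the HW HBAs, then compare
    # PCPU utilization and %USED/%SYS of the storage-related worlds.
    esxtop -b -d 5 -n 60 > iscsi-cpu-sample.csv
    ```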
  • I have hit this issue, and this is what I have come up with. I have migrated from ESX 4.0 U1 to ESXi 4.1

    3 R710's
    2 PS6000's
    Stack of Juniper EX4200's
    Running Jumbo Frames SW iSCSI HBA

    As soon as I saw that 4.1 would support the 5709's as a HW HBA I really wanted to try it out. Well, it turned out really badly for me.

    I went ahead and changed all my switch settings back to a 1514 MTU, since the 5709's did not support jumbo frames, assuming that the CPU offload would be better in the long run; after all, it is pretty easy for even a 1514 MTU to saturate a 1 Gb connection. But what actually happened is that I noticed a dramatic drop in performance on my SAN, especially with Storage vMotion. It was taking about 2 hours or more to move a 20 GB VM, whereas before the change it was only taking about 20 minutes or less. Now this could have been my setup (maybe there was something I missed), but in the end I went back to the SW iSCSI initiator and jumbo frames. When I did that, the performance went back to normal.

    Hopefully if someone else has experience with this they can update the thread.
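    For anyone else switching back, a minimal sketch of re-enabling jumbo frames for the SW initiator on ESX/ESXi 4.x; the vSwitch name, port group name, and addresses are assumptions for your environment, and in 4.x an existing vmkernel port has to be deleted and recreated to change its MTU:

    ```shell
    # Raise the MTU on the iSCSI vSwitch (vSwitch1 is a placeholder)
    esxcfg-vswitch -m 9000 vSwitch1

    # A vmknic's MTU can't be changed in place on 4.x: remove the
    # iSCSI vmkernel port and recreate it with MTU 9000.
    esxcfg-vmknic -d "iSCSI1"
    esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 -m 9000 "iSCSI1"
    ```

    Remember to set MTU 9000 on the physical switch ports and the PS-series array interfaces as well, then re-test with vmkping.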
  • @kamp@eur.nl,

    I would say that Jumbo Frames with the software iSCSI initiator would be the way to go if you are not fully- or over-committing your CPU resources. Also, from previous HW offload testing, I would say that the benefit is only as good as its software stack. I've seen HW offload free up CPU cycles but at the expense of waiting for memory to free up. The end result was a mixed bag of throughput depending on your R/W ratios and sequential vs random IO patterns.
  • Hi @rickrbyrne,

    Thank you for sharing. Likewise, I hope other community members will share their experience as well.
  • @KongY : Thank you for the reply, we'll just (happily) stay with the sw iSCSI initiator!
  • Ran into this problem myself. I'm getting massacred by the 5709's trying to talk to my iSCSI targets! (source is an R710) I need a beer... :)
  • I have been doing some testing on this scenario....
    Software iSCSI initiator with jumbo frames vs hardware dependent iSCSI initiator without jumbo frames
    See here http://bit.ly/cAmY9w

    My finding is that the gain from performing iSCSI offload with the Broadcom 5709 dependent hardware iSCSI initiator is actually far smaller than the gain from using jumbo frames.

    Even though the software iSCSI initiator still carries some overhead, it is jumbo frames that improve performance.

    While the hardware initiator removes the overhead that the software initiator presents, the gain is negligible given that jumbo frames cannot be used with this Broadcom dependent hardware initiator.

    More detail on the tests I did can be found here: http://www.vmadmin.co.uk/vmware/35-esxserver/252-esxihwswiscsijumbo

    Andy Barnes
  • Anyone having the same questions regarding hardware offload without jumbo frames, see the above post!

    Andy, you've provided all the information anyone could be looking for on this subject.
    Wish I had the equipment to perform such tests myself...

    Thanks a lot!

    Sebastiaan Kamp
    LIA - Erasmus University Rotterdam
  • I am looking for a Dell-customized R610 VMware ESX (not ESXi) image. The one from VMware doesn't have drivers for the QLogic 8152's we have, so they don't show up during the install. Thanks.

  • You can download a Dell customized version of the VMware ESXi ISO from the Dell support site.

    It will most likely contain the needed drivers.