M1000e/M600 with VMWare ESX 3.5 - Hypervisors and Solutions - Virtualization - Dell Community


  • Hi All,

    We have now taken delivery of our shiny new M1000e and I was wondering if anyone out there has set these up with ESX 3.5. We have purchased 4 x M6220 switches and we are hoping to get the cables to allow stacking them.

    With VMware you are recommended to use separate physical NICs for the virtual machines, the service console and the VMotion interface. But this configuration doesn't really allow for resilience when you only have 4 NICs.

    With the 1955 chassis we were even more limited, having only 2 NICs. In that case we created one vSwitch, added the VMotion, service console and VM port groups to it, and assigned both physical adapters. The virtual switch did some very basic load balancing across the pNICs and we had a solution. Then on the Dell switch we configured notification of uplink failure and used EtherChannel to team the uplinks to our Cisco core switch.
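
    For reference, that two-NIC layout can be built from the ESX 3.5 service console roughly like this (the vmnic names and the IP address are placeholders for your environment):

    ```shell
    # One vSwitch carrying all traffic, with both pNICs as uplinks
    esxcfg-vswitch -a vSwitch0                  # create the vSwitch
    esxcfg-vswitch -L vmnic0 vSwitch0           # link first physical NIC
    esxcfg-vswitch -L vmnic1 vSwitch0           # link second physical NIC

    # Port groups for each traffic type on the same vSwitch
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vswitch -A "VM Network" vSwitch0

    # Service console interface (example addressing)
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
    ```

    The default load-balancing policy (route by originating virtual port ID) is the "very basic" balancing described above; IP-hash balancing is the mode that would need the EtherChannel configuration on the switch side.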

    The big question is: how do we reliably configure the system for both performance and resilience now that we have more options? I was wondering what configurations people have used.

    Thanks in advance

  • I just looked at this thread. I hope it's not too late to reply.

    First, why do you have only 4 M6220 switches? The M1000e can hold 6 I/O modules. So I am assuming you have Fibre Channel in the other two slots.

    If you do not use HA, here is the recommended configuration:

    NIC 0 - Fabric A1 - Service console
    NIC 1 - Fabric A2 - VMotion
    NIC 2 and NIC 3 will be teamed - Fabrics B1 and B2 for VMs

    If you want redundancy or have VMware HA:

    NIC 0 and NIC 1 will be teamed - Fabric A1 and Fabric A2 - shared by both VMotion and the service console. It is recommended to have VMotion in a separate VLAN. For the service console you can have NIC 0 as the primary NIC, and for VMotion NIC 1 as the primary NIC. That way they will use different NICs unless there is a failure.
    NIC 2 and NIC 3 will be teamed - Fabrics B1 and B2 for VMs
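
    As a rough sketch (assuming the four NICs enumerate as vmnic0-vmnic3; the VLAN ID is just an example), the redundant layout looks like this from the service console:

    ```shell
    # vSwitch0: service console + VMotion on teamed NICs 0 and 1
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vswitch -v 20 -p "VMotion" vSwitch0    # separate VLAN for VMotion (20 is an example ID)

    # vSwitch1: VM traffic on teamed NICs 2 and 3
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VM Network" vSwitch1
    ```

    The per-port-group active/standby NIC order (NIC 0 preferred for the service console, NIC 1 preferred for VMotion) is set in the VI Client under the port group's NIC Teaming tab; esxcfg-vswitch does not expose that setting.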

    Hope this helps.

    Dell Inc
  • Hi guys, balac's reply sounds like good advice to me. We're just getting started w/ our M1000e and M600 blades. We have 4 M6220's and 2 Brocade 4424 SAN switches. We're having a weird problem w/ ESX 3.5. If we install Ubuntu Linux on a blade, we can ping the blade just fine. If we install ESX, we can't ping the blade or ping out from the ESX service console. We have tried various port and VLAN settings on the M6220 and in esx.conf, but no go. Any suggestions?
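
    A few standard ESX 3.5 service-console commands that are worth running to compare the ESX install against the working Linux one:

    ```shell
    esxcfg-nics -l       # physical NICs, link state and speed
    esxcfg-vswitch -l    # vSwitches, port groups and their VLAN IDs
    esxcfg-vswif -l      # service console interface, IP and netmask
    esxcfg-route         # default gateway for the service console
    ```

    If the service console's port group has a VLAN ID set, the matching M6220 port needs to carry that VLAN tagged; if no VLAN ID is set, the port must present the traffic untagged.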