Blades Forums

Trying to understand internal ports on blade switches


  • Hello, this is my first post to the Dell Tech Center, and I'm looking for some advice.

    I'm trying to understand how internal ports work on blade switches (the M6220 in particular), and I'm having a hard time getting my head round this. I've read the Dell documentation about FlexIO and so on, and also the M6220 documentation, but I'm still struggling! From what I read, each blade switch has 16 internal ports.

    Some questions I have:

    - How are these ports used?
    - Is each switch allocated to a blade? Or are they viewed as a 'virtual pool' of switches?
    - How could all these internal ports be used? Does each blade server have a number of 'virtual NICs' that can be mapped to each internal port?
    - Can I use these internal ports to allow teaming?
    - If I'm running ESX, can I allocate internal ports to each virtual machine?
    - Does this mean that if I have virtual machines running on the same network and within the same blade enclosure that the network traffic between the virtual machines will be much faster?
    - With these switches, is it recommended to connect a SAN directly to the external ports, or through another rack mounted switch which then goes into the external ports on the blade switch?

    Any help would greatly be appreciated!

    cheers,

    Tim
  • Q: How are these ports used?
    A: Each blade server has its onboard LOMs (NICs) mapped to internal ports on fabric A, which correspond to internal ports on I/O modules A1 and A2 to provide I/O redundancy. The mezzanine cards for fabrics B and C work the same way, except that they can be 1GbE, 10GbE, FC, or InfiniBand. There is a mapping scheme here -> http://support.dell.com/support/edocs/systems/pem/en/HOM/HTML/about.htm#wp1215722

    Q: Is each switch allocated to a blade? Or are they viewed as a 'virtual pool' of switches?
    A: No, a switch is never allocated to a single blade. You should always populate the switches in pairs for each fabric to provide redundancy, since the ports on the blade servers for that particular fabric are divided between the two I/O module slots for each fabric.

    Q: How could all these internal ports be used? Does each blade server have a number of 'virtual NICs' that can be mapped to each internal port?
    A: Say, for example, you have a fully loaded M1000e chassis with eight full-height M710 blade servers, each of which has four onboard NICs. Those four ports on each server are divided between I/O modules A1 and A2. Eight servers makes 16 ports for each I/O module, which is exactly the number of internal ports that the pass-through modules or any Ethernet switch have. The same goes for fabrics B and C. You match the mezzanine card with a corresponding pair of I/O modules for the same type of communication (Ethernet, Fibre Channel, or InfiniBand).
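
    As a sanity check on the arithmetic above, here's a minimal sketch of the port math (an illustrative model, not a Dell tool; the function name is made up):

    ```python
    # Model of how internal ports are consumed on one fabric of an M1000e.
    # Each blade's NICs for a fabric are split evenly across that fabric's
    # pair of I/O modules (A1/A2, B1/B2, C1/C2).

    def internal_ports_per_module(num_blades, nics_per_blade, modules_per_fabric=2):
        """Internal ports each I/O module must provide for one fabric."""
        return num_blades * nics_per_blade // modules_per_fabric

    # Eight full-height M710s with four onboard NICs each on fabric A:
    print(internal_ports_per_module(8, 4))  # 16 ports on A1 and 16 on A2
    ```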

    Q: Can I use these internal ports to allow teaming?
    A: Yes, they work just like ordinary NICs.

    Q: If I'm running ESX, can I allocate internal ports to each virtual machine?
    A: Since they are ordinary NICs (just with a very good amount of redundancy), it shouldn't be any problem.

    Regards,

    Andreas Erson

  • Take a step back, maybe this will help.

    Imagine I had 16 1U servers, and I had a 20 port switch. To get connectivity, I would cable all the 1U servers to ports 1-16 in the switch, and leave ports 17-20 for connections to my backbone.

    It's the same thing in the blade chassis; it's just that the physical wires you used to make those connections are now "squished" into the backplane of the chassis. It's still a one-to-one, hard-wired connection from a blade to a port on the M6220. No difference.

    So with a blade that has two onboard NIC ports, one routes to the left switch and one to the right.

    Kong just showed me this doc, and it is EXACTLY what you need and should help out a ton - http://i.dell.com/sites/content/business/solutions/engineering-docs/en/Documents/NetworkingGuide_VI3_Blades.pdf

    Let us know if you have more questions.
    Q: Does this mean that if I have virtual machines running on the same network and within the same blade enclosure that the network traffic between the virtual machines will be much faster?
    A: It depends. The backplane is passive, so it can't route your network traffic between blade servers. But if you have switches as your I/O modules (and not pass-through modules), the network traffic will go to that switch and then back in to the receiving blade server, if the receiving port is on that same I/O module. If it's not, it will go either over a stack cable (if, for example, you're using M6220 switches as your fabric A I/O modules) or up through the uplink from the I/O module, to be routed higher up in your network infrastructure and back down to the I/O module on which the receiving port is mapped.
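
    The possible paths described above can be sketched as a small decision function (a simplified, illustrative model; the module names and flags are made up):

    ```python
    def intra_chassis_path(src_module, dst_module, modules_are_switches, stacked):
        """Rough description of the path traffic takes between two blades
        in the same chassis (simplified model of the cases above)."""
        if not modules_are_switches:
            # Pass-through modules can't switch traffic locally.
            return "out the pass-through to the external network and back"
        if src_module == dst_module:
            return "hairpin inside the same I/O module switch"
        if stacked:
            return "across the stack cable between the I/O modules"
        return "up the uplink, through the external network, and back down"

    # Both blade ports happen to sit on the same M6220:
    print(intra_chassis_path("A1", "A1", True, False))
    ```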

    Q: With these switches, is it recommended to connect a SAN directly to the external ports, or through another rack mounted switch which then goes into the external ports on the blade switch?
    A: There has been talk about the receive buffers on the M6220 being too small for SAN traffic. Not sure if that has been sorted out? Anyway, if you're using the M6220 to connect directly to a SAN (probably an MD3000i or an EQL), you only have four external 1GbE ports on each I/O module. The EQL PS6000, for example, has four 1GbE ports per controller, so it would occupy every external 1GbE port on both M6220 I/O modules. In that setup you would also need a stacking cable between the two M6220s, which leaves you with one module slot on each M6220 that you must use to uplink all your non-SAN traffic from the servers to your network. That solution is cramped and isn't something I would recommend.
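
    To see how cramped that gets, here's a quick port-budget check using the numbers above (illustrative arithmetic only, assuming both PS6000 controllers are cabled):

    ```python
    # External 1GbE port budget when cabling an EqualLogic PS6000 directly
    # to a pair of M6220s (numbers from the post above).

    m6220_fixed_external_1gbe = 4    # fixed external 1GbE ports per M6220
    ps6000_ports_per_controller = 4  # 1GbE ports on each PS6000 controller
    controllers = 2                  # PS6000 controllers to cable
    switches = 2                     # M6220s in slots A1 and A2

    san_ports_needed = ps6000_ports_per_controller * controllers
    san_ports_available = m6220_fixed_external_1gbe * switches
    print(san_ports_needed, san_ports_available)  # 8 8 - every fixed port used
    ```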

    Regards,

    Andreas Erson

  • Great post by Scott. In addition to the PDF he linked, you should also read this one to get to know more about using a dedicated SAN network -> http://www.dell.com/downloads/global/partnerdirect/apj/Integrating_Blades_to_EqualLogic_SAN.pdf

    Regards,

    Andreas Erson

  • Thanks for all your help, off to do some reading now.


    cheers,

    Tim
  • Glad we could help - Andreas has some great points and just let us know what additional questions come from all the reading.

    Welcome to the site, and great first post. I think this will help out others as well !

    Thanks !
  • I know this is an old thread, but I'm hoping...

    I have the same setup: M6220 switches and M710 servers. I have 3 M710s in slots 1-3 on my chassis. Does server 1 map to g1 and g9 (internal ports), server 2 to g2 and g10...?

    thanks
  • You have two M6220s in I/O module slots A1 and A2. You have three M710 blades in slots 1+9, 2+10 and 3+11. The four onboard NICs on each of your M710s map to Fabric A as follows:
    M710 (1+9) maps to A1-g1/g9 and A2-g1/g9
    M710 (2+10) maps to A1-g2/g10 and A2-g2/g10
    M710 (3+11) maps to A1-g3/g11 and A2-g3/g11
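
    The pattern generalizes for full-height blades; here's a quick sketch (illustrative only, following the mapping above):

    ```python
    def fabric_a_ports(lower_slot):
        """Internal ports a full-height blade in slots (n, n+8) maps to
        on each Fabric A I/O module, in M6220 port naming (g1..g16).

        lower_slot: the lower of the blade's two slot numbers (1-8).
        """
        ports = [f"g{lower_slot}", f"g{lower_slot + 8}"]
        # Identical port numbers on both modules, for redundancy.
        return {"A1": ports, "A2": list(ports)}

    print(fabric_a_ports(2))  # {'A1': ['g2', 'g10'], 'A2': ['g2', 'g10']}
    ```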

    Hope that answers your question?

    Regards,

    Andreas Erson

  • This may be off topic, but do you know how these connect to a VMware box? On my first M710 server I have ESX 4 (vSphere) installed, and it lists 8 virtual NICs, vmnic0-vmnic7. Do you know how these relate to the M710's physical NICs?

    thx