Hello EQL Family,
One of the most common deployment scenarios we see in the field is an EqualLogic SAN deployment that includes M1000e blade chassis solutions. Blade chassis like the M1000e add a layer of complexity that, if not configured properly, can adversely affect the scalability and performance of the SAN solution. Each blade chassis introduces the “blade IO module”: these modules are Ethernet switches or “pass-through” modules that give the blade servers in each chassis access to the datacenter network.
The pass-through module is the simpler case: it exposes each blade server’s internal Ethernet ports (physically connected to the blade chassis backplane) as external 1Gb interfaces, without any processing of the signal before it egresses the pass-through module.
The blade IO module, by contrast, is an actual Ethernet L2/L3 switching device, similar in function and capabilities to a typical stand-alone, top-of-rack switch. As a switch, it allows communication from any port (internal or external facing) to any other port on the switch, or, through forwarding out an external port, to other switches in the external data center environment and thus to other networking devices, such as EqualLogic SAN array components, outside the blade chassis.
There are three categories of SAN designs for M1000e blade chassis integration:
Storage Direct Attached (DA)
This SAN design category includes configurations in which the EqualLogic PS Series array member ports are connected directly to the IO module switch ports within the blade chassis. This design can use a stack or a LAG between the IO module switches.
Host Pass-Through (PT)
This SAN design category includes configurations in which the blade server host ports in the M1000e blade chassis are directly connected to external switches using 1GbE pass-through IO modules in the M1000e blade chassis. The storage is also connected to the same external switches.
Two switch tiers (2T)
This SAN design category includes configurations in which the EqualLogic PS Series array member ports are connected to a tier of external switches, while the server blade host ports are connected to a separate tier of IO module blade switches in the blade chassis. Both the switches within each switch tier and the switch tiers themselves are interconnected by either an administrative stack or a LAG.
There are four network designs within this category; the paper describes each in detail.
In conclusion, the design categories are summarized and compared according to total uplink bandwidth, total ISL bandwidth, maximum number of supported arrays, host-to-array port ratios, and other criteria.
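To make those comparison criteria concrete, here is a minimal sketch of how the metrics can be tallied for a candidate design. All port counts and link speeds below are hypothetical examples for illustration, not figures from the paper:

```python
# Hypothetical example: tally the comparison metrics for a SAN design.
# Port counts and link speeds are illustrative, not values from the paper.

def san_metrics(uplinks, uplink_gbps, isl_links, isl_gbps,
                host_ports, array_ports):
    """Return (total uplink Gbps, total ISL Gbps, host:array port ratio)."""
    total_uplink = uplinks * uplink_gbps   # aggregate bandwidth to storage tier
    total_isl = isl_links * isl_gbps       # aggregate inter-switch link bandwidth
    port_ratio = host_ports / array_ports  # host ports per array port
    return total_uplink, total_isl, port_ratio

# Example: a 2T-style design with 8 x 1GbE uplinks, a 4 x 1GbE LAG
# between switches, 32 host ports, and 16 array ports.
uplink_bw, isl_bw, ratio = san_metrics(8, 1, 4, 1, 32, 16)
print(uplink_bw, isl_bw, ratio)  # -> 8 4 2.0
```

A design with a higher ISL total relative to its uplink total is generally less likely to bottleneck on inter-switch traffic, which is one reason these figures are compared side by side.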
There are also recommendations for each design from an administration, performance, fault-tolerance, and scalability perspective.
Click for full details: Best Practices for Integrating M1000e Blade Chassis with EqualLogic Arrays
P.S. There is a companion paper to this one, utilizing 10Gb switches, that will be published on or about 10/10/2012. There is also a follow-on to this paper utilizing Force10 10Gb switches, the MXL and the S4810. The follow-on will include best practices for configuring in a non-DCB environment and best practices for full end-to-end DCB configurations.
Can’t wait for this one to come out!
Twitter: @GuyAtDell
Email: firstname.lastname@example.org
Chatter: @EQL Tech Enablement
Until next time,
Click here to access the Storage Infrastructure and Solutions (SIS) Team publications library, which has all of our Best Practice white papers, reference architectures, and other EQL documents.