Hi all - Apologies in advance for what will likely be a long post. Looking for some advice on re-configuring a live system to correct some errors made in the original setup. First, let me say that the environment is up and running with no known issues. The problem I am trying to fix involves getting Jumbo Frames to work correctly.
Brief description - 2 Dell 6224 stacked switches. 1 Equallogic PS4000 with active/passive controllers. 2 VMware hosts connecting to the storage. The Dell switches and the Equallogic box are not physically connected to the departmental LAN in any way; this part is a closed system. The 2 VMware hosts are connected to our departmental LAN to provide VM service to staff as well as the closed system to access the storage.
Everything connected to these switches lives in the default VLAN. That is, no separate VLAN was created for iSCSI or anything else. I have checked relevant settings countless times regarding jumbo frame settings, Flow Control, Storm Control, spanning tree, port fast, etc. and am confident that it is all correct on the switches. Likewise, I'm confident that the iSCSI vSwitches on the ESXi hosts are set up correctly. Despite all of that Equallogic Group Manager reports that connections are being made with standard frame length.
In trying to correct this I stumbled across a forum post indicating that jumbo frames are not supported on the default VLAN. If that is true, then that likely explains the issue. However, I cannot find any "official" Dell documentation that confirms this.
So, here's my dilemma. I can just leave things as is and not worry about it. As I said things are working just fine. Or I can try to correct this by creating and using a dedicated VLAN for iSCSI traffic. However, if I choose to do this it needs to be done in such a way as to not disrupt service to my end users. Hopefully, the fact that the PS4000 has dual controllers, each with 2 NICs, will allow me to incrementally fix this problem while keeping the system up and running.
If I create an iSCSI VLAN on the switches, I would then physically connect the NICs on the PS4000 and the NICs on the ESXi hosts to these particular ports, correct? Would this need to be a separate/new IP subnet from other traffic on the switch? Or can I maintain the same numbering scheme that I currently have? If separate, is it necessary to create any routing between these 2 subnets/VLANs? What about the PS4000 management port? Should it be part of the iSCSI VLAN? Or should it be on a different VLAN (same as the ESXi management network)?
I can provide more info if necessary. Thanks for any suggestions.
First off, the array doesn’t have any configuration to enable Jumbo, because Jumbo is always enabled on the array. Jumbo is end-to-end, meaning that all devices in the path have to agree on the frame size. So if you do not get a Jumbo connection, then you would need to ensure that it’s enabled and configured correctly on both the switch and the host NIC/HBA connected to the iSCSI network.
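A quick way to check the end-to-end path is to ping the array's iSCSI addresses from an ESXi host with a large packet and the don't-fragment bit set (the array IP below is just a placeholder for one of your iSCSI target addresses):

```
# From the ESXi console/SSH session. 8972 bytes of payload plus
# 28 bytes of IP/ICMP headers = a 9000-byte packet that cannot
# be fragmented, so it only arrives if every hop passes jumbo.
vmkping -d -s 8972 10.0.0.10
```

If that fails while `vmkping -d -s 1472 10.0.0.10` succeeds, something in the path is still limited to a 1500-byte MTU.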
Regarding the Default VLAN, per the EqualLogic Configuration Guide (en.community.dell.com/.../2639.equallogic-configuration-guide.aspx) we do recommend that you use a separate VLAN (not the default) even for dedicated switches. The reason for this is that the default VLAN officially only supports standard frames. You can see this in "show vlan": the switch will only allow the MTU of VLAN 1 to be 1500 bytes. Even so, you can configure the default VLAN for Jumbo and the switch will make a best effort to deliver every packet, including jumbo frames, so many jumbo frames will get through; but if the switch is very busy, jumbo frames will be dropped. To ensure all frames are delivered in Jumbo, a separate non-default VLAN is needed.
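As a rough sketch of what that looks like on the 62xx CLI (the VLAN number and port are placeholders, and you should verify the exact syntax against your switch's CLI reference and the configuration portal below):

```
! Create the iSCSI VLAN (VLAN 100 here is an example)
vlan database
vlan 100
exit

! On each port that carries iSCSI traffic: enable jumbo
! frames and move the port into the new VLAN.
! Repeat for every array and host iSCSI port.
interface ethernet 1/g1
mtu 9216
switchport mode access
switchport access vlan 100
exit
```

Remember to save the running config to startup config once you have verified the change.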
We also have this portal for a suggested configuration for the 6248 (same as the 6224): en.community.dell.com/.../3615.rapid-equallogic-configuration-portal-by-sis.aspx See section #3 Switch Configurations
Once you have created the iSCSI VLAN, it would use a separate IP range (that is to say, different from the default VLAN). However, you can keep the same IP scheme you are currently using and simply move the existing devices to the new VLAN. This would require some downtime to create the iSCSI VLAN and move the devices over to the ports you assigned to the iSCSI VLAN (or you can just assign the existing ports currently being used for iSCSI to the iSCSI VLAN) and assign 2-4 unused ports to the default VLAN.
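On the host side, the MTU has to be raised on both the vSwitch and the VMkernel port. A sketch for older ESX/ESXi releases (vSwitch name, port group name, and IP are placeholders; on newer ESXi versions the equivalent esxcli commands apply):

```
# Raise the vSwitch MTU to 9000
esxcfg-vswitch -m 9000 vSwitch1

# On older ESX/ESXi the VMkernel port cannot be changed in place;
# it must be deleted and recreated with the larger MTU.
esxcfg-vmknic -d "iSCSI1"
esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 -m 9000 "iSCSI1"
```

Do this one iSCSI VMkernel port at a time so the remaining paths keep serving I/O while each one is recreated.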
Then, when the VLAN is configured, all devices (server and array eth ports) are on the new iSCSI VLAN. You would also need to configure your ISL or trunk between the two switches (and any other routing requirements you have) to pass the traffic of the new VLAN, so that ARP broadcasts are forwarded and both switches learn the MAC addresses of all the devices regardless of which switch they are attached to. The switch configuration user guide can give you much more detail about configuring and setting up VLANs and ISLs. A call to the Dell PowerEdge support team can give you guidance on this as well.
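If your two switches are joined by a LAG rather than the stacking modules (a stacked pair behaves as one switch and needs no trunk change), the inter-switch link needs to carry the new VLAN at the larger MTU. A sketch, again with placeholder numbers to verify against your CLI reference:

```
! On the port-channel that forms the ISL between the switches
interface port-channel 1
switchport mode trunk
switchport trunk allowed vlan add 100
mtu 9216
exit
```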
The PS Array management port should be on a separate VLAN. As stated above, you can “carve out” 2-4 ports (or whatever you need) on the default VLAN and assign it a separate IP range; typically you use your production IP address range (or your dedicated management IP range, if you have one configured). This allows you to manage the arrays from a workstation that is not on the iSCSI network.
Typically, any configuration change on a NIC or switch will cause some type of interruption, so any work would require planning to ensure your SLAs are met.
Follow me on Twitter: @joesatdell