I need to resolve an issue for a client where someone has configured an MD3000i, connected via 2 x PowerConnect 5424 switches, to provide storage to 3 x VMware ESX servers. They have been experiencing severe disk performance issues and I have been called in to resolve them. I have a VMware and networking background, but my experience is with EMC CLARiiONs and Cisco networking, and I don't have any experience with Dell or iSCSI.
So far I have identified that whoever set up the system used the single-subnet method: one vSwitch per ESX server, each containing 2 physical adapters, with each adapter going to a different 5424 switch. According to the Dell PowerVault configuration guide for ESX, such a configuration is valid as long as the two 5424 switches are trunked or ISL'd together so that all devices are in the same broadcast domain. The problem with our configuration is that although a cable does connect the 2 switches, the ports were never configured as trunk ports - both switches still have the default factory config.
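For anyone checking a similar setup, the vSwitch-to-uplink mapping can be confirmed from the ESX service console (this assumes classic ESX 3.x with a service console; the commands just list the current config):

esxcfg-vswitch -l     <<< lists each vSwitch, its port groups, and its physical uplinks (vmnics)
esxcfg-nics -l        <<< shows link state, speed, and duplex for each physical NIC

That is how I confirmed each vSwitch has 2 adapters split across the two 5424s.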
So it looks like I just need to link the 2 x 5424 switches and configure the connecting ports correctly. The thing that I cannot work out, and that I cannot find any configuration examples of, is how I should configure the inter-switch link. Do I use LACP or a static LAG? If someone knows, can they please provide the command-line steps or Dell OpenManage Switch Administrator steps for configuring the ports correctly?
Any help would be greatly appreciated.
I am assuming you are referencing figure 1a in the "storage_guide.pdf" doc. In Dell terminology, you seem to be confusing VLAN trunk ports and link aggregation groups (LAGs) in the description of your problem.
I believe your main problem is that Dell switches default to flow control off, and you are overrunning the switches.
- make sure you have the latest code. It is 18.104.22.168. Here is the link: http://support.dell.com/support/downloads/download.aspx?c=us&l=en&s=gen&releaseid=R207115&SystemID=PWC_5424&servicetag=&os=NAA&osl=en&deviceid=17003&devlib=0&typecnt=0&vercnt=4&catid=-1&impid=-1&formatcnt=2&libid=5&fileid=289945
- start with the switches in the default configuration (all ports in the VLAN 1 broadcast domain) and a single port connecting the 2 switches.
- turn on flow control on the switches:
console> enable
console# configure
console(config)# interface range ethernet all
console(config-if)# flowcontrol on
- confirm flow control is enabled on the servers and the MD3000i
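On the ESX hosts, one way to check this is from the service console (assuming the NIC driver supports ethtool's pause query; vmnic0 is just an example interface name - repeat for each iSCSI uplink):

ethtool -a vmnic0     <<< shows the autonegotiate / RX / TX pause (flow control) settings for that NIC

For the MD3000i side, check the port settings in the Modular Disk Storage Manager.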
- check the performance now. It should be fine.
- with a single link between the 2 switches, there is the potential of oversubscribing this link (which, without flow control, causes the switch to drop packets). To help alleviate this oversubscription, you can use multiple ports to create a LAG between the switches. This may be of limited benefit because of the small number of source and destination addresses in this type of network, but here is an example config for a 2-port LAG (disconnect the switches from each other before applying the following config on both, then connect ports 1 and 2):
console> enable
console# configure
console(config)# interface range ethernet g1,g2
console(config-if)# channel-group 1 mode on     <<< I suggest "on" (no LACP) because LACP does not buy you much.
console(config-if)# exit
console(config)# interface port-channel 1
console(config-if)# flowcontrol on
- one final thing. This switch has an iSCSI acceleration feature that may be causing problems. If the above suggestions still show poor performance, try turning off this feature:
console> enable
console# configure
console(config)# no iscsi enable