hi all - EqualLogic - Storage - Dell Community


This question has been answered by Joerg

I have two R520 servers connected to two PowerConnect 6224 switches, which connect to a PS4200 storage array,

and another replica array connected to the main array, like in the figure below.

Figure 1. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4000 Storage Array

 

I am using one 10G link from each server to the 10G ports on a switch, plus a 10G link from switch 1 to module 0 and from switch 2 to module 1; the remaining links, from the servers to the switches and from the switches to the storage, are 1G. With this scenario I hit a problem: server 2 cannot see the storage, because it is connected to module 1 (passive).

What I am going to do is connect the 10G links from both switches to module 0 (active) and the 1G links from the switches to module 1 (passive).

Correct me if I'm wrong, and please suggest how best to make this connection, since each switch is limited to two 10G modules: one for the server and the other for the storage controller.

Verified Answer
  • 1. The PC6224 is not the best iSCSI switch on this planet. Yes, we have used them too.

    2. The 10G modules for the PC6224 are made for uplinks to other switches, not for connecting servers/storage. I don't expect they come with the buffers needed for that. Yes, we have used them to connect ESX hosts, but that was only a DR setup, not a real production environment.

    3. An EQL is an active/standby controller model, which means the standby CM (controller module) offers cache/CPU but no network IO as long as it acts as the standby. (We leave the feature called vertical port failover aside at this point.)

    If you use two physical switches, you need an ISL between them, or use the stacking feature* to create one logical switch. A server is never connected directly to an EQL CM, but always to one or more switches. With the ISL in place, any server can reach ETH0 and ETH1 of the active CM.

    * The PC6224 offers two slots on the back; slot 1 can take a stacking module, and the 2nd slot can then take your 10G module.


    Regards,
    Joerg

All Replies

  • Hello, 

    Joerg is completely correct.  This is not a supportable configuration.  

    Going from 10GbE servers to GbE is a 10:1 oversubscription. If you have a 4210, then those 10GbE ports will negotiate down to GbE.

    Performance and stability will suffer, especially under load.   You must get a proper 10GbE switch, or drop the 10GbE servers to GbE instead.    With only two ESXi servers your performance should be fine.

     Regards, 

    Don 

    Social Media and Community Professional
    #IWork4Dell
    Get Support on Twitter - @dellcarespro

  • hi Joerg,

    Thank you for your reply.

    1. Yeah, I know this switch is not the best SAN switch, but that is what was available at the time.

    3. Thank you for this. My question is: if I connect server 2 to switch 2, and switch 2 to ctrl 1 (standby), then server 2 will not connect to the storage, right? That is actually my current setup, but I will move this link to the other port of ctrl 0, so both servers will be connected to ctrl 0 ports 0 and 1. Correct me if I'm wrong.

    About the switches: I am doing a LAG between them, and I have the problem that server 2 cannot connect to the storage. So do you recommend doing an ISL between them, or connecting the two servers to the same switch to communicate with the storage until I get the stacking module? And could you please send me documentation about ISL on the PC6224?

  • ISL means InterSwitchLink: a connection between switches. Whether you use a single cable or multiple cables combined into a LAG doesn't matter (for a supported configuration, the ISL has to be 70% of all combined active EQL ports). The stacking option makes life easier when you start in a greenfield environment (downtime needed); it offers 2x12 Gbit of bandwidth, and once set up it works for ages.
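A minimal numeric sketch of the sizing rule above (the 70% figure is the one quoted in this reply; the port counts and speeds below are illustrative assumptions, not a measured configuration):

```python
# Hypothetical sketch: minimum ISL bandwidth as ~70% of the combined
# bandwidth of the active EQL ports (values are examples only).

def min_isl_gbps(active_ports, port_speed_gbps, ratio=0.7):
    """Smallest ISL bandwidth (Gb/s) that satisfies the quoted rule."""
    return active_ports * port_speed_gbps * ratio

# Two active 1GbE EQL ports -> the ISL must carry at least 1.4 Gb/s,
# so a 2 x 1GbE LAG between the switches is enough.
required = min_isl_gbps(2, 1)
print(required)  # 1.4
```

A 4x1GbE LAG, like the one mentioned below, gives even more headroom.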

    We used PC5448s with a 4x1GbE LAG as ISL when we started with the PS5000 years ago.

    I am not sure you have the cabling right. Within the box there is a poster with a cabling plan; you can download it from eqlsupport.dell.com/.../download_file.aspx as well.

    1. Your servers need a minimum of 2 NICs for iSCSI. Each NIC port goes to a different switch.

    2. Your active CM has two NIC ports, one named ETH0 and the 2nd named ETH1. Each goes to a different switch.

    If everything is set up correctly, every port on a server can "ping" every ETHx on the active CM.

    - Don't connect all server ports to the same switch

    - Don't connect ETH0 and ETH1 of a CM to the same switch

    - Check that your LAG is working

    CM0/ETH0 -> Switch0
    CM0/ETH1 -> Switch1

    CM1/ETH0 -> Switch1
    CM1/ETH1 -> Switch0

    With this setup, the vertical port failover feature can kick in and will offer 100% bandwidth from the EQL side even if a switch goes down. The feature may be confusing when setting up an environment, if the cabling isn't in place when powering on the units, or when plugging in cables while the units are already up and running.
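The cabling rules above can be written down as a small consistency check; the device and switch names here are purely illustrative, not taken from the actual setup:

```python
# Hypothetical sketch: every device (server or controller module) should
# have its ports spread across both switches, per the rules above.

def cabling_errors(links):
    """links maps 'device/port' -> switch name; returns rule violations."""
    per_device = {}
    for port, switch in links.items():
        device = port.split("/")[0]
        per_device.setdefault(device, set()).add(switch)
    return [dev + ": all ports on one switch"
            for dev, switches in per_device.items() if len(switches) < 2]

# The recommended layout passes the check:
good = {
    "server1/nic0": "sw0", "server1/nic1": "sw1",
    "server2/nic0": "sw0", "server2/nic1": "sw1",
    "CM0/ETH0": "sw0", "CM0/ETH1": "sw1",
    "CM1/ETH0": "sw1", "CM1/ETH1": "sw0",
}
print(cabling_errors(good))  # []

# Putting both of server 2's NICs on the same switch is flagged:
bad = dict(good, **{"server2/nic1": "sw0"})
print(cabling_errors(bad))   # ['server2: all ports on one switch']
```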

    Regards,
    Joerg

  • hi Joerg,

    Thank you so much for this. So getting them stacked is the best solution; I will think about it later.

    But on the two switches I have only two 10G modules: one for the server and the other for the storage.

    Can I make the other connections from the servers to the switches with 1G, and from the switches to the other EqualLogic controller ports with 1G, like below?

    CM0/ETH0 -> Switch0  >>> 10G
    CM0/ETH1 -> Switch1  >>> 1G

    CM1/ETH0 -> Switch1  >>> 10G
    CM1/ETH1 -> Switch0  >>> 1G

    I have two spare 1G NICs on the servers, so I can use them.

    The switch config with the LAG is as follows; correct me if I missed anything.

    Switch-01#show run
    !Current Configuration:
    !System Description "PowerConnect 6224, 3.3.13.1, VxWorks 6.5"
    !System Software Version 3.3.13.1
    !Cut-through mode is configured as disabled
    !
    configure
    vlan database
    vlan 101
    vlan routing 101 1
    exit
    hostname "Switch-01"
    stack
    member 1 1
    exit
    ip address none
    ip routing
    interface vlan 101
    name "iSCSI"
    routing
    ip address 192.168.2.1 255.255.255.0
    exit

    username "admin" password fd6e7ea7e78feab099aa72ccb6555922 level 15 encrypted
    !
    interface ethernet 1/g1
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g2
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g3
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g4
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g5
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g6
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g7
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g8
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g9
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g10
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g11
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g12
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g13
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g14
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g15
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g16
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g17
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g18
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g19
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g20
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/g21
    channel-group 1 mode auto
    exit
    !
    interface ethernet 1/g22
    channel-group 1 mode auto
    exit
    !
    interface ethernet 1/g23
    channel-group 1 mode auto
    exit
    !
    interface ethernet 1/g24
    channel-group 1 mode auto
    exit
    !
    interface ethernet 1/xg1
    switchport access vlan 101
    exit
    !
    interface ethernet 1/xg3
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface ethernet 1/xg4
    spanning-tree portfast
    mtu 9216
    switchport access vlan 101
    exit
    !
    interface port-channel 1
    switchport mode trunk
    switchport trunk allowed vlan add 101
    mtu 9216
    exit
    exit

  • Just to make sure we're all on the same page: do not use the 10GbE ports for either the servers or the array controllers. All the server and EQL array ports will need to be connected to the GbE ports only.

    Also there is newer firmware for that switch. 

    Don 

    Social Media and Community Professional
    #IWork4Dell
    Get Support on Twitter - @dellcarespro

  • This is my only solution until I get the 10G switch... can I do it or not?

    And what is the effect of that? The 1G links will be the backup links in case of failure.

    Right now I am connecting the sw0 10G link to module 0 port 0,

    and the sw1 10G link to module 1 port 0.

    Can I put the 10G from sw1 to module 0 port 1, and the 1G from each switch to module 1 ports 0 and 1?

    In case of vertical failover I would have 1G instead of 10G, which is better than nothing?

    Doable or not?

  • Question: is this a VMware vSphere driven solution, or what?

    For an ESXi host you have to configure NIC binding to the software iSCSI module, and that can't handle something like a 1G + 10G port setup. Also, in an ESXi sw-iSCSI setup you don't have an active or standby NIC.

    I suggest connecting all EQL and server (iSCSI) ports to 1G ports on both switches.

    - Use the 10G for the ISL (LAG)

    - Use the 10G for connecting the servers, but only for LAN, vMotion, FT-like traffic, and NOT for storage

    Regards,
    Joerg

  • It's Hyper-V on Windows Server 2012 R2.

    With 1G and 10G on the servers, I can make a NIC team of both ports, with the 10G active and the 1G standby.

    I want to make the 10G I have useful.

    I will keep the 1G-only connections as a last option. :(

    Do I need any other config on the switches, please? Is that an ISL or not?

  • Hello, 

     No. Bonding isn't supported with the MS iSCSI initiator, nor with EQL.

     Until you get a proper 10GbE switch for iSCSI, you can't use those 10GbE ports in any way with EQL.

     However, with such a small environment you won't know the difference. iSCSI is not about MB/sec, especially with a virtualized server. It is about IO rate: how quickly the storage device can process each IO request.
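As a back-of-the-envelope illustration of that point (the IO size and rate below are assumptions for a small virtualized workload, not measurements):

```python
# Small random IOs dominate virtualized workloads; even a healthy IO rate
# consumes only a fraction of a single GbE link (~118 MB/s usable).
io_size_bytes = 4 * 1024   # assumed typical random IO size (4 KiB)
iops = 5000                # assumed IO rate for a small array
throughput_mb_s = io_size_bytes * iops / 1e6
print(round(throughput_mb_s, 1))  # 20.5
```

So at these rates the link speed is nowhere near the bottleneck; per-IO latency is what matters.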

     Once you get online and start monitoring the iSCSI traffic you will see that clearly.  

     Make sure you install SANHQ to properly monitor the EQL storage group. 

     Regards, 

    Don 

    Social Media and Community Professional
    #IWork4Dell
    Get Support on Twitter - @dellcarespro

  • hi all

    Equallogic Toplogy.pdf

    Attached is the latest topology connection for the EqualLogic and the servers.

    The active controller on the main EqualLogic is CM0.

    CM1 (the passive controller) is connected to the switches with 1G connections.

    All connections are fine; the servers can see the shared volume from the storage.

    I just have a problem with server 2, which cannot see the shared volume except after several restarts. It hangs when accessing the shared volume, but after several restarts it performs well.

    The problem is in the cluster between the two servers. If server 2 is the owner of the cluster, everything is fine: connections, shared volume, Hyper-V environment, etc.

    But when I make server 1 the owner of the cluster, server 2 cannot see the shared volume, and moving VMs between the servers becomes unavailable. Does this behavior have any relation to the storage connection, or is it a cluster issue? I see this weird behavior only on server 2.

    If anyone has faced an issue like this, please tell me.

    Many thanks, guys

  • It looks like you have ignored all the help from the guys here, so why do you ask for more?

    - Each server needs, at minimum, a single connection to each switch. This is missing from your topology.

    - CM1/ETH0 has to be connected to Switch2.

    - Be sure to install the HIT on the MS servers and then configure it. It will help you find misconfigurations.

    - Using a different setup for the standby CM isn't a supported config, I think.

    Question: is your 10G module in the PC a Base-T or an SFP+ one?

    Hint: Dell offers a guide on how to configure the switch ports for different types of servers/switches/hypervisors. You can find it under en.community.dell.com/.../3615.rapid-equallogic-configuration-portal

    Regards,
    Joerg

  • Hi Joerg,

    Believe me, no; all your notes were taken into consideration, but I want to deal with what I have and what is available, as much as I can.

    On the connections tab of the main storage, it shows me that server 1 is connected via CM0 eth0 and server 2 is connected via CM0 eth1.

    I know about the server connections to the switches, but I only have two 10G modules on each switch, for server and storage.

    - CM1/eth0 is connected to switch2? I will check this, but it should be a cross connection, right?

    - I will check the HIT also.

    - By "different setup" you mean the combination of 1G and 10G? Could you please confirm this from a document if you can, because I couldn't find any recommendation about it.

    It's Base-T.

    The question is about this behavior of server 2 and the cluster: is it related to the cluster config, to the hardware of server 2, or to the connection? That is what I want to understand better.

    Thank you so much

  • Hello, 

     I think Joerg's frustration is that we have both indicated, repeatedly, that there is NO supported configuration with the HW you currently have in which 10GbE can be used with the EQL storage. NONE.

    No matter how you cable it up, you have the same problem. The 10GbE ports on the 6224s are for interswitch links, not iSCSI traffic. You would overrun the switch, resulting in poor performance and connection instability.

    Also, as I mentioned previously, you need to run all ports to/from the servers and to/from the EQL storage at GbE only. Given that you only have two servers, you will not notice the performance difference.

     Regards, 

    Don 

    Social Media and Community Professional
    #IWork4Dell
    Get Support on Twitter - @dellcarespro