Windows MPIO Problems

Hi,

I have a two-controller Compellent system with two dual-port 10G iSCSI front-end cards in each controller (virtual ports enabled). I have tried a couple of different ways of configuring the front-end ports, and I believe I have followed the MPIO best practices document correctly, but I am unable to get MPIO working properly with my Windows servers.

I have two 10G switches which are currently connected together. The front-end ports are all in one fault domain, as follows:

10.0.10.1/25  Fault Domain Control Port
10.0.10.2/25  Controller 1 Card 1 Port 1
10.0.10.3/25  Controller 1 Card 2 Port 1
10.0.10.4/25  Controller 2 Card 1 Port 1
10.0.10.5/25  Controller 2 Card 2 Port 1
10.0.10.6/25  Controller 1 Card 1 Port 2
10.0.10.7/25  Controller 1 Card 2 Port 2
10.0.10.8/25  Controller 2 Card 1 Port 2
10.0.10.9/25  Controller 2 Card 2 Port 2

My servers (Windows 2008 R2) have dual-port cards with a connection to each switch and IPs in the same subnet as the front-end ports. The MPIO feature is installed, iSCSI MPIO is enabled, and "COMPELNTCompellent Vol" is added to the Device Hardware IDs.
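
For reference, the same hardware ID registration can also be done from an elevated command prompt with the in-box mpclaim tool (a minimal sketch, assuming the stock mpclaim.exe that ships with 2008 R2):

    rem Claim Compellent volumes for MPIO: -r reboots when done, -i installs/claims, -d names the device hardware ID
    mpclaim -r -i -d "COMPELNTCompellent Vol"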

In the Windows iSCSI Initiator control panel I add two entries for the discovery portal, one for each local IP address, both connecting to the fault domain control port.
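
The minimal command-line equivalent is below (a sketch, assuming the default iSCSI port 3260; binding a portal to a specific local adapter is done via the Advanced button in the GUI rather than this short form):

    rem Add the fault domain control port as a discovery portal, then list what it reports
    iscsicli AddTargetPortal 10.0.10.1 3260
    iscsicli ListTargets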

Once the portals are added, 8 targets are discovered; the names match the IQNs shown on the virtual ports in Storage Center.

As per the best practices PDF, I select each target in turn and create two connections, one from each local IP on the server. At this point the server connectivity tab in Storage Center changes status from Partially Connected to Connected.
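
From the command line, a quick unbound login per discovered target looks like the sketch below; pinning each connection to a specific initiator IP needs the much longer iscsicli LoginTarget parameter list, so for that part the GUI's Advanced settings are less error-prone:

    rem <TargetIQN> is a placeholder for one of the names returned by iscsicli ListTargets
    iscsicli QLoginTarget <TargetIQN>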

To check the MPIO connections I select one of the targets and click Devices, select a device, and click MPIO. 8 connections are listed, where I would expect to see 16 (8 front-end ports × 2 server initiator ports).
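
To cross-check what the GUI shows, path counts can also be pulled per MPIO disk from the command line (the disk number 0 below is just an example; real numbers come from the first command):

    rem List MPIO disks, then show the paths and their states for one disk
    mpclaim -s -d
    mpclaim -s -d 0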

Selecting each target in turn and clicking Devices, I can see that only 4 targets have connections, and they are all virtual ports on controller 2; the connections to targets on controller 1 do not have any devices listed.

Performance is terrible: a 0.2 MB/sec benchmark using HD Tune Pro. If I change the MPIO load balance policy to Round Robin With Subset and then set two paths as active (one from each server IP), performance is much better.
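
For what it's worth, the per-disk policy change can also be scripted with mpclaim; in its numbering 2 is Round Robin and 3 is Round Robin With Subset (disk number 0 is again just an example):

    rem Set the load balance policy for MPIO disk 0 to Round Robin With Subset
    mpclaim -l -d 0 3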

I have tried a different setup with two fault domains, two subnets, and isolated switches, and I have the same problem: once MPIO is enabled, not all connections list devices, and performance is terrible unless paths are manually changed to use only two paths.

How can I fix this?

Is it better to set up one fault domain for all iSCSI front-end ports, or to have one for each switch? The documentation I've read mentions both, and I'm not sure which is the best choice for maximum performance and failover.

Andy

All Replies
  • Andy,

    Have you reached out to Copilot yet? Let me know if you have any issues engaging with them, and I will help the process.

    1-800-EZStore

  • Not a Windows admin, and that 0.2 MB/sec is pretty atrocious (that's definitely worth some advice from Copilot), but on the other points: Round Robin is generally the way you want to go. In iSCSI land, I've found the best setup in our environment (1 Gbps iSCSI vs. your 10 Gbps) is two fault domains, with each switch encompassing a fault domain. I'll then do something like take Port 1 off of every controller card and stick it in FD1, and all Port 2s in FD2.

    Keep in mind that active connections could change based on how many volumes you have mapped. If you only have one volume connected, then you will only have connections to the controller on which that volume is mounted. While the Compellent is a quasi-active/active system overall, at the individual volume level everything is active/standby. Also, by using a single fault domain, I would only expect one IP connection from the initiator to each iSCSI target, since everything is in one subnet.

    Does your Windows box have real iSCSI HBAs or are these 10Gb NICs with software iSCSI?

  • I originally set up the switches as two fault domains exactly as you describe, and then changed the setup when I had problems; I will change it back.

    Now that I understand it's quasi-active/active, that makes sense of some of the things I've seen so far. It appears to be possible to spread the load from different initiators across the two controllers, so as long as failover works I've no problem with it not being true active/active.

    I'm not in the office today, but I will do some more testing tomorrow and then get in touch with Copilot if I still have problems.

    Our Windows servers have 10G NICs with software iSCSI; a couple will have HBAs soon.

    Thanks for the info.

    Andy

  • Software iSCSI will exacerbate that part, then, without two separate subnets. Since it's based in the OS, the software iSCSI initiator is subject to the OS's routing table. When forming connections, the lowest (or first) IP in that subnet will be used as the source, so even if you had two NICs with different IPs, only one would get utilised.
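
    A quick way to verify which source address each session actually bound to (assuming the default iSCSI port 3260) is to check the established connections, or dump the session details:

        rem Established iSCSI connections, showing local and remote addresses
        netstat -ano | findstr :3260
        rem Session details, including the initiator portal each session uses
        iscsicli SessionList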

  • In the iSCSI Initiator control panel it's possible to select the local source IP for the connection; I've done this and seen both NICs utilised. But clearly I had the setup correct in the first place, and I will change it back to two subnets/FDs.

    Andy

  • Ah... like I said, not a Windows admin ;)

    Good luck, keep us posted.