Author: Steven Lemons

What exactly is Linux multipathing? It’s the practice of configuring multiple I/O access paths from Dell™ PowerEdge™ servers to Dell EMC™ SC Series storage block devices.

When thinking about how best to access these SC Series block devices from a Linux host, there are generally two configuration options when you need the failover and load-balancing benefits of multipathing:

  • Dell EMC PowerPath™
  • Native Device Mapper Multipathing (DM-Multipath)

Much like gaming, where each of us prefers a specific character configuration, each Linux admin has a preferred method of multipathing configuration. This blog calls out PowerPath with its support for SC Series storage, and also highlights some native DM-Multipath configuration attributes that might otherwise be taken at “default” value (you seasoned Linux admins will get that joke). While highlighting these additional attributes, which might enrich your DM-Multipath journey (let’s face it, the wrong choice here is a headache generator), we’d also like to present some simple Bash functions for inclusion within your administrative tool belt.

The scenarios discussed in this blog adhere to the following Dell EMC recommended best practices. If you’re already familiar with our online resources, give yourself +10 Wisdom.

Note: All referenced testing or scripting was completed on RHEL 6.9, RHEL 7.3 & SLES 12 SP2.

Dell EMC PowerPath

PowerPath supports SC Series storage. Go ahead, take a moment to let that sink in.

YES! This exciting announcement was made at Dell EMC World 2016, in the live music capital of the world – Austin, TX.

So, let’s see here….

  • PowerPath supports the following Operating Systems (OSes): AIX®, HP-UX, Linux, Oracle Solaris®, Windows®, and vSphere ESXi®.
  • PowerPath supports many Dell EMC Storage Arrays, including Dell EMC Unity, SC Series, and Dell EMC VPLEX
  • PowerPath provides a single management interface across expanding physical and virtual environments, easing the learning curve
  • PowerPath provides a single configuration script for dynamic LUN scanning for addition or removal from host (/etc/opt/emcpower/emcplun_linux – this is a huge time saver vs cli crafting a bunch of for loops with echo statements when needing to scan SAN changes at the Linux host level)
  • PowerPath provides built-in performance monitoring (disabled by default) for all of its managed devices
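For contrast, the manual rescan that the emcplun_linux script replaces is usually a hand-rolled loop over /sys/class/scsi_host. A hedged sketch follows (the function name is my own, and the base directory is parameterized so the loop can be exercised against a scratch directory instead of live sysfs):

```shell
# Ask every SCSI host adapter to rescan its channels, targets, and LUNs.
# "- - -" is the sysfs wildcard for channel, target, and LUN.
# The base directory defaults to /sys/class/scsi_host but can be overridden.
scan_scsi_hosts () {
  local base="${1:-/sys/class/scsi_host}"
  for host in "$base"/host*; do
    [ -e "$host/scan" ] && echo "- - -" > "$host/scan"
  done
}
```

Run as root with no arguments to rescan the live system; this is exactly the kind of for-loop-with-echo-statements kludge the PowerPath script saves you from.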

The below screenshot of the powermt (PowerPath Management Utility) output highlights the attached SC Series storage array (red box) and the policy in use for the configured I/O paths (blue box) of the attached LUNs.

Since we’re talking about multipathing, I’d be remiss if I did not cover PowerPath’s ability to provide load-balancing and failover policies tailored to your individual storage array. A licensed version of PowerPath defaults to the proprietary Adaptive load-balancing and failover policy, which assigns I/O requests based on an algorithmic evaluation of path load and logical device priority. The below screenshot of the powermt man page shows some of these available policies.

If you haven’t yet taken the time to introduce PowerPath into your environment for your multipathing needs, hopefully this blog sparked an interest that will get you in front of this outstanding multipathing management utility. You can try PowerPath free for 45 days by visiting PowerPath Downloads.

Native Device Mapper Multipathing (DM-MPIO)

Do you spend hours kludging that perfect /etc/multipath.conf file with your bash one-liners? For those keeping score, give yourself +10 Experience.

Configuring DM-MPIO natively on Linux, whether it’s RHEL 6.3, RHEL 7.3 or SLES 12 SP2, follows a programmatic approach: first find the WWIDs your SC Series array is presenting to the host, then apply various configuration attributes on top of the multipath device being created (/dev/mapper/SC_VOL_01, for example).

Speaking of finding WWIDs, the below find_sc_wwids function (available for your .bashrc consideration) should help and increase your score by +5 Ability.

find_sc_wwids () {
  for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{print $7}'`;
  do
    /usr/bin/echo -n "$x:"
    /usr/lib/udev/scsi_id -g -u "$x"
  done | /usr/bin/sort -t":" -k2
}
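The output of find_sc_wwids is one “device:WWID” pair per line. If you’d like to go straight from that output to multipaths-section stanzas, a small helper along these lines can do it (a sketch; the function name and the SC_VOL_ alias prefix are my own conventions, not anything the array mandates):

```shell
# Read "device:wwid" pairs on stdin (the format emitted by find_sc_wwids)
# and print one multipath { } stanza per LUN, suitable for the multipaths
# section of /etc/multipath.conf. Aliases are numbered SC_VOL_01, SC_VOL_02, ...
wwids_to_multipath_conf () {
  local n=0 dev wwid
  while IFS=: read -r dev wwid; do
    n=$((n + 1))
    printf 'multipath {\n\twwid %s\n\talias SC_VOL_%02d\n}\n' "$wwid" "$n"
  done
}
```

Usage: find_sc_wwids | wwids_to_multipath_conf — review the stanzas before appending them to /etc/multipath.conf and reloading with multipath -r.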

Note: All .bashrc functions provided in this blog post were written and tested on RHEL 6.3, RHEL 7.3 & SLES 12 SP2.

While there are configurable attributes within the multipaths section for each specified device (alias, uid, gid, mode, etc.) of the configuration file, there are also attributes within the defaults section that may need adjusting to meet the performance requirements of your project. The next couple of sections focus on these attributes and provide some coding examples to help with your unique implementation.

Default value for path_selector

Configuring DM-MPIO on RHEL 6.3, RHEL 7.3 or SLES 12 SP2 changes the default path_selector algorithm from “round-robin 0” to the more latency-sensitive “service-time 0”. Performance gains can be had with “service-time 0” because it monitors the latency of the configured I/O paths and load-balances accordingly. This is a much more efficient path selector than the prior round-robin method, which simply balanced load across all active paths regardless of the latency affecting them.
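If you prefer that selector pinned explicitly rather than inherited from the distribution default, the defaults section of /etc/multipath.conf is the place. A minimal sketch of just this one attribute (not a complete defaults section):

  defaults {
      # "service-time 0" routes I/O down the path with the lowest
      # estimated service time; the older default was "round-robin 0".
      path_selector "service-time 0"
  }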

Multipath device configuration attributes

Now that it’s time to dig into the weeds of the configurable attributes of each multipath device being created, save yourself some cli kludging with the following dm_iosched_values function (available for your .bashrc consideration) and increase your score by +5 Ability.

dm_iosched_values () {
for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`;
do
  printf "%b\n" "Config files for /sys/block/$x/queue/"
  for q in `/usr/bin/find /sys/block/$x/queue/ -type f`;
  do
    printf "%b" "\t$q:"
    /usr/bin/cat "$q"
  done
done
}

The above dm_iosched_values function will give you a quick reference to all configurable attributes, as shown in the below screenshot, which should help with configuring the host for optimal DM-MPIO performance.

max_sectors_kb

Depending on the performance requirements of your project, the “max_sectors_kb” value of the host’s DM-MPIO devices may need to be changed to meet application requirements. The below functions can help with both discovering and changing this value (available for your .bashrc consideration) while increasing your score by +5 Ability.

sc_max_sectors_kb () {
  for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`;
  do
    /usr/bin/echo -n "$x:Existing max_sectors_kb ->"
    /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
  done
}

set_512_MaxSectorsKB () {
for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`;
do
  printf "%b" "$x:\tExisting max_sectors_kb:"
  /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
  /usr/bin/echo 512 > /sys/block/$x/queue/max_sectors_kb
  printf "%b" "\tNew max_sectors_kb:"
  /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
done
}

set_2048_MaxSectorsKB () {
for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`;
do
  /usr/bin/echo -n "$x:Existing max_sectors_kb:"
  /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
  /usr/bin/echo 2048 > /sys/block/$x/queue/max_sectors_kb
  /usr/bin/echo -n "$x:New max_sectors_kb:"
  /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
done
}

When SC Series arrays present LUNs to the host, this value is set automatically based on the SC array’s page pool size. For example, with a 512k page pool the host sets “max_sectors_kb” to 512; likewise, with a 2MB page pool the host assigns a “max_sectors_kb” value of 2048.
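One caveat worth noting: values written to /sys/block/.../queue/max_sectors_kb do not survive a reboot. A udev rule is a common way to persist them. The sketch below is an assumption to adapt, not a tested rule — the rule file name is hypothetical, and the vendor-match syntax should be confirmed against your distribution’s udev documentation:

  # /etc/udev/rules.d/99-sc-max-sectors.rules  (hypothetical file name)
  # Persist max_sectors_kb=512 on SC Series (COMPELNT) SCSI disks.
  ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*", ATTRS{vendor}=="COMPELNT*", ATTR{queue/max_sectors_kb}="512"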

queue_depth

Queue depth is one of those configuration values that must be tested and tuned to achieve optimal performance within your SAN environment: evaluate the environment, test against that evaluation, and then adjust the values to meet your performance requirements. The below functions can help with both discovering and changing the queue_depth value (available for your .bashrc consideration):

sc_queue_depths () {
  for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{gsub(/\/dev\//,"");print $7}'`;
  do
    /usr/bin/echo -n "$x:Existing Queue Depth -> "
    /usr/bin/cat /sys/block/$x/device/queue_depth
  done
}

set_64_queue_depth () {
for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{gsub(/\/dev\//,"");print $7}'`;
do
  /usr/bin/echo -n "$x:Existing Queue Depth:"
  /usr/bin/cat /sys/block/$x/device/queue_depth
  /usr/bin/echo 64 > /sys/block/$x/device/queue_depth
  /usr/bin/echo -n "$x:New Queue Depth:"
  /usr/bin/cat /sys/block/$x/device/queue_depth
done
}

set_32_queue_depth () {
for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{gsub(/\/dev\//,"");print $7}'`;
do
  /usr/bin/echo -n "$x:Existing Queue Depth:"
  /usr/bin/cat /sys/block/$x/device/queue_depth
  /usr/bin/echo 32 > /sys/block/$x/device/queue_depth
  /usr/bin/echo -n "$x:New Queue Depth:"
  /usr/bin/cat /sys/block/$x/device/queue_depth
done
}

Additional information

I know, this post almost qualified for a tl;dr (Too Long; Didn’t Read) opt-out, yet here we are at the end. Please give yourself +10 Stamina points.

Having highlighted the PowerPath multipathing management utility and touched on some unique configuration scenarios for native DM-MPIO, hopefully this post has you thinking about your own multipathing policies and how they could be tuned up or replaced altogether.

Dell EMC PowerPath

SUSE Storage Administration Guide

Red Hat Enterprise Linux 6 – DM Multipath

Red Hat Enterprise Linux 7 – DM Multipath