
NVMe on RHEL7

Summary: NVM Express (NVMe), or the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a specification for accessing solid-state drives.


Article Content


Symptoms

What is NVMe?

NVM Express (NVMe), or the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a specification for accessing solid-state drives (SSDs) attached through the PCI Express (PCIe) bus. NVM stands for non-volatile memory, the storage medium used in SSDs. NVMe defines an optimized register interface, command set, and feature set for PCIe SSDs, with the goal of standardizing PCIe SSDs and improving their performance.

PCIe SSD devices designed to the NVMe specification are known as NVMe-based PCIe SSDs. For more details on NVMe, refer to http://www.nvmexpress.org/. The NVMe devices currently in use are NVMe 1.0c compliant.

The sections below cover RHEL 7 support for NVMe devices.

Cause

No cause information is available.

Resolution

NOTE: Currently, Dell supports NVMe devices with the RHEL 7 out-of-box (vendor-based) driver.


NVMe: Features Supported

The NVMe driver exposes the following features:

  • Basic IO operations
  • Hot Plug
  • Boot Support [UEFI and Legacy]

The following table lists the features supported by the RHEL 7 (out-of-box) driver for NVMe on 12G and 13G machines:

 
Generation   Basic IO   Hot Plug   UEFI Boot   Legacy Boot
13G          Yes        Yes        Yes         No
12G          Yes        Yes        No         No

Table 1: RHEL 7 Driver Support


NVMe Device: Listing the device and its Capabilities

1) List the RHEL 7 OS information:

[root@localhost ~]# uname -a

Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux 

2) Get the device details using the lspci utility.

a) Dell supports Samsung-based NVMe drives. First, get the PCI slot ID using the following command:

[root@localhost ~]# lspci | grep -i Samsung

45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)

47:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)

b) The slot IDs are listed as shown in [Fig 1]. Here "45:00.0" and "47:00.0" are the slots to which the drives are connected.

Figure 1:  lspci listing the slot id
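To script this step, the slot IDs can be extracted from the first column of the lspci output. A minimal sketch (the sample output below is embedded only for illustration; on a live system, pipe `lspci | grep -i samsung` directly into awk):

```shell
# Extract PCI slot IDs of Samsung NVMe controllers.
# Sample lspci output embedded for illustration; on a live system use:
#   lspci | grep -i samsung | awk '{print $1}'
sample='45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)
47:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)'
echo "$sample" | awk '{print $1}'
```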

c) Use the slot ID with the following lspci options to get the device details, capabilities, and the corresponding driver:

[root@localhost ~]# lspci -s 45:00.0 -v

45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03) (prog-if 02)

        Subsystem: Dell Express Flash NVMe XS1715 SSD 800GB

        Physical Slot: 25

        Flags: bus master, fast devsel, latency 0, IRQ 76

        Memory at d47fc000 (64-bit, non-prefetchable) [size=16K]

        Capabilities: [c0] Power Management version 3

        Capabilities: [c8] MSI: Enable- Count=1/32 Maskable+ 64bit+

        Capabilities: [e0] MSI-X: Enable+ Count=129 Masked-

        Capabilities: [70] Express Endpoint, MSI 00

        Capabilities: [40] Vendor Specific Information: Len=24 <?>

        Capabilities: [100] Advanced Error Reporting

        Capabilities: [180] #19

        Capabilities: [150] Vendor Specific Information: ID=0001 Rev=1 Len=02c <?>

        Kernel driver in use: nvme
 

The above output [Fig 2] shows the Samsung NVMe device and its details. It also shows the name of the driver in use for this device, 'nvme'.

Figure 2: lspci listing  NVMe device details

Checking MaxPayLoad

Check the MaxPayload value by executing the following commands. The value should be set to 256 bytes [Fig 3]:

[root@localhost home]#  lspci | grep -i Samsung

45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03) 

[root@localhost home]# lspci -vvv -s 45:00.0

Figure 3: MaxPayload set to 256 bytes
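Rather than scanning the full `lspci -vvv` output by eye, the MaxPayload value can be pulled out with grep. A minimal sketch (the DevCtl sample line below is embedded for illustration; on a live system, pipe the real `lspci -vvv -s <slot>` output instead):

```shell
# Extract the negotiated MaxPayload from verbose lspci output.
# Sample DevCtl line embedded for illustration; on a live system use:
#   lspci -vvv -s 45:00.0 | grep -o 'MaxPayload [0-9]* bytes'
devctl='        DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                MaxPayload 256 bytes, MaxReadReq 512 bytes'
echo "$devctl" | grep -o 'MaxPayload [0-9]* bytes'
```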


NVMe Driver: Listing the driver information

1) Use the modinfo command to list the driver details:

[root@localhost ~]# modinfo nvme

filename:       /lib/modules/3.10.0-123.el7.x86_64/extra/nvme/nvme.ko

version:        0.8-dell1.17

license:        GPL

author:         Samsung Electronics Corporation

srcversion:     AB81DD9D63DD5DADDED9253

alias:          pci:v0000144Dd0000A820sv*sd*bc*sc*i*

depends:       

vermagic:       3.10.0-123.el7.x86_64 SMP mod_unload modversions

parm:           nvme_major:int

parm:           use_threaded_interrupts:int 

The above output [Fig 4] shows the details of the NVMe driver, nvme.ko.

Figure 4: Modinfo listing driver information 
 

NVMe Device Nodes and Naming Conventions

1) cat /proc/partitions displays the NVMe device nodes.

a) The following command lists the NVMe devices as nvme0n1 and nvme1n1:

[root@localhost ~]# cat /proc/partitions

major minor  #blocks  name 

 259        0  781412184 nvme0n1

   8        0 1952448512 sda

   8        1     512000 sda1

   8        2 1951935488 sda2

  11        0    1048575 sr0

 253        0   52428800 dm-0

 253        1   16523264 dm-1

 253        2 1882980352 dm-2

 259        3  390711384 nvme1n1 

Partition the device using any partitioning tool (fdisk, parted).

b) Executing the same command again lists the NVMe devices along with their partitions:

[root@localhost ~]# cat /proc/partitions

major minor  #blocks  name 

 259        0  781412184 nvme0n1

 259        1  390705068 nvme0n1p1

 259        2  390706008 nvme0n1p2

   8        0 1952448512 sda

   8        1     512000 sda1

   8        2 1951935488 sda2

  11        0    1048575 sr0

 253        0   52428800 dm-0

 253        1   16523264 dm-1

 253        2 1882980352 dm-2

 259        3  390711384 nvme1n1

 259        4  195354668 nvme1n1p1

 259        5  195354712 nvme1n1p2 
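The NVMe entries can also be filtered out of /proc/partitions programmatically. A minimal sketch (an abbreviated sample of the file's content is embedded for illustration; on a live system, run awk on /proc/partitions directly):

```shell
# Filter NVMe entries (name and size in blocks) out of /proc/partitions.
# Sample content embedded for illustration; on a live system use:
#   awk '$4 ~ /^nvme/ {print $4, $3}' /proc/partitions
parts='major minor  #blocks  name
 259        0  781412184 nvme0n1
   8        0 1952448512 sda
 259        1  390705068 nvme0n1p1'
echo "$parts" | awk '$4 ~ /^nvme/ {print $4, $3}'
```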
 

Naming conventions:

The below [Fig 5] explains the naming convention of the device nodes.

The number immediately after the string "nvme" is the device (controller) number, and the number after "n" is the namespace number.

Example:

nvme0n1 – device 0, namespace 1

Partition numbers are appended after the device name with the prefix "p".

Example:

nvme0n1p1 – partition 1 of device 0

nvme0n1p2 – partition 2 of device 0

nvme1n1p1 – partition 1 of device 1

nvme1n1p2 – partition 2 of device 1

Figure 5: Device node naming conventions
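The naming convention above can be decomposed mechanically with shell parameter expansion. A minimal sketch (the node name below is just an example; the split is only meaningful for names that contain a "p" partition suffix):

```shell
# Decompose an NVMe node name into device, namespace, and partition numbers.
# Example node name; any nvme<dev>n<ns>p<part> name works the same way.
node=nvme1n1p2
rest=${node#nvme}        # strip "nvme" prefix  -> 1n1p2
dev=${rest%%n*}          # device number        -> 1
rest=${rest#*n}          # drop device part     -> 1p2
ns=${rest%%p*}           # namespace number     -> 1
part=${rest#*p}          # partition number     -> 2
echo "device=$dev namespace=$ns partition=$part"
```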


Formatting with xfs and mounting the device

1) The following command formats partition 1 on NVMe device 1 with xfs:

[root@localhost ~]# mkfs.xfs /dev/nvme1n1p1

meta-data=/dev/nvme1n1p1         isize=256    agcount=4, agsize=12209667 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=0

data     =                       bsize=4096   blocks=48838667, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

log      =internal log           bsize=4096   blocks=23847, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0 

2) Mount the device to a mount point and verify the mount:

[root@localhost ~]# mount /dev/nvme1n1p1 /mnt/

[root@localhost ~]# mount | grep -i nvme

/dev/nvme1n1p1 on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota) 
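A mount done this way does not persist across reboots. If persistence is wanted, an /etc/fstab entry can be added; a hedged sketch (the UUID below is a placeholder, not a real value; obtain the actual UUID with `blkid /dev/nvme1n1p1`):

```
# Hypothetical /etc/fstab entry for the xfs partition created above.
# The UUID is a placeholder; substitute the output of: blkid /dev/nvme1n1p1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt  xfs  defaults  0 0
```

Mounting by UUID is generally preferred over /dev/nvmeXnYpZ paths, since device enumeration order can change between boots.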

Using the ledmon utility to manage backplane LEDs for NVMe devices

ledmon and ledctl are two Linux utilities that can be used to control LED status on drive backplanes. Normally, drive backplane LEDs are controlled by a hardware RAID controller (PERC), but when software RAID on Linux (mdadm) is used for NVMe PCIe SSDs, the ledmon daemon monitors the status of the drive array and updates the status of the drive LEDs.

For further reading, see https://www.dell.com/support/article/SLN310523/.



Install and use the ledmon/ledctl utility

1) Install the OpenIPMI and ledmon/ledctl utilities.

Execute the following commands to install OpenIPMI and ledmon:

[root@localhost ~]# yum install OpenIPMI

[root@localhost ~]# yum install  ledmon-0.79-3.el7.x86_64.rpm 

2) Use the ledmon/ledctl utilities.

If ledctl and ledmon are run concurrently, ledmon will eventually override the ledctl settings.

a) Start and check the status of the ipmi service, as shown in [Fig 6], using the following command:

[root@localhost ~]# systemctl start ipmi

Figure 6: IPMI start and status
 

b) Start ledmon:

[root@localhost ~]# ledmon

c) [Fig 7] shows the LED status after executing ledmon, for the working state of the device.


Figure 7: LED status after ledmon run for working state of the device (green)  

d) The following command blinks the drive LED on device node /dev/nvme0n1:

[root@localhost ~]# ledctl locate=/dev/nvme0n1

The following command blinks both drive LEDs (on device nodes /dev/nvme0n1 and /dev/nvme1n1):

[root@localhost ~]# ledctl locate={ /dev/nvme0n1 /dev/nvme1n1 }

The following command turns off the locate LED:

[root@localhost ~]# ledctl locate_off=/dev/nvme0n1

Article Properties


Affected Product

Servers

Last Published Date

06 Apr 2021

Version

3

Article Type

Solution