In the last few months, we have been evaluating host-based caching software solutions in the Dell HPC engineering lab. Take your familiar NFS environment – an NFS server with direct-attached storage, serving data to an HPC compute cluster. Now add host-based caching software to the NFS server and a couple of PCIe SSDs to act as a cache layer between memory and the backend disks. Don't change anything in the storage configuration, the file system, or how the clients access the NFS server. Does this improve the performance of standard HPC I/O workloads? If so, by how much? That was the goal of our study.

Host-based caching software is not a recent development. What’s unique about the version we tested, Dell Fluid Cache for DAS, is that it provides a write-back feature, not just write-through. This enables write I/O patterns to be accelerated in addition to read I/O patterns.
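
Conceptually, the difference for a single write looks something like the sketch below. This is a minimal, purely illustrative Python snippet, not Fluid Cache internals; the function and variable names are invented for the example.

```python
# Purely illustrative sketch of the two cache-write policies -- not Fluid Cache
# code. Dicts stand in for the SSD cache and the backend disks.

def write_through(block, data, ssd_cache, backend_disks):
    # Write-through: data goes to the cache AND the backend before the write
    # is acknowledged, so write latency is still bound by the slow disks.
    ssd_cache[block] = data
    backend_disks[block] = data          # synchronous write to disk
    return "ack"

def write_back(block, data, ssd_cache, mirror_cache, dirty):
    # Write-back: the write is acknowledged once it lands in the (mirrored)
    # SSD cache; dirty blocks are destaged to the backend disks later.
    ssd_cache[block] = data
    mirror_cache[block] = data           # mirrored copy guards against SSD loss
    dirty.add(block)
    return "ack"                         # fast: SSD latency, not disk latency

def destage(dirty, ssd_cache, backend_disks):
    # Background flush of dirty blocks from the cache to the disks.
    for block in sorted(dirty):
        backend_disks[block] = ssd_cache[block]
    dirty.clear()
```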

The figure below shows the configuration that was used for this study. Four storage arrays were attached to an NFS server. To enable host-based caching, an SSD controller, two Express Flash PCIe SSDs, and the caching software were added to the NFS server.

The graphs below chart the aggregate IOPS of a random I/O workload on the plain NFS configuration and on the NFS+caching configuration. A 64-node HPC compute cluster was used to drive I/O and test the two storage configurations. The HPC cluster and the NFS server were both connected to a shared InfiniBand fabric. As shown below, we measured up to a 6.4x improvement in random write IOPS and up to a 20x improvement in random read IOPS with the caching software when compared to the baseline plain NFS configuration!

The random I/O workload consisted of multiple clients concurrently writing or reading a 4 GB file each. The iozone benchmark tool was used to measure IOPS, with the record size set to 4 KB. The caching software was configured in write-back mode.
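
For reference, a distributed iozone run matching that description would look roughly like the sketch below. The exact flags and the clients.txt machine file are assumptions made for illustration; the white paper documents the actual command lines and tuning used.

```python
# Rough sketch of a clustered iozone random-I/O run matching the workload
# described above. Flags and the clients.txt machine file are illustrative
# assumptions, not the exact commands from the study.
import subprocess

subprocess.run(
    [
        "iozone",
        "-i", "0",             # test 0: write/rewrite (creates the test files)
        "-i", "2",             # test 2: random read / random write
        "-r", "4k",            # 4 KB record size
        "-s", "4g",            # 4 GB file per client
        "-t", "64",            # one stream per compute node
        "-+m", "clients.txt",  # machine file: hostname, working dir, iozone path
        "-O",                  # report results in operations per second
    ],
    check=True,
)
```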

The configuration used for these tests is summarized below.

Caching software: Dell Fluid Cache for DAS 1.0

Cache pool: Two 350 GB Dell PowerEdge Express Flash PCIe SSDs. This provides ~350 GB for the write cache, as writes are mirrored, and ~700 GB for the read cache.

NFS server: Dell PowerEdge R720 with dual Intel Xeon E5-2680 @ 2.70 GHz processors and 128 GB of memory.

NFS storage: Four PowerVault MD1200 storage arrays; SAS-based JBODs direct-attached to the NFS server. 12 x 3 TB NL-SAS disks per array, for a total of 48 disks and 144 TB.

Backend file system: Red Hat Scalable File System (XFS) on the NFS storage.

I/O clients: 64-server compute cluster comprised of Dell PowerEdge M420 servers.

Interconnect for I/O: Mellanox InfiniBand FDR and FDR10. All I/O traffic is carried over the IB links using the IPoIB protocol.

*The baseline NFS configuration is similar to the Dell NSS family of storage solutions.

*The caching software and cache pool of SSDs were used for the NFS+caching tests only.

For details on the configuration, the performance of write-back vs. write-through modes, and the impact on different I/O patterns, please consult the complete study, available as a technical white paper. The white paper also contains a step-by-step appendix on configuring and tuning such a solution.