This is a follow-up to Part 1 of the Dell | Terascala HPC Storage Solution series. In the last post, I gave a high-level overview of the appliance and discussed some benchmarking data. In this post I'll go into a bit more detail about the solution, particularly how it's administered via the GUI, and I'll also talk about some of the metadata performance results. You can see a picture of the equipment we used in the lab in Figure 1.
Figure 1. Dell HPC Lab I/O Cluster

For more details on the software and hardware involved, please refer to the first post at: http://www.delltechcenter.com/page/Dell+|+Terascala+HPC+Storage+Solution+Part+I

Now, as we all know, Lustre itself can be a bit complex to administer. With the Dell | Terascala HPC Storage Solution, we have created an appliance that demystifies Lustre by providing a comprehensive GUI used for all Lustre filesystem administration. Here's a sneak peek at the GUI and some of its functionality.

Let's start off by looking at the main console of the GUI in Figure 2. On the left-hand pane (Hardware) is a tree-based representation of the hardware components of the appliance. Here we have two racks. One rack holds the Terascala MDS servers, configured in an Active-Passive manner, as well as the Terascala OSS servers, configured in an Active-Active manner. The Hardware pane also shows another rack, which consists of the Dell MD3000 storage arrays; in this configuration we have three of them, one for the MDT and two for the OSTs. All of the panes include status information, so it's easy to tell if there are any problems with the filesystem. The "Lustre Systems" pane in the top middle provides more detail: here you can see the filesystem "lstrdell", the OST and MDT status, the read and write performance, and the size of the filesystem along with its used and free space.
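For readers who are curious what the GUI is doing for them, the capacity and status information shown in the "Lustre Systems" pane can also be pulled from any standard Lustre client with the `lfs` utility. A minimal sketch (assuming a client that has the Lustre filesystem mounted; the appliance GUI remains the supported administration path):

```shell
# Show per-MDT and per-OST capacity and usage in human-readable
# units, i.e. the used/free space the GUI pane displays graphically
lfs df -h

# Show inode counts instead of bytes, which is the relevant
# capacity measure for metadata (MDT) usage
lfs df -i
```

The per-target breakdown is handy for spotting an OST that is filling up faster than its peers, something the GUI surfaces as a status indicator.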
Figure 2. Dell | Terascala Management Console

To drill down a bit more, if I click on the RAID Enclosure as shown in Figure 3, I can get a detailed status on the MD3000 array.