I’m going to be writing a series of blog posts discussing the new Dell | Terascala HPC Storage Solution. In Part I, I’ll give a high-level overview of the Dell | Terascala HPC Storage Solution and how we have it configured in our lab. I’ll also cover some benchmark data we acquired as part of our validation process. Later posts in the series will discuss benchmarks and performance in more depth.

In the world of HPC (High Performance Computing), storage I/O is the bottleneck for many applications. Dell has partnered with Terascala in an attempt to alleviate this bottleneck. The solution we’ll be talking about today, the Dell | Terascala HPC Storage Solution, is an appliance-oriented, scalable solution with several key features:
- File system status
- File system space usage
- File system throughput
- Failover capabilities
- File system mount / unmount capabilities
- Etc.
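The appliance’s management console surfaces most of these checks centrally, but the same basics can be verified from any client node with standard Linux tools. The sketch below is a hypothetical client-side check, assuming a mount point of /mnt/lustre (not stated in the post); it reports whether the file system is mounted and, if so, its space usage.

```shell
#!/bin/sh
# Hypothetical mount point for the file system exported by the appliance.
MNT=/mnt/lustre

# File system mount status: is the path a mount point right now?
if mount | grep -q " $MNT "; then
    echo "$MNT is mounted"
    # File system space usage for the mounted file system.
    df -h "$MNT"
else
    echo "$MNT is not mounted"
fi
```

On a client where the file system is not yet mounted, the script simply reports that and exits; the failover and throughput features in the list above are handled by the appliance itself rather than by client-side commands.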
In our lab, the Dell | Terascala HPC Storage Solution is configured as follows.

Compute Node Software / Hardware:
- Linux kernel 2.6.18-128.7.1.el5
Object and Metadata Server Software / Hardware:
The diagram below is a visual representation of our cluster setup. Here’s the hardware configuration:
- Each compute node has 24 GB of memory
- Object servers: connected to the DDR IB fabric, 8 GB of memory per server
- Metadata servers: connected to the DDR IB fabric, 8 GB of memory per server