By Ashish Kumar Singh. January 2017 (HPC Innovation Lab)
This blog presents an in-depth analysis of the High Performance Conjugate Gradient (HPCG) benchmark on the Intel Xeon Phi processor, codenamed “Knights Landing”. The analysis was performed on the PowerEdge C6320p platform with the new Intel Xeon Phi 7230 processor.
Introduction to HPCG and Intel Xeon Phi 7230 processor
The HPCG benchmark constructs a logically global, physically distributed sparse linear system using a 27-point stencil at each grid point in a 3D domain, such that the equation at point (i, j, k) depends on its value and those of its 26 surrounding neighbors. The global domain computed by the benchmark is (NRx * Nx) x (NRy * Ny) x (NRz * Nz), where Nx, Ny, and Nz are the dimensions of the local subgrid assigned to each MPI process, and the number of MPI ranks is NR = NRx * NRy * NRz. These values can be defined in the hpcg.dat file or passed as command-line arguments.
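The relationship between the local subgrid, the process grid, and the global domain can be sketched in a few lines. This is a minimal illustration in our own notation (the function and variable names mirror the blog's symbols and are not part of HPCG itself):

```python
# Illustrative sketch: how the global HPCG domain follows from the local
# subgrid dimensions (Nx, Ny, Nz) and the MPI process grid (NRx, NRy, NRz).
# These names mirror the blog's notation; they are not HPCG source code.

def global_domain(nx, ny, nz, nrx, nry, nrz):
    """Return the global grid dimensions for local subgrids of
    nx * ny * nz points distributed over an nrx * nry * nrz process grid."""
    return (nrx * nx, nry * ny, nrz * nz)

# e.g. 160^3 local subgrids on a 2 x 2 x 1 process grid (4 MPI ranks):
print(global_domain(160, 160, 160, 2, 2, 1))  # -> (320, 320, 160)
```

The total rank count is simply the product of the three process-grid factors, which is why hpcg.dat only needs the local dimensions; the launcher's rank count determines the rest.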
The HPCG benchmark is based on a conjugate gradient solver, where the pre-conditioner is a three-level hierarchical multi-grid (MG) method with Gauss-Seidel smoothing. The algorithm starts with MG, which invokes the Symmetric Gauss-Seidel (SymGS) and Sparse Matrix-Vector multiplication (SPMV) routines at each level. Because the data is distributed across nodes, both SymGS and SPMV require data from neighboring processes, which is supplied by their predecessor, the Exchange Halos routine. The residual, which should fall below 1e-6, is computed locally by the Dot Product routine (DDOT); an MPI_Allreduce follows the DDOT to complete the global reduction. WAXPBY updates a vector with the sum of two scaled vectors: it computes each element of the output vector by scaling the two input vectors with constants and adding the values at the same index. In summary, HPCG has four computational blocks (SPMV, SymGS, WAXPBY, and DDOT) and two communication blocks (MPI_Allreduce and Exchange Halos).
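As an illustration of the two simplest kernels, here is a toy plain-Python rendering of WAXPBY and the local part of DDOT. This is our own sketch, not code from HPCG, which operates on distributed vectors and completes DDOT with an MPI_Allreduce:

```python
# Toy rendering of HPCG's two simplest computational blocks.
# Plain Python lists are used for clarity; the real benchmark uses
# distributed vectors and reduces the DDOT result across ranks.

def waxpby(alpha, x, beta, y):
    """Scaled vector addition: w[i] = alpha*x[i] + beta*y[i]."""
    return [alpha * xi + beta * yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Local dot product; a global sum (MPI_Allreduce) would follow."""
    return sum(xi * yi for xi, yi in zip(x, y))

w = waxpby(2.0, [1.0, 2.0], 0.5, [4.0, 8.0])   # -> [4.0, 8.0]
r = ddot([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])     # -> 6.0
```

Both kernels are memory-bandwidth bound (a few floating-point operations per element loaded), which is why MCDRAM bandwidth matters so much for HPCG, as the results below show.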
The Intel Xeon Phi processor is the latest generation of the Intel Xeon Phi family. Previous generations were available only as coprocessors, in a PCI card form factor, and required a host Intel Xeon processor. The Intel Xeon Phi 7230 contains 64 cores at a base frequency of 1.3 GHz (1.5 GHz turbo) and 32 MB of L2 cache. It supports up to 384 GB of DDR4-2400 memory and the AVX-512 instruction set. The processor also includes 16 GB of on-package MCDRAM with a sustained memory bandwidth of up to ~480 GB/s, as measured by the STREAM benchmark. The Intel Xeon Phi 7230 delivers up to ~1.8 TFLOPS of double-precision HPL performance.
This blog showcases the performance of the HPCG benchmark on the Intel KNL processor and compares it to the performance of the Intel Broadwell E5-2697 v4 processor. The Intel Xeon Phi cluster comprises one PowerEdge R630 head node and 12 PowerEdge C6320p compute nodes, while the Intel Xeon cluster includes one PowerEdge R720 head node and 12 PowerEdge R630 compute nodes. All compute nodes are connected by 100 Gbps Intel Omni-Path. Each cluster shares the storage of its head node over NFS. Detailed information about the clusters is given in Table 1. All HPCG tests on the Intel Xeon Phi were performed with BIOS settings of “quadrant” cluster mode and “memory” memory mode.
Table 1: Cluster hardware and software details
HPCG Performance analysis with Intel KNL
Choosing the right problem size for HPCG should follow two rules: the problem should be too large to fit in the processor's caches, and it should occupy a significant fraction of main memory, at least 1/4 of the total. For HPCG performance characterization, we chose local domain dimensions of 128^3, 160^3, and 192^3 with an execution time of t=30 seconds. The local domain dimension defines the global domain dimension as (NR*Nx) x (NR*Ny) x (NR*Nz), where Nx=Ny=Nz is the local dimension and NR is the number of MPI processes along each dimension.
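A rough way to sanity-check whether a local problem fits in the KNL's 16 GB MCDRAM is to estimate the working set per grid point. The sketch below is our own back-of-the-envelope model; the ~700 bytes/point figure is an assumption meant to cover the 27-nonzero sparse matrix rows, the vectors, and the coarse multigrid levels, and is not a number from the benchmark documentation:

```python
# Back-of-the-envelope check: does the local HPCG problem fit in the
# 16 GB of MCDRAM on one KNL server? BYTES_PER_POINT is an assumed
# rough figure (matrix + vectors + multigrid levels), not an HPCG spec.

BYTES_PER_POINT = 700            # assumed working-set bytes per grid point
MCDRAM_BYTES = 16 * 1024**3      # 16 GB of on-package MCDRAM

def fits_in_mcdram(n, ranks=4):
    """True if `ranks` local n^3 subgrids fit in MCDRAM on one server."""
    points = ranks * n**3
    return points * BYTES_PER_POINT <= MCDRAM_BYTES

for n in (128, 160, 192):
    print(n, fits_in_mcdram(n))
```

Under this assumed footprint, 128^3 and 160^3 fit while 192^3 does not, which is consistent with the performance ordering shown in Figure 1 below.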
Figure 1: HPCG Performance comparison with multiple local dimension grid size
As shown in Figure 1, the local dimension grid size of 160^3 gives the best performance, 48.83 GFLOPS. A problem size larger than 128^3 allows for more parallelism, and 160^3 still fits inside the MCDRAM, while 192^3 does not. All these tests were carried out with 4 MPI processes and 32 OpenMP threads per MPI process on a single Intel KNL server.
Figure 2: HPCG performance comparison with multiple execution time.
Figure 2 demonstrates HPCG performance with multiple execution times for a grid size of 160^3 on a single Intel KNL server. As the graph shows, HPCG performance does not change with the execution time, so execution time does not appear to be a factor for HPCG performance. This means we need not spend hours or days benchmarking large clusters, which saves both time and power. Note, however, that for official runs the execution time reported in the output file must be >= 1800 seconds: if you decide to submit your results to the TOP500 ranking list, the execution time must be no less than 1800 seconds.
Figure 3: Time consumed by HPCG computational routines.
Figure 3 shows the time consumed by each computational routine from 1 to 12 KNL nodes. The time spent in each routine is reported in the HPCG output file, as shown in Figure 4. As the graph shows, HPCG spends most of its time in the compute-intensive pre-conditioning of the SymGS function and in sparse matrix-vector multiplication (SPMV). The vector update phase (WAXPBY) consumes far less time than SymGS, and the residual calculation (DDOT) takes the least time of the four computational routines. Because the local grid size is the same across all multi-node runs, the time spent in each of the four compute kernels is approximately the same for every run. The output file shown in Figure 4 reports the performance of all four computational routines; there, MG comprises both SymGS and SPMV.
Figure 4: A slice of HPCG output file
Here is the HPCG multi-node performance comparison between the Intel Xeon E5-2697 v4 @ 2.3 GHz (Broadwell) and the Intel Xeon Phi 7230 (KNL) with the Intel Omni-Path interconnect.
Figure 5: HPCG performance comparison between Intel Xeon Broadwell processor and Intel Xeon Phi processor
Figure 5 compares HPCG performance on servers with dual 18-core Intel Broadwell processors against servers with one 64-core Intel Xeon Phi processor. The dots in Figure 5 show the performance acceleration of the KNL servers over the dual-socket Broadwell servers. On a single node, HPCG performs 2.23X better on KNL than on Broadwell; on multiple KNL nodes, HPCG also shows more than a 100% performance increase over the Broadwell nodes. With 12 Intel KNL nodes, HPCG scales out well and delivers up to ~520 GFLOPS.
Overall, HPCG shows ~2X higher performance with the Intel KNL processor on the PowerEdge C6320p than on an Intel Broadwell server, and its performance scales out well as nodes are added. The PowerEdge C6320p platform is therefore a compelling choice for HPC applications like HPCG.
By Garima Kochhar. HPC Innovation Lab. January 2017.
The Intel Xeon Phi bootable processor (architecture codenamed “Knights Landing” – KNL) is ready for prime time. The HPC Innovation Lab has had access to a few engineering test units, and this blog presents the results of our initial benchmarking study. [We also published our results with Cryo-EM workloads on these systems, and that study is available here.]
The KNL processor is from the Intel Xeon Phi product line but is a bootable processor, i.e., the system does not need another processor in it to power on, just the KNL. Unlike the Xeon Phi coprocessors or the NVIDIA K80 and P100 GPU cards that are housed in a system that has a Xeon processor as well, the KNL is the only processor in the server. This necessitates a new server board design and the PowerEdge C6320p is the Dell EMC platform that supports the KNL line of processors. A C6320p server includes support for one KNL processor and six DDR4 memory DIMMs. The network choices include Mellanox InfiniBand EDR, Intel Omni-Path, or choices of add-in 10GbE Ethernet adapters. The platform has the other standard components you’d expect from the PowerEdge line including a 1GbE LOM, iDRAC and systems management capabilities. Further information on C6320p is available here.
The KNL processor models include 16GB of on-package memory called MCDRAM. The MCDRAM can be used in three modes – memory mode, cache mode or hybrid mode. In memory mode, the 16GB of MCDRAM is visible to the OS as addressable memory and must be addressed explicitly by the application. In cache mode, the MCDRAM is used as the last level cache of the processor. And in hybrid mode, a portion of the MCDRAM is available as memory and the other portion is used as cache. The default setting is cache mode, as this is expected to benefit most applications. This setting is configurable in the server BIOS.
The architecture of the KNL processor allows the processor cores + cache and home agent directory + memory to be organized into different clustering modes: all-to-all, hemisphere, quadrant, sub-NUMA clustering-2, and sub-NUMA clustering-4. They are described in this Intel article. The default setting in the Dell EMC BIOS is quadrant mode, and it can be changed there. All tests below are with the quadrant mode.
The configuration of the systems used in this study is described in Table 1.
Table 1 - Test configuration
Servers: 12 * Dell EMC PowerEdge C6320p
Processor: Intel Xeon Phi 7230. 64 cores @ 1.3 GHz, AVX base 1.1 GHz
Memory: 96 GB at 2400 MT/s [16 GB * 6 DIMMs]
Interconnect: Intel Omni-Path and Mellanox EDR
Operating system: Red Hat Enterprise Linux 7.2
Compiler: Intel 2017, 17.0.0.098 Build 20160721
MPI: Intel MPI 5.1.3
Software: Intel XPPSL 1.4.1
The first check was to measure the memory bandwidth on the KNL system. To measure memory bandwidth to the MCDRAM, the system must be in “memory” mode. A snippet of the OS view when the system is in quadrant + memory mode is in Figure 1.
Note that the system presents two NUMA nodes. One NUMA node contains all the cores (64 cores * 4 logical siblings per physical core) and the 96 GB of DDR4 memory. The second NUMA node, node1, contains the 16GB of MCDRAM.
Figure 1 – NUMA layout in quadrant+memory mode
On this system, the dmidecode command shows six DDR4 memory DIMMs, and eight 2GB MCDRAM memory chips that make up the 16GB MCDRAM.
STREAM Triad results to the MCDRAM on the Intel Xeon Phi 7230 measured between 474-487 GB/s across 16 servers. The memory bandwidth to the DDR4 memory is between 83-85 GB/s. This is expected performance for this processor model. This link includes information on running stream on KNL.
When the system has MCDRAM in cache mode, the STREAM binary used for DDR4 performance above reports memory bandwidth of 330-345 GB/s.
XPPSL includes a micprun utility that makes it easy to run this micro-benchmark on the MCDRAM. “micprun -k stream -p <num cores>” is the command to run a quick STREAM test, and it will pick the MCDRAM (NUMA node1) automatically if available.
The KNL processor architecture supports AVX512 instructions. With two vector units per core, this allows the processor to execute 32 DP floating point operations per cycle. For the same core count and processor speed, this doubles the floating point capabilities of KNL when compared to Xeon v4 or v3 processors (Broadwell or Haswell) that can do only 16 FLOPS/cycle.
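For example, the theoretical peak implied by these figures can be computed directly. This is a hedged back-of-the-envelope sketch: the 1.1 GHz AVX base clock for the Xeon Phi 7230 is taken from the test configuration above, while the Broadwell figure uses the non-AVX base clock, so it is optimistic:

```python
# Back-of-the-envelope peak double-precision throughput, using the
# FLOPS/cycle figures above: 32 for KNL (two AVX-512 FMA-capable vector
# units per core) and 16 for Broadwell/Haswell.

def peak_tflops(cores, ghz, flops_per_cycle):
    """Theoretical peak in TFLOPS: cores * frequency * FLOPS per cycle."""
    return cores * ghz * flops_per_cycle / 1000.0

knl = peak_tflops(64, 1.1, 32)       # Xeon Phi 7230 at its 1.1 GHz AVX base
bdw = 2 * peak_tflops(18, 2.3, 16)   # dual E5-2697 v4 (non-AVX base clock)
print(f"KNL peak ~{knl:.2f} TFLOPS, dual-BDW peak ~{bdw:.2f} TFLOPS")
print(f"Measured 1.8 TFLOPS HPL => ~{1.8 / knl:.0%} efficiency on KNL")
```

The ~2.25 TFLOPS peak at the AVX base clock is consistent with the 1.7-1.9 TFLOPS of measured HPL performance reported below.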
HPL performance on KNL is slightly better (up to 5%) with the MCDRAM in memory mode when compared to cache mode and when using the HPL binary packaged with Intel MKL. Therefore the tests below are with the system in quadrant+memory mode.
On our test systems, we measured between 1.7 and 1.9 TFLOP/s of HPL performance per server across 16 test servers. The micprun utility mentioned above is an easy way to run single-server HPL tests: “micprun -k hplinpack -p <problem size>” is the command to run a quick HPL test. However, for cluster-level tests, the Intel MKL HPL binary is best.
HPL cluster level performance over the Intel Omni-Path interconnect is plotted in Figure 2. These tests were run using the HPL binary that is packaged with Intel MKL. The results with InfiniBand EDR are similar.
Figure 2 - HPL performance over Intel Omni-Path
The KNL-based system is a good platform for highly parallel vector applications. The on-package MCDRAM helps balance the enhanced processing capability with additional memory bandwidth to the applications. KNL introduces the AVX512 instruction set which further improves the performance of vector operations. The PowerEdge C6320p provides a complete HPC server with multiple network choices, disk configurations and systems management.
This blog presents initial system benchmark results. Look for upcoming studies with HPCG and applications like NAMD and Quantum Espresso. We have already published our results with Cryo-EM workloads on KNL and that study is available here.
December 2016 – HPC Innovation Lab
In order to build a balanced cluster ecosystem and eliminate bottlenecks, powerful and dense server node configurations are essential to support parallel computing. The challenge is to provide maximum compute power with an efficient I/O subsystem, including memory and networking. Advanced parallel algorithms in research, life sciences, and financial applications are among the emerging workloads, alongside traditional computing, that demand this intense compute power.
Dell PowerEdge C6320p
The introduction of the Dell EMC PowerEdge C6320p platform, one of the densest, highest-core-count platform offerings in HPC solutions, provides a leap in this direction.
The PowerEdge C6320p platform is Dell EMC’s first self-bootable Intel Xeon Phi platform. Previous versions of the Intel Xeon Phi were PCIe adapters that had to be plugged into a host system. On the core side, the platform supports up to 72 processing cores, each with two vector processing units capable of AVX-512 instructions. This doubles the throughput of floating-point operations that use the longer vector instructions, compared with Intel Xeon v4 processors, which support only AVX2. The Intel Xeon Phi in the Dell EMC C6320p also features 16GB of fast on-package MCDRAM stacked on the processor, which benefits applications that are sensitive to memory bandwidth. This is in addition to the six channels of DDR4 memory hosted on the server. As a single-socket server, the C6320p provides a lower-power compute node than the traditional two-socket nodes in HPC.
The following table shows platform differences as we compare the current Dell EMC PowerEdge C6320 and Dell EMC PowerEdge C6320p server offerings in HPC.
| | PowerEdge C6320 | PowerEdge C6320p |
|---|---|---|
| Server form factor | 2U chassis with four sleds | 2U chassis with four sleds |
| Processor | Intel Xeon E5-2600 v4 (dual socket) | Intel Xeon Phi (single socket) |
| Max cores in a sled | Up to 44 physical cores, 88 logical cores (with two Intel Xeon E5-2699 v4: 2.2 GHz, 55 MB, 22 cores, 145 W) | Up to 72 physical cores, 288 logical cores (with the Intel Xeon Phi 7290: 16 GB, 1.5 GHz, 72 cores, 245 W) |
| Theoretical DP FLOPS per sled | | |
| Memory | 16 DDR4 DIMM slots | 6 DDR4 DIMM slots + on-package 16 GB MCDRAM |
| Memory bandwidth | ~135 GB/s (DDR4) | ~475-490 GB/s (MCDRAM, memory mode) |
| LOM networking | Dual-port 1 GbE/10 GbE | Single-port 1 GbE |
| High-speed fabric | Intel Omni-Path fabric (100 Gbps); Mellanox InfiniBand (100 Gbps) | Intel Omni-Path fabric (100 Gbps); on-board Mellanox InfiniBand (100 Gbps) |
| Storage | Up to 24 x 2.5" or 12 x 3.5" HDs | 6 x 2.5" HDs per node + internal 1.8" SSD option for boot |
| Systems management | Integrated Dell EMC Remote Access Controller | Dedicated and shared iDRAC8 |
Table 1: Comparing the C6320 and C6320p offering in HPC
Dell EMC Supported HPC Solution:
Dell EMC offers a complete, tested, verified and validated solution offering on the C6320p servers. This is based on Bright Cluster Manager 7.3 with RHEL 7.2, including specific highly recommended kernel and security updates, and it will also provide support for the upcoming RHEL 7.3 operating system. The solution provides automated deployment, configuration, management and monitoring of the cluster. It also integrates recommended Intel performance tweaks, as well as required software drivers and other development toolkits to support the Intel Xeon Phi programming model.
The solution provides the latest networking support for both InfiniBand and Intel Omni-Path Fabric. It also includes Dell EMC-supported System Management tools that are bundled to provide customers with the ease of cluster management on Dell EMC hardware.
*Note: As a continuation of this blog, follow-on micro-level benchmarking and application studies on the C6320p will be published.
Just before the kick-off of the opening gala for the SC16 international supercomputing conference, HPCwire unveiled the winners of the 2016 HPCwire Editors’ Choice Awards. Each year, this awards program recognizes the best and the brightest developments that have happened in high performance computing over the past 12 months. Selected by a panel of HPCwire editors and thought leaders in HPC, these awards are highly coveted as prestigious recognition of achievements by the HPC community.
Traditionally revealed and presented each year to kick off the Supercomputing Conference (SC16), which showcases high performance computing, networking, storage, and data analysis, the awards are an annual feature of the publication and spotlight outstanding breakthroughs and achievements in HPC.
Tom Tabor, CEO of Tabor Communications, the publisher of HPCwire, announced the list of winners in Salt Lake City, UT.
“From thought leaders to end users, the HPCwire readership reaches and engages every corner of the high performance computing community,” said Tabor. “Receiving their recognition signifies community support across the entire HPC space, as well as the breadth of industries it serves.”
Dell EMC was honored to be presented with two 2016 HPCwire Editors’ Choice Awards:
Best Use of High Performance Data Analytics: The Best Use of High Performance Data Analytics award was presented to UNC-Chapel Hill Institute of Marine Sciences (IMS) and Coastal Resilience Center of Excellence (CRC), Renaissance Computing Institute (RENCI), and Dell EMC. UNC-Chapel Hill IMS and CRC work with the Dell EMC-powered RENCI Hatteras Supercomputer to predict dangerous coastal storm surges, including Hurricane Matthew, a long-lived, powerful and deadly tropical cyclone which became the first Category 5 Atlantic hurricane since 2007.
Top 5 Vendors to Watch: Dell EMC was recognized by the 2016 HPCwire Editors’ Choice Awards panel, along with Fujitsu, HPE, IBM and NVIDIA, as one of the Top 5 Vendors to Watch in high performance computing. As the only true end-to-end solutions provider in the HPC market, Dell EMC is committed to serving customer needs. And with the combination of Dell, EMC and VMware, we are a leader in the technology of today, with the world’s greatest franchises in servers, storage, virtualization, cloud software and PCs. Looking forward, we will occupy a very strong position in the most strategic areas of technology of tomorrow: digital transformation, software defined data center, hybrid cloud, converged infrastructure, mobile and security.
To learn more about HPC at Dell EMC, join the Dell EMC HPC Community at www.Dellhpc.org, or visit us online at www.Dell.com/hpc and www.HPCatDell.com.
Twice each year, the TOP500 list ranks the 500 most powerful general-purpose computer systems known. In the present list, released at the SC16 conference in Salt Lake City, UT, computers in common use for high-end applications are ranked by their performance on the LINPACK Benchmark. Sixteen of these world-class systems are powered by Dell EMC. Collectively, these customers are accomplishing amazing results, continually innovating and breaking new ground to solve the biggest, most important challenges of today and tomorrow while also making major contributions to the advancement of HPC.
Here are just a few examples:
Dell EMC has partnered with Scientific Computing, publisher of HPC Source, and NVIDIA to produce an exclusive high performance computing supplement that takes a look at some of today’s cool new HPC technologies, as well as some of the work being done to extend HPC capabilities and opportunities.
This special publication, “New Technologies in HPC,” highlights topics such as innovative technologies in HPC and the impact they are having on the industry, HPC trends to watch, and advancing science with AI. It also looks at how organizations are extending supercomputing with cloud, machine learning technologies for the modern data center, and getting started with deep learning.
This digital supplement can be viewed on-screen or downloaded as a PDF.
Taking our dive into new HPC technologies a bit deeper — we also brought together technology experts Paul Teich, Principal Analyst at TIRIAS Research, and Will Ramey, Senior Product Manager for GPU Computing at NVIDIA, for a live, interactive discussion with contributing editor Tim Studt: “Accelerate Your Big Data Strategy with Deep Learning.”
Paul and Will share their unique perspectives on where artificial intelligence is leading the next wave of industry transformation, helping companies go from data deluge to data-hungry. They provide insights on how organizations can accelerate their big data strategies with deep learning, the fastest growing field in AI, and discuss how, by using data-driven algorithms powered by GPU accelerators, companies can gain faster insights, see dynamic correlations, and achieve actionable knowledge about their business.
For those who couldn't make the live broadcast, it is available for on-demand viewing.
Dell China has been honored with an “Innovation Award of Artificial Intelligence in Technology & Practice” in recognition of Dell’s collaboration with the Institute of Automation, Chinese Academy of Sciences (CASIA) in establishing the Artificial Intelligence and Advanced Computing Joint-Lab. The advanced computing platform was jointly unveiled by Dell China and CASIA in November 2015, and the AI award was presented by the Technical Committee of High Performance Computing (TCHPC), China Computer Federation (CCF), at the China HPC 2016 conference in Xi’an City, Shaanxi Province, China, on October 27, 2016. About a half dozen additional awards were presented at HPC China, an annual national conference on high performance computing organized by TCHPC. However, Dell China was the only vendor to receive an award in the emerging field of artificial intelligence in HPC.
The Artificial Intelligence and Advanced Computing Joint-Lab’s focus is on research and applications of new computing architectures in brain information processing and artificial intelligence, including cognitive function simulation, deep learning, brain computer simulation, and related new computing systems. The lab also supports innovation and development of brain science and intellect technology research, promoting Chinese innovation and breakthroughs at the forefront of science, and working to produce and industrialize these core technologies in accordance with market and industry development needs.
CASIA, a leading AI research organization in China, has huge requirements for computing and storage, and the new advanced computing platform — designed and set up by engineers and professors from Dell and CASIA — is just the tip of the iceberg with respect to CASIA’s research requirements. It features leading Dell HPC systems components designed by the Dell USA team, including servers, storage, networking and software, as well as leading global HPC partner products, including Intel CPU, NVIDIA GPU, Mellanox IB Network and Bright Computing software. The Dell China Services team implemented installation and deployment of the system, which was completed in February 2016.
The November 3, 2015, unveiling ceremony for the Artificial Intelligence and Advanced Computing Joint-Lab was held in Beijing. Marius Haas, Chief Commercial Officer and President, Enterprise Solutions of Dell; Dr. Chenhong Huang, President of Dell Greater China; and Xu Bo, Director of CASIA attended the ceremony and addressed the audience.
“As a provider of end-to-end solutions and services, Dell has been focusing on and promoting the development of frontier science and technologies, and applying the latest technologies to its solutions and services to help customers achieve business transformation and meet their ever-changing demands,” Haas said at the unveiling. “We’re glad to cooperate with CASIA in artificial intelligence, which once again shows Dell’s commitment to China’s market and will drive innovation in China’s scientific research.”
“Dell is well-positioned to provide innovative end-to-end solutions. Under the new 4.0 strategy of ‘In China, For China’, we will strengthen the cooperation with Chinese research institutes and advance the development of frontier technologies,” Huang explained. “Dell’s cooperation with CASIA represents a combination of computing and scientific research resources, which demonstrates a major trend in artificial intelligence and industrial development.”
China is a role model for emerging market development and practice sharing for other emerging countries. Partnering with CASIA and other strategic partners is Dell’s way of embracing the “Internet+” national strategy, promoting Chinese innovation and breakthroughs at the forefront of science.
“China’s strategy in innovation-driven development raises the bar for scientific research institutes. The fast development of information technologies in recent years also brings unprecedented challenges to CASIA,” added Bo. “CASIA always has intelligence technologies in mind as their main focus of strategic development. The cooperation with Dell China on the lab will further the computing advantages of the Institute of Automation, strengthen the integration between scientific research and industries, and advance artificial intelligence innovation.”
Dell China is looking forward to continued cooperation with CASIA in driving artificial intelligence across many more fields, such as meteorology, biology and medical research, transportation, and manufacturing.
By Garima Kochhar and Kihoon Yoon. Dell EMC HPC Innovation Lab. October 2016
This blog presents performance results for the 2D alignment and 2D classification phases of the Cryo-electron microscopy (Cryo-EM) data processing workflow using the new Intel Knights Landing architecture, and compares these results to the performance of the Intel Xeon E5-2600 v4 family. A quick description of Cryo-EM and the different phases in the process of reconstructing 3D molecular structures with electron microscopy is provided below, followed by the specific tests conducted in this study and the performance results.
Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in systems biology research to understand the cell signaling network, part of a complex communication system that controls fundamental cell activities and actions to maintain normal cell homeostasis. Errors in the cellular signaling process can lead to diseases such as cancer, autoimmune disorders, and diabetes. Studying the functioning of the proteins responsible for an illness enables a biologist to develop specific drugs that can interact with the protein effectively, thus improving the efficacy of treatment.
The workflow from the time a molecular sample is created to the creation of a 3D model of its molecular structure involves multiple steps. These steps are briefly (and simplistically!) described below.
As is now clear, the Cryo-EM processing workflow must comprehend a lot of data, requires rich compute algorithms and considerable compute power for the 2D and 3D phases, and must move data efficiently across the multiple phases in the workflow. Our goal is to design a complete HPC system that can support the Cryo-EM workflow from start to finish and is optimized for performance, energy efficiency and data efficiency.
Performance Tests and Configuration
Focusing for now on the 2D phases of the workflow, this blog presents results for steps #7 and #8 listed above, the 2D alignment and 2D classification phases. Two software packages in this domain, ROME and RELION, were benchmarked on the Knights Landing (KNL, code name for the Intel Xeon Phi 7200 family) and Broadwell (BDW, code name for the Intel Xeon E5-2600 v4 family) processors.
The tests were run on systems with the following configuration.
Broadwell cluster: 12 * Dell PowerEdge C6320
- Processor: Intel Xeon E5-2697 v4. 18 cores per socket, 2.3 GHz
- Memory: 128 GB at 2400 MT/s
- Interconnect: Intel Omni-Path fabric

KNL cluster: 12 * Dell PowerEdge C6320p
- Processor: Intel Xeon Phi 7230. 64 cores, 1.3 GHz
- Memory: 96 GB at 2400 MT/s
Set1. Inflammasome data: 16306 images of NLRC4/NAIP2 inflammasome with a size of 250 x 250 pixels
Set4. RP-a: 57001 images of proteasome regulatory particles (RP) with a size of 160 x 160 pixels
Set2. RP-b: 35407 images of proteasome regulatory particles (RP) with a size of 160 x 160 pixels
ROME performs the 2D alignment (step #7 above) and the 2D classification (step #8 above) in two separate phases called the MAP phase and the SML phase respectively. For our tests we used “-k” for MAP equal to 50 (i.e. 50 initial classes) and “-k” for SML equal to 1000 (i.e. 1000 final 2D classes).
The first set of graphs below, Figure 1 and Figure 2, shows the performance of the SML phase on KNL. The compute portion of the SML phase scales linearly as more KNL systems are added to the test bed, from 1 to 12 servers, as shown in Figure 1. The total time to run, shown in Figure 2, scales slightly below linear; it includes an I/O component as well as the compute component. The test bed used in this study did not have a parallel file system and used just the local disks on the KNL servers. Future work for this project includes evaluating the impact of adding a Lustre parallel file system to this test bed and its effect on the total time for SML.
Figure 1 - ROME SML scaling on KNL, compute time
Figure 2 - ROME SML scaling on KNL, total time
The next set of graphs compares ROME SML performance on KNL and Broadwell. Figure 3, Figure 4, and Figure 5 plot the compute time for SML on 1 to 12 servers. The black circles on the graphs show the improvement in KNL runtime relative to BDW. For all three datasets benchmarked, KNL is about 3x faster than BDW. Note that we are comparing one single-socket KNL server to a dual-socket Broadwell server, so this is a server-to-server comparison (not socket-to-socket). The 3x advantage holds across different numbers of servers, showing that ROME SML scales well over Omni-Path on both KNL and BDW, with the absolute compute time on KNL 3x faster irrespective of the number of servers under test.
Considering total time to run on KNL versus BDW, we measured KNL to be 2.4x to 3.4x faster than BDW at all node counts. Specifically, DATA6 is ~2.4x faster on KNL, DATA8 is 3x faster, and RING11_ALL is 3.4x faster when considering total time to run. As mentioned before, the total time includes an I/O component, and one of the next steps in this study is to evaluate the performance improvement from adding a parallel file system to the test bed.
Figure 3 - DATA8 ROME SML on KNL and BDW
Figure 4 - DATA6 ROME SML on KNL and BDW.
Figure 5 - RING11_ALL ROME SML on KNL and BDW
RELION accomplishes the 2D alignment and classification steps mentioned above in a single phase. Figure 6 shows our preliminary RELION results on KNL, across 12 servers and on two of the test datasets. The “--K” parameter for RELION was set to 300, i.e., 300 classes for 2D classification. Several things remain to be tried here: the impact of a parallel file system on RELION (as discussed for ROME earlier) and dataset sensitivity to the parallel file system. Additionally, we plan to benchmark RELION on Broadwell, across different node counts and with different input parameters.
Figure 6 - RELION 2D alignment and classification on KNL
The next steps in this project include adding a parallel file system to measure the impact on the workflow, tuning the test parameters for ROME MAP, SML and RELION, and testing on more datasets. We also plan to measure the power consumption of the cluster when running Cryo-EM workloads to analyze performance per watt and performance per dollar metrics for KNL.
Authors: Rengan Xu and Nishanth Dandapanthu. Dell EMC HPC Innovation Lab. October 2016
Deep Learning (DL), an area of Machine Learning, has achieved significant progress in recent years. Its application areas include pattern recognition, image classification, Natural Language Processing (NLP), autonomous driving and so on. Deep learning attempts to learn multiple levels of features from large input data sets with multi-layer neural networks and to make predictive decisions for new data. This implies two phases in deep learning: first, the neural network is trained with a large number of input samples; second, the trained neural network is used to test/inference/predict on new data. Due to the large number of parameters (the weight matrices connecting neurons in different layers, the bias in each layer, etc.) and the training set size, the training phase requires tremendous amounts of computation power.
To address this problem, accelerators such as GPUs, FPGAs and DSPs are used. This blog focuses on the GPU. A GPU is a massively parallel architecture that employs thousands of small but efficient cores to accelerate computationally intensive tasks. In particular, the NVIDIA® Tesla® P100™ GPU uses the new Pascal™ architecture to deliver very high performance for HPC and hyperscale workloads. In PCIe-based servers, P100 delivers around 4.7 and 9.3 TeraFLOPS of double and single precision performance, respectively. In NVLink™-optimized servers, P100 delivers around 5.3 and 10.6 TeraFLOPS of double and single precision performance, respectively. This blog focuses on the P100 for PCIe-based servers. P100 is also equipped with High Bandwidth Memory 2 (HBM2), which offers higher bandwidth than traditional GDDR5 memory. The high compute capability and high memory bandwidth together make the GPU an ideal candidate for accelerating deep learning applications.
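The quoted PCIe peak numbers can be sanity-checked from the chip's published specifications. The core count, boost clock and DP-to-SP ratio below are assumptions taken from NVIDIA's public P100-PCIe specs, not from this blog:

```python
# Back-of-the-envelope check of the P100-PCIe peak numbers quoted above.
# Assumed specs: 3584 CUDA cores, ~1303 MHz boost clock, 2 FLOPs/cycle
# per core via FMA, and a DP rate that is half the SP rate on GP100.
cuda_cores = 3584
boost_clock_hz = 1.303e9
flops_per_cycle = 2  # a fused multiply-add counts as 2 FLOPs

sp_tflops = cuda_cores * boost_clock_hz * flops_per_cycle / 1e12
dp_tflops = sp_tflops / 2  # GP100 DP units run at half the SP rate

print(f"SP peak: {sp_tflops:.1f} TFLOPS")  # ~9.3, matching the text
print(f"DP peak: {dp_tflops:.1f} TFLOPS")  # ~4.7, matching the text
```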
In this blog, we present the performance and scalability of P100 GPUs with different deep learning frameworks on a cluster. Three deep learning frameworks were chosen: NVIDIA’s fork of Caffe (NV-Caffe), MXNet and TensorFlow. Caffe is a well-known and widely used deep learning framework developed by the Berkeley Vision and Learning Center (BVLC) and community contributors. It focuses on image classification and supports multiple GPUs within a node, but not across nodes. MXNet, jointly developed by collaborators from multiple universities and companies, is a lightweight, portable deep learning framework designed for both efficiency and flexibility. This framework scales to multiple GPUs within a node and across nodes. TensorFlow, developed by Google’s Brain team, is a library for numerical computation using data flow graphs. TensorFlow also supports multiple GPUs and can scale to multiple nodes.
All three of the deep learning frameworks we chose are able to perform the image classification task. With this in mind, we chose the well-known ImageNet Large Scale Visual Recognition Competition (ILSVRC) 2012 dataset. This training dataset contains 1,281,167 training images and 50,000 validation images. All images are grouped into 1,000 categories or classes. Another reason we chose the ILSVRC 2012 dataset is that its workload is large enough to support long training runs, and it is a benchmark dataset used by many deep learning researchers.
This blog quantifies the performance of deep learning frameworks using NVIDIA’s P100-PCIe GPU and Dell’s PowerEdge C4130 server architecture. Figure 1 shows the testing cluster. The cluster includes one head node which is Dell’s PowerEdge R630 and four compute nodes which are Dell’s PowerEdge C4130. All nodes are connected by an InfiniBand network and they share disk storage through NFS. Each compute node has 2 CPUs and 4 P100-PCIe GPUs. All of the four compute nodes have the same configurations. Table 1 shows the detailed information about the hardware configuration and software used in every compute node.
Figure 1: Testing Cluster for Deep Learning
Table 1: Hardware Configuration and Software Details
Server: PowerEdge C4130 (configuration G)
Processor: 2 x Intel Xeon CPU E5-2690 v4 @2.6GHz (Broadwell)
Memory: 256GB DDR4 @ 2400MHz
GPU: 4 x P100-PCIe with 16GB GPU memory
Interconnect: Mellanox ConnectX-4 VPI (EDR 100Gb/s InfiniBand)
Software and Firmware
Operating System: RHEL 7.2 x86_64
Linux Kernel Version
CUDA version and driver: CUDA 8.0 (361.77)
Deep Learning Frameworks: NV-Caffe, MXNet, TensorFlow
We measured the metrics of both images/sec and training time.
The images/sec metric measures training speed, while the training time is the wall-clock time covering training, I/O operations and other overhead. The images/sec number was obtained from “samples/sec” in the output log files of MXNet and TensorFlow. NV-Caffe instead reports “M s/N iter”, meaning M seconds were taken to process N iterations (N batches), so images/sec was calculated as “batch_size*N/M”. The batch size is the number of training samples in one forward/backward pass through all layers of a neural network. The images/sec number was averaged across all iterations to account for run-to-run variation.
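The batch_size*N/M conversion can be sketched as a small log parser. The exact log-line layout below is an illustrative assumption; adjust the regular expression to the lines your NV-Caffe build actually prints:

```python
import re

def caffe_images_per_sec(log_line, batch_size):
    """Convert NV-Caffe's 'M s/N iter' progress output into images/sec.

    The log format matched here is an illustrative assumption; adapt the
    regex to the exact output of your NV-Caffe build.
    """
    m = re.search(r"([\d.]+)\s*s\s*/\s*(\d+)\s*iter", log_line)
    seconds, iters = float(m.group(1)), int(m.group(2))
    # N iterations process N batches, i.e. batch_size * N images
    return batch_size * iters / seconds

# Hypothetical example: 20 iterations in 8.0 s at batch size 128
rate = caffe_images_per_sec(
    "Iteration 500 (2.5 iter/s, 8.0s/20 iters), loss = 1.8", batch_size=128
)
print(round(rate))  # 320 images/sec
```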
The training time was obtained from “Time cost” in MXNet output logs. For NV-Caffe and TensorFlow, their output log files contained the wall-clock timestamps during the whole training. So the time difference from the start to the end of the training was calculated as the training time.
Since NV-Caffe does not support distributed training, it was not executed on multiple nodes. The MXNet framework was able to run on multiple nodes; however, by default it could only use the Ethernet interface (10 Gb/s) on the compute nodes, and therefore the performance was not as high as expected. To solve this issue, we manually changed its source code so that the high-speed InfiniBand interface (EDR 100 Gb/s) was used. Training with TensorFlow on multiple nodes ran, but with poor performance; the reason is still under investigation.
Table 2 shows the input parameters used in different deep learning frameworks. In all deep learning frameworks, the neural network training requires many epochs or iterations. Whether the term epoch or iteration is used is determined by each framework. An epoch is a complete pass through all samples in a given dataset, while one iteration processes only one batch of samples. Therefore, the relationship between iterations and epochs is: epochs = (iterations*batch_size)/training_samples. Every framework only needs either epochs or iterations so that another parameter can be easily determined by this formula. Since our goal was to measure the performance and scalability of Dell’s server and not to train an end-to-end image classification model, the training was a subset of the full model training which was large enough to reflect performance. Therefore we chose a smaller number of epochs or iterations so that they could finish in a reasonable time. Although only partial training was performed, the training speed (images/sec) remained relatively constant over this period.
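The epoch/iteration relationship above can be written as a pair of small helpers; the training-set size is the ILSVRC 2012 figure given earlier, and the example batch size is just illustrative:

```python
def iterations_to_epochs(iterations, batch_size, training_samples):
    # epochs = (iterations * batch_size) / training_samples
    return iterations * batch_size / training_samples

def epochs_to_iterations(epochs, batch_size, training_samples):
    # inverse of the formula above, rounded to a whole iteration count
    return int(round(epochs * training_samples / batch_size))

ILSVRC_TRAIN = 1281167  # ILSVRC 2012 training images, as described above

# e.g. 4,000 iterations at a batch size of 128 is roughly 0.4 epoch
print(iterations_to_epochs(4000, 128, ILSVRC_TRAIN))
print(epochs_to_iterations(1, 128, ILSVRC_TRAIN))
```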
The batch size is one of the hyperparameters the user needs to tune when training a neural network model with mini-batch Stochastic Gradient Descent (SGD). The batch sizes in the table are commonly used sizes; whether they are optimal for model accuracy is left for future work. For all neural networks in all frameworks, we increased the batch size proportionally with the number of GPUs, and adjusted the number of iterations so that the total number of samples processed was fixed no matter how many GPUs were used. Since the epoch count is independent of batch size, its value was not changed when a different number of GPUs was used. For MXNet GoogleNet, a runtime error occurred if different batch sizes were used for different numbers of GPUs, so we used a constant batch size there. The learning rate is another hyperparameter that needs to be tuned; in this experiment, the default value in each framework was used.
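The scaling scheme described above can be sketched as follows; the base batch size and iteration count are illustrative values, not the ones in Table 2:

```python
def scale_run(base_batch, base_iters, num_gpus):
    """Scale the batch size with GPU count while holding the total number
    of processed samples fixed, as described in the text."""
    batch = base_batch * num_gpus
    iters = base_iters // num_gpus  # batch * iters stays constant
    return batch, iters

# Illustrative base configuration for a single GPU
base_batch, base_iters = 64, 4000

for gpus in (1, 2, 4):
    batch, iters = scale_run(base_batch, base_iters, gpus)
    # total samples processed is identical for every GPU count
    assert batch * iters == base_batch * base_iters
    print(f"{gpus} GPU(s): batch={batch}, iterations={iters}")
```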
Table 2: Input parameters used in different deep learning frameworks
Figure 2 shows the training speed (images/sec) and training time (wall-clock time) of the GoogleNet neural network in NV-Caffe using P100 GPUs. The training speed increased as the number of P100 GPUs increased and, as a result, the training time decreased. The CPU result in Figure 2 was obtained with Intel-Caffe on two Intel Xeon CPU E5-2690 v4 (14-core Broadwell) processors within one node. We chose Intel-Caffe for the pure CPU test because it has better CPU optimizations than NV-Caffe. From Figure 2, we can see that 1 P100 GPU is ~5.3x and 4 P100 GPUs are ~19.7x faster than the Broadwell-based CPU server. Since NV-Caffe does not yet support distributed training, we only ran it on up to 4 P100 GPUs in one node.
Figure 2: The training speed and time of GoogleNet in NV-Caffe using P100 GPUs
Figure 3 and Figure 4 show the training speed and time of GoogleNet and Inception-BN neural networks in MXNet using P100 GPUs. In both figures, 8 P100 used 2 nodes, 12 P100 used 3 nodes and 16 P100 used 4 nodes. As we can see from both figures, MXNet had great scalability in training speed and training time when more P100 GPUs were used. As mentioned in Section Testing Methodology, if the Ethernet interfaces in all nodes were used, it would impact the training speed and training time significantly since the I/O operation was not fast enough to feed the GPU computations. Based on our observation, the training speed when using Ethernet was only half the speed compared to when using the InfiniBand interfaces. In both MXNet and TensorFlow, the CPU implementation was extremely slow and we believe they were not CPU optimized, therefore we did not compare their P100 performance with CPU performance.
Figure 3: The training speed and time of GoogleNet in MXNet using P100 GPUs
Figure 4: The training speed and time of Inception-BN in MXNet using P100 GPUs
Figure 5 shows the training speed and training time of the Inception-V3 neural network in TensorFlow using P100 GPUs. Similar to NV-Caffe and MXNet, TensorFlow also showed good scalability in training speed as more P100 GPUs were used. Training with TensorFlow on multiple nodes ran, but with poor performance, so that result is not shown here; the reason is still under investigation.
Figure 5: The training speed and time of Inception-V3 in TensorFlow using P100 GPUs
Figure 6 shows the speedup when using multiple P100 GPUs in different deep learning frameworks and neural networks. The purpose of this figure is to demonstrate the speedup within each framework as more GPUs are used, not to compare frameworks against each other, since their input parameters were different. When using 4 P100 GPUs for NV-Caffe GoogleNet and TensorFlow Inception-V3, we observed speedups of up to 3.8x and 3.0x, respectively. For MXNet, using 16 P100 GPUs achieved a 13.5x speedup in GoogleNet and a 14.7x speedup in Inception-BN, close to the ideal speedup of 16x. In particular, we observed linear speedup when using 8 and 12 P100 GPUs with the Inception-BN neural network.
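Another way to read the 16-GPU numbers is as parallel efficiency, i.e. measured speedup divided by GPU count, with ideal scaling giving 1.0:

```python
def parallel_efficiency(speedup, n_gpus):
    # fraction of ideal (linear) scaling actually achieved
    return speedup / n_gpus

# 16-GPU MXNet speedups reported in Figure 6
googlenet_eff = parallel_efficiency(13.5, 16)     # ~0.84
inception_bn_eff = parallel_efficiency(14.7, 16)  # ~0.92

print(f"GoogleNet: {googlenet_eff:.0%}, Inception-BN: {inception_bn_eff:.0%}")
```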
Figure 6: Speedup of multiple P100 GPUs in different DL frameworks and networks
In practice, a real user application can take days or weeks to train a model. Although our benchmarking cases run in a few minutes or a few hours, they are small snapshots of the much longer runs needed to fully train a network. For example, training a real application might take 90 epochs over 1.2M images. A Dell C4130 with P100 GPUs can turn in results in less than a day, while a CPU-only server takes more than a week; that is the real benefit to end users. The effect for a real use case is saving weeks of time per run, not seconds.
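The arithmetic behind that days-versus-weeks claim is simple to reproduce. The sustained GPU throughput below is a hypothetical 4-GPU rate, not a figure measured in this blog; the 19.7x CPU-to-GPU ratio is the one measured for NV-Caffe GoogleNet earlier:

```python
def training_days(epochs, images_per_epoch, images_per_sec):
    # wall-clock days at a given sustained throughput
    return epochs * images_per_epoch / images_per_sec / 86400

EPOCHS, IMAGES = 90, 1.2e6   # the full-training example above
gpu_rate = 1450              # hypothetical 4 x P100 rate (images/sec)
cpu_rate = gpu_rate / 19.7   # using the 19.7x speedup measured earlier

print(f"4 x P100: {training_days(EPOCHS, IMAGES, gpu_rate):.1f} days")
print(f"CPU-only: {training_days(EPOCHS, IMAGES, cpu_rate):.1f} days")
```

At these assumed rates the GPU run finishes in under a day while the CPU run takes well over a week, consistent with the claim above.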
Overall, we observed great speedup and scalability in neural network training when multiple P100 GPUs were used in Dell’s PowerEdge C4130 server and multiple server nodes were used. The training speed increased and the training time decreased as the number of P100 GPUs increased. From the results shown, it is clear that Dell’s PowerEdge C4130 cluster is a powerful tool for significantly speeding up neural network training.
In future work, we will try the P100 for NVLink-optimized servers with the same deep learning frameworks, neural networks and dataset to see how much performance improvement can be achieved. This blog experimented with PowerEdge C4130 configuration G, in which only GPU 1 and GPU 2, and GPU 3 and GPU 4, have peer-to-peer access to each other. In the future, we will try C4130 configuration B, in which all four GPUs connected to one socket have peer-to-peer access, and check the performance impact of that configuration. We will also investigate the impact of hyperparameters (e.g. batch size and learning rate) on both training performance and model accuracy. The reason for the slow training performance with TensorFlow on multiple nodes will also be examined.
Author: Yogendra Sharma, Ashish Singh, September 2016 (HPC Innovation Lab)
This blog describes the performance analysis of a PowerEdge R930 server powered by four Intel Xeon E7-8890 v4 @2.2GHz processors (codenamed Broadwell-EX). The primary objective is to compare the performance of HPL, STREAM and a few scientific applications (ANSYS Fluent and WRF) against the previous-generation Intel Xeon E7-8890 v3 @2.5GHz processor, codenamed Haswell-EX. Below are the configurations used for this study.
4 x Intel Xeon E7-8890 v3 @2.5GHz (18 cores) 45MB L3 cache 165W
4 x Intel Xeon E7-8890 v4 @2.2GHz (24 cores) 60MB L3 cache 165W
Memory: 1024 GB = 64 x 16GB DDR4 @2400MHz RDIMMs / 1024 GB = 32 x 32GB DDR4 @2400MHz RDIMMs
BIOS options:
Processor Settings > Logical Processors
Processor Settings > QPI Speed: Maximum Data Rate
Processor Settings > System Profile
Software and Firmware:
Operating System: RHEL 6.6 x86_64
Benchmark and Applications:
HPL: V2.1 from MKL 11.2 (Haswell-EX) / V2.1 from MKL 11.3 (Broadwell-EX)
STREAM: v5.10, Array Size 1800000000, Iterations 100
WRF: v3.5.1, Input Data Conus12KM, Netcdf-18.104.22.168 (Haswell-EX) / V3.8, Input Data Conus12KM, Netcdf-4.4.0 (Broadwell-EX)
Table 1: Details of Server and HPC Applications used with Broadwell-EX processors
In this section of the blog, we compare benchmark numbers across two generations of processors on the same server platform, the PowerEdge R930, as well as the performance of Broadwell-EX processors with different CPU profiles and memory snoop modes, namely Home Snoop (HS) and Cluster-on-Die (COD).
The High Performance Linpack (HPL) benchmark is a measure of a system's floating point computing power. It measures how fast a computer solves a dense n by n system of linear equations Ax = b, a common task in engineering. The HPL benchmark was run on both PowerEdge R930 servers (with Broadwell-EX and Haswell-EX) with a block size of NB=192 and a problem size of N=340992.
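For context, the theoretical double-precision peak (Rpeak) of the four-socket Broadwell-EX system and the memory footprint of this problem size can be estimated as follows. The 16 DP FLOPs/cycle figure assumes Broadwell's two 256-bit FMA units per core, and real AVX clocks sit below the 2.2GHz base, so this is an upper bound rather than a measured value:

```python
# Rough Rpeak for the four-socket E7-8890 v4 (Broadwell-EX) R930.
sockets, cores_per_socket, base_ghz, dp_flops_per_cycle = 4, 24, 2.2, 16
rpeak_tflops = sockets * cores_per_socket * base_ghz * dp_flops_per_cycle / 1000

# Memory footprint of the N x N double-precision HPL matrix at N=340992.
N = 340992
matrix_gb = 8 * N * N / 1e9  # 8 bytes per double

print(f"Rpeak ~ {rpeak_tflops:.2f} TFLOPS")
print(f"HPL matrix ~ {matrix_gb:.0f} GB of the 1024 GB installed")
```

The problem size is chosen so the matrix consumes most, but not all, of the 1TB of installed memory, which is typical HPL practice.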
Figure 1: Comparing HPL Performance across BIOS profiles Figure 2: Comparing HPL Performance over two generations of processors
Figure 1 depicts the performance of the PowerEdge R930 server with Broadwell-EX processors across BIOS options. HS (Home Snoop mode) performs better than COD (Cluster-on-Die) under both the Performance and DAPC system profiles. Figure 2 compares the performance of four-socket Intel Xeon E7-8890 v3 and Intel Xeon E7-8890 v4 servers. HPL showed a 47% performance improvement with four Intel Xeon E7-8890 v4 processors in the R930 compared to four Intel Xeon E7-8890 v3 processors. This was due to the ~33% increase in the number of cores plus a 13% gain from the new, improved versions of both the Intel compiler and Intel MKL.
The STREAM benchmark is a synthetic benchmark program that measures sustainable memory bandwidth and the corresponding computation rate for simple vector kernels.
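To illustrate what STREAM measures, here is a minimal Triad-style sketch (a(i) = b(i) + q*c(i)) in NumPy. It will not match the tuned STREAM binary used for the results below, but it shows how the benchmark counts bytes: three 8-byte transfers per element per pass, timed over the fastest of several repetitions:

```python
import time
import numpy as np

n = 20_000_000  # array size; STREAM requires arrays much larger than cache
b, c = np.random.rand(n), np.random.rand(n)
q = 3.0

best = float("inf")
for _ in range(5):
    t0 = time.perf_counter()
    a = b + q * c  # triad kernel: two reads, one write per element
    best = min(best, time.perf_counter() - t0)

gb_per_s = 3 * 8 * n / best / 1e9  # bytes moved / fastest pass
print(f"Triad bandwidth: {gb_per_s:.1f} GB/s")
```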
Figure 3: Comparing STREAM Performance across BIOS profiles Figure 4: Comparing STREAM Performance over two generations of processors
As per Figure 3, the memory bandwidth of the PowerEdge R930 server with Intel Broadwell-EX processors is the same across BIOS profiles. Figure 4 shows the memory bandwidth of both the Intel Xeon Broadwell-EX and Intel Xeon Haswell-EX processors in the PowerEdge R930 server. Haswell-EX and Broadwell-EX support DDR3 and DDR4 memory respectively, but the platform in this configuration supports a memory frequency of 1600MT/s for both generations of processors. Because the PowerEdge R930 platform supports the same memory frequency for both generations, both Intel Xeon processors achieve the same memory bandwidth of 260GB/s in the PowerEdge R930 server.
The Weather Research and Forecasting (WRF) Model is a mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting needs. It features two dynamical cores, a data assimilation system, and a software architecture facilitating parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales from tens of meters to thousands of kilometers. WRF can generate atmospheric simulations using real data or idealized conditions. We used the CONUS12km and CONUS2.5km benchmark datasets for this study. CONUS12km is a single-domain, small-size benchmark (48 hours at 12km resolution over the Continental U.S. (CONUS) domain from October 24, 2001) with a 72-second time step. CONUS2.5km is a single-domain, large-size benchmark (the latter 3 hours of a 9-hour, 2.5km resolution case over the Continental U.S. (CONUS) domain from June 4, 2005) with a 15-second time step. WRF decomposes the domain into tasks or patches; each patch can be further decomposed into tiles that are processed separately, but by default there is only one tile per run. If the single tile is too large to fit into the CPU's cache, computation slows down because of WRF's sensitivity to memory bandwidth. To reduce the tile size, the number of tiles can be increased by defining “numtiles = x” in the input file or by setting the environment variable “WRF_NUM_TILES = x”. For both CONUS 12km and CONUS 2.5km, we used the tile count that gave the best performance, which was 56.
Figure 5: Comparing WRF Performance across BIOS profiles
Figure 5 compares the WRF datasets across BIOS profiles. With the CONUS 12KM data, all BIOS profiles perform equally well because of the smaller data size, while for CONUS 2.5KM the Perf.COD profile (Performance system profile with Cluster-on-Die snoop mode) gives the best performance. As per Figure 5, the Cluster-on-Die snoop mode performs 2% better than Home Snoop mode, while the Performance system profile gives 1% better performance than DAPC.
Figure 6: Comparing WRF Performance over two generations of processors
Figure 6 shows the performance comparison between Intel Xeon Haswell-EX and Intel Xeon Broadwell-EX processors with PowerEdge R930 server. As shown in the graph, Broadwell-EX performs 24% better than Haswell-EX for CONUS 12KM data set and 6% better for CONUS 2.5KM.
ANSYS Fluent is a computational fluid dynamics (CFD) software tool. Fluent includes well-validated physical modeling capabilities to deliver fast and accurate results across the widest range of CFD and multi physics applications.
Figure 7: Comparing Fluent Performance across BIOS profiles
We used three different datasets for Fluent with ‘Solver Rating’ (higher is better) as the performance metric. Figure 7 shows that all three datasets performed 4% better with the Perf.COD BIOS profile (Performance system profile with Cluster-on-Die snoop mode) than with the others, while the DAPC.HS profile (DAPC system profile with Home Snoop mode) showed the lowest performance. For all three datasets, the COD snoop mode performs 2% to 3% better than Home Snoop mode, and the Performance system profile performs 2% to 4% better than DAPC. Fluent's behavior is consistent across all three datasets.
Figure 8: Comparing Fluent Performance over two generations of processors
As shown above in Figure 8, for all test cases on the PowerEdge R930 with Broadwell-EX, Fluent showed a 13% to 27% performance improvement in comparison to the PowerEdge R930 with Haswell-EX.
Overall, the Broadwell-EX processor makes the PowerEdge R930 server more powerful and more efficient. With Broadwell-EX, HPL performance increases in proportion to the increase in the number of cores compared to Haswell-EX. There are also performance gains for real applications, depending on the nature of their computation. So it can be a good upgrade choice for those running compute-hungry applications.