High Performance Computing Blogs

High Performance Computing

A discussion venue for all things high performance computing (HPC), supercomputing, and the technologies that enable scientific research and discovery.
  • Unsnarling Traffic Jams at TACC

    by Stephen Sofhauser

    Nobody enjoys being stuck in traffic. Congestion is an increasing reality for people living in cities with fast growing populations and transportation infrastructures ill-equipped to meet mounting demand. But that may not have to be a reality anymore.

    A group of researchers within the Center for Transportation Research (CTR) at the University of Texas at Austin is turning to high performance computing to help solve the city's growing traffic woes.

    Using the Stampede supercomputer at the Texas Advanced Computing Center (TACC) to run advanced transportation models, CTR is helping local transportation agencies to better understand, compare and evaluate a variety of solutions to traffic problems in Austin and the surrounding region. By teaming with TACC, they have seen computations run 5 to 10 times faster on Stampede.

    Researchers are using dynamic traffic assignment models to support city planners, offering greater insight into how travel demand, traffic control and changes to the existing infrastructure combine to affect transportation in the region.

    To help with better decision making, an interactive web-based visualization tool has been developed. It allows researchers to see the results of the various traffic simulations and associated data in a multitude of ways. By providing various ways to view the area's transportation network, researchers can gain greater clarity into traffic, how their models are performing, and what the impact of suggested transportation strategies might be.

    You can learn about UT's efforts to unclog Austin's roads at TACC's news site.

  • Enhanced Molecular Dynamics Performance with K80 GPUs

    By: Saeed Iqbal & Nishanth Dandapanthula

    The advent of hardware accelerators has impacted Molecular Dynamics by reducing the time to results, providing a tremendous boost in simulation capacity (e.g., previous NAMD blogs). Over time, applications from several domains, including Molecular Dynamics, have been optimized for GPUs. A comprehensive (though constantly growing) list can be found here. LAMMPS and GROMACS are two open source Molecular Dynamics (MD) applications that can take advantage of these hardware accelerators.

    LAMMPS stands for "Large-scale Atomic/Molecular Massively Parallel Simulator" and can be used to model solid-state materials and soft matter. GROMACS is short for "GROningen MAchine for Chemical Simulations". GROMACS is primarily used to simulate biochemical molecules (bonded interactions), but because of its efficiency in calculating non-bonded interactions (atoms not linked by covalent bonds), its user base is expanding to non-biological systems.
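
    Both codes are driven from the command line. Below is a minimal sketch, in Python, of how GPU-accelerated runs might be launched; the binary names (lmp, gmx), input files (in.lj, the -deffnm benchmark prefix) and GPU count are placeholder assumptions, not settings taken from this study.

        # Sketch: launching GPU-accelerated LAMMPS and GROMACS runs.
        # Binary names, input files and GPU count are assumptions; adjust
        # them to match your own build and dataset.
        import subprocess

        NUM_GPUS = 2  # e.g., one K80 card exposes two internal GK210 GPUs

        # LAMMPS: "-sf gpu" applies the gpu suffix to supported styles and
        # "-pk gpu N" tells the GPU package how many GPUs to use per node.
        lammps_cmd = ["lmp", "-sf", "gpu", "-pk", "gpu", str(NUM_GPUS),
                      "-in", "in.lj"]

        # GROMACS: "-nb gpu" offloads the non-bonded interactions to the GPU.
        gromacs_cmd = ["gmx", "mdrun", "-deffnm", "benchmark", "-nb", "gpu"]

        for cmd in (lammps_cmd, gromacs_cmd):
            print("Running:", " ".join(cmd))
            subprocess.run(cmd, check=True)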

    NVIDIA's K80 offers significant improvements over the previous model, the K40. From the HPC perspective, the most important improvement is the 1.87 TFLOPS (double precision) compute capacity, which is about 30% more than the K40. The auto-boost feature in the K80 automatically provides additional performance if additional power headroom is available. The internal GPUs are based on the GK210 architecture and have a total of 4,992 cores, a 73% improvement over the K40. The K80 has a total memory of 24GB, divided equally between the two internal GPUs; this is 100% more memory capacity than the K40. The memory bandwidth in the K80 is improved to 480 GB/s. The rated power consumption of a single K80 card is a maximum of 300 watts.
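
    As a quick sanity check of how a K80 presents itself to the system, a short query script can list each visible GPU's name, memory and power limit. This is only a sketch; it assumes the nvidia-ml-py (pynvml) Python bindings and an NVIDIA driver are installed, and a single K80 card should appear as two GK210 devices with roughly 12GB each.

        # Sketch: enumerate GPUs via NVML (assumes the pynvml package is installed).
        import pynvml

        pynvml.nvmlInit()
        try:
            for i in range(pynvml.nvmlDeviceGetCount()):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                name = pynvml.nvmlDeviceGetName(handle)
                if isinstance(name, bytes):          # older pynvml returns bytes
                    name = name.decode()
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)              # bytes
                power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
                print(f"GPU {i}: {name}, {mem.total / 2**30:.1f} GiB, "
                      f"power limit {power / 1000:.0f} W")
        finally:
            pynvml.nvmlShutdown()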

    Dell has introduced a new high-density GPU server, the PowerEdge C4130, which offers five configurations, noted here as "A" through "E". Part of the goal of this blog is to find out which configuration is best suited for LAMMPS and GROMACS. The three quad GPU configurations "A", "B" and "C" are compared. The two dual GPU configurations "D" and "E" are also compared for users interested in the lower GPU density of 2 GPUs per rack unit. The first two quad GPU configurations ("A" & "B") have an internal PCIe switch module which allows seamless peer-to-peer GPU communication. We also want to understand the impact of the switch module on LAMMPS and GROMACS. Figure 1 below shows the block diagrams for configurations A to E.

    Combining K80s with the PowerEdge C4130 results in an extraordinarily powerful compute node. The C4130 can be configured with up to four K40 or K80 GPUs in a 1U form factor. The PowerEdge C4130 is also unique in offering several workload-specific configurations, potentially making it a better fit for MD codes in general, and for LAMMPS and GROMACS in particular.

    Figure 1: C4130 Configuration Block Diagram


    Recently we have evaluated the performance of NVIDIA’s Tesla K80 GPUs on Dell’s PowerEdge C4130 server on standard benchmarks and applications (HPL and NAMD).

    Performance Evaluation with LAMMPS and GROMACS

    In this blog, we quantify the performance of two molecular dynamics applications, LAMMPS and GROMACS, by comparing their performance on K80s to a CPU-only server. Performance is measured in "Jobs/day" for LAMMPS and in "ns/day" (the inverse of the number of days required to simulate 1 nanosecond of real time) for GROMACS; higher is better in both cases. Table 1 gives more information about the hardware configuration and application details used for the tests.
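
    For readers who want to reproduce the bookkeeping, the sketch below (with made-up numbers, not results from this study) shows how days/ns converts to ns/day and how speedup over the CPU-only baseline is computed.

        # Sketch of the metrics used in this blog. The example values are
        # illustrative placeholders, not measured results.

        def ns_per_day(days_per_ns: float) -> float:
            """Convert days/ns (lower is better) to ns/day (higher is better)."""
            return 1.0 / days_per_ns

        def speedup(accelerated: float, cpu_only: float) -> float:
            """Relative performance versus the CPU-only run (same units for both)."""
            return accelerated / cpu_only

        cpu_only_perf = ns_per_day(2.0)    # hypothetical: 2 days per simulated ns
        k80_perf = ns_per_day(0.25)        # hypothetical: 0.25 days per simulated ns
        print(f"CPU-only: {cpu_only_perf:.2f} ns/day, K80: {k80_perf:.2f} ns/day, "
              f"speedup: {speedup(k80_perf, cpu_only_perf):.1f}x")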

    Table 1: Hardware Configuration and Application Details

    Figure 2: LAMMPS Performance on K80s Relative to CPUs

    Figure 2 quantifies the performance of LAMMPS on the five configurations mentioned above and compares them to a CPU-only server (i.e., the performance of the application on a server with two CPUs and no GPUs). The graph can be described as follows:

    • Configurations A and B are the switched configurations, with the only difference being that B has an extra CPU. Since LAMMPS uses just the GPU cores here, the extra CPU does not tip the scale in terms of performance.
    • Configurations "A", "B" and "C" are four-GPU configurations. Configuration C performs better than A and B. This can be attributed to the PCIe switch in configurations A and B, which introduces an extra hop of latency compared to "C", a more balanced configuration.
    • The two dual-GPU configurations are D and E. Configuration D performs slightly better than E, which again can be attributed to the more balanced nature of D. As mentioned previously, LAMMPS is not affected by the extra CPU in D.
    • An interesting observation is that when moving from 2 K80s to 4 K80s (i.e., comparing configurations D and C in Figure 2), the performance almost quadruples. This shows that for each extra K80 added (2 GPUs per K80), the performance doubles. This can be partially attributed to the size of the dataset used.


    Figure 3: GROMACS Performance on K80s relative to CPUs

    Figure 3 shows the performance of GROMACS among the five configurations and the CPU-only configuration. The explanation is as follows.

    • Among the quad GPU configurations (A, B and C), B performs the best. In addition to the 4 GPUs attached to CPU1, GROMACS also uses the entire second CPU (CPU2), making B the best performing configuration. GROMACS appears to benefit from the second CPU as well as from the switch; it is likely that the application has substantial GPU-to-GPU communication.
    • Configuration C outperforms A. This can be attributed to the more balanced nature of C. Another contributing factor may be the latency hit from the PCIe switch in A.
    • Even in the dual GPU configurations (D and E), D, the more balanced of the two, slightly outperforms E.

    Performance is not the only criterion when using a performance-optimized server as dense as the Dell PowerEdge C4130 with 4 x 300-watt accelerators. The other dominating factor is how much power these platforms consume. Figure 4 answers the questions pertaining to power.

    • In the case of LAMMPS, the order of power consumption is as follows: B > A >= C > D > E.
      • Configuration B is a switched configuration and has an extra CPU compared to Configuration A.
      • Configuration A incurs a slight overhead from the switch and thus draws slightly more power than C.
      • Configuration D is a dual-GPU, dual-CPU configuration and thus draws more power than E, which is a single-CPU, dual-GPU configuration.
    • In the case of GROMACS, the order is the same, but B draws considerably more power relative to A and C than it does with LAMMPS. This is because GROMACS uses the extra CPU in B while LAMMPS does not.

    In conclusion, both GROMACS and LAMMPS benefit greatly from Dell's PowerEdge C4130 servers and NVIDIA's K80s. In the case of LAMMPS, we see a 16x performance improvement for only 2.6x more power. In the case of GROMACS, we see a 3.3x improvement in performance while consuming 2.6x more power. The comparisons in this case are with a dual-CPU-only configuration. Obviously, many other factors come into play when scaling these results to multiple nodes; GPUDirect, the interconnect, and the size of the dataset/simulation are just a few of them.

  • Using HPC to Improve Competitiveness

    by Onur Celebioglu

    A story from insideHPC took a look at how the National Center for Supercomputing Applications (NCSA) is helping organizations in the manufacturing and engineering industries use high performance computing to improve their competitiveness. The post suggests that companies utilizing supercomputing resources such as the iForge HPC cluster, which is based on Dell and Intel technologies, may realize some important benefits, including:

    • Large memory nodes performing 53 percent faster on demanding simulations
    • Benchmarking to predict performance and scalability benefits from larger software licenses
    • The ability to solve finite element analysis and computational fluid dynamics problems

    The manufacturing industry is becoming increasingly competitive, making the role high performance computing plays even more important. The ability to simulate models, for example, allows research and design to be conducted with greater cost efficiency. No longer is it necessary to build, test and repeat. Thus manufacturers are able to deliver a safer product to market in a shorter period of time.

    As Dell CTO and Senior Fellow Jimmy Pike wrote in his blog, there is also a growing demand for iterative research - the emerging ability to change variables along the way without having to run a new batch - which will continue to shorten products' time to market while decreasing costs.

    And that's what competition is all about.


  • The Cypress Supercomputer Takes Root at Tulane

    In August of 2005, Hurricane Katrina ravaged the city of New Orleans and nearly destroyed the almost 180-year-old Tulane University. But along with the intrepid people of the city, Tulane's leaders refused to give up. The university rose out of the ashes, rebuilt and moved forward. Few accomplishments better exemplify Tulane's rebirth than the new Cypress supercomputer.

    In the years immediately following Katrina, Tulane's IT infrastructure lacked the power and capacity to meet the demand of a world-class university. The network was frequently clogged, and afternoon email slowdowns were a daily occurrence.

    Under the guidance of Charlie McMahon, Ph.D., Vice President of Information Technology, Tulane set out not only to develop a new, more powerful system, but one befitting a place of learning as impressive as Tulane itself. 

    The crowning achievement is the Cypress supercomputer, which will be used for a wide range of workloads, from sea-level calculations to molecular docking in support of pharmaceutical discovery. Tulane has even contracted with the National Football League Players' Association to conduct long-term tracking of players, who have a higher risk of traumatic brain injury.

    You can learn more about Tulane's remarkable journey to recovery in this video.

    Cypress arrived just in the nick of time: next year, Tulane is expected to see its largest, and arguably most diverse, graduating class in its history, with greater numbers of potential students applying every year.

    We are very proud of our partnership with Tulane, and are humbled to have played a small part in this amazing institution's bright future. You can read a case study of Dell's work with Tulane here, and learn more about Cypress at HPCwire.

  • Dell PowerEdge C4130 Performance with K80 GPUs - NAMD

    by Saeed Iqbal and Mayura Deshmukh 

    The field of Molecular Dynamics (MD) has seen a tremendous boost in simulation capacity since the introduction of general purpose GPUs. This trend is sustained by freely available, feature-rich, GPU-enabled molecular dynamics simulators like NAMD. For more information on NAMD, please visit http://www.ks.uiuc.edu/Research/namd/ and for more information on GPUs visit http://www.nvidia.com/tesla.

    Things only get better for NAMD with the release of the new Tesla K80 GPUs from NVIDIA. The K80 offers significant improvements over the previous model, the K40. From the HPC perspective, the most important improvement is the 1.87 TFLOPS (double precision) compute capacity, which is about 30% more than the K40. The auto-boost feature in the K80 automatically provides additional performance if additional power headroom is available. The internal GPUs are based on the GK210 architecture and have a total of 4,992 cores, a 73% improvement over the K40. The K80 has a total memory of 24GB, divided equally between the two internal GPUs; this is 100% more memory capacity compared to the K40. The memory bandwidth in the K80 is improved to 480 GB/s. The rated power consumption of a single K80 is a maximum of 300 watts.

    Combining K80s with the latest high GPU density design from Dell, the PowerEdge C4130, results in an extraordinarily powerful compute node. The C4130 can be configured with up to four K40 or K80 GPUs in a 1U form factor. The PowerEdge C4130 is also unique in offering several workload-specific configurations, potentially making it a better fit for MD codes in general, and for NAMD in particular.

    The PowerEdge C4130 offers five configurations, noted here as "A" through "E". Part of the goal of this blog is to find out which configuration is best suited for NAMD. The three quad GPU configurations "A", "B" and "C" are compared. The two dual GPU configurations "D" and "E" are also compared for users interested in the lower GPU density of 2 GPUs per rack unit. The first two quad GPU configurations ("A" & "B") have an internal PCIe switch module which allows seamless peer-to-peer GPU communication. We also want to understand the impact of the switch module on NAMD. Figure 1 below shows the block diagrams for configurations A to E.

    Figure 1: C4130 Configuration Block Diagram

    In this blog, we evaluate the improved NAMD performance with Tesla K80 GPUs. Two proteins, F1ATPASE and STMV, which consist of 327K and 1,066K atoms respectively, were selected due to their relatively large problem sizes. The performance measure is "days/ns", the number of days required to simulate 1 nanosecond of real time; lower is better. Table 1 gives more information about the hardware configuration and application details used for the tests.
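
    For reference, a CUDA-enabled NAMD benchmark on a multi-GPU node is typically launched along the lines of the sketch below. The binary name, thread count, device list and input file (here the publicly available STMV benchmark input) are assumptions to adapt to your own system, not the exact commands used for these tests.

        # Sketch: launching a CUDA-enabled NAMD benchmark on a multi-GPU node.
        # Binary name, core count, device list and input file are assumptions.
        import subprocess

        cmd = [
            "namd2",
            "+p16",                  # number of worker threads
            "+idlepoll",             # recommended for CUDA builds
            "+devices", "0,1,2,3",   # the four GPUs of two K80 cards
            "stmv.namd",             # STMV benchmark input
        ]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)
        # NAMD prints benchmark lines in days/ns; lower is better.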

    Table 1: Hardware Configuration and Application Details

    Figure 2: NAMD performance and acceleration on the five C4130 configurations 

    Figure 2 shows the performance of NAMD on the PowerEdge C4130. Configurations "A", "B" and "C" are four-GPU configurations. However, NAMD acceleration also appears to be sensitive to the number of CPUs; e.g., there is a significant difference in acceleration between "A" and "B", and "B" has an additional CPU compared to "A". Among the three quad GPU configurations, the current version of NAMD performs best on configuration "C". The difference between the two highest performing configurations, "C" and "B", is the manner in which the GPUs are attached to the CPUs. The balanced configuration "C", with 2 GPUs attached to each of the 2 CPUs, results in 7.8X acceleration over the CPU-only case. The same four GPUs attached via a switch module to a single CPU result in about 7.7X acceleration.

    On the dual GPU configurations, "D" performs better, with 5.9X acceleration compared to 4.4X for configuration "E". The fact that "D" does better is in line with the assumption that a second CPU is helpful for NAMD, as we saw with "B" and "C" (the dual-CPU, quad-GPU configurations) performing best.


    Figure 3: Power consumption for the NAMD runs shown in Figure 2

    As shown, the power consumption of the GPU configurations is about 2.1X to 2.3X that of the CPU-only server, with accelerations ranging from 4.4X to 7.8X. Configuration "C" does the best from a performance-per-watt perspective (an acceleration of 7.8X for 2.3X more power). The power consumption of "D" is higher than "E" due to the additional CPU and the improved utilization of the GPUs, which results in better acceleration.
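
    The performance-per-watt comparison boils down to dividing the acceleration by the relative power draw, as in the small sketch below; the 7.8X and 2.3X figures for configuration "C" are the ones quoted above, and any other values plugged in should come from your own measurements.

        # Sketch of the performance-per-watt comparison: acceleration over the
        # CPU-only baseline divided by relative power draw.

        def relative_perf_per_watt(acceleration: float, relative_power: float) -> float:
            """How much more work per watt a configuration delivers vs. CPU-only."""
            return acceleration / relative_power

        config_c = relative_perf_per_watt(7.8, 2.3)  # figures quoted for configuration "C"
        print(f"Configuration C delivers about {config_c:.1f}x more work per watt "
              "than the CPU-only baseline.")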

    In conclusion, first, using K80 GPUs can substantially accelerate NAMD, and it does so in a power-efficient way. Second, the balanced configurations seem to do better with NAMD. Configurations "C" and "D" are best for NAMD; the particular choice depends on the required GPU density per rack unit.

  • Michael Norman Shares His Thoughts on Why HPC Matters

    Michael Norman, Ph.D., the Director of the San Diego Supercomputer Center at UCSD, took time at SC14 to share his thoughts on why HPC matters and other issues important to the industry. 

  • Rhian Resnick of Florida Atlantic University Shares His Thoughts on Why HPC Matters

    Rhian Resnick, the Assistant Director of Middleware and HPC at Florida Atlantic University, took time at SC14 to share his thoughts on why HPC matters and other issues important to the industry.

  • Niall Gaffney of TACC Shares His Thoughts on Why HPC Matters

    Niall Gaffney took time out at SC14 to share his thoughts on why HPC matters and other issues important to the industry. He is Director of Data Intensive Computing at the Texas Advanced Computing Center (TACC).

  • Muhammad Atif from NCI and ANU Offers His Thoughts on Why HPC Matters

    Next from SC14 to give his thoughts on why HPC matters and other issues important to the industry is Muhammad Atif, Ph.D., from NCI and the Australian National University.

  • Kenneth Buetow of Arizona State Gives His Thoughts on Why HPC Matters

    Next from SC14 to give his thoughts on why HPC matters and other issues important to the industry is Kenneth Buetow, Ph.D., Director of Computation and Informatics at Arizona State University.