HPC at Dell

High Performance Computing

A discussion venue for all things high performance computing (HPC), supercomputing, and the technologies that enable scientific research and discovery.
Results for supercomputing
  • SC12 Highlights

    This year's big supercomputing event, SC12, was held in Salt Lake City and was a fantastic conference. Some of the key highlights included the University of Texas winning the SC12 Student Cluster Competition and the Texas Advanced Computing Center's (TACC) Stampede being recognized as the 7th-fastest computer...
  • PetaFLOPS for the Common Man - Pt. 1: Who Needs PetaFLOPS and Why? - 02-16-2010

    Jeff Layton, Ph.D., Dell Enterprise Technologist - HPC. A PetaFLOPS sounds like a huge number, and indeed it is, but the word itself doesn't really convey how much computational power a PetaFLOPS represents. So for an analogy, a PetaFLOPS would require 83,333 laptops with 2 cores running...
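    The laptop analogy can be checked with quick back-of-the-envelope arithmetic. This is a sketch, not from the post itself: the implied per-core rate is derived from the quoted laptop count, and the interpretation (roughly a 3 GHz core retiring 2 floating-point operations per cycle) is an assumption.

    ```python
    # Back-of-the-envelope check of the PetaFLOPS-to-laptops analogy.
    peta_flops = 1.0e15          # 1 PetaFLOPS expressed in FLOPS
    laptops = 83_333             # laptop count quoted in the post
    cores_per_laptop = 2         # dual-core laptops, per the post

    # Sustained rate each core would need to contribute
    flops_per_core = peta_flops / (laptops * cores_per_laptop)
    print(f"Implied sustained rate: {flops_per_core / 1e9:.1f} GFLOPS per core")
    # → Implied sustained rate: 6.0 GFLOPS per core
    ```

    A ~6 GFLOPS sustained rate per core is plausible for laptop CPUs of that era, which is presumably why the post picked this count for the analogy.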
  • Fast and furious – developing an enterprise class environment for Team Lotus

    You only need to watch a Formula One™ car when it comes into the pits and is surrounded by laptops, to see how reliant modern racing is on technology. As crucial as the driver’s skill is the IT infrastructure behind an F1 team. When the newly formed Team Lotus asked us to help them build...
  • HPC GPU Flexibility: The Evolution of GPU Applications and Systems Part 5 – Benchmarks

    By Dell's Dr. Jeff Layton. GPUs are arguably one of the hottest trends in HPC. They can greatly improve performance while reducing the power consumption of systems. However, because GPU computing is still evolving, both in application development and in tool-set creation, GPU...
  • Transforming Research with High Performance Computing (HPC)

    The Texas Advanced Computing Center at the University of Texas at Austin. A common thread among HPC researchers is the need to solve bigger problems that can't fit on smaller computers, or to solve those problems very rapidly. TACC's mission is to enable discoveries through the...
  • Enabling human potential: Texas Advanced Computing Center unveils Lonestar 4 supercomputer

    Michael Dell joins U.S. Sen. Kay Bailey Hutchison and University of Texas President William Powers Jr. for the dedication ceremony. Christine Fronczak, Dell HPC Marketing Manager. High performance computing (HPC) systems are indispensable in nearly all fields of science and...
  • Architecting HPC Systems for Fault Tolerance and Reliability Part 7 - Login Head Nodes

    Typically, clustered HPC systems have at least one node dedicated to handling logins from individual users of the system. From here, users manipulate their data, submit jobs to the system, and track the status of their running jobs. Login nodes are generally not extremely critical to continued...
  • HPC Visualization

    Visualization is an important method of creating visual, graphical output from raw HPC data to aid in the interpretation and analysis of complex HPC data sets. Recent Posts: http://en.community.dell.com/techcenter/high-performance-computing/w/wiki/2442.aspx Modeling North Carolina Meteorology and Storm Surge...
  • Guest Blog TACC: Real World Examples of HPC Working to Improve Human Health

    Introduction by Dr. Glen Otero, Dell HPC Scientist. I was working at TACC as an independent consultant back in 2003 when the seeds of the computational biology program were sown. At the time we concentrated on laying the foundation of a research...
  • PetaFLOPS for the Common Man- Pt 5 Challenges with PetaFLOPS scale systems

    I think it’s fairly safe to say that PetaFLOPS-class systems will become fairly straightforward to build over the next three years or so. The processing part of the problem, whether from CPUs or a hybrid of CPUs and GPUs, is gaining a fair amount of momentum, and in three years we should...
  • HPC Tech Tip: Updating your compute nodes firmware via PXE - 03-12-2010

    Introduction: If you find yourself having to update firmware on your servers via DOS, instead of walking to each server with a USB drive or booting to a USB floppy, try it via PXE. In most cases we have a Linux-based update package, so you can avoid updating in this manner altogether. I am assuming that...
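    For readers unfamiliar with the mechanics, a PXE-booted DOS firmware update typically works by serving a bootable floppy image through the SYSLINUX memdisk module. A minimal pxelinux.cfg entry might look like the sketch below; the label and image filename are hypothetical placeholders, not values from the post.

    ```
    # Hypothetical pxelinux.cfg entry: boots a DOS floppy image via memdisk
    DEFAULT firmware-update
    LABEL firmware-update
      KERNEL memdisk
      APPEND initrd=bios_update.img
    ```

    With an entry like this, every node that network-boots gets the same update image, which is the labor savings the tip describes.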
  • Guest Blog TACC: Visualization in HPC – Seeing is Understanding! - 03-04-2010

    Introduction by Dr. Jeff Layton, Dell Enterprise Technologist, HPC. Everyone is probably aware that the amount of computational power in HPC is growing at a tremendous rate. Problem sizes are growing, the fidelity of models is improving, more difficult...
  • HPC Overview

    Cluster computing has revolutionized High Performance Computing (HPC). Prior to clusters, HPC was dominated by large centralized systems from vendors such as CDC, Cray, SGI, and IBM. All users shared these machines as centralized resources, reducing costs by centralizing expensive resources. However...