Over the past decade, supercomputers have become more affordable and accessible as high performance computing (HPC) clusters proved themselves a viable alternative to traditional proprietary SMP systems. The price/performance improvement delivered by these new systems, built on standard off-the-shelf hardware components and open source software, represented a seismic shift in the supercomputing landscape and in the amount of computing power being used. The HPC industry has continued to grow in terms of money spent on these systems, even though the cost of these supercomputers has dropped more than 10x. One of the questions has been: are the same players simply gaining more computing power, or have the price/performance improvements allowed the user base of HPC systems to grow?

It's an interesting question. Now there is hope of making HPC accessible to more organizations by delivering HPC resources to users via the cloud. By removing the requirement that organizations acquire, manage and maintain these supercomputing systems themselves, could the cloud give new organizations access to supercomputing power? It's possible the next seismic shift in HPC is already underway with the promise of HPC via the cloud. A recent Scientific Computing World article, Meeting Demand, explores that very concept. While not a new idea, it has finally come to fruition for some organizations. Whether you call it utility HPC or cloud bursting, the ability to access massive amounts of computing power over the internet is a reality. Today.

Many of the case study examples of this so-called cloud bursting feature organizations with existing HPC cluster systems on site. In these cases, utility HPC in the cloud is used where they need an extra push, or when they have reached the maximum capacity of their internal HPC systems.

The article provides some guidance on when purchasing and owning makes sense versus accessing HPC systems via the cloud. A simple evaluation of predicted utilization can help organizations determine where the break-even point lies. For organizations with constant demand for HPC systems, owning and managing these resources internally clearly makes sense. On the flip side, when HPC is used mainly to support special project work where utilization varies, investing in expensive infrastructure and HPC hardware that can quickly become outdated may not make sense.
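To make that break-even idea concrete, here is a minimal sketch comparing the amortized cost of owning a cluster against pay-per-use cloud pricing. Every number in it (cluster price, lifespan, operating cost, cloud node-hour rate, node count) is an illustrative assumption, not a figure from the article; plug in your own estimates.

```python
# Rough break-even sketch: owning an HPC cluster vs. renting cloud capacity.
# All numbers below are illustrative assumptions, not figures from the article.

CLUSTER_PRICE = 500_000.0        # assumed purchase price of an on-site cluster (USD)
ANNUAL_OPEX = 100_000.0          # assumed yearly power, cooling, admin, support (USD)
LIFESPAN_YEARS = 4               # assumed useful life before the hardware is outdated
CLOUD_RATE_PER_NODE_HOUR = 1.50  # assumed cloud price per node-hour (USD)
NODES = 64                       # assumed size of the workload, in nodes
HOURS_PER_YEAR = 8760

def yearly_cost_owned() -> float:
    """Amortized yearly cost of buying and operating the cluster."""
    return CLUSTER_PRICE / LIFESPAN_YEARS + ANNUAL_OPEX

def yearly_cost_cloud(utilization: float) -> float:
    """Yearly cloud cost if the workload runs `utilization` fraction of the year."""
    return HOURS_PER_YEAR * utilization * NODES * CLOUD_RATE_PER_NODE_HOUR

def break_even_utilization() -> float:
    """Utilization at which owning and renting cost the same per year."""
    return yearly_cost_owned() / (HOURS_PER_YEAR * NODES * CLOUD_RATE_PER_NODE_HOUR)

if __name__ == "__main__":
    print(f"Break-even utilization: {break_even_utilization():.0%}")
    for u in (0.10, 0.30, 0.60, 0.90):
        owned, cloud = yearly_cost_owned(), yearly_cost_cloud(u)
        cheaper = "cloud" if cloud < owned else "owned"
        print(f"utilization {u:.0%}: owned ${owned:,.0f} vs cloud ${cloud:,.0f} -> {cheaper}")
```

With these assumed numbers the break-even lands around 27 percent utilization: below that, renting wins; above it, owning does. The point is the shape of the comparison, not the specific figures.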

However, the most realistic situation is a combined approach. Call it a hybrid: internal systems are sized to run near maximum capacity at all times, and when demand outstrips what those internal HPC resources can handle, utility supercomputing comes into play.
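A minimal sketch of that hybrid policy: fill the internal cluster first, and burst only the overflow to a cloud provider. The node count and the submit_* helpers are hypothetical placeholders, not a real scheduler or cloud API.

```python
# Minimal sketch of a hybrid "cloud bursting" policy.
# INTERNAL_NODES and the submit_* helpers are hypothetical placeholders,
# not a real scheduler or cloud provider API.

INTERNAL_NODES = 128  # assumed size of the on-site cluster, in nodes

def submit_internal(job_id: str, nodes: int) -> None:
    print(f"{job_id}: running on {nodes} internal nodes")

def submit_cloud(job_id: str, nodes: int) -> None:
    print(f"{job_id}: bursting {nodes} nodes to the cloud")

def schedule(jobs: list[tuple[str, int]]) -> None:
    """Fill the internal cluster first; send the overflow to utility HPC."""
    free = INTERNAL_NODES
    for job_id, nodes in jobs:
        if nodes <= free:
            submit_internal(job_id, nodes)
            free -= nodes
        else:
            submit_cloud(job_id, nodes)

if __name__ == "__main__":
    # Hypothetical workload: the third job no longer fits on-site and bursts out.
    schedule([("crash-sim", 64), ("cfd-run", 48), ("genome-batch", 96)])
```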

In the article, Dell's own Bart Mellenbergh, EMEA Director of HPC, outlined some of the challenges to wider adoption of this so-called cloud bursting, or utility HPC. These include licensing, the HPC knowledge and ability needed to run applications over the cloud, and data security.

So it seems the majority of traditional HPC sites will continue using their internally owned HPC systems. Some visionary organizations have already set in motion a path that will allow them to gain access to cloud-based HPC when demand exceeds internal capacity.

However, the question remains: will the promise of more accessible HPC through the cloud actually attract new users? In theory, any organization that stands to benefit from the powerful simulations and number crunching associated with HPC systems should be considering how it can leverage HPC via the cloud. Better data and better information lead to better decisions. What organization today wouldn't benefit from that?

For more information, please read the original article:

Scientific Computing World: Meeting Demand, by Beth Harlen http://content.yudu.com/A2byyn/SCWAUGSEP13/resources/18.htm