Recently I’ve been involved in a lively online conversation about Intel’s MIC architecture and the role it will play in our world of High Performance Computing (HPC). It’s been a great discussion so far, so I’ve summarized some of the interaction for this blog post; I think it offers good reading and insights from our HPC community.
If you’d like to add your own comments to the conversation, you can post here on this blog, or at LinkedIn in the High Performance & Super Computing group. NOTE: this is a members-only LinkedIn group, but all that’s required is to request membership.
Here is my original question posed to the group:
Following are some sections from the discussion that I captured for interest.
“I really think it all breaks down to MIC's performance against the hardware that is currently being used in this application space …
I don't see it as an "alternative" for x86 HPC, but more of an additional tool for HPC.”
Blake Gonzales • Thanks for your comments Anthony. I guess in a real way you are correct that it may not necessarily need to be an alternative to x86 for HPC... especially because, in theory, x86 codes will recompile w/o changes and be able to run on MIC. I think this is key to adoption of MIC, whereas with FPGAs and GPUs (to some degree) you have to rewrite your code.
"... we will just have to sit back and wait and see how Intel integrates MIC into their systems, ... and then benchmark them to see what performance gains we get ...
I truly believe the "core war" is over with, and right now the focus is MIC / GPU / Interconnect and getting down to the 18-12 nm process level so we will end up with high frequency, lower wattage, higher performing components ... thus giving us faster super computers with a smaller footprint, which is what more and more people are concerned with ... It doesn't matter if you have the biggest/fastest SC in the world if it costs you 300k a month to run / cool it :)"
“… you have to consider the software ecosystem that's provided along with the hardware as well. NVIDIA's and ATI's are good, but Intel's is excellent and has been for years. I would expect to see the Intel MKL extended to exploit MIC if it's in the machine. That will provide an instant speedup, possibly with no code changes and just a re-link, on what can be 50 - 80% of the computational load in numerical simulation. Applicable from the desktop up. That will definitely get traction.”
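The “just a re-link” point follows from the fact that BLAS/LAPACK expose a standard interface, so swapping in an optimized library is a build-system change, not a code change. A rough sketch of what that looks like (the source file name is hypothetical; `-lmkl_rt` is MKL’s single-dynamic-library link option):

```shell
# Same source, three different BLAS back ends: only the link line changes.
gcc solver.c -o solver -lblas        # reference BLAS
gcc solver.c -o solver -lopenblas    # OpenBLAS
gcc solver.c -o solver -lmkl_rt      # Intel MKL (single dynamic library)
```

If MKL were extended to dispatch work to a MIC card transparently, applications linked this way would pick up the speedup with no source changes at all, which is the scenario the commenter describes.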
“MiC is a more general purpose platform of course than Nvidia Fermi GPUs. It does share with GPUs their shortcomings though, namely: 1) MiC is still an I/O device and 2) extensive tuning may be required to extract most of the performance out of MiC H/W…
Here though lie some of the issues with MiC: 1) it is much harder to get a hold of the actual hardware on which to develop the MiC enabled applications. 2) MiC platform (Knights Ferry) is really a usable research prototype and it will take some more time to become a more mature and finalized architecture, i.e., a stable and accepted target.”
“Here are some places where GPUs / MiCs could be immediately successful:

* workstations used for number crunching / visualization with, say, 1-2 sockets + accelerator
* single-chip SOCs with a GPU or scaled-down MiC embedded within the same die

Here are some places where long-term commitment from vendors and developers is needed:

* HPC environments: clusters of multi-socket SMPs with a GPU or MiC attached somewhere in the h/w”
The conversation is still happening – what are your thoughts? Please comment below – or visit the LinkedIn group High Performance & Super Computing to join in!