By Olumide Olusanya and Munira Hussain
This is the second part of this blog series. In the first post, we shared OSU Micro-Benchmarks (latency and bandwidth) and HPL performance results comparing FDR and EDR InfiniBand. In this part, we further compare performance using real-world applications: ANSYS Fluent, WRF, and the NAS Parallel Benchmarks. For our cluster configuration, please refer to part 1.
Fluent is a Computational Fluid Dynamics (CFD) application used for engineering design and analysis. It can simulate the flow of fluids, along with the heat transfer, turbulence, and other phenomena involved in various transportation, industrial, and manufacturing processes.
For this test we ran Eddy_417k, one of the problem sets from the ANSYS Fluent benchmark suite. It is a reacting flow case based on the eddy dissipation model. It has around 417,000 hexahedral cells, making it a small dataset with high communication overhead.
Figure 1 - ANSYS Fluent 16.0 (Eddy_417k)
From Figure 1 above, EDR shows a wide performance advantage over FDR as the number of cores increases to 80, and the difference continues to widen as the cluster scales. While FDR's performance gradually tapers off after 80 cores, EDR continues to scale as cores are added and performs 85% better than FDR on 320 cores (16 nodes).
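To make figures like that 85% concrete, here is a minimal sketch of how such a relative advantage is computed from Fluent's solver rating (benchmark jobs per day, where higher is better). The rating values below are hypothetical, for illustration only; the actual measurements are in Figure 1.

```python
def percent_advantage(rating_a: float, rating_b: float) -> float:
    """Return how much faster A is than B, as a percentage of B."""
    return (rating_a - rating_b) / rating_b * 100.0

# Hypothetical 320-core solver ratings (jobs/day) for EDR and FDR:
edr_rating = 3700.0
fdr_rating = 2000.0
print(f"EDR advantage: {percent_advantage(edr_rating, fdr_rating):.0f}%")  # prints: EDR advantage: 85%
```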
WRF (Weather Research and Forecasting)
WRF is a modeling system for numerical weather prediction, widely used in atmospheric research and operational forecasting. It contains two dynamical cores, a data assimilation system, and a software architecture that supports parallel computation and system extensibility. For this test, we study the performance of a medium-sized case, Conus 12km.
Conus 12km is a 12-km resolution case over the Continental U.S. domain. The benchmark simulates 3 hours of forecast time, after which we take the average of the time per time step.
Figure 2 - WRF (Conus12km)
Figure 2 shows both EDR and FDR scaling almost linearly and performing nearly identically until the cluster reaches 320 cores, where EDR outperforms FDR by 2.8%. This difference may seem small, but it is well above our highest run-to-run variation of 0.005% across three successive EDR and FDR 320-core tests.
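The average time per time step quoted above comes from WRF's timing output. As a sketch, assuming the standard "Timing for main" lines that WRF writes to its rsl log files (the sample lines below are made up for illustration), the average can be extracted like this:

```python
import re

# Matches WRF per-timestep timing lines in rsl.error.0000 / rsl.out.0000.
TIMING_RE = re.compile(r"Timing for main: .* (\d+\.\d+) elapsed seconds")

def mean_step_time(log_lines):
    """Average the per-timestep wall times reported by WRF."""
    times = [float(m.group(1)) for line in log_lines
             if (m := TIMING_RE.search(line))]
    return sum(times) / len(times)

sample = [
    "Timing for main: time 2001-10-24_12:01:00 on domain 1: 4.29000 elapsed seconds",
    "Timing for main: time 2001-10-24_12:02:00 on domain 1: 4.31000 elapsed seconds",
]
print(mean_step_time(sample))  # average of the two steps, ~4.30
```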
The HPC Advisory Council's result here shows a similar trend with the same benchmark. In their results, the two interconnects are neck and neck until the 8- and 16-node runs, where a small performance gap appears. The gap widens further in the 32-node run, where EDR posts a 28% better result than FDR. Both sets of results suggest that the EDR advantage could grow even larger as we scale beyond 320 cores.
NAS Parallel Benchmarks
NPB is a suite of benchmarks developed by the NASA Advanced Supercomputing Division to test the performance of highly parallel supercomputers; the benchmarks mimic the computation and data movement of large-scale computational fluid dynamics applications. For our tests, we ran four of these benchmarks: CG, MG, FT, and IS. In the figures below, the performance difference is shown in an oval above the corresponding run.
Figure 3 - CG
Figure 4 - MG
Figure 5 - FT
Figure 6 - IS
CG computes an approximation to the smallest eigenvalue of a large, sparse, symmetric positive-definite matrix using a conjugate gradient method. It exercises irregular long-distance communication between cores. From Figure 3 above, EDR shows a 7.5% performance advantage at 256 cores.
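For readers unfamiliar with the kernel, here is a minimal sketch of a conjugate gradient solve for A x = b with a symmetric positive-definite A. This is only an illustration of the inner solve, not the NPB code; NPB CG wraps such solves in an inverse power iteration to estimate the smallest eigenvalue.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x)  # solution of A x = b
```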
MG solves a 3-D Poisson partial differential equation. The problem is a simplification of those found in real applications, as it has constant rather than variable coefficients. It tests both short- and long-distance communication between cores, and unlike CG, its communication patterns are highly structured. From Figure 4, EDR performs 1.5% better than FDR on our 256-core cluster.
FT solves a 3-D partial differential equation using FFTs. It also tests long-distance communication performance, and shows a 7.5% performance gain with EDR on 256 cores, as seen in Figure 5 above.
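NPB FT time-steps its PDE by applying forward and inverse 3-D FFTs. As a simpler illustration of the same spectral approach (this is an illustrative example, not the NPB kernel), the sketch below solves a periodic 3-D Poisson equation with NumPy's FFTs and checks the result against a known solution:

```python
import numpy as np

# Solve  laplacian(u) = f  on a periodic grid over [0, 2*pi)^3.
n = 16
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u_exact = np.sin(X) * np.sin(Y) * np.sin(Z)
f = -3.0 * u_exact               # laplacian of u_exact is -3 * u_exact

k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0..n/2-1, -n/2..-1
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                # avoid divide-by-zero for the mean mode

u_hat = np.fft.fftn(f) / (-k2)   # divide in spectral space
u_hat[0, 0, 0] = 0.0             # fix the arbitrary constant (zero mean)
u = np.real(np.fft.ifftn(u_hat))
print(np.max(np.abs(u - u_exact)))  # near machine precision
```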
IS, a large integer sort application, tests both integer computation speed and communication performance between cores. From Figure 6, we see a 12% EDR advantage at 128 cores, which grows to a notable 16% at 256 cores.
Across both blogs, we have presented micro-benchmark and real-world application results comparing FDR and EDR InfiniBand. In these results, EDR shows higher performance and better scaling than FDR on our 16-node Dell PowerEdge C6320 cluster. Some applications show a wider performance margin between the interconnects than others; this comes down to the nature of the application, as communication-intensive workloads benefit far more from a faster network than compute-intensive ones. Because of our cluster size, we were only able to test scalability up to 16 servers (320 cores). In the future, we plan to rerun these tests on a larger cluster to further explore the performance difference between EDR and FDR.
How one company known for thwarting cyber attacks made improvements to protect its own data.
Imagine a company that thwarts millions of cyber attacks for clients all over the world only to discover that its own digital property and reputation are at risk.
In our e-book, you'll learn how ISC8 found itself facing a potential Achilles' heel: almost all of its intellectual property consists of digital assets, and backup turmoil was creating the threat of data loss. The wrong data protection strategy can set in motion a reversal from good fortune to bad, and undermine your organization's integrity.
ISC8 is a company with 150 patents. It knows how to detect sophisticated adversaries hiding inside enterprise networks, and it has a product line addressing some of the toughest security challenges an organization can face, including the user-identity or "human" layer, sometimes called layer 8 of the network stack.
To make the most of these achievements, and capitalize on its victories in the global battle for cyber security, ISC8 needed to face down data protection challenges on the home front: eliminating the risk of losing the data required for product development and financial auditing.
Last but not least, it had to prevent a loss of confidence in IT.
Trial by full recovery
ISC8 launched a new data protection strategy that deployed a mix of data protection solutions: a Dell DR series backup-to-disk deduplication appliance and NetVault Backup software. The two are tightly integrated to deliver automated backups and granular recovery. The new solution was subjected to a “trial by full recovery” — by copying 8 terabytes off of a server, executing a full recovery and undertaking a bit-wise comparison. The result: The full 8 terabytes were bit perfect.
By leveraging the right data protection solution, ISC8 was able to:
Data threats can be avoided
ISC8 surmounted its data protection challenges and has ejected a worst-case scenario — an irreversible loss of proprietary data — from the realm of the possible. That alone represents a substantial return on its investment in a new data protection strategy.
There are other aspects of data protection ROI that we’ll be blogging about — accelerating recovery, gaining IT staff time, reducing storage costs — but they all hinge on avoiding data loss.
Companies, like heroes, are rarely perfect. But when best efforts are put forward, potential data protection threats can be avoided.
Read our e-book for more examples of how organizations like yours are protecting against data threats and increasing their return on investment from data protection.
About James Gomez
James is a marketing content strategist for Dell Software's marketing organization.
View all posts by James Gomez
For a long time the “Holy Grail” of identity and access management has been single sign-on (SSO) – at least when you ask end users and executives that’s what they would say. After all, nothing makes regular folks happier than easy access to everything they need, with only one password to remember, and no need to call IT – ever – to fix your mistakes, because you don’t make them anymore. It all sounds good, but as any of us who have tried to achieve SSO know, it’s not quite that simple.
Maybe "less" sign-on, "reduced" sign-on, or "close to single" sign-on would be more accurate, and that's fine. Anything is better than the mess of unstreamlined access.
Let’s take a quick look at SSO through the ages.
So you can get single sign-on for everything, but it will take a combination of tools and technologies and may not be worth the effort. Many people these days take advantage of SSO in pockets, perhaps federation for SaaS apps and AD-based SSO for Unix and Linux, but often there are other critical systems that don't fit the deployment.
But you can get awfully close if you choose the right solutions.
I’ve recorded a short “white board” video that details the options for Web single sign-on and provides alternatives to limited, siloed solutions that only address one of the needs detailed above.
And if you want a more detailed discussion of SSO, how to ensure your project is successful, and how to "sell" the benefits of doing it right, read this white paper: Moving SSO beyond convenience
It's almost Valentine's Day! Birds are singing, love is in the air and candy bowls around the office are filled with tiny hearts. As the holiday approaches you're working hard to make your workday more efficient so you can clock out on the 14th and spend some time with that special someone! But think back to when you were a kid in elementary school. Valentine's Day was a time when you ran around giving paper hearts and candy to everyone in the class! It was a time to reach out to the people around you and let them know how special they are! Why not take that Valentine spirit with you into the workplace this year?
February 14th is an opportunity for us to show a little extra love and appreciation to all the unsung heroes we work with every single day: the people who keep our IT systems going day in and day out! Don't forget your faithful DBAs, SysAdmins, and the IT team that works tirelessly to keep things running smoothly. This year, rediscover the joy and appreciation of Valentine's Day and show a little extra love to your teammates!
Here are 3 simple ways to show your DBAs, SysAdmins, and IT team some extra love this Valentine's Day!
1. Surprise them with some coffee in the morning!
Nothing shows appreciation like bringing that sweet, sweet nectar of the gods... coffee.
...Bonus points if it comes with a box of donuts.
2. Stop and say "hello!"
Sometimes, all it takes is to pause for a moment and ask someone how they're doing! Life is crazy busy, true. But don't let it get so busy that you can't stop and ask someone how their day is going.
3. Ambush them with a big "Thank You!"
Two little words. That's all it takes. You can turn someone's day around with a simple, "Thank You!" So use up all those extra office supplies you have around and build someone a Thank You Valentine!
There you have it! 3 simple ways to show your DBAs, SysAdmins, and IT team some extra love this February!
Do you have any other ideas? Let us know in the comments below!
And from all of us here at Dell Software, Happy Valentine's Day!
About Ryan McKinney
Ryan McKinney has been a Social Media and Communities Advisor for Dell Software since 2014.
View all posts by Ryan McKinney
As I sit in my Columbus, Ohio office on February 9th and look out the window, I frown at the sight of my car already covered in snow. The holidays have come and gone.
I'm seeing doom-and-gloom weather reports of weekend storms to come. The harsh reality has hit me: winter is here! Depending on where you live and what type of climate you're in, you may be experiencing the same winter blues. But the best way to fight the winter doldrums is to look forward to spring and all that it has to offer! Nice weather. Beautiful flowers. HBO's Game of Thrones (where winter is still coming). But most importantly — NEW Microsoft releases!
Microsoft Releases Equivalent to Christmas Morning
We've made it easy for you to stay current on what's new and learn what steps you can take today to get ready for your next migration project. Register for our on-demand webcasts to learn about the latest Microsoft releases.
Today you don’t have to have thousands of employees to have serious application management and endpoint protection challenges. Organizations of all sizes have a broad mix of operating systems and applications to deploy and keep up to date; a wide variety of desktops, laptops, smartphones and other devices to manage and secure; and geographically disparate sites that make hands-on attention from IT impractical, if not impossible.
The difference is, some organizations are struggling with these challenges while others have application management well in hand. I'd like to tell you about one of the latter: online dating firm Meetic Group. (Spoiler alert: the company completed a Microsoft Office upgrade without any issues in just two weeks.)
Too Many Systems Management Tools Can Be as Bad as Too Few
Meetic Group is headquartered in France, but its 500 employees are spread across Europe. The small IT team found it challenging to manage all the company's desktops and laptops — Windows, Mac and Linux — and maintain an up-to-date inventory of software assets. The problem wasn't a lack of tools; they had an open-source solution for desktop and asset management, as well as Active Directory Group Policy settings and the Microsoft Deployment Toolkit. The problem was that they didn't have the right tools.
“We had multiple solutions to oversee, and none of them gave us the level of insight we really needed to manage the client estate effectively,” explains Frédéric de Ascencao, internal IT manager at Meetic Group. “Ideally, we wanted a single solution that could help us obtain more value from our desktops and laptops for the business and personnel.”
Centralized Application Management Delivers a Host of Benefits
So Meetic Group went shopping. Through careful research and extensive testing, the company found a clear winner: Dell KACE K1000 and K2000 appliances. “They arrive preconfigured and can be up and running in a couple of hours,” says de Ascencao. “Furthermore, we didn’t need to spend time or money up-skilling our administrators. Using a basic knowledge of IT, an administrator can install and operate Dell KACE solutions. It’s that simple.”
Trading a hodgepodge of disparate tools for the integrated Dell solution has really paid off. Now Meetic Group enjoys:
To learn more about how efficient, centralized application management can make a real difference to your business, check out the latest chapter in our e-book, “Technology Tunnel Vision, Part 3: Expanding control of your application environment.”
About David Manks
David Manks is a Solutions Marketing Director for Dell Software focusing on endpoint management and security products.
View all posts by David Manks
The more money you invest in IT, the greater the return you want on your investment. Estimates from IDC show that worldwide healthcare IT spending will rise from $115 billion this year to $135.7 billion by 2019. That means you and your counterparts are increasing your investment in areas like patch management software, remote administration tools and systems management, and probably expecting more and more value from it.
We’ve put together a paper called Realizing the Return on Healthcare IT Investment to highlight the way that comprehensive systems management maximizes the value of healthcare IT. It turns out that investment isn’t the only thing increasing from year to year; complexity is increasing as well, for several reasons.
Complexity in Healthcare IT
That’s why we emphasize the value of comprehensive systems management in healthcare IT, extending to patch management, physical inventory, system auditing, vulnerability scanning and configuration management.
It’s also a big part of fulfilling HIPAA requirements. Being able to see, update and account for all of the hardware and software in your organization is a long step on the path to compliance.
The Dell KACE K1000 Systems Management Appliance and the Dell KACE K2000 Systems Deployment Appliance help healthcare IT teams cut through complexity and maximize the return on their investments in IT. Have a look at our paper, Realizing the Return on Healthcare IT Investment, to gauge the fit with your healthcare environment.
About Christopher Garcia
A ten-year Dell veteran, Chris has had experience in various marketing roles within the organization. He is currently a Senior Product Marketing Manager.
View all posts by Christopher Garcia
Dell Desktop Authority v.9.3 is now live!
We're happy to announce the release of Desktop Authority v9.3, which is Windows 10 ready. We've also made some performance enhancements and opened up features in certain licenses. Have a look!
Support for New Platforms
Desktop Authority 9.3 now supports:
SQL Server 2014
Exchange Server 2013
Enhancements and Improvements
We've added the MSI Packages feature to the Standard edition
We've added Hardware/Software Inventory & Reporting to the Standard edition
We've also enabled the USB/Port Security feature for all Standard and Professional licenses
We've also spent time on performance improvements. In this release, we've fixed issues reported by customers and have found and fixed numerous intermittent bugs, performance bottlenecks, and stability issues.
See Release Notes for a complete list.
New to the product? Ready to try it in your own environment?
There’s an old saying: Insanity is doing the same thing over and over while expecting different results.
If you’re tired of spending your nights and weekends performing Oracle upgrades and migrations, struggling to minimize downtime and business impact, and worrying about what might happen if the migration fails, it’s time to shake things up. You don’t have to limit yourself to old habits and old tools that don’t deliver the results you need. To get different results — better results — you need new approaches and new tools.
For example, the traditional way to reduce the impact of a migration on the business is to schedule resource-intensive tasks during times of low activity. But before you just accept all those long evenings and weekends in the office, look into newer technologies, such as near real-time replication, that can minimize the migration’s impact on the business — and your personal life.
Lest we throw out the baby with the bathwater, let’s take a hard look at the traditional methods for performing upgrades and migrations and determine whether and when they are helpful:
In short, while each of these tools has value in certain specific scenarios, all of them are complex or resource-intensive, require lengthy downtime of production systems, or work only for Oracle databases. Fortunately, you don’t have to limit yourself to these traditional tools. In my next blog, I’ll explain why investing in an enterprise tool is a smart alternative.
You can also learn more in our new e-book, “Simplify Your Migrations and Upgrades: Part 2: Choosing a fool-proof method and toolset.”
About Steven Phillips
With over 15 years in marketing, I have led product marketing for a wide range of products in the database and analytics space. I have been with Dell for over 3 years in marketing, and I’m currently the product marketing manager for SharePlex. As data helps drive the new economy, I enjoy writing articles that showcase how organizations are dealing with the onslaught of data and focusing on the fundamentals of data management.
View all posts by Steven Phillips
If you’re like most system administrators, you’ll never have Mike Tyson working on your Exchange migration or Active Directory migration project. But when he says, “Everyone has a plan till they get punched in the mouth,” you’d better know he’s talking about you. And your project.
As a system administrator or IT manager, you almost always have migration projects on your radar, and some of the 2016 releases from Microsoft (think: Exchange, Windows Server, SharePoint) may be coming into your environment soon.
Have you got a plan? Are you ready in case you get punched in the mouth?
Active Directory and Exchange Recovery Plans
Think about a few things that can go wrong in an AD migration or Exchange migration:
None of that hurts as much as being punched in the mouth, but why run the risk when you can put in place an Exchange recovery plan using Windows recovery software before any of it happens?
Implementing Recovery Manager for Exchange and Recovery Manager for Active Directory lets you do three things to ensure your migration goes smoothly:
Recovery Manager is like buying insurance against the things that can go wrong in your migration project. Since there’s no such thing as a perfect project, recovery software is a good way to keep your Exchange migration or AD migration plan intact and on schedule.
Planning an Exchange or AD Migration? New Tech Brief
Which Windows migration projects are on your horizon? Have you devised a migration plan yet? More important, how about a recovery plan?
Have a look at our new tech brief, Planning an Exchange or AD Migration? Three Reasons to Include a Plan for Recovery. You’ll see in greater detail how a recovery plan can help you overcome getting punched in the mouth mid-project. You’ll also see real-world scenarios in which sysadmins have used Recovery Manager to get email back and keep their migration projects on schedule.
What you won’t see is Mike Tyson. When we wrote the tech brief, he was still on bed rest. After all, everyone has a plan till they get punched by a hoverboard.