Welcome to the Dell Cloud Marketplace Blog (now part of the Dell Cloud Blog)!
The Dell Cloud Marketplace simplifies how you intelligently compare, purchase, use, and manage cloud solutions from a variety of leading public cloud vendors and cloud-based applications. We launched a private beta of the Cloud Marketplace earlier this month. If you are interested in learning more about the beta program or would like to participate, please sign up here.
This blog will feature information about the Dell Cloud Marketplace, general cloud trends, customer cloud needs and successes, cloud brokerage and other enabling technologies. We (the product and development team) look forward to engaging with you through this blog and look forward to receiving your comments and feedback.
To get the latest updates about the Dell Cloud Marketplace, please visit us often or subscribe to our RSS feed. In the interim, listen to Nnamdi Orakwue, Dell VP of Software Strategy, Operations and Cloud, talk about how Dell helps our customers in the cloud.
by Mayura Deshmukh and NCSA Private Sector Program (PSP)
As new server technologies become available, each generation is touted as being better, faster, more efficient, and laden with new features. For an HPC user, the questions are “how much better?”, “how much faster?”, and, especially, “how much does it benefit my specific application?”
For nearly 30 years, the National Center for Supercomputing Applications (NCSA) has been a leading site for some of the world’s most advanced computing and data systems. NCSA’s Private Sector Program (PSP) is also the largest industry engagement effort at an HPC center in America, providing computing, data, and other resources to help companies leverage supercomputing to solve the most challenging science and engineering problems.
This blog presents some of the work conducted by NCSA’s PSP to measure and analyze the performance benefits of the new Intel Xeon E5-2600 v2 processors (code-named Ivy Bridge) over the previous generation E5-2600 series (code-named Sandy Bridge). The focus here is on various HPC applications widely-used by researchers and designers in the manufacturing sector. A mix of commercial and open source applications like ANSYS Fluent, LS-DYNA, Simulia Abaqus, MUMPS, and LAMMPS were selected for this investigation. High-Performance Linpack (HPL) benchmark results are also presented.
The main difference between the two processor generations under study is the shrink in process technology from 32nm to 22nm. This allows more transistors on the chip, higher clock rates, and better power management. For example, running the SPECpower benchmark at 100% load on a Dell PowerEdge R720 with Ivy Bridge (IVB) Intel Xeon E5-2660 v2 processors showed 25% lower peak CPU power consumption compared to a HUAWEI Tecal RH2288 V2 server with Sandy Bridge (SB) Xeon E5-2660 processors. A 2.0 GHz SB processor runs at a max TDP of 60W, whereas a 2.0 GHz IVB processor runs at a max TDP of 50W. Also, for processors with a maximum TDP of 150W, the clock speed of IVB (3.4 GHz) is higher than that of SB (3.1 GHz). It is essential for users to understand the power and frequency differences between the two generations before selecting a processor for their application. Both processors have an integrated memory controller that supports four DDR3 channels. SB processors support memory speeds up to 1600 MT/s, whereas IVB processors support up to 1866 MT/s.
When selecting a CPU for your application, it is important to consider factors like power consumption and usable bandwidth (for applications running on multiple nodes), along with clock speed and the number of cores.
Table 1 gives more information about the application and the hardware configuration used for the tests.
* 1866MHz unless otherwise noted
Figure 1 shows the performance gain with 12-core IVB processors over the 8-core SB processors for various applications. For each application, the baseline is the Sandy Bridge system. Tests were conducted on a single node with two Intel Xeon E5-2670 (8-core) SB processors and two Intel Xeon E5-2697 v2 (12-core) IVB processors, as noted in Table 1.
Figure 1 compares the performance of HPL. HPL solves a random dense linear system in double-precision arithmetic on distributed-memory systems. The problem size (N) used for the 12-core IVB system was sized to use ~81% of the total memory. Since HPL is a CPU-intensive benchmark, higher performance is expected with higher clock frequencies and more cores. BIOS Turbo mode was enabled for this test. The performance achieved with the IVB processor was ~1.5 times that measured with SB. Note that the performance gain with IVB measured by this synthetic benchmark is much greater than that measured for actual applications. This truly emphasizes the value of the detailed application benchmarking studies conducted by NCSA! Using only micro-benchmark and synthetic benchmark data to evaluate systems doesn’t always translate into gains for real-world use cases.
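The “~81% of total memory” sizing rule can be sketched in a few lines. The 64 GB memory size below is illustrative, not the NCSA node configuration, and the block size NB=192 is just a typical choice, not a value from these runs:

```python
import math

def hpl_problem_size(total_mem_bytes, fraction=0.81, nb=192):
    """Largest N (rounded down to a multiple of the block size NB)
    such that the N x N double-precision HPL matrix (8*N^2 bytes)
    fits in the chosen fraction of total memory."""
    n = int(math.sqrt(fraction * total_mem_bytes / 8))  # 8 bytes per double
    return (n // nb) * nb

# Illustrative: a node with 64 GB of RAM
print(hpl_problem_size(64 * 1024**3))
```

Sizing N too small leaves FLOPS on the table; sizing it too large pushes the run into swap, which is why a fraction of memory, not all of it, is used.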
Figure 1 shows a 25% performance gain with IVB on ANSYS Fluent, a computational fluid dynamics application. The benchmark problem used was a turbulent reacting flow case with large eddy simulation. The case has around 4 million unstructured hexahedral cells. The BIOS Turbo mode was disabled for the test. Figure 2 compares the performance of the 10-core IVB to the 8-core SB processor, showing a 22% performance gain for the IVB processors. The 12-core IVB processor offers slightly better performance than the 10-core IVB model, which is expected due to the increase in the number of cores. Note that this test disabled Turbo, so we are comparing rated base CPU frequencies. In cases where Turbo is enabled, both IVB and SB processors can operate at higher clock rates, and that should be taken into account when comparing results.
LS-DYNA is a general-purpose finite element program from LSTC capable of simulating complex real-world structural mechanics problems. The “Dodge Neon Refined Revised” benchmark is a higher-resolution mesh of the standard "Neon" benchmark, and features approximately 5M DOFs. The BIOS Turbo mode was enabled for the test. Here the IVB processors performed 12-16% better than the SB processors (as shown in Figures 1 and 2). It will come as little surprise to those familiar with LS-DYNA that this observed performance increase tracks linearly with the 16% memory bandwidth increase of the IVB platform over the SB platform, showing that application performance depends on much more than just the sheer number of CPU cores or clock frequency.
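That bandwidth figure follows directly from the memory configuration: four DDR3 channels at 1600 MT/s versus 1866 MT/s. A quick back-of-the-envelope check of the theoretical peak (real-world usable bandwidth is lower, but the ratio is what matters):

```python
def peak_bandwidth_gbs(channels, mega_transfers, bytes_per_transfer=8):
    """Theoretical peak memory bandwidth in GB/s:
    channels * transfer rate * 8 bytes per 64-bit transfer."""
    return channels * mega_transfers * 1e6 * bytes_per_transfer / 1e9

sb = peak_bandwidth_gbs(4, 1600)   # Sandy Bridge platform
ivb = peak_bandwidth_gbs(4, 1866)  # Ivy Bridge platform
print(f"SB {sb:.1f} GB/s, IVB {ivb:.1f} GB/s, gain {(ivb / sb - 1) * 100:.1f}%")
```

This works out to roughly a 16-17% theoretical gain, in line with the platform difference that the LS-DYNA results track.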
Dassault Systèmes’ Abaqus offers solutions to engineering problems covering a vast spectrum of industrial applications. Abaqus/Standard applications include linear statics, nonlinear statics, and natural frequency extraction. The S4B (5M DOF direct solver version) benchmark is a mildly nonlinear static analysis that simulates bolting a cylinder head onto an engine block. It is a more compute-intensive benchmark with a high degree of floating-point operations per iteration. These types of models scale better than communication-bound benchmarks, like the Abaqus E6 dataset, where more time is spent communicating messages than in the solver.
Abaqus/Explicit benchmarks focus on nonlinear, transient, dynamic analysis of solids and structures using explicit time integration. The E6 (concentric spheres with 23,291 increments, 244,124 elements) benchmark consists of a large number of concentric spheres with clearance between each sphere. The Abaqus/Explicit E6 model is more communication-intensive than Abaqus/Standard, with large systems of equations passing information to one another as they are solved.
Turbo mode was enabled for both Abaqus benchmarks. The large arrays of equation solvers and datasets used in the simulation also require a large, fast memory system. For better performance with all applications, it is recommended to install enough memory that the job resides completely in physical memory, minimizing I/O swapping to a local or network file system. The results show ~3% and ~12% improvement with 24 cores of IVB for the S4B and E6 benchmarks, respectively. Test results were gathered using 1600 MT/s RAM in both systems; performance for IVB is expected to be 5-10% higher with 1866 MT/s RAM.
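The “job fits in physical memory” recommendation is easy to check before submitting a run. This is a Linux-only sketch (it relies on `sysconf`), and the 0.9 headroom factor is an assumption to leave room for the OS, not a value from the benchmarks:

```python
import os

def fits_in_ram(job_mem_bytes, headroom=0.9):
    """Return True if the estimated solver memory footprint fits
    within a fraction of installed physical RAM (Linux-only)."""
    total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return job_mem_bytes <= headroom * total

# Illustrative: a solver estimating a 48 GB in-core footprint
print(fits_in_ram(48 * 1024**3))
```

Most solvers (Abaqus included) report an estimated in-core memory requirement in their log output; feeding that estimate into a check like this avoids runs that silently spill to disk.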
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is an open source classical molecular dynamics simulator designed to run efficiently on parallel machines. The EAM metallic solid benchmark with a problem size of 32,000 atoms was used here. The BIOS Turbo mode was enabled for the SB processors. The results in Figure 1 show a significant improvement of 26% with the 12-core IVB system. This performance is expected, as LAMMPS runs efficiently in parallel using message-passing techniques and scales well with additional cores. Also, enabling the BIOS Turbo mode option for the Ivy Bridge processors did not significantly impact performance, most likely because the system is already running close to TDP (Thermal Design Power).
MUMPS (Multifrontal Massively Parallel sparse direct Solver) is a software application for solving large sparse systems of linear algebraic equations on distributed-memory parallel computers. The number of equations solved (the order of the matrix) was N = 1,522,431 and the number of nonzero values used was NNZ = 63,639,153. BIOS Turbo mode was enabled for the tests. The results were gathered using 1600 MT/s RAM in both the 8-core SB and 12-core IVB systems. The benchmark measured a 37% performance improvement with 24 cores of IVB over 16 cores of SB, primarily due to the increase in the number of cores. Breaking this down further, Figure 3 shows the results of scaling within the server. The horizontal X-axis shows the number of cores used on the 24-core IVB system, and the Y-axis shows the performance gain achieved compared to 8 cores of the SB processor. As the number of cores increases, the problem gets solved faster.
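The kind of intra-node scaling shown in Figure 3 can be summarized by turning wall-clock timings into speedup and parallel efficiency. The timing values below are hypothetical, not NCSA's measurements:

```python
def scaling_summary(timings, base_cores):
    """Map core count -> (speedup, efficiency) relative to the run
    at `base_cores`. `timings` maps core count -> seconds."""
    t_base = timings[base_cores]
    return {c: (t_base / t, (t_base / t) / (c / base_cores))
            for c, t in sorted(timings.items())}

# Hypothetical single-node timings in seconds (not NCSA data)
runs = {8: 100.0, 12: 72.0, 16: 58.0, 24: 44.0}
for cores, (speedup, eff) in scaling_summary(runs, 8).items():
    print(f"{cores:2d} cores: {speedup:.2f}x speedup, {eff:.0%} efficiency")
```

Efficiency below 100% as cores increase is the usual signature of shared memory bandwidth and communication overhead, which is why gains like the 37% above are smaller than the raw increase in core count.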
NCSA’s testing shows that the Ivy Bridge processors provide better performance than Sandy Bridge for all the applications. The actual improvement depends on the application and its characteristics. The cost/benefit analysis of upgrading to IVB processors should be based on a particular application workload, not just on the number of cores or higher clock rates. Still, the work conducted by NCSA gives users insight into the performance gains they can expect for these widely-used applications.
Dell and NCSA continue to work together to investigate new HPC technologies and provide clear information to users from industry, with the shared goal of helping this community improve the performance of its applications and make informed, data-driven decisions about its HPC solutions.
If you’re interested in leveraging the computing and expertise resources available through NCSA’s Private Sector Program, contact Evan Burness at firstname.lastname@example.org.
It’s that time of year again, when schools and universities start preparing for their major summer rollouts and reimaging projects. K2000 version 3.6, released earlier this year, added Multicast capabilities to the K2000, which should cut down the total time to finish those deployment projects. If you haven’t already signed up, we encourage you to check out the Education Week KKE sessions; see the list above for dates and times.
So what exactly is Multicasting? Multicasting allows the K2000 to send image data to multiple devices simultaneously, reducing the overall bandwidth and disk usage of the K2000. While Multicasting will not necessarily speed up the imaging of an individual device, it does allow more devices to be imaged concurrently without any additional load on the K2000. Instead of imaging a handful of computers at a time, imagine deploying entire floors, buildings, and regions at once!
Haven’t set up Multicasting on your K2000 yet? Don’t fret; it is very easy. First, make sure you have updated your K2000 to 3.6 (you should be receiving a notification on your Home module). Next, create your “Gold-Master” image and capture it as a WIM image (Multicasting does not support K-Image or Scripted Installations). Then go to your Deployments module -> Automated (Boot Actions) tab and create a new boot action. Name the Boot Action and select your WIM image. Under Schedule, choose to run at next boot or at a future date/time. Under Type, select Multicast and click the Show Advanced Settings link. Now we have some options to discuss!
Finally, save your Multicast session and start PXE booting the targeted computers into your K2000 Boot Environment (KBE). The K2000 will identify the devices by their MAC address and initiate the assigned Boot Action. Be aware that only one multicast session can be broadcast at a time, and that it does not work through Remote Site Appliances as of this publication.
Here are some useful Knowledge Base articles to reference while creating your images: System Imaging Best Practices. To learn more about K2000 version 3.6, check out this KKE: What’s New in K2.
Overall, Multicasting opens up a whole new world for customers by letting you complete your summer imaging projects in far less time. With the time you save, you may actually be able to take that vacation you were planning, or plan your trip to Austin for Dell World User Forum in November!
For the last 8 months I've focused most of my customer-facing efforts on Foglight accounts that use Foglight for enterprise system monitoring. I engaged these accounts in the role of Customer Success Manager and have been active weekly in that role for many of them. The Enterprise classification simply means that a firm monitors one or more technologies on more than 100, and up to tens of thousands of, servers. One of the most basic things I've learned, or validated, is that enterprises evaluate system monitoring solutions like Foglight the same way they would evaluate any other business application. At a high level, initial evaluations, and the re-evaluations that happen at renewal time, are based on value versus cost. The goal, simply stated, is to get a disproportionate amount of value for what is being spent on application deployment and maintenance.
One way to look at the value of an application is to consider the number of beneficiaries and the value per beneficiary. These beneficiaries can be internal users such as employees, business partners, customers, and in some cases even other applications. For this conversation I felt it best to break the value of an application down into 3 areas:
During these engagements I've noticed that Enterprise customers frequently look at these usage patterns as a way to gauge value. This is the case not only for initial evaluations, but also as customers look to get more value from what they already own, or when they assess what they are getting versus what they originally hoped to get from their application purchase.
The second part of the overall value equation is the cost of an application. Getting value is a good start, but the true worth of an application comes from weighing its overall value against its overall cost. For the purpose of this discussion I've also broken application cost considerations into 3 areas:
We've been evaluating Foglight with this value-to-cost breakdown in mind, searching for opportunities to increase value and reduce costs specifically for our Enterprise customer segment. Beyond the value that comes from breadth of coverage and the addition of new technology monitors, we wanted to take a hard look at what Foglight Enterprise users are doing and also at what Foglight Enterprise maintainers are doing. Increasing the value of Foglight for enterprises will come from expanding the user base and providing that base with more useful and actionable information. Part of this may be making it easier and faster for end users to get information specific to their role and environment than they can today. Decreasing cost will come from making the day-to-day tasks of the maintainers, the Foglight administrators, easier and less costly. Improvements in this area will come from product enhancements, knowledge sharing among Foglight Enterprise customers, and working better with infrastructure management ecosystems. One commonly requested example of this is documenting how Foglight can be used with automation tools like Puppet and Chef.
Building a solution to manage and gather information from hundreds or thousands of systems has distinct advantages and unique challenges. By breaking out these advantages and challenges specifically for the Foglight Enterprise segment in terms of the value delivered to the user base, and the costs, we can focus more on the Enterprise customers that are quickly becoming the majority of the Foglight user base.
This post was originally written by Perumal Raja and Vinay Patkar. Send your suggestions or comments to WinServerBlogs@dell.com.
Dell is making it easier for customers to exercise downgrade rights to previous Microsoft Windows Server® OS versions. Downgrade rights allow a customer to install previous versions of the Windows Server OS on a newly purchased server with the latest license. For example, if you are buying a Dell server with Windows Server 2012 R2 Standard Edition, you are also entitled to run Windows Server 2012 and Windows Server 2008 R2 SP1 Standard Edition. This helps those who want to standardize on a previous version of the OS while keeping the option to upgrade later. For more information about downgrade rights, refer to the Microsoft links.
Dell currently offers the option of DVD media for the downgrade versions, including product activation keys, with OS factory installation. Your feedback indicates that keeping track of the media over time can be difficult, and ordering additional media from Dell or Microsoft can delay implementation. To provide a better customer experience, we are placing the ISO files of the downgrade OS versions on the hard drive with the factory-installed OS. When the system boots for the first time, a screen pops up (Fig 1) to inform you of the location of the downgrade ISO file(s) and provide the option to save or delete them.
Option 1 – Save the downgrade ISO files to a different location by selecting the files and clicking the SAVE button.
Option 2 – Keep the ISO files in their current location by selecting the files and clicking the CANCEL button.
Option 3 – Delete the ISO files if you don’t want them by selecting the files and clicking the DELETE button.
NOTE: Product activation keys for the downgrade images are provided with the physical recovery media kit. It is highly recommended to store them in a safe place, as they cannot be replaced.
Researchers at the Texas Advanced Computing Center (TACC) are using the Stampede supercomputer to study a link that neurological and psychiatric diseases share with addictions. The link is dopamine, a neurotransmitter that plays an important role in our cognitive, emotional, and behavioral functioning.
For example, cocaine and amphetamines release dopamine at increased levels, causing greater energy and euphoria. However, that increased level also helps lead to addiction. TACC's Stampede, the world's 7th-fastest computer, is being used to examine a molecular-level 3D model of the dopamine transporter to help researchers better understand how dopamine binds with various substances, and how that binding leads to addiction.
These simulations have also fed into research on dopamine's role in Parkinson's and other diseases. Mutations in the gene encoding the dopamine transporter have been linked to Parkinson's. In this case, Stampede is supporting actual clinical trials by allowing researchers to examine the dopamine transporter and better understand how these irregularities have affected the clinical study's participants.
Stampede's role in working toward cures for diseases like Parkinson's, and in helping us better understand addiction, is an exciting example of just how much HPC matters!
You can learn more about the research at HPCwire or at TACC.
As with nearly everything on Mac OS X, administrative tasks can be accomplished via the command line or via the GUI. Printing is no different. Printers on OS X are managed by CUPS (Common Unix Printing System), which has its own set of command-line tools. The basic capabilities include viewing, adding, and removing printers, as well as changing printer options and viewing print queues.
There are many things that can be done with CUPS, and once you master a few of them you’ll be able to implement them in K1000 Scripting. Once you’ve scripted the CUPS commands you need, you can make them available to users, or you can distribute them from the admin side of the table; it’s your call. One of our favorite Kace Koaches has written up some great details and examples on ITNinja just for you. See more at: http://www.itninja.com/blog/view/managing-printers-on-os-x-with-kace
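As a sketch of where this can go, here is a minimal Python wrapper around the CUPS command-line tools. `lpstat -p` and `lpadmin` are standard CUPS commands, but the queue names and URIs shown in the comments are hypothetical, and a real K1000 script would add error handling:

```python
import subprocess

def parse_lpstat_printers(output):
    """Extract queue names from `lpstat -p` output lines such as
    'printer HP_LaserJet is idle.  enabled since ...'."""
    return [line.split()[1] for line in output.splitlines()
            if line.startswith("printer ")]

def list_printers():
    """Return the print queues CUPS knows about (requires CUPS installed)."""
    out = subprocess.run(["lpstat", "-p"], capture_output=True, text=True)
    return parse_lpstat_printers(out.stdout)

def add_printer(name, device_uri, ppd_path=None):
    """Create and enable a queue with `lpadmin`, e.g.
    add_printer("Lab_Printer", "ipp://printhost.example.com/printers/lab")."""
    cmd = ["lpadmin", "-p", name, "-v", device_uri, "-E"]
    if ppd_path:
        cmd += ["-P", ppd_path]  # optional driver PPD file
    subprocess.run(cmd, check=True)

def remove_printer(name):
    """Delete a queue with `lpadmin -x`."""
    subprocess.run(["lpadmin", "-x", name], check=True)
```

Wrapping the commands this way keeps the parsing separate from the shelling-out, which makes the logic easy to test before pushing the script out through the K1000.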
By Alex Dubrovsky, Director of Software Engineering & Threat Research, Dell SonicWALL (Dell Security)
During the recent Threat Update, we discussed various malware targeting Android-based devices (smartphones). Previously, the purpose of Android-based malware was mainly data theft, reconnaissance, and malware propagation onto Windows-based machines. However, we recently discovered that ransomware has now made its way onto Android smartphones. To stay up to date on the latest threat research via the Dell SonicWALL Security Research Center, please click here.
Here’s a snapshot of my analysis as to why this is particularly noteworthy:
AndroidLocker prevents the phone from entering sleep mode, completely locks the user out of the phone, and uses a geolocation API to determine the ransom amount based on the user’s location.
SimpleLocker tries to mimic Windows-based malware that encrypts data (such as CryptoLocker and CryptoWall) by encrypting the user’s data stored on the SD card and charging a fee to decrypt it. However, this malware is not very sophisticated yet: it uses symmetric AES encryption, and the encryption key can be recovered by disassembling the malware. This illustrates a trend in Android-based malware: it will become more sophisticated and will try to monetize Android-based devices just as it has Windows-based devices.
Further into the presentation, I talked about a new trojan (Soraya Infostealer), which combines the form-stealing functionality of a banker trojan with the memory-scraping functionality of POS (point-of-sale) malware. In the examples I showed, malicious code is injected into IE processes to invoke form-grabbing functionality, while other threads scan the memory of running non-system processes, invoking the POS memory-scraping functionality. I also showed examples of how information stolen by Soraya is communicated back to the C&C server.
In the wrap-up, we discussed CVE-2014-0502 (an Adobe Flash zero-day) and the Parcim trojan, which was being propagated using the Adobe Flash vulnerability. I gave examples of how the vulnerability was being exploited in the wild and provided information about the data being communicated back to the C&C server.
To access the detailed presentation, please click here.
We are pleased to introduce the Foglight cartridge to monitor VMware vSwitches. Foglight for Virtualization 7.2 release contains many enhancements; one of them is this brand new feature.
Why monitor virtual networks?
Foglight currently monitors CPU, memory, and storage performance, and it already monitors networking performance at the VM and ESX level. A lot of network-switching functionality (teaming, traffic prioritization, bandwidth sharing, etc.) is now part of the VMware infrastructure, and many of these policies are set and implemented at the vSwitch level.
Customers routinely use vSwitches to share bandwidth between application and data traffic as well as infrastructure operations like storage migration, vMotion, backup, etc. When many applications and IT operations use the network simultaneously, having insight into what is going on becomes important. Virtualization has better ROI because it increases utilization, but that also necessitates close monitoring of datacenter performance.
Before we look at the uses, let’s understand what virtual switches are and how they are used.
What is a Virtual Switch?
Take a look at the diagram here (small part of it reproduced below).
The virtual switches (shown in orange) are part of the ESX hosts. Virtual machines connect to these virtual switches using virtual Ethernet adapters. These switches can be administered like regular physical switches, and they generally behave in a similar fashion too, except that vSwitches exist only in software. vSwitches are administered using VMware vCenter.
There are two types of virtual switches: Standard and Distributed.
A Standard vSwitch is wholly contained within a single ESX host; only VMs running on that host can connect to it. When a VM is moved from one ESX host to another, it must connect to the new host's switch. It is up to the administrator to make sure that all switches are configured consistently so VMs can connect to them if they move between hosts.
Distributed virtual switches span ESX hosts: there is a single switch that all VMs on the spanned hosts connect to. So when VMs move from host to host, their networking properties remain the same. Networking paths, traffic prioritization, etc. need not be programmed per server; only one configuration is needed for the switch.
As you can probably guess, Distributed switches came much later than Standard switches. They also offer more control and support monitoring protocols like NetFlow and SNMP.
Apart from these two vSwitches provided by VMware, other vendors have ported their own switches to the VMware platform. One of the most popular of these third-party vSwitches is Cisco’s Nexus 1000V. It is a distributed vSwitch, but it also integrates into Cisco’s management framework.
More information about vSwitches can be found here and here.
Let’s look at a couple of interesting uses of this technology and ways to monitor them.
Use Case: Topology
Since all VMs and hosts in the data center are networked, a network diagram of the connections gives the administrator a snapshot of the health of the virtual infrastructure.
Use Case: Monitor vSwitch utilization
vSwitches form the core of VMware networking. All data traffic passes through them, and they are also used for sharing bandwidth (e.g., give 60% of the bandwidth to VM traffic and 40% to system/management traffic like vMotion).
Foglight monitors vSwitches and plots some important metrics on the default dashboard to answer questions like:
- Which of the VMs are highest users of the vSwitch networks?
- Which ESX hosts are highest users of the vSwitch (for distributed vSwitches)?
- What are the packet-loss statistics for the switch?
- What types of traffic are flowing through the vSwitch, and what is their relative bandwidth use?
Virtual network monitoring adds an important dimension to the performance monitoring of CPU, memory, storage, and I/O, making the performance picture much more complete.
vSwitches are part of the virtual infrastructure, separate from the physical network switches. By monitoring virtual switches, Foglight will help VMware administrators understand the networking characteristics of their virtual machines and ESX hosts. They will be able to more effectively monitor the health of the NICs, gauge the effectiveness of traffic-shaping policies set at the vSwitch level, and keep track of the topology.
vSwitch monitoring represents a significant enhancement to Foglight’s end-to-end monitoring story. This cartridge is available free to current customers using the VMware cartridge.
Footnote: vSwitches are NOT physical network switches. Naturally, they don’t know anything about networking outside the virtual infrastructure; e.g., traffic between a NetApp array and an ESX server cannot be fully monitored using vSwitches. Monitoring physical network switches for storage traffic is the domain of Foglight for Storage Manager, and that feature is on the roadmap for that cartridge.
It's SysAdmin Day!
Today is System Administrator Appreciation Day, a day when all system administrators feel the love. To celebrate SysAdmins everywhere, we are giving away an Alienware and five $100 Amazon gift cards! If you're a system administrator, find out how you can enter the SysAdmin Sweepstakes. Today is the last day!
And Happy SysAdmin Day!