A team of researchers at Houston Methodist Research Institute (HMRI) has uncovered a link between Alzheimer's disease and brain cancer. The discovery may lead to better treatments or new medications for both diseases.
The HMRI scientists used the Stampede and Lonestar supercomputers at the Texas Advanced Computing Center (TACC) to analyze and compare data from thousands of genes and to narrow the search for common cell-signaling pathways. This analysis led to the discovery that the two afflictions share a pathway in gene transcription, a process essential for cell reproduction and growth.
You can read more about the vital role TACC played in this breakthrough at insideHPC, HPCwire or the TACC website.
On April 26th, Microsoft released Security Advisory 2963983 in response to a vulnerability in Internet Explorer that could allow remote code execution. This is a zero-day exploit, meaning the vulnerability was being actively exploited before it was discovered, and no patches or software updates are available for it yet. The vulnerability affects all versions of Internet Explorer, leaving approximately 1 in 4 computers exposed to the attack. The US and UK governments have issued warnings to stop using Internet Explorer until the vulnerability is fixed.
To protect your organization from this exploit, the Dell Endpoint Systems Management (ESM) team strongly recommends moving all systems off Windows XP as soon as possible: Microsoft ended support for Windows XP last month and will not issue security updates for XP, so XP systems will remain unprotected even after Microsoft releases a patch for this vulnerability. In the meantime, organizations with systems still running Windows XP should stop using Internet Explorer and switch to another browser, such as Firefox or Chrome, since this vulnerability affects only Internet Explorer.
For systems that must use Internet Explorer for application compatibility or other reasons, Dell ESM recommends setting the Internet Explorer security level on all XP systems to High, adding known-good web sites to the trusted sites list, and removing Adobe Flash from all XP systems, since the exploit relies on Adobe Flash. Organizations should enforce these settings so that users cannot change them.
Dell ESM Solutions can help you quickly and efficiently accomplish all of the above recommendations. Here’s more information on how Dell ESM can help organizations:
Windows Migration: For hardware upgrades or refreshes, the Dell KACE K1000 management appliance can quickly determine whether your systems are compatible with Windows 7 or 8. The KACE K2000 deployment appliance can then easily and quickly migrate your systems from Windows XP to Windows 7 or 8.
Change IE Security Settings: The K1000's built-in scripts can help you quickly and automatically change the IE security level on all of your systems to High. Scripts can also quickly identify and designate both safe and unsafe web sites for all of your systems. The K1000 can also be used to automatically remove unsafe software, such as Adobe Flash in the case of this particular vulnerability.
Enforce Security Settings: The K1000 can be configured to periodically scan and report on security configurations, while Dell Desktop Authority Management Suite can set granular user environment settings and privileges on Windows systems, including restricting user rights on an application-by-application basis. This lets you prevent users from changing the High security settings in Internet Explorer while they retain administrative rights over software specifically deemed safe for their use.
For additional information, please contact us. Safe surfing!
The Foglight vMonitoring team is extremely excited to announce that Foglight for Virtualization, Enterprise Edition with Hyper-V support has been selected as a finalist for the Best of TechEd 2014 award in the Virtualization category!
As Microsoft Hyper-V continues to make significant gains in the enterprise hypervisor market, Dell Foglight is delivering best-in-class virtualization monitoring and management solutions that give every virtualization administrator a single pane of glass. Foglight for Virtualization, Enterprise Edition is the centerpiece of a "single version of the truth" within the virtual infrastructure, simplifying troubleshooting, reducing downtime, and eliminating finger-pointing through detailed visualization and advanced analysis for both Hyper-V and vSphere environments. With the addition of the Foglight for Active Directory, Foglight for Exchange, and Foglight for SQL Server cartridges, virtual administrators now have end-to-end monitoring and management capabilities that are truly virtualization aware!
Please join us in congratulating the Foglight for Virtualization team at booth #1601 during TechEd Houston, May 12th-15th. And of course, don't forget to vote!
With the launch of Foglight for Storage Management 3.1, we have added support for EMC Isilon storage arrays. In this blog post, I'll present some screenshots that illustrate the information we're able to collect.
Foglight for Storage Management collects from Isilon arrays using the Command Line Interface (CLI) embedded in the Isilon OneFS operating system. A single agent will collect from all of the nodes that belong to a specific array.
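The single-agent, per-node collection pattern can be sketched roughly like this. This is a hypothetical illustration in Python; the output format and the transport callback are invented for the sketch and are not the actual OneFS CLI or Foglight internals.

```python
# Hypothetical sketch: one collector polls every node in the array over its
# CLI and merges the per-node results under one agent.

def parse_node_stats(cli_output):
    """Parse 'key: value' lines emitted by a (hypothetical) node CLI call."""
    stats = {}
    for line in cli_output.strip().splitlines():
        key, _, value = line.partition(":")
        stats[key.strip()] = value.strip()
    return stats

def collect_array(nodes, run_cli):
    """Run the CLI against each node and key the results by node name.

    run_cli(node) abstracts the transport (e.g. an ssh call to the node),
    so the collection logic itself stays testable.
    """
    return {node: parse_node_stats(run_cli(node)) for node in nodes}

# Demo with canned output standing in for real CLI responses.
fake_output = {"node-1": "cpu_busy: 12\nclients: 40",
               "node-2": "cpu_busy: 7\nclients: 31"}
stats = collect_array(["node-1", "node-2"], fake_output.__getitem__)
print(stats["node-1"]["cpu_busy"])  # prints 12
```

In the real product a single agent plays the role of `collect_array`, so adding a node to the array does not require deploying another agent.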
From the “Summary” tab, we can quickly ascertain the health of each of the subcomponents that make up this Isilon array by reviewing the “Related Inventory” section. It tells us how many of each subcomponent are present and the alarm status (good, warning, critical, or failure) of each. We also get a summarized view of total and free storage capacity that is broken down into traditional hard disk drives (HDD) and solid state drives (SSD). Finally, we display some select array-wide performance metrics such as average CPU busy, file system throughput, and number of clients.
From the “Network” tab, we can easily summarize network load and packet error rates across all outwardly facing Ethernet ports and inwardly facing InfiniBand ports. And we get detailed performance and health statistics for every network port in the array. Alternate views can filter the results to only show ports belonging to a specific member node.
The “Member Nodes” tab lists the top 5 nodes in terms of data rate, disk latency, and CPU busy. And it provides detailed health, performance, and capacity information for every node included in the array.
The “Pools” tab lists the top 5 pools in terms of data rate, disk latency, and L2 cache hit rate. And it provides detailed health, performance, and capacity information for every pool included in the array. It even breaks out capacity by hard disk drive vs solid state drive.
From the “IFS” tab, we capture metrics at the OneFS file system level, such as file system throughput, operations rate, and exported directories.
The “Disks” tab lists the top 5 disks in terms of ops rate, queue depth, and latency. And it provides health, performance, and capacity information for every disk in the array. Alternate views can filter the results to only show disks in a specific member node or assigned to a specific pool.
These screenshots represent only a fraction of the information that we collect from your Isilon array. You can dig even deeper by selecting individual member nodes, Ethernet ports, InfiniBand ports, pools, etc., which will take you to more detailed views.
In summary, with the new monitoring support we've included in Foglight for Storage Management 3.1, you can now monitor EMC Isilon storage arrays in your environment more effectively and resolve storage-related performance issues more quickly.
This year has been an exciting year for the K2000! Version 3.6, released earlier this year, brought Multicasting and the Task Engine to the K2000. The Task Engine is a powerful tool that increases automation of all install tasks and brings reporting of deployment progress. To achieve this level of automation and reporting, some major changes were required in how tasks are processed compared with earlier versions of the K2000.
The first major change is that installation tasks no longer need prepended parameters such as "start /wait" and "call"; executables with spaces in their names no longer require wrapping quotes (I'm looking at you, Firefox!); and tasks now have the ability to require a reboot of the operating system (and keep coming back for more tasks!).
What this means for some folks is that some tasks need to be updated to be compliant with the 3.6 Task Engine. Thankfully, your friendly Scripting Ninjas have you covered with the K2 Advisor application! Available from the K2000 Deployment Workbench on ITNinja (did you notice that's in the Tools tab on your K2?!), this handy application will tell you which tasks (based on what is currently assigned to a deployment) need modification. A report is created for you listing which tasks need to be changed, the recommended change, and a link straight to the task itself in the K2000. The report will also tell you how many additional restarts are being called through the Reboot Required checkbox, so that you can increase your automatic logins (for Scripted Installs and Sysprep files).
If you have updated to 3.6 and are having some issues with your installation tasks, we highly recommend that you try this application to assist you in your troubleshooting. If you have not yet upgraded (Why haven’t you?) then you should download this application after your upgrade to catch any task issues prior to deployment.
The K2 Advisor can be downloaded directly from here.
Look for version 2 of the Advisor, coming out soon, which features reporting on many facets of the K2000. Want to prepare now? Enable Offboard Database Access (Settings and Maintenance -> Security) and reboot your K2000. You will then be ready for the upcoming release of the K2 Advisor! Thanks Scripting Ninjas!
Did you know that by putting your appliance in your network's DMZ, you can use the K1000 GO app on your Android or iOS device? This allows you to manage tickets while you are mobile, deploy software to devices, and view detailed system inventory information to aid you in the field. With the 6.0 release of the K1000, users will also be able to create and update their own tickets. That's pretty handy. Another reason to expose an appliance to incoming web traffic is managing clients outside the network and VPN. Or maybe you just want to run reports from home without logging into the VPN. Whatever the reason, make sure you're doing it safely.
We wouldn't leave our money on the front lawn in a big bag marked "My Money, please don't steal," would we? Probably not, so let's protect important data the same way and use SSL to encrypt the traffic. All of your Dell KACE appliances are capable of using SSL to encrypt web communication between a client device and the appliance. This protects usernames, passwords, device data, and other sensitive information that may be passed along during normal web communication with these appliances, letting you safely support clients inside or outside your firewall with confidence that you are safe from prying eyes. We recommend using certificates from known and trusted vendors; well-known certificate authorities include VeriSign, Thawte, Comodo, and GeoTrust. Certificates vary in cost and come in multiple types, and we do not recommend one over another.
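As a rough sketch of what certificate validation buys you on the client side, here is how a Python client can refuse to talk to an appliance whose certificate is not signed by a trusted CA. The hostname is a placeholder; this is an illustration of standard TLS verification, not part of any KACE product.

```python
import socket
import ssl

# Build a client-side TLS context that validates the server certificate
# against the system's trusted CA roots and checks the hostname.
ctx = ssl.create_default_context()  # sets CERT_REQUIRED and hostname checking

def appliance_cert_subject(host: str, port: int = 443):
    """Connect to the appliance over TLS and return its certificate subject.

    Raises an ssl.SSLError if the certificate is self-signed, expired,
    or issued for a different hostname.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# Example (placeholder hostname): appliance_cert_subject("kbox.example.com")
```

A self-signed certificate would fail this check, which is one practical reason to buy from a known CA rather than generating your own.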
IT security is no laughing matter; the K1000 has a lot of reach on your network and a lot of data about your environment. If you do expose your appliance to the Internet: enable SSL, ensure you're running the most current versions of the server and appliance, and apply any hotfixes quickly if they become available, just as you changed all your passwords and began investigating possible issues in every tool and service you use when Heartbleed was revealed... right? Listed above you will find a vulnerability hotfix and our response to Heartbleed.
Aside from the people you employ, data is your organization’s biggest competitive advantage. So, naturally, data should be treated as the highest value asset in the organization, right? Shockingly, this is far from the truth in many cases.
Considering the growing number of data breaches, the penalties and reputational risks at stake, and the fact that more employees are mobile, using public clouds, and accessing potentially sensitive data, the problem becomes even more pronounced. IT has to protect data wherever it might be and achieve compliance without adding burden, all while enabling end-user productivity and simplifying management.
Start by answering these 5 questions and you will be on your way to better protecting your data to better serve the business.
To answer these questions, you need a lifecycle approach to protecting data: Identify > Classify > Govern > Enable > Encrypt.
Start with identifying data to determine what needs attention; for example, understanding the "hot spots" of unstructured data that carry the most risk. Objects are displayed with red > yellow > green risk ratings, and each data block can be drilled into to determine policies.
Once you have identified what needs attention, you move to the classification stage. Here you understand what sort of sensitive information is contained in unstructured data files, and automatically categorize and organize PII and whatever other data your policies call out. Based on policy, all assignments can then be delegated, re-assigned, managed, and so on as part of an object lifecycle.
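As a toy illustration of policy-driven classification (not how any Dell product implements it), a policy can be expressed as pattern rules that tag unstructured text with the PII categories it contains. The two regex rules below are deliberately simplistic:

```python
import re

# A minimal classification sketch: each policy rule maps a PII category to a
# pattern. Real classifiers add dictionaries, checksums, and context.
PII_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of PII categories detected in a blob of file content."""
    return {name for name, rule in PII_RULES.items() if rule.search(text)}

print(classify("Contact bob@example.com, SSN 123-45-6789"))
```

Once a file is tagged this way, downstream policy (who may own it, who may read it, whether it must be encrypted) can key off the category set rather than the raw content.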
Step 3 is all about governance, giving access control to the business owners who actually know who should have access to which sensitive data, with the power to analyze, approve and fulfill unstructured data access requests to files, folders and shares across NTFS, NAS devices and SharePoint. Important here is the ability to go beyond the data to show what other entitlements the user has in the organization. Critical to step 3 is scheduling regular business-level attestation of access delivered with a 360-degree view of user access. Dell delivers capabilities in identifying, classifying and governing data in its Dell One Identity Manager product, and provides auditing in its ChangeAuditor family.
With growing numbers of remote and mobile users, it makes sense to next focus on enabling those users to access data securely through SSL VPN, like with Dell Secure Mobile Access. A secure mobile access solution should be delivered platform agnostically – authenticated users should be able to access allowed corporate applications and resources from any validated device.
The final step in ensuring data is protected is of course encryption. Look for a solution that protects data on any device, external media and in the cloud without overburdening your end-users. FIPS 140-2 level 3 compliance is a must.
Enabling users to access data, while protecting it and ensuring compliant usage of it doesn’t have to lead to an IT tradeoff. Start with this lifecycle approach to help you achieve your most stringent compliance and data security requirements and look for best-in-class connected solutions that provide the context and intelligence across all of these requirements.
Big Data: Datameer: How-to: Incrementally Load Data from Relational Sources in Datameer In case you haven’t noticed, we are always striving to maximize efficiency and ease for our customers. As such, one of my favorite features in Datameer is the ability to incrementally load data from relational sources. This allows you to keep fresh data in your Hadoop system while also putting minimal stress on the upstream relational system since you’re only adding changed or updated records to your environment. Read more.
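The incremental-load pattern that post describes can be sketched generically: remember a high-water mark (such as a last-modified timestamp) and pull only rows changed since the previous run. This is a generic Python sketch with SQLite standing in for the relational source; the table and column names are invented, not Datameer's.

```python
import sqlite3

def incremental_load(conn, last_seen):
    """Pull only rows changed since the previous run, then advance the
    high-water mark so the next run skips everything already loaded."""
    rows = conn.execute(
        "SELECT id, payload, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_seen,),
    ).fetchall()
    new_mark = rows[-1][2] if rows else last_seen
    return rows, new_mark

# Demo against an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, payload TEXT, updated_at INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "a", 100), (2, "b", 200), (3, "c", 300)])
first, mark = incremental_load(conn, last_seen=0)     # loads all three rows
delta, mark = incremental_load(conn, last_seen=mark)  # nothing new yet
```

The upstream system only ever serves the changed rows, which is what keeps the load on the relational source minimal.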
DevOps: Rackspace: Rolling Deployments With Ansible and Cloud Load Balancers Recently a fellow Racker wrote a great post about zero downtime deployments. I strongly believe in the principles he described. In his example, he used Node.js to build a deployment script to accomplish the goal, and I want to explore how this could be done with Ansible and the Rackspace Cloud. Read more.
OpenStack: Adam Young: Configuring mod_nss for Horizon Horizon is the Web Dashboard for OpenStack. Since it manages some very sensitive information, it should be accessed via SSL. I’ve written up in the past how to do this for a generic web server. Here is how to apply that approach to Horizon. Read more.
Adam Young: MySQL, Fedora 20, and Devstack Once again, the moving efforts of OpenStack and Fedora have diverged enough that devstack did not run for me on Fedora 20. Now, while this is something to file a bug about, I like to understand the issue to have a fix in place before I report it. Here are my notes. Read more.
Adam Young: Packstack to LDAP While Packstack makes it easy to get OpenStack up and running, it does not (yet) support joining to an existing Directory (LDAP) server. I went through this recently and here are the steps I followed. Read more.
Andreas Jaeger: OpenStack Icehouse released - and available now for openSUSE and SUSE Linux Enterprise OpenStack Icehouse has been released and packages are available for openSUSE 13.1 and SUSE Linux Enterprise 11 SP3 in the Open Build Service. Read more.
Cody Bunch: Basic Hardening with User Data / Cloud-Init The ‘cloud’ presents new and interesting issues around hardening an instance. If you buy into the cattle vs. puppies or cloudy application building mentality, your instances or VMs will be very short lived (Hours, Weeks, maybe Months). The internet however, doesn’t much care for how short lived the things are, and the attacks will begin as soon as you attach to the internet. Thus, the need to harden at build time. What follows is a quick guide on how to do this as part of the ‘cloud-init’ process. Read more.
Cody Bunch: Basic Hardening Part 2: Using Heat Templates Specifying that as user-data every time is going to get cumbersome, as will managing your build scripts if you are using CLI tools and the like. A better way is to build it into a Heat template. This allows for some flexibility in both version controlling it and layering HOT templates to a desired end state (HOT allows for template nesting). Read more.
Docker: OpenStack – Icehouse release update Today, we expect the release of OpenStack Icehouse. In March, we reminded readers that Docker will continue to have OpenStack integration in Icehouse through our integration with Heat. Of course, that remains true. Since then, however, much has happened to warrant an update. Read more.
James Page: OpenStack 2014.1 for Ubuntu 12.04 and 14.04 LTS I’m pleased to announce the general availability of OpenStack 2014.1 (Icehouse) in Ubuntu 14.04 LTS and in the Ubuntu Cloud Archive (UCA) for Ubuntu 12.04 LTS. Users of Ubuntu 14.04 need take no further action other than follow their favourite install guide – but do take some time to checkout the release notes for Ubuntu 14.04. Read more.
Matt Fischer: Keystone – using stacked auth (SQL & LDAP) Most large companies have an Active Directory or LDAP system to manage identity and integrating this with OpenStack makes sense for some obvious reasons, like not having multiple passwords and automatically de-authenticating users who leave the company. However doing this in practice can be a bit tricky. Read more.
Mirantis: OpenStack Icehouse is here: what do you want to know? After 6 months of intensive development, OpenStack Icehouse was released today. With an increased focus on users, manageability and scalability, it’s meant to be more “enterprise friendly” than previous releases. Here are some of the quick facts from the OpenStack Foundation. Read more.
Mirantis: OpenStack Icehouse: More Than Just PR Is your head swimming with Icehouse information yet? By now you have certainly heard that the ninth OpenStack release, Icehouse, is out. The most prominent features, such as rolling compute upgrades, improved object storage replication, tighter integration of networking and compute functionality, and federated identity services, have gotten a lot of airtime, but with more than 350 implemented blueprints, there's a lot more below the surface of this particular iceberg. Here at Mirantis, we've been doing a lot of work with Icehouse, and we'll be going into some technical depth on many of its new features in our webinar on April 24. In the meantime we wanted to give you a taste of what's available. Read more.
Opensource.com: OpenStack Icehouse brings new features for the enterprise Deploying an open source enterprise cloud just got a little bit easier yesterday with the release of the newest version of the OpenStack platform: Icehouse. To quote an email from OpenStack release manager Thierry Carrez announcing the release, "During this cycle we added a new integrated component (Trove), completed more than 350 feature blueprints, and fixed almost 3000 reported bugs in integrated projects alone!" Read more.
Opensource.com: TryStack makes OpenStack experimentation easy Not everyone has a spare machine sitting around that meets the requirements of running a modern cloud server. Even if you do, sometimes you don't want to go through the entire installation and setup process just to experience something as an end user. TryStack is a free and easy way for users to try out OpenStack and set up their own cloud with networking, storage, and compute instances. Read more.
Rackspace: Why We Craft OpenStack (Featuring Rackspace Software Developer Ed Leafe) As OpenStack Summit Atlanta fast approaches, we wanted to dig deeper into the past, present and future of OpenStack. In this video series, we hear straight from some of OpenStack’s top contributors from Rackspace about how the fast-growing open source project has evolved, what it needs to continue thriving, what it means to them personally, and why they are active contributors. Watch the video.
SUSE: OpenStack Icehouse – Cool for Enterprises OpenStack Icehouse has arrived with spring, but this ninth release of the leading open source project for building Infrastructure-as-a-Service clouds won’t melt under user demands. This is good, since user interest in OpenStack seems to know no limits as evidenced by the three new and sixteen total languages in which OpenStack Dashboard is now available. Read more.
The OpenStack Blog: Open Mic Spotlight: Joshua Hesketh An interview with Joshua Hesketh, a software developer for Rackspace Australia working on upstream OpenStack. Read more.
The OpenStack Blog: Open Mic Spotlight: Simon Pasquier An interview with Simon Pasquier, an OpenStack Active Technical Contributor. Read more.
First, your Dell SonicWALL firewall itself is not vulnerable and, if Intrusion Prevention was active, has been protecting your network from the Heartbleed vulnerability since April 8th.
Now that this is out of the way, let's look at what a Dell SonicWALL firewall can do to defend your network against the Heartbleed attack.
On April 7th, 2014, the Heartbleed vulnerability (CVE-2014-0160) was publicly disclosed in the widely used OpenSSL library at the same time that a patched release of OpenSSL 1.0.1 was made available. The vulnerability existed in the heartbeat code of the TLS protocol (hence the "Heart" part of the name), which was added to the OpenSSL codebase more than two years ago. The heartbeat code did not perform proper length validation of the payload field, allowing an attacker to trick a vulnerable server into returning, and thus leaking, sensitive memory contents in chunks of up to 64 KB at a time (the "bleed" part of the name), leaving absolutely no trace of the attack on the server.
Why is this leakage of 64 KB at a time such a big deal? The chunks of server memory, which an attacker can request an indefinite number of times, have a high probability of containing sensitive information such as usernames, passwords, and private server keys. For example, if a user "Bob" with the password "cake" unknowingly logs into a vulnerable web server, a lucky attacker can obtain a piece of memory containing "username=bob&password=cake", thus leaking Bob's password. This type of data is encrypted and protected by the SSL/TLS protocol while it is in transit over the internet between the browser and the server. However, once it is on the server, it is decrypted so that it can be interpreted by the web application (e.g. webmail, a shopping form, an admin interface). Since the decryption occurs in the memory of the OpenSSL library, this data sits in memory local to the OpenSSL code and is exposed to any attack that can steal the contents of server memory. That is what makes the Heartbleed vulnerability so damaging.
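The flaw can be modeled in a few lines. This is illustrative Python, not OpenSSL code: the server echoes back as many bytes as the request claims the payload contains, without checking that claim against the payload's real length, so an over-claimed length reads adjacent memory.

```python
# Toy model of the Heartbleed flaw. The heartbeat payload sits in a buffer
# next to other decrypted session data, standing in for server memory.
OTHER_SESSION_DATA = b"username=bob&password=cake"

def heartbeat_response(payload: bytes, claimed_len: int) -> bytes:
    buffer = payload + OTHER_SESSION_DATA  # payload plus adjacent memory
    return buffer[:claimed_len]            # BUG: claimed_len is never checked

def heartbeat_response_patched(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):         # the fix: drop bogus lengths
        return b""
    return payload[:claimed_len]

# Claim 30 bytes for a 4-byte payload: the reply leaks 26 bytes of memory.
leak = heartbeat_response(b"ping", claimed_len=30)
print(leak)  # b'pingusername=bob&password=cake'
```

The actual patch does essentially what `heartbeat_response_patched` does: silently discard heartbeat requests whose claimed payload length exceeds the received payload.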
The vulnerability affects servers and appliances that used OpenSSL 1.0.1 for secure communication with a client application, web browsers being the most common example of such an application.
Much has been written on the topic, so I’ll just point you to heartbleed.com, Bruce Schneier’s blog, NIST, and my favorite explanation at xkcd.
The good news is that this attack can be stopped at the network perimeter (or at an internal boundary) with a capable next-generation firewall (NGFW) or Unified Threat Management (UTM) firewall, buying you valuable time during which you can patch vulnerable servers and other infrastructure. All Dell SonicWALL firewalls with an active Intrusion Prevention service have been serving as the first line of defense against the Heartbleed vulnerability since April 8th, dropping the malicious traffic at the network boundary before it has a chance to hit webservers or exposed appliances. The Intrusion Prevention protection analyzes SSL/TLS traffic entering the network and drops connections that contain:
- Malicious standard TLS heartbeats
- Malicious encrypted TLS heartbeats
It is not necessary to enable SSL Decryption (DPI SSL) on the firewall in order to block Heartbleed attacks, as the malicious heartbeat packets are in the SSL/TLS headers that can be observed without decrypting the SSL/TLS payload. The attacks are covered by the following IPS signatures in the WEB-ATTACKS category, which are enabled automatically if you have prevention enabled for “High Priority” attacks:
It is important to understand that "encrypted" heartbeats are a way of obfuscating the heartbeat payload; the term does not refer to TLS encryption of the SSL/TLS payload.
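Why this works without DPI SSL: the 5-byte TLS record header travels in cleartext, and its first byte is the record's content type. RFC 6520 assigns heartbeat records content type 24, so a middlebox can identify heartbeat traffic without decrypting anything. A minimal sketch of that check (the example records are fabricated headers, not captured traffic):

```python
import struct

TLS_HEARTBEAT = 24  # content type assigned by RFC 6520
TLS_APP_DATA = 23   # ordinary encrypted application data

def record_content_type(record: bytes) -> int:
    """Read the content type from a raw TLS record's 5-byte cleartext header."""
    content_type, _version, _length = struct.unpack("!BHH", record[:5])
    return content_type

def is_heartbeat_record(record: bytes) -> bool:
    return len(record) >= 5 and record_content_type(record) == TLS_HEARTBEAT

# Fabricated example headers: type byte, TLS 1.2 version (0x0303), length.
heartbeat_hdr = bytes([0x18, 0x03, 0x03, 0x00, 0x03])
app_data_hdr = bytes([0x17, 0x03, 0x03, 0x00, 0x20])
print(is_heartbeat_record(heartbeat_hdr), is_heartbeat_record(app_data_hdr))
```

An IPS that sees a heartbeat record can then apply further length checks or simply drop the connection, all without touching the encrypted payload.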
More details and statistics on these countermeasures can be found in the SonicAlert posted on April 9th: https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=668 as well as the SonicAlert posted on April 18th: https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=671
I could argue that SSL/TLS heartbeats are not necessary at all.
Heartbeats may be useful in SSL/TLS implementations over connectionless protocols such as UDP (DTLS). In such protocols, the heartbeat can help with more efficient communication by reducing the number of unnecessary session terminations/re-establishments.
However, given that the significant majority of SSL/TLS traffic runs over TCP, these heartbeats are not essential to SSL/TLS performance over TCP. Very few applications rely on TLS heartbeats, and they can only be used when both the server and the client run a version of OpenSSL that supports them. In the absence of heartbeat support on the server, the SSL/TLS specification mandates falling back to heartbeat-less operation, which covers the majority of SSL/TLS use in the real world.
Therefore, we’re also providing the ability to completely disable SSL/TLS heartbeats with the following signatures in the IPS engine:
Immediately after a newsworthy event, malware writers usually capitalize on people seeking information by creating malicious webpages and malware that are likely to be found in common search queries. In this particular case, it only took a few days for a piece of malware to surface that masquerades as the Heartbleed testing tool. Fortunately, it was quickly caught by the Dell security research teams and on Friday, April 11th, we added protection against this malware.
The malware is, predictably, called heartbleed.exe and upon execution, registers with a Command and Control (C&C) server and runs a Trojan on the infected systems.
These are covered with the following Anti-Malware signatures, but I’m sure that this list will continue to grow:
More details can be found at https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=669 and in the April 18th update https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=671
As always, be very wary of running any executables that you download from the internet, especially small utilities that are not code-signed by a well-known organization.
WARNING: There are many convenient websites that let you quickly test whether your externally facing servers (whether websites or appliances) are vulnerable. I strongly recommend *against* using those sites, since you are effectively declaring yourself vulnerable to an entity that makes no guarantees about what it will do with that information.
I’m certain well-meaning testing sites exist, but, as with the malware mentioned above, it doesn’t take long for tools with malicious intent to show up. For example, it is not a difficult project to set up a website that tests for the Heartbleed vulnerability and logs submitted sites that it finds to be vulnerable.
Therefore, think twice about what you reveal about your public web presence.
The correct way to test whether you're vulnerable is to use one of the many Python scripts available on the internet as source code. Even if you're not a Python expert, a quick scan of the code can tell you whether the results of a scan are logged or sent off somewhere. You can run these scripts against publicly facing sites and internal servers with full confidence that your findings are not leaked or shared. When using these Python scripts, make sure to get the source code and execute it as a script; never run pre-compiled code downloaded from the web.
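One quick way to do that review programmatically before running a downloaded tester is to flag lines containing calls that could log or transmit your results. The indicator list below is illustrative and is no substitute for actually reading the code:

```python
# Flag lines in a downloaded script that might log or phone home with your
# scan results. Indicators are examples only; extend the tuple to taste.
SUSPICIOUS = ("urlopen", "requests.post", "smtplib", "ftplib", "open(")

def flag_suspicious_lines(source: str):
    """Return (line number, indicator) pairs worth reviewing by hand."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for needle in SUSPICIOUS:
            if needle in line:
                hits.append((lineno, needle))
    return hits

sample = "import socket\nresp = requests.post(url, data=result)\n"
print(flag_suspicious_lines(sample))  # [(2, 'requests.post')]
```

A hit does not prove malice (a legitimate tester may open a local log file), but every hit deserves a look before you point the script at your own servers.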
If you’re running Dell SonicWALL GMS or Analyzer, you can check for attacks caught by your firewall by selecting
You will be presented with a view similar to the following
Director, Product Management, Network Security
PS: There were some non-firewall Dell SonicWALL products that were vulnerable and have been patched. If these appliances were behind a Dell SonicWALL firewall with Intrusion Prevention, the risk was significantly lower. Please see the security bulletin for more details.
Continuing the trend of world record performance in the Enterprise Resource Planning (ERP) space, Dell has published another top 4-socket score on the SAP SD Two-Tier Benchmark*. With this submission, Dell also introduces our new solid-state storage solution, the Dell PowerEdge Express Flash NVMe PCIe SSD, a high-performance solid-state device. This technology, combined with SUSE Linux Enterprise Server running SAP Sybase Adaptive Server Enterprise 16 and the Intel Xeon Processor E7-4800 v2 product family, reasserts the Dell PowerEdge R920 as a leading enterprise solution in the 4-socket arena. Let's take a closer look at how Dell solutions stack up against similar competitive platforms in the ERP space.
SAP SD Two-Tier Benchmark System Description | Number of Benchmark Users | Database Management System
- Dell PowerEdge R920 with NVMe: Intel Xeon E7-4890 v2, SUSE Linux Enterprise Server 11, SAP Sybase ASE 16, SAP Enhancement Package 5 for ERP 6.0
- IBM System x3850 X6: Windows Server 2012 Standard Edition, IBM DB2 10
- HP ProLiant DL580 Gen8: Windows Server 2012 Datacenter Edition, Microsoft SQL Server 2012
- Dell PowerEdge R920
* Benchmark results based on best SAP SD Two-Tier results published as of April 2014. For the latest SAP SD Two-Tier results, visit: http://global.sap.com/solutions/benchmark/sd2tier.epx .
This latest world record in the SAP SD two-tier benchmark is just one example of Dell PowerEdge R920 leadership. The SAP SD two-tier workload was developed by SAP and its partners to measure both the hardware and database performance of SAP applications and components. The benchmark simulates an SAP Sales & Distribution scenario in which multiple clients (companies) place orders concurrently, with a predetermined number of users, for a sustained period of time. A result is considered valid only when the host system sustains the maximum number of concurrent users while delivering sub-second average response times. In this discussion, we are specifically looking at the metric "number of benchmark users" as our measurement of throughput.
As promised in my last blog posting, this is the second chapter in this leadership story for Dell and the PowerEdge R920. Be on the lookout for more stories of how Dell and our PowerEdge portfolio help power businesses and communities.
To view this and other Dell World Records, please visit the Intel World Record Page: http://www.intel.com/content/www/us/en/benchmarks/server/xeon-e7-summary.html
To view the entire field of SAP SD two-tier benchmarks, please visit the SAP SD Standard Application Benchmark site: http://global.sap.com/solutions/benchmark/sd2tier.epx?num=200