By Nicholas Wakou, Principal Performance Engineer, Dell


Computer benchmarking, the practice of measuring and assessing the relative performance of a system, has been around almost since the advent of computing. Indeed, the general concept of benchmarking goes back more than two millennia. As Sun Tzu wrote on military strategy:

"The ways of the military are five: measurement, assessment, calculation, comparison, and victory.  If you know the enemy and know yourself, you need not fear the result of a hundred battles.”
 

The SPEC Cloud™ IaaS 2016 Benchmark is the first specification from a major industry-standards performance consortium that defines how the performance of cloud computing can be measured and evaluated. The benchmark suite is aimed broadly at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers. It addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms, whether public or private.

Dell has been a major contributor to the development of the SPEC Cloud IaaS Benchmark and is the first cloud vendor, private or public, to successfully execute the benchmark specification tests and publish its results; so far, it remains the only one. This article explains the new cloud benchmark and Dell's role and results.

How it works and what is measured

The benchmark is designed to stress both the provisioning and run-time aspects of a cloud using two multi-instance, I/O- and CPU-intensive workloads: one based on YCSB (Yahoo! Cloud Serving Benchmark), which uses the Cassandra NoSQL database to store and retrieve data in a manner representative of social media applications; and another representing big data analytics, based on a K-Means clustering workload running on Hadoop. The Cloud under Test (CuT) can be based on virtual machines (instances), containers, or bare metal.
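
To give a feel for the analytics side, here is a minimal single-node sketch of the K-Means algorithm itself in Python (using NumPy). This illustrates the clustering technique, not the benchmark's implementation; the benchmark drives K-Means at much larger scale through Hadoop.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's-algorithm K-Means: assign each point to its
    nearest centroid, then move each centroid to the mean of its
    cluster, repeating until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    # Seed the centroids with k randomly chosen input points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid; pick the nearest.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Example: cluster 300 random 2-D points into 3 groups.
pts = np.random.default_rng(1).normal(size=(300, 2))
centers, assignment = kmeans(pts, k=3)
print(centers)
```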

The architecture of the benchmark comprises two execution phases: Baseline, and Elasticity + Scalability. In the Baseline phase, the peak performance of each workload running alone on the CuT is determined in five separate test runs. Data from the Baseline phase is used to establish parameters for the Elasticity + Scalability phase, in which both workloads are run concurrently to determine elasticity and scalability metrics. Each workload runs on a group of instances referred to as an application instance (AI), and the benchmark instantiates multiple AIs during a run. The AIs and the load they generate stress the provisioning as well as the run-time performance of the cloud; the run-time aspects include the CPU, memory, disk I/O, and network I/O of the instances running in the cloud. The benchmark runs the workloads until specific quality-of-service (QoS) conditions are reached, and the tester can also limit the maximum number of AIs instantiated during a run.
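
As a rough illustration of this run logic, below is a minimal Python sketch of an elasticity-phase control loop. The AI cap, the QoS rule, and both helper functions are assumptions invented for this sketch; the benchmark's actual provisioning, measurement, and stopping rules are defined in SPEC's run rules.

```python
import random

# Illustrative parameters (assumptions, not the benchmark's actual rules):
MAX_AIS = 20        # tester-set cap on application instances
QOS_LIMIT = 0.20    # stop once more than 20% of AIs are non-compliant

def provision_ai(workload, n):
    """Stand-in for the cloud provisioning call; a real harness would
    create the workload's whole cluster of instances here."""
    return {"workload": workload, "index": n}

def meets_qos(ai):
    """Stand-in for driving load against the AI and checking throughput
    and latency against QoS thresholds; simulated here as getting harder
    to satisfy as the cloud fills up."""
    return random.random() > 0.02 * ai["index"]

def elasticity_phase():
    compliant, total = 0, 0
    while total < MAX_AIS:
        # Both workloads run concurrently; alternate them for simplicity.
        workload = "ycsb" if total % 2 == 0 else "kmeans"
        ai = provision_ai(workload, total)
        total += 1
        compliant += meets_qos(ai)
        # Stop when the QoS stopping condition is reached.
        if (total - compliant) / total > QOS_LIMIT:
            break
    return compliant, total

print(elasticity_phase())
```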

The key benchmark metrics are as follows (a simplified calculation sketch appears after the list):

  • Scalability measures the total amount of work performed by application instances running in a cloud. In an ideal cloud, the aggregate work performed by one or more application instances scales linearly. Scalability is reported for the number of compliant application instances (AIs) completed and is an aggregate of the workload metrics for those AIs, normalized against a set of reference metrics.

  • Elasticity measures whether the work performed by application instances scales linearly in a cloud when compared to the performance of application instances during the baseline phase. Elasticity is expressed as a percentage.

  • Mean Instance Provisioning Time measures the time interval between the instance provisioning request and connectivity on port 22 (SSH) of the instance. This metric is an average across all instances in valid application instances.
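
To make the definitions concrete, here is a simplified Python sketch of how the three metrics might be computed from per-AI measurements, assuming a single throughput number per application instance. The benchmark's actual formulas, defined in its run rules, aggregate several per-workload measures; everything below is illustrative.

```python
from statistics import mean

def scalability(ai_throughputs, reference_throughput):
    """Aggregate work of all compliant AIs, normalized against a
    reference value; in an ideal cloud this grows linearly with
    the number of AIs."""
    return sum(t / reference_throughput for t in ai_throughputs)

def elasticity_pct(ai_throughputs, baseline_throughput):
    """How close each AI comes to the single-AI baseline performance,
    expressed as a percentage (100% means no degradation at scale)."""
    return 100.0 * mean(min(t / baseline_throughput, 1.0)
                        for t in ai_throughputs)

def mean_provisioning_time(request_times, ssh_ready_times):
    """Average seconds from each provisioning request to the instance
    accepting connections on port 22."""
    return mean(ready - req
                for req, ready in zip(request_times, ssh_ready_times))

# Hypothetical data: three AIs, per-AI throughput in ops/s.
print(scalability([980, 950, 910], reference_throughput=500))     # 5.68
print(elasticity_pct([980, 950, 910], baseline_throughput=1000))  # ~94.7
print(mean_provisioning_time([0, 5, 10], [42, 51, 58]))           # ~45.3 s
```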

Benchmark status and results

The SPEC Cloud IaaS benchmark standard was released on May 3, 2016, and more information can be found in the Standard Performance Evaluation Corporation's announcement. At the time of the release, Dell submitted the first and only result in the industry. This result is based on the Dell Red Hat OpenStack Cloud Solution Reference Architecture, comprising Dell PowerEdge R720 and R720xd server platforms running the Red Hat OpenStack Platform 7 software suite. The details of the result can be found on the SPEC Cloud IaaS 2016 results page.

Dell has been a major contributor to the SPEC Cloud IaaS benchmark standard from the beginning, from the drafting of the SPEC Cloud Working Committee's charter through the benchmark's release. So it is no surprise that Dell was the first company to publish a result based on the new cloud benchmark standard. Dell will continue to use the SPEC Cloud IaaS benchmark to compare and differentiate its cloud solutions, and will additionally use the workloads for performance characterization and optimization for the benefit of its customers.

At every opportunity, Dell will share how it is using the benchmark workloads to solve real-world performance issues in the cloud. On Wednesday, June 29, 2016, I will be presenting a talk entitled “Measuring performance in the cloud: A scientific approach to an elastic problem” at the Red Hat Summit in San Francisco. The presentation will include the use of the SPEC Cloud IaaS Benchmark standard as a tool for evaluating the performance of the cloud.

Computer benchmarking is no longer an academic exercise or a competition among vendors for bragging rights.  It has real benefits for customers, and now – with the creation of the SPEC Cloud IaaS 2016 Benchmark – it advances the state of the art of performance engineering for cloud computing.