Stephen Rousset – Distinguished Engineer, ESI Director of Architecture, Dell EMC

Top 10 Scale-out IT Business Trends for 2018

In the spirit of beginning-of-new-year lists, this blog enumerates the most prominent trends that ESI believes will impact customers who operate at scale in 2018, and how those trends will influence the direction of, and need for, specific technologies. Some are continuing trends that may seem obvious at this point but are still dominant forces; others are emerging trends that need to be taken into consideration as future infrastructure development is planned.

1 - The role of data will continue to increase. The volume and importance of data will continue to drive the need for greater speed of access, higher reliability and more rapid comprehension of the data. The accelerating adoption of Machine Learning and Deep Learning models is making technologies like NVMe Flash storage pervasive, because efficiently supporting the ever-larger amounts of data feeding those models requires faster data access technologies. Customers are increasingly looking for faster, yet reasonably priced, storage. This is also creating a new “mid-tier” of non-volatile storage that sits between traditional storage and host memory, allowing for a tiered memory/storage model.
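
As an illustration of that tiered model, below is a minimal Python sketch of a two-tier read path: a small fast tier (standing in for NVMe or storage-class memory) in front of a larger capacity tier. The class and its promotion policy are hypothetical, not a Dell EMC or NVMe API.

```python
# A minimal sketch of a two-tier read path. The "fast tier" stands in for
# NVMe / storage-class memory; the capacity tier for traditional storage.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_capacity=4):
        self.fast = OrderedDict()      # hot data: fast-tier stand-in
        self.capacity = {}             # cold data: capacity-tier stand-in
        self.fast_capacity = fast_capacity

    def write(self, key, value):
        self.capacity[key] = value     # all writes land in the capacity tier

    def read(self, key):
        if key in self.fast:           # fast-tier hit: refresh recency
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.capacity[key]     # fast-tier miss: read the cold tier
        self.fast[key] = value         # promote; evict least-recently-used
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)
        return value

store = TieredStore()
store.write("sample", b"model training shard")
print(store.read("sample"))            # first read promotes to the fast tier
```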


2 - The Internet of Things (IoT) is shifting computing to the Edge. IoT is enabling data gathering at every imaginable physical endpoint (this is part of what continues to drive the increase in overall data). With this development came the realization that it is much more efficient to initially process all that data at the endpoint, avoiding the latencies of communicating the data to a centralized hub. Consequently, this is resulting in a move to denser form factor infrastructures at the endpoint/edge, and the need to incorporate previously unnecessary environmental and packaging considerations (like ruggedization). For example, carriers are placing small footprint server installations outdoors at the base of cell phone towers to do that initial data handling. Dell EMC Modular Edge Data Centers are one of the responses to this ongoing trend.
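
To show the kind of endpoint pre-processing involved, here is a minimal, hypothetical Python sketch that summarizes a window of raw sensor readings at the edge and forwards only the aggregate; the send_to_hub stub and the threshold are placeholders, not a real telemetry interface.

```python
# A minimal sketch of edge-side pre-processing: summarize raw sensor
# readings locally and forward only the aggregate, rather than shipping
# every sample to a central hub.
from statistics import mean

ALERT_THRESHOLD = 85.0  # illustrative limit, e.g. degrees Celsius

def summarize(readings):
    """Reduce a window of raw samples to one compact summary record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > ALERT_THRESHOLD),
    }

def send_to_hub(record):
    print("uplink:", record)  # stand-in for the real network call

window = [71.2, 69.8, 90.1, 72.4, 88.7]  # raw samples stay at the edge
send_to_hub(summarize(window))            # only the summary crosses the WAN
```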

3 - Accelerated shift in focus from technology architectures to service architectures. The efficiencies delivered by today’s software defined service architectures are undeniable at this point. By its software-defined nature, a service architecture makes it easier to innovate on core business value, with less reliance on a hardware technology provider. As more large scale infrastructures are migrated to this model, the industry will work to make operating in that environment simpler and to add even more efficiencies. Examples of this are vendors enabling Containers and Microservices within their technology offerings, allowing customers to create services more quickly without specific hardware dependencies. So as a hardware provider, it is incumbent on us to provide internal business units with reliable and flexible hardware that allows them to concentrate on their service architectures rather than worry about hardware delivery and reliability.
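
As a toy illustration of a hardware-independent service, the following sketch stands up a single-endpoint microservice using only the Python standard library, so it could run in any container on any underlying server; the port and route are arbitrary choices, not part of any vendor offering.

```python
# A minimal sketch of a hardware-agnostic microservice: one HTTP health
# endpoint built on the standard library alone, deployable in any container.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok", "service": "demo"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Serve until interrupted; curl http://localhost:8080/health to test.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```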

4 - Increased focus on speed of deployment. The move to service architectures has customers realizing that the speed of deploying/delivering new services is now an even more important component of any TCO calculation. The time it takes to bring on new services can be directly related to new sources of revenue, and can be a significant factor in competitiveness in the marketplace. The speed of differentiation is becoming just as important as the cost of differentiation. Getting to the point of “on-demand” deployment is the new high-water mark. IT solutions providers will need a reliable, robust and global delivery capability so companies can achieve a faster “time to money”.

5 - The drive to collect more infrastructure telemetry. As infrastructures become larger and larger, the need (and desire) to simplify management and understand critical metrics of the infrastructure grows too. This has created the trend toward collecting much more infrastructure telemetry, and in turn, has pushed vendors to support common, standard APIs (like Redfish) to simplify the task of monitoring and managing large scale infrastructures across various vendors’ equipment. Increased telemetry via standard and open interfaces answers the market demand for infrastructure-wide (rack scale) management capabilities that bring business intelligence to IT. The foundation of basic management capabilities provided through industry accepted standards (for example, Redfish APIs) is opening the way to disaggregation of hardware resources and fully composable infrastructure. This will be part of the path to ensuring optimal IT utilization for changing workload platform requirements.
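
Because Redfish is a standard REST API, a monitoring script can be quite small. Below is a minimal sketch, using the common Python requests library, that reads thermal telemetry from the standard /redfish/v1 resource tree; the BMC address, credentials and chassis ID are placeholders, and a real deployment should use proper TLS verification.

```python
# A minimal sketch of pulling thermal telemetry over Redfish. The Thermal
# resource under /redfish/v1/Chassis carries a Temperatures collection.
import requests

BMC = "https://bmc.example.com"   # hypothetical management address
AUTH = ("monitor", "password")    # hypothetical read-only account

def chassis_temperatures(chassis_id="1"):
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Thermal"
    resp = requests.get(url, auth=AUTH, verify=False)  # lab only: skip TLS check
    resp.raise_for_status()
    for sensor in resp.json().get("Temperatures", []):
        yield sensor.get("Name"), sensor.get("ReadingCelsius")

for name, celsius in chassis_temperatures():
    print(f"{name}: {celsius} C")
```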

6 - Continued reduction of stranded capacity. From the earliest days of server virtualization, IT organizations have been trying to improve the utilization rates of their infrastructure, which are sometimes shockingly low. This effort is ongoing, and vendors and administrators are always looking to the latest technology to assist. The previously discussed trend of telemetry expansion is allowing more thorough and accurate monitoring of systems, resulting in an improved ability to increase utilization rates. In addition, advances in rack scale management and hardware resource management (like Intel RSD with MAAS) are enabling resource pooling that, through orchestration, can balance capacity and optimize resource utilization across a rack, or even across the larger infrastructure, by ensuring resource ratios of compute, storage and networking are optimized for workload requirements.
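
A toy example makes the stranded capacity problem concrete: with fixed per-node allocations, a workload can fail to place even when the aggregate pool has room, which is exactly what pooling and composability address. The first-fit placement and all numbers below are illustrative, not the RSD or MAAS interfaces.

```python
# A minimal sketch of stranded capacity: first-fit placement of (cpu, mem)
# demands onto fixed nodes, where a workload can be unplaceable even though
# the aggregate free capacity across the rack would cover it.
def place(workloads, nodes):
    """Assign each (cpu, mem) demand to the first node with room."""
    placements = {}
    for name, (cpu, mem) in workloads.items():
        for node, free in nodes.items():
            if free[0] >= cpu and free[1] >= mem:
                nodes[node] = (free[0] - cpu, free[1] - mem)
                placements[name] = node
                break
        else:
            placements[name] = None  # stranded: no single node can fit it
    return placements

nodes = {"node-a": (16, 64), "node-b": (16, 64)}  # free (cores, GiB)
workloads = {"db": (10, 48), "cache": (4, 32), "batch": (8, 40)}
# "batch" ends up unplaced: 18 cores / 48 GiB remain in aggregate,
# but no single node has 40 GiB free.
print(place(workloads, nodes))
```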

7 - Cost containment of company cloud models. As rapidly growing companies move to the cloud model (private, public, hybrid), they frequently fall victim to “cloud sprawl”: siloed groups independently engaging public cloud instances instead of using the company’s existing private cloud resources. This has been due partly to the pressure for rapid rollout of new services. Organizations want to be competitive with the rate of innovation and turn to “outsourcing” for speed and ease of implementation. But this results in large, and often hidden, inefficiencies: low utilization rates on the private cloud and numerous, difficult to track public cloud expenses. Data centers are now looking for infrastructure that lets them offer their users easier, rapid development of new business services on their private cloud.


8 - Blockchain Infrastructures and Distributed Trust Systems. As the Internet of Things (IoT) becomes even more pervasive, the need for a decentralized form of security grows ever greater; centralized schemes will not be able to handle the scale or the inherent latencies of the geographic distribution that is likely to proliferate with the trend toward edge computing. Blockchain, with its peer-to-peer distributed ledger technology, can provide a secure, decentralized framework for the highly distributed world of IoT, just as it has for crypto-currencies: it can provide data integrity, faster transaction speeds, and no single point of failure. We foresee Blockchain technology being incorporated into a wide range of offerings from vendors across the industry, with items such as SmartNIC data encryption for data in flight and processor data encryption for local data.
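
The integrity property at the heart of a distributed ledger can be shown in a few lines: each block commits to its predecessor’s hash, so altering any record is detectable. The Python sketch below shows only that hash-chaining idea; real blockchains add consensus and peer-to-peer replication on top.

```python
# A minimal sketch of hash-chaining: each block commits to the previous
# block's hash, so tampering with any record breaks verification.
import hashlib, json, time

def block_hash(block):
    payload = {k: block[k] for k in ("time", "data", "prev")}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """Check every block's content hash and its link to the predecessor."""
    return all(
        block["hash"] == block_hash(block) and block["prev"] == prev["hash"]
        for prev, block in zip(chain, chain[1:]))

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("sensor 17: door opened", chain[-1]["hash"]))
chain.append(make_block("sensor 17: door closed", chain[-1]["hash"]))
print(verify(chain))           # True: links and hashes are consistent
chain[1]["data"] = "tampered"  # altering one record...
print(verify(chain))           # ...is immediately detectable: False
```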

9 - Continuing trade-offs: Open vs. “Shrink-wrapped” vs. Cost. Large scale data centers will continue to struggle with achieving their inter-related (and frequently conflicting) goals. They will continue to negotiate among three competing demands: being open (to provide the possibility of greater speed of competitive innovation through open source and DIY development), being quick to deploy services (through pre-configured, “shrink-wrapped” vendor offerings) and being cost conscious (balancing how the other two factors impact CAPEX and OPEX). For example, some data centers are choosing to implement fully configured “Ready Bundles” that allow them to deploy hardware with extreme speed (for a particular service/configuration) but lack the flexibility to adapt to changes in the environment (e.g., an updated application).

10 - Simplifying operational complexity. Simplifying an increasingly complex and ever larger computing environment is a thread that runs through almost all of these trends. At the hardware level there is a push to reduce the number of different SKUs that compose an infrastructure, because using a consistent set of foundational building blocks simplifies procurement, maintenance and management. Part of the rationale behind software defined data centers is that they allow the underlying hardware to be a common platform rather than a patchwork of software-specific machines. As data centers migrate to software defined implementations, they want hardware that complements the new model. Getting to a single architecture also simplifies the management challenges of scale out infrastructures and opens the door to rack scale management. The Dell EMC DSS 9000 is an example of a common platform that addresses this trend: it provides the rack scale foundation used with RSD, along with hooks for Gen-Z, the next-generation memory-centric fabric that enables true system composability.

Conclusion. The Extreme Scale Infrastructure group is excited to be helping Dell EMC customers answer the challenges these trends present to large scale computing. In 2018, we look forward to continuing to work with them to push the bounds of what is possible, and to assist them in achieving their goals.