Dell Community

Latest Blog Posts
  • Dell TechCenter

    Dell EMC SCv3000 Series Release

    Author: Chuck Armstrong

    Dell EMC has released an all-new, affordable SC Series entry point into enterprise storage: the Dell EMC SCv3000 Series arrays.


    This new affordable array goes well beyond its predecessor (SCv2000 Series) with faster processors, twice the memory, and all of the same enterprise features as the “big-guy” SC Series arrays, including: auto-tiering in “zero-100 percent flash” hybrid solutions, full replication capabilities, and Live Volume / Live Migrate capabilities! This information and lots more can be found in Jeff Boudreau’s announcement blog.

    Let’s talk about what this new affordable powerhouse can do:

    If you plan on deploying Microsoft® SQL Server®, Exchange Server, or even a virtualized environment on the new SCv3000, you’ve come to the right place! Even if you already have one of these environments and plan on migrating it to the new SCv3000, or plan on utilizing your new SCv3000 as part of your security system, we’ve still got you covered. This blog will connect you with all the nuggets of information you need.

    And let’s not forget those environments with existing PS Series storage who need to replicate to and from the SCv3000 – we can do that too!

    There are some businesses that simply aren’t in a position to purchase a traditional SAN infrastructure for their environment. In many cases, this limitation makes entry into the world of SAN storage a pipe dream. Not anymore! Like all the TV infomercials say, “But wait, there’s more.” The SCv3000 also provides the ability to direct-connect to hosts using SAS Front-End (SAS FE) ports. Keep reading and you’ll find information and links to papers for Exchange Server, Hyper-V, and VMware vSphere covering SAS FE configuration!

    Microsoft Exchange

    When it comes to Microsoft Exchange, deploying on or migrating to a new storage platform always requires plenty of preparation. A real understanding of your environment, as well as what the storage can do are vital pieces of information. To help with the “what the storage can do” pieces, Microsoft has an Exchange Solution Reviewed Program (ESRP). There are two new papers in this area to help determine if the SCv3000 platform will support your Exchange environment: 14,000 mailboxes on the SCv3020 platform using 10K disks, and 7,000 mailboxes on the SCv3020 platform using 7K disks (this one utilizes SAS FE storage connectivity). In short, both of these environments performed well using only lower cost spinning disks. Add a flash tier and watch the performance begin to skyrocket!

     

    Microsoft SQL Server

    If you have a Microsoft SQL Server data warehouse workload running in your environment, or plan to implement one, the SCv3000 might just be the platform you’re looking for!

    The Microsoft Data Warehouse Fast Track (DWFT) framework is designed to help you evaluate storage options based on your workload. We’ve paired our latest 14th generation PowerEdge server with the SCv3000 to create two reference architectures for the Microsoft DWFT. For smaller environments, we’ve got the 8TB DWFT reference architecture, along with the step-by-step 8TB solution deployment guide. For larger environments, we’ve got the 100TB DWFT reference architecture, along with its step-by-step 100TB solution deployment guide. Additionally, Dell EMC was the first to receive a Microsoft DWFT certification for SQL Server 2017, and the only one to provide a 100TB solution using a 2-socket server – according to information available from Microsoft’s data-warehousing site at the time this blog was published.

     

    Virtual Environments

    Virtual environments open up many options, each with specific needs; here, we’re going to cover three:

    Virtual Desktop Infrastructure (VDI) with VMware View

    You’ve got a VDI environment and 2000 or fewer persistent virtual desktop users – can the SCv3000 support your environment? You bet it can! And the details covering how are found in this 2000 Persistent VMware View VDI Users on Dell EMC SCv3020 Storage reference architecture paper.

     

    Connecting Microsoft Hyper-V to the SCv3000 using SAS FE ports

    If your environment is one that doesn’t have plans to implement a SAN infrastructure, the SCv3000 can provide a great solution for you. Dell EMC SC Series Storage with SAS Front-end Support for Microsoft Hyper-V covers all the details on how to get the most out of your SAS FE SCv3000 array.

     

    Connecting VMware vSphere to the SCv3000 using SAS FE ports

    No SAN infrastructure in the plans for your vSphere environment? SCv3000 can work with that too! Dell EMC SC Series Storage with SAS Front-end Support for VMware vSphere demonstrates the ins and outs of implementing the SAS FE SCv3000 in your environment.

    And coincidentally, VMware recently announced the expansion of their very affordable vCenter Server Foundation licensing to include four hosts – up from three. As you’ll find in the vSphere with SAS FE paper, this now matches the host-count limitations of SAS FE connected SC Series arrays, including the SCv3000.

     

    Cross-Platform Replication

    Some companies have been using PS Series storage for quite some time and are now looking at implementing SC Series storage in their environments, but aren’t quite sure how to best utilize their existing investment in PS Series storage. Good news! The Dell EMC Storage Cross-Platform Replication Solution Guide steps through how to configure replication from your new SC Series primary storage to your existing PS Series storage that has been moved to your DR site.

     

    Video Surveillance

    Yep, the SCv3000 can even be a great storage solution for your video surveillance environment! We’ve even managed to store more than 1 million hours of CCTV footage on a single SCv3000 Series array! The Dell EMC SCv Series Storage for Video Surveillance solution brief will get you started. Included in the paper is contact information for when you’re ready to get started.

     

    If you’ve made it this far, you’ve already learned about a lot of ways the new SCv3000 can make a difference in your environment. What’s more, there are many more solutions and environments that could benefit from this all-new array! Check with your sales team and get an architect involved to learn how it can benefit your environment. And while you’re waiting for that meeting to take place, check out the wealth of information in the storage area of Dell TechCenter!

  • Dell TechCenter

    VMware ESXi CLI Support for DellEMC BOSS-S1

    Introduction to BOSS

    DellEMC PowerEdge Boot Optimized Storage Solution (BOSS-S1) is a new boot device introduced by Dell with the 14th generation of PowerEdge servers. The BOSS-S1 supports two M.2 disks, which can either be configured as a RAID 1 pair or used individually in pass-through mode.

    Currently, CLI support for inventory and monitoring is available for the Windows Server and Linux operating systems. We have now extended the same support to VMware ESXi.

    Prerequisites – BOSS-S1 CLI support for ESXi is currently enabled for the following versions and later:

    • VMware ESXi 6.5 U1
    • VMware ESXi 6.0 U3 w/ P06

    Installing the VIB on an ESXi Host

    Dell has posted the offline bundle of the CLI on support.dell.com. Please follow this link to download it.

    Steps to Install the VIB

    1. Download SAS-RAID_BOSS-S1_CLI_A00.zip.
    2. Install VMware ESXi on the Dell PowerEdge 14G server.
    3. Note down the ESXi IP address after installation.
    4. Once VMware ESXi is installed, press F2 in the DCUI (Customize System/View Logs).
    5. Enter your login credentials.
    6. Go to Troubleshooting Options and enable both ESXi Shell and SSH.
    7. Using WinSCP, log in with the ESXi IP address and copy the downloaded file (SAS-RAID_BOSS-S1_CLI_A00.zip) into the /tmp/ directory.
    8. Log in to the ESXi shell using PuTTY and the ESXi IP address.
    9. Change to the /tmp directory (cd /tmp).
    10. Run the command esxcli software vib install -d /tmp/SAS-RAID_BOSS-S1_CLI_A00.zip; it will report “success” after installation.
    11. Change to the /opt/Boss-s1 directory (cd /opt/Boss-s1).
    12. Run the CLI using ./mvcli.
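    Putting the shell portion of these steps together, a typical SSH session on the host looks like the following (a minimal sketch that reuses the file and directory names from the steps above):

    cd /tmp
    esxcli software vib install -d /tmp/SAS-RAID_BOSS-S1_CLI_A00.zip   # reports "success" when the VIB is installed
    cd /opt/Boss-s1
    ./mvcli                                                            # launches the BOSS-S1 CLI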

    References

    Operating System Support - http://en.community.dell.com/techcenter/b/techcenter/archive/2017/07/14/operating-system-support-for-boss-boot-optimized-storage-solution-device

    BOSS-S1 Direct from Development - http://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dell-PowerEdge-Boot-Optimized-Storage-Solution.pdf

    Copying files to/from ESXi - https://kb.vmware.com/s/article/1918

    BOSS-S1 User Guide - http://topics-cdn.dell.com/pdf/boss-s-1_user's%20guide_en-us.pdf

  • Dell TechCenter

    DellEMC firmware catalogs for VMware ESXi

    What is a firmware catalog?

    A firmware catalog is an aggregation of all firmware bundles that are supported across multiple generations of DellEMC servers.

    Firmware catalogs for DellEMC customized VMware ESXi

    DellEMC has started releasing firmware catalogs for Dell customized versions of VMware ESXi releases. The intent of this catalog is to announce the recommended firmware versions for Dell customized versions of VMware ESXi for various server models and peripherals. Note that the catalog contains firmware versions for all generations of servers and their corresponding peripherals. Hence, end users are advised to go through the compatibility matrix carefully to assess the support stance for the specific server model and peripheral they use with a specific version of VMware ESXi.

    The DellEMC customized VMware ESXi 6.5.x compatibility matrix is available here. The DellEMC customized VMware ESXi 6.0.x compatibility matrix is available here

    This Dell TechCenter page contains a table that maps the recommended firmware catalog to each Dell customized VMware ESXi image. This blog will be updated as new catalogs are released on a quarterly basis.

    DellEMC Customized VMware ESXi    Revision    Release Date    Catalog Location
    VMware ESXi 6.5 U1                A04         7 Nov 2017      http://ftp.dell.com/FOLDER04655306M/1/ESXi_Catalog.xml.gz
    VMware ESXi 6.0 U3                A05         19 Oct 2017     http://ftp.dell.com/FOLDER04655306M/1/ESXi_Catalog.xml.gz
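    If you want to inspect a catalog directly, you can download and decompress it on any Linux host, for example (a minimal sketch using the 6.5 U1 catalog location from the table above; the grep filter is only an illustration and should be adjusted to your server model):

    wget http://ftp.dell.com/FOLDER04655306M/1/ESXi_Catalog.xml.gz
    gunzip ESXi_Catalog.xml.gz
    grep -i "R740" ESXi_Catalog.xml | head   # illustrative filter for one server model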

  • General HPC

    Scaling Deep Learning on Multiple V100 Nodes

    Authors: Rengan Xu, Frank Han, Nishanth Dandapanthula.

    HPC Innovation Lab. November 2017

     

    Abstract

    In our previous blog, we presented the deep learning performance of a single Dell PowerEdge C4130 node with four V100 GPUs. For very large neural network models, a single node is still not powerful enough to train those models quickly. Therefore, it is important to scale the training to multiple nodes to meet the computation demand. In this blog, we evaluate the multi-node performance of the deep learning frameworks MXNet and Caffe2. The interconnect in use is Mellanox EDR InfiniBand. The results show that both frameworks scale well on multiple V100-SXM2 nodes.

    Overview of MXNet and Caffe2

    In this section, we give an overview of how MXNet and Caffe2 implement distributed training on multiple nodes. There are usually two ways to parallelize neural network training across multiple devices: data parallelism and model parallelism. In data parallelism, all devices have the same model but different devices work on different pieces of data. In model parallelism, different devices hold the parameters of different layers of a neural network. In this blog, we focus only on data parallelism and will evaluate model parallelism in the future. Another choice in most deep learning frameworks is whether to use synchronous or asynchronous weight updates. The synchronous implementation aggregates the gradients over all workers in each iteration (or mini-batch) before updating the weights, whereas in the asynchronous implementation each worker updates the weights independently of the others. Since the synchronous approach guarantees model convergence while convergence of the asynchronous approach is still an open question, we evaluate only synchronous weight updates.

    MXNet is able to launch jobs on a cluster in several ways, including SSH, Yarn, and MPI. For this evaluation, SSH was chosen. In SSH mode, rsync is used to synchronize the working directory from the root node to the worker nodes, and the gradients are then aggregated over all workers in each iteration (or mini-batch). Caffe2 uses the Gloo library for multi-node training and the Redis library to facilitate management of the nodes in distributed training. Gloo is an MPI-like library that provides a number of collective operations, such as barrier, broadcast, and allreduce, for machine learning applications. The Redis library used by Gloo connects all participating nodes.
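    For illustration, a synchronous multi-node MXNet training job launched over SSH looks roughly like the following. This is a sketch based on the launch and image-classification example scripts bundled with MXNet at the time; the script paths, the hostfile name (hosts), the data path, and flags such as --dtype are assumptions that vary by MXNet version, so this is not necessarily the exact command used in this study.

    # hosts: a file with one hostname per line, e.g. the four C4130 nodes reachable over SSH
    python tools/launch.py -n 4 -H hosts --launcher ssh \
        python example/image-classification/train_imagenet.py \
            --network resnet-50 --gpus 0,1,2,3 --batch-size 256 \
            --kv-store dist_sync --data-train /data/ilsvrc2012/train.rec
    # add --dtype float16 on MXNet builds that support it to exercise the V100 Tensor Cores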

    Testing Methodology

    We chose two deep learning frameworks for our testing: MXNet and Caffe2. As with our previous benchmarks, we again use the ILSVRC 2012 dataset, which contains 1,281,167 training images and 50,000 validation images. The neural network used for training is Resnet50, a computationally intensive network that both frameworks support. To get the best performance, the CUDA 9 compiler, the cuDNN 7 library, and NCCL 2.0 are used for both frameworks, since they are optimized for V100 GPUs. The testing platform has four Dell EMC PowerEdge C4130 servers in configuration K. The system layout of configuration K is shown in Figure 1. As we can see, each server has four V100-SXM2 GPUs and all GPUs are connected by NVLink. The other hardware and software details are shown in Table 1. Table 2 shows the input parameters used to train the Resnet50 neural network in both frameworks.

    Figure 1: C4130 configuration K

    Table 1: The hardware configuration and software details

    Table 2: Input parameters used in different deep learning frameworks

    Performance Evaluation

    Figure 2 and Figure 3 show the Resnet50 performance and speedup results on multiple nodes with MXNet and Caffe2, respectively. As we can see, the performance scales very well with both frameworks. With MXNet, compared to 1*V100, the speedup of using 16*V100 (in 4 nodes) is 15.4x in FP32 mode and 13.8x in FP16 mode. Compared to FP32, FP16 improved the performance by 63.28% to 82.79%. This performance improvement is attributed to the Tensor Cores in the V100.

    In Caffe2, compared to 1*V100, the speedup of using 16*V100 (4 nodes) is 14.8x in FP32 and 13.6x in FP16. The performance improvement of FP16 over FP32 is 50.42% to 63.75%, excluding the 12*V100 case. With 12*V100, using FP16 is only 29.26% faster than using FP32. We are still investigating the exact reason, but one possible explanation is that 12 is not a power of 2, which may make some operations, such as reductions, slower.

    Figure 2: Performance of MXNet Resnet50 on multiple nodes

    Figure 3: Performance of Caffe2 Resnet50 on multiple nodes

    Conclusions and Future Work

    In this blog, we present the performance of MXNet and Caffe2 on multiple V100-SXM2 nodes. The results demonstrate that these deep learning frameworks are able to scale very well on multiple Dell EMC PowerEdge servers. At this time, FP16 support in TensorFlow is still experimental; our evaluation is in progress and the results will be included in future blogs. We are also working on containerizing these frameworks with Singularity to make their deployment much easier.

  • Dell TechCenter

    Dell PowerEdge VRTX support for VMware ESXi 6.5

    This blog post is written by Thiru Navukkarasu and Krishnaprasad K from Dell Hypervisor Engineering. 

    Dell PowerEdge VRTX had not been supported on the VMware ESXi 6.5 branch until now. Dell has announced support for VRTX from the Dell customized version of ESXi 6.5 A04 revision onwards. However, a late-breaking issue was identified with dell-shared-perc8 (version 06.806.89.00) on ESXi 6.5.x. To resolve this issue, install or upgrade to version 06.806.90.00 or above. This revised driver is part of the Dell customized VMware ESXi 6.5 Update 1 A04 image, which is available for download from here.

    From VMware ESXi 6.5 onwards, the shared PERC8 controller in VRTX uses the dell_shared_perc8 native driver instead of the megaraid_sas vmklinux driver used in the ESXi 6.0.x branch.

    You can check the following command outputs in ESXi to verify that the supported image is installed on the PowerEdge VRTX blades.

    ~] vmware -vl

    VMware ESXi 6.5.0 build-6765664 

    VMware ESXi 6.5.0 Update 1

     ~] cat /etc/vmware/oem.xml

    You are running DellEMC Customized Image ESXi 6.5 Update 1 A04 (based on ESXi VMKernel Release Build 6765664)

    ~] esxcli storage core adapter list

    HBA Name  Driver             Link State  UID                   Capabilities  Description

    --------  -----------------  ----------  --------------------  ------------  ----------------------------------------------------------

    vmhba3    dell_shared_perc8  link-n/a    sas.0                               (0000:0a:00.0) LSI / Symbios Logic Shared PERC 8 Mini

    vmhba4    dell_shared_perc8  link-n/a    sas.c000016000c00        (0000:15:00.0) LSI / Symbios Logic Shared PERC 8 Mini
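    To confirm the shared PERC8 driver version on the blade (it should be 06.806.90.00 or later), you can also list the installed VIBs and loaded modules. These are generic checks, and the exact VIB name may differ slightly from what is shown here:

    ~] esxcli software vib list | grep -i perc8                 # the dell-shared-perc8 VIB version should be 06.806.90.00 or above
    ~] esxcli system module list | grep -i dell_shared_perc8    # confirms the native driver module is loaded and enabled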


  • Dell TechCenter

    Building the Optimal Machine Learning Platform


    Author: Austin Shelnutt, ESI Architecture team

    While various forms of machine learning have existed for several decades, the past few years of development have yielded some extraordinary progress in democratizing the capabilities and use cases for artificial intelligence across a multitude of industries. Image classification, voice recognition, fraud detection, medical diagnostics, and process automation are just a handful of the burgeoning use cases for machine learning that are reinventing the very world we live in. This blog provides a brief overview of some of the basic principles of machine learning and describes the challenges and trade-offs involved in constructing the optimal machine learning platform for different use cases.

    Neural Networks are key to machine learning

    At the center of the growth in machine learning is a modeling technique referred to as neural networks (also known as deep neural networks, or deep learning), which is based on our understanding of how the human brain learns and processes information. Neural networks are not a new concept and have been proposed as a model for computational learning since the 1940s. What makes neural networks so attractive for machine learning is that they provide a mathematical ecosystem that allows the decision-making accuracy of a computer to scale beyond explicit programming rules and, in a sense, learn from experience.

    Previously, the limiting factor of neural network models was that they are extremely computation intensive and require a tremendous amount of labeled input data to be able to “learn”. This double hurdle of processing power and available data had prevented them from becoming relevant… until now.

    Today, we stand at the intersection of huge data sets being generated in all corners of industry and the rise of massively-parallel compute infrastructure in the form of enhanced CPU instruction sets, GPUs, FPGAs, and new ASICs - designed specifically to accelerate neural network math. Neural network models that would have taken weeks or even months to run even a couple of years ago can learn (or be “trained”) in just a few hours on today’s hardware. 

    While this convergence of available data and compute has opened up seemingly limitless potential applicability to end users, it presents a new challenge to hardware architects: What does the ultimate machine learning platform look like? To answer that, we need to better understand the machine learning platform stack.

    The machine learning stack consists of:

    • The neural network (application) layer – this is the data analysis model
    • The framework layer – provides the specialized software neural networks run on
    • The math libraries layer – houses the math routines the frameworks call
    • The operating system layer – choice of OS
    • The hardware platform layer – offers a number of different accelerator options

    The platform choices made at each of these layers can impact the performance and capabilities of the targeted learning function. The following sections expand on important points that should be considered for these layers.

    Neural Network layer

    Neural networks are symbolic representations of the mathematical models created for a specific learned function (for example, speech recognition). Neural networks come in many different shapes, sizes, and functions, depending upon both the type of data being ingested and the intended goal (output) of the learned function. The complexity of neural network construction can vary by:

    • Specific activation functions
    • Number of activation layers
    • Data set manipulation types:  forward/backward propagation, convolutions, recurrence, LSTM, etc.

    At the highest level, all neural networks break down input features  (such as the pixels in a photo) into multi-dimensional arrays of data (tensors) and then pass them through one or more  layers of parameters (or weights) into activation functions which can be represented as neurons in the neural network.

    Input tensors are multiplied by parameter tensors and activation functions to yield a hypothesis that can be used for a decision – for example, classification that an object shown in a given picture is a cat or a dog. The size and dimensions of the input features and the number of activation layers is what determines how to handle the necessary math operations in the hardware layer (i.e., you may require multiple GPUs).

    When designing the optimal platform to use for a neural network, how that particular neural network is constructed is crucial in determining what options are best for it at other layers of the stack. In general, the platform designer’s goal is to understand how data is moved in, out, and around inside of the system to tune features in a manner that most efficiently eliminates data choke points or bottlenecks.


    For example, small neural networks that can be computed relatively quickly might create a tremendous demand on data set ingest bandwidth either from local storage or remote data pools and consequently would be potentially bottlenecked by slow storage devices or narrow I/O bandwidth.  Pairing this type of neural model with a high performance accelerator platform that lacks significant I/O bandwidth would also result in under-utilized compute hardware.

    As another example, very large neural networks with a large number of input features and/or activation layers may not fit comfortably inside a single accelerator’s onboard memory, or may need to swap weight calculations in and out of the page file during each iteration. This type of model might operate most efficiently when the stored weights can be exchanged and multiplied across multiple accelerators. So, a hardware platform that offers multiple accelerators would be the right choice in this case. But note that the distribution of operations to multiple accelerators is handled differently by different hardware offerings and frameworks, so the efficiency of distribution varies accordingly. Also note that not every neural network benefits equally from multiple accelerators – or at least not at the same scaling efficiency. (See the following sections.)

    Framework layer

    Neural network models run on deep learning software frameworks. The proliferation of frameworks, while primarily open source in nature, has largely stemmed from academia and a number of hyperscale service providers – each attempting to advance their own particular code.  You can run virtually any neural network on any deep learning framework, but they are certainly not all created equal. The manner in which frameworks utilize the underpinning hardware varies from framework to framework. While end users often choose a framework based on coding familiarity, there are a number of factors to consider that impact neural network performance:

    • How a framework makes math library calls (and which libraries it uses), how it pulls apart the tensor multiplication operations, and how it maps these operations into the physical hardware are all unique to that framework.
    • Some frameworks are better at scaling outside of a single server to use multiple servers working together - and some are not capable of scaling out at all.
    • Some frameworks are well suited to orchestrating neural network mathematics across a large number of parallel compute devices (i.e. GPUs) within a single server, while others scale very poorly on multiple accelerators.

    Each of these points needs to be considered in light of the characteristics of the specific neural network. They may ultimately influence the choice of framework and the accelerator options.

    Hardware Platform layer

    Choosing the right hardware technology to support a given machine learning application is another challenge for platform design. While CPUs can be used for deep learning, they are scalar multiplication engines by nature, and poorly suited to the higher-order tensor operations common to deep learning (vectors, matrices, and beyond).  So machine learning platforms typically incorporate some form of accelerator technology - GPU, FPGA, or ASIC. But even at that level there are trade-offs to consider – particularly concerning the distribution of operations across multiple accelerators and how that impacts scaling.  These considerations are described below.

    GPUs

    GPUs have been the cornerstone of the deep learning growth in recent years because of their powerful parallel compute capabilities, derived from their relatively large number of independent logic cores. The model for how data is exchanged between GPUs is a differentiating feature when considering platform design.

    FPGAs & ASICs

    Though GPUs currently occupy a fortress in the deep learning market, technology vendors from across the globe are lining up to take aim at specific soft spots in the GPU’s dominance. The latest FPGA and ASIC technology delivers new levels of component-level performance-per-dollar, performance-per-watt, and small-batch efficiency that will result in competitive offerings in 2018 intended to blow the doors off contemporary deep learning hardware.

    PCIe-based Accelerators

    Using PCI-Express accelerators for machine learning has become popular for a number of previously discussed reasons; however, one primary benefit is the ability to ‘scale up’ to use multiple accelerators in the same server. The challenge in effectively using more than one accelerator is data exchange between the cards. The latency and bandwidth limitations for data going back through the host CPU’s PCIe root complex, for example, can be a large performance penalty that negates the multi-accelerator benefit.

    Modern non-blocking PCIe switches can be a great solution to this challenge by allowing the PCIe accelerators to exchange data directly without passing through the host root complex, provided the framework comprehends this type of communication path.

    Again, here, balance is the key. As you add accelerators to the switch, eventually the host bandwidth between the switch and the (single host) CPU becomes the new bottleneck. Unfortunately, due to the variations in neural networks, data sets, and frameworks, this point is a moving target, and very difficult to predict.

    Specialized accelerator-to-accelerator communication

    Many technology companies are now implementing specialized accelerator-to-accelerator connection links. NVIDIA’s NVLink is a great example of a specialized communication path that dramatically improves bandwidth between accelerators for applications that benefit from peer-to-peer data exchange.

    To be clear, while these auxiliary connection types are extremely valuable for some end customer use cases, there are other deep learning applications that yield very little benefit from this type of interconnect. Furthermore, these specialty interconnects can be costly, both in terms of materials and the design changes required to accommodate them.

    In fact, the current proprietary interconnect trend is driving unique server designs just to support the interconnect, resulting in wide variations in hardware from vendor to vendor. Accelerator technology vendors are, seemingly, abandoning all forms of conventional design guidelines in their pursuit of maximum peer-to-peer bandwidth. This may be the single biggest pain point in designing a truly optimized deep learning platform.

    What’s next for Machine Learning platforms?

    The physical manifestation of the peer-to-peer interconnect is not the only place where deep learning technology providers are departing from conventional techniques. In the pursuit of ever-improved performance, some vendors are moving beyond the PCIe form factor, pushing beyond the accepted power and heat limits, and writing new math libraries. Platform designers need to be aware that the technology underpinning the explosive growth in machine learning is still very fluid and divergent.

    Conclusion

    Machine learning customers have more choices than ever for neural network models and frameworks. Those choices impact the type, number, and form factor of the preferred accelerator, the dataflow topology between accelerators and CPUs, the amount and speed of direct attached storage, and the necessary bandwidth of I/O devices. The resulting platform must:

    • Serve their specific learning model – not an unrelated deep learning model
    • Stay within their data center requirements for server form factor, rack depth, power, and cooling
    • Be management agnostic

    Solving a platform optimization challenge with this many degrees of freedom may seem daunting, but Dell EMC is committed to helping our customers meet this challenge. Today, we are already working with a wide range of customers, across a number of industries to solve some of the most complex and interesting machine learning problems.  And going forward, we are committing resources to ensure we remain a technology leader in this arena.

    For more information on what Dell EMC Extreme Scale Infrastructure is doing with Machine Learning, contact ESI@dell.com .

     

     

  • Hotfixes

    Mandatory Hotfix 654693 for 8.6 MR3 Linux Connector Released

    This is a mandatory hotfix for: 

     

    • Linux Connector

     

    The following is a list of issues resolved in this release.

    Each entry lists the feature area, the feature ID, and a description of the issue:

    • BYOD (654171) – The Linux Connector might close during the second attempt to download the configuration using an incorrect FQDN path to the Connection Broker
    • Auto-Launch (654334) – An error message is displayed during the first connection when an auto-launch application is configured in the user’s configuration but not assigned to that user
    • Configuration (654018) – Configuration is not saved in the Linux Connector if the user clicks Cancel on the Credentials screen after the configuration has already been downloaded
    • Dynamic Resize (654539) – An informational message is displayed every time the user resizes the remote window
    • Seamless (654012) – In Notepad, the menu drop-down list is displayed shifted when Notepad is placed near the screen border
    • Seamless (654225) – The Seamless application does not detect the Unity taskbar
    • Seamless (654535) – The session window of an MS Seamless application might disappear after reconnection
    • Seamless, Multiple Monitors (654016) – The resize frame has an incorrect position and size when resizing an MS Seamless application if the start point of the primary monitor is not 0,0
    • Seamless, Multiple Monitors (654537) – An MS Seamless application might be displayed incorrectly after a left click on it when the upper borders of the monitors are not aligned
    • Multiple Monitors (654017) – The session window might be cropped if the Span Multiple Monitors option is not selected and the start point of the primary monitor is not 0,0
    • Password Management (654170) – An error message is displayed after an attempt to change the password for users with non-English names
    • Password Management (654216) – If the Require Authentication option is not selected for the farm, the Change Password window is not displayed when the Change Password option is checked on either the Welcome or the Auto-Configuration screen
    • Password Management (654250) – The Password Management messages are not displayed in the client’s language

    This hotfix is available for download at: https://support.quest.com/kb/234432 

  • Dell TechCenter

    Network Automation with Dell EMC OS9, OS10, Open Switch OPX and Ansible

    This blog describes the Open Switch OPX network automation demo based on the Ansible framework, as delivered by Dell EMC at AnsibleFest 2017. For more details on the demo, including videos and configuration playbooks, please contact feedback-ansible-dell-networking@Dell.com.

    Demo Overview

    This demo walks through the process of deploying a BGP fabric in a leaf-spine topology using Ansible. We use a single Ansible playbook to configure and bring up BGP across OS9, OS10 Enterprise Edition, and OPX (Open Switch).

     Ansible Playbook Details

    The Ansible playbook is deployed on the Ansible server that is part of the switch management subnet. The Ansible server is configured to connect to all the switches over SSH and deliver the configuration.

    The playbook deployment includes three primary components: the switch inventory file, the host variable files, and the main playbook. The playbook directory structure is outlined below.
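    As a hypothetical illustration (the actual demo directory may be organized differently), a playbook directory along these lines contains all three components:

    inventory            # switch inventory file: leaf/spine nodes with OS name and mgmt IP
    datacenter.yaml      # main playbook targeting the datacenter group
    host_vars/
        leaf1.yaml       # per-node variables used by the roles
        leaf2.yaml
        leaf3.yaml
        leaf4.yaml
        spine1.yaml
        spine2.yaml
    roles/               # Dell EMC roles for BGP, interface, system and version configuration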

     

     Switch Inventory File

    The file lists the switches that will be configured by the playbook. Each node in the data center topology is listed with its OS name, management IP address, and node name.
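    As a hypothetical sketch (node names, IP addresses, and the os_name variable are illustrative only, not the demo’s actual values), an INI-style inventory capturing this information could look like:

    [spine]
    spine1 ansible_host=10.10.10.1 os_name=dellos9
    spine2 ansible_host=10.10.10.2 os_name=dellos10

    [leaf]
    leaf1 ansible_host=10.10.10.11 os_name=dellos9
    leaf2 ansible_host=10.10.10.12 os_name=dellos10
    leaf3 ansible_host=10.10.10.13 os_name=opx
    leaf4 ansible_host=10.10.10.14 os_name=opx

    [datacenter:children]
    spine
    leaf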

     

    Host Var Files

    Each node in the topology has a host variable file associated with it. The host var file for the leaf1 node is shown below.

    Ansible Playbook

    The first line in the playbook shows the target hosts as datacenter. From the switch inventory, we can see that the datacenter group includes all the leaf and spine nodes.

    The roles entry lists the Dell EMC roles for configuring BGP, interfaces, system settings, and version information. Ansible runs each of these roles, with help from the host vars files, to build the CLI commands from a set of predefined templates and deliver them to the devices.

    Running the Playbook

     

    The arguments to the ansible-playbook command include the switch list from the inventory file and the playbook file datacenter.yaml. YAML is the data serialization format used to write the Ansible playbook.
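    The invocation itself is a single command (assuming the inventory file is simply named inventory, which is an assumption here; datacenter.yaml is the playbook named above):

    ansible-playbook -i inventory datacenter.yaml    # add -vvvv for very granular output when troubleshooting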

    Executing the playbook generates the switch configuration by rendering the Jinja2 templates defined for each role used in the playbook. The commands are then delivered to the devices via an SSH client connection.

    Repeated execution of the Ansible playbook will only apply changes made to the playbook, host var files, or switch inventory file, and is thus safe to repeat. All of the modules have idempotence baked in; that is, running a module multiple times in sequence has the same effect as running it just once.

    Playbook Results!

    A portion of the output on the console is shown below.

     

    As can be seen, all four leaf and two spine nodes that are part of the data center topology have been configured with Quagga (BGP) to run an L3 network.

    You can also look at the CLI command files for each node that were delivered to the device under the /tmp directory. This can be a useful way to troubleshoot a deployment.

    For more detailed console output, you can use the -vvvv option. This makes the console output very granular and shows the details of all the steps that Ansible takes to deploy to the switches – a useful way to debug the playbooks.

    You can log in to one of the switches to check the deployed configuration. A sample is shown below.

    Summary

    The demo highlights Ansible as a flexible automation framework for switch manageability and a simple programmable environment.

    Ansible can prove to be quite powerful, and the network as a whole doesn’t have to be automated overnight. It’s about thinking a little differently and exploring some automation to see if it makes sense for any given environment. There are steps that can be taken to learn about these new processes and tools. And the best part is, all of these configuration management tools are open source, and OPX can also be tried at no cost.

    Questions? Help? Contact: Open Networking Team at Dell EMC

  • Custom Solutions Engineering Blog

    Run Oracle Linux and Oracle VM on Dell EMC PowerEdge Servers Utilizing Custom Solutions Engineering

    Written by J Tamilarasan

    DellEMC has been building and researching certified and tested Oracle platform solutions for over 20 years, with leadership in x86 technologies and clustered implementations. Over the years, our own internal usage of Oracle database and application software has translated that relationship into significant benefits for our customers. Technical certification is a foregone conclusion, with Dell servers listed on the Oracle Hardware Certification Lists (HCL) and on the Oracle Validated Configurations list (VC) that are all qualified to run Oracle Linux and Oracle VM. But certification goes far beyond these base levels. Dell offers Tested and Validated solutions for other operating environments, with complete guidance for deployment and configuration. These qualifications serve as a reassurance to customers that their configuration has been tested and is enterprise-ready.

    Oracle’s latest server virtualization product delivers many important new features and enhancements to enable rapid enterprise application deployment throughout public and private cloud infrastructure. The new release continues expanding support for both Oracle and non-Oracle workloads – providing customers and partners with additional choices and interoperability – including the capability to enable OpenStack support.

    Below are the links where you can find the DellEMC hardware specifications that are certified and published by Oracle.

    OVM 3.4.4 support for DellEMC PowerEdge 14G server.

    Dell EMC’s new PowerEdge 14G servers are highly scalable and performance optimized so they are a good fit for both traditional and cloud-native workloads. The new PowerEdge servers feature automation to increase productivity and simplify lifecycle management. PowerEdge users can use Quick Sync 2 to manage the servers through mobile devices (Android and iOS).

    OVM 3.4.4 on PowerEdge 14G servers is validated and certified against these hardware specifications by the DellEMC Custom Solutions Engineering team, which validates and certifies OVM and OVS.

    OVM 3.4.4 supports the PowerEdge RAID Controller 9 (PERC 9) on PowerEdge 14G servers, but not PERC 10. Installing OVM 3.4.4 on a DellEMC PowerEdge 14G server requires updating to the most recent kernel, after which you may encounter the following issues:

    1. The storage devices cannot be found during installation, as the PERC 10 driver is not included in the OVM 3.4.4 kernel.

    2. Note that upgrading the OVM kernel with kernels from other OL versions, such as OL5, will result in a kernel panic.

     

     Dell’s Custom Solutions Engineering can install the drivers and certify the Operating System. Please contact your account team to start the process.

    To learn more about Dell Custom Solutions Engineering visit www.dell.com/customsolutions

    We require customers to sign a Disclaimer form acknowledging that this configuration is not supported by Dell and that there are associated risks the customer must assume.

  • Hotfixes

    Mandatory Hotfix 654658 for 8.6 MR3 Android Connector Released

    This is a mandatory hotfix for: 

     

    • Android Connector

       

    The following is a list of issues resolved in this release.

    Each entry lists the feature area, the feature ID, and a description of the issue:

    • BYOD (654402) – The user is not prompted for authentication when their credentials are not saved
    • Localization (654248) – The Password Manager messages are not displayed in the client’s language
    • Dynamic Resize (654461) – The session window is not resized if a device has been rotated during session launch
    • Custom Resolution (654520) – A custom resolution greater than 1152*2048 cannot be applied to the connector
    • Clipboard (654357) – Copied text is pasted into the remote session from the local app without the last character
    • Clipboard (643146) – Text is copied with additional characters from Microsoft Notepad to Microsoft Paint in the remote session
    • Keyboard (654374) – The remote session constantly moves upward if the user switches between the keyboard and the extended keyboard
    • Keyboard (654375) – There is empty space between the keyboard with F-panel and the session
    • Keyboard (654440) – The F-panel is displayed without the on-screen keyboard
    • Keyboard (654509) – Numeric characters are displayed instead of special ones after the device is rotated with the remote session open

    This hotfix is available for installation directly from the Google Play store and can also be downloaded from: https://support.quest.com/vworkspace/kb/234044