Dell Community

Latest Blog Posts
  • General HPC

    14G with Skylake – how much better for HPC?

    Garima Kochhar, Kihoon Yoon, Joshua Weage. HPC Innovation Lab. 25 Sep 2017.

    The document below presents performance results on Skylake based Dell EMC 14th generation systems for a variety of HPC benchmarks and applications (Stream, HPL, WRF, BWA, Ansys Fluent, STAR-CCM+ and LS-DYNA). It compares the performance of these new systems to previous generations, going as far back as Westmere (Intel Xeon 5500 series) in Dell EMC's 11th generation servers, showing the potential improvements when moving to this latest instance of server technology.

  • Hotfixes

    Optional Hotfix 654339 for 8.6 MR3 Password Management Released

    This is an optional hotfix for: 

    • Password Management Role

     

    The following is a list of issues resolved in this release.

    | Feature | Description | Feature ID |
    |---|---|---|
    | Password Manager | The same error message is displayed for the incorrect password, access denied and other errors. | 647264 |

    This hotfix is available for download at: https://support.quest.com/vworkspace/kb/233001 

  • Hotfixes

    Mandatory Hotfix 654335 for 8.6 MR3 Windows Connector Released

    This is a Cumulative Mandatory hotfix and can be installed on the following vWorkspace roles:

    • Windows Connector

    The following is a list of issues resolved in this release.

    | Feature | Description | Feature ID |
    |---|---|---|
    | Windows connector | Focus changes to PNTSC after launching a pinned application via the task bar | 627437 |
    | Windows connector | The Win+M key combination does not minimize the full-screen session if the keyboard is set to ‘On the local’ | 654197 |
    | Windows connector | Published App with multiple modal dialogue boxes does not respond | 633773 |
    | Windows connector | Seamless application focus blinks if a seamless application with several modal windows was activated after the local application | 654260 |
    | Windows connector | The “Web site or address” field is not focused on the Welcome screen after a configuration has been deleted | 598945 |
    | Windows connector | The Search field is not cleared after deleting and then downloading a configuration | 588327 |
    | Windows connector | The Search applications function does not work after the ‘Back door’ settings are closed | 596787 |
    | Windows connector | There are black borders around vWorkspace Seamless applications | 653831 |
    | Windows connector | PNTSC hangs on logoff when using the Olympus VD client | 653863 |
    | Windows connector | After minimizing and maximizing the session, it is spanned over both monitors incorrectly | 654028 |
    | Windows connector | User’s credentials should not be updated in the connection automatically if Password Manager has not changed the user’s password | 654116 |
    | Windows connector | If there are two monitors, only half of the remote desktop is shown when the Span Multi-Monitor option is not selected | 653940 |
    | Windows connector | The remote session is not maximized on the second monitor after it was minimized on the first monitor and moved to the second one | 654403 |
    | Windows connector | It is impossible to connect to the application using Kerberos Authentication | 654004 |
    | Windows connector | Connector closes if switched between two seamless applications with an opened printer preference page and changed default paper | 654219 |

    This hotfix is available for download at: https://support.quest.com/vworkspace/kb/232979 

  • Hotfixes

    Mandatory Hotfix 654337 for 8.6 MR3 Web Access Released

    This is a mandatory hotfix and can be installed on the following vWorkspace roles:

    • Web Access

    The following is a list of issues resolved in this release.

    | Feature | Description | Feature ID |
    |---|---|---|
    | Web Access | UPN login causes black screen when logging into a full desktop, but works fine via Web Access | 654026 |
    | Web Access | Notification message about user account locking has become more informative | 654143 |
    | Web Access | Connector installation from Web Access does not work | 453332 |
    | Web Access | Writing issues (keyboard) in some applications like Wireshark and Cisco Packet Tracer using HTML5 | 653999 |
    | Web Access | Connector Policy breaks Web Access | 654128 |
    | Web Access | Web Access multi-line text in Browser messages does not work properly | 654217 |
    | Web Access | Session does not launch if the user uses UPN credentials and NLA is disabled on the server side | 654482 |

    This hotfix is available for download at: https://support.quest.com/vworkspace/kb/232974 

  • Hotfixes

    Mandatory Hotfix 654514 for 8.6 MR3 PNTools/RDSH Released

    This is a Mandatory hotfix and can be installed on the following vWorkspace roles:

    • Remote Desktop Session Host (RDSH)
    • PNTools (VDI)

     The following is a list of issues resolved in this release.

    | Feature | Description | Feature ID |
    |---|---|---|
    | PNTools and RDSH | There are black borders around the vWorkspace Seamless applications | 653831 |

    This hotfix is available for download at: https://support.quest.com/vworkspace/kb/232958

  • Hotfixes

    Mandatory Hotfix 654338 for 8.6 MR3 Management Console Released

    This is a mandatory hotfix and can be installed on the following vWorkspace roles:

    • vWorkspace Management Console

    The following is a list of issues resolved in this release.

    | Feature | Description | Feature ID |
    |---|---|---|
    | Management Console | Wizards in the MC display ‘_’ symbol instead of ‘&’ symbol | 653743 |
    | Management Console | The Active Directory Path item is not disabled if the Workgroup item is selected on the Domain or Workgroup page in the New Operating System Customization Properties wizard | 653745 |
    | Management Console | Web Access multi-line text in Browser messages does not work properly | 654217 |

    This hotfix is available for download at: https://support.quest.com/vworkspace/kb/232952 

  • Hotfixes

    Mandatory Hotfix 654336 for 8.6 MR3 Connection Broker Released

    This is a cumulative mandatory hotfix and can be installed on the following vWorkspace roles:

    • vWorkspace Connection Broker

    The following is a list of issues resolved in this release.

    | Feature | Description | Feature ID |
    |---|---|---|
    | Connection Broker | VMware Standard Clones with MS Sysprep do not retain MAC address | 622600 |
    | Connection Broker | Reprovision using task automation does not work when using a new template | 627696 |
    | Connection Broker | VM created from Windows Server 2016 template on SCVMM server does not initialize | 653746 |
    | Connection Broker | VM RAM on VMware resets to a template value after reprovisioning | 653747 |
    | Connection Broker | Thread handle leak in broker service | 653771 |
    | Connection Broker | VM Reprovision can cause duplicate MAC address in VMware | 653939 |
    | Connection Broker | VMs created in the Hyper-V Failover Cluster with High Availability enabled are reprovisioned without VHD | 654101 |
    | Connection Broker | Linked reprovision to Existing Template or New Template does not work | 654142 |
    | Connection Broker | After adding a Hyper-V host to the MC, getting Hyper-V volume information fails and the host does not initialize | 654160 |
    | Connection Broker | On VMware, the VM Virtual Disks setting resets to the template's value after reprovisioning | 654312 |

    This hotfix is available for download at: https://support.quest.com/vworkspace/kb/232947

  • Dell TechCenter

    Rack Scale & Ready: DSS 9000

    Author: Robert Tung, Sr. Consultant, Product Management, Extreme Scale Infrastructure 

    This month, the Dell EMC Extreme Scale Infrastructure (ESI) group is releasing the latest version of the DSS 9000 – an open, agile and efficient rack-scale infrastructure solution. It is specifically designed using hyperscale principles to help carriers and service providers accelerate the shift to software-defined data centers, realize their digital transformation goals, and stay competitive in a rapidly changing business environment.

    Carriers and service providers are competing in arenas that are being upended by trends like Big Data Analytics, the Internet of Things, and Edge Computing. In some cases they are directly competing with industry-leading hyperscale cloud pioneers. They need infrastructure that can easily and rapidly scale to answer the demands for compute and storage – and that can be readily managed at the rack and datacenter level.

    Scale-out hardware design

    One of the major challenges for rapidly growing IT environments is the ability to scale with minimal disruption to business operations. The DSS 9000 is designed to make scaling easy, both at the rack level and within the rack. As a pre-integrated rack solution, DSS 9000 deployment is as simple as rolling the pre-configured rack into place and plugging it in. It is highly flexible: it is available in multiple heights and can be tailored to meet specific workload needs using modular blocks loaded with sleds of varying widths containing compute and storage resources. This flexible approach allows infrastructure to be easily and rapidly changed or scaled as needed over time.

    Flexibility extends to the design of the sleds as well. The DSS 9000 offers three variable-width server sleds: full-width (DSS 9600), half-width (DSS 9620) and third-width (DSS 9630). Each supports two Intel® Xeon® Scalable family processors, 16 DDR4 DIMMs, and variable amounts of supported storage. (See sidebar.) Compute-intensive environments can scale up to 96 nodes per rack using the third-width sleds.

    DSS 9620 half-width server

    The DSS 9000 also supports a full-width JBOD storage sled that delivers up to twelve 3.5” drives and can be daisy-chained to a head node to provide up to 288 TB of additional storage.

    Innovative rack management

    One of the most distinctive features of the DSS 9000 is its Rack Manager component. It is designed to provide enhanced rack-level management functions through a single point of control. At its core is an embedded CentOS server powered by an Intel® Atom microprocessor that handles the increasingly sophisticated management tasks common in today’s data centers. It is connected to each block in the rack and, through them, to each individual node using a 1 Gbit management network that is independent of the data network. It is also connected to the power bays and the cooling fans, which are disaggregated from the sleds to improve overall reliability and serviceability.

    Using Rack Manager with a Redfish-based command-line interface, you can get information from, and manage, all of the devices in the rack. This lets you consolidate rack-wide operations, such as updating firmware on every device in the rack or controlling power and cooling at the block level, among many other possibilities.
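    As a rough illustration of what such Redfish-based access looks like – a minimal sketch only, where the address, credentials and resource layout are placeholders rather than the documented DSS 9000 Rack Manager interface – a management script can walk the standard Redfish chassis collection and read basic status from each member:

        # Minimal sketch: enumerate Redfish chassis resources and print their power state.
        # The endpoint and credentials are placeholders, not a documented DSS 9000 address.
        import requests

        BASE = "https://rack-manager.example.com"   # hypothetical Rack Manager address
        AUTH = ("admin", "password")                # placeholder credentials

        def get(path):
            """GET a Redfish resource and return its JSON body."""
            r = requests.get(BASE + path, auth=AUTH, verify=False)
            r.raise_for_status()
            return r.json()

        # The Redfish service root exposes top-level collections such as /redfish/v1/Chassis.
        for member in get("/redfish/v1/Chassis").get("Members", []):
            chassis = get(member["@odata.id"])
            print(chassis.get("Id"), chassis.get("ChassisType"), chassis.get("PowerState"))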

    Rack scale management allows you to greatly reduce the time consumed in day-to-day operations across massive infrastructure environments and also reduces the number of management switches involved compared to typical datacenters. 

    Open networking… open everything

    The DSS 9000 is designed without an embedded fabric in support of an open networking approach.  Data center managers can implement whatever vendor’s networking stack makes the most sense for them. The DSS 9000 supports any standard 19” switch via its Open IT bay capability.

    In fact, the Open IT bay capability allows the rack configuration to accommodate any standard 19” server, storage or networking equipment.  The Redfish API compliance also makes it possible for the DSS 9000 to support the rack-wide management of this ancillary third-party gear. Such open technologies and interoperability are part of what led the Open Compute Project (OCP) Foundation to recognize the DSS 9000 as OCP-INSPIRED™.

    More choices for workload optimization

    The Extreme Scale Infrastructure organization has more than a decade of experience working with customers to deliver tailored, hyperscale-inspired features to optimize their infrastructure for specific workloads. That history of innovation has led us to integrate the latest compute, networking and storage options into the DSS 9000 to allow customers to meet their needs now and into the future.

    For more information refer to the DSS 9000 website or email ESI@dell.com.

  • Dell TechCenter

    Extreme Scale Infrastructure (ESI) Architects' POV Series (REDFISH)

    Paul Vancil – Distinguished Eng., ESI Architecture, Dell EMC

    In today’s typical large-scale data center, the number of servers that organizations are trying to keep track of has reached enormous proportions. In addition, existing protocols lack the ability to describe modern architectures and do not incorporate modern security best practices. The differences between various vendors’ solutions only multiply the problem. Organizations have been forced to manage to the lowest common denominator – severely compromising their capabilities.

    Redfish origins and Dell EMC

    To address these challenges, in 2014 the Distributed Management Task Force created its Scalable Platform Management Forum (SPMF) to begin work on Redfish, an open, industry-standard specification and schema that provides a consistent API addressing all the components in the data center – one that is simpler, more secure and uses modernized protocols. Redfish is an ongoing project supported by participation from a wide array of server vendors, including Dell EMC, Microsoft, Intel, HPE, Lenovo and IBM. Dell has co-chaired the SPMF since its inception: I served as co-chair from 2014 until late last year, and my Dell EMC colleague Michael Rainieri is co-chair now.

    Version 1.0

    The initial goal for Redfish 1.0 was to provide IPMI 2.0 equivalence with extensions for processor, NIC, and storage inventory – essentially standardizing the level of functionality available in IPMI, but with the benefits of a more secure RESTful interface. Much of the early work went into establishing the protocol and modeling rules. Considering the complexity of the task and the number of industry participants, the project held to an extremely aggressive schedule and delivered Redfish v1.0 in August 2015, just a year after the forum began its work.
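    To make the "IPMI equivalence over REST" idea concrete, here is a minimal, hypothetical sketch (placeholder host and credentials) of reading the kind of processor and memory inventory Redfish 1.0 standardized, using plain HTTPS GETs against the schema-defined resources:

        # Hypothetical sketch of a Redfish 1.0 style inventory read (placeholder host/credentials).
        import requests

        BASE = "https://bmc.example.com"
        AUTH = ("user", "password")

        def get(path):
            r = requests.get(BASE + path, auth=AUTH, verify=False)
            r.raise_for_status()
            return r.json()

        # Walk the standard Systems collection and report summary inventory per system.
        for member in get("/redfish/v1/Systems")["Members"]:
            system = get(member["@odata.id"])
            print("Model:      ", system.get("Model"))
            print("Processors: ", system.get("ProcessorSummary", {}).get("Count"))
            print("Memory GiB: ", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
            print("Power state:", system.get("PowerState"))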

    Redfish impact on the industry

    Not only was the initial release held to a tight timeline, but since then – and key to the success of Redfish – there has been widespread acceptance and adoption of the APIs by industry vendors. Most of the major server vendors are currently implementing some level of the Redfish APIs. Dell EMC has been aggressive in its implementation of Redfish: PowerEdge servers (going back two generations) support v1.0 and parts of subsequent releases, and the Rack Manager component on the DSS 9000 uses Redfish as the basis of its rack-wide management capabilities.

    Having a de facto standard for system management APIs is a boon for data center management. One of the most significant benefits is that system management software and orchestration tools now have a standard way to address heterogeneous systems in large scale infrastructures. This both simplifies the development effort and expands the capabilities of administrators.  Because open standards development involves wide participation from across the industry, adopters of the standard benefit from the industry-wide expertise and input and more work can be done more quickly.

    Redfish and composability

    The fact that customers across the industry were wrestling with the management challenges described above helped drive the project to this point, but so did the advent of Intel® Rack Scale Design (RSD). RSD is an architecture that includes a multi-rack hardware resource manager and has been developed in the same timeframe as Redfish. It uses Redfish APIs as its standard way of accessing hardware resource information. Using Redfish APIs and RSD concepts, data centers can dynamically allocate pooled hardware resources. These two elements working together are foundational steps on the path to fully composable infrastructure.

    The Redfish roadmap

    Redfish v1.0 proved that it was possible to finally get the development community to standardize on basic management APIs. But the ultimate goal of the Redfish project is to provide even higher-level capabilities to the modern data center. To that end, the SPMF has committed to delivering three release bundles each year with additional capabilities. Work is in progress to define standards for Ethernet switches, data center infrastructure (e.g., PDUs, BBUs), external storage (via SNIA), and composability. In addition, the SPMF is continuing to update interoperability capabilities with conformance test tools and interoperability feature profiles that define the minimum required features for particular environments; for example, a Base Server feature profile would standardize the minimum set of guaranteed Redfish features for a base server. Interoperability is really what Redfish is all about, and Dell EMC is actively working as part of this industry effort to help bring it to our data center customers.

    For more details on all of the Redfish activities and releases see the Distributed Management Task Force website.

     

  • General HPC

    HPCG Performance study with Intel Skylake processors

    Author: Somanath Moharana and Ashish Kumar Singh, Dell EMC HPC Innovation Lab, September 2017

    This blog presents an analysis of the High Performance Conjugate Gradients (HPCG) benchmark on the Intel(R) Xeon(R) Gold 6150 processor, codenamed “Skylake”. It also compares the performance of the Intel(R) Xeon(R) Gold 6150 with its previous-generation counterpart, the Intel(R) Xeon(R) CPU E5-2697 v4, codenamed “Broadwell-EP”.

    Introduction to HPCG

    The High Performance Conjugate Gradients (HPCG) benchmark is a metric for ranking HPC systems. HPCG can be considered a complement to the High Performance LINPACK (HPL) benchmark. HPCG is designed to exercise computational and data access patterns that more closely match a broad set of important applications, and to encourage system designers to invest in capabilities that will have an impact on the collective performance of those applications.

    The HPCG benchmark is based on a 3D regular 27-point discretization of an elliptic partial differential equation. The 3D domain is scaled to fill a 3D virtual process grid across all of the available MPI ranks. The preconditioned conjugate gradient (CG) algorithm is used to solve the intermediate systems of equations and incorporates a local symmetric Gauss-Seidel preconditioning step that requires a triangular forward solve and a backward solve. The benchmark exhibits irregular accesses to memory and fine-grain recursive computations.
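    As a sketch of the standard symmetric Gauss-Seidel preconditioner (textbook form, not taken from the HPCG source), split the matrix A into its strictly lower, diagonal and strictly upper parts; applying the preconditioner then costs one forward and one backward triangular solve per application:

        \[
          A = L + D + U, \qquad
          M_{\mathrm{SGS}} = (D + L)\,D^{-1}(D + U), \qquad
          M_{\mathrm{SGS}}^{-1} r = (D + U)^{-1} D\,(D + L)^{-1} r
        \]

    The forward solve with (D + L) and the backward solve with (D + U) are exactly the triangular solves mentioned above.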

     

    HPCG has four computational blocks – sparse matrix-vector multiplication (SPMV), symmetric Gauss-Seidel (SymGS), the vector update phase (WAXPBY) and the dot product (DDOT) – and two communication blocks, MPI_Allreduce and halo exchange.
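    Purely as an illustrative sketch of what these kernels compute (this is not the HPCG reference implementation, and the matrix below is a small random stand-in rather than the 27-point stencil), the four compute blocks map to operations like the following:

        # Illustrative sketch of the four HPCG compute kernels (not the HPCG reference code).
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve_triangular

        n = 1000
        # Small diagonally dominant random sparse matrix as a stand-in for the 27-point stencil.
        A = sp.random(n, n, density=0.01, format="csr") + n * sp.identity(n, format="csr")
        x, y, r = np.random.rand(n), np.random.rand(n), np.random.rand(n)

        # SPMV: sparse matrix-vector multiplication
        z = A @ x

        # WAXPBY: scaled vector update w = alpha*x + beta*y
        alpha, beta = 1.0, -0.5
        w = alpha * x + beta * y

        # DDOT: dot product (combined across ranks via MPI_Allreduce in the real benchmark)
        d = np.dot(x, y)

        # SymGS: forward solve with (D + L), diagonal scaling, then backward solve with (D + U)
        lower = sp.tril(A, format="csr")          # D + L
        upper = sp.triu(A, format="csr")          # D + U
        t = spsolve_triangular(lower, r, lower=True)
        s = spsolve_triangular(upper, A.diagonal() * t, lower=False)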

     

    Introduction to Intel Skylake processor

     

    Intel Skylake is a microarchitecture redesign built on the same 14 nm manufacturing process technology, with support for up to 28 cores per socket, and serves as the "tock" in Intel's "tick-tock" manufacturing and design model. It supports 6 DDR4 memory channels per socket with up to 2 DIMMs per channel (DPC) and memory speeds of up to 2666 MT/s.

    Please see the BIOS characterization for HPC with Intel Skylake processors blog (reference 5 below) for a better understanding of Skylake processors and their BIOS features on Dell EMC platforms.

    Table 1: Details of Servers used for HPCG analysis

    | Platform | PowerEdge C6420 | PowerEdge R630 |
    |---|---|---|
    | Processor | 2 x Intel(R) Xeon(R) Gold 6150 @ 2.7 GHz, 18 cores | 2 x Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.3 GHz, 18 cores |
    | Memory | 192 GB (12 x 16 GB) DDR4 @ 2666 MT/s | 128 GB (8 x 16 GB) DDR4 @ 2400 MT/s |
    | Interconnect | Intel Omni-Path | Intel Omni-Path |
    | Operating System | Red Hat Enterprise Linux Server release 7.3 | Red Hat Enterprise Linux Server release 7.2 |
    | Compiler | version 2017.0.4.196 | version 2017.0.098 |
    | MKL | Intel® MKL 2017.0.3 | Intel® MKL 2017.0.0 |
    | Processor Settings > Logical Processors | Disabled | Disabled |
    | Processor Settings > Sub NUMA Cluster | Disabled | NA |
    | System Profile | Performance | Performance |
    | HPCG | Version 3.0 | Version 3.0 |

    HPCG Performance analysis with Intel Skylake

    In HPCG, the problem size has to be set appropriately to obtain the best results. For a valid run, the problem size should be large enough that the arrays accessed in the CG iteration loop do not fit in the cache of the device, and it should occupy a significant fraction of main memory, at least 1/4 of the total.

    Adjusting the local domain dimensions changes the global problem size. For this HPCG performance characterization, we chose local domain dimensions of 160^3, 192^3 and 224^3 with an execution time of t=30 seconds. The local domain dimension determines the global domain dimension: the NR MPI processes are arranged into a 3D process grid NPx x NPy x NPz (with NPx*NPy*NPz = NR), giving a global domain of (NPx*Nx) x (NPy*Ny) x (NPz*Nz), where Nx = Ny = Nz = 160, 192 or 224.
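    For reference, a minimal sketch of how such a run is typically set up; the file layout follows the standard HPCG input file, while the binary name and launcher flags are assumptions that vary between the reference and Intel-optimized (MKL) builds. The local dimensions and run time go into hpcg.dat (first two lines are free-form titles, the third line is the local grid Nx Ny Nz, and the fourth is the run time in seconds):

        HPCG benchmark input file
        HPC Innovation Lab characterization run
        192 192 192
        30

    A hypothetical hybrid MPI/OpenMP launch with 4 MPI ranks and 9 OpenMP threads per rank (4 x 9 = 36, matching the two 18-core sockets):

        export OMP_NUM_THREADS=9
        mpirun -np 4 ./xhpcg        # binary name differs for the Intel-optimized (MKL) builds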

    Figure 1: HPCG Performance on multiple grid sizes with Intel Xeon Gold 6150 processors

    As shown in Figure 1, the local grid size of 192^3 gives the best performance compared to the other local grid sizes (160^3 and 224^3). With this grid size, a single node delivers 36.14 GFLOP/s, and performance increases linearly with the number of nodes. All of these tests were carried out with 4 MPI processes and 9 OpenMP threads per MPI process.

    Figure 2: Time consumed by HPCG computational routines on Intel Xeon Gold 6150 processors

    The time spent in each routine is reported in the HPCG output file, as shown in Figure 2. As the graph shows, HPCG spends most of its time in the compute-intensive SymGS preconditioning and in the sparse matrix-vector multiplication (SPMV). The vector update phase (WAXPBY) consumes far less time than SymGS, and the residual calculation (DDOT) takes the least time of the four computation routines. Because the local grid size is the same across all multi-node runs, the time spent in the four compute kernels is approximately the same for each multi-node run.


    Figure 3: HPCG performance over multiple generations of Intel processors

    Figure 3 compares HPCG performance between Intel Broadwell-EP processors and Intel Skylake processors. The dots in the figure show the performance improvement of Intel Skylake over Broadwell-EP. For a single node, we observe ~65% better performance with the Skylake processors than with the Broadwell-EP processors, and ~67% better performance at both two nodes and four nodes.

    Conclusion

    HPCG with the Intel(R) Xeon(R) Gold 6150 processors shows ~65% higher performance than with the Intel(R) Xeon(R) CPU E5-2697 v4 processors. HPCG also scales out well, showing a linear increase in performance as the number of nodes increases.

    References

    1. http://www.hpcg-benchmark.org/index.html
    2. http://en.community.dell.com/techcenter/high-performance-computing/b/general_hpc/archive/2017/01/17/hpcg-performance-study-with-intel-knl
    3. https://software.intel.com/en-us/mkl-linux-developer-guide-overview-of-the-intel-optimized-hpcg
    4. https://www.intel.in/content/www/in/en/silicon-innovations/intel-tick-tock-model-general.html
    5. http://en.community.dell.com/techcenter/high-performance-computing/b/general_hpc/archive/2017/08/01/bios-characterization-for-hpc-with-intel-skylake-processor
    6. https://www.top500.org/project/linpack/