Dell Community

Latest Blog Posts
  • Dell TechCenter

    SPEC Cloud IaaS 2016 performance on Dell PowerEdge servers using VMware’s ESXi 6.5 and VIO 3.1

      

Dell, with the help of VMware, has achieved the top score on the SPEC Cloud™ IaaS 2016 benchmark.

These results were achieved on an FX2s chassis using Dell PowerEdge FC630 servers with E5-2698v4 processors running VMware ESXi 6.5 and VMware Integrated OpenStack 3.1, controlled by vSphere 6.5.

This hardware and software combination, deployed on a cluster of 8 data nodes, was able to create 468 VMs, with an Elasticity score of 87.4% and a Scalability score of 78.5 @ 72 Application Instances.

These results can be viewed on SPEC's website at the following link: https://www.spec.org/cloud_iaas2016/results/cloudiaas2016.html

     

     

    Figure of the Cluster Environment.

     

     

     

    List of Cluster Equipment.

     

     

    Figure of the Published OSG Cloud result.

     

     

By replacing the E5-2698v4 processors in the data nodes of the cluster with E5-2699v4 processors, we were able to improve the SPEC Cloud score by 11.7%.

The cluster achieved a Scalability score of 87.7 @ 80 Application Instances, with an Elasticity score of 88.6%, while creating 520 VMs and keeping the Provisioning Time at only 25 seconds.

This 11.7% increase was due to having 2 more cores per processor, for a total of 4 additional cores per data node and an increase of 24 cores in the cluster over the previous configuration.
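For reference, these percentages follow directly from the published scores: (87.7 - 78.5) / 78.5 ≈ 0.117, i.e. roughly 11.7% higher Scalability, and 520 / 468 ≈ 1.111, i.e. roughly 11.1% more VMs.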

     

     

Figure of the updated results with E5-2699v4 processors installed in all the data nodes.

     

     

As seen in the chart below, there was an 11.1% increase in the number of VMs created by the cluster after installing the E5-2699v4 processors in the data nodes.

     

The chart below shows an 11.1% increase in the number of Application Instances that the cluster can create.

     

     

There was an 11.7% increase in Scalability, as seen in the figure below.

Scalability is the total amount of measured work done by the application instances running in the cluster's cloud environment.

     

     

There was a 1.3% increase in Elasticity. Elasticity is the difference between the baseline and the cloud's application workload performance.

Each VM runs a baseline of the application workload before entering the cloud environment.

As seen in the chart below, there was an 81.4% decrease in Provisioning Time compared with traditional OpenStack clusters when using VMware's VIO OpenStack cluster.

The R720 and R630 in this chart are running traditional OpenStack software on physical servers.

SPEC's benchmark to measure cloud performance is SPEC Cloud™ IaaS 2016. This benchmark uses a suite of tools to measure cloud performance.

The benchmark sets up VMs as cloud instances to stress the cluster's provisioning, and then runs compute workloads on those instances to determine the Scalability and Elasticity of the cluster.

For more information on SPEC Cloud™ IaaS 2016, visit the following website: https://www.spec.org/cloud_iaas2016/

     

     

The VMware software used in this cluster is ESXi 6.5, VIO 3.1, and vSphere 6.5. ESXi 6.5 is an embedded hypervisor, which allows the creation of VMs without an underlying OS.

    VIO 3.1 is a VMware product that creates an OpenStack cluster on VMware infrastructure.

This software combination allows for the creation of an integrated OpenStack cluster on Dell PowerEdge server hardware.

Once the cluster is created, it can be controlled via VMware's vSphere software.

For information on the VMware software used for this cluster, visit http://www.vmware.com/go/vsphere and http://www.vmware.com/go/openstack.

     

     

  • vWorkspace - Blog

    What's new for vWorkspace - May 2017

Updated monthly, this publication provides you with new and recently revised information and is organized in the following categories: Documentation, Notifications, Patches, Product Life Cycle, Release, Knowledge Base Articles.

    Subscribe to the RSS (Use IE only)

     

    Knowledgebase Articles

    New 

229675 - Error: Disk deployment blocked due to HyperDeploy

    When re-provisioning computers the following error may be seen: Disk deployment blocked due to HyperDeploy by Host:  Hyper-V server name

    Created: May 23, 2017

     

     

    Revised

    149541 - Redirection not working for Web Access

    When creating a new website, the default.aspx should get edited automatically and a redirector setup to point to the new website, for example...

    Revised: May 1, 2017

     

    55554 - Cannot load pit file when launching Published application through Web Access

    When browsing to the location identified in the error message, "<path><name>.pit", the file is not present. The PortWise cache cleaner has...

    Revised: May 3, 2017

     

    73446 - How To: Improve Performance to a Virtual Desktop VDI Session

    When connecting to a VDI over a slow connection users report that the performance on the VDI is degraded, or sessions are disconnected often.

    Revised: May 5, 2017

     

    227781 - How to set the minimum memory with Hyper-V

    The Hyper-V role in Windows 2012 has an improved Dynamic memory feature that adds a property called Minimum Memory. This allows you to specify a...

    Revised: May 5, 2017

     

    70481 - HOW TO: Secure Access Certificate Configuration

    Revised: May 11, 2017

     

    183146 - Smart card logon to Web access

    When using Smartcards to authenticate, by default you still need to type in a Username and Password to log into the Web Access site before you can...

    Revised: May 19, 2017

     

    182486 - Best practices for User Profile Management (MetaProfiles)

Most issues with User Profile Management are due to the user's profile being too large as a result of too much being captured.

    Revised: May 22, 2017

     

    148502 - Saving Windows Credentials Store with Metaprofiles

    Is it possible to use Metaprofiles (User Profile Management) to save the Windows Credentials Store (for example saved Outlook passwords) so that...

    Revised: May 22, 2017

     

    147847 - After updating PNTools user is presented with a blank screen on login.

    After updating PNTools user is presented with a blank screen on login. 

    Revised: May 23, 2017

     

Product Life Cycle - vWorkspace

    Revised: May 2017

  • vWorkspace - Blog

    What's new for vWorkspace - April 2017

Updated monthly, this publication provides you with new and recently revised information and is organized in the following categories: Documentation, Notifications, Patches, Product Life Cycle, Release, Knowledge Base Articles.

    Subscribe to the RSS (Use IE only)

     

    Patches

    228613 - Mandatory Hotfix 654037 for 8.6 MR3 Mac Connector

    This mandatory hotfix addresses the following issues: Picture is pasted instead of copied text when copying from Excel 2016 to Mac version of...

    Created: April 21, 2017

     

    228607 - Mandatory Hotfix 654123 for 8.6 MR3 Linux Connector

    This mandatory hotfix addresses the following issues: It is impossible to launch any application with non-English user name if Cred SSP is...

    Created: April 21, 2017

      

    228830 - Mandatory Hotfix 654125 for 8.6 MR3 Android Connector

    This mandatory hotfix addresses the following issues: It is impossible to launch any application with non-English user name if Cred SSP is...

    Created: April 26, 2017

     

     

    Knowledgebase Articles

    New 

     

    228414 - Quest Metaprofile Server (pnmfps service) not listening on any port 5206

    After setting the Storage Servers in User Profile Management | Properties, Profiles are not stored. The other symptoms are: No file...

    Created: April 18, 2017

     

    228613 - Mandatory Hotfix 654037 for 8.6 MR3 Mac Connector

    This mandatory hotfix addresses the following issues: Picture is pasted instead of copied text when copying from Excel 2016 to Mac version of...

    Created: April 21, 2017

     

    228607 - Mandatory Hotfix 654123 for 8.6 MR3 Linux Connector

    This mandatory hotfix addresses the following issues: It is impossible to launch any application with non-English user name if Cred SSP is...

    Created: April 21, 2017

      

    228830 - Mandatory Hotfix 654125 for 8.6 MR3 Android Connector

    This mandatory hotfix addresses the following issues: It is impossible to launch any application with non-English user name if Cred SSP is...

    Created: April 26, 2017

     

    228827 - vWorkspace 8.6.x patch installers

    When trying to upgrade vWorkspace 8.6 to mr1, mr2 or mr3 running the setup in the full installer package does not appear to upgrade the version...

    Created: April 26, 2017

     

    Revised

    227781 - How to set the minimum memory with Hyper-V

    The Hyper-V role in Windows 2012 has an improved Dynamic memory feature that adds a property called Minimum Memory. This allows you to specify a...

    Revised: April 4, 2017

     

    224308 - Hyper-V host is showing offline and is unable to be initialized.

    Hyper-V host fails to initialize and is showing offline. The following message may be seen in the vWorkspace console: "Remote computer could not...

    Revised: April 14, 2017

     

    228413 - PNTools versions are detected as outdated after upgrade.

vWorkspace Monitoring and Diagnostics detected PNTools versions to be outdated after an upgrade. An alarm was triggered and you could be...

    Revised: April 21, 2017

     

    73446 - How To: Improve Performance to a Virtual Desktop VDI Session

    When connecting to a VDI over a slow connection users report that the performance on the VDI is degraded, or sessions are disconnected often.

    Revised: April 26, 2017

     

    63874 - The Remote Computer requires Network level authentication error.

    When trying to connect to a machine, an error occurs. The Remote Computer requires Network level authentication error.

    Revised: April 27, 2017

     

    228830 - Mandatory Hotfix 654125 for 8.6 MR3 Android Connector

    This mandatory hotfix addresses the following issues: It is impossible to launch any application with non-English user name if Cred SSP is...

    Revised: April 28, 2017

     

Product Life Cycle - vWorkspace

    Revised: April 2017

     

  • Dell TechCenter

    VMware ESXi upgrade fails using VMware Update Manager

This blog post is written by Sapan Jain and Krishnaprasad K from the Dell Operating System team.

VMware ESXi versions (Dell customized VMware ESXi 6.0.x) shipped out of Dell factories may exhibit a failure when users try to upgrade to a higher version using VMware Update Manager (VUM).

The failure is seen in the "Remediate" step of the upgrade process, and the error may be similar to "Cannot execute upgrade script on the host".

    This issue will not be seen if users have installed ESXi using Dell customized VMware ESXi images downloaded from support.dell.com (OR) VMware ESXi images posted at www.vmware.com/download. 

    How do I check the Dell customized version running on my system?

• Log in to the vSphere client (HTML client), where you will see a string similar to "You are running DellEMC Customized Image ESXi 6.0 Update 2 A00" (OR)
• Log in to the VMware ESXi shell console and read the file /etc/vmware/oem.xml
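For the second option, the file can be read directly from the ESXi shell; a minimal example (output varies per image):

~] cat /etc/vmware/oem.xml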

    Resolution

The problem is caused by files in the ESXi file system having upper-case names. For example, you may see that /altbootbank contains files with upper-case names.
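As a quick, read-only check on an affected host, the files can be listed from the ESXi shell (file names will vary per system):

~] ls /altbootbank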

     


     

There are no workarounds available to proceed with the upgrade when this failure is hit. There is no data loss even though the upgrade fails; you may boot back to the original ESXi configuration prior to the upgrade. The available options to recover from this failure are as follows:

     

1. Upgrade using the Dell iDRAC vMedia/CD-based method.

2. Use a scripted upgrade using PXE infrastructure.

This issue is fixed in the following versions of Dell customized VMware ESXi releases.

  • Dell TechCenter

    Introduction of wbem in VMware ESXi 6.5

This blog post is written by Elavarasan Selvaraj and Krishnaprasad K from the Dell Operating System team.

The Common Information Model (CIM) interface on an ESXi host provides a way of remotely monitoring the hardware health of your hosts via the Web-Based Enterprise Management (WBEM) protocol. It builds on a standard HTTP(S) API, allowing secure SSL/TLS-protected authentication and communication between the host and the management stations. From ESXi 6.5 onwards, VMware introduced a new namespace named 'wbem' as an extension of esxcli, which controls services such as sfcbd and wsman. From ESXi 6.5 onwards, sfcbd and wsman are disabled by default because the wbem setting is false by default.

The wbem services get enabled automatically when an OEM provider VIB is installed. For example, when Dell OpenManage systems management software is installed on ESXi 6.5, you may see that services such as sfcbd and wsman are started automatically. This blog details the options wbem provides, which can help in understanding and troubleshooting CIM-related issues.

    Wbem Service command details

This section summarizes some of the options provided by wbem which would be helpful for users relying on the CIM systems management stack within ESXi to monitor and manage their systems.

Below is what you see on a newly installed ESXi 6.5 host in terms of the service status.

    ~] esxcli system wbem get

       Authorization Model: password

       Enabled: false

       Loglevel: warning

       Port: 5989

       WSManagement Service: true

You may see that the service is disabled by default. This also means that the sfcbd and wsman services are turned off by default.

    ~] /etc/init.d/sfcbd-watchdog status

    sfcbd is not running

    ~] /etc/init.d/wsman status

    openwsmand is not running

    After enabling wbem, you may observe that sfcbd and wsman are turned on.

    ~] esxcli system wbem set -e 1

     

    ~] /etc/init.d/sfcbd-watchdog status

    sfcbd is running

     

    ~] /etc/init.d/wsman status

    openwsmand is running

     

Now, let's look into some of the options provided by wbem. Below is the help menu, which details the various command parameters.

     ~] esxcli system wbem set --help

    Usage: esxcli system wbem set [cmd options]

     Description:

      set                   This command allows the user to set up ESX CIMOM agent.

     Cmd options:

      -a|--auth=<str>       Specify how to authorize incoming requests. Values are password, certificate, password is by default. Changes take effect when --enable is

                            specified.

      -e|--enable           Start or stop the WBEM services (sfcbd, openwsmand). Values: [yes|no, true|false, 0|1]

      -l|--loglevel=<str>   Syslog logging level: debug|info|warning|error

      -p|--port=<long>      Set the TCP port on which the CIMOM listens for requests. The default is 5989

      -r|--reset            Restore the WBEM configuration to factory defaults

      -W|--ws-man           Enable or disable the WS-Management service (openwsmand). Enabled by default. Changes take effect when --enable is specified.

     

    ~] esxcli system wbem -h

    Usage: esxcli system wbem {cmd} [cmd options]

    Available Namespaces:
    provider

    Available Commands:
  get   Display WBEM Agent configuration.
  set   This command allows the user to set up ESX CIMOM agent.

There is also an extension for wbem (named 'provider') which is useful for understanding which providers are registered to sfcbd and enabled. Below is sample command output taken from an ESXi 6.5 host with the default providers. When Dell OpenManage is installed, you may see an additional entry named 'OpenManage' added to the list below.

    ~] esxcli system wbem provider list

Name              Enabled  Loaded
----------------  -------  ------
sfcb_base         true     true
vmw_base          true     true
vmw_hdr           true     true
vmw_hhrcwrapper   true     true
vmw_iodmProvider  true     true
vmw_kmodule       true     true
vmw_omc           true     true
vmw_pci           true     true
vmw_vi            true     true

The command below provides an option to enable or disable individual providers registered to sfcb.

    ~] esxcli system wbem provider set <-e | --enable> <yes|no, true | false, 0|1>

A few other useful commands related to wbem are the following.

    ~] esxcli system wbem set -a | --auth=<str>

The authentication method defines how incoming requests from the external HTTPS APIs are authenticated. Password and certificate are the two options available.
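For example, to switch to certificate-based authorization (flag names as shown in the help output above; --enable is included because the change takes effect when it is specified):

~] esxcli system wbem set --auth certificate --enable true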

    ~] esxcli system wbem set -e | --enable true|false     (sfcbd, openwsmand)

    ~] esxcli system wbem set -W | --ws-man true|false (openwsmand)

This allows the administrator to specifically enable or disable the wsman service.

~] esxcli system wbem set -l | --loglevel=<str> debug|info|warning|error

This is an important parameter for wbem and a useful option when troubleshooting CIM issues.
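For example, to raise logging verbosity while troubleshooting (a minimal sketch using the flag listed in the help output above):

~] esxcli system wbem set --loglevel debug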

~] esxcli system wbem set -p | --port=<long>

This sets the TCP port on which the CIMOM listens for requests from external HTTPS APIs. By default it is set to 5989. Use this parameter only if there is a requirement to change the default port.

    ~] esxcli system wbem set -r | --reset

    This command resets the wbem configuration to the default.  

    Dell systems management support for ESXi 6.5

DellEMC provides an in-band management tool known as OpenManage, which includes two specific solutions: Server Administrator (OMSA) and the iDRAC Service Module (iSM). Dell released OMSA version 8.5 with support for ESXi 6.5; this is a tested and documented supported version. The iSM version for ESXi 6.5 is yet to be released.

  • Dell TechCenter

    Windows Containers with Windows Server 2016

This blog was originally written by Navya SM from the OS Engineering team at Dell.

Windows containers are a concept introduced with Windows Server 2016 TP3 on both Core and GUI-based OS images. A container looks a lot like a virtual machine (VM), and is often considered a type of virtualization, but the two are distinctly different. Both host an operating system (OS), provide a local file system, and can be accessed over a network, just like a physical computer. However, a VM provides a full and independent OS, along with virtualized device drivers, memory management, and other components that add to the overhead. A container shares more of the host's resources and consequently is more lightweight, quicker to deploy, and easier to scale across data centers. In this way, the container can offer a more efficient mechanism for encapsulating an application, while providing the necessary interface to the host system, all of which leads to more effective resource usage and greater portability. More details about containers can be found at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/about_overview.

Windows Server 2016 actually offers two different types of container runtimes, each with different degrees of application isolation:

    1. Windows Containers
    2. Hyper-V Containers

Windows Containers offer isolation through namespace and process isolation, whereas Hyper-V Containers isolate each container in a lightweight VM. Windows Containers share a kernel with the container host and all the containers running on the host. In contrast, with Hyper-V Containers the kernel of the container host is not shared with the containers. The container host can run the full OS, Core, or Nano Server edition. Both types of containers can be managed using Docker.
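As an illustration, once Docker is installed (as described below), the isolation mode can be chosen per container with docker's --isolation flag; the image names here are the base images referenced later in this post:

# Windows Server container (process/namespace isolation)
docker run -it --isolation=process microsoft/windowsservercore cmd

# Hyper-V container (the container runs inside a lightweight utility VM)
docker run -it --isolation=hyperv microsoft/nanoserver cmd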

    Prerequisites:

    • The Windows container feature is only available on Windows Server 2016 (Core and with Desktop Experience), Nano Server, and Windows 10 Professional and Enterprise (Anniversary Edition).
• The Hyper-V role must be installed before running Hyper-V Containers (a sample command follows the note below).
    • Windows Server Container hosts must have Windows installed to C drive. This restriction does not apply if only Hyper-V Containers will be deployed.

Note: For the container feature to work properly, all Microsoft updates should be installed along with the OS.
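Regarding the second prerequisite, a minimal sketch of adding the Hyper-V role from an elevated PowerShell prompt on a Windows Server container host (standard cmdlet; the reboot is required before Hyper-V Containers can run):

# install the Hyper-V role and management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart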

    Windows Containers on Full OS:

1. Install either Windows Server 2016 or Windows Server Core 2016 on a physical or virtual system. Also keep the system up to date by installing all the available updates.
2. Docker commands are used to create and manage containers in Windows. The cmdlets below help in downloading and installing Docker. The DockerMsftProvider enables the container feature on the machine and also helps to install Docker.
1. Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
      2. Install-Package -Name docker -ProviderName DockerMsftProvider
3. Restart-Computer -Force

     

3. Configure the firewall on the container host for Docker, and configure Docker to listen on both TCP and the named pipe using the commands given below.

      1. netsh advfirewall firewall add rule name="docker engine" dir=in action=allow protocol=TCP localport=2375
      2. Stop-Service docker
      3. dockerd --unregister-service
      4. dockerd -H npipe:// -H 0.0.0.0:2375 --register-service
      5. Start-Service docker

4. Windows containers need a base image to be installed. Base OS images are available with both Windows Server Core and Nano Server as the underlying operating system, and can be installed using docker pull:

a. docker pull microsoft/windowsservercore, or

b. docker pull microsoft/nanoserver

     

5. The docker command set is used to manage and work with containers. To create a new container and run commands in it, use the "docker run" command; "docker ps" lists the containers that are running.

6. To run commands in an existing container, use "docker exec <container name> <cmd>".

7. To start or stop a container, use "docker start/stop <container ID>" (a short end-to-end example follows this procedure).

8. Hyper-V containers, on the other hand, need nested virtualization to be enabled (before installing the Hyper-V role), as the Hyper-V host will itself be a VM and the containers will be nested VMs on top of it. Below are the steps to prepare the host VM for Hyper-V containers:

  1. # replace with the virtual machine name
    • $vm = "<virtual-machine>"
  2. # expose virtualization extensions to the virtual processor
    • Set-VMProcessor -VMName $vm -ExposeVirtualizationExtensions $true
  3. # disable dynamic memory
    • Set-VMMemory $vm -DynamicMemoryEnabled $false
  4. # enable MAC address spoofing
    • Get-VMNetworkAdapter -VMName $vm | Set-VMNetworkAdapter -MacAddressSpoofing On

The rest of the steps remain the same on the Hyper-V container host.
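Pulling the docker commands from steps 5 through 7 together, here is a minimal sketch of the container lifecycle; the container name "demo" and the commands run inside it are placeholders chosen for illustration:

# create a container from the Server Core base image and keep it running in the background
docker run -d --name demo microsoft/windowsservercore ping -t localhost

# list running containers
docker ps

# run a command inside the existing container
docker exec demo cmd /c echo hello

# stop and start the container
docker stop demo
docker start demo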

    Windows Containers on Nano Server:

1. Install the Nano Server image with the container module; or, if a server with the Nano image is already available, install the container package onto it, or download an evaluation VHD from here and create a VM from it.
2. In order to work with Nano Server, connect to it through remote PowerShell using the 'Enter-PSSession' cmdlet (a minimal example appears after these steps).
3. All the critical updates have to be installed for the containers feature to work properly. Once the updates are installed, reboot the server for them to apply.
4. Once the server is up and running, install Docker. Docker has to be installed first to work with containers. The cmdlets below help in installing it.
• The command below installs the NuGet provider that is required for the PowerShellGet module

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

• Now, using PowerShellGet, install the latest available Docker package.

    Install-Package -Name docker -ProviderName DockerMsftProvider

• Reboot the Nano Server once before using Docker.

Restart-Computer -Force

5. The container host needs a base OS image to hold the containers. Base OS images are available with Windows Server Core and Nano Server as the underlying OS. We can pull the base OS images using the docker pull command, which fetches a ready-to-use image.

• docker pull microsoft/nanoserver, or

If Hyper-V containers are needed, also get the Server Core image using

• docker pull microsoft/windowsservercore

6. A remote system is needed to manage Docker on the Nano Server, so install the Docker client on the remote server and configure it to reach the Nano Server host.

7. Now we can create and connect to containers using the same docker commands as on the full Windows Server OS.

Containers can be listed using the "docker ps -a" command. Start a container using "docker start <container ID>", stop it using "docker stop <container ID>", and execute a command on a running container using "docker exec <container ID> <cmd>".
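As referenced in step 2, the Nano Server is reached over PowerShell remoting, and its Docker engine can then be managed from the remote machine. A minimal sketch, assuming a workgroup setup; the host address and credentials are placeholders, and the TCP listener on port 2375 is assumed to have been configured as in step 3 of the full-OS procedure:

# trust the Nano Server host and open a remote PowerShell session
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "<nano-server-ip>"
Enter-PSSession -ComputerName "<nano-server-ip>" -Credential "<nano-server-ip>\Administrator"

# from the remote management machine, point the docker client at the Nano Server's engine
docker -H tcp://<nano-server-ip>:2375 ps -a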

     

If Nano Server is installed on a VM, then the Hyper-V role has to be installed to create Hyper-V containers, with nested virtualization enabled, but the rest of the steps remain the same.

This part covers the creation of containers; to actually use the containers, network connectivity for them has to be configured as well.

  • Dell TechCenter

    TPM 2.0 configuration in SLES12 SP2 on Dell PowerEdge server

The content of this blog was originally written by Shubhrata Priyadarshinee.

This blog provides steps for TPM 2.0 enablement in the BIOS and kernel, installing the TPM 2.0 user-space utilities, and taking TPM ownership with TPM 2.0 in SLES12 SP2.

It also helps in finding solutions to the error messages below.

• tcsd service failed with TPM 2.0 under UEFI/BIOS mode.
• Failed to run tpm_takeownership, tpm_clear commands.
• Failed to run TPM 2.0 commands.

    Trusted Platform Module (TPM)

• A Trusted Platform Module, also known as a TPM, is a cryptographic coprocessor. The TCG (Trusted Computing Group) created a library specification which describes all the commands and features that a TPM implements and how it communicates with platforms such as servers.
    • TPM is used to refer to both the name of a published specification by the Trusted Computing Group for a secure crypto processor and the implementation of that specification in the form of a TPM chip. A TPM chip’s main purpose is the secure generation of cryptographic keys, the protection of those keys, and the ability to act as a hardware pseudo-random number generator. In addition, it can also provide remote attestation and sealed storage.
• TPMs are passive devices: they only receive commands and return responses, and do not initiate communication on their own.
    • TPM 1.2 PCRs (platform configuration register is a TPM register holding a hash value) were hard-coded to use the SHA-1 algorithm, whereas TPM 2.0 PCRs can use other hash algorithms.
• TPM 2.0 supports newer hash algorithms such as SHA-256, which can improve drive signing and key generation performance. A SHA-256 hash is 256 bits (32 bytes), whereas a SHA-1 hash is 20 bytes. The SHA-1 algorithm is being deprecated in favor of the stronger SHA-256. Because TPM 2.0 can extend multiple PCR banks and return multiple results, both algorithms can be used simultaneously, permitting a staged phase-out of SHA-1.
    • In TPM 2.0, there are three separate domains
      • Security – functions that protect the security of the user.
      • Privacy – functions that expose the identity of the platform/user.
      • Platform – functions that protect the integrity of the platform/firmware services.
    • Each domain has its own resources and controls
      • Security – storage hierarchy, hierarchy enable.
      • Privacy – endorsement hierarchy.
      • Platform – Platform hierarchy.
• TPM 2.0 is not fully supported in legacy BIOS mode because there is no pointer to the TCG logs in legacy BIOS mode.
• The table below shows algorithm support for TPM 2.0 and TPM 1.2.

Three things need to be done before running TPM 2.0 commands:

    1. Enable TPM 2.0 in BIOS/UEFI.
    2. Install TPM 2.0 driver and check device information in kernel.
    3. Install user space utility.

    1- Enabling TPM 2.0 in the BIOS/UEFI

Dell PowerEdge servers have the TPM 2.0 chip built onto the motherboard. However, it is not enabled by default; therefore, we need to enable the TPM in the BIOS.

To enable TPM 2.0 in the BIOS:

Press F2 while the system boots -> System Setup -> System BIOS -> System Security -> TPM Security -> turn on TPM Security if it is not already on, and enable TPM Hierarchy.

Under TPM advanced security, do the following:

• Clear the 'TPM PPI bypass clear' setting.
• 'Select algorithm' allows the user to change the cryptographic hash algorithm used by TPM 2.0. The "SHA1" hash algorithm is the default, but SHA-256 is recommended for TPM 2.0.
• Save and exit the BIOS.

                                         

Screenshot showing the TPM configuration setup page for a 13G Dell PowerEdge server.

         

Screenshot showing the TPM advanced configuration page for a 13G Dell PowerEdge server.

    2- TPM 2.0 in Kernel

• Freshly install SLES12 SP2 GM and boot into the OS.
• To check whether the kernel supports TPM 2.0 by default, execute the command below:

    #cat /boot/config-4.4.21-69.1.x86_64 | grep TPM

The output will include a line like this: CONFIG_TCG_TPM=y

• Run the command below to verify the TPM 2.0 chip:

    # cat /sys/class/tpm/tpm0/device/description

The output of the above command will look like this: TPM 2.0 Device

• TPM 2.0 uses the tpm_crb driver. Run the command below to verify it:

      # lsmod | grep  -i tpm           


      Output will look something like this.

    3- TPM 2.0 userspace packages

TPM 2.0 uses the tpm2-0-tss package, which provides an open-source TCG Software Stack (TSS) implementation, and the tpm2.0-tools package, which provides TPM 2.0 tools based on tpm2.0-tss.

TPM 2.0 does not work with the TPM 1.2 TrouSerS package and tpm-tools. So when working with TPM 2.0, install the two packages below.

    • tpm2-0-tss
    • tpm2.0-tools

Mount the SLES12 SP2 GM DVD or configure a SLES12 SP2 repository, and install both packages by running the commands below.

    #zypper install tpm2-0-tss

    #zypper install tpm2.0-tools


• To check the resourcemgr.service status, run the command below.

    #systemctl status resourcemgr.service

• If the resource manager service is not active, run the commands below to enable and start it.

    #systemctl enable resourcemgr.service

    #systemctl start resourcemgr.service


• Once the resource manager service is active, use tpm2.0-tools commands to test TPM 2.0 functionality.

    TPM 2.0 ownership

• Set the owner, endorsement, and lockout authorization passwords for the first time by running the command below.

    #tpm2_takeownership -o new -e new -l new


• Change to new authorization passwords for owner, endorsement, and lockout.

    #tpm2_takeownership -o new1 -e new1 -l new1 -O new -E new -L new

     

    References:

    https://github.com/01org/tpm2.0-tools/blob/master/manual

    https://github.com/01org/TPM2.0-TSS

    https://github.com/01org/tpm2.0-tools

    https://en.wikipedia.org/wiki/Trusted_Platform_Module

    https://link.springer.com/book/10.1007%2F978-1-4302-6584-9

  • Dell TechCenter

    TPM 2.0 and Shielded Virtual Machines

This blog post was originally written by Shubhra Rana and Vinay Patkar from the Windows Engineering Team.

Cloud security is one of the trending areas due to high adoption rates by small and large businesses alike. Security of the virtual layer is very important from the customer's perspective, as all the private data is hosted on virtual machines. This paper describes the role of the TPM 2.0 chip, in conjunction with the Hyper-V Shielded VM security feature introduced by Microsoft, in providing strong security for VMs hosted in a third-party environment.

Refer to the white paper located at Dell TechCenter, which provides useful information for users who plan to use Shielded Virtual Machines on Dell PowerEdge servers with Windows Server 2016 installed.

  • General HPC

    Dell EMC HPC System for Research - Keeping it fresh

    Dell EMC has announced an update to the PowerEdge C6320p modular server, introducing support for the Intel® Xeon Phi x200 processor with Intel Omni-Path™ fabric integration (KNL-F).  This update is a processor-only change, which means that changes to the PowerEdge C6320p motherboard were not required.  New purchases of the PowerEdge C6320p server can be configured with KNL or KNL-F processors.  For customers utilizing Omni-Path as a fabric, the KNL-F processor will improve cost and power efficiencies, as it eliminates the need to purchase and power discrete Omni-Path adapters.  Figure 1, below, illustrates the conceptual design differences between the KNL and KNL-F solutions.

Late last year, we introduced the Dell EMC PowerEdge C6320p server, which delivers a high performance processor node based on the Intel Xeon Phi processor (KNL). This exciting server delivers a compute node optimized for HPC workloads, supporting highly parallelized processes with up to 72 out-of-order cores in a compact half-width 1U package. High-speed fabric options include InfiniBand or Omni-Path, ideal for data-intensive computational applications such as life sciences and weather simulations.

    Figure 1: Functional design view of KNL and KNL-F Omni-Path support.

As seen in the figure, the integrated fabric option eliminates the dependency on dual x16 PCIe lanes on the motherboard and allows support for a denser configuration, with two QSFP connectors on a single carrier circuit board. For continued support of both processors, the PowerEdge C6320p server will retain the PCIe signals to the PCIe slots. Inserting the KNL-F processor will disable these signals and expose a connector supporting two QSFP ports carried on an optional adapter that uses the same PCIe x16 slot for power.

    Additional improvements to the PowerEdge C6320p server include support for 64GB LRDIMMs, bumping memory capacity to 384GB, and support for the LSI 2008 RAID controller via the PCIe x4 mezzanine slot.

    Current HPC solution offers from Dell EMC

    Dell EMC offers several HPC solutions optimized for customer usage and priorities.  Domain-specific HPC compute solutions from Dell EMC include the following scalable options:

    • HPC System for Life Sciences – A customizable and scalable system optimized for the needs of researchers in the biological sciences.
    • HPC System for Manufacturing – A customizable and scalable system designed and configured specifically for engineering and manufacturing solutions including design simulation, fluid dynamics, or structural analysis.
    • HPC System for Research – A highly configurable and scalable platform for supporting a broad set of HPC-related workloads and research users.

    For HPC storage needs, Dell EMC offers two high performance, scalable, and robust options:

    • Dell EMC HPC Lustre Storage - This enterprise solution handles big data and high-performance computing demands with a balanced configuration — designed for parallel input/output — and no single point of failure.
    • Dell EMC HPC NFS Storage Solution – Provides high data throughput, flexible, reliable, and hassle-free storage.

    Summary

    The Dell EMC HPC System for Research, an ideal HPC platform for IT administrators serving diverse and expanding user demands, now supports KNL-F, with its improved cost and power efficiencies, eliminating the need to purchase and power discrete Omni-Path adapters. 

    Dell EMC is the industry leader in HPC computing, and we are committed to delivering increased capabilities and performance in partnership with Intel and other technology leaders in the HPC community.   To learn more about Dell EMC HPC solutions and services, visit us online.

    http://www.dell.com/en-us/work/learn/high-performance-computing

    http://en.community.dell.com/techcenter/high-performance-computing/

    www.dellhpc.org/

  • Dell TechCenter

    What will the Dell EMC Elect be doing at Dell EMC World?

Dell EMC World 2017 has officially kicked off, and there are a number of activities going on, both on the ground and virtually.

    To keep track of the event agenda, visit http://www.dellemcworld.com/agenda.htm for more details and view live streams.

    If you want to keep track of what's going on with the Dell EMC Elect, here is the calendar of events they will be participating in.

    The podcasts can be listened to at the following links:

    itunes podcast link

    https://soundcloud.com/dellemc 

    If you are there in person or following online, you'll know where they will be:

    • Tech sElect podcast on Dell EMC Midrange with Brian Henderson
      • Dell EMC Elect Lounge Mon, May 8, 2017 at 1:00 PM

    • Dell EMC Elect exclusive private briefing on Dell PowerEdge (private event)
      • Dell EMC Elect Lounge Mon, May 8, 2017 at 1:00 PM

    • VMAX Bloggers Roundtable at DEW (private event)
• in the Dell EMC Elect Lounge, Tues, May 9, 2017, 9:00 AM

    • Tech sElect podcast with Frank Nicolo from VMAX
• in the Dell EMC Elect Lounge, Tues, May 9, 2017, 9:30 AM

    • Dell Tech sElect podcast with Jason Brown
• in the Dell EMC Elect Lounge, Tues, May 9, 2017, 5:00 PM

    • Dell EMC World:Dell EMC Elect and {code} Catalyst meetup (private event)
• in the Dell EMC Elect Lounge, Tues, May 9, 2017, 3:00 PM

    • Dell EMC World: Ice cream briefing with Dell EMC CTO John Roese (private event)
• in the Dell EMC Elect Lounge, Wed, May 10, 2017, 12:00 PM

    • Dell Tech sElect Podcast with Ian Parslow (VP of Sales - MTI)
• in the Dell EMC Elect Lounge, Wed, May 10, 2017, 4:00 PM

    • DECN Community Appreciation Event (by invitation only)
• Located at Sugar Cane, Wed, 5:00 to 6:30. Don't forget to register.