Dell Community

Latest Blog Posts
  • Dell TechCenter

    JSON Schema files for OemManager v1.0.0

    This article provides the JSON schema files for OemManager v1.0.0.

    1. OemManager.v1_0_0.json

    {
        "$schema": "/redfish/v1/Schemas/redfish-schema.v1_1_0.json",
        "title": "#OemManager.v1_0_0.OemManager",
        "$ref": "#/definitions/OemManager",
        "definitions": {
        
            "ExportFormat": {
                "type": "string",
                "enum": [
                    "XML"                
                ],
                "enumDescriptions": {
                    "XML": "The Server configuration profile format is XML for exporting."
                }
            },
            "ExportUse": {
                "type": "string",
                "enum": [
                    "Default",
                    "Clone",
                    "Replace"                
                ],
                "enumDescriptions": {
                    "Default": "The SCP-generated profile includes all aspects of the system, such as BIOS, NIC, RAID, FC, iDRAC, System and Lifecycle Controller settings.This is the default.",
                    "Clone": "If the user intends to Clone settings from one gold server to another server with identical hardware setup, this export mode can be used.",
                    "Replace": "If the user intends to retire a server from the datacenter and replace it with another or restore a server’s settings to a known baseline, this mode of export could be used."                
                }
            },
            "IncludeInExport": {
                "type": "string",
                "enum": [
                    "Default",
                    "IncludeReadOnly",
                    "IncludePasswordHashValues"                
                ],
                "enumDescriptions": {
                    "Default": "Extra information to include in the export like Default.",
                    "IncludeReadOnly": "Extra information to include in the export like Include read only.",
                    "IncludePasswordHashValues": "Extra information to include in the export like Include password hash values, Include read only and password hash values."                
                }
            },
            "ShutdownType": {
                "type": "string",
                "enum": [
                    "Graceful",
                    "Forced",
                    "NoReboot"                
                ],
                "enumDescriptions": {
                    "Graceful": "The system will Gracefully shut down before performing import operation.",
                    "Forced": "The system will forcefully shut down before performing import operation",
                    "NoReboot": "The system will shut down before performing import operation. Manual reboot is done here."                
                }
            },
            "HostPowerState": {
                "type": "string",
                "enum": [
                    "On",
                    "Off"                                
                ],
                "enumDescriptions": {
                    "On": "Host power state after performing import operation is set to On.",
                    "Off": "Host power state after performing import operation is set to Off."                                
                }
            },
            "ShareType": {
                "type": "string",
                "enum": [
                    "NFS",
                    "CIFS"                                
                ],
                "enumDescriptions": {
                    "NFS": "Network Share type is NFS for export, import or preview.",
                    "CIFS": "Network Share type is CIFS for export, import or preview."                                
                }
            },
            "Target": {
                "type": "string",
                "enum": [
                    "ALL",
                    "IDRAC",
                    "BIOS",
                    "NIC",
                    "RAID"
                ],
                "enumDescriptions": {
                    "ALL": "The SCP-generated profile includes ALL aspects of the system, such as BIOS, NIC, RAID, FC, iDRAC, System and Lifecycle Controller settings.",
                    "IDRAC": "The SCP-generated profile includes IDRAC aspects of the system.",
                    "BIOS": "The SCP-generated profile includes BIOS aspects of the system.",
                    "NIC": "The SCP-generated profile includes NIC aspects of the system.",
                    "RAID": "The SCP-generated profile includes RAID aspects of the system."
                }
            },
            "ShareParameters": {
                    "IPAddress": {
                        "type": "string",
                        "readonly": true,
                        "description": "The IP address of the target export or import server.",
                        "longDescription": "The IP address of the target export or import server."
                    },
                    "ShareName": {
                        "type": "string",
                        "readonly": true,
                        "description": "The ShareName or the directory path to the mount point.",
                        "longDescription": "The ShareName or the directory path to the mount point for NFS and CIFS, during export or import server configuration."
                    },
                    "FileName": {
                        "type": "string",
                        "readonly": true,
                        "description": "The target output file name.",
                        "longDescription": "The target output file name for export or import server configuration."
                    },
                    "ShareType": {
                        "$ref": "#/definitions/ShareType",
                        "readonly": true,
                        "description": "The ShareType specifies Type of share like  NFS, CIFS.",
                        "longDescription": "The ShareType specifies Type of share like  NFS, CIFS. If nothing is specified it is a local share type."
                    },
                    "Username": {
                        "type": "string",
                        "readonly": true,
                        "description": "User name for the target export or import server configuration.",
                        "longDescription": "User name for the target export or import server configuration in the NFS or CIFS share path."
                    },
                    "Password": {
                        "type": "string",
                        "readonly": true,
                        "description": "Password for the target export or import server configuration.",
                        "longDescription": "Password for the target export or import server configuration in the NFS or CIFS share path."
                    },
                    "Target": {
                        "$ref": "#/definitions/Target",
                        "readonly": true,
                        "description": "To identify the component for Export. It identifies the one or more FQDDs.",
                        "longDescription": "To identify the component for Export. It identifies the one or more FQDDs .Selective list of FQDDs should be given in comma separated format . Default = ALL."
                    },
            
                    "description": "Share parameters are listed.",
                    "longDescription": "Share parameters are listed in this object for accessing the NFS, CIFS share locations for Export of the configuration XML file."
            },
            
             "ExportSystemConfiguration": {
                "type": "object",
                "patternProperties": {
                    "^([a-zA-Z_][a-zA-Z0-9_]*)?@(odata|Redfish|Message|Privileges)\\.[a-zA-Z_][a-zA-Z0-9_.]+$": {
                        "type": [
                            "array",
                            "boolean",
                            "number",
                            "null",
                            "object",
                            "string"
                        ],
                        "description": "This property shall specify a valid odata or Redfish property."                    
                    }
                },
                "additionalProperties": false,            
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Friendly action name"
                    },
                    "target": {
                        "type": "string",
                        "format": "uri",
                        "description": "Link to invoke action"
                    }
                },
                "description": "This action is used to export System Configuration attributes.",
                "LongDescription": "This action shall perform an export System Configuration attributes."
                },
                "ImportSystemConfiguration": {
                "type": "object",
                "patternProperties": {
                    "^([a-zA-Z_][a-zA-Z0-9_]*)?@(odata|Redfish|Message|Privileges)\\.[a-zA-Z_][a-zA-Z0-9_.]+$": {
                        "type": [
                            "array",
                            "boolean",
                            "number",
                            "null",
                            "object",
                            "string"
                        ],
                        "description": "This property shall specify a valid odata or Redfish property."                    
                    }
                },
                "additionalProperties": false,
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Friendly action name"
                    },
                    "target": {
                        "type": "string",
                        "format": "uri",
                        "description": "Link to invoke action"
                    }
                },
                "description": "This action is used to import System Configuration attributes.",
                "LongDescription": "This action shall perform an import System Configuration attributes."
                },
                "ImportSystemConfigurationPreview": {
                "type": "object",
                "patternProperties": {
                    "^([a-zA-Z_][a-zA-Z0-9_]*)?@(odata|Redfish|Message|Privileges)\\.[a-zA-Z_][a-zA-Z0-9_.]+$": {
                        "type": [
                            "array",
                            "boolean",
                            "number",
                            "null",
                            "object",
                            "string"
                        ],
                        "description": "This property shall specify a valid odata or Redfish property."                    
                    }
                },
                "additionalProperties": false,
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Friendly action name"
                    },
                    "target": {
                        "type": "string",
                        "format": "uri",
                        "description": "Link to invoke action"
                    }
                },
                "description": "This action is used to import System Configuration Preview.",
                "LongDescription": "This action shall perform an import System Configuration Preview."
                }    
        },
        "copyright": "Copyright 2016 Dell, Inc. or its subsidiaries.  All Rights Reserved."
    }

    2. OemManager.json

    {
        "$schema": "/redfish/v1/Schemas/redfish-schema.v1_1_0.json",
        "title": "#OemManager.OemManager",
        "$ref": "#/definitions/OemManager",
        "definitions": {
            "OemManager": {
                "anyOf": [
                    {
                        "$ref": "/redfish/v1/Schemas/odata.4.0.0.json#/definitions/idRef"
                    },
                    {
                        "$ref": "/redfish/v1/Schemas/OemManager.v1_0_0.json#/definitions/OemManager"
                    }
                ]
            }
        },
        "copyright": "Copyright 2016 Dell, Inc. or its subsidiaries.  All Rights Reserved."
    }
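    To illustrate how a client might use these schemas, here is a short Python sketch that assembles a request body for the OemManager ExportSystemConfiguration action, validating the values against the enums defined above. The action URI and the share values are hypothetical examples, not taken from the schema; on a live iDRAC the actual action target is advertised by the Manager resource.

    ```python
    import json

    # Hypothetical action URI; a real target is reported by the Manager resource.
    ACTION_URI = "/redfish/v1/Managers/iDRAC.Embedded.1/Actions/Oem/OemManager.ExportSystemConfiguration"

    def build_export_payload(ip, share_name, file_name, share_type="NFS",
                             target="ALL", export_use="Default"):
        """Assemble an ExportSystemConfiguration request body following the
        enum and ShareParameters definitions in OemManager.v1_0_0.json."""
        assert share_type in ("NFS", "CIFS")                      # ShareType enum
        assert target in ("ALL", "IDRAC", "BIOS", "NIC", "RAID")  # Target enum
        assert export_use in ("Default", "Clone", "Replace")      # ExportUse enum
        return {
            "ExportFormat": "XML",   # only value in the ExportFormat enum
            "ExportUse": export_use,
            "ShareParameters": {
                "IPAddress": ip,
                "ShareName": share_name,
                "FileName": file_name,
                "ShareType": share_type,
                "Target": target,
            },
        }

    # Example values are placeholders for illustration only.
    payload = build_export_payload("192.168.0.10", "/nfs/scp", "R730_config.xml")
    print(json.dumps(payload, indent=2))
    ```

    The resulting dictionary can then be POSTed to the action URI with any HTTP client.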

  • Dell TechCenter

    Setup Desired State Configuration SMB Pull Server on Dell PowerEdge R730XD

    This blog was written by Shruthin Reddy

     

    Desired State Configuration (DSC) is a PowerShell extension that enables you to install or remove server roles, manage environment variables, and correct a configuration if it drifts from what was planned and deployed. This minimizes manual administration work: you can create a configuration blueprint and automate the configuration process by either pushing the configuration to a client or having the client pull it. This keeps the deployed configuration from drifting and maintains consistency.

     

    I’ll describe the process of getting DSC installed and configured on the Dell PowerEdge R730 MLK platform. There are many resources on the web, but the information is often outdated and scattered. I’ll consolidate all the content required to configure DSC in this blog over a few posts.

     

    I’ll walk you through setting up an SMB pull server on a Dell PowerEdge server and configuring the Local Configuration Manager (LCM) of the clients to pull the configuration from the SMB share. Note that this configuration should only be used for POC purposes and not for production: the Microsoft DSC resources I am using are in an experimental phase, denoted by an ‘x’ in front of the resource name, and Microsoft does not provide support for those resources.

     

    Pre-requisites

     

    Install two Windows VMs and join them to your domain. I used Windows Server 2016 RTM for my example. DSC needs WMF 5.0, and this link lists the operating system versions supported by WMF 5.0.

    • VM1 – Pull-Server (the server we will set up for the clients to pull the configuration from)

    • VM2 – Pull-Client1 (the client whose LCM (Local Configuration Manager) I will set to pull the configuration from the SMB share)

     

    Download the DSC PowerShell modules below from the Microsoft PowerShell repository using a Windows PowerShell console. You can find and install modules directly from a PowerShell console on your pull server using the Install-Module cmdlet. If you don’t have internet connectivity, you can download the modules elsewhere and copy them to the pull server’s C:\Program Files\WindowsPowerShell\Modules directory.

     

    To download and save the PowerShell modules, open a PowerShell console on a management workstation with internet access and run the command beside each DSC module below.

     

    1. xPSDesiredStateConfiguration (Save-Module -Name xPSDesiredStateConfiguration -Path 'C:\Program Files\WindowsPowerShell\Modules')

    2. NuGet (Save-Module -Name NuGet -Path 'C:\Program Files\WindowsPowerShell\Modules')

    3. You will need to install WMF 5.0 if you are using Server 2012 or an older OS. Here is the download link. Windows Server 2016 includes the latest Windows Management Framework (WMF) natively.

     

    Step 1: Configure the SMB Pull Server

     

    We need to set up an SMB share to store the DSC configuration files. I have configured this SMB share on the VM “Pull-Server”, but you can do it anywhere on your existing infrastructure. The directory I am creating on the VM will be the repository for all the configuration files.

     

    # 1. Create a Directory

    • New-Item -path C:\DSCSMB -ItemType Directory

     

    # 2. Share the folder over SMB to store the .MOF files and resource modules

    • New-SMBShare -Name DSCSMB -Path C:\DSCSMB -ReadAccess Everyone -FullAccess administrator -Description "SMB Share for DSC"

     

    Step 2: Create the LCM meta.MOF file with the appropriate settings

     

    The script below changes the settings of the client machine’s LCM (Local Configuration Manager) so that it pulls the configuration from the SMB pull server. The script uses the xPSDesiredStateConfiguration module.

     

    Open the PowerShell ISE as administrator and run the entire script. It creates a meta.MOF file containing the configuration details to be sent to the Local Configuration Manager (LCM) of that machine.

     

    # 1. Configure LCM for SMB pull configuration
    [DSCLocalConfigurationManager()]
    Configuration LCM_SMBPULL
    {
        param
        (
            [parameter(Mandatory=$true)]
            [String[]]$COMPUTERNAME,

            [parameter(Mandatory=$true)]
            [String]$guid
        )

        Node $COMPUTERNAME
        {
            Settings
            {
                AllowModuleOverwrite = $true
                ConfigurationMode    = 'ApplyAndAutoCorrect'
                RefreshMode          = 'Pull'
                ConfigurationID      = $guid
            }
            ConfigurationRepositoryShare DSCSMB
            {
                SourcePath = '\\Pull-Server\DSCSMB'
            }
        }
    }

    # Computer list (this is my variable)
    $COMPUTERNAME = 'pull-client1'

    # Create a GUID for the computers
    $guid = [guid]::NewGuid()

    # Create the computer meta.MOF in the C:\DSCSMB folder
    LCM_SMBPULL -ComputerName $COMPUTERNAME -Guid $guid -OutputPath C:\DSCSMB

     

    Step 3: Set the LCM for the clients

     

    In the previous step we created a meta.MOF file containing the configuration details. Now we need to send this configuration information to the client machine’s LCM.

    • Use this command to set the LCM of the client. In the command below, “$COMPUTERNAME” is the variable from the script, which is ‘Pull-Client1’.

    • Set-DscLocalConfigurationManager -ComputerName $COMPUTERNAME -Path C:\DSCSMB -Verbose

     

     

    • Use the command below to see if the LCM of Pull-Client1 is set to pull the configuration

    • Get-DscLocalConfigurationManager -CimSession $COMPUTERNAME

      

     

    Use the commands below to verify that the LCM is seeing the right SMB share path.

    • The first command gets the configuration from “$COMPUTERNAME”, which is again our variable ‘Pull-Client1’

    • The second command gets the download manager information and displays the SMB share from which Pull-Client1 is pulling the configuration

    • $x=Get-DscLocalConfigurationManager -CimSession $computername

    • $x.ConfigurationDownloadManagers

      

     

    Step 4: Write a simple configuration to be pulled

     

    We will now create the configuration .MOF file and save it to the C:\DSCSMB folder. The file will be named SMBComputers.mof, after the node name; you can change the name by editing SMBComputers in the script below.

     

    # Configure the Backup feature and create a .MOF file

    Configuration Backup
    {
        Node SMBComputers
        {
            WindowsFeature Backup
            {
                Name   = 'Windows-Server-Backup'
                Ensure = 'Present'
            }
        }
    }

    Backup -OutputPath C:\DSCSMB

     

     

    Step 5: Rename the configuration file with the GUID

     

    Rename the configuration .MOF file to the GUID so that the clients can pull the configuration from the pull server using that GUID.

     

    The command below grabs the Configuration ID from “$COMPUTERNAME”, which is our variable Pull-Client1. If we have more than one system, we only need the GUID from one computer; we will set the GUID to be the same for all the machines that will pull this configuration.

    • $guid=Get-DscLocalConfigurationManager -CimSession $COMPUTERNAME | Select-Object -ExpandProperty ConfigurationID

     

    Next, take the Configuration ID of the LCM, rename the configuration .MOF file (“SMBComputers.mof”) to that Configuration ID, and store the renamed file on the SMB share. (We need to do this because all the LCMs with that GUID will be forced to have the Backup feature as their desired state.)

     

    # 1. Specify the Source path of the configuration

    • $Source = "C:\DSCSMB\SMBComputers.mof"

     

    # 2. Specify the destination path of the configuration. In this step we replace the file name SMBComputers.mof with $guid.mof

     

    • $dest = "\\Pull-server\DSCSMB\$Guid.mof"

     

    # 3. Copy the changed configuration.MOF file to destination which is SMB Share

    • Copy-Item -Path $Source -Destination $Dest

     

    Note: If you see an error while copying the source file to the destination, check the access permissions on the SMB share and make sure the user has write access to the share.

     

    Below are the steps to grant write access:

    • To check and grant write access to the share, go to the SMB directory we created → right-click → Properties → Sharing → Advanced Sharing → Permissions → select the user and allow Full Control

    • You will then see a .MOF file named with the GUID in the folder

     

    Step 6: Generate the checksum of the configuration

     

    Generating the checksum file lightens the network traffic when DSC checks the configuration of the clients. Use this command to create the checksum file:

    • New-DscChecksum $Dest

     

    Note: If you make any changes to the configuration, you need to regenerate the checksum each time. Otherwise, the client configuration will not be updated to the desired state.
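    For background, the checksum file New-DscChecksum produces is, to my understanding, a SHA-256 hash of the MOF file’s bytes; the LCM compares it with its cached value to decide whether to download the configuration again. A rough Python sketch of the same computation (the MOF content shown is a hypothetical placeholder):

    ```python
    import hashlib

    def dsc_checksum(mof_bytes: bytes) -> str:
        """Return an uppercase SHA-256 hex digest of the file contents,
        the kind of value found in a DSC .mof.checksum file."""
        return hashlib.sha256(mof_bytes).hexdigest().upper()

    # Hypothetical MOF payload, for illustration only
    print(dsc_checksum(b"instance of MSFT_RoleResource ..."))
    ```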

     

     

    Step 7: Test the configuration

     

    • Get-WindowsFeature -ComputerName $COMPUTERNAME -Name *Backup* # shouldn’t be installed yet

    • Update-DscConfiguration -ComputerName $COMPUTERNAME -wait #Check to see if it installs

    • Get-WindowsFeature -ComputerName $COMPUTERNAME -Name *Backup* # should have installed by now

    • Test-DscConfiguration -cimsession pull-client1 # Test to see if the computer is in desired state

     

    Here is the link to my blog where I have explained how to configure the HTTP Pull server.

     

  • Dell TechCenter

    Premium User Experience and Transformed Mobile Network Economics

    It is expected that by 2020, 90 percent of the world’s population over six years of age will have a mobile device (1), and there will be over 26 billion mobile devices (2). All of these devices will drive exponential growth in mobile data traffic over the next several years: by 2019, mobile traffic is expected to reach 24.3 exabytes per month, growing at a compound annual growth rate of 57% (3).
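    As a back-of-the-envelope check on those figures, the sketch below projects traffic under a 57% compound annual growth rate. The 2015 base of roughly 3.7 EB/month is an assumed figure for illustration, not a number from this post:

    ```python
    def project_cagr(base: float, rate: float, years: int) -> float:
        """Project a value forward under a compound annual growth rate."""
        return base * (1 + rate) ** years

    # Starting from an assumed ~3.7 EB/month in 2015, four years of 57% CAGR
    # lands in the same ballpark as the 24.3 EB/month cited for 2019.
    print(round(project_cagr(3.7, 0.57, 4), 1))  # → 22.5
    ```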

     

    To support such data growth, Mobile Network Operators (MNOs) need to continuously build and maintain larger-capacity infrastructures and backhauls on decreasing revenue. In many cases, MNOs are effectively turning into bit-pipes for Over-the-Top (OTT) apps.

     

    As a result, more and more MNOs globally are finding their business model challenged: data is growing at an exponential rate while voice and messaging revenues are diminishing. MNOs therefore need to optimize, scale, and open up new monetization opportunities. Fortunately, they can accomplish that with Mobile-Edge Computing.

     

    What is Mobile-Edge Computing (MEC)?

     

    Mobile-edge Computing offers an IT service environment and cloud computing capabilities within the Radio Access Network (RAN), in close proximity to mobile subscribers.

    The RAN edge is a service environment with ultra-low latency and high-bandwidth, proximity and it provides exposure to real-time radio network and context information. Mobile-edge Computing allows newer value added apps and services to be delivered and current content, services and applications to be accelerated, increasing responsiveness from the edge.

     

    Mobile-Edge Computing (MEC) provides several business benefits:

    • MEC enables a new value chain and an energized ecosystem based on innovation and business value, which can benefit mobile operators, application developers, content providers, OTT players, network equipment vendors, and IT and middleware providers.
    • MEC provides flexibility and agility, as operators can open their Radio Access Network (RAN) edge to authorized third parties, allowing them to flexibly and rapidly deploy innovative applications and services.
    • MEC opens new revenue opportunities within new market segments, as MNOs can deliver innovative applications and services to mobile subscribers, enterprises, and vertical segments.

    The technical standards for MEC are being developed by the European Telecommunications Standards Institute (ETSI). MEC enables implementation of mobile edge applications as software-only entities or VMs that run on a Network Functions Virtualization Infrastructure (NFVI) layer.

    Mobile Edge Computing translates local context, agility, rapid response time and speed into value. It delivers several key use cases in the following areas:

    • Consumer Oriented Services - gaming, AR/VR, remote desktop applications, stadium or retail real-time services, etc.
    • Internet of Things (IoT) / Machine-to-Machine (M2M) Services
    • Enterprise Services - local breakout
    • Operator/Third-party Services - active device location tracking, Big data and analytics, connected vehicles, etc.
    • Network-performance Services - content/DNS caching, performance optimization, video optimization and acceleration

    Summary:

    MNOs are finding their business models challenged as they have to continuously expand their network footprint to support ever-increasing data requirements from mobile and IoT devices.

    Mobile Edge Computing can help these MNOs by optimizing their networks and significantly improving network economics with new value-added apps and services such as content caching, CDN, IoT, smart cities, connected cars, and enterprise services.

    This was a quick introduction to Mobile-Edge Computing. In the next blog, we will discuss some of these key Mobile-Edge Computing use cases in more detail. Watch this space as the Dell EMC Service Provider team delivers new and exciting solutions for Mobile-Edge Computing. To learn more, visit www.dell.com/telecom.

     

     

    References:

    (1) Ericsson Mobility Report, June 2015.

    (2) Gartner Supply Chain Executive Conference, May 2014. 

    (3) https://www.statista.com/statistics/271405/global-mobile-data-traffic-forecast/

  • Dell Cloud Blog

    Dell EMC Continues to Lead at OpenStack Summit

     

    OpenStack Summit Barcelona is in full swing this week.  Barcelona, of course, is the capital city of Catalonia, which is an autonomous community within Spain. Perfectly fitting for OpenStack, which is an autonomous community in its own right.

    Since the early days of OpenStack, Dell and EMC (and now Dell EMC) have had strong representation at OpenStack Summit, and this week is no exception.

     

    Showcasing OpenStack Solutions

    First off, at the Dell EMC booth, we are showcasing several key elements of what we affectionately, and descriptively, call our build-to-buy continuum for OpenStack cloud solutions.

    At the “build” end of the spectrum, the Dell EMC Red Hat OpenStack Cloud Solution is a co-engineered and fully validated reference architecture that is jointly designed, tested, and supported by Dell EMC and Red Hat.  While it can be described as a reference architecture, it is much more than that, as it is a bundled solution that can be ordered, delivered, and deployed as a complete and validated solution based on purpose-selected Dell EMC hardware components and Red Hat Enterprise Linux and OpenStack Platform software components.  Yet as a reference architecture, it has the inherent flexibility to adapt and expand based on customer needs.  You can learn more about this solution in the OpenStack community area of Dell TechCenter and at Dell.com/OpenStack.

    The latest version, just announced, is the Dell EMC Red Hat OpenStack Cloud Solution Version 6, which is based on the OpenStack Mitaka release and Red Hat OpenStack Platform 9 (OSP 9). Read more about this release on the recent Dell4Enterprise blog post entitled Delivering Enterprise Capable OpenStack Clouds. This release includes important extensions for Red Hat OpenShift to enable Platform as a Service (PaaS), including support for Docker containers and Kubernetes orchestration, and for CloudForms to enable unified multi-cloud management and automation.

    Now at the “buy” end of the spectrum, the VxRack System 1000 with Neutrino Nodes (or VxRack Neutrino for short) is a turnkey, rack-scale, engineered system for Infrastructure as a Service (IaaS), purpose built for cloud native applications. VxRack Neutrino is a pre-configured, hyper-converged system with enterprise-grade management and reporting software that can be deployed as a stable and reliable OpenStack environment within hours.  And VxRack Neutrino can be optionally configured with Dell EMC Native Hybrid Cloud for a Pivotal Cloud Foundry implementation for cloud-native application development.

    Both of these OpenStack solutions from Dell EMC are enterprise-grade systems that represent the best characteristics of reliability and scalability and provide a wide range of choice from the highly flexible to the fully turn-key.

     

    OpenStack Summit Technical Sessions

    If all that is not enough, Dell EMC has a strong presence in the OpenStack Summit technical sessions.  Here is a sample of the sessions that we are leading or in which we are participating:

    The Murano application catalog helps OpenStack cloud admins to publish a well-tested set of on-demand and self-service applications for end users to easily access and deploy. This session demonstrates a simple end-to-end process for getting Murano running on your dev environment or your laptop by running Murano in Docker containers. This talk is led by Magdy Salem, Lida He, and Julio Colon of Dell EMC.

    This talk, led by David Paterson of Dell EMC and Daniel Mellado of Red Hat, explains how to automate the validation testing of an OpenStack deployment, by outlining and demonstrating the entire process for automating a detailed test scenario.

    In this session, led by Arkady Kanevsky and Nicholas Wakou of Dell EMC, learn how Hadoop can be deployed on an OpenStack cloud using the OpenStack Sahara project, and how OpenStack can be optimized so that the performance of a Big Data workload on the cloud matches that of a bare-metal configuration.

    During the OpenStack Newton cycle, the Cinder team made progress in numerous areas for block storage.  In this session, we will provide an update on what was accomplished in Newton and also discuss what may be coming in the Ocata release.  This talk is led by Sean McGinnis and Xing Yang of Dell EMC.

    Based on leading-edge standards work and methodologies developed by Dell EMC and Red Hat, this session will explain how to use industry-standard techniques to measure and evaluate the performance of your OpenStack cloud with real-world workloads. This talk is led by Nicholas Wakou of Dell EMC and Alex Krzos of Red Hat.

    This innovative session will explore how TensorFlow, a new open source platform for artificial intelligence by the Google Brain team, can be used with OpenStack Sahara for data processing.  This talk is led by Lin Peng and Accela Zhao of Dell EMC.

    This session looks at Neutron networking challenges and effective troubleshooting tips for common issues such as connectivity issues among instances, connecting to instances on the associated floating IP, SSH working only part of the time, and application payload transfer issues. This talk is led by Mohammad Itani, Lida He, and Diego Casati of Dell EMC.

    The Rally project gives OpenStack developers and operators relevant and repeatable benchmarking data to determine how their clouds operate at scale. In this session, learn how to deploy Rally, run benchmark scenarios, and analyze the results of those tests. This talk is led by Magdy Salem, Javier Soriano, and Mohammad Itani of Dell EMC.

    This is a panel discussion on the pros and cons of various OpenStack consumption models, led by IDC Research Director Ashish Nadkarni, and featuring VS Joshi of Dell EMC and representatives from Mirantis and Canonical.

     

    Wow, did we say strong representation?  If you miss these sessions or are not fortunate enough to go to Barcelona, you can view all of the sessions at openstack.org/videos.

       

  • Dell TechCenter

    Dell's Customization of VMware ESXi and Its Advantages

    You may have heard about Dell Customized VMware ESXi images. For those who don’t know what is actually added in the Dell customized images, here is an overview:

    Previously, Dell’s customization of the VMware ESXi image included adding new or updated VMware IOVP-certified drivers relevant to Dell hardware, as well as specific third-party CIM providers from IHVs, into the VMware ESXi image. Now, we no longer add any Dell CIM providers or third-party CIM providers to the Dell customized VMware ESXi image. Dell also picks up the latest patches published by VMware for the relevant branches, along with the async drivers published on the VMware download page.

    The Dell customized ESXi images are certified by Dell through VMware’s program for OEM customization. In addition to the drivers and providers, Dell’s technical support information is updated via the OEM customization process. Currently, the Dell-customized VMware ESXi Embedded ISO images for ESXi 4.x, ESXi 5.x and ESXi 6.x do not include Dell OpenManage Server Administrator.

    NOTE: CIM provider integration is discontinued starting with upcoming Dell Customized VMware ESXi images.

    • For detailed instructions on installing and using Dell OpenManage Server Administrator, refer to the ‘Software -> Systems Management’ section of the Dell OpenManage documentation.
    • For the specific driver and provider versions integrated into each Dell customized image, refer to ‘Dell Customization of the ESXi image’ on the Dell support website.
    • To download the Dell customized VMware ESXi 4.x, 5.x and 6.x images, go to http://support.dell.com, select ‘Support for Enterprise IT’, and go to ‘Drivers and Downloads’. Enter a Service Tag or select ‘Choose from a list of all Dell products’, then choose ‘Servers, Storage, & Networking’ -> ‘PowerEdge’ and select the system matching your configuration. Set the ‘Operating System’ to ESXi 4.x, 5.x or 6.x as required, then click the ‘Enterprise Solutions’ sub-tab; the ISO image is listed there.

    The main advantage of using the Dell customized image is that it carries updated drivers as part of the installation CD. However, customers can further update driver versions based on their requirements and on availability at the VMware website.

  • Dell TechCenter

    New Server Resources to the Rescue!


    Q: What one thing do all super heroes have in common?
    A: They’re all able to accomplish impossible feats for the greater good of humanity.

    Here at Dell, we may not have the ability to fly, scale walls or run around in spandex (I think we can all appreciate that). What we do have are solutions that can help fight the TCO Terror, Diabolical Deployment Mastermind, or even the dreaded Dark Lord Density. 

    Say hello to a resource that can help fight those villains.

    Dell Tech Center… the Technical Superhero’s Dream

    The Dell Tech Center has been home to Dell’s wide portfolio and has been serving technical users for years. From forums, to wikis, deployment guides, to best practice videos, the Tech Center is the weapon every technical super hero needs.

    Super Charge the Data Center

    Looking to learn more about virtualizing an environment, accelerating storage, increasing capacity or enhancing performance? We’ve got all that and more!

    The PowerEdge server team has redesigned and relaunched the server pages to address technical users’ needs. With the entire PowerEdge portfolio broken down by form factor, commodity, or systems management, it’s easy to navigate the site and find relevant assets. In addition, we’ve added an icon system so that finding relevant assets is quick and painless.

    What’s in it for me?

    Great question! We’ve designed the tech center server pages with technical users in mind. We’ve made it so that all of our resources are in one place. There are spec sheets, wikis, forums, and more. Not to mention all of our PowerEdge 13G technical whitepapers.

    So go check us out and bookmark the site! Fight data center villains with weapons that are up to the task.

  • Dell Big Data - Blog

    Building End-to-End Hadoop Solutions

    By Mike King

    Description:  Here are some key considerations for creating a full-featured Hadoop environment—from data acquisition to analysis.

    The data lake concept in Hadoop has a great deal of appeal.  It’s a relatively low-cost, scale-out landing zone for many diverse types of data.  Actually, I’ve yet to see a type of data one couldn’t put in Hadoop.  Although most accessible data is highly structured, data can also be semi-structured or multi-structured.  (To my way of thinking, technically there is no “unstructured” data, but that is a subject for another post.) 

    Data may be found internally or externally, and some of the best data is actually purchased from third-party providers that create data products.  Don’t ignore this “for-fee” data, as it may allow you to connect the pieces in ways you couldn’t otherwise.  In many cases the upfront cost pales in comparison to the opportunity cost.  (Hey, that’s “Yet Another Idea for a Cool Blog Post” [YAIFACBP].)

    Perhaps one of the richest parts of the Hadoop ecosystem is the ingest layer.  In short, it’s how you get data from a source to a sink; the sink is where your data lands in Hadoop or in a data lake.  Options for moving data include Sqoop, Flume, Kafka, Boomi, SSIS, Java, Spark Streaming, Storm, Syncsort, SharePlex, Talend and dozens of others.

    While the ingestion is important, if all you do is fill your data lake you have failed.  There are several different aspects you should strongly consider for your data lake.  These include data quality, data governance, metadata, findability, organization, ILM, knowledge management and analytics.  How one accounts for each of these points regarding data tooling is of little importance, as there are many ways to skin the cat and my way may not be best for you.  Let’s examine each of them in order.

    Data quality can be thought of as cleaning data.  The old adage of “garbage in, garbage out” (GIGO) aptly applies.  Data quality dimensions include accuracy, completeness, conformance, consistency, duplication, integrity, timeliness and value.  When cleaning data, start with a few simple measures based on the dimensions that matter most to you.  Keys (primary, alternate, natural or surrogate) and identifiers (IDs) tend to be the most important attributes when considering data quality.  These keys and IDs are also how we access our data.  Think about the impact to your business when keys or IDs are incorrect.  Checking items against one or more metrics, standards, rules or validations will allow you to avoid such problems and remediate those that do occur via a closed-loop process.
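
    The completeness and duplication checks described here can be sketched in a few lines of Python. This is a minimal illustration; the record layout and key name are hypothetical, not taken from any specific tool:

```python
# Minimal data-quality checks: completeness and duplication on a key.
# Illustration only; the record layout and key name are hypothetical.

def check_keys(records, key="id"):
    """Return records failing completeness or uniqueness checks on `key`."""
    missing = [r for r in records if r.get(key) in (None, "")]
    seen, duplicates = set(), []
    for r in records:
        k = r.get(key)
        if k is not None and k in seen:
            duplicates.append(r)
        seen.add(k)
    return {"missing_key": missing, "duplicate_key": duplicates}

rows = [{"id": 1}, {"id": 1}, {"id": None}, {"id": 2}]
report = check_keys(rows)
print(len(report["missing_key"]), len(report["duplicate_key"]))  # 1 1
```

    Feeding each flagged record into a remediation queue is one way to implement the closed-loop process mentioned above.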

    Data governance involves the control, access and management of your data assets.  Each business must outline and define its own process.  As Albert Einstein once said, “Make things as simple as possible, but not simpler.”  I’d advise “data lakers” to start small and simple even if it’s only a spreadsheet that includes pre-approved users, sources, syncs and access controls maintained by your Hadoop administrator.  Data governance is an imperative for every data solution implementation.

    Metadata is “data about data” and is often misunderstood.  As a quick definition, if one considers a standard table in Oracle, the column names and table names are the metadata; the data values in the rows are the data.  The same applies to Hive.  There are times when metadata is embedded in the payload, as with XML or JSON.  (A payload is simply all the data contained in a given transaction.)  A good practice when implementing a big data solution is to collect the disparate metadata in one place to enhance or enable management, governance, findability and more.  The most common way to collect the disparate metadata is with a set of tables in your RDBMS.
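
    The “set of tables in your RDBMS” idea can be sketched with SQLite standing in for a production RDBMS. The table and column names here are hypothetical:

```python
# A minimal metadata registry: one table describing datasets and fields.
# SQLite stands in for a production RDBMS; names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dataset_metadata (
        dataset TEXT,
        field   TEXT,
        dtype   TEXT,
        source  TEXT
    )
""")
conn.executemany(
    "INSERT INTO dataset_metadata VALUES (?, ?, ?, ?)",
    [("clickstream", "user_id", "string",    "web_logs"),
     ("clickstream", "ts",      "timestamp", "web_logs")],
)

# Findability query: which datasets carry a user_id field?
for row in conn.execute(
        "SELECT DISTINCT dataset FROM dataset_metadata WHERE field = 'user_id'"):
    print(row[0])  # clickstream
```

    The same registry can then back governance checks and search indexing, which is why collecting it in one place pays off.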

    Findability is generally implemented with search.  In Hadoop this is typically either Solr or Elasticsearch.  Elasticsearch is one of the newer additions to the Hadoop ecosystem and is far easier to learn and configure, although either will work.  Note that the search function is an imperative in any big data solution.

    The next key is organization, and although this may sound a bit trite, it is a necessity.  Developing a simple taxonomy and the rules on how you create and name your directories in HDFS is a great example of organization.  Create and publish your rules for all to see.  Note that those who skip this step will have unnecessary duplication, unneeded sprawl, lack of reuse and a myriad of other problems.
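
    As an example of such naming rules, a small helper can enforce a zone/source/dataset/date taxonomy for HDFS directories. The scheme shown is one common convention, not a standard, and the zone names are hypothetical:

```python
# A sketch of a simple, published directory taxonomy for HDFS landing zones.
# The zone/source/dataset/date scheme is one common convention, not a standard.
from datetime import date

def hdfs_path(zone: str, source: str, dataset: str, d: date) -> str:
    """Build a predictable HDFS directory from the naming rules."""
    allowed = {"raw", "cleansed", "curated"}
    if zone not in allowed:
        raise ValueError(f"zone must be one of {sorted(allowed)}")
    return f"/data/{zone}/{source}/{dataset}/{d:%Y/%m/%d}"

print(hdfs_path("raw", "crm", "accounts", date(2016, 9, 1)))
# /data/raw/crm/accounts/2016/09/01
```

    Rejecting paths outside the published taxonomy is what prevents the sprawl and duplication described above.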

    Information Lifecycle Management (ILM) is a continuum of what one does with data as it changes state, most typically over time.  Think of data as something that is created, cleansed, enhanced, matched, analyzed, consumed and eventually deleted.  As data ages, its value and usage decline.  With ILM, one might store data in memory until it is cleansed and cataloged, then in a NoSQL database like Cassandra for 90 days, and finally compressed and stored in HDFS for two or more years before it is deleted.
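
    That tiering example can be expressed as a simple policy function. The thresholds mirror the example in the text, and the tier names are hypothetical:

```python
# A sketch of the ILM policy described above: in memory until cataloged,
# NoSQL for 90 days, compressed HDFS for two years, then delete.
# Thresholds mirror the example in the text; tier names are hypothetical.

def storage_tier(age_days: int, cataloged: bool) -> str:
    if not cataloged:
        return "memory"
    if age_days <= 90:
        return "nosql"            # e.g. Cassandra
    if age_days <= 2 * 365:
        return "hdfs-compressed"
    return "delete"

print(storage_tier(10, cataloged=False))   # memory
print(storage_tier(30, cataloged=True))    # nosql
print(storage_tier(400, cataloged=True))   # hdfs-compressed
```

    A nightly job can apply this function per dataset to decide what to migrate or purge.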

    Knowledge management is simply how one manages the knowledge garnered from data.  All too often one might ask, “If we only knew what we know…”  Learning has great value to the individual.  In a company or organization, we should leverage knowledge so that its value multiplies.  Sharing knowledge makes others smarter and safer, and therefore more productive.  In Hadoop, how do your users learn about components like Hive, Sqoop and Pig?  How do they share their tips with others?  There are many more questions to ask, and a wiki allows for secure knowledge sharing and management.

    The next aspect in a big data solution is arriving at the stage where we can begin to analyze the data.  When we arrive at this step, we begin to build the insights into the data that allow users to see the fruits of their labors.  The consumers of our data lake, like analysts and data scientists, should by now have matured to using the data to build the business, protect the business and understand their customers in a more comprehensive manner.  Ultimately, data-driven insights allow users to be more productive, make better decisions and get better results.

    Mike King is a Dell EMC Enterprise Technologist specializing in Big Data and Analytics 

  • vWorkspace - Blog

    What's new for vWorkspace - September 2016

    Updated monthly, this publication provides you with new and recently revised information, organized into the following categories: Documentation, Notifications, Patches, Product Life Cycle, Release, Knowledge Base Articles.

     Subscribe to the RSS (Use IE only)

     

    Patches

     None at this time

     

     Downloads

    Product Release Notification – vWorkspace 8.6.2

    Type: Patch Release Created: September 2016

     

       

    Knowledgebase Articles

    New 

     

    212395 - Drive redirection in the Windows Connector is slow

    When using drive redirection in the Windows connector it may be slow to browse the user's local drives from the remote session.

    Created: September 14, 2016

     

    212629 - Universal Printers are not created in the remote session

    When using the Universal printer (UP) redirection, local printers are not created in the remote session. When enabling logging in the UP applet...

    Created: September 20, 2016

     

     Revised

    86259 - Provisioning a new VM fails with the ERROR”Action = unknown action, The maximum message size quo

    Revised: September 9, 2016

     

     70481 - HOW TO: Secure Access Certificate Configuration

    Revised: September 20, 2016

     

     88999 - Where can I find the vWorkspace PowerShell module and information on the supported features?

    Where can I find the vWorkspace PowerShell module and information on the supported features?

    Revised: September 23, 2016

     

     131760 - How to enforce two-factor authentication for all External clients but not for internal clients

    Is it possible to enforce two-factor authentication (2FA) for ALL Internet clients - regardless of if they connect via the external Web Interface...

    Revised: September 27, 2016

     

     73446 - How To: Improve Performance to a Virtual Desktop VDI Session

    When connecting to a VDI over a slow connection, users report that the performance on the VDI is degraded, or sessions are disconnected often.

    Revised: September 29, 2016

     

    Product Life Cycle

    Product Life Cycle - vWorkspace

    Revised: September 28, 2016

       

  • Dell TechCenter

    Memory performance on the PE FC630 and how CPU and Memory configurations affect it.

    This blog illustrates the performance impact of processor and memory selection on the PowerEdge FC630. First, closely examine the table below; you will notice that with higher-TDP processors, the heatsinks cover portions of memory channels 2 and 3 on each installed processor.

    This reduction in available memory slots is normally not a problem for total memory configurations of less than 256GB, as long as only memory channel 1 (i.e. the sockets with white release tabs) is populated.

    But as you can see in the following figure, when the 104mm heat sink is installed, four memory slots on each processor socket are physically blocked. These particular DIMM sockets span memory channels 2 and 3 (i.e. the black and green release tab memory slots).

    If any memory slots besides the white slots (A1-4 & B1-4) are then populated, an unbalanced memory configuration will occur, resulting in a memory clock rate reduction and a dramatic drop in overall bandwidth performance. See the following blog for comparative measurements:

    http://en.community.dell.com/techcenter/b/techcenter/archive/2016/09/23/13g-poweredge-server-performance-sensitivity-to-memory-configuration

    The picture below better illustrates how the heat sink blocks the memory slots in the PE FC630:


    Since the 104mm heatsinks cover four of the black and green memory slots on both processors, placing memory in the black and green memory slots can cause memory performance problems. The only way to avoid this on higher-end processors with TDPs of 135 watts and above is to use only the white release tab memory slots in the PE FC630; this may require the use of 64GB DIMMs for some memory configurations.

    An unbalanced memory configuration can be caused in two ways: not enough DIMMs in a channel, or too many channels being activated. The first occurs when there are not enough DIMMs populated in the channel. When the black release tab memory slots (i.e. memory channel 2) on both processors are used, the memory configuration will be unbalanced because two DIMMs are missing from channel 2 on both processors; this generally cuts memory performance by around 73%.

    Three DIMMs per channel occurs when memory channels 2 and 3 are used on both processors. This drops the memory bus speed from 2400 MHz to 1866 MHz on an E5-26xx v4, and from 2133 MHz to 1600 MHz on an E5-26xx v3 in some cases. The memory bus speeds quoted here can differ depending on the ranking of the memory; see the table below to better understand how memory ranking choices can affect memory speed.


    The last thing that can affect memory bus speed and performance is the maximum speed at which the memory controller in the processor can run; see the table below for more details. The E5-2600 memory controllers span a wide range: at the bottom, the low-end E5-2600 v3 processors start at 1600 MHz, and at the top, the high-end E5-2600 v4 processors go up to 2400 MHz.

    As an example, suppose a PE FC630 has 2400 MHz memory installed in memory channel 1, but the system has E5-2609 v4 processors. The 2400 MHz memory will only run at 1866 MHz in this configuration, and will fall to 1600 MHz if you install memory in all three memory channels. Consult the table below for the maximum memory bus speed of the processor you are considering or that is currently installed in your system.

    The table below shows how adding more memory can cause the memory bus speed to drop. For instance, suppose you have sixteen 8GB 2400 MHz RDIMMs in the FC630 for a total of 128GB and you need another 64GB of system memory. If you add eight more 8GB 2400 MHz RDIMMs in the green release tab slots (i.e. memory channel 3) of the PE FC630, the memory speed will drop from 2400 MHz to 1866 MHz, causing around a 23% decrease in memory performance. If your application is memory sensitive, it would have been better to replace the DIMMs in memory channel 1 (i.e. the sockets with white release tabs) with 16GB RDIMMs instead of adding the 8GB RDIMMs to memory channel 3 (i.e. the sockets with green release tabs).
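
    As a rough sanity check of these figures (an illustration, not a Dell-published formula; measured bandwidth also depends on channel population and interleaving), the clock-rate penalty alone can be computed:

```python
# Rough estimate of the clock-rate penalties quoted above.
# Illustration only: real-world bandwidth does not scale perfectly
# with clock, so measured drops will differ from the clock ratio.

def speed_drop_pct(before_mhz: float, after_mhz: float) -> float:
    """Percentage reduction in memory clock rate."""
    return (before_mhz - after_mhz) / before_mhz * 100

# E5-26xx v4: 2400 MHz -> 1866 MHz when a third bank is populated
print(f"v4 drop: {speed_drop_pct(2400, 1866):.1f}%")   # about 22%
# E5-26xx v3: 2133 MHz -> 1600 MHz
print(f"v3 drop: {speed_drop_pct(2133, 1600):.1f}%")   # about 25%
```

    The roughly 22% clock reduction is in line with the "around a 23% decrease" in memory performance cited above.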


     

    In summary, the memory controller in the E5-26xx processor supports many DIMM configurations, but to maximize memory performance on the PowerEdge FC630, you must take all of these limitations into account. To better understand the memory configurations available on the PowerEdge FC630, please consult the User’s Guide at http://www.dell.com/support/home/us/en/19/product-support/product/poweredge-fc630/

  • vWorkspace - Blog

    Product Release: Wyse vWorkspace 8.6.2

    vWorkspace 8.6.2 is a minor release, with enhanced features and functionality.

    New features:

    • Official support for Windows 10 as a VDI host
    • Windows 10 Remote Desktop display performance improvements (AVC 444 mode) for Mac, iOS, Android and Linux connectors
    • Wyse WSM OS streaming support for Windows 10
    • Monitoring and Diagnostics platform updates and stability enhancements
    • Network Level Authentication (NLA) support for Mac, iOS, Android, Linux and Chrome connectors
    • vWorkspace Password Management support for Mac, iOS, Android and Linux connectors
    • Web Access security improvements
    • VMware standard clone improvements
    • Linux VDE improvements
    • USB 3.0 support
    • 6 hotfixes bundled
    • Additional bug fixes

    Wyse vWorkspace 8.6.2 is available as a Patch Installer and a Full Installer version.

    Customers with active maintenance may download the latest version from here: https://support.software.dell.com/vworkspace/download-new-releases

    The Product information for Wyse vWorkspace is available here: https://support.software.dell.com/vworkspace/release-notes-guides