Dell Community

Blog | Group | Posts
Application Performance Monitoring Blog | Foglight APM | 105
Blueprint for HPC - Blog | Blueprint for High Performance Computing | 0
Custom Solutions Engineering Blog | Custom Solutions Engineering | 8
Data Security | Data Security | 8
Dell Big Data - Blog | Dell Big Data | 68
Dell Cloud Blog | Cloud | 42
Dell Cloud OpenStack Solutions - Blog | Dell Cloud OpenStack Solutions | 0
Dell Lifecycle Controller Integration for SCVMM - Blog | Dell Lifecycle Controller Integration for SCVMM | 0
Dell Premier - Blog | Dell Premier | 3
Dell TechCenter | TechCenter | 1,858
Desktop Authority | Desktop Authority | 25
Featured Content - Blog | Featured Content | 0
Foglight for Databases | Foglight for Databases | 35
Foglight for Virtualization and Storage Management | Virtualization Infrastructure Management | 256
General HPC | High Performance Computing | 227
High Performance Computing - Blog | High Performance Computing | 35
Hotfixes | vWorkspace | 66
HPC Community Blogs | High Performance Computing | 27
HPC GPU Computing | High Performance Computing | 18
HPC Power and Cooling | High Performance Computing | 4
HPC Storage and File Systems | High Performance Computing | 21
Information Management (Welcome to the Dell Software Information Management blog! Our top experts discuss big data, predictive analytics, database management, data replication, and more.) | Information Management | 229
KACE Blog | KACE | 143
Life Sciences | High Performance Computing | 9
OMIMSSC - Blogs | OMIMSSC | 0
On Demand Services | Dell On-Demand | 3
Open Networking: The Whale that swallowed SDN | TechCenter | 0
Product Releases | vWorkspace | 13
Security - Blog | Security | 3
SharePoint for All | SharePoint for All | 388
Statistica | Statistica | 24
Systems Developed by and for Developers | Dell Big Data | 1
TechCenter News | TechCenter Extras | 47
The NFV Cloud Community Blog | The NFV Cloud Community | 0
Thought Leadership | Service Provider Solutions | 0
vWorkspace - Blog | vWorkspace | 511
Windows 10 IoT Enterprise (WIE10) - Blog (Wyse Thin Clients running Windows 10 IoT Enterprise) | Windows 10 IoT Enterprise (WIE10) | 4
Latest Blog Posts
  • Dell TechCenter

Setting up a DSC (Desired State Configuration) HTTP Pull Server to deploy Hyper-V and a Failover Cluster

In my previous blog I spoke about configuring an SMB pull server on a Dell PowerEdge R730XD server. In this blog I will cover:

    1. Configuring an HTTP Pull Server.

2. Setting up a configuration blueprint to install Hyper-V, enable the Failover Clustering feature, and configure the internal virtual switch on the Hyper-V host.

Environment and prerequisites –

1. Hardware - a Dell PowerEdge R730XD server with Windows Server 2016 RTM installed. I enabled the Hyper-V role and configured two VMs on the server.

2. VM1 - HTTPPull -> This VM will be my HTTP pull server.

3. VM2 - NHV-1 -> This VM will be my client; I will configure the VM's Local Configuration Manager (LCM) to pull the configuration from the HTTP pull server. I installed the Hyper-V feature on this VM to enable nested virtualization (to do so, follow this Microsoft article; see also the sketch after this list). You can skip this step if you are using a physical host.

4. Install WMF 5.0 if you are using Server 2012 or an older OS. Here is the download link. Windows Server 2016 ships with the latest Windows Management Framework (WMF) natively.

5. Download and save the following modules on your HTTPPull server:

  • xHyper-V (Save-Module -Name xHyper-V -Path 'C:\Program Files\WindowsPowerShell\DscService\Modules')
  • xPSDesiredStateConfiguration (Save-Module -Name xPSDesiredStateConfiguration -Path 'C:\Program Files\WindowsPowerShell\DscService\Modules')
  • NuGet (Save-Module -Name NuGet -Path 'C:\Program Files\WindowsPowerShell\DscService\Modules')
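
For the nested-virtualization step mentioned in item 3 above, here is a minimal sketch using the standard Hyper-V cmdlet. Run it on the physical host while the VM is powered off, and note that nested virtualization has additional requirements (for example, dynamic memory must be disabled on the VM), so treat this as a starting point rather than the full procedure:

# Expose the host's virtualization extensions to the guest so the Hyper-V
# role can be installed inside the NHV-1 VM (nested virtualization)
Set-VMProcessor -VMName 'NHV-1' -ExposeVirtualizationExtensions $true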

Once all the prerequisites are met, the seven simple steps below accomplish two goals:

1. Configure the HTTP pull server.

2. Point clients to the pull server so they can get their desired state configuration.

    Configure HTTP Pull Server

1. Generate the configuration .MOF file -

We need to run the script below to generate a MOF file that holds the configuration details for the pull server. When the generated MOF is applied, it installs the IIS role (if it is not already installed) and configures the pull server.

     

Configuration HTTPPullServer
{
    # The xPSDesiredStateConfiguration module must exist on the pull server
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node HTTPPull
    {
        # Install the Windows Server DSC Service feature
        WindowsFeature DSCServiceFeature
        {
            Ensure = 'Present'
            Name   = 'DSC-Service'
        }

        # Use the xDSCWebService resource to simplify deployment of the web service
        xDSCWebService PSDSCPullServer
        {
            Ensure                = 'Present'
            EndpointName          = 'PSDSCPullServer'
            Port                  = 8080
            PhysicalPath          = "$env:SYSTEMDRIVE\inetpub\wwwroot\PSDSCPullServer"
            CertificateThumbPrint = 'AllowUnencryptedTraffic'
            ModulePath            = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
            ConfigurationPath     = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
            State                 = 'Started'
            DependsOn             = '[WindowsFeature]DSCServiceFeature'
        }
    }
}

# Running this script generates a .mof file in the location below
HTTPPullServer -OutputPath 'C:\PullServerConfig\'

     

At this point you will have a MOF file generated on your workstation/management station in the C:\PullServerConfig directory.
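
A quick sanity check before moving on (the generated file is named after the node, so this should return True):

# Confirm the configuration MOF was generated
Test-Path 'C:\PullServerConfig\HTTPPull.mof'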

2. Push the configuration to the server -

Use the command below to push the configuration from your local management box to your HTTP pull server:

    Start-DscConfiguration -Path 'C:\PullServerConfig\' -ComputerName HTTPPull -Wait -Force -Verbose

     

Running the above command will set up the VM (HTTPPull) as the HTTP pull server.


You can run a simple check against the HTTP pull server using the function below. If you do not use the default service URL, adjust it accordingly:

     

function Verify-DSCPullServer ($fqdn) {
    # Query the pull server's service endpoint and list the collections it exposes
    ([xml](Invoke-WebRequest "http://$($fqdn):8080/PSDSCPullServer.svc" | ForEach-Object Content)).service.workspace.collection.href
}

Verify-DSCPullServer 'HTTPPull.test.local'

     

If you see the output shown below after running the function, your HTTP pull server is communicating as expected.

     

## Expected result (sample output - do not run):
# Configurations
# Modules
# Action
# Module
# StatusReport
# Node
# Reports
# Nodes

     

3. Create the .MOF file to configure the LCM (Local Configuration Manager) of your clients -

Now that we have the HTTP pull server set up, configure the client LCM to pull its configuration from the pull server.

Run the PowerShell script below to create a meta.MOF file containing the configuration details:

     

[DSCLocalConfigurationManager()]
Configuration LCM_HTTPPULL
{
    param
    (
        [Parameter(Mandatory = $true)]
        [String[]]$ComputerName,

        [Parameter(Mandatory = $true)]
        [String]$Guid
    )

    Node $ComputerName
    {
        Settings
        {
            AllowModuleOverwrite = $true
            ConfigurationMode    = 'ApplyAndAutoCorrect'
            RefreshMode          = 'Pull'
            ConfigurationID      = $Guid
            RebootNodeIfNeeded   = $true
        }

        ConfigurationRepositoryWeb DSCHTTP
        {
            ServerURL               = 'http://HTTPPull.domain.local:8080/PSDSCPullServer.svc'
            AllowUnsecureConnection = $true
        }
    }
}

# Computer list
$ComputerName = 'NHV-1'

# GUID for the computers (create a new one or reuse an existing one; the
# example below reuses an existing GUID - see the sketch after this block)
$guid = '4c060e4d-0e59-422e-935a-11a8f4283c41'

# Create the computer meta.mof in the output folder
LCM_HTTPPULL -ComputerName $ComputerName -Guid $guid -OutputPath 'C:\PullServerConfig'
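
If you prefer to mint a fresh GUID rather than reuse an existing one, a one-line sketch using the standard .NET API:

# Generate a new GUID string to use as the ConfigurationID
$guid = [guid]::NewGuid().Guid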

     

4. Set the Local Configuration Manager of your client machines -

Now that the meta.MOF file is created, we need to apply it to set the LCM of the clients. Use the command below to do that; running it will modify the DSCLocalConfigurationManager properties:

     

    Set-DscLocalConfigurationManager -ComputerName $COMPUTERNAME -Path C:\PullServerConfig -Verbose


Here is a screenshot of the DSCLocalConfigurationManager before and after configuring the NHV-1 VM to pull its configuration.
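
If you want to reproduce that before/after view yourself rather than rely on the screenshot, the standard DSC cmdlet below shows the client's LCM settings; run it once before and once after the Set-DscLocalConfigurationManager call:

# Query the client's Local Configuration Manager settings remotely
Get-DscLocalConfigurationManager -CimSession NHV-1 |
    Select-Object RefreshMode, ConfigurationID, ConfigurationMode, RebootNodeIfNeeded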



5. Create a configuration blueprint -

The script below creates the configuration .MOF file that installs the Hyper-V and Failover Clustering roles on the client computers, along with the tools needed to manage them, and creates the internal virtual switch:

configuration HyperVbuild
{
    param (
        [string]$NodeName = 'HTTP-Computers'
    )

    Import-DscResource -ModuleName xHyper-V

    Node $NodeName {

        WindowsFeature HyperV {
            Ensure = 'Present'
            Name   = 'Hyper-V'
        }

        WindowsFeature HyperVPowershell {
            Ensure = 'Present'
            Name   = 'Hyper-V-PowerShell'
        }

        WindowsFeature FullGUITools {
            Ensure = 'Present'
            Name   = 'Hyper-V-Tools'
        }

        # Creates a directory for your VMs on the system drive; you can skip
        # this if you want to use the default directory
        File VMsDirectory {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = "$($env:SystemDrive)\VMs"
        }

        xVMSwitch InternalSwitch {
            DependsOn = '[WindowsFeature]HyperV'
            Ensure    = 'Present'
            Name      = 'LabInternal'
            Type      = 'Internal'
        }

        WindowsFeature FM {
            Ensure = 'Present'
            Name   = 'Failover-Clustering'
        }

        WindowsFeature FailoverClusteringTools {
            Ensure    = 'Present'
            Name      = 'RSAT-Clustering'
            DependsOn = '[WindowsFeature]FM'
        }
    }
}

HyperVBuild -OutputPath 'C:\PullServerConfig'

     

Once we have created the configuration .MOF file for the blueprint, we need to rename it with the GUID of the client machine and generate a checksum. Below are the commands to use.

Note: You can select and run all the commands at once, or run them separately.

# 1. Grab the GUID from one of the client machines (the pull clients whose LCM we configured earlier)
$guid = Get-DscLocalConfigurationManager -CimSession NHV-1 | Select-Object -ExpandProperty ConfigurationID

# 2. Run steps 2.1 and 2.2 together.
# The pull server matches configurations to clients by GUID, so the
# configuration .mof must carry the same GUID that the client LCMs were
# assigned; every LCM configured with this GUID will pull this file.

# 2.1 Specify the source path of the configuration
$Source = "C:\PullServerConfig\HTTP-Computers.mof"

# 2.2 The destination should be the HTTP configuration folder on the pull
# server. Here I am saving the file to C:\ first, and will then copy the
# GUID.mof and checksum files to the HTTP configuration folder.
$Dest = "C:\PullServerConfig\$guid.mof"

# 3. Copy
Copy-Item -Path $Source -Destination $Dest

# 4. Then, on the pull server, generate the checksum
New-DscChecksum $Dest

     

6. Copy the configuration and checksum files to the HTTP pull server

Now that the GUID.mof and GUID.mof.checksum files are created in a local folder on your workstation, we need to copy them to the HTTPPull server's configuration directory:

"C:\Program Files\WindowsPowerShell\DscService\Configuration"

     

Note: I hit an issue when using the xHyper-V resource to create the xVMSwitch via Desired State Configuration of the clients: for some reason the clients were not able to pull the xHyper-V resource from the pull server's module repository to configure the internal switch. The workaround for this issue is to copy the xHyper-V resource module locally into your client server's module repository, C:\Program Files\WindowsPowerShell\Modules (see the sketch below).
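
A hedged sketch of that workaround, assuming the client's C$ administrative share is reachable from the pull server (server and client names follow the examples above):

# Stage the xHyper-V DSC resource module directly into the client's
# local module repository as a workaround
Copy-Item -Path 'C:\Program Files\WindowsPowerShell\DscService\Modules\xHyper-V' `
          -Destination '\\NHV-1\C$\Program Files\WindowsPowerShell\Modules\' -Recurse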

    7. Test and update the configuration

# Should not be installed yet
Get-WindowsFeature -ComputerName NHV-1 -Name *Hyper-V*, *Failover*, RSAT-Clustering

# Pull and apply the configuration
Update-DscConfiguration -ComputerName NHV-1 -Wait

# Should be installed by now
Get-WindowsFeature -ComputerName NHV-1 -Name *Hyper-V*, *Failover*, RSAT-Clustering

# Test whether the computer is in its desired state
Test-DscConfiguration -CimSession NHV-1

  • Dell TechCenter

    At Dell EMC, simplicity is our methodology and our mantra

    Ed. Note: This blog was authored by Kevin Noreen, Marketing Director, Dell EMC Server Management 

In regard to the management of IT infrastructure, what does it mean to be “simple”? This question comes up frequently when I meet with customers to discuss our next generation of systems management products. Unfortunately for many IT professionals, a majority of IT solutions are expensive, time-consuming, and far too complex. And some hardware vendors actually capitalize on promoting expensive certifications and unnecessary features.

In the past, IT professionals were forced to accept this “business as usual” approach, but customers today no longer value this vapid chicanery. At Dell EMC we believe that IT solutions are meant to enable business, not stifle it. We believe that IT Pros should direct the IT infrastructure; infrastructure should not dictate to the IT Pro.

Research and hundreds of customer interviews have shown us that IT professionals are a beleaguered group, expected to meet an increasing number of dynamic business needs even as staff and budgets shrink. Under these conditions, IT professionals have less time to drive business goals when they are consumed by the complexities of managing IT infrastructure. And skills associated with the maintenance of hardware are no longer highly valued by the industry.

    I recently spoke with an IT Director challenged with providing his customers better service while also managing with fewer IT staff.  Reflecting on data center hardware management, he opined, “I shouldn’t need a PhD to manage my IT infrastructure.”  Here at Dell EMC, we couldn’t agree more.  Unfortunately his dilemma is not unique.  And with the emergence of hyper-converged and software-defined architectures, tools necessary to manage server sprawl will become ever more important.

In a recent Dell EMC-commissioned blind study, 96% of IT decision makers acknowledged that having a simple yet comprehensive server management console is as important (if not more important) as fully optimizing the hardware. In fact, good management tools should enable hardware optimization. “The true value-add of a good management console is in facilitating other IT priorities,” one respondent commented.

    The mandate is clear: make systems management simple and automated.  Free IT Pros to work on tasks that drive more value for the business - and its customers.  These comments inspire the direction we are taking at Dell EMC.

We already have a unique and demonstrable history of driving simplicity. Dell EMC OpenManage Essentials, in particular, has been an industry pioneer in this regard. To date, we estimate that over 4 million servers have been managed by our server management console since its initial launch in 2012. Customers tell us that the efficiencies gained with OpenManage Essentials allow them to manage thousands of servers at the cost of managing one.

    For example, Richard Sparkman, IT director at Shelby American, relies on the OpenManage Essentials console to identify and proactively address potential performance bottlenecks. “With OpenManage Essentials, I can take care of our IT needs in about 80 percent of the time it took previously.  It helps me spend less time actively managing all my servers and more time on strategic tasks that may not relate directly to IT.”

    Building upon the success of OpenManage Essentials, our console portfolio is evolving.  Dell EMC is pleased to pre-announce our new server management console expected to launch in late 2017.  With a tenacious focus on simplicity, automation, and unification of data center management, our new console is designed for the Next Generation of IT Professionals.   

    We believe that the best measure of simplicity is the ability to free IT Pros from hardware management tasks and enable them to instead concentrate on accelerating business objectives and improving services to their customers.   

Next Generation IT Pros will derive many practical and inventive benefits from the new management features available in this Next Generation Console, including:

    • End-to-end server lifecycle management covering inventory, monitoring, reporting, configuration and deployment
    • Simplified discovery processes that enable devices to automatically register themselves with the Console
    • Intuitively designed interface that minimizes management steps
    • Modernized dashboard providing a clear view of alerts and remediation options
    • Integration with Dell EMC services for rapid support case creation and access to warranty status
    • Customized reporting capabilities
    • And many more

    Several new features will offer simplicities previously only available with modular architectures.  For example, PowerEdge modular solutions offer tremendous management depth, including detection and automated setup when a new blade is inserted, as well as eliminating credential prompts for ongoing device management.  Our new console will provide similar management capabilities along with incremental simplicity - unrestrained by sheet metal.

Our Next Generation Console will be a catalyst for IT Pros to transcend existing IT obstacles with effortless server management capabilities that return value in the form of real-time efficiencies and budget savings.

    Please stay tuned and watch here for additional news on the future of Dell EMC server management capabilities.

  • Dell TechCenter

    Dell Customized VMware ESXi 6.5 Upgrade Scenarios

VMware recently announced the vSphere 6.5 release. Dell supports direct upgrade to Dell customized ESXi 6.5 from both the ESXi 6.0 Update 2 and ESXi 5.5 Update 3 branches. This document covers some of the best practices to consider before upgrading to Dell customized ESXi 6.5 with respect to UEFI Secure Boot. The white paper is relevant for all users who plan to migrate to the Dell customized version of VMware ESXi 6.5 and make use of the UEFI Secure Boot feature. The paper can be downloaded from here and is also attached to this blog.

This upgrade guide covers recommendations on enabling UEFI Secure Boot for the relevant upgrade scenarios to ESXi 6.5. Prior to enabling UEFI Secure Boot on an upgraded system, it is highly recommended to read this white paper and understand the caveats. These caveats do not apply to a fresh install of Dell customized VMware ESXi 6.5, as it contains all compatible VIBs for UEFI Secure Boot.

  • Dell TechCenter

    Dell Agent-free out-of-band Infrastructure Monitoring System using Docker container

Docker is an open source, lightweight containerization platform for distributed applications, built on the principle of "Build once, run anywhere". It provides the ability to package software into standardised units for software development. It is an OS-level virtualization layer that takes advantage of Linux kernel features like namespaces and control groups to provide complete application isolation, and it greatly simplifies the deployment and management of applications on any platform. Containers provide each application with an independent runtime environment.

    Docker provides the tools necessary to build, run and manage applications packaged as Docker images. An application distributed as a Docker image incorporates all the dependencies and configuration necessary for it to run, hence eliminating the need for end-users to install necessary packages and troubleshoot dependencies.

    Running Dell OpenManage Plug-in for Nagios Core inside Docker Container

Dell OpenManage Plug-in for Nagios Core provides a proactive approach to data center management, delivering features for monitoring 12th and later generations of Dell PowerEdge servers through an agent-free method using the integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller technology, as well as Dell chassis and Dell storage devices, in the Nagios Core console. With this plug-in, customers get comprehensive hardware-level visibility of Dell PowerEdge servers, Dell chassis, and Dell storage, including overall and component-level health monitoring, for quicker fault detection and resolution. To learn more about this plug-in, refer to this link.



The idea is to leverage Docker's one-liner commands to discover the server, storage and network devices and make them ready to be managed by Nagios Core. This protects customers' existing investments in Nagios Core and helps simplify the integration and management of Dell infrastructure.

    Building the Docker Image:

Assuming that you already have Docker Engine installed on your system, execute the commands below to build the Docker image locally:

$ sudo git clone https://github.com/ajeetraina/dell-oob-monitoring
$ cd dell-oob-monitoring
$ sudo docker build -t ajeetraina/dell-oob-monitoring .

In case you are not interested in building this Docker image, you can pull it directly from https://hub.docker.com/r/ajeetraina/dell-oob-monitoring/

$ sudo docker pull ajeetraina/dell-oob-monitoring


    Running the Docker Container:

Follow the steps below to run the Docker container:

Step 1:

    There are two ways to discover the Dell Infrastructure -

i. List the iDRAC IPs (for servers, blades, FX2 and VRTX) and management IPs (switches, EqualLogic and Compellent) in a plain text file

    OR

    ii. Supply the subnet during the script execution to enable auto-discovery of Dell Infrastructure.

For example, I created a file called ips.txt under the /IP directory on the Docker host system, as shown below:

File: /IP/ips.txt

    192.168.10.2
    192.168.10.3
    192.168.10.4
    192.168.10.5
    ......
    ......

Step 2:

    Run the below command to start the container:

$ sudo docker run -dit --net=host -v /IP:/IP ajeetraina/dell-oob-monitoring


This command mounts the host directory /IP into the container at /IP. The --net=host option shares the host's network stack with the container so that you can access the Nagios UI at the host's IP address.

Step 3:

Execute the command below to start the script, which initiates the necessary services to discover servers, storage and network devices and make them ready to manage through the Nagios UI.

$ sudo docker exec -it <container-id> sh discover

This command takes some time (depending on the number of iDRAC / management IPs you supplied) to bring up Nagios and add the Dell infrastructure to the Nagios dashboard.

    • To access Nagios, go to http://<dockerhost-ip>:80/nagios; use the credentials nagiosadmin / nagiosadmin

If you have a firewall running on your host, be sure to open port 80/tcp.

When you log in to the Nagios UI, in the left-hand column under the Hosts section, you can view the list of servers, storage and networking devices ready to be monitored.

    Support

Please note these images are provided as-is and are not supported by Dell. If you find our container images useful, would like to see new features added, or would like to report a bug, please contact our Linux-PowerEdge mailing list or contact me directly at Ajeet_Raina{AT}Dell.com. We welcome your participation and feedback.

  • vWorkspace - Blog

    What's new for vWorkspace - November 2016

Updated monthly, this publication provides you with new and recently revised information, organized into the following categories: Documentation, Notifications, Patches, Product Life Cycle, Releases, Knowledge Base Articles.

    Subscribe to the RSS (Use IE only)

     

    Patches

    None at this time

      

    Downloads

    Product Release Notification – vWorkspace 8.6.2

    Type: Patch Release Created: November 2016

     

    Knowledgebase Articles

    New 

    214733 - How to manually remove the Quest vWorkspace Catalyst

    How to manually remove the Quest vWorkspace Catalyst from Hyper-V server

    Created: November 2, 2016

     

    215010 - How to increase Metaprofile timeout for saving large profiles

    Created: November 10, 2016

     

    215022 - Password manager gives incorrect reason/error when password change fails

    Password manager gives the error that the domain controller cannot be contacted when a password is entered when the password strength / complexity...

    Created: November 10, 2016

     


    215061 - New servers are not shown in the Monitoring and Diagnostics console

    When a new server is added to the vWorkspace Farm it may not be seen in the Monitoring and Diagnostics environment. For example a new Remote...

    Created: November 11, 2016

     

215269 - Black screen on login to a 2012 R2 session host

    When users try to connect to a Remote Desktop Session host they see a black screen and are eventually disconnected. The following two errors are...

    Created: November 16, 2016

     

    Revised

    63874 - The Remote Computer requires Network level authentication error.

    When trying to connect to a machine, an error occurs “The Remote Computer requires Network level authentication error.”

    Revised: November 3, 2016

     

    155112 - Cannot install connector from WA

    When connecting to the webaccess site for the first time, it detects that no client is installed. On clicking install the user sees error... 404...

    Revised: November 9, 2016

     

     213123 - Stop error or Blue Screen (BSOD) when vWorkspace Connector is redirecting scanners

    There is a Stop Error, commonly referred to as a Blue Screen of Death (BSOD), stating an error with 'pnusb.sys' when users are trying to use a...

    Revised: November 10, 2016

     

     213923 - Cannot access website using Chrome after upgrading to 8.6.2. Error: "ERR_TOO_MANY_REDIRECTS"

    Revised: November 10, 2016

     

    54133 - When connecting to a Terminal Server via Web Access with Secure-IT you may receive the error,

    After logging in to Secure-IT/Web Access you may receive the error, "Computer not Available" after clicking on a Managed Application linked to a...

    Revised: November 17, 2016

     

    213400 - Optional Hotfix 648071 for 8.6 MR2 vWorkspace Monitoring and Diagnostics Released

    This is an optional hotfix and can be installed on the following vWorkspace roles Monitoring and Diagnostics This release provides support for...

    Revised: November 18, 2016

     

    73446 - How To: Improve Performance to a Virtual Desktop VDI Session

    When connecting to a VDI over a slow connection users report that the performance on the VDI is degraded, or sessions are disconnected often.

    Revised: November 18, 2016

     

     53711 - Auto-configuration of vWorkspace Connectors

    Auto-configuration of vWorkspace Connectors

    Revised: November 21, 2016

     

     70481 - HOW TO: Secure Access Certificate Configuration

    Revised: November 24, 2016

     

    207445 - Is it possible to upgrade vWorkspace in stages?

    In many cases, upgrading a large vWorkspace environment cannot be done in a short time period. Is it possible to upgrade vWorkspace in stages?

    Revised: November 25, 2016

     

    Product Life Cycle - vWorkspace

Revised: November 2016