Dell Community

Blog Group Posts (blog | group | post count):

• Application Performance Monitoring Blog | Foglight APM | 105
• Blueprint for HPC - Blog | Blueprint for High Performance Computing | 0
• Custom Solutions Engineering Blog | Custom Solutions Engineering | 8
• Data Security | Data Security | 8
• Dell Big Data - Blog | Dell Big Data | 68
• Dell Cloud Blog | Cloud | 42
• Dell Cloud OpenStack Solutions - Blog | Dell Cloud OpenStack Solutions | 0
• Dell Lifecycle Controller Integration for SCVMM - Blog | Dell Lifecycle Controller Integration for SCVMM | 0
• Dell Premier - Blog | Dell Premier | 3
• Dell TechCenter | TechCenter | 1,858
• Desktop Authority | Desktop Authority | 25
• Featured Content - Blog | Featured Content | 0
• Foglight for Databases | Foglight for Databases | 35
• Foglight for Virtualization and Storage Management | Virtualization Infrastructure Management | 256
• General HPC | High Performance Computing | 227
• High Performance Computing - Blog | High Performance Computing | 35
• Hotfixes | vWorkspace | 66
• HPC Community Blogs | High Performance Computing | 27
• HPC GPU Computing | High Performance Computing | 18
• HPC Power and Cooling | High Performance Computing | 4
• HPC Storage and File Systems | High Performance Computing | 21
• Information Management | Information Management | 229 (Welcome to the Dell Software Information Management blog! Our top experts discuss big data, predictive analytics, database management, data replication, and more.)
• KACE Blog | KACE | 143
• Life Sciences | High Performance Computing | 9
• On Demand Services | Dell On-Demand | 3
• Open Networking: The Whale that swallowed SDN | TechCenter | 0
• Product Releases | vWorkspace | 13
• Security - Blog | Security | 3
• SharePoint for All | SharePoint for All | 388
• Statistica | Statistica | 24
• Systems Developed by and for Developers | Dell Big Data | 1
• TechCenter News | TechCenter Extras | 47
• The NFV Cloud Community Blog | The NFV Cloud Community | 0
• Thought Leadership | Service Provider Solutions | 0
• vWorkspace - Blog | vWorkspace | 511
• Windows 10 IoT Enterprise (WIE10) - Blog | Windows 10 IoT Enterprise (WIE10) | 4 (Wyse Thin Clients running Windows 10 IoT Enterprise)
Latest Blog Posts
  • Dell TechCenter

    What to know when installing and using operating systems on the R6415, R7415, and R7425 servers

    This blog was originally posted by Cantwell, Thomas.

    Dell has introduced the AMD Epyc server processor to the Dell PowerEdge line. Here is what to know when installing and using operating systems on the R6415, R7415, and R7425 servers.

    Dell PowerEdge servers have traditionally used Intel processors, but the new AMD Epyc processor is now an option on the server models listed above, and there are some specific nuances you should be aware of when installing or using some operating systems.

    General OS
    Dell AMD servers will initially ship with UEFI boot mode only; legacy BIOS mode will not be an option. This limits OS selections to those that are UEFI-capable.
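    Since only UEFI-capable operating systems will boot on these servers, it can be useful to verify the firmware interface a Linux system actually booted with. A minimal sketch, assuming a modern Linux kernel (the sysfs path below is only present on UEFI boots):

```python
import os

def boot_mode() -> str:
    """Return 'UEFI' if the running Linux kernel was booted via EFI firmware.

    Modern kernels expose /sys/firmware/efi only when booted in UEFI mode,
    so its absence indicates a legacy BIOS boot.
    """
    return "UEFI" if os.path.isdir("/sys/firmware/efi") else "BIOS"

print(boot_mode())
```

    On the platforms discussed here the check should always report UEFI.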

    To install the Windows Server 2016 RTM using the OS install media:

    1. Go to System Settings (press F2 during POST) and disable Virtualization Technology under Processor Settings.
    2. Install Windows Server 2016 from the OS media.
    3. Apply the latest Update Rollup available via Windows Update. Windows Server 2016 RTM requires a hotfix (available in the June 2017 Update Rollup and later) for a known AMD IOMMU issue. For Windows Server version 1709 deployments, this hotfix is no longer needed.
    4. Go back to System Settings and re-enable Virtualization Technology under Processor Settings.

    If you wish to use Windows live debugging and want to use USB 3.0 as your debug transport, for the R6415 and R7415 platforms, only the internal USB port on the motherboard can be used for debugging, so the case must be opened.  On the R7425, no USB ports can be used for USB debugging and you must resort to serial debugging for live debug.

    For VMware, be sure you update to the latest release:
    * Navigate to the platform's support page; under Drivers, change the OS to ESXi 6.5, and then in the Enterprise Solutions category download the latest image.

    For RHEL 7.4, there is a bug that will prevent the system from booting due to a kernel panic. The fix for the kernel panic is available in an errata kernel.

    Thomas Cantwell

  • General HPC

    Skylake memory study

    Authors: Joseph Stanfield, Garima Kochhar, Donald Russell, and Bruce Wagner.

    HPC Engineering and System Performance Analysis Teams, HPC Innovation Lab, January 2018.

    To function efficiently in an HPC environment, a cluster of compute nodes must work in tandem to process complex data and achieve desired results. The user expects each node to function at peak performance as an individual system, as well as a part of an intricate group of nodes processing data in parallel. To enable efficient cluster performance, we first need good single system performance. With that in mind, we evaluated the impact of different memory configurations on single node memory bandwidth performance using the STREAM benchmark. The servers used here support the latest Intel Skylake processor (Intel Scalable Processor Family) and are from the Dell EMC 14th generation (14G) server product line.
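    For readers unfamiliar with STREAM: it measures sustainable memory bandwidth with four simple array kernels (Copy, Scale, Add, Triad). The official benchmark is written in C/Fortran with multi-GB arrays; purely as an illustration of what the Triad kernel computes and how bandwidth is derived, here is a sketch in Python (not a substitute for the real benchmark):

```python
import time

def triad(a, b, c, scalar):
    """STREAM Triad kernel: a[i] = b[i] + scalar * c[i] for every element.

    The real benchmark runs this in C with OpenMP across large arrays
    and reports bytes moved per second; this sketch only shows the math.
    """
    for i in range(len(a)):
        a[i] = b[i] + scalar * c[i]
    return a

n = 1_000_000
b = [1.0] * n
c = [2.0] * n
a = [0.0] * n

start = time.perf_counter()
triad(a, b, c, 3.0)
elapsed = time.perf_counter() - start

# Triad touches three arrays of 8-byte elements per pass.
bandwidth_gb_s = 3 * 8 * n / elapsed / 1e9
print(f"a[0] = {a[0]}, ~{bandwidth_gb_s:.2f} GB/s (interpreted Python, not comparable to C STREAM)")
```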

    Fewer than 6 DIMMs per socket

    The Skylake processor has a built-in memory controller similar to previous generation Xeons but now supports *six* memory channels per socket. This is an increase from the four memory channels found in previous generation Xeon E5-2600 v3 and E5-2600 v4 processors. Different Dell EMC server models offer a different number of memory slots based on server density, but all servers offer at least one memory module slot on each of the six memory channels per socket.

    For applications that are sensitive to memory bandwidth and require predictable performance, configuring memory for the underlying architecture is an important consideration. For optimal memory performance, all six memory channels of a CPU should be populated with memory modules (DIMMs), and populated identically. This is called a balanced memory configuration. In a balanced configuration, all DIMMs are accessed uniformly and the full complement of memory channels is available to the application. An unbalanced memory configuration will lead to lower memory performance as some channels will be unused or used unequally. Even worse, an unbalanced memory configuration can lead to unpredictable memory performance based on how the system fractures the memory space into multiple regions and how Linux maps out these memory domains.
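    The balance rule above can be expressed as a small check: a socket's configuration is balanced when every one of its six channels is populated, and populated identically. A sketch (the per-channel data layout is an assumption for illustration, not a Dell tool):

```python
CHANNELS_PER_SOCKET = 6  # Skylake-SP integrated memory controller

def is_balanced(channels):
    """channels: list of per-channel DIMM-size tuples for one socket,
    e.g. [(16,)] * 6 for six 16 GB DIMMs at 1DPC.

    Balanced = exactly six channels, none empty, all populated identically.
    """
    if len(channels) != CHANNELS_PER_SOCKET:
        return False
    if any(len(ch) == 0 for ch in channels):
        return False
    return len(set(channels)) == 1

print(is_balanced([(16,)] * 6))             # six identical channels -> True
print(is_balanced([(16,)] * 4 + [(), ()]))  # two empty channels -> False
```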

    Figure 1: Relative memory bandwidth with different number of DIMMs on one socket.

    PowerEdge C6420. Platinum 8176. 32 GB 2666 MT/s DIMMs.

    Figure 1 shows the drop in performance when all six memory channels of a 14G server are not populated. Using all six memory channels per socket is the best configuration, and will give the most predictable performance. This data was collected using the Intel Xeon Platinum 8176 processor. While the exact memory performance of a system depends on the CPU model, the general trends and conclusions presented here apply across all CPU models.

    Balanced memory configurations

    Focusing now on the recommended configurations that use all 12 memory channels in a two socket 14G system, there are different memory module options that allow different total system memory capacities. Memory performance will also vary depending on whether the DIMMs used are single ranked or dual ranked, RDIMMs or LRDIMMs. These variations are, however, significantly smaller than with any unbalanced memory configuration, as shown in Figure 2.

    • 8 GB 2666 MT/s memory modules are single ranked and have lower memory bandwidth than the dual ranked 16 GB and 32 GB memory modules.

    • 16 GB and 32 GB modules are both dual ranked and have similar memory bandwidth, with the 16 GB DIMMs demonstrating slightly higher bandwidth.

    • 64 GB memory modules are LRDIMMs and have slightly lower memory bandwidth than the dual ranked RDIMMs.

    128 GB memory modules are also LRDIMMs, but they are lower performing than the 64 GB modules; their prime attraction is the additional memory capacity. Note that LRDIMMs also have higher latency and higher power draw. An older study on previous generation 13G servers describes these characteristics in detail.
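    The capacity points compared in Figure 2 follow directly from populating all 12 channels of a two-socket system with one identical DIMM each. For reference, a quick enumeration (DIMM sizes as discussed above):

```python
DIMM_SIZES_GB = [8, 16, 32, 64, 128]  # RDIMM/LRDIMM options discussed above
CHANNELS_2S = 12                      # 6 channels per socket x 2 sockets, 1DPC

# Balanced 1DPC system capacities: one identical DIMM on each channel.
for size in DIMM_SIZES_GB:
    print(f"12 x {size:>3} GB = {CHANNELS_2S * size:>4} GB total")
```

    This yields the 96, 192, 384, 768, and 1536 GB balanced configurations.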


    Figure 2: Relative memory bandwidth for different system capacities (12D balanced configs).

    PowerEdge R740. Platinum 8180. DIMM configuration as noted, all 2666 MT/s memory.

    Data for Figure 2 was collected on the Intel Xeon Platinum 8180 processor. As mentioned above, the memory subsystem performance depends on the CPU model since the memory controller is integrated with the processor, and the speed of the processor and number of cores also influence memory performance. The trends presented here will apply across the Skylake CPUs, though the actual percentage differences across configurations may vary. For example, here we see the 96 GB configuration has 7% lower memory bandwidth than the 384 GB configuration. With a different CPU model, that difference could be 9-10%.

    Figure 3 shows another example of balanced configurations, this time using 12 or 24 identical DIMMs in the 2S system where one DIMM per channel is populated (1DPC with 12DIMMs) or two DIMMs per channel are populated (2DPC using 24 DIMMs). The information plotted in Figure 3 was collected across two CPU models and shows the same patterns as Figure 2. Additionally, the following observations can be made:

    • With two 8GB single ranked DIMMs giving two ranks on each channel, some of the memory bandwidth lost with 1DPC SR DIMMs can be recovered with the 2DPC configuration.

    • 16 GB dual ranked DIMMs perform better than 32 GB DIMMs in 2DPC configurations too.

    We also measured the impact of this memory population when 2400 MT/s memory is used, and the conclusions were identical to those for 2666 MT/s memory. For brevity, the 2400 MT/s results are not presented here.

    Figure 3: Relative memory bandwidth for different system capacities (12D, 24D balanced configs).

    PowerEdge R640. Processor and DIMM configuration as noted. All 2666 MT/s memory.

    Unbalanced memory configurations

    In previous generation systems, the processor supported four memory channels per socket. This led to balanced configurations with eight or sixteen memory modules per dual socket server. Configurations of 8x16 GB (128 GB), 16x16 GB or 8x32 GB (256 GB), and 16x32 GB (512 GB) were popular and recommended.

    With 14G and Skylake, these absolute memory capacities will lead to unbalanced configurations, as these capacities do not distribute evenly across 12 memory channels. A configuration of 512 GB on 14G Skylake is possible but suboptimal, as shown in Figure 4. Across CPU models (Platinum 8176 down to Bronze 3106), there is a 65% to 35% drop in memory bandwidth when using an unbalanced memory configuration compared to a balanced memory configuration! The figure compares 512 GB to 384 GB, but the same conclusion holds for 512 GB vs 768 GB, as Figure 2 has shown us that a balanced 384 GB configuration performs similarly to a balanced 768 GB configuration.
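    The arithmetic behind this is simple: a capacity is only reachable in a balanced 1DPC layout if it divides evenly across 12 channels into a per-channel value that matches an available DIMM size. A quick check, using the DIMM sizes discussed above:

```python
DIMM_SIZES_GB = {8, 16, 32, 64, 128}

def balanced_1dpc(total_gb, channels=12):
    """True if total_gb can be built with one identical DIMM on each of
    `channels` channels, i.e. the capacity divides evenly into a size
    that exists as a DIMM option."""
    per_channel, remainder = divmod(total_gb, channels)
    return remainder == 0 and per_channel in DIMM_SIZES_GB

for capacity in (256, 384, 512, 768):
    print(capacity, "GB balanced:", balanced_1dpc(capacity))
```

    256 GB and 512 GB fail the check (512 / 12 is not a whole DIMM size), while 384 GB and 768 GB divide cleanly into 12 x 32 GB and 12 x 64 GB.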

    Figure 4: Impact of unbalanced memory configurations.

    PowerEdge C6420. Processor and DIMM configuration as noted. All 2666 MT/s memory.

    Near-balanced memory configurations

    The question that arises is: is there a reasonable configuration that would work for capacities close to 256 GB without having to go all the way to 384 GB, and close to 512 GB without having to raise the capacity all the way to 768 GB?

    Dell EMC systems do allow mixing different memory modules, and this is described in more detail in the server owner manual. For example, the Dell EMC PowerEdge R640 has 24 memory slots with 12 slots per processor. Each processor’s set of 12 slots is organized across 6 channels with 2 slots per channel. In each channel, the first slot is identified by the white release tab while the second slot tab is black. Here is an extract of the memory population guidelines that permit mixing DIMM capacities.

    The PowerEdge R640 supports Flexible Memory Configuration, enabling the system to be configured and run in any valid chipset architectural configuration. Dell EMC recommends the following guidelines to install memory modules:

    • RDIMMs and LRDIMMs must not be mixed.

    • Populate all the sockets with white release tabs first, followed by the black release tabs.

    • When mixing memory modules with different capacities, populate the sockets with the highest-capacity memory modules first. For example, if you want to mix 8 GB and 16 GB memory modules, populate 16 GB memory modules in the sockets with white release tabs and 8 GB memory modules in the sockets with black release tabs.

    • Memory modules of different capacities can be mixed provided other memory population rules are followed (for example, 8 GB and 16 GB memory modules can be mixed).

    • Mixing of more than two memory module capacities in a system is not supported.

    • In a dual-processor configuration, the memory configuration for each processor should be identical. For example, if you populate socket A1 for processor 1, then populate socket B1 for processor 2, and so on.

    • Populate six memory modules per processor (one DIMM per channel) at a time to maximize performance.

    *One important caveat is that 64 GB LRDIMMs and 128 GB LRDIMMs cannot be mixed; they are different technologies and are not compatible.
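    Following those rules, a near-balanced capacity for a two-socket system works out as the larger module size in all 12 white (first) slots plus the smaller size in all 12 black (second) slots. The helper below is purely illustrative of that arithmetic:

```python
CHANNELS_2S = 12  # 6 channels per socket x 2 sockets, two slots per channel

def near_balanced_capacity(white_gb, black_gb):
    """Total 2S capacity when every white (first) slot holds a white_gb DIMM
    and every black (second) slot holds a black_gb DIMM, per the population
    rule that higher-capacity modules go in the white slots."""
    assert white_gb >= black_gb, "larger DIMMs must populate the white slots"
    return CHANNELS_2S * (white_gb + black_gb)

print(near_balanced_capacity(16, 8))   # 2DPC mix of 16 GB + 8 GB modules
print(near_balanced_capacity(32, 16))  # 2DPC mix of 32 GB + 16 GB modules
```

    The first mix gives the 288 GB near-balanced configuration discussed with Figure 5; the second lands between the 384 GB and 768 GB balanced points.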

    So the question is, how bad are mixed memory configurations for HPC? To address this, we tested valid “near-balanced configurations” as described in Table 1, with the results displayed in Figure 5.

    Table 1: Near balanced memory configurations

    Figure 5: Impact of near-balanced configurations.

    PowerEdge R640. Processor and DIMM configuration as noted. All 2666 MT/s memory.

    Figure 5 illustrates that near-balanced configurations are a reasonable alternative when the memory capacity requirements demand a compromise. All memory channels are populated, and this helps with the memory bandwidth. The 288 GB configuration uses single ranked 8 GB DIMMs, and we see the penalty single ranked DIMMs impose on the memory bandwidth.


    The memory subsystem performance differences with balanced vs. unbalanced configurations and with different types of memory modules are not new to Skylake or Dell EMC servers. Previous studies for earlier generations of servers and CPUs are listed below and reach similar conclusions.
    • Memory should ideally be populated in a balanced configuration, with all memory channels populated and populated identically, for best performance. The number of memory channels is determined by the CPU and system architecture.
    • DIMM rank, type, and memory speed have an impact on performance.
    • Unbalanced configurations are not recommended when optimizing for performance.
    • Some near-balanced configurations are a reasonable alternative when the memory capacity requirements demand a compromise.

    Previous memory configuration studies

    1. Performance and Energy Efficiency of the 14th Generation Dell PowerEdge Servers – Memory bandwidth results across Intel Skylake Processor Family (Skylake) CPU models (14G servers)


    3. 13G PowerEdge Server Performance Sensitivity to Memory Configuration – Intel Xeon 2600 v3 and 2600 v4 systems (13G servers)

    4. Unbalanced Memory Performance – Intel Xeon E5-2600 and 2600 v2 systems (12G servers)

    5. Memory Selection Guidelines for HPC and 11G PowerEdge Servers – Intel Xeon 5500 and 5600 systems (11G servers)

  • Windows 10 IoT Enterprise (WIE10) - Blog

    Wyse Easy Setup is now Available

    I am incredibly excited to announce the global launch of our brand-new Wyse Easy Setup software for Windows-based Dell Thin Clients. Wyse Easy Setup is a free utility that enables IT Admins to quickly and easily configure, lock down, and deploy their Windows-based Dell Thin Clients (WES7 and WIE10 only). It is available here:

    A huge thank you to everybody involved, especially Engineering and all the Sales Engineers who provided customer and personal feedback (you know who you are!), for making this possible. This is also the first product I've launched from scratch as a Product Manager, from the ideate/incubate stage all the way to launch. I'm incredibly grateful to Dell and my now not-so-new manager, Allen Tiffany, for trusting me with this role roughly a year ago. I created a 5-minute video that gives an introduction to Wyse Easy Setup and talks about the features here.



    Product Screenshot with Kiosk in background:

  • Dell TechCenter

    Top 10 Scale-out IT Business Trends for 2018

    Stephen Rousset – Distinguished Eng., ESI Director of Architecture, Dell EMC

    In the spirit of beginning-of-new-year lists, this blog enumerates a number of the most prominent trends that ESI believes will impact customers who operate at scale in 2018, and how those trends will influence the direction of and need for specific technologies. Some are continuing trends that may seem obvious at this point but are still dominant forces; others are emerging trends that need to be brought into consideration as future infrastructure development is being planned.

    1 - The role of data will continue to increase. The volume and importance of data will continue to drive a need for greater speed of access, higher reliability, and more rapid comprehension of the data. The accelerating adoption of Machine Learning and Deep Learning models is making technologies like Flash Storage (NVMe) pervasive, because in order to efficiently support the larger and larger amounts of data feeding those models, faster data access technologies must come to the forefront. Customers are increasingly looking for faster, yet reasonably priced, storage. This is also creating a new "mid-tier" of non-volatile storage that exists in the space between traditional storage and host memory, allowing for a tiered memory/storage model.

    2 - The Internet of Things (IoT) is shifting computing to the Edge. IoT is enabling data gathering at every imaginable physical endpoint (this is part of what continues to drive the increase in overall data). With this development came the realization that it is much more efficient to initially process all that data at the endpoint, avoiding the latencies of communicating the data to a centralized hub. Consequently, this is resulting in a move to denser form factor infrastructures at the endpoint/edge, and in the need to incorporate previously unnecessary environmental and packaging considerations (like ruggedization). For example, carriers are placing small footprint server installations outdoors at the base of cell phone towers to handle the data initially. Dell EMC Modular Edge Data Centers are one response to this ongoing trend.

    3 - Accelerated shift in focus from technology architectures to service architectures. The efficiencies delivered by today's software defined service architectures are undeniable at this point. By its software-defined nature, a service architecture makes innovating core business value easier, with less reliance on a hardware technology provider. As more large scale infrastructures are migrated to this model, the industry will be working to make operating in that environment simpler and to add even more efficiencies. Examples of this are vendors enabling Containers and Microservices within technology offerings themselves, allowing customers to create services more quickly without specific hardware dependencies. So as a hardware provider, it is incumbent on us to provide internal business units with reliable and flexible HW that allows them to concentrate on the service architectures and not worry about HW delivery and reliability.

    4 - Increased focus on speed of deployment. The move to service architectures has customers realizing that the speed of deploying and delivering new services is now an even more important component of any TCO calculation. The time it takes to bring on new services can be directly related to new sources of revenue, and can be a significant factor in competitiveness in the marketplace. The speed of differentiation is becoming just as important as the cost of differentiation. Getting to the point of "on-demand" deployment is the new high watermark. IT solutions providers will need a reliable, robust, and global delivery capability so companies can achieve a faster "time to money".

    5 - The drive to collect more infrastructure telemetry. As infrastructures become larger and larger, the need (and desire) to simplify management and understand critical metrics of the infrastructure becomes greater too. This has created the trend toward collecting much more infrastructure telemetry and, in turn, has pushed vendors to support common, standard APIs (like Redfish) to simplify the task of monitoring and managing large scale infrastructures across various vendors' equipment. Increased telemetry via standard and open interfaces answers the market demand for infrastructure-wide (rack scale) management capabilities that bring business intelligence to IT. The foundation of basic management capabilities provided through industry accepted standards (for example, Redfish APIs) is opening the way to disaggregation of hardware resources and fully composable infrastructure. This will be part of the path to ensuring optimal IT utilization for changing workload platform requirements.
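    Redfish exposes telemetry as plain JSON over HTTPS, so collection largely reduces to GET requests against well-known resource paths such as /redfish/v1/Chassis/<id>/Thermal. As a minimal sketch, the snippet below parses a simplified, illustrative Thermal payload (the sensor names and values are invented for the example, not taken from any real system):

```python
import json

# Illustrative payload shaped like a Redfish Thermal resource; a real
# service would return something similar from
# GET /redfish/v1/Chassis/<id>/Thermal.
SAMPLE = json.loads("""
{
  "Temperatures": [
    {"Name": "CPU1 Temp", "ReadingCelsius": 54},
    {"Name": "CPU2 Temp", "ReadingCelsius": 51},
    {"Name": "Inlet Temp", "ReadingCelsius": 23}
  ]
}
""")

def max_temperature(thermal):
    """Return (sensor name, reading) for the hottest sensor in a
    Redfish-style Thermal resource dict."""
    hottest = max(thermal["Temperatures"], key=lambda t: t["ReadingCelsius"])
    return hottest["Name"], hottest["ReadingCelsius"]

print(max_temperature(SAMPLE))  # ('CPU1 Temp', 54)
```

    A real collector would fetch the JSON over an authenticated HTTPS session and feed readings like these into a monitoring pipeline.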

    6 - Continued reduction of stranded capacity. From the very earliest beginnings of server virtualization, IT organizations have been trying to improve the utilization rates of their infrastructure, which are sometimes shockingly low. This effort is ongoing, and vendors and administrators are always looking to the latest technology to assist. The previously discussed trend of telemetry expansion is allowing more thorough and accurate monitoring of systems, resulting in an improved ability to increase utilization rates. In addition, advances in rack scale management and hardware resource management, like Intel RSD with MAAS, are enabling resource pooling that, through orchestration, can balance capacity and optimize resource utilization across a rack or even across the larger infrastructure by ensuring resource ratios of compute, storage, and networking are optimized for workload requirements.

    7 - Cost containment of company cloud models. As rapidly growing companies move to the cloud model (private, public, hybrid), they frequently fall victim to "cloud sprawl": siloed groups independently engaging public cloud instances instead of using the company's existing private cloud resources. This has been due partly to the pressure for rapid roll out of new services. Organizations want to be competitive with the rate of innovation and turn to "outsourcing" for speed and ease of implementation. But this results in large, and often hidden, inefficiencies: low utilization rates on the private cloud and numerous, difficult to track public cloud expenses. Data centers are now looking for infrastructure that supports offering their users easier, rapid development of new business services on their private cloud.

    8 - Blockchain Infrastructures and Distributed Trust Systems. As the Internet of Things (IoT) becomes even more pervasive, the need for a decentralized form of security becomes ever greater; centralized schemes will not be able to handle the scale or the inherent latencies of the geographic distribution that is likely to proliferate with the trend toward edge computing. Blockchain, with its peer-to-peer distributed ledger technology, can provide a secure, decentralized framework for the highly distributed world of IoT, just as it has for cryptocurrencies. It will be able to provide data integrity, faster transaction speeds, and no single points of failure. We foresee Blockchain technology being incorporated into a wide range of offerings from vendors across the industry, with items such as SmartNIC data encryption for data in flight and processor data encryption for local data.

    9 - Continuing trade-offs: Open vs. "Shrink-wrapped" vs. Cost. Large scale data centers will continue to struggle with achieving their inter-related (and frequently conflicting) goals. They will continue to negotiate a three-sided triangle of being open (to provide the possibility of greater speed of competitive innovation through open source and DIY development), being quick to deploy services (through pre-configured, "shrink-wrapped" vendor offerings), and being cost conscious (balancing how the other two factors impact CAPEX and OPEX). For example, some data centers are choosing to implement fully configured "Ready Bundles" that allow them to deploy hardware with extreme speed (for a particular service/configuration) but lack the flexibility to adapt to changes in the environment (e.g., an updated application).

    10 - Simplifying operational complexity. Simplifying an increasingly complex and ever larger computing environment is a thread that runs through almost all of these trends. At the hardware level there is a push to reduce the number of different SKUs that compose an infrastructure, because using a consistent set of foundational building blocks simplifies procurement, maintenance, and management. Part of the rationale behind software defined data centers is that they allow the underlying hardware to be a common platform, not a patchwork of software-specific machines. As data centers migrate to software defined implementations, they want hardware that complements the new model. Getting to a single architecture also simplifies the management challenges of scale out infrastructures and opens the door for rack scale management. The Dell EMC DSS 9000 is an example of a common platform that addresses this trend: it provides a rack scale foundation for use with RSD, along with hooks for Gen-Z, the next generation memory-centric fabric that enables true system composability.

    Conclusion. The Extreme Scale Infrastructure group is excited to be helping Dell EMC customers answer the challenges these trends present to large scale computing. We look forward in 2018 to continuing to work with them to push the bounds of what is possible, and to assist them in achieving their goals.

  • vWorkspace - Blog

    What's new for vWorkspace - October / November / December 2017

    Now updated quarterly, this publication provides you with new and recently revised information and is organized in the following categories: Documentation, Notifications, Patches, Product Life Cycle, Release, Knowledge Base Articles.

    Subscribe to the RSS (Use IE only)


    Knowledgebase Articles



    233590 - Is it possible to reset the password of QVwUser1 in Application Pool 'QuestWebAccess'?

    The QVwUser1 account is used in Application Pools | QuestWebAccess | Advanced Settings | Identity . Can the password be reset?

    Created: October 12, 2017


    233661 - Mandatory Hotfix 654624 for 8.6 MR3 Mac Connector

    This cumulative mandatory hotfix addresses the following issues: Microphone redirection improvements UI fixes DI mode fixes BYOD fixes The...

    Created: October 13, 2017


    234060 - Unable to launch the Management Console and Connection Broker are down

    The situation is as follows: unable to start the vWorkspace Management Console; it fails with the following error: "The vWorkspace system...

    Created: October 26, 2017


    234432 - Mandatory Hotfix 654693 for 8.6 MR3 Linux Connector

    This mandatory hotfix addresses the following issues: Fixes and improvements of the Multiple Monitors mode Fixes and improvements of the...

    Created: November 9, 2017


    236133 - Mandatory Hotfix 654816 for 8.6 MR3 Connection Broker

    This is a Mandatory hotfix and can be installed on the following vWorkspace roles: Connection Broker Please see the full Release Notes...

    Created: December 22, 2017


    236134 - Mandatory Hotfix 654817 for 8.6 MR3 Management Console

    This is a Mandatory hotfix and can be installed on the following vWorkspace roles: Management Console Please see the full Release Notes...

    Created: December 22, 2017


    236138 - Mandatory Hotfix 654818 for 8.6 MR3 Web Access

    This is a Mandatory hotfix and can be installed on the following vWorkspace roles: Web Access Please see the full Release Notes attached...

    Created: December 22, 2017


    236140 - Mandatory Hotfix 654819 for 8.6 MR3 Windows connector

    This is a Mandatory hotfix and can be installed on the following vWorkspace roles: Windows Connector Please see the full Release Notes...

    Created: December 22, 2017 






     147847 - After updating PNTools user is presented with a blank screen on login.

    After updating PNTools user is presented with a blank screen on login. 

    Revised: October 2, 2017




    69164 - Possible communication errors when connecting from a Thin Client.

    When connecting from a Thin Client that has Symantec Endpoint Protection installed it is possible that the AppPortal will display one of the...

    Revised: October 13, 2017


    63553 - AppPortal/Web client cannot communicate with farm over SSL on Thin Client machines

    vWorkspace client software cannot connect from Thin Client machines to the farm through SSL. Communication from normal client OS (XP/Vista/7) is...

    Revised: October 13, 2017


    232952 - Mandatory Hotfix 654338 for 8.6 MR3 Management Console

    This is a Mandatory hotfix and can be installed on the following vWorkspace roles: Management Console   Please see the full Release Notes...

    Revised: October 13, 2017


    233590 - Is it possible to reset the password of QVwUser1 in Application Pool 'QuestWebAccess'?

    The QVwUser1 account is used in Application Pools | QuestWebAccess | Advanced Settings | Identity. Can the password be reset?

    Revised: October 27, 2017


    234044 - Mandatory Hotfix 654658 for 8.6 MR3 Android Connector

    This mandatory hotfix addresses the following issues: Support of Android 8.0 Oreo Support of the split-screen mode UI fixes Overall...

    Revised: October 26, 2017


    77059 - Naming Conventions for new VDIs not working

    Revised: November 2, 2017


    207588 - What are requirements for the instant provisioning?

    After performing VDI deployment using instant provisioning method, the provisioned VDI did not change its name and seems like it's not joined domain.

    Revised: November 2, 2017


    77233 - Operating Systems supported by Linux Connector

    Which Linux Operating Systems are supported for use with the Linux Connector?

    Revised: November 13, 2017


    96796 - In larger vWorkspace environments, users may experience timeouts when logging in when connection

    Revised: November 16, 2017


    56803 - HowTo: Enable Connection Broker Logging

    Steps to enable Connection Broker logging in vWorkspace

    Revised: November 16, 2017


    84706 - Add Connection Broker Servers

    Revised: November 16, 2017


    105373 - Video: How to Enable Diagnostics in vWorkspace 8.0

    How to use the diagnostic tool in vWorkspace 8.x.

    Revised: November 28, 2017


    108104 - vWorkspace mapped printers are disappearing from view and the users are no longer able to print

    vWorkspace mapped printers are disappearing from view and the users are no longer able to print. When the users launch their vWorkspace session...

    Revised: November 29, 2017


    58869 - Error "Couldn't send message to remote computer -failed to respond (10060)"

    When trying to deploy MSI packages to VDIs, the following error occurs: "Couldn't send message to remote computer - failed to respond (10060)"...

    Revised: December 4, 2017


    67742 -  Changes to the RDP connectivity when using vWorkspace WAN Acceleration (EOP Xtream)

    vWorkspace WAN Acceleration (EOP Xtream) can be enabled in one of two modes or disabled altogether; changing the state of this feature makes changes...

    Revised: December 27, 2017


    Product Life Cycle - vWorkspace