Dell Community

Latest Blog Posts
  • vWorkspace - Blog

    Guide to Optimising VDI and RDSH templates on VMware with vWorkspace

    Using SESparse disks with vWorkspace to avoid misaligned data

    Andrew Wood, Dell Software Group

     

    Introduction:

     

    It is well documented that the older-style vmfsSparse disk format can cause misaligned data. This is due to its grain size of 512 bytes.

    VMware vSphere 5.1 introduces a new disk format to combat this issue. SESparse disks can be used with linked clones so that you do not get misaligned data.

    "... benefits from the new 4KB grain size, which improves performance by addressing alignment issues experienced in some storage arrays with the 512-byte grain size used in linked clones based on the vmfsSparse (redo log) format. The SE sparse disk format also provides far better space efficiency to desktops deployed on this virtual disk format, especially with its ability to reclaim stranded space."

    This means that less I/O should be required, and the result should be a faster experience for all.

    Integration with vWorkspace:

    Although VMware claims this feature exists only in View, vWorkspace 8.x supports vSphere 5.1, and when creating a linked clone, vWorkspace always uses the same disk format as the parent VM.

    This means that vWorkspace can also use the new format.

    However, vSphere 5.1 hides the disk format, so it is not selectable within the GUI.

    This blog will show you the steps required to create a new parent VM with the new disk format.

     

    Creating a SESparse Parent VM:

     

    1. Log into your ESXi Shell

     

    2. Navigate the shell to your datastore, e.g.:

    cd /vmfs/volumes/yourdatastore

    3. Create a new folder to store your new VM disk, e.g.:

    mkdir xpsparse

    cd xpsparse

     

     

    4. Create a new VMDK with the new SE Sparse disk format by running the following command (20GB disk in this example):

    vmkfstools -c 20g -d sesparse WindowsXP.vmdk

    5. In order to use the new style disk, you need to use the vSphere Web Client. The old thick vSphere Client does not understand the disk format.

     

    6. Create a new machine and set the compatibility to ESXi 5.1 and later

     

    7. On the Customize hardware screen, remove the “New Hard Disk” - we won't be using it.

     

    8. Add the Existing Hard Disk that we created in step 4

     

    9. Do not worry that the disk size says 1 MB; this is only a display issue in VMware

     

     

    10. Create your template as normal and then take a snapshot

     

    11. Now, switch to the vWorkspace Management Console

     

    12. In the Add Computer wizard, import the new parent VM

     

    13. Select the new snapshot and complete the Add Computer wizard as normal

     

    14. Once the clone has completed, you can check that the linked clone's VMDK is in the new SESparse format. Use your favourite text viewer.
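
    For example, here is a minimal sketch that inspects a clone's descriptor file (the path below is hypothetical, and the assumption is that an SE sparse descriptor reports a createType of "seSparse"):

    # Sketch: check a VMDK descriptor for the SE sparse format.
    # The descriptor path is hypothetical -- substitute your clone's file.
    descriptor = "/vmfs/volumes/yourdatastore/xpsparse/WindowsXP-clone.vmdk"

    with open(descriptor, "r", errors="ignore") as f:
        for line in f:
            if line.strip().startswith("createType"):
                print(line.strip())  # expected: createType="seSparse"
                break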

     

     

    Andrew Wood

  • Foglight for Virtualization and Storage Management

    Foglight for Virtualization Enterprise Edition, Event Analytics

    Sudden changes in system behavior can often be traced back to “Change Events” that triggered them. Looking at Change Events in correlation with key performance metrics is a powerful troubleshooting technique that helps us identify the infrastructure changes behind a change in system behavior.

     

    Foglight for Virtualization, Enterprise Edition has event-tracking functionality that may not be known to all users. The “Event Analytics” tab for Virtual Machines and ESX hosts can overlay historical alarms and infrastructure changes on top of selected performance metric trends related to CPU, memory, network and disk.

     

    Let’s follow a use case to better explore the value of the Event Analytics function when troubleshooting performance problems. An application user has notified us that their application is slow. We start the investigation by looking at the VM in question and the ESX host for that VM. By selecting the Event Analytics tab for the ESX host and enabling the Infrastructure and Alarms overlays on the selected metrics, we can check whether any alarms fired or any infrastructure changes in the environment may be the root cause of the performance problem. As you can see in the screenshot below, we can quickly correlate Change Events with historical metric behavior.

    In this case, by looking at the graph we can quickly observe several infrastructure changes and alarms around 12:00 PM. In the diagram above, you can see that the “Active Memory” metric shows a sudden change of behavior, and several alarms fire to notify the user of the change in the metric’s behavior.

     

    We can investigate further by selecting the “Active Memory” metric and clicking on a star to see the root cause of the alarm. The following screenshot shows that the reason for the alarm is memory utilization being outside of its normal operational range. As you may know, IntelliProfile analytics in Foglight provide the baselining capability used to identify normal behavior. When the “Active Memory” metric breaks out of its normal pattern, an alarm is generated to notify the user of the metric’s behavioral change.

     

    To continue the investigation, in the following screenshot you can see the impact of the change on disk metrics. The latency associated with “Write Rate” is clearly indicative of a performance problem.

     

    And in the next screenshot we can look at the impact of this infrastructure change on network metrics, which clearly show an abnormal Network Transfer Rate.

     

     

    By looking at the infrastructure changes, we can quickly recognize that the ESX host’s performance change is related to the creation of a new virtual machine on that host. The new VM has changed the dynamics of resource utilization on the ESX host, which has resulted in performance degradation for the neighboring VMs. Now that the root cause of the problem is identified, we can move forward with one of several ways to correct it, such as moving the new VM to another ESX host.

  • Dell TechCenter

    Citrix's latest desktop virtualization software: XenDesktop 7.5

    Citrix has just announced the release of XenDesktop 7.5, the latest version of their desktop virtualization software. The release includes several new features that we’re excited about, which will help address a new set of needs and use cases while supporting the continued growth in the use of XenDesktop.

    For instance, one of the key features in the announcement is support for hybrid cloud deployments, which enables XenDesktop users to add temporary or new capacity rapidly without incurring capital expenditures. Customers can provision virtual desktops and applications to a private or public cloud infrastructure in conjunction with their existing virtual infrastructure deployments while using the same XenDesktop management consoles and skillsets. This saves overall cost while greatly increasing flexibility and scalability.

    A second enhancement is that XenDesktop is now using the FlexCast Management Architecture, sharing this with their application virtualization product XenApp®. This innovation makes it a unified architecture for application and desktop delivery, which increases IT flexibility while saving cost.

    Dell continues to lead the industry in time-to-market with support for new Citrix releases. We were recognized with a “Best of Synergy” award at Citrix Synergy 2013 for our ‘Integrated Storage’ configuration, which provides a very cost-effective infrastructure (at under $200/seat) to support XenDesktop deployments. Late last year, Dell introduced VRTX, an office environment-friendly converged infrastructure that brings the benefits of XenDesktop virtualization to smaller and mid-sized customers. And finally, Dell continues to work closely with Citrix and NVIDIA on pioneering solutions that enhance graphics performance, from high-end 3D to the basic user. Read more about that here.

    Dell has long been a leader in the provisioning of thin/zero clients, especially with Dell Wyse zero clients, which are optimized and Citrix Ready™ verified for Citrix XenDesktop. Supported by tested reference architectures, we package, sell and support the complete end-to-end solution, including servers, storage, networking, applications, services and your choice of endpoints for better-together operation.

    For more detailed information, please visit the Citrix announcement page here.

  • Dell TechCenter

    Google+ Hangout: Deployment Automation on OpenStack with TOSCA and Cloudify

    Please register for our OpenStack Online Meetup on February 6th, 9:00-10:00 am PST (UTC-8).

    TOSCA (Topology and Orchestration Specification for Cloud Applications) is an emerging standard for modeling complete application stacks and automating their deployment and management. It’s been discussed in the context of OpenStack for quite some time, mostly around Heat. In this session we’ll discuss what TOSCA is all about, why it makes sense in the context of OpenStack, and how we can take it farther up the stack to handle complete applications, both during and after deployment, on top of OpenStack.

    Presenters:

    Nati Shalom, Founder and CTO at GigaSpaces, is a thought leader in cloud computing and big data technologies. Shalom was recently recognized as a Top Cloud Computing Blogger for CIOs by The CIO Magazine, and his blog is listed as an excellent blog by Y Combinator. Shalom is the founder and one of the leaders of the OpenStack Israel group, and is a frequent presenter at industry conferences.

    Uri Cohen is Head of Product at GigaSpaces.

    This session will be hosted online via Google+ Hangout and IRC Chat (#OpenStack-Community @Freenode).

  • Dell TechCenter

    How to remove certificate warning message shown while launching iDRAC Virtual Console

    This blog post has been written by Shine KA and Hrushi Keshava HS

            The latest iDRAC7 firmware (1.50.50 onwards) has a provision to store iDRAC SSL certificates when launching iDRAC Virtual Console and Virtual Media. Instead of saving iDRAC SSL certificates to the IE / Java trust store, you have the option to save the certificates you trust in a folder of your choice. This allows you to save several iDRAC certificates to the certificate store. Once you trust and save a certificate, you will no longer get a certificate warning message when launching iDRAC Virtual Console / Virtual Media. This behavior is available in both the ActiveX and Java plugins, and there is no need to save the iDRAC SSL certificate to the IE / Java trust store.

            When launching Virtual Console / Virtual Media, the iDRAC SSL certificate must first be validated. If the certificate is not trusted, a check is made for a certificate store configured for iDRAC. If no certificate store is configured, Virtual Console / Virtual Media launches with a certificate warning window. If a certificate store is configured, the current iDRAC SSL certificate is compared with the certificates in the store. If a match is found, Virtual Console launches without any certificate warning. If no certificate matches, Virtual Console / Virtual Media shows a certificate warning with an option to always trust the certificate. When you choose to always trust it, the current iDRAC SSL certificate is stored in the configured certificate store, and you will not see the certificate warning the next time you launch Virtual Console / Virtual Media.
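
    The decision flow described above can be sketched roughly as follows (a hypothetical illustration only; the function name and store layout are assumptions, not actual iDRAC code):

    # Hypothetical sketch of the certificate-check flow described above.
    # The helper name and store layout are illustrative, not actual iDRAC code.
    import os
    from typing import Optional

    def check_certificate(idrac_cert: bytes, cert_store: Optional[str]) -> str:
        if cert_store is None:
            # No certificate store configured: launch with a warning window.
            return "launch with certificate warning"
        stored = []
        if os.path.isdir(cert_store):
            for name in os.listdir(cert_store):          # previously trusted certs
                with open(os.path.join(cert_store, name), "rb") as f:
                    stored.append(f.read())
        if idrac_cert in stored:
            return "launch without warning"              # match found
        # No match: warn, with an "always trust" option that saves the
        # certificate to the store so the next launch is warning-free.
        return "launch with warning and 'always trust' option"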

    There are three easy steps to configure certificate store.

    STEP 1: Launching Virtual Console for first time

            When you launch Virtual Console for the first time and the iDRAC SSL certificate is not trusted, a certificate warning window is shown. This window describes the certificate information and the steps to configure a certificate store.

    Note: Details gives you more information about the certificate, such as version, serial number and validity.

     

    STEP 2: Configuring Certificate Store on iDRAC

           After launching Virtual Console, navigate to Tools -> Session Options -> Certificate. Here you can choose the path where trusted certificates should be stored.

    Note: Details gives you more information about the current session’s certificate, such as version, serial number and validity.

     

    STEP 3: Launching Virtual Console with certificate store configured

           After specifying the certificate store location, the next time you launch Virtual Console a security warning window will pop up with the option to “Always trust this certificate”. You can save the certificate to the store by enabling this checkbox. Once the certificate is trusted, you will no longer see the certificate warning message.

    SUMMARY:

                By following these simple steps, you can quickly remove the certificate warning page and have peace of mind that you are accessing your iDRAC securely.

    Additional Information:

    More information on iDRAC

  • vWorkspace - Blog

    How to Shadow from the vWorkspace Management Console on 2012R2 Server (Remote Assistance)

    Hello vWorkspacers!

     

    As some of you will know, session shadowing is back in 2012R2!

    This will show you how to make it work again through our Console.

     

    VWShadow.exe is what our console uses, and it does not work on a 2012R2 box.

     

    The old Shadow.exe doesn't exist in 2012R2. It has been replaced with the command mstsc.exe /shadow:sessionid /v:servername

     

    When you try to do Remote Assistance, our console simply runs

    "c:\windows\syswow64\vwshadow.exe servername sessionid"

     

    What I've done for now is create a mini-script in AutoIt that I've compiled to an .exe.

    It needs to be an .exe so that it can replace vwshadow.exe.

    The script is really simple. All it does is launch mstsc with the arguments in the correct order for shadowing.
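
    As a rough sketch of the logic (illustrative only; the real replacement is a compiled AutoIt exe, and this example shows the /control variation):

    # Sketch of the wrapper's job: vwshadow.exe is invoked as
    #   vwshadow.exe servername sessionid
    # and must relaunch mstsc with the arguments reordered for shadowing.
    import subprocess
    import sys

    server, session = sys.argv[1], sys.argv[2]
    subprocess.run(["mstsc.exe", "/shadow:" + session, "/v:" + server, "/control"])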

    The attached Zip file contains 3 variations.

     

    1. vwshadow -control.exe

    This launches:

    mstsc.exe /shadow:sessionid /v:servername /control

    This means you'll have control of the user's session, but it does require consent.

    2. vwshadow -controlnoconsent.exe

    This launches:

    mstsc.exe /shadow:sessionid /v:servername /control /noConsentPrompt

    This means you'll have control of the user's session; it tries to connect without consent.

    *WARNING* This method may fail with an error if you have not set the GPO that allows admins to do this.

    3. vwshadow -viewonly

    This launches:

    mstsc.exe /shadow:sessionid /v:servername

    This is view only.

    So, if you want to shadow using our Console on a 2012R2 server - you can! Just rename one of the exes in my zip file to vwshadow.exe and replace the existing c:\windows\syswow64\vwshadow.exe

    Natively, 2012R2 cannot shadow Windows 7 boxes. Be aware that certain combinations of OS / RDP versions still won't work.

    If MS updates Remote Assistance for 2012R2 so that it can shadow Windows 7 boxes, this will start working from our Console too.

    Enjoy, Andrew.

  • vWorkspace - Blog

    Optional Hotfix 336711 for 7.7 vWorkspace Virtual Desktop Extensions for Linux

    This is an optional hotfix for vWorkspace Virtual Desktop Extensions for Linux v7.7

     

    This release extends support for the following distros:

     

    • CentOS 5.9 and 6.4
    • RHEL 5.9 and 6.4
    • Ubuntu 12.04 and 13.04

     

    Download the hotfix here - https://support.software.dell.com/vworkspace/kb/119704

  • Dell TechCenter

    Compression and Dedupe Strategies for Online, Archival and Backup Tiers

    In my last post, I discussed the differences among various dedupe and compression techniques. In this post, I'd like to talk about how a product should apply the correct technique in each data tier. People have different opinions on this, so here is my take on the correct technologies at each tier:

     Online storage:

    • The nature of this storage is fast access. Data is used all the time and the emphasis is on speed, not really space savings. The data path directly impacts application performance, so don’t go too crazy on saving space.
    • Best techniques: Inline dictionary-based compression coupled with fixed-block dedupe. Fixed-block dedupe requires less CPU processing (the block boundaries are predetermined and do not have to be computed in real time), and dictionary-based compression techniques are little more than memory lookups, so almost no CPU time is used. These two are fast and save space for many applications, such as virtual machine workloads (virtual desktops), sparse databases, blogs and so on (see the sketch after this list).
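
    Here is a minimal sketch of the fixed-block idea (an illustration only, not any product's implementation): the stream is split at fixed offsets, each block is hashed, and only previously unseen blocks are stored.

    # Minimal fixed-block dedupe sketch: split at fixed offsets, hash each
    # block, and keep only blocks that have not been seen before.
    import hashlib

    BLOCK_SIZE = 4096  # boundaries are predetermined: no real-time computation

    def dedupe_fixed(data: bytes):
        store = {}   # hash -> unique block contents
        recipe = []  # ordered hashes needed to rebuild the original data
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # duplicates cost nothing extra
            recipe.append(digest)
        return store, recipe

    store, recipe = dedupe_fixed(b"A" * 8192 + b"B" * 4096 + b"A" * 4096)
    print(len(store), "unique blocks for", len(recipe), "logical blocks")  # 2 for 4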

    Archival storage:

    • Typically the data here is largely unstructured (just files). Data is seldom accessed and some latency can be tolerated, since it is usually a human user accessing the data.
    • Best techniques: My vote here is to go with variable dedupe using very large chunk sizes (in a future blog post I’ll explain the implications of chunk sizes in more detail).
      More importantly, I would go with file-specific compression techniques that understand the type of file being handled and apply a file-specific algorithm. Additionally, I find that post-process techniques work best here, since the dedupe and compression stages are more CPU intensive and you do not want to impact ingest speeds.

    Backup storage:

    • The most important thing to keep in mind about backup targets (such as our DR4100 product) is that you want to minimize backup windows (how fast you can protect your data). To do this, you need the backup target to be as fast as the source can deliver the data… so we are talking the fastest ingest speeds possible, typically on the order of terabytes per hour.
    • Best techniques: To achieve these speeds, I find what works best is a small holding tank where incoming data accumulates into a meaningful portion (usually just a few gigabytes), so that you can apply variable-sized dedupe with very quick compression before storing the data to disk. A sketch of variable-sized chunking follows below.
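
    For contrast with the fixed-block sketch above, here is a minimal illustration of variable-sized (content-defined) chunking, where a rolling value computed from the data itself picks the cut points. All parameters are arbitrary assumptions for illustration, not how any particular product works.

    # Minimal content-defined chunking sketch: a rolling sum over a small
    # window picks the cut points, so boundaries depend on content, not offsets.
    import os

    WINDOW = 16     # rolling window size in bytes
    MASK = 0x3F     # cut when (rolling & MASK) == MASK, ~1 in 64 positions
    MIN_CHUNK = 32  # avoid degenerate tiny chunks

    def chunk_variable(data: bytes):
        chunks, start, rolling = [], 0, 0
        for i, byte in enumerate(data):
            rolling += byte
            if i - start >= WINDOW:
                rolling -= data[i - WINDOW]  # slide the window forward
            if i - start + 1 >= MIN_CHUNK and (rolling & MASK) == MASK:
                chunks.append(data[start:i + 1])  # content-defined boundary
                start, rolling = i + 1, 0
        if start < len(data):
            chunks.append(data[start:])  # trailing chunk
        return chunks

    print([len(c) for c in chunk_variable(os.urandom(1024))])  # varied sizes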

     

    What do you guys think about the variations in approaches?  Have you thought about what happens when data moves between tiers?  I have some thoughts on that but I’d love to hear your input.

  • KACE Blog

    Make large scale imaging and Windows migration a snap with K2000’s more powerful task engine and multicasting

    Systems imaging, especially on a large scale, is complicated and time consuming.  You have to worry about a multitude of factors – multiple images with complex configurations, ever growing application portfolios, remote sites with minimal IT support, and an increasingly heterogeneous environment with multiple hardware and operating system platforms.  The increasing acceptance of BYOD makes the whole process even more complicated.  And, you have to keep your end users satisfied – they want to have all their data and settings on their new machines or new OS image with minimal disruption to their work day.

    All this adds to the day to day stress of an IT administrator’s job.  The stress is even higher for administrators in organizations that have not completed their migrations from Windows XP to Windows 7 or 8 before the impending end of XP support by Microsoft on April 8th. 

    You can reduce the stress for you and your end users with the latest features added to the Dell KACE K2000 Systems Deployment Appliance. The K2000 makes it easier for IT organizations to meet their systems deployment and OS imaging needs. Architected as an appliance, either physical or virtual, the K2000 is easy to deploy, up and running in less than a week for the majority of customers, and easy to operate. The latest release, K2000 v3.6, released on January 15, 2014, is focused on making large-scale deployments faster, more efficient and more reliable, and introduces two major new capabilities.

    First is multicast deployment.  Multicast enables K2000 to send the same image data bits to multiple systems (typically 20-25) simultaneously.  The data is sent only once through the network pipe, which greatly speeds up large scale deployments while reducing bandwidth consumption. 

    Multicasting and the task engine are tightly integrated to enable true "lights off" deployment

    The second major capability added to the K2000 is a new, powerful task engine. The task engine is tightly integrated with multicast deployment and provides real-time, two-way communications between the K2000 appliance and the devices being deployed or imaged. The result is real-time feedback on each deployment task for each device, as well as much better handling of deployment tasks such as multiple reboots. The task engine also provides superior task automation for scheduling pre- and post-deployment tasks, such as disk decryption prior to installation of a new OS and deployment of applications after the image is installed. Finally, the task engine provides centralized logging of all deployment tasks from the K2000 web console for easier and more effective troubleshooting. Combined, these new capabilities of the K2000 can help you perform true “lights off” deployment: schedule an entire computer lab to be reimaged overnight, and come in the next morning with the job done. No impact to end users and no loss of sleep for you.

    With the addition of the new task engine and multicasting, the K2000 is faster, more reliable, uses less bandwidth, and helps you troubleshoot more efficiently. These added capabilities make your job easier and move the K2000 to the front of the line when it comes to systems deployment and imaging solutions.

    Click here to see a joint IDC and Dell KACE webinar on large scale imaging.

    Click here  to find out more about the Dell KACE K2000 Deployment Appliance.   

  • Dell TechCenter

    IT Pros, do you want to be a Dell TechCenter Rockstar? Apply by Feb. 20th!

    The Dell TechCenter Rockstars are an elite group of participants who are recognized for their participation in Dell related conversations on Dell TechCenter, other tech communities and through social media.

    Each year we select the Rockstar group through an application process and need your help identifying these unique individuals. If you or someone you know should be a TechCenter Rockstar, fill out our short application before February 20th, 2014 and return it to enterprise_techcenter@dell.com to be considered for our 2014 Dell TechCenter Rockstar program!

    The benefits of the Dell TechCenter Rockstar Program

    • Official recognition on Dell TechCenter
    • Unique opportunities / experiences with Dell
    • Dell TechCenter gear
    • Invitations to applicable Dell events or conferences
    • Access to NDA briefings, early information on products
    • Increased engagement with Dell TechCenter team and Dell Engineers
    • Networking opportunities with fellow DTC Rockstars

    What is a Dell TechCenter Rockstar?

    • Rockstars participate in Dell programs such as:
      • Dell beta programs
      • TechCenter chats
      • Guest blog posts on DTC
      • Wiki articles
      • Answering forum posts on Dell TechCenter
      • Dell product feedback or usability sessions
      • Advocating for Dell online in 3rd-party IT communities, blogs, social media, etc.
    • Rockstars conduct business in an ethical way, living up to Dell's code of conduct

    2014 Dell TechCenter Rockstar application

     How cool was the 2013 DTC Rockstar Program?

    In addition to a private forum and direct connections with Dell and the Dell TechCenter team, DTC Rockstars had access to special early briefings on the VRTX launches, hands-on time with Precision workstations, opportunities to present at Dell World and TechCenter user groups, additional exposure for their own blog posts on TechCenter, and more. The year culminated in many of the Dell TechCenter Rockstars flying to Austin to participate in the Dell World 2013 conference and a Samsung-sponsored go-karting excursion.

    Remember, we need applications turned in by February 20th, 2014, so don’t delay. We look forward to hearing from you, and best of luck!