Using SESparse disks with vWorkspace to avoid misaligned data
Andrew Wood Dell Software Group
It is well documented that the older-style vmfsSparse disk format can cause misaligned data. This is due to its grain size of 512 bytes.
VMware vSphere 5.1 introduces a new disk format to combat this issue. SESparse disks can be used with linked clones so that you do not get misaligned data.
"... benefits from the new 4KB grain size, which improves performance by addressing alignment issues experienced in some storage arrays with the 512-byte grain size used in linked clones based on the vmfsSparse (redo log) format. The SE sparse disk format also provides far better space efficiency to desktops deployed on this virtual disk format, especially with its ability to reclaim stranded space."
This means that less I/O should be required, and the result should be a faster experience for all.
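To see why grain size matters, consider how many 4 KB array blocks a guest write touches when grains start at 512-byte offsets versus 4 KB offsets. This is a minimal illustrative sketch, not VMware code:

```python
def backend_blocks_touched(offset, length, block=4096):
    """Count the 4 KB storage-array blocks covered by a write at `offset`."""
    first = offset // block
    last = (offset + length - 1) // block
    return last - first + 1

# With 512-byte grains, a 4 KB guest write can land at any 512-byte offset,
# so it often straddles two array blocks (a read-modify-write on the array):
print(backend_blocks_touched(offset=512, length=4096))   # 2
# With 4 KB grains, the same write lines up with a single array block:
print(backend_blocks_touched(offset=4096, length=4096))  # 1
```

Every misaligned write that straddles two blocks roughly doubles the backend work, which is where the extra I/O in the vmfsSparse format comes from.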
Integration with vWorkspace:
Despite VMware claiming this feature only exists in View, vWorkspace 8.x supports vSphere 5.1, and when creating a linked clone, vWorkspace will always use the same disk format as the parent VM.
This means that vWorkspace can also use the new format.
However, vSphere 5.1 hides the disk format so it is not selectable within the GUI.
This blog will show you the steps required to create a new Parent VM with the new disk format.
Creating a SESparse Parent VM:
1. Log into your ESXi Shell
2. Navigate to your datastore
3. Create a new folder to store your new VM disk
4. Create a new VMDK with the new SE Sparse disk format by running the following command (20GB disk in this example):
vmkfstools -c 20g -d sesparse WindowsXP.vmdk
5. In order to use the new style disk, you need to use the vSphere Web Client. The old thick vSphere Client does not understand the disk format.
6. Create a new machine and set the compatibility to ESX 5.1 and later
7. On the Customize hardware screen, remove the “New Hard Disk” - we won't be using it.
8. Add the Existing Hard Disk that we created in step 4
9. Do not worry that the disk size says 1 MB; this is only a display issue in VMware
10. Create your template as normal and then take a snapshot
11. Now, switch to the vWorkspace Management Console
12. In the Add Computer wizard, Import the new Parent VM
13. Select the new snapshot and complete the add computer wizard as normal
14. Once the clone has completed, you can check that the linked clone's VMDK uses the new SESparse format by opening its descriptor file in your favourite text viewer
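The disk format is recorded in the VMDK descriptor's createType line, so the check in the last step can even be scripted. A minimal sketch (the sample descriptor text below is illustrative; open your clone's actual .vmdk descriptor file instead):

```python
def vmdk_create_type(descriptor_text):
    """Return the createType value from a VMDK descriptor's text."""
    for line in descriptor_text.splitlines():
        if line.startswith("createType="):
            return line.split("=", 1)[1].strip().strip('"')
    return None

# Illustrative descriptor fragment; an SE sparse clone reports "SeSparse"
sample = 'version=1\ncreateType="SeSparse"\nparentFileNameHint="WindowsXP.vmdk"'
print(vmdk_create_type(sample))  # SeSparse
```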
Sudden changes in system behavior can often be traced back to “Change Events” that triggered them. Looking at Change Events in correlation with key performance metrics is a powerful troubleshooting technique that helps identify the infrastructure changes behind a change in system behavior.
Foglight for Virtualization, Enterprise Edition has an event-tracking capability that may not be known to all users. The “Event Analytics” tabs for Virtual Machines and ESX hosts can overlay historical alarms and infrastructure changes on top of selected performance metric trends related to CPU, memory, network and disk.
Let’s follow a use case to better explore the value of the Event Analytics function when troubleshooting performance problems. An application user has notified us that their application is slow. We can start the investigation by looking at the VM in question and the ESX host running that VM. By selecting the Event Analytics tab for the ESX host and enabling Infrastructure and Alarms to overlay the selected metrics, we can check whether any alarms fired or any infrastructure changes in the environment may be the root cause of the performance problem. As you can see in the screenshot below, we can quickly correlate Change Events with historical metric behavior.
In this case, by looking at the graph we can quickly observe several infrastructure changes and alarms around 12:00 PM. In the diagram above you can see that the “Active Memory” metric shows a sudden change in behavior, and several alarms are fired to notify the user of the change.
We can further investigate by selecting the “Active Memory” metric and clicking on a star to see the root cause of the alarm. The following screenshot shows that the alarm was raised because memory utilization moved outside of its normal operational range. As you may know, the IntelliProfile analytics in Foglight provide the baselining capabilities that identify normal behavior; when the “Active Memory” metric breaks out of its normal pattern, an alarm is generated to notify the user of the behavioral change.
To continue the investigation, in the following screenshot you can see the impact of the change on disk metrics. The latency associated with “Write Rate” is clearly indicative of a performance problem.
And in the next screenshot we can look at the impact of this infrastructure change on network metrics, which clearly shows an abnormal Network Transfer Rate.
By looking at the infrastructure changes, we can quickly recognize that the ESX host’s performance change is related to the creation of a new virtual machine on that host. The new VM has changed the dynamics of resource utilization on that ESX host, which has resulted in performance degradation in the neighboring VMs. At this point, now that the root cause of the problem is identified, we can move forward with any of several ways to correct it, such as moving the new VM to another ESX host.
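The core idea behind this technique, matching change events against the time a metric went abnormal, can be sketched independently of Foglight. The timestamps and event names below are made up for illustration:

```python
def events_near(change_events, anomaly_time, window_secs=900):
    """Return change events recorded within `window_secs` of a metric anomaly."""
    return [(t, e) for t, e in change_events if abs(t - anomaly_time) <= window_secs]

# Hypothetical event log (seconds since midnight) and an Active Memory
# anomaly detected at t=43200 (12:00 PM):
events = [(43000, "VM created on ESX host"), (10000, "vSwitch reconfigured")]
print(events_near(events, 43200))  # [(43000, 'VM created on ESX host')]
```

The events that survive the time filter are the candidate root causes, which is exactly what the Event Analytics overlay lets you spot visually.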
Citrix has just announced the release of XenDesktop 7.5, the latest version of their desktop virtualization software. The release includes several new features that we’re excited about, which will help address a new set of needs and use cases while supporting the continued growth in the use of XenDesktop.
For instance, one of the key features in the announcement is support for hybrid cloud deployments, which enables XenDesktop users to add temporary or new capacity rapidly without incurring capital expenditures. Customers can provision virtual desktops and applications to a private or public cloud infrastructure in conjunction with their existing virtual infrastructure deployments while using the same XenDesktop management consoles and skillsets. This saves overall cost while greatly increasing flexibility and scalability.
A second enhancement is that XenDesktop is now using the FlexCast Management Architecture, sharing this with their application virtualization product XenApp®. This innovation makes it a unified architecture for application and desktop delivery, which increases IT flexibility while saving cost.
Dell continues to lead the industry in time-to-market with support for new Citrix releases. We were recognized with a “Best of Synergy” award at Citrix Synergy 2013 for our ‘Integrated Storage’ configuration, which provides a very cost-effective infrastructure (at under $200/seat) to support XenDesktop deployments. Late last year, Dell introduced VRTX, an office-environment-friendly converged infrastructure that brings the benefits of XenDesktop virtualization to smaller and mid-sized customers. And finally, Dell continues to work closely with Citrix and NVIDIA on pioneering solutions that enhance graphics performance, from high-end 3D to the basic user. Read more about that here.
Dell has long been a leader in the provisioning of thin/zero clients, especially with Dell Wyse zero clients, which are optimized and Citrix Ready™ verified for Citrix XenDesktop. Supported by tested reference architectures, we package, sell and support the complete end-to-end solution, including servers, storage, networking, applications, services and your choice of endpoints, for better-together operation.
For more detailed information, please visit the Citrix announce page here.
Please register for our OpenStack Online Meetup on February 6th, 9:00 - 10:00 am PST (UTC-8).
TOSCA (Topology and Orchestration Specification for Cloud Applications) is an emerging standard for modeling complete application stacks and automating their deployment and management. It’s been discussed in the context of OpenStack for quite some time, mostly around Heat. In this session we’ll discuss what TOSCA is all about, why it makes sense in the context of OpenStack, and how we can take it farther up the stack to handle complete applications, both during and after deployment, on top of OpenStack.
Nati Shalom, Founder and CTO at GigaSpaces, is a thought leader in cloud computing and big-data technologies. Shalom was recently recognized as a Top Cloud Computing Blogger for CIOs by The CIO Magazine, and his blog is listed as an excellent blog by YCombinator. Shalom is the founder and one of the leaders of the OpenStack-Israel group, and is a frequent presenter at industry conferences.
Uri Cohen is Head of Product at GigaSpaces.
This session will be hosted online via Google+ Hangout and IRC Chat (#OpenStack-Community @Freenode).
This blog post has been written by Shine KA and Hrushi Keshava HS
The latest iDRAC7 firmware (1.50.50 onwards) now has a provision to store iDRAC SSL certificates when launching the iDRAC Virtual Console and Virtual Media. Instead of saving iDRAC SSL certificates to the IE/Java trust store, you have the option to save certificates you trust in a folder of your choice. This allows you to save several iDRAC certificates to the certificate store. Once you trust and save a certificate, you will no longer get a certificate warning message when launching the iDRAC Virtual Console / Virtual Media. This behavior is available in both the ActiveX and Java plugins, and there is no need to save the iDRAC SSL certificate in the browser’s or Java’s trust store.
When launching Virtual Console / Virtual Media, iDRAC must first validate the iDRAC SSL certificate. If the certificate is not trusted, iDRAC checks whether a certificate store has been configured. If no certificate store is configured, iDRAC launches Virtual Console / Virtual Media with a certificate warning window. If a certificate store is configured, iDRAC compares the current iDRAC SSL certificate with the certificates in the store. If a match is found, the Virtual Console is launched without any certificate warning. If there is no matching certificate, Virtual Console / Virtual Media shows a certificate warning with an option to always trust the certificate. When you always trust the certificate, the current iDRAC SSL certificate is stored in the configured certificate store, and you will not see the certificate warning the next time you launch the Virtual Console / Virtual Media.
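The validation flow described above boils down to a small decision tree. Here it is as an illustrative sketch (not iDRAC code; the return strings are just labels for the outcomes):

```python
def console_launch_action(cert_trusted, store_configured, cert_in_store):
    """Sketch of the Virtual Console / Virtual Media launch decision flow."""
    if cert_trusted:
        return "launch"
    if not store_configured:
        return "launch with certificate warning"
    if cert_in_store:
        return "launch"  # a stored certificate matches: no warning
    return "warn with 'always trust' option"

# With a configured store containing a matching certificate, no warning appears:
print(console_launch_action(False, True, True))  # launch
```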
There are three easy steps to configure the certificate store.
STEP 1: Launching Virtual Console for first time
When you launch Virtual Console for the first time and the iDRAC SSL certificate is not trusted, a certificate warning window is shown. This window shows the certificate information and the steps to configure the certificate store.
Note: Details gives you more information about the certificate, such as version, serial number, and validity.
STEP 2: Configuring Certificate Store on iDRAC
After launching Virtual Console, navigate to Tools -> Session Options -> Certificate. Here you can choose the path where trusted certificates should be stored.
Note: Details gives you more information about the current session’s certificate, such as version, serial number, and validity.
STEP 3: Launching Virtual Console with certificate store configured
After specifying the certificate store location, the next time you launch Virtual Console a security warning window will pop up with the option to “Always Trust this certificate”. You can save the certificate to the store by enabling this checkbox. Once the certificate is trusted, you will no longer see the certificate warning message.
By following these simple steps, you can quickly remove the certificate warning page and have peace of mind that you are accessing your iDRAC securely.
More information on iDRAC
As some of you will know, Session shadowing is back in 2012R2!
This will show you how to make it work again through our Console.
VWShadow.exe is what our console uses, and it does not work on a 2012R2 box.
The old Shadow.exe doesn't exist in 2012R2. It has been replaced with the command mstsc.exe /shadow:sessionid /v:servername
When you try to do Remote Assistance, our console simply runs
"c:\windows\syswow64\vwshadow.exe servername sessionid"
What I've done for now is create a mini-script in AutoIt that I've compiled to an .exe.
It needs to be an .exe so that it can replace vwshadow.exe.
The script is really simple. All it does is launch mstsc with the arguments in the correct order for shadowing.
The attached Zip file contains 3 variations.
1. vwshadow -control.exe
mstsc.exe /shadow:sessionid /v:servername /control
This means you'll have control of the Users session but it does require consent.
2. vwshadow -controlnoconsent.exe
mstsc.exe /shadow:sessionid /v:servername /control /noConsentPrompt
This means you'll have control of the Users session. It tries to connect without consent.
*WARNING* This method may fail with an error if you have not set the GPO that allows admins to do this.
3. vwshadow -viewonly
mstsc.exe /shadow:sessionid /v:servername
This is view only.
So, if you want to shadow using our Console on a 2012R2 server - you can! Just rename one of the exes in my zip file to vwshadow.exe and replace the existing c:\windows\syswow64\vwshadow.exe
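For reference, the wrapper's logic is trivial: it just reorders vwshadow's `servername sessionid` arguments into mstsc's syntax. Here is the same logic sketched in Python (illustrative only; the real replacement still needs to be a compiled .exe, as noted above):

```python
def build_shadow_command(servername, sessionid, control=False, no_consent=False):
    """Reorder vwshadow-style 'servername sessionid' arguments into mstsc syntax."""
    cmd = ["mstsc.exe", "/shadow:%s" % sessionid, "/v:%s" % servername]
    if control:
        cmd.append("/control")
    if no_consent:
        cmd.append("/noConsentPrompt")
    return cmd

# e.g. the console's call "vwshadow.exe myserver 3" becomes:
print(build_shadow_command("myserver", "3", control=True, no_consent=True))
# ['mstsc.exe', '/shadow:3', '/v:myserver', '/control', '/noConsentPrompt']
# A real wrapper would then hand this list to the OS to execute.
```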
Natively, 2012R2 cannot shadow Windows 7 boxes, and be aware that certain combinations of OS/RDP versions still won't work.
If MS updates Remote Assistance for 2012R2 so that it can shadow Windows 7 boxes, this will start working from our Console too.
This is an optional hotfix for vWorkspace Virtual Desktop Extensions for Linux v7.7
This release extends support to the following distros:
· CentOS 5.9 and 6.4
· RHEL 5.9 and 6.4
· Ubuntu 12.04 and 13.04
Download the hotfix here - https://support.software.dell.com/vworkspace/kb/119704
In my last post, I discussed the differences in various dedupe and compression techniques. In this post, I'd like to talk about how a product should apply the correct technique in each data tier. People have different opinions on this, so here is my take on the correct technologies at each tier:
What do you guys think about the variations in approaches? Have you thought about what happens when data moves between tiers? I have some thoughts on that but I’d love to hear your input.
Systems imaging, especially on a large scale, is complicated and time consuming. You have to worry about a multitude of factors – multiple images with complex configurations, ever growing application portfolios, remote sites with minimal IT support, and an increasingly heterogeneous environment with multiple hardware and operating system platforms. The increasing acceptance of BYOD makes the whole process even more complicated. And, you have to keep your end users satisfied – they want to have all their data and settings on their new machines or new OS image with minimal disruption to their work day.
All this adds to the day to day stress of an IT administrator’s job. The stress is even higher for administrators in organizations that have not completed their migrations from Windows XP to Windows 7 or 8 before the impending end of XP support by Microsoft on April 8th.
You can reduce the stress for you and your end users with the latest features added to the Dell KACE K2000 Systems Deployment Appliance. The K2000 makes it easier for IT organizations to meet their systems deployment and OS imaging needs. Architected as an appliance, either physical or virtual, the K2000 is easy to deploy, up and running in less than a week for the majority of customers, and easy to operate. The latest release, K2000 v3.6, released on January 15, 2014, is focused on making large-scale deployments faster, more efficient and more reliable, and introduces two major new capabilities.
First is multicast deployment. Multicast enables K2000 to send the same image data bits to multiple systems (typically 20-25) simultaneously. The data is sent only once through the network pipe, which greatly speeds up large scale deployments while reducing bandwidth consumption.
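The bandwidth math behind this is simple. As a back-of-the-envelope sketch (the image size and client count below are illustrative):

```python
def network_bytes(image_bytes, clients, multicast):
    """Bytes pushed through the network pipe for one deployment wave."""
    return image_bytes if multicast else image_bytes * clients

GB = 1024 ** 3
image = 20 * GB  # hypothetical 20 GB system image
print(network_bytes(image, 25, multicast=False) // GB)  # 500
print(network_bytes(image, 25, multicast=True) // GB)   # 20
```

For a 25-machine lab, the multicast wave moves one image's worth of data instead of twenty-five, which is where both the speed-up and the bandwidth savings come from.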
Multicasting and the task engine are tightly integrated to enable true "lights-off" deployment
The second major capability added to the K2000 is a new, powerful task engine. The task engine is tightly integrated with multicast deployment and provides real-time, two-way communication between the K2000 appliance and the devices being deployed or imaged. The result is real-time feedback on each deployment task for each device, as well as much better handling of deployment tasks such as multiple reboots. The task engine also provides superior task automation for scheduling pre- and post-deployment tasks, such as disk decryption prior to installation of a new OS and deployment of applications after the image is installed. Finally, the task engine provides centralized logging of all deployment tasks from the K2000 web console for easier and more effective troubleshooting. Combined, these new capabilities of the K2000 can help you perform true “lights-off” deployment: schedule an entire computer lab to be reimaged overnight, and come in the next morning with the job done. No impact to end users and no loss of sleep for you.
With the addition of the new task engine and multicasting, the K2000 is faster, more reliable, uses less bandwidth, and helps you troubleshoot more efficiently. These added capabilities make your job easier and move the K2000 to the front of the line when it comes to systems deployment and imaging solutions.
Click here to see a joint IDC and Dell KACE webinar on large scale imaging.
Click here to find out more about the Dell KACE K2000 Deployment Appliance.
The Dell TechCenter Rockstars are an elite group of participants who are recognized for their participation in Dell related conversations on Dell TechCenter, other tech communities and through social media.
Each year we select the Rockstar group through an application process, and we need your help identifying these unique individuals. If you or someone you know should be a TechCenter Rockstar, fill out our short application before February 20th, 2014 and return it to firstname.lastname@example.org to be considered for the 2014 Dell TechCenter Rockstar program!
The benefits of the Dell TechCenter Rockstar Program
What is a Dell TechCenter Rockstar?
2014 Dell TechCenter Rockstar application
How cool was the 2013 DTC Rockstar Program?
In addition to a private forum and direct connections with Dell and the Dell TechCenter team, DTC Rockstars had access to special early briefings on the VRTX launches, hands-on time with Precision workstations, opportunities to present at Dell World and TechCenter user groups, additional exposure for their own blog posts on TechCenter, and more. The year culminated in many of the Dell TechCenter Rockstars flying to Austin to participate in the Dell World 2013 conference and a Samsung-sponsored go-karting excursion.
Remember, we need applications turned in by February 20th, 2014, so don’t delay. We look forward to hearing from you. Best of luck!