Our community is talking about the new Dell Technologies. Join the discussion in the Dell EMC Community Network:
Kong Yang and Todd Muirhead talk Super Bowl Week and competition/coopetition. As always, we welcome your thoughts and feedback.
Please click below to view the video.
Learn firsthand about OpenStack, its challenges and opportunities, market adoption and CERN’s engagement in the community. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. In order to make these interviews easier to read, I structured them into my (subjectively) most important takeaways. Enjoy reading!
#1 Takeaway: CERN’s interest in OpenStack results from a growing need to do more with fewer resources
Rafael: Why did CERN decide on OpenStack, and which alternatives did you consider?
Tim: Our interest in the areas of virtualization and cloud has been ongoing for a number of years. But we got to an interesting point about 18 months ago: we reached an agreement to have a second, Hungary-based data center for CERN, roughly doubling our computing capacity. However, our IT staff numbers are fixed, so this basically means we need to look very carefully at the areas where we could be saving, both operationally and in computing resources, and make things more efficient. We ran a number of investigations into virtualization on Hyper-V with System Center Virtual Machine Manager (SCVMM), based on Microsoft technologies, and also around OpenNebula. Both of those gave us good test beds, but when we looked at the scale we would be facing over the next 18 months to two years, it was clear that on their own these wouldn’t allow us to provide an efficient solution.
At the same time the OpenStack project was announced and was starting to get to the point where we could test it. Around the OpenStack Diablo release time frame we started to have a look, and we were impressed with how much functionality was there. But we were also aware that a lot more needed to be done before we could do large scale deployments. From that point of view we have been gradually building up the OpenStack deployment over the past year, and now we are running around 2,000 guests on 500 hypervisors, with the aim of getting to about 15,000 hypervisors by 2015. We need to get to 90 percent of our computer center running in the virtualized environment.
Rafael: … which would probably be the biggest OpenStack deployment so far, is this correct?
Tim: My impression is that by 2015 we would not be the biggest. In fact, one of the things that we find encouraging around OpenStack is that people are talking about these sorts of numbers. Architecturally it’s a very scalable model, but it’s more than just the pure architecture. It’s the question of not having to do it on your own, and instead having a chance to share with others who have similar experiences.
#2 Takeaway: CERN is expecting limitations to scalability to significantly decrease with the OpenStack Grizzly release
Rafael: What are the roadblocks that currently stand in the way of large scale OpenStack deployments?
Tim: One of the developments we have been watching very closely has been the cells development. Clearly a number of sites are pushing past the thousand-hypervisor scale at the moment, but the key breakthrough will come with the OpenStack Grizzly release, when the cells functionality is there; this will allow us to construct hierarchies of cells of compute resources. This would remove one of the major limitations on total scalability. My understanding is that the code was dropped about two weeks ago … that’s one of the areas we have been investigating in the short term.
#3 Takeaway: Multi-hypervisor support is crucial. CERN is using Hyper-V and KVM hypervisors in its OpenStack environment (and is working closely with vendors to enhance functionality)
Rafael: Can you tell us a bit more about your collaboration with the OpenStack Hyper-V team at Microsoft?
Tim: We have been very happy users of Hyper-V and SCVMM for our server consolidation environment. We were keen to carry on working in a mode where the hypervisor is a tactical choice rather than a strategic one. Multi-hypervisor support was very important to us. We want to be able to test out performance and compatibility questions on different hypervisors without being forced into a mode that says: “If you choose this cloud solution you must have this hypervisor.” We started to work with Hyper-V when it was first in OpenStack, and when its support failed to keep up with the testing requirements, we were very keen on working with Microsoft and the various companies working on Hyper-V functionality - most importantly to bring it to equivalence with hypervisors such as KVM.
Together with Microsoft we have been going through a lot of testing, validation of different combinations and also filling in the various areas where there are small functionality gaps, things like getting the consoles working. Within our OpenStack environment now we are running both Hyper-V and KVM hypervisors.
#4 Takeaway: Quo vadis, OpenStack? That question needs to be answered by the OpenStack Board of Directors and Technical Community
Rafael: Which challenges do you see ahead of the OpenStack community in general as well as for the OpenStack board?
Tim: If I take things from the board’s perspective … it’s been a very interesting six months since I was elected as an individual member. Working through some of the legal questions and the construction of the board has clearly been the focus for the last six months. We now need to clarify some fairly hard issues around where OpenStack is going. In particular, there is a key discussion, along with the Technical Committee, to define the overall direction for what OpenStack is, and that will be key in defining where we go looking forward. On top of that, the other item we need to work on over the next 12 months is improving the election process for the individual board members. Along with the Platinum and Gold Members, there are eight members of the board who represent the individual members of the community. We need to make sure that this process is a transparent and representative one.
#5 Takeaway: Configuration and management of OpenStack, and the ability to move workloads between OpenStack clouds, need to be enhanced
Rafael: What needs to be worked on in OpenStack?
Tim: As I look out, one of the key things to establish will be clear feedback loops between the user community and the developers of the software. Now that OpenStack is in a state where it can be deployed and run in production, and we are getting reasonable experience of production deployments … we now need to get those experiences back into the mainline code and move towards a mode where we improve the out-of-the-box experience for end users.
CERN has the advantage of having a highly skilled team. We are able to bring some very good system administrators to work through some of the more difficult aspects of configuring and managing an OpenStack environment. That experience needs to be captured, so that it becomes a standard part of the product and the associated ecosystems. An example of this is the work we are doing with Puppet Labs … so that it becomes very easy to configure and deploy an OpenStack environment using Puppet. Most of this is just a question of coding the best practices into a set of scripts and configuration details … such that people deploying OpenStack don’t need to be OpenStack experts.
The other item, looking out, is … we need to find a way to move workloads between OpenStack environments more easily: to define the core set of functions that everyone expects to be available in an OpenStack cloud and to be able to validate them … so that when clouds are OpenStack compliant, we can be sure that a workload we run, for example, in the CERN private cloud could equally be deployed, as we need it, to a commercial OpenStack provider.
#6 Takeaway: CERN’s transparency on OpenStack usage and its supply of experienced professionals might impact OpenStack’s market adoption
Rafael: How do early adopters like CERN propel and influence OpenStack market adoption?
Tim: CERN has the benefit of being a very open environment. We are able to be very explicit and very detailed in terms of how we use the various forms of computing resources. This means that we can perform outreach in a way that some of the private cloud companies, for example, are not able to. On top of that … because many of the staff in our resource pool are on limited-duration contracts … we have seen these people arrive at CERN, work for a while with OpenStack and then leave when their contracts come to an end. This also helps propagate and enlarge the pool of skilled people available to deploy and use OpenStack.
I think CERN can act in the role of being relatively leading edge. However, we also have to be very aware that we do not want to be in a situation where CERN’s requirements are unique. We don’t have the resources available to develop our own cloud solution. We have to make sure that whenever we have something we think is a unique CERN requirement, we check it with other people who are deploying similar styles of cloud. In that respect, things like peer review, the Open Design Summit and the user stories are very useful, allowing us to check that what we are doing is aligned with the direction the rest of the industry is going in.
Rafael: What are your expectations towards Dell as you move into cloud, virtualization and … OpenStack?
Tim: CERN’s hardware procurement model is based on a very open tendering process. Under that, we write up specifications for the kind of machines we are looking for, send them to a large number of vendors, and then get back proposals for solutions. Where OpenStack is interesting in this respect is … it potentially allows us to ask for a core subset of functionality in a modular way, so that vendors who wish to make enhancements are able to do so while remaining within the core functionality that we require. One example would be bare metal management: we wouldn’t need to require specific baseboard controllers, but could instead say: “You must provide hardware which is compatible with this implementation of OpenStack.” That gives vendors a lot more opportunity to innovate … and equally, on our side, it ensures we don’t end up in a situation where we have a large variety of different hardware, which would increase our cost dramatically.
#7 Takeaway: CERN is applying the cattle (commodity) and pets (sophisticated) model for its hardware configurations
Rafael: What are your thoughts on cloud-ready hardware in general? Should hardware become more stupid or more intelligent in the future?
Tim: We have a lot of discussions around the range from basic commodity hardware through to more sophisticated hardware configurations. In this respect we found a model by Cloudscaling - ‘pets’ and ‘cattle’ - to be a very useful approach. The aim is … where we can, we want to move towards a situation where the software provides the redundancy, and we are able to move to a mode where we don’t have machines that are critical. That having been said, software takes a while to arrive at that level. We run a wide variety of applications written by physicists all over the world, and it’s not possible to guarantee that everyone has written things to the level of redundancy that a large cloud provider can expect as part of the cattle-style model.
What we are hoping to do is provide a cattle model for the large majority of our environment, but also have pets - the more custom machines, looked after more carefully - that we can sustain and support within a single framework. This means starting to look at things like the ability to restart virtual machines in the event of a hypervisor failing, or built-in migration capabilities in the event of a hardware failure. But we are certainly moving towards the mode of saying we want to see our redundancy move higher in the stack, so that we are able to cover underlying hardware, networking and equally operating system software failures in a more resilient fashion.
Rafael: Tim, thank you very much for this interview!
Tim: No problem, and thanks for all the work - the previous interviews you have done have been very interesting.
CERN: http://www.slideshare.net/noggin143/20121017-openstack-accelerating-science
Tim Bell: https://twitter.com/noggin143
Feedback Twitter: @RafaelKnuth Email: email@example.com
It is with great pleasure that we would like to invite you to participate in the Dell vWorkspace 8.0 Beta program!
The next major version of Dell vWorkspace will bring many exciting new features that we think are game changing in the desktop virtualization market. We sincerely hope that you are interested in the beta for the next major version of Dell vWorkspace and have the time and resources to put this beta to the test – or, more accurately, to your test.
The Dell vWorkspace 8.0 Beta program is hosted on the vWorkspace Community. Although this is a public Beta, it is a monitored one, which means you will have to apply for membership in the Dell vWorkspace 8.0 Beta program – which of course will be granted!
Here are the 3 easy steps to get access to the Beta:
The Dell vWorkspace 8.0 Beta community webpage provides all the information you need to get testing, including licenses.
You can provide feedback in one of two ways
Please note that this is a Beta and is not supported by official Dell/Quest Technical Support, so please do not raise cases with Technical Support. Instead, discuss the problem here on the Dell vWorkspace 8.0 Beta community site or email firstname.lastname@example.org – but DO share all your feedback. We can never have too much!
We know and understand that you are all very busy so we sincerely appreciate all the time you can spend with the beta and all the feedback you are able to provide.
We look forward to your feedback!
The Dell vWorkspace Beta team
As I mentioned recently, Dell is promoting our FastPaaS solution based on CloudFoundry and we are looking for feedback from the user community. The requirements to participate in the program at this time are as follows:
To participate in the trial, please complete the following information at the registration page:
The registration page above requires you to enter information about your existing Dell Cloud account; the automated system will create a new VM within your Dell Cloud environment containing the FastPaaS trial, so no download or setup is required.
After completing the registration, there is one more required step before the PaaS is deployed on your account. The screen below shows the information required for final setup.
When submitting the "Sub domain name", please enter only the "yoursubdomain" portion, as ".paasapp.net" is appended by the tool on submission.
If you have any additional questions, please email them to email@example.com or leverage our FastPaaS Forum on the Dell Cloud Community site in Dell TechCenter.
Each year there are numerous industry awards & recognition programs that we pay attention to - some more than others. I look forward to one such program in particular, HPCWire's People to Watch 2013! I suppose I especially enjoy this program because we're recognizing the people behind the HPC programs that are changing the world. Congratulations to everyone on the list - clearly you're making a difference in our community, and making waves that will last well beyond 2013!
People to Watch 2013 List Details: http://www.hpcwire.com/specialfeatures/people_to_watch_2013
Podcast Discussion of People to Watch 2013: http://media2.hpcwire.com/audio/130125_Episode203.mp3
Dell's Dr. Glen Otero sits down with Rich Brueckner of insideHPC to dive deeper into research computing. Glen outlines the importance of the HPC community coming together at events like SC12 to discuss the challenges and opportunities facing the industry. At SC12, Dell sponsored a Birds of a Feather (BOF) session on genomics research and personalized medicine. Panel participants included the Virginia Bioinformatics Institute (VBI), the Translational Genomics Research Institute (TGen), and Dell. The discussions included everything from code optimizations to making personalized medicine a reality for everyone. Glen uses an example from TGen, where pediatric cancer is being treated using genome sequencing. The idea of personalized medicine makes so much sense when you think about it - each individual is different, therefore the treatment and medicine should be unique as well. You can view the entire video interview here.
Recently I had a colleague ask me for a good summary of Dell's view of high performance computing (HPC) and Dell’s role within the HPC Community.
It made me think back to an interview Tim Carroll conducted with insideHPC's Rich Brueckner at SC12. In the interview, Tim discusses Dell’s view of the industry and Dell’s focus on the success of researchers worldwide. He highlights some of the achievements of the Texas Advanced Computing Center (TACC), including the impressive showing of TACC’s new HPC cluster, “Stampede,” on the Top500 List. Tim stresses Stampede’s reach across the broader community and the research results it will make possible.
It's a great point that in the end, it doesn't really matter how fast these computing systems run, but how much faster we can cure a child's cancer. You can watch the video below.
NOTE: The solutions outlined below are NOT officially supported by Dell.
A newer version of the firmware live image is available with OM 7.2, and this time with PXE images :). In response to multiple requests for PXE images on the linux-poweredge mailing list, I put together a rudimentary PXE solution for updating firmware. Let us go over the features of this version of the image first and then get to configuring the PXE solution.
The hybrid ISO image with OM 7.2 is posted at http://linux.dell.com/files/openmanage-contributions/om72-firmware-live/.
NOTE: all the commands which need root privileges can be used with sudo on this image.
PXE Image Configuration
The PXE images can be downloaded from http://linux.dell.com/files/openmanage-contributions/om72-firmware-live/pxe
Kiwi is the image-building tool I have been using to build the live firmware images. In kiwi terminology, the PXE image I built is referred to as a "RAM only image". If you are familiar with RAM only images in kiwi, please go ahead and configure your PXE servers.
In the RAM only configuration, the complete image is copied from the TFTP server to system memory (a ramdisk) and run from memory. The OM 7.2 firmware PXE image is around 3.5GB, so the target systems will need at least 4G of memory for a smooth boot.
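Before adding a host to the PXE group, a quick sketch like the following can confirm there is enough memory; the 4G threshold comes from the guidance above, and reading /proc/meminfo is Linux-specific:

```shell
# Check that the target has at least 4G of RAM for the ~3.5GB RAM-only image
required_kb=$((4 * 1024 * 1024))
avail_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$avail_kb" -ge "$required_kb" ]; then
    echo "OK: ${avail_kb} kB available"
else
    echo "WARNING: only ${avail_kb} kB available, at least 4G is recommended"
fi
```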
Before going any further, let us define TFTPROOT. By default, most distributions use the /tftpboot directory as TFTPROOT. A request like 'tftp 192.168.1.100 -c get images/xyz' should download the file TFTPROOT/images/xyz from the tftp server (assuming 192.168.1.100 is the tftp server). The files in the PXE images have to be copied to specific directories relative to TFTPROOT, so I wanted to define it properly.
Now, follow the steps below to configure the PXE image:
NOTE: all references to 192.168.1.100 below have to be replaced with your tftp server’s IP.
i) Create the TFTPROOT/KIWI/config.default file on the tftp server and populate it with the following text:
This line specifies that the image named Centos63-OM72-Firmware-Net.x86_64-1.1.0, downloaded from the tftp server, is to be staged on /dev/ram1 (a ramdisk). 1.1.0 is the version of the image and 32768 is the block size used during the PXE transfer.
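As a rough illustration, a kiwi RAM-only config line with the values described here would look something like the following (the field order - device, image name, version, tftp server, block size - may differ between kiwi versions, so verify against the kiwi documentation for your release):

```
IMAGE=/dev/ram1;Centos63-OM72-Firmware-Net.x86_64;1.1.0;192.168.1.100;32768
```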
Please note that different configuration files can be used for different target systems; refer to section 12 of the kiwi documentation for details. For now, I will continue using the default configuration for all target servers.
ii) Copy the Centos63-OM72-Firmware-Net.x86_64-1.1.0 file to the TFTPROOT/image/ directory on the tftp server.
iii) Copy the Centos63-OM72-Firmware-Net.x86_64-1.1.0.md5 file to the TFTPROOT/image/ directory on the tftp server.
iv) Finally, the kernel and initrd images have to be made available for the live firmware image to start booting. In my setup, the /tftpboot/linux-install directory has the pxelinux.0 file, and /tftpboot/linux-install/pxelinux.cfg/default is my PXE boot menu configuration file. The kernel and initrd images are placed in the /tftpboot/linux-install/kiwi directory, and the corresponding PXE menu item is configured as shown below:
MENU LABEL Firmware-Update
append initrd=kiwi/initrd-vmxboot-rhel-05.4.x86_64-2.1.2.gz ramdisk_size=4500000 kiwiserver=192.168.1.100
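For context, the two lines above belong inside a pxelinux LABEL stanza; a complete menu entry would look roughly like this (the kernel filename here is an assumption - use whatever kernel image ships alongside the posted initrd):

```
LABEL Firmware-Update
    MENU LABEL Firmware-Update
    kernel kiwi/initrd-vmxboot-rhel-05.4.x86_64-2.1.2.kernel
    append initrd=kiwi/initrd-vmxboot-rhel-05.4.x86_64-2.1.2.gz ramdisk_size=4500000 kiwiserver=192.168.1.100
```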
Screen shots of the PXE boot image:
After the kernel and initrd images are loaded into memory, the root image is downloaded from the tftp server. The progress of this download is shown above.
After the root image is copied to /dev/ram1, the md5sum of the image is compared to the one listed in Centos63-OM72-Firmware-Net.x86_64-1.1.0.md5 on the tftp server. Once the md5sum is verified, the server continues to boot.
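The same verification can be reproduced by hand with md5sum. This self-contained sketch uses a throwaway file in place of the real root image to show the flow (substitute the actual image and .md5 names from steps ii and iii):

```shell
# Demonstrate the checksum verification the boot process performs, using a
# throwaway file in place of the real root image
echo "root image payload" > image.bin
md5sum image.bin > image.bin.md5          # publish the checksum, as in step iii
md5sum -c image.bin.md5 && echo "checksum verified, boot would continue"
```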
If you haven't used any of the firmware liveDVD images before, you can follow the instructions listed at "CentOS-based liveDVD to update firmware on Dell servers" to update the firmware on the target systems.
Issues to watch out for:
We would very much like to hear from everyone using these images. Please leave us a note on the linux-poweredge mailing list, or a comment below, about how you are using these images and what changes you would like to see in them.
P.S.: the root password is "linux" on the live images.
Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of featured MVPs keeps growing, and I hope you enjoy their posts. @all MVPs: If you’d like me to add your blog posts to my weekly compilation, please send me an email (firstname.lastname@example.org) or reach out via Twitter (@FloKlaffenbach). Thanks!
Featured Posts of the Week!
The Exchange alphabet: Backup by Johan Veldhuis
Put A Running Domain Controller In Your Hyper-V Replica DR Site? by Aidan Finn
How to make an existing Hyper-V Virtual Machine Highly Available by Thomas Maurer
SCVMM 2012 SP1 – Configure the Library Server by Thomas Maurer
NIC 2013 – Presentations available for download by Lai Yoong Seng
KB2804678–Cannot Exceed 256 Dynamic MAC Addresses By Default On Hyper-V Host by Aidan Finn
Microsoft #WindowsAzure AD Rights Management Administration Tools and Utilities #Office365 by James van den Berg
PowerShell function to check for a loaded snapin by Jeff Wouters
#PSTip Count occurrences of a word using a hash table by Jeffery Hicks
Convert PowerShell Object to Hashtable Revised by Jeffery Hicks
Join PowerShell Hash Tables by Jeffery Hicks
Rename Hashtable Key Revised by Jeffery Hicks
#PSTip How do I determine if my script is running in a RDP session? by Shay Levy
#PSTip Hide users from Welcome Screen by Shay Levy
#PSTip Passing local variables to a remote session in PowerShell 3.0 by Jan Egil Ring
PoshInternals – Get-Handle by Adam Driscoll
PoshInternals – Move-FileOnReboot, Remove-FileOnReboot and Get-PendingFileRenameOperation by Adam Driscoll
PoshInternals – Install-BlueScreenSaver by Adam Driscoll
#PSTip Validate if a folder exists by Ravikanth Chaganti
PoshUtils: Downloading SharePoint 2013 prerequisites for offline install by Ravikanth Chaganti
System Center Virtual Machine Manager
Manage Self Service (Multiple) Private and Public #Cloud with Tenants in your datacenter with #SCVMM Part 2 of 2 by James van den Berg
Windows Server Core
Q&A On The Microsoft Server & Cloud Blog by Aidan Finn
KB2803748 – Fixes KB2750149 On Windows Server 2012 Clusters by Aidan Finn
KB2803748 Failover Cluster Management snap-in crashes after you install update 2750149 on a Windows Server 2012-based failover cluster by Didier van Hoye
Von CMD zur PowerShell für Active Directory (From CMD to PowerShell for Active Directory, in German) by Nils Kaczenski
James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management
Kristian Nese - MVP for System Center Cloud and Datacenter Management
Ravikanth Chaganti - MVP for PowerShell
Jan Egil Ring - MVP for PowerShell
Jeffery Hicks - MVP for PowerShell
Keith Hill - MVP for PowerShell
David Moravec - MVP for PowerShell
Aleksandar Nikolic - MVP for PowerShell
Shay Levy - MVP for PowerShell
Adam Driscoll - MVP for PowerShell
Marcelo Vighi - MVP for Exchange
Johan Veldhuis - MVP for Exchange
Lai Yoong Seng - MVP for Virtual Machine
Rob McShinsky - MVP for Virtual Machine
Hans Vredevoort - MVP for Virtual Machine
Leandro Carvalho - MVP for Virtual Machine
Didier van Hoye - MVP for Virtual Machine
Romeo Mlinar - MVP for Virtual Machine
Aidan Finn - MVP for Virtual Machine
Carsten Rachfahl - MVP for Virtual Machine
Thomas Maurer - MVP for Virtual Machine
Alessandro Cardoso - MVP for Virtual Machine
Robert Smit - MVP for Cluster
Marcelo Sinic - MVP Windows Expert-IT Pro
Ulf B. Simon-Weidner - MVP for Windows Server – Directory Services
Meinolf Weber - MVP for Windows Server – Directory Services
Nils Kaczenski - MVP for Windows Server – Directory Services
Kerstin Rachfahl - MVP for Office 365
Matthias Wolf - MVP Group Policy
No MVP but he should be one
Jeff Wouters - PowerShell
This post was authored by Kesava Nair and Vaideeswaran Ganesan of the OpenManage Integrations team.
Dell Client Management Pack 5.0 is now available, with support for Microsoft System Center 2012 and Windows 8!
The Dell Client Management Pack for Microsoft System Center Operations Manager (SCOM) and System Center Essentials (SCE) integrates monitoring of Dell business client computers into Operations Manager. The Management Pack provides a set of Dell-specific views that you can use to observe system status across a network and drill down into it.
Key new features in Dell Client Management Pack 5.0:
The product download links and product documentation are available on the Dell Client Management Pack wiki page. We encourage you to continue this conversation in the OpenManage Integrations with Microsoft SCCM and SCOM Forum if you have any comments or feedback.