Shine KA and Anshul Simlote are co-authors on this blog as well.
Dell has introduced a new security feature, Automatic System Lock, in the iDRAC7 1.30.30 release for 12th-generation servers. If a user forgets to lock the server operating system before closing the last virtual console session, other users could access that same session. This feature locks the host system when the last virtual console session exits. The host lock sequence contains the keyboard keystrokes appropriate for locking the server operating system. Unless the feature is disabled, the default lock sequence works correctly on Windows systems and on Linux systems running KDE or GNOME desktops.
The lock sequence is sent when the last virtual console session terminates, regardless of how that happens (for example, the session times out due to network disconnection or idle timeout, or the user terminates it through the iDRAC7 web GUI or another interface). The feature assumes iDRAC7 is in a working state, so it does not apply when iDRAC resets, resets to defaults, or a component crashes. It is also integrated with the virtual console's reconnect feature, so locking does not occur between reconnect attempts.
The host lock operation sends keystroke sequences through the virtual keyboard. If those sequences are not interpreted correctly, stray characters could be sent to the host (for example, if the host is in the BIOS when the virtual console session exits). To prevent this, a mechanism temporarily disables the sending of keystroke sequences: the BIOS reports the state of the server boot, and keystrokes are suppressed from the beginning of BIOS (after the server resets) until the end of POST (the beginning of OS boot).
Users can enable or disable this feature on the Virtual Console page of the iDRAC7 web GUI. It is enabled by default.
This feature can also be enabled or disabled from RACADM.
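As a sketch only, the RACADM toggle might look like the lines below. The attribute group and object names here are assumptions for illustration, not confirmed names; consult the iDRAC7 RACADM Command Line Reference for the exact names your firmware version uses.

```shell
# Hypothetical attribute names -- check your RACADM reference before use.
# Enable Automatic System Lock:
racadm config -g cfgRacVirtual -o cfgVirtConsoleAutoSystemLock 1

# Disable Automatic System Lock:
racadm config -g cfgRacVirtual -o cfgVirtConsoleAutoSystemLock 0
```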
Blog post written by Matthew Paul and Kaushal Gala
On behalf of the entire OpenManage Integration for VMware vCenter team, below are the highlights of our latest release:
Sr. Product Manager Matthew Paul provides an overview of the OpenManage Integration for VMware vCenter.
To access this update, a new OVF must be deployed; there is no in-place RPM upgrade. Migration is supported from versions 1.6 and 1.7.
In addition, a new license file is required to use 2.0. All existing customers will receive an email with their license file(s) attached and instructions on how to get the new appliance. These emails will go out the first week of November. For license assistance after the first week of November, contact email@example.com
A five-host, 90-day evaluation version is available at 90 day eval. View details of the release in the OpenManage Integration 2.0 documentation. For additional information, including white papers and how-to videos, visit the OpenManage Integration for VMware vCenter community website.
If you were tasked with developing a significant new business application with broad functionality for your company, would you start from scratch, or try to patch your solution together from 2 or 3 other partial applications that overlap in functionality? Which route do you think would deliver a cleaner, more robust and useable end product? Your answer may depend upon how much time you have to deliver the application.
Interestingly, Performance Monitoring vendors often face the same dilemma, when deciding whether to expand their product capabilities via acquisition vs. organic development. APM is a particularly challenging technology area for this type of decision, due to the high requirement for tightly integrated, highly correlated, end-to-end data from user experience to application code execution to databases to underlying middleware platforms, OS, hypervisor and other infrastructure. While all of this depth and breadth and extensibility is essential for APM, these capabilities must also be carefully balanced with usability to avoid the pitfalls of an overly complex solution.
At the end of the day, all APM vendors ultimately attempt to do the same thing: they each collect end-to-end data about your application performance, and then attempt to correlate and visualize that data in the most compelling, intuitive way possible. To do this effectively requires an overall data model that enables high-speed collection and correlation of low to ultra-high granularity/fidelity data, which means functional expansion by acquisition typically requires significant replatforming effort. Without that extra development integration and re-platform work, you end up with disconnected data sets, multiple management server technologies and consoles, and a raft of other solution challenges that ultimately limit how far users can go with the solution. Many vendors play games here, and pitch some stand-alone component of their solution as a sample of the fully integrated, end-to-end solution; but don’t get spoofed. Look for a common solution architecture that enables true data level collaboration between your DevOps, Ops, and IT administration teams involved in day to day incident management triage/combat against the ghouls and goblins attacking your application performance.
Why invest to automate processes within your virtual environment? After all, doesn't virtualization in and of itself deliver efficiencies within an organization? These are common questions among virtualization and operations administrators. But the reality is that when administrators personally attend to every aspect of virtual environment deployment and ongoing management, processes are slow and scale can only be achieved by adding more staff. The additional time and IT staff resources spent on management drain cost efficiencies out of the virtualization deployment.
Conversely, adding automation into virtualization management processes enables IT staff to focus on the business, lower operational costs, reduce error rates and better scale IT resources. While the value proposition for automation is clear, administrators are looking for some practical advice for implementation.
Dell Software presented a free one-hour webcast featuring virtualization authority Scott D. Lowe on how to easily implement automation through the planning, deployment, production, and retirement stages of your heterogeneous virtual environment.
Download the webcast titled “Top Ten Virtualization Automation Tips for Infrastructure and Operations Administrators” here.
Among the top tips for automation, the webcast covers ways to assess virtual environment management costs in order to best prioritize automation processes, and highlights various tools, embedded within hypervisors or from third-party providers, available to simplify automation.
Want a preview of a few top tips? Consider these:
One of the most important, non-technical, tips administrators should consider is to work to change the definition of success within their virtual environment. Too often organizations reward inputs, operating on the perception that an administrator who spends time around the clock managing the virtual environment is actually doing a better job. Not so. With automation and strategic management, supported by real-time monitoring and network visualization, it's the administrators who deliver better outputs, that is, a higher ROI for the virtualization deployment, who should be rewarded. This is the very premise of automation: it's all about working smarter, not harder.
For the top ten automation tips, download the full webcast, “Top Ten Virtualization Automation Tips for Infrastructure and Operations Administrators,” for free here. You'll also get a demo of Foglight for Virtualization from Dell Software, which provides the real-time visualization that virtualization and operations administrators need in order to effectively integrate automation processes. For more information on Foglight for Virtualization, or to download a free trial, visit: http://software.dell.com/products/foglight-for-virtualization-enterprise-edition/.
You are invited to join a Community discussion on the latest updates, advantages and future plans for the Dell Crowbar Operations Platform.
Crowbar is a complete, easy-to-use open source cloud operations platform for everyone, turning capable hardware into usable, well-configured open source deployments in a fraction of the time required by traditional processes.
Join us for …
What: Open discussion on the Crowbar Operations Platform
When: Monday, November 4, 2013, 9:00 a.m. – noon HKT
Where: SkyCity Marriott – Meeting Room: Sky Zone
Are you ready to gain the Crowbar edge? Join us to discuss …
This blog post was originally written by Aditi Satam and Syama Poluri. Send your suggestions or comments to WinServerBlogs@dell.com
The Dell Remote Access Controller with the Lifecycle Controller (iDRAC and LC) helps you manage, monitor, update, and deploy Dell servers. It provides holistic remote management that allows you to manage your server from anywhere, at any time, irrespective of the status of the operating system.
PowerShell 3.0 is the version that ships with Windows Server 2012; it is installed by default in Windows 8 and Windows Server 2012. The CIM cmdlets in PowerShell 3.0 allow seamless communication with any device that supports the DMTF CIM standard over WS-Man or DCOM. PowerShell 4.0, which is part of Windows Server 2012 R2 and Windows 8.1, also supports the CIM cmdlets that were first introduced in PowerShell 3.0.
The integration of PowerShell in Windows Server 2012 / 2012 R2 with Dell iDRAC7 provides a rich set of remote management capabilities on Dell 12th-generation servers. PowerShell interoperability allows communication with the iDRAC of any remote system and also supports backward compatibility: simply install Windows Management Framework 3.0 or 4.0 on your Windows Server 2008 R2 or Windows 7 client systems and begin remotely managing your servers!
Once you set up a remote CIM session to the iDRAC, you can execute a number of tasks, ranging from remotely retrieving the BIOS, component firmware, and hardware inventory to remote RAID configuration and more. You can find the complete list here.
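As a rough sketch, establishing a CIM session to an iDRAC over WS-Man and pulling the system inventory might look like the following. The IP address and credentials are placeholders, and DCIM_SystemView is one of the Dell DCIM classes; consult Dell's iDRAC WS-Man documentation for the full class list and recommended session settings.

```powershell
# Build session options for WS-Man over HTTPS to the iDRAC
# (certificate checks are skipped here for illustration only)
$opts = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck `
    -SkipRevocationCheck -Encoding Utf8

# Placeholder iDRAC address; prompts for the iDRAC credentials
$cred = Get-Credential
$session = New-CimSession -ComputerName "192.168.0.120" -Port 443 `
    -Authentication Basic -Credential $cred -SessionOption $opts

# Query the system inventory via the DCIM_SystemView class
Get-CimInstance -CimSession $session -Namespace "root/dcim" `
    -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_SystemView"
```

The same session can then be reused for other DCIM queries (firmware inventory, RAID configuration, and so on) without re-authenticating.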
Below are some helpful resources that provide more information and guidance on how you can leverage this functionality to manage your Dell PowerEdge servers:
Further links and downloads:
Efficiently Migrating from Windows XP and Protecting PCs not Migrated Before XP Support Expires
The clock is ticking on Windows XP support. With Microsoft ending support for Windows XP in a few months, migrating XP systems to Windows 7 or 8 is essential to ensure the security and performance of your systems. However, most organizations find migration to be a drain on time and resources, and many will be unable to migrate all of their XP systems before Microsoft support for XP ends in April 2014. According to a July 2013 survey by Dimensional Research, only 37 percent of the surveyed companies had completed their migration from Windows XP, and more than half of the 63 percent still in the process of migrating did not anticipate completion before April 2014.
Will your organization complete its XP migration before the end of Microsoft support?
Dell has established a four-step process for successful and timely migration of your devices from Windows XP to Windows 7 or 8, and provides the tools to automate and simplify each step:
And what do you do with those devices that remain on Windows XP beyond the XP support deadline – either through lack of time to migrate all devices or due to compatibility issues? The following actions can provide you with some peace of mind that these devices remain protected:
You can find out more by accessing our joint webinar with IDC, Best Practices for Migrating from Windows XP and Protecting PCs not Migrated Before XP Support Expires. Learn more about our deployment and imaging appliance, KACE K2000, and our system management appliance, KACE K1000, by going to our website.
In the last blog post we saw how to create custom dashboards to suit our specific needs. To recap, custom dashboards allow us to create views that suit the day-to-day need to do the job effectively. One can place any metric collected by Foglight onto the dashboard. This allows us to focus only on the objects of interest.
As shown below, we are looking at two simple metrics for “Cluster 4”: Memory Swap In and CPU Used. We are also tracking the same metrics for the ESX hosts belonging to that cluster.
All is well. But what happens when we want to dig a little deeper?
Custom dashboards give us a way to jump to the story behind the simple gauge. Here is how:
Let’s select the “Cluster 4 CPU Used Hz” table and open “Edit properties”. To get there, simply click the right-most icon in the top-right corner of the field. A window like the one below opens. Select the Actions tab:
There are two actions that can be configured. These actions tell Foglight what to do when the user clicks the gauge icon (Select) or hovers over the metric with the mouse pointer (Dwell). Let’s pick one for our purpose.
We will choose the first one, Select a metric. Click the box next to “Select a metric”; the “Edit” icon on the right becomes enabled. Click it, and a new window pops up as shown below:
We have four options to choose from: Popup, Dialog, Next page/Drill down, or External URL. Depending on the desired style of the page, you can select any of these. The most commonly used option is Next page/Drill down, but you can use Popup for showing detailed information as well.
External URL can be used to open a specific web page, whether to take a particular action or simply to provide a definition.
Once you hit Next, the screen shows options for which data to pass along: either no data or the parent data.
“No Data” here means data unconnected to the metric the user is clicking on. This can be used, for example, to open an options window or something completely different.
The more straightforward choice is to show the selected metric’s parent. Let’s go with this option and hit Next.
The new screen shows all related metrics to choose from. Opening “VMware” and then “Cluster”, we select “Clusters with Most Running VMs”. The preview screen shows that 10 VMs are running. Hit Finish. And then hit “Apply” to return to the dashboard.
Now if you click on the Cluster 4 CPU Used metric, you will see the new popup open up.
If you click on the “Cluster4” object, you will go to the explore cluster dashboard, thus allowing you to explore deep into the cluster’s functionality.
Custom dashboards provide a powerful way to view data selectively. Many dashboards can be created to suit a specific function or role, and they are all linked, so “exploring” from one dashboard to another for an in-depth look is just a click away.
The latest Dell NSS-HA solution, version NSS5-HA, was published in September 2013. This release leverages Intel Ivy Bridge processors and RHEL 6.4 to offer higher overall system performance than previous NSS-HA solutions (NSS2-HA, NSS3-HA, NSS4-HA, and NSS4.5-HA).
Figure 1 shows the design of NSS5-HA configurations. The major differences between NSS4.5-HA and NSS5-HA configurations are:
Apart from those items and the necessary software and firmware updates, NSS4.5-HA and NSS5-HA share the same HA cluster and storage configurations. (Refer to the NSS4.5-HA white paper for detailed information about the two configurations.)
Figure 1. NSS5-HA 360TB architecture
Although Dell NSS-HA solutions have received many hardware and software upgrades for higher availability, higher performance, and larger storage capacity since the first NSS-HA release, the architectural design and deployment guidelines of the NSS-HA solution family remain unchanged. The rest of this blog presents only the I/O performance of NSS5-HA; to show the performance difference between the two generations, the corresponding NSS4.5-HA numbers are also presented.
For detailed information about NSS-HA solutions, please refer to our published white papers:
Note: for any customized configuration/deployment, please contact your Dell representative for specific guidelines.
Presented here are the results of the I/O performance tests for the current NSS-HA solution. All performance tests were conducted in a failure-free scenario to measure the maximum capability of the solution. The tests focused on three types of I/O patterns: large sequential reads and writes, small random reads and writes, and three metadata operations (file create, stat, and remove).
A 360TB configuration was benchmarked with IPoIB network connectivity. A 64-node compute cluster was used to generate workload for the benchmarking tests. Each test was run over a range of clients to test the scalability of the solution.
The IOzone and mdtest utilities were used in this study. IOzone was used for the sequential and random tests. For sequential tests, a request size of 1024KiB was used. The total amount of data transferred was 256GiB to ensure that the NFS server cache was saturated. Random tests used a 4KiB request size and each client read and wrote a 4GiB file. Metadata tests were performed using the mdtest benchmark and included file create, stat, and remove operations. (Refer to Appendix A of the NSS4.5-HA white paper for the complete commands used in the tests.)
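The invocations below are an illustrative sketch of how these tests are typically driven, using the parameters described above; the exact flags, paths, and client list format are assumptions, so refer to Appendix A of the NSS4.5-HA white paper for the precise commands used in this study.

```shell
# Sequential write (-i 0) and read (-i 1): 1024 KiB requests,
# distributed across the clients listed in ./clientlist
iozone -i 0 -i 1 -c -e -w -r 1024k -s 4g -t 64 -+n -+m ./clientlist

# Random I/O (-i 2): 4 KiB requests, one 4 GiB file per client
iozone -i 2 -w -r 4k -s 4g -t 64 -+n -+m ./clientlist

# Metadata operations: file create (-C), stat (-T), and remove (-r)
# with mdtest run across MPI ranks on the compute nodes
mpirun -np 64 --hostfile ./clientlist mdtest -d /mnt/nfs/mdtest -n 1000 -C -T -r
```

Note that the per-client file size for the sequential tests is chosen so the aggregate transfer (256 GiB here) exceeds the NFS server cache.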
Figures 2 and 3 show the sequential write and read performance. For NSS5-HA, the peak read performance is 4379 MB/sec and the peak write performance is 1327 MB/sec. From the two figures, it is obvious that the current NSS-HA solution delivers higher sequential performance than the previous one.
Figure 2. IPoIB large sequential write performance
Figure 3. IPoIB large sequential read performance
Figure 4 and Figure 5 show the random write and read performance. The random write performance peaks at the 32-client test case and then holds steady. In contrast, the random read performance increases steadily going from 32 to 48 to 64 clients, indicating that the peak random read performance is likely greater than 10244 IOPS (the result for the 64-client random read test case).
Figure 4. IPoIB random write performance
Figure 5. IPoIB random read performance
Figure 6, Figure 7, and Figure 8 show the results of the file create, stat, and remove operations, respectively. Since the HPC compute cluster has 64 compute nodes, in the graphs below each node executed at most one thread for client counts up to 64. For client counts of 128, 256, and 512, each node executed 2, 4, or 8 simultaneous operations, respectively.
From the three figures, NSS5-HA and NSS4.5-HA show very similar performance behavior; the two lines in each figure are almost identical, indicating that the changes introduced with NSS5-HA have no obvious impact on the performance of metadata operations.
Figure 6. IPoIB file create performance
Figure 7. IPoIB file stat performance
Figure 8. IPoIB file remove performance
Foglight is now more scalable than ever! In this educational session, our product expert will show you how to deploy Foglight to monitor large-scale environments. By the end of the webcast, you’ll know how to:
Watch the on-demand webcast.