Please join us Tuesday, May 1st, 2012 at 3 PM US Central time, when we will be talking about SCVMM 2012 and the Dell Compellent SMI-S provider. We will have experts on hand to answer your questions, and we might even have a demo available, so you do not want to miss this one.
Join here http://delltechcenter.adobeconnect.com/chat tomorrow.
Last month, I shared some insights about what we do to manage quality, specifically our efforts to support next-generation server products. Today, I want to share another example of our quality efforts: how we managed a systemic memory issue at its origin with our ODM partner. Please note that James Pledge, Senior Manager, Memory Engineering, wrote all the technical detail for this article!
In late 2009 and 2010, Dell started seeing increased incidents of correctable memory events on servers. Our error management system was directing customers to perform proactive service replacements to prevent potential future uncorrectable events. Key customers were complaining of needing to replace DIMMs too frequently. The Memory team (engineering, quality, and reliability) investigated and identified key factors:
Hardware and software advances were maximizing system utilization and delivering more cost-effective memory, better performance, and greater uptime with each successive generation. Moreover, the DIMMs we received from our suppliers were no different from those distributed to our competitors. This sudden increase in events was therefore unacceptable, but it wasn't until all the factors came together that we recognized the problem.
What We Did to Fix the Issues
At this point, we drove quality improvements that addressed these issues:
Following these changes, our DIMM quality performance improved month after month, reaching our best performance ever, with no major customer excursions by year-end 2011.
How We Are Leveraging Our Learnings
We started this journey with one supplier, with whom we had a solid relationship based on years of support, and this partnership took us down a path of making both companies stronger.
These lessons learned have now become best practices that we use with all our suppliers. Specifically, the process we used to validate the changes with our key partner has now been adopted for all our memory qualifications as an ‘early warning’ system which allows us to:
So tell us what you think! We’re working hard to improve your customer experience, and we want to hear from YOU.
For more information on server memory, see the Memory team’s website:
This blog post was co-written with Rob Cox of the Dell OpenManage Essentials team.
The Dell Troubleshooting Tool has long been recognized as a handy 'Swiss Army Knife' of network diagnostics. It has a lot of great features that are useful outside of Dell OpenManage Essentials (OME) and can help with some of your daily activities: SNMP, WMI, and miscellaneous protocol checks; port checks; and SMTP email checks.
We include it in the latest version of OME, but we've had some requests to post it as a separate download. So if you have OME installed, you already have this tool. But if you are not an OME user, give the Dell Troubleshooting Tool a try.
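To give a feel for what a basic port check does, here is a minimal sketch in Python. This is purely illustrative and is not the tool's own implementation; the host and port in the example are hypothetical.

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

# Example: check whether a (hypothetical) SMTP server is listening on port 25
# print(check_port("mail.example.com", 25))
```

A graphical tool like the Dell Troubleshooting Tool wraps checks like this, along with protocol-specific tests (SNMP, WMI, SMTP), behind one interface.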
If you have been keeping up with our OpenManage Essentials videos page, you have seen the Dell Troubleshooting Tool in action. For example, in the embedded video below, it is used to test SNMP connectivity to a Linux server that we've set up to be monitored by OpenManage Essentials.
To get a better feel for the tool, you can also click on the following screenshot or the one at the beginning of this post.
Thanks, and we hope you enjoy the tool!
Download the Dell Troubleshooting Tool
Dell Troubleshooting Tool Wiki page
Enterprise Strategy Group (ESG) recently analyzed the total cost of storage ownership over five years for two theoretical customers at the small and large ends of the “medium-sized business” segment. ESG compared a Dell EqualLogic iSCSI SAN solution to competing solutions from two other industry-leading storage vendors.
The ESG Lab analysis was quantitative: it compared the cost of acquisition (hardware and software), support, management (including manpower), and power and cooling over five years. The smaller configuration was built with a mix of SAS and SATA drives, while the larger configuration factored in SSDs due to the customer's performance requirements.
ESG found that EqualLogic’s TCO over the five-year period is impressively lower than its competitors’. ESG credited the majority of the difference to two main areas: software and management. EqualLogic software is provided at no additional charge; specifically, EqualLogic Group Manager, Host Integration Tools, and SAN Headquarters are free with PS Series hardware.
Management also accounted for a significant difference in cost when comparing Dell with other storage vendors’ systems. The management savings come from the easy-to-use software tools and EqualLogic’s ability to utilize hardware across generations: all 12 generations of EqualLogic hardware can run the latest firmware and can be mixed within the same storage pool.
Moreover, ESG found the upgrade process for EqualLogic very appealing for TCO: after adding a new EqualLogic array, the older array can be evacuated and retired in one click, thereby completely eliminating the costs associated with migrating old data to new systems.
Summary of findings:
The complete ESG TCO study is available here.
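As a back-of-the-envelope illustration of how a five-year TCO comparison like ESG's is built up from those cost categories, here is a tiny Python sketch. Every dollar figure below is invented for illustration and does not come from the ESG study.

```python
# Illustrative five-year TCO model summing the same categories ESG compared:
# acquisition, support, management, and power/cooling. All figures are made up.
def five_year_tco(acquisition, annual_support, annual_management,
                  annual_power_cooling, years=5):
    """One-time acquisition cost plus recurring costs over 'years'."""
    return acquisition + years * (annual_support + annual_management
                                  + annual_power_cooling)

# Hypothetical solution A: software bundled at no charge, simpler management
solution_a = five_year_tco(acquisition=100_000, annual_support=8_000,
                           annual_management=12_000, annual_power_cooling=4_000)
# Hypothetical solution B: licensed software raises acquisition, heavier management
solution_b = five_year_tco(acquisition=130_000, annual_support=10_000,
                           annual_management=20_000, annual_power_cooling=4_000)
print(solution_a, solution_b)  # 220000 300000
```

The point of the model is the one the study makes: over five years, recurring management costs can outweigh the sticker price.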
Getting the most from technology requires an understanding of which solutions best support your business objectives. Regardless of industry or company size, one objective at the top of every company’s priority list is driving down costs, and one glaringly obvious expenditure is constantly under review: the IT budget. The problem is that while finance departments want this budget reduced, the need for more computational power, storage capacity, mobility, speed, and collaboration grows ever larger. This is a conundrum indeed. How can we be expected to do more with less? In many cases, finance teams don’t understand the technology. If they did, they would see that by investing relatively little in new technologies they could save a lot in the long run.
So how can we help them understand that different technology solutions can drive out inefficiencies and reduce opex? Finance directors don’t want to hear about speeds and feeds. They don’t want to know about redundancy and backup windows. They don’t care about iSCSI or Fibre Channel. You might think that all they care about is how to save money. This is mostly true, but they can also understand that spending wisely delivers those savings in the long run. Dell recently launched the Technology Solutions Training Portal, where businesses can learn about the different technology solutions that are shaping today’s IT landscape. Cloud computing, virtualization, storage consolidation, mobility, and systems management are some of the solution areas covered in our 20-minute training modules.
What are the industry drivers behind these technologies? What are the business benefits? What do the solutions consist of? What does Dell have to offer? These are the questions answered by the Dell Technology Solutions Training Portal. The modules are free to access and can be completed at a time that fits a busy schedule. So rather than trying to explain the technical reasons for investing in a new storage consolidation solution, direct your business decision makers to the storage consolidation or EqualLogic modules and let them see the business benefits for themselves. Remember, doing more with less is only possible when you know how.
In Episode 21, Kong Yang and Todd Muirhead draft Dell | VMware solutions and discuss their impact on data center solutions. Kong chose the Dell Management Plug-In for VMware vCenter, Dell AppAssure, and Dell Force10, with sleeper pick Dell Wyse. Todd went with VMware vCOps, VMware View, and VMware vCloud Director, with sleeper pick Cloud Foundry. Let us know what you think.
Finally, as promised, here is my Cloud Foundry tweet: "@jerrychen @herrod @cloudfoundry the hoodie needs to read 'Cloud Foundry: The Open Can of Whoo-PAAS!' ;-) cc @jtroyer @tony_dunn."
Please click below to view the video.
Today I am going to talk about a common problem in virtual environments (especially if you are running ESX 3.x): CPU bottlenecks.
Please make sure you have read the previous blog on this topic so you understand what it is; this is more of an educational post on how to use vFoglight in different ways.
There are different ways inside vFoglight you can track high CPU % Ready values:
As you can see, there are different ways of finding the information you need. Choose the method that you prefer!
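For reference, the CPU % Ready value these views surface is derived from the raw "CPU ready" summation statistic, which accumulates milliseconds of ready time over each sample interval. The conversion can be sketched as follows; the function name and 20-second default (vCenter's real-time chart interval) are my own choices for illustration.

```python
def percent_ready(ready_summation_ms: float, sample_interval_s: int = 20) -> float:
    """Convert a 'CPU ready' summation value (ms of ready time accumulated
    over one sample interval) into a CPU % Ready percentage.
    The 20 s default matches vCenter's real-time chart interval."""
    return ready_summation_ms / (sample_interval_s * 1000.0) * 100.0

# e.g. 2000 ms of ready time in a 20 s sample interval is 10% ready
print(percent_ready(2000))  # 10.0
```

This is why the same workload shows very different summation numbers on historical charts with longer intervals: the percentage, not the raw value, is what you should compare against your threshold.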
We’ll start with HTML5 navigation timing. This dataset is available in the newer Internet Explorer, Firefox, and Chrome browsers, but not in Safari or many mobile-device browsers. What’s nice about it is that we can get full page load time in the browser, along with breakouts of DNS lookup, redirect, SSL handshake, processing, and cache access timing. For more details, see http://test.w3.org/webperf/specs/NavigationTiming/. These measurements are taken on each page. Element end times are also available through this API, but one current weakness in the data is that we don’t get start times of the elements on the page; these measurements are called resource events. A good use of resource events is to gauge the effects of partner infrastructures or third-party components in your pages.
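To make those breakouts concrete, here is a sketch of how the intervals fall out of the raw Navigation Timing timestamps. In the browser these fields come from window.performance.timing; the values below are invented milliseconds, and the exact field pairings for each breakout are my reading of the spec rather than Foglight's own definitions.

```python
# Invented Navigation Timing timestamps, in ms relative to navigationStart = 0.
timing = {
    "navigationStart": 0,
    "redirectStart": 0, "redirectEnd": 0,            # no redirect in this example
    "domainLookupStart": 5, "domainLookupEnd": 25,   # DNS lookup window
    "secureConnectionStart": 40, "connectEnd": 80,   # SSL handshake portion
    "responseEnd": 300,                              # last byte received
    "domComplete": 900,                              # processing finished
    "loadEventEnd": 950,                             # full page load
}

# Each breakout is a simple subtraction between two timestamps.
breakouts = {
    "redirect_ms": timing["redirectEnd"] - timing["redirectStart"],
    "dns_ms": timing["domainLookupEnd"] - timing["domainLookupStart"],
    "ssl_ms": timing["connectEnd"] - timing["secureConnectionStart"],
    "processing_ms": timing["domComplete"] - timing["responseEnd"],
    "full_page_load_ms": timing["loadEventEnd"] - timing["navigationStart"],
}
print(breakouts)
```

The value of the API is that these numbers are measured in the real user's browser, not synthesized from the server side.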
The beauty of these new datasets will become more evident when you see them all combined in one Foglight page. Think of looking at a bunch of hits, or even a session, in FxV today. Now imagine that already rich data supplemented with data that comes right from the user’s desktop. If you thought Foglight allowed you to understand a lot about what your users are experiencing, just wait until you get a look at this stuff.
As always feedback is welcomed and encouraged…
We all remember the old days of using the JRE's default keystore (cacerts) to manage Foglight certificates. This worked well enough, but we had to remember how to use keytool, and you could lose all your keys if you upgraded your JRE without backing it up.
Starting with the Foglight Agent Manager, we introduced a new way to manage certificates. Many Foglight users may not know about it, and that is the purpose of this blog post.
Key Store Location
Random password managed by Foglight
fglam built-in tool (command-line switches)
Agent Manager 18.104.22.168
Copyright (c) 2012 Quest Software Inc.
Build Number: 5622-20120217-1647-b96 (64-bit)
--add-certificate Adds the certificate file to the list of trusted certificates
in Agent Manager. The certificate will be trusted for all
SSL connections. The required argument is:
--delete-certificate Deletes the certificate alias from the list of trusted certificates
in Agent Manager. The certificate will no longer be
trusted for SSL connections.
--list-certificates Displays a list of certificates that have been added to
Agent Manager. Certificates that are included in the JRE
are not displayed.
To be continued…
Ubuntu Server 12.04 (a.k.a. Precise Pangolin) was released today by Canonical. Some of the new features in Ubuntu Server 12.04 include a rebase to the 3.2 kernel, inclusion of the latest OpenStack release and full support for Xen virtualization. For full details on these and other exciting features, please refer to the Release Notes from Canonical.
For the official announcement from Canonical, go here.
Ubuntu Server 12.04 introduces support for the latest generation of Dell PowerEdge and PowerEdge-C servers, including PCIe Express Flash SSDs, a solid-state storage solution for outstanding IOPS performance.
In addition, the Dell Linux engineering team is working with Canonical to re-certify Dell PowerEdge and PowerEdge-C servers that had been previously certified with older releases of Ubuntu Server. Re-certification efforts are under way, so be sure to visit the Canonical hardware certification site for latest updates.
Where to Get Support
Ubuntu Server 12.04 is an LTS (Long Term Support) release, which has a five-year support life from Canonical, as opposed to 18 months for standard releases. Customers looking to deploy Ubuntu Server in their data centers should look at LTS releases. For details on LTS support, see here.
Ubuntu Server 12.04 is not factory-installed or officially supported by Dell. However, OS support contracts are available from Canonical through the Ubuntu Advantage program. Certified Dell hardware is preferred in order to provide the best support, but it is not required.
Dell OpenManage Support
The Dell Linux Engineering team will soon release a build of OpenManage 7.0 for Ubuntu Server 12.04. Please stay tuned for a future announcement on these forums.