Our community is talking about the new Dell Technologies. Join the discussion in the Dell EMC Community Network:
By now I'm sure you've heard about The Experts Conference - SharePoint...It includes three days of training (April 29-May 2). Just in case you've been too busy controlling your SharePoint chaos, let me recap:
And - the part I am most excited about - we have TWO (not just one!) amazing keynotes lined up!
If you are faced with the pains and pleasures of SharePoint, you should NOT miss this conference.
This is going to be the HOTTEST TEC event yet! If you'd like to find out more, you can visit www.theexpertsconference.com or go to www.sharepointforall.com/special_events and interact directly with our experts!
And if you plan to register by April 6, please email me for a discount code to register for $1,475, which is $200 off the current price! email@example.com
Last week, we welcomed more than 40 people to our first EMEA Cloud Chat from various geographies, among them Dell engineers from our US offices as well as sales and marketing people, trainers and customers from Germany, France, Italy, Spain and Sweden. Thank you all for taking the time to engage with us!
As announced earlier, we will follow up with further cloud chats, diving deeply into specific solution areas. We will host upcoming cloud chats every third Thursday of the month at 3:00 pm CET. Our next chat is scheduled for April 19th, and we will talk about Dell's OpenStack distribution and Dell Crowbar, the leading open source installation tool for OpenStack and Hadoop. A pre-chat blog post is already in the works, providing a detailed outlook. Stay tuned!

In case you couldn't attend (or if you'd like to learn more about the solution areas discussed in the chat), please find below a summarized transcript. We cleaned it up and reordered some fragments to make the transcript more digestible (the entire transcript is almost 20 pages long).

Dell OpenStack & Crowbar

Stephen Spector: As a starting point, we want to highlight yesterday's announcement (March 21st, 2012) of OpenStack (Crowbar) coming to Europe and Asia. The Dell team has a complete OpenStack "distro" available globally, and the Crowbar project allows you to leverage DevOps concepts to perform a complete install of OpenStack on bare metal. Dell's Barton George has a nice blog post, but we have Rob Hirschfeld here with us. Rob Hirschfeld is the main architect behind Crowbar and the technical lead on the Dell OpenStack distro. Rob, can you provide a bigger overview of Crowbar to help people understand how it helps with OpenStack?
Rob Hirschfeld: Sure. Crowbar is an open source project that we started to make sure it was fast and repeatable to install OpenStack (and other cloud software). It was very important to us that we could get to production. We started from the OpenStack Chef scripts, so Crowbar uses Chef as a foundation, but we needed to add hardware deployment and orchestration. We've been getting some great interaction from EMEA community members on the Crowbar list, and we've been seeing activity around both Diablo and Essex (recently added).
Stephen Spector: Is Dell releasing Essex soon or are we staying with Diablo?
Rob Hirschfeld: Our Diablo distro includes Dashboard & Keystone. The current release is Diablo because that's what's released, but we felt it was critical to include Keystone & Dashboard in it.
Stephen Spector: When is Essex set to release?
Rob Hirschfeld: We don't have an official date yet - we're evaluating it and need to see the final bits.
Stephen Spector: Can you provide some links on where people can get more info on Crowbar and Openstack from Dell?
Rob Hirschfeld: http://dell.com/openstack and http://dell.com/crowbar. Crowbar is open source, so the best way to start learning it in detail is from the github wiki: http://github.com/dellcloudedge/crowbar/wiki.
Stephen Spector: Please welcome Ralph Hibbs, Marketing Director from the Dell Boomi Application Integration solution team. Dell Boomi is an innovative cloud solution allowing enterprise customers to leverage the latest SaaS solutions while continuing to use their existing enterprise software investments. Ralph, can you tell us more about Dell Boomi?
Ralph Hibbs: Dell Boomi is an application integration platform that resides in the cloud. It can connect any combination of applications, regardless of where they reside: public cloud, private cloud or on-premise.
Stephen Spector: Can you provide an example of app integration?
Ralph Hibbs: Earlier this week, we announced that we are working with several EMEA customers in helping the Oneworld Airline Alliance deploy an IT hub in the cloud to exchange frequent flyer information. This has us working with such EMEA companies as Air Berlin and British Airways. The IT hub between airlines is one example of application integration. Another example, inside a corporation, would be connecting Salesforce.com to a financial application, such as SAP or Oracle, to exchange information about closed orders and invoicing.

Stephen Spector: I know there was a blog post on that; do you have the link for everyone?

Ralph Hibbs: Here's a link to the Oneworld press release.

DVS Simplified Appliance

Stephen Spector: Please welcome Brent Doncaster, Product Marketing for virtualization solutions. Can you provide an intro to the newly released Dell solution for virtualized desktops?

Brent Doncaster: Dell recently announced a new addition to our DVS portfolio of desktop virtualization solutions: DVS Simplified, an integrated VDI appliance. It is now available in the USA and will be available in EMEA in two weeks. The DVS Simplified Appliance is in addition to our enterprise solution and our Dell Cloud based "as-a-service" offer for virtualizing desktop environments.

Stephen Spector: Is the DVS Simplified Appliance based on Citrix or another technology?

Brent Doncaster: The DVS Simplified Appliance integrates and factory-installs Citrix VDI-in-a-Box software on a factory pre-configured Dell server. An "appliance" is the easy way to think of it: we have done all the mechanical software installation and pre-configuration. The cool thing about the DVS Simplified Appliance is that going from out of the box to up and running can take as little as an afternoon.

Dell Public VMware vCloud

Stephen Spector: Please welcome Adam Dawson, Program Manager on Dell Security Solutions, to discuss the new security features added to Dell vCloud.
Can you briefly talk about the Dell - Trend Micro relationship for our public VMware vCloud?

Adam Dawson: This is a really cool solution. Security is every CIO's primary concern in moving to the cloud, so we built a relationship with Trend Micro to offer cloud-based encryption key management for your Dell vCloud infrastructure data. You can encrypt the data in Dell's cloud, and you will be the only one with access to it. Dell won't have access to the keys or the data, you only pay for the keys you use, and the billing is integrated into your monthly vCloud bill.

Stephen Spector: Where are the keys stored?

Adam Dawson: The encryption keys are stored in Trend Micro's data center in Germany. The VM with encrypted data requests the key at startup, and if the VM passes integrity checks, the key is dispatched to the VM, which allows access to the encrypted data.

Ralph Hibbs: How is the Dell & Trend Micro solution delivered? A cloud service? An appliance?

Adam Dawson: The Trend Micro service is delivered as a SaaS solution in the cloud. You can order from Dell, and Dell will provision your account through Trend Micro's software portal online.

Stephen Spector: What gets encrypted on the VM? Just data, or the VM itself?

Adam Dawson: You encrypt individual data volumes (drive letters in Windows, mount points in Linux), so you can pick and choose to encrypt only sensitive data.

Stephen Spector: A quick note on Dell's Public Cloud: the vCloud Datacenter Service is currently available in the US. We are working very hard to bring it to the European market (stay tuned for more info on this).

Florian Klaffenbach: The work on the German cloud data center will start next week.

Stephen Spector: Please welcome Matt Domsch, Solutions Architect in the Office of the CTO, to discuss Dell's new Disaster Recovery and Backup Solutions.
I want to ask you, Matt, about disaster recovery and how public clouds are changing the way customers view this important deliverable.

Matt Domsch: A lot of the companies I speak with daily have DR plans. They've been building second DR data centers and watching their costs skyrocket. Cloud gives companies a second, often less expensive and less troublesome, way to do DR.

Stephen Spector: Are companies using public clouds for the backup?

Matt Domsch: The concept is fairly simple: you create standby application instances in a cloud, and you regularly back up your critical data to the DR site's storage. Having both storage and the ability to run VMs in a remote public cloud gives you the ability to "flip over" fairly quickly. Public cloud for backup is popular at a consumer level and, increasingly, at a business level. The great thing about backups is that they are usually "write once, read never," which makes them the perfect thing to place in a cloud, where day-to-day performance variations don't really impact your running business. Dell is actively working on new backup and disaster recovery solutions for the public cloud, so stay tuned for more details on those solutions.

Stephen Spector: Customers, as you can see, Dell has a great deal of cloud solutions: application integration (Dell Boomi), public and private clouds both open source and VMware based, security (Dell SecureWorks and the partnership with Trend Micro), VDI solutions that are the simplest installs in the marketplace, as well as special cloud hardware to meet customer demands. Feel free to visit dell.com/cloud for more info.

Stephen Spector: I have a good blog post on the various Dell Cloud solutions that I wrote last week. It covers all the Dell Cloud solutions and what they can do.
The trick is to meet the customer's needs!

Thank you to everyone for attending the first EMEA Cloud Chat today. I am pleased that we were able to have a variety of Dell cloud experts provide some basic info on the various cloud solutions we offer. At future chats, we will go into more detail on those solutions to offer you more insight into the products and how they can drive your business forward.
Chat Hosts

Stephen Spector, Cloud Evangelist at Dell (Twitter: @SpectorAtDell)
Florian Klaffenbach, Community Technologist at Dell (Twitter: @FloKlaffenbach)
Rafael Knuth, Social Media Manager at Dell (Twitter: @RafaelKnuth)
Posted on behalf of Gireesha US from Dell Linux Engineering Team.
Red Hat announced the availability of the Red Hat Enterprise Linux (RHEL) 5.8 operating system in February 2012, which includes driver updates, bug fixes and support for new hardware. For detailed information, please refer to the RHEL 5.8 release notes.
Red Hat Enterprise Linux 5.8 has been extensively validated on supported Dell PowerEdge Servers. Please refer to the important information guide for a complete list of issues fixed in RHEL 5.8 on Dell PowerEdge servers.
What's new:
Listed below are details of the new features, including hardware enablement.
Driver updates for the storage and network controllers; these drivers have been upgraded to include bug fixes.
I get asked by a lot of admins about thin provisioning and how that data can be easily understood. Admins are looking to understand the risk in over-provisioning datastores, because if you run out of space you can cause outages for several VMs that can no longer read from and write to their disks. Unfortunately, there is no built-in metric for a datastore that can show that, but this is where a derived metric can really help. With vFoglight, I can create a metric based on other metrics (or anything, really). So, in the case of datastores, I want to take all of the VMDKs, thin or thick, add up their committed and uncommitted space, and create a datastore-level version of that. With this new set of metrics, as well as the existing VM-level metrics, we can craft some simple-to-understand but powerful views.
Well, I've taken the liberty of creating a sample Thin Provisioning cartridge you can use to add these derived metrics to your environment and it also has several nice views around thin provisioning in general. You can check it out here: http://communities.quest.com/docs/DOC-12605
Drag out the View in Thin_Provisioning and it will give you a list of all VMs that currently have thin provisioned disks.
Drag out a datastore to get either the Graph & Table or just the Table. This shows the total allocation that is currently in use and the data that could still be written (think over-provisioning).
Drag out all datastores to see the thin provisioning information for every datastore.
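The rollup behind such a derived metric is easy to illustrate. The following is a standalone sketch, not vFoglight code: it assumes hypothetical per-VMDK committed/provisioned figures and simply sums them per datastore to expose the over-commit ratio that the cartridge's views visualize.

```python
# Illustrative sketch of a datastore-level thin-provisioning metric.
# Field names and sample numbers are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Vmdk:
    datastore: str
    committed_gb: float    # space actually written to the backing storage
    provisioned_gb: float  # size the guest OS believes the disk has

def datastore_overcommit(vmdks):
    """Return {datastore: (committed, provisioned, overcommit_ratio)}."""
    totals = {}
    for d in vmdks:
        c, p = totals.get(d.datastore, (0.0, 0.0))
        totals[d.datastore] = (c + d.committed_gb, p + d.provisioned_gb)
    return {ds: (c, p, p / c if c else 0.0) for ds, (c, p) in totals.items()}

vmdks = [
    Vmdk("ds1", committed_gb=40, provisioned_gb=100),  # thin: 60 GB not yet written
    Vmdk("ds1", committed_gb=80, provisioned_gb=80),   # thick: fully committed
    Vmdk("ds2", committed_gb=10, provisioned_gb=200),  # heavily over-provisioned
]

for ds, (c, p, ratio) in sorted(datastore_overcommit(vmdks).items()):
    print(f"{ds}: committed={c:.0f} GB, provisioned={p:.0f} GB, ratio={ratio:.2f}x")
```

A ratio well above 1.0x flags a datastore where the VMs could collectively write more data than the datastore can hold, which is exactly the outage risk described above.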
We have posted two blogs (1) (2) in the past discussing the Dell NFS Storage Solution with High Availability (NSS-HA). This article introduces a new configuration of the Dell NSS-HA solution which is able to support larger storage capacities (> 100 TB) compared to the previous configurations of NSS-HA.
Dell has secured support from Red Hat for XFS capacities greater than 100 terabytes. Details on the work that was done to get this exception for Dell are in the blog post: Dell support for XFS greater than 100 TB.
As the design principles and goals for this configuration remain the same as previous Dell NSS-HA configurations, we will only describe the difference between this configuration and the previous configurations. For complete details, please refer to our white papers titled “Dell HPC NFS Storage Solution High Availability Configurations, Version 1.1.” and “Dell HPC NFS Storage Solution High Availability Configurations with Large Capacities, Version 2.1.”
In previous configurations of NSS-HA (1), each storage enclosure was equipped with twelve 3.5" 2TB NL-SAS disk drives. The larger capacity 3TB disk drives are a new component in the current configuration. The storage arrays in the solution, the Dell PowerVault MD3200 and PowerVault MD1200 expansion arrays, are the same as in the previous version of the solution but with updated firmware. The higher capacity 3TB disks now allow higher storage densities in the same rack space. Table 1 provides information on the new capacity configurations possible with the 3TB drives. This table is not a complete list of options; intermediate capacities are available as well.
In previous configurations of NSS-HA, the file system had a maximum of four virtual disks. A Linux physical volume was created on each virtual disk. The physical volumes (PV) were grouped together into a Linux volume group and a Linux logical volume was created on the volume group. The XFS file system was created on this logical volume.
With this configuration, if more than four virtual disks are deployed, the Linux logical volume (LV) is extended, in groups of four, to include the additional PVs. In other words, groups of four virtual disks are concatenated together to create the file system, and data is striped across each set of four virtual disks. However, it is possible to create users and directories such that different data streams go to different parts of the array, ensuring that the entire storage array is utilized at the same time. The configuration is shown in Figure 1 for a 144TB configuration and a 288TB configuration.
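The "stripe within a group, concatenate across groups" layout described above can be sketched as a toy address-mapping model. This is not the actual LVM implementation; the stripe unit and per-disk size below are illustrative assumptions (36 TiB per virtual disk makes one four-disk group match the 144TB configuration).

```python
# Toy model of the NSS-HA volume layout: four virtual disks striped per
# group, with groups concatenated. All sizes are illustrative assumptions.
DISKS_PER_GROUP = 4
STRIPE_SIZE = 1 << 20            # 1 MiB stripe unit (assumed)
DISK_SIZE = 36 * (1 << 40)       # 36 TiB per virtual disk (assumed)
GROUP_SIZE = DISKS_PER_GROUP * DISK_SIZE

def locate(offset):
    """Map a logical-volume byte offset to (group, disk-within-group)."""
    group, off_in_group = divmod(offset, GROUP_SIZE)
    stripe_index = off_in_group // STRIPE_SIZE
    disk = stripe_index % DISKS_PER_GROUP
    return group, disk

# Consecutive 1 MiB chunks rotate across the four disks of group 0 ...
assert [locate(i * STRIPE_SIZE)[1] for i in range(5)] == [0, 1, 2, 3, 0]
# ... while an offset past the first group's capacity lands in group 1.
assert locate(GROUP_SIZE) == (1, 0)
```

The second assertion shows why directing different data streams to different parts of the namespace matters: a stream confined to early offsets only ever exercises the first group of disks.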
The Red Hat High Availability Add-On is a key component for constructing an HA cluster. In previous configurations of NSS-HA, the add-on used was the version distributed with RHEL 5.5. With this release, the version distributed with RHEL 6.1 is adopted. There are significant changes in the HA design between the previous RHEL 5.5 release and the new RHEL 6.1 release. New and updated instructions for configuring the HA cluster with RHEL 6.1 are listed in Appendix A of our white paper "Dell HPC NFS Storage Solution High Availability Configurations with Large Capacities, Version 2.1."
In previous configurations of NSS-HA, the version of XFS was 2.10.2-7, distributed with RHEL 5.5. In the current version of NSS-HA, the version of XFS used is 3.1.1-4, distributed with RHEL 6.1. The most important feature of the current XFS for users is that it is able to support more than 100 terabytes of storage capacity.
Table 2 lists the similarities and differences in storage components. Table 3 lists the similarities and differences in the NFS servers.
The Dell NSS-HA solution provides a highly available, high performance storage service to high performance computing clusters via an InfiniBand or 10 Gigabit Ethernet network. Performance characterization of this version of the solution is described in "Dell HPC NFS Storage Solution High Availability Configurations with Large Capacities, Version 2.1." Additionally, in our next few blogs, we will discuss the performance of the random and metadata tests on 10GbE and other performance related topics.
By Xin Chen and Garima Kochhar
1. Dell NFS Storage Solution with High Availability - an overview
2. Dell NFS Storage Solution with High Availability – XL configuration
3. Red Hat Enterprise Linux 6 Cluster Administration -- Configuring and Managing the High Availability Add-On.
4. Dell HPC NFS Storage Solution High Availability Configurations, Version 1.1
Enterprise storage needs demand solutions that can scale up and scale out in terms of capacity and performance. This is especially true in HPC environments where the additional constraint of cost is paramount. Dell has responded with cost effective solutions for the HPC storage needs in three different spaces (as illustrated in the figure below):
In particular, NSS is a very cost-efficient solution that can deliver high performance at moderate capacities. However, previous versions of the NSS were restricted to a ceiling of 100 terabytes (100 * 2^40 bytes, i.e., 100 TiB) due to the Red Hat support limit for XFS.
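The unit parenthetical above is worth a quick check, since "100 terabytes" and "100 TiB" differ by nearly 10%. A two-line calculation makes the distinction concrete:

```python
# Binary vs. decimal capacity units: the XFS support ceiling of 100 TiB
# (100 * 2^40 bytes) is roughly 109.95 decimal terabytes.
TiB = 2**40   # tebibyte, binary unit
TB = 10**12   # terabyte, decimal unit

limit_bytes = 100 * TiB
print(f"100 TiB = {limit_bytes / TB:.2f} decimal TB")
```

This is why quoted raw-disk figures (like the 288 TB below, counted in vendor decimal terabytes) cannot be compared directly against the binary file-system limit without converting.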
Given the industry's current demands for higher-capacity storage solutions in this space, Dell has been working extensively with Red Hat on XFS testing and validation to expand support beyond the 100 TiB barrier and meet the current business needs of our customers.
As a result of this effort, Red Hat has granted Dell support for XFS up to 288 TB (raw disk space) on NFS Storage Solutions with a single namespace, and even larger capacities on custom-designed solutions. This is a very important milestone in Dell's quest to provide petabyte storage solutions.
For details about the performance characteristics and different capacities of our new version of the NSS, please take a look at the NSS white paper.
Written by Garima Kochhar, Jose Mario Gallegos, and Xin Chen
Managing Enterprise Quality: What are we doing for our next generation products?
You already know the mission of the Enterprise Customer Loyalty Team: it's about improving the health of our customer relationships and driving actionable customer insights into our product development cycles. Today I want to share a little more context about the larger organization we represent, Enterprise Quality: what our Quality Engineers do to manage quality in general, and what we're doing specifically to support a seamless transition to our next generation server products.
Enterprise Quality Vision
Our CULTURE of Quality incorporates the customer in all we do. Built on a foundation of solid quality management, it started with a basic hardware focus, followed closely by software. Now our Quality JOURNEY has matured to the point where providing solutions is crucial to everyone's success. This top-down transformation was evident at our recent launch of new Enterprise Products and Solutions, where Michael Dell talked about our continued transformation and evolution from a PC manufacturer to a company providing end-to-end IT solutions.
For this launch of next generation products, our Quality goal was to provide our customers a seamless transition. This required our support from early product development through post-launch:
Early product development:
We’ve already received valuable, timely feedback from customers who are rolling out 12th Generation installations.
So tell us what you think! Are we missing anything? Are there other activities besides what we have described? Are you hearing any buzz about our 12th Generation products as they enter the market?
We’re working hard to improve your customer experience, and we want to hear from YOU.
Please join us today for an "Open Mic" chat with the Dell TechCenter crew at 3pm Central. We'd also like your help in evaluating a new chat platform for the community. You can login as a guest at the link below.
Join us here: http://meet30005221.adobeconnect.com/delltechcenterchat/
Calendar event here: http://en.community.dell.com/techcenter/c/e/40.aspx
To unsubscribe from the Dell TechCenter distribution, please click "Email unsubscribe to this blog" on the TechCenter News Blog page or email "unsubscribe" to firstname.lastname@example.org.
As I mentioned a few weeks back in “Digging Deep into Dell Boomi” I planned to come back and write about the various topologies available to deploy Dell Boomi Atoms, or integration run-times. For more details on the Dell Boomi solution you can read my initial blog post or visit the website.
One of the secret sauces to Dell Boomi is how integration execution is managed. Once your integration processes have been designed and tested in the visual designer, they are loaded into a lightweight, dynamic run-time engine called an “Atom” for execution. The Atom is a pretty cool technology element—we’ve even applied for a patent on it.
The cool thing about Atoms is they can be deployed anywhere: the AtomSphere (Boomi’s Atom Cloud), another public cloud (such as Amazon Web Services or Dell’s public vCloud), or safely behind your firewall for on-premise application integration.
The Atoms control the actual movement of data. If Atoms are deployed behind the firewall, no data flows through the Dell Boomi Atom Cloud. It is extremely secure and requires no holes in your firewall. The Atom stays connected with Dell Boomi for integration updates, Atom updates and sending status to the centralized management console.
The Dell Boomi Atoms support the following deployment scenarios:
Full Cloud Deployment
In this scenario, the Atom is located in the Dell Boomi Atom Cloud environment, managed and operated by Dell. All data sent between one SaaS application, such as NetSuite, and another SaaS application, such as Salesforce.com, flows through the Dell Boomi Atom located in the Atom Cloud (see image below).
Cloud Deployment of Both Cloud and On-Premise Apps
In this scenario, the customer has a public SaaS solution outside their firewall but also an on-premise legacy application such as SAP or Oracle, which requires the data to flow between the Dell Boomi Atom and the two applications: one in a public environment and one behind the firewall. An image of this type of solution is shown below in the Third Party Datacenter Deployment, except that the Dell Boomi Atom is hosted and managed by Dell.
Third Party Datacenter Deployment In this scenario, the Dell Boomi Atom is hosted by a third party vendor on a public cloud such as Dell vCloud, or in a datacenter managed by a vendor other than Dell. This option requires the Dell Boomi customer to set up and configure their Atom on the third party hardware, but it operates similarly to the Dell Boomi SaaS setup shown above.
Behind the Firewall Deployment In this model, the Dell Boomi Atom is deployed within the corporate datacenter, providing on-premise data movement for the customer. The image below demonstrates this solution:
The Dell Boomi Atom is placed in the datacenter "between" the Oracle and SAP applications, where data is transferred following the rules set up before deployment for Oracle/SAP-to-Salesforce conversion. Dell does monitor the integration process, and log files & status notifications are sent to the Dell Boomi cloud for tracking purposes only.
But wait…there's more
Did I say Atoms are cool? They can be combined together to form…wait a minute…remember chemistry and you'll guess it. Yes, Atoms can be combined into Molecules for load balancing, high availability, disaster recovery and parallel processing.
To learn more about this unique cloud technology from Dell visit http://boomi.com or call -800-732-3602 for sales.
It's exciting to see Dell HPC making a difference in scientific research across the world. Recently, China Informationization Education, the top education-focused publication in China, posted an article about the role high performance computing (HPC) is playing in advancing research at some of China's top research institutions. Focused on helping to improve the efficiency of scientific research, with an emphasis on biology, HPC is having a large impact. The article features Tsinghua University and also cites information provided by Beijing Normal University and the University of Science and Technology of China. Below is a link to the article translated into English, as well as the original version in Chinese.
High Performance Computing (HPC) Speeds Scientific Research for Top Universities
As always, we welcome your feedback and comments.