This blog post was written by Dell engineer Barun Chaudhary.
As Microsoft ushers in a new operating system era, we will see cutting edge technologies introduced in Microsoft Windows Server 2012. This blog post provides a quick glimpse of the Windows Server 2012 readiness status of Dell’s Server portfolio.
Microsoft Windows Server 2012 provides a wide spectrum of new features which should empower IT professionals. A complete list of the new features in the beta release of Microsoft Windows Server 2012 can be found at the following links:
All current Dell PowerEdge Servers starting with our 9th generation through our newly available 12th generation servers have been thoroughly tested with Microsoft Windows Server 2012 pre-release builds and are expected to support this operating system when a full version is released. We have also tested the current OpenManage consoles with Server 2012 Beta and we've documented known issues. Full OpenManage support will only be available at the RTS of Windows Server 2012.
To help our customers adopt Microsoft Windows Server 2012 on Dell PowerEdge Servers, Dell engineering teams have performed extensive testing of the new OS features on our platforms and have come up with an exclusive Dell Early Adopter Guide for Microsoft Windows Server 2012 which we believe will provide a comprehensive understanding of Dell’s hardware and OpenManage support, recommended installation methods, and known issues.
To see which Dell devices will be supported by Windows Server 2012 inbox drivers, please refer to:
"I am doing a nightly full SQL backup and a weekly SharePoint farm backup. Is this enough?" - "Would out-of-the-box SQL and SharePoint tools be good enough for my backup, or should I look for a third-party product?" - "What is the best SharePoint backup strategy?"
I almost inevitably hear one of these questions whenever I speak about SharePoint backup and recovery at SharePoint events or meet customers. While it is natural to look for guidance, it is important to understand there is no such thing as the "best SharePoint backup strategy" that works for everyone. Once you understand that, you are ready to start exploring the strategy that works best for the unique combination of your company's business processes and technical environment.
Here are my five tips to consider when you are planning backup and recovery for your SharePoint environment:
Tip #1 - Involve business stakeholders and IT budget owners early in the backup and recovery planning.
A backup and recovery plan should not be created just for the sake of it. IT professionals must consider how SharePoint is used, which business processes depend on it, and what the impact of downtime and/or data loss would be. Take a step back and look at SharePoint in the context of the company's overall business continuity. Involving both budget owners and content/process owners will help to set proper expectations among all stakeholders.
Tip #2 - Define different restore time and restore point objectives for different services and content.
Restore time objective (RTO) defines how quickly the business needs content and services back after a failure. Restore point objective (RPO) defines how much data it is acceptable to lose without significant productivity impact. These two metrics are the core of any backup and recovery requirements, and they should be the outcome of your work with the business stakeholders.
Be aware of a common trap with these requirements: it may seem like a good idea to define a common RTO and RPO for "SharePoint" in general. However, not all content and services have the same value for your organization. Work with the business to break the objectives down by application, business process, typical use case, etc. For example, a SharePoint-based web application running an online store is probably business critical for a company, whereas a team project site may only impact a handful of users if it goes down. Assume these are both hosted within the same SharePoint farm: if you define just one target metric for the entire farm, the cost and complexity of the solution may grow significantly.
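To make the per-workload breakdown concrete, here is a minimal sketch in Python. The workload names and RPO targets are hypothetical, invented purely for illustration; the point is that in the worst case, data written just after a backup completes is lost just before the next one runs, so the backup interval itself is the maximum possible data loss.

```python
# Hypothetical per-workload RPO targets, in hours.
rpo_targets = {
    "online-store": 1,        # business critical: lose at most 1 hour of data
    "team-project-site": 24,  # low impact: a nightly backup is acceptable
}

def workloads_at_risk(backup_interval_hours, targets):
    """Return workloads whose RPO target is tighter than the backup interval.

    Worst-case data loss equals the interval between backups, so any
    workload with rpo < interval cannot be protected by that schedule.
    """
    return [name for name, rpo in targets.items()
            if backup_interval_hours > rpo]

# A single nightly backup for the whole farm meets the team site's RPO
# but silently violates the online store's.
print(workloads_at_risk(24, rpo_targets))  # -> ['online-store']
```

Running the same check per workload, rather than once for "SharePoint" as a whole, is exactly what keeps the nightly-backup schedule honest: the cheap schedule stays valid for low-value content while the critical workload gets flagged for more frequent protection.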
Tip #3 - Consider which SharePoint failure scenarios you are protecting from, and work on recovery procedures for each of these scenarios.
A backup and recovery plan cannot be a single straightforward step-by-step process. A SharePoint farm is a complex living organism with various components and inter-dependencies. While the loss of one database may only impact a few non-critical sites, corruption of another database can make the entire farm unavailable. Are there any natural disaster risks specific to your location? Do you have strict size quotas and retention policies that may provoke "hoarders anonymous" to regularly wipe away SharePoint content - stale and critical files alike? Make sure to address all common/expected scenarios as separate procedures in your recovery plan document.
Tip #4 - Identify dependencies and create a checklist to avoid "false starts" and wasted time and effort.
SharePoint does not exist in a vacuum. Service and content availability depends on various other systems: the underlying network and server infrastructure, SQL Server and IIS, authentication systems such as Active Directory, etc. Obviously, it would be a waste of time and effort to rush into "recovering" anything in SharePoint when its availability is impacted by another system's downtime. A good recovery plan must include a checklist of dependencies and clear, simple ways to verify each of them, together with contact information for the responsible groups within IT.
It is also good to have a clear definition of which events trigger the execution of which recovery scenarios. If there is a monitoring system in place, who gets notified, and which specific events/thresholds are being monitored? If recovery is triggered by a helpdesk call, what is the escalation process? And so on.
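As an illustration only, such a checklist can be expressed as a small script that verifies each dependency in order and stops at the first failure, pointing the operator at the responsible team. Everything below is a hypothetical sketch: the check functions are stubs standing in for real probes (pinging servers, opening a test connection to the SQL configuration database, attempting a test authentication against Active Directory), and the team names are invented.

```python
# Hypothetical stubs for dependency checks. In a real plan each would
# actually probe the system it names.
def check_network():
    return True   # stub: e.g. ping the SQL and web front-end servers

def check_sql_server():
    return True   # stub: e.g. open a test connection to the config database

def check_active_directory():
    return False  # stub: e.g. attempt a test authentication

# Checks run in dependency order; each is paired with the group to contact.
CHECKLIST = [
    ("Network/servers", check_network, "Infrastructure team"),
    ("SQL Server", check_sql_server, "Database team"),
    ("Active Directory", check_active_directory, "Identity team"),
]

def run_checklist(checklist):
    """Run checks in order; return (ok, message) describing the result."""
    for name, check, contact in checklist:
        if not check():
            return False, f"{name} is down - contact {contact} before touching SharePoint"
    return True, "All dependencies healthy - proceed with SharePoint recovery"

ok, message = run_checklist(CHECKLIST)
print(message)
```

Even in this toy form, the structure captures the point of Tip #4: the operator never starts "recovering" SharePoint while an upstream dependency is down, and the failure message names the team to escalate to.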
Tip #5 - Prepare the communication plan.
If you have ever had to go through a disaster recovery, you will probably agree it is a stressful event for an administrator. Having your boss call you every couple of minutes asking for a status update does not help either. Preparing and documenting a communication strategy may help reduce the pressure from management and the business. Again, it is important that the communication plan covers the same detailed scenarios.
For example, your recovery scenario might require bringing up a critical site collection in a temporary environment ASAP before the rest of the farm can be restored. This is a fairly common scenario, since RTO may differ for different content within the same SharePoint environment. Who should be notified about the progress of the farm and critical site restores? How are status updates communicated? Addressing these questions in a communication plan can help avoid the chaotic storm of calls and emails when it comes to a real recovery.
Free Extra Tip #6! Don't assume anything; document every single step required, however insignificant it may seem.
The ideal SharePoint recovery plan can be executed by a college student currently on her internship with your IT department without disturbing your summer vacation. Don't assume anything; there is nothing obvious or self-explanatory. Create the plan, test and document it - then give it to someone else to test, asking them to literally follow each word in the document. Make notes, update the document, and test again. And don't forget to repeat the exercise regularly to keep the plan up to date.
And One More Bonus Tip #7! Look for ways to reduce RTO and RPO for specific scenarios leveraging existing backup systems.
In an ideal world, both RTO and RPO would be close to zero - everything should be available instantly with no data loss. In the real world, though, the cost of implementing and maintaining a system that allows that can outweigh all the business benefits. So you'll have to choose backup methods and strategies that better fit the available budget. That said, in some cases you can dramatically improve these metrics with less effort and cost. We specifically designed Quest Recovery Manager for SharePoint to help you achieve just that!
The main takeaway I hope you get from this post is that a SharePoint backup and recovery strategy cannot be created by the SharePoint administrator alone. Work with the business stakeholders and other IT members. Ask a lot of questions, document your findings, and confirm any assumptions you might have. Finally, test everything you have documented, update the procedures accordingly, and then test again. Repeat that regularly and be safe.
Product Manager @QuestSharePoint
PS If you find this useful, you may be interested in my very irregular SharePoint backup and recovery blog.
Resolve support issues quickly and protect your SharePoint investment from data loss, unsupportable code, and inconsistent migrations and deployments across your environment.
Supportability is vital to any SharePoint governance plan. In fact, it’s one of our Five Pillars of SharePoint Governance – security, auditing, reliability, usability and supportability.
In this webcast we are going to take a deep dive into the supportability governance pillar, with help from our friend Jason Himmelstein from Sentri as well as Quest’s own Chris McNulty.
The Five Pillars of SharePoint Governance – Supportability (a webcast)
North America - Wed, June 6, 2012 @ 11:30 a.m. EDT | 8:30 a.m. PDT | 4:30 p.m. BST - Register here
Europe – Wed, June 6, 2012 @ 12:30 p.m. BST | 1:30 p.m. CEST - Register here
Supporting SharePoint is a nightmare when there’s no documentation for system and customization changes, no backup and recovery plan, and no system for resolving end user issues in a timely manner.
In this webcast featuring Jason Himmelstein from Sentri, you’ll get crucial insight into the key areas of support you should consider when developing your governance plan. We’ll explore real-life examples of how Quest’s SharePoint solutions address all of your support concerns, making it easier to keep your environment up and running – and keep your users happy.
North America - Register here
Europe – Register here
I am currently waiting with fellow technologist Dave Asprey from Trend Micro to see if our session for VMworld 2012 is voted onto the agenda. Please go to https://vmworld2012.activeevents.com/scheduler/publicVoting.do and vote for Dave and me - Session 2136, Collaboration for Flexible Security: A Dell - Trend Micro Case Study. You will need to register on the VMworld site to vote. Voting is open from May 29 - June 8, 2012.
While there, you might want to vote for these other Dell sessions:
Earlier this month, I was honored to serve as the host at Quest’s The Experts Conference 2012 for SharePoint in San Diego.
We brought together over two dozen industry luminaries for a wide-ranging series of presentations on:
Special thanks to our two conference keynotes – Bill Baer from Microsoft and Michael Lotter of CFA Institute.
I’d also like to thank all our speakers, many of whom made a long trek, far from friends and family. It was a great chance for us to introduce some new faces to the ever-sterling TEC lineup. We also got to co-host a raucous Q&A meeting with the local San Diego SPUG (SharePoint User Group). Thanks to Chris Givens and Randy Williams for setting that up. (And good luck with SharePoint Saturday San Diego next month!)
Additionally, I’d be remiss in not mentioning all the help and support from Doug Davis, Bill Evans, Ghazwan Khairi, Gib Patt and Susan Roper throughout the week. And in particular, our own Michelle Fallon was, as always, instrumental in keeping everything running - during the event, afterwards, and in the months of lead-up, planning and coordination. Thank you, thank you. Well done.
If you missed TEC2012, you have two options:
Hope to see you soon!
In case you couldn’t attend our OpenStack chat, please find below a summarized transcript. We cleaned it up and reordered some fragments to make it more digestible.
Stephen Spector: Let's go ahead and get started. We have several people joining us today from OpenStack - customers, developers, etc. - so please feel free to ask questions. I will start with a basic question: do we have any people on the chat who are new to OpenStack?
Marc Keilwerth: Yes.
Michael Ormaza: I am new to OpenStack!
Stephen Spector: OpenStack is an open source cloud infrastructure solution for Compute and Object Storage. The Compute part does the VM starting, stopping, running, etc. - what you think of with a cloud. The Object Storage part is like Amazon S3 and stores billions of objects for long-term storage with infrequent access - it is not similar to a hard drive. Let me intro the guest speakers... From Dell we have Rob Hirschfeld, Lead Architect on Dell Crowbar and Dell OpenStack solutions. From Rackspace we have Jim Plamondon, Director of Strategy for Rackspace and OpenStack enthusiast.
Jim Plamondon: Well, Director of Developer Relations, anyway -- although I like the "strategy" part!
Stephen Spector: I have more people joining soon as well from DreamHost, an OpenStack customer. Jim, can you give us a short answer on how Rackspace sees OpenStack as an open community project?
Jim Plamondon: Sure. The one-line description is that Rackspace is working as hard as possible to be just another member of the community. Rackspace started OpenStack with NASA and then bought the contract outfit that NASA used to create its portion. So Rackspace owned the whole shebang. But in the latest OpenStack release, Rackspace contributed only about half the code.
DELL Rob Hirschfeld: I think that Rackspace's commitment to "open community" is essential to point out.
Jim Plamondon: So the community has grown far beyond Rackspace alone.
DELL Rob Hirschfeld: Because OpenStack is more than open source - there have been other open source clouds.
Jim Plamondon: Everything is transitioning to an OpenStack Foundation.
Stephen Spector: Rob, can you give us your thoughts from Dell as a founding member, and what Dell's role is?
DELL Rob Hirschfeld: It's the open community that makes OpenStack unique and vibrant in the cloud landscape. Before we got involved in OpenStack, we'd already been working with several scale-out cloud platforms. When we started with OpenStack, it was clear from the beginning that the objectives were different. The critical differences were: multiple contributors, a commitment to be a developer AND scale operator, and the expectation of creating an ecosystem.
Stephen Spector: Jim, what is the foundation and why is that important?
Jim Plamondon: Final point: Rackspace's role in OpenStack is continuing to decline, even as we ramp up our contributions, because the wider contributor community is growing so much faster. It's already five times the size of CloudStack and Eucalyptus combined. (And yes, that's why the Foundation is so important -- it gives OpenStack life independent of any one firm.)
Stephen Spector: Jim, is the Foundation the "owner" of OpenStack?
Jim Plamondon: What is the foundation: a not-for-profit org that will own the trademarks (and hence the validation suites) and other IP, and handle related legal & marketing issues. The community "owns" OpenStack. The foundation is the vehicle of that ownership.
DELL-Christian S: Would the foundation also coordinate the development?
Jim Plamondon: The community coordinates the development, through the Project Technical Leads. Code rules. Submit great code, and YOU are directing OpenStack's development!
Jim Plamondon: That is, the Foundation is the servant of the community, not its master.
Stephen Spector: Before we talk technical, are there examples of customers using OpenStack today?
Jim Plamondon: The success of OpenStack's portability is shown by the number and breadth of OpenStack-based public clouds (Rackspace, HP, AT&T, Deutsche Telekom, Internap, etc.), private clouds (eBay, Sony, San Diego Supercomputer Center, MercadoLibre, GeeconFX, Argonne National Labs, etc.), and cloud gear & service providers (Dell, Piston, Nebula, MorphLabs, etc.).
DELL Rob Hirschfeld: I'd expand that to say that it's not just code that rules - CONTRIBUTIONS rule. There are a lot of ways that you can contribute to OpenStack.
Jim Plamondon: Excellent point, Rob.
DELL Rob Hirschfeld: Dell has taken a leadership position in making deployment work in OpenStack. AT&T has done some great work supporting documentation.
Jim Plamondon: And I've taken a leadership role in buying pizza for meetups. That's a contribution, too ;-)
Stephen Spector: Jim, that is my type of contribution.
DELL Rob Hirschfeld: We'd love to see people who are deploying contribute by sharing stories, case studies, best practices, etc.
Jim Plamondon: Case studies of real-world deployments -- including problems encountered and their resolution -- are worth their weight in GOLD right now.
Stephen Spector: Rob, can you provide a short overview of the basic organization of OpenStack - Compute, Object Storage, and the project process for new features?
DELL Rob Hirschfeld: One of the great things about OpenStack is that we want people to participate. I uploaded a white paper with some of that, and I'll give a quick intro too. OpenStack is a group of projects that work together, because a cloud is not just one component; they interoperate based on documented APIs.
Stephen Spector: Please see the file 2011 Bootstrapping Open Source Clouds.pdf in the files section for more detail.
DELL Rob Hirschfeld: There are two "core" projects today, Nova (Compute) and Swift (Object Storage). Nova is for making compute resources available (like EC2) in the form of VMs (and even bare metal in some cases).
Stephen Spector: Rob, are the projects independent, or do I need all of them for OpenStack to work?
Jim Plamondon: OpenStack projects: http://openstack.org/projects/
DELL Rob Hirschfeld: Swift is for storing objects and large files (S3-like) such as VM images. Those projects are joined together by a shared authentication system (Keystone), a VM image service (Glance), and a UI (Horizon).
Stephen Spector: Attendees - feel free to ask questions if you have any.
DELL Rob Hirschfeld: Those are the "big" pieces, but there are other components as well.
Stephen Spector: Rob, how does Dell Crowbar fit into OpenStack? Is Crowbar open source?
DELL Rob Hirschfeld: Crowbar is an Apache 2 open source project that was originally created as an OpenStack installer. It's become broader than that as we've added more cloud applications like Apache Hadoop.
Stephen Spector: Is Crowbar considered a DevOps tool?
DELL Rob Hirschfeld: We wanted to make sure that OpenStack was easy and predictable to install. Yes, we embed Chef Server into it. It's a key part of how we believe data centers should be managed.
Jim Zhang: Can you explain project Quantum?
Jim Plamondon: Link: http://wiki.openstack.org/Projects/CoreApplication/Quantum
Stephen Spector: Thanks Jim Z!
Jim Plamondon: I know that's not an explanation.
DELL Rob Hirschfeld: Quantum is part of a set of OpenStack projects working on software-defined networking for the cloud. It gives users an API for setting up their own networks, which means you can deploy your applications with much richer network configurations than are currently enabled by cloud APIs, because they leverage existing work in OpenFlow and Open vSwitch configuration management.
Stephen Spector: Rob, is a company like Nicira part of Quantum?
DELL Rob Hirschfeld: Yes, Nicira and BigSwitch are very active companies.
Jim Zhang: What is the development roadmap for Quantum? Firewalls, load balancing, etc.?
Stephen Spector: Rob, is it part of the Open vSwitch effort? I think that is the project.
Stephen Spector: Jim, I believe Quantum is in beta now and will ship in the next OpenStack release?
DELL Rob Hirschfeld: The goal of Quantum is to provide a way to connect to services like firewalls, load balancers, and other network-focused services. If you want those services for your application, you need a way to describe the network interconnects - that's what Quantum can help with.
Jim Plamondon: Info on Quantum: http://blog.ioshints.info/2012/02/nicira-bigswitch-nec-openflow-and-sdn.html (Rather indirect info -- it's mostly on the players in the virtual networking space, but it shows the "Big Picture.")
DELL Rob Hirschfeld: I wrote a post about Quantum that may help: http://robhirschfeld.com/2012/02/08/quantum-network-virtualization-in-the-openstack-essex-release-2/
Marc Keilwerth: How much time and headcount would I need to implement a public cloud for about 1,500 VMs? Assuming I have no experience with OpenStack.
Stephen Spector: Marc, OpenStack is a bit tricky to install, as it is a set of technologies to link together. Having Crowbar is a huge help in setting it up. I would give it at least a month to get up and running.
DELL Rob Hirschfeld: Marc, a public cloud? One that you want to sell VMs to customers on?
Marc Keilwerth: Yes, a public cloud for selling VMs to customers, including accounting.
DELL Rob Hirschfeld: For a private cloud, you don't need many people. If you're charging money for it, then you need to be prepared to support it.
DELL Rob Hirschfeld: Marc, more than 1 and less than 20. I don't mean to give you a short answer, but it really depends on what your business model is and how much operations experience you already have.
DELL Rob Hirschfeld: Marc, maybe I'm not understanding what you really want to find out.
Marc Keilwerth: Well, I'm trying to understand if we should use OpenStack for our public cloud or CloudStack. We made some smaller trials with CloudStack, and it looks more complete and easier to me. Maybe OpenStack is too large for a 1,500 VM cloud?
Stephen Spector: Marc - the two projects have various pluses and minuses, and it depends on the amount of control, flexibility, and technical expertise you have. OpenStack is much more developer oriented, with users assembling the pieces they need. CloudStack is more of a customer-oriented solution at this time, with the focus on the end user and not so much on the back-end hoster.
VTT: Any plans to support other hypervisors, more specifically Hyper-V?
DELL Rob Hirschfeld: Microsoft was making statements about bringing Hyper-V back during the last Summit. My understanding is that it would only be for the Nova compute nodes; the rest of the infrastructure would run on Linux.
Jim Plamondon: Regarding Hyper-V: Microsoft recently hired some guys who know how to make it work with OpenStack. They are chartered with making it work with OpenStack.
Stephen Spector: What are the next big features, other than networking, that OpenStack is working on?
Jim Plamondon: In the great spiral staircase of product development, every now and then you need a "landing" -- a flat spot where you stop climbing for a little while and focus on stability, reliability, scalability, and other -ilities. I think OpenStack's in such a landing for a couple more months, before starting the next climb.
Stephen Spector: Jim, your answers always amaze in a poetic way!
DELL Rob Hirschfeld: Speaking of the spiral staircase... I wanted to invite people to join our OpenStack Essex Deploy Hack Day next Thursday: http://bit.ly/crowbarOSED. We're about to release our deployment scripts (via Crowbar) for the latest release of OpenStack (Essex).
Jim Plamondon: CloudStack definitely has some nice shiny chrome, and really big tail fins. But it also has a community of only 40-ish developers, all of whom work for Citrix. That's about where OpenStack was two years ago. OpenStack's community has over 200 contributors in the last release alone -- out of more than a thousand contributors altogether -- and OpenStack's community is growing way faster. Because OpenStack's contributor community is larger and growing faster, it's going to add shiny chrome and tail fins faster than CloudStack can add the deep technical features, like Quantum, that OpenStack is adding.
Stephen Spector: If I wanted to learn even more about OpenStack, where can I find info on the various user groups?
DELL Rob Hirschfeld: We've been tracking both projects, and we do hear about customers who are trying a bake-off between CloudStack and OpenStack. We have a lot of customers who are using OpenStack in the "small" with 6-60 node clouds.
Marc Keilwerth: How much does Crowbar cost?
DELL Rob Hirschfeld: Crowbar is included at no cost in our OpenStack solution.
Stephen Spector: OpenStack user groups all over the world are listed at http://wiki.openstack.org/OpenStackUserGroups?action=show&redirect=OpenStackUsersGroup
DELL-Christian S: Since we just had some questions on OpenStack's maturity - what are the (ideal) prerequisites for a customer to deploy an OpenStack environment (# of admins/developers, business model, ...)?
jparrott: Having built and deployed our own "bare-metal" build system for our Red Hat Dell servers, the technical internals of Crowbar are very interesting. My first question -- can Crowbar deploy my bare-metal servers with PXE alone, or does it require an ISO? Also, does Crowbar support UEFI (which seems to be the default on my new Dell servers)?
DELL Rob Hirschfeld: It's open source and you can download and use it yourself.
DELL Rob Hirschfeld: The solution adds components that configure RAID & BIOS for the servers that we include in the solution, but Crowbar is not hardware specific.
Jim Plamondon: Marc: To use Crowbar (or OpenStack), all we ask is that you be a good citizen of the open source community. That is, what you learn... share!
DELL Rob Hirschfeld: There are many differences between OpenStack and CloudStack, but the key ones that we've seen focus on are the community of development and the use of commercial hypervisors.
Jim Zhang: Can customers download Crowbar free of charge?
Stephen Spector: JimZ - Yes.
Jim Zhang: :)
Marc Keilwerth: I've heard that running and operating OpenStack is much more challenging than CloudStack. Is that (still) true?
DELL Rob Hirschfeld: Jim's right - Crowbar is also community developed. We welcome code changes and user experiences.
Stephen Spector: https://github.com/dellcloudedge/crowbar
DELL Rob Hirschfeld: Crowbar: http://bit.ly/crowbarwiki - there are build instructions there, and I've been maintaining an ISO on my personal site (linked from the wiki). If you're determined to use VMware & Hyper-V, you'll likely find more support from CloudStack.
Jim Plamondon: Although Hyper-V & VMware support in OpenStack will be much better by Folsom, in October.
Kevin Jones: Jim, better than currently or better than CloudStack?
Jim Plamondon: Kevin: Better than currently in OpenStack. Not sure how it will compare, then, to CloudStack.
Stephen Spector: Kevin - OpenStack and CloudStack have different target markets, and it depends on how you want to create your cloud solution.
Jim Zhang: Crowbar is owned by Dell, right?
DELL Rob Hirschfeld: Crowbar is led by Dell. It's open source.
Marc Keilwerth: Is the OpenStack web GUI complete enough that our business customers can deploy and manage their VMs on their own? Or do we have to develop our own web portal for OpenStack?
DELL Rob Hirschfeld: Horizon, the OpenStack GUI, is complete enough for users to access all the features of the cloud.
DELL Rob Hirschfeld: It's a good basic GUI for the capabilities.
DELL Rob Hirschfeld: Dell has brought enStratus into our solution because we heard that customers wanted a more sophisticated GUI for workflows, accounting, user management, and provisioning.
Marc Keilwerth: Do the other OpenStack hosters, like Rackspace, use the OpenStack web GUI for their customers? Or do they use another - maybe self-made and not open source - web GUI/portal?
DELL Rob Hirschfeld: Our recommendation is to get the basics up and running fast with the built-in OpenStack GUI; then you'll likely want to extend it with ecosystem components that match your business needs (Opscode Chef, enStratus, etc.).
Jim Plamondon: Good questions. More? Perhaps from someone who hasn't been heard yet?
DELL Rob Hirschfeld: I'm uploading a copy of the three-way presentation we did at the OpenStack Summit showing the OpenStack Dashboard, Opscode Chef, and enStratus.
Stephen Spector: If you like, we can host a CloudStack chat in the near future. Is that interesting to attendees?
DELL Rob Hirschfeld: One other point about the OpenStack GUI is that it only talks to its own cloud. Chef & enStratus can talk to multiple clouds at the same time, so you can use one front end for multiple targets. If you are planning to host a public OpenStack cloud, you'll likely want to build a more custom portal. The built-in GUI exposes the features that you'd want to leverage, but user experience is a place to differentiate your offering.
Kevin Jones: Does Dell offer a public cloud based on OpenStack to its customers?
DELL Rob Hirschfeld: For hands-on OpenStack experience, please join us next Thursday for the deploy day!
Stephen Spector: Kevin - we offer a vCloud today for public cloud and are working on an open source cloud for later in the year.
Kevin Jones: Okay, thanks Rob - what will this be based on?
Stephen Spector: Kevin - as it is open source, we will leverage a variety of projects.
Jim Plamondon: http://content.dell.com/us/en/enterprise/by-need-it-productivity-data-center-change-response-openstack-cloud
Jim Zhang: If my customer needs help with OpenStack, who should I contact at Dell?
DELL Rob Hirschfeld: You can email generally into firstname.lastname@example.org
Jim Zhang: Got it!
DELL Rob Hirschfeld: http://dell.com/openstack has a lot of information.
Ivan@Dell: Thanks for all the info!
Dell TechCenter: You're welcome, Ivan! Goodbye, everybody!
Downloads
Dell Cloud Training
OpenStack Chef Conference
Dell OpenStack Whitepaper
Chat Hosts
Stephen Spector, Cloud Evangelist at Dell (Twitter: @SpectorAtDell)
Florian Klaffenbach, Solution Expert – Microsoft & Cloud Computing at Dell (Twitter: @FloKlaffenbach)
Rafael Knuth, Social Media Manager at Dell (Twitter: @RafaelKnuth)
On May 15th, I participated as a sponsor of the Austin Cloud User Group, with a discussion on the Dell vCloud and a unique customer, GreenButton. I taped the presentation along with a 20-minute Q&A session. The full video of the session is available on YouTube:
If you are interested in having Dell sponsor a cloud user group in your area please contact me via twitter (@SpectorAtDell) or email.
Congratulations are in order for Henry Neeman and the University of Oklahoma, who introduced "Boomer" as the newest addition to their Supercomputing Center for Education and Research. And with an impressive 109 teraFLOPS of peak performance, the addition of Boomer is a great thing not only for the University, but for the state of Oklahoma as well!
For perspective, the Dell-based supercomputer is 100 times faster than OU's first supercomputer from 2002, and three times faster than "Sooner," the previous HPC king on campus.
But what's most important is the type of research Boomer will be involved with, which ranges from weather modeling, molecular dynamics, and even high-energy physics. Henry, who serves as the director of the OU Supercomputing Center, is as excited as the researchers and students who can access it, saying, "... [Boomer] will enhance research capabilities by connecting scientific collaborators throughout the state and nation.” Finally, Boomer is a part of two other really exciting projects, including OneNet, which is Oklahoma's statewide research, education, and government network, and the Oklahoma PetaStore, which is designed to handle many Petabytes so researchers can better collaborate.For more information on Boomer, and exciting initiatives happening at OU, visit the links below.OU Supercomputing Center for Education & Research
insideHPC: Boomer Super Comes Online at University of Oklahoma
Fastest Academic Supercomputer In Oklahoma Begins Operation
OU Deploys Fastest Academic Supercomputer in Oklahoma History
In Episode 25, Kong Yang and Todd Muirhead talk about summer vacation and what it really means for those of us preparing for Dell World and VMworld. We welcome your thoughts and feedback.
Please click below to view the video.
With around 70 per cent of Earth covered by water, it is vital to our economies, daily life and health of the entire planet. For this reason, Mercator Ocean is on a mission. The French oceanic analysing and forecasting centre uses supercomputers to deliver complex 3D simulation systems about the state of the oceans – after all, the more we know, the better we can protect them. The existing solution needed to be replaced and Dell spoke to Bertrand Ferret, Head of IT at Mercator Ocean.
What does it take to capture the ocean?
Bertrand Ferret: A lot of processing power and very large power consumption. The models incorporate millions of pieces of data, and demand great capacity in terms of memory and performance. When our 232-Gflops R&D system came to the end of its lifecycle, we wanted to greatly increase performance while reducing power and cooling costs.
How did you go about finding a solution?
Bertrand Ferret: We conducted an in-depth survey. I was particularly interested in Dell PowerEdge C-series servers with their flexible processor configurations, but the solution wasn’t available yet. My Dell contact told me about a Dell high performance computing road show. I decided to wait and see it in action. We also received a trial server for onsite testing. That’s when I knew that the Dell PowerEdge C6100 rack server was for us – new nodes can be added as required, creating a high-density infrastructure with excellent processing power.
Oceans don’t stop – even during server installations. How did you minimise disruptions?
Bertrand Ferret: Dell physically configured the servers and delivered them in just 12 days. Our objective was to install the solution ourselves in less than 15 days, which we achieved.
What has your new infrastructure helped you achieve?
Bertrand Ferret: We’ve cut our costs by half and multiplied computing power by six. And with ultra-dense rack servers, we’re not wasting space in the data centre, so we’re not limiting the future evolution of our computing power.
Mercator Ocean is buoyed by its savings and high performance. For the full story, click here.