One of the key features in Foglight for Virtualization, Standard Edition (aka "FVS," formerly known as vOPS) is the ability to safely deploy new VMs while ensuring resources are not over-committed. FVS analyzes dynamic workload demands, automatically right-sizes resources, and eliminates waste (old snapshots, zombie VMs, etc.). FVS also determines exactly how many more VMs can be safely deployed, and helps you create and execute a VM deployment plan while reserving resources in advance, which guarantees those resources will be available when needed. This enables you to efficiently plan and manage quickly changing demands for resources in the environment. Additionally, for a team of administrators, it establishes a “single version of the truth” for understanding and controlling the available capacity for new workloads.
FVS helps you understand how many VMs, and of what sizes, can safely be added to your existing infrastructure using the Capacity Availability feature. Use the Calculator to select default VM slot sizes, or input your own specific sizes via the New Custom Size button. In the example below, we’re using average VM sizes to determine how many additional VMs can be added to the infrastructure:
We can see that the “Hyperv.hyper-v.net” resource object in particular can safely run seven additional VMs of the calculated size (using the average VM sizes in this example), and that the major resource constraint is storage.
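Conceptually, this slot-based availability math divides each free resource by the per-VM requirement and takes the most constrained result. Here is a simplified Python sketch with made-up numbers chosen to mirror the example above; the actual FVS calculation also factors in workload history and reservations, so treat this purely as an illustration:

```python
# Simplified slot-size availability math with made-up numbers.
# The real FVS calculation is more sophisticated (dynamic workloads,
# reservations, etc.); this only illustrates the basic idea.
free_resources = {"cpu_ghz": 24.0, "memory_gb": 96.0, "storage_gb": 700.0}
per_vm_slot = {"cpu_ghz": 2.0, "memory_gb": 8.0, "storage_gb": 100.0}

# How many VM slots each resource could support on its own.
slots = {r: int(free_resources[r] // per_vm_slot[r]) for r in free_resources}

# The most constrained resource determines the safe VM count.
constraint = min(slots, key=slots.get)
print(slots)                          # {'cpu_ghz': 12, 'memory_gb': 12, 'storage_gb': 7}
print(constraint, slots[constraint])  # storage_gb 7
```

With these numbers, storage is the limiting factor at seven slots, matching the seven-VM result shown above.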
We can also see that no VMs have been reserved yet: the Reserved VMs column is empty, and the summary line just above the table shows zero reserved VMs. Once reservations are made, those VMs will populate the Reserved VMs column and the summary line. To make reservations and deploy VMs, we first click the Plan Deployments tab in the Calculator:
Then click the Schedule VM Reservation button and enter the requisite information in the dialog box to reserve resources and schedule VM deployment at the best time for your organization. In this example, we’re adding an additional database server for a web expansion project. To size the VM, we can either use the Custom setting to input specific resource values or select the sizing of any existing VM. We then choose the host or cluster, and the storage:
Once you save the information, the placed VM is converted to a reservation and will appear in the Reserved VMs column of the Capacity Availability View:
If you use templates, click the Plan Deployments tab in the Calculator, followed by the Deployment tab, to enter the requisite information. Note that the template’s allocation values can be easily adjusted on any given deployment:
All reserved and deployed VMs can be viewed and managed, including editing and deleting them, from the Planned Deployment tab of the Capacity Manager:
FVS delivers a simplified, orderly approach to identifying available VM slots, reserving scarce resources for new workloads, and automating the creation, scheduling, and placement of those VMs. This eliminates the risk of over-committing resources while enabling you to get the most out of your virtual infrastructure.
Visit http://software.dell.com/products/foglight-for-virtualization-standard-edition/ for more information.
I was very pleased to meet with Joseph B. George, Barton George, and Rob Hirschfeld during this year’s OSCON. Joseph and I spoke about the announcements Dell made around OpenStack and Hadoop, Barton shared with me the latest news on the Dell Sputnik developer laptop, and Rob gave me some valuable insights into an interesting discussion in the OpenStack community: how should the community define what is part of the OpenStack project’s core?
Joseph B. George, Director of Cloud & Big Data Solutions at Dell
The Dell OpenStack-Powered Cloud Solution is now available with new options such as support for OpenStack Grizzly and for Dell Multi-Cloud Manager (formerly Enstratius) as well as extended reference architecture support, including the Dell PowerEdge C8000 shared infrastructure server solution, high density drives, and 10Gb Ethernet connectivity.
The latest update of Crowbar includes open-sourced RAID and BIOS configuration capabilities. SUSE has integrated Crowbar functionality into SUSE Cloud 1.0, its OpenStack-based private cloud distribution. SUSE Cloud 2.0 is currently in beta and will be available in the fall.
Last but not least, Dell now offers Dell Cloud Transformation Services to help customers assess, build, operate and run OpenStack cloud environments.
On the Hadoop side of things, the Dell Cloudera Hadoop Solution now supports the newest version of Cloudera Enterprise. Updates allow customers to perform real-time SQL interactive queries and Hadoop-based batch processing, which simplifies the process of querying data in Hadoop environments. In addition to that, Dell has tested and certified the Intel Distribution for Apache Hadoop on Dell PowerEdge servers.
Barton George, Director of Developer Programs at Dell
Rob Hirschfeld, Cloud Architect at Dell
Rob recently published a series of blog posts on a very interesting (and important) discussion within the OpenStack community: what should be the core of the OpenStack project? The conversation currently leans toward a pragmatic approach: a particular feature should become part of OpenStack’s core only when it passes a set of mandatory tests (in addition to having at least one reference implementation). The OpenStack User Committee is expected to take part in this process by providing guidance on which tests are required.
For further details go to Barton George’s and Joseph B. George’s personal blogs.
With the release of SharePoint 2013, SharePoint Online in Office 365 has been significantly improved. SharePoint Online, originally considered to be just a shared document repository, has now evolved into a true collaboration platform in which enterprises can run enterprise-wide applications.
I talk to my SharePoint sales team every day about what's going on in the market, what they're hearing from customers on the phone and in the field, and what conversations they are having with partners. I can say that since the release of SharePoint 2013, we're getting more inquiries about Office 365 migration now than we've had in the past two years combined.
If you are looking to migrate your legacy on-prem content to the cloud, it's important to know that there are no viable native tools available. Therefore, you need to evaluate third-party tools to help get the job done efficiently.
Dell Software's Migration Suite for SharePoint supports direct migrations to Office 365 SharePoint from all previous versions of SharePoint, both server and WSS/Foundation. We also support direct migration from Windows file shares and Exchange public folders.
Our systems consulting manager, Ghazwan Khairi, just recorded two new videos showcasing Migration Suite's ability to migrate to SharePoint Online. Both are short, so take a quick look, and if you're interested, we offer a free trial of the tool for migrating up to 1 GB of content.
Good luck with your upcoming migration project!
Video 1: How to migrate sites from SharePoint 2007 to Office 365
Video 2: How to migrate lists from SharePoint 2010 to Office 365
Considering the number of companies that have adopted the ITIL model for IT processes, a solution with a module such as the Change Analyzer is more relevant than ever. The Infrastructure History tab of Foglight for Virtualization Standard's Change Analyzer allows you to report, "Who did what, when?"
This can be useful for companies with strict Change Management policies. I speak with many customers who tell me that the Change Management group is interested in getting visibility into what's going on in the virtual environment. You can easily report changes that occurred, such as:
- VM Removed
- Virtual Disk Added to Datastore
- HA Disabled
- Datastore Removed
- VM CPU Allocation Increased
- VM Memory Allocation Decreased
Change Analyzer can also be used as an auditing mechanism. When there are dozens of people with full admin access to vCenter, resource allocation can get chaotic. Events such as large virtual disks being added and increases in memory allocations can make capacity management and resource tracking extremely difficult. Change Analyzer can help "police" such activities.
Below is a screenshot of the Infrastructure History for a Report Period of "This Year" - here's where you can see the time, risk level, the event, the details, and who performed the operation.
By clicking the Risk Definitions button, you can define which events are considered "Low", "Medium", or "High" risk and customize the risk level of each event type.
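Conceptually, the risk definitions amount to a mapping from event type to risk level. Here is a toy Python sketch of that idea; the event names come from the list above, but the levels assigned are arbitrary examples, not FVS defaults:

```python
# Hypothetical event-to-risk mapping; the levels are arbitrary examples,
# not the product's default settings.
risk_definitions = {
    "VM Removed": "High",
    "Datastore Removed": "High",
    "HA Disabled": "High",
    "Virtual Disk Added to Datastore": "Medium",
    "VM CPU Allocation Increased": "Medium",
    "VM Memory Allocation Decreased": "Low",
}

def risk_of(event: str) -> str:
    """Look up the configured risk level, defaulting to Low for unknown events."""
    return risk_definitions.get(event, "Low")

print(risk_of("VM Removed"))  # High
```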
By utilizing Foglight for Virtualization Standard, you can better monitor your virtual environment, as well as proactively audit and track changes so that they better align with senior management's process goals.
How would you rate yourself for grocery shopping on any particular day? For me, it depends on whether I went grocery shopping prepared or unprepared. For example, if I were coming home from work one Wednesday and made the last-minute decision to stop by the grocery store, I would be unprepared and not have a shopping list. If I remembered and picked up 5 out of 5 items that I needed, I would meet my performance metric for that Wednesday.
On another day, let’s say a Sunday, I go to the grocery store to do my weekly shopping. This time I need about 20 things, so I want to be prepared with a shopping list. If I can finish my grocery shopping in 20 minutes, that is my performance metric for Sunday.
Random vs. sequential data access patterns are like grocery shopping on a Wednesday vs. a Sunday. In a sequential workload, blocks are read and/or written in a sequential manner on the disk (e.g., in applications such as data warehousing). In a random workload, blocks are read and/or written randomly on the disk (e.g., in online transaction processing).
In the storage world, the performance of these two workloads is measured by IOPS in one case and Throughput in the other, just like the performance measure is different while grocery shopping on Wednesday vs. Sunday.
IOPS (input/output operations per second) represents the number of individual I/O operations taking place in a second. IOPS figures can be very useful, but only when you understand the nature of the I/O, such as the I/O size and how random it is.
Throughput (also known as bandwidth) is a measure of data volume over time—in other words, the amount of data that can be pushed or pulled through a system per second. Throughput figures are therefore usually given in units of MB/sec and are more meaningful for sequential workloads since blocks are accessed serially.
Throughput = IOPS x I/O size
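To see why the two workload types are measured differently, here is a minimal sketch of the formula in Python, using hypothetical workload numbers:

```python
# Throughput = IOPS x I/O size.
# The IOPS and I/O sizes below are hypothetical, for illustration only.

def throughput_mb_per_sec(iops: float, io_size_kb: float) -> float:
    """Convert an IOPS figure and an I/O size in KB to throughput in MB/s."""
    return iops * io_size_kb / 1024.0

# OLTP-style workload: many small random I/Os (e.g., 8 KB pages).
print(throughput_mb_per_sec(5000, 8))    # ~39 MB/s despite 5000 IOPS

# DW-style workload: fewer, larger sequential I/Os (e.g., 512 KB reads).
print(throughput_mb_per_sec(400, 512))   # 200 MB/s from only 400 IOPS
```

The OLTP workload performs far more operations yet moves far less data, which is why OLTP systems are typically sized by IOPS while DW systems are sized by throughput.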
The most common database application workload models are online transaction processing (OLTP) and data warehouse (DW)/Decision Support Systems (DSS). SQL Server DW workloads differ significantly from traditional OLTP workloads in how they should be tuned for performance because of the different I/O patterns inherent in both designs—just like grocery shopping.
Below is a table that summarizes some of the most important differences between OLTP and DW workloads.
DW applications are typically designed to support complex analytical query activities using large data sets. The queries executed on a DSS database typically take a long time to complete and usually require processing large amounts of data. A DSS query may fetch millions of records from the database for processing. To support these queries the server reads large amounts of data from the storage devices. OLTP applications are optimal for managing rapidly changing data. These applications typically have many users performing transactions while at the same time changing real-time data. Although individual data requests by users usually reference few records, many of these requests are being made at the same time. Examples of different types of OLTP systems include airline ticketing systems, banking/financial transaction systems, and web ordering systems.
Since DW workloads differ significantly from OLTP workloads, it is important to understand the performance criteria for each, and then properly design, size, and deploy Microsoft SQL Server on the right storage platform. Hence, careful planning prior to deployment is crucial for a successful SQL Server environment.
Maximizing SQL Server performance and scalability is a complex engineering challenge, as I/O characteristics vary considerably between applications depending on the nature of their access patterns. Several factors must be considered when gathering storage requirements before arriving at a conclusion. A key challenge for SQL Server database and SAN administrators is to effectively design and manage system storage, especially to accommodate performance, capacity, and future growth requirements. At Dell, engineers perform benchmark testing to provide storage best practices and sizing guidance that help you plan before you deploy these applications in your environment. Based on this benchmark testing, we recently published two performance-focused white papers covering SQL DW and OLTP workloads on EqualLogic PS6110 and PS6100 storage arrays, respectively. To get a deeper understanding of how EqualLogic storage arrays performed with these applications, see these detailed best practices documents:
Best Practices and Sizing Guidelines for Transaction Processing Applications with Microsoft SQL Server 2012 using EqualLogic PS Series Storage and
Best Practices for Decision Support Systems with Microsoft SQL Server 2012 using Dell EqualLogic PS Series Storage Arrays.
Visit the EqualLogic Technical Content page for a collection of EqualLogic best practice whitepapers.
You can follow me on Twitter @Magi_Kapoor for more SQL and EqualLogic discussions.
By Kristina Kermanshahche, Chief Architect, Health & Life Sciences, Intel Corporation. Posted by Christine Fronczak

A few weeks ago I was at an industry conference and heard a colleague engaged in cancer research talk about personalized medicine and clinical analytics, and how these two important aspects of healthcare need to come together to benefit patient care. “We have the will to do this,” he said. “But do we have the way?”

After seeing Intel work closely with several life sciences customers and technology leaders recently, I believe we do have the way. Here are a few examples:
A key takeaway from Intel’s work with the life sciences industry is watching the dramatic decrease in the amount of time needed to gather results. Intel’s collaboration with TGen and Dell has focused on this aspect of genomic research. We optimized the analytics pipeline to the point where a sequencing test that used to take seven days to evaluate can now be done in four hours. This is hugely important because clinicians now have the ability to sequence multiple times during the course of a patient’s treatment, closely monitor how that patient responds to personalized protocols, such as developing *** resistance, and perform any necessary treatment interventions within a clinically meaningful timeframe. For a cancer patient, this time savings could mean the difference between life and death. That is the way forward.

At the end of the day, there is little doubt that genomics research is positively impacting clinical treatments. Analyzing genomic information and incorporating the data into practice is happening now. Not five or 10 years from now, but today. This translates into actionable clinical care and leads to efficient treatment plans. That is the way forward.

What questions do you have about genomics, personalized medicine and clinical analytics?

For additional information: http://www.dellhpcsolutions.com/dellhpcsolutions/static/genomics.html
Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP community. The number of MVPs keeps growing, and I hope you enjoy their posts. To all MVPs: if you’d like me to add your blog posts to my weekly compilation, please send me an email (email@example.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!
What Is Windows Server 2012 Hyper-V Live Migration? by Aidan Finn
Some More Background on Windows Update KB2855336 by Hans Vredevoort
House Keeping In The Cluster Aware Updating GUI by Didier van Hoye
Deploy Roles Or Features To Lots Of Servers At Once by Aidan Finn
Azure Premium for SQL Database Preview by Toni Pohl
Constraining SMB Powered Redirected IO Traffic To Selected Networks In A WS2012 Hyper-V Cluster by Aidan Finn
Windows Firewall On Hyper-V Management OS (Host) Has Nothing To Do With Virtual Machines by Aidan Finn
Hyper-V Virtual Fibre Channel Design Guide published (in German) by Daniel Neumann
Extended Hyper-V Replica–Windows Server 2012 R2 by Lai Yoong Seng
Microsoft Virtualisierungs Podcast, Episode 31 – Live Migration in WS2012 R2 (in German) by Carsten Rachfahl
Office 365 Tip: Linking on the Page and in the Navigation (in German) by Kerstin Rachfahl
More Office365 Midsize would be sold if….. by Nick Whittome
#PSTip How to convert words to Title Case by Shay Levy
Using PowerShell to Modify DCOM Launch & Activation Settings by Keith Hill
System Center Core
What Is Microsoft System Center? by Damian Flynn
SC2012 R2 Service Provider Foundation, Windows Azure Pack & Service Management Automation Installation in German by Daniel Neumann
Update Rollup 3 For System Center 2012 Service Pack 1 by Aidan Finn
Windows Server Core
Some ODX Fun With Windows Server 2012 R2 And A Dell Compellent SAN by Didier van Hoye
Getting Started With DataOn JBOD In WS2012 R2 Scale-Out File Server by Aidan Finn
Windows Server 2012R2 IOPS resource metering #Hyper-v #ws2012r2 #winserv #msftprivatecloud by Robert Smit
Windows Server 2012R2 Grant access to hyper-v VM’s #Hyper-v #ws2012r2 #winserv #msftprivatecloud by Robert Smit
Windows Server 2012R2 RDS Virtual Desktop Concurrency Setting #Hyper-v #ws2012r2 #winserv #msftprivatecloud by Robert Smit
The VMworld schedule builder is now live for VMworld US. Remember, you have to build your schedule ahead of time, and popular sessions sell out quickly.
I wanted to highlight the session David Davis and I are presenting, called Mythbusting Goes Virtual. I first did this presentation in 2011 with Eric Sloof, and it was very popular because it was funny, different, and educational.
The entire content is brand new, so add it to your schedule before it’s sold out. We are also open to submissions of myths to include in this presentation or an upcoming version. Please leave your myth in the comments section below.
VSVC5353 - Mythbusting Goes Virtual (Monday, Aug 26, 2:00 PM - 3:00 PM)
Some things never change, or do they? vSphere is getting new and improved features with every release. These features change the characteristics and performance of the virtual machines. If you are not up to speed, you will probably manage your environment based on old and inaccurate information. The Mythbusting team has collected a series of interesting hot topics that we have seen widely discussed in virtualization communities, on blogs and on Twitter. We’ve put these topics to the test in our lab to determine if they are a myth or not.
David Davis - Author, Blogger, Speaker, www.TrainSignal.com
Mattias Sundling - Evangelist, Dell Software
Other Dell sessions:
STO5448 - Dell Solutions for VMware Virtual SAN (Monday, Aug 26, 5:30 PM - 6:30 PM)
In this session, we will present a high-level overview of how Dell and VMware are partnering together to bring the most optimized hardware, software, and systems management together for deploying and managing VMware Virtual SAN on Dell PowerEdge Servers.
Bryan Martin - Senior Product Manager, Dell
Sheetal Kochavara - Sr Product Manager Storage Virtualization, VMware
VAPP6124 - Automating VMware Cloud and Virtualization Deployments with Dell Active Infrastructure (Monday, Aug 26, 12:30 PM - 1:30 PM)
Whether it’s deploying a VMware vSphere based virtualization farm or an automated private cloud with vCloud, weeks or even months are spent designing, architecting, and deploying the environment. Today, in most data centers, the processes required to configure server, storage, networking, and software are often manual, time-consuming, and prone to human error. Dell’s vision of the next-generation data center is to enable an IT-service-centric, automated data center through Dell Active Infrastructure. With pre-integrated converged solutions optimized for VMware vSphere environments, and an embedded converged management platform that allows template-based automation and orchestration of physical and application infrastructure, Dell and VMware are enabling customers to accelerate time to value, increase efficiency, and improve quality. In this session, you will learn how Dell has designed an optimized VMware stack for virtualization and private cloud, and how Active System Manager is integrated with VMware to automate the design, deployment, and management of virtualization and private cloud environments. The session will also include a demonstration of how Active System Manager can automate the provisioning and configuration of VMware environments.
Aaron Prince - Technical Marketing, Dell
Ganesh Padmanabhan - Director, Product Marketing, Dell, Inc
EUC5870 - Carolina Farm Bureau Insurance Company Enhances Customer Experience with VMware View and Dell (Monday, Aug 26, 5:30 PM - 6:30 PM)
Cayse, NC-based Carolina Farm Bureau Insurance Company (CFBIC) moved to a desktop virtualization environment based on VMware View 5.0 and Dell Wyse P25 zero clients to better serve the needs of a distributed network of agents across a number of states. Facing issues such as network latency, application compatibility, and security that affected customer service and caused frustrated agents to lose business to online players, CFBIC needed to upgrade its IT infrastructure so that employees could securely access mission-critical applications in a fraction of the time, provide accurate policy information, and process claims.
Dave Rubetti - Infrastructure Architect, Carolina Farm Bureau Insurance Company
And as always, please stop by our booth for exciting demos, see you all at VMworld!
Happy System Administrator Appreciation Day!!!
Systems Administrators, today is your day!
You work hard all year round without proper recognition for the important work you do.
To help right this wrong, we created these lab-gear-themed tributes just for you. Enjoy these animated GIFs, and keep reading to learn about a Dell KACE SysAdmin Day contest that could win you a brand new Alienware laptop!
At Dell TechCenter, no matter what day it is, we're always here to help Systems Administrators. Visit TechCenter for all things IT and Dell. We're here to interact with you and share best practices, white papers, how-tos, demos, and more.
Check our videos, blogs, wiki articles, forums, chats, reference architectures, configuration guides and more at DellTechCenter.com.
Happy SysAdmin Day from the guys in the DellTechCenter.com lab!
Need more appreciation? Check out this vintage Dell TechCenter SysAdmin Day card!
Finally, win a Dell Alienware laptop courtesy of Dell KACE!
Dell KACE SysAdmin Day 2013 Twitter Contest
Enter the contest by following these 3 easy steps: (1) follow @DellKACE, (2) tweet us a picture of your IT desk, even if it looks like the pile of networking cables above, and (3) include the hashtag #ITMess4Success.
Win prizes like an Alienware laptop and 5 SysAdmin Survival Kits ($100 worth of IT awesome: a 145-piece computer toolkit, a 64GB USB drive, a Nerf gun, IT Ninja swag, and more!)
CONTEST ENDS THIS FRIDAY (SysAdmin Day) at 2PM Eastern / 1PM Central / 12PM Pacific!
For more information on the contest and to see examples, check out: http://dell.to/14U2L8i. For legal rules, check out: http://dell.to/193pDWY.
Inktank, the company behind Ceph, is hosting Ceph Days in multiple cities around the globe - a great opportunity to learn about its cloud storage technology.
What is Ceph?
Ceph is a fully open source, distributed object store, network block device, and POSIX-compatible distributed file system.
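For a taste of the object store interface, here is a minimal sketch using Ceph's librados Python bindings; it assumes a reachable cluster, a ceph.conf at the usual path, and an existing pool named "data" (both the path and the pool name are placeholders):

```python
import rados  # python-rados, Ceph's librados bindings

# Connect to the cluster; the config file path is a placeholder.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on an existing pool ("data" is a placeholder name).
ioctx = cluster.open_ioctx('data')

# Write an object into the pool, then read it back.
ioctx.write_full('hello-object', b'Hello, Ceph!')
print(ioctx.read('hello-object'))  # b'Hello, Ceph!'

ioctx.close()
cluster.shutdown()
```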
What are Ceph Days?
Ceph Days are full-day events dedicated to learning about the power of Ceph. You will meet and hear from Sage Weil, the creator of Ceph and CTO of Inktank, along with key community members and storage experts, on how Ceph is transforming the future of storage. There is also a hands-on Ceph installation workshop.
Whom are you targeting with these events?
Some of the titles we are expecting include: Cloud Architect, Data Center Manager, Data Engineer/Architect, Systems Engineer/Architect, DevOps Engineer/Architect, and Solutions Architect.
What can people learn at Ceph Days?
Where do you host these events?
How much do you charge?
Where to find more information?
Please go to: http://www.inktank.com/cephdays/