Out with the Old, In with the New
Is it easy for a company to throw out its old or existing backup and recovery infrastructure and bring in a new one? Some vendors are certainly pushing that as a viable option. In practice, however, it is not realistic for many reasons. In many organizations, backup and recovery infrastructure and operation have been built up over long periods of time. In today’s world, legacy products and solutions coexist with new, highly virtualized applications and widely distributed IT infrastructure.
Now companies must manage and safeguard their data across physical, virtual, and cloud environments. As a result, many firms find it very difficult, if not impossible, to modernize and optimize their backup and recovery environment. We see this time and time again. Exacerbating this trend is the difficulty of adequately predicting expansive data growth at all levels of an organization. Much of that data growth occurs outside the confines of the data center and is unstructured.
Organizations face significant challenges in reconciling existing practices and policies for data protection and recovery with new application deployments, increased virtualization, and the march to the cloud. The cloud (public, private, or hybrid) has been a transformative mechanism for data protection and recovery. Furthermore, the cloud can help companies build out business continuity or disaster recovery (DR) strategies at a much lower cost. Additionally, archiving to the cloud is now a viable option.
So, how do purpose-built backup appliances help organizations meld their old and new data protection and recovery strategies in the face of unabated data growth, expanding virtualized environments, and the push to the cloud? First and foremost, let’s outline why companies should buy an appliance solution rather than cobbling one together from software, servers, and storage they currently own.
Why buy a backup appliance versus building one with backup software, a server, and storage array? Here’s some sound and prudent advice to take into account when considering a do it yourself (DIY) approach:
A backup appliance alleviates all of the challenges listed above and provides the following benefits:
There is a plethora of backup appliance choices in the marketplace today. Nevertheless, not all appliances are the same. In short, many of the vendors building appliances today are using proprietary software, hardware, and cloud components. In many cases, an organization will need to purchase additional SKUs or options to build out a comprehensive backup and disaster recovery strategy. Ironically, this opens the door to another set of challenges and can be more costly to support long-term. A company evaluating a backup appliance needs to consider whether the vendor can deliver, support, and maintain the solution long-term. Not many vendors have the reach or resources to build out their own IP across software, hardware, and storage technologies.
You’re probably not looking forward to migrating off of Windows Server 2003, but you’ll be better off once you’ve done it.
It’s like that old car that somebody holds on to for 27 years and can’t bear to get rid of. The paint is faded, the seats are shot, it doesn’t run efficiently anymore, and no matter how loud the radio plays, the road noise and rattles are louder. And it’s no longer secure – you can’t lock the doors anymore and the ignition switch is so worn that you have to use a screwdriver to turn the engine over.
“But it knows me,” the owner says. “Why shouldn’t I keep using it?”
Alone, But Not Alone
Of course the car doesn’t really “know” him, any more than Windows Server 2003 “knows” you. In the same way that the manufacturer no longer supports the car, Microsoft no longer supports Windows Server 2003, which means there is no official help left to keep it running.
That means you’ll be alone when something goes wrong, and even in the federal government you’re running out of company.
But you’re not alone in planning to migrate off of Windows Server 2003. In July 2015, Market Connections conducted a survey to find out how federal agencies are managing their Windows 2003 server migrations. Of the 200 respondents, 44% work in defense agencies and 56% in civilian agencies.
As you can see in the upper graphic, 49% of agencies do not plan to run Windows Server 2003 after support ends (which it did four months ago). But it’s worrisome that almost one third do plan to use it. I hope you’re not among them, holding onto that dilapidated car and trying to get another year out of it.
The lower graphic is more encouraging, though, in that 55% said their Windows Server 2003 upgrade plan is in process and 34% said it was complete. You should either stay away from those three-percenters who have no plan to upgrade or do your best to find them and persuade them otherwise.
What Can Go Wrong? Lots, If You Don’t Prepare
I’ve managed and worked on migration projects going back to the days of Windows NT 3.51. Even agencies that overcome the biggest barriers to Windows Server migration – applications from other companies, lack of budget/funding, the potential for data loss and worries about downtime – can still hit big snags as their projects progress. Whether in the government or the commercial sector, many customers have a difficult time accurately assessing their environment at the start of the project. When you don’t get that right, the rest can only become more difficult.
We look at ZeroIMPACT migration – the kind that causes the fewest headaches for sysadmins, users and management – as a methodology with four phases or pillars:
We find that the lack of #1 causes huge issues for #2, #3 and #4. So even when you are determined to get rid of the old car, if you don’t prepare for your move to the new one, you’re in for a bumpy ride.
Stay tuned for more posts in this series. I’ll go over the four pillars in more detail and show you the logical progression from one to the next. Meanwhile, have a look at the Market Connections white paper, Windows Server Migration – How to Achieve a ZeroIMPACT Migration, and see where you stand among your federal government colleagues on the spectrum of Windows Server 2003 migration.
About Jeffrey Honeyman
Jeff Honeyman manages messaging and content for government and education for Dell Software. He is also a saxophone and clarinet player and science fiction reader.
Follow #ThinkChat on Twitter this Friday, December 4, at 11:00 AM PST, for a live conversation exploring the impact of evolving technology on the retail experience!
Join us for this December/holiday-month tweet-up and bring your retail experiences to share with all of us. In the last 10 years, people have had an unprecedented retail experience of evolving with the technology: from Google Express and Amazon’s virtual pantry, which let shoppers buy groceries at home and have them delivered, to “showrooming,” where customers go to brick-and-mortar stores to feel, experience, and demo products but do all the actual buying online. How has the evolution of retailing affected your gift giving or buying behavior? Are you an online retail consumer? Or do you still prefer to go to your local Fry’s Electronics store to pick up a stocking stuffer or two? More interestingly, where do you see this market evolving? How can retailers ride these waves of change? Bring your ideas to the tweet-up to share and discuss.
Join Shawn Rogers (@ShawnRog), marketing director for Dell Statistica, and Joanna Schloss (@JoSchloss), BI and analytics evangelist in the Dell Center of Excellence, for this month's #ThinkChat as we conduct a community conversation around your thoughts and real-life experiences!
Questions discussed on this program will include:
Where: Live on Twitter – Follow Hashtag #ThinkChat to get your questions answered and participate in the conversation!
When: December 4, at 11:00 AM PST
The fundamentals of systems management have changed, so you’re faced with managing and securing a growing number of devices, a variety of operating systems and multiple types of users, in addition to your traditional systems management tasks. Despite this acceleration in the scope, complexity and speed of change in your environment, your IT budget most likely remains flat or gets reduced, requiring you to do more with less.
Doing More with Less
So, when one organization is able to eliminate IT overtime costs and save one full-time salary annually by automating its anypoint systems management tasks, I like to share the story with other IT pros. First, let me tell you about some of the organization’s systems management challenges. They’re probably similar to the challenges you face every day. While this organization happens to be an educational institution, endpoint management issues are the same whether your organization teaches students, saves lives or manufactures widgets.
Westphal College of Media Arts and Design at Drexel University has an IT staff of five, a director and four technicians, who are responsible for managing 800 PC, Mac and Linux desktops, including performing manual upgrades. The technicians scrambled from machine to machine, sometimes remotely, to install updates to the operating system, browsers, plug-ins and software applications during the one-week break between academic quarters.
No matter how quickly they worked, they were unable to deliver consistent systems management, maintenance and updates across their IT environment. For example, they had no way of ensuring that all machines were running the same version of applications, nor could they easily determine which computers were out of sync with the others.
The team’s approach to remote systems management was to obtain or build installers containing the updates, and then use a variety of tools like PsExec, Active Directory and Apple Remote Desktop to deploy them across the network. Using this approach, it was impossible to report on whether the updates had been successful and on which machines.
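To illustrate the visibility gap, here is a toy Python sketch of the kind of version-drift report the team could not easily produce with their ad hoc tools. The machine names and version numbers are invented for this example; a real systems management tool would collect the inventory automatically.

```python
# Detect which machines are out of sync with the expected application
# versions. Inventory data here is invented for illustration.
expected = {"browser": "42.0", "plugin": "11.2"}

inventory = {
    "lab-pc-01": {"browser": "42.0", "plugin": "11.2"},   # up to date
    "lab-pc-02": {"browser": "41.0", "plugin": "11.2"},   # stale browser
    "lab-mac-01": {"browser": "42.0", "plugin": "10.9"},  # stale plugin
}

def out_of_sync(expected, inventory):
    """Return {machine: [apps whose installed version differs]}."""
    drift = {}
    for machine, apps in inventory.items():
        stale = [app for app, ver in expected.items() if apps.get(app) != ver]
        if stale:
            drift[machine] = sorted(stale)
    return drift

print(out_of_sync(expected, inventory))
# → {'lab-pc-02': ['browser'], 'lab-mac-01': ['plugin']}
```

Even this trivial report answers the two questions the team couldn’t: which machines are out of sync, and with what.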
Needless to say, this manual approach to systems management took a toll on the team’s overtime budget, with the OT payroll inflating to 100 hours during the week between quarters. And, while the technicians were focused on deploying upgrades and patches, they didn’t have time to support the users’ other needs or address new IT initiatives.
The Solution Became Clear
Prompted by these inefficiencies in timing and consistency, as well as a university-wide security initiative to encrypt all computers, the director tasked his team with finding a way to replace its manual processes with an all-inclusive automated solution to anypoint systems management. After listening to their needs, a reseller recommended the Dell KACE K1000 Systems Management Appliance. The team looked at other tools, but after a brief trial, the organization purchased the K1000.
They immediately saw that the KACE appliance addressed their biggest pain point: the software distribution, managed installations and patch management required to keep the desktops up to date and secure. The greatest time savings came with the ability to reuse their work once they loaded a patch or managed installation into the K1000.
They then began expanding their use of the KACE appliance. After several months of success managing installations and scripting remotely, they took a broader view and began consolidating their information systems. Using the K1000’s integrated service desk functionality, they realized flexibility they never had before as they could now create triggers, custom ticket roles and direct connections into inventory that showed all requests associated with each machine. Next, they built custom assets and email alerts in KACE to help them track loaned equipment, so they wouldn’t miss due dates.
Cost Savings for Drexel University
The K1000 Systems Management Appliance was quickly paying for itself. The organization eliminated overtime during break week – from 100 extra hours to finishing a day and a half early with the K1000. According to the IT director’s FTE calculation, the cost savings to his department is equivalent to the annual salary of one full-time IT pro. His department also benefits from compliance with the university’s security initiative. The KACE appliance provides automated patching as well as the reporting tools needed to show that the encryption agent is present on all 800 computers and to assist in documenting that the IT group is in full compliance.
With the K1000, the IT staff can also offer a shorter turnaround time on break fixes. Once IT has identified the problem and verified the fix, IT can deploy it centrally in hours instead of days and make the computer available to users much more quickly than before. The IT director is also seeing the strategic benefit of the KACE appliance as it affords him a comprehensive overview of all 800 desktops.
As this organization discovered, manual or individual point solutions no longer suffice in today’s IT environments. IT professionals must now view anypoint management as an imperative that cannot be ignored and one that needs to be addressed with an all-inclusive solution.
Watch the Full Story
I love to share KACE success stories, but I know you’d rather hear directly from your peers. So I’ve included a link to a 4-minute video featuring Jason Rappaport, director of IT, Antoinette Westphal College of Media Arts and Design, Drexel University, along with some members of his IT team. In the video, they detail how they were able to create a central view of their multi-platform environment, implement reporting on 800 desktops to comply with the organization’s security initiative, and speed application deployment with Dell KACE appliances.
About Stephen Hatch
Stephen is a Senior Product Marketing Manager for Dell KACE. He has over eight years of experience with KACE and over 20 years of marketing communications experience.
“This will definitely solve a problem for me.” Those were the exact words of a visitor to the Dell Security Service Provider booth at IT Nation 2015 in Orlando, FL. The problem? “I’m a VAR and I want to expand by offering security services, but I lack expertise in that area. Can Dell help me grow my business?”
Our answer? An emphatic “Yes.” Dell SonicWALL recently announced a fully managed security service available to VARs without their having to be an MSP. This service is delivered by our select security providers, Solutions Granted and Western NRG, each with proven expertise in using the Dell Global Management System (GMS) to manage firewalls. As an add-on to Dell’s Firewall-as-a-Service (FWaaS) program, the GMS Fully Managed Service gets you all the services needed to manage a firewall: the initial setup and security policy configuration, verification/validation of the setup, off-site firewall backups, immediate security event notifications, recurring specialized and branded security reports, 24x7 technical support, and warranty exchange assistance.
For MSPs who would like to manage their own devices in the Firewall-as-a-Service program using GMS, but don’t want to deploy a GMS server of their own, we have two other tiers of GMS infrastructure to meet their needs. The lower tier grants 24x7 access to GMS in the cloud for the MSP to perform centralized management and monitoring of their FWaaS firewalls along with event notifications and off-site firewall back-ups. The middle tier includes the lower tier and adds regularly scheduled, specialized and branded reporting.
Each of these GMS services complements our FWaaS bundles to provide enterprise-grade network security at an attractive monthly price with no up-front costs and without having to worry about deployment, maintenance, and support of an on-premises GMS system.
Whether or not you elect to add on GMS services, Dell’s Firewall-as-a-Service program offers our complete portfolio of next-generation firewalls backed by 24x7 support and a license for GMS. All FWaaS firewalls are loaded with a comprehensive suite of top-rated security services that includes anti-virus, anti-spyware, intrusion prevention, application intelligence and control, and web content filtering. These services are automatically updated with countermeasures to threats collected from the 1 million sensors in the Dell Global Response Intelligent Defense (GRID) network.
To learn more about the benefits of the Dell Firewall-as-a-Service program, go to A Winning Partnership and read about our partnership experiences with Speros Technology and Hi-Tech Computers. I’m confident that, like Speros, Hi-Tech, and our visitor at IT Nation, you will experience the same close partnership and success when you join Dell’s FWaaS program and provide your customers with the protection they want in today’s cybersecurity environment.
About Wilson Lee
Wilson Lee is the product line manager for Policy Management and Reporting solutions at Dell SonicWALL. When he’s not working to keep your network secure, Wilson enjoys raising a family, teaching ESL, and daydreaming about tennis.
When you think about locking down a network with a next-generation firewall, in most cases the thinking goes straight to traffic from the Internet to the local area network (LAN). That may be adequate if you only have hard-wired desktop clients. However, what if the network includes servers that need inbound access from the Internet, or a wireless network? What steps can you take to protect a network that’s a little more sophisticated?
Let’s look at an example of a small network where the user has a few desktop clients connected to the physical LAN, wireless clients, and a storage server. For this specific use case, the network segmentation is set up as follows: a LAN network holds all of the desktop clients, a wireless LAN (WLAN) network serves the wireless clients, and a demilitarized zone (DMZ) holds the storage server.
From the LAN, clients are allowed to get to the Internet, but access to the other network segments is blocked. This includes the default policy to block all incoming access from the WAN or Internet.
Wireless users can get to the Internet but are blocked from accessing any of the other network segments. To reach another segment, a wireless user must first authenticate to the firewall. Once authenticated, each wireless user can gain access to the other network segments as needed. This was done to increase security on the WLAN and prevent unauthorized access to the other network segments.
Finally, on the storage server segment, the default policy is to block access to all other network segments. This ensures that if the storage server were compromised through a vulnerability in its software, a hacker could not gain access, and malware could not spread, to other network segments on the LAN or WLAN. For WAN access, all traffic is blocked except for a specific set of ports that allows the storage server’s software to be updated automatically.
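The segmentation described above amounts to a default-deny rule table between zones. Here is a small illustrative sketch in Python (not SonicWALL syntax; the zone names, rule table, and helper function are invented for this example):

```python
# Default-deny policy table for the example network above.
# Anything not explicitly allowed between zones is blocked.
ALLOW = {
    ("LAN", "WAN"),   # desktop clients may reach the Internet
    ("WLAN", "WAN"),  # wireless clients may reach the Internet
    ("DMZ", "WAN"),   # storage server updates (in practice, specific ports only)
}

def is_allowed(src_zone, dst_zone, authenticated=False):
    """Check a flow against the policy. WLAN users who have
    authenticated to the firewall may cross into other segments."""
    if src_zone == "WLAN" and authenticated:
        return True
    return (src_zone, dst_zone) in ALLOW

assert is_allowed("LAN", "WAN")                       # outbound browsing
assert not is_allowed("WAN", "LAN")                   # inbound blocked by default
assert not is_allowed("WLAN", "DMZ")                  # blocked until authenticated
assert is_allowed("WLAN", "DMZ", authenticated=True)  # per-user access granted
assert not is_allowed("DMZ", "LAN")                   # compromised server contained
```

The last assertion is the whole point of the DMZ: even a fully compromised storage server has nowhere to go.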
Now you may look at this and think it’s overkill for such a small network. However, having been in the security industry for the past 15 years, educating partners and customers on proper network design, I figured it would only benefit my own network security to implement a security design that limits access between network segments.
While I’m not saying that all networks need to have this level of complexity, it is a good idea to think about network segmentation and not put all connected devices on a single segment just because it’s easy. The network segmentation will help to control traffic not only north and south, but also provide controls for traffic going east and west between network segments.
With Dell SonicWALL firewalls it’s possible to create a wide variety of segments using either physical or logical interfaces, or the internal wireless radio if available. Once an interface is defined, you can then apply a zone classification such as LAN, DMZ, WLAN or custom, and from there apply policies to control access between the various segments and limit unauthorized access. For increased security, you can also apply authentication requirements. To learn more about how Dell SonicWALL next-generation firewalls can help secure your network, read the “Achieve Deeper Network Security and Control” white paper.
About Matt Dieckman
Matthew Dieckman, Product Line Manager Network Security has over 15 years of network security experience and has held various product management, marketing and technical roles with Dell Security, SonicWALL, Mistletoe Technologies and Nortel Networks. Matthew holds a B.S. in Business Administration from the University of San Francisco.
Dell Wyse vWorkspace is now available through Microsoft’s Azure Marketplace. It’s easily accessible and takes no time at all to provision a single virtual image that provides a virtual desktop environment for your end users. It’s great for a proof of concept or just to explore the powerful and easy-to-use management tools vWorkspace provides. After deploying the vWorkspace Azure trial, you can immediately deploy RDSH-based Windows desktops and seamless applications to nearly any endpoint.
Below are just some of the benefits provided through this offering:
o With vWorkspace Automated Configuration, easily connect your users to their virtual desktops
o Using the Azure Portal, create an Azure virtual machine from a template in minutes
o Your users can obtain a configuration and connect to their virtual desktops using only their email address (no complicated settings)
o Deliver Azure-hosted Windows virtual desktops and applications to nearly any device type
o Only pay when your Azure virtual machine is on and being used
And soon, with Server 2016, you will be able to take advantage of Azure Stack and deploy the vWorkspace Azure Trial Offer into your own datacenter or leave it in Azure where it can be accessed from anywhere. Together, vWorkspace and Azure provide great capabilities.
Where can you access the offer?
The vWorkspace Azure trial can be accessed here. It provides easy step-by-step instructions on how to configure the image for your organization’s use.
How much does it cost?
The vWorkspace Azure Trial Offer uses a Bring-Your-Own-License model and will require that you provide RDS SALs to honor Microsoft licensing requirements. However, vWorkspace will honor the first five concurrent connections free of charge.
Where can you find documentation?
Quick Start documentation is attached to this post. You can find additional documentation such as the administration guide and release notes for vWorkspace 8.6 at documents.software.dell.com.
Written by Kris Piepho, Dell Storage Applications Engineering
For customers looking to migrate data from a PS Series array to an SC Series array, Storage Center 6.7 now includes the Thin Import feature.
Thin Import works at a block-level and uses synchronous replication to import data from PS to SC Series storage. All blocks on the source LUN are read and then written to the target volume on the SC Series arrays with the exception of zeroed blocks, which are not actually committed to disk. The result is a thin-provisioned volume on the SC Series array.
How does Thin Import work?
Thin Import works in one of two modes: online and offline. In online mode, a destination volume is created on the SC Series array and mapped to the server, and then data is migrated to the destination volume. I/O from the server continues to both the destination and source volumes during the import, so online mode can be used for importing volumes that host mission-critical applications. Offline mode simply migrates data from the source volume to a destination volume; it does not recreate the mapping on the source volume. Online imports tend to take longer than offline because I/O continues to the volume from the server.
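To see why the result is thin-provisioned, here is a minimal Python sketch of the zero-block-skipping idea described above. This is illustrative only (the real feature replicates blocks synchronously between arrays, not Python objects): every source block is read, but all-zero blocks are never committed to the destination.

```python
# Sketch of block-level thin import: copy only non-zero blocks,
# so the destination volume consumes space only for real data.
BLOCK_SIZE = 4096

def thin_import(source_blocks):
    """Return {block_index: data} containing only the non-zero blocks."""
    zero = bytes(BLOCK_SIZE)
    dest = {}
    for i, block in enumerate(source_blocks):
        if block != zero:  # zeroed blocks are read but never written
            dest[i] = block
    return dest

# Three-block source volume: only the middle block holds data.
source = [bytes(BLOCK_SIZE), b"\x01" * BLOCK_SIZE, bytes(BLOCK_SIZE)]
dest = thin_import(source)
print(sorted(dest))  # → [1]; only one block's worth of space is consumed
```

The same principle is why a mostly empty source LUN lands on the SC Series array as a much smaller thin-provisioned volume.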
How long does an import take?
This can vary depending on available bandwidth between the arrays, amount of data to be transferred and the volume workload (in online mode). Another factor that determines the import speed is the location of the destination volume. By default, the import process imports data to the lowest tier of storage. Although writing to faster disks usually means faster import times, it’s a good idea to leave this setting alone because importing directly to Tier 1 could potentially fill all available space in the tier.
How can I do it?
Luckily, everything you need to take care of is covered in the best practices guide. This guide includes key prerequisites for both PS Series and SC Series arrays that need to be completed before starting an import.
For you visual learners, be sure to watch the demo video that accompanies the best practices guide. The video is a great way to see the import process in action.
Good luck and happy importing!
There’s big news in the SharePoint and Office 365 community! Just yesterday, on November 18, 2015, Microsoft released SharePoint Server 2016 Beta 2.
Microsoft first released the SharePoint 2016 IT Preview in August 2015. Yesterday's release is essentially a roll up of updates and enhancements based on customer and partner feedback. You can check out the official Microsoft announcement on the Office Blog.
The Microsoft blog summarizes the updates made in SharePoint Server 2016 Beta 2 since the IT Preview, including:
In fact, many of these features are highlighted in the e-book “What’s New in Microsoft SharePoint 2016 and Office 365”.
As you are learning about or even trying out SharePoint 2016, it’s also time to start thinking about your next migration. With each of the previous major releases —SharePoint 2010 and 2013 — we saw a big wave of migration activity, and we expect the same with 2016. It’s the start of the clock or the kick in the *** for many organizations to evaluate their existing environment and consider whether they will upgrade to the latest and greatest on-premises, or take this opportunity to move to the cloud in Microsoft Office 365.
Now is the time to start planning and testing your migration.
Dell Software Migration Suite for SharePoint is ready to go for SharePoint 2016 Beta 2 today! Dell Migration Suite is built with a future-proof design, leveraging Microsoft’s web-based APIs, that enables us to support any new SharePoint version at the time of release. As such, Migration Suite can be used today in SharePoint 2016 beta labs, and will support SharePoint Server 2016 upon its release.
So we invite you to download a free trial of Dell Migration Suite for SharePoint today! The agentless solution has been proven to help customers and partners reduce migration time by up to 80% or more.
Customer Proof verified by TechValidate.
See for yourself how much time you can save!
Note: Dell does not offer support for Windows Server 2016 at this time. Dell is actively testing and working closely with Microsoft on WS 2016, but since it is still in development, the exact hardware components/configurations that Dell will fully support are still being determined. The information divulged in our online documents prior to Dell launching and shipping WS2016 may not directly reflect Dell supported product offerings with the final release of WS 2016. We are, however, very interested in your results/feedback/suggestions. Please send them to WinServerBlogs@dell.com.
Nano Server is the new Windows Server 2016 installation option which is positioned as a purpose-built operating system designed for cloud applications.
Nano Server TP4, released on Nov 19, 2015, is available as a .wim file in the WS2016 OS media, along with the roles and drivers that can be added to it.
Fig. 1. Inside Nano Server folder on Windows Server 2016 TP4
There are PowerShell deployment scripts available to help automate the VHD image creation process. This VHD can be then used to deploy Nano as a virtual machine on Hyper-V, or for bare-metal OS installation.
Unlike any Windows OS before it, Nano has no GUI and limited local interaction with the OS. Nano is purpose-built to be managed and maintained from a remote management station. Once you log in to the OS, you see a ‘Nano Server Recovery Console’ which provides general information about the installed OS and displays the NIC IP addresses. The two configuration options are Networking and Firewall.
Fig. 2. Nano Login on Dell iDRAC console for Dell PowerEdge R730 XD
Fig. 3. Nano Server Recovery Console on Dell iDRAC console for Dell PowerEdge R730 XD
The primary method to manage Nano Server is Windows PowerShell remoting. You need to set up PowerShell sessions to the Nano OS and run commands over the network. Dell iDRAC provides a great way to monitor and manage the Nano Server. We will look at this functionality in detail, along with ways to create, configure and deploy Nano Server on Dell PowerEdge servers, in a series of blog posts. So stay tuned!
Nano Server Technical Preview 4 download: https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
What's New in Windows Server 2016 Technical Preview 4
Getting Started with Nano Server: https://msdn.microsoft.com/en-us/library/mt126167.aspx
Nano Server on Channel 9: https://channel9.msdn.com/Series/Nano-Server-Team
How to use WDS to PxE Boot a Nano Server VHD: http://blogs.technet.com/b/nanoserver/archive/2015/06/03/how-to-use-wds-to-pxe-boot-a-nano-server-vhd.aspx
For more information on Nano Server, visit Dell Tech Center: Nano Server