A virtualized environment is everything your organization hoped for and more, right? Almost unlimited resources, easy spin-up/spin-down of virtual servers and a big cost-savings on hardware. When industry experts, analysts and even your colleagues proclaimed it to be true, you quickly charted your course and set sail on the virtual sea.
So why hasn’t your ROI materialized? Why isn’t your virtualization strategy panning out the way you expected? Why is your organization experiencing spikes in both operating expenses (OpEx) and capital expenses (CapEx)?
After speaking with hundreds of customers, we’ve concluded that the answer to these questions rests with one simple fact:
It’s so easy to create and distribute virtual machines (VMs) that companies have become complacent about the need to properly manage them.
As part of this three-blog series, we’ll explain the key concepts you need to optimize virtualization management within your organization.
While you can create and run dozens of VMs per physical server, keep in mind that even though the workloads are virtual, the server and storage resources are not.
VM density is an important metric because you trade it off against performance. If your density is too high, your VMs compete for scarce physical resources, which can lead to poor performance. High density can be a problem, but it’s easier to detect than low density.
If your density is too low, you’re underutilizing your physical resources. The most common symptom is excellent performance, which makes it look as though there’s no problem at all. However, once you edge back toward high density, performance problems begin to appear. Striking that balance is what optimizing virtualization management is all about.
VM size is also a factor in density. Creating immense VMs leads to inefficient sharing of disk space, physical memory and CPU.
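To make the density trade-off concrete, here is a minimal, hypothetical sketch of how you might gauge overcommitment on a host by comparing the vCPUs and memory allocated to its VMs against physical capacity. This is illustrative only (not a Foglight feature), and the host specs and VM sizes are invented for the example:

```python
# Illustrative sketch: estimate resource overcommitment for a single host
# by comparing aggregate VM allocations against physical capacity.

def overcommit_ratio(allocated, physical):
    """Ratio > 1.0 means the resource is overcommitted."""
    return allocated / physical

# Hypothetical host: 32 physical cores, 256 GB RAM
host = {"cores": 32, "mem_gb": 256}

# Hypothetical VM allocations on that host
vms = [
    {"vcpus": 8,  "mem_gb": 32},
    {"vcpus": 4,  "mem_gb": 16},
    {"vcpus": 16, "mem_gb": 64},
    {"vcpus": 8,  "mem_gb": 48},
]

# 36 vCPUs on 32 cores is mildly overcommitted; memory is underutilized
cpu_ratio = overcommit_ratio(sum(vm["vcpus"] for vm in vms), host["cores"])
mem_ratio = overcommit_ratio(sum(vm["mem_gb"] for vm in vms), host["mem_gb"])

print(f"vCPU overcommit: {cpu_ratio:.2f}x")
print(f"Memory overcommit: {mem_ratio:.2f}x")
```

In this invented case the host is slightly CPU-overcommitted while memory sits well below capacity, which is exactly the kind of imbalance that hides behind "everything seems fine" performance.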
Physical sprawl is pretty easy to spot. While rows of servers take up a lot of real estate and cost a lot of money, VM sprawl is less conspicuous and can be more difficult to identify. Almost every virtual data center exhibits some symptoms of VM sprawl:
VM sprawl leads to wasted storage and computing resources in the virtual environment. It’s inconspicuous at first, but it doesn’t remain that way for long and it eventually affects your performance and OpEx.
Read the E-book: An Expert's Guide to Optimizing Virtualization Management
We’re just getting warmed up. In the next two blogs in this series, we’ll take a look at optimizing virtualization management. Meanwhile, download our guidebook, An Expert's Guide to Optimizing Virtualization Management. It highlights five important areas many companies overlook in their virtual landscape where they can usually recover lost ROI and regain the promise of virtualization.
About John Maxwell
John Maxwell leads the Product Management team for Foglight at Dell Software. Outside of work he likes to hike, bike, and try new restaurants.
Finding Work / Life Balance
Remember what your workday used to look like? You’d wake up every morning, commute to the office and start solving problems. Some days the problems couldn’t wait: 6 AM conference calls, email bombardments, systems going down, all before you hit the snooze button. Then, in the evenings, you’d do your best to leave the office at a reasonable hour. Which, you and I both know, didn’t always happen.
But thanks to Dell Software, your workday is now more productive and ends when it should. We get it: work-life balance matters. And when you work more efficiently, you have the time to pursue your passions and do the things you love. In fact, this infographic shows you how!
And now, we want to hear from you and see your smiling faces! So, we’re launching the #ExpectMore Photo Contest,* giving you the chance to show us what you do with the time Dell Software saves you at work.
What Is Your Passion?
Do you use Toad at work to tackle complex databases, then, go home in the evening and fly the drone you and your son built in the workshop?
Do you optimize your VMware storage infrastructure with Foglight for Storage Management, then coach your daughter’s soccer club to victory on the weekend?
Do you handle otherwise daunting email migrations with Migration Manager for Exchange, then mountain bike through the trails of your local state park?
How to Enter
Entry into the Contest is simple.
Each Friday during the Contest Period, we will select the most creative entry. Once selected, we will announce the winner via our @DellSoftware Twitter channel. The winner must then contact us via Twitter to receive a $50 Amazon Gift Card!
Stefanie quickly runs down database issues so she has time for her true passion - skateboarding!
Kris easily finishes his endpoint management heavy lifting so he has time for his true passion - strength training!
Expect more from your software solutions with Dell, and start doing more with your free time.
We appreciate everyone who entered; you made our decision very difficult! Here are the winners:
* NO PURCHASE NECESSARY. Legal residents of the 50 United States (D.C.) 18 years or older. Ends 12/17/2015. To enter and for Official Rules, including prize descriptions visit http://dell.to/1kScbOv. Void where prohibited.
Follow #ThinkChat on Twitter this Friday, December 4, at 11:00 AM PST, for a live conversation exploring the impact of evolving technology on the retail experience!
Join us for this December holiday-month tweet-up and bring your retail experiences to share with all of us. Over the last 10 years, shoppers have experienced an unprecedented evolution in retail technology: from Google Express and Amazon’s virtual pantry, which let people shop for groceries at home and have them delivered, to “showrooming,” where customers visit brick-and-mortar stores to feel, experience and demo products but do all their actual buying online. How has the evolution of retailing affected your gift-giving or buying behavior? Are you an online retail consumer, or do you still prefer to go to your local Fry’s Electronics store to pick up a stocking stuffer or two? More interestingly, where do you see this market evolving? How can retailers ride these waves of change? Bring your ideas to the tweet-up to share and discuss.
Join Shawn Rogers (@ShawnRog), marketing director for Dell Statistica, and Joanna Schloss (@JoSchloss), BI and analytics evangelist in the Dell Center of Excellence, for this month's #ThinkChat as we conduct a community conversation around your thoughts and real-life experiences!
Questions discussed on this program will include:
Where: Live on Twitter – Follow Hashtag #ThinkChat to get your questions answered and participate in the conversation!
When: December 4, at 11:00 AM PST
The fundamentals of systems management have changed, so you’re faced with managing and securing a growing number of devices, a variety of operating systems and multiple types of users, in addition to your traditional systems management tasks. Despite this acceleration in the scope, complexity and speed of change in your environment, your IT budget most likely remains flat or gets reduced, requiring you to do more with less.
Doing More with Less
So, when one organization is able to eliminate IT overtime costs and save one full-time salary annually by automating its anypoint systems management tasks, I like to share the story with other IT pros. First, let me tell you about some of the organization’s systems management challenges. They’re probably similar to the challenges you face every day. While this organization happens to be an educational institution, endpoint management issues are the same whether your organization teaches students, saves lives or manufactures widgets.
Westphal College of Media Arts and Design at Drexel University has an IT staff of five (a director and four technicians) who are responsible for managing 800 PC, Mac and Linux desktops, including performing manual upgrades. The technicians scrambled from machine to machine, sometimes working remotely, to install updates to the operating system, browsers, plug-ins and software applications during the one-week break between academic quarters.
No matter how quickly they worked, they were unable to deliver consistent systems management, maintenance and updates across their IT environment. For example, they had no way of ensuring that all machines were running the same version of applications, nor could they easily determine which computers were out of sync with the others.
The team’s approach to remote systems management was to obtain or build installers containing the updates, and then use a variety of tools like PsExec, Active Directory and Apple Remote Desktop to deploy them across the network. Using this approach, it was impossible to report on whether the updates had been successful and on which machines.
Needless to say, this manual approach to systems management took a toll on the team’s overtime budget, with the OT payroll inflating to 100 hours during the week between quarters. And, while the technicians were focused on deploying upgrades and patches, they didn’t have time to support the users’ other needs or address new IT initiatives.
The Solution Became Clear
Prompted by these inefficiencies in timing and consistency, as well as a university-wide security initiative to encrypt all computers, the director tasked his team with finding a way to replace its manual processes with an all-inclusive automated solution to anypoint systems management. After listening to their needs, a reseller recommended the Dell KACE K1000 Systems Management Appliance. The team looked at other tools, but after a brief trial, the organization purchased the K1000.
They immediately saw that the KACE appliance addressed their biggest pain points: the software distribution, managed installations and patch management required to keep the desktops up to date and secure. The greatest time savings came from the ability to reuse their work once they loaded a patch or managed installation into the K1000.
They then began expanding their use of the KACE appliance. After several months of success managing installations and scripting remotely, they took a broader view and began consolidating their information systems. Using the K1000’s integrated service desk functionality, they gained flexibility they never had before: they could now create triggers, custom ticket roles and direct connections into inventory that showed all the requests associated with each machine. Next, they built custom assets and email alerts in KACE to help them track loaned equipment so they wouldn’t miss due dates.
Cost Savings for Drexel University
The K1000 Systems Management Appliance was quickly paying for itself. The organization eliminated overtime during break week – from 100 extra hours to finishing a day and a half early with the K1000. According to the IT director’s FTE calculation, the cost savings to his department is equivalent to the annual salary of one full-time IT pro. His department also benefits from compliance with the university’s security initiative. The KACE appliance provides automated patching as well as the reporting tools needed to show that the encryption agent is present on all 800 computers and to assist in documenting that the IT group is in full compliance.
With the K1000, the IT staff can also offer a shorter turnaround time on break fixes. Once IT has identified the problem and verified the fix, IT can deploy it centrally in hours instead of days and make the computer available to users much more quickly than before. The IT director is also seeing the strategic benefit of the KACE appliance as it affords him a comprehensive overview of all 800 desktops.
As this organization discovered, manual or individual point solutions no longer suffice in today’s IT environments. IT professionals must now view anypoint management as an imperative that cannot be ignored and one that needs to be addressed with an all-inclusive solution.
Watch the Full Story
I love to share KACE success stories, but I know you’d rather hear directly from your peers. So I’ve included a link to a 4-minute video featuring Jason Rappaport, director of IT, Antoinette Westphal College of Media Arts and Design, Drexel University, along with some members of his IT team. In the video, they detail how they were able to create a central view of their multi-platform environment, implement reporting on 800 desktops to comply with the organization’s security initiative, and speed application deployment with Dell KACE appliances.
About Stephen Hatch
Stephen is a Senior Product Marketing Manager for Dell KACE. He has over eight years of experience with KACE and over 20 years of marketing communications experience.
Written by Kris Piepho, Dell Storage Applications Engineering
For customers looking to migrate data from a PS Series array to an SC Series array, Storage Center 6.7 now includes the Thin Import feature.
Thin Import works at the block level and uses synchronous replication to import data from PS Series to SC Series storage. All blocks on the source LUN are read and then written to the target volume on the SC Series array, with the exception of zeroed blocks, which are not actually committed to disk. The result is a thin-provisioned volume on the SC Series array.
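The zero-block behavior is the heart of what makes the result thin-provisioned. Here is a conceptual sketch of that idea (an illustration, not Dell’s implementation; the block size and sample LUN are invented): only non-zero blocks are committed to the destination, so the target consumes space proportional to actual data, not raw LUN size.

```python
# Conceptual sketch of a thin import: copy a source volume block by block,
# but never commit all-zero blocks, so the destination ends up sparse.

BLOCK_SIZE = 4096            # assumed block size, for illustration only
ZERO_BLOCK = bytes(BLOCK_SIZE)

def thin_import(source_blocks):
    """Return a sparse mapping of block index -> data, omitting zero blocks."""
    target = {}
    for index, block in enumerate(source_blocks):
        if block != ZERO_BLOCK:   # zeroed blocks are skipped, not written
            target[index] = block
    return target

# A hypothetical 5-block source LUN in which blocks 1 and 3 are zeroed
source = [
    b"A" * BLOCK_SIZE, ZERO_BLOCK,
    b"B" * BLOCK_SIZE, ZERO_BLOCK,
    b"C" * BLOCK_SIZE,
]
imported = thin_import(source)
print(sorted(imported))  # [0, 2, 4] -- only non-zero blocks consume space
```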
How does Thin Import work?
Thin Import works in one of two modes: online and offline. In online mode, a destination volume is created on the SC Series array and mapped to the server, and then data is migrated to the destination volume. I/O from the server continues to both the destination and source volumes during the import, so online mode can be used for importing volumes that host mission-critical applications. Offline mode simply migrates data from the source volume to a destination volume; it does not recreate the mapping on the source volume. Online imports tend to take longer than offline imports because I/O from the server continues to the volume.
How long does an import take?
This can vary depending on available bandwidth between the arrays, amount of data to be transferred and the volume workload (in online mode). Another factor that determines the import speed is the location of the destination volume. By default, the import process imports data to the lowest tier of storage. Although writing to faster disks usually means faster import times, it’s a good idea to leave this setting alone because importing directly to Tier 1 could potentially fill all available space in the tier.
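For rough planning, the factors above reduce to a back-of-the-envelope calculation: the non-zero data to transfer divided by the effective bandwidth between the arrays, discounted for overhead and competing workload I/O. This is my own approximation, not a Dell formula, and the numbers below are invented:

```python
# Back-of-the-envelope import-time estimate (an assumption, not a Dell formula):
# time ~= non-zero data / effective replication bandwidth.

def estimate_import_hours(volume_gb, percent_nonzero, bandwidth_mbps, efficiency=0.7):
    """Rough wall-clock estimate in hours. 'efficiency' discounts protocol
    overhead and competing workload I/O (lower it for online imports)."""
    data_gbits = volume_gb * 8 * (percent_nonzero / 100)   # data actually sent
    effective_gbps = (bandwidth_mbps / 1000) * efficiency  # usable link speed
    return data_gbits / effective_gbps / 3600

# Hypothetical case: 2 TB volume, 60% non-zero blocks, 10 GbE link
hours = estimate_import_hours(2048, 60, 10_000, efficiency=0.7)
print(f"~{hours:.1f} hours")
```

Even a crude estimate like this is useful for deciding whether an online import fits inside a maintenance window, since zeroed blocks are never transferred and drop out of the numerator.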
How can I do it?
Luckily, everything you need to take care of is covered in the best practices guide. This guide includes key prerequisites for both PS Series and SC Series arrays that need to be completed before starting an import.
For you visual learners, be sure to watch the demo video that accompanies the best practices guide. The video is a great way to see the import process in action.
Good luck and happy importing!
Note: Dell does not offer support for Windows Server 2016 at this time. Dell is actively testing and working closely with Microsoft on WS 2016, but since it is still in development, the exact hardware components/configurations that Dell will fully support are still being determined. The information divulged in our online documents prior to Dell launching and shipping WS2016 may not directly reflect Dell supported product offerings with the final release of WS 2016. We are, however, very interested in your results/feedback/suggestions. Please send them to WinServerBlogs@dell.com.
Nano Server is the new Windows Server 2016 installation option which is positioned as a purpose-built operating system designed for cloud applications.
Nano Server TP4, released on Nov 19, 2015, is available as a ‘.wim’ file in the WS2016 OS media, along with the roles and drivers that can be added to it.
Fig. 1. Inside Nano Server folder on Windows Server 2016 TP4
There are PowerShell deployment scripts available to help automate the VHD image creation process. This VHD can be then used to deploy Nano as a virtual machine on Hyper-V, or for bare-metal OS installation.
Unlike any Windows OS before it, Nano has no GUI and only limited local interaction with the OS. Nano is purpose-built to be managed and maintained from a remote management station. Once you log in to the OS, you see a ‘Nano Server Recovery Console’ that provides general information about the installed OS and displays the NIC IP addresses. The two configuration options are Networking and Firewall.
Fig. 2. Nano Login on Dell iDRAC console for Dell PowerEdge R730 XD
Fig. 3. Nano Server Recovery Console on Dell iDRAC console for Dell PowerEdge R730 XD
The primary method for managing Nano Server is Windows PowerShell remoting: you set up PowerShell sessions to the Nano OS and run commands over the network. Dell iDRAC provides a great way to monitor and manage Nano Server. We will look at this functionality in detail, along with ways to create, configure and deploy Nano Server on Dell PowerEdge servers, in a series of blog posts. So stay tuned!
Nano Server Technical Preview 4 download: https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
What's New in Windows Server 2016 Technical Preview 4
Getting Started with Nano Server: https://msdn.microsoft.com/en-us/library/mt126167.aspx
Nano Server on Channel 9: https://channel9.msdn.com/Series/Nano-Server-Team
How to use WDS to PxE Boot a Nano Server VHD: http://blogs.technet.com/b/nanoserver/archive/2015/06/03/how-to-use-wds-to-pxe-boot-a-nano-server-vhd.aspx
For more information on Nano Server, visit Dell Tech Center: Nano Server
We've all been to college at one time or another. Some of you reading this post are still in school even now. And the majority of us are probably still paying off student loans.
Speaking of college costs, maybe you have already learned about Dell Statistica's response to students in need. Our answer: FREE academic software!
Major Costs Add Up at School
Ponder your college years for a moment. Good times and challenging courses. But let’s focus on the struggle of the whole college-experience ROI. What are your top complaints in this regard? If they relate to costs, you have plenty of company. A nationwide campusgrotto.com survey of higher education students reveals a list of popular complaints, a measurable percentage of which stem from costs:
Okay, we can't help you with the cafeteria food, but you'll notice the other complaints are indeed about costs.
Additionally, a plurality (39%) of respondents to Princeton Review's recent "College Hopes & Worries Survey" said their biggest concern is the level of debt incurred to pay for a degree.
It comes as no surprise that everything at college costs more money than we like, and it all adds up. Consider textbooks alone, the bane of every undergrad out there. Costs vary greatly from one major to the next, but assuming new book purchases are required, a study based at the University of Virginia indicates that a statistics major is neither the most nor the least expensive when it comes to textbooks. However, the study did find that the average statistics textbook costs about $110, and students must buy multiple textbooks throughout that major’s curriculum. The most expensive statistics book topped out at $342.
And, as if that weren’t enough…students in the data sciences get to tack on the cost of basic analytics software, too. It's like buying a virtual textbook on top of the physical textbooks.
What is the skills gap?
Meanwhile, though it may vary from industry to industry, the data scientist skills gap is real. As long ago as 2011, McKinsey & Company was reporting that there would be a shortage of the talent organizations need to take advantage of big data. Barring some kind of change in the human resources supply chain, they predicted that by 2018 “the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” This is great news for students looking to break into this career path.
Change that Matters
So, our free academic program in North America is the kind of “change” we can apply readily to impact that human resources pipeline at the university level. It may not sound like much, but remember that every little bit helps when we are talking about reducing the financial burden of students seeking a strong foundation with skills-based training and key software tools in order to increase their value in the competitive data science field.
Think about it: The world needs more statistics and data science graduates to handle the deluge of big data challenges that are developing in every industry. Would the cost of just one more textbook—or, in this case, an analytics software package required by the professor—make or break the average student's ability to pursue the degree? Why risk it? We'll just give it away and let the chips fall where they may! If we choose to give away some software to help put more problem-solvers into the world’s workforce, then that's what we will do.
And the value of such a program? Priceless! Not only is the free academic bundle a boon to the study of analytics in North American academia, but because it will expand the pool of graduates qualified for real-life analytical pursuits across industries, the effects of this program are immeasurable, with potentially world-changing impact. You just never know where the next genius case study will originate. Truly, the gift that keeps on giving.
Read the Oct/Nov Statistica Newsletter >
Shawn Rogers, Joanna Schloss, and Paul Hiller joined the #ThinkChat to discuss IoT’s effect on Manufacturing. What does it mean to your business? How is IoT used day to day? What barriers exist in the adoption of IoT in manufacturing? And More! Check out the recap below of some of the highlights!
Thanks for joining the conversation, and don't forget to mark your calendar for our next #ThinkChat on December 4th at 11:00 AM PST. We'll be discussing how technology has transformed retailing during the holidays! Follow #ThinkChat on Twitter and join in!
View the whole #ThinkChat on Twitter
What does IoT mean to you and your business?
From the Community:
On the day to day, how do you use IoT?
Are there any barriers to the adoption of IoT in manufacturing?
What exactly is connectivity? Is the IoT always “on” or can things connect themselves temporarily?
What is the difference between “Internet of Things” and “many things with communicative apps”?
For more information on IoT in Manufacturing, download our White Paper, "Key Considerations for Analytics Platforms in Regulated Manufacturing". See you December 4th for our next #ThinkChat!
“Rack ‘em and stack ‘em” has been a winning approach for a long time, but not without its limitations. A generalized server solution works best when the applications running on those servers have generalized needs.
Enter “Big Data.” Today’s application and workload environments can be required to process massive amounts of granular data and thus often consist of applications that place high demands on different server hardware elements. Some applications are very compute-intensive and place a high demand on the server’s CPU, while others in the same environment are tasked with unique processing requirements performed on specialized graphics processing units (GPUs).
Whether it is customer, demographic or seismic data, or a whole host of other uses, the number crunching required across the suite of applications can result in processing demands that are radically different from those of prior years. Enter hybrid high performance computing. These systems are built to serve two masters, CPU-intensive applications and GPU-intensive applications, delivering a hybrid environment where workloads can be optimized and run times reduced through ideal resource utilization.
The results of Hybrid CPU/GPU Computing adoption have been impressive. Just a few examples of how Hybrid CPU/GPU Computing is delivering real value include:
You can learn more about leveraging hybrid CPU/GPU computing in this whitepaper.
When you’re up to your ears in a database upgrade project, like an upgrade to Oracle 12c, you begin to look forward to little things.
Like going a few hours with no system notifications or text messages from your DevOps team.
Like buying lunch in the cafeteria and actually getting to eat it there, instead of at your desk.
Like getting home in time to watch Jeopardy! with the kids.
Of course, you could spend an entire career performing upgrades and running migration projects, and never hit all three of those, or any of your favorite little things. But looking forward to them is always the light at the end of the tunnel.
If you want to achieve some of those little things while you’re in the middle of your migration or upgrade project, have a look at our new e-book, Simplify Your Migrations and Upgrades. We wrote it to give you some high-level perspective before you get bogged down in the project itself.
Part 1 takes you through some of the basics on avoiding risk, downtime and long hours, including the five common pitfalls that afflict most migration projects:
Keep in mind the best ways to avoid those pitfalls:
Have a look at the e-book for more ideas on structuring your migration and upgrade projects. A half-hour of Jeopardy! with the kids beats a half-hour of DNS changes and node restarts any day.
About Steven Phillips
With over 15 years in marketing, I have led product marketing for a wide range of products in the database and analytics space. I have been with Dell for over 3 years in marketing, and I’m currently the product marketing manager for SharePlex. As data helps drive the new economy, I enjoy writing articles that showcase how organizations are dealing with the onslaught of data and focusing on the fundamentals of data management.