Economic recessions come and go (hopefully), but where data is concerned, it's always a bull market. Today there's more data being generated, moved, processed, stored and retained than ever before — and for longer periods of time. In production terms, that makes today's systems administrator a kind of highly trained factory manager.
Your data infrastructure — servers, storage, hardware, software and virtualization — exists to support the applications that drive your business forward. And just like the brick-and-mortar factories that turn out tractors, TV sets and radial tires, your virtualized data center requires scrutiny to maximize efficiency and control costs.
Utilization vs. productivity
You could be running your information factory at maximum capacity, utilizing all your resources, but still have poor productivity. How? If a tire factory turns out a high volume of tires that fail safety standards, that drives down net production. In the same way, if an administrator has to go back and fix and redo things, that takes away from their productivity. Poor use of resources can be one reason why processes fail the first time.
Virtualization isn’t just about consolidation; it’s also about enabling agility. Not everything in your environment should be consolidated, but almost everything could be virtualized. Rethink how you configure your servers and other resources — maybe you can put that database onto a virtual machine for increased agility, flexibility, load balancing, failover and more.
Bottom line: it’s not enough for a component to be highly utilized; it must be productive. What's the quality of service that's being delivered?
The big picture on processes
Not all your servers need to be consolidated the same way. Look at how your workloads are interacting. Are they competing with each other? Do you know where the problems are? Is it the application, the database, the middleware, the file system or the OS? The key lies in gaining visibility into your entire virtual environment — not just the hypervisor, the operating system or the application, but how they collectively work together. Latency-sensitive applications need fast resources, while data-centric workloads need capacity and throughput. It's a bit like a balancing act: the key to staying on the wire lies in understanding your different applications and workloads.
Find a factory manager who never takes a sick day
Creating efficiency in your factory doesn’t always mean cutting costs. Sometimes it means spending money to boost productivity (and profits) down the line. Anywhere you can remove complexity, costs tend to fall. Traditionally, gaining insight into all your different plugins, caching tools, path managers, hypervisors, storage and more required several tools. To achieve all this with one solution, you need a product that’s multi-tiered and scalable — one that can “manage by exception.” Rather than burning resources riding herd over the many VMs in your environment that are functioning fine, a good solution will allow you to set baselines and notify you only when things go awry. It should be able to discover powered-off VMs, zombie VMs, abandoned images, and unused templates and snapshots throughout your virtual infrastructure. Reclaiming these wasted resources can save thousands in license fees, disk space, administrator time and server utilization — and keep your virtual factory profitable through changing market conditions.
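To make the "manage by exception" idea concrete, here is a minimal sketch in Python. It is not how Foglight or any specific product implements this; it assumes a simple, hypothetical VM inventory (the `VM` class and its fields are invented for illustration) and shows the core logic: stay quiet about healthy VMs, and surface only the ones that breach a baseline or look like reclamation candidates.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VM:
    """Hypothetical inventory record for one virtual machine."""
    name: str
    powered_on: bool
    cpu_pct: float          # recent average CPU utilization (0-100)
    last_active: datetime   # last time the VM did meaningful work

def flag_exceptions(vms, cpu_baseline=80.0, idle_days=30, now=None):
    """Return only the VMs needing attention: those above the CPU
    baseline, and powered-off or long-idle 'zombie' VMs that may be
    candidates for reclamation. Healthy VMs produce no output."""
    now = now or datetime.now()
    alerts = []
    for vm in vms:
        if vm.powered_on and vm.cpu_pct > cpu_baseline:
            alerts.append((vm.name, "above CPU baseline"))
        elif not vm.powered_on:
            alerts.append((vm.name, "powered off - reclaim?"))
        elif now - vm.last_active > timedelta(days=idle_days):
            alerts.append((vm.name, "idle zombie - reclaim?"))
    return alerts

# Usage: four VMs, only three of which warrant a notification.
inventory = [
    VM("web01",    True,  95.0, datetime(2024, 1, 30)),
    VM("old-test", False,  0.0, datetime(2023, 6, 1)),
    VM("batch07",  True,   3.0, datetime(2023, 11, 1)),
    VM("db02",     True,  40.0, datetime(2024, 1, 30)),  # healthy: no alert
]
for name, reason in flag_exceptions(inventory, now=datetime(2024, 1, 31)):
    print(f"{name}: {reason}")
```

The design point is the filter itself: an administrator reviews three lines instead of four dashboards, and the thresholds (`cpu_baseline`, `idle_days`) are the tunable baselines the paragraph above describes.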
To learn more, tune in to this on-demand webcast hosted by Greg Schulz from Server StorageIO: Holistically Manage and Solve Virtualization Performance Issues like a Pro