Our community is talking about the new Dell Technologies. Join the discussion in the Dell EMC Community Network:
This short video is part of Shawn's #ThinkChat video series with Tom Davenport, professor of management and IT at Babson College. Here the two briefly discuss how the landscape is evolving and how a growing community of advanced analytic users and enablers are fueling change. It’s the age of the analytic amateur and the semi-pro!
John takes a look at the many ways organizations can benefit from a decentralized, collaborative approach to analytics, an approach made realistically possible for more and more companies with the advent of simple, cloud-enabled tools.
Change can be daunting, especially when it involves unfamiliar technology to accomplish daily tasks. So when an entire workforce must migrate to a new software platform after years with legacy code, what kinds of questions do they ask? And how are their fears replaced with curiosity? David Sweenor introduces the first chapter of a three-part e-book describing Dell’s own recent migration.
In his latest article published at www.HealthDataManagement.com, Dr. Hill discusses the technology revolution that will involve predictive analytics in thousands of healthcare applications and workflows, and he shares his perspective regarding the industry's various opportunities, disruptors and hurdles.
The latest release of Dell Command | Monitor (formerly known as Dell OpenManage Client Instrumentation), v9.1, is available now. All products in the Dell Command Suite are listed here.
Dell Command | Monitor (DC | M) enables remote management applications to access client system information, monitor status, or change system state, such as shutting a system down remotely. Through standard interfaces, DC | M (formerly OMCI) exposes key system parameters, allowing administrators to manage inventory, monitor system health, and gather information on deployed Dell enterprise client systems such as Latitude, Dell Precision Mobile, OptiPlex and Dell Precision Workstation.
New features and enhancements in DC | M v9.1 include:
We have been engaged on the forums and have tried to address the major concerns to enhance the customer experience. As always, we would love to hear from you about how to improve the product.
Downloads for DC | M v9.1 are available here.
The Dell TechCenter community has provided great feedback to the OMCI team, so we encourage you to continue that discussion in the OMCI - OpenManage Client Instrumentation forums.
We’ve posted earlier about how hackers get into your systems and steal data from your endpoints, and then how they monetize this stolen information. If you have thousands of unsecured mobile endpoints on your network, hackers have just as many opportunities to breach your constituents’ information.
As a savvy IT pro, you understand that all of your machines must have the most up-to-date security patches — both OS and application — to prevent intrusion. Still, you might be wondering if there is even more you can do to uncover holes in the armor of these endpoints. The answer is decidedly yes! There are vulnerability standards available that can help advance the goal of vulnerability detection. Scanners built upon these standards can give you predictable results, and they are continually updated as the user community at large discovers more vulnerabilities.
One of the most well-known is the Open Vulnerability and Assessment Language (OVAL®). Before the advent of OVAL, there wasn’t a common way for IT administrators to find all software vulnerabilities, configuration issues, programs, and/or patches on their endpoints. Sure, you can and should use a patching tool to make sure all OS security patches are addressed. But, that is only part of the story. With OVAL there is a standard repository for vulnerability tests that is continually updated by the community. The community reviews and vets new definitions before they are added to the repository.
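OVAL definitions are published as structured XML, which is what makes them machine-checkable by any standards-based scanner. As a minimal sketch of what that structure looks like, the snippet below parses a hand-made definitions fragment and lists each definition's ID, class, and title; the XML here is illustrative only, not a real repository entry.

```python
# Sketch: extracting definition metadata from an OVAL definitions file.
# The XML fragment below is hand-made for illustration; real definitions
# come from the community-maintained OVAL repository.
import xml.etree.ElementTree as ET

OVAL_NS = "http://oval.mitre.org/XMLSchema/oval-definitions-5"

sample = """<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5">
  <definitions>
    <definition id="oval:example:def:1" class="vulnerability" version="1">
      <metadata>
        <title>Example browser vulnerability check</title>
      </metadata>
    </definition>
  </definitions>
</oval_definitions>"""

def list_definitions(xml_text):
    """Return (id, class, title) tuples for each OVAL definition."""
    root = ET.fromstring(xml_text)
    results = []
    for d in root.iter(f"{{{OVAL_NS}}}definition"):
        title = d.find(f"{{{OVAL_NS}}}metadata/{{{OVAL_NS}}}title")
        results.append((d.get("id"), d.get("class"),
                        title.text if title is not None else ""))
    return results

for def_id, def_class, title in list_definitions(sample):
    print(def_id, def_class, title)
```

A real scanner evaluates the tests and criteria inside each definition against the endpoint's actual state; the metadata shown here is just the human-readable layer on top.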
At the heart of the community is the OVAL Board which consists of members from industry, academia, and government organizations. OVAL is funded by the office of Cybersecurity and Communications at the U.S. Department of Homeland Security and is the summation of the efforts of a broad selection of security and system administration professionals from around the world.
Often, the question arises: can’t hackers use this information to break into my system? Certainly, any public discussion or availability of vulnerability and configuration information may help a hacker. However, there are several reasons why the benefits of OVAL outweigh its risks.
So if you truly want to decrease your exposure to outside threats, you can be proactive by performing vulnerability scans. Doing them based on OVAL definitions gives you the knowledge that the entire security community has your back.
We’ve been discussing the new security landscape, how it’s affecting IT processes and people, and what can be done to further protect your environment and that of your constituencies. For more information and a helpful list of controls, check out our new white paper: Protecting Your Network and Endpoints with the SANS 20 Critical Security Controls.
About Sean Musil
Sean Musil is a Product Marketing Manager for Dell KACE. He believes the internet should be free and secure.
View all posts by Sean Musil |
by David Detweiler
Congratulations to Team South Africa on their second place finish in the Student Cluster Competition at the International Supercomputing Conference (ISC) in Frankfurt, Germany earlier this month. The students hailing from the University of Witwatersrand narrowly missed three-peating as champions.
The team comprised Ari Croock, James Allingham, Sasha Naidoo, Robert Clucas, Paul Osei Sekyere, and Jenalea Miller, with reserve team members Vyacheslav Schevchenko and Nabeel Rajab. Together, they represented the Centre for High Performance Computing (CHPC) at the competition.
The South African students competed against teams from seven other nations over a sleep-depriving three days. During the competition, the teams were tasked with designing and building their own small cluster computers, and running a series of HPC benchmarks and applications. In addition, the students were assigned to optimize four science applications, three of which were announced before the competition, with the fourth introduced during the event.
The competition was sponsored by ISC and the HPC Advisory Council. Each team was scored based on three criteria:
With young people like team South Africa entering the field, the future of HPC looks brighter than ever. Congratulations on a job well done!
Attend the Dell World Software User Forum to deep dive into Dell’s latest innovations – including the latest Dell One Identity Manager capabilities.
Here is a taste of what's in store for our two IAM-focused sessions:
Dell One Identity Manager - Simplify access review by adopting a real-world, risk-based approach - This session covers how to reduce your exposure while increasing security and compliance by adopting a risk-based approach to this important task.

Managing Cloud Environments with Dell One Identity Manager - Learn how Dell One Identity Manager easily integrates cloud assets, such as MS Azure, OpenStack and AWS, to provide enhanced governance and user-lifecycle, tenant/project and privileged-account management capabilities.

For a complete list of Software User Forum sessions, visit the agenda page. Don’t hesitate - register and get a complimentary pass for a colleague. Your registration includes admission to all Dell World general sessions, the solutions showcase, and the big Opening Night Concert headlined by a star you’ll love.
Reasons You Need to Attend
We’re looking forward to meeting you and your colleagues in Austin!
In the latest issue of Statistica Monthly News (yes, you can subscribe for free), our readers found a link to a webcast that talks all about Statistica’s new partnership with Microsoft, a relationship that produces some incredible hybrid cloud functionality for data analysis using Azure Machine Learning (ML).
We are talking about a hybrid cloud solution whose powerful functionality completely belies its name: azure, a shade of bright blue often likened to that of a cloudless sky. Cloudless? Hardly. The Statistica-Microsoft partnership is all about the cloud!
The fun story in the webcast describes how one website was running an analytics program as an API on Azure. Designed to guess ages and genders of people in photographic images, the site was expecting a few thousand submissions, but it went from zero to 1.2 million hourly visitors within just two days of going live, and up to seven million images per hour. By day six, 50.5 million users had submitted over 380 million photos! Normally, we would hear about sites crashing with such a viral overload. But this site kept humming along even when the action ramped up so dramatically, primarily because Azure scaled dynamically as intended, handling the unforeseen load like a champ.
Think about embedding this kind of cloud access and flexible scalability as a directly callable function inside Statistica—well, that just makes way too much sense, right? But that is what’s happened! Azure ML is really a development environment for creating APIs on Azure, with the intent to enable users to have machine learning in any application, whether that is a web app or a complex workflow driven by Statistica. For instance, you can host your complicated models in the cloud with Azure and run non-sensitive, big data analytics out there—a very practical time saver and money saver. Then you can bring those analyzed results back down to join perhaps more sensitive data and analytics output behind your firewall. You can learn more when you watch our “Cloud Analytics” webcast.
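The hybrid pattern described above, scoring non-sensitive features in the cloud and joining the results back to sensitive data behind the firewall, can be sketched in a few lines. Everything here is illustrative: the records, keys, and the `score_in_cloud` stub are made up, and a real deployment would call an Azure ML web service endpoint rather than a local function.

```python
# Sketch of the hybrid cloud-analytics pattern: only non-sensitive
# features leave the network; scores come back keyed by opaque IDs and
# are merged with sensitive records locally.

def score_in_cloud(batch):
    """Stand-in for a remote scoring call; returns (record_id, score).
    A real implementation would POST the batch to a cloud endpoint."""
    return [(rec["id"], 0.1 * rec["visits"]) for rec in batch]

# Behind the firewall: sensitive data keyed by opaque record IDs.
sensitive = {"r1": {"name": "Alice"}, "r2": {"name": "Bob"}}

# Outbound payload: non-sensitive features only.
outbound = [{"id": "r1", "visits": 7}, {"id": "r2", "visits": 3}]

scored = dict(score_in_cloud(outbound))
merged = {rid: {**info, "score": scored[rid]}
          for rid, info in sensitive.items()}

for rid, rec in merged.items():
    print(rid, rec["name"], round(rec["score"], 2))
```

The design point is the opaque join key: the cloud side never sees names or other sensitive fields, only the features it needs to score.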
The user community and the landscape of advanced analysts are benefiting from dramatic shifts in strategy and technology. Early adopters, as far back as 1954, were leveraging basic analytics to drive their companies forward, benefiting from the age of Analytics 1.0. Today we are experiencing the early stages of Analytics 3.0 and the promise of innovation it brings.
In this week’s #ThinkChat segment Tom and I discuss how the landscape is evolving and how a growing community of advanced analytic users and enablers are fueling change. It’s the age of the analytic amateur and the semi-pro!
#ThinkChat Conversation with Tom Davenport Part 6 of 7
To view other segments in the #ThinkChat series click here.
Optimizing your virtualization management is more than an exercise in fighting this month’s performance and capacity fires. Once you’ve worked out your current problems, don’t forget that you need to put things in place to keep them from coming back.
Sure, pat yourself on the back that you’ve optimized the unresponsiveness and latency out of your virtual environment. But also keep your eye on the road ahead to manage capacity and plan for the future.
In the previous post of this series, I covered variables like VM sprawl, storage connections and balancing resources. In this final post, I’ll take the long view of growth in your virtual data center.
Managing capacity in vSphere clusters
vSphere clusters are the foundation of VM performance and high availability, but you have to plan and implement them correctly to get the most out of them.
To protect every VM, set aside one host’s worth of resources in reserve. Those resources stand ready to process VMs when the cluster loses a node, but they also represent unused resources. In reality, virtual administrators often build their clusters without the necessary reserve or lose it when the physical resources are needed elsewhere.
If you can’t afford the hardware for universal high availability, then you have to plan your Admission Control Policy to prioritize high-value workloads. Or, if some of your VMs can tolerate downtime in emergency situations, try a percentage policy that lets you balance spare hardware capacity against production needs. Keep in mind, though, that vSphere’s percentage policy requires extra planning and a regular checkup as you add or modify cluster nodes.
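The arithmetic behind a percentage-based admission control policy is worth making explicit, since it is what needs rechecking every time cluster membership changes. The sketch below computes the reserve percentage for a cluster of identically sized hosts; the cluster sizes and one-host failure tolerance are illustrative assumptions, not values from any particular environment.

```python
# Sketch: back-of-the-envelope failover reserve for a vSphere-style
# "percentage of cluster resources" admission control policy.
# Assumes identically sized hosts; heterogeneous clusters need to
# reserve against the largest host's capacity instead.

def failover_reserve_percent(num_hosts, hosts_to_tolerate=1):
    """Percentage of total cluster capacity to hold in reserve so that
    losing `hosts_to_tolerate` hosts still leaves room to restart
    their VMs."""
    if hosts_to_tolerate >= num_hosts:
        raise ValueError("cannot tolerate losing every host")
    return 100.0 * hosts_to_tolerate / num_hosts

# An 8-host cluster sized to survive one host failure:
print(failover_reserve_percent(8))   # 12.5

# Grow the same cluster to 10 hosts and the right reserve changes:
print(failover_reserve_percent(10))  # 10.0
```

This is why the percentage policy needs that regular checkup: a reserve set for an 8-host cluster quietly over- or under-protects once nodes are added or removed.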
Modeling for the future
Continuing on the theme of art vs. science from my last post, there’s always been—and always will be—an element of gut-feel to IT forecasting, and optimizing virtualization management is no exception.
Think of all the moving parts and variables around optimizing performance in a physical data center. When you’re taking full advantage of virtualization, you have about the same number of moving parts and variables in a single rack, maybe in a single server.
It’s hard to gut-feel all of that, so successful management involves keeping a watchful eye on virtual behaviors and reporting those that are amiss. Capacity planning requires tools for trending, forecasting and alerting that will project time and resource consumption limits based on growth rates. There are plenty of quantifiable factors you can pull in before you have to resort to gut-feel.
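The trending-and-forecasting idea above can be reduced to a minimal sketch: fit a line through recent usage samples and project when it crosses the capacity ceiling. The weekly datastore figures below are made up for illustration, and real capacity-planning tools fit far richer models than a least-squares line.

```python
# Sketch: projecting when a resource hits its capacity limit from a
# simple linear trend over evenly spaced usage samples.

def weeks_until_full(samples, capacity):
    """Fit a least-squares line through (week, usage) samples and return
    the week index at which usage is projected to reach capacity, or
    None if usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    if slope <= 0:
        return None  # a flat or shrinking trend never hits the limit
    return (capacity - intercept) / slope

usage = [400, 420, 445, 460, 485]  # GB used on a datastore, weekly
print(weeks_until_full(usage, 600))
```

An alert threshold is then just a comparison: if the projected week is inside your procurement lead time, it is time to act, not time to trust gut-feel.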
New guidebook: An Expert's Guide to Optimizing Virtualization Management
Delivering on the promise of virtualization is a matter of matching available resources to the workload you and your users impose on your virtualization environment. Many companies we talk to report that they enjoyed big-time ROI in the first months, quarters and even years of their virtualization efforts, until they suddenly realized they weren’t saving as much money anymore.
We created a new guidebook, An Expert's Guide to Optimizing Virtualization Management, filled with concepts and strategies for recovering lost ROI in the virtual data center. Have a look at it and see where you might be leaving money on the virtual table.
About John Maxwell
John Maxwell leads the Product Management team for Foglight at Dell Software. Outside of work he likes to hike, bike, and try new restaurants.
View all posts by John Maxwell |
As Dell’s executive director of our information management solutions, I’ve seen firsthand how the world of data is rapidly evolving. With the immense changes taking place in how organizations gather and use data, the need for ethics, compliance and governance has never been more crucial, both as a key component to business success, as well as customer success.
It goes without saying that with big data comes big responsibility. That means it’s up to our organizations to define acceptable use in our policies and support ethical behaviors to the best of our abilities. Although much of what we can do with the data we collect may still be somewhat unregulated, we have an obligation to do the right thing. Customers not only expect that security protections and safeguards be in place, but also that reasonable care be taken to ensure these are not being abused. That is why we must pair sound organizational operations with high ethical standards.
So how can you leverage big data while creating an ecosystem that exemplifies ethical conduct with data? Let’s examine a few observations and tips, based on my experience in the world of security and analytics. My goal is to help you achieve more success with the big data assets available to your organization ― without alienating your customers and prospects.
Here’s how you can dominate big data the ethical way:
Do practice ethics at every turn, whether your organization is deploying a field marketing initiative or a product marketing campaign. Always ensure your team respects and garners the explicit consent of your customers before adding them to your prospect database. Practice the “better safe than sorry” rule here; doing so will help ensure your prospects and customers are happy and building a mutually consensual relationship.
Don’t rely on regulations alone, and don’t assume your customers won’t be upset by ethical breaches as long as the law wasn’t broken. As the organizational leader, you have a responsibility to ensure that rules are abided. Implement policies with your own best practices in mind. Most importantly, train your teams to truly understand what those best practices are, and why it’s critical they support your policies. Finally, make the enforcement and leveraging of these policies a part of the fabric of your customer interactions. Your employees should drive and leverage rules for governing data based on the best interests of your customers.
Do practice analytics and data gathering. When data is diligently collected, integrated and tracked, the analytics generated from your data ecosystem are invaluable. Nothing is more important to the current and future success of your company. And taking an ethical stance on data use will drive excellence in operational performance and help build mutually beneficial, long-term relationships with your customers.

Next Steps
To learn more about these best practices and hear from other organizations who are achieving success with an ethical approach to using big data, join me at the Dell World Software User Forum this October in Austin, Texas. Register today, and use the agenda builder tool to reserve your place in the special presentation, “Big Data Privacy & Best Practices.”
Don’t Miss the Dell World Software User Forum >
The fundamentals of systems management have changed. IT professionals like you are now faced with managing and securing a growing number of mobile and bring-your-own devices (BYOD), a variety of operating systems and network-connected smart devices, in addition to traditional endpoint management tasks. You must approach “anypoint” systems management as an imperative, and Dell KACE appliances and complementary software can fill this need.
Attend Dell World Software User Forum and address these challenges head on by getting direct access to “anypoint” management experts through a broad selection of KACE educational sessions. In these sessions, you’ll see some of the newest and most popular KACE features and capabilities.
We’re targeting software pros like you who want to up their game by enhancing their KACE appliance use and knowledge, while exploring the added benefits of the wider Dell Software product portfolio. You should come ready to be immersed in the future of “anypoint” systems management. You’ll learn about the latest trends in big data and cloud management, advanced analytics, and the ins and outs of secure network access.
The Agenda Builder is now live, so once you’ve registered, you can create a personalized Dell World Software User Forum experience.
Featured and favorite KACE sessions include:
Chromebooks are entering business and education at an unprecedented rate. Chromebook inventory information is now integrated with the K1000’s systems management workflows and processes, allowing you to use the K1000 to perform day-to-day management tasks, such as hardware inventory, reporting, and service desk, for Chrome devices. Attend this session and learn how to best manage them with your K1000.
Increase Security with an Effective Patch Process
Patching might have been the easy part...designing a sustainable patch management system with integrated automation and reporting is your real challenge. In this session, you'll learn best practices and different approaches to streamlining all the patching security tasks that are critical to your organization.
“Anypoint” Systems Management: Managing All of Your Connected Devices
The K1000 can manage more than just your laptops, desktops, Macs and servers. In this session, we'll demonstrate how to get your other network-enabled devices into your device inventory using agentless technology, for true "anypoint" systems management.
Your DWUF registration includes admission to all Dell World general sessions, solutions showcase, and the big opening night concert headlined by a name you already know and love.
And don’t forget: the BOGO (buy one, get one) offer is available. Each paid registrant will be able to bring a colleague of his or her choice, free.