Dell Community

Blog Group Posts
Application Performance Monitoring Blog Foglight APM 105
Blueprint for HPC - Blog Blueprint for High Performance Computing 0
Custom Solutions Engineering Blog Custom Solutions Engineering 9
Data Security Data Security 8
Dell Big Data - Blog Dell Big Data 68
Dell Cloud Blog Cloud 42
Dell Cloud OpenStack Solutions - Blog Dell Cloud OpenStack Solutions 0
Dell Lifecycle Controller Integration for SCVMM - Blog Dell Lifecycle Controller Integration for SCVMM 0
Dell Premier - Blog Dell Premier 3
Dell TechCenter TechCenter 1,861
Desktop Authority Desktop Authority 25
Featured Content - Blog Featured Content 0
Foglight for Databases Foglight for Databases 35
Foglight for Virtualization and Storage Management Virtualization Infrastructure Management 256
General HPC High Performance Computing 229
High Performance Computing - Blog High Performance Computing 35
Hotfixes vWorkspace 66
HPC Community Blogs High Performance Computing 27
HPC GPU Computing High Performance Computing 18
HPC Power and Cooling High Performance Computing 4
HPC Storage and File Systems High Performance Computing 21
Information Management Welcome to the Dell Software Information Management blog! Our top experts discuss big data, predictive analytics, database management, data replication, and more. Information Management 229
KACE Blog KACE 143
Life Sciences High Performance Computing 12
On Demand Services Dell On-Demand 3
Open Networking: The Whale that swallowed SDN TechCenter 0
Product Releases vWorkspace 13
Security - Blog Security 3
SharePoint for All SharePoint for All 388
Statistica Statistica 24
Systems Developed by and for Developers Dell Big Data 1
TechCenter News TechCenter Extras 47
The NFV Cloud Community Blog The NFV Cloud Community 0
Thought Leadership Service Provider Solutions 0
vWorkspace - Blog vWorkspace 512
Windows 10 IoT Enterprise (WIE10) - Blog Wyse Thin Clients running Windows 10 IoT Enterprise Windows 10 IoT Enterprise (WIE10) 6
Latest Blog Posts
  • Dell TechCenter

    Dell 13G PowerEdge servers, now with Intel’s latest Xeon E5-2600 v4 processors, raise the bar further for enterprise performance

    Dell PowerEdge 13th generation servers are now available with the Intel Xeon Processor E5-2600 v4 product family (formerly code-named “Broadwell-EP”).


    Benchmarking of the PowerEdge R730 and R630 rack server models confirmed that they deliver the full potential of the new architecture, attaining a world record on the compute-intensive SPEC CPU2006 metric.



    *Source: Based on the best SPEC CPU2006 results published as of March 31, 2016. SPEC® and the benchmark name SPECint® are registered trademarks of the Standard Performance Evaluation Corporation. For the latest SPEC CPU2006 benchmark results, visit the SPEC website.

    Also, the R730 achieved the world record on the business transactional SAP-SD two-tier (Linux) Sales and Distribution metric.



    To learn more about these records, please visit Intel’s own world record leader board page.

    Claim based on best published two-processor, two-tier SAP SD* Standard Application Benchmark result using the Linux* operating system, published as of 31 March 2016. New configuration: Dell PowerEdge* R730 platform with two Intel® Xeon® processor E5-2699 v4 (22 cores, 44 threads), Red Hat* Enterprise Linux* 7.2, SAP Sybase ASE* 16, SAP Enhancement Package 5 for SAP ERP* 6.0, number of SAP SD* benchmark users: 21,050. Source: SAP SD two-tier certification number 2016003.

    And thanks to both the CPU family’s improved design and Dell’s Energy Smart implementation, this additional performance required no additional power or cooling.



    See the full whitepaper on Dell TechCenter for test configuration, methodology and findings.

  • Dell Big Data - Blog

    Guy Harrison's Book Signing Draws Crowds at Strata + Hadoop World

    The Dell Booth is a popular place to be at this year's Strata + Hadoop, thanks to Guy Harrison's book signing events. The author, who is also an executive director for Dell R&D Information Management, is known for his books, such as Oracle Performance Survival Guide and MySQL Stored Procedure Programming (with Steven Feuerstein). His latest book, Next Generation Databases, is garnering a lot of attention in the industry.

    Focused on the latest developments in database technologies, this is a book for enterprise architects, database administrators, and developers who need to understand what is new, what works, and what’s just hype. The aim is to help you choose the correct database technology at a time when concepts such as Big Data, NoSQL and NewSQL are making what used to be an easy choice into a complex decision with significant implications.

    Harrison has divided the book into two sections. The first examines the market and technology drivers that led to the end of complete “one size fits all” relational dominance and takes a closer look at each of the major new database technologies. The second half covers the nitty-gritty details of those technologies, examining how databases like MongoDB, Cassandra, HBase, Riak and others implement clustering and replication, locking and consistency management, logical and physical storage models, and the languages and APIs they provide. Harrison also muses on the future of database technology and predicts that the explosion of new database technologies over the last few years will be followed by a much-needed consolidation phase. He believes there are some potentially disruptive technologies on the horizon, such as universal memory, blockchain and even quantum computing.

    Harrison has spent most of his career in the relational database world, and we are very lucky to have him with us now at Dell. He’s always interested in hearing other people’s thoughts, questions and concerns. He will be with us for another book signing and giveaway today, so stop by booth #931 at 1:30 PM.

  • General HPC

    Measuring Performance of Intel Broadwell Processors with High Performance Computing Benchmarks

    Authors: Ashish Kumar Singh, Mayura Deshmukh and Neha Kashyap

    The increasing demand for more compute power pushes servers to be upgraded with newer and more powerful hardware. With the release of the new Intel® Xeon® processor E5-2600 v4 family (architecture codenamed “Broadwell”), Dell has refreshed its 13th-generation servers to take advantage of the increased number of cores and higher memory speeds, benefiting a wide variety of HPC applications.

    This blog is part one of the “Broadwell performance for HPC” blog series and discusses the performance characterization of Intel Broadwell processors with the High Performance LINPACK (HPL) and STREAM benchmarks. The next three blogs in the series will discuss the BIOS tuning options and the impact of Broadwell processors on the Weather Research and Forecasting (WRF), NAMD, ANSYS® Fluent®, CD-adapco® STAR-CCM+®, OpenFOAM and LSTC LS-DYNA® HPC applications as compared to the previous-generation processor models.

    In this study, performance was measured across the five Broadwell processor models listed in Table 2, along with 2400 MT/s DDR4 memory. The study focuses on HPL and STREAM performance for different BIOS profiles across all five Broadwell processor models and compares the results to previous generations of Intel Xeon processors. The platform used is a PowerEdge R730, a 2U dual-socket rack server. Each socket has four memory channels and can support up to 3 DIMMs per channel (DPC). For this study, we used 2 DPC for a total of 16 DDR4 DIMMs in the server.

    Broadwell (BDW) is a “tick” in Intel’s tick-tock cadence, i.e. the next step in semiconductor fabrication. It is a 14nm processor with the same microarchitecture and TDP range as the Haswell-based (HSW, Xeon E5-2600 v3 series) processors. Broadwell E5-2600 v4 series processors support up to 22 cores per socket with up to 55MB of LLC, which is 22% more cores and LLC than Haswell. Broadwell supports DDR4 memory with a maximum speed of 2400 MT/s, 12.5% higher than the 2133 MT/s supported with Haswell.

    Broadwell introduces a new snoop mode option in the BIOS memory setting, Directory with Opportunistic Snoop Broadcast (DIR+OSB), which is the default snoop mode for Broadwell. In this mode, the memory snoop is spawned by the Home Agent and a directory is maintained in the DRAM ECC bits. DIR+OSB mode allows for low local memory latency, high local memory bandwidth and I/O directory cache to reduce directory update overheads for I/O accesses. The other three snoop modes: Home Snoop (HS), Early Snoop (ES), and Cluster-on-Die (COD) are similar to what was available with Haswell. The Cluster-on-die (COD) is only supported on processors that have two memory controllers per processor. The Dell BIOS on systems that support both Haswell and Broadwell will display the supported snoop modes based on the processor model populated in the system.  

    Table 1 describes the other new features available in the Dell BIOS on systems that support Broadwell processors.

    Table 1: New BIOS features with the Intel Xeon E5 v4 processor family (Broadwell)

    • Snoop Mode > Directory with Opportunistic Snoop Broadcast (DIR+OSB): Available on select processor models, DIR+OSB works well for workloads of mixed NUMA optimization. It offers a good balance of latency and bandwidth.

    • System Profile Settings > Write Data CRC: When enabled, DDR4 data bus issues are detected and corrected during write operations. Two extra cycles are required for CRC bit generation, which impacts performance. Read-only unless System Profile is set to Custom.

    • System Profile Settings > CPU Power Management > Hardware P States: If supported by the CPU, Hardware P States is another performance-per-watt option that relies on the CPU to dynamically control individual core frequencies. Read-only unless System Profile is set to Custom.

    • System Profile Settings > C States > Autonomous: Autonomous is a new C States option in addition to the previous Enable and Disable options. If hardware control is supported, the processor can operate in all available power states to save power, although this may increase memory latency and frequency jitter.


    Intel Broadwell supports Intel® Advanced Vector Extensions 2 (Intel AVX2) vector technology, which allows a processor core to execute 16 FLOPs per cycle. HPL is a benchmark that solves a dense linear system. The HPL problem size (N) was chosen to be 177408, along with a block size (NB) of 192. The theoretical peak value for HPL was calculated using the AVX base frequency, which is lower than the rated base frequency of the processor model, because Broadwell processors consume more power when running Intel® AVX2 workloads than non-AVX workloads; starting with the Haswell product family, Intel provides these two frequencies for each SKU. Table 2 lists the rated base and AVX base frequencies of each Broadwell processor used in this study. Since HPL is an AVX-enabled workload, the HPL theoretical maximum performance is calculated with the AVX base frequency as: AVX base frequency of processor × number of cores × 16 FLOP/cycle.
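    That arithmetic can be sketched directly. The per-SKU frequencies belong in Table 2; the 1.8 GHz AVX base frequency below is only an illustrative value, not one taken from this study:

```python
def hpl_theoretical_peak_gflops(avx_base_ghz, cores_per_socket, sockets=2,
                                flops_per_cycle=16):
    """Theoretical HPL peak: AVX base frequency x total cores x FLOPs per cycle."""
    return avx_base_ghz * cores_per_socket * sockets * flops_per_cycle

# Illustrative only: a 22-core SKU with a hypothetical 1.8 GHz AVX base frequency,
# in a dual-socket server such as the R730 used here
print(round(hpl_theoretical_peak_gflops(1.8, 22), 1))  # 1267.2 GFLOPS
```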

    Table 2: Base frequencies of Intel Broadwell processors

    Processor model               Rated base frequency (GHz)   AVX base frequency (GHz)   Theoretical maximum performance (GFLOPS)
    E5-2699 v4, 22 cores, 145W
    E5-2698 v4, 20 cores, 135W
    E5-2697A v4, 16 cores, 145W
    E5-2690 v4, 14 cores, 135W
    E5-2650 v4, 12 cores, 105W

    Table 3 gives more information about the hardware configuration and the benchmarks used for this study.

    Table 3: Server and benchmark details for Intel Xeon E5 v4 processors

    Server: PowerEdge R730
    Processors: As described in Table 2
    Memory: 16 x 16GB DDR4 @ 2400 MT/s (total 256GB)
    Power Supply: 2 x 1100W
    Operating System: RHEL 7.2 (3.10.0-327.el7.x86_64)
    BIOS options:
    • System profile – Performance and Performance Per Watt (DAPC)
    • Logical Processor – Disabled
    • Power Supply Redundant Policy – Not Redundant
    • Power Supply Hot Spare Policy – Disabled
    • I/O Non-Posted Prefetch – Disabled
    • Snoop modes – OSB, ES, HS and COD
    • Node Interleaving – Disabled
    BIOS Firmware
    iDRAC Firmware
    Benchmarks: From Intel Parallel Studio 2016 update1
    MPI: Intel MPI 5.1.2

    Intel Broadwell Processors


    Figure 1: HPL performance characterization

    Figure 1 shows the HPL characterization of all five Intel Broadwell processors used in this study on the PowerEdge R730 platform; Table 2 lists the TDP values for each of the Broadwell processors. The text value in each bar shows the efficiency of that processor, and the “X” value on top of each bar shows the performance gain over the 12-core Broadwell processor. HPL performance does not scale linearly with core count: the top-bin 22-core processor has 83% more cores than the 12-core processor but achieves only a 57% HPL improvement. The line pattern on the graph shows the HPL performance per core; since HPL performance does not keep pace with the number of cores, performance per core decreases by 8 to 15% for the 20- and 22-core processors respectively.
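    The per-core drop can be checked directly from the figures quoted above (22 vs. 12 cores, 57% HPL gain):

```python
# 22-core vs. 12-core Broadwell, using the gains quoted in the text
cores_ratio = 22 / 12            # 83% more cores
hpl_ratio = 1.57                 # 57% higher HPL score
per_core_ratio = hpl_ratio / cores_ratio
print(f"per-core HPL change: {(per_core_ratio - 1) * 100:.0f}%")  # -14%, within the 8-15% range
```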


    Figure 2: STREAM (TRIAD) performance characterization

    The STREAM benchmark calculates memory bandwidth by counting only the bytes that the user program requested to be loaded or stored. This study uses the results reported by the TRIAD function of the STREAM bandwidth test.

    Figure 2 plots the STREAM (TRIAD) performance for all Broadwell processors used in this study; the bars show the memory bandwidth in GB/s for each processor. As the graph shows, memory bandwidth is approximately the same across all Broadwell processors. Since total bandwidth is roughly constant, the memory bandwidth per core decreases as the core count increases.
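    The TRIAD kernel and its byte accounting can be sketched in a few lines. This is a pure-Python illustration of what the benchmark measures, not a substitute for the compiled STREAM binary used in studies like this one:

```python
import time

# STREAM TRIAD kernel: a[i] = b[i] + q * c[i]
N = 2_000_000
q = 3.0
b = [1.0] * N
c = [2.0] * N

t0 = time.perf_counter()
a = [b[i] + q * c[i] for i in range(N)]
t1 = time.perf_counter()

# STREAM counts only the bytes the program asked to move:
# one 8-byte read each from b and c, and one 8-byte write to a, per element
bytes_moved = 3 * 8 * N
print(f"TRIAD bandwidth: {bytes_moved / (t1 - t0) / 1e9:.2f} GB/s")
```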

    BIOS Profiles comparison 

    Figure 3: Comparing BIOS profiles with HPL

    Figure 3 plots HPL performance with the two BIOS system profile options for all four snoop modes across all five Broadwell processors. Because Directory with Opportunistic Snoop Broadcast (DIR+OSB) snoop mode performs well for all workloads and the DAPC system profile balances performance and energy efficiency, these options are the BIOS defaults and were chosen as the baseline.

    From this graph, it can be seen that Cluster-on-Die (COD) snoop mode with the “Performance” system profile performs 2 to 4% better than the other BIOS profile combinations across all Broadwell processors. COD is supported only on processors that have two memory controllers per processor, i.e. 12 or more cores.


    Figure 4: Comparing BIOS profiles with STREAM (TRIAD)

    Figure 4 shows the STREAM performance characteristics with the two BIOS system profile options for all the snoop modes. Opportunistic Snoop Broadcast (OSB) snoop mode with the DAPC system profile was chosen as the baseline for this study. Memory bandwidth is almost the same for every BIOS profile combination except Early Snoop (ES) mode: with either system profile, ES lowers memory bandwidth by 8 to 20%, and the difference is most apparent on the 22-core processor, where it reaches up to 25%. Early Snoop mode has fewer Requester Transaction IDs (RTIDs) distributed across the cores, while the other snoop modes get more RTIDs, that is, a higher number of credits for local and remote traffic at the home agent.

    Generation over Generation comparison

    Figure 5: Comparing HPL Performance across multiple generations of Intel processors

    Figure 5 plots a generation-over-generation performance comparison for HPL with Intel Westmere (WSM), Sandy Bridge (SB), Ivy Bridge (IVB), Haswell (HSW) and Broadwell (BDW) processors. The percentage on each bar shows the HPL performance improvement over the previous-generation processor. The graph shows that the 14-core Broadwell processor, at similar frequencies, performs 16% better than the 14-core Haswell processor on the HPL benchmark, and the top-bin 22-core Broadwell processor performs 49% better than the 14-core Haswell processor. Broadwell processors also measure better power efficiency than the Haswell processors. The purple diamonds in the graph show the performance per core, and the “X” value on top of each bar shows the acceleration over the 6-core WSM processor.


    Figure 6: Generation over generation comparison with STREAM

    Figure 6 plots a performance comparison of STREAM (TRIAD) across multiple generations of Intel processors. From the graph, it can be seen that the memory bandwidth of the system has increased over the generations. The theoretical maximum memory frequency increased by 12.5% in Broadwell over Haswell (2133 MT/s to 2400 MT/s), and this translates into 10 to 12% better measured memory bandwidth as well. However, the maximum core count per socket has increased by up to 22% in Broadwell over Haswell, so the memory bandwidth per core depends on the specific Broadwell SKU. The 20-core and 22-core BDW processors provide only ~3 GB/s per core, which is likely to be very low for most HPC applications; the 16-core BDW is on par with the 14-core HSW at ~4 GB/s per core.
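    The per-core figures follow from dividing total measured bandwidth by the total core count. A small sketch; the total-bandwidth values below are hypothetical numbers chosen only to reproduce the per-core figures in the text, not measurements from this study:

```python
def bandwidth_per_core(total_gbs, sockets, cores_per_socket):
    """Memory bandwidth available to each core in a multi-socket server."""
    return total_gbs / (sockets * cores_per_socket)

# Hypothetical dual-socket totals, for illustration only
print(round(bandwidth_per_core(132.0, 2, 22), 1))  # ~3 GB/s per core (22-core BDW)
print(round(bandwidth_per_core(112.0, 2, 14), 1))  # ~4 GB/s per core (14-core HSW)
```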


    All the Broadwell processors used in this study delivered higher performance than the previous generation on both the HPL and STREAM benchmarks, with a ~12% increase in measured memory bandwidth compared to Haswell, and they measured better power efficiency as well. In conclusion, Broadwell processors can help meet the demand for more compute power in HPC applications.



  • KACE Blog

    Technology in Education — 3 Big Issues for Student-Centered Learning

    Getting the most out of the educational process while also getting the most out of technology is a tricky balance.

    Guiding education responsibly is the process of mixing learning models, creativity, exercises and spontaneity. Managing IT responsibly seems to be the process of saying “no” most of the time, as I mentioned earlier in this blog post series.

    But when the two are in balance, and one-size-fits-all instruction gives way to flexible, student-centered learning both inside and outside the classroom, then the biggest winners are the students.

    Student-Centered Learning — Three Big Issues

    We’ve released a new e-book called Aligning the Learning Model with the IT Model to emphasize the role of IT in helping educators teach the way they want and students learn the way they need. From years of experience with school districts and colleges worldwide, Dell has identified three big issues IT faces in playing that role:

    1.    Managing all endpoints from a single appliance

    It might make your job easier if you could ban all Windows laptops or iPads from your network, but it wouldn’t make you very popular. It might also cramp the diversity your school needs to develop its learning model.

    But if you have to live with that many platforms on your network, you’d probably prefer to have a single tool and interface to manage all of them. Straddling separate tools for separate platforms gets old quickly, so managing all endpoints from a single appliance is a priority for many system administrators in education.

    2.    Supporting assessments and testing

    When the technology required to administer tests was limited to filling in bubbles on a test sheet with a pencil, teachers managed on their own. As standardized testing has moved to PCs, Chromebooks and tablets, they rely on IT to keep the devices updated, the network reliable and the students honest.

    IT can configure and maintain a secure image, but then it has to provision every device with it. Automating that process is a huge time-saver; without automation, the task becomes unwieldy, error-prone and unreliable.

    3.    Fostering interoperability

    Consider some of the moving parts behind a school’s portal:

    • Student information system
    • Learning management system (LMS)
    • Data warehouse
    • Parent notification system
    • Single sign-on for digital content
    • Communication platform to connect students, teachers and administration

    When those moving parts are made without standards in mind, they become harder for IT to manage and control. It should be possible for sysadmins to monitor the machines running those components and keep them interoperable.

    New E-book: Aligning the Learning Model with the IT Model

    We’ve put together an e-book called Aligning the Learning Model with the IT Model. Starting from the concept of student-centered learning, it draws the path to systems management and systems deployment with Dell KACE appliances in education. Have a look at the e-book for three case studies and more details on aligning learning with IT in both K-12 and higher education.

    Christopher Garcia

    About Christopher Garcia

    A ten-year Dell veteran, Chris has had experience in various marketing roles within the organization. He is currently a Senior Product Marketing Manager.

    View all posts by Christopher Garcia 

  • Dell Big Data - Blog

    Join us at Strata + Hadoop World Conference in San Jose, March 29-31, 2016

    Strata + Hadoop World 2016 returns to San Jose on Tuesday, March 29, 2016. The event is known as one of the foremost gatherings for the world’s leading data scientists, analysts, and executives in big data and analytics. Attendees represent innovative companies – from startups to well-established organizations – who come together for networking, sharing case studies, identifying proven best practices, and learning effective new analytic approaches and core skills.

    We invite you to join us at Dell Booth #931, to share case studies and use cases, demos of new products and solutions we’re bringing to market, and for many, a special book signing event.

    Dell is very honored to host Guy Harrison, author of "Next Generation Databases,” who will be on hand for in-booth book signings throughout the conference. Mr. Harrison’s book is for enterprise architects, database administrators, and developers who need to understand the latest developments in database technologies. It’s the book to help you choose the right database technology at a time when concepts such as big data, NoSQL and NewSQL are making what used to be an easy choice into a complex decision with significant implications. Mr. Harrison will be at the booth on Tuesday, March 29 at 5:15 PM, on Wednesday, March 30 at 1:30 PM, and Thursday, March 31 at 1:30 PM.

    On Wednesday, March 30, join us for a Dell interactive panel presentation at 11:50 AM in room LL20B. The team will help attendees map out their big data journeys as they address the question, “Where are you on your data journey?” They will take a close look at how Hadoop enables data-driven insights across organizations, no matter where they are on their big data journey. Dell’s own Anthony Dina will host the interactive panel, with panelists Adnan Khaleel, Jeff Weidner, and Armando Acosta. Together they will explore how business units have taken advantage of Hadoop’s strengths to quickly identify and implement two use cases: an early ETL-offload use case that then led to a detailed and robust advanced analytics solution, which enabled Dell to use marketing analytics to transform the business and strengthen customer relationships.


    In-booth theater presentations from Dell, Cloudera and Intel, a Robot Run for prizes, in-booth demos and chances to win BB-8 robots are just a few of the fun plans we have at the Dell booth at Strata+Hadoop World. Plan to stop by!


    Together with Intel and Cloudera, we are also hosting a networking dinner, with wine and beer tastings, on Tuesday, March 29, at The Farmers Union in San Jose. If you are interested in attending, please stop by Dell booth #931 during the Opening Reception.

    We look forward to seeing you in San Jose!

  • Information Management

    Toad Ranks #1 as Database Development and Optimization Tool

    SharePlex #3 Replication Tool in 2014 Worldwide Revenue from IDC


    It’s always great to get a pat on the back from an independent source like IDC. In their recently released report titled Worldwide Database Development and Management Tools Market Shares, 2014: Year of the Secure Database (November 2015), IDC reported Dell Software in the #1 position for Worldwide Database Development and Optimization Software with Toad, and the #3 position for Worldwide Database Replication Software with SharePlex.

    Here are a few highlights of the report:

    • Good news – The whole pie is getting bigger. The worldwide market for database development and management tools (DDMT) increased by 2.2 percent in 2014, to $2.2 billion.
    • Better news – Dell’s piece of the pie is growing even faster. “Of the top 5 vendors [in DDMT], only Dell showed significant growth [13.7% market share], due to the continued popularity of its Toad product line for database management and SharePlex for database replication,” according to Carl W. Olofson, research vice president for Database Management Software at IDC.
    • Brightest news – Olofson calls database development and optimization software “the brightest light in the DDMT picture overall, with 9% market growth.” Dell’s Toad product line tops the category with 20.8 percent growth, contributing substantially to that bright light.
    • Cool news for DBAs – Regarding the fast growth, Olofson writes that “Dell’s Toad has developed a cult-like following, proving that database management can be cool.” Tell that to the people who think we’re all geeks.
    • Good news for replication – Dell is the leading independent DDMT software vendor without its own DBMS. In the submarket of Worldwide Database Replication Software, SharePlex came in third in revenue, with 16.8% market share.

    If you’re already using Toad and SharePlex, the IDC report should validate your choice. If you’re still using other products and thinking you’re ready for a change, download a trial of Toad or SharePlex and experience the difference, first hand.

    Nicole Tamms

    About Nicole Tamms

    Nicole is a product marketer with over seven years of experience in that role. She has also held various other technology marketing and corporate communications roles at Quest, now part of Dell, over the past 15 years. Prior to her roles in product marketing, she specialized in industry analyst relations and media relations as the PR and analyst relations manager for Quest.

    View all posts by Nicole Tamms | Twitter

  • Foglight for Virtualization and Storage Management

    The most critical features in virtualization management: What’s important to you?

    Why do people pick the teams they pick in the college basketball championship?

    Let’s say you correctly picked Duke over Wisconsin for the final game in 2015. What did you see in all those teams, players and coaches that convinced you they’d go that far? Sure, there was the fact that Mike Krzyzewski had won four national championships in the previous 34 years at Duke, but you’d have needed more than that to go on. And with quintillions-to-1 odds against simply guessing a perfect bracket, you’d have needed some kind of method to your, well . . .


    Anyway, I’d be perfectly happy to continue writing about March basketball, and reading your comments on it, but this post is meant to be about how you figure out the most critical features in virtualization management software. The odds against you aren’t in the quintillions, but they’re pretty long, so you shouldn’t rely on guesswork.

    We’re offering you an e-book called “The Definitive Guide to Virtualization Management Software”. Part 3 of the e-book is titled The most critical features in virtualization management. Here are some of the highlights.

    The most critical features in virtualization management

    Per my previous post about selecting a virtualization management solution, you’ve whittled the field down to a few top vendors and their products. Now it’s time to dive deeper into the features of each product and match them to your needs.

    The e-book devotes nine pages to helping you build your checklist of features. I’ll focus on visualization and virtual infrastructure optimization.


    Visualization

    It’s not hard to find out how much memory virtual machines are using on a physical server. Finding out what’s going on inside that memory, however, is more difficult.

    Suppose a utility tells you that virtual machines are using 16,384 MB on a physical server. On its own, that figure doesn’t mean much, and the superficial statistic could mask several undesirable conditions:

    • The physical server has only 16,384 MB, so no more physical memory is available.
    • The memory utilization is trending sharply upward.
    • A single virtual machine demands 80 percent of that memory.
    • Paging is now occurring because physical memory is exhausted.

    The more easily you can see what’s happening inside the VMs, the sooner you can take corrective action when necessary.
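    A monitoring tool is, in effect, applying checks like these. A minimal sketch; the helper name, inputs and thresholds are illustrative, not taken from any Dell product:

```python
def memory_warnings(host_mb, used_mb, trend_mb_per_hr, largest_vm_mb, paging):
    """Flag conditions that a raw 'VMs are using X MB' figure can hide."""
    warnings = []
    if used_mb >= host_mb:
        warnings.append("no free physical memory on the host")
    if trend_mb_per_hr > 0.05 * host_mb:          # illustrative trend threshold
        warnings.append("utilization trending sharply upward")
    if largest_vm_mb > 0.8 * used_mb:
        warnings.append("a single VM holds over 80% of used memory")
    if paging:
        warnings.append("host is paging: physical memory exhausted")
    return warnings

# The scenario from the text: 16,384 MB in use on a 16,384 MB host
print(memory_warnings(16384, 16384, 1024, 13500, True))
```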

    Virtual infrastructure optimization

    You’re always balancing between Over and Under.

    If you overprovision, you waste your investment in server capacity and you risk wasting resources that other virtual machines could use. If you underprovision, then the applications running inside will perform poorly. You want right-sized infrastructure, but that rarely lasts for long because the applications running inside it have variable usage over time, requiring more resources: memory, storage or compute.

    Virtualization management tools help you optimize on other fronts:

    • Controlling VM sprawl. VMs are a dime a dozen now, so teams all around your company can spin them up, replicate them and work with them for a while. Still, that dime a dozen can add up to real money in needless spend, plus the annoyance and time people incur when they’re hunting for resources to create new VMs for current needs. Besides making it easier to contain VM sprawl, a good virtualization management product includes lifecycle management tools for creating a catalog of virtual machines, an approval process for adding new ones, a usage monitoring system for any added virtual machines and a removal process for end of useful life.
    • Removing zombie VMs. There’s a happy, cathartic effect to eliminating VMs that nobody uses anymore. The right tools help you identify those zombie VMs to reduce unnecessary resource consumption.
    • Removing orphaned VMs. Orphans are nearly as wasteful as zombies. They take up disk space but aren’t actually powered on. Identifying and deleting them is a big step toward optimizing disk usage.

    E-book: The Definitive Guide to Virtualization Management Software

    Visualization and optimization are just two of the most important features to look for in a virtualization management solution. Others include alerting, performance monitoring, capacity planning and storage analysis.

    Read Part 3 of our e-book, The Definitive Guide to Virtualization Management Software, for a list of 19 top features, then pick the ones that make the most sense in your organization. The e-book is available for download now.

    Danalynne Menegus

    About Danalynne Menegus

    Sci-fi/80s alt music/grammar/reading/cooking/marketing/events geek and product marketer for Dell Software. Opinions and posts are my own.

    View all posts by Danalynne Menegus | Twitter

  • KACE Blog

    The Dell KACE K2000 and Deployment to Skylake-powered PCs — Like the Platform Change from Vinyl to Cassettes

    Once upon a time, you’d take your records to a party at your friend’s house and she’d put them onto her record player. It’s how you shared music.

    One day, you took your records to a party at a new friend’s house to share your music with her.

    “Oh, sorry,” she said. “We only have cassettes. We can’t play your records on our cassette deck.”

    It was a drag when you couldn’t play your music on someone else’s platform, especially when you’d spent ages buying, collecting and trading up a solid collection of your favorites.

    Sure, cassettes sounded better than all the skipping and crackling of records, and they made it possible to listen to your music in the car or anywhere you had a portable cassette player. But standing there in your friend’s living room with a stack of vinyl you couldn’t play, it was no fun learning your first lesson about platform changes.

    Platforms Will Never Stop Changing

    That brings me to a recent change in computing platforms. Laptop computers and all-in-one PCs have begun shipping with the 6th generation of Intel Core processors, also known as Skylake, and you've probably already started seeing them advertised.

    Your company will start purchasing those systems soon, or maybe they’ll start coming in through BYOD. You’ll want to install your corporate system images on them, the images you’ve painstakingly built, balanced and refined over previous generations of Core processors.

    “Oh, sorry,” IT will say. “We can’t install your images on 6th-generation processors.”

    So there you’ll be, once again a victim of platform changes.

    Sure, there are plenty of reasons to upgrade to the 6th generation of Intel Core processors. You get higher processing speed, Hyper-Threading Technology for smoother multitasking, sharper 3D graphics and better performance for video and photo editing.

    But those improvements required new microcode and new memory reference code for the DDR4-capable controller, which required a new BIOS. Many of the drivers at the OS level are new because the previous ones were not Skylake-aware. And Windows 10 now includes support for Speed Shift, which offloads transitions in processor power from OS to hardware control. You’ll get better system responsiveness with lower power consumption.

    But you can’t play your records on it.

    KACE K2000 for Deploying Images to Skylake-powered PCs

    Yes, you’ll have to build out and test new images with Skylake-compatible versions of the drivers and application software your users need to get their job done. Getting the most out of the 6th generation of Intel Core processors may mean that you have to go back to the drawing board and assemble new images.

    But at least you won’t have to provision each new laptop or PC manually. You can use the KACE K2000 Systems Deployment Appliance to ensure that your new computers will be properly imaged and ready to work on day one. If you use the unattended installation capability of the K2000 to deploy operating systems, you won’t be affected at all. And if you deploy images, the unattended installation feature automates the image building process, making the creation of a Skylake-compatible image much easier.

    The K2000 saves you time and money by automating the traditional, manual process of imaging dozens or hundreds of computers. Furthermore, the K2000 can capture hardware drivers that already exist on your network and reuse them for other systems that need them, saving you the time usually spent hunting and gathering the right drivers. It’s your assurance that the right applications get to the right profiles, that driver updates will enable all of the capabilities of each new system and that all deployment tasks run from a single, central location, no matter how many remote locations you have to support.

    Platform changes are inevitable; the K2000 makes deploying to new platforms manageable. It won’t play your vinyl music on a cassette deck — nothing will — but once you’ve collected apps and drivers and built up the images you want to share across all your 6th-generation Intel Core-powered computers, the K2000 will do the equivalent of converting your records to tapes for you.

    At the right party, that’s even cooler than cassettes.

    See how the Green Clinic Health System has slashed its deployment costs and saved IT 20 hours a week on desktop management.

    Christopher Garcia

    About Christopher Garcia

    A ten-year Dell veteran, Chris has had experience in various marketing roles within the organization. He is currently a Senior Product Marketing Manager.

    View all posts by Christopher Garcia 

  • KACE Blog

    Technology in Education — Managing Endpoints in Student-Centered Learning

    Where technology and education meet, what is the role of IT managers like you?

    Nobody will ask you to prepare lesson plans or call roll. Nobody needs you to stay after and tutor the students who need help. In fact, you don’t even need to set foot in the classroom.

    When you’re not performing systems management and deployment, and building up the infrastructure to support hundreds or thousands of devices, it seems as though you spend most of your time saying “no”:

    • No, you may not put your BYO tablet on the same network as the school’s PCs.
    • No, you may not visit warez or social media sites.
    • No, I won’t grant you administrator access to your laptop.
    • No, you may not skip security patching just this once.
    • No, you may not load your favorite games onto your machine.

    Is it hard to believe that you’re helping technology advance in education when you say “no” so often?

    Managing Endpoints in Student-Centered Learning

    But without IT, education would move even more slowly than it seems to move now.

    Why? Because if you didn’t say “no,” all of those things you’re forbidding would cause vastly more problems than they do. Teachers and students would waste much more time watching pages load and wondering why they couldn’t print. Your colleagues in IT would spend most of their time cleaning up hard drives and undoing people’s experiments.

    I mentioned the changing roles of IT managers in my previous post. Before IT was formalized in most schools, technology in the classroom was a collection of mismatched, barely functioning computers on a table near the back door. Quantity mattered in those days, and IT systems management was a matter of sorting through as many donated computers as possible and getting them to run some piece of educational software.

    IT managers in education have seen their role change from maximizing the number of computing devices per classroom to maximizing the number of devices — PCs, laptops, Chromebooks, tablets, smartphones — they can reasonably control on the network.

    The diversity of operating systems — Windows, Mac OS, Android, iOS and *nix — further complicates the picture as IT administrators try to manage endpoints to support student-centered learning. Systems management, provisioning and deployment are now must-haves.

    New E-book: Aligning the Learning Model with the IT Model

    Our new e-book, Aligning the Learning Model with the IT Model, describes the changing role of IT in student-centered learning. It examines systems management and systems deployment in education, including three case studies of school districts and colleges that use Dell KACE appliances to inventory, manage and patch their IT assets across large and small student populations.

    Christopher Garcia

    About Christopher Garcia

    A ten-year Dell veteran, Chris has had experience in various marketing roles within the organization. He is currently a Senior Product Marketing Manager.

    View all posts by Christopher Garcia 

  • Information Management

    Dell’s Joanna Schloss Named Among Top Women in Business Intelligence and CMSWire Contributors of 2015

    Joanna Schloss is our subject matter expert in data warehousing, business analytics, business intelligence and big data analytics. Along with her work on data and information management in the Dell Center of Excellence, Joanna speaks and publishes frequently. We’re gratified when the industry takes notice, as two of its publications recently have.


    Solutions Review named Joanna among its Top 9 Women Influencers in Business Intelligence and Data Analytics. She joins female colleagues from Gartner, Microsoft, Forrester and other consulting and analyst firms.

    Additionally, CMSWire named Joanna to its list of 2015 Contributors of the Year. Joanna publishes a monthly article on BI and analytics for CMSWire, and she’s been named to the list in the past as well.

    Webcast: “Data Preparation for Analytics – Discussion with a Data Scientist”

    Joanna conducts our March 24 webcast, Data Preparation for Analytics – Discussion with a Data Scientist, in which she interviews Johnathan Walker, data scientist at Dell Financial Services, on using Toad Data Point to simplify and accelerate data preparation so you can get on with the work of predictive analytics.

    Tune in for either the live webcast or the on-demand recording to see why CMSWire lauds Joanna for her ability to “make the confusing and complex clear” and to “share her enthusiasm for the potential of new technology” like big data analytics, IoT analytics and data warehousing.