Dell Community

Latest Blog Posts
  • KACE Blog

    K2000 Kloser Look: Backing Up Your K2000 Appliance

    Backing up the K2000 Systems Deployment Appliance isn’t difficult; however, it is quite different from backing up your K1000 Appliance, if you happen to own both. It’s a bit of a manual process at first, but once you’ve defined the items inside the K2000 that require backing up, configuring the K2000 appliance to off-board them is very simple.

    It’s a two-step process:

    1. Identify and schedule which items you’d like to export out of the K2000 database. There are two methods of exporting: a one-time export, which frees up space on an appliance that’s tight on disk space (the item is completely removed from the database), or a regularly scheduled export, which leaves a copy of the item inside the database and places a copy on a Samba share for offboarding. The items are exported to a Samba share called ‘Restore’. Think of it as a staging area for the offboarding process.
    2. Schedule the file copy process. This is the process the K2000 uses to move those exported items to another location on your network where true backup occurs (tape backup, volume snapshots, online backup, etc.). The offboarding process can leave copies of these files on the Restore share, or you can configure it to ‘clean up’ (delete) the items from the Restore share after the copy completes; a sketch of this copy step appears below.
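    To make step 2 concrete, below is a minimal sketch in Python of the kind of copy job the appliance performs, assuming the ‘Restore’ share is mounted locally. The paths and the cleanup flag are illustrative placeholders; on a real K2000 this step is configured in the web interface rather than scripted by hand.

```python
# Hypothetical sketch of the step-2 copy job: move exported items from the
# K2000 'Restore' share (the staging area) to a location your real backup
# process covers. Paths and the CLEAN_UP flag are illustrative assumptions,
# not actual K2000 settings.
import shutil
from pathlib import Path

RESTORE_SHARE = Path("/mnt/k2000/restore")  # assumed mount point of the Samba 'Restore' share
BACKUP_TARGET = Path("/mnt/backup/k2000")   # location covered by tape/snapshot/online backup
CLEAN_UP = False                            # True mimics the 'clean up' option: delete after copy

def offboard() -> None:
    BACKUP_TARGET.mkdir(parents=True, exist_ok=True)
    for item in RESTORE_SHARE.iterdir():
        dest = BACKUP_TARGET / item.name
        if item.is_dir():
            shutil.copytree(item, dest, dirs_exist_ok=True)  # Python 3.8+
        else:
            shutil.copy2(item, dest)  # copy2 preserves timestamps
        if CLEAN_UP:
            if item.is_dir():
                shutil.rmtree(item)
            else:
                item.unlink()

if __name__ == "__main__":
    offboard()
```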

    It would be a real loss if you spent hours creating your images, post-installation tasks, etc., only to lose them in the event of a disaster and have to recreate them. If you are not backing up your K2000 on a regular basis, please visit this FAQ for an easy-to-follow step-by-step procedure:

    Tight on disk space for backups? Consider switching to offboard storage:

    Always remember, your backups are only as good as the data being backed up. Be sure to *test* your backups and ensure they are viable. Disaster Recovery is not the time to discover that your backups were incomplete or corrupted.

  • KACE Blog

    K1000 Kloser Look: K1000 Database Key Relationship Matrix

    I Know My Keys Are Around Here Somewhere…

    Whether you’re just starting out with SQL queries or were building them from scratch before the K1000 was a twinkle in someone’s eye, one thing you’ve probably had to come to grips with is constructing JOIN statements. (And for those who have not yet begun that journey, JOIN statements simply allow data from more than one database table to be harvested.)

    As you may know, the main trick to writing a JOIN statement is figuring out which columns to use – and therein lies the rub. Since there is no current master diagram showing all of the relationships a table’s “master column” (typically called the primary key) has with other tables (typically to columns holding the same data, called foreign keys), a lot of time can be wasted figuring this part out… especially when the tables aren’t directly related at all!
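    To make the primary-key/foreign-key idea concrete, here is a self-contained sketch using Python’s sqlite3 module. The table and column names are simplified stand-ins (the real K1000 database is MySQL and far larger); the point is simply that each JOIN’s ON clause pairs a foreign key with the primary key it references.

```python
# Self-contained illustration of primary-key/foreign-key JOINs.
# Table and column names are simplified stand-ins, not the real K1000 schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MACHINE  (ID INTEGER PRIMARY KEY, NAME TEXT);
    CREATE TABLE SOFTWARE (ID INTEGER PRIMARY KEY, TITLE TEXT);
    -- Junction table: its foreign keys point at the primary keys above.
    CREATE TABLE MACHINE_SOFTWARE_JT (MACHINE_ID INTEGER, SOFTWARE_ID INTEGER);

    INSERT INTO MACHINE  VALUES (1, 'LAB-PC-01'), (2, 'LAB-PC-02');
    INSERT INTO SOFTWARE VALUES (10, '7-Zip'), (11, 'Firefox');
    INSERT INTO MACHINE_SOFTWARE_JT VALUES (1, 10), (1, 11), (2, 10);
""")

# The JOIN works because each ON clause pairs a foreign key with the
# primary key it references -- exactly what a relationship matrix maps out.
rows = con.execute("""
    SELECT M.NAME, S.TITLE
    FROM MACHINE M
    JOIN MACHINE_SOFTWARE_JT JT ON JT.MACHINE_ID = M.ID
    JOIN SOFTWARE S ON JT.SOFTWARE_ID = S.ID
    ORDER BY M.NAME, S.TITLE
""").fetchall()

for name, title in rows:
    print(name, "->", title)  # e.g. LAB-PC-01 -> 7-Zip
```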

    While it remains true that there’s no updated master diagram, the good news is that there’s a new KACE user-reviewed blog on ITNinja with SQL queries which will help you map out these table relationships:

    K1000 Reports – K1000 Database Keys Relationship Matrix Queries

    For example, want to know which tables relate to the MACHINE table and how? Just run the example filtered query, and there they are! Want to do the same for other tables? Simply change a couple of values and run the report!

    And as usual, if you have any questions or need more examples, just post in the comments section – the blog’s author (jverbosk) is more than happy to help. So, have fun on the journey ahead and be sure to share your finds so that we can all benefit! ^_^

  • Dell TechCenter

    Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #56

    "The Elway Gazette"

    Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well, and I hope you enjoy their posts. @all MVPs: If you’d like me to add your blog posts to my weekly compilation, please send me an email (flo@datacenter-flo.de) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

    Featured Posts of the Week!

    ESX the PowerShell Way with Get-ESXCli by Jeffery Hicks

    Cmdlet Reference Download for Windows Azure Pack for Windows Server #WAP #WindowsAzure #SCVMM by James van den Berg

    MP Control Manager not responding by Damian Flynn

    WS2012 R2 Hyper-V Manager Can Be Used With WS2012 Hyper-V by Aidan Finn

    Azure 

    Cmdlet Reference Download for Windows Azure Pack for Windows Server #WAP #WindowsAzure #SCVMM by James van den Berg

    Events

    Das war die TechNet Conference 2013 (“That was the TechNet Conference 2013”) in German by Toni Pohl

    Hyper-V

    Hyper-V VM-Ressourcen priorisieren (“Prioritizing Hyper-V VM resources”) in German by Benedict Berger

    Alles, was ihr schon immer über Generation-2-VMs in Hyper-V 2012 R2 wissen wolltet, aber euch nicht zu fragen trautet (“Everything you always wanted to know about Generation 2 VMs in Hyper-V 2012 R2 but were afraid to ask”) in German by Nils Kaczenski

    Introducing Hyper-V Server 2012 R2 by Aidan Finn

    WS2012 R2 Hyper-V Manager Can Be Used With WS2012 Hyper-V by Aidan Finn

    Removing the ghost Hyper-V vNic adapter when using Converged Networks after in-place upgrade to W2012R2 by Alessandro Cardoso

    Office 365

    CHANGE OFFICE 365 PASSWORD EXPIRATION POLICY by Thomas Maurer

    PowerShell

    ESX the PowerShell Way with Get-ESXCli by Jeffery Hicks

    Managing VMware Tools with PowerCLI by Jeffery Hicks

    SharePoint

    Two new SharePoint 2013 Workflow Articles on MSDN - Debugging & Workflow CSOM by Andrew Connell

    Inspecting the SharePoint OAuth Token with a Fiddler Extension by Andrew Connell

    System Center Core

    Technical Documentation Download for System Center 2012 #sysctr #Cloud #SCVMM by James van den Berg

    System Center Configuration Manager

    MP Control Manager not responding by Damian Flynn

    System Center Service Manager

    Service Manager 2012 R2: Incident Management by Damian Flynn

    System Center Virtual Machine Manager

    TECHDAYS 2013 – FABRIC MANAGEMENT WITH VIRTUAL MACHINE MANAGER SESSION ONLINE by Thomas Maurer

    Migrating to System Center Virtual Machine Manager 2012 R2 by Damian Flynn

    Windows Server

    In the Spotlight => Hybrid #Cloud with NVGRE (WS SC 2012 R2) #SCVMM #Hyperv by James van den Berg

    Tools

    MAP Toolkit 9.0 Beta is now available by Robert Smit

    DELL Server DRAC Card Soft Reset With Racadmin by Didier van Hoye

  • Dell TechCenter

    Dell Management Packs now support Microsoft System Center 2012 R2!

    You can now use the Dell OpenManage Integration Suite for Microsoft System Center with Microsoft System Center 2012 R2 Operations Manager.  We have qualified all the Management Pack suites listed below on the latest version of OpsMgr and updated the documentation for R2 support.

    • Dell Server Management Pack Suite – Discover, inventory, and monitor Dell PowerEdge servers (agent-based option with Windows/OpenManage Server Administrator, and agent-free option using WSMAN for 12th-generation Dell PowerEdge servers), Chassis Management Controllers using SNMP, and iDRACs using SNMP. Download: version 5.1, Documentation

    • Dell Client Management Pack – Discover, inventory, and monitor Dell client PCs running Windows and OpenManage Client Instrumentation (OMCI). Download: version 5.0, Documentation

    • Dell Printer Management Pack – Discover, inventory, and monitor Dell printers using SNMP. Download: version 5.0, Documentation

    • Dell MD Storage Array Management Pack Suite – Discover, inventory, and monitor Dell PowerVault MD storage arrays. Download: version 5.0, Documentation

    • Dell EqualLogic Management Pack Suite – Inventory and monitor Dell EqualLogic storage arrays using SNMP. Download: version 5.0, Documentation

    The latest releases of all the listed Dell Management Packs work as-is with System Center 2012 R2 Operations Manager; the only exception is that the Chassis Modular Server Correlation feature of the Server MP Suite is not supported on R2 (issue 110032 in the online Release Notes).

  • Foglight for Virtualization and Storage Management

    Happy Thanksgiving: What are Virtualization Administrators Thankful for?

    ‘Tis the Season to Give Virtual Thanks!

    Since their inception, virtualization and its supporting technologies have come a long way. Virtualization has evolved into a necessity for companies of all sizes, promising benefits like cost savings, increased performance, and better availability of critical applications. Today, IT administrators are equipped with the tools they need to do their jobs better, smarter, and faster. With Thanksgiving just a few days away, we thought it would be fun to list the top three things we think virtualization administrators will be giving thanks for this year. So sit down with a slice (or two) of pumpkin pie and enjoy!

    1. Better Optimization:

    When it comes to virtualization, one size definitely does NOT fit all; the environment must be tailored to specific organizational needs and requirements. What’s the point of virtualizing an environment if it’s not going to operate at top efficiency? Tips like “Build vSphere clusters without reserves” or “Leave your VMs powered on” help us approach our virtual environments from the right direction.

    2. Smarter Visibility:

    Last week’s webcast, “Best Practices: Managing Virtual Infrastructure Health and Performance” talked about the lack of visibility IT admins deal with when it comes to infrastructure performance. Visibility alone still leaves you with a burden of tasks to complete. Thankfully, there are things that can be done to achieve optimal performance and tools that can be deployed to manage complex virtual environments.

    3. Faster Automation:

    Automation is the IT gift that keeps on giving: not only does it save time, it also eliminates the attention wasted on manual, repetitive tasks. Automating processes within a virtual environment keeps VM configuration and control from consuming extra time, keeps IT staff resources from being bogged down, and keeps cost efficiencies from being drained. Now that you’ve got leftover time in your day from automating all those processes, why don’t you check out our webcast, “Top Ten Virtualization Automation Tips for Infrastructure and Operations Administrators,” here.

    Happy Thanksgiving! Let us know what you are thankful for in the comments below.

  • Foglight for Virtualization and Storage Management

    Gartner Technology Predictions for 2014 and the impact to Virtualization Administrators

    Gartner has spoken. Each October at the Gartner Symposium/ITxpo, the large IT analyst firm pulls out its crystal ball and makes predictions about the following year, based on lengthy conversations with customers and vendors. Gartner’s predictions often serve as the Bible for companies setting spending priorities for the new year. Like clockwork, last month Gartner published its thoughts for 2014. As we expected, it’s great news for virtualization administrators – your job still matters! I’ve outlined three Gartner trends that we think you should pay attention to:

    Hybrid Cloud and IT as a Service Broker

    According to published reports of the Gartner predictions, the analyst firm believes that the convergence of internal clouds and external private cloud services is essential. Enterprises that jump on this bandwagon must design a private cloud with a hybrid future in mind – otherwise, future integration and interoperability will be nearly impossible.

    Why this matters: Cloud has been a buzzword forever, and from the looks of it, Gartner says it will remain a hot topic in 2014. Gartner predicts companies will invest in developing hybrid clouds and recommends they look for ways to manage cloud resources so they can properly measure infrastructure performance and cost. Since cloud environments rely on virtual infrastructure, virtualization administrators play an important role in the success of adopting cloud technologies.

    Web-Scale IT

    Gartner says that Amazon, Google, and Salesforce.com are innovating in how IT services can be delivered, and IT organizations are advised to align with these vendors’ methodologies, copying their processes, architectures, and practices.

    Why this matters: In keeping with the cloud theme, Gartner predicts that more and more companies will look to public cloud infrastructures for ways to properly manage the growth of their own IT infrastructure – and how IT resources are consumed by IT’s “users,” which is to say employees. The key to any public cloud is elasticity – the ability to increase capacity as business needs warrant. Virtualization administrators will be part of the discussion as companies try to copy the “web-scale” features of public clouds within their own entities. By virtue of working in virtual environments, virtualization administrators understand the elasticity provided by cloud environments and how to manage compute consumption.

    3-D Printing

    Gartner projects 75 percent growth in 3-D printers this coming year, increasing to 200 percent in 2015. Improved designs, streamlined prototyping, and short-run manufacturing are just some of the drivers behind the consumer hype.

    Now this prediction does not tie directly to virtualization – but I wanted you to start thinking about all the cool graphics you are going to create and print in 2014. It’s simply too cool not to mention.

    From the looks of it, it’s clear that the trend toward virtual environments and accompanying technologies will continue to gain speed. Virtualization plays a key role in helping IT be more agile, while enabling a flexible infrastructure that can be scaled based on need. So be confident, fellow virtualization pros. You are in a hot space.

    If you are looking to impress your boss in 2014 and cut operational costs by reducing infrastructure complexity, then it is time to get more information on Foglight for Virtualization or to download a free trial. Simply visit: http://software.dell.com/products/foglight-for-virtualization-enterprise-edition/.

  • Dell TechCenter

    Building a Private Cloud File Solution Using Owncloud

    Editor’s Note:

    Dell Solution Centers in conjunction with Intel have established a Cloud and Big Data program to deliver briefings, workshops and Proofs of Concept focused on hyper-scale programs such as OpenStack and Hadoop.  The program’s Modular Data Center contains over 400 servers to support hyper-scale capability to allow customers to test drive their solutions. 

    In this blog series, Cloud Solution Architect Kris Applegate discusses some of the technologies he is exploring as part of this program – and shares some really useful tips! You and your customer can learn more about these solutions at one of our global Solution Centers; all have access to the Modular Data Center capability and we can engage remotely with customers who cannot travel to us.

    *******************************************

    One of the most popular Software-as-a-Service offerings users associate with modern cloud computing is file sharing and backup. Having a location where you can easily store, share, and back up your files across multiple computers is an attractive option for private cloud customers, too. One solution is the open source software Owncloud (http://owncloud.org). It lets you run a private instance on Windows or Linux and share not only files but also photos, music, contacts, and calendars. Moreover, you can attach it to local storage, network storage, or even other cloud storage providers.

    A couple of possible use cases:

    Remote Office / Branch Office File Share: Using VMware or Microsoft Hyper-V, you could run this in a VM attached to a VRTX and provide users with an easy-to-use file sharing solution.

    Large Enterprise Private File Share: Attaching this to a large scale-out file storage solution such as OpenStack Swift would enable handling thousands or even hundreds of thousands of users’ files on a robust, elastically scalable storage infrastructure.

    Users can access this cloud by means of the web interface, client software (Windows, Mac, Linux), or mobile applications (Apple/Android). They can send links to other Owncloud users or even to any public internet user. These links can be password-protected and can even have an expiration date.
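    For scripted access, Owncloud also speaks WebDAV; a minimal sketch follows, with the server URL and credentials as placeholder assumptions (the /remote.php/webdav/ path is what Owncloud installations of this era typically expose, so verify it against your own instance).

```python
# Minimal sketch: upload a file to, then list, an Owncloud WebDAV endpoint.
# The server URL and credentials are placeholder assumptions.
import requests  # third-party: pip install requests

BASE = "https://owncloud.example.com/remote.php/webdav"  # hypothetical server
AUTH = ("alice", "secret")                               # demo credentials

# Upload a file via a plain HTTP PUT.
resp = requests.put(f"{BASE}/hello.txt", data=b"hello from the private cloud", auth=AUTH)
resp.raise_for_status()

# List the root folder with the WebDAV PROPFIND verb (Depth: 1 = direct children).
resp = requests.request("PROPFIND", BASE + "/", auth=AUTH, headers={"Depth": "1"})
resp.raise_for_status()
print(resp.status_code)  # 207 Multi-Status on success; the body is XML to parse
```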

    Owncloud also has an open architecture that allows users to write plugins to extend the functionality of the platform. Popular ones add antivirus scanning, encryption, URL shortening, and LDAP/AD integration.

    Today’s users are already familiar with tools like these in their personal lives, so delivering something just as intuitive and simple can go a long way toward making them productive faster. If you would like to discuss this use case or any other Dell solution, talk to your account team about how a Solution Center engagement might help you.

     

  • Dell TechCenter

    Follow-up: Ubuntu on the Precision M3800

    In my previous post about the Precision M3800, I noted that there were still a couple of pieces that did not fully work on Linux: the touchscreen, which requires a "quirk" to fix, and the SD card reader, which did not work at all. I have some updates on both.

    Kent Baxley of Canonical and I are working on getting the quirks for the touchscreen included in the official Ubuntu and the upstream kernels. Until then, you can still use the packages available at <http://linux.dell.com/files/ubuntu/contributions/>.

    As for the SD card reader, there is a fix in the upstream 3.13-rc1 kernel's rts5249 driver. We have worked with Canonical, and this fix is expected to appear in an upcoming Stable Release Update (SRU) kernel for Ubuntu 13.10. For users of other Linux distributions, I am working with the patch's author to get this patch into older stable kernels.
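    As a rough heuristic (my assumption, not an official procedure), you can compare the running kernel version against 3.13 to guess whether the upstream rts5249 fix is already present. Keep in mind that distribution kernels, such as Ubuntu's SRU kernels, backport fixes, so an older version string does not necessarily mean the fix is missing:

```python
# Rough heuristic: is the running kernel at least 3.13, where the rts5249
# SD-reader fix landed upstream? Distro kernels may backport the fix, so
# treat a "no" as "check your kernel changelog", not as a definitive answer.
import platform

def kernel_at_least(major: int, minor: int) -> bool:
    release = platform.release()            # e.g. "3.11.0-13-generic"
    parts = release.split("-")[0].split(".")
    got = (int(parts[0]), int(parts[1]))
    return got >= (major, minor)

print("Likely has upstream rts5249 fix:", kernel_at_least(3, 13))
```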

  • General HPC

    It's Back-to-Back "Student Cluster Challenge" Wins for TACC

    The students from the Texas Advanced Computing Center (TACC) at the University of Texas won the "Student Cluster Challenge" at SC13 in Denver last week. This is the second consecutive win for a TACC team, after a TACC team also triumphed at SC12 in Salt Lake City.

    This is also the third consecutive Dell-sponsored team to "bring home the gold." The TACC teams join Team South Africa, which won the Student Cluster Challenge at the International Supercomputing Conference (ISC) in June.

    Congratulations to TACC on this impressive performance!

    Meet some of the team in this insideHPC video.

  • High Performance Computing - Blog

    Accelerating High Performance LINPACK (HPL) with Kepler K20X GPUs

    by Saeed Iqbal and Shawn Gao 

    The NVIDIA Tesla K20 GPU has proven performance and power efficiency across many HPC applications in the industry. The K20 is based on NVIDIA's latest Kepler GK110 architecture and incorporates several innovative features and micro-architectural enhancements of the Kepler design. Since the K20 release, NVIDIA has launched an upgrade to the K20 called the K20X, which has more processing units, more memory, and higher memory bandwidth. This blog quantifies the performance and power-efficiency improvements of the K20X over the K20, information that should help in making an informed decision between these two powerful GPU options.

    High Performance LINPACK (HPL) is an industry-standard, compute-intensive benchmark, traditionally used to stress the compute and memory subsystems. Now, with the increasingly common use of GPUs, a GPU-enabled version of HPL is developed and maintained by NVIDIA; it utilizes both the CPUs and the GPU compute accelerators. We used the Kepler GPU-enabled HPL version 2.0 for this study.

    We used the Dell PowerEdge R720 for the performance comparisons. The PowerEdge R720 is a versatile, full-featured, dual-socket server with a large memory capacity that can have up to two internal GPUs installed; we kept the standard test configuration of two GPUs per server.

    Hardware Configuration and Results

    The server and GPU configuration details are compared in the tables below.

    Table 1: Server configuration

    Server
    • Model: PowerEdge R720
    • Processor: Two Intel Xeon E5-2670 @ 2.6 GHz
    • Memory: 128 GB (16 x 8 GB), 1600 MHz, 2 DPC
    • GPUs: NVIDIA Tesla K20 and K20X
    • Number of GPUs installed: 2
    • BIOS: 1.6

    Software
    • Benchmark: GPU-accelerated HPL, version 2.1
    • CUDA, driver: 5.0, 304.54
    • OS: RHEL 6.4

    Table 2: K20 and K20X relevant parameter comparison

    Parameter             | K20X        | K20         | Improvement (K20X)
    ----------------------|-------------|-------------|-------------------
    Number of cores       | 2,688       | 2,496       | 7.6%
    Memory (VRAM)         | 6 GB        | 5 GB        | 20.0%
    Memory bandwidth      | 250 GB/s    | 208 GB/s    | 20.2%
    Peak performance (SP) | 3.95 TFLOPS | 3.52 TFLOPS | 12.2%
    Peak performance (DP) | 1.31 TFLOPS | 1.17 TFLOPS | 11.9%
    TDP                   | 235 W       | 225 W       | 4.4%

    Figure 1: HPL performance and efficiency on R720 for K20X and K20 GPUs. 

    Figure 1 illustrates the HPL performance on the PowerEdge R720, with the CPU-only performance shown for reference. Clearly, there is a performance improvement with the K20X of about 11.2% in HPL GFLOPS compared to the K20. Compared to the CPU-only configuration, the HPL acceleration with K20X GPUs is 7.7X; with K20 GPUs it is 6.9X. In addition to improved performance, the compute efficiency of the K20X is slightly better than that of the K20: as shown in Figure 1, the K20X reaches 82.6% and the K20 82.1%. It is typical for CPU-only configurations to have higher efficiency than heterogeneous CPU+GPU configurations, as Figure 1 also shows: the CPU-only efficiency is 94.6%, while the CPU+GPU configurations are in the low 80s.
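    To show where those percentages come from, here is a small worked sketch of the standard HPL efficiency arithmetic, reconstructed from the configuration above (the per-socket peak formula is my assumption): efficiency is measured GFLOPS divided by the theoretical double-precision peak of CPUs plus GPUs, and inverting that relation reproduces the speedups quoted here to within rounding.

```python
# Reconstruction (not from the original post) of the usual HPL efficiency
# arithmetic for this test rig. CPU peak assumes Sandy Bridge AVX:
# 2 sockets x 8 cores x 2.6 GHz x 8 double-precision FLOPs/cycle.
cpu_peak = 2 * 8 * 2.6 * 8        # 332.8 GFLOPS
gpu_peak_k20x = 2 * 1310.0        # two K20X cards at 1.31 TFLOPS DP each
gpu_peak_k20 = 2 * 1170.0         # two K20 cards at 1.17 TFLOPS DP each

def measured(peak_gflops, efficiency):
    # HPL efficiency = measured GFLOPS / theoretical peak, so invert it.
    return peak_gflops * efficiency

rmax_cpu = measured(cpu_peak, 0.946)                   # ~315 GFLOPS
rmax_k20x = measured(cpu_peak + gpu_peak_k20x, 0.826)  # ~2439 GFLOPS
rmax_k20 = measured(cpu_peak + gpu_peak_k20, 0.821)    # ~2194 GFLOPS

print(f"K20X vs CPU-only: {rmax_k20x / rmax_cpu:.1f}x")           # ~7.7x, as quoted
print(f"K20  vs CPU-only: {rmax_k20 / rmax_cpu:.1f}x")            # ~7.0x (text: 6.9x)
print(f"K20X over K20: {(rmax_k20x / rmax_k20 - 1) * 100:.1f}%")  # ~11.1% (text: 11.2%)
```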

    Figure 2: Total Power and Power Efficiency on PowerEdge R720 for K20 and K20X GPUs. 

    Figure 2 illustrates the total system power consumption of the different configurations of the PowerEdge R720 server. The first thing to note from Figure 2 is that GPUs consume substantial power. The CPU-only configuration draws about 450 W, which rises above 800 W when K20/K20X GPUs are installed in the server, an increase of up to 80%. This should be taken into account when budgeting power and sizing power supplies for large installations. However, once the power is delivered to the GPUs, they are much better than CPUs alone at converting energy into useful work, as the improved performance-per-watt numbers in Figure 2 show. The K20X delivers 2.79 GFLOPS/W, about 4X better than the CPU-only configuration; similarly, the K20 delivers 2.68 GFLOPS/W, about 3.8X better. It is also interesting to note that the K20X improves on its predecessor K20's power efficiency by roughly 4% (2.79 vs. 2.68 GFLOPS/W).

    Summary

    The K20X delivers about 11% higher performance and consumes 7% more power than the K20 for the HPL benchmark.   These results are in line with the expected increase when the theoretical parameters are compared.