Latest Blog Posts
  • Information Management

    How Do I Make Those Pesky WMI/Access Denied Errors In Spotlight Go Away?

    Do you ever wonder why Spotlight throws an Access Denied or WMI error while monitoring a Windows server, even though your user account has all the permissions needed to access that server?  You might encounter these errors on the Home page or even at the time of connection.  The frustrating part is that even though you can connect via Remote Desktop and ping the server, you still get those annoying errors!  Well, Spotlight actually runs various WMI commands to connect to your server and collect performance metrics for monitoring.  These underlying commands are a common cause of these errors.  Using a few simple commands, you can identify and address the root cause and be on your way to monitoring in no time!

    Before we get into the details, let’s clarify the characteristics of these errors:

    • Errors are consistent, not random.
    • Errors are received on the Home page of Spotlight, with the acronym ‘WMI’ followed by ‘Access Denied’ or ‘Invalid Class’ along with the WMI class name in the error message:

      • "Collection 'Open Sessions' failed: WMI query "Win32_ServerSession" failed: Access denied.[0x80041003]"
      • “WMI query Win32_PerfRawData_PerfOS_Memory failed: Invalid class.
        [0x80041010] [Error Code: -2147217392].”
    • Errors are received when attempting to connect to the server, containing ‘Access Denied’ or ‘RPC’ within their messages:

      • "Windows host is in an unplanned outage: Access is denied (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))"
      • “Monitored Server - Windows Connection Failure: Cannot connect to windows host 'NNN.N': Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))” (for Windows 2008 servers)
      • "Windows (WMI) connection error: Error 800706BA: The RPC server is unavailable"

    To find the root cause and rectify these errors, follow these simple steps:

    1. First, confirm the login account used in Spotlight for the connection:
      1. On the Spotlight Home page, in the connection tree on the left side of the page, right-click the failing Windows connection and select the Properties option.
      2. Under the Details tab, take note of the login account used. This account has two possible sources:
        1. A domain account was manually entered.
        2. The “Use Diagnostic Server Credentials” option was used. In this case, log in to the server where the Spotlight Diagnostic Server is installed and, using the Services.msc console, confirm which account runs the ‘Spotlight Diagnostic Server’ service.
    2. Next, test WMI using the same credentials:
      1. Log into the server where the Spotlight Diagnostic Server is installed.
      2. Open a CMD window and adapt the command below with your information:  wmic /node:<host name> /user:<domain>\<user name> path Win32_PerfRawData_PerfOS_Memory
      3. Use the host name and user name retrieved in step 1.  If the error you encountered included a specific WMI class name, use that class name in the command; otherwise, use the sample ‘Win32_PerfRawData_PerfOS_Memory’ class mentioned above (see the filled-in example after these steps).
      4. Running this command should prompt you for a password and then return data from the host server.
      5. Most likely you’ll receive the same Spotlight error when running this command.  Windows errors such as this one can be resolved by the System Administrators of that server. Once you can run this WMI command without encountering any errors, the Spotlight error should rectify itself as well.
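
    For example, assuming a hypothetical host named SQLPROD01 and a hypothetical domain account ACME\spotlight_svc (substitute the values you retrieved in step 1), the filled-in command would look like this:

        wmic /node:SQLPROD01 /user:ACME\spotlight_svc path Win32_PerfRawData_PerfOS_Memory

    If the command prompts for a password and then prints rows of memory counters, WMI connectivity is healthy; an ‘Access denied’ or ‘Invalid class’ response here reproduces the Spotlight error outside of Spotlight.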

    Our Deployment Guide includes a ‘Troubleshooting WMI’ section providing more details about these errors.

    Interested in trying Spotlight on SQL Server Enterprise for yourself?  Download a free 30-day trial.  

  • Hotfixes

    Optional Hotfix 593313 for vWorkspace 8.6 MR1 Management Console Released

    This is an optional hotfix and can be installed on the following vWorkspace roles:

    • vWorkspace Management Console

    This release provides support for the following:

    • Management Console: Hyper-V templates with more than one NIC can be imported to the MC with null values (Feature ID 625458)
    • Management Console: Error when adding client record: COMMIT TRANSACTION has no BEGIN (Feature ID 599838)
    • Management Console: When changing provisioning settings in the vWorkspace MC, DMGeoLocDCsAndHosts values can be removed and affect provisioning (Feature ID 588140)
    • Management Console: After upgrading to vWorkspace 8.6, user sessions can't be controlled (Feature ID 453743)

    This hotfix is available for download at:

     https://support.software.dell.com/vworkspace/kb/205465

  • Dell Cloud Blog

    SPEC Cloud IaaS Benchmarking: Dell Leads the Way

    By Nicholas Wakou, Principal Performance Engineer, Dell

    Computer benchmarking, the practice of measuring and assessing the relative performance of a system, has been around almost since the advent of computing.  Indeed, one might say that the general concept of benchmarking has been around for over two millennia.  As Sun Tzu wrote on military strategy:

    "The ways of the military are five: measurement, assessment, calculation, comparison, and victory.  If you know the enemy and know yourself, you need not fear the result of a hundred battles.”
     

    The SPEC Cloud™ IaaS 2016 Benchmark is the first specification by a major industry-standards performance consortium that defines how the performance of cloud computing can be measured and evaluated. The use of the benchmark suite is targeted broadly at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers.  The SPEC Cloud Benchmark addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms, either public or private.

    Dell has been a major contributor to the development of the SPEC Cloud IaaS Benchmark and is the first – and so far only – cloud vendor, either private or public, to successfully execute the benchmark specification tests and publish its results.  This article explains the new cloud benchmark and Dell’s role and results.

    How it works and what is measured

    The benchmark is designed to stress both provisioning and runtime aspects of a cloud using two multi-instance I/O and CPU intensive workloads: one based on YCSB (Yahoo! Cloud Serving Benchmark) that uses the Cassandra NoSQL database to store and retrieve data in a manner representative of social media applications; and another representing big data analytics based on a K-Means clustering workload using Hadoop.  The Cloud under Test (CuT) can be based on either virtual machines (instances), containers, or bare metal.

    The architecture of the benchmark comprises two execution phases, Baseline and Elasticity + Scalability. In the baseline phase, peak performance for each workload running on the Cloud under Test (CuT) alone is determined in 5 separate test runs.  Data from the baseline phase is used to establish parameters for the Elasticity + Scalability phase.  In the Elasticity + Scalability phase, both workloads are run concurrently to determine elasticity and scalability metrics.  Each workload runs in multiple instances, referred to as an application instance (AI).  The benchmark instantiates multiple application instances during a run.  The application instances and the load they generate stress the provisioning as well as the run-time performance of the cloud.  The run-time aspects include CPU, memory, disk I/O, and network I/O of these instances running in the cloud.  The benchmark runs the workloads until specific quality of service (QoS) conditions are reached.  The tester can also limit the maximum number of application instances that are instantiated during a run.

    The key benchmark metrics are listed below (a toy calculation sketch follows the list):

    • Scalability measures the total amount of work performed by application instances running in a cloud.  The aggregate work performed by one or more application instances should scale linearly in an ideal cloud.  Scalability is reported for the number of compliant application instances (AIs) completed and is an aggregate of workload metrics for those AIs normalized against a set of reference metrics.

    • Elasticity measures whether the work performed by application instances scales linearly in a cloud when compared to the performance of application instances during the baseline phase.  Elasticity is expressed as a percentage.

    • Mean Instance Provisioning Time measures the time interval between the instance provisioning request and connectivity to port 22 on the instance.  This metric is an average across all instances in valid application instances.
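
    To make the metric definitions concrete, here is a minimal sketch in Python (the numbers are invented, and this is simplified relative to the official SPEC Cloud run and reporting rules, which normalize against reference metrics):

        # Toy illustration of the metric ideas above, not the official SPEC code.
        provision_requested = [0.0, 5.0, 10.0]   # per-instance provisioning request times (s)
        ssh_reachable = [42.0, 49.0, 55.0]       # first successful connect to port 22 (s)

        # Mean Instance Provisioning Time: average request-to-port-22 interval.
        intervals = [done - start for start, done in zip(provision_requested, ssh_reachable)]
        mean_provisioning_time = sum(intervals) / len(intervals)

        # Elasticity-style percentage: per-AI work under concurrent load vs. baseline.
        baseline_ops_per_s = 100.0               # one AI running alone (baseline phase)
        concurrent_ops_per_s = [92.0, 88.0]      # each AI during Elasticity + Scalability
        elasticity_pct = 100.0 * sum(concurrent_ops_per_s) / (len(concurrent_ops_per_s) * baseline_ops_per_s)

        print(f"mean provisioning time: {mean_provisioning_time:.1f} s")  # 43.7 s
        print(f"elasticity: {elasticity_pct:.1f} %")                      # 90.0 %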
       

    Benchmark status and results

    The SPEC Cloud IaaS benchmark standard was released on May 3, 2016, and more information can be found in the Standard Performance Evaluation Corporation’s announcement.  At the time of the release, Dell submitted the first and only result in the industry.  This result is based on the Dell Red Hat OpenStack Cloud Solution Reference Architecture, comprising Dell’s PowerEdge R720 and R720xd server platforms running the Red Hat OpenStack Platform 7 software suite.  The details of the result can be found on the SPEC Cloud IaaS 2016 results page.

    Dell has been a major contributor to developing the SPEC Cloud IaaS benchmark standard right from the beginning, from when the charter of the SPEC Cloud Working Committee was drafted to when the benchmark was released.  So it is no surprise that Dell was the first company to publish a result based on the new cloud benchmark standard.  Dell will continue to use the SPEC Cloud IaaS benchmark to compare and differentiate its cloud solutions and will additionally use the workloads for performance characterization and optimization for the benefit of its customers.

    At every opportunity, Dell will share how it is using the benchmark workloads to solve real world performance issues in the cloud.  On Wednesday, June 29th, 2016, I will be presenting a talk entitled “Measuring performance in the cloud: A scientific approach to an elastic problem” at the Red Hat Summit in San Francisco.  This presentation will include the use of SPEC Cloud IaaS Benchmark standard as a tool for evaluating the performance of the cloud.

    Computer benchmarking is no longer an academic exercise or a competition among vendors for bragging rights.  It has real benefits for customers, and now – with the creation of the SPEC Cloud IaaS 2016 Benchmark – it advances the state of the art of performance engineering for cloud computing.

      

  • Information Management

    Toad for Oracle 12.9 Is Now Available

    Toad for Oracle 12.9 is now available. The new release focuses on integration between:

    • Toad for Oracle and Toad Intelligence Center
    • Toad for Oracle and Code Tester for Oracle.

    In addition, it delivers further enhancements to Team Coding.

    This release continues the emphasis on these themes:

    • Code Quality and Agile Workflow
    • Continued Stability improvements
    • Continued Team Coding improvements 

    Focusing on the themes above, here are some of the many highlighted features introduced in this release of Toad for Oracle.

    Team Coding

    A tremendous amount of work was invested in redesigning Team Coding to improve overall performance, usability, and stability, and to better support agile database development.

    One quick but important note: The 12.9 implementation of Team Coding is not backward compatible with earlier versions of Toad. Version 12.9 can read a Team Coding environment that was created with an earlier version, but earlier versions cannot read a Team Coding environment created in version 12.9.

    Below are some of the major enhancements in Team Coding:

    • Team Coding was moved from the Utilities menu to the main menu (by default). Captions and ordering were changed. You may need to reset the toolbars to see these changes.
    • Migration of Toad 12.6 and earlier Team Coding settings performs much faster.
    • The Configure (local Team Coding settings) and Administer (server-side settings) windows are now merged into one window that is accessed from Team Coding | Configuration. All developers can access the Local Settings node to configure Team Coding settings that support the management of files and scripts in the VCS. Team Coding Administrators can access the Server Settings nodes that are visible when Team Coding is installed in a database.
    • Local files and scripts can now be controlled manually while working in a Team Coding environment. If the Editor senses that the item you are working with is a file or script, it defaults to the file-based source control functionality and allows you to select a project folder, then add, check out, or check in the item. You can use the VCS tab of the Team Coding Manager to manage those actions. Toad will use the VCS settings (including the VCS provider) that are defined for Team Coding on the database server, and the location of your local scripts should exist within the local folders mapped to the VCS.
    • Transaction History for controlled objects was added to the Team Coding Object Summary, accessed from Team Coding | Show Team Coding Objects. This history shows detailed information about the transaction type, revision numbers, location in the VCS, user names involved in the transaction, and other information.
    • Code Tester is now integrated into Team Coding. You can set this option on the Team Settings tab of the Team Coding Configuration window. When enabled, Team Coding will run any predefined test for an object before checking the object back into Team Coding. If the tests succeed, the check-in will automatically continue. If they fail, the check-in process will be stopped and the Code Tester Output window will open so that the developer can see which tests succeeded and/or failed in order to make the necessary changes to the code.

    Automation Designer

    • Code Tester was added to Automation Designer:
      • A Code Tester action was added to the DB Misc tab in the Automation Designer.
      • From this interface, you can run test suites, unit tests, and test cases defined in Code Tester.
    • A script source can now be a script that has been published to Toad Intelligence Central. Output can be discarded or published back onto the Toad Intelligence Central server.
    • Export Dataset:
      • A new Select Statements (from dual) export format was added. This creates a select statement from dual (or multiple select statements from dual, UNION ALL'd together) so you can reproduce a particular dataset from a query without the original tables (see the example after this list).
      • When exporting from a grid with a range of columns selected, you can now right-click the Columns to Exclude dropdown and select Include only selected columns. Non-selected columns are excluded from the export.
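
    As a quick illustration of the from-dual format (the column names and values here are invented), a two-row export might look like:

        SELECT 101 AS order_id, 'OPEN' AS status FROM dual
        UNION ALL
        SELECT 102 AS order_id, 'CLOSED' AS status FROM dual;

    Running that script reproduces the exported rows on any Oracle instance, with no dependency on the source tables.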

    Debugging

    • Changes were made to better handle input and output of composite types.
    • When debugging a trigger you can now select the triggering event to generate statements of the appropriate type in the anonymous block.

    Schema Browser

    • A Constraints tab was added on the right-hand side for Views.
    • Partition and Subpartition tabs now have a button to rename the selected partition or subpartition.
    • Compare Schema has the following enhancements:
      • Extra/missing columns will now generate a row for the table under “Objects which differ.” Columns are still listed under “In one schema but not the other.”
      • A filter was added to the Results tab. You can filter by object name or type of difference. To make room for the filter, the Group by object type checkbox was removed from the toolbar, but it is still available as a right-click item on the results tree.
      • You can now right-click a table in the Results tab and send it to a Rebuild Table window or the DBMS Redefinition Wizard.

    Editor

    • A new Treat blank line as statement terminator option was added to Toad Options | Editor | Execute/Compile. When checked, it is no longer necessary to terminate statements with a semicolon, as long as a blank line exists between statements (a short example follows this list). This option is enabled by default. Blank lines within PL/SQL do not fragment the DDL, so you can work with PL/SQL in the normal manner. Affected areas of the Editor are: Navigator, Explain Plan, Code Insight, statement detection for F9 execution, and all refactoring features.
    • A Team Coding toolbar was added to the Editor and is active when there is a script or file open. A Team Coding tab was also added.
    • There is a new button on the Script Output tab of the Editor that will export all of the grids to a single Excel file.
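
    For instance, with this option enabled, the two statements below (just illustrative dictionary-view queries) can each be executed with F9 even though neither ends in a semicolon, because the blank line marks the statement boundary:

        SELECT owner, table_name FROM all_tables

        SELECT owner, view_name FROM all_views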

     

    A complete set of 12.9 new features is available at Toad for Oracle 12.9 – Release Notes – New Features.

  • KACE Blog

    The Fuzzy Art of Solution Evaluation: 8 Broader Questions for Choosing the Right Solution and Vendor

    A couple of weeks ago, I offered seven key questions for building a solid evaluation methodology to help you as you embark on any journey to select new technologies, whether you’re looking for information systems management software, endpoint security and endpoint management tools, cloud technologies, or other solutions.

    But once you’ve defined your process for solution evaluation, what happens next?

    It’s time to think about the broader questions. After all, it’s not all just a feature/function checklist that gets you where you need to be, but the sum of both tangible and less-tangible solution (and vendor) characteristics. Keep in mind your end game, which includes both enhancing your competitive advantage and returning more time for innovation, especially for the IT team.

    Here are eight key questions to ask yourself:

    1. What’s our major pain point? Your organization needs to agree on the major goal for the solution. For example, are you most concerned about improving customer service, user support or network security? Or perhaps about making the most of your limited IT staff? Are quick implementation and fast time to value critical? Of course, the major pain point will likely be different for, say, an endpoint protection tool than for IT management software.
    2. Is the solution right-sized? Like most everything we purchase, technology comes in high-, mid- and low-end options. Consider whether a candidate solution can meet your organization’s needs without either adding superfluous capabilities or short-changing important requirements.
    3. Does the solution offer all the capabilities you need? Here’s where you pull out the feature/function checklist and ensure the solution ticks all the boxes. Take the time to be exhaustive in your consideration.
    4. What are the pros and cons of the competitor candidates? Be sure to take a close look at the more subtle characteristics of each vendor. Are they innovative? Will they invest in the solution moving forward?
    5. Is the form factor what your organization needs? On premises, as a managed service or in the cloud — most solutions today offer a variety of delivery vehicles. Think through which is best for your infrastructure, ongoing maintenance and security needs.
    6. How are the vendor’s reliability and support? After the purchase is made, will the vendor be there with education, technical support and regular upgrades?
    7. Do the vendor’s philosophy and approach align with yours? This may seem really fuzzy, but it is important to be in sync, since you’ll have to collaborate with the vendor to get your desired results.
    8. Can the vendor validate ROI? Bottom line, can the vendor provide substantiation that your IT investment will deliver a solid, calculable return?

    Applying These Criteria to Systems Management Solutions

    If you’re wondering how to work these questions into your solution evaluation, take a look at our new Endpoint Systems Management Evaluation Guide. This checklist includes not only features and functionality but also the best-fit considerations detailed above. And if you’re in the market for a systems management solution, be sure to put Dell KACE on your list. Dell KACE systems management appliances provide an all-in-one solution that is comprehensive, easy to use, fast to implement and actively supported. We invite you to see how they rate under even the most rigorous solution evaluation methodology.

    About Stephen Hatch

    Stephen is a Senior Product Marketing Manager for Dell KACE. He has over eight years of experience with KACE and over 20 years of marketing communications experience.


  • Desktop Authority

    No Time, No Problem! We’ll Bring Our Virtual Trade Show to You

    We’re all busy. Busy answering emails, putting out fires, supporting users - and doodling in endless meetings (like the panda I am working on right now). Sure, doodling might seem counterproductive, but then I like to think of it as busy alleviating stress.

    Being busy and being productive are two separate things.

    Let’s face it, IT pros like you are in short supply and even shorter on time. Managing the day-to-day tasks, let alone all the new technologies and buzzwords that go with them, is something that you do after the work day ends – or, as I like to refer to it, when the real work gets done.

    You know, the fun stuff like researching how to secure your endpoints remotely, or how to enable your users to run apps that require admin privileges without actually giving them admin privileges (because that’s asking for trouble). And what about figuring out how to automatically manage GPO settings across targeted AD users? All that fun stuff happens after “regular” working hours. The irony is, if you had all of that in place, you might have a bit more free time.

    Barren is the busy life

    When was the last time you were able to simply get away for a few hours to expand your IT skills, attend a training session or check out the latest technologies that could help you spend less time on IT administration, and more time on innovation? Let me venture a guess – never?

    What if I told you that there was a way for you to gain access to everything you need without actually leaving your desk? Taking that a step further, what if you were able to network with our IT evangelists, experts and your peers all from your desktop - so you can still be productive at work and have access to the tools and resources you need to advance your knowledge? Would you take us up on it? 

    Come on, no excuses

    Well, that’s exactly what the Dell Software Virtual Trade Show is all about. You’ll have access to our IT experts, virtually, so you can get the insight, tips and tricks, and technical information you need to proactively manage your desktop environment - without putting a dent in your productivity since you won’t have to leave your desk. 

    But perhaps the best reason you should consider attending the virtual trade show is that you can set your own schedule and arrive whenever you like. Simply log in to the virtual expo on June 23rd any time between 8:00 a.m. and 3:00 p.m. PDT (or 9:00 a.m. – 4:00 p.m. BST if you’re in the UK or EMEA) to connect, explore, and learn how Dell Software can help you support your organization – especially when you can do it right from your desktop.

     

    Reserve your spot today

  • General HPC

    Introducing 100Gbps with Intel® Omni-Path Fabric in HPC

    By Munira Hussain and Deepthi Cherlopalle

    This blog introduces the Omni-Path Fabric from Intel®, a cluster network fabric used for inter-node application, management, and storage communication in High Performance Computing (HPC). It is part of the new Intel® Scalable System Framework and is based on IP from QLogic’s TrueScale and Cray’s Aries interconnects. The goal of Omni-Path is to eventually meet the performance and scalability demands of exascale data centers.

    Dell provides a complete, validated, and supported solution offering that includes the Networking H-series Fabric switches and Host Fabric Interface (HFI) adapters. The Omni-Path HFI is a PCIe Gen3 x16 adapter capable of 100 Gbps unidirectional per port; the card has 4 lanes at 25 Gbps per lane (4 × 25 Gbps = 100 Gbps).

    HPC Program Overview with Omni-Path:

    The current solution program is based on Red Hat Enterprise Linux 7.2 (kernel version 3.10.0-327.el7.x86_64). The Intel Fabric Suite (IFS) drivers are integrated into the current software stack, Bright Cluster Manager 7.2, which helps deploy, provision, install, and configure an Omni-Path cluster seamlessly.

    The following Dell servers support Intel® Omni-Path Host Fabric Interface (HFI) cards: PowerEdge R430, PowerEdge R630, PowerEdge R730, PowerEdge R730XD, PowerEdge R930, PowerEdge C4130, and PowerEdge C6320.

    Management and monitoring of the fabric is done using the Fabric Manager (FM) GUI available from Intel®. The FM GUI provides in-depth analysis and a graphical overview of fabric health, including a detailed breakdown of port status and mapping, as well as investigative reports on errors.

     Figure 1: Fabric Manager GUI

    The IFS tools include various debugging and management utilities such as opareports, opainfo, opaconfig, opacaptureall, opafabricinfoall, opapingall, opafastfabric, etc. These help capture a snapshot of the fabric and troubleshoot issues. The host-based subnet manager service, known as opafm, is also available with IFS and is able to scale to thousands of nodes.

    The fabric relies on the PSM2 libraries to provide optimal performance. The IFS package provides precompiled versions of the open source Open MPI and MVAPICH2 MPI libraries, along with micro-benchmarks such as OSU and IMB used to measure the bandwidth and latency of the cluster.
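
    As a rough usage sketch (node01 and node02 are placeholder host names, and binary paths vary by installation), a two-process OSU latency run between two nodes with the IFS-provided Open MPI might be launched like this:

        mpirun -np 2 -host node01,node02 ./osu_latency

    The same pattern with osu_bw or osu_bibw produces the uni-directional and bi-directional bandwidth numbers discussed below.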

    Basic Performance Benchmarking Results:

    The performance numbers below were taken on a Dell PowerEdge R630 server. The configuration consisted of dual-socket Intel® Xeon® E5-2697 v4 CPUs @ 2.3GHz (18 cores per socket) with 8 × 16 GB DIMMs @ 2400MHz. The BIOS version was 2.0.2, and the system profile was set to Performance.

    OSU micro-benchmarks were used to determine latency. These latency tests were done in a ping-pong fashion. HPC applications need low latency and high throughput. As shown in Figure 2, the back-to-back latency is 0.77 µs, and the latency through a switch is 0.9 µs, which is on par with industry standards.

    Figure 2: OSU Latency - E5-2697 v4

    Figure 3 below shows the OSU uni-directional and bi-directional bandwidth results with the OpenMPI-1.10-hfi version. At a 4MB message size, uni-directional bandwidth is around 12.3 GB/s and bi-directional bandwidth is around 24.3 GB/s, which is on par with the theoretical peak values.

    Figure 3: OSU Bandwidth – E5-2697 v4

    Conclusion:

    Omni-Path Fabric adds value to the HPC solution. It is a technology that integrates well as the high-speed fabric needed to design flexible reference architectures for growing computational demands. Users can benefit from the open source fabric tools like FMGUI, Chassis Viewer, and FastFabric that are packaged with IFS. The solution is automated and validated with Bright Cluster Manager 7.2 on Dell servers.

    More details on how Omni-Path performs in other domains are available here. That document describes the key features of Intel® Omni-Path Fabric technology and provides reference performance data for various commercial and open source applications.

  • KACE Blog

    5 Summer Essentials to Get You Through the Long, Hot Days

    Today is the first official day of summer and people are starting to get geared up for many warm weather activities. While your kids are on summer break, you’ll probably spend more time with family and friends. As the days get hotter and the nights get warmer, now is the perfect time to soak up some Vitamin D, spend some time away from work, and have a nice cold beverage.

    However you decide to spend your time, here are five summer essentials to get you through the long, hot days, especially while you’re out on vacation:

    1. Sunscreen

    No one likes a bad sunburn. Sunscreen is extremely important to protect your skin from UV rays, which can have long-term effects if not treated properly. Wearing sunscreen will ensure your skin is protected from sun damage and you won’t need aloe vera at your bedside.

    2. Sunglasses

    Find a nice pair of sunglasses to shade your eyes from the blinding sun. This will ensure visibility to your surroundings when you go hiking or when you fire up the grill. Did you know your eyes can get sunburn too?

    3. A comfortable wardrobe

    Put away the sweaters and pull out the polo shirts. Shorts, swim suits, sandals/sneakers, and a hat are all a must if you are planning on some outdoor activities. It’s never too late to inventory what items you have, need to replace, or simply get rid of.

    4. A beach bag or backpack

    Whether you’re going to the beach or going to a theme park with the kids, a bag or backpack will take you far. Fill it up with everything you need so you know what’s on hand, and you’ll be set for the day.

    5. The right gadgets

    If you’re planning a pool party, barbeque, or a camping/fishing trip, you’re going to need the appropriate gadgets. Goggles and pool floats to swim, a cooler for food and drinks, bug repellant spray and a camp lantern, or some travel apps would make your life easier. Make sure you prepare ahead of time so you don’t forget an essential part of the plan.

    6. A clean makeover (bonus)

    Shave off that winter beard, clean up that hairdo and look fresh all summer while you’re at the beach or hitting the pool. Be careful of tan lines!

    Summer feels like a much slower time, but I say this every year: Where did summer go? Fortunately, you won’t have to stress about work while you’re away because Dell KACE has it under control, and can be another one of your summer essentials. KACE systems management appliances can provision, manage, secure, and service all of your network connected devices so you can focus on spending time with your family instead of being trapped at work. Relax — it’s time to make summer memories, and let Dell KACE appliances handle your systems management chores.

    Alyssa Luc

    About Alyssa Luc

    Alyssa Luc joined Dell Software in 2015 as a Social Media and Community Advisor for the KACE product team. Her specialties include customer advocacy and advocate marketing.


  • Information Management

    Yellow Triangles Showing Up in Index Fragmentation

    Hello and welcome to my first blog.  If I'm honest, I've been pondering hard on what my first blog should be and I think I found just the topic to get my feet wet. 

    One of the common cases that I often see users of Spotlight on SQL Server Enterprise (or SoSSE) run into is, "My Spotlight is broken.  I see a yellow triangle in the Index Fragmentation part of the Spotlight dashboard."

    What is this yellow triangle that shows up in place of where data should be?  If you click the yellow triangle, it will show the following error:

    Before Spotlight 11.5, the message would be...

        “Collection 'Fragmentation Overview' failed: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.”

    From Spotlight 11.5 and onward, you will see...

        “Monitored Server - SQL Server Collection Execution Failure
        1/11/2016 1:11:00 AM Collection 'Fragmentation Overview' failed :
        Collection 'Fragmentation by Index' failed : The SQL query has been executing for longer than its timeout limit but has not timed out. It may be hung.”

     

    The query to get index fragmentation can potentially be a long-running query.  If the monitored server is busy, the query for index fragmentation can time out.  By default the collection schedule collects at 4:15 AM, which we hope is a time when servers are not too busy; however, that may not be the case.  Since the fragmentation collection only runs once in the morning, and it fails because Spotlight can't complete the query, you end up seeing this yellow triangle all day.  We are working on a way to make this collection execute in less time.  In the meantime, this is how you can tackle the issue.
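
    For context, index fragmentation numbers in SQL Server generally come from the sys.dm_db_index_physical_stats DMV; a query along these lines (an illustration only, not necessarily the exact query Spotlight runs) has to sample index pages and can therefore run long on large, busy databases:

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id = ips.index_id;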

     

    1.  Try to run it now to see if you do get data.  Do so by changing the Index Fragmentation collection schedule.

        a. Go to "Configure | Scheduling"

        b. In the top drop down, change from “Factory Setting” to the SQL Server with the issue

        c. Look for the schedule "Fragmentation Overview" and click on the square icon next to it for the pop-up window

            Note: If you are running pre 11.5 versions, then highlight the fragmentation schedule and make the changes at the bottom.

        d. Uncheck "Factory Settings" and click on "at 4:15 AM every day"

        e. Change the schedule to 1 minute; if data does come back, then we know 4:15 AM may not be an ideal time to run this collection.

     

    2.  Once you have determined that 4:15 AM is not a good time to run this collection, you can either find other times by trial and error or run the collection on an interval. If you choose to run the collection on an interval, make the interval hours long – for example, 4 or 8 hours.

    Looking for more on Spotlight on SQL Server Enterprise?  Check out this video on how to optimize and tune SQL Server performance anywhere on any device or download a free trial.

  • Dell TechCenter

    Replication and migration: Selecting the right solution for Dell SC Series storage

    Written by Chuck Armstrong, Dell Storage Engineering

    With the release of Storage Center OS (SCOS) 7, Dell Storage Manager (DSM) 2016 R1, and PS Series Firmware v9.0, the number of options for replication and migration has grown substantially. With so many available options, you might ask: Which one is right for my environment?

    I’ve got good news! This blog post will help identify the ideal solutions for several scenarios. After reading through these, you should be able to determine which solution fits best for your environment.

    Let’s start with replication.

    If your environment has multiple locations, chances are pretty good that you’re using replication, or at least have plans to do so. How to best utilize replication depends on what you need to replicate, the distance over which the replication will occur, and the Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

    To replicate between SC Series arrays, the options from which to choose are:

    • Synchronous
    • Asynchronous
    • Semi-synchronous (semi-sync)

    With synchronous replication, either high availability or high consistency can be selected as the mode of operation. Beyond that, replication can be configured with multiple sites using mixed (parallel), cascade, or hybrid as the replication topology.

    When to use which mode and topology is explained below.

    Synchronous replication

    Synchronous replication is the only option to achieve an RPO of zero. However, distance (the latency incurred through distance) is a limiting factor in the ability to implement synchronous replication. Synchronous replication increases application latency as a result of how replication takes place: every write an application makes to the primary volume must be replicated to the remote volume and acknowledged back to the primary volume before the write is finally acknowledged to the application. This means that the combination of allowable latency in the application and the latency of replication in the specific environment will determine whether synchronous replication is a viable option for your environment.

    If you’ve identified synchronous replication as your method, it’s time to select which mode: high availability or high consistency.

    High availability mode: If the replication communication from the primary (source) volume to the remote (destination) volume is interrupted, the primary volume, and the applications using it, remain active, resulting in no interruption in user productivity. However, the data on the remote volume would become stale since it cannot receive updates from the primary volume. Following a replication communication interruption, a site failure could potentially result in data loss due to the stale data on the remote volume.

    High consistency mode: This addresses communication interruptions between primary and destination volumes differently and prioritizes data consistency. If there is a communication interruption preventing replication, the primary volume stops performing writes because they cannot be performed on the remote volume. This ensures data will always be consistent between primary and remote volumes, eliminating the data-loss vulnerability. However, if volume replication cannot occur, applications using that volume will stop functioning because they cannot execute writes, resulting in an interruption in user productivity.

    Asynchronous replication

    Asynchronous replication differs from synchronous replication in the way that writes occur on primary and remote volumes. Writes from an application to the primary volume are written, and acknowledgement is immediately sent back to the application. Replication occurs at a later time (when a snapshot is taken), after which acknowledgement of the write is sent from the remote volume to the primary. This method does not increase latency in the application, but it reduces consistency between the primary and remote volumes.

    When the RPO allows for more flexibility, or when the distance over which replication needs to occur is too far to support synchronous replication, asynchronous replication is an option. In fact, asynchronous replication is used more often than synchronous replication, primarily due to a lower cost of infrastructure required to support asynchronous compared to synchronous replication. The RPO can be reduced by improving the connection and changing the snapshot schedules to replicate more often. The better the connection, the more data can be sent, and the more frequently that data can be sent.

    Semi-sync replication

    Semi-sync replication combines the best of synchronous and asynchronous replication. From the synchronous side, every write an application sends to the primary volume is automatically sent to the remote volume for replication, as opposed to holding it for a snapshot to trigger the replication. From the asynchronous side, the acknowledgement of the write back to the application is sent immediately after the write occurs on the primary volume, instead of waiting for acknowledgement from the remote volume before acknowledging the write to the application.

    Semi-sync replication reduces the RPO — nearing zero — without introducing the application latency incurred with synchronous replication. Semi-sync replication is as close as asynchronous replication can get to synchronous replication without actually being synchronous replication.
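
    To make the acknowledgement ordering concrete, here is a minimal toy model in Python (the delay constants are invented placeholders, not Dell code or measured values) of the application-visible write latency in each mode:

        # Toy model of app-visible write latency under each replication mode.
        LOCAL_WRITE_MS = 0.5   # write committed on the primary volume
        REMOTE_WRITE_MS = 0.5  # write committed on the remote volume
        LINK_RTT_MS = 10.0     # round trip on the replication link

        def ack_latency_ms(mode):
            """Milliseconds until the application sees its write acknowledged."""
            if mode == "synchronous":
                # The app waits for the remote write and its acknowledgement.
                return LOCAL_WRITE_MS + LINK_RTT_MS + REMOTE_WRITE_MS
            if mode in ("semi-sync", "asynchronous"):
                # The app is acknowledged right after the local write; semi-sync
                # ships each write immediately while async waits for a snapshot,
                # but neither delays the application's acknowledgement.
                return LOCAL_WRITE_MS
            raise ValueError(mode)

        for mode in ("synchronous", "semi-sync", "asynchronous"):
            print(f"{mode}: ~{ack_latency_ms(mode)} ms until app ack")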

    Mixed (parallel), cascade, and hybrid replication topology

    So your environment has more than two locations and you want another level of data protection? We’ve got you covered.

    Mixed topology: In this topology, a volume can be replicated to two separate volumes at two separate locations. With mixed topology, one replication method can be synchronous and the other asynchronous, or both methods can be asynchronous. The mixed replication topology is useful if your environment has two failover sites. When using synchronous replication to one of the two sites, the synchronous replication rules and requirements still apply.

    Cascade topology: Rather than replicating from a single source to multiple targets (mixed), the cascade replication topology replicates the source volume to a single destination volume which is then replicated to a secondary destination volume at a third location. The replication method from the source to the first destination can be either synchronous or asynchronous, but the replication from the first to the second destination can only be asynchronous. This is useful if your environment has a hot site as the first destination site, and as an added measure of protection, a cold or more distant site as a third location.

    Hybrid topology: Because replication is configured on a per-volume basis, a hybrid replication topology can also be configured. A hybrid topology includes one or more volumes being replicated using the mixed (parallel) topology and one or more volumes being replicated using the cascade topology. This might be used when multiple locations are involved and different applications require different levels of protection. For example, some applications might need two hot sites, while other applications might only need one hot site with a secondary cold-site configuration. In this case, the hot and cold designations relate to portions of the protected environment rather than the site location itself.

    Live Volume

    Live Volume has many similarities to replication, and some differences. One similarity is that Live Volume can be configured to be synchronous or asynchronous. The major difference is that Live Volume enables mobility of your environment between SC Series storage rather than protection from failure. The SC Series arrays taking part in Live Volume can be located in the same data center or in different data centers. Live Volume enables movement of the workload to a different set of servers and storage to allow hardware maintenance, which can be especially challenging for non-virtualized environments.

    Replicated Live Volume

    Although Live Volume is different from replication, a Live Volume can be used as the source volume in a replication configuration. That means the Live Volume environment can also be protected by a remote site. One replication limitation is that an asynchronous Live Volume can be the source for either a synchronous or asynchronous destination volume, whereas a synchronous Live Volume can be the source for only an asynchronous destination volume. For a deeper look into replication and Live Volume options, see the Storage Center Synchronous Replication and Live Volume Solutions Guide.

    Asynchronous replication with PS Series storage — Cross-platform replication

    Bi-directional, asynchronous replication between SC arrays and PS groups is a new option available with our latest software and firmware releases. This new capability enables an environment with both SC and PS storage to better utilize existing storage assets. For example, one or more locations with PS groups could replicate to an SC array at a remote site. Or, existing PS groups can be moved to a remote site as the replication target, protecting an SC array at the primary site. For additional details, see the Cross-Platform Replication solutions guide and the Cross-Platform Replication video series.

    Migration from PS to SC

    Now that both PS Series and SC Series storage platforms can be integrated into a single management environment, the ability to migrate from PS Series to SC Series storage has become more relevant. Migrating from PS Series to SC Series storage is accomplished using Thin Import. This process supports either online or offline migration of volumes to SC Series storage, and allows you to repurpose the PS array to a remote site for replication, if desired. There is much more information on this feature in the Thin Import solution guide and Thin Import video.

    VMware-specific replication

    All this information is great, but what about your VMware environment? Should you use Live Volume or VMware Site Recovery Manager (SRM), which uses the Dell Storage Replication Adapter (SRA)?

    Live Volume provides a different type of protection than SRM. Live Volume enables mobility of the virtual environment between two SC Series arrays, which can be in the same or different data centers. This would be ideal if each location supports shifts of end users. For example, the day shift workers all report to site one and the night shift workers all report to site two. In this case, using Live Volume to move the running environment from site one to site two and back again is a great solution.

    Alternatively, VMware SRM enables recovery from a site failure to a secondary location. This is used to protect a primary site using a hot site. Additionally, when the updated SRA becomes available, a Live Volume environment will be able to be protected with an SRM solution.

    Additional information

    While this post summarizes each replication method, be sure to check out these additional resources:

    Find even more information on storage topics at Dell.com/StorageResources.