Do you ever wonder why Spotlight throws an Access Denied or WMI error while monitoring a Windows server, even though your user account has all the permissions needed to access that server? You might encounter these errors on the Home page or at the time of connection. The frustrating part is that even though you can connect via Remote Desktop and ping the server, you still get those annoying errors! Behind the scenes, Spotlight runs various WMI commands to connect to your server and collect performance metrics for monitoring, and these underlying commands are a common source of the errors. Using a few simple commands, you can identify and address the root cause and be on your way to monitoring in no time!
Before we get into the details, let’s clarify the characteristics of these errors:
To find the root cause and rectify these errors, follow these simple steps:
Our Deployment Guide includes a ‘Troubleshooting WMI’ section providing more details about these errors.
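One useful way to isolate the problem outside of Spotlight is to issue the same kind of remote WMI query yourself from the command line. Below is a minimal Python sketch; the server name, account, and the error-to-cause mapping are illustrative assumptions, not Spotlight's actual internals. It builds a `wmic` query against a remote host and maps common failure messages to likely root causes:

```python
# Illustrative helper: issue the same kind of remote WMI query Spotlight
# relies on, so the raw Windows error message surfaces directly.
# Server and account names below are placeholders.

def build_wmic_command(server, username=None):
    """Build a wmic command that queries Win32_OperatingSystem remotely."""
    cmd = ["wmic", f"/node:{server}"]
    if username:
        cmd.append(f"/user:{username}")
    cmd += ["os", "get", "Caption"]
    return cmd

def diagnose(output):
    """Map common wmic failure text to a likely root cause (assumed mapping)."""
    if "Access is denied" in output:
        return ("DCOM/WMI permissions: grant the account remote-enable rights "
                "in WMI Control and DCOM security on the target server")
    if "The RPC server is unavailable" in output:
        return ("Firewall/DCOM issue: the RPC endpoint mapper (TCP 135) and "
                "dynamic DCOM ports must be reachable")
    return "No known failure signature; inspect the raw output"

# On a Windows machine, run it with subprocess and inspect the result:
#   import subprocess
#   cmd = build_wmic_command("MYSERVER", r"DOMAIN\monitor_user")
#   out = subprocess.run(cmd, capture_output=True, text=True)
#   print(diagnose(out.stdout + out.stderr))
```

If the command fails with "Access is denied" even though the account can log in via Remote Desktop, the DCOM and WMI namespace security settings on the target server are the usual suspects, as covered in the Troubleshooting WMI section of the Deployment Guide.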
Interested in trying Spotlight on SQL Server Enterprise for yourself? Download a free 30-day trial.
This is an optional hotfix and can be installed on the following vWorkspace roles:
This release provides support for the following:
This hotfix is available for download at:
By Nicholas Wakou, Principal Performance Engineer, Dell
Computer benchmarking, the practice of measuring and assessing the relative performance of a system, has been around almost since the advent of computing. Indeed, one might say that the general concept of benchmarking has been around for over two millennia. As Sun Tzu wrote on military strategy:
“The ways of the military are five: measurement, assessment, calculation, comparison, and victory. If you know the enemy and know yourself, you need not fear the result of a hundred battles.”
The SPEC Cloud™ IaaS 2016 Benchmark is the first specification by a major industry-standards performance consortium that defines how the performance of cloud computing can be measured and evaluated. The use of the benchmark suite is targeted broadly at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers. The SPEC Cloud Benchmark addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms, either public or private.
Dell has been a major contributor to the development of the SPEC Cloud IaaS Benchmark and is the first – and so far only – cloud vendor either private or public to successfully execute the benchmark specification tests and to publish its results. This article explains this new cloud benchmark and Dell’s role and results.
How it works and what is measured
The benchmark is designed to stress both provisioning and runtime aspects of a cloud using two multi-instance I/O and CPU intensive workloads: one based on YCSB (Yahoo! Cloud Serving Benchmark) that uses the Cassandra NoSQL database to store and retrieve data in a manner representative of social media applications; and another representing big data analytics based on a K-Means clustering workload using Hadoop. The Cloud under Test (CuT) can be based on either virtual machines (instances), containers, or bare metal.
The architecture of the benchmark comprises two execution phases, Baseline and Elasticity + Scalability. In the baseline phase, peak performance for each workload running on the Cloud under Test (CuT) alone is determined in 5 separate test runs. Data from the baseline phase is used to establish parameters for the Elasticity + Scalability phase. In the Elasticity + Scalability phase, both workloads are run concurrently to determine elasticity and scalability metrics. Each workload runs in multiple instances, referred to as an application instance (AI). The benchmark instantiates multiple application instances during a run. The application instances and the load they generate stress the provisioning as well as the run-time performance of the cloud. The run-time aspects include CPU, memory, disk I/O, and network I/O of these instances running in the cloud. The benchmark runs the workloads until specific quality of service (QoS) conditions are reached. The tester can also limit the maximum number of application instances that are instantiated during a run.
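As a rough illustration (this is not the actual SPEC harness code), the Elasticity + Scalability phase can be thought of as a loop that keeps provisioning application instances until a QoS condition fails or the tester's cap is reached. The `provision_ai` and `qos_ok` callbacks below are assumed placeholders:

```python
def elasticity_phase(provision_ai, qos_ok, max_ais):
    """Sketch of the Elasticity + Scalability run loop.

    provision_ai -- callback that provisions one application instance (AI);
                    its duration stresses the cloud's provisioning path.
    qos_ok       -- callback that checks run-time QoS (CPU, memory,
                    disk I/O, network I/O) across all running AIs.
    max_ais      -- tester-imposed cap on the number of AIs.
    """
    ais = []
    while len(ais) < max_ais:
        ais.append(provision_ai())
        if not qos_ok(ais):   # stop once QoS conditions are violated
            break
    return len(ais)           # number of AIs provisioned before stopping
```

The real benchmark derives its elasticity and scalability metrics from how performance per AI holds up as this loop scales the load; the sketch only shows the two stopping conditions described above.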
The key benchmark metrics are:
Benchmark status and results
The SPEC Cloud IaaS benchmark standard was released on May 3, 2016, and more information can be found in the Standard Performance Evaluation Corporation’s announcement. At the time of the release, Dell submitted the first and only result in the industry. This result is based on the Dell Red Hat OpenStack Cloud Solution Reference Architecture, comprising Dell’s PowerEdge R720 and R720xd server platforms running the Red Hat OpenStack Platform 7 software suite. The details of the result can be found on the SPEC Cloud IaaS 2016 results page.
Dell has been a major contributor to developing the SPEC Cloud IaaS benchmark standard from the beginning, from the drafting of the SPEC Cloud Working Committee’s charter to the release of the benchmark. So it is no surprise that Dell was the first company to publish a result based on the new cloud benchmark standard. Dell will continue to use the SPEC Cloud IaaS benchmark to compare and differentiate its cloud solutions, and will additionally use the workloads for performance characterization and optimization for the benefit of its customers.
At every opportunity, Dell will share how it is using the benchmark workloads to solve real world performance issues in the cloud. On Wednesday, June 29th, 2016, I will be presenting a talk entitled “Measuring performance in the cloud: A scientific approach to an elastic problem” at the Red Hat Summit in San Francisco. This presentation will include the use of SPEC Cloud IaaS Benchmark standard as a tool for evaluating the performance of the cloud.
Computer benchmarking is no longer an academic exercise or a competition among vendors for bragging rights. It has real benefits for customers, and now – with the creation of the SPEC Cloud IaaS 2016 Benchmark – it advances the state of the art of performance engineering for cloud computing.
Toad for Oracle 12.9 is now available. The new release focuses on integration between:
In addition, there are further enhancements in the area of Team Coding.
This release continues the emphasis on these themes:
With these themes in focus, below are some of the many highlighted features introduced in this release of Toad for Oracle.
A tremendous amount of work was invested in redesigning Team Coding to improve overall performance, usability, and stability, and to better support agile database development.
A quick but important note: the 12.9 implementation of Team Coding is not backward compatible with earlier versions of Toad. Version 12.9 can read a Team Coding environment that was created with an earlier version, but earlier versions cannot read a Team Coding environment created in version 12.9.
Below are some of the major enhancements in Team Coding:
A complete set of 12.9 new features is available at Toad for Oracle 12.9 – Release Notes - New Features.
A couple of weeks ago, I offered seven key questions for building a solid evaluation methodology to help you as you embark on any journey to select new technologies, whether you’re looking for information systems management software, endpoint security and endpoint management tools, cloud technologies, or other solutions.
But once you’ve defined your process for solution evaluation, what happens next?
It’s time to think about the broader questions. After all, it’s not all just a feature/function checklist that gets you where you need to be, but the sum of both tangible and less-tangible solution (and vendor) characteristics. Keep in mind your end game, which includes both enhancing your competitive advantage and returning more time for innovation, especially for the IT team.
Here are eight key questions to ask yourself:
Applying These Criteria to Systems Management Solutions
If you’re wondering how to work these questions into your solution evaluation, take a look at our new Endpoint Systems Management Evaluation Guide. This checklist covers not only features and functionality but also the best-fit considerations detailed above. And if you’re in the market for a systems management solution, be sure to put Dell KACE on your list. Dell KACE systems management appliances provide an all-in-one solution that is comprehensive, easy to use, fast to implement, and actively supported. We invite you to see how they rate under even the most rigorous solution evaluation methodology.
About Stephen Hatch
Stephen is a Senior Product Marketing Manager for Dell KACE. He has over eight years of experience with KACE and over 20 years of marketing communications experience.
View all posts by Stephen Hatch
We’re all busy. Busy answering emails, putting out fires, supporting users - and doodling in endless meetings (like the panda I am working on right now). Sure, doodling might seem counterproductive, but I like to think of it as busily alleviating stress.
Being busy and being productive are two separate things.
Let’s face it, IT pros like you are in short supply and even shorter on time. Managing the day-to-day tasks, let alone all the new technologies and buzzwords that go with them, is something that you do after the work day ends – or, as I like to refer to it, when the real work gets done.
You know, the fun stuff like researching how to secure your endpoints remotely, or how to enable your users to run apps that require admin privileges without actually giving them admin privileges (because that’s asking for trouble). And what about figuring out how to automatically manage GPO settings across targeted AD users? All that fun stuff happens after “regular” working hours. The irony is, if you had time to learn all that, you might end up with a bit more free time.
Barren is the busy life
When was the last time you were able to simply get away for a few hours to expand your IT skills, attend a training session or check out the latest technologies that could help you spend less time on IT administration, and more time on innovation? Let me venture a guess – never?
What if I told you that there was a way for you to gain access to everything you need without actually leaving your desk? Taking that a step further, what if you were able to network with our IT evangelists, experts and your peers all from your desktop - so you can still be productive at work and have access to the tools and resources you need to advance your knowledge? Would you take us up on it?
Come on, no excuses
Well, that’s exactly what the Dell Software Virtual Trade Show is all about. You’ll have access to our IT experts, virtually, so you can get the insight, tips and tricks, and technical information you need to proactively manage your desktop environment - without putting a dent in your productivity since you won’t have to leave your desk.
But perhaps the best reason you should consider attending the virtual trade show is that you can set your own schedule and arrive whenever you like. Simply log in to the virtual expo on June 23rd any time between 8:00 a.m. and 3:00 p.m. PDT (or 9:00 a.m. – 4:00 p.m. BST if you’re in the UK or EMEA) to connect, explore, and learn how Dell Software can help you support your organization – especially when you can do it right from your desktop.
Reserve your spot today
By Munira Hussain, Deepthi Cherlopalle
This blog introduces the Omni-Path Fabric from Intel® as a cluster network fabric used for inter-node application, management, and storage communication in High Performance Computing (HPC). It is part of the new Intel® Scalable System Framework and builds on IP from the QLogic TrueScale and Cray Aries interconnects. The goal of Omni-Path is to eventually meet the performance and scalability demands of exascale data centers.
Dell provides a complete, validated, and supported solution offering, which includes the Networking H-series Fabric switches and Host Fabric Interface (HFI) adapters. The Omni-Path HFI is a PCIe Gen3 x16 adapter capable of 100 Gbps unidirectional per port, using four lanes at 25 Gbps per lane.
HPC Program Overview with Omni-Path:
The current solution program is based on Red Hat Enterprise Linux 7.2 (kernel version 3.10.0-327.el7.x86_64). The Intel Fabric Suite (IFS) drivers are integrated into the current software solution stack, Bright Cluster Manager 7.2, which helps deploy, provision, install, and configure an Omni-Path cluster seamlessly.
The following Dell servers support Intel® Omni-Path Host Fabric Interface (HFI) cards:
PowerEdge R430, PowerEdge R630, PowerEdge R730, PowerEdge R730XD, PowerEdge R930, PowerEdge C4130, PowerEdge C6320
The Fabric is managed and monitored using the Fabric Manager (FM) GUI available from Intel®. The FM GUI provides in-depth analysis and a graphical overview of fabric health, including a detailed breakdown of port status, fabric mapping, and investigative reports on errors.
Figure 1: Fabric Manager GUI
The IFS tools include various debugging and management utilities such as opareports, opainfo, opaconfig, opacaptureall, opafabricinfoall, opapingall, opafastfabric, etc. These help to capture a snapshot of the Fabric and to troubleshoot it. The host-based subnet manager service, known as opafm, is also available with IFS and can scale to thousands of nodes.
The Fabric relies on the PSM2 libraries for optimal performance. The IFS package provides precompiled versions of the open source Open MPI and MVAPICH2 MPI libraries, along with micro-benchmarks such as OSU and IMB used to measure the bandwidth and latency of the cluster.
Basic Performance Benchmarking Results:
The performance numbers below were taken on a Dell PowerEdge R630 server. The configuration consisted of dual-socket Intel® Xeon® E5-2697 v4 CPUs @ 2.3 GHz (18 cores each) with 8 × 16 GB DIMMs @ 2400 MHz. The BIOS version was 2.0.2, and the system profile was set to Performance.
OSU micro-benchmarks were used to determine latency; these latency tests were done in ping-pong fashion. HPC applications need low latency and high throughput. As shown in Figure 2, the back-to-back latency is 0.77 µs and the switch latency is 0.9 µs, which is on par with industry standards.
Figure 2: OSU Latency - E5-2697 v4
Figure 3 below shows the OSU uni-directional and bi-directional bandwidth results with the OpenMPI-1.10-hfi version. At a 4 MB message size, uni-directional bandwidth is around 12.3 GB/s and bi-directional bandwidth is around 24.3 GB/s, which is on par with the theoretical peak values.
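As a quick sanity check on those bandwidth figures: a 100 Gbps link corresponds to a theoretical peak of 12.5 GB/s unidirectional (and 25 GB/s bidirectional), so the measured results sit within a few percent of peak:

```python
# Sanity check of the Figure 3 numbers against the 100 Gbps link rate.
link_gbps = 100
peak_uni_gbs = link_gbps / 8        # 12.5 GB/s theoretical unidirectional peak
peak_bi_gbs = 2 * peak_uni_gbs      # 25.0 GB/s bidirectional

measured_uni_gbs = 12.3             # OSU result at 4 MB message size
measured_bi_gbs = 24.3

print(f"uni efficiency: {measured_uni_gbs / peak_uni_gbs:.1%}")  # 98.4%
print(f"bi  efficiency: {measured_bi_gbs / peak_bi_gbs:.1%}")    # 97.2%
```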
Figure 3: OSU Bandwidth – E5-2697 v4
The Omni-Path Fabric adds real value to an HPC solution. It is a technology that integrates well as the high-speed fabric needed for designing flexible reference architectures that meet the growing demand for computation. Users can benefit from open source fabric tools like the FM GUI, Chassis Viewer, and FastFabric, which are packaged with IFS. The solution is automated and validated with Bright Cluster Manager 7.2 on Dell servers.
More details on how Omni-Path performs in other domains are available here. The referenced document describes the key features of Intel® Omni-Path Fabric technology and provides performance data from various commercial and open source applications.
Today is the first official day of summer and people are starting to get geared up for many warm weather activities. While your kids are on summer break, you’ll probably spend more time with family and friends. As the days get hotter and the nights get warmer, now is the perfect time to soak up some Vitamin D, spend some time away from work, and have a nice cold beverage.
However you decide to spend your time, here are five summer essentials to get you through the long, hot days, especially while you’re out on vacation:
1. Sunscreen
No one likes a bad sunburn. Sunscreen is extremely important to protect your skin from UV rays, which can have long-term effects if not treated properly. Wearing sunscreen will ensure your skin is protected from sun damage and you won’t need aloe vera at your bedside.
2. A nice pair of sunglasses
Find a nice pair of sunglasses to shade your eyes from the blinding sun. This will ensure you can see your surroundings clearly when you go hiking or fire up the grill. Did you know your eyes can get sunburned too?
3. A comfortable wardrobe
Put away the sweaters and pull out the polo shirts. Shorts, swim suits, sandals/sneakers, and a hat are all a must if you are planning on some outdoor activities. It’s never too late to inventory what items you have, need to replace, or simply get rid of.
4. A beach bag or backpack
Whether you’re going to the beach or going to a theme park with the kids, a bag or backpack will take you far. Fill it up with everything you need so you know what’s on hand, and you’ll be set for the day.
5. The right gadgets
If you’re planning a pool party, barbecue, or a camping/fishing trip, you’re going to need the appropriate gadgets. Goggles and pool floats for swimming, a cooler for food and drinks, bug repellent spray and a camp lantern, or some travel apps can all make your life easier. Make sure you prepare ahead of time so you don’t forget an essential part of the plan.
6. A clean makeover (bonus)
Shave off that winter beard, clean up that hairdo and look fresh all summer while you’re at the beach or hitting the pool. Be careful of tan lines!
Summer feels like a much slower time, but I say this every year: Where did summer go? Fortunately, you won’t have to stress about work while you’re away because Dell KACE has it under control, and can be another one of your summer essentials. KACE systems management appliances can provision, manage, secure, and service all of your network connected devices so you can focus on spending time with your family instead of being trapped at work. Relax — it’s time to make summer memories, and let Dell KACE appliances handle your systems management chores.
About Alyssa Luc
Alyssa Luc joined Dell Software in 2015 as a Social Media and Community Advisor for the KACE product team. Her specialties include customer advocacy and advocate marketing.
View all posts by Alyssa Luc
Hello and welcome to my first blog. If I'm honest, I've been pondering hard on what my first blog should be and I think I found just the topic to get my feet wet.
One of the common cases that I often see users of Spotlight on SQL Server Enterprise (or SoSSE) run into is, "My Spotlight is broken. I see a yellow triangle in the Index Fragmentation part of the Spotlight dashboard."
What is this yellow triangle that shows up in place of where data should be? If you click into the yellow triangle, it will show the following error:
Before Spotlight 11.5, the message would be...
“Collection 'Fragmentation Overview' failed: Timeout expired.
The timeout period elapsed prior to completion of the operation
or the server is not responding.”
From Spotlight 11.5 and onward, you will see...
“Monitored Server - SQL Server Collection Execution Failure
1/11/2016 1:11:00 AM Collection 'Fragmentation Overview' failed :
Collection 'Fragmentation by Index' failed : The SQL query has
been executing for longer than its timeout limit but has not
timed out. It may be hung.”
The query to get index fragmentation can potentially be a long-running query. If the monitored server is busy, the query can time out. By default, the collection schedule runs at 4:15 AM, which we hope is a time when servers are not too busy; however, that may not always be the case. Since the fragmentation collection only runs once in the morning, if it fails because Spotlight can't complete the query, you end up seeing the yellow triangle all day. We are working on a way to make this collection execute in less time. In the meantime, here is how you can tackle the issue.
1. Try running the collection now to see if you get data. Do so by changing the Index Fragmentation collection schedule.
a. Go to "Configure | Scheduling"
b. In the top drop-down, change from “Factory Setting” to the SQL Server with the issue
c. Look for the schedule "Fragmentation Overview" and click on the square icon next to it for the pop-up window
Note: If you are running a pre-11.5 version, highlight the fragmentation schedule and make the changes at the bottom.
d. Uncheck "Factory Settings" and click on "at 4:15 AM every day"
e. Change to 1 minute; if data does come back, then we know that 4:15 AM may not be an ideal time to run this collection
2. Once you have determined that 4:15 AM is not a good time to run this collection, you can either use trial and error to find other times to run it, or run it on an interval. If choosing to run the collection on an interval, make the interval several hours long, for example 4 or 8 hours.
Looking for more on Spotlight on SQL Server Enterprise? Check out this video on how to optimize and tune SQL Server performance anywhere on any device or download a free trial.
Written by Chuck Armstrong, Dell Storage Engineering
With the release of Storage Center OS (SCOS) 7, Dell Storage Manager (DSM) 2016 R1, and PS Series Firmware v9.0, the number of options for replication and migration has grown substantially. With so many available options, you might ask: which one is right for my environment?
I’ve got good news! This blog post will help identify the ideal solutions for several scenarios. After reading through these, you should be able to determine which solution fits best for your environment.
Let’s start with replication.
If your environment has multiple locations, chances are pretty good that you’re using replication, or at least have plans to do so. How to best utilize replication depends on what you need to replicate, the distance over which the replication will occur, and the Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
To replicate between SC Series arrays, the options from which to choose are:
With synchronous replication, either high availability or high consistency can be selected as the mode of operation. Beyond that, replication can be configured with multiple sites using mixed (parallel), cascade, or hybrid as the replication topology.
When to use which mode and topology is explained below.
Synchronous replication is the only option that can achieve an RPO of zero. However, distance (the latency incurred over distance) is a limiting factor in the ability to implement synchronous replication. Synchronous replication increases application latency because of how replication takes place: every write an application makes to the primary volume must be replicated to the remote volume and acknowledged back to the primary before the write is finally acknowledged to the application. What all of this means is that the combination of the latency your application can tolerate and the replication latency of your specific environment will determine whether synchronous replication is a viable option for your environment.
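To make the latency math concrete, here is a tiny illustrative model; the numbers are assumptions for illustration, not measurements. Each synchronous write costs the local write, plus a round trip to the remote array, plus the remote write:

```python
def sync_write_latency_ms(local_write_ms, remote_write_ms, rtt_ms):
    """Effective application write latency under synchronous replication:
    local write + round trip to the remote array + remote write."""
    return local_write_ms + rtt_ms + remote_write_ms

# Assumed: 1 ms writes on each array, and roughly 1 ms of round-trip
# time per 100 km of fiber (a common rule of thumb).
for distance_km in (10, 100, 500):
    rtt_ms = distance_km / 100.0
    print(f"{distance_km:>3} km -> "
          f"{sync_write_latency_ms(1.0, 1.0, rtt_ms):.1f} ms per write")
```

If the application can tolerate only a couple of milliseconds of write latency, this model makes clear why long distances rule out synchronous replication.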
If you’ve identified synchronous replication as your method, it’s time to select which mode: high availability or high consistency.
High availability mode: If the replication communication from the primary (source) volume to the remote (destination) volume is interrupted, the primary volume, and the applications using it, remain active, resulting in no interruption in user productivity. However, the data on the remote volume would become stale since it cannot receive updates from the primary volume. Following a replication communication interruption, a site failure could potentially result in data loss due to the stale data on the remote volume.
High consistency mode: This addresses communication interruptions between primary and destination volumes differently and prioritizes data consistency. If there is a communication interruption preventing replication, the primary volume stops performing writes because they cannot be performed on the remote volume. This ensures data will always be consistent between primary and remote volumes, eliminating the data-loss vulnerability. However, if volume replication cannot occur, applications using that volume will stop functioning because they cannot execute writes, resulting in an interruption in user productivity.
Asynchronous replication differs from synchronous replication in the way that writes occur on primary and remote volumes. Writes from an application to the primary volume are written and acknowledgement is immediately sent back to the application. Replication occurs at a later time (when a snapshot is taken), after which, acknowledgement of the write is sent from the remote volume to the primary. This method does not increase latency in the application, but reduces consistency between the primary and remote volumes.
When the RPO allows for more flexibility, or when the distance over which replication needs to occur is too far to support synchronous replication, asynchronous replication is an option. In fact, asynchronous replication is used more often than synchronous replication, primarily due to a lower cost of infrastructure required to support asynchronous compared to synchronous replication. The RPO can be reduced by improving the connection and changing the snapshot schedules to replicate more often. The better the connection, the more data can be sent, and the more frequently that data can be sent.
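A back-of-the-envelope sketch of that trade-off (all numbers here are assumptions for illustration): the worst-case RPO with asynchronous replication is roughly the snapshot interval plus the time needed to transfer the data that changed within it.

```python
def worst_case_rpo_min(snapshot_interval_min, changed_gb, link_gbps):
    """Rough worst-case RPO for asynchronous replication:
    snapshot interval plus time to ship the changed data."""
    # (changed_gb * 8) / link_gbps gives seconds; divide by 60 for minutes.
    transfer_min = (changed_gb * 8) / link_gbps / 60
    return snapshot_interval_min + transfer_min

# Assumed: 15-minute snapshot schedule, 10 GB changed per interval, 1 Gbps WAN.
print(round(worst_case_rpo_min(15, 10, 1), 1))  # 16.3 minutes
```

Faster links and more frequent snapshots shrink both terms, which is exactly why improving the connection and tightening the snapshot schedule reduce the RPO.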
Semi-sync replication is the best of both worlds regarding synchronous and asynchronous replication. From the synchronous side, every write an application sends to the primary volume is automatically sent to the remote volume for replication, as opposed to holding it for a snapshot to trigger the replication. From the asynchronous side, the acknowledgement of the write back to the application is sent immediately after the write occurs on the primary volume, instead of waiting for acknowledgement from the remote volume before acknowledging the write to the application.
Semi-sync replication reduces the RPO — nearing zero — without introducing the application latency incurred with synchronous replication. Semi-sync replication is as close as asynchronous replication can get to synchronous replication without actually being synchronous replication.
So your environment has more than two locations and you want another level of data protection? We’ve got you covered.
Mixed topology: In this topology, a volume can be replicated to two separate volumes at two separate locations. With mixed topology, one replication method can be synchronous and the other asynchronous, or both methods can be asynchronous. The mixed replication topology is useful if your environment has two failover sites. When using synchronous replication to one of the two sites, the synchronous replication rules and requirements still apply.
Cascade topology: Rather than replicating from a single source to multiple targets (mixed), the cascade replication topology replicates the source volume to a single destination volume which is then replicated to a secondary destination volume at a third location. The replication method from the source to the first destination can be either synchronous or asynchronous, but the replication from the first to the second destination can only be asynchronous. This is useful if your environment has a hot site as the first destination site, and as an added measure of protection, a cold or more distant site as a third location.
Hybrid topology: Because replication is configured on a per-volume basis, a hybrid replication topology can also be configured. A hybrid topology includes one or more volumes being replicated using the mixed (parallel) topology and one or more volumes being replicated using the cascade topology. This might be used when multiple locations are involved and different applications require different levels of protection. For example, some applications might need two hot sites, while other applications might only need one hot site with a secondary cold site configuration. In this case, the hot and cold designations are related to portions of the protected environment rather than the site location itself.
Live Volume has many similarities to replication, and some key differences. One similarity is that Live Volume can be configured to be synchronous or asynchronous. The major difference is that Live Volume enables mobility of your environment between SC Series storage, rather than protection from failure. The SC Series arrays taking part in Live Volume can be located in the same data center or in different data centers. Live Volume enables movement of the workload to a different set of servers and storage for hardware maintenance, which can be especially challenging in non-virtualized environments.
Although Live Volume differs from replication, a Live Volume can be used as the source volume in a replication configuration. That means the Live Volume environment can also be protected by a remote site. One replication limitation is that an asynchronous Live Volume can be the source for either a synchronous or asynchronous destination volume, whereas a synchronous Live Volume can be the source for only an asynchronous destination volume. For a deeper look into replication and Live Volume options, see the Storage Center Synchronous Replication and Live Volume Solutions Guide.
Bi-directional, asynchronous replication between SC arrays and PS groups is a new option available with our latest software and firmware releases. This new capability enables an environment with both SC and PS storage to better utilize existing storage assets. For example, one or more locations with PS groups could replicate to an SC array at a remote site. Or, existing PS groups can be moved to a remote site as the replication target, protecting an SC array at the primary site. For additional details, see the Cross-Platform Replication solutions guide and the Cross-Platform Replication video series.
Now that both PS Series and SC Series storage platforms can be integrated into a single management environment, the ability to migrate from PS Series to SC Series storage has become more relevant. Migrating from PS Series to SC Series storage is accomplished using Thin Import. This process supports either online or offline migration of volumes to SC Series storage, and allows you to repurpose the PS array to a remote site for replication, if desired. There is much more information on this feature in the Thin Import solution guide and Thin Import video.
All this information is great, but what about your VMware environment? Should you use Live Volume or VMware Site Recovery Manager (SRM), which uses the Dell Storage Replication Adapter (SRA)?
Live Volume provides a different type of protection than SRM. Live Volume enables mobility of the virtual environment between two SC Series arrays, which can be in the same or different data centers. This would be ideal if each location supports shifts of end users. For example, the day shift workers all report to site one and the night shift workers all report to site two. In this case, using Live Volume to move the running environment from site one to site two and back again is a great solution.
Alternatively, VMware SRM enables recovery from a site failure to a secondary location. This is used to protect a primary site using a hot site. Additionally, when the updated SRA becomes available, a Live Volume environment will be able to be protected with an SRM solution.
While this post summarizes each replication method, be sure to check out these additional resources:
Find even more information on storage topics at Dell.com/StorageResources.