Dell Community

Latest Blog Posts
  • Stat

    How do I maintain my customizations when upgrading to R12.2?

    A quick blog post to help you save huge amounts of time, which translates into huge amounts of money and much smaller headaches!

    According to the Customized Environments section (1-17) of the Oracle® E-Business Suite Upgrade Guide Release 12.0 and 12.1 to 12.2 Part No. E48839-13 documentation:

    Caution: Customizing any concurrent program definitions, menus, value sets, or other seeded data provided by Oracle E-Business Suite is not supported. The upgrade process overwrites these customizations.

    So, the upgrade process to 12.2.x overwrites any of these customizations, and Oracle advises you to back them up!  The catch is, they don't tell you how to back up "just" this data.  Beyond a full database backup, they assume you would most likely do some sort of export of your customized data by way of a script that, if written correctly and accurately, will back up your customizations.  One operative word in that statement is "correctly"; the other is "accurately".  The one that's missing is "time".  Scripting this export will take some serious time due to the specificity of what you're backing up, and time is just not a luxury most of us have.
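    To make that time cost concrete, here is a minimal sketch of what exporting just one customized object looks like with Oracle's FNDLOAD utility (the application short name, program name and file names below are hypothetical placeholders, not values from this post):

      # Hypothetical example: download one customized concurrent program
      # definition to a flat file, using the seeded configuration file
      # for concurrent programs (afcpprog.lct).
      FNDLOAD apps/<apps_password> 0 Y DOWNLOAD \
        $FND_TOP/patch/115/import/afcpprog.lct xx_custom_prog.ldt \
        PROGRAM APPLICATION_SHORT_NAME="XXCUST" CONCURRENT_PROGRAM_NAME="XX_CUSTOM_PROG"

    Now multiply that by every customized concurrent program, menu and value set in your environment, each with its own configuration file and entity parameters, plus the matching UPLOAD commands to restore everything after the upgrade.  Enter Stat.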

    With Stat, you can very quickly and easily back up your customizations in one or several CSRs without having to script anything, using just a few simple mouse clicks.  That's it!  Then, once you've upgraded, you simply drag and drop your archived customizations to your newly upgraded environment and voilà!  You've restored your customizations!  No worrying that you transposed characters in your script or missed something, because your backup is automated, as is your recovery!

    Now that we've made your life easier, go ahead and enjoy your time off!

    Happy Upgrading!

  • Windows Management & Migration Blog

    The Trouble with SharePoint Today

    I have a saying: Nothing free is easy, and nothing easy is free. That certainly holds true where SharePoint is concerned.

    The "trouble" with SharePoint has always been its blank-canvas approach to what it is. For example, admins come to IT and ask for a document repository. SharePoint can do that. Legal comes to IT and asks for a search engine for legal files. SharePoint can do that, too.

    You get the picture: SharePoint can do so much for so many businesses, but getting the most out of it requires strong in-house knowledge and skills.

    SharePoint Troubles

    SharePoint can do so much that it's set up and configured for many different departments; however, 20-40% of these departments never actually end up touching the platform. That's often because the initial design wasn't good enough to attract users, so they stayed with their legacy approaches: file systems, email folders, etc.

    Some business units go outside the IT department to build their own SharePoint instance without IT guidance. That means the day the hard drive fails, it becomes IT's problem for not backing up a service IT never even knew existed!

    Now, today's biggest push is to the cloud. IT managers are being asked for a cloud-first strategy, which means IT again has additional work on its plate to get the company to the cloud as rapidly as possible — all without additional headcount or budget to support the push.

    This is where SharePoint comes in again. You need to migrate your on-premises SharePoint instances to Office 365, but where do you start?

    With Dell Software, you can ensure a successful and affordable SharePoint migration to Office 365 or SharePoint 2016.

    Our SharePoint consolidation and migration methodology, which includes assessing your current environment, migrating the needed content, ensuring coexistence and managing your new environment after the move, helps you:

    • Reduce infrastructure costs by minimizing and removing legacy hardware
    • Reduce storage utilization by removing and archiving unused SharePoint data
    • Reduce the licensing costs of maintaining SQL Server, SharePoint server, backup agents, etc.
    • Centralize SharePoint infrastructure by removing the legacy organic growth that occurred when SharePoint was initially implemented
    • Minimize user effort and end user outages during the migration

    This methodology is well tested and proven across tens of thousands of successful migrations with Dell Software.

    Don't just take my word for it, though: Dell Software is the number one migration vendor worldwide. See for yourself how we can help ensure your organization easily and successfully gets to Office 365 and SharePoint 2016.

    Michael Brooke

    About Michael Brooke

    Michael Brooke is an experienced technology and IT consultant for Dell Software. Some of the solutions Michael works with include technology refreshes (desktops and servers), migrations between platforms (messaging and OS), security projects (privilege management and IAM) and IT management solutions.

    View all posts by Michael Brooke

  • Windows Management & Migration Blog

    Shark Week: Are You Tracking the Behavior of the Sharks in Your Network? #Securonix

    Dell Teams with Securonix to Provide Advanced Security Analytics for Active Directory and Enterprise Applications


    There are a lot of sharks swimming around your network. Most are friendly, but some are not. Can you tell the difference?

    Fortunately, you don't need to. Thanks to a new partnership between Dell Software and Securonix, we are now offering user behavior analytics (UBA) to automatically detect suspicious shark behaviors.

    The partnership combines the unique insights delivered by Change Auditor with actionable security intelligence. The Active Directory-Securonix integration means organizations can rest assured that the keys to their critical data are protected by the most advanced security analytics solution on the market.

    Securonix is the pioneer of user and entity behavior analytics (UEBA) for cyber security. The company’s products combine the latest advances in machine learning and artificial intelligence with advanced anomaly detection techniques to accurately predict, prevent, detect and respond to threats in real time.

    The problem with sharks is that they're unpredictable. Until now.

  • Information Management

    How Do I Make Those Pesky WMI/Access Denied Errors In Spotlight Go Away?

    Do you ever wonder why Spotlight throws an Access Denied or WMI error while monitoring a Windows server, even though your user account has all the permissions needed to access that server? You might encounter these errors on the Home page or even at the time of connection. The frustrating part is that even though you can connect via Remote Desktop and ping the server, you still get those annoying errors!  Well, Spotlight actually runs various WMI commands to connect to your server and collect performance metrics for monitoring, and these underlying commands are a common cause of these errors. Using some simple commands, you can identify and address the root cause and be on your way to monitoring in no time!

    Before we get into the details, let’s clarify the characteristics of these errors:

    • Errors are consistent, not random.
    • Errors received on the Home page of Spotlight contain the acronym 'WMI' followed by 'Access Denied' or 'Invalid Class', along with the WMI class name in the error message:

      • "Collection 'Open Sessions' failed: WMI query "Win32_ServerSession" failed: Access denied.[0x80041003]"
      • "WMI query Win32_PerfRawData_PerfOS_Memory failed: Invalid class. [0x80041010] [Error Code: -2147217392]."

    • Errors received when attempting to connect to the server contain 'Access Denied' or 'RPC' in their messages:

      • "Windows host is in an unplanned outage: Access is denied (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))"
      • "Monitored Server - Windows Connection Failure: Cannot connect to windows host 'NNN.N': Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) for Windows 2008 Servers"
      • "Windows (WMI) connection error: Error 800706BA: The RPC server is unavailable"

    To find the root cause and rectify these errors, follow these simple steps:

    1. First, confirm the login account used in Spotlight for the connection:
      1. On the Spotlight Home page, in the connection tree on the left side of the page, right-click the Windows connection that's failing and select Properties.
      2. Under the Details tab, take note of the login account used. This account has two possible sources:
        1. A domain account was manually entered.
        2. The "Use Diagnostic Server Credentials" option was used. In this case, log in to the server where the Spotlight Diagnostic Server is installed and, using the Services.msc console, confirm the account that owns the 'Spotlight Diagnostic Server' service.
    2. Next, test WMI using the same credentials (a complete worked example follows these steps):
      1. Log into the server where the Spotlight Diagnostic Server is installed.
      2. Open a CMD window and modify the command below to enter your information:  wmic /node:<host name> /user:<domain>\<user name> path Win32_PerfRawData_PerfOS_Memory
      3. Use the host name and user name retrieved in step 1. If the error you encountered named a specific WMI class, use that class name in the command; otherwise, use the sample 'Win32_PerfRawData_PerfOS_Memory' class shown above.
      4. Running this command should prompt you for a password and then return data from the host server.
      5. Most likely you'll receive the same error Spotlight reports. Windows errors such as this one can be resolved by the system administrators of that server. Once you can run this WMI command without errors, the Spotlight error should clear as well.
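    For example, if step 1 showed the connection using a hypothetical account ACME\svc_spotlight against a hypothetical host SERVER01 (substitute the values you noted), the test from step 2 would look like this:

      REM Hypothetical worked example of the WMI test above
      wmic /node:SERVER01 /user:ACME\svc_spotlight path Win32_PerfRawData_PerfOS_Memory

      REM If your error named a different WMI class, query that class instead:
      wmic /node:SERVER01 /user:ACME\svc_spotlight path Win32_ServerSession

    If the command prompts for a password and then prints raw performance counter data, WMI connectivity is healthy.  If it returns 'Access denied' or 'Invalid class', you have reproduced the Spotlight error outside of Spotlight and can hand your system administrators a concrete test case.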

    Our Deployment Guide includes a ‘Troubleshooting WMI’ section providing more details about these errors.

    Interested in trying Spotlight on SQL Server Enterprise for yourself?  Download a free 30-day trial.  

  • Hotfixes

    Optional Hotfix 593313 for vWorkspace 8.6 MR1 Management Console Released

    This is an optional hotfix that can be installed on the following vWorkspace roles:

    • vWorkspace Management Console

    This release provides support for the following:

    All of the following fixes apply to the Management Console:

    • Hyper-V templates with more than one NIC can be imported to the Management Console with null values (Feature ID 625458)
    • Error when adding a client record: COMMIT TRANSACTION has no BEGIN (Feature ID 599838)
    • When changing provisioning settings in the vWorkspace Management Console, DMGeoLocDCsAndHosts values can be removed and affect provisioning (Feature ID 588140)
    • After upgrading to vWorkspace 8.6, user sessions cannot be controlled (Feature ID 453743)


    This hotfix is available for download at:

     https://support.software.dell.com/vworkspace/kb/205465

  • KACE Blog

    Hardware is Now a Powerful Weapon in the Battle Against Malware

    It's no secret that cyberattacks are on the rise, putting enterprises of all sizes at risk. Organizations are working hard to defend themselves; according to Gartner, Inc., worldwide spending on information security will reach $75 billion for 2015, an increase of 4.7 percent over 2014, and other cybersecurity analysts predict that the global cybersecurity market will grow to a staggering $170 billion by 2020.

    However, what that money should be spent on is changing, and it’s critical to keep up with the exciting new developments.  In particular, endpoint security has traditionally been handled by software, such as anti-virus tools, multi-factor authentication, and Group Policy enforcing password standards and password changes. That software is extremely helpful — but increasingly insufficient. Ever more sophisticated and pernicious malware is infecting endpoints despite these software barriers, leading to security breaches, data loss, compliance violations, bad press and a variety of other consequences organizations are eager to avoid.

    Hardware Can Help

    Fortunately, now you can have a powerful new tool in your arsenal — hardware.  That's right:  Hardware vendors are making improvements that enable the hardware itself to enhance IT security. Intel's new Skylake processors are being promoted in the press for their better CPU and GPU performance and longer battery life, but if you're an IT pro responsible for protecting your network, their security features might be even more exciting. Skylake includes enhancements that make biometric and other strong authentication methods faster and more secure, while also protecting user privacy. For example, biometric measurements are kept in tamper-resistant hardware storage instead of the hard drive, and new image-processing capabilities speed up facial recognition.

    Enterprise PCs and laptops based on Skylake are already available. But before you rush to fill out a requisition, you need to know that there are some challenges involved in migrating to the new technology. In particular, the corporate images you’ve spent years refining won’t work on the new machines, so you’ll have to build and test new ones, and then provision your new laptops and PCs. Done manually, that would be a lot of work and delay. But the KACE K2000 Systems Deployment Appliance automates image creation, deployment and maintenance, so you can easily ensure that your new computers are ready to be your next line of defense against attackers.

    To learn more about the benefits of Skylake and how you can streamline its deployment using the K2000, view the on-demand webcast, "Top 8 Security Features in Skylake PCs."

  • SharePlex Blog

    SharePlex – How Many Birds Can You Kill with One Stone?

    For most companies, the days of having just one production database are long gone. Today's database administrators (DBAs) typically have to manage a multitude of production databases, meet tighter SLAs and, in many cases, deal with databases from several different vendors.

    Customer conversations suggest that the most common reason for this is business intelligence. Companies store all kinds of data, sometimes without being entirely clear about what value or business advantage keeping that data provides: what you have, you have, and what it's good for can always be worked out later. The flip side, however, is that DBAs have to cope with a growing volume of data and a lower tolerance for outages, without the budget growing accordingly.

    This situation prompts DBAs and IT teams to try out new technologies to cut costs, whether cloud-based or simply different database platforms. That, however, leads to another problem. It is rare for a company to have multiple databases that don't communicate with each other: data has to move back and forth between them. That in turn leads to a more complex IT infrastructure with a mix of DR and replication tools, some developed in-house, others a combination of native and/or third-party offerings, some working in real time, others in batch mode. You get the idea!

    So let me introduce you to SharePlex, the change data capture tool. It is a single tool that can do everything I have just described, because it captures data changes between databases in real time or in batches.

    Take, for example, a company in the scenario just described. It has several Oracle production databases serving different business purposes in its data center, with a home-grown, custom-built solution that replicates data changes for a subset of tables between the databases every few seconds. All databases use RAC for high availability and Data Guard for disaster recovery. A second home-grown solution replicates data from the Oracle databases to an Azure-based SQL Server data warehouse in hourly batches.

    The beauty of SharePlex is that it can be used for a variety of purposes, including all the scenarios described above: high availability, disaster recovery, replication of all tables or a subset of them, and replication from Oracle to SQL Server. Better still, SharePlex can serve several of these purposes at the same time. You could use SharePlex to set up a DR site, and since that site is always available and up to date, it can also satisfy your high-availability requirement. And why not use that DR/HA database for reporting as well? When the time eventually comes to upgrade or migrate your databases, SharePlex lets you do so with minimal downtime and low risk.

    I think you'll agree that a tool covering all of these use cases not only simplifies and improves your IT environment, but also saves you time and money, while keeping you flexible with regard to database technologies and platforms. About the only thing left for it to do is make the coffee...


  • Dell Cloud Blog

    SPEC Cloud IaaS Benchmarking: Dell Leads the Way

    By Nicholas Wakou, Principal Performance Engineer, Dell


    Computer benchmarking, the practice of measuring and assessing the relative performance of a system, has been around almost since the advent of computing.  Indeed, one might say that the general concept of benchmarking has been around for over two millennia.  As Sun Tzu wrote on military strategy:

    "The ways of the military are five: measurement, assessment, calculation, comparison, and victory.  If you know the enemy and know yourself, you need not fear the result of a hundred battles."

    The SPEC Cloud™ IaaS 2016 Benchmark is the first specification by a major industry-standards performance consortium that defines how the performance of cloud computing can be measured and evaluated. The use of the benchmark suite is targeted broadly at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers.  The SPEC Cloud Benchmark addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms, either public or private.

    Dell has been a major contributor to the development of the SPEC Cloud IaaS Benchmark and is the first – and so far only – cloud vendor, private or public, to successfully execute the benchmark specification tests and publish its results.  This article explains this new cloud benchmark and Dell's role and results.

    How it works and what is measured

    The benchmark is designed to stress both provisioning and runtime aspects of a cloud using two multi-instance I/O and CPU intensive workloads: one based on YCSB (Yahoo! Cloud Serving Benchmark) that uses the Cassandra NoSQL database to store and retrieve data in a manner representative of social media applications; and another representing big data analytics based on a K-Means clustering workload using Hadoop.  The Cloud under Test (CuT) can be based on either virtual machines (instances), containers, or bare metal.

    The architecture of the benchmark comprises two execution phases, Baseline and Elasticity + Scalability. In the baseline phase, peak performance for each workload running on the Cloud under Test (CuT) alone is determined in 5 separate test runs.  Data from the baseline phase is used to establish parameters for the Elasticity + Scalability phase.  In the Elasticity + Scalability phase, both workloads are run concurrently to determine elasticity and scalability metrics.  Each workload runs in multiple instances, referred to as an application instance (AI).  The benchmark instantiates multiple application instances during a run.  The application instances and the load they generate stress the provisioning as well as the run-time performance of the cloud.  The run-time aspects include CPU, memory, disk I/O, and network I/O of these instances running in the cloud.  The benchmark runs the workloads until specific quality of service (QoS) conditions are reached.  The tester can also limit the maximum number of application instances that are instantiated during a run.

    The key benchmark metrics are:

    • Scalability measures the total amount of work performed by application instances running in a cloud.  The aggregate work performed by one or more application instances should scale linearly in an ideal cloud.  Scalability is reported for the number of compliant application instances (AIs) completed and is an aggregate of workload metrics for those AIs, normalized against a set of reference metrics.
       
    • Elasticity measures whether the work performed by application instances scales linearly in a cloud when compared to the performance of application instances during the baseline phase.  Elasticity is expressed as a percentage.
       
    • Mean Instance Provisioning Time measures the time interval between the instance provisioning request and connectivity to port 22 on the instance.  This metric is an average across all instances in valid application instances.
       

    Benchmark status and results

    The SPEC Cloud IaaS benchmark standard was released on May 3, 2016, and more information can be found in the Standard Performance Evaluation Corporation's announcement.  At the time of the release, Dell submitted the first and only result in the industry.  This result is based on the Dell Red Hat OpenStack Cloud Solution Reference Architecture, comprising Dell's PowerEdge R720 and R720xd server platforms running the Red Hat OpenStack Platform 7 software suite.  The details of the result can be found on the SPEC Cloud IaaS 2016 results page.

    Dell has been a major contributor to developing the SPEC Cloud IaaS benchmark standard right from the beginning, from when the charter of the SPEC Cloud Working Committee was drafted to when the benchmark was released.  So it is no surprise that Dell was the first company to publish a result based on the new cloud benchmark standard.  Dell will continue to use the SPEC Cloud IaaS benchmark to compare and differentiate its cloud solutions, and will additionally use the workloads for performance characterization and optimization for the benefit of its customers.

    At every opportunity, Dell will share how it is using the benchmark workloads to solve real world performance issues in the cloud.  On Wednesday, June 29th, 2016, I will be presenting a talk entitled “Measuring performance in the cloud: A scientific approach to an elastic problem” at the Red Hat Summit in San Francisco.  This presentation will include the use of SPEC Cloud IaaS Benchmark standard as a tool for evaluating the performance of the cloud.

    Computer benchmarking is no longer an academic exercise or a competition among vendors for bragging rights.  It has real benefits for customers, and now – with the creation of the SPEC Cloud IaaS 2016 Benchmark – it advances the state of the art of performance engineering for cloud computing.


  • Identity and Access Management - Blog

    Payment Card Industry Data Security Standard

    While a relative newcomer to the IT compliance scene, PCI DSS has been mandated by all members of the PCI Security Standards Council, including Visa International, MasterCard Worldwide, American Express, Discover Financial Services and JCB International. What this means, essentially, is that all banks that process the payment transactions associated with these cards are responsible for ensuring that merchants meet the standard or face severe penalties.

    PCI DSS has an extensive reach — it applies not only to your business, but also to virtually any vendor that supports your organization by accepting, storing, processing or transmitting payment card data, including personal data from credit and debit cards. Any business partner or vendor that handles cardholder data (CHD) or sensitive authentication data (SAD) in these capacities is classified as a PCI merchant and is required to comply.

    Objectives and requirements

    The overriding goal of PCI DSS is to ensure payment card data confidentiality, which means making sure that you and your vendors have the proper operational processes and controls in place to secure customer data and ensure it is auditable. Specifically, PCI DSS requirements are intended to ensure that organizations:

    • Build and maintain secure networks and systems

    • Protect cardholder data

    • Maintain a vulnerability management program

    • Implement strong access control measures

    • Regularly monitor and test networks

    • Maintain an information security policy

     Many of the PCI DSS standards have detailed requirements that focus on key processes and controls organizations must have in place for managing user identities and entitlements.

    These include controls that:

    • Ensure each user is uniquely identified
    • Define access needs for each role
    • Assign access based on an individual's job classification and function
    • Limit access to cardholder data to only authorized users
    • Ensure each user has explicit approval for the least amount of data and privilege needed to perform his or her job role
    • Enforce strong password management settings
    • Track logging and recording of all privileged user activity
    • Prevent the abuse of system accounts
    • Secure audit logs 

  • Network, SRA and Email Security Blog

    Zika Is Not the Only Virus You Can Get By Watching the Olympics

    It's August 5, 2016, and you settle down at your computer to watch the Olympic opening ceremony.  You have no fear of catching Zika, unlike the thousands of people in Rio. Feeling safe, you navigate to the official broadcast site of the Games and click Watch the Olympics live.

    But wait, there's fine print:  Simply Sign-In Using Your TV Provider Account Login/Password And You'll Have Access To FREE, LIVE Rio Olympics Coverage. Not cool. "Who pays for TV?" you ask yourself. "Haven't they heard of streaming?" So you search on "Olympics live streaming free" and there, on the first page of results, is:

    [Screenshot: an unofficial live-streaming site in the search results]

    The site doesn't look official, but hey, the media player icon looks like YouTube, which you know is safe, so you click Play. The site asks you to download and install a video codec. The ceremony starts in five minutes and the screen screams "Stream in HD now!"  You're one step away from the free live stream, so you click Accept… and within microseconds, your computer is infected with a virus.  Perhaps the video will start, perhaps it won't, but in either case, the malware now on your computer will give you greater problems than missing the opening ceremony.

    How can you protect yourself from such a scenario?  Here are some precautions you can take:

    1. Don’t go there.  If a website is not an official site, chances are that it does not have the right to stream protected content.  And if it does not have rights to stream, then the content is just bait for unsuspecting visitors like you.
                 
    2. Don’t click that.  So you ended up on the site anyhow.  If it asks you to click a link or icon, tempts you with ads for free stuff, wants to do a download, or wants to install something, the only thing you should click is Close, as in close the browser.
                 
    3. Update, update, update.  Whether you are using a PC or a mobile device, update your operating system with the latest hotfixes. Update your browser to the latest level. Update your anti-virus software with the latest signatures. Configure your applications to do all these things automatically.
                 
    4. Control and protect your network with a next-generation firewall.  A next-generation firewall includes up-to-date security services that block websites of ill repute, prevent malicious downloads, and kill the latest viruses. It even denies intrusions and attack attempts, snuffs out botnet traffic patterns, and recognizes which countries have the riskiest and most suspect Internet activity. You can't get this level of security simply by following the first three precautions.

    To learn more about the bad things that can infect you on the Internet and the ways you can inoculate yourself, read our ebook, How Ransomware Can Hold Your Business Hostage.