Latest Blog Posts
  • Desktop Authority

    What's New in Desktop Authority v.9.3

    Dell Desktop Authority v.9.3 is now live!

    We're happy to announce the release of a Windows 10 ready Desktop Authority v.9.3.  We've also made some performance enhancements and opened up features in certain licenses.  Have a look! 

    New Features

    Support for New Platforms

    Desktop Authority 9.3 now supports:

    • Windows 10

    • SQL Server 2014

    • Exchange Server 2013

    Enhancements and Improvements

    We've added the MSI Packages feature to the Standard version

    • All Standard license customers are now able to use the MSI Packages feature

    We've added Hardware/Software Inventory & Reporting to the Standard version

    • All Standard license customers now have full software/hardware inventory and reporting functionality, previously reserved for the Professional Edition

    We've also enabled the USB/Port Security feature for all Standard and Professional licenses

    • Now all Standard and Professional license customers have access to this popular feature at no additional cost

    We've spent some time implementing performance improvements. In this release, we've fixed issues that customers have reported, along with numerous intermittent bugs, performance bottlenecks, and stability issues that we found ourselves.

    See Release Notes for a complete list.

    New to the product? Ready to try it in your own environment?

  • Information Management

    The Strengths and Limitations of Traditional Oracle Migration Methods

    There’s an old saying: Insanity is doing the same thing over and over while expecting different results.

    If you’re tired of spending your nights and weekends performing Oracle upgrades and migrations, struggling to minimize downtime and business impact, and worrying about what might happen if the migration fails, it’s time to shake things up. You don’t have to limit yourself to old habits and old tools that don’t deliver the results you need. To get different results — better results — you need new approaches and new tools.

    For example, the traditional way to reduce the impact of a migration on the business is to schedule resource-intensive tasks during times of low activity. But before you just accept all those long evenings and weekends in the office, look into newer technologies, such as near real-time replication, that can minimize the migration’s impact on the business — and your personal life.

    Lest we throw out the baby with the bathwater, let’s take a hard look at the traditional methods for performing upgrades and migrations and determine whether and when they are helpful:

    • Export/import utilities and Oracle Data Pump — The most straightforward option for moving data between different machines, databases or schemas is to use Oracle's export and import utilities. But, boy, talk about manual, time-consuming and error-prone. Plus, these utilities can be used only between Oracle databases and require significant downtime. Oracle Data Pump is a step up, offering bulk movement of data and metadata (a rough sketch of what that looks like follows this list). But it still works only between Oracle databases and still requires significant downtime. Let’s keep looking.
    • Oracle database upgrade wizard — This wizard enables an in-place upgrade of a standalone database. But it’s hardly a general-purpose solution, since you can upgrade only one single-instance database or one Oracle RAC database instance at a time, and the source database version must be 10.2.0.4 or above for upgrade to 11g or 12c. Next.
    • Oracle transportable tablespaces (XTTS) — XTTS enables you to move tablespaces between Oracle databases, and it can be much faster than export/import. So far, so good. But XTTS moves your data as it exists; any fragmentation or sub-optimal object or tablespace designs are carried forward. Wouldn’t it be better to be able to clean things up as you go?
    • Cloning from a cold (offline or closed) backup — Cloning a database is a means of providing a database to return to in the event an upgrade does not succeed. While having a failback plan is a critical piece of the puzzle, it’s hardly a complete upgrade or migration strategy. Moving on.
    • Manual scripts — Ah, custom scripts. The first time, they seem like the perfect answer. No migration tool to license or learn, and you can tailor the migration or upgrade to meet your exact needs. But if you’ve gone down this path, you know that the process of creating, testing and running custom scripts is complex and requires significant time from skilled IT professionals with expert knowledge of your applications. And most of the time, it doesn’t enable you to avoid the dreaded downtime. Isn’t there a less manual approach?
    • Online options — Online upgrade and migration options include traditional remote mirroring solutions, Oracle RMAN, Oracle transportable databases and Oracle Data Guard. But if you’ve tried any of these options, you know that they all have significant limitations. For example, the transportable database feature can be used to recreate an entire database from one platform on another platform — but that’s just one of many migration and upgrade scenarios you face. You need a comprehensive approach that reduces both costs and the downtime that impact the business.
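
    For reference, here is roughly what the Data Pump route looks like in practice. The connect strings, schema name and directory object below are placeholders for illustration only, and a real migration would need far more planning around consistency and downtime:

      # Export the HR schema from the source database into a dump file
      expdp system@srcdb schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

      # Copy hr.dmp to the target server's DATA_PUMP_DIR, then import it into the target database
      impdp system@tgtdb schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log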

    In short, while each of these tools has value in certain specific scenarios, all of them are complex or resource-intensive, require lengthy downtime of production systems, or work only for Oracle databases. Fortunately, you don’t have to limit yourself to these traditional tools. In my next blog, I’ll explain why investing in an enterprise tool is a smart alternative.

    You can also learn more in our new e-book, “Simplify Your Migrations and Upgrades: Part 2: Choosing a fool-proof method and toolset.”

    Steven Phillips

    About Steven Phillips

    With over 15 years in marketing, I have led product marketing for a wide range of products in the database and analytics space. I have been with Dell for over 3 years in marketing, and I’m currently the product marketing manager for SharePlex. As data helps drive the new economy, I enjoy writing articles that showcase how organizations are dealing with the onslaught of data and focusing on the fundamentals of data management.


  • Windows Management & Migration Blog

    “Everyone Has a Plan Till They Get Punched in the Mouth.” Got Recovery Software for Your Exchange or AD Migration?

    If you’re like most system administrators, you’ll never have Mike Tyson working on your Exchange migration or Active Directory migration project. But when he says, “Everyone has a plan till they get punched in the mouth,” you’d better know he’s talking about you. And your project.

    As a system administrator or IT manager, you almost always have migration projects on your radar, and some of the 2016 releases from Microsoft (think: Exchange, Windows Server, SharePoint) may be coming into your environment soon.

    Have you got a plan? Are you ready in case you get punched in the mouth?

    Active Directory and Exchange Recovery Plans

    Think about a few things that can go wrong in an AD migration or Exchange migration:

    • You delete too much. Migration is a good time to eliminate data or email that you think nobody will need in the new environment. But suppose you delete data, then find out that somebody does indeed need it. Fast.
    • An email discovery request arrives mid-project. Just when you have half of your mailboxes migrated, you receive a discovery request for a specific email thread. With one foot on the boat and the other on the dock, how easily do you think you’ll be able to locate that email?
    • Those unused mailboxes aren’t really unused. You thought nobody used them, so you deleted them, only to find out people need them for VPN access. Oops. Pause your migration and spend a couple of days recreating the mailboxes.
    • Your migration gets interrupted. Suppose there’s a network outage in the middle of your migration. Where did you leave off? Where should you start up again? Later on, how can you be sure everything migrated successfully?

    None of that hurts as much as being punched in the mouth, but why run the risk when you can put in place an Exchange recovery plan using Windows recovery software before any of it happens?

    Implementing Recovery Manager for Exchange and Recovery Manager for Active Directory lets you do three things to ensure your migration goes smoothly:

    • Recover missing or corrupted data.
    • Confirm what has been migrated in the event of an interruption.
    • Create and test your migration and disaster recovery plan for compliance and peace of mind.

    Recovery Manager is like buying insurance against the things that can go wrong in your migration project. Since there’s no such thing as a perfect project, recovery software is a good way to keep your Exchange migration or AD migration plan intact and on schedule.

    Planning an Exchange or AD Migration? New Tech Brief

    Which Windows migration projects are on your horizon? Have you devised a migration plan yet? More important, how about a recovery plan?

    Have a look at our new tech brief, Planning an Exchange or AD Migration? Three Reasons to Include a Plan for Recovery. You’ll see in greater detail how a recovery plan can help you overcome getting punched in the mouth mid-project. You’ll also see real-world scenarios in which sysadmins have used Recovery Manager to get email back and keep their migration projects on schedule.

    What you won’t see is Mike Tyson. When we wrote the tech brief, he was still on bed rest. After all, everyone has a plan till they get punched by a hoverboard.

  • Information Management

    Mobility Tranquility, Part 2: Pull Out a Smartphone and Monitor Your SQL Server Performance

    Mobility Tranquility is the state we want to put you in. It’s where you are when you’re using the Spotlight mobile app to monitor the performance of your SQL Server instances anytime, from anywhere.

    Spotlight collects SQL Server performance data from all layers – SQL Server, Windows, VM layer/Hyper-V, Analysis Services, High Availability, Replication and SQL Azure – then summarizes it in dashboards that make it easy for DBAs to execute basic troubleshooting tasks:

    • Prioritize at a glance the instances in greatest need of your attention
    • Run diagnostics quickly for individual instances and the SQL Server environment overall
    • Analyze and get to the root cause of performance problems

    We’ve taken the highest-level, most urgent functions of Spotlight and built them into our mobile app for Android, iOS and Windows Mobile. You can pull out a smartphone or tablet anywhere at any time and monitor SQL Server performance, no matter how far away your servers are located.

    Monitor SQL Server alarms and performance

    In my previous post, I described the heatmap in the app that shows you which of your instances are most in need of attention:

    You can touch any tile for high-level information about what’s going on inside the instance. More important, you’ll see specific notifications and alarms coming from the instance. In this example, database ADWorks12_DBM has some mirroring problems you’d better address:

    You then access a deeper layer of diagnostics for sorting and grouping notifications:

    As with the heatmap and everything we build into Spotlight, our goal is to make it easy for you to find any SQL Server performance problems as quickly as possible and start troubleshooting right away. Sorting and grouping of diagnostics is now available in the iOS app and will soon appear in the Android and Windows Mobile apps.

    Watch the webcast and install the Spotlight mobile app

    Our own Peter O’Connell conducted a webcast called SQL Server DBAs: Do you have Mobility Tranquility? You can listen as Peter takes you through the mobile app and describes the freedom it gives DBAs to “monitor SQL Server performance back at the office while out on a boat in Galway Harbor, being rained on and getting hypothermia.” Peter never lets a good time get in the way of monitoring his databases.

    Watch the webcast for a lively overview of the Spotlight mobile app, Spotlight for SQL Server Enterprise and Spotlight Essentials.

  • Data Protection

    Future-Proof Your Backup Strategy by Adding the Option to Use Cloud Services

    The mature and stable backup market has seen an influx of innovative technologies over the past few years and organizations can now choose a mix of backup technologies that are just right for them. Backup-to-tape is slowly being phased out and replaced with disk-based backup targets, and backup appliances and cloud services are also being added to the mix.


    IDC's recent survey of storage managers shows that 30% of European organizations are already using backup-as-a-service and that a further 43% are planning to add cloud services to their mix of backup technologies in the next 12 months.

    With so many options to choose from, it can be a challenge to design a future-proof backup strategy. Here are three key points to consider when choosing your next backup solution:

    • Can you back up to the cloud seamlessly and recover selectively? The key criterion for future-proofing your backup strategy is whether your backup solution can connect to the cloud and therefore let you leverage the cloud as a backup target. There are many cloud providers in the market — global, regional, and local — and it is important that you can connect to the cloud provider of your choice using the right APIs. The solution should also have features that support the use of cloud — for example, data deduplication and change-block tracking to reduce the data volume being sent over the Internet, or WAN acceleration to speed up the network traffic and ensure backup performance over the network link. Recovery from the cloud requires a flexible solution to recover either individual files, full workloads/applications, or an entire server. Ultimately, you want the cloud to become a seamless extension in your backup strategy, so that you can use data deduplication and compression across your on-premises estate as well as in the cloud and have flexible recovery options.
    • Can you contain cost? The cost structure for cloud backup is still not properly understood. 85% of organizations expect cloud storage to be cheaper than on-premises storage solutions, but cost drivers work differently in the cloud. While uploading and storing data in the cloud might be very attractively priced, downloading data for restore can be quite costly, depending on your cloud provider's business model. When choosing your next backup solution, it is important to ensure that the data footprint sent over the Internet and stored in the cloud is as small as possible, through the use of data deduplication and compression technologies, for example. Offering different recovery options is also essential to contain cloud-related costs, so that you can recover only what you need and consequently only pay for what you need to recover.
    • Can you replicate to and from the cloud to ensure cost-effective and reliable disaster recovery (DR)? Backup and replication are essential parts of any disaster recovery strategy. 30% of European storage managers are currently redesigning their disaster recovery strategy to ensure faster recovery times for business-critical applications and provide productivity applications for employees. Many organizations do not have the means to replicate to a secondary site, which is accepted as a best practice. Adding cloud to the mix adds a secondary site at a fraction of the cost of a traditional disaster recovery setup. Replication to and from the cloud is an important feature to enable successful DR.

    Ultimately, your new backup solution should give you the flexibility to take advantage of any backup technology you want to deploy and to leverage the benefits of cloud if you use cloud services; if you are not already using cloud services, it should provide the option to do so in the future, when the time is right for your organization.

    If you would like to learn more about the characteristics of a future-proof backup strategy, download our complimentary white paper, “Choosing the Right Public Cloud for Better Data Protection”.

    Carla Arend

    About Carla Arend

    Carla Arend is a program director with the European software and infrastructure research team, responsible for managing the European storage research and co-leading IDC's European cloud research practice. Arend provides industry clients with key insight into market dynamics, vendor activities, and end-user trends in the European storage market, including hardware, software and services. As part of her research, she covers topics such as software-defined storage, OpenStack, flash, cloud storage, and data protection, among others.


  • Dell Big Data - Blog

    Dell’s Jim Ganthier Recognized as an “HPCWire 2016 Person to Watch”

    Congratulations to Jim Ganthier, Dell’s vice president and general manager of Cloud, HPC and Engineered Solutions, who was recently selected by HPCWire as a “2016 Person to Watch.” In an interview as part of this recognition, Jim offered his insights, perspective and vision on the role of HPC, seeing it as a critical segment of focus driving Dell’s business. He also discussed initiatives Dell is employing to inspire greater adoption through innovation, as HPC becomes more mainstream.

    There has been a shift in the industry, with newfound appreciation of advanced-scale computing as a strategic business advantage. As it expands, organizations and enterprises of all sizes are becoming more aware of HPC’s value to increase economic competitiveness and drive market growth. However, Jim believes greater availability of HPC is still needed for the full benefits to be realized across all industries and verticals.

    As such, one of Dell’s goals for 2016 is to help more people in more industries to use HPC by offering more innovative products and discoveries than any other vendor. This includes developing domain-specific HPC solutions, extending HPC-optimized and enabled platforms, and enabling a broader base of HPC customers to deploy, manage and support HPC solutions. Further, Dell is investing in vertical expertise by bringing on HPC experts in specific areas including life sciences, manufacturing and oil and gas.

    Dell is also lending its own brand muscle to draw more attention to HPC at the C-suite level and thus accelerate mainstream adoption; this includes leveraging the company’s leading IT portfolio, services and expertise. Most importantly, the company is championing the democratization of HPC: minimizing the complexity and risk associated with traditional HPC while making data more accessible to an organization’s users.

    Here are a few of the trends Jim sees powering adoption for the year ahead:

    • HPC is evolving beyond the stereotype of being targeted solely at deeply technical government and academic audiences and is now making a move toward the mainstream. Commercial companies in the manufacturing and financial services industries, for example, require high-powered computational ability to stay competitive and drive change for their customers.
    • The science behind HPC is no longer just about straight number crunching, but now includes development of actionable insights by extending HPC capabilities to Big Data analytics, creating value for new markets and increasing adoption.
    • This trend towards the mainstream does come with its challenges, however, including issues with complexity, standardization and interoperability among different vendors. That said, by joining forces with Intel and others to form the OpenHPC Collaborative Project in November 2015, Dell is helping to deliver a stable environment for its HPC customers. The OpenHPC Collaborative Project aims to enable all vendors to have a consistent, open source software stack, standard testing and validation methods, the ability to use heterogeneous components together and the capacity to reduce costs. OpenHPC levels the playing field, providing better control, insight and long-term value as HPC gains traction in new markets.

    A great example of HPC outside the world of government and academic research is aircraft and automotive design. HPC has long been used for structural mechanics and aerodynamics of vehicles, but as the electronics content of aircraft and automobiles increases dramatically, HPC techniques are now also being used to prevent electromagnetic interference from impacting the performance of those electronics. At the same time, HPC has enabled vehicles to be lighter, safer and more fuel efficient than ever before. Other examples of HPC applications include everything from oil exploration to personalized medicine, from weather forecasting to the creation of animated movies, and from predicting the stock market to assuring homeland security. HPC is also being used by the likes of FINRA to help detect and deter fraud, as well as helping stimulate emerging markets by enabling growth of analytics applied to big data.

    Again, our sincerest congratulations to Jim Ganthier! To read the full Q&A, visit http://bit.ly/1PYFSv2.


  • Dell TechCenter

    Successfully Running ESXi from SD Card or USB – Part 2

    In Part 1 of this blog, we discussed some items that need to be addressed to successfully run ESXi from an SD card or USB drive, most notably the syslog files, core dump, and VSAN trace files (if VSAN is enabled).

    This post discusses some options for addressing each one, along with the pros and cons of each method. Unfortunately, there is no definitive answer. Since every infrastructure can and will be different, it is nearly impossible for me, Dell, or VMware to say exactly what you should do. The intent of the information below is to give you options for managing these files; these are by no means the only options.

    How do we manage these files so they are persistent?

    1. Syslog Files
      • Every IT environment will have a different preferred way to manage syslog files. Some datacenters already have a central syslog server that ESXi can use to send its logs. Here are two additional solutions.
      • Solution 1 – vSphere Syslog Collector
        • This application, aptly named vSphere Syslog Collector, can be installed alongside a Windows-based vCenter installation. For the vCenter appliance, the syslog collector service is enabled but will not register as an extension; to retrieve the files you have to log in to the vCenter appliance via SSH, as they will not be available from the vCenter client.
        • The installer is included on the vCenter install disk and can co-exist with vCenter. For a large environment, use another VM as a standalone Syslog Collector to avoid overtaxing the vCenter server.
        • Pros:
          • Free!
          • Easy to install.
        • Cons:
          • Stores syslog files in a flat file system, which can make finding data difficult.
          • No integration with other VMware tools (e.g., vRealize Operations).
          • Although status can be seen from vCenter, not much else can be managed from vCenter.
      • Solution 2 – vRealize Log Insight
        • vRealize Log Insight is a full-blown log management and analytics solution.
        • Installation is done by deploying virtual appliances; the number of appliances depends on the ingestion rate. Since vRealize Log Insight can be used for all types of system logging, the amount of data can be considerable.
        • Pros:
          • Actual analytics for the logs that are collected.
          • Can manage logs sent from vSphere, Windows, Linux, etc.
          • Easy to manage.
          • Integrates with vRealize Operations.
        • Cons:
          • Not free.
          • High I/O consumer, especially for storage.
    2. Core Dump Files
      • Core dump files are fairly easy to manage.
      • As long as you are using at least a 4GB SD/USB drive, you are covered: the core dump will be saved to the 2.2GB partition reserved for it. ESXi doesn’t strictly need 4GB, but if you plan on running a system with a considerable amount of memory (i.e., >256GB but <512GB) or VSAN, it is required. Note that systems with more than 512GB of RAM are required to use local storage.
      • Another option is to set up vSphere Dump Collector, similar to vSphere Syslog Collector. vSphere Dump Collector is a service that can coexist on your Windows-based vCenter and collects dumps from hosts configured to send their core dumps over the network.
        • Pros:
          • Free!
          • Provides a central location for host diagnostic data.
          • Can hold the entire dump (no limitation on dump file size, unlike SD/USB).
          • Available on both Windows-based vCenter and the vCenter appliance.
        • Cons:
          • ESXi uses a UDP-based process to copy the core dump files to the target location. UDP is unreliable because packets are sent without verification of delivery, so data could be missing and you would not know it.
    3. VSAN Trace Files
      • There are only a couple of options to manage VSAN trace files. They cannot be sent to a syslog server.
      • Solution 1 – Let them write to the default RAMDisk.
        • Pros:
          • Easy to set up (i.e., no setup!).
          • VMware natively supports copying these files to the SD/USB drive during host failures (though not a total hardware failure).
        • Cons:
          • Increases the memory usage of the host, which can be an issue on memory-constrained systems.
          • If the VSAN trace files are larger than the available space in the locker partition, not all of them will be copied during a host failure.
      • Solution 2 – Send them to an NFS server.
        • Pros:
          • VSAN trace files are kept on a remote storage location, so they remain available even after a complete host hardware failure.
          • Easy to set up.
          • Reduces the load on the host’s memory.
        • Cons:
          • Requires an NFS server or storage location.
          • Additional network traffic out of the host.
      • Solution 3 – Write them to a local VMFS datastore.
        • Pros:
          • A good solution if no NFS server or storage is available.
        • Cons:
          • Wastes drive(s) and drive space that could be used for VSAN drives.
          • Requires the host to have a second disk controller, since VSAN drives need to be connected to a dedicated controller. This can be tricky in some configurations; for example, the R630 does not support multiple controllers.
          • Additional cost for drives just to support VSAN trace files.
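
    For orientation, here is a rough, hedged sketch of the kinds of commands involved in pointing a host at these targets. The syslog hostname, IP address, vmkernel interface and datastore path below are placeholders for illustration only, and exact options can vary by ESXi version:

      # Send host logs to a remote collector (vSphere Syslog Collector or vRealize Log Insight)
      esxcli system syslog config set --loghost='udp://syslog.example.com:514'
      esxcli system syslog reload
      esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true

      # Send core dumps over the network to a vSphere Dump Collector (UDP-based, per the con above)
      esxcli system coredump network set --interface-name=vmk0 --server-ipv4=192.0.2.10 --server-port=6500
      esxcli system coredump network set --enable=true
      esxcli system coredump network check

      # Redirect VSAN traces to persistent storage, such as an NFS-backed datastore mounted on the host
      esxcli vsan trace set --path=/vmfs/volumes/nfs_traces/vsantraces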

    Remember, not every solution above will work in your environment. But I do strongly advise doing something to protect, at the very least, the core dumps and VSAN trace files. These are two key items that VMware support will require to help resolve any issues that may come up. With the free options available, it is cheap insurance against what could be a terrible support/troubleshooting session.

    Look for the third and final blog in this series where I will show you how to configure some of the infrastructure discussed above.

  • Dell TechCenter

    Successfully Running ESXi from SD Card or USB – Part 1

    Recently, we have received questions about why our VSAN Ready Nodes don’t have local drives dedicated to running ESXi. This post provides some guidance on how to successfully deploy an ESXi host running from an SD card, such as the Dell IDSDM solution, or a USB drive.

    This method is fully supported as long as you take into account some requirements and recommendations. Dell takes this one step further with the Integrated Dual Secure Digital Module (IDSDM). This module can support up to two 16GB SD cards and can protect them in a RAID 1 configuration, so you get all the benefits of running ESXi from an SD card, with hardware-enabled redundancy as well.

    This post was originally going to focus only on our Ready Node architecture, but I felt it prudent to discuss this particular topic on a more general scale. Items relevant to VSAN are discussed but obviously if your host is not enabled with VSAN then any items pertaining to VSAN can be ignored.

    Requirements to install ESXi on SD/USB

    1. A supported SD/USB drive is required. Dell SD cards are certified to the highest standards to ensure reliable operation; the use of non-Dell SD cards is not recommended because Dell support has seen reliability and performance issues with them in the field. Each vendor has specific devices that it supports. Dell’s solution automatically provides boot redundancy with two SD cards: the Integrated Dual SD Module supports up to 16GB SD cards in RAID 1.
    2. Total system memory must be 512GB or less. For systems with more than 512GB of RAM, installation to a local HDD/SSD is required.

    ESXi Scratch Partition

    The ESXi scratch partition is used by ESXi to store syslog files, core dump files, VSAN trace files, and other files. The most important ones to manage for SD/USB boot are:

    • Syslog files
    • Core dump (PSOD)
    • VSAN trace files

    What are these three items? And why do I care?

    • Syslog files
      • This is a group of files that store critical log data from various processes running on the host.
      • Examples include: vmkernel.log, hostd.log, vmkwarning.log, etc.
      • These files are written to in real time by the host.
    • Core Dump Files
      • A core dump is the file written when ESXi hits the Purple Screen of Death, otherwise known as a PSOD.
      • The core dump contains helpful troubleshooting data to determine the cause and fix for the PSOD.
      • Although not written often (hopefully not ever!) we need a persistent place to store these items so they can be retrieved.
    • VSAN Trace Files
      • VSAN trace files contain actions taken by VSAN on the data that is being written to and read from VSAN data stores.
      • This data is needed by VMware support should any issue with the VSAN data store occur.
      • These files are being written to in real-time, like the syslog files.
      • These are completely separate from the syslog files.

    So why do we have to “manage” these files? Doesn’t ESXi just store them on the SD/USB drive?

    Not by default. When ESXi is installed to an SD/USB device, the scratch partition is not created on the drive itself, but in a RAMDisk. A RAMDisk is a block storage device dynamically created within the system RAM. This device is mounted to the root file system for the ESXi installation to use.

    • This location is very high performance (it’s in RAM).
    • Unfortunately, the RAMDisk is not persistent across reboots, so all the data written there is lost forever if the host were to crash or even be rebooted manually.

    There is one exception to this rule. In failure scenarios other than complete system failure, the VSAN trace files are written to the locker partition on the SD/USB drive. These trace files are written in order from newest to oldest until the locker partition is full. This won’t necessarily capture all the VSAN trace files as the VSAN trace files can be much larger than the locker partition.
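
    As a quick, hedged illustration of how you might check and address this yourself (the datastore name and directory below are placeholders), you can look up the current scratch location and, if persistent storage is available, point scratch at it so the files survive a reboot:

      # Show where scratch currently lives; a RAMDisk path such as /tmp/scratch means it is not persistent
      esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation

      # Create a directory on persistent storage, then point scratch at it (takes effect after a reboot)
      mkdir /vmfs/volumes/datastore1/.locker-esx01
      esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker-esx01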

    Part 2 of this blog will discuss some different methods/software to address the management of the files we have discussed.  Look for it coming soon.  Once it is posted I will link it HERE.

  • Dell TechCenter

    VMware ESXi Upgrade fails with an error “Permission denied”

    Did you come across an “Error: Permission denied” failure while upgrading from one version of ESXi to a later version, as noted in the screenshot below?

    ESXi Upgrade Failure

    Wondering what might be causing this? This blog points out the details of the error and a potential solution to overcome the failure. It is generally seen on hardware configurations where ESXi is installed on a USB device and no HDD/LUN is exposed to the system during the first boot of ESXi.

    The reason for this error is that ESXi creates partition number 2 with partition ID ‘fc’ (coredump) during first boot when it doesn’t detect a hard disk/LUN. Here is an example of what the partition table looks like in this scenario.

    Here, vmhba32 is the device name for the USB storage device.

    Refer to the VMware KB article to learn more about the partition types. During the upgrade, the installer sees partition #2 and tries to format it as VFAT, thinking it is a scratch partition. The format triggers the “Permission denied” error.

    Now, how do I resolve it? Here you go.

    The first step is to reassign the coredump partition to a partition other than #2. The commands shown in the screenshot below do exactly that.
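
    The original screenshot is not reproduced here, but as a rough sketch of those steps (assuming the USB boot device enumerates as mpx.vmhba32:C0:T0:L0, a placeholder device name, and that the coredump moves to partition 7 as in the next step):

      # Show the current coredump partition configuration
      esxcli system coredump partition list

      # Release the existing configuration, then activate the coredump on a different partition
      esxcli system coredump partition set --unconfigure
      esxcli system coredump partition set --partition="mpx.vmhba32:C0:T0:L0:7"
      esxcli system coredump partition set --enable=true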

    As you can see, the coredump partition is reassigned and made active on the 7th partition. Now it’s time to remove the 2nd partition from the partition table.
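
    Again as a hedged sketch, using the same placeholder device name:

      # Inspect the partition table on the USB boot device
      partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0

      # Delete the stale coredump partition (#2) that the upgrade installer was trying to format
      partedUtil delete /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 2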

    There you go. Upgrading ESXi to a later version is now seamless and will not end in a “Permission denied” error.

  • Dell TechCenter

    Surrounded by friends: reflecting on my career so far

    Back in the early ’90s, I began my professional career working for a payroll tax company. Soon after, I began to realize what corporate life was – the pluses and minuses. I also realized the coffee provided at the office was going to be a lot more cost-effective for me than multiple cans of soda each morning. Wearing a tie, drinking coffee; who was this person I saw in the mirror every day?

    As I mentioned, there were many pluses and minuses to corporate life. At a superficial level, I learned I didn’t really enjoy wearing a tie. Who does? Beyond that, I’m not going to bore you with any of the minuses I’ve found because, in all honesty, I don’t think it would be a useful exercise. Also, if I’ve learned one thing in my career and in life, it’s this: focus on the positive relationships you can build.

    I’ve been extremely fortunate to work alongside some incredible individuals. Many of them shaped my early years involved in technology, mentoring me in computer operations, different technologies, and even time management. A few of them even attended my wedding.

    I can think back and remember so many things about these people, from printing payroll tax forms on special printers that only one of us could figure out, to the day when their children came into their lives. The connections with these friends have been a constant in my career and my life.

    Now here’s the kicker: three of us still work together here at Dell Software. If you had asked Randy, Steve, or me 20 years ago whether we’d all still be working together, we probably would have laughed.

    So take a minute to think of the personal relationships you’ve been fortunate enough to build over the years. Work to keep the relationships you have, whenever possible. Take time to learn something interesting about your coworker; who knows – you may still be friends with them 20+ years from now!