Latest Blog Posts
  • Data Protection

    How Do Purpose-Built Backup Appliances Bridge Legacy and New Backup and Recovery Strategies?

    Out with the Old, In with the New

    Is it easy for a company to throw out its existing backup and recovery infrastructure and bring in a new one? Some vendors are certainly pushing that as a viable option. In practice, however, it is not realistic, for many reasons. In many organizations, backup and recovery infrastructure and operations have been built up over long periods of time. In today's world, legacy products and solutions coexist with new, highly virtualized applications and widely distributed IT infrastructure.

    Now companies must manage and safeguard their data across physical, virtual and cloud environments. As a result, many firms find it very difficult, if not impossible, to modernize and optimize their backup and recovery environment. We see this time and time again. Exacerbating this trend is the difficulty of adequately predicting expansive data growth at all levels of an organization. Much of that new data is unstructured and resides outside the confines of the data center.

    Organizations face significant challenges reconciling existing data protection and recovery practices and policies with new application deployments, increased virtualization and the march to the cloud. The cloud (public, private or hybrid) has been a transformative mechanism for data protection and recovery. Furthermore, the cloud can help companies build out business continuity or disaster recovery (DR) strategies at a much lower cost. Additionally, archiving to the cloud is now a viable option.

    So, how do purpose-built backup appliances help organizations meld their old and new data protection and recovery strategies in the face of unabated data growth, expanding virtualized environments and the push to the cloud? First and foremost, we need to outline why companies should buy an appliance rather than cobbling one together from software, servers and storage they currently own.

    Why buy a backup appliance rather than build one with backup software, a server and a storage array? Here's some sound and prudent advice to take into account when considering a do-it-yourself (DIY) approach:

    • A DIY build requires configuring the backup software, server, storage array and target media
    • Untested software and hardware combinations create complexity
    • IT staff must manage the software, hardware and storage infrastructure separately
    • Multiple points of failure, no single vendor to escalate issues to, and maintenance challenges
    • Costly licensing models for capacity expansion, agents and options

    A backup appliance alleviates all of the challenges listed above and provides the following benefits:

    • An economical, turnkey appliance that is tightly integrated with backup/restore software and backend storage
    • Components optimized and tested for supported configurations
    • Simplified or all-inclusive licensing for agents, options or replication
    • Consistent backup, recovery and DR, as well as a conduit to cloud-based storage options
    • Ease of deployment, ease of use and greater flexibility
    • Built-in wizards that make it easy to install and deploy
    • Consistent performance, with storage optimization such as deduplication
    • Ideal for SMBs, the mid-market and branch offices, with the ability to scale to the enterprise
    • Coexistence with the data protection schemes and strategies already in place

    There is a plethora of backup appliance choices in the marketplace today. Nevertheless, not all appliances are the same. Many vendors building appliances today use proprietary software, hardware and cloud components, and in many cases an organization will need to purchase additional SKUs or options to build out a comprehensive backup and disaster recovery strategy. That opens the door to another set of challenges and can be more costly to support long-term. A company evaluating a backup appliance needs to consider whether the vendor can deliver, support and maintain the solution over the long haul. Not many vendors have the reach or resources to build out their own IP across software, hardware and storage technologies.

  • Windows Management & Migration Blog

    Government Agencies Migrate to Windows 2012: Prepare [New White Paper]

    If lack of preparation causes so many Windows Server migration problems down the line – and it does – then what does it take to prepare adequately?

    I posted last week on government agencies and their efforts to migrate from Windows Server 2003. Market Connections surveyed federal IT directors and managers and found that 89% had either upgraded completely or begun the process of doing so.

    Those are encouraging numbers, but you can't deduce from them that the migrations were smooth, ZeroIMPACT affairs. Some of them were surely nightmares for the users, managers and system administrators involved. For a hiccup-free migration from Windows Server 2003, you need to get your IT ducks in a row. And keep them there.

    Preparing for Windows Server Migration

    To prepare adequately for the migration, you’ll have to spend more time examining your current environment than you think.

    “Why should we worry about what we have now?” say your impatient sysadmins. “We don’t have much time for this project, and we want to focus on what’s ahead.”

    They should cool their jets. The road to a smooth migration starts with understanding the existing Active Directory environment, testing for application compatibility and discovering dependencies. It’s time invested now that will save time and headaches later.

    Again, in the context of a future-oriented migration project, these steps may make you feel as though you’re focusing on the past or present, but you and your team are laying precious groundwork for the migration and for the future beyond it.
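    As a concrete starting point for that environment assessment, here is a minimal sketch of pulling the Active Directory functional levels from a domain controller's RootDSE. It uses the open-source Python ldap3 library; the host name and account are placeholders of my own, and your inventory tooling may differ.

    ```python
    # Minimal sketch: read the Active Directory functional levels from a
    # domain controller's RootDSE as a first inventory step before migration.
    # Assumes the open-source ldap3 library; the host and account below are
    # placeholders, not values from this article.
    from ldap3 import ALL, Connection, Server

    server = Server("dc01.example.gov", get_info=ALL)  # hypothetical DC
    conn = Connection(server, user="EXAMPLE\\svc_inventory",
                      password="...", auto_bind=True)

    # RootDSE exposes functional levels without needing a directory search.
    rootdse = server.info.other
    for key in ("domainFunctionality",
                "forestFunctionality",
                "domainControllerFunctionality"):
        print(key, rootdse.get(key))

    conn.unbind()
    ```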

    How to Achieve a ZeroIMPACT Migration – New White Paper

    Want to get an idea of how your colleagues in other government agencies are dealing with the migration from Windows Server 2003? Have a look at the new Market Connections-Dell Software white paper, Windows Server Migration – How to Achieve a ZeroIMPACT Migration, and review the results of their July 2015 survey.

    You’ll find out more about the four pillars of our Windows Server migration methodology – prepare, migrate, coexist, manage – and see how to use them as a valuable paradigm throughout your own project. Dell Software offers tools to help you at every step along the way.

    Jeffrey Honeyman

    About Jeffrey Honeyman

    Jeff Honeyman manages messaging and content for government and education for Dell Software. He is also a saxophone and clarinet player and science fiction reader.

    View all posts by Jeffrey Honeyman | Twitter

  • Statistica

    Is Statistica 13 Really All That Great? (Duh.)

    Probably every IT department has at least one cynic who believes that every software maker touts every new release as something earth-shattering. After all, why give software a new number if it doesn’t represent a quantum leap of some kind, right? However, it is arguably true that some releases may disappoint the masses while others may justify their sequential numerations. So, skepticism may be a healthy way of self-regulating one’s expectations.


    How does this apply to Statistica 13?

    Having said all that, you probably expect that I will now claim the new Statistica 13 really is earth-shattering (it is!) and that you should simply take my word for it (you should!). There actually are specific capabilities within this release that make touting its merits a very easy assignment. However, “earth-shattering” remains a subjective term, so I should not be so crass as to insist you take my word for anything.

    Instead, I will gladly let others make that case for me, because this Statistica release is very impressive and people are taking notice. Our newsletter subscribers already received a headful of headlines about Statistica 13, big data and the Internet of Things (IoT) generated from our recent Dell World event in Austin, TX. Maybe you've run across these headlines yourself in other venues.

    On top of Dell's own press release, these six articles are but a drop in the bucket of media coverage. What got journalists and analysts really excited about the Statistica 13 rollout is our software's Native Distributed Analytics (NDA), which saves time and effort by pushing algorithms and scoring functionality into your databases, analyzing your data, even big data, right where it lives. Statistica 13 distributes analytics anywhere, on any platform. When it comes to dealing with streaming data and transfer limitations, NDA will fast become a busy analyst's best friend.
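    To make the "score where the data lives" idea concrete, here is an illustrative sketch (not Statistica's actual API) of translating a trained model into SQL so the database evaluates it in place rather than shipping rows out. The table, column names and coefficients are hypothetical, and SQLite stands in for a production database.

    ```python
    # Illustrative sketch of "score the data where it lives": render a trained
    # model as SQL so the database evaluates it in place, instead of pulling
    # rows out. This is NOT Statistica's API; table, columns and coefficients
    # are hypothetical, and SQLite stands in for a production database.
    import sqlite3

    coefs = {"intercept": -1.2, "temperature": 0.035, "vibration": 0.8}

    # Build the linear predictor as a SQL expression.
    linear = " + ".join([str(coefs["intercept"])] +
                        [f"{w} * {col}" for col, w in coefs.items()
                         if col != "intercept"])
    score_sql = (f"SELECT sensor_id, 1.0 / (1.0 + EXP(-({linear}))) AS risk "
                 f"FROM readings")

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE readings (sensor_id INT, temperature REAL, vibration REAL)")
    db.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                   [(1, 70.0, 0.2), (2, 95.0, 1.4)])
    db.create_function("EXP", 1, lambda x: 2.718281828459045 ** x)  # portable EXP
    for sensor_id, risk in db.execute(score_sql):
        print(f"sensor {sensor_id}: risk {risk:.3f}")
    ```

    The point of the pattern is that only the small scoring expression travels to the data; the rows themselves never leave the database.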

    Meanwhile, you just know there are other enhancements in Statistica 13 that will make the user experience more enjoyable and productive with respect to data visualization, workspace GUIs and more. After all, we had to pack in enough newness to justify that new number 13, right?

    Today is a good day to check out Statistica 13 to see what it can do for you and your business. Also, be sure to subscribe to the Statistica newsletter to keep abreast of our latest product info and thought leadership.

    Read the Oct/Nov Statistica Newsletter >

  • KACE Blog

    Technology Tunnel Vision: Why Endpoint Management Without Network Security Is Putting Your Organization at Risk

     You know what’s about to happen in the photo? You see how the defender on the right is ready to block a pass? The player on the left is going to hesitate for a fraction of a second, pass the ball between the defender’s legs, dash around the defender and keep right on dribbling, leaving the defender a day late and a dollar short.

    Image credit: Nick Hubbard | Licensed under: CC BY 2.0

    We call that “tunnel vision” – focusing too closely on one area and failing to see the broader picture. 

    Endpoint Management and Tunnel Vision

    You can focus so intently on endpoint management that you take your eye off the ball of network security. That puts your organization at risk of somebody sneaking the ball past you and driving straight for the hoop before you know what’s hit you.

    Of course, it doesn’t seem like tunnel vision to you, because you’re constantly on the lookout for ways to tighten network security. But every new technology wave brings more endpoints to secure and more moving parts: tablets, smartphones, bring your own device (BYOD), internet of things (IoT) and wearables, to name a few.

    The struggle is for IT to manage that variety of endpoints without getting stuck in silos and losing sight of network security that spans the entire organization. You’re trying to do this even as the data center is becoming less centralized and assets are ending up in hosted, cloud and mobile repositories.

    Meanwhile, users want to be connected anytime and anywhere on an increasingly diverse set of devices. Their expectations for convenience and privacy are rising, and they are willing to engage in shadow IT to meet those expectations.

    New E-book: Technology Tunnel Vision

    IT resources are sagging, operating expenses are skyrocketing and security breaches are increasing in frequency and severity. Organizations are putting their reputations and their bottom lines at risk, despite their stepped-up efforts to increase security. The way to eliminate tunnel vision is to replace the traditional, siloed approach to systems management with a holistic view of network infrastructure and a fully integrated solution that offers centralized management and network security.

    We’ve released a new e-book, Technology Tunnel Vision: Part 1, that explains why the holistic approach is imperative today. Read it for a closer look at the impact of tunnel vision on IT, on security and on your ability to keep your adversaries from dribbling around you and leaving you flat-footed.

    David Manks

    About David Manks

    David Manks is a Solutions Marketing Director for Dell Software focusing on endpoint management and security products.

    View all posts by David Manks  | Twitter

  • Windows Management & Migration Blog

    Simplified Long-term Coexistence — #LotusNotes, #Office365 and Beyond

    We’re frequently asked about the viability of long-term coexistence between platforms like IBM Lotus Notes, Domino, Microsoft Exchange and Office 365.

    These inquiries originate from a variety of organizations and scenarios. Some examples include:

    • Growth through mergers and acquisitions
    • Long-term vendor/supplier relationships
    • Disparate departments, offices or branches with different platform requirements

    However, regardless of the scenario, the concerns seem quite consistent:

    • Will we be able to quickly establish a common directory?
    • How will users be able to schedule resources across systems?
    • Are free/busy requests possible between environments?
    • What is the impact to legacy applications and workflows?
    • Is coexistence a feasible model for maintaining business productivity on a long-term basis?

    Coexistence World

    Many organizations going through a migration establish coexistence for a short period of time as they complete the migration project. However, entities requiring longer-term coexistence have understandable concerns about the impact to business operations and workflows. Rather than a solution to fill a temporary gap, they are looking for a new architecture and standard of operation. As a result, they want to ensure the organization will be able to flourish and grow in a coexistence world.

    While there are certainly costs and considerations when maintaining and supporting multiple platforms, the good news is that long-term coexistence is a viable option, as discussed in this recent case study by Imerys and InfraScience.

    In the case study, you'll see how Dell Software and InfraScience provided Imerys with seamless calendar and directory synchronization between Office 365 and IBM Notes using Dell Coexistence Manager for Notes. Thanks to solutions like the one Imerys selected, organizations are free to weigh the costs of maintaining two environments against the benefits to the business, without worrying about whether coexistence is even possible.

    One of the primary goals for the coexistence and migration solutions from Dell Software is to allow organizations to make decisions based on business and organizational needs, rather than the requirements of their chosen solutions. This case study is one example demonstrating that we are meeting those objectives, so feel free to evaluate your needs, coexist if that makes sense for your organization, and “play nice” with others.  

    Learn more about how we can help you ensure coexistence in this short case study.

    About Daniel Gauntner

    Dan Gauntner is a global product marketing manager for Dell Software where he oversees the positioning and go-to-market strategy for Lotus Notes, GroupWise, SharePoint and Enterprise Social migration solutions.

    View all posts by Daniel Gauntner | Twitter

  • Stat

    Change Management: Is it Art or Science? [New tech brief]

    How much of change management, version control and migration management is art, and how much is science?

    I worked for an IT director who had been through dozens of audits and migrations. He used to talk about “the Art and Science of IT.”

    “If it was predictable and you were prepared for it, then it’s science,” he would say. “If not, then it’s art.”

    That confused me at first, but it makes more sense to me as time goes on.

    Art in Change Management

    Here are a few of the unpredictable factors in change management that make it seem like an art:

    • Auditors are suspicious by nature. They’re paid to be suspicious, usually by people who are too shy and polite to be that suspicious.
    • You never know what auditors are going to ask you for. They take professional pleasure in springing the unpredictable on you. If you show them a report of 300 change/service requests (CSRs), there’s no telling how many they’ll want to examine, let alone which ones.
    • You’re guilty until you can prove you’re innocent. What are auditors trained to assume? That people in your organization have made unauthorized, undocumented changes to your enterprise systems. Maybe they have and maybe they haven’t, but it’s up to you to produce the authorization and documentation for every change.
    • Time is not on your side in an audit. The longer you fumble and rummage to demonstrate that your changes were properly authorized, the less confidence you instill in your change management process and the greater the likelihood that the auditors will smell blood in the water.

    So as cool as art may be in the evenings and on weekends, you really want more science when your job involves change management, version control and migration management.

    Science in Change Management

    The best way to replace your change management art with change management science is to leave as little as possible up to chance. Richard Kosiba, PeopleSoft sysadmin at the University of Texas Health Science Center at Houston, found that out.

    Here are a few of the ways in which Richard used Stat to show that he was prepared for state audits of the Center's financial and HR systems (a generic sketch of the spot-check follows the list):

    • Change log – Richard has access to a log of all changes to his PeopleSoft and enterprise systems during the last 12 months, including version control and migration details.
    • CSR list – Richard runs a report and gives the auditors a list of CSRs, from which they pick a handful of CSRs at random.
    • CSR object listing – The auditors spot-check the CSRs and request data on each one, like date/time stamps, objects, packages, what happened, who did it, who approved it and when. Richard is able to drill down multiple levels into the CSRs and quickly demonstrate that they are properly documented.
    • Security information – Showing their healthy suspicion, the auditors ask to see details that will convince them that the documentation is legitimate and that nobody has made unauthorized changes in PeopleSoft. Richard provides security reports and, to convince the auditors definitively, he also provides screenshots.
    • Testing reports – Auditors want to see that approved changes were tested and then deployed to production. Richard runs testing reports from Stat.
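    For illustration only, here is a minimal Python sketch of that spot-check. The change-log fields, CSR numbers and sample size are hypothetical; this is the shape of the check the auditors perform, not Stat's implementation.

    ```python
    # Generic sketch of the audit spot-check described above: sample CSRs at
    # random and verify each has approval, testing and a migration record.
    # The schema and data are hypothetical; this is the shape of the check,
    # not Stat's implementation.
    import random

    change_log = {
        "CSR-101": {"approved_by": "j.smith", "tested": True,  "migrated": "2015-06-01"},
        "CSR-102": {"approved_by": None,      "tested": False, "migrated": "2015-06-03"},
        "CSR-103": {"approved_by": "a.jones", "tested": True,  "migrated": None},
    }

    for csr in random.sample(sorted(change_log), k=2):  # auditors pick a handful
        rec = change_log[csr]
        problems = [label for label, ok in [
            ("missing approval",    rec["approved_by"] is not None),
            ("not tested",          rec["tested"]),
            ("no migration record", rec["migrated"] is not None),
        ] if not ok]
        print(csr, "OK" if not problems else "FLAGGED: " + ", ".join(problems))
    ```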

    Does Richard miss the art of change management? Science is better, he thinks.

    Instead of losing a week of productivity while sitting with auditors, he sets up a conference call with screen sharing and goes through the audit in about three hours. Also, the auditors prefer science: Richard can more efficiently show them all the details they need. So much so, in fact, that they know what to ask for from one year to the next and what kinds of reports Richard can provide.

    New Tech Brief: “Mastering the Art of Version Control and Migration Management with Stat”

    We conducted a webcast with Richard that is a case study on what his state auditors look for and how easily he is able to provide it.

    We’ve also put together a tech brief called Mastering the Art of Version Control and Migration Management with Stat. It explains the change management features to look for in software and shows how Stat embodies them. Have a look at the tech brief and start replacing the art in your change management processes with a little more science.

  • Foglight for Virtualization and Storage Management

    Getting Back to the Promise of Virtualization

    A virtualized environment is everything your organization hoped for and more, right? Almost unlimited resources, easy spin-up/spin-down of virtual servers and big cost savings on hardware. When industry experts, analysts and even your colleagues proclaimed it to be true, you quickly charted your course and set sail on the virtual sea.

    So why hasn’t your ROI materialized? Why isn’t your virtualization strategy panning out the way you expected? Why is your organization experiencing spikes in both operating expenses (OpEx) and capital expenses (CapEx)?

    After speaking with hundreds of customers, we’ve concluded that the answer to these questions rests with one simple fact:

    It’s so easy to create and distribute virtual machines (VMs) that companies have become complacent about the need to properly manage them.  

    In this three-part blog series, we'll explain the key concepts you need to optimize virtualization management within your organization.

    VM Density

    While you can create and run dozens of VMs per physical server, keep in mind that even though the workloads are virtual, the server and storage resources are not.

    VM density is an important metric because you trade it off against optimal performance. If your density is too high, then your VMs are competing for precious resources, which can lead to poor performance. High density can be a problem, but it's easier to spot than low density.

    If your density is too low, then you're underutilizing your physical resources. The most common symptom is excellent performance, which makes it look as though there's no problem at all. However, once you edge back toward high density, performance problems begin to appear. Striking the balance is what optimizing virtualization management is all about.

    VM size is also a factor in density. Creating immense VMs leads to inefficient sharing of disk space, physical memory and CPU.
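    A quick, back-of-the-envelope way to see the trade-off is to compute density and overcommit ratios for a host. The inventory values and guardrail thresholds in this sketch are illustrative assumptions, not product defaults.

    ```python
    # Back-of-the-envelope sketch of the density trade-off: compute VM density
    # and overcommit ratios for one host and flag both extremes. Inventory
    # values and thresholds are illustrative assumptions, not product defaults.
    host = {"pcpus": 32, "ram_gb": 384}
    vms = [{"vcpus": 4, "ram_gb": 16}] * 20  # twenty identical VMs for simplicity

    density = len(vms)
    cpu_overcommit = sum(vm["vcpus"] for vm in vms) / host["pcpus"]
    mem_overcommit = sum(vm["ram_gb"] for vm in vms) / host["ram_gb"]

    print(f"{density} VMs/host, CPU overcommit {cpu_overcommit:.1f}:1, "
          f"memory overcommit {mem_overcommit:.2f}:1")

    if cpu_overcommit > 4.0:
        print("Density likely too high: VMs will compete for CPU.")
    elif cpu_overcommit < 1.0:
        print("Density likely too low: physical resources sit idle.")
    ```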

    VM Sprawl

    Physical sprawl is pretty easy to spot: rows of servers take up a lot of real estate and cost a lot of money. VM sprawl is less conspicuous and can be more difficult to identify. Almost every virtual data center exhibits some symptoms of VM sprawl:

    • Abandoned VM images—They’re no longer in the inventory, but they’re still in your virtualization environment. Even though they’re invisible, they consume resources.
    • Powered-off VMs—Nobody has started them up for months (years?), yet they still take up physical disk space.
    • Unused template images—If your organization has published templates for creating VMs, are you sure anyone uses them anymore?
    • Snapshots—VM snapshots that go a long time without modification are also disk hogs. If they’ve outlived their usefulness, then they’re candidates for deletion or archiving to inexpensive storage.
    • Zombie VMs—Self-service provisioning is a great idea, until it isn’t. Most users assume there is no cost associated with spinning up a few VMs (and then forgetting about them), but zombie VMs consume resources needlessly.

    VM sprawl leads to wasted storage and computing resources in the virtual environment. It’s inconspicuous at first, but it doesn’t remain that way for long and it eventually affects your performance and OpEx.
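    Here is a minimal sketch of what a first-pass sprawl audit might look like over an inventory export. The field names, dates and 90-day threshold are assumptions for illustration; real tooling would query your virtualization manager directly.

    ```python
    # Minimal sketch of a first-pass sprawl audit over a hypothetical inventory
    # export. Field names, dates and the 90-day threshold are assumptions for
    # illustration; real tooling would query the virtualization manager.
    from datetime import datetime, timedelta

    NOW = datetime(2015, 11, 1)
    STALE = timedelta(days=90)

    inventory = [
        {"name": "web-07",  "power": "poweredOff", "last_used": datetime(2014, 3, 2),   "snapshots": 0},
        {"name": "test-12", "power": "poweredOn",  "last_used": datetime(2015, 10, 30), "snapshots": 6},
        {"name": "old-tpl", "power": "poweredOff", "last_used": datetime(2013, 1, 15),  "snapshots": 0},
    ]

    for vm in inventory:
        reasons = []
        if vm["power"] == "poweredOff" and NOW - vm["last_used"] > STALE:
            reasons.append("powered off and idle for more than 90 days")
        if vm["snapshots"] > 3:
            reasons.append(f"{vm['snapshots']} snapshots (possible disk hogs)")
        if reasons:
            print(f"{vm['name']}: reclaim candidate ({'; '.join(reasons)})")
    ```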

    Read the E-book: An Expert's Guide to Optimizing Virtualization Management

    We're just getting warmed up. In the next two blogs in this series, we'll take a look at optimizing virtualization management. Meanwhile, download our guidebook, An Expert's Guide to Optimizing Virtualization Management. It highlights five important, often overlooked areas of the virtual landscape where companies can usually recover lost ROI and regain the promise of virtualization.

    John Maxwell

    About John Maxwell

    John Maxwell leads the Product Management team for Foglight at Dell Software. Outside of work he likes to hike, bike, and try new restaurants.

    View all posts by John Maxwell | Twitter

  • SharePlex Blog

    SharePlex Highlight at Oracle Open World 2015

    After attending Oracle Open World 2015, I walked away energized and smiling. This conference brings together users from across the Oracle portfolio to learn best practices and hear about product advances. As the product manager for Dell SharePlex, I was particularly interested in sessions around Oracle GoldenGate, our primary competitor.

    Having arrived Saturday, I attended a few sessions, but really had my sights set on the opening GoldenGate session, Product Update and Strategy. Chai Pydmukkala, product management leader for GoldenGate, opened with the latest features and direction, then introduced a customer presentation from Andrew Yee, database architect at Ticketmaster. This was sure to be interesting, because Ticketmaster is also a Dell SharePlex customer. Right out of the gate, the presentation started with the slide below:

    In this diagram, there are a total of 8 replication connections: SharePlex has 3, native MySQL replication has 3, and Oracle GoldenGate has 2. Technically speaking, Oracle GoldenGate is capable of handling all 8 connections, but Ticketmaster decided to bring in other solutions, including Dell SharePlex, to handle them. Think about this for a minute. Up on the screen at Oracle Open World, in the opening session for GoldenGate, the presenting customer showed SharePlex chosen over GoldenGate for more connections, 3 to 2. I would like to thank Andrew Yee for showing this as a SharePlex customer. Wow, I have to say, I loved seeing SharePlex highlighted like this. And it didn't stop there. The next slide showed this:

    For the first two criteria on the slide, the customer rated Oracle GoldenGate and SharePlex as equal.

    The next criterion was conflict resolution via custom SQL code or parameters. Since SharePlex offers generic, parameterized conflict resolution in addition to custom SQL code, we are even there as well.

    Now, Oracle GoldenGate supports more platforms than we do, but that's today, and our vision and approach to databases will remain different from Oracle's. For example, Oracle introduced support for automatically managing cascading deletes in Oracle-to-Oracle replication, which SharePlex also supports. The trouble is that Oracle GoldenGate's approach works only with Oracle, whereas the Dell SharePlex design is database agnostic. Oracle GoldenGate will never have the same relationship with other database vendors that it enjoys with Oracle's own database developers, so its approach is limited to Oracle both technically and from a business perspective. Certainly MySQL can be made to cooperate, but that's about it. Consider Postgres, where EnterpriseDB's Postgres Plus Advanced Server (PPAS) provides SQL and PL/SQL compatibility alongside PL/pgSQL to ease your migration; EnterpriseDB is not going to change its database to support Oracle GoldenGate. SharePlex support for PPAS is in beta and will ship in SharePlex 8.6.3, currently scheduled for GA in early 2016. Look for announcements from Dell and EnterpriseDB.

    As the session moved to Q&A, “point of replication” became a hot topic with the audience, and it's a long-time favorite of mine (but that's for another blog). Members of the audience immediately began asking the Oracle GoldenGate product manager when it would support a before-commit point of replication, also known as optimistic commit. I will write a separate blog on this topic, but there was a lot of buzz in the room.
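    For readers new to the term, here is a purely conceptual sketch of the difference between post-commit and before-commit (optimistic) replication. It is illustrative pseudologic of my own, not the SharePlex or GoldenGate implementation.

    ```python
    # Conceptual sketch of the two points of replication. "Post-commit" ships a
    # transaction only after the source commits; "before-commit" (optimistic)
    # streams each change as it happens and must ship a rollback on abort.
    # Purely illustrative pseudologic, not SharePlex's or GoldenGate's design.

    def replicate_post_commit(txn_ops, committed, ship):
        if committed:            # wait for the outcome, then ship everything
            for op in txn_ops:
                ship(op)

    def replicate_optimistic(txn_ops, committed, ship):
        for op in txn_ops:       # stream immediately for lower replication lag
            ship(op)
        if not committed:        # pay the cost only on the rare rollback
            ship("ROLLBACK")

    ops = ["UPDATE orders SET qty = 2 WHERE id = 7", "INSERT INTO audit VALUES (7)"]
    replicate_post_commit(ops, committed=True, ship=print)
    replicate_optimistic(ops, committed=False, ship=print)
    ```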

    I can't describe the excitement I felt at the presence Dell SharePlex had in this session. I quickly found Andrew Yee from Ticketmaster and thanked him profusely for an outstanding presentation. Even though Oracle GoldenGate could have handled all the connections, the customer determined they needed Dell SharePlex alongside their Oracle database instances. I made my way to the next session energized by what I had just witnessed…and smiling. More to come on my Oracle Open World 2015 experience.

  • Dell TechCenter

    Expect More at Work and Do More of the Things You Love

    Announcing the #ExpectMore Selfie Contest

    Finding Work / Life Balance

    Remember what your workday used to look like? You'd wake up every morning, commute to the office and start solving problems. Some days, the problems couldn't wait: 6 a.m. conference calls, email bombardments, systems going down, all before you hit the snooze button. Then, in the evenings, you'd do your best to leave the office at a reasonable hour. Which, you and I both know, didn't always happen.

    But thanks to Dell Software, your workday is now more productive and ends when it should. We get it: work-life balance matters. And when you work more efficiently, you have the time to pursue your passions and do the things you love. In fact, this infographic shows you how!

    And now, we want to hear from you and see your smiling faces! So, we’re launching the #ExpectMore Selfie Contest,* giving you the chance to show us what you do with the time Dell Software saves you at work.

    What Is Your Passion?

    Do you use Toad at work to tackle complex databases, then, go home in the evening and fly the drone you and your son built in the garage?

    Do you optimize your VMware storage infrastructure with Foglight for Storage Management, then coach your daughter’s soccer club to victory on the weekend?

    Do you handle otherwise daunting email migrations with Migration Manager for Exchange, then mountain bike through the trails of your local state park?

    How to Enter

    Entry into the Contest is simple.

    1. Read the Official Contest Rules
    2. Take a picture of yourself at the office. (something safe to share)
    3. Take a picture of yourself pursuing your passion! (work appropriate, of course)
    4. Follow @DellSoftware on Twitter
    5. Tweet us the pictures with the #ExpectMoreContest hashtag, and include a brief description of how using our solutions allows you the work/life balance to pursue your personal passion in life.


    Each Friday during the Contest Period, we will select the most creative entry. Once selected, we will announce the winner via our @DellSoftware Twitter channel. The winner must then contact us via Twitter to receive a $50 Amazon Gift Card! 


    Stephanie quickly runs down database issues so she has time for her true passion - skateboarding!

    Kris easily finishes his endpoint management heavy lifting so he has time for his true passion - strength training!

    Good Luck!

    Expect more from your software solutions with Dell, and start doing more with your free time.

    * NO PURCHASE NECESSARY. Legal residents of the 50 United States (D.C.), 18 years or older. Ends 12/18/2015. To enter and for Official Rules, including prize descriptions, visit  Void where prohibited.

  • Dell TechCenter

    Installing Nano Server on Dell PowerEdge Server Internal Dual SD Module

    Disclaimer: Dell does not offer support for Windows Server 2016 at this time. Dell is actively testing and working closely with Microsoft on Windows Server 2016, but since the OS is still in development, the exact hardware components and configurations that Dell will fully support are still being determined. The information in our online documents published prior to Dell launching and shipping Windows Server 2016 may not directly reflect Dell-supported product offerings with the final release. We are, however, very interested in your results, feedback and suggestions. Please send them to

    The Internal Dual SD Module (IDSDM) is a configuration option for 13th-generation Dell PowerEdge servers such as the R630, R730 and R730XD. It provides dual SD interfaces that can be configured as a mirrored pair (primary and secondary SD), providing full RAID-1 redundancy. (Note: dual SD cards are not required; the IDSDM can operate with a single SD card, but without redundancy.) The IDSDM also appears as a non-removable device, which makes it feasible to host a small-footprint Windows Server OS. In addition, the IDSDM adds USB 3.0 support, which improves performance during deployment and the OS boot process.

    With the upcoming Windows Server 2016, the lightweight, small-footprint and customizable Nano Server can run on the IDSDM. There are a few methods of installing Nano Server that can be used with the internal SD card:

    • Create a Nano Server VHD. This method requires a running Windows Server or Server Core installation on the target system, and you also need to configure Boot Configuration Data (BCD) to make the VHD a boot option (see the sketch after this list).
    • Use WDS to PXE-boot a Nano Server VHD. This method is good for a bare-metal system. It requires WDS, and it also requires creating a Nano VHD or customizing the Nano image on the WDS server with the needed driver and feature packages.
    • Deploy Nano Server on a bare-metal system with a bootable USB flash drive that includes the Windows Setup files. This process is very similar to installing a regular Windows OS.
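    As a rough illustration of the BCD step in the first method, the following Python sketch wraps the standard bcdedit commands to clone the current boot entry and point it at the Nano Server VHD. It must be run elevated on the target Windows system, and the VHD path is a placeholder of my own, not a path from this article.

    ```python
    # Rough sketch of the BCD step in the first method: clone the current boot
    # entry and point it at the Nano Server VHD via the standard bcdedit tool.
    # Must be run elevated on the target Windows system; the VHD path is a
    # placeholder, not a path from this article.
    import re
    import subprocess

    VHD = r"[C:]\NanoServer\nano.vhd"  # hypothetical VHD location

    out = subprocess.run(
        ["bcdedit", "/copy", "{current}", "/d", "Nano Server VHD"],
        capture_output=True, text=True, check=True).stdout
    guid = re.search(r"\{[0-9a-fA-F-]+\}", out).group(0)  # GUID of the new entry

    for field in ("device", "osdevice"):
        subprocess.run(["bcdedit", "/set", guid, field, f"vhd={VHD}"], check=True)

    print(f"Added boot entry {guid} pointing at {VHD}")
    ```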

    This document, co-authored with Rui Freitas (Microsoft), describes the steps required to create and deploy a Nano Server image on the IDSDM with a bootable USB flash drive.