Latest Blog Posts
  • General HPC

    Need for Speed: Comparing FDR and EDR InfiniBand (Part 2)

    By Olumide Olusanya and Munira Hussain

    This is the second part of this blog series. In the first post, we shared OSU Micro-Benchmarks (latency and bandwidth) and HPL performance results for FDR and EDR InfiniBand. In this part, we compare performance further using real-world applications: ANSYS Fluent, WRF, and the NAS Parallel Benchmarks. For the cluster configuration, please refer to Part 1.


    ANSYS Fluent

    Fluent is a computational fluid dynamics (CFD) application used for engineering design and analysis. It can simulate fluid flow, heat transfer, turbulence and other phenomena involved in a variety of transportation, industrial and manufacturing processes.

    For this test we ran Eddy_417k, one of the problem sets from the ANSYS Fluent benchmark suite. It is a reacting flow case based on the eddy dissipation model. It has around 417,000 hexahedral cells, making it a small dataset with high communication overhead.

    Figure 1 - ANSYS Fluent 16.0 (Eddy_417k)

    As Figure 1 shows, EDR holds a clear performance advantage over FDR as the core count grows to 80, and the gap widens further as the cluster scales. While FDR’s performance gradually tapers off after 80 cores, EDR continues to scale with the number of cores and performs 85% better than FDR at 320 cores (16 nodes).

    WRF (Weather Research and Forecasting)

    WRF is a modeling system for numerical weather prediction, widely used for both atmospheric research and operational forecasting. It contains two dynamical cores, a data assimilation system, and a software architecture that supports parallel computation and system extensibility. For this test, we study the performance of a medium-size case, Conus 12km.

    Conus 12km is a 12-km resolution case over the Continental U.S. domain. The benchmark is run for 3 simulated hours, after which we take the average time per time step.

    Figure 2 - WRF (Conus12km)

    Figure 2 shows EDR and FDR scaling almost linearly and performing nearly identically until the cluster reaches 320 cores, where EDR performs 2.8% better than FDR. This difference may seem small, but it is well above our largest run-to-run variation of 0.005% across three successive EDR and FDR 320-core tests.
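    As a sanity check, the percent-difference figure quoted above follows from the average time-per-step measurements; the numbers below are hypothetical stand-ins for illustration, not our measured values.

```python
def percent_gain(t_fast, t_slow):
    """Percent by which the faster interconnect outperforms the slower one,
    computed from average time per time step (lower is better)."""
    return (t_slow - t_fast) / t_slow * 100.0

# Hypothetical average time-per-step values in seconds (not measured data)
edr, fdr = 0.972, 1.000
print(f"{percent_gain(edr, fdr):.1f}%")  # 2.8%
```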

    The HPC Advisory Council’s result here shows a similar trend with the same benchmark. In their data, the two interconnects are neck and neck until the 8- and 16-node runs, where a small performance gap appears. The gap widens further in the 32-node run, where EDR posts 28% better performance than FDR. Both results suggest that EDR’s advantage would grow even larger as we scale beyond 320 cores.


    NAS Parallel Benchmarks

    NPB is a suite of benchmarks developed by the NASA Advanced Supercomputing Division to test the performance of highly parallel supercomputers; they mimic large-scale, commonly used computational fluid dynamics applications in their computation and data movement. For our test, we ran four of these benchmarks: CG, MG, FT, and IS. In the figures below, the performance difference is shown in an oval above the corresponding run.

    Figure 3 - CG

    Figure 4 - MG

    Figure 5 - FT

    Figure 6 - IS

    CG computes an approximation to the smallest eigenvalue of a large, sparse, symmetric positive-definite matrix using a conjugate gradient method. It also tests irregular long-distance communication between cores. As Figure 3 shows, EDR has a 7.5% performance advantage at 256 cores.
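    The core idea behind the CG kernel can be sketched in a few lines of NumPy: inverse power iteration, with a plain conjugate-gradient solve at each step, converges to the smallest eigenvalue of a symmetric positive-definite matrix. This is an illustrative toy, not the NPB code; the matrix size and eigenvalues are made up (NPB uses a large random sparse matrix).

```python
import numpy as np

def cg_solve(A, b, tol=1e-12, max_iter=500):
    """Plain conjugate gradient solve of A x = b for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def smallest_eigenvalue(A, iters=50):
    """Inverse power iteration; each step solves A x = v with CG."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        x = cg_solve(A, v)
        v = x / np.linalg.norm(x)
    return v @ (A @ v)  # Rayleigh quotient

# Small SPD test matrix with known eigenvalues 1..5 (sizes are made up)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0, 5.0]) @ Q.T
print(round(smallest_eigenvalue(A), 6))  # ≈ 1.0
```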

    MG solves a 3-D Poisson partial differential equation. The problem is simplified in that it has constant rather than variable coefficients, unlike real applications, and it tests both short- and long-distance communication between cores. Unlike CG, its communication patterns are highly structured. In Figure 4, EDR performs 1.5% better than FDR on our 256-core cluster.

    FT solves a 3-D partial differential equation using FFTs. It also tests long-distance communication performance, and shows a 7.5% performance gain with EDR at 256 cores, as seen in Figure 5.
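    The spectral idea FT exercises in 3-D can be shown in one dimension: transform the right-hand side, divide by -k² in Fourier space, and transform back. A minimal NumPy sketch, with the grid size and right-hand side chosen purely for illustration:

```python
import numpy as np

# Solve u'' = f on a periodic domain via FFT (spectral method),
# the same idea the NPB FT kernel applies in three dimensions.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = -np.sin(x)                      # chosen so the exact solution is u = sin(x)
k = np.fft.fftfreq(n, d=1.0 / n)    # integer wavenumbers 0, 1, ..., -1
fhat = np.fft.fft(f)
uhat = np.zeros_like(fhat)
nz = k != 0
uhat[nz] = fhat[nz] / (-k[nz] ** 2)  # divide by -k^2 in Fourier space
u = np.fft.ifft(uhat).real
print(np.max(np.abs(u - np.sin(x))) < 1e-12)  # True
```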

    IS, a large integer sort application, tests both integer computation speed and communication performance between cores. In Figure 6, EDR shows a 12% advantage at 128 cores, increasing to 16% at 256 cores, the largest gap of the four benchmarks.
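    A toy version of the key-ranking work IS times looks like a counting sort over uniformly distributed integer keys. The sizes below are made up, and the real kernel ranks keys across MPI ranks rather than producing one sorted array:

```python
import numpy as np

# Counting sort of uniformly distributed integer keys (hypothetical sizes)
rng = np.random.default_rng(1)
max_key = 2 ** 10
keys = rng.integers(0, max_key, size=100_000)
counts = np.bincount(keys, minlength=max_key)  # histogram of key values
sorted_keys = np.repeat(np.arange(max_key), counts)
print(bool(np.all(np.diff(sorted_keys) >= 0)))  # True: keys are in order
print(sorted_keys.size == keys.size)            # True: nothing lost
```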



    Across both blogs, we have shown micro-benchmark and real-world application results comparing FDR and EDR InfiniBand. In these tests, EDR delivered higher performance and better scaling than FDR on our 16-node Dell PowerEdge C6320 cluster. Some applications showed a wider margin between the interconnects than others; this comes down to the nature of the application, as communication-intensive workloads benefit far more from a faster network than compute-intensive ones do. Because of our cluster size, we could only test scalability up to 16 servers (320 cores). In the future, we plan to rerun these tests on a larger cluster to further probe the performance difference between EDR and FDR.


  • Data Protection

    Data Protection ROI: Avoid a Data Protection Achilles’ Heel

    How one company known for thwarting cyber attacks made improvements to protect its own data.


    Imagine a company that thwarts millions of cyber attacks for clients all over the world only to discover that its own digital property and reputation are at risk.

    In our e-book, you’ll learn how ISC8 found itself facing a potential Achilles’ heel: almost all of its intellectual property consists of digital assets, and backup turmoil was creating the threat of data loss. The wrong data protection strategy can set in motion a reversal from good fortune to bad, and undermine your organization’s integrity.

    ISC8 is a company with 150 patents. It knows how to detect the presence of sophisticated adversaries hiding inside enterprise networks, and it has a product line addressing some of the toughest security challenges an organization can face, including the user-identity, or 8th “human,” layer of the network stack.

    To make the most of these achievements, and to capitalize on its victories in the global battle for cyber security, ISC8 needed to face down data protection challenges on the home front: eliminating the risk of losing data required for product development and financial auditing.

    Last but not least, it had to prevent a loss of confidence in IT.

    Trial by full recovery

    ISC8 launched a new data protection strategy that deployed a mix of data protection solutions: a Dell DR series backup-to-disk deduplication appliance and NetVault Backup software. The two are tightly integrated to deliver automated backups and granular recovery. The new solution was subjected to a “trial by full recovery” — by copying 8 terabytes off of a server, executing a full recovery and undertaking a bit-wise comparison. The result: The full 8 terabytes were bit perfect.

    By leveraging the right data protection solution, ISC8 was able to:

    • Reduce the threat of data loss and improve compliance.
    • Save IT time, since the IT team no longer spends 10 percent of its time babysitting backups.
    • Push down storage costs, with 78 TB of data protected using only 2.55 TB of physical storage, a 31:1 ratio.
    • Recover an entire file share server (9 TB) in less than 20 minutes.
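    The 31:1 deduplication figure above follows directly from the quoted numbers:

```python
protected_tb = 78.0   # logical data protected
physical_tb = 2.55    # physical storage consumed
ratio = protected_tb / physical_tb
print(f"{ratio:.1f}:1")  # 30.6:1, which rounds to the quoted 31:1
```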


    Data threats can be avoided

    ISC8 surmounted its data protection challenges and has ejected a worst-case scenario — an irreversible loss of proprietary data — from the realm of the possible. That alone represents a substantial return on its investment in a new data protection strategy.

    There are other aspects of data protection ROI that we’ll be blogging about — accelerating recovery, gaining IT staff time, reducing storage costs — but they all hinge on avoiding data loss.

    Companies, like heroes, are rarely perfect. But when best efforts are put forward, potential data protection threats can be avoided.

    Read our e-book for more examples of how organizations like yours are protecting against data threats and increasing their return on investment from data protection.


    James Gomez

    About James Gomez

    James is a marketing content strategist for Dell Software's marketing organization.

    View all posts by James Gomez | Twitter

  • Identity and Access Management - Blog

    The Quest for Universal Single Sign-On Solutions

    For a long time the “Holy Grail” of identity and access management has been single sign-on (SSO) – at least when you ask end users and executives that’s what they would say. After all, nothing makes regular folks happier than easy access to everything they need, with only one password to remember, and no need to call IT – ever – to fix your mistakes, because you don’t make them anymore. It all sounds good, but as any of us who have tried to achieve SSO know, it’s not quite that simple.

    Maybe “less” sign-on, or “reduced” sign-on, or “close to single” sign-on would be more accurate… and that’s fine. Anything is better than the mess of unstreamlined access.

    Let’s take a quick look at SSO through the ages.

    • It started with password synchronization, but that soon became too cumbersome, too labor-intensive, and required too much integration to be a true “enterprise” solution.
    • Next we had enterprise SSO (ESSO), where all credentials were stored and the appropriate fields were automatically filled in when login was required. But ESSO doesn’t leverage more modern SSO concepts and is still difficult to implement and manage.
    • Finally we arrived at “true” SSO for Windows with the advent of Active Directory (AD), where a single account and a single credential provide universal access without any synchronization or form-filling. The problem is that it only works for Microsoft products, or things you can get to play nicely with AD, leaving many critical systems out in the cold.
    • Today we have the concept of federation, which is “true” SSO for web applications, but only if those applications speak the right standards, leaving many legacy web applications and all thick-client apps out of the equation.

    So you can get single sign-on for everything, but it will take a combination of tools and technologies and may not be worth the effort. Many people these days take advantage of SSO in pockets – maybe federation for SaaS apps and AD-based SSO for Unix and Linux – but often there are other critical systems that don’t fit the deployment.

    But you can get awfully close if you choose the right solutions.

    I’ve recorded a short “white board” video that details the options for Web single sign-on and provides alternatives to limited, siloed solutions that only address one of the needs detailed above.

    And if you want a more detailed discussion about SSO, how to ensure your project is successful, and how to “sell” the benefits of doing it right, read this white paper: Moving SSO beyond convenience.


  • Dell TechCenter

    Three Simple Ways to Show Your SysAdmin / DBA / IT Team Some Extra Love This Valentine's Day!

    It's almost Valentine's Day! Birds are singing, love is in the air and candy bowls around the office are filled with tiny hearts. As the holiday approaches, you're working hard to make your workday more efficient so you can clock out on the 14th and spend some time with that special someone! But think back to when you were a kid in elementary school. Valentine's Day was a time when you ran around giving paper hearts and candy to everyone in the class! It was a time to reach out to the people around you and let them know how special they are! Why not take that Valentine spirit with you into the workplace this year?

    February 14th is an opportunity to show a little extra love and appreciation to all the unsung heroes we work with every single day; the people who keep our IT systems going day in and day out! Don't forget about your faithful DBA, SysAdmin, and IT team that works tirelessly to keep things running smoothly. This year, rediscover the joy and appreciation of Valentine's Day and show a little extra love to your teammates!


    Here are 3 simple ways to show your DBA / SysAdmin / and IT team some extra love this Valentine's Day!


    1.  Surprise them with some coffee in the morning!

    Nothing shows appreciation like bringing that sweet, sweet nectar of the gods... coffee.

    ...Bonus points if it comes with a box of donuts.


    2.  Stop and say "hello!"

    Sometimes, all it takes is to pause for a moment and ask someone how they're doing!  Life is crazy busy, true.  But don't let it get so busy that you can't stop and ask someone how their day is going.


    3.  Ambush them with a big "Thank You!"

    Two little words.  That's all it takes.  You can turn someone's day around with a simple, "Thank You!"  So use up all those extra office supplies you have around and build someone a Thank You Valentine!  


    There you have it!  3 Simple ways to show your DBA / SysAdmin / IT Team some extra love this February!  

    Do you have any other ideas?  Let us know in the comments below!

    And from all of us here at Dell Software, Happy Valentine's Day!

    About Ryan McKinney

    Ryan McKinney has been a Social Media and Communities Advisor for Dell Software since 2014.

    View all posts by Ryan McKinney | Twitter

  • Windows Management & Migration Blog

    [Webcast Series] Spring into Action This Winter – Prepare for Exchange and SharePoint 2016

    As I sit in my Columbus, Ohio office on February 9th and look out the window, I frown at the sight of my car already covered in snow. The holidays have come and gone.

    I’m seeing doom-and-gloom weather reports of weekend storms to come. The harsh reality has hit me: winter is here! Depending on where you live and what type of climate you’re in, you may be experiencing the same winter blues. But the best way to fight the winter doldrums is to look forward to spring and all that it has to offer! Nice weather. Beautiful flowers. HBO’s Game of Thrones (where winter is still coming). But most importantly — NEW Microsoft releases!

    Microsoft Releases Equivalent to Christmas Morning

    • Exchange 2016 is here, but we’ll see more cumulative updates throughout 2016 that will deliver some of the features missing from the RTM software delivered last October.
    • SharePoint 2016 is scheduled to hit the market in Q2.

    We’ve made it easy for you to keep current on what’s new and learn what steps you can take today to get ready for your next migration project. Register for our on-demand webcasts to learn about the latest Microsoft releases.

  • KACE Blog

    Get Matched with Your Perfect Systems Management Solution

    Today you don’t have to have thousands of employees to have serious application management and endpoint protection challenges. Organizations of all sizes have a broad mix of operating systems and applications to deploy and keep up to date; a wide variety of desktops, laptops, smartphones and other devices to manage and secure; and geographically disparate sites that make hands-on attention from IT impractical, if not impossible.

    The difference is, some organizations are struggling with these challenges while others have application management well in hand. I’d like to tell you about one of the latter: online dating firm Meetic Group. (Spoiler alert: the company was able to complete an upgrade to Office 2014 without any issues in just two weeks.)

    Too Many Systems Management Tools Can Be as Bad as Too Few

    Meetic Group is headquartered in France, but its 500 employees are spread across Europe. The small IT team was finding it challenging to manage all the company’s desktops and laptops — Windows, Mac and Linux — and maintain an up-to-date inventory of software assets. The problem wasn’t that they didn’t have tools; they had an open-source solution for desktop and asset management, as well as Active Directory Group Policy settings and the Microsoft Deployment Toolkit. The problem was that they didn’t have the right tools.

    “We had multiple solutions to oversee, and none of them gave us the level of insight we really needed to manage the client estate effectively,” explains Frédéric de Ascencao, internal IT manager at Meetic Group. “Ideally, we wanted a single solution that could help us obtain more value from our desktops and laptops for the business and personnel.”

    Centralized Application Management Delivers a Host of Benefits

    So Meetic Group went shopping. Through careful research and extensive testing, the company found a clear winner: Dell KACE K1000 and K2000 appliances. “They arrive preconfigured and can be up and running in a couple of hours,” says de Ascencao. “Furthermore, we didn’t need to spend time or money up-skilling our administrators. Using a basic knowledge of IT, an administrator can install and operate Dell KACE solutions. It’s that simple.”

    Trading a hodgepodge of disparate tools for the integrated Dell solution has really paid off. Now Meetic Group enjoys:

    • Efficient software deployment — Rollout of Microsoft Office 2014 across the business was completed in just two weeks, weeks faster than previous similar upgrades.
    • Reduced inventory costs — Clear insight into its hardware and software assets enables the IT team to know exactly what spare inventory to maintain at each site.
    • Better security and performance — Effective remote management keeps software across the enterprise up to date, including critical applications like anti-virus tools.
    • Reduced licensing costs — The IT team can easily identify and reallocate unused and underused software licenses while ensuring software license compliance.
    • Easy customization — With the Dell KACE appliances, even an international company like Meetic Group needs just one image per operating system; the right language package is automatically deployed based on the location of the desktop.
    • Proactive capacity management — The solution reports on the amount of storage available on each desktop so IT staff can alert employees to archive data when needed, thereby preventing business disruptions.

    To learn more about how efficient, centralized application management can make a real difference to your business, check out the latest chapter in our e-book, “Technology Tunnel Vision, Part 3: Expanding control of your application environment.”


    David Manks

    About David Manks

    David Manks is a Solutions Marketing Director for Dell Software focusing on endpoint management and security products.

    View all posts by David Manks  | Twitter

  • KACE Blog

    Comprehensive Systems Management for Health Care IT

    Comprehensive ITSM

    The more money you invest in IT, the greater the return you want on your investment. Estimates from IDC show that worldwide healthcare IT spending will rise from $115 billion this year to $135.7 billion by 2019. That means you and your counterparts are increasing your investment in areas like patch management software, remote administration tools and systems management, and probably expecting more and more value from it.

    We’ve put together a paper called Realizing the Return on Healthcare IT Investment to highlight the way that comprehensive systems management maximizes the value of healthcare IT. It turns out that investment isn’t the only thing increasing from year to year; complexity is increasing as well, for several reasons.

    Complexity in Healthcare IT

    • Networks and the devices they connect are generating streams of new data that must be stored, analyzed and shared effectively.
    • Healthcare providers can see and care for patients more flexibly, creating new ways to drive down costs and allocate payments.
    • Opportunities are growing for collaboration among healthcare systems and organizations that have previously operated in silos.

    That’s why we emphasize the value of comprehensive systems management in healthcare IT, extending to patch management, physical inventory, system auditing, vulnerability scanning and configuration management.

    It’s also a big part of fulfilling HIPAA requirements. Being able to see, update and account for all of the hardware and software in your organization is a long step on the path to compliance.

    The Dell KACE K1000 Systems Management Appliance and the Dell KACE K2000 Systems Deployment Appliance help healthcare IT teams cut through complexity and maximize the return on their investments in IT. Have a look at our paper, Realizing the Return on Healthcare IT Investment, to gauge the fit with your healthcare environment.

    Christopher Garcia

    About Christopher Garcia

    A ten-year Dell veteran, Chris has had experience in various marketing roles within the organization. He is currently a Senior Product Marketing Manager.

    View all posts by Christopher Garcia 

  • Desktop Authority

    What's New in Desktop Authority v.9.3

    Dell Desktop Authority v.9.3 is now live!

    We're happy to announce the release of a Windows 10 ready Desktop Authority v.9.3.  We've also made some performance enhancements and opened up features in certain licenses.  Have a look! 

    New Features

    Support for New Platforms

    Desktop Authority 9.3 now supports:

    • Windows 10
    • SQL Server 2014
    • Exchange Server 2013

    Enhancements and Improvements

    We've added the MSI Packages feature to the Standard edition

    • All Standard license customers are now able to use the MSI Packages feature

    We've added Hardware/Software Inventory & Reporting to the Standard edition

    • All Standard license customers now have full software/hardware inventory and reporting functionality, previously reserved for the Professional Edition

    We've also enabled the USB/Port Security feature for all Standard and Professional licenses

    • Now all Standard and Professional license customers have access to this popular feature at no additional cost

    We've also spent time on performance. In this release, we've fixed issues that customers reported, along with numerous intermittent bugs, performance bottlenecks, and stability issues we found ourselves.

    See Release Notes for a complete list.

    New to the product? Ready to try it in your own environment?

  • Information Management

    The Strengths and Limitations of Traditional Oracle Migration Methods

    There’s an old saying: Insanity is doing the same thing over and over while expecting different results.

    If you’re tired of spending your nights and weekends performing Oracle upgrades and migrations, struggling to minimize downtime and business impact, and worrying about what might happen if the migration fails, it’s time to shake things up. You don’t have to limit yourself to old habits and old tools that don’t deliver the results you need. To get different results — better results — you need new approaches and new tools.

    For example, the traditional way to reduce the impact of a migration on the business is to schedule resource-intensive tasks during times of low activity. But before you just accept all those long evenings and weekends in the office, look into newer technologies, such as near real-time replication, that can minimize the migration’s impact on the business — and your personal life.

    Lest we throw out the baby with the bathwater, let’s take a hard look at the traditional methods for performing upgrades and migrations and determine whether and when they are helpful:

    • Export/import utilities and Oracle Data Pump — The most straightforward option for moving data between different machines, databases or schema is to use Oracle's export and import utilities. But, boy, talk about manual, time-consuming and error-prone. Plus, these utilities can be used only between Oracle databases and require significant downtime. Oracle Data Pump is a step up, offering bulk movement of data and metadata. But it still works only between Oracle databases and still requires significant downtime. Let’s keep looking.
    • Oracle database upgrade wizard — This wizard enables in-place upgrade of a standalone database. But it’s hardly a general-purpose solution, since you can upgrade only one single-instance database or one Oracle RAC database instance at a time, and the source database must be at a minimum supported version for upgrade to 11g or 12c. Next.
    • Oracle transportable tablespaces (XTTS) — XTTS enables you to move tablespaces between Oracle databases, and it can be much faster than export/import. So far, so good. But XTTS moves your data as it exists; any fragmentation or sub-optimal object or tablespace designs are carried forward. Wouldn’t it be better to be able to clean things up as you go?
    • Cloning from a cold (offline or closed) backup — Cloning a database is a means of providing a database to return to in the event an upgrade does not succeed. While having a failback plan is a critical piece of the puzzle, it’s hardly a complete upgrade or migration strategy. Moving on.
    • Manual scripts — Ah, custom scripts. The first time, they seem like the perfect answer. No migration tool to license or learn, and you can tailor the migration or upgrade to meet your exact needs. But if you’ve gone down this path, you know that the process of creating, testing and running custom scripts is complex and requires significant time from skilled IT professionals with expert knowledge of your applications. And most of the time, it doesn’t enable you to avoid the dreaded downtime. Isn’t there a less manual approach?
    • Online options — Online upgrade and migration options include traditional remote mirroring solutions, Oracle RMAN, Oracle transportable databases and Oracle Data Guard. But if you’ve tried any of these options, you know that they all have significant limitations. For example, the transportable database feature can be used to recreate an entire database from one platform on another platform — but that’s just one of many migration and upgrade scenarios you face. You need a comprehensive approach that reduces both costs and the downtime that impact the business.

    In short, while each of these tools has value in certain specific scenarios, all of them are complex or resource-intensive, require lengthy downtime of production systems, or work only for Oracle databases. Fortunately, you don’t have to limit yourself to these traditional tools. In my next blog, I’ll explain why investing in an enterprise tool is a smart alternative.

    You can also learn more in our new e-book, “Simplify Your Migrations and Upgrades, Part 2: Choosing a fool-proof method and toolset.”

    Steven Phillips

    About Steven Phillips

    With over 15 years in marketing, I have led product marketing for a wide range of products in the database and analytics space. I have been with Dell for over 3 years in marketing, and I’m currently the product marketing manager for SharePlex. As data helps drive the new economy, I enjoy writing articles that showcase how organizations are dealing with the onslaught of data and focusing on the fundamentals of data management.

    View all posts by Steven Phillips | Twitter

  • Windows Management & Migration Blog

    “Everyone Has a Plan Till They Get Punched in the Mouth.” Got Recovery Software for Your Exchange or AD Migration?

    If you’re like most system administrators, you’ll never have Mike Tyson working on your Exchange migration or Active Directory migration project. But when he says, “Everyone has a plan till they get punched in the mouth,” you’d better know he’s talking about you. And your project.

    As a system administrator or IT manager, you almost always have migration projects on your radar, and some of the 2016 releases from Microsoft (think: Exchange, Windows Server, SharePoint) may be coming into your environment soon.

    Have you got a plan? Are you ready in case you get punched in the mouth?

    Active Directory and Exchange Recovery Plans

    Think about a few things that can go wrong in an AD migration or Exchange migration:

    • You delete too much. Migration is a good time to eliminate data or email that you think nobody will need in the new environment. But suppose you delete data, then find out that somebody does indeed need it. Fast.
    • An email discovery request arrives mid-project. Just when you have half of your mailboxes migrated, you receive a discovery request for a specific email thread. With one foot on the boat and the other on the dock, how easily do you think you’ll be able to locate that email?
    • Those unused mailboxes aren’t really unused. You thought nobody used them, so you deleted them, only to find out people need them for VPN access. Oops. Pause your migration and spend a couple of days recreating the mailboxes.
    • Your migration gets interrupted. Suppose there’s a network outage in the middle of your migration. Where did you leave off? Where should you start up again? Later on, how can you be sure everything migrated successfully?

    None of that hurts as much as being punched in the mouth, but why run the risk when you can put in place an Exchange recovery plan using Windows recovery software before any of it happens?

    Implementing Recovery Manager for Exchange and Recovery Manager for Active Directory lets you do three things to ensure your migration goes smoothly:

    • Recover missing or corrupted data.
    • Confirm what has been migrated in the event of an interruption.
    • Create and test your migration and disaster recovery plan for compliance and peace of mind.

    Recovery Manager is like buying insurance against the things that can go wrong in your migration project. Since there’s no such thing as a perfect project, recovery software is a good way to keep your Exchange migration or AD migration plan intact and on schedule.

    Planning an Exchange or AD Migration? New Tech Brief

    Which Windows migration projects are on your horizon? Have you devised a migration plan yet? More important, how about a recovery plan?

    Have a look at our new tech brief, Planning an Exchange or AD Migration? Three Reasons to Include a Plan for Recovery. You’ll see in greater detail how a recovery plan can help you overcome getting punched in the mouth mid-project. You’ll also see real-world scenarios in which sysadmins have used Recovery Manager to get email back and keep their migration projects on schedule.

    What you won’t see is Mike Tyson. When we wrote the tech brief, he was still on bed rest. After all, everyone has a plan till they get punched by a hoverboard.