Blog Group Posts
  • Application Performance Monitoring Blog (Foglight APM group): 106 posts
  • Blueprint for Big Data and Analytics - Blog (Blueprint for Big Data and Analytics group): 0 posts
  • Blueprint for Cloud - Blog (Blueprint for Cloud group): 0 posts
  • Blueprint for HPC - Blog (Blueprint for HPC group): 0 posts
  • Blueprint for VDI - Blog (Blueprint for VDI group): 0 posts
Latest Blog Posts
  • Data Protection

    Future-Proof Your Backup Strategy by Adding the Option to Use Cloud Services

    The mature and stable backup market has seen an influx of innovative technologies over the past few years, and organizations can now choose a mix of backup technologies that is just right for them. Backup-to-tape is slowly being phased out and replaced with disk-based backup targets, and backup appliances and cloud services are also being added to the mix.

        

    IDC's recent survey of storage managers shows that 30% of European organizations are already using backup-as-a-service and that a further 43% are planning to add cloud services to their mix of backup technologies in the next 12 months.

    With so many options to choose from, it can be a challenge to design a future-proof backup strategy. Here are three key points to consider when choosing your next backup solution:

    • Can you back up to the cloud seamlessly and recover selectively? The key criterion for future-proofing your backup strategy is whether your backup solution can connect to the cloud and therefore let you leverage the cloud as a backup target. There are many cloud providers in the market — global, regional, and local — and it is important that you can connect to the cloud provider of your choice using the right APIs. The solution should also have features that support the use of cloud — for example, data deduplication and change-block tracking to reduce the data volume being sent over the Internet, or WAN acceleration to speed up the network traffic and ensure backup performance over the network link. Recovery from the cloud requires a flexible solution to recover either individual files, full workloads/applications, or an entire server. Ultimately, you want the cloud to become a seamless extension in your backup strategy, so that you can use data deduplication and compression across your on-premises estate as well as in the cloud and have flexible recovery options.
    • Can you contain cost? The cost structure for cloud backup is still not properly understood. 85% of organizations expect cloud storage to be cheaper than on-premises storage solutions, but cost drivers work differently in the cloud. While uploading and storing data in the cloud might be very attractively priced, downloading data for restore can be quite costly, depending on your cloud provider's business model. When choosing your next backup solution, it is important to ensure that the data footprint sent over the Internet and stored in the cloud is as small as possible, through the use of data deduplication and compression technologies, for example. Offering different recovery options is also essential to contain cloud-related costs, so that you can recover only what you need and consequently only pay for what you need to recover.
    • Can you replicate to and from the cloud to ensure cost-effective and reliable disaster recovery (DR)? Backup and replication are essential parts of any disaster recovery strategy. 30% of European storage managers are currently redesigning their disaster recovery strategy to ensure faster recovery times for business-critical applications and provide productivity applications for employees. Many organizations do not have the means to replicate to a secondary site, which is accepted as a best practice. Adding cloud to the mix adds a secondary site at a fraction of the cost of a traditional disaster recovery setup. Replication to and from the cloud is an important feature to enable successful DR.

    Ultimately, your new backup solution should give you the flexibility to take advantage of any backup technology you want to deploy and to leverage the benefits of cloud services now or, if you are not already using them, to add them in the future, when the time is right for your organization.

    If you would like to learn more about the characteristics of a future-proof backup strategy, download our complimentary white paper, “Choosing the Right Public Cloud for Better Data Protection”.

    Carla Arend

    About Carla Arend

    Carla Arend is a program director with the European software and infrastructure research team, responsible for managing the European storage research and co-leading IDC's European cloud research practice. Arend provides industry clients with key insight into market dynamics, vendor activities, and end-user trends in the European storage market, including hardware, software and services. As part of her research, she covers topics such as software-defined storage, OpenStack, flash, cloud storage, and data protection, among others.

    View all posts by Carla Arend | Twitter

  • Dell Big Data - Blog

    Dell’s Jim Ganthier Recognized as an “HPCWire 2016 Person to Watch”

    Congratulations to Jim Ganthier, Dell’s vice president and general manager of Cloud, HPC and Engineered Solutions, who was recently selected by HPCWire as a “2016 Person to Watch.” In an interview as part of this recognition, Jim offered his insights, perspective and vision on the role of HPC, seeing it as a critical segment of focus driving Dell’s business. He also discussed initiatives Dell is employing to inspire greater adoption through innovation, as HPC becomes more mainstream.

    There has been a shift in the industry, with a newfound appreciation of advanced-scale computing as a strategic business advantage. As adoption expands, organizations and enterprises of all sizes are becoming more aware of HPC’s value in increasing economic competitiveness and driving market growth. However, Jim believes greater availability of HPC is still needed for the full benefits to be realized across all industries and verticals.

    As such, one of Dell’s goals for 2016 is to help more people in more industries to use HPC by offering more innovative products and discoveries than any other vendor. This includes developing domain-specific HPC solutions, extending HPC-optimized and enabled platforms, and enabling a broader base of HPC customers to deploy, manage and support HPC solutions. Further, Dell is investing in vertical expertise by bringing on HPC experts in specific areas including life sciences, manufacturing and oil and gas.

    Dell is also putting its brand muscle behind drawing more attention to HPC at the C-suite level, which will help accelerate mainstream adoption. This includes leveraging the company’s leading IT portfolio, services and expertise. Most importantly, the company is championing the democratization of HPC, meaning minimizing the complexities and mitigating the risk associated with traditional HPC while making data more accessible to an organization’s users.

    Here are a few of the trends Jim sees powering adoption for the year ahead:

    • HPC is evolving beyond the stereotype of being solely targeted at deeply technical government and academic audiences and is now making a move toward the mainstream. Commercial companies within the manufacturing and financial services industries, for example, require high-powered computational ability to stay competitive and drive change for their customers.
    • The science behind HPC is no longer just about straight number crunching, but now includes development of actionable insights by extending HPC capabilities to Big Data analytics, creating value for new markets and increasing adoption.
    • This trend towards the mainstream does come with its challenges, however, including issues with complexity, standardization and interoperability among different vendors. That said, by joining forces with Intel and others to form the OpenHPC Collaborative Project in November 2015, Dell is helping to deliver a stable environment for its HPC customers. The OpenHPC Collaborative Project aims to enable all vendors to have a consistent, open source software stack, standard testing and validation methods, the ability to use heterogeneous components together and the capacity to reduce costs. OpenHPC levels the playing field, providing better control, insight and long-term value as HPC gains traction in new markets.

    A great example of HPC outside the world of government and academic research is aircraft and automotive design. HPC has long been used for the structural mechanics and aerodynamics of vehicles, but now that the electronics content of aircraft and automobiles is increasing dramatically, HPC techniques are also being used to prevent electromagnetic interference from impacting the performance of those electronics. At the same time, HPC has enabled vehicles to be lighter, safer and more fuel efficient than ever before. Other examples of HPC applications include everything from oil exploration to personalized medicine, from weather forecasting to the creation of animated movies, and from predicting the stock market to assuring homeland security. HPC is also being used by the likes of FINRA to help detect and deter fraud, as well as to help stimulate emerging markets by enabling the growth of analytics applied to big data.

    Again, our sincerest congratulations to Jim Ganthier! To read the full Q&A, visit http://bit.ly/1PYFSv2.

     

  • Dell TechCenter

    Successfully Running ESXi from SD Card or USB – Part 2


    In Part 1 of this blog, we discussed some items that need to be addressed to successfully run ESXi from an SD card or USB drive, most specifically the syslog files, the core dump, and the VSAN trace files (if VSAN is enabled).

    This post will discuss some options to address each one, along with the pros and cons of each method. Unfortunately, there is no definitive answer: since each infrastructure can and will be different, it is nearly impossible for me, Dell, or VMware to say exactly what you should do. The intent of the information below is to help give you options on how to manage these files. These are by no means the only options.

    How do we manage these files so they are persistent?

    1. Syslog Files
      • Every IT environment will have a different preferred way to manage syslog files. Some datacenters already have a central syslog server that ESXi can use to send its logs. Here are two additional solutions (a hedged command sketch for pointing a host at these targets follows this list).
      • Solution 1 – vSphere Syslog Collector:
        • The aptly named vSphere Syslog Collector is available to be installed for Windows vCenter installations. For the vCenter appliance, the syslog collector service is enabled but will not register as an extension; to retrieve the files you have to log in to the appliance via SSH, as they will not be available from the vCenter client.
        • The installer is included on the vCenter install disk and the service can co-exist with vCenter. For a large environment, use another VM as a standalone Syslog Collector to avoid overtaxing the vCenter server.
        • Pros:
          • Free!
          • Easy to install.
        • Cons:
          • Stores syslog files in a flat file system, which can make finding data difficult.
          • No integration with other VMware tools (e.g., vRealize Operations).
          • Although status can be seen from vCenter, not much else can be managed from vCenter.
      • Solution 2 – vRealize Log Insight:
        • vRealize Log Insight is a full-blown log management and analytics solution.
        • Installation is done by deploying virtual appliances. The number of appliances depends on the ingestion rate; since vRealize Log Insight can be used for all types of system logging, the amount of data can be considerable.
        • Pros:
          • Actual analytics for the logs that are collected.
          • Can manage logs sent from vSphere, Windows, Linux, etc.
          • Easy to manage.
          • Integrates with vRealize Operations.
        • Cons:
          • Not free.
          • High IO consumer, especially for storage.
    2. Core Dump Files
      • Core dump files are pretty easy to manage. As long as you are using at least a 4GB SD/USB drive, you are covered: the core dump will be saved to the 2.2GB partition reserved for it.
        • ESXi doesn’t need 4GB, but if you plan on running a system with a considerable amount of memory (i.e. >256GB but <512GB) or VSAN, it is required. Note that systems with more than 512GB of RAM are required to use local storage.
      • Another solution is to set up the vSphere Dump Collector, similar to the vSphere Syslog Collector. It is a service that can coexist on your Windows-based vCenter to collect dumps from hosts configured to send core dumps.
        • Pros:
          • Free!
          • Allows for a central location for host diagnostic data.
          • Will be able to hold the entire dump (no limitation on dump file size vs. SD/USB).
          • Available on both Windows-based vCenter and the vCenter appliance.
        • Cons:
          • ESXi uses a UDP-based process to copy the core dump files to the target location. UDP is unreliable, as packets are sent without verification of delivery, so data could be missing and you would not know it.
    3. VSAN Trace Files
      • There are only a couple of options to manage VSAN trace files. They cannot be sent to a syslog server.
      • Solution 1 – Let them write to the default RAMDisk.
        • Pros:
          • Easy to set up (i.e. no setup!).
          • VMware natively supports copying these files to the SD/USB drive during host failures (not total hardware failure, though).
        • Cons:
          • Increases the memory usage of the host, which can be an issue on memory-constrained systems.
          • If the VSAN trace files are larger than the available space in the locker partition, not all will be copied during a host failure.
      • Solution 2 – Send them to an NFS server.
        • Pros:
          • VSAN trace files are kept on a remote storage location, so even in the event of a complete host hardware failure the trace files are available.
          • Easy to set up.
          • Reduces the load on the host’s memory.
        • Cons:
          • Requires an NFS server or storage location.
          • Additional network traffic out of the host.
      • Solution 3 – Write to a local VMFS datastore.
        • Pros:
          • A good solution if no NFS server or storage is available.
        • Cons:
          • Wastes drive(s) and drive space that could be used for VSAN drives.
          • Requires the host to have a second disk controller, as VSAN drives need to be connected to a dedicated controller. This can be tricky in some configs; for example, if your preferred platform is the R630, it does not support multiple controllers.
          • Additional cost for drives just to support VSAN trace files.
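
    For reference, here is a minimal, hedged sketch of the host-side commands that point an ESXi host at these kinds of targets. The syslog destination, vmkernel interface, collector IP and port, and NFS datastore path are placeholders; verify the options against your ESXi build before using them.

    # Send syslog to a remote collector (Syslog Collector or vRealize Log Insight)
    esxcli system syslog config set --loghost='udp://syslog.example.local:514'
    esxcli system syslog reload
    # ESXi blocks outbound syslog by default, so open the firewall ruleset
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
    esxcli network firewall refresh

    # Send network core dumps to a vSphere Dump Collector (UDP-based)
    esxcli system coredump network set --interface-name=vmk0 --server-ipv4=192.168.10.20 --server-port=6500
    esxcli system coredump network set --enable=true
    esxcli system coredump network check

    # Redirect VSAN trace files to an NFS datastore (VSAN-enabled hosts only)
    esxcli vsan trace set --path=/vmfs/volumes/NFS_TRACES/vsantraces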

    Remember, not every solution above will work in your environment. But I do strongly advise doing something to protect, at the very least, the core dumps and VSAN trace files. These are two key items that VMware support will require to help resolve issues that may come up. With the free options available, it is cheap insurance against what could be a terrible support/troubleshooting session.

    Look for the third and final blog in this series where I will show you how to configure some of the infrastructure discussed above.

  • Dell TechCenter

    Successfully Running ESXi from SD Card or USB – Part 1

    Recently, we have received questions about why our VSAN Ready Nodes don’t have local drives dedicated to running ESXi. This post will provide some guidance on how to successfully deploy an ESXi host running from an SD card, such as the Dell IDSDM solution, or a USB drive.

    This method is fully supported as long as you take into account some requirements and recommendations. Dell takes this one step further with the Integrated Dual Secure Digital Module (IDSDM). This module can support up to two 16GB SD cards and can protect them in a RAID 1 configuration. This means you get all the benefits of running ESXi from an SD card, with hardware-enabled redundancy as well.

    This post was originally going to focus only on our Ready Node architecture, but I felt it prudent to discuss this particular topic on a more general scale. Items relevant to VSAN are discussed, but if your host is not VSAN-enabled, those items can be ignored.

    Requirements to install ESXi on SD/USB

    1. A supported SD/USB drive is required. Dell SD cards are certified to the highest standards to ensure reliable operation. The use of non-Dell SD cards is not recommended, because Dell support has seen reliability and performance issues with them in the field. Each vendor has specific devices that they support. Dell’s solution automatically provides some boot redundancy with two SD cards: the Integrated Dual SD Module supports up to 16GB SD cards in RAID 1.
    2. Total system memory must be 512GB or less. For systems with more than 512GB of RAM, installation to a local HDD/SSD is required.

    ESXi Scratch Partition

    The ESXi scratch partition is used by ESXi to store syslog files, core dump files, VSAN trace files, and other files. The most important ones to manage for SD/USB boot are:

    • Syslog files
    • Core dump files (PSOD)
    • VSAN trace files

    What are these three items? And why do I care?

    • Syslog files
      • This is a group of files that store critical log data from various processes running on the host.
      • Examples include: vmkernel.log, hostd.log, vmkwarning.log, etc.
      • These files are written to in real time by the host.
    • Core Dump Files
      • A core dump is the diagnostic file produced when the host crashes with the Purple Screen of Death, otherwise known as the PSOD.
      • The core dump contains helpful troubleshooting data to determine the cause and fix for the PSOD.
      • Although not written often (hopefully never!), we need a persistent place to store these items so they can be retrieved.
    • VSAN Trace Files
      • VSAN trace files contain actions taken by VSAN on the data that is being written to and read from VSAN data stores.
      • This data is needed by VMware support should any issue with the VSAN data store occur.
      • These files are written to in real time, like the syslog files.
      • These are completely separate from the syslog files.

    So why do we have to “manage” these files? Doesn’t ESXi just store them on the SD/USB drive?

    Not by default. When ESXi is installed to an SD/USB device, the scratch partition is not created on the drive itself but in a RAMDisk. A RAMDisk is a block storage device dynamically created within the system RAM. This device is mounted to the root file system for the ESXi installation to use.

    • This location is very high performance (it’s in RAM).
    • Unfortunately, the RAMDisk is not persistent across reboots, so all the data written there is lost if the host crashes or is even rebooted manually.

    There is one exception to this rule. In failure scenarios other than complete system failure, the VSAN trace files are written to the locker partition on the SD/USB drive. The trace files are written from newest to oldest until the locker partition is full, so this won’t necessarily capture all of them, as the VSAN trace files can be much larger than the locker partition.
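
    If you want the scratch location, and with it the syslog files, to persist without any external infrastructure, one common approach is to point it at a directory on a persistent datastore and reboot the host. A minimal sketch, assuming a hypothetical datastore named datastore1 and a host named esxi01:

    # Create a per-host directory on persistent storage (names are placeholders)
    mkdir /vmfs/volumes/datastore1/.locker-esxi01
    # Point the configured scratch location at it; the change takes effect after a reboot
    esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker-esxi01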

    Part 2 of this blog will discuss some different methods/software to address the management of the files we have discussed.  Look for it coming soon.  Once it is posted I will link it HERE.

  • Dell TechCenter

    VMware ESXi Upgrade fails with an error “Permission denied“

    Did you come across an “Error: Permission denied” failure while upgrading from one version of ESXi to a later version, as noted in the screenshot below?

    ESXi Upgrade Failure

    Wondering what might be causing this? This blog points out the details of the error and a potential solution to overcome this failure. It is generally seen on hardware configurations where ESXi is installed on a USB device and there is no HDD/LUN exposed to the system during the first boot of ESXi.

    The reason for this error is that ESXi creates partition number 2 with partition ID ‘fc’ (coredump) during first boot when it doesn’t detect a hard disk/LUN. Here is an example of what the partition table looks like in this scenario.

    Here, vmhba32 is the device name for the USB storage device.
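
    If you want to list the partition table on your own host, partedUtil can display it. A minimal sketch, assuming the USB device uses the common mpx. prefix for the vmhba32 adapter mentioned above:

    # List the partition table of the USB boot device (device identifier is an
    # assumption based on the vmhba32 example; substitute your own)
    partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0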

    Refer to the VMware KB to learn more about the partition types. During the upgrade, the installer sees partition #2 and tries to format it as VFAT, thinking that it’s a scratch partition. The format attempt triggers the “Permission denied” error.

    Now, how do you resolve it? Here you go.

    The first step is to reassign the coredump partition to a partition other than #2. The commands shown in the screenshot below do exactly that.
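
    A hedged sketch of what that first step looks like; the target partition number (7) and the device identifier follow the example in this post and are placeholders for your own values:

    # Reassign the coredump function to a partition other than #2 and enable it
    esxcli system coredump partition set --unconfigure
    esxcli system coredump partition set --partition="mpx.vmhba32:C0:T0:L0:7"
    esxcli system coredump partition set --enable=true
    # Confirm the new active coredump partition
    esxcli system coredump partition get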

    Once that is done, the coredump partition is reassigned and made active on the 7th partition. Now it’s time to remove the 2nd partition from the partition table.
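
    A hedged sketch of that removal step, again assuming the device identifier from the example above:

    # Delete partition 2 from the USB boot device's partition table
    partedUtil delete /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 2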

    There you go. Upgrading ESXi to a later version is now seamless and will not end in a permission-denied error.

  • Dell TechCenter

    Surrounded by friends: reflecting on my career so far

    Back in the early ’90s, I began my professional career working for a payroll tax company. Soon after, I began to realize what corporate life was – the pluses and minuses. I also realized the coffee provided at the office was going to be a lot more cost-effective for me than multiple cans of soda each morning. Wearing a tie, drinking coffee; who was this person I saw in the mirror every day?

    As I mentioned, there were many pluses and minuses to corporate life. At a superficial level, I learned I didn’t really enjoy wearing a tie. Who does? Beyond that, I’m not going to bore you with any of the minuses I’ve found because, in all honesty, I don’t think it would be a useful exercise. Also, if I’ve learned one thing in my career and in life, it’s this: focus on the positive relationships you can build.

    I’ve been extremely fortunate to work alongside some incredible individuals. Many of them shaped my early years involved in technology, mentoring me in computer operations, different technologies, and even time management. A few of them even attended my wedding.

    I can think back and remember so many things about these people, from printing payroll tax forms on special printers that only one of us could figure out, to the day when their children came into their lives. The connections with these friends have been a constant in my career and my life.

    Now here’s the kicker: three of us still work together here at Dell Software. If you had asked Randy, Steve, or me 20 years ago if we’d all still be working together, we probably would have laughed.

    So take a minute to think of the personal relationships you’ve been fortunate enough to build over the years. Work to keep the relationships you have, whenever possible. Take time to learn something interesting about your coworker; who knows – you may still be friends with them 20+ years from now!

  • Dell Big Data - Blog

    The Importance of Data Quality

    Poor data quality has been a thorn in the side of IT for years.  The problem is simple and is centered on knowing that "key" data is correct, reliable and trustworthy.  As applications and databases continue to grow and sprawl, the problem becomes exacerbated.

    In the world of data quality there are three basic states:

    1) Good

    2) Bad

    3) Unknown

    If your data quality is good, consider yourself in the elite minority. By definition, your data is good if you know which data items matter and which don’t, and, for the data that matters, you measure it, monitor it, and have a process to improve it.

    If your data quality is bad, you at least acknowledge and recognize that you have a problem. You must be measuring it to some extent to actually know it’s bad. There is hope.

    If the state of your data quality is unknown, then you’re in the dominant majority. You probably have many problems, but they are hidden from view. Most organizations in this situation falsely believe that because they sell great products, have loyal customers, and have happy IT users who aren’t complaining, or because they see themselves as a world-class organization, they must have excellent data quality. Not so.

    Let’s take a look at data sources and data quality. Considering the data source is important. Data can come from internal sources within your company, or it can come from external sources. The external sources may be public, like data.gov or AWS public datasets, or cloud-based and private, for example Salesforce.com. Data may also be purchased from a third party, which means it’s semi-private. In each of these cases the quality of the data will vary. More importantly, your ability to change a discovered problem certainly differs. If you found an error in the published census data at data.gov, the U.S. government is probably not going to change it. If you purchased data from Acxiom and find an error, they might change it in the next cut or reissue it just for you. If it’s in an internal system, the owner might change it if it’s impactful to them, but it will take time and money to remediate. And more than likely, you will not find the errors yourself, so always be skeptical when considering the sources of your data.

    With the advent of big data, where organizations reach far and wide to collect data from numerous disparate sources in a wide array of formats, the importance of data quality is heightened. If one measured the quality of three ingested sources, treated them equally, and knew they were each 80 percent, then the overall data quality might be 0.8 × 0.8 × 0.8 = 0.512, or 51.2 percent. In reality more factors and weightings would likely be employed, but for the purpose of this discussion, I think you get the idea.

    Now that you have many data sources, how does one put them together? Many sources overlap. Some items conflict. Context and scope can vary. One must integrate data from many different sources to provide a single view of the truth for consumption by analysts and data scientists using software like Mahout or Statistica. This is an important part of the big data puzzle that’s best looked at as an opportunity. If one considers those same three sources at 80 percent each, then by picking and choosing the best pieces we might get an integrated, normalized data source that is at approximately 95 percent according to some measure. That’s a win for your analytics environment.

    So what's a big data architect to do?

    1) Survey your data by ranking data items across all sources in terms of value.

    2) Select the top 1-10 percent that matter most.

          a) > 500 total items?  Use 1 percent

          b) > 300 & < 500 items? Use 5 percent

          c) <= 300? Use 10 percent

    3) Determine a metric for each item.

    4) Measure each data item using the metric outlined. Sampling is ok, but beware of bias.

    5) Create a process to improve the quality.

    6) Set an acceptable target goal for each item.

    7) Quantify the cost as compared to the goal.

    8) Start working on the items with the highest cost impact to the bottom line.

    9) Fix items at the source, if possible, otherwise do it during ingestion.

  • Hotfixes

    Cumulative Hotfix 591746 for 8.6 MR1 vWorkspace Windows Connector

    This is Mandatory Hotfix 591746 for vWorkspace 8.6 MR1, for the Windows Connector role.

    Below is the list of issues addressed in this hotfix.

    • Windows connector fails to launch in Desktop Integrated mode when unanticipated shortcuts exist on the desktop or recycle bin of the client machine.
    • When silently installed as admin with command line, Windows Connector installs to user profile folder.

    Resolved Issues:

    The following issues have been resolved in this release:

    • Windows Connector – Windows connector fails to launch in Desktop Integrated mode when unanticipated shortcuts exist on the desktop or recycle bin of the client machine. (Feature ID: 589017)
    • Windows Connector – When silently installed as admin with command line, Windows Connector installs to user profile folder. (Feature ID: 592293)

    Known Issue:

    The following issue has been identified in this release:

    • Installer – A standard user on the client machine is prompted to enter administrator credentials when upgrading from a previous version of the 8.6 vWorkspace Windows connector to this hotfix. (Feature ID: 467342)

    This hotfix is available for download at:

  • KACE Blog

    5 Ways Effective Endpoint Management is Like Planning for the Big Game


    The big game is one of the most hyped sporting events in the country. With the 50th edition coming in the next week, all of the fanfare will be out in full swing: the articles, the speculation, the betting, the back stories and the dissection of each team and player. I am a huge fan and I take it all in, but what I love most is thinking about how the teams and players prepare for the biggest game of the year, how they manage being the focus of all the attention and distractions and get ready to play in a game they have been preparing for their entire lives.

    This is not remarkably different from what IT has to deal with every day, if you have a little imagination. One must constantly deal with all of the distractions, making sure that skill players (AKA execs) are taken care of and ensuring that game plans are ready, not just for this big game or on Sundays but every day. “Bad actors” take the form of “spies” for the football game, and hackers for IT professionals. When everything goes smoothly and players make the catches they are expected to, there are no service outages and no one notices. Miss a game-winning field goal, or find that no one is getting e-mail for an hour in the middle of the day, and the sky falls. Depending on how willing one is to push the analogy, the parallels can go on indefinitely.

    Without stretching TOO far, though, it’s possible to examine an effective endpoint management strategy through the lens of preparation for the greatest spectacle in American sports.

    Get Ready for the Big Game!

    1. Internal Game Planning - Coaches will tell you that one of the first things teams do to get ready for the big game is prepare a critical self-evaluation. Before they even start looking at the opposing team’s game films or play planning, they do an extensive internal evaluation. You can’t focus on winning until you know what is working and what is not. It’s kind of like a SWOT analysis: they can see what worked well in the last few playoff games, where to improve and what needs to be corrected.

    Similar to endpoint strategy and planning, one needs to have that same level of introspection and analysis. It’s amazing how many IT organizations still say they don’t know how many devices or applications are connected to the network. That could be a good place to start: focus on discovery and understanding your infrastructure. Where are the gaps and holes in your line that need to be adjusted to be ready for game day, which is every day?

    2. Be Prepared for Every Situation – We all know that no matter how much you plan, something unexpected will go wrong. We have seen this so many times in football games: that huge fumble, bungled snap or blown coverage, resulting in a touchdown at a critical time that totally changes the momentum of the game. Organizations face similar issues in IT. Top coaches focus on situational football where players know what to do, when and where: the two-minute drill, goal-line play, short yardage, backed up near their own goal line, and so on. The last thing anyone wants to hear from a player during a game is, “I wasn’t expecting that to happen,” or “I was not prepared for that.”

    As IT systems become bigger, more complex and more difficult to manage, organizations need to have that same visibility and situational planning. Everyone knows that something will go wrong, but a good IT organization can be proactive, identify the issue quickly and act fast to solve it. One can never be too prepared for a software audit, and knowing who is using what, and preparing for that, is a terrific place to start.

    3. Manage the Process – Nick Saban, one of the most successful and prolific college football coaches today, talks a lot about “the process” and getting the details right. Saban’s “Process” is all about focusing on the journey, not the destination, and about doing the right thing the right way all the time. He instructs his players to treat each play as if it were a game of its own and to focus on what needs to be done during that play to be successful.

    If one looks at this process from an endpoint systems management perspective, one might be thinking about automating repetitive processes, such as ensuring there is a fully automated software patch management system, a configuration management tool as well as regularly scheduled compliance reporting.  Why not have a BYOD playbook in place that everyone is following? 

    4. Defense wins Championships – Anyone who follows football has heard this axiom. Although I’m not sure that it is statistically true, as one needs a balanced approach, no one would argue that defense isn’t a critical aspect of the game. Just ask Tom Brady about the Denver defense in the most recent AFC championship game. Systems management is like defense in football. One can’t win without building strong front lines. If one doesn’t build a good systems management discipline and strategy, it becomes impossible to win the IT Bowl. Just like football, security is a tough game and not for the faint of heart. There are threats lurking around every corner. One may be blindsided at any moment. It’s important to have defense at all levels of infrastructure to protect against all the different types of threats while concentrating on the most important assets: endpoints, data and the network.

    It has been estimated that 80% of malicious security attacks could have been prevented with improved patch management. One must ensure that the front line, the endpoints, has all the protection it can get.

    5. It’s all about the Team – Just as in football, in IT one’s resources and personnel are key to achieving initiatives and goals. Players don’t just show up on game day and start playing. They have had hours of preparation and training so that when game day arrives, it has become second nature for them to understand how to react, adjust and respond to every situation. This goes for IT administrators as well. It is not enough to have the skills needed; the appropriate policies and procedures must also be in place. And just as each player needs to do their job, they need to remember that there is a whole team behind them.

    Patching endpoints covers one piece of the puzzle.  Configuration management solves for another.  Compliance reporting provides protection in a different, important way.  All of these parts of the systems management whole help put prepared organizations in a position to be successful.

    So as the world gets ready to watch the big game this coming Sunday, remember that just like in IT, each play, each yard, each touchdown moves us closer to that win and a foundation for success!

    How is your organization preparing for the IT Bowl?

    Sean Musil

    About Sean Musil

    Sean Musil is a Product Marketing Manager for Dell KACE. He believes the internet should be free and secure.

    View all posts by Sean Musil  | Twitter

  • Network, SRA and Email Security Blog

    Network Security Solutions for K-12 Schools – 5 Things to Look For

    Having children in elementary and middle school, I tend to spend some time wondering what the district does to protect its network from threats and its students from harmful web content. Even if you don’t work in the security industry, there’s a good chance you’ve read about several high-profile breaches that resulted in the loss of confidential company data. Schools aren’t immune to these attacks. The growing amount of digital information school districts compile on students, such as social security numbers, has made them an attractive target for cyber-criminals who sell the information or post it online.

    Breaches aren’t the only security concern for school IT administrators. Securing the network from threats such as viruses, spyware and intrusions is critical. So too is the need to control the apps students use on their mobile devices when connected to the school network. With education moving more and more online, the unrestrained use of unproductive apps slows network performance, which impacts learning in the classroom. There’s also the need to comply with Children’s Internet Protection Act (CIPA) requirements for schools that want to be eligible for discounts offered through the E-Rate program.

    Regardless of whether you have a solution in place today that covers your school district against cyber-threats and protects students as part of CIPA compliance, here are five components to look for in a network security solution for your school.

    1. Content/web filtering – To be eligible for discounts through the E-Rate program, CIPA requires schools (and libraries) to certify they have an internet safety policy in place that includes technology protection measures to block or filter access to internet content that is inappropriate or harmful to minors. A good content filtering solution will do this for devices behind the school’s firewall. However, a complete solution will also provide policy enforcement when students use their school-issued devices to access the internet from home.
    2. Deep Packet Inspection of SSL-encrypted traffic (DPI-SSL) – More websites are using SSL encryption to transfer information securely over the internet. A good example is Google. Unfortunately, hackers have devised ways to hide malware in encrypted HTTPS traffic. Therefore, make sure your firewall is capable of inspecting SSL-encrypted traffic and eliminating any threats it finds.
    3. Gateway security services – While you may not have heard of DPI-SSL, you’re probably familiar with security technologies such as anti-virus, anti-spyware and possibly intrusion prevention. These are essential security services that protect your school’s network from attacks. Another valuable technology is application control which provides the tools to control access to apps that are used on your school’s network. After all, do you really want students accessing online games during math class?
    4. Wireless network security – Wireless has become a ubiquitous technology and a real help in the classroom. Chromebooks are a perfect example. Two things here. First, if you haven’t already made the leap to the 802.11ac wireless standard with your access points, now is a good time to consider the switch. It’s much faster than its predecessor 802.11n and any Wi-Fi-enabled device your school buys today is going to include it. Second, make sure the wireless traffic goes back through the firewall where it can be scanned for threats. Your wireless traffic should be as secure as your wired traffic.
    5. WAN acceleration/optimization – Ever have a bad experience using an application from a campus site when the app is served up from the district headquarters? Slow network performance makes teaching and learning a tiresome experience. WAN acceleration helps by speeding up the performance of internal apps and others such as Windows File Sharing. In addition, web caching improves browser response times to websites students visit frequently.

    Ideally, all five of these components are part of an integrated solution that can be managed centrally from one device. This reduces the time and effort it takes to deploy, set up and manage everything. If you’re an IT director or admin for a school district and would like to learn more about each of these components in more depth and how Dell Security solutions can help, read our technical white paper titled “K-12 network security: A technical deep-dive playbook.”

     

    Scott Grebe

    About Scott Grebe

    Scott Grebe manages product marketing for Dell SonicWALL NSA, SonicPoint and WXA security products. He’s also knowledgeable on sports and Irish culture but not so much on cooking.

    View all posts by Scott Grebe