The mature and stable backup market has seen an influx of innovative technologies over the past few years, and organizations can now choose a mix of backup technologies that is just right for them. Backup-to-tape is slowly being phased out in favor of disk-based backup targets, while backup appliances and cloud services are also being added to the mix.
IDC's recent survey of storage managers shows that 30% of European organizations are already using backup-as-a-service and that a further 43% are planning to add cloud services to their mix of backup technologies in the next 12 months.
With so many options to choose from, it can be a challenge to design a future-proof backup strategy. Here are three key points to consider when choosing your next backup solution:
Ultimately, your new backup solution should give you the flexibility to take advantage of any backup technology you want to deploy and to leverage the benefits of cloud if you already use cloud services; if you are not using them yet, it should provide the option to do so in the future, when the time is right for your organization.
If you would like to learn more about the characteristics of a future-proof backup strategy, download our complimentary white paper, “Choosing the Right Public Cloud for Better Data Protection”.
About Carla Arend
Carla Arend is a program director with the European software and infrastructure research team, responsible for managing the European storage research and co-leading IDC's European cloud research practice. Arend provides industry clients with key insight into market dynamics, vendor activities, and end-user trends in the European storage market, including hardware, software and services. As part of her research, she covers topics such as software-defined storage, OpenStack, flash, cloud storage, and data protection, among others.
Congratulations to Jim Ganthier, Dell’s vice president and general manager of Cloud, HPC and Engineered Solutions, who was recently selected by HPCWire as a “2016 Person to Watch.” In an interview as part of this recognition, Jim offered his insights, perspective and vision on the role of HPC, seeing it as a critical segment of focus driving Dell’s business. He also discussed initiatives Dell is employing to inspire greater adoption through innovation, as HPC becomes more mainstream.
There has been a shift in the industry, with newfound appreciation of advanced-scale computing as a strategic business advantage. As it expands, organizations and enterprises of all sizes are becoming more aware of HPC’s value to increase economic competitiveness and drive market growth. However, Jim believes greater availability of HPC is still needed for the full benefits to be realized across all industries and verticals.
As such, one of Dell’s goals for 2016 is to help more people in more industries use HPC by offering more innovative products and discoveries than any other vendor. This includes developing domain-specific HPC solutions, extending HPC-optimized and enabled platforms, and enabling a broader base of HPC customers to deploy, manage and support HPC solutions. Further, Dell is investing in vertical expertise by bringing on HPC experts in specific areas, including life sciences, manufacturing, and oil and gas.
Dell is also offering its own brand muscle to draw more attention to HPC at the C-suite level and thus accelerate mainstream adoption; this includes leveraging the company’s leading IT portfolio, services and expertise. Most importantly, the company is championing the democratization of HPC: minimizing the complexity and mitigating the risk associated with traditional HPC while making data more accessible to an organization’s users.
Here are a few of the trends Jim sees powering adoption for the year ahead:
A great example of HPC outside the world of government and academic research is aircraft and automotive design. HPC has long been used for the structural mechanics and aerodynamics of vehicles, but now that the electronics content of aircraft and automobiles is increasing dramatically, HPC techniques are also being used to prevent electromagnetic interference from impacting the performance of those electronics. At the same time, HPC has enabled vehicles to be lighter, safer and more fuel-efficient than ever before. Other examples of HPC applications include everything from oil exploration to personalized medicine, from weather forecasting to the creation of animated movies, and from predicting the stock market to assuring homeland security. HPC is also being used by the likes of FINRA to help detect and deter fraud, as well as to help stimulate emerging markets by enabling the growth of analytics applied to big data.
Again, our sincerest congratulations to Jim Ganthier! To read the full Q&A, visit http://bit.ly/1PYFSv2.
Successfully Running ESXi from SD Card or USB – Part 2
In Part 1 of this blog, we discussed some items that need to be addressed to successfully run ESXi from an SD card or USB drive: specifically, the syslog files, core dump files, and VSAN trace files (if VSAN is enabled).
This post will discuss some options to address each one, along with the pros and cons of each method. Unfortunately, there is no definitive answer. Since each infrastructure can and will be different, it is nearly impossible for me, Dell, or VMware to say exactly what you should do. The intent of the information below is to give you options for managing these files. These are by no means the only options.
How do we manage these files, so they are persistent?
Remember, not every solution above will work in your environment. But I do strongly advise doing something to protect, at the very least, the core dumps and VSAN trace files. These are two key items that VMware support will require to help resolve issues that may come up. With the free options available, this is cheap insurance against what could be a terrible support/troubleshooting session.
Look for the third and final blog in this series where I will show you how to configure some of the infrastructure discussed above.
Recently, we have received questions about why our VSAN Ready Nodes don’t have local drives dedicated to running ESXi. This post will provide some guidance on how to successfully deploy an ESXi host running from an SD card, such as the Dell IDSDM solution, or a USB drive.
This method is fully supported as long as you take into account some requirements and recommendations. Dell takes this one step further with the Integrated Dual Secure Digital Module (IDSDM). This module can support up to two 16GB SD cards and protect them in a RAID 1 configuration. This means you get all the benefits of running ESXi from an SD card, with hardware-enabled redundancy as well.
This post was originally going to focus only on our Ready Node architecture, but I felt it prudent to discuss this particular topic on a more general scale. Items relevant to VSAN are discussed but obviously if your host is not enabled with VSAN then any items pertaining to VSAN can be ignored.
Requirements to install ESXi on SD/USB
ESXi Scratch Partition
The ESXi scratch partition is used by ESXi to store syslog files, core dump files, VSAN trace files, and other files. The most important ones to manage for SD/USB boot are the syslog files, core dump files, and VSAN trace files.
What are these three items? And why do I care?
So why do we have to “manage” these files? Doesn’t ESXi just store them on the SD/USB drive?
Not by default. When ESXi is installed to an SD/USB device, the scratch partition is not created on the drive itself but in a RAMDisk. A RAMDisk is a block storage device dynamically created within the system RAM. This device is mounted to the root file system for the ESXi installation to use.
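On a live host you can observe this indirection yourself: the scratch location is exposed as a symlink, and when it resolves under a RAM-backed path such as /tmp, anything written there is volatile. As a rough illustration only (this is not ESXi tooling; the helper name and default paths are assumptions for the sketch), a check might look like:

```python
import os

def scratch_is_volatile(scratch_link="/scratch", ramdisk_prefix="/tmp"):
    """Return True if the scratch location resolves onto a RAM-backed path.

    On an ESXi host booted from SD/USB, the scratch symlink typically
    resolves to a RAMDisk path (e.g. under /tmp); with persistent scratch
    configured it points at a datastore path instead.
    """
    target = os.path.realpath(scratch_link)  # follow the symlink chain
    return target.startswith(ramdisk_prefix)
```

If a check like this reports a volatile scratch, everything written there (syslog, core dumps, VSAN traces) disappears on reboot, which is exactly the problem the rest of this post works around.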
There is one exception to this rule. In failure scenarios other than complete system failure, the VSAN trace files are written to the locker partition on the SD/USB drive. The trace files are written in order from newest to oldest until the locker partition is full, so this won’t necessarily capture all of them, as the VSAN trace files can be much larger than the locker partition.
Part 2 of this blog will discuss some different methods/software to address the management of the files we have discussed. Look for it coming soon. Once it is posted I will link it HERE.
Did you come across the failure “Error: Permission denied” while upgrading from one version of ESXi to a later version, as noted in the screenshot below?
Wondering what might be causing this? This blog points out the details of the error and a potential solution. The failure is generally seen on hardware configurations where ESXi is installed on a USB device and no HDD/LUN is exposed to the system during the first boot of ESXi.
The reason for this error is that ESXi creates partition number 2 with partition ID ‘fc’ (coredump) during first boot when it doesn’t detect a hard disk/LUN. Here is an example of what the partition table looks like in this scenario.
Where vmhba32 is the device name for the USB storage device.
Refer to the VMware KB to learn more about the partition types. During the upgrade, the installer sees partition #2 and tries to format it as VFAT, thinking it is a scratch partition. The format triggers the “Permission denied” error.
Now, how do you resolve it? Here you go.
The first step is to reassign the coredump partition to a partition other than #2. The commands shown in the screenshot below do exactly that.
As you can see, the coredump partition is reassigned and made active on the 7th partition. Now it’s time to remove the 2nd partition from the partition table.
There you go. Upgrading ESXi to a later version is now seamless and will not end in a permission-denied error.
Back in the early 90’s, I began my professional career working for a payroll tax company. Soon after, I began to realize what corporate life was – the pluses and minuses. I also realized the coffee provided at the office was going to be a lot more cost-effective for me than multiple cans of soda each morning. Wearing a tie, drinking coffee; who was this person I saw in the mirror every day?
As I mentioned, there were many pluses and minuses to corporate life. At a superficial level, I learned I didn’t really enjoy wearing a tie. Who does? Beyond that, I’m not going to bore you with any of the minuses I’ve found because, in all honesty, I don’t think it would be a useful exercise. Also, if I’ve learned one thing in my career and in life, it’s this: focus on the positive relationships you can build.
I’ve been extremely fortunate to work alongside some incredible individuals. Many of them shaped my early years involved in technology, mentoring me in computer operations, different technologies, and even time management. A few of them even attended my wedding.
I can think back and remember so many things about these people, from printing payroll tax forms on special printers that only one of us could figure out, to the day when their children came into their lives. The connections with these friends have been a constant in my career and my life.
Now here’s the kicker: three of us still work together here at Dell Software. If you had asked Randy, Steve, or me 20 years ago whether we’d all still be working together, we probably would have laughed.
So take a minute to think of the personal relationships you’ve been fortunate enough to build over the years. Work to keep the relationships you have, whenever possible. Take time to learn something interesting about your coworker; who knows – you may still be friends with them 20+ years from now!
Poor data quality has been a thorn in the side of IT for years. The problem is simple and is centered on knowing that "key" data is correct, reliable and trustworthy. As applications and databases continue to grow and sprawl, the problem becomes exacerbated.
In the world of data quality there are three basic states:
If your data quality is good, consider yourself in the elite minority. By definition, if you know which data items matter and which don’t, your data is good. For the data that matters, you measure it, monitor it, and have a process to improve it.
If your data quality is bad and you acknowledge it, you at least recognize that you have a problem. You must be measuring it to some extent to actually know it’s bad. There is hope.
If the state of your data quality is unknown, then you’re in the dominant majority. You probably have many problems, but they are hidden from view. Most organizations in this situation falsely believe that because they sell great products, have loyal customers and happy IT users, hear no complaints, or consider themselves world class, they must have excellent data quality. Not so.
Let’s take a look at data sources and data quality. Considering the data source is important. Data can come from internal sources within your company, or it can come from external sources. The external sources may be public, like data.gov or the AWS public datasets. External sources may also be cloud-based and private, for example, Salesforce.com. Data may be purchased from a third party, which means it’s semi-private. In each of these cases the quality of the data will vary. More importantly, your ability to get a discovered problem fixed certainly differs. If you found an error in the published census data at data.gov, the U.S. government is probably not going to change it. If you purchased data from Acxiom and find an error, they might change it in the next cut or reissue it just for you. If it’s in an internal system, the owner might change it if it’s impactful to them, but it will take time and money to remediate. More than likely, though, you will not find the errors, so one should always be skeptical when considering the sources of data.
With the advent of big data, where organizations reach far and wide to collect data from numerous disparate sources in a wide array of formats, the importance of data quality is heightened. If one measured the quality of three ingested sources, treated them equally, and knew each was at 80 percent, then the overall data quality might be 80%^3 = 51.2 percent. In reality more factors and weightings would likely be employed, but for the purpose of this discussion, I think you get the idea.
Now that you have many data sources, how does one put them together? Many sources overlap. Some items conflict. Context and scope can vary. One must integrate data from many different sources to provide a single view of the truth for consumption by analysts and data scientists using software like Mahout or Statistica. This is an important part of the big data puzzle that’s best looked at as an opportunity. If one considers those same three sources at 80 percent each, then by picking and choosing the best pieces we might get an integrated, normalized data source at approximately 95 percent according to some measure. That’s a win for your analytics environment.
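The back-of-the-envelope arithmetic in the last two paragraphs can be sketched directly. The 80 percent scores and the independence assumption are illustrative only; a real quality model would add the extra factors and weightings mentioned above:

```python
def chained_quality(scores):
    """Quality when a record must be correct in every source it passes
    through: independent errors compound multiplicatively."""
    quality = 1.0
    for s in scores:
        quality *= s
    return quality

def integrated_quality(scores):
    """Optimistic bound when you pick the best value per field: a field
    is wrong only if every source got it wrong (same independence
    assumption; partial overlap and correlated errors lower this)."""
    all_wrong = 1.0
    for s in scores:
        all_wrong *= (1.0 - s)
    return 1.0 - all_wrong

sources = [0.80, 0.80, 0.80]
print(f"chained:    {chained_quality(sources):.1%}")    # 51.2%
print(f"integrated: {integrated_quality(sources):.1%}") # 99.2%
```

The roughly 95 percent quoted above sits between these two bounds, reflecting that real sources only partly overlap and their errors are not fully independent.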
So what's a big data architect to do?
1) Survey your data by ranking data items across all sources in terms of value.
2) Select the top 1-10 percent that matter most.
a) > 500 total items? Use 1 percent
b) > 300 & < 500 items? Use 5 percent
c) <= 300? Use 10 percent
3) Determine a metric for each item.
4) Measure each data item using the metric outlined. Sampling is ok, but beware of bias.
5) Create a process to improve the quality.
6) Set an acceptable target goal for each item.
7) Quantify the cost as compared to the goal.
8) Start working on the items whose cost impact to the bottom line is highest.
9) Fix items at the source, if possible, otherwise do it during ingestion.
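The first steps of this checklist could be roughed out as follows. The selection thresholds come straight from step 2 (exactly 500 items is left undefined there, so it is folded into the 5 percent bucket here), while the item fields and scores are invented for illustration:

```python
def selection_fraction(total_items):
    """Step 2: what fraction of data items to focus on."""
    if total_items > 500:
        return 0.01   # more than 500 items: top 1 percent
    if total_items > 300:
        return 0.05   # 301-500 items: top 5 percent
    return 0.10       # 300 or fewer: top 10 percent

def shortlist_by_cost(items):
    """Steps 1, 2, and 8: rank items by business value, keep the top
    slice, then order the survivors by cost impact to the bottom line."""
    ranked = sorted(items, key=lambda i: i["value"], reverse=True)
    keep = max(1, round(len(ranked) * selection_fraction(len(ranked))))
    return sorted(ranked[:keep], key=lambda i: i["cost_impact"], reverse=True)
```

The remaining steps (metrics, measurement, targets, remediation at the source) are organizational processes rather than code, but a ranked shortlist like this is the input they all consume.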
This is Mandatory Hotfix 591746 for the Windows Connector role of vWorkspace 8.6 MR1.
The following issues have been resolved in this release:
Windows connector fails to launch in Desktop Integrated mode when unanticipated shortcuts exist on the desktop or recycle bin of the client machine.
When silently installed as admin with command line, Windows Connector installs to user profile folder.
The following issue has been identified in this release:
A standard user on the client machine is prompted to enter administrator credentials when upgrading from a previous version of the 8.6 vWorkspace Windows connector to this hotfix.
This hotfix is available for download at:
The big game is one of the most hyped sporting events in the country. With the 50th edition coming next week, all of the fanfare will be out in full swing: the articles, the speculation, the betting, the back stories and the dissection of each team and player. I am a huge fan and I take it all in, but what I love most is thinking about how the teams and players prepare for the biggest game of the year: how they manage being the focus of all the attention and distractions and get ready to play in a game they have been preparing for their entire lives.
This is not remarkably different from what IT has to deal with every day, if you have a little imagination. One must constantly deal with all of the distractions, making sure that skill players (AKA execs) are taken care of, ensuring that game plans are ready, not just for this big game or on Sundays but every day. “Bad actors” take the form of “spies” for the football game, and hackers for IT professionals. Everything goes smoothly – players make the catches they are expected to – there are no service outages, and no one notices. Miss a game winning field goal, or find that no one is getting e-mail for an hour in the middle of the day, and the sky falls. Depending on how willing one is to push the analogy, the parallels can go on indefinitely.
Without stretching TOO far, though, it’s possible to examine an effective endpoint management strategy through the lens of preparation for the greatest spectacle in American sports.
Get Ready for the Big Game!
1. Internal Game Planning - Coaches will tell you that one of the first things that teams do to get ready for the big game is prepare a critical self-evaluation. Before they even start looking at the opposing team’s game films or play planning, they do an extensive internal evaluation. You can’t focus on winning until you know what is working and what is not. Kind of like a SWOT analysis so they can see what worked well in the last few playoff games, where to improve and what needs to be corrected.
Endpoint strategy and planning need that same level of introspection and analysis. It’s amazing how many IT organizations still say they don’t know how many devices or applications are connected to the network. That could be a good place to start: focus on that discovery and understand your infrastructure. Where are the gaps and holes in your line that need to be adjusted to be ready for game day – which is every day?
2. Be Prepared for Every Situation – We all know that no matter how much you plan, something unexpected will go wrong. We have seen this so many times in football games: that huge fumble, bungled snap or blown coverage resulting in a touchdown at a critical time that totally changes the momentum of the game. Organizations face similar issues in IT. Top coaches focus on situational football, where players know what to do, when and where: the two-minute drill, goal-line play, short yardage, backed up near their own goal line, and so on. The last thing anyone wants to hear from a player during a game is, “I wasn’t expecting that to happen,” or “I was not prepared for that.”
As IT systems become bigger, more complex and more difficult to manage, organizations need that same visibility and situational planning. Everyone knows that something will go wrong, but a good IT organization can be proactive, identify the issue quickly and act fast to solve it. One can never be too prepared for a software audit, and knowing who is using what, and preparing for that, is a terrific place to start.
3. Manage the Process – Nick Saban, one of the most successful and prolific college football coaches today, talks a lot about “the process” and getting the details right. Saban’s “Process” is all about focusing on the journey, not the destination, and about doing the right thing the right way all the time. He instructs his players to treat each play as if it were a game, and to focus on what needs to be done during that play to be successful.
If one looks at this process from an endpoint systems management perspective, one might be thinking about automating repetitive processes, such as ensuring there is a fully automated software patch management system, a configuration management tool as well as regularly scheduled compliance reporting. Why not have a BYOD playbook in place that everyone is following?
4. Defense wins Championships – Anyone who follows football has heard this axiom. Although I’m not sure it is statistically true, as one needs a balanced approach, no one would argue that defense isn’t a critical aspect of the game. Just ask Tom Brady about the Denver defense in the most recent AFC championship game. Systems management is like defense in football. One can’t win without building strong front lines. If one doesn’t build a good systems management discipline and strategy, then it becomes impossible to win the IT Bowl. Just like football, security is a tough game and not for the faint of heart. There are threats lurking around every corner, and one may be blindsided at any moment. It’s important to have defense at all levels of the infrastructure to protect against all the different types of threats while concentrating on the most important assets: endpoints, data and the network.
It has been estimated that 80% of malicious security attacks could have been prevented with improved patch management. One must ensure that the front line, the endpoints, has all the protection it can get.
5. It’s all about the Team – In football, as in IT, one’s resources and personnel are key to achieving initiatives and goals. Players don’t just show up on game day and start playing. They have had hours of preparation and training, so that when game day arrives, it is second nature for them to react, adjust and respond to every situation. This goes for IT administrators as well. It is not enough to have the needed skills; the appropriate policies and procedures must also be in place. And just as each player needs to do their job, they need to remember that there is a whole team behind them.
Patching endpoints covers one piece of the puzzle. Configuration management solves for another. Compliance reporting provides protection in a different, important way. All of these parts of the systems management whole help put prepared organizations in a position to be successful.
So as the world gets ready to watch the big game this coming Sunday, remember that just like in IT, each play, each yard, each touchdown moves us closer to that win and a foundation for success!
How is your organization preparing for the IT Bowl?
About Sean Musil
Sean Musil is a Product Marketing Manager for Dell KACE. He believes the internet should be free and secure.
Having children in elementary and middle school, I tend to spend some time wondering what the district does to protect its network from threats and its students from harmful web content. Even if you don’t work in the security industry, there’s a good chance you’ve read about several high-profile breaches that resulted in the loss of confidential company data. Schools aren’t immune to these attacks. The growing amount of digital information school districts compile on students, such as Social Security numbers, has made them an attractive target for cyber-criminals who sell the information or post it online.
Breaches aren’t the only security concern for school IT administrators. Securing the network from threats such as viruses, spyware and intrusions is critical. So too is the need to control the apps students use on their mobile devices when connected to the school network. With education moving more and more online, the unrestrained use of unproductive apps slows network performance, which impacts learning in the classroom. There’s also the need to comply with Children’s Internet Protection Act (CIPA) requirements for schools that want to be eligible for discounts offered through the E-Rate program.
Regardless of whether you have a solution in place today that covers your school district against cyber-threats and protects students as part of CIPA compliance, here are five components to look for in a network security solution for your school.
Ideally, all five of these components are part of an integrated solution that can be managed centrally from one device. This reduces the time and effort it takes to deploy, set up and manage everything. If you’re an IT director or admin for a school district and would like to learn about each of these components in more depth and how Dell Security solutions can help, read our technical white paper, “K-12 network security: A technical deep-dive playbook.”
About Scott Grebe
Scott Grebe manages product marketing for Dell SonicWALL NSA, SonicPoint and WXA security products. He’s also knowledgeable on sports and Irish culture but not so much on cooking.