Our community is talking about the new Dell Technologies. Join the discussion in the Dell EMC Community Network:
If you haven’t yet noticed, @VMMaxwell is a little excited about the upcoming Dell World Software User Forum, and I’m going to share a little of why.
How many times have you complained, received a complaint or lost hours investigating why a user’s Citrix session is slow, only to find out that you were looking in the wrong spot, that you couldn’t find the problem without trawling log files, or that your colleagues didn’t share the right information?
Foglight for Citrix XenApp and XenDesktop bridges the gap between VMware admins, Citrix administrators and your business. It delivers real-time insight into user sessions, tracks the health of the Citrix infrastructure and provides in-context information from the supporting VMware environment.
At Dell World I will be running a breakout session on using Foglight for Citrix in your environment. As a sneak peek I’ll let you in on some of the content.
What is all this talk about ‘in-context’?
‘In-context’ performance monitoring is unique to Foglight because it collects data across the disparate systems that make up your enterprise applications. And when it comes to Citrix, it’s critical to understand not just the Citrix components but also the supporting SQL Server, Active Directory, VMware and the network that delivers it all to the business.
At Dell World we will focus on three key use cases for Foglight for Citrix:
This all builds to answer one important question: how do you isolate and solve a single user complaint?
Oh, and the delivery of applications and desktops through to the end user.
Foglight connects to, and integrates data across the enterprise, driving collaboration with a single view of the monitored environments.
Don’t forget to check out the rest of the Foglight sessions!
Photo Credit: Ernesto Andrade Licensed under CC BY 2.0
Imagine that you’re in the middle of an analytics migration project (as Dell was).
Hundreds of users and projects all over the world are in transition from your legacy analytics product to the new one (as ours were).
Everyone is heads-down-focused on following vast, detailed project plans with a jillion moving parts (as we were).
Suddenly, out of nowhere, an opportunity to fix Something Else swoops onto the scene (as it did onto ours).
That Something Else is kind of a mess, and this would be the ideal time to fix it, but everybody around you is urging you to focus, focus, focus on migrating projects and users. The Something Else has to do with the tools and processes on either side of the analytics function. It’s not exactly the same issue as replacing your company’s advanced analytics product, but it’s closely related.
What do you do: stay focused on your original project or devote some cycles to dealing with the Something Else?
ETL, data extraction and reporting. And the timing belt.
At Dell, we were waist-deep in migration from a well-known analytics product we had used for decades (you can probably guess which one) to Statistica, a product we had recently acquired. As I posted a few weeks ago, our migration project team discovered that a lot of people were using a Ferrari to haul dirt – that is, using a powerful analytics tool just for data manipulation – so we made some organization and tool changes as part of the migration.
But the Something Else we discovered was that people were using dozens of tools for some of the main functions around analytics:
ETL (Extract Transform Load) Process Automation – Microsoft SQL Server Integration Services, Microsoft Visual Studio
Reporting – Microsoft SQL Server Reporting Services, Microsoft Access
We had the opportunity to consolidate or replace these and stop the tool-creep, and it seemed as though we’d never have a better time to do it. Once everyone saw the inefficiency, the whole migration team wanted to deal with it, but it wasn’t part of the original plan.
It was like the timing belt story:
“Well, ma’am, your car has 90,000 miles, so we should replace the timing belt. And while we’re in there, we’ll have everything apart, so if there’s a problem with your water pump or the tensioner or the front cover gasket or the seal, that’s the best time to take care of it.”
It’s tough to bite that bullet and deal with the Something Else. But you know if you don’t deal with it and you have to go back in again later to fix it, you’ll kick yourself.
Actually, you won’t have to, because your boss will do the kicking for you.
The Great Analytics Migration – new e-book
So what did we do at Dell?
We went the extra mile and did the consolidation. It’s the kind of company we are: we can’t look at an inefficiency and not do something about it. Our companywide Business Intelligence Council ran a survey that found dozens of tools at work. The council identified seven of the top ten additional tools for migration to appropriate Dell technologies. We’ll get to the rest of them eventually.
Should you migrate users from other tools in the same project? We can’t tell you how to make that decision for your company, but we’ve put together an e-book, “Statistica: The Great Analytics Migration, Part 3: Technology,” that tells you how we made that decision at Dell. Read the e-book for unique insights into how we managed our migration. We know quite a bit about it.
Just don’t ask us about your timing belt.
About David Sweenor
From requirements to coding, reporting to analytics, I enjoy leading change and challenging the status quo. Over 15 years of experience spanning the analytics spectrum including semiconductor yield characterization, enterprise data warehousing, reporting/analytics, IT program management, as well as product marketing and competitive intelligence. Currently leading Analytics Product Marketing for the Dell Software Group.
View all posts by David Sweenor |
In my last post, I gave you a few reasons why Dell World User Forum is going to rock, but there’s even more! You seriously do not want to miss this event. Held Oct. 20–22 in Austin, TX, it’s a three-day extravaganza — with daily sessions on performance monitoring, as well as database monitoring and analysis. Plus, Foglight demonstrations will be held in the solutions exhibition! Not only will you be rockin’ with all the DBA party people from around the country, but you’ll have countless opportunities to gain IT knowledge and explore the latest tools to help you simplify your work.
Boogie on Down to Optimization
This is your chance to get expert tips and strategies to optimize your virtual environment — from the world’s largest provider of virtualization solutions. In addition to the virtualization sessions that I talked about in my last post, we’re planning DBA-related sessions that dive into Microsoft SQL Server and Oracle monitoring, analysis and optimization.
Take a look at the full Dell World Software User Forum session agenda.
The Booth of a Thousand Dances
If you really want to see something amazing, drop by the Dell booth to check out my dance moves. I’ll be bustin’ a move periodically (if I don’t bust a knee cap first) every day — moon walk, pogo, maybe even the electric slide. Join me? Or if you’d rather, I can give you a live Foglight demo of all the great database performance features.
Still asking yourself, “Should I stay or should I go?” Go! Register today!
You do not want to miss this event!
About John Maxwell
John Maxwell leads the Product Management team for Foglight at Dell Software. Outside of work he likes to hike, bike, and try new restaurants.
View all posts by John Maxwell |
Context can dramatically affect the meaning and significance of something; the same words can carry a very different meaning in a different situation. The press is notorious for quoting small portions of politicians’ speeches “out of context” to grab the public’s attention, but after you read the entire speech, you easily see the intended meaning of the quote.
The same is true in Mobile APM. The context of what a user is doing in a mobile app can have a dramatic impact on the perceived performance of the app. A request handled gracefully in the background can take a great deal of time without the user even noticing, but one that freezes the screen and causes the dreaded “spinner” can get annoying after as little as 400 ms in some cases! It also depends on the user’s expectation of the amount of work required. A simple “like” on Facebook should be instantaneous, but we have no problem waiting a few seconds for a search across dozens of airlines and thousands of flights to find the best price, as long as we get good feedback that the search is indeed happening.
In Foglight Mobile Application Analytics, we made a conscious choice to surface context as one of the elements that simplify triage of mobile app performance issues. A typical mobile app often makes hundreds, if not thousands, of HTTP requests each time a user runs it: image requests, data requests, update requests, analytics data.
The amount of chatter can often be overwhelming.
Many mobile APM tools will give you HTTP request times over time like this, or in some cases aggregated into average response time over time or by URL, which is even more cryptic. What does this actually tell me? Which requests annoyed the user, and which ones was the user totally unaware of? There’s no way to really know.
To combat this problem, Foglight Mobile Application Analytics automatically captures requests and groups them by the name of the application screen the user was on when the requests were made. This gives context to what the user was doing.
Contrast the above with this view:
I can see what the user was trying to do, how long it took, which request was the reason behind the slow performing screen, and which requests were processed in the background and probably not noticed by the user at all. The power of context.
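For readers who like to see the idea concretely, that screen-based grouping can be sketched in a few lines of Python. This is a minimal illustration only, not Foglight’s actual implementation; the request records, screen names and the foreground/background flag below are all invented for the example.

```python
from collections import defaultdict

# Hypothetical request log: (screen_name, url, duration_ms, is_background)
requests = [
    ("SearchResults", "/api/flights?q=BOS-AUS", 2300, False),
    ("SearchResults", "/img/logo.png",            40, True),
    ("SearchResults", "/analytics/event",        120, True),
    ("Checkout",      "/api/price",              650, False),
]

# Group requests by the screen the user was on when each was made,
# separating foreground requests (which the user waits on) from
# background ones (which the user likely never notices).
by_screen = defaultdict(lambda: {"foreground": [], "background": []})
for screen, url, ms, is_background in requests:
    bucket = "background" if is_background else "foreground"
    by_screen[screen][bucket].append((url, ms))

for screen, groups in by_screen.items():
    visible = sum(ms for _, ms in groups["foreground"])
    print(f"{screen}: user-visible {visible} ms, "
          f"{len(groups['background'])} background request(s)")
```

With the data grouped this way, a slow screen points straight at the one or two foreground requests that caused it, instead of leaving you to sift through every request the app ever made.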
Want to learn more about common areas where mobile apps suffer performance problems and how to track and improve performance? Check out our new eBook, “How to Become a Mobile Application Performance Genius.”
Follow #ThinkChat on Twitter Friday, October 2nd at 11:00 AM PDT, for a live conversation with data industry experts covering everything from Oracle to SQL Server and from Cassandra to Cloudera.
Join the Guest-Palooza – to name a few: Shawn Rogers (@ShawnRog), Chief Research Officer for Dell IMG; John Whittaker (@alertsource); Guy Harrison (@guyharrison); and Joanna Schloss (@JoSchloss), Dell Software's Analytics Thought Leader – for this month's ThinkChat as they discuss the challenges and opportunities central to managing, handling, and supporting the complex data ecosystem emerging today.
Tweet with us about how to support a myriad of data sources, access complex data ecosystems, and leverage the many data sources needed to drive analytics in this fast-paced global economy. We'll chat about new data sources, real-time streams, mobile platforms and how data ecosystems are being reshaped by the emergence of new technologies every year.
Join the conversation!
Questions discussed on this program will include:
· What does Hadoop mean to you and your organization?
· How many data platforms do you support in your company?
· Which data platform is your favorite and why?
· How do you measure ROI from your data ecosystem?
· What tools do you use to support your data platform?
· What books and blogs help with data management?
· Day to day, how does data support your business’s analytics requirements?
· What are the barriers that hinder adoption of analytics?
· Where do you go to meet other professionals or experts in your field?
· How do governance, compliance, and security affect your data and analytic systems?
Where: Live on Twitter – Follow Hashtag #ThinkChat to get your questions answered and participate in the conversation!
When: October 2nd, at 11:00 am PDT
About Shawn Rogers
Shawn Rogers is Chief Research Officer for the Information Management Group at Dell Software. Shawn is an internationally recognized thought leader, speaker, author and instructor on the topics of IoT, big data, analytics, business intelligence, cloud, data integration, data warehousing and social analytics. Shawn has more than 19 years of hands-on IT experience. Prior to joining Dell, he was Vice President of Research for Business Intelligence and Analytics at Enterprise Management Associates, a leading analyst firm. Shawn helps customers apply technology to fuel innovation and create value with data.
View all posts by Shawn Rogers |
Data is at the heart of more products and services than ever before. Every day, customers share information, but do they understand the implications? Companies like Google, eBay and Facebook are doing exciting and innovative things with our data; in return, they supply feature-rich and valuable services for free. Consumers are starting to weigh the price of their personal information, behavior data and sentiment data against these services. The balance will be different for each of us.
I am a fan of personalized service, content and promotions, but I would like a more granular level of control. Companies like Acxiom are promoting transparency and control over the data they collect and sell about consumers. Several years ago they launched a website, Aboutthedata.com, that allows consumers to check the accuracy of the data they host. The question is: would you update the data if you saw an error, or omit certain fields that you’re uncomfortable with? I like this service and believe it aligns with the best practice of putting control of personal information in the hands of the customer. Long term, I believe customers will control their data centrally and selectively make it available to service providers, perhaps for a fee.
In this week’s #ThinkChat segment Tom Davenport, John Thompson and I discuss the impact of data on consumers, where the control points should be and why Tom didn’t correct the accuracy issues he found in his data.
#ThinkChat Conversation with Tom Davenport, John Thompson Part 3 of 8
View other segments in the #ThinkChat series!
In my last two blog posts about agile development of database applications, I have made a big mistake:
Photo Credit: davidd Licensed under CC BY 2.0
I’ve used memorable, inspirational scenes and quotations from the Fast & Furious movies, hoping they would encourage you to read our new e-book, Three Guiding Principles to Build Better Code with Development Software. The movies seemed like a good tie-in to improving productivity fast & furiously on your development team.
But you know what our web guys told me? Most of you have gone straight from our blog posts to YouTube to watch Fast & Furious trailers of cars crashing into helicopters, driving through exploding airplanes and running into stacks of propane canisters. Some of you aren’t even downloading the e-book.
Fine thing, when I’m trying to get you to focus on agile development of your database applications.
So this time I’m going to write seriously about increasing code quality and performance, with a few top-of-mind questions for all database developers:
In the same way that you want to increase code quality and performance, the Fast & Furious team wants to get all the moving parts in sync for a great Fast 8. Will Eva Mendes come back as Agent Fuentes? Where in New York City will the movie take place? Are they really going to find a role for Helen Mirren?
I don’t know.
And I need to get back to work. You do, too.
So this time, be sure to download and read our new e-book, Three Guiding Principles to Build Better Code with Development Software BEFORE you bounce off to the Fast & Furious videos on YouTube.
The e-book contains guidelines on what to look for in software tools you can use to build your own database applications.
About Josh Howard
Josh Howard is a Worldwide Product Marketing Manager for Dell’s Information Management Group focused on database, business intelligence and big data analytics solutions. Josh has been with the Dell product team since 2010, where he leads the go-to-market strategies for Dell’s Toad™ product portfolio. You can follow Josh on Twitter at @joshoward or on LinkedIn at www.linkedin.com/in/joshoward/.
View all posts by Josh Howard |
Here’s the sound of not enough infrastructure:
Photo Credit: Pat Pilon Licensed under CC BY 2.0
“What’s going on? I can’t get to the network!”
“The servers are choking and Sales can’t book end-of-month orders!”
“Twitter’s on fire with complaints about our website!”
“We’re getting slammed down here on the help desk! What’s wrong with the network?”
Then here’s the sound of too much infrastructure:
“We overbought on those new servers for the lab. Let’s move those assets around next chance we get.”
Not enough infrastructure = exclamation marks and hand-waving. Too much infrastructure = oh, well.
But it’s not always easy to measure the amount you’re going to need, as we found out when we migrated from our legacy analytics product to Statistica.
Getting the infrastructure requirements right
“It’s right there on the side of the box,” IT said, “where it reads ‘infrastructure requirements.’”
“We see that,” replied the migration project managers, “but there’s a lot riding on this, and we don’t want to get calls in the middle of the night.”
Nobody wanted to hear the sound of not enough infrastructure during a migration project that affected hundreds of projects and users all over the company, so we overbought. Besides, we were confident that, once people saw how much easier it was to use Statistica than our old product, we would see a big increase in our user population, and we wanted to be prepared for that.
But allocating a 12-core server with 148GB of memory to sharing and collaboration? You could almost model weather on that.
We erred on the side of caution, with similar equipment for Statistica execution and Web servers, and Monitoring and Alerting servers (MAS). It turned out to be more than adequate. Since the end of our successful migration project, we see the wisdom in the basic Statistica infrastructure requirements:
Microsoft Windows Server (64-bit) 2008 R2 or later with 2GB of memory, 5GB of disk space and a dual-core processor
Microsoft Internet Information Services, which comes with Windows Server
A standard database, such as Microsoft SQL Server or Oracle, acting as the repository for Statistica metadata
Users can work satisfactorily on even more modestly equipped Windows desktops.
If you’re planning a migration of your advanced analytics software, have a look at our latest e-book, “Statistica: The Great Analytics Migration, Part 3: Technology.” It poses the main questions every migration project needs to address, then outlines how we at Dell answered them during our successful transition from our previous product (you can probably guess which one it was) to Statistica.
You never want to fall short on your infrastructure requirements, but you don’t want to overbuy, either. Read our e-book to see how best to measure for your migration project.
While I certainly appreciate Boston for its history, chowder, and marathon, it is the predictive analytics scene that keeps bringing us back year after year. I know that sounds odd, but the annual Predictive Analytics World (PAW) Boston event is a natural fit for Statistica, especially with the recent development of a predictive healthcare track.
Healthcare's connection to predictive analytics arguably extends back to the ancient Greek physician, Hippocrates of Kos, who supposedly provided the instruction, “Declare the past, diagnose the present, foretell the future.” And if that isn't data-scientist-speak, I don't know what is! Hippocrates also touted the medicinal value of food, so I have no doubt he would have prescribed Boston clam chowder for its palliative effects, though I suspect he had his fill of seafood in his time (several hundred years before the birth of Christ).
Back then, of course, the healthcare system (if it could be called such) was comparatively simple, perhaps limited primarily to individual doctor-patient relationships. That simplicity is no longer the norm. During Statistica's mere 31-year legacy, our customers have driven us to develop expertise that guides healthcare organizations through the necessary components of data management and reporting, patient analytics, insurance risk reduction and regulatory compliance. You can learn about some of our healthcare successes in our datasheets, white papers, and videos.
So, when it comes to targeted events like PAW-Healthcare in Boston, we get to be all over the place. Our newsletter readers (yes, you can subscribe for FREE) already received a short list of our PAW-Healthcare exposure, where we will be sharing our expertise face-to-face with modern-day physicians and data scientists at breakfasts, meetups, and presentations. Take a look here and then be sure to register for PAW-Healthcare yourself.
We will also maintain a presence at booth #240, so we hope to see you there the week of September 28.
Authorizing a product can sometimes be confusing because of the terms that are used.
As with many DSG products, there is a two-step authorization process.
A license key (or authorization key) must be entered, along with its Site Message string.
These two pieces of information will enable and authorize most applications, like Toad for Oracle and Toad Data Point.
Knowledge Base Article 74447 offers additional information on how product credentials work within the DSG Toad family and can be viewed here.
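To make the two-step idea concrete, here is a purely illustrative sketch. This is not the actual Toad licensing API; the function, the key format and the validation rules are invented, and the real products perform their own checks internally.

```python
def authorize(license_key: str, site_message: str) -> bool:
    """Hypothetical two-step authorization check.

    Step 1 validates the license (authorization) key itself;
    step 2 requires the matching Site Message string that ties
    the key to a specific installation.
    """
    # Step 1: the key must be present and well-formed (toy rule only).
    if not license_key or len(license_key.replace("-", "")) < 8:
        return False
    # Step 2: the site message must accompany the key.
    return bool(site_message.strip())

# Both pieces of information are required to enable the product.
print(authorize("ABCD-1234-EFGH", "SITE-MSG-42"))  # prints True
print(authorize("ABCD-1234-EFGH", ""))             # prints False: key alone is not enough
```

The point of the sketch is simply that neither piece of information is sufficient on its own; the application is enabled only when both are supplied together.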