Agile, Agile, Agile is the mantra of many innovative companies looking to react faster to opportunities and develop solutions at the speed of their business. Believe it or not, some companies find Agile development frameworks too slow and too constricting to keep pace with their goals. The same is true of analytics execution. The days of passing a query to a huge volume of data and waiting a day for the response are over; that ship has sailed. Data-driven companies are looking to fail faster and learn quicker in order to innovate.
In this week’s #ThinkChat segment, Tom Davenport, John Thompson and I discuss this growing trend in the analytics space, as well as the importance of communicating insights, storytelling and interactive data visualization for big data and analytics.
#ThinkChat Conversation with Tom Davenport, John Thompson Part 1 of 3
View other segments in the #ThinkChat series!
Sherlock Holmes got by on guile and intuition. Rookie!
A database administrator is IT’s equivalent of a private investigator. When things go awry, users look to the DBA to solve IT mysteries, such as application latency or why the department server is out of storage. And while Sherlock is a gifted literary investigator, if he and his sub-contractor, Watson, were DBAs today they wouldn’t last a week in this budget-strapped, do-more-with-less environment.
Image credit: dynamosquito | Licensed under: CC BY 2.0
DBAs don’t have the luxury of relying on gut feelings, nor are they given much time to collect information and make educated decisions. Clearly, intellect and sound reasoning are required to successfully manage any database infrastructure, but today, along with those natural skills, you’d better be digitally plugged in to every table, cell, application, query string and user to maintain five-nines infrastructure performance.
Roles are Changing
Over the last 15 years, the DBA role has changed drastically. Just as Sherlock Holmes and his trusted sidekick are partially credited with advancing real-world forensic sciences, DBAs in the early 2000s put down the foundation of processes and remediation steps we have in place today. Back then, database administration largely dealt with single-vendor, big-iron enterprise database infrastructure that didn’t feature much flexibility with third-party bolt-on solutions. Flash forward to today and DBAs are responsible for multivendor database solutions but no longer have a resource like Watson to help them manage the environment.
The complexity of structure and the number of specialized databases within even the most mundane environment is at an all-time high. Now DBAs are dealing with virtualization, mobility, nearly countless apps, and users who demand access to database resources from corporate and personal devices.
How do you stay on top of managing all this infrastructure?
DBAs need solutions that help them do their daily job but also help them delegate tasks to teams that can have an immediate impact on an issue. Some simple examples are offloading software issues to the apps group for remediation and calling upon the IT operations group to handle network performance and storage issues.
Performance data is a DBA’s best friend. With tools that provide crystal clear vision all the way down to the resource, user session and application level, a potential issue can be brought to a DBA’s attention and diagnosed before it becomes a widespread problem. Additionally, the ability to look back in time and analyze conditions that existed when a database issue occurred—such as deadlocks, application latency, Microsoft SQL Server agent failures, CPU spikes and I/O bottlenecks—is invaluable and makes DBAs more efficient.
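As a simple illustration of that kind of visibility (a generic sketch using SQL Server’s built-in dynamic management views, not a specific Foglight feature; the server name is a placeholder), a DBA can spot live blocking from the command line:

# List requests currently blocked by another session, via sys.dm_exec_requests
sqlcmd -S MYSERVER -E -Q "SELECT session_id, blocking_session_id, wait_type, wait_time FROM sys.dm_exec_requests WHERE blocking_session_id <> 0;"

A tool that records this kind of data continuously is what makes the look-back analysis described above possible.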
In my next blogs in this series, I will dive deeper into capabilities that allow today’s DBAs to easily incorporate the latest features of database analysis and performance investigation. By the end of this series, you will be introduced to capabilities that would make even Sherlock envious. In the meantime, check out this webcast to get clued in on SQL Server workload mysteries and unlock peak performance.
P.S. Going to the SQL PASS Summit? Visit our booth (No. 423) to talk with Dell SQL experts and get a chance to win cool prizes.
About John Maxwell
John Maxwell leads the Product Management team for Foglight at Dell Software. Outside of work he likes to hike, bike, and try new restaurants.
View all posts by John Maxwell
Managing tens of thousands of local and remote server nodes in a cluster is always a challenge. To reduce cluster-management overhead and simplify the setup of a cluster of nodes, admins seek the convenience of a single snapshot view. Rapid changes in technology make management, tuning, customization and settings updates an ongoing necessity, one that needs to be performed easily as infrastructure is refactored and refreshed.
To address some of these challenges, it’s important to fully integrate hardware management with the cluster management solution. The integration between server hardware and the cluster management solution detailed in this blog provides an example of some of the best practices achievable today.
Critical to this integration and design is the Integrated Dell Remote Access Controller (iDRAC). Since the iDRAC is embedded in the server motherboard for in-band and out-of-band system management, it can display and modify BIOS settings as well as perform firmware updates through the Lifecycle Controller and remote console. Collectively, each server’s in-depth system profile information is gathered using system tools and utilities and is available in a single graphical user interface for ease of administration, reducing the need to physically access the servers themselves.
Figure 1. BIOS-level integration between Dell PowerEdge servers and cluster management solution (Bright 7.1)
Figure 1 (above) depicts the configuration setup for a single node in the cluster. The fabric can be accessed via the dedicated iDRAC port or shared with the LAN-on-Motherboard capability. The cluster administration fabric is configured at deployment time with the help of built-in scripts in the software stack that automate this. The system profile of the server is captured in an XML-based schema file that is imported from the iDRAC using racadm commands. Relevant data such as optimal system BIOS settings, boot order, console redirection and network configuration are then parsed and displayed on the cluster dashboard of the graphical user interface. By reversing this process, it is possible to change and apply other BIOS settings on a server to tune and set system profiles from the graphical interface. These choices are then stored in an updated XML-based schema file on the head node and pushed out to the appropriate nodes during reboots.
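For readers curious about what happens under the covers, here is a hedged sketch of that racadm export/import cycle; the iDRAC IP address, credentials and file name are placeholders, and the exact syntax varies by iDRAC generation:

# Export the server's system profile as an XML file from the iDRAC
racadm -r 192.168.0.120 -u root -p calvin get -t xml -f profile.xml

# After editing the XML, push the settings back; they are applied on the next reboot
racadm -r 192.168.0.120 -u root -p calvin set -t xml -f profile.xml

In the integrated solution, the cluster manager automates this cycle and stores the XML schema files on the head node.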
Figure 2. Snapshot of the Cluster Node Configuration via cluster management solution.
Figure 2 is a screenshot showing BIOS version and system profile information for a number of Dell PowerEdge servers of the same model. This is a particularly useful overview as inappropriate settings and versions can be easily and rapidly identified.
Typical use would be when new servers are added or replaced in a cluster. The above integration helps ensure that all servers have homogeneous performance, BIOS versions, firmware, system profiles and other tuning configurations.
This integration is also helpful for users who need custom settings - i.e., not the defaults - applied on their servers. For example, latency-sensitive codes may require a custom profile with C-states disabled. These servers can be categorized into a node group, with specific BIOS parameters applied to that group, as sketched below.
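As a concrete sketch (attribute names vary slightly between PowerEdge generations, so verify them with racadm get BIOS), disabling C-states for such a group might come down to:

# Disable processor C-states in the BIOS
racadm set BIOS.SysProfileSettings.ProcCStates Disabled

# Queue a BIOS configuration job so the change is applied at the next reboot
racadm jobqueue create BIOS.Setup.1-1

The cluster manager can then apply the equivalent settings to every node in the group during reboots.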
This tightly coupled BIOS-level integration delivers a significantly enhanced solution for HPC cluster maintenance, with a single snapshot view for simplified updates and tuning. As a validated and tested solution on the given hardware, it provides seamless operation and administration of clusters at scale.
In my last post, I explained how using Oracle Standard Edition with a third-party data replication solution provides a low-cost, high-performance alternative to using Oracle Enterprise Edition. Now, let’s explore a real-world example of how one company used this approach to reduce costs, minimize downtime and improve database performance.
Bodybuilding.com looks to strengthen database performance
With more than 1.7 million unique visitors a day, Bodybuilding.com is the world’s most visited health and fitness website and the largest online retailer of nutritional supplements.
The company developed a strong business intelligence strategy, mining website data to better understand customer behavior, trends and new opportunities for growth. But with up to 50 employees actively running reports against the production database, both online transaction processing (OLTP) performance and the user website experience were suffering.
An affordable way to offload reporting
Sean Scott, Oracle DBA at Bodybuilding.com, needed a way to separate the online transactional database from ad hoc and BI reporting. He knew moving from Oracle Standard Edition to Enterprise Edition could help ― but at a significant cost.
Not wanting to spend several hundred thousand dollars in licensing fees, Scott discovered he could offload reporting and improve database performance by simply staying on Standard Edition and adding an affordable third-party data replication solution.
Anxiety-free migrations that deliver $200,000 in savings
Data replication not only allowed the company to offload the reporting workload from the production database, but it also took the stress out of migrations. With their data replication solution, DBAs can see how the new database is going to perform and resolve any issues proactively. This helps prevent downtime, saving hundreds of thousands of dollars for one migration alone.
One approach, many use cases
Using Oracle Standard Edition with a powerful, affordable data replication solution like SharePlex will empower you to reduce costs, achieve high availability, improve business intelligence and much more.
To get the full rundown on how Bodybuilding.com successfully implemented this approach, check out the case study today.
Image credit: Pat Herman
In the most recent issue of our Statistica Monthly Newsletter (yes, you can subscribe for free), our readers were made aware of a new Statistica user forum in our community pages. The new forum is intended to be a true user-to-user community, with discussion threads driven by the users, of the users, and for the users.
The good news is you don’t have to be a Statistica guru to participate! That said, the forum does give you the opportunity to share your best practices, seek feedback on vexing challenges, make product suggestions and expound on the analytics and data topics that interest you. You can also promote yourself by linking to relevant blogs and articles you have written in other forums, such as LinkedIn groups. The forum will be regularly monitored by Dell experts and peers alike, so you can expect your posts to be addressed even as we build our new community audience from scratch.
Why is this a big deal? Because the development of the Statistica platform itself is a response to your real-life use cases and the industry trends that affect you. And because you can improve your own knowledge base (and your personal brand) by collaborating with fellow Statistica users. You never know where the next big idea may come from. Here I will happily defer to greater minds than my own:
The sun never sets on the Statistica empire, because there are over 1 million Statistica users in dozens of countries around the globe, in industry and academia and government. As a Statistica user, you are never alone. So share the forum link with your peers, and we look forward to your participation.
For more information, subscribe to the Statistica newsletter >
Are your products and services ready to meet the needs of a new breed of data-savvy, sophisticated consumers? Customer expectations are evolving quickly: personalization and low-friction engagement are fast becoming the norm, and data-driven systems are behind the innovation. Data scientists are helping companies gain insights and take deeper, more meaningful action. But even this critical role is being redefined. It takes a village to orchestrate these innovative business initiatives, and the role of data scientist may be better served by a team configuration.
In this week’s #ThinkChat segment, Krish Krishnan, President and CEO of Sixth Sense Advisors, and I discuss the need to be faster, smarter and more innovative to ensure success and growth. Hadoop, MySQL and Spark are all disrupting the traditional landscape and enabling better analytics, even as the role of the data scientist evolves.
#ThinkChat Conversation with Krish Krishnan Part 3 of 3
View other segments in the #ThinkChat series!
I admit the title may be a bit over the top, but I can’t hide the fact that I’m excited (insert Pointer Sisters reference here) about Foglight’s participation in Dell World Software User Forum this October. And for good reason: the forum brings together the best in technology and business professionals like you. It’s a great opportunity to enhance your IT expertise and explore all the latest tools to help simplify your work. It’s also the perfect chance to get expert tips and strategies to optimize your virtual environment – from the world’s largest provider of virtualization solutions.
Are you pumped yet? I am.
The event is October 20-22 in Austin, Texas. I’ll be there and I’m looking forward to helping you tackle your toughest virtualization challenges. We’ve planned 13 knowledge-packed sessions just for performance monitoring, and they are not to be missed.
Check out our sessions!
What will I be covering in these sessions, you may ask? Well, performance monitoring mavens, you’re in for a treat:
Take a look at the full list of sessions, including the ones for Foglight, on the Software User Forum agenda.
Get a Live Demo from Yours Truly
If you can’t catch one of the sessions and/or want to see Foglight in action (or you just want to chat about 80s dance-pop), find me at the show. I’m easy to spot – I’m the tall, handsome man with a winning smile, perfectly trimmed beard and freshly pressed collared shirt. Or, you could always ask for me by name. You know, whatever’s easier. I’d be happy to give you a live demo of Foglight to show you what we’ve been up to and fill you in on what’s coming next.
If all of that doesn’t get you excited, you might want to check your pulse.
Months of planning complete, hardware and software procured, associates prepped. The sails are set, and the stars are aligned to flip the switch on a major analytics platform migration. That’s what the buildup felt like when Dell was ready to start moving users from our legacy analytics platform to Statistica, an easier-to-use and lower cost solution, as the company’s analytics platform.
In a previous blog post, we covered the lessons learned from process planning. It bears repeating that the time Dell spent working through process-oriented questions positioned the organization to start moving users onto Statistica on schedule. Even still, when the actual migration is in progress, some new and perhaps surprising processes pop up.
What actually happens when a company migrates to a new analytics platform? Find out in the Dell-on-Dell case study. The e-book, “Statistica: The Great Analytics Migration,” is available for download.
During our migration, Dell realized that it was the business leads in the Centers of Excellence (CoEs) that had the best glimpse into progress, knowing how close each user or department was to migrating completely off the legacy analytics platform. The CoEs also had the insight into unexpected roles and tasks users and managers took on along the way. Let’s look at three:
Working in two platforms: Perhaps it’s not surprising that a migration of this magnitude would put added strain on employees taking part in it, but there are only so many hours in a workweek. If teams are expected to add more tasks to their daily workflow, plan deadlines accordingly.
Double checking, for a while: Analytics are pervasive at Dell and run mission-critical business applications, so to ensure the integrity of results and minimize risk during the migration, Dell ran Statistica and the legacy analytics system concurrently to make sure everything was operating as expected. The process was unexpected and time-consuming, but necessary before the legacy analytics platform could be turned off.
Trying to align individual business groups: You’ve heard of the phrase herding cats. It’s certainly a good comparison to managing a migration in which various groups operate on their own timeline, working toward their own objectives. But success means getting all groups to completion by the overall deadline.
During the migration, we encountered some additional necessary processes to move the migration along. For instance, the team realized it was important to stop and correct inefficiencies, despite the reluctance to take any time away from moving forward. Managers also experimented with different motivation tactics, including contests. To find out more about the actual migration, download the Dell-on-Dell case study, “Statistica: The Great Analytics Migration, Part 2: Process,” which recounts ways to get all teams to stick to the deadline. Would you expect pressure in your organization to come from the business leads or IT?
There’s more to come from Dell on its Statistica migration. In part 3 of the e-book, we’ll cover all aspects related to technology components of the migration project — from architecture to tooling. In the meantime, read part 2 to get more insight into our migration process.
Implementing Agile Practices to Catch up with the Other Agile Teams in Your Organization
A lot of database developers are surrounded by the agile development mindset (in application development, marketing and DevOps). But when they try to apply it to their own projects, they realize that many of the things people value most about databases simply aren’t amenable to the agile mindset.
I posted last time about database developers feeling left out of agile and stressed that automation can still play an important role. That’s true even though most of the tools application developers use aren’t widely available for database development.
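To make “automation” concrete, here is a tiny, hypothetical sketch (the server, database and folder names are placeholders): a one-line step in a build job that applies versioned schema scripts to a test database, the kind of repeatable task that starts shortening the cycle.

# Apply versioned schema migration scripts to a test database as part of a CI job
for f in migrations/*.sql; do sqlcmd -S TESTSERVER -E -d MyAppDb -i "$f"; done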
Here are five points in the pipeline where you can shorten the database development cycle and start to catch up with the other agile teams in your organization:
Reaping the rewards of agile means thinking differently about how teams work and how they work with one another.
We’ve released a new e-book, Getting Agile with Database Development, to give you more perspective into using automation to shorten the database development cycle. Read the e-book to see in a new light the manual processes your database development has relied on, and start automating more of them.
Have you ever gotten a PSOD (Purple Screen of Death) while installing ESXi?
Have you faced any challenges during ESXi installation?
Are you interested in knowing what’s happening behind your installation screen?
If yes, then you are in the right place.
After reading this blog, you will be able to answer the questions above.
Debugging starts with collecting logs, and VMware ESXi serial logs are one of the key pieces of data needed to root-cause an issue.
Before getting into the logs, let us see what the term serial signifies.
Serial: The term serial signifies that data is sent in one direction over a single wire within the cable. Enabling serial logging sends all the VMkernel logs to the serial port connected to the local machine; in simple terms, serial logs are captured over a serial port.
Now the question is: what is a serial port?
Serial Port: This is also known as a COM port, short for communication port. It is used as a data communication interface.
Now for the final buzzword: what are serial logs?
Serial logs: While troubleshooting an issue, it is sometimes helpful to capture logs independently of disk or network connectivity. Serial logs are saved to an alternate location of your choice instead of the default location.
Generally, serial logs are taken in three situations.
The very first thing you need is a laptop/workstation/local machine with a serial port, or a serial-to-USB converter to enable serial connectivity.
Connect the serial cable to the host and the local machine.
Open the iDRAC console and go into the host’s BIOS settings to enable the BIOS tokens that provide serial communication. Refer to Fig 1 and Fig 2 for more clarity.
Fig 1: Serial console from BIOS setup
Enable serial communication on Serial Device 1 and COM1 (COM1 is the port name; you can also select COM2). Note down the baud rate; we will use it later.
Fig 2: Serial communication attributes
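If rebooting each host into BIOS setup is impractical, the same tokens can typically be set through racadm instead; treat the attribute names below as a sketch to verify against your server generation:

# Enable console redirection over COM1
racadm set BIOS.SerialCommSettings.SerialComm OnConRedirCom1

# Match the baud rate noted in Fig 2 (for example, 115200)
racadm set BIOS.SerialCommSettings.FailSafeBaud 115200

# Queue a BIOS job so the settings take effect on the next reboot
racadm jobqueue create BIOS.Setup.1-1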
From the local machine’s device settings, check the serial port name. We will use this to make the serial connection in the PuTTY session.
Fig 3: Device setting of local machine
NOTE: In this example, the serial port name in the device settings is COM5, and the same name should be given in the PuTTY session.
Fill in the required parameters for serial communication in the PuTTY session.
Fig 4: Putty connection attributes
Now provide the advanced boot options to the kernel to send the logs through the serial port during installation.
Once the ESXi installer is ready to begin, it allows users to add boot options. Pressing <Shift+O> gives you a prompt where you can edit the boot options. Enter the command shown below and press Enter.
>runweasel debugLogToSerial=1 logPort=com1
Fig 5: ESXi boot options
Finally, closing the PuTTY session will save the logs at the default location on your local machine. And you are done!
Collecting Serial logs from Dell Blade Servers
On the server side, enable the serial connection as described above; then, from the client machine, do the following:
Make an SSH connection to the chassis using PuTTY.
Once you get into the CMC console, connect to the specific blade using its blade number.
$connect -b server-# (# is the slot number of the blade server in the chassis)
Once the connection is made, open the server console and append the boot option shown earlier before the installer initializes.
Apart from that, the whole process is the same as for rack servers; a rough end-to-end sketch follows.
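Putting the blade-specific steps together, the session looks roughly like this (the CMC IP address, credentials and slot number are placeholders):

# SSH into the Chassis Management Controller (CMC)
ssh root@192.168.0.120

# From the CMC console, attach to the blade in slot 4
$connect -b server-4

# When the ESXi installer is ready, press <Shift+O> and append the serial-logging options
>runweasel debugLogToSerial=1 logPort=com1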
Now you are ready to debug your issue with the ESXi serial logs.