In November, I wrote about some of the unintended consequences of analyzing big data:

  1. Savvy customers may demand a share of the value companies realize from mining the customers’ data.
  2. Gathering and analyzing data about customers (or citizens), if too unconstrained, can cross the line from targeted marketing to prying.
  3. In their rush to harness big data, organizations can run the risk of inadvertently compromising customers’ confidential data.

We have reached a point where big data analytics technologies and decision support systems have gone beyond enabling us to make more accurate forecasts and better decisions, to posing privacy, security and ethical challenges such as these.

Big data analytics is a broad term covering a wide range of capabilities: intelligent search, pattern recognition, more accurate forecasting, improved risk management, the automation of tasks and decisions, and the advancement of smart machines such as IBM’s Watson.

Particularly interesting are the unintended consequences of automation and smart machines, which are rapidly becoming commercialized and commonplace.

Conventional wisdom holds little doubt about the merits of automation and the increased use of smart machines, since they benefit humanity in so many ways –

  • Labor-saving devices freeing humans from mundane and repetitive tasks, allowing us to move on to higher pursuits and more creative activities
  • Far more rapid evaluation of far more data inputs to produce far more consistent and error-free results than humanly possible

On the other hand, in a highly provocative and extremely insightful article in the November 2013 issue of The Atlantic, “All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines,” Nicholas Carr reviews unsettling evidence of some of the dangers of over-reliance on automation, along with prognostications about its future use.

As Carr notes, contrary to what we want to believe, automation can alter the way we behave and think. Autopilots have no doubt contributed to an overall decline in plane crashes. They reduce pilot fatigue, provide advance warnings of problems and keep a plane airborne should the crew become disabled. However, pilots’ dependence on onboard computers erodes their expertise and dulls their reflexes, leading to “de-skilling,” a dangerous situation when an autopilot fails. Carr cites examples of this “spectacularly new type of accident.” According to Raja Parasuraman, psychology professor at George Mason University and a leading authority on automation, “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation.”

A moral dilemma will face consumers, government entities and commercial businesses in 2014:

Is automation’s promise of convenience, speed, cost-savings and efficiency so beguiling that we will fail to recognize, or deliberately ignore, the warning signs and dangers of its cognitive ailments?

What do you think?