
Artificial Intelligence; The Next “Undiscovered Country”

Oct 31, 2019



In William Shakespeare's "The Tragedy of Hamlet, Prince of Denmark" (usually referred to simply as "Hamlet"), the title character uses the phrase "undiscovered country" to refer to what lies beyond the grave – the afterlife – our lack of knowledge of it, and our fear of it.  As it is with death, so it is with "Artificial Intelligence", or "AI".

Simply put, AI refers to the sciences involved in having computers gather and evaluate information (data in context) – whether that information is stored, gathered in real time, or a combination of the two – and make decisions based on that information toward some pre-determined outcome or objective.  Each decision leads to an experience that is archived and added to the stored information, helping the system make better decisions in the future.
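To make that feedback loop concrete, here is a minimal sketch in Python of a decide-then-learn cycle.  It is an illustration only – the names (Agent, choose, record) are invented for this example and belong to no particular AI framework – but it captures the pattern described above: each decision's outcome is archived and folded back into the information used for future decisions.

    import random
    from collections import defaultdict

    class Agent:
        """Toy decision-maker that learns from archived experience."""

        def __init__(self, actions):
            self.actions = actions
            self.value = defaultdict(float)  # running average outcome per action
            self.count = defaultdict(int)    # how often each action was tried

        def choose(self):
            # Mostly exploit what past experience says is best,
            # but occasionally explore the alternatives.
            if not self.count or random.random() < 0.1:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.value[a])

        def record(self, action, outcome):
            # Archive the experience: fold the new outcome into the
            # stored average so future choices improve.
            self.count[action] += 1
            self.value[action] += (outcome - self.value[action]) / self.count[action]

Run enough choose/record cycles against any environment that scores the outcomes, and the agent's choices drift toward the better-scoring actions – "learning from experience" in its simplest form.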

To understand these concepts simply, we can examine the world of games.

In 1996, IBM's Deep Blue was arguably the first computer to understand the rules and objectives of chess and to learn from its experiences.  In its debut match against Garry Kasparov, Deep Blue lost 4–2.  A year later, in 1997, the upgraded Deep(er) Blue won its rematch against Kasparov 3½–2½ – two wins, one loss, and three draws.

Moving to the game Go (another complex strategy game, originating in China some 2,500 years ago): in 2017, Google DeepMind's AlphaGo program beat the top-ranked Go champion Ke Jie in a three-game match, winning three straight games.  And later that same year, its successor AlphaZero beat Stockfish (a top-rated chess engine) in a 100-game match, winning 28 games and drawing the other 72 (Stockfish won none).

Each of these AI game systems understood the fundamentals of its game (rules and objectives) and "learned" to play at a master's level based on its experience in play.

But games are one thing; real life is quite another.

We see AI creeping into our personal and professional lives day by day.  But is it really as effective as we believe it might be?

Consider the use of AI in Human Resources for screening candidates for a position.  These AI "bots" are instructed to look through applicants' CVs for how closely they match the job requirements (perhaps better called "desirements").  They routinely kick up those who are close matches and kick out those who are less close.  But I am almost certain that many of the candidates who are kicked out are better suited for the role than those who are kicked up.

Besides, we hire people.  And people are nuanced far beyond what can be described on paper; they can contribute in ways that only a face-to-face meeting (or even a phone call) gives them the opportunity to show.

Then there is the matter of diversity: how can we tell whether our AI bot is "accidentally" kicking out those who do not fit the desired profile (unconsciously) programmed into the algorithm?  Perhaps we kick up a "Winston", kick out a "Darnell" – and crash the system with a "Shaquille O'Neal".  How do we ensure our team is not a homogeneous bunch of "like-mes"?

And I have heard from some candidates how they "game" the bots.  They look at the job profile and simply make sure their submission contains all the "buzz-words" the posting requires – a tactic the sketch below illustrates.
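For illustration, here is a minimal sketch in Python of naive keyword screening.  The required terms and the threshold are hypothetical – real screening tools are more elaborate – but the sketch shows both how a bot "kicks up" close matches and why buzz-word stuffing defeats it.

    REQUIRED_TERMS = {"lean", "six sigma", "kaizen", "change management"}

    def screen(cv_text: str, threshold: float = 0.75) -> bool:
        """Return True ("kick up") if enough required terms appear in the CV."""
        text = cv_text.lower()
        hits = sum(1 for term in REQUIRED_TERMS if term in text)
        return hits / len(REQUIRED_TERMS) >= threshold

    # A candidate who simply pastes in the buzz-words passes the screen,
    # regardless of whether they can actually do the job.
    stuffed_cv = "Expert in Lean, Six Sigma, Kaizen, and change management."
    print(screen(stuffed_cv))  # True

Notice that the bot never evaluates competence; it only counts word matches – which is precisely what makes it both gameable and blind to nuance.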

But do we, should we, trust machines and their programming with our health and safety?  Even our lives?

We already do.  The only real difference is that the machines we trust have established track records where the risks are known.  With AI, the risks are (presently) largely unknown.  And this not knowing (rightfully) causes us concern.

Take flying as an example.  Today, we rarely give it a second thought to board a jet airliner and hurtle through the sky at 550+ mph at 40,000 feet.  Even with all the external risks and the occasional high-profile catastrophe, we still board in ever-increasing numbers.

But such was not the case in the early days of jet flight.  In 1952, the de Havilland Comet entered service as the world's first commercial jet airliner.  But soon after it was brought into service, the jets started falling from the sky in a series of catastrophic crashes.  Investigations found that the root cause was the windows – or, more specifically, the holes cut into the fuselage to accommodate them.  These holes were cut with squared corners, and the repeated cycles of pressurization and depressurization caused fatigue cracks in the fuselage at those corners.  This discovery led to a redesign of the windows so that the openings in the fuselage were rounded, with no corners.

Problem solved.  But at the cost of lives.

The same can be said of every transformation of note: from the discovery of how to make fire, to the age of sail, to rail and the automobile, to space flight.  Each early stage of innovation brings with it a level of peril.  We learn from our negative experiences to ensure they are eliminated from future ones – but it is only through these experiences that we learn.

We pay for our innovations and transformations with our lives.  Always have, always will.

But with AI, things are different.  Up to this point, all of our innovations have been driven and operated by people.  This is not the case with AI, which is designed and programmed by people but operated by machines.

The one thing that does concern me is that a machine will not, cannot, admit that it was wrong – that it made a mistake.  It can store that negative experience in its databanks and draw on it in making future decisions, but it cannot show remorse or offer an apology.  It's not that machines don't care; they simply cannot care.  They lack the capacity for empathy.

As an example: let's say a driver is driving their vehicle on a slippery road and, through no other extenuating circumstances, loses control, hits a pedestrian, and the pedestrian dies.  And let's say the driver is charged with manslaughter and brought to trial.  Because of the circumstances, the lack of mens rea (intent), and the remorse and sorrow the driver demonstrated at trial, it is likely that the driver will be found not guilty.

But what if the person on trial did not demonstrate sorrow or remorse?  What if they were stoic, aloof, lacked any demonstration of empathy?  How might the jury find under such circumstances?  Absent remorse, our natural instinct is to want some manner of punishment, of revenge.

But how do we punish, and exact revenge on, a machine?  Sure, we might be able to go after the company, but unless there was intent, what is it guilty of?  Should it be the owners who are found guilty?  The stockholders (who are really the owners)?  Or perhaps the design engineers, or maybe even the people who built the machine?

A more complex scenario for your consideration: what if the driver were faced with a choice of running head-on into an oncoming tractor-trailer or swerving into a crowd?  The driver would base their split-second decision on the values and morals they hold – harm (possibly kill) themselves by hitting the tractor-trailer, or harm (possibly kill) people in the crowd.  What decision would the computer driving the automobile make?  What is its programming?  What values and morals have been programmed into it?  Does the driver have a say in their own destiny?
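To see how such "values" actually enter a machine's decision, consider this hypothetical sketch in Python.  The harm categories and the weights are entirely invented; the point is that somebody has to choose those numbers before the machine ever faces the moment, and the "moral" choice then reduces to comparing two figures.

    # Invented weights: whose risk counts for how much?  Someone decides
    # this in advance – and that decision encodes the machine's "morals".
    HARM_WEIGHTS = {
        "occupant_fatality": 1.0,
        "pedestrian_fatality": 1.0,
    }

    def expected_cost(outcome_probabilities: dict) -> float:
        """Weighted sum of predicted harms for one candidate maneuver."""
        return sum(HARM_WEIGHTS[h] * p for h, p in outcome_probabilities.items())

    # Hypothetical predictions for the two maneuvers in the scenario above.
    swerve = {"pedestrian_fatality": 0.6, "occupant_fatality": 0.05}
    stay = {"pedestrian_fatality": 0.0, "occupant_fatality": 0.4}

    decision = "swerve" if expected_cost(swerve) < expected_cost(stay) else "stay"
    print(decision)  # "stay" with these weights

Change the weights and the decision can flip – which is the crux of the question above: who chose the numbers, and does the person in the driver's seat get any say?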

This is the challenge of AI and the machines it operates that can cause harm.

There have been several recent events that have hit the headlines and are worth considering:

  • Tesla: There was a fatal crash of a Tesla Model 3 in Florida.  The driver had turned the "autopilot" feature on; two seconds later he took his hands off the wheel (and they remained off), and eight seconds after that the car collided with a tractor-trailer.  Look at your watch and count out eight seconds.  It's a long time.

In another (non-fatal) crash, a Tesla Model S in California hit a parked fire truck.  The NTSB report showed that the driver had not had his hands on the steering wheel for approximately 13 minutes, and it recommended further development of the "autopilot" system so that it can recognize and react to similar scenarios.

  • Uber: An Uber autonomous (self-driving) car fatally struck a pedestrian in Tempe, Arizona, who was crossing a four-lane road in the evening outside of the designated and illuminated pedestrian crossing.  Although the car was operating in autonomous mode, it had a human driver behind the wheel.  The NTSB investigation found that the automobile was not exceeding the speed limit and that there was ample time for the automobile and the driver to take action to avoid the collision.  But it also found that the driver was not paying attention to the road in the moments before the crash.
  • Boeing: Probably the most extreme recent example of the perils of AI involves the Boeing 737 Max-8 and the Maneuvering Characteristics Augmentation System (MCAS) required to overcome certain design elements of the aircraft.  In this case, the pilots wanted the aircraft to perform certain instructions given a set of circumstances, but the MCAS was forcing alternative instructions based on the data it was being fed and the programming it was given.  The result was two fatal crashes and many lives lost.

Complacency kills.  Arrogance is worse.  There is a fine line between the two.

In the cases of the automobiles, the operators of the vehicles were complacent and relied too heavily on the technology and its maturity.  Until these systems become more mature and the "experiences" have been learned, the human driver should be the primary and the technology the back-up system, not vice versa.

And in the case of the 737 Max-8 and the MCAS system, it would appear that Boeing was complacent in assuming its engineering of the technology was more thorough and ready than it was – and the FAA was complacent in not fulfilling its responsibility to perform the due diligence that would have ensured Boeing's designs and engineering were sound.

Then there is the other extreme:

What if people purposefully use AI to create machines that are intended to cause harm?  And what if those machines are tied to some super-machine driven by AI to make its own decisions?  Can "Judgement Day" step from fiction to reality?

For me, this is the bottom line: AI is in its infancy – but there is equally giganormous potential and giganormous peril.

I believe there is potential for untold and unimaginable benefit from AI – benefit that can have a positive and indelible effect on every part of our world and every aspect of the lives we lead.

I also believe there is great peril.  Given human nature, many of these perils are known, but our complacency and arrogance will offer a false sense of security (as in the examples above) and we will experience the resulting and inevitable tragedies.  But these tragedies and the challenges they illuminate will be overcome, and the technologies of AI will be refined as we learn from these experiences – until they are as ubiquitous and unquestioned as driving a car or flying in an airplane are today.

And many of these perils are unknown, and we must remain vigilant for their tell-tales.  Most important, we must never relinquish control of our lives, our destiny, or our fate – to machines.

In each and every case, we will have the opportunity to make a future we desire – or be forced to live with the consequences.

by Joseph Paris

Paris is the Founder and Chairman of the XONITEK Group of Companies, an international management consultancy firm specializing in all disciplines related to Operational Excellence – the continuous and deliberate improvement of company performance AND the circumstances of those who work there – to pursue "Operational Excellence by Design" and not by coincidence.

He is also the Founder of the Operational Excellence Society, with hundreds of members and several chapters located around the world, as well as the Owner of the Operational Excellence Group on LinkedIn, with over 60,000 members.  Connect with him on LinkedIn or find out more about him.

This post Artificial Intelligence; The Next “Undiscovered Country” appeared first on Operational Excellence Society.

Original Article: https://opexsociety.org/body-of-knowledge/artificial-intelligence-the-next-undiscovered-country/
