The Challenge of Artificial Intelligence

by Gary M. Shiffman, PhD, on Feb 19, 2019 9:12:28 AM

Google Assistant, a virtual assistant that can engage in two-way conversations, has been making waves in recent months. In May 2018, Google launched Duplex, an extension of Google Assistant that lets a machine carry out natural conversations in a near-perfectly mimicked human voice. The machine can call a salon to schedule an appointment, or a restaurant to make a reservation for you, and it responds to the nuances of conversation. When the receptionist says the time you wanted is not available, Google Assistant can negotiate another agreeable time using the voice and tone of a real person. The live demonstration by Google CEO Sundar Pichai at Google I/O 2018 is awesome. It is powerful. But is AI also a little bit scary? If “awesome, powerful, and scary” nicely summarizes how most of us feel about Artificial Intelligence, then what should we be doing with AI in the workplace? How do we use AI to find threats while preserving privacy?

Drawing on Russell and Norvig’s leading textbook on AI, I suggest we define AI as an agent that perceives its environment and takes actions to change that environment. From this definition we can think of many fields, such as machine learning (ML), natural language processing (NLP), computer vision, and speech-to-text. Machine learning is the process by which a machine uses data to learn. The key element that differentiates AI from other technology is this ability to take in input from outside the machine, apply ML and perhaps other technologies, and then provide a response that impacts the world outside the machine.
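
To make the definition concrete, here is a minimal Python sketch of the perceive-and-act loop. The thermostat scenario and every name in it (ThermostatAgent, perceive, act) are hypothetical, invented only to illustrate the cycle of sensing the environment and acting back on it; this is not code from any particular AI system.

```python
# Hypothetical toy agent illustrating the perceive-then-act loop:
# the agent takes input from outside the machine and responds with
# an action that changes the world outside the machine.

class ThermostatAgent:
    """Perceives a room temperature and acts on a heater."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def perceive(self, sensor_reading: float) -> float:
        # Input arrives from the environment (a temperature sensor).
        return sensor_reading

    def act(self, temperature: float) -> str:
        # The chosen action feeds back into the environment.
        return "heater_on" if temperature < self.target_temp else "heater_off"

agent = ThermostatAgent(target_temp=20.0)
for reading in [18.5, 19.9, 21.2]:
    print(reading, "->", agent.act(agent.perceive(reading)))
```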

Turning your ML algorithm into an agent that can impact the world is awesome. It can vacuum your carpets; it can defuse bombs in combat zones; it can answer a million phone calls per day and take on the mundane tasks that do not require human intervention. Beyond the obvious, our AIs can avoid biases and errors common among humans.

As humans, we possess biases. Some of these help us; we avoid pain and seek pleasure, for example. But humans also have negative biases, such as discrimination based upon race, religion, or ethnicity. And some of our biases never make themselves known to us, yet they still shape how we interact throughout our work day. For example, as reported in Daniel Kahneman’s Thinking, Fast and Slow, researchers at Newcastle University in the UK showed that workers presented with an “honesty jar” in an office coffee room (asking for payment when people took tea or coffee) gave more money when a picture of eyes staring at them was posted above the jar. Everyone knows a poster cannot report honesty or theft, yet a photo of a human looking at you will cause you to behave more honestly than a photo of flowers will! Computers, not subject to identity-based prejudices and feelings of guilt, can avoid these human biases.

On the other hand, a computer agent acts as programmed: it behaves literally, and it will execute with whatever biases were programmed by people or learned from the data on which it was trained. When you ask a computer to find strong patterns in data, that is exactly what the computer will do. But looking for patterns in data using machine learning can go wrong, because spurious correlations are everywhere; your machine will find correlations between things that are not actually related. One of my favorite websites, hosted by Tyler Vigen, collects these spurious correlations and makes the point perfectly. For example, a machine will tell you that the number of people drowning in swimming pools correlates with the number of films Nicolas Cage appeared in! What would you do if a data scientist in your organization told you that your enterprise spending on R&D correlated with suicides by hanging? This is just one example of why humans need to stay engaged and aware when monitoring AI.
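
To see how easily this happens, here is a small Python sketch, with invented numbers rather than Vigen’s actual data, of two unrelated series that correlate strongly only because both drift upward over time.

```python
# Toy illustration of spurious correlation: two series that share
# nothing except an upward trend. All numbers are invented for
# demonstration and are not Tyler Vigen's actual data.
import numpy as np

rng = np.random.default_rng(42)
trend = np.arange(10)  # ten consecutive "years"

# Unrelated quantities that both happen to rise year over year.
pool_drownings = 90 + 2.0 * trend + rng.normal(0, 3.0, 10)
cage_films = 2 + 0.4 * trend + rng.normal(0, 0.5, 10)

# Pearson correlation comes out high, yet neither causes the other;
# the shared time trend does all the work.
r = np.corrcoef(pool_drownings, cage_films)[0, 1]
print(f"correlation: {r:.2f}")
```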

What people truly fear about AI is superintelligence, an idea futurist Ray Kurzweil popularized, building on John von Neumann’s notion of the Singularity: the point beyond which human affairs, as we know them, could not continue. Imagine the day when computers can build better computers without human intervention.

Perhaps my current favorite example of computer “intelligence” is the machine that engineers built to translate between Korean and Japanese without being explicitly trained on that language pair; the machine, in essence, created its own intermediate language.

Machines possessing the ability to perceive the world and take actions that impact us certainly sound awesome, exciting, and perhaps scary. So here is what we know for sure. Humans don’t want to do the mundane work, and we carry negative biases, so we must expect machines to become part of every aspect of our professional lives. We can focus machine efforts on literal tasks. We must constantly test the machines to ensure they perform to our standards and have not developed biases over time as they ingest more data. And perhaps most importantly, we need to keep the human in the decision-making loop. In almost every science fiction movie in which the AIs take over, at some point the humans allowed the agent to make value decisions: what is good vs. bad, right vs. wrong. I don’t know about you, but I want to live in the world where we embrace AI to do the important but mundane work and leave the decision making to the human.
