Which is more useful:
- an artificial intelligence that is separate from humans and is programmed to provide valuable intelligent services. Think of a robot doctor who examines the patient, conducts tests, and uses algorithms to figure out the best diagnosis and treatment (best as defined by the human programmers, of course).
- a technology-based support system that enhances and amplifies the ability of a human to accomplish the service themselves. Think of a medical decision support system that works with a human doctor to figure out the best diagnosis and treatment, while always leaving the final decisions in the hands of the doctor.
How about the flip side:
- an artificial intelligence that has a gap in its programming constraints (e.g., Asimov's laws of robotics) that allows it to become the equivalent of the Matrix or Skynet.
- a technology-based support system that helps an evil human (e.g., Lex Luthor) become a powerful dictator.
I came across this question while listening to episode 2672 of the Daily Tech News Show. This is a discussion podcast that covers the latest news from the tech industry in a very engaging way that is both intelligent and accessible. What they refer to as Intelligence Amplification is what we typically refer to as Augmented Cognition.
Everyone’s excited and/or scared about artificial intelligence but should we be excited and/or scared about Intelligence Amplification instead?
The second question might seem a little far-fetched, but the first one is very important. You might take the easy way out and ask for both. But resources are limited, so many R&D teams are forced to choose. If you were GM or Ford's head of product development, would you invest in fully autonomous self-driving cars or in more sophisticated driver-assist technologies? Companies are making these tough decisions every day.
I have been interested in this dichotomy for a long time, especially in health care. Socio-culturally, there are many reasons why we don't accept fully autonomous systems, even when they are safer, faster, and more effective. The DTNS hosts use the example of elevators, where a human operator was considered necessary for several years before we were willing to accept automation, even though the operators weren't really doing anything except pushing buttons. We see the same pattern now with cars and drones.
There has been a wide range of research on trust in automation in the human factors discipline. A quick search of the HFES Proceedings for "Automation AND Trust" found 465 papers. The studies vary widely in what they examine and what they find, but a common conclusion is that we are not willing to accept full automation even when it performs as well as humans. An error by a computer is much more troublesome and disruptive to us than the same error by a human.
But how will all of this emerge in the future when we are all digital natives? When we all grow up comfortable with our lives in the cloud? When we never really know if our contacts on social media are human or automation?
Great topic for discussion. Please share extensively . . .
Image Credit: Aaron Friedman