One of the most compelling arguments for introducing autonomous vehicles is the potential reduction in injuries and fatalities caused primarily by human error. According to the National Highway Traffic Safety Administration, human error is a factor in 94% of fatal crashes, and advanced safety technologies can be expected to reduce these numbers substantially.
However, at the end of April this year, a Tesla Model S, starting from a parked position, collided with a trailer parked in front of it after the “Summon” autopark feature was activated. With Summon, the driver can exit the car and make it roll slowly forward or backward into a tight parking space using the key or an app. The collision caused only about $700 in damage to the luxury car’s windshield, but it might be a symptom of a greater problem.
Everyone’s excited and/or scared about artificial intelligence, but should we be excited and/or scared about intelligence amplification instead?
I have been interested in this dichotomy for a long time, especially in health care. Socio-culturally, there are many reasons why we don’t accept fully autonomous systems, even when they are safer, faster, and more effective. The DTNS hosts use the example of elevators, where a human operator was retained for years before we were willing to accept automation, even though the operator wasn’t really doing anything except pushing buttons. We see the same pattern now with cars and drones.
Vince Mancuso and his colleagues from Wright-Patterson Air Force Base presented a paper on cyber human supervisory control at User Experience Day, held last week at the Human Factors and Ergonomics Society annual conference. The paper won the best paper award and a $1,000 prize, sponsored by State Farm.
The paper investigated the performance of a human supervisor in cyber security applications and how that performance changes as the number of autonomous cyber assets to monitor increases. The authors used the BotNET Operator Agent Ratio Determination (BOARD) system as the test environment and gave participants a series of missions to accomplish.
I was really intrigued by this article in the Ideas section of the Boston Sunday Globe. It discusses the interaction between a robot’s projected personality and user acceptance. One of the things I really like about the Globe’s Ideas section is that they cover the original research pretty well, unlike some other media outlets that I have ranted about recently.
Smart machines need the right “personality” to work well—and experts are finding the best choice may not always be what we think we want.
It is really good to keep in mind, as we migrate to fully automated systems, that inevitably a human will be forced into the loop when something goes wrong or unexpected. When that happens, the operator’s low situation awareness can be devastating. This article by HFI has some great examples.