[Image: parallel parked car]

What Is The Automation Doing? Mode Awareness Problems Catch Tesla By Surprise

One of the most compelling arguments for introducing autonomous vehicles is the potential to reduce the injuries and fatalities caused primarily by human error. According to the National Highway Traffic Safety Administration, human error is the critical factor in 94% of crashes, and advanced safety technologies can be expected to reduce these numbers substantially.

However, at the end of April this year, a parked Tesla Model S collided with a trailer parked in front of it after the “Summon” autopark feature was activated. With Summon, the driver can exit the car and use the key fob or a smartphone app to make it roll slowly forward or backward into a tight parking space. The collision caused only about $700 in damage to the luxury car’s windshield, but it may be a symptom of a greater problem.

The Model S boasts the recently released “Autopilot” suite, which includes Autosteer, speed assist, collision warning, adaptive cruise control, and Summon. However, despite the advanced automation in the Tesla, including the heralded Autopilot features that collectively can drive the car for you on well-marked roads, the roles and responsibilities of the driver are not clear. For example, according to Tesla Motors, Inc.’s release notes on Summon:

Model S may not detect certain obstacles, including those that are very narrow (e.g., bikes), lower than the fascia, or hanging from the ceiling. As such, Summon requires that you continually monitor your vehicle’s movement and surroundings while it is in progress, and that you remain prepared to stop the vehicle at any time.

In other words, despite all the obstacle detection technology that permits driverless operation while the car is moving, the car cannot see above a certain height when Summon is activated, and it is the driver’s responsibility to monitor the car to ensure the safety of the vehicle and everything around it.

Moreover, Summon includes a configuration option that arms the feature automatically when the driver exits the car. With this option enabled, double-clicking the gear selector sounds a chime and displays a message on the center console that lets the driver cancel Summon. Once the door is closed, the 4,600-pound car rolls itself forward until it reaches a preselected distance from the nearest object. According to Tesla, this is how the driver caused the accident. The driver does not recall activating Summon and claims the feature was activated after he left the area.
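To make the trap concrete, here is a minimal sketch of that opt-out flow in Python. It is our illustration, not Tesla’s code, and every name in it is invented; the point is simply that once the option is enabled, the default outcome of a double-click followed by a closed door is a moving car, and the driver must notice the chime and actively cancel to keep it parked.

    class SummonStandby:
        """Hypothetical model of the opt-out activation flow described above."""

        def __init__(self, standby_enabled: bool):
            self.standby_enabled = standby_enabled  # the configuration option
            self.armed = False

        def on_gear_selector_double_click(self, driver_cancels: bool) -> None:
            if not self.standby_enabled:
                return
            print("chime + console message: 'Summon armed. Tap to cancel.'")
            # Opt-out design: doing nothing leaves the feature armed.
            self.armed = not driver_cancels

        def on_door_closed(self) -> str:
            if self.armed:
                # Roll to a preselected distance from the nearest detected object.
                return "car rolls forward"
            return "car stays parked"

    # A driver who never registers the chime takes no cancelling action:
    car = SummonStandby(standby_enabled=True)
    car.on_gear_selector_double_click(driver_cancels=False)
    print(car.on_door_closed())  # prints "car rolls forward"

Run as written, the script ends with the car rolling forward, which is essentially the sequence Tesla described.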

This incident, which comes as no surprise to aviation human factors researchers, is a classic example of a mode awareness failure (i.e., not understanding that the car was in Summon mode) compounded by the automation’s limitations.

It also shows how automated features that place drivers in a supervisory role may introduce new errors. When automation allows us to supervise the driving task (in some cases from outside the car!), it has to provide better information to keep the driver in the loop and/or perform robustly in uncertain conditions. And it absolutely must fail gracefully and prevent the human from issuing erroneous or accidental commands.
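For contrast, here is one possible fail-safe alternative, again a hypothetical sketch with invented names rather than an existing Tesla feature: make movement opt-in and require continuous driver input, so that a missed prompt defaults to the car staying put and releasing the control stops it.

    def summon_step(driver_confirmed: bool, holding_button: bool,
                    path_clear: bool) -> str:
        # Default deny: if anything is missing or uncertain, the car stays put.
        if not driver_confirmed:
            return "stay parked (no explicit opt-in)"
        if not holding_button:
            return "stop (hold-to-run control released)"
        if not path_clear:
            return "stop (obstacle detected or sensing uncertain)"
        return "creep forward one small increment"

    # The failure mode in the incident maps to the first branch:
    print(summon_step(driver_confirmed=False, holding_button=False,
                      path_clear=True))  # prints "stay parked (no explicit opt-in)"

Under this design, the same distracted driver would have walked away from a stationary car.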

Finally, this incident highlights the need for more testing with actual users in myriad scenarios to ensure we get the greatest safety benefits from our automated systems.

About the Guest Author: Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) at Duke University. He has a PhD in industrial and systems engineering and Master’s degrees in experimental psychology and industrial engineering. He is also a Certified Human Factors Professional. Since 2002 he has worked as a human factors engineer investigating the intersection between people and technology.

About the Guest Author: Mary (Missy) Cummings received her PhD in systems engineering from the University of Virginia in 2004. She is a Professor in the Duke University Pratt School of Engineering and the Duke Institute of Brain Sciences, and is also director of the Humans and Autonomy Laboratory and Duke Robotics. She is a member of several national and international advisory boards, including the NASA Advisory Council Aeronautics Committee and the Foundation of Responsible Robotics Executive Board.

Image Credit: Gatanass
