Autonomous Car

Social Graces in AI

You have probably read a lot of coverage of ethics in AI design. We will be covering that here next week. But in the meantime, I came across a related issue that I wanted to share – whether we need AI to understand social conventions. In particular, two domains leapt out at me.

One is with humanoid robots that use emotional responses to establish rapport with their users and to be more effective at activities like health care support. We have talked about this one before.

My Take

So today I want to focus on the other one. It comes to us from Jerry Kaplan in the middle of a piece on ethics in autonomous vehicles (hence the starting point of this article).

Pedestrian versus car

He describes a scenario in which a car is approaching an intersection. At the same time, a pedestrian is deciding whether to cross. As a frequent pedestrian myself, I can attest to how often you have to guess whether the car will stop (regardless of what the traffic regulations require). A general rule is better safe than sorry. But a more practical approach is to make eye contact with the driver to make sure he/she sees you. You read the facial expression to judge recognition and an intention to yield. Often a smile and/or a nod is enough to do it.

But what if the car is autonomous? There might not be a (human) driver. Even if there is someone sitting in that seat, she may not be the decider.

Is there a design solution that solves the problem? It doesn’t have to be social; we could imagine some user interface widget that signals to pedestrians that the car is stopping. It could even signal that it has recognized the pedestrian’s right of way. Perhaps some icons on the hood of the car.
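To make that idea a bit more concrete, here is a minimal sketch of how such a signaling widget might work. Everything in it is hypothetical – the signal states, the detection fields, and the decision rule are illustrative placeholders, not any real vehicle’s API:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ExternalSignal(Enum):
    """Hypothetical icons a hood-mounted display could show to pedestrians."""
    PROCEEDING = auto()       # car intends to keep moving
    STOPPING = auto()         # car is braking to a stop
    YIELDING_TO_YOU = auto()  # car has recognized this pedestrian's right of way


@dataclass
class PedestrianDetection:
    near_crosswalk: bool
    has_right_of_way: bool


def choose_external_signal(braking: bool,
                           pedestrian: Optional[PedestrianDetection]) -> ExternalSignal:
    """Toy rule: explicitly acknowledge a detected pedestrian who has the
    right of way; otherwise just report whether the car is stopping."""
    if braking and pedestrian and pedestrian.has_right_of_way:
        return ExternalSignal.YIELDING_TO_YOU
    if braking:
        return ExternalSignal.STOPPING
    return ExternalSignal.PROCEEDING


# Example: the car is braking and has detected a pedestrian waiting at the crosswalk.
print(choose_external_signal(
    braking=True,
    pedestrian=PedestrianDetection(near_crosswalk=True, has_right_of_way=True),
))  # ExternalSignal.YIELDING_TO_YOU
```

Even this toy version shows the limitation: every signal, and every condition that triggers it, has to be anticipated in advance.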

But might it be better to reproduce the social interaction? Could we create the vehicular version of the smile and nod? Is there any added value of solving the problem this way?

Kaplan thinks we need to. There are so many interaction possibilities that he doesn’t think it is feasible to design a user interface solution that could cover that much variability and uncertainty, especially since we own cars for many years and new situations will emerge that haven’t been programmed in. A more generalized, principle-based social algorithm might be more effective in that environment.

Car versus van

How about this one? A delivery van has pulled up to the curb and the driver gets out to make a delivery. The van is wide enough that approaching cars need to cross over the double yellow line to get around it. But waiting is not a good option because the driver could be gone for quite a while. So what do you do? Most of us would go ahead and violate the low-value traffic rule, with the caveat that we first check carefully for oncoming traffic.

But what would the autonomous car do? Would we explicitly program it to violate the traffic rule when partially blocked by a delivery van? What would the variables be?

  • How urgent is the trip to the passenger?
  • How long is the van driver predicted to be gone?
  • How much visibility is there of oncoming traffic?
  • How far across the yellow line would the car have to go?
  • How risk tolerant is the passenger?
  • What jurisdiction is it in (and how much might a traffic ticket cost)?

There are all kinds of factors that we could include, making this a tough design challenge. And then we have to decide whether the ultimate decision should prioritize legal requirements, efficiency, or ethics.
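As a rough illustration of how those factors might combine, here is a minimal sketch of one such rule written as a weighted scoring function. The factor names, weights, and threshold are made-up placeholders for discussion, not how any real vehicle decides:

```python
from dataclasses import dataclass


@dataclass
class BlockedLaneSituation:
    """Hypothetical inputs for the 'delivery van blocking the lane' decision."""
    trip_urgency: float              # 0 (no hurry) .. 1 (very urgent)
    expected_wait_minutes: float     # predicted time until the van moves
    oncoming_visibility: float       # 0 (blind curve) .. 1 (clear sightline)
    encroachment_meters: float       # how far past the double yellow line we must go
    passenger_risk_tolerance: float  # 0 (very cautious) .. 1 (very tolerant)
    ticket_severity: float           # 0 (cheap/unenforced) .. 1 (expensive/strict)


def should_cross_double_yellow(s: BlockedLaneSituation, threshold: float = 0.5) -> bool:
    """Toy trade-off: benefit of going around versus safety risk and legal cost."""
    # Hard safety constraint first: never cross into a lane you cannot see.
    if s.oncoming_visibility < 0.3:
        return False

    benefit = 0.4 * s.trip_urgency + 0.4 * min(s.expected_wait_minutes / 10.0, 1.0)
    cost = (0.3 * (1.0 - s.oncoming_visibility)
            + 0.2 * min(s.encroachment_meters / 2.0, 1.0)
            + 0.2 * s.ticket_severity)
    score = benefit - cost + 0.2 * s.passenger_risk_tolerance
    return score > threshold


# Example: clear sightline, long predicted wait, fairly risk-tolerant passenger.
print(should_cross_double_yellow(BlockedLaneSituation(
    trip_urgency=0.6, expected_wait_minutes=8.0, oncoming_visibility=0.9,
    encroachment_meters=1.0, passenger_risk_tolerance=0.8, ticket_severity=0.4,
)))  # True with these made-up weights
```

Notice that the weights themselves encode a stance on exactly that question: which of legal requirements, efficiency, or ethics wins when they conflict.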

Your Turn

What would you suggest to the autonomous car in scenario two? What would you want it to do if you were inside? What would you want it to do if you were a driver in the oncoming traffic?

How about scenario one? Would you prefer a user interface widget or a socially aware interface?

Please share.

Image Credit: Saad Faruque
