Everyone’s excited and/or scared about artificial intelligence, but should we be excited and/or scared about Intelligence Amplification instead?
I have been interested in this dichotomy for a long time, especially in health care. Socio-culturally, there are many reasons why we don’t accept fully autonomous systems, even when they are safer, faster, and more effective. The DTNS hosts use the example of elevators, where a human operator was required for years before we were willing to accept automation, even though the operators weren’t really doing anything except pressing buttons. We see the same pattern now with cars and drones.
I can’t decide whether this is a triumph for analytics and algorithms or one of those gaps that is ripe for human attention.
Arjun Chandrasekaran from Virginia Tech and pals say they’ve trained a machine-learning algorithm to recognize humorous scenes and even to create them. They say their machine can accurately predict when a scene is funny and when it is not, even though it knows nothing of the social context of what it is seeing.
You have probably read a lot of coverage of ethics in AI design. We will be covering that here next week. But in the meantime, I came across a related issue that I wanted to share: whether we need AI to understand social conventions. In particular, two domains leapt out at me. One is humanoid robots that use emotional responses to establish rapport with their users and to be more effective at activities like health care support. We have talked…