I was really intrigued by this article in the Ideas section of the Boston Sunday Globe. It talks about the interaction between a robot’s projected personality and user acceptance. One of the things I really like about the Globe’s Ideas section is that they cover the original research pretty well, unlike some other media outlets that I have ranted about recently.
Smart machines need the right “personality” to work well—and experts are finding the best choice may not always be what we think we want.
In my visual design course last semester a student brought up the concept of the uncanny valley. The idea is that there is a dip, the “valley,” in the curve relating how human-like a non-human entity appears to how well it is accepted. If something is fully human (at the extreme right of the curve), then it is fine. If it is clearly non-human (toward the left of the curve), then it is fine also. But at the point of the curve where it is eerily “almost human,” it is rejected. At that point, it gets associated with a zombie, with disease, or with sociopathy. So it was fun to read the article contrasting R2-D2 and C-3PO in the May issue of Smithsonian…