You have probably read a lot of coverage of ethics in AI design. We will be covering that here next week. But in the meantime, I came across a related issue that I wanted to share – whether we need AI to understand social conventions. In particular, two domains leapt out at me. One is humanoid robots that use emotional responses to establish rapport with their users and to be more effective at activities like health care support. We have talked…
I came across this design case by Edward Wilson from Aymmetrica Labs and it got me thinking. He makes a convincing pitch about how to craft an engaging and convincing narrative around a product design. But what he describes is clearly a deceptive approach, what strikes me as an example of black hat design.
In the past I spent far too long trying to sell people on excellent but complicated, unintuitive, unfamiliar narratives. I competed with others who offered the same stuff I did, but they dumbed it down; they made it less effective, but easier to understand. The other guys always won. It took a while (15 years), but now I get it. My job is to take the excellent content I am working with and make it intuitive, obvious, and familiar. If I don’t make it intuitive, obvious, and familiar, I won’t have an audience.
Jessica Kennedy, from Vanderbilt University’s School of Management, is interviewed by Laura Geller in the Strategy + Business Thought Leadership column on how this applies to team leadership and management – and in a way I would not have expected.
Kennedy has researched the origins of unethical behavior and why it takes hold. She has found that the story is more complex than the old adage that power corrupts. Rather, power causes people to identify so strongly with their group that they lose sight of whether that group’s actions cross an ethical line. This identification can lead them to support misconduct rather than stop its spread.
If you owned a robot that was capable of feeling pain, would it be ethical to cause it? This question was recently asked in a very thought-provoking essay on wtvox.
On the other side, is it OK to torture or murder a robot? We form such strong emotional bonds with machines that many of us can’t be cruel to them, even though we know they are not alive. So should robots have rights? Mistreating certain kinds of robots could soon become unacceptable in the eyes of society. In what circumstances would it be OK to torture or murder a robot? And what would it take to make you think twice before being cruel to a machine?
Editor’s Note – We are happy to introduce this guest article from Moin Rahman, Founder of HVHF Sciences. His bio and a link to his company’s web site are located at the end of the article.
Is there a Hippocratic Oath – or something similar – for human factors practitioners? I have not heard of one specific to human factors, although there is a similar oath for engineers, and there have been discussions about an oath for scientists and engineers in general. Nevertheless, human factors professionals are driven by our morals and professional ethics to design devices and solutions that, in the words of Asimov’s First Law of Robotics, “[A robot] may not injure a human being or, through inaction, allow a human being to come to harm.” Good so far. But the ethics of a human-machine system or complex sociotechnical system (STS), particularly at the intersection of humans and safety-critical technology, may not receive the attention they deserve.
I joined Barrett Caldwell’s Scout the Future program. The idea is to heighten our awareness of emerging systems, environments, technologies, and social movements, and of how human factors can be applied to them.
Through the program, HFES members who are involved in cutting-edge technologies, have particularly broad connections in diverse research and engineering domains, and who can spot a trend before it hits the mainstream can share that information with the Executive Council.
I ranted a while ago about the design approach of the viral “oops,” in which the design misleads the user into doing something (like clicking) that gets the content shared throughout his or her network. For example, have you ever seen an article in your Facebook newsfeed with an interesting-sounding title, clicked on it, and then discovered it was cheap marketing? You immediately click away and go on about your business. But in the meanwhile, the Facebook algorithm assumes you liked…
I was a little upset when I read this article, which is from someone whose ideas I usually hold in high regard. The article is about what he calls the viral “oops.”
Unlike viral loops, which are actions users take in the normal course of using a product to invite new members, viral oops rely on the user ‘effing-up.
In essence, this is when a user shares your content by accident, blames himself for the mistake, and you get the benefits without the costs of the error.
This is another metric tradeoff that is of great interest to me, both professionally and philosophically. What do you do when your design process is faced with a tradeoff between two options: one that will work better but violates a principle you think is important (though not formally illegal or unethical), and one that works less well but has no such violations? This is top of mind for me this morning because of a debate we are having in Boston about P2P parking apps like Haystack. If you are unfamiliar with these apps, they let someone who is leaving a parking spot announce it on the app network, and someone looking for a spot can grab it – for a fee, of course.
A study published this month by a data scientist at Facebook brings up some really interesting issues about ethics, big data, and the monitoring, collection, and manipulation of our behavior on social media. This topic is important for all of us because data is being collected for all kinds of reasons: basic research, design, user modeling, ethnography, and many others. So no matter what sector you are in, this matters to you…