Black Hat Design

I ranted a while ago about the design approach of the viral “oops,” in which the design misleads the user into doing something (like clicking or liking) that gets the content shared throughout his or her network. For example, have you ever seen an article in your Facebook newsfeed with an interesting-sounding title, clicked on it, and then discovered it was cheap marketing? You immediately click away and go on about your business. But in the meantime, the Facebook algorithm assumes you liked it, so it adds it to your friends’ newsfeeds. The viral oops is basically the spam model – eventually someone will become a customer, and who cares how many people you alienate in the process since they weren’t going to become customers anyway.

It seems that the more we learn about human behavior, the more strategies some of us (not you, of course) develop that produce short-term gains for the company at the expense of the best interests of the user. I refer to these as black hat design (stealing the term from the hacker community) to contrast it with white hat design, which still has business objectives but also supports the best interests of the user and is therefore the better long-term strategy.

The most recent example I came across in the human factors domain is from Nielsen Norman Group (yes, I am naming names today – this is important). They published an article called “Scarcity Principle: Making Users Click RIGHT NOW or Lose Out”. Scarcity is not a new phenomenon; it has been discussed for decades in the behavioral science domain. And it can be used for white hat design as well as black hat. But if you read NN’s article, it doesn’t make that distinction. It is essentially an instruction manual for how to get users to buy things they don’t need, aren’t ready to commit to yet, or aren’t the best match for their needs.

Feeling that there is only one chance can convince people to take action sooner, sometimes without careful consideration of consequences or alternative options.

The same thing is true of other persuasive design or behavioral design techniques. They can be used for white hat or for black hat design. Take the classic environmental design trick of making one choice easier to select than another (e.g. in the lunchroom). Some people always grab the unhealthful food and some always grab the healthful food. But by putting the junk food in the back and the healthful food in the front, a significantly greater number of elementary school students end up with a more healthful lunch. And because the choice was fully under their conscious control, they maintain perceived autonomy and are happy with their decision. Everyone wins, so this is white hat design.

But now do the same thing in reverse. What if a grocery store puts a higher margin item in the front and a loss leader in the back to increase profitability? The customer is none the wiser. You might say that the customer is still the one making the choice, so why not? That is the attitude in the NN article and in many of the others I have seen. In contrast, Chris Nodder (author of Evil by Design) has a great article where he brings up this question. He recommends only using persuasive techniques when they are also in the best interests of the user (or magic shows). And I was heartened to see most of the commenters agreed with him.

But what are we really doing in practice? As I have discussed in many articles on this site, our capacity for self-delusion is quite powerful. A forthcoming article (so stay tuned) on the slippery slope of unethical behavior summarizes a whole body of recent research on how easy it is to fall into the trap of black hat design: when it becomes a standard practice, when it supports your quantitative targets (visits, sales, shares), when you get praise for how great your design is, when you can justify it with “everyone does it” or “the user is still the one making the choice,” and so on.

Image credit: Vincent Diamante

5 thoughts on “Black Hat Design”

  1. I fully agree that we should use our extensive knowledge of persuasive design to better the planet, and not just line our pockets. We know better than to say that “the user made the choice” when we stack the deck. That is the foundational finding that launched Human Factors Engineering (HFE): that if we change the tools that people are given, we will change their behaviors. However, I think we need to be brutally honest with ourselves about the goals of each design project. What is the team’s, or the company’s, definition of success? What are we incented to do? Because, even if we as designers have the noblest of intentions, those incentives will win the day. Only by changing the incentive structures (e.g., providing payment clauses for medical device manufacturers that include improving patient outcomes) will we change behaviors. Which is just a system-level view of the same HFE premise – only by changing high-leverage structural aspects of the system (in this case, goals) will we change behaviors.

  2. This brings to mind for me the recent changes I’ve seen in price defaults for charity contributions. Instead of the more typical $20, $50, $100 defaults and an option to type in a specific amount, I’ve seen things like $23, $36, $60, etc. I was in one of these situations with the intention of donating $30–$35 ($20 seeming too little to me and $40 being more than I wanted to share) and was presented with a default option of $36 (were they reading my mind?). Going $1 over my intended number seemed worth it to avoid having to complete the “fill in any amount” field. Although you may call this “white hat” because the receiver was a charity, I still felt manipulated, in a negative way.
