A study published this month by a data scientist at Facebook raises some interesting questions about ethics, big data, and the monitoring, collection, and manipulation of our behavior on social media.
This topic matters to all of us because data is being collected for all kinds of reasons: basic research, design, user modeling, ethnography, and many others. Whatever sector you work in, this affects you.
Apparently, Facebook “adjusted” its algorithm to vary the number of positive and negative posts that users saw in their News Feeds, and then measured the number of positive and negative posts those users subsequently wrote. The actual findings of the study are interesting as well, but they will have to be the subject of a future post. Today I want to talk about the ethical implications of the methods they used.
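The paper does not publish its pipeline, but the basic design can be sketched as a toy simulation. Everything here is an illustrative assumption, not Facebook's actual code: the word lists stand in for a real sentiment dictionary, and `filtered_feed` stands in for the treatment of probabilistically omitting posts of one sentiment.

```python
import random

# Tiny illustrative word lists standing in for a real sentiment lexicon.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(post):
    """Crude per-post score: +1 positive, -1 negative, 0 neutral."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return 1
    if words & NEGATIVE:
        return -1
    return 0

def filtered_feed(posts, suppress, omit_prob, rng):
    """Hypothetical treatment: drop each post whose sentiment matches
    `suppress` (+1 or -1) with probability `omit_prob`."""
    return [p for p in posts
            if sentiment(p) != suppress or rng.random() >= omit_prob]

rng = random.Random(0)
feed = ["I love this", "what an awful day", "great news",
        "so sad", "meeting at noon"]

# Treatment group: suppress positive posts most of the time.
treated = filtered_feed(feed, suppress=1, omit_prob=0.9, rng=rng)

# Outcome measure: sentiment of the user's own subsequent posts.
later_posts = ["feeling terrible", "hate mondays", "lunch was fine"]
neg_share = sum(sentiment(p) == -1 for p in later_posts) / len(later_posts)
```

Comparing `neg_share` between treatment and control groups is the kind of aggregate, anonymized statistic the study reported; no individual user's data needs to be inspected.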
Adam Kramer, a Facebook data scientist who was among the study’s authors, wrote on his Facebook page yesterday that the team was “very sorry for the way the paper described the research and any anxiety it caused.”
Here are the specific issues that jumped out at me.
First, they manipulated their algorithm in a way that changed the emotional experience of their users. Of course, Facebook is free to use whatever algorithm they want when selecting which posts to show. There is no legal basis precluding them from modifying it from one user to another, one day to another, one geography to another. But there is a wide body of research showing that emotional content matters. So is it ethical to manipulate it? Judging by the response, users were none too pleased when they found out. Privacy advocates were also incensed.
Facebook defends the study on the grounds that all of the data was reported anonymously. Depending on your point of view, that may mean it was not a privacy violation. But I think this misses the point. If they degraded a user’s emotional experience over the course of a month by showing that user more negative posts, that may be a personal violation rather than a privacy violation. What if the negative emotion transferred to other decisions I made that month? Maybe I chose not to apply for a job or buy a new stereo. Perhaps I made worse decisions at work. Research on emotion shows that these emotional footprints carry over strongly from one domain to another. This study from Fordham University is a good example.
Does Facebook’s study have implications for the institutional review boards (IRBs) that researchers use to get their study designs approved? Faculty use university IRBs. Facebook has its own internal review board, as do most companies that do research. If an IRB approves a study, should we automatically give the study a pass on the ethics? Are internal boards objective?
Then we also have to think about what happens as a result. The Bloomberg article quotes James Grimmelmann of the University of Maryland, who notes that Facebook holds a powerful position in the social media market because of its huge user base and the network effects of social networks. We can’t really go anywhere else, so we are stuck accepting whatever (legal) practices Facebook engages in. We can complain all we want, but Facebook doesn’t have to oblige.
And since much of what the industry (not just Facebook) does is secret, most of the time we don’t even know what is being done, so we can’t complain even if we want to. In this case, Facebook’s data scientist published the study in an academic venue, someone noticed the paper and shared it with the mass media, and the media publicized it. If any of these steps had been disrupted, we would never have found out.
I am very interested to hear what you think about these issues.