On 17 June, researchers affiliated with Cornell University, the University of California, San Francisco, and Facebook Inc. published a study of emotional contagion. By manipulating the News Feeds of Facebook users (N = 689,003), they reduced the number of either positive or negative emotion words users were exposed to. They then measured the number of positive and negative words these same users subsequently produced in their own posts, and indeed found evidence for what they suggest is emotional contagion: users in the experimental group who were exposed to fewer positive words than the control group (whose News Feeds were left undisturbed) produced fewer positive and more negative words themselves. The opposite held for users exposed to fewer negative words. The results are statistically significant, but the effect is tiny.
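That last point is worth making concrete. With close to 700,000 participants, even a minuscule difference between groups clears the bar of statistical significance. The sketch below (an illustration, not the paper's actual analysis) runs a two-sample z-test on a hypothetical standardized effect of d = 0.02, i.e. a group difference of two percent of a standard deviation, at two sample sizes:

```python
# Illustration with made-up numbers: a tiny standardized effect (d = 0.02)
# is overwhelmingly significant at the study's scale, but invisible in a
# typical small lab sample.
from math import sqrt, erf

def two_sample_p(d, n_per_group):
    """Two-sided p-value of a two-sample z-test for a standardized mean difference d."""
    z = d / sqrt(2 / n_per_group)                  # SE of a difference of two means, unit SDs
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal CDF
    return 2 * (1 - phi(abs(z)))

d = 0.02                                # "tiny": 2% of a standard deviation
print(two_sample_p(d, 344_500))         # roughly the study's per-group size: p effectively 0
print(two_sample_p(d, 100))             # a small lab study: p ≈ 0.89, no evidence at all
```

Significance, in other words, tells us the effect is probably real, not that it matters: at this N, almost any nonzero difference would register as significant.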
To most people, more remarkable than the study’s results was the ethics of the experiment. The Facebook users involved did not give informed consent, nor were they debriefed. As Katy Waldman put it in Slate:
Facebook “intentionally manipulated users’ emotions without their knowledge”.
Others have defended the study: it’s simply a part of Facebook’s ongoing effort to improve its service, and at least this time the results were published – open access even. And besides, no harm was done. Reading more or fewer emotion-related words in your News Feed is not going to upset your mental balance. People aren’t made of spun glass, as someone put it.
In my opinion, the question of harm is irrelevant. Even if there is no risk for the participants, informed consent is mandatory in behavioral research. It is, firstly, not up to the researchers to decide whether the experiment is potentially harmful: that is for the participants themselves to decide. Any other arrangement would be paternalistic. And secondly, it is crucial that people participate voluntarily. Granted, there is a provision in the small print of Facebook’s terms of agreement about the company’s right to use user data for research. Moreover, even if they didn’t read those terms before clicking “accept”, most Facebook users are aware they sold their soul to the company when they signed up. However, it rather stretches the meaning of the term to call this “informed consent”, and publishing your findings in the Proceedings of the National Academy of Sciences hardly constitutes “debriefing”.
The point of informed consent is not to avoid harm, it is to make sure that behavioral research is not just about people, but is done with and for people.
Informed consent forces researchers to consider the interests of their experimental subjects, and of the people they study in general. And it works both ways, because informed consent likewise forces the participants to consider the interests of the researchers and to think about the subject under study. The fact (if it is a fact) that most participants don’t care (they’re just in it for the credits) is not an argument for taking informed consent and debriefing less seriously, but rather the reverse: it means they are not as interested in behavioral research as they should be, and we must try harder to argue for its value.
Relevant Publications and Links
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. doi:10.1073/pnas.1320040111
Waldman, K. (2014, June 28). Facebook’s Unethical Experiment. Slate. Retrieved from http://www.slate.com/articles/health_and_science/science/2014/06/facebook_unethical_experiment_it_made_news_feeds_happier_or_sadder_to_manipulate.html