Facebook’s Psychological Experiments

Each week, Educating Modern Learners will pick one interesting current event – whether it’s news about education, technology, politics, business, science, or culture – and help put it in context for school leaders, explaining why the news matters and how it might affect teaching and learning (in the short or the long run). We’re not always going to pick the biggest headline of the week to discuss; the application to education might not be immediately apparent. But hopefully we can provide a unique lens through which to look at news stories and to consider how our world is changing (and how schools need to change as well). This week (the week of June 30), Audrey Watters looks at a recently published study in which Facebook revealed it had manipulated users’ news feeds in an attempt to elicit positive or negative emotions.
Late last week, the latest anti-Facebook firestorm was unleashed. Scientists from Facebook, Cornell University, and the University of California, San Francisco published a paper in the Proceedings of the National Academy of Sciences detailing research conducted on Facebook users:
“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”
That is, scientists manipulated users’ news feeds to see if “positive” or “negative” content influenced what they posted and what they “liked.”
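To make the experimental design a little more concrete, here is a minimal, purely illustrative sketch of the feed-filtering step the paper describes: randomly withhold a share of positive (or negative) posts from what a user sees. This is not Facebook’s actual code or sentiment model; the sample posts, the +1/0/−1 sentiment labels, and the 50% omission rate are all invented for illustration.

```python
import random

def filter_feed(posts, condition, omit_rate=0.5, rng=random):
    """Withhold a share of positive or negative posts, depending on condition."""
    filtered = []
    for post in posts:
        if condition == "reduce_positive" and post["sentiment"] > 0 and rng.random() < omit_rate:
            continue  # withhold this positive post
        if condition == "reduce_negative" and post["sentiment"] < 0 and rng.random() < omit_rate:
            continue  # withhold this negative post
        filtered.append(post)
    return filtered

def mean_sentiment(posts):
    """Average sentiment of a list of posts (0.0 for an empty list)."""
    return sum(p["sentiment"] for p in posts) / len(posts) if posts else 0.0

# Toy feed: sentiment is +1 (positive), -1 (negative), or 0 (neutral).
feed = [{"text": t, "sentiment": s} for t, s in [
    ("Great day!", 1), ("Feeling awful", -1), ("Lunch", 0),
    ("So happy", 1), ("Terrible news", -1), ("New shoes", 1),
]]

for condition in ("control", "reduce_positive", "reduce_negative"):
    shown = feed if condition == "control" else filter_feed(feed, condition)
    print(condition, "-> mean sentiment of feed shown:", round(mean_sentiment(shown), 2))
```

The actual study went a step further: after filtering feeds this way for a week, the researchers measured the emotional words in what those roughly 689,000 users went on to post themselves. The sketch above stops at the filtering step.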
Facebook has apologized: “We never meant to upset you,” said COO Sheryl Sandberg (well, except according to the research design, they did). The company has a long history of apologizing to users, of course, particularly over privacy concerns.
But this isn’t a matter of privacy; it’s a matter of ethics, particularly surrounding scientific experimentation on human subjects.
As University of Maryland law professor James Grimmelmann has written, Facebook users didn’t give informed consent. The study claims otherwise: “[The study] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.” (In fact, Forbes suggests that Facebook changed its terms of service to include “research” four months after the study was conducted.) Federal law requires federally funded research to obtain informed consent from human subjects; that is part of the IRB approval process that studies like these, particularly at universities, must complete. As Grimmelmann notes, “I don’t know whether I’m more afraid that the authors never obtained IRB approval or that an IRB signed off on a project that was designed to (and did!) make unsuspecting victims sadder.”
(Grimmelmann has collected many of the articles and official statements — from Cornell and from Facebook — on this issue. So has GigaOm’s Mathew Ingram.)
So, “should we worry that technology companies can secretly influence our emotions?” asks technology writer and virtual reality pioneer Jaron Lanier. “Apparently so.”
But some have responded to the news that Facebook attempted to manipulate users’ emotions with a shrug: corporations, marketers, and governments do this all the time. (Indeed, there have been suggestions that this particular research was connected to the US military.)
Sociologist Zeynep Tufekci, however, challenges those who are dismissive of the implications of Facebook’s experiments:
“I’m struck by how this kind of power can be seen as no big deal. Large corporations exist to sell us things, and to impose their interests, and I don’t understand why we as the research/academic community should just think that’s totally fine, or resign to it as “the world we live in”. That is the key strength of independent academia: we can speak up in spite of corporate or government interests.
To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!
That is one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of 21st century. This shift, in my view, is just as important as the fact that we, the people, can now speak to one another directly and horizontally.”
Indeed, as Janet Vertesi writes in TIME, cuts in federal funding for social science research mean that more and more academics are partnering with corporations to pursue their experiments. That research is less likely to be covered by human subjects protections, and, particularly in light of the furor over Facebook, it may be less likely to be published in academic journals or shared with the public at all.
Writing in The Atlantic, Kate Crawford argues that
It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science. Shifting to opt-in panels of subjects might produce better research, and more trusted platforms. It would be a worthy experiment.
Social media researcher danah boyd writes that
Information companies aren’t the same as pharmaceuticals. They don’t need to do clinical trials before they put a product on the market. They can psychologically manipulate their users all they want without being remotely public about exactly what they’re doing. And as the public, we can only guess what the black box is doing.
There’s a lot that needs reform here. We need to figure out how to have a meaningful conversation about corporate ethics, regardless of whether it’s couched as research or not. But it’s not so simple as saying that a lack of a corporate IRB or a lack of an “informed consent” gold standard means a practice is unethical. Almost all manipulations that take place by these companies occur without either one of these. And they go unchecked because they aren’t published or public.
Indeed, a Wall Street Journal article on Facebook’s data science team details a number of other experiments that have been run on unsuspecting users.
Is this really the trade-off we must make when we use technology? What sorts of oversight should technology companies have here? What do corporate ethics look like in a world of big data?
And, importantly for educators: how do we balance the push for more algorithms in education (via adaptive learning, for example) with the need for transparency and ethics about how students’ lives may be manipulated, even with, ostensibly, the very best of intentions?
Image credits: Maria Elena