
Facebook emotion manipulation study: an explanation - mrmaddog
https://www.facebook.com/akramer/posts/10152987150867796
======
yazinsai
This comment on the piece, by Sean Tucker, aptly sums up my response:

Adam, I don't know you -- I came here from the Buzzfeed article criticizing
the ethics of the study that linked to this post, but it appears we do have a
friend in common.

I just have to ask - you honestly had a hypothesis that amounted to 'perhaps
we can make people more depressed,' and decided to test it on a group that
hadn't consented to the experiment, with no way to track its impact on their
actual lives, only on the language they used in their Facebook posts? And you
ask us to trust that this passed an internal review so it's ethical?

Please take a moment to step back and consider that. That appears to have been
the train of thought that led to this.

That's appalling. Completely appalling. The Atlantic piece is right -- there's
absolutely no way this passes APA deceptive research standards.

Beyond that, you'll never know what impact this actually had on depressed
people. You can only measure what they posted to Facebook, which isn't a
particularly meaningful or realistic indicator of their emotional state.

If this passed an internal review board, that's only proof that Facebook's
internal review standards aren't what they need to be.

You're in a position of extraordinary power, with access to more subscribers
than any other field study in history, a larger population than most nations,
and subject only to how you review yourselves. You could deceive yourself into
believing you have informed consent because everyone clicked 'accept' on the
Terms of Service years ago, but there's no way even you think that's a
meaningful standard.

I trust you're a reasonable person who doesn't set out to cross ethical
boundaries. But on this one, I think Facebook needs to admit it did and make
some changes. This study was unethical by any reasonable standard. There's
nothing wrong with admitting that and figuring out a way to do better.

There's a lot wrong with going ahead with anything like this, ever again.

~~~
zaroth
This is Facebook. It has a tremendous effect on people's lives. And that's
precisely why it's worth $100B. This is nothing more than a tweak in their
ranking algorithm -- much larger and even targeted/dynamic modifications occur
all the time. How can a reasonable person be upset by this?

If the study is unethical by any reasonable standard, does this amount to
condemning the whole company, even the whole industry? Because if this is
unethical, you are calling out an extremely widespread practice...

~~~
3rd3
There is no difference between tinkering with an algorithm with the intention
to negatively influence people's emotions and tinkering with an algorithm to
make it work better?

~~~
zaroth
Who is to say those two statements are at odds with one another?

Also, almost every A/B test that results in a measurable impact on user
response will have done so by positively vs. negatively impacting the users'
emotions on the A/B legs of the study.

But more to the point, this is not even a fair characterization of what
Facebook did. They made a tweak to their ranking algorithm which they deployed
to 1/2500 of their users. Much later they discovered there was an extremely
small but measurable impact on those users' engagement, and on the types of
words they included in their subsequent posts. They actually didn't know at
all ahead of time what the effect would be, and in fact the effect they
observed was the opposite of the widespread expectation.
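
For the curious, here is a minimal sketch of how such a bucketed test might
work, assuming deterministic hash-based assignment and a toy word list in
place of the LIWC categories the paper used -- every name, word list, and
threshold below is hypothetical:

    import hashlib

    # Toy word lists standing in for LIWC-style emotion categories.
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
    NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

    def in_experiment(user_id, experiment, fraction=1 / 2500):
        """Deterministically assign ~`fraction` of users to the test arm.

        Hashing (experiment, user_id) yields a stable pseudo-uniform value
        in [0, 1), so assignment stays consistent across page loads with no
        per-user state.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return int(digest[:8], 16) / 0x100000000 < fraction

    def emotion_word_counts(post):
        """Count positive and negative words in a post (a crude stand-in
        for LIWC scoring)."""
        words = post.lower().split()
        return (sum(w in POSITIVE_WORDS for w in words),
                sum(w in NEGATIVE_WORDS for w in words))

    # Usage sketch: bucketed users get the tweaked feed ranking; later, the
    # word counts of their subsequent posts are compared against controls.
    if in_experiment("user42", "feed-ranking-tweak-2012"):
        print(emotion_word_counts("feeling great about this wonderful day"))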

------
r0h1n
Wow, what an amazingly tone-deaf post. It paints Facebook users who were
offended by the company's actions as simpletons who did not really understand
what the study was about. Sample this:

> _Nobody's posts were "hidden," they just didn't show up on some loads of
> Feed._

How is hiding any different from not showing up?

> _And at the end of the day, the actual impact on people in the experiment
> was the minimal amount to statistically detect it._

Not what your own study claimed.

> _I can understand why some people have concerns about it, and my coauthors
> and I are very sorry for the way the paper described the research and any
> anxiety it caused._

We are not sorry about the research, but only for "the way the paper described
it".

> _In hindsight, the research benefits of the paper may not have justified all
> of this anxiety._

In hindsight, our users are hyperventilating frogs. They should learn how to
relax in the nice warm(ing) Facebook waters.

~~~
zaroth
I am really trying to understand both sides of this. Focusing on the actual
research, what about it do you think requires informed consent?

Do you understand how widespread this kind of research is? Literally everyone
does this.

The act of publishing can't be the ethical breach -- so, focusing just on the
research, what do you think they did wrong there?

~~~
jamesbrownuhh
Perhaps it could be summed up as "Attempting to negatively influence a
visitor's mental state without their knowledge or consent IS NOT COOL."

~~~
polarix
The assertion that this study refuted was that too much positive bias in a
filtered source of information causes negative sentiment. Reducing said
positive bias had heretofore unknown effects. The attempt was to learn, not to
negatively influence anyone's mental state.

~~~
chris_wot
Just like the Milgram Experiment was designed to see whether a small number of
people could be coerced into believing they are torturing and killing people
when so ordered? It wasn't designed to cause mental distress, but to see what
happened.

You see, that's why there are ethics committees at Universities.

------
gemma
_I can understand why some people have concerns about it, and my coauthors and
I are very sorry for the way the paper described the research and any anxiety
it caused. In hindsight, the research benefits of the paper may not have
justified all of this anxiety._

Ignoring the classic "I'm sorry you were freaked out" non-apology here, this
response completely misses the point, and tries to re-frame the concept of
informed consent into an inconsequential piece of red tape that anxious people
worry about.

People were made subjects of a scientific experiment without their knowledge
or consent. It doesn't matter that Facebook took steps to make the changes as
small as possible; it doesn't matter that the effect on the individual
subjects may have been minor; it doesn't matter that the results were
interesting. It was an ethical breach, and this tone-deaf response is fairly
unsettling.

~~~
elpool2
Is it an ethical breach if I don't get informed consent from users when I do
A/B testing on my website? Or is it only unethical if I publish the results
afterwards? Or is it only if I'm actually trying to affect the user's mood
specifically?

~~~
gemma
If your A/B testing is intentionally inducing a negative mental state in your
users without letting them know up front, I'd say that's probably an ethical
breach.

In general, A/B testing is studying the website. You're not testing us, you're
testing the best ways to convince us to do stuff. On top of that, when users
visit your website (which is presumably pitching something to them), they know
they're being pitched. They know the website is going to try to convince them
to do stuff--click over here, watch this video, sign up for
foogadgetmcawesome. Same drill with advertising. Yeah, this commercial with
the freaking Budweiser puppy made me cr--er, chop onions--in its attempt to
get me to buy beer, but I know they're trying to sell me beer, and I knew as
soon as the puppy hit the screen that I'd probably start sniffling.

~~~
benastan
I don't see the distinction. It's fine to market your product, but there is
such a thing as distortion, and such a thing as a scam. Providing a fair
representation of your product factors into ethics, as does Facebook fairly
presenting the feed of your friends' news, with a reasonable attempt at NOT
distorting it.

~~~
gemma
I agree! I'd put unethical behavior like distortion and lying in a different
category than Facebook's, but I agree. I was trying to kill the parallel
between Facebook intentionally manipulating user emotions for science, and
Budweiser intentionally manipulating viewer emotions for beer sales. Video
games work here too: Amnesia scares the unmentionables out of me, but I'm ok
with that, because I know that's the whole point.

Another missing component here is user control. I can turn off the commercial
and stop playing the game. I'm currently working (in a minor capacity) on the
National Children's Study, and our participants can walk away at any time.
When the Facebook subjects on the negative side started feeling slightly more
sad, they had no idea why, and no clue how to stop it.

------
OrwellianChild
Am I alone in being completely un-surprised and accepting of Facebook doing
research in this way on its users? The News Feed as an interface for using FB
is and has always been an evolving product and in the control of FB. They've
been constantly messing with it, A/B testing different combinations of
user/advertiser content and post/comment structure. They do this to test
outcomes, and achieve goals that most likely include (and are certainly not
limited to):

  - Optimizing ad targeting
  - Maximizing click-thru
  - Maximizing engagement
  - Minimizing abandonment/bounce rates

The News Feed already doesn't show you _all_ your friends' posts and hasn't
for quite some time. How they choose to "curate" what they do show is going to
be dictated by their incentives/needs.

Getting outraged about any of this seems akin to getting pissed that the new
season of your favorite TV show sucked...

Edited for formatting...

~~~
kevingadd
You're not alone, but I think you do yourself a disservice by underestimating
the potential negative impact of an experiment _designed for the purpose of
manipulating a customer's emotions_. There's no defensible business objective
here, and furthermore, any business objective that relies on this kind of
emotional manipulation is suspect.

Intentionally making a customer depressed is not something you fuck around
with. It's incredibly dangerous, and doing it to a huge subset of your
audience with no mechanisms to ensure you don't inflict real harm is utterly
reckless and irresponsible.

The individual mechanisms and approach used here for the experiment are not,
in and of themselves, objectionable. Many A/B tests are wholly defensible. The
end goal and process are the problem here.

~~~
amirmc
> _"... any business objective that relies on this kind of emotional
> manipulation is suspect."_

So all modern-day advertising then, which attempts to create a connection with
the viewer/reader (as opposed to presenting bare facts).

I agree that there was a _process_ problem from the point of view of a
_research study_ but had this just been an A/B test for engagement levels, we
might never have known what Facebook did.

------
gammarator
When addressing criticism, it's not great to start "Ok so." (Or "Calm Down.
Breathe. We Hear You" [1], to dig farther back in time.)

[1] [http://www.facebook.com/notes/facebook/calm-down-breathe-we-...](http://www.facebook.com/notes/facebook/calm-down-breathe-we-hear-you/2208197130)

~~~
baddox
Do you have any thoughts about the explanation itself? I made it past the
first two words, and thought the explanation was fairly reasonable and
satisfying.

~~~
gammarator
The last paragraph does not give me warm feelings about Facebook's internal
controls on this kind of research:

"While we’ve always considered what research we do carefully, we (not just me,
several other researchers at Facebook) have been working on improving our
internal review practices. The experiment in question was run in early 2012,
and we have come a long way since then."

------
potatolicious
I'm going to try and avoid the minefield that is the research-vs.-ethics
debate.

One thought I've had is that the blowback against this incident is less about
the research itself and how ethical it is, and more about perception of
Facebook in general. My suspicion is a lot of the opposition at this point
comes from long-simmering distrust of Facebook and the increasingly negative
perception of its brand - this incident is merely the straw that broke the
camel's back, for some.

And if the popular response to this revelation reflects people's general views
on Facebook, it's not good for the company.

~~~
polarix
It's also, unfortunately, deeply harmful to humanity.

[http://www.talyarkoni.org/blog/2014/06/28/in-defense-of-face...](http://www.talyarkoni.org/blog/2014/06/28/in-defense-of-facebook/)

> by far the most likely outcome of the backlash Facebook is currently
> experiencing is that, in future, its leadership will be less likely to allow
> its data scientists to publish their findings in the scientific literature

------
nness
Not a single point in that response directly addresses why there was no
informed consent in the study. There are reasons why research undergoes
ethical review before it is conducted, and this is certainly going to be a
tough lesson for Facebook.

~~~
thisrod
Experimenters have to tell fibs in order to discover the truth about your
behavior: the only way to prevent you from thinking of the word "elephant" is
to say the experiment is about cats. This is generally considered OK, provided
that you won't be upset when you find out what was really going on.

And, without the benefit of hindsight, it is hard to see why people would get
upset about this experiment. I reckon Facebook users are more upset by the
truth, that their "personal" feelings are so influenced by trivial aspects of
their environment, than by the way Facebook demonstrated it. The truth
sometimes hurts, but science isn't to blame.

~~~
nness
That's not quite right; even in circumstances where participants are misled
for other purposes, they are still informed that they are taking part in an
experiment.

The issue here is that Facebook conducted behavioural experiments on
participants who were not informed that they were part of a study. It is
unethical. Whilst the outcomes are tame for those involved, the sheer number
of people involved and Facebook's influence and presence in everyday life
make it all the more alarming that they attempted it in the first place.

------
brainsareneat
I wonder where the line is between A/B testing and 'psychological
experimentation' and when it's been crossed. Was it crossed just because it
was published in PNAS? The outraged don't seem to think so.

What if I'm Amazon or Yelp, and I want to choose review snippets? Is looking
for emotionally charged ones and testing to see how that impacts users wrong?

What if it's more direct psychological manipulation? What if I run a
productivity app, and I want to see how giving people encouraging tips, like
'Try starting the day by doing one thing really well,' impacts their item
completion rate? I'm doing psychological experimentation. I'm not getting my
users' permission. But I am helping them. And it's a valid question - maybe
these helpful tips actually end up hurting users. I should test this behavior,
not just implement it wholesale.

It seems like Facebook had a valid question, and they didn't know what the
answer was. Did they go wrong when they published it in PNAS? Or was it wrong
to implement the algorithm in the first place? I don't think it was.

~~~
kevingadd
The main indications that a line was crossed are the combination of a
blatantly questionable hypothesis ('we can make people depressed or unhappy')
with an intentional lack of informed consent & a huge set of test subjects.

If you're doing medical testing in order to roll something out as a
pharmaceutical for prescription use or over-the-counter sales, the tests are
rolled out in stages, with initial testing being done on a _very small_ number
of patients under extreme scrutiny, and even that is only done after the
medication has been vetted carefully using animal models. It's extremely
important to avoid harming your test subjects.

In comparison, they basically went full-steam on this experiment on hundreds
of thousands of people despite the fact that emotional manipulation of this
sort is _EXTREMELY DANGEROUS_. When I say extremely dangerous I mean
potentially life-threatening.

------
natural219
For posterity's sake, I want to clarify what is going on here.

What's been happening, over the last five years, is that American society has
become more trigger-happy in deducing "accurate" moral conclusions from
following online media outlets.

I outlined some of this in cjohnson.io/2014/context, although I didn't
appreciate the full power of this conclusion at the time, and so the essay
mostly falls short of explaining the entirety of what's happening currently.

In a nutshell: The Web has broken down barriers between contexts that used to
live in harmony, ignorant of each other. Now, as the incompatibility of these
contexts come into full focus, society has no choice but to accept the
fluidity of context in the information age, or tear itself apart at the seams.

All that was needed to precipitate the decline of Facebook (oh yes, Facebook
is going down, short now while supplies last) was some combination of words
and contexts that fully elucidate the power of online advertising / data
aggregation to have real impact upon people's lives. Put in terms that the
"average person" can understand, the impact of this story will be devastating.
I feel so bad for the Facebook PR team -- they're simply out of their league
here.

The reason _this_ scandal will be the one we read about in the history books
is because it provides the chain link between two separate, but very powerful
contexts: 1, the context of Nazi-esque social experimentation, and 2, the run-
of-the-mill SaaS-style marketing that has come to characterize, well, pretty
much every large startup in the valley.

We've reached a point where nobody knows what is going to happen next. Best of
luck, people.

------
smokeyj
> The reason we did this research is because we care about the emotional
> impact of Facebook and the people that use our product.

That.. or getting users to click more ads.

Maybe ad inventory can be optimized for people in a certain emotional state,
and the user's wall can be used to induce the most profitable state of mind
for fb's available ad inventory. That would be awesome in an evil-villain kind
of way.

~~~
baddox
> That.. or getting users to click more ads.

Or both, with no contradictions at all.

~~~
smokeyj
The omission of truth is more disingenuous than contradictory.

------
zaroth
It seems to me that Facebook did something like an A/B test of their ranking
algorithm, and then did a cohort study to see if there was any longer-term
impact.

If what they did requires informed consent, what about when the end goal is
maximizing click through rate, e.g. by inciting an emotional response?

Let's say FB finds that certain types of posts in the feed cause a nearby ad
to be clicked on more. They determine this through pervasive testing of
everything that you see and do on their site. They could then adjust their
algorithm to account for this behavior to increase clicks/profit.

I think the actions FB takes to monetize the user base are not only far more
intrusive; FB is actively searching for and exploiting these effects for
profit. If informed consent for TFA is not ridiculous, then I think we have
much bigger problems on our hands. What am I missing about the informed
consent issue?

~~~
kevingadd
You're not missing anything. Informed consent is important for many things
currently classified as A/B tests and it isn't currently acquired by anyone.
There are many existing products that rely on A/B tests to identify the best
way to psychologically manipulate (if not addict) their customers (F2P games,
online gambling, etc.)

------
gouggoug
For those like me who lived under a rock, here's the study:
[http://www.pnas.org/content/early/2014/05/29/1320040111](http://www.pnas.org/content/early/2014/05/29/1320040111)

~~~
toufka
In their summary there's a pretty massive gap: no acknowledgement of their
failure to obtain informed consent, or of how dangerous that might be:

"Significance:

We show, via a massive (N=689,003) experiment on Facebook, that emotional
states can be transferred to others via emotional contagion, leading people to
experience the same emotions without their awareness. We provide experimental
evidence that emotional contagion occurs without direct interaction between
people (exposure to a friend expressing an emotion is sufficient), and in the
complete absence of nonverbal cues."

------
xenadu02
Simple fix:

"We're conducting a scientific study on mood for the next three months. Would
you mind participating? The study won't require you to do anything special or
take any further action. Y/N?"

There we go kids, easy as pie; it's called consent to participate! Pretty easy
to do without spilling the beans on the study.

~~~
chris_wot
This is a wider issue, but how do they prevent knowledge of the experiment
adversely affecting the results?

------
akerl_
Based on the other comments, it appears that the difference between kosher A/B
testing and unethical experimentation with dire mental health consequences is
either "Did you publish a paper when you were done" or "Are you a big,
noteworthy company".

~~~
elpool2
I'd say that (based on comments here) the real difference is _intent_. People
seem to be suggesting that the exact same experiment, but measuring ad-clicks
instead of emotional state, would be a perfectly fine A/B test. It doesn't
matter that the experiment manipulated people's emotions, it matters that
Facebook _wanted_ to manipulate people's emotions.

------
cwal37
Universities and think tanks have internal review boards for a reason. The
fact that Facebook can make use of its users' data in other (business-related)
avenues does not excuse them from the most basic of research ethics.

It did pass muster with Cornell's review board, but that was after the data
was actually collected, which amounts to ex post facto approval.

------
datakid
There's not really any excuse you can give at all, FB. I'm astounded that you
didn't apologise without reservation and implement an end-user agreement
change specifying that you wouldn't do it again.

Messing with people's mental health is outrageous.

------
onewaystreet
People calling this unethical have to explain why it is but A/B testing is
not.

~~~
napsterbr
A/B testing is for _commercial_ purposes, it's wildly different when you are
testing the user's psychological state.

~~~
onewaystreet
But a lot of A/B testing _does_ test a user's psychological state. One could
argue that doing it for commercial purposes makes it _worse_.

~~~
kevingadd
That's absolutely true. The main difference here is that the experiment's
negative impact was explicit rather than abstracted behind a business goal,
yet they still went ahead with the experiment. The hypothesis was more or
less 'we can make people depressed', without the slightest hint of defensible
motives or objectives ('we want to make our ads more effective', 'we want to
find the best way to convince customers that they want our product').

------
alx
Does someone remember the name of the NSA program concerning psychological
manipulation on social networks?

~~~
alx
GCHQ and NSA ‘infiltrate’ social media to spread disinformation; the report
includes a slide on the psychological aspects of social networks, including
Facebook:

[http://inserbia.info/today/2014/04/gchq-nsa-infiltrate-socia...](http://inserbia.info/today/2014/04/gchq-nsa-infiltrate-social-media-to-spread-disinformation-and-propaganda-report/)

The Intercept article concerning JTRIG actions on social networks:

[https://firstlook.org/theintercept/2014/02/24/jtrig-manipula...](https://firstlook.org/theintercept/2014/02/24/jtrig-manipulation/)

This study would fit pretty well in documenting possible manipulations.

