Hacker News
[dupe] Facebook tinkered with users’ feeds for a massive psychology experiment (avclub.com)
77 points by r0h1n on June 28, 2014 | 33 comments


Great! We should be congratulating Facebook for this, for several reasons:

1. Now Facebook can start designing against this. What design features promote negativity? Where is negativity being inadvertently fostered?

2. Now everyone else can benefit from the research, since Facebook chose not to keep it private despite the potential for PR ramifications like this article.

3. This is research that no more than a few entities in the world can or will do.

Ethically, I don't see the problem. Users already rely on Facebook to tinker with their feeds in a variety of ways; this tinkering causes them no harm and may benefit them and all future users. (When users signed up for Facebook, did they sign an agreement saying 'you agree to be mentally poisoned with rage, frustration, anger, and disappointment, without recourse, by anyone you are foolish enough to friend and let flood your feed'? I rather doubt it.) Given that users are already buffeted by countless unpredictable random forces beyond their control or comprehension, what is tweaking one factor going to amount to? This is the same as clinical drug trials: the question should not be whether it is moral to run a trial; the question should always be whether it is moral to not run a trial.

The OP shows no sign of understanding any of this.


A private corporation is developing the capability to alter the moods of massive populations in a controlled way; unlike mass media, society doesn't (yet?) understand this sort of influence.

Perhaps soon Facebook will understand how to tweak the news feed algorithm to, e.g., whip up negative sentiment in response to an attack? Or to stop a particular candidate from being elected? Etc.

Perhaps they will use this for good, as you mention. But it's still huge power for a corporation to wield, effectively in secret. It's easy enough for an observer to make a judgement about a bias in, e.g., Fox News and call that out, but it'd be very hard to observe subtle changes to individual news feeds.

I don't really buy your argument that they have a clear moral responsibility to develop this capability. That depends on the potential good of the capability if used for good, the potential harm if used for bad, and the chance it will be used for each. So I don't see this as analogous to a corporation conducting a clinical trial of a drug; this is more analogous to a clinical trial of a drug that could also be used as a weapon, with the more complicated ethical issues that would entail.

I think it's absolutely something there should be ethical discussion around.


> Perhaps they will use this for good, as you mention. But it's still huge power for a corporation to wield, effectively in secret. It's easy enough for an observer to make a judgement about a bias in, e.g., Fox News and call that out, but it'd be very hard to observe subtle changes to individual news feeds.

This is power and capability given to all media organizations. When you permit their existence, you permit both their deliberate use of their capabilities for goals you dislike and their accidental, unintended biases. There is no difference in terms of the consequences.

I am reminded of one of the AI Koans (http://catb.org/jargon/html/koans.html):

> In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe”, Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said. Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher. “So that the room will be empty.” At that moment, Sussman was enlightened.


Isn't your reference to other media organizations an appeal to popularity? Why shouldn't we strive for media that are more concerned with privacy and dignity? Furthermore, the interactivity and the private data involved make social networks a whole different beast.

> There is no difference in terms of the consequences.

Not in terms of consequences, but there is the difference that in a scientific study you need the consent of the experimental subjects. I think it's problematic that they assume this was given when users clicked the TOS checkbox, since they must know that a large percentage of users doesn't know what it entails.


Facebook isn't private, and Coca-Cola already alters moods.


How would you regulate such a thing?


> Ethically, I don't see the problem.

The problem is, classically, a lack of informed consent: the vast majority of Facebook users simply don't know they can be made subjects of emotionally manipulative behavioral experiments at any time via UI tinkering, so it could be argued that the TOS small-print waiver does not, ethically (or even legally), constitute informed consent proper.

An altogether different matter of course is whether you agree with the Belmont report principles in the first place, or the adequacy of principlism for social science research in general [1,2]. But as far as vanilla regulatory research ethics is concerned, this study should rightly raise several flags in any self-respecting IRB.

[1] http://www.uvu.edu/ethics/seac/Bulger-Principlism.pdf

[2] http://eprints.ncrm.ac.uk/85/1/MethodsReviewPaperNCRM-001.pd...


> An altogether different matter of course is whether you agree with the Belmont report principles in the first place, or the adequacy of principlism for social science research in general

Absolutely. I've always found 'informed consent' to be a bizarre and ill-founded moral concept, with no place in consequentialist ethics, and this is an example of why: here is an experiment of great importance to our understanding of social interactions and community design, where the intervention was just one of countless influences on the subjects, and not a large one at that (what about the 'lack of informed consent' that is inherent in every form of advertising?), which did not harm them, and yet... this is somehow bad? Why? Uh, because 'informed consent'!

It all seems like so much angel-counting, when I read absurdities like "Near-Death-Experience Experiment: Ethical Concerns" http://www.csicop.org/si/show/nde_experiment_ethical_concern...


Very well said. If I proposed this study to the IRB where I work, they would likely freak out. I think Facebook should be taking advantage of their data and conducting research, but they need to do this the way most respectable research organizations do and run all their studies through a credible IRB.


Why would you assume FB is going to engineer/optimize against negativity?

Who knows; the next piece of research, probably internal, may reveal that people on certain parts of the negative emotional spectrum are far more likely to click on ads than happy people are.


Am I a terrible person for finding this neither surprising nor unethical? If I had the opportunity to do this study I'd definitely do it – where else are you going to have the opportunity to see insights like that?


I don't find it surprising or unethical, but I do question the validity of the conclusions drawn and the idea that doing this would actually get them the kind of information they wanted.


(IANAL)

The standard for psychological experimentation would normally be informed medical consent, not the terms and conditions of a commercial contract.

This is the sort of thing that a clinical psychologist could lose their license for.


Yes. If your study is done with federal money and at a research institution, you need the Institutional Review Board to clear your experiment. One of the requirements is that patients/subjects be given a consent form and have the experiment explained to them.

It is possible to perform an experiment while hiding its real purpose, if the results require that the subjects not understand the design/aim. But these are usually badly designed experiments, because you can never know whether a subject has been exposed to the experiment before and can therefore game it.
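
(For what it's worth, large platforms typically mitigate that repeat-exposure problem with deterministic, salted bucketing rather than fresh random assignment on every impression. A minimal sketch in Python; the function name, salt, and 50/50 split here are hypothetical illustrations, not anything Facebook has described:)

    import hashlib

    def assign_bucket(user_id: str, experiment_salt: str,
                      treatment_fraction: float = 0.5) -> str:
        # Hashing the user id with an experiment-specific salt keeps each
        # user's assignment stable for the life of the experiment, while
        # different salts keep assignments uncorrelated across experiments,
        # so the same people don't land in every treatment arm.
        digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
        # Map the first 8 hex digits (32 bits) onto [0, 1).
        fraction = int(digest[:8], 16) / 0x100000000
        return "treatment" if fraction < treatment_fraction else "control"

    # Same user and experiment -> same bucket on every call.
    print(assign_bucket("user_12345", "feed_emotion_2014"))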

I guess FB is arguing that users gave informed consent when they signed up for Facebook. I don't know if there is a review board for experiments at FB.

I think that, with a good enough lawyer, you could successfully mount a class action lawsuit if you found users whose mood worsened as a result of this (a user under treatment for depression, for example, whose mood is clinically assessed).


Reactions against this sort of research are similar to the reactions against nudge research. People don't like it because it seems manipulative. But the thing is, any ordering of events a programmer throws against the wall will impact people, just as any process design will nudge people in some way. Isn't it better to know how people are affected by our designs? People are being affected either way; we just don't know how. It's better to know.


According to the paper[1], "[the research] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research."

The Onion A.V. Club piece quotes "internal operations, including troubleshooting, data analysis, testing, research and service improvement" as part of the FB policy.

If that's really the part they're relying on, informed consent doesn't mean much.

[1] http://www.pnas.org/content/111/24/8788.full


This is gross, but unsurprising. Facebook hasn't cared about its users in a long time. It's their information, not yours; it's their network, not yours. It's a systematic way for a third party to manipulate you into doing what they want, or what whoever is paying them the most wants.

Ugh, what a scourge on the tech community. Shit like this reflects badly on all of us. Users hate this manipulative crap. Facebook's MySpace days can't come fast enough.


While I agree with you, I think you'll be hard-pressed to find a money-making company that isn't trying its darnedest to manipulate you into doing what it wants (buying its stuff, mostly).


That's a nice rant. Zzz...


Honestly, all I want is:

a) to have an option to AUTOMATICALLY ban all those stupid games, without having to ban a new Farmville version or clone every other week (i.e., I just want to view real, user-generated content and not be woken from my sleep because some friend is playing Farmville at night and asking me for crops)

b) for it to REMEMBER the "show me the full feed" setting and not switch back to that abomination it calls "top posts"



I wonder if we'll hear from any of the data scientists, who surely exist in good numbers here.

Also, I wonder how much cognitive dissonance programmers feel, or how often they rationalize or reassign their values, when things like this (which are quite typical in the profession) are done.


Fascinating how hard scientists will work, hunkered down in front of their calculators, to discover basic facts about human social experience that regular people have known for thousands of years: emotions are shared.


Why would you make this comment? Do you really think it's a waste of time to try to better understand the world? Are you aware that there exist many, many, many examples of experiments showing commonly-held beliefs to be incorrect?


Nobody could have predicted, except Vernor Vinge.


So now when the citizens are angry, the government can have a talk with Facebook, to see if they can settle the nation down a tad.

Not creepy at all, nope.


Or, thinking about it more (get ready to downvote this too): governments are more likely to do the exact opposite.

Want to cause a revolution against a dictator? Talk to the social networking sites, see if you can increase unrest. We already know the US government talked to social networking sites to help the Iranian people overthrow their government.

http://www.cbsnews.com/news/state-dept-asked-twitter-to-dela...

A few small steps from that, and you're manipulating the emotions of nations in an attempt to start a revolution, instead of just helping one that formed by itself.


I can't wait to see Facebook leverage this to condition users into favoring the advertisers/products that pay for premium advertising packages.

Seriously, what the fuck is Facebook's PR team thinking, letting their research staff publish something like this?


> Seriously, what the fuck is Facebook's PR team thinking, letting their research staff publish something like this?

Yes, it'd be so much better to let that research be done in secrecy, and sold to the highest advertising bidder. /s

"Scary" as some of the things Facebook (replace by the bigdata corp of your choice) does may sound, I appreciate a lot when extremely-large-scale real world scientific tests, like this one, can be achieved and I appreciate a thousand times more the results being published. Call me crazy, but to me this is what "advancing the understanding of the human race forward" looks like... rather than "let's do designs on both round AND square watches".


Do I have any reason to think they aren't running other experiments in secret and selling the results? And considering that the US government already takes bids for "persona management" to influence social media, why should I think they're not a customer?

Facebook, and all centralized social media platforms, are dangerous. Nobody should have this influencing power.


> Facebook, and all centralized social media platforms, are dangerous. Nobody should have this influencing power.

I couldn't agree more. That also was not my point.


All my point was that secret and non-secret research are not mutually exclusive; the rest was just an aside. The knowledge is even useful, but for me it's been completely overshadowed by knowing that they are manipulating us to learn, in effect, how to manipulate us better. It's not even theoretical.

I was watching Google I/O, and in the section about Android TV, they showed how the device, if you left your TV on, would show you soothing images that they selected. What kind of power is that! Imagine if suddenly, all the images on every Android TV switched to the same image. How would what that image presented affect the world at that moment?


Facebook doesn't exist to "advance the understanding of the human race"; it exists to make money.

Same with Google, Apple, and every other new-age special-snowflake company that pretends to be the good guy.

Publishing these findings can only damage Facebook's reputation and business. I can see it now: two-bit tech journalists citing this paper for the next 10 years as evidence that Facebook is controlling everyone's minds.



