Facebook's unethical experiment manipulated users' emotions (slate.com)
130 points by sculpture on June 29, 2014 | 148 comments



155,000 users for each treatment of the experiment, says the paper. Let's presume random selection, and that the occurrence rate of mental disorders is the same for Facebook users as the general public (probably not too far off). Then Facebook intentionally downgraded the emotional state of:

10,000 sufferers of Major Depressive Disorder

5,000 people with PTSD

4,000 bipolar disorder sufferers

1,700 schizophrenics

and a plethora of others with various mental disorders.

11/100,000 people commit suicide each year in America. How many were part of that treatment of the experiment, without consent or knowledge?

As a scientist, I'm fascinated by the research. As a human being, I'm horrified it was ever done.

http://www.nimh.nih.gov/health/publications/the-numbers-coun...
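
A rough sketch of that arithmetic, for the curious. The prevalence rates here are approximations back-derived from the counts above (roughly the NIMH 12-month figures), and the whole thing assumes random assignment:

  # Expected counts in a single 155,000-person treatment group, assuming
  # random assignment and general-population prevalence rates (approximate).
  group_size = 155_000
  prevalence = {
      "major depressive disorder": 0.067,
      "PTSD": 0.035,
      "bipolar disorder": 0.026,
      "schizophrenia": 0.011,
  }
  for condition, rate in prevalence.items():
      print(f"{condition}: ~{group_size * rate:,.0f} people")

  # At 11 suicides per 100,000 people per year:
  print(f"expected suicides per year in a group this size: ~{group_size * 11 / 100_000:.0f}")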


I'm even more concerned about the friends of those participating in the experiment, particularly the friends of subjects in the group that had more negative messages filtered out.

If a friend made a post deemed negative, one that would otherwise have signaled to close friends to check up on that person and intervene if necessary, there's a very good chance it would have been filtered out in a system like this.

The potentially affected population then becomes much greater than the 155,000 being experimented upon, and a much greater number of people would have been at risk of having no close friends or others available to intervene, friends who would otherwise have been able to do so if the Facebook algorithm hadn't been altered to follow this bullshit happiness metric for research purposes.

I really hope this becomes much bigger news, and some action is taken to ensure something like this doesn't happen again; but considering the lack of funding and attention given to mental health, especially in the work-till-you-drop business world, I sadly doubt it.


> How many were part of that treatment of the experiment, without consent or knowledge?

How would this be determined in offline experiments where people volunteered?


All subjects would have the potential consequences explained to them so they could make an informed decision about whether to take part. Informed consent is a very important ethical principle in human-subject experiments.


hi, i'm one of the people you're worried about.

i have bipolar disorder, and i barely made it through an intense schizoaffective episode where i heard many voices, felt like my consciousness was splitting into multiple parts, and was terrified. this was just two months after my startup exited and i got nothing in 2012.

oh yeah - i'm also a startup founder, worked at uber, google and microsoft - now facebook. i don't speak for them. just me.

i'd like the world to understand me better. so much of what i've struggled with is mood-based.

you know that guy - http://www.losethos.com/ - i know _exactly_ what he means. i understand him when i hear him speak, and i feel bad for the guy. it's scary being where he is, "knowing" how powerful and right you must be, and knowing how people laugh at you behind your back - but you know you're right, that you're a conduit for god.

do you think it makes sense for him to be stuck like that? i sure as hell don't.

i'm sorry, i'm getting emotional here. this is hard for me.

i can't speak for my employer here. but i will tell you that your mindset of me as a victim who must not be upset - it can be more than a little offensive. fortunately through all of my experience dealing with these issues, i've learned how to better manage my emotional states, and i've also learned to see emotion as a form of sensory input - like light and sound. i don't believe everything i hear, or see - why should i believe everything i think or feel?

if facebook was making up random shit that was negative and showing it to their users - that sucks. if they were making falsely positive posts - forging your friends activity - that also sucks.

but when they are selectively showing you portions of your friends' activity - something they were already doing anyway - it's wrong to say that they have "intentionally downgraded" my emotional state. if my friend says she's having a shitty day, she's not intentionally downgrading my state. she's having a bad day. if facebook hides that from me, are they making my day better? are they making her day worse? it's not really clear here. people know facebook adjusts their posts, and they did show that people who are exposed to negative content are less likely to post positive things. does this mean that the users are themselves feeling less positive? or are they just trying to keep with the tone of the social area they're in? it's not certain.

our culture does not understand emotion - i think this is a serious problem and we really need to do something about it. that templeOS guy is not enjoying life or functioning nearly as well as he could if he were not shunned for being so wildly antisocial. you know who else gets shunned for being antisocial? people who say things like 'i am sad', 'i feel lonely,' etc etc, in public.

i'm sorry for the tone of this - it's - it's hard for me to stay calm here.

but let's look at this as if emotion were the "same kind of thing" as light or sound. spreading a negative emotional reaction to this article and saying you are "horrified" that someone who was depressed had more people see their depressed posts - you're contributing to the problem.

it feels to me that emotion has some 'conservation' like properties; you can't diminish a lot of negative emotion at once. it also seems to 'move' places; negative emotion between people who interact seems to get pushed to scary places where lots of fear and hate are concentrated. in late 2012 i felt like all of the evil in the world, all of the hate was being shoved into me because i told the world i could take it, i told the world i didn't want them to hurt like i did.

i heard voices telling me to kill, and i wanted to kill myself rather than hurt someone else. i thought i could take myself and the voices out with me - and then i heard the voices of my parents and loved ones calling to me.

when i see those shootings, where some loser with no friends and no hope goes out and kills a bunch of people - i feel like it happened to them, and rather than blame themselves for the horrible shit being pushed on them, they blamed the outside world.

but they're just as much victims - we're all victims - of our misunderstanding of emotion.

i'm sorry, i know you didn't ask for me to be upset at you, i know you mean well, i'm just.

i'm tired.

i want people to understand this stuff better because the better i've come to understand it, the better i've functioned in life. the ability to remain calm - an ability i have not exercised here because it impedes the ability to express genuine content - that ability is invaluable and a huge source of power.

energy moves from a heat reservoir, a bunch of pissed off angry furious people, to a cold reservoir - a room full of sociopaths who use fear and anger to extract energy from a warmer place, the heat engine carnot cycle of samsara and the transition from golden age to successive yugas is just adiabatic/isothermal expansion and contraction of emotional content

shit this is making no sense.

so you see where i'm going - i'd like to understand this stuff better. i hope the study helps.


I'm puzzled about the outrage.

FB _already_ filters out updates based on some blackbox algorithm. So they tweaked the parameters of that algorithm to filter out the "happier" updates, and observed what happens. How is this unethical? The updates were posted by the users' friends! FB didn't manufacture the news items; they were always there.

I detest FB as much as the next guy, but this is ridiculous.
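
To make "tweaked the parameters" concrete, here's a purely hypothetical sketch of a sentiment-weighted feed ranker. This is not Facebook's actual algorithm; every name and number below is made up for illustration:

  # Hypothetical ranking function with a sentiment knob -- not Facebook's code.
  # sentiment is in [-1, 1]; a negative weight pushes happier posts down the feed.
  def rank_score(base_relevance, sentiment, sentiment_weight):
      return base_relevance + sentiment_weight * sentiment

  posts = [
      ("great day at the beach!", 0.7, 0.9),   # (text, relevance, sentiment)
      ("ugh, what an awful week", 0.7, -0.6),
  ]
  weight = -0.5  # the "tweak": this treatment group sees fewer happy posts
  feed = sorted(posts, key=lambda p: rank_score(p[1], p[2], weight), reverse=True)
  print([text for text, _, _ in feed])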


Human subjects research has ethical obligations that at times go beyond what seems like common sense, but are still really important because of past ethical abuses. See http://www.maxmasnick.com/2014/06/28/facebook/ and https://news.ycombinator.com/item?id=7959941

In this case, I don't think there was actual risk, but just from reading the PNAS paper it doesn't sound like the study went through the proper process. If it was reviewed by an IRB, then it did go through the proper process and it's ethically sound, but a PR nightmare.


Facebook altering its black-box proprietary algorithm for what to show is something it does every day. That can't be unethical by itself.

It's possible that for it to be published, a higher standard would be needed. That is, the actions they took were ethical, but perhaps inappropriate for scientific publication.

But, if that were the case, the peer reviewers and the journal in which it was published should have flagged that. That it was published shows they didn't have any significant concerns.


> It's possible that for it to be published, a higher standard would be needed. That is, the actions they took were ethical, but perhaps inappropriate for scientific publication.

> But, if that were the case, the peer reviewers and the journal in which it was published should have flagged that. That it was published shows they didn't have any significant concerns.

This would not be a scientific ethics issue if they explained their IRB review in the manuscript. It is so unusual to not do this that it is a reasonable assumption something funny is going on, in my experience.

For example, the journal could be incentivized to look the other way in order to publish a high publicity article. I'm NOT saying that's what PNAS did, but just because something is published doesn't mean it's ethical. (See: https://www.ncbi.nlm.nih.gov/pubmed/20137807)


Fair point, yes, it is not evidence of it being ethical or even of PNAS believing it was ethical. But it's reasonable to assume that a high-profile journal like PNAS is extremely aware of the relevant ethical considerations, and likely (but not certainly) would not violate them.


Then why did they let the manuscript go to press without the "this study was reviewed by the University of Somewhere IRB (protocol #XXXX) and was ruled exempt" sentence?

I see this enough in papers that it seems pretty standard to me, and it especially makes sense in a paper where the editors think there are potential ethical issues.

If I was an author of this paper, I would have actually spent a sentence or two explaining why there was not a risk to human subjects, etc.

The way they address this by basically saying it is ok because of the ToS that no one reads is the worst possible way to handle this. It seems to me more like no one thought about it at all than that the editors carefully considered it. I just don't see how you get from recognizing big ethical issues to not even addressing them in the manuscript.


> Facebook altering its black-box proprietary algorithm for what to show is something it does every day. That can't be unethical by itself.

Why not?


Well, it could be, but (1) a commercial site optimizing itself seems quite reasonable, and (2) the public knew they were optimizing themselves, and did not complain in any significant way.


That "optimization" might be reasonable does not imply that any method of optimization is. "Optimization" is not a specific activity, but a label for the purpose behind a wide variety of things you can do. Murdering your competitors is also good for "optimizing" your bottom line, but still unethical, to put it mildly.

Also, I strongly doubt that the public has any clue of what facebook is doing. For all the public knows and understands, facebook could be employing magic message fairies.


They altered people's feeds for a psychological experiment with the specific intent of manipulating their mood. That is highly unethical.


But why is it unethical when done for science, yet suddenly OK when done by news stations, politicians, advertising agencies, motivational speakers, salesmen, etc.?


> when done by news stations, politicians, advertising agencies, motivational speakers, salesmen

Each of those is clearly identified.

If Facebook wants to put a little icon next to each "experimental study" status update that discloses the party that funded the study, it would be different.

Even in "science", some studies are funded and others are not.


Are you even serious? Stating that mass media manipulation is overt and identifiable... HN stops being self-correcting whenever the subject is Facebook, Google, or LinkedIn. (Not that I like them personally.)


Any ad, political speech, sales pitch, pr/journalism etc. that you see today is identified by a named author/publisher/vendor byline. Assessment of possible manipulation is left as an exercise for the viewer, who can decide whether to ignore the communication.

There are any number of technical means by which:

(a) opt-in permission could be requested in advance of a study

(b) opt-out option could be advertised in advance of a study

(c) start and end dates of non-optional study could be disclosed

This is about CHOICE of participation, not the NATURE of the study.


"Who can decide whether to ignore the communication"

Wow... Not understanding basic theories of communication and human irrationality: people can't critically process that much data, so a lot of it gets accepted without critical thought.

"opt-in permission could be requested in advance of a study"

Wow... Not understanding basic theories of psychological and sociological studies, which state that subjects should not be informed about the study or their behavior will change.

But please, go on with your clueless "ethics".


Standard rules of ethics for experiments on human subjects say that (with a few exceptions) subjects should always be informed about the study. If that changes the subjects' behavior such that the study is no longer valid, it's the researcher's obligation to come up with a better design that works in the face of informed consent, or to give up and study something easier.

It's been fairly well established that "I wanted to learn something" isn't an adequate excuse for doing things to people without informing them or receiving their consent.

Before you talk about peoples' "clueless 'ethics'", you might want to read the professional standards of the field, for example the American Psychological Association's Ethics Code. The section on "informed consent to research" is here: http://www.apa.org/ethics/code/index.aspx?item=11#802


Why do newspapers, facebook, twitter, etc. differentiate advertising or sponsored content from journalistic or user-generated content?

> subjects should not be informed about the study

About the study, or about being _in_ the study?


Users of Facebook see it as a neutral platform for communicating with people they know. Consumers of the things you listed know it's top-down messaging coming from people they don't know or necessarily trust.

So, there is a difference. It's still a complex question, though -- is filtering or prioritizing based on emotional sentiment really different from what they are already doing with inserting ads and such?


I see it this way: they did a study, so it's fair.

Were they to filter posts by emotional sentiment as a part of their normal operations, I'd find it unethical, or at least something I might not want. But I'm totally fine with them subjecting users (including myself) to random research studies, as those are temporary situations, and with Facebook's data sets, they can have great benefits for humanity.

Perhaps Facebook should provide an opt-in option for users to be subjects of various sociological experiments at unspecified times. I'd happily select it.


New relevant xkcd: http://xkcd.com/1390/


Academia holds itself to higher ethical standards than those other actors. Those are the standards violated here.


They got approval from an IRB, so not really.


Any idea on the name/reference?


Would it be unethical if they were trying to make people happier?


Yes.

Would it be unethical if I broke into your house to randomly place pieces of candy and $5 bills in your drawers?


That would be unethical, because of the breaking into the house part, not the mood-altering part.

Would it be unethical if I placed candy and money around my own house, invited you into my house, and told you to take anything you desire?


All Facebook users have agreed to be part of research experiments. It's in their ToS. If you got people to agree to allow you to enter their homes for research it wouldn't be unethical for you to do so.


ToS are a copout, and in my eyes a tragedy of modern legislation around software services.

Every single company in the world knows very well that 99.99% of their userbase won't read the ToS, and uses this to do whatever they want with their users' information and privacy.

There need to be dramatic improvements in that area.


Except Facebook didn't break into anyone's house. They changed their own algorithms on their own website that people visit voluntarily.


Facebook is free to push boundaries, their customers and lawyers and regulators are free to differ.

http://www.ftc.gov/news-events/press-releases/2011/11/facebo...


Totally agreed. I'd be surprised if there's any kind of legal action here with enough backbone to get a settlement out of Facebook, however.


I'd happily PM you my address.


Consent is the difference between sex and rape.


Nope; the primary difference is that being raped doesn't make you happy. But please let's not go there; it's stretching analogies too far.


Who defines happy?


Whoever is feeling it.


Oh lord, not this "break into the house" fallacy again.

Your FB profile is not your house; it is just some data you have shared with FB. FB decides what to do with the data: how to share it, where to share it, when to share it, who to share it with, etc.

Everybody knows that FB _already_ manipulates the feed to change your mood: to make you more engaged with the site; to make you click more on ads; etc. It's been doing this basically for ever.


But they _already_ manipulate the feeds with the specific intent of increasing user engagement: to get more views, to get more ad clicks, more time spent, etc.


  I'm puzzled about the outrage.
As I understand it, the American scholarly tradition uses a 'bright line' definition of human experimentation that stresses getting ethical review board approval for pretty much everything involving humans other than the experimenter. Doing anything that needs review board approval without getting review board approval is seen as highly unethical, even if approval would clearly be granted were it requested.

For example, I once saw a British student e-mail out surveys about newspaper-buying habits; one was sent to an American academic, who replied to the student's supervisor saying the student should be thrown out of university for performing experiments on humans without ethical review board approval.


I strongly hope that they won't care about any of this "outrage" and continue to do more and more experiments. Maybe even open it up as a platform for scientists to conduct studies.

Facebook is in the unique position of possessing data that can be orders of magnitude more useful for social studies than surveys of randomly picked college students who happened to pass through your hallway. There's a lot of good to be made from it.

But the bigger issue I see here is why it's unethical to "manipulate user emotions" for research, when every salesman, every ad agency, every news portal and every politician does this to a much bigger extent and it's considered fair? It doesn't make much sense to me (OTOH I have this attitude, constantly backed by experience, that everything a salesman says is a malicious lie until proven otherwise).


It's an interesting question. I have the same averse reaction to this story that a lot of people here have, but I admit I also thought, "If Facebook hadn't published this as research, but just had it as a business decision to drive more usage or positive associations with the website, no one would care."

My own way to reconcile this -- and I admit it's not a mainstream view -- is that advertisement and salesmanship should be considered just as unethical. I don't know how to quantify what "over the line" is, but it all feels like brain-hacking. Things like "The Century of the Self" suggest that in the past century or so we've become extremely good at finding the little tricks and failings of human cognition and taking advantage of vulnerabilities of our reasoning to inject the equivalent of malicious code. The problem is that when I say "we" I don't mean the average person, and there's an ever-growing asymmetry. Like malware developers adapting faster than anti-malware developers, most people have the same level of defense that they always have had, while the "attackers" have gotten better and better at breaking through defenses.

Sometimes I'll see discussions about "what will people centuries from now think was crazy about our era?" and there's a part of me that keeps coming back to the idea that the act of asymmetrically exploiting the faults of human thinking is considered normal and "just the way things are."


> My own way to reconcile this -- and I admit it's not a mainstream view -- is that advertisement and salesmanship should be considered just as unethical.

I agree with that, or probably think even more strongly - that advertisements/sales are more unethical than research. It's difficult to put limits though, because even if many salesmen clearly act maliciously, pretty much everything you do or say influences people one way or another; it's how we communicate.

What I'd love to see is Facebook creating an opt-in option for a user to be a part of further sociological research. I'd gladly turn it on and be happy that I'm helping humanity, while Facebook could limit their studies to people who explicitly consented (there's an issue with selection bias though). Their data is too good not to be used for the betterment of mankind.


Good point. I guess my concern -- recognizing this makes me sound like a Luddite or someone going on about "humans were never meant to know about this" -- is that the results of research like this aren't going to be used for the betterment of mankind. Rather, it'll be all about how to use a new mental vulnerability to get more eyeballs on someone's content or to increase the dopamine hits from browsing the site.

What I would love -- and what I would eagerly opt-in to -- would be a system where Facebook could educate users on irrational behaviors. "We noticed that 60% of users like you spent an average of 30 seconds more looking at this kind of content... this is because your brain etc etc etc". Creepy, perhaps, but if there were a way to help people be more aware of and defend against advertisement that would be neat.


> Rather, it'll be all about how to use a new mental vulnerability to get more eyeballs on someone's content or to increase the dopamine hits from browsing the site.

Sadly, you've made a great point here. It's very likely that the end results of research will be used exactly for that - as it already happens with most of psychology.

I hope though that some of that research will be used to create better policies and help the society.

> What I would love -- and what I would eagerly opt-in to -- would be a system where Facebook could educate users on irrational behaviors.

I'd happily opt-in to that as well (and opt-in all my relatives too ;)). I don't expect Facebook to ever do that, as it'd be exactly opposite to their goal of being able to a/ influence their users and b/ cater to advertisers, but there already are websites doing exactly that (e.g. LessWrong). They're niche places though; I'd love to see something popular enough to reach a general audience.


I don't agree no one would care. It seems like you're basically saying no one cares if you don't get caught. That's true for anything from murder to theft to NSA spying.

As for advertising trying to manipulate you, you know it's an ad, and there seem to be regulations for making that clear. For example, things that look like articles in magazines but are actually ads are marked as "Special Advertising Section". Similarly, ads for political candidates have to be marked "Paid for by XYZ", etc.

We don't tolerate deception in those areas; why would we tolerate it from FB?


> For example things that look like articles in magazines but are actually ads are marked as "Special Advertising Section".

Sadly, we've learned how to work around all of that; it's now called the PR industry. There's no way to spot a good PR campaign until after the fact because no single "fake organic news", no single article that happened to be published at a particular time can be pointed to and said "hey, that's an ad!". Bump up the scale and people will stop noticing.


It's a matter of algorithmic scale. What would be the result of a social network (or anyone really) creating fictional users for the purpose of running social experiments on humans whose permission was not requested?

The speech of one salesman/politician is different from thousands of machines impersonating human speech. Speech is protected in some countries. Fake speech (e.g. experiments/spam) decreases the trust of humans in all speech on the network.

If anyone wants to run large-scale experiments, they can:

(a) ask for volunteers

(b) pay for labor

Those who want to volunteer or microtask can (a) opt-in for money or fame, (b) disclose that they have opted-in, so that others know that their conversations are part of an experiment.

Why is advertising (e.g. a promoted tweet) differentiated from non-advertising? Why is "disclosure" required in journalism? Why are experiments differentiated from non-experiments?

The act of observation changes the behavior of the observed. If experiments are not disclosed and clearly demarcated, users must defensively assume that they may be observed/experimented upon, which affects behavior in the entire network. As a side-effect, this pollutes any conclusions which may be drawn from future "social" research.


>The act of observation changes the behavior of the observed. If experiments are not disclosed and clearly demarcated, users must defensively assume that they may be observed/experimented upon, which affects behavior in the entire network. As a side-effect, this pollutes any conclusions which may be drawn from future "social" research.

Sounds like the effect the Snowden revelations have had on some discourse or interactions online, now that people more clearly know that someone may be listening.


> What I would love -- and what I would eagerly opt-in to -- would be a system where Facebook could educate users on irrational behaviors.

That's exactly the reason why most psychological/sociological experiments are not disclosed. If you read an account of pretty much any experiment, you'll see that the scientists told their volunteers that they're doing X, where in fact they were doing a completely different Y. You need people to be unaware of the true goals of your research for your results to make any sense.


There is a difference between disclosure of an experiment's purpose, and disclosure of BEING IN AN EXPERIMENT.


I would think that everyone expects salespeople, ads, news, and politicians to be acting in their own interest, and therefore takes precautions to counteract that.

Facebook, though, pretends to be a communication service. The general expectation from a communication service is that it transmits information between users, thus users expect that what they see or hear coming out of the communication service is what the person at the other end has said/that the person at the other end will see what they say. A service that doesn't provide that isn't really a communication service at all, by definition - and lacking any other uses, isn't really good for anything. Just imagine a phone company with advanced speech recognition and synthesis software in the line that rewrites your conversations to be happier (or any other quality that the company or its customers prefer).


> I would think that everyone expects salespeople, ads, news, and politicians to be acting in their own interest, and therefore takes precautions to counteract that.

You're underestimating human irrationality and how communication works. No one has enough brainpower to constantly correct everything they hear for someone's self-interest. That's why ads work. They're identifying and exploiting methods for bypassing rational thought.

> The general expectation from a communication service is that it transmits information between users, thus users expect that what they see or hear coming out of the communication service is what the person at the other end has said/that the person at the other end will see what they say.

No, Facebook is a curated communication service. Always has been. You wouldn't be able to keep up with a raw, unfiltered stream of posts by your friends and liked pages. That would make the service useless.


Are you sure you are not missing the point? Whether curated or not, the expectation of users certainly is not that it's curated according to Facebook's interest of the day, but according to their own interests (that is, to reduce noise, not to influence them). And whether perfectly or not, people certainly do expect and counteract lying in ads.

Also, just because ads are generally perceived with some scepticism doesn't mean that certain kinds of ads aren't unethical as well. In particular, ads that exploit methods of bypassing rational thought might indeed be unethical, and certainly are not the same as just general self-interested advertisement. And despite what you claim, manipulation is actually not a defining property of ads - if you have a product to sell that is actually a rational thing to buy, advertisement can use perfectly rational thinking in order to persuade you to buy it. Just because much of end-consumer directed advertisement nowadays is trying to sell bullshit doesn't mean that advertisement can only be used to sell bullshit.


Facebook’s Unethical Experiment: It intentionally manipulated users’ emotions without their knowledge.

I'm not defending Facebook or the experiment, but if you're going to call them out for "manipulating users' emotions without their knowledge", then you need to call out every advertising, marketing, and PR firm on the planet, along with every political talkshow, campaign, sales letter, and half-time speech...


> then you need to call out every advertising, marketing, and PR firm on the planet, along with every political talkshow, campaign, sales letter, and half-time speech...

Except that in every one of those examples, you know you're being fed a worldview slanted by the author for their purpose. They may try to appear unbiased, but you still know that they have a worldview that is in their best interest to share. That makes it completely different.


Except that in every one of those examples, you know you're being fed a worldview slanted by the author

Not always. A perceptive person will know they're being bombarded with messages 24 hours a day, there's persuasion going on, and they may pick up on some of the more overt techniques; however, they won't always know the mechanisms at work or the entities behind the messages.

For example, big PR firms often work behind the scenes to set the stage for large conglomerates. Unless you are actively engaged in trying to connect and analyze the message space, you won't see how a coordinated campaign's mosaic of messages coalesces over time and space. You won't be privy to all the facets of the push and pull in play. You might detect things like frame boundaries, but you won't always know what's omitted. And some things are only evident in hindsight.

Only the most active and astute observers will be aware of the extent of it, and most people won't see any of it because they don't have the mental model for it.


With that in mind, I'd find it important if our social media analytics publicly accounted for such emotional manipulations.

It could be wise to begin sharing such emotional-information influences, so that users and admins alike can be sensitive to them (and acknowledge it's a real part of our systems).


Users aren't fed every post by every one of their friends -- users would be overwhelmed with posts flying by so fast they couldn't keep up -- so it's no secret FB tweaks its feed algorithm to keep users engaged. And experimenting with network effects is what social networks do.

Every time you see a picture of someone you like it's like a little shot of dopamine goes off in your head. Facebook wants to optimize those dopamine shots to continually bump engagement and create an experience where everyone is habitually checking their feed every 10 mins.

This type of behavior design/economics research is done by Dan Ariely at MIT ("Predictably Irrational" http://www.ted.com/talks/dan_ariely_asks_are_we_in_control_o...) and the Stanford Persuasive Tech Lab (http://captology.stanford.edu/).


Exactly why I'm trying to articulate the importance of giving public-facing language to such persuasive technologies, by actually having formal public markers for the emotions being manipulated.

(e.g. an Apple ad with asterisks linking to emotional-signaling research and analytics, to help users determine deeper interests [this ad attracts people who feel X, Y, Z based on...]).


There's a caveat though: a study can only be done if people are unaware they're subjects of it.


Do you have other examples of studies (which changed the user in some way) that were involuntary?


Pretty much every study? Most of the important psychological studies tend to follow a pattern where scientists tell the participants that they're doing X (e.g. measuring their control of body parts when drunk) while actually measuring a completely different Y (half of the group were given fake, non-alcoholic drinks; the goal of the study was to check how much of drunken behavior comes from you knowing that you drink alcohol).

You can't get good results if your subjects know what you're testing for.


Since those participants did give consent to physically being _in_ a study, that's voluntary.

If social norms within the test population (e.g. graduate students) are that pretty much every study tells participants they are doing X, but they end up doing Y, then people are clearly agreeing (consent, free will, expectation based on norms) that they won't know. The norms are a form of disclosure.

What we have here are invisible, silent experiments that violate precedents and norms, without consent.


To say this response is unnecessary and unfounded is disingenuous. Marc Andreessen (whom I respect) tweeted "Run a web site, measure anything, make any changes based on measurements? Congratulations, you're running a psychology experiment!" I could not disagree more.

This isn't simply a matter of looking at metrics and making changes to increase conversion rates. The problem is that users as a whole have come to expect Facebook to be a place where they can see any and all of their friends' updates. When I look at an ad, I know I am being manipulated. I know I'm being sold something. There is no such expectation of manipulative intent with Facebook, or that they're curating your social feed beyond "most recent" and "most popular", which seemingly have little to do with post content and are filters they let you toggle.

What FB has done is misrepresent people and the lives they've chosen to portray, having a hand in shaping their online image. I want to see the good and the bad that my friends post. I want to know that whatever my mom or brother or friend posts, I'll be able to see. Someone's having a bad day? I want to see it and support that person. That's what's so great about social media, that whatever I post can reach everyone in my circle, the way I posted it, unedited, unfiltered.

To me this is a disagreement between what people perceive FB to be and how FB views itself. What if Twitter started filtering out tweets that were negative or critical of others?


There are better ways to question the ethics of the experiment. Here's a simple approach:

Would anyone want their emotions manipulated to be unhappy or unhealthy?

The corollary in a medical experiment would be, would a healthy person want to undergo an experiment that could make them sick?

Some people mentioned advertising as a counterpoint, that what Facebook does is not at all different from advertising's psychological manipulation. Well, maybe some forms of advertising ought to be regulated too. Would a child voluntarily want their emotions manipulated by a Doritos ad to make them sicker or fatter?

Even if it's not known what the outcome is, the two points are:

(1) Facebook's various policies specify you will randomly participate in their studies, but

(2) It matters if an experimental outcome can harm you.

So even though you agreed to participate in experiments, you weren't told the experiments could hurt you. That is a classic medical ethics violation, and it ought to be a universal scientific ethics violation.


"Well maybe some forms of advertising ought to be regulated too."

In the UK, advertising using subliminal messages/stimuli is not permitted: "No advertisement may use images of very brief duration, or any other technique that is likely to influence consumers, without their being fully aware of what has been done."[0]

[0] Source: http://www.cap.org.uk/Advertising-Codes/Broadcast-HTML/Secti...


science doesn't take place just in the lab.

every time someone asks me 'how are you doing' and doesn't mean it, it makes me a little less happy inside. i have to work hard to focus my thoughts on their intent - to greet me - and not on the fact that they asked a question i'd love to answer but they don't want to hear the answer to, because they don't mean what they are saying.

it hurts.

but nobody's going to say it's "unethical" to ask people how they're doing unless you mean it.

so if someone gauges people's reactions, and realizes 'hey, people don't like this' - they've already committed an "ethical violation" according to research ethics defined this way.

people don't like emotional manipulation. i get that - but laws don't fix the problem. laws ignore emotion; they make no special cases. if anything, our society's obsession with rules that are specifically designed to _prevent_ emotion from changing our decision making just makes this worse.


> The corollary in a medical experiment would be, would a healthy person want to undergo an experiment that could make them sick?

Yes, many people would willingly volunteer to take experimental drugs that

i) might not work

ii) might have severe side effects

because those people are dying and want some months more life.


Note the parent comment said "healthy". Last I checked, healthy != dying.


> place where they can see any and all of their friends' updates.

But that's the thing - Facebook have been manipulating your feed for years based on what it thought you would be interested in: favoring popular posts, posts from those you interact with frequently, posts that it thinks could be popular etc. There's always been 'most recent' which is a more accurate timeline, and as far as I know Facebook never manipulated that.

While I neither agree nor disagree with whether the study was right or not, it wasn't with this study that they 'misrepresented people and their lives'. Facebook has been doing that for years!


The consequences of an activity are context dependent.

http://civic.mit.edu/blog/petey/betraying-expectations-in-us...

"I've begun to think that if you, like me, found yourself surprised by these case studies, or nodding along with them as variations on a familiar theme of “gaming the system,” then it is because we came to the cases expecting or wanting something else. Some baseline behavior which ought to have been in action, some baseline norm which ought to have been observed, but wasn’t, rendering the results of the process suspect, “altered,” “manipulated,” “unnatural,” relative to some unaltered, unmanipulated, natural baseline, which would exist but for sin and sinners. But what are these baselines? Where do they come from?

These are not, as they say, purely academic questions. The answers not only explain why we find some behavior problematic: they create the very possibility of the problem itself. The opportunity for subversion arises from the gap between intended and actual use of a system. James Grimmelmann once wrote that “if we must imagine intellectual property law, it must also imagine us.” When engineers design social systems they must imagine both possible uses and devise methods to make sure the uses are producing desired results, endlessly iterating upon the real to move it closer to the ideal."


They ostensibly did that to make the experience better for users. In this case they did it to make them feel worse.


No, they did it to study user reaction to a ratio of positive/negative posts. It's entirely possible that people would be more engaged if they received more negative news from their friends because of empathy, schadenfreude, or simply that knowing others aren't perfect provokes people to feel unashamed about posting.

Whilst the experiment only looked at a single metric, facebook themselves likely looked at all metrics.

Secondly, can we really say people felt 'worse'? Hypothetically, people may feel better if they see brief happy news in a sea of negative news, rather than seeing only happy news all the time with a few negatives, since what is rare gets a stronger reaction. Do we know whether this is true? Not without a study...


We do know that there are elements of contagion around suicidality and self harm. People use the word "triggering" (a bit problematic because the word has spread from and to other situations which have different levels of appropriateness), but yes, we are pretty sure that someone with anorexia who spends a lot of time talking to other people with anorexia, especially if they're actively sharing thinspo tips, is going to find recovery harder, and that this is causal.

While it's great that people with lots of data do research, and there are problems with ethics panels[1], we should be very careful with anything designed to manipulate emotions.

[1] Ben Goldacre talks about this http://www.badscience.net/2011/03/when-ethics-committees-kil...


I think the difference between that and hearing generally 'negative news posts' is direction.

People who are sharing tips to stay thin (or tips on football, or on living a 'moral' life, etc.) have a stated goal in mind, so it shouldn't be surprising if they succeed in it.


From the article, they know they felt worse because the subjects' own posts subsequently became more negative.
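
(As I understand it, the measurement was just word counting, not any deep mood inference. A toy version of that kind of measure, with made-up word lists standing in for the LIWC dictionaries the paper used:)

  # Toy positive/negative word counting; the real study used LIWC word lists.
  POSITIVE = {"happy", "great", "love", "fun"}
  NEGATIVE = {"sad", "awful", "tired", "alone"}

  def emotion_word_counts(post):
      words = post.lower().split()
      return sum(w in POSITIVE for w in words), sum(w in NEGATIVE for w in words)

  # Compare the share of positive vs. negative words in subjects' own posts
  # during the manipulation against a control group.
  print(emotion_word_counts("so tired and sad today"))  # -> (0, 2)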


that doesn't mean they felt worse. maybe they were matching the tone they were seeing around them? if you're giggly at a funeral, you're going to piss off the people around you and make everyone feel worse. you can keep the giggliness inside and pretend to be all somber and serious without affecting your mood.

http://www.today.com/health/stop-cheering-me-some-people-don...

if anything, you could argue that facebook helped the people posting negative statuses who got more views, comments, and likes on the things they were struggling with.


Sometimes I write negative things after reading negative things, but it's ultimately an enriching experience for me.


> Its entirely possible that people would be more engaged if they received more negative news

Or the other way around, there have been many articles about social network envy, having to keep up with the FB Joneses in terms of having an exciting life etc.

I wouldn't have been surprised if people felt worse after looking only at positive status updates...


NOBODY thinks that Facebook is a place to see all your friends' updates. And it has never been anything like that.

What Marc was getting at was that this is A/B testing. Everybody does A/B testing these days. Claiming it is "unethical" because the B group might be less happy than the A group is ridiculous. That's essentially the whole point of A/B testing. Try two things and see what makes your users happier, then you can do more of that.
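
For anyone unfamiliar, A/B testing in this sense just means deterministically splitting users into buckets, showing each bucket a different variant, and comparing some metric afterwards. A minimal, generic sketch (hypothetical names, nothing Facebook-specific):

  import hashlib

  def assign_bucket(user_id, experiment, n_buckets=2):
      # Deterministic assignment: the same user always lands in the same bucket.
      digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
      return int(digest, 16) % n_buckets

  # Bucket 0 sees the control feed, bucket 1 the tweaked feed; afterwards you
  # compare an engagement metric (posts, likes, time on site) between buckets.
  print(assign_bucket("user-12345", "feed-ranking-tweak"))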


Is there a difference between A/B testing which surveys "taste" and A/B testing which causes "behavior change"? There is the small matter of side effects...


I couldn't agree more. What if someone who was part of the experiment group committed suicide? Where's the duty of care?


I am guessing somewhere below their duty to their shareholders?

That was sort of snark at Facebook, but it is a difficult legal question. However, it also sounds like a bit of a beat-up, like the Strava lawsuit.

You are not guaranteed by Facebook a service that allows you to communicate in a clearly defined way. They are allowed to, and do, tweak it constantly to improve whatever metric they think will result in more engagement and bottom line. Much like Google tweaks their search results (or their cafeteria food), these companies are always running experiments. In this case, it resulted in a paper. The researchers who published the paper might have to ensure they upheld the ethical obligations of the journal, and any affiliated institutions. If they did, the fact this is published is a better outcome than normal.

[0] http://cyclingtips.com.au/2012/06/strava-lawsuit/


Well, I used to think that. Back in 2004 when it started, you could and wanted to see all your friends' updates, and even for years afterwards. Now you probably see less than 10% of everything; the rest is purposefully hidden from you. Facebook was a social place, and was supposed to be "the place to keep in touch with friends", so yeah, I wanted to see 100%...


But then everyone around you and me joined as well, and now the average Facebook user has over 300 friends (I myself have 568); there's no way to keep up with all their updates even if everyone posted just two things a day on average. And that doesn't even include the various thousands of pages one liked.

If they let everything in unfiltered, it would probably start looking like Twitter - a place I don't frequent because it's just too damn noisy for me, even if I follow just a few dozen people.


It's also worth noting that this is not (contrary to Andreessen's disingenuous tweet) a case of a website accidentally tripping over scientific-ethics norms through its normal course of operations, unaware that what they're doing might be considered a psychology study.

This was explicitly a psychology study, performed by professional psychologists, for the purpose of collecting data publishable in a journal! The lead author (and the Facebook employee on the project), A.D.I. Kramer, has a PhD in social psychology. I think it's perfectly reasonable in that setting to expect the researchers to be following the norms of scientific ethics.


The difference between Facebook and Twitter on your last point is that Facebook does it for you like a gentleman, and Twitter has you manually pick feeds which will never disagree with you.

Either way you're just going to listen to the least dissenting material you can find, might as well let them figure it out for you.

Maybe the difference is between moral good with questionable ethics, and moral mediocrity with unquestionable ethics.


I just wrote a blog post about the ethical/professional obligations of the researchers associated with this study: http://www.maxmasnick.com/2014/06/28/facebook/

When you publish a paper, you are supposed to write in the body of the manuscript if it's been approved by an IRB and what their ruling was. I'm surprised it was published without this, even though it apparently was?

It's also appropriate to address ethical issues head-on in a paper about a study that may be controversial from an ethical perspective.

If it really was approved by an IRB, then the researchers are ethically in the clear but totally botched the PR.

If not, then I think the study was not ethical.


It was approved by an IRB.


The difference between this experiment and advertising or A/B testing is _intent_. With A/B testing and advertising, the publisher is attempting to sway user behaviour toward purchasing or some other goal which is usually obvious to the user.

With this experiment, Facebook are modifying the news feeds of their users specifically to affect their emotions, and then measuring the impact of that emotional change. The intention is to modify the feelings of users on the system, some negatively, some positively.

Intentionally messing with human moods like this purely for experimentation is the reason why ethics committees exist at research organisations, and why informed consent is required from participants in experiments.

Informed consent in this case could have involved popping up a dialog to all users who were to be involved in the experiment, informing them that the presentation of information in Facebook would be changed in a way that might affect their emotions or mood. That is what you would expect of doctors and researchers when dealing with substances or activities that could adversely affect people's moods. We should expect no less from pervasive social networks like Facebook.


Oh, please.

Every single time Facebook changes anything on their site it "manipulates users' emotions". Show more content from their friends? Show less? Show more from some friends? Show one type of content more, another less? Change the font? Enlarge/shrink thumbnail images? All these things affect users on all levels, including emotionally, and Facebook does such changes every day.

Talking about "informed consent" in the context of a "psychological experiment" here is bizarre. The "subjects" of the "experiment" here are users of Facebook. They decided to use Facebook, and Facebook tweaks the content it shows them every single day. They expect that. That is how Facebook and every other site on the web (that is sophisticated enough to do studies on user behavior) works.

If this is "immoral", then a website outage - which frustrates users hugely - should be outright evil. And shutting down a service would be an atrocity. Of course all of these are ludicrous.

The only reason we are talking about this is because it was published, so all of a sudden it's "psychological research", which is a context rife with ethical limitations. But make no mistake - Facebook and all other sophisticated websites do such "psychological research" ALL THE TIME. It's how they optimize their content to get people to spend more time on their sites, or spend more money, or whatever they want.

If anyone objects to this, they object to basically the entire modern web.


Exactly. I find this situation to be an example of ridiculous pattern matching. Is it published? Then it's a psychological experiment, and needs to be evaluated by an ethics board. Is it just A/B testing? Then it's not "science", so no need for ethics board.


So as long as you aren't publishing the results of your experimentation, none of the ethics that apply to experimenting on humans apply?

That's an...interesting...theory.


Any A/B test is "experimentation on humans". Facebook and all other web giants constantly do such behavioral studies. The only difference is that this one was published.

In other words, if someone argues that this was unethical experimentation on humans, then there are 1,000 other studies we never heard of that are far, far worse. But we know they exist.

It doesn't make sense to argue that. Websites have to experiment with different ways of doing things and see how that affects their users. This isn't just a web thing either, of course - businesses need to try different things in order to optimize themselves. And to measure which is the best way to get people to spend more.


Well that seems to me to be a core issue here. Otherwise, you'd need to get IRB permission for every A/B test you do on your website.


I'm torn about this. In some ways, I can see how mental health issues could be detected, which could hopefully help us avoid horrifying events (mass shootings, off the top of my head). But then again, I can see how the Army or the government in general could control any type of popular uprising. FB, Twitter, etc. have given us tools to connect and join in efforts to fix what is wrong (I'm thinking of the Middle East, though that can be said about the Tea Party or even the Occupy movement). If the price is right, FB can hand over that power (i.e. NSA), or through these secret courts, the Army/government can have direct control of FB. It's crazy to think that this only occurs in countries like Russia and China, but wake up, America! This is happening here as well!


You know why I think they are doing this? Because there have been studies showing that people are miserable on facebook (see below) and I think people are starting to pick up on it. So FB feels some pressure to lighten the mood a bit. But as usual they do it with the subtlety of a drunken fool.

Also, the comparison to an A/B test is a false one. This is specifically to alter the moods of the user and test the results in a study, not to improve the users experience or determine which app version works better.

Regarding the study mentioned above: http://www.newyorker.com/online/blogs/elements/2013/09/the-r...


> Facebook intentionally made thousands upon thousands of people sad.

Hang on. Wasn't the experiment to see whether users would post gloomier or happier messages respectively? That is very different from intentionally making people sad.


"I only silently removed happiness from your life because I was curious what your reaction would be!"


Their method of removing happy posts is not the same as deliberately making people sad. Before the experiment it was highly plausible that people would become happier if they saw fewer happy posts by others[0].

Moreover, I think companies should be doing more experiments like the FB one -- it's high time that human happiness be prioritized over other metrics such as profit, revenue, number of likes etc.

[0] http://www.slate.com/articles/double_x/doublex/2011/01/the_a...


Silently removed happiness from my web application that you visit on a voluntary basis. If you think my web app is too gloomy, feel free to stop coming.


What if Gmail started silently removing happy emails from your Inbox by auto-archiving them?


You expect GMail to show you every non-spam message sent to you in your inbox. On Facebook, on the other hand, you expect a curated list of recent posts - otherwise you wouldn't be able to keep up with what your 500+ friends and 1500+ liked pages post every day. So comparing GMail to Facebook makes no sense at all.


Gmail users have an expectation that Google won't start silently diverting their legitimate email as an experiment on them. That's the comparison if you didn't quite grasp that.

You're claiming that users would "expect" Facebook to do something like filter out all the happy posts from their friends and family without telling them? I don't think many would agree with you.


I think he's merely claiming that you expect Facebook to curate the news feed. How they do so (and for what purpose) is ever-changing and has never been fully transparent, thus your expectations for those particular factors are irrelevant.


Yes. He's saying that their lack of transparency justifies their abuse. I'm trying to explain that I disagree.


I don't see any abuse here, and I believe that their lack of transparency wrt. filtering algorithms is justified.

First of all, the shortest description of what they do probably wouldn't be far from publishing the algorithm itself. An algorithm that's ever changing and probably different depending on where you live or to what group you were randomly assigned. 99% of people wouldn't care anyway, and being transparent about the algorithm would likely make them less happy - right now they accept Facebook as is and don't think twice about it; give them a description of how things work and suddenly everyone will start saying that Facebook filtering sucks because random-reason-511.

Moreover, the only people that stand to benefit from knowing Facebook's algorithm are advertisers, who will game the hell out of the system for their own short-term benefit, just like they do with Google. It's something neither Facebook users, nor Facebook itself want.


Sorry, you failed to comprehend what I said and I even made it really short. Maybe try reading it a few more times.


This study really makes me feel vindicated for unfollowing all of my friends along with every brand on facebook. I could've been part of the study but I'd never know, since the only way I see my friends' posts is to visit their pages directly where I can see them all unfiltered. I've been doing this for the past six months and it has dramatically improved the way I interact with the site. I can still get party invites and keep in touch with people, but I'm immune to the groupthink.


I have a feeling a lot of college courses on research methods are going to use this as an example of a grave ethics breach for years to come. With an experiment group as large as they used, statistically it's almost inevitable that someone in that group will commit suicide in the near future. If that person is in the group that was targeted for negative messages, even a rookie lawyer could make a sound case before a jury that Facebook's researchers have blood on their hands.
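
As a rough back-of-the-envelope (assuming groups on the order of the ~155,000 users per condition reported for the study, and a US base rate of roughly 12 suicides per 100,000 people per year - both approximations, not figures from this article):

    155{,}000 \times \frac{12}{100{,}000} \approx 19 \text{ expected suicides per year in a single condition}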


surely people have committed suicide after using facebook even without this study. is facebook guilty of that, too?

you may argue that facebook was "trying to make people depressed" but that simply isn't true. what if showing more of my friends' negative status updates actually _helps_ them? depressed people are shunned in our society; facebook gave a voice to the voiceless. that's wonderful!


> you may argue that facebook was "trying to make people depressed" but that simply isn't true.

Legal culpability issues aside: did facebook manipulate people's emotions intentionally? Did they inform them that they were going to do this, and of the risks involved? Did they get their consent? If the answers to the last two questions aren't unequivocally yes, then facebook is in deep trouble.

Edit: this also misses the problem that the subjects were never screened for their basic ability to give informed consent. Merely clicking through the ToS does not mean that a subject isn't suffering from a mental illness that nullifies their agreement to it.

Lastly, this experiment clearly involved deception, since the test subjects weren't informed up-front that they were being manipulated. This is problematic[1] if the subjects weren't debriefed after the study:

>It is stated in the Ethical Principles of Psychologists and Code of Conduct set by the American Psychological Association, that psychologists may not conduct research that includes a deceptive compartment unless the act is justified by the value and the importance of the results of such study, provided that this could not be obtained in an alternative way. Moreover, the research should bear no potential harm to the subject as an outcome of deception, be it physical pain or emotional distress. Finally, a debriefing session is required in which the experimenter discloses to the subject the use of deception in the research he/she was part of and provides the subject with the option of withdrawing his/her data.

[1] http://en.wikipedia.org/wiki/Informed_consent#Deception


FWIW, the HN discussion on the study published on PNAS here:

https://news.ycombinator.com/item?id=7956470


Yes, and today's wave of media controversy about it hasn't added significant new information, so I think this post counts as a dupe.


Heaven forbid the fluff piece on a Google executive gets pushed down the page. I'm dumbfounded that you would kill this story. Hacker News is changing.


Well, it was a borderline call, so I've restored the thread.

Perhaps I should explain our thought process. There were at least half a dozen major web publications today putting out variants of this indignant post about "Facebook's unethical experiment". Did all these authors suddenly develop a passion for science ethics? Of course not. It is simply the internet controversy du jour. Those have never made for good HN stories, and the policy has always been to penalize them, because otherwise they would dominate the site.

In cases of pile-on controversy like this one, when the original story has already been discussed on HN—which is pretty common, because HN users tend not to miss a day in posting these things—we usually mark the follow-up posts as dupes unless they add important new information, or at least something of substance. Does this article add anything of substance? It didn't strike me that way, but arguably it does.

As for the PR fluff piece you think is on the front page, why haven't you flagged it? It's impossible for us to catch (or even see) all such things. We rely on users to point them out.


The idea that this story is "controversy du jour" is wrong in my view. I think it's an incredibly important story and the underlying issue may be the biggest in technology. At the very least it is not spam, gossip, or other obvious junk.

The explicit HN policy used to be to allow controversies like this to wash over the site. We all remember seeing the home page covered in many submissions on the same topic. The fear that this would cause a topic to "dominate the site" has been proven false numerous times. I'm not sure why that would be a consideration.

I wasn't objecting to the puff piece on the home page. I don't think lightweight stuff like that can dominate the site either.


Complaints about stories taking over the entire front page of the site are as old as the site itself. This comment might be the first one I've ever read suggesting that the phenomenon was a good thing that we should preserve.


Who is going to decide how many stories on a topic we get to have? Should there have been one Mt. Gox-related submission? One Snowden-related submission? Up to one submission per day per topic? I'm not suggesting it's "good"; I'm suggesting it's better than the alternative.

Killing dupes when there is more than one active discussion is one thing. This submission was the only active discussion on this topic. Removing it is just editorial curation that is of no benefit to anyone at all.


Those are good questions, but you seem to be under the impression that HN didn't use to be intensively moderated. HN was always intensively moderated, curated, or whatever one calls it. That's why you can write:

The fear that this would cause a topic to "dominate the site" has been proven false numerous times.

It was PG who made it false. He poured countless hours into managing the site and countless more into writing code to manage it.

That model hasn't changed. It's more transparent now, because users asked for it to be. Transparency has the side-effect of making it seem to some people like we've fundamentally altered HN when it doesn't work like they assumed it did.


Few people know better what PG, yourself, and others have done for this site or appreciate it more than me. I've seen lots of threads get penalized or killed and reversed. I know it hasn't been perfect in the past either.

I regret saying anything and I won't comment in the future. Thanks.


Please don't regret saying anything and for heaven's sake please don't stop commenting! This stuff is messy, unobvious, and unsatisfying. I'm painfully aware that there's no way to make HN consistent, to satisfy everybody any of the time, or anybody all of the time. The least bad job is all we can strive for, and we can't do that without feedback.

Also, sorry for the snippiness in my tone above. I don't always succeed in responding the way I want to.


So does this mean that people can increase their happiness by using plugins that hide negative posts from their social media?
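
For what it's worth, a crude version of such a plugin is easy to sketch as a browser userscript. The sketch below is only a toy - it assumes a made-up feed markup ("[data-post]") and a naive keyword notion of "negative", which is nothing like real sentiment analysis and is not Facebook's actual markup or ranking:

    // Toy "hide negative posts" filter (TypeScript, run as a userscript).
    // The selector and word list are invented for illustration only.
    const NEGATIVE_WORDS = ["sad", "depressed", "awful", "terrible", "hate"];

    function looksNegative(text: string): boolean {
      const lower = text.toLowerCase();
      return NEGATIVE_WORDS.some((word) => lower.includes(word));
    }

    function hideNegativePosts(root: ParentNode = document): void {
      root.querySelectorAll<HTMLElement>("[data-post]").forEach((post) => {
        if (looksNegative(post.textContent ?? "")) {
          post.style.display = "none"; // hide rather than delete, so it's reversible
        }
      });
    }

    // Re-apply the filter whenever new posts are appended to the feed.
    new MutationObserver(() => hideNegativePosts()).observe(document.body, {
      childList: true,
      subtree: true,
    });

    hideNegativePosts();

The technically trivial part is the filtering; the hard (and, as this thread shows, ethically fraught) part is deciding what counts as "negative" and whether hiding it actually makes anyone happier.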


Possibly, but only in the short run, as a skewed perception of reality tends to have long-term negative consequences. Which is precisely one of the reasons why this kind of stuff is evil.


People can reduce their risk of suicide by using something that filters low-quality reporting of suicide from their timeline.


The author falsely assumes that people change their sharing behavior because of changes in their mood. More likely they just feel something like "everyone's posting cats on Facebook, so that's the place for sharing cats - let me do it too", or something along those lines.


Before and/or after the fact, research participants are normally made aware that they were part of a psychology experiment.

I wonder whether Facebook plans to alert the subjects of this experiment to their participation.


Isn't Slate in the business of exactly that: manipulating their readers' emotions?


Yes, but they're better at it than Facebook. They've got a bunch of gullible illogical peasants about to ban A/B testing... or at least drive it underground. For the children.


I just use Facebook to bookmark youporn at this point


Been kind of surprised there hasn't been more of a reaction to this. I guess the Internet has reached peak Facebook outrage.


Another nail in the FB coffin.

Edit: for me at least.


And only 90,000 more nails to go before your average non-tech user who has Facebook as their homepage drops them.

Until a replacement comes about and a large number of contacts move over, it isn't going anywhere - it has become too large a part of these people's lives. Arguments and reasons don't sway them. Sadly.

I've never even been on facebook. But my girlfriend and extended family use it religiously. My dad and a couple of other family members finally dropped it as a result of my rants, but the rest (the vast majority) just think I'm suspicious and nutty and go right on posting their entire lives.

So, facebook can pretty much do as they please. And apparently they do.


people still use facebook?


> "If you are exposing people to something that causes changes in psychological status, that’s experimentation"

Or art, or journalism, or advertising, or football etc.


Every sensible business will try to make its customers happier.

Showing people bad news to get more engagement has roughly the same moral standing as the evening news.

I guess I don't get it.

[It must be wrong because they learned something from it, I guess?]


What's your position on creating fake news to get more engagement? Some lines are defended because the slippery slope on the other side is infinite.


I'm opposed to lying; I don't imagine that's controversial.

If the slope really is that slippery, there has got to be a line somewhere between changing the details of an arbitrarily complex sort-and-filter default on a social network and purposely propagating lies.


Well, where is that line, if not where you purposely change those sort-and-filter defaults with the explicit goal of altering people's perception of the world in a way that is intentionally not in their interest?


The thing about the slippery slope is that it is far more often a logical fallacy than it is a real danger.


http://pando.com/2014/06/28/facebooks-science-experiment-on-...

"Facebook itself could target certain users, whether they be corporate rivals or current/former employees. Having such strong psychological control over your workforce would certainly have its benefits. And if Facebook ever gets caught? Why, the company could claim it’s all part of a social experiment, one that users tacitly agreed to when they signed up.

With over one-tenth of the world’s population signing into Facebook every day, and now with evidence to back the emotional power of the company’s algorithmic manipulation, the possibilities for widespread social engineering are staggering and unlike anything the world has seen. Granted, Facebook’s motives probably are simply to convince people to buy more stuff in order to please advertisers, but the potential uses of that power to impact elections or global trade could be enticing to all sorts of powerful interest groups."


The thing about fallacies is that you can't claim one just because it's common.

The thing about this case is that the network effects of communication services make for very strong path dependence, which makes it extremely hard to climb back up the slope once you notice you've slipped down a bit too far.


Well that's pretty much every news portal out there, for the definition of 'fake' as 'not matching reality'.



