This isn't simply a matter of looking at metrics and making changes to increase conversion rates. The problem is that users as a whole have come to expect Facebook to be a place where they can see any and all of their friends' updates. When I look at an ad, I know I am being manipulated. I know I'm being sold something. There is no such expectation of manipulative intent from Facebook, or that they're curating your social feed beyond "most recent" and "most popular", which seemingly have little to do with post content and are filters they let you toggle.
What FB has done is misrepresent people and the lives they've chosen to portray, having a hand in shaping their online image. I want to see the good and the bad that my friends post. I want to know that whatever my mom or brother or friend posts, I'll be able to see. Someone's having a bad day? I want to see it and support that person. That's what's so great about social media, that whatever I post can reach everyone in my circle, the way I posted it, unedited, unfiltered.
To me this is a disagreement between what people perceive FB to be and how FB views itself. What if Twitter started filtering out tweets that were negative or critical of others?
Would anyone want their emotions manipulated to be unhappy or unhealthy?
The corollary in a medical experiment would be, would a healthy person want to undergo an experiment that could make them sick?
Some people mentioned advertising as a counterpoint, that what Facebook does is not at all different from advertising's psychological manipulation. Well, maybe some forms of advertising ought to be regulated too. Would a child voluntarily want their emotions manipulated by a Doritos ad to make them sicker or fatter?
Even if it's not known what the outcome is, the two points are:
(1) Facebook's various policies specify you will randomly participate in their studies, but
(2) It matters if an experimental outcome can harm you.
So even though you agreed to participate in experiments, you weren't told the experiments could hurt you. That is a classic medical ethics violation, and it ought to be a universal scientific ethics violation.
In the UK, advertising using subliminal messages/stimuli is not permitted: "No advertisement may use images of very brief duration, or any other technique that is likely to influence consumers, without their being fully aware of what has been done."
 Source: http://www.cap.org.uk/Advertising-Codes/Broadcast-HTML/Secti...
every time someone asks me 'how are you doing' and doesn't mean it, it makes me a little less happy inside. i have to work hard to focus my thoughts on their intent - to greet me - and not on the fact that they asked a question i'd love to answer but they don't want to hear the answer to, because they don't mean what they are saying.
but nobody's going to say it's "unethical" to ask people how they're doing unless you mean it.
so if someone gauges people's reactions, and realizes 'hey, people don't like this' - they've already committed an "ethical violation" according to research ethics defined this way.
people don't like emotional manipulation. i get that - but laws don't fix the problem. laws ignore emotion; they make no special cases. if anything, our society's obsession with rules that are specifically designed to _prevent_ emotion from changing our decision making just makes this worse.
Yes, many people would willingly volunteer to take experimental drugs that
i) might not work
ii) might have severe side effects
because those people are dying and want some months more life.
But that's the thing - Facebook have been manipulating your feed for years based on what it thought you would be interested in: favoring popular posts, posts from those you interact with frequently, posts that it thinks could be popular etc. There's always been 'most recent' which is a more accurate timeline, and as far as I know Facebook never manipulated that.
While I neither agree nor disagree with whether the study was right or not, it wasn't with this study that they 'misrepresented people and their lives'. Facebook has been doing that for years!
"I've begun to think that if you, like me, found yourself surprised by these case studies, or nodding along with them as variations on a familiar theme of “gaming the system,” then it is because we came to the cases expecting or wanting something else. Some baseline behavior which ought to have been in action, some baseline norm which ought to have been observed, but wasn’t, rendering the results of the process suspect, “altered,” “manipulated,” “unnatural,” relative to some unaltered, unmanipulated, natural baseline, which would exist but for sin and sinners. But what are these baselines? Where do they come from?
These are not, as they say, purely academic questions. The answers not only explain why we find some behavior problematic: they create the very possibility of the problem itself. The opportunity for subversion arises from the gap between intended and actual use of a system. James Grimmelmann once wrote that “if we must imagine intellectual property law, it must also imagine us.” When engineers design social systems they must imagine both possible uses and devise methods to make sure the uses are producing desired results, endlessly iterating upon the real to move it closer to the ideal."
Whilst the experiment only looked at a single metric, Facebook themselves likely looked at all metrics.
Secondly, can we really say people felt 'worse'? Hypothetically, people may feel better if they see brief happy news in a sea of negative news, rather than seeing only happy news all the time with a few negatives, since what is rare gets a stronger reaction. Do we know this is true or not? Not without a study...
While it's great that people with lots of data do research, and there are problems with ethics panels, we should be very careful with anything designed to manipulate emotions.
 Ben Goldacre talks about this http://www.badscience.net/2011/03/when-ethics-committees-kil...
People who are sharing tips to stay thin, (or tips on football, or living a 'moral' life, etc.) have a stated goal in mind, so it shouldn't be surprising if they succeed in it.
if anything, you could argue that facebook helped the people posting negative statuses who got more views, comments, and likes on the things they were struggling with.
Or the other way around, there have been many articles about social network envy, having to keep up with the FB Joneses in terms of having an exciting life etc.
I wouldn't have been surprised if people felt worse after looking only at positive status updates...
What Marc was getting at was that this is A/B testing. Everybody does A/B testing these days. Claiming it is "unethical" because the B group might be less happy than the A group is ridiculous. That's essentially the whole point of A/B testing. Try two things and see what makes your users happier, then you can do more of that.
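At its core, A/B testing of this kind is just random assignment into two groups followed by a comparison of one metric. A minimal sketch in Python, using entirely synthetic numbers (the metric, group size, and effect size here are made-up assumptions for illustration):

```python
import random
import statistics

# Hypothetical A/B test: users are randomly split into two groups and a
# single metric is compared. All numbers here are synthetic assumptions.
random.seed(0)

n = 500
# Metric: e.g. positive posts per user per week (made up for illustration).
group_a = [random.gauss(5.0, 1.0) for _ in range(n)]  # control feed
group_b = [random.gauss(5.3, 1.0) for _ in range(n)]  # tweaked feed

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
lift = mean_b - mean_a
print(f"A: {mean_a:.2f}  B: {mean_b:.2f}  lift: {lift:+.2f}")
```

In practice a real analysis would also include a significance test, but the ethical question raised above is about the assignment step, not the arithmetic: one group is deliberately given the variant expected to perform worse.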
That was sort of snark at Facebook, but it is a difficult legal question. However, it also sounds like a bit of a beat-up, like the Strava lawsuit.
Facebook doesn't guarantee you a service that allows you to communicate in a clearly defined way. They are allowed to, and do, tweak it constantly to improve whatever metric they think will result in more engagement and a better bottom line. Much like Google tweaks their search results (or their cafeteria food), these companies are always running experiments. In this case, it resulted in a paper. The researchers who published the paper might have to ensure they upheld the ethical obligations of the journal, and any affiliated institutions. If they did, the fact this is published is a better outcome than normal.
If they let everything in unfiltered, it would probably start looking like Twitter - a place I don't frequent because it's just damn too noisy for me even if I follow just a few dozen people.
This was explicitly a psychology study, performed by professional psychologists, for the purpose of collecting data publishable in a journal! The lead author (and the Facebook employee on the project), A.D.I. Kramer, has a PhD in social psychology. I think it's perfectly reasonable in that setting to expect the researchers to be following the norms of scientific ethics.
Either way you're just going to listen to the least dissenting material you can find, might as well let them figure it out for you.
Maybe the difference is between moral good with questionable ethics, and moral mediocrity with unquestionable ethics.
10,000 sufferers of Major Depressive Disorder
5,000 people with PTSD
4,000 bipolar disorder sufferers
and a plethora of others with various mental disorders.
11/100,000 people commit suicide each year in America. How many were part of the treatment group of the experiment, without consent or knowledge?
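Taking that base rate at face value, the expected count is just rate × population. A back-of-envelope sketch (the population figure is an example taken from the size mentioned elsewhere in the thread, and assuming the national base rate applies uniformly, which is a strong assumption):

```python
# Back-of-envelope: expected annual suicides in a study-sized population,
# assuming the national base rate applies uniformly (a strong assumption;
# the population figure is an example, not a verified study size).
base_rate = 11 / 100_000   # suicides per person per year (US figure cited above)
population = 155_000       # example study population

expected = base_rate * population
print(f"~{expected:.0f} expected per year in a group of {population:,}")
```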
As a scientist, I'm fascinated by the research. As a human being, I'm horrified it was ever done.
If a friend made a post deemed negative, one that would otherwise have signaled to close friends to check up on that person and intervene if necessary, there's a very good chance these messages would have been filtered out in a system like this.
The potentially affected population then becomes much greater than the 155,000 being experimented upon, and a much greater number of people would have been put at risk of not having any close friends or others available to intervene, who would otherwise have been able to do so if the Facebook algorithm hadn't been altered to follow this bullshit happiness metric for research purposes.
I really hope this becomes much bigger news, and some action is taken to ensure something like this doesn't happen again; but considering the lack of funding and attention given to mental health, especially in the work-till-you-drop business world, i sadly doubt it.
How would this be determined in offline experiments where people volunteered?
i have bipolar disorder, and i barely made it through an intense schizoaffective episode where i heard many voices, felt like my consciousness was splitting into multiple parts, and was terrified. this was just two months after my startup exited and i got nothing in 2012.
oh yeah - i'm also a startup founder, worked at uber, google and microsoft - now facebook. i don't speak for them. just me.
i'd like the world to understand me better. so much of what i've struggled with is mood-based.
you know that guy - http://www.losethos.com/ - i know _exactly_ what he means. i understand him when i hear him speak, and i feel bad for the guy. it's scary being where he is, "knowing" how powerful and right you must be, and knowing how people laugh at you behind our back - but you know you're right, that you're a conduit for god.
do you think it makes sense for him to be stuck like that? i sure as hell don't.
i'm sorry, i'm getting emotional here. this is hard for me.
i can't speak for my employer here. but i will tell you that your mindset of me as a victim who must not be upset - it can be more than a little offensive. fortunately through all of my experience dealing with these issues, i've learned how to better manage my emotional states, and i've also learned to see emotion as form of sensory input - like light and sound. i don't believe everything i hear, or see - why should i believe everything i think or feel?
if facebook was making up random shit that was negative and showing it to their users - that sucks. if they were making falsely positive posts - forging your friends activity - that also sucks.
but when they are selectively showing you portions of your friends' activity - something they were already doing anyway - it's wrong to say that they have "intentionally downgraded" my emotional state. if my friend says she's having a shitty day, she's not intentionally downgrading my state. she's having a bad day. if facebook hides that from me, are they making my day better? are they making her day worse? it's not really clear here. people know facebook adjusts their posts, and they did show that people who are exposed to negative content are less likely to post positive things. does this mean that the users are themselves feeling less positive? or are they just trying to keep with the tone of the social area they're in? it's not clear.
our culture does not understand emotion - i think this is a serious problem and we really need to do something about it. that templeOS guy is not enjoying life or functioning nearly as well as he could if he were not shunned for being so wildly antisocial. you know who else gets shunned for being antisocial? people who say things like 'i am sad', 'i feel lonely,' etc etc, in public.
i'm sorry for the tone of this - it's - it's hard for me to stay calm here.
but let's look at this as if emotion were the "same kind of thing" as light or sound. spreading a negative emotional reaction to this article and saying you are "horrified" that someone who was depressed had more people see their depressed posts - you're contributing to the problem.
it feels to me that emotion has some 'conservation' like properties; you can't diminish a lot of negative emotion at once. it also seems to 'move' places; negative emotion between people who interact seems to get pushed to scary places where lots of fear and hate are concentrated. in late 2012 i felt like all of the evil in the world, all of the hate was being shoved into me because i told the world i could take it, i told the world i didn't want them to hurt like i did.
i heard voices telling me to kill, and i wanted to kill myself rather than hurt someone else. i thought i could take myself and the voices out with me - and then i heard the voice of my parents and loved ones calling to me.
when i see those shootings, where some loser with no friends and no hope goes out and kills a bunch of people - i feel like it happened to them, and rather than blame themselves for the horrible shit being pushed on them, they blamed the outside world.
but they're just as much victims - we're all victims - of our misunderstanding of emotion.
i'm sorry, i know you didn't ask for me to be upset at you, i know you mean well, i'm just.
i want people to understand this stuff better because the better i've come to understand it, the better i've functioned in life. the ability to remain calm - an ability i have not exercised here because it impedes the ability to express genuine content - that ability is invaluable and a huge source of power.
energy moves from a heat reservoir, a bunch of pissed off angry furious people, to a cold reservoir - a room full of sociopaths who use fear and anger to extract energy from a warmer place. the heat-engine carnot cycle of samsara, and the transition from golden age to successive yugas, is just adiabatic/isothermic expansion and contraction of emotional content
shit this is making no sense.
so you see where i'm going - i'd like to understand this stuff better. i hope the study helps.
FB _already_ filters out updates based on some blackbox algorithm. So they tweaked the parameters of that algorithm to filter out the "happier" updates, and observed what happens. How is this unethical? The updates were posted by the users' friends! FB didn't manufacture the news items; they were always there.
I detest FB as much as the next guy, but this is ridiculous.
In this case, I don't think there was actual risk but just reading the PNAS paper it doesn't sound like the study went through the proper process. If it was reviewed by an IRB then it did go through the proper process and it's ethically sound, but a PR nightmare.
It's possible that for it to be published, a higher standard would be needed. That is, the actions they took were ethical, but perhaps inappropriate for scientific publication.
But, if that were the case, the peer reviewers and the journal in which it was published should have flagged that. That it was published shows they didn't have any significant concerns.
> But, if that were the case, the peer reviewers and the journal in which it was published should have flagged that. That it was published shows they didn't have any significant concerns.
This would not be a scientific ethics issue if they explained their IRB review in the manuscript. It is so unusual to not do this that it is a reasonable assumption something funny is going on, in my experience.
For example, the journal could be incentivized to look the other way in order to publish a high publicity article. I'm NOT saying that's what PNAS did, but just because something is published doesn't mean it's ethical. (See: https://www.ncbi.nlm.nih.gov/pubmed/20137807)
I see this enough in papers that it seems pretty standard to me, and it especially makes sense in a paper where the editors think there are potential ethical issues.
If I was an author of this paper, I would have actually spent a sentence or two explaining why there was not a risk to human subjects, etc.
The way they address this by basically saying it is ok because of the ToS that no one reads is the worst possible way to handle this. It seems to me more like no one thought about it at all than that the editors carefully considered it. I just don't see how you get from recognizing big ethical issues to not even addressing them in the manuscript.
Also, I strongly doubt that the public has any clue of what facebook is doing. For all the public knows and understands, facebook could be employing magic message fairies.
Each of those is clearly identified.
If Facebook wants to put a little icon next to each "experimental study" status update which disclosed the party that funded the study, it would be different.
Even in "science", some studies are funded and others are not.
There are any number of technical means by which:
(a) opt-in permission could be requested in advance of a study
(b) opt-out option could be advertised in advance of a study
(c) start and end dates of non-optional study could be disclosed
This is about CHOICE of participation, not the NATURE of the study.
Wow... Not understanding basic theories of communication and human irrationality, which doesn't allow lots of data to be processed and accepted without critical thought.
"opt-in permission could be requested in advance of a study"
Wow... Not understanding basic theories of psychological and sociological studies, which state that subjects should not be informed about the study, or their behavior will change.
But please, go on with your clueless "ethics".
It's been fairly well established that "I wanted to learn something" isn't an adequate excuse for doing things to people without informing them or receiving their consent.
Before you talk about peoples' "clueless 'ethics'", you might want to read the professional standards of the field, for example the American Psychological Association's Ethics Code. The section on "informed consent to research" is here: http://www.apa.org/ethics/code/index.aspx?item=11#802
> subjects should not be informed about the study
About the study, or about being _in_ the study?
So, there is a difference. It's still a complex question, though -- is filtering or prioritizing based on emotional sentiment really different from what they are already doing with inserting ads and such?
Were they to filter posts by emotional sentiment as a part of their normal operations, I'd find it unethical, or at least something I might not want. But I'm totally fine with them subjecting users (including myself) to random research studies, as those are temporary situations, and with Facebook's data sets, they can have great benefits for humanity.
Perhaps Facebook should provide an opt-in option for users to be subjects of various sociological experiments at unspecified times. I'd happily select it.
Would it be unethical if I broke into your house to randomly place pieces of candy and $5 bills in your drawers?
Would it be unethical if I placed candy and money around my own house, invited you into my house, and told you to take anything you desire?
Every single company in the world knows very well that 99.99% of their userbase won't read the ToS, and use this to do whatever they want with their users' information and privacy.
There needs to be dramatic improvements in that area.
Your FB profile is not your house; it is just some data you have shared with FB. FB decides what to do with the data: how to share it, where to share it, when to share it, who to share it with, etc.
Everybody knows that FB _already_ manipulates the feed to change your mood: to make you more engaged with the site; to make you click more on ads; etc. It's been doing this basically for ever.
I'm puzzled about the outrage.
For example, I once saw a British student e-mail out surveys about newspaper-buying habits; one was sent to an American academic, who replied to the student's supervisor saying the student should be thrown out of university for performing experiments on humans without ethical review board approval.
Facebook is in the unique position of possessing data that can be orders of magnitude more useful for social studies than surveys of randomly picked college students that happened to pass through your hallway. There's lot of good to be made from it.
But the bigger issue I see here is why it's unethical to "manipulate user emotions" for research, when every salesman, every ad agency, every news portal and every politician does this to a much bigger extent and it's considered fair? It doesn't make much sense to me (OTOH I have this attitude, constantly backed by experience, that everything a salesman says is a malicious lie until proven otherwise).
My own way to reconcile this -- and I admit it's not a mainstream view -- is that advertisement and salesmanship should be considered just as unethical. I don't know how to quantify what "over the line" is, but it all feels like brain-hacking. Things like "The Century of the Self" suggest that in the past century or so we've become extremely good at finding the little tricks and failings of human cognition and taking advantage of vulnerabilities of our reasoning to inject the equivalent of malicious code. The problem is that when I say "we" I don't mean the average person, and there's an ever-growing asymmetry. Like malware developers adapting faster than anti-malware developers, most people have the same level of defense that they always have had, while the "attackers" have gotten better and better at breaking through defenses.
Sometimes I'll see discussions about "what will people centuries from now think was crazy about our era?" and there's a part of me that keeps coming back to the idea that the act of asymmetrically exploiting the faults of human thinking is considered normal and "just the way things are."
I agree with that, or probably think even more strongly - that advertisements/sales are more unethical than research. It's difficult to put limits though, because even if many salesmen clearly act maliciously, pretty much everything you do or say influences people one way or another; it's how we communicate.
What I'd love to see is Facebook creating an opt-in option for a user to be a part of further sociological research. I'd gladly turn it on and be happy that I'm helping humanity, while Facebook could limit their studies to people who explicitly consented (there's an issue with selection bias though). Their data is too good not to be used for the betterment of mankind.
What I would love -- and what I would eagerly opt-in to -- would be a system where Facebook could educate users on irrational behaviors. "We noticed that 60% of users like you spent an average of 30 seconds more looking at this kind of content... this is because your brain etc etc etc". Creepy, perhaps, but if there were a way to help people be more aware of and defend against advertisement that would be neat.
Sadly, you've made a great point here. It's very likely that the end results of research will be used exactly for that - as it already happens with most of psychology.
I hope though that some of that research will be used to create better policies and help the society.
> What I would love -- and what I would eagerly opt-in to -- would be a system where Facebook could educate users on irrational behaviors.
I'd happily opt-in to that as well (and opt-in all my relatives too ;)). I don't expect Facebook to ever do that, as it'd be exactly opposite to their goals of being able to a/ influence their users, and b/ cater to advertisers, but there already are websites doing exactly that (e.g. LessWrong). They're niche places though; I'd love to see something popular enough to reach a general audience.
As for advertising trying to manipulate you: you know it's an ad, and there seem to be regulations for making that clear. For example, things that look like articles in magazines but are actually ads are marked as "Special Advertising Section". Similarly, ads for political candidates have to be marked "Paid for by XYZ", etc.
We don't tolerate deception in those areas why would we tolerate it from FB?
Sadly, we've learned how to work around all of that; it's now called the PR industry. There's no way to spot a good PR campaign until after the fact because no single "fake organic news", no single article that happened to be published at a particular time can be pointed to and said "hey, that's an ad!". Bump up the scale and people will stop noticing.
The speech of one salesman/politician is different from thousands of machines impersonating human speech. Speech is protected in some countries. Fake speech (e.g. experiments/spam) decreases the trust of humans in all speech on the network.
If anyone wants to run large-scale experiments, they can:
(a) ask for volunteers
(b) pay for labor
Those who want to volunteer or microtask can (a) opt-in for money or fame, (b) disclose that they have opted-in, so that others know that their conversations are part of an experiment.
Why is advertising (e.g. a promoted tweet) differentiated from non-advertising? Why is "disclosure" required in journalism? Why are experiments differentiated from non-experiments?
The act of observation changes the behavior of the observed. If experiments are not disclosed and clearly demarcated, users must defensively assume that they may be observed/experimented upon, which affects behavior in the entire network. As a side-effect, this pollutes any conclusions which may be drawn from future "social" research.
Sounds like the effect the Snowden revelations have had on some discourse or interactions online, now that people more clearly know that someone may be listening.
That's exactly the reason why most psychological/sociological experiments are not disclosed. If you read an account of pretty much any experiment, you'll see that scientists told their volunteers that they're doing X, where in fact they were doing a completely different Y. You need people to be unaware of the true goals of your research for your results to make any sense.
Facebook, though, pretends to be a communication service. The general expectation from a communication service is that it transmits information between users, thus users expect that what they see or hear coming out of the communication service is what the person at the other end has said/that the person at the other end will see what they say. A service that doesn't provide that isn't really a communication service at all, by definition - and lacking any other uses, isn't really good for anything. Just imagine a phone company with advanced speech recognition and synthesis software in the line that rewrites your conversations to be happier (or any other quality that the company or its customers prefer).
You're underestimating human irrationality and how communication works. No one has enough brainpower to constantly correct everything they hear for someone's self-interest. That's why ads work. They're identifying and exploiting methods for bypassing rational thought.
> The general expectation from a communication service is that it transmits information between users, thus users expect that what they see or hear coming out of the communication service is what the person at the other end has said/that the person at the other end will see what they say.
No, Facebook is a curated communication service. Always has been. You wouldn't be able to keep up with a raw, unfiltered stream of posts from your friends and liked pages. That would make the service useless.
Also, just because ads are generally perceived with some scepticism, doesn't mean that certain kinds of ads aren't unethical as well. In particular, ads that exploit methods of bypassing rational thought might indeed be unethical, and certainly are not the same as just general self-interested advertisement. And despite what you claim, manipulation is actually not a defining property of ads - if you have a product to sell that is actually a rational thing to buy, advertisement can use perfectly rational thinking in order to persuade you to buy it. Just because much of end-consumer directed advertisement nowadays is trying to sell bullshit, doesn't mean that advertisement can only be used to sell bullshit.
I'm not defending Facebook or the experiment, but if you're going to call them out for "manipulating users' emotions without their knowledge", then you need to call out every advertising, marketing, and PR firm on the planet, along with every political talkshow, campaign, sales letter, and half-time speech...
Except that in every one of those examples, you know you're being fed a worldview slanted by the author for their purpose. They may try to appear unbiased, but you still know that they have a worldview that is in their best interest to share. That makes it completely different.
Not always. A perceptive person will know they're being bombarded with messages 24 hours a day, there's persuasion going on, and they may pick up on some of the more overt techniques; however, they won't always know the mechanisms at work or the entities behind the messages.
For example, big PR firms often work behind the scenes to set the stage for large conglomerates. Unless you are actively engaged in trying to connect and analyze the message space, you won't see how a coordinated campaign's mosaic of messages coalesce over time and space. You won't be privy to all the facets of the push and pull in play. You might detect things like frame boundaries, but you won't always know what's omitted. And some things are only evident in hindsight.
Only the most active and astute observers will be aware of the extent of it, and most people won't see any of it because they don't have the mental model for it.
It could be wise to begin disclosing such emotional-information influences, so that users and admins can be equally sensitive to them (and acknowledge it's a real part of our systems).
Every time you see a picture of someone you like it's like a little shot of dopamine goes off in your head. Facebook wants to optimize those dopamine shots to continually bump engagement and create an experience where everyone is habitually checking their feed every 10 mins.
This type of behavior design/economics research is done by Dan Ariely at MIT ("Predictably Irrational" http://www.ted.com/talks/dan_ariely_asks_are_we_in_control_o...) and the Stanford Persuasive Tech Lab (http://captology.stanford.edu/).
(e.g. an Apple ad with asterisks linking to emotional-signaling research and analytics, to help users determine deeper interests [this ad attracts people who feel X, Y, Z based on...]).
You can't get good results if your subjects know what you're testing for.
If social norms within the test population (e.g. graduate students) are that pretty much every study tells participants they are doing X, but they end up doing Y, then people are clearly agreeing (consent, free will, expectation based on norms) that they won't know. The norms are a form of disclosure.
What we have here are invisible, silent experiments that violate precedents and norms, without consent.
Or art, or journalism, or advertising, or football etc.
When you publish a paper, you are supposed to state in the body of the manuscript whether it was approved by an IRB and what the ruling was. I'm surprised it was published without this statement, yet apparently it was.
It's also appropriate to address ethical issues head-on in a paper about a study that may be controversial from an ethical perspective.
If it really was approved by an IRB, then the researchers are ethically in the clear but totally botched the PR.
If not, then I think the study was not ethical.
With this experiment, Facebook modified the news feeds of their users specifically to affect their emotions, and then measured the impact of that emotional change. The intention was to modify the feelings of users on the system, some negatively, some positively.
Intentional messing with human moods like this purely for experimentation is the reason why ethics committees exist at research organisations, and why informed consent is required from participants in experiments.
Informed consent in this case could have involved popping up a dialog to all users who were to be involved in the experiment, informing them that the presentation of information in Facebook would be changed in a way that might affect their emotions or mood. That is what you would expect of doctors and researchers when dealing with substances or activities that could adversely affect people's moods. We should expect no less from pervasive social networks like Facebook.
Every single time Facebook changes anything on their site it "manipulates users' emotions". Show more content from their friends? Show less? Show more from some friends? Show one type of content more, another less? Change the font? Enlarge/shrink thumbnail images? All these things affect users on all levels, including emotionally, and Facebook does such changes every day.
Talking about "informed consent" in the context of a "psychological experiment" here is bizarre. The "subjects" of the "experiment" here are users of Facebook. They decided to use Facebook, and Facebook tweaks the content it shows them every single day. They expect that. That is how Facebook and every other site on the web (that is sophisticated enough to do studies on user behavior) works.
If this is "immoral", then a website outage - which frustrates users hugely - should be outright evil. And shutting down a service would be an atrocity. Of course all of these are ludicrous.
The only reason we are talking about this is because it was published, so all of a sudden it's "psychological research", which is a context rife with ethical limitations. But make no mistake - Facebook and all other sophisticated websites do such "psychological research" ALL THE TIME. It's how they optimize their content to get people to spend more time on their sites, or spend more money, or whatever they want.
If anyone objects to this, they object to basically the entire modern web.
In other words, if someone argues that this was unethical experimentation on humans, then there are 1,000 other studies we never heard of that are far, far worse. But we know they exist.
It doesn't make sense to argue that. Websites have to experiment with different ways of doing things and see how that affects their users. This isn't just a web thing either, of course - businesses need to try different things in order to optimize themselves, and to measure which approach best gets people to spend more.
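The experimentation described here is typically implemented as deterministic bucketing: each user is hashed into a treatment or control group, and the two groups' metrics are compared. A minimal sketch of how such assignment usually works (the function and experiment names are illustrative, not Facebook's actual system):

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_pct: int = 50) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing the user ID together with the experiment name keeps the
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # roughly uniform over 0-99
    return "treatment" if bucket < treatment_pct else "control"

# The same user always lands in the same group for a given experiment.
assert assign_bucket("user42", "feed_ranking_v2") == assign_bucket("user42", "feed_ranking_v2")
```

The point of the determinism is that no one has to be told anything: assignment happens silently on the server, which is exactly the property the ethical debate above turns on.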
Also, the comparison to an A/B test is a false one. This was specifically to alter the moods of users and test the results in a study, not to improve the user's experience or determine which app version works better.
Regarding the study mentioned above:
Hang on. Wasn't the experiment to see whether users would post gloomier or happier messages respectively? This is very different from intentionally making people sad.
Moreover, I think companies should be doing more experiments like the FB one -- it's high time that human happiness be prioritized over other metrics such as profit, revenue, number of likes etc.
You're claiming that users would "expect" Facebook to do something like filter out all the happy posts from their friends and family without telling them? I don't think many would agree with you.
First of all, the shortest description of what they do probably wouldn't be far from publishing the algorithm itself. An algorithm that's ever changing and probably different depending on where you live or to what group you were randomly assigned. 99% of people wouldn't care anyway, and being transparent about the algorithm would likely make them less happy - right now they accept Facebook as is and don't think twice about it; give them the description of how things work and suddenly everyone will start saying that Facebook filtering sucks because random-reason-511.
Moreover, the only people that stand to benefit from knowing Facebook's algorithm are advertisers, who will game the hell out of the system for their own short-term benefit, just like they do with Google. It's something neither Facebook users, nor Facebook itself want.
you may argue that facebook was "trying to make people depressed" but that simply isn't true. what if showing more of my friends' negative status updates actually _helps_ them? depressed people are shunned in our society; facebook gave a voice to the voiceless. that's wonderful!
Legal culpability issues aside: did facebook manipulate people's emotions intentionally? Did they inform them that they were going to do this, and of the risks involved? Did they get their consent? If the answers to the last two questions aren't unequivocally yes, then facebook is in deep trouble.
Edit: this also misses the problem that the subjects were never screened for their basic ability to give informed consent. Merely clicking through the ToS does not mean you aren't suffering from a mental illness that nullifies your agreement to the ToS.
Lastly, this experiment clearly involved deception, since the test subjects weren't informed up-front that they were being manipulated. This is problematic if the subjects weren't debriefed after the study:
>It is stated in the Ethical Principles of Psychologists and Code of Conduct set by the American Psychological Association, that psychologists may not conduct research that includes a deceptive component unless the act is justified by the value and the importance of the results of such study, provided that this could not be obtained in an alternative way. Moreover, the research should bear no potential harm to the subject as an outcome of deception, be it physical pain or emotional distress. Finally, a debriefing session is required in which the experimenter discloses to the subject the use of deception in the research he/she was part of and provides the subject with the option of withdrawing his/her data.
Perhaps I should explain our thought process. There were at least half a dozen major web publications today putting out variants of this indignant post about "Facebook's unethical experiment". Did all these authors suddenly develop a passion for science ethics? Of course not. It is simply the internet controversy du jour. Those have never made for good HN stories, and the policy has always been to penalize them, because otherwise they would dominate the site.
In cases of pile-on controversy like this one, when the original story has already been discussed on HN—which is pretty common, because HN users tend not to miss a day in posting these things—we usually mark the follow-up posts as dupes unless they add important new information, or at least something of substance. Does this article add anything of substance? It didn't strike me that way, but arguably it does.
As for the PR fluff piece you think is on the front page, why haven't you flagged it? It's impossible for us to catch (or even see) all such things. We rely on users to point them out.
The explicit HN policy used to be to allow controversies like this to wash over the site. We all remember seeing the home page covered in many submissions on the same topic. The fear that this would cause a topic to "dominate the site" has been proven false numerous times. I'm not sure why that would be a consideration.
I wasn't objecting to the puff piece on the home page. I don't think lightweight stuff like that can dominate the site either.
Killing dupes when there is more than one active discussion is one thing. This submission was the only active discussion on this topic. Removing it is just editorial curation that is of no benefit to anyone at all.
The fear that this would cause a topic to "dominate the site" has been proven false numerous times.
That didn't prove itself false, nor did the community make it false; it was PG who made it false. He poured countless hours into managing the site and countless more into writing code to help manage it.
That model hasn't changed. It's more transparent now, because users asked for it to be. Transparency has the side-effect of making it seem to some people like we've fundamentally altered HN when it doesn't work like they assumed it did.
I regret saying anything and I won't comment in the future. Thanks.
Also, sorry for the snippiness in my tone above. I don't always succeed in responding the way I want to.
I wonder if Facebook plans on alerting subjects of this experiment to their participation?
Edit: for me at least.
Until a replacement comes along and a large number of contacts move, it isn't going anywhere; it has become too large a part of these people's lives. Arguments and reasons don't sway them. Sadly.
I've never even been on facebook. But my girlfriend and extended family use it religiously. My dad and a couple of other members finally dropped it as the result of my rants but the rest (the vast majority) just think I'm suspicious and nutty and go right on posting their entire lives.
So, facebook can pretty much do as they please. And apparently they do.
Showing people bad news to get more engagement has roughly the same moral standing as the evening news.
I guess I don't get it.
[It must be wrong because they learned something from it, I guess?]
If the slope is really that slippery, there has got to be a point somewhere between changing the details of an arbitrarily complex sort-and-filter default on a social network site and purposely propagating lies.
"Facebook itself could target certain users, whether they be corporate rivals or current/former employees. Having such strong psychological control over your workforce would certainly have its benefits. And if Facebook ever gets caught? Why, the company could claim it’s all part of a social experiment, one that users tacitly agreed to when they signed up.
With over one-tenth of the world’s population signing into Facebook every day, and now with evidence to back the emotional power of the company’s algorithmic manipulation, the possibilities for widespread social engineering are staggering and unlike anything the world has seen. Granted, Facebook’s motives probably are simply to convince people to buy more stuff in order to please advertisers, but the potential uses of that power to impact elections or global trade could be enticing to all sorts of powerful interest groups."
The thing about this case is that network effects of communication services make for very strong path dependence, thus making it extremely hard to get back up the slope a bit if you notice you've been slipping down a bit too much.