Think of it this way: there's some sort of algorithm that controls what goes in your Facebook feed. There has to be, or else they'd just have to show you everything. The algorithm takes sentiment analysis into account as part of making those decisions, which seems perfectly reasonable. So, just by building something that interacts with users and affects their emotions, FB is already running a completely uncontrolled experiment. If they show negative stories vs. positive stories, what happens? So far, they just don't know, which seems really bad. To find out the consequences of what they actually do, they have to dial some knobs up and down and watch the effects. If they can't try things and find out what happens, how do you expect them to do anything? Just guess? Even worse, if they just guess, they're still experimenting, only with far less information.
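To make the "dial some knobs" point concrete, here's a toy sketch of what sentiment-weighted feed ranking could look like. This is not Facebook's actual algorithm; the word lists, weights, and function names are all invented for illustration.

```python
# Toy sketch: lexicon-based sentiment scoring plus a ranking "knob".
# Everything here (word lists, scoring, the mood_knob parameter) is a
# hypothetical stand-in, not anything Facebook has published.

POSITIVE = {"great", "love", "happy", "win"}
NEGATIVE = {"sad", "hate", "awful", "loss"}

def sentiment_score(text: str) -> int:
    """Crude sentiment: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_feed(posts: list[str], mood_knob: float = 0.0) -> list[str]:
    """Order posts by weighted sentiment. mood_knob > 0 boosts positive
    posts, mood_knob < 0 boosts negative ones; 0 leaves order arbitrary.
    Dialing this knob and measuring the response is the experiment."""
    return sorted(posts, key=lambda p: mood_knob * sentiment_score(p),
                  reverse=True)
```

The point of the sketch is that any value of `mood_knob`, including whatever default ships, is a choice with emotional consequences; there is no neutral setting to hide behind.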
This is completely normal, and every business does it. They have to answer questions about what effect the things they do have, or get stuck not doing anything. A huge part of what businesses do is interact with people, so a huge part of that effect is actually the impact on their customers' emotions. What if we make the colors on this display brighter? What if we play different music in our store? Psychology experiment! You can call it "manipulation" if you want, but you're really just going for cheap connotation points. Almost any editorial decision you could make on any subject with any audience is manipulation. Being systematic about it doesn't make it more so. Do you think you should be asked for consent before being subjected to an A/B test on an ecommerce site? Because you're definitely being manipulated in the relevant sense.
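For reference, the A/B testing being compared to here is usually nothing more exotic than deterministic bucketing. A minimal sketch, assuming users have a stable ID (the function name and experiment label are made up):

```python
# Minimal A/B bucketing sketch. Hashing (experiment, user_id) makes
# assignment deterministic: the same user always lands in the same
# variant, which is what lets you compare outcomes between groups later.
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Whether you call the downstream measurement "analytics" or a "psychology experiment" is exactly the distinction being argued over in this thread; the mechanics are the same.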
It sounds to me like the objection is not so much that FB ran an experiment as that they published a paper about it. You could argue that the publishing really is the problem, but, well, nobody is arguing that. And if they're going to do research anyway, it seems we should encourage them to share their results.
The controversy is about the framing of the issue, and that the published paper specifically describes tracking engagement based on the manipulation of users' emotional states. If it was real research from the outset, it required ethical review and informed consent, even under the guise of "anonymous news feed research which may affect what you do or don't see".
One could just look at it as zoologists studying lions (Facebook) interacting with gazelles (the population) in their natural habitat.
Branding this research as unethical will just stop it from being published, not stop it from happening, since without publication it is just an ordinary business process.
Facebook themselves understood they needed consent... which is why they added a blurb in their TOS. The only problem... it was 4 months after the "experiment".
This was far beyond simple A/B testing. The complete disregard of ethics, and the downright Facebook fanboy-ism that is going on in this thread is bewildering.
1. Describes the limits of what experimentation is ethical without consent (i.e. the relevant way in which A/B testing is different from what FB did) in a consistently applicable way.
2. Justifies those limits from more basic principles.
Perhaps you can make that argument, and I wouldn't mind being convinced (seriously, I have no horse in this race). But as it stands, the handwaving about "manipulation" and the breathless denunciation of an advertising company for being data-driven is pretty weak. You can yell "unethical" as loud as you please and brand people taking the opposite view as "fanboys", but it's not at all convincing.
1) Facebook is not an advertising company, although they are trying to be (and so far have been unsuccessful).
2) Facebook was not just manipulating adverts but your entire feed, which, up until recently, had been largely organic (what your friends posted, you saw... all of it).
3) Facebook has never discussed the possibility of "testing" on users, and the general assumption that they do does not excuse the practice.
4) A/B testing to see which color button makes people click more often is far different from displaying only emotionally charged posts/images and seeing how users react. This is reminiscent of psychological warfare tactics employed (and now largely banned) by the Vietnam-era CIA.
5) Facebook acknowledged the need for prior consent via implicit TOS agreement... but only after they had already concluded the "experiment" (actually, 4 months after).
6) If another company, say Google, manipulated your inbox without your consent or knowledge, you would likely feel strongly about it. Both Gmail and Facebook are operated by companies that can do whatever they wish -- but this does not make it right to do whatever they wish.
You seem to be accusing Facebook of having no business model. I believe virtually all of their revenue comes from ads, so it's difficult to say what they are if not an advertising company.
2) The news feed has long been curated beyond simply showing what's newest. This has been well publicized.
3) Every company with ongoing user engagement tests on its users. If they didn't, they wouldn't have the interaction with users that they desire.
4) Much like how the music-in-stores example is reminiscent of the CIA's use of music as torture in Guantanamo. Or maybe, just maybe, there is some nuance to discuss?
5) This is the most valid of your points, but it's not uncommon for legal documents to be updated for more general coverage, and that doesn't necessarily imply the activity was unethical.
6) Gmail does. It's called spam filtering.
IMO, it would have been pretty easy to ask some random users if they wanted to participate; from asking around my group of friends, we all would have opted in because we find it interesting, especially if the findings would be published.
IANAL either, but a) I'm guessing not, but b) not really the ethical question here either way (e.g. it might be legal but unethical or vice versa, or both, or neither). So I'll focus instead on the ethical question raised by the suicide: I guess I don't see how this is any different from some editor in a newsroom saying, "let's publish more gory crime stories and see what happens". Someone reads a bunch of it and commits suicide. Tragic, obviously, but not really the victim of an unethical experiment. Worse, the paper doesn't even know, since it can't monitor the response. Or: let's say FB never runs this experiment and they never find out that their current algorithm is especially depressing and all sorts of people kill themselves. What are the ethics of that?
By getting informed consent before experimenting on their users? You act like this is some outrageous bar they have to cross: it isn't.
"Ethics" aren't a universal Kantian imperative, they're contingent. It's ethical, more or less, for you to talk about about your conversation with your acquaintance, but not if you're an attorney and they're your client, unless they've put it on the record before publicly, etc.
Ethical rules usually exist for a normative purpose -- e.g., to allow people to converse freely with their lawyers. The psychological research community doesn't care about their work actually working or proving something valuable nearly as much as they care about maintaining their status. Conversely, the advertising industry cares very much about proving its capabilities to its clients. So: different ethical regimes.
It seems to me that the appropriate ethical framework to apply is that of the research context the work was conducted and presented in, not a looser one that outsiders wish to see it in. That disparity/conflict of interests is a symptom of the problem here.
I'm setting aside the questions of how this experiment even furthers facebook's business interests. My guess is, it doesn't really.
I was nodding my head and prepared to take you seriously until this sentence. Turns out you're just another HN reader angry at the academic community because you once had a lazy professor or something.
In particular you can view IRBs and the like as a form of entry-restriction by entrenched actors that tries to keep disruptive research from competing.
The researchers had IRB approval. And this whole 'scandal' is really just a media experiment to manipulate the emotions of dumb people. If these media reports were phrased like 'Facebook conducts research into affect correlations of user-generated content', which is probably more or less what the original paper actually said, I doubt anyone would care.
"Facebook said that since the study on emotions, it has implemented stricter guidelines on Data Science team research. Since at least the beginning of this year, research beyond routine product testing is reviewed by a panel drawn from a group of 50 internal experts in fields such as privacy and data security."
In fact, it is directly contradicted by the article.
Ok, this seems like it should be about as controversial as a supermarket that plays downbeat music and tests to see if sales decrease.
That's not to deny that the study was manipulative, involuntary, and barely consented to; but hey, that's Facebook in a nutshell.
The tone of this article is "these studies happened with no oversight!" but it never suggests who would oversee Facebook, except Facebook, which doesn't really seem like oversight at all.
So no, I do not think that Facebook overseeing its own researchers is so far fetched. The ethical review board would probably consist of a combination of lawyers, PR people, and people with research backgrounds. The lawyers and PR folks would not have seen this experiment as a "great opportunity for research that was previously impossible" so much as "lawsuit bait" and "PR nightmare."
The critical difference, in my mind, is that academic research takes place more or less in the public sphere. Even when it's not, an academic can always be counted on to challenge and discredit a fellow academic.
An in-house council of research practice could exist, but it would take a group of professors with reputations on the line. And then again, to my main point, Facebook's central business practices revolve around manipulating users in ways that are even _less_ concerned about the user's well-being.
I doubt that anyone outside of the authors of these articles is actually upset over this, but if there are such individuals, they are just the normal cadre of people that are upset because they like being upset and have nothing better to do. Anyone legitimately bothered by this should go join a parents group and start writing letters to TV execs about how they and their children were scarred for life by the last wardrobe malfunction they saw on live TV.
The meaning behind the music in a supermarket is negligible - you don't give any weight to songs played in a supermarket other than "I like this song" or "this song is annoying".
In a sense this whole fiasco is a good thing. The feed, for me, has now become less attractive as a "news source" for what my mates are up to. And that can only be a good thing. I dislike how much everyone (including myself) has taken to Facebook as a means of friends/family communication.
Artificially adjusting what is seen through that lens with the express intent of modifying someone's mood is horrifying by itself. It's even less acceptable when done without consent.
I'm very surprised that so many HN commenters seem unable to see, or unwilling to accept, this crucial difference.
Here's a non-shit analogy: what if Verizon only let you receive depressing phone calls?
If you want an analogy relating to socializing, how about a dating site experimenting with showing you more or less attractive people? Is that terrible and unethical?
As you say, what "oversight" is required of a corporation doing passive demographic or behavioral studies of its users/customers? Should we go nuts if we see Google Analytics or Omniture tags on any given Web site?
Suppose a gym did A/B testing specifically designed around reducing the number of active members and maximizing the number of inactive members.
One of the questions this raises for me though is if these people who don't see this as an issue feel that way because they've already submitted to the oppression that is Facebook? Is it the case that Facebook has already "broken" them, as you would a steed?
We should be sure to only show people patriotic words on July 4th so they can think patriotic thoughts and suffer none of that negative criticism stuff about the government they are so abused with normally. Maybe we should do that for all National holidays. Or maybe we should just do that all the time.
Good thing it wasn't a military branch providing funding for this. That would be scary.
Thousands of [Facebook] users received an unsettling message two years ago: They were being locked out of the social network because Facebook believed they were robots or using fake names. To get back in, the users had to prove they were real.
I vaguely remember the fuss about this at the time on HN, and was a bit disturbed to discover that it was staged. I haven't used Facebook for more than an hour or two since 2010, but for many other people it seems to be their 'home on the internet' and the notion that they would be arbitrarily locked out of their accounts under false pretenses is a troubling one.
A University professor asked students to solve an anagram. After they were done, they had to walk into the professor's office to tell him the word. In his office he'd be pretending to have a conversation with a visitor.
The words the students had to solve were either positive, negative, or neutral.
The result was that students who had solved a negative word were far quicker to interrupt the professor's conversation with the visitor than students who had a positive or neutral word.
After the students told the professor the word, he asked the students if they believed the word had an impact on their emotion. All of the students said no, but the data showed the negative words had an impact.
It's staggering to think that out of all the Facebook users, 700,000 is still only a small portion of the total available users. I wonder what Facebook's ongoing role will be as a source of anthropological data?
Just like every customer of the Wall Street Journal that's not a subscriber. All of the sophisticated advertisers for the Journal, online and in print, are running similar tests.
The key question is at what point a certain method stops being a quality-control decision and becomes a research experiment. What Facebook did was an internal quality investigation. What are they supposed to do, show posts at random? For this to qualify as an "experiment" we would need a baseline, and in reality there is no such baseline.
If it was an internal quality investigation, why was the experiment designed by Cornell researchers, and published in PNAS?
I'm not sure I see how this is all that different from running a bunch of ads with different messages and/or emotional content and seeing how they perform differentially.
You should tell your users and include it in your TOS. (not do it, then add it to your TOS after the fact).
Would food industry people be coming out in support of them the way it seems half the people here are fine with what Facebook is doing...? "Hey, it says Happy Meal on the box, and they're delivering!"
Why do so many people here think this kind of thing is okay when Facebook does it? Is it simply because they feel kinship to a tech-related company?
Yup. It's like asking in a wolf forum if having the sheep for dinner is considered ethical.