Evidence of massive-scale emotional contagion through social networks (2014) [pdf] (pnas.org)
285 points by kick on March 4, 2020 | hide | past | favorite | 72 comments

What most people don't know is that there is an actual science behind the mechanics of manipulating mass behavior using social networks. Sentiment analysis can be used with graph analytics to infer the emotional state of certain users to trigger state transitions towards more negative emotions. http://keg.cs.tsinghua.edu.cn/jietang/publications/2016-08-w...
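For context, the mechanics here are mundane: the PNAS study itself just counted positive and negative words in posts (using the LIWC lexicon). Here's a toy sketch of that kind of lexicon-based scoring -- the word lists are made up for illustration, not the real LIWC dictionary:

```python
# Toy lexicon-based sentiment scoring, in the spirit of the
# word-counting (LIWC-style) approach used by studies like this one.
# These word lists are illustrative only, not the actual LIWC lexicon.
POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE = {"sad", "terrible", "hate", "awful", "depressed"}

def sentiment_score(post: str) -> float:
    """Return (positive - negative) word counts, normalized by length."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("I love this wonderful day"))    # positive score
print(sentiment_score("What a terrible, awful mess"))  # negative score
```

Aggregating scores like this over a user's posts (or their friends' posts) is the measurement layer; the graph analytics sit on top of it.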

Is there a relation between sad/depressed emotional state and buying behavior?

There is definitely a correlation between pain perception and buying behavior.

This has been documented in a study done by Carnegie Mellon: https://www.cmu.edu/homepage/practical/2007/winter/spending-... (spoiler alert: cash 'hurts more' to spend -- credit cards 'hurt much less')

I'm sure that there is a correlation between emotion and the ability to perceive / numb pain -- so there is probably (at least) an indirect correlation between emotion and buying behavior.

I wonder if that will hold as people use cash less often... if you grow up only using credit cards and have no association with paper money having value, will this study no longer be true? Most adults at the time the study was done probably were still using cash for most purchases for most of their life, making that association stronger.

An excellent question. I only have anecdotal evidence to indicate that it continues to hold up. (My family and I continue to use cash for many purchases. I'm a man in my 40's)

My anecdotal evidence is the opposite... I have basically used credit cards for 95% of purchases the last 5-10 years, and cash feels like 'extra' money to me now... since it doesn't show up in my tracking systems, I don't really see where it is going... I don't have to ever look at the spend again after I buy something, so it induces less guilt... I feel like I want to get rid of the cash in my wallet, because it is 'wasted' in my wallet.

I think that problem is a non-linear one. I'm pretty sure that it's not possible to predict the behavioral outcomes from the reactions of emotional states. The science only goes so far as to provide a proof that it's possible to polarize sentiment by identifying the community bridge in a network. That bridge would be the user that has optimal influence in both groups.
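For anyone curious what "identifying the community bridge" looks like in practice, betweenness centrality is the usual measure: the bridge is the node that lies on the most shortest paths between the two groups. Here's a brute-force sketch on a made-up two-community graph (production tools would use Brandes' algorithm or a library like NetworkX):

```python
from collections import deque
from itertools import combinations

# Two made-up communities {a, b, c} and {d, e, f}, joined by "bridge".
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "bridge"],
    "bridge": ["c", "d"],
    "d": ["bridge", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}

def shortest_paths(src, dst):
    """Enumerate all shortest paths src -> dst via BFS parent sets."""
    dist, parents, q = {src: 0}, {src: []}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                parents[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                parents[v].append(u)

    def walk(node):
        if node == src:
            yield [src]
        for p in parents.get(node, []):
            for path in walk(p):
                yield path + [node]

    return list(walk(dst)) if dst in dist else []

# Betweenness: credit each interior node of every shortest path.
score = {n: 0.0 for n in graph}
for s, t in combinations(graph, 2):
    paths = shortest_paths(s, t)
    for path in paths:
        for node in path[1:-1]:
            score[node] += 1 / len(paths)

print(max(score, key=score.get))  # the "bridge" user
```

The node with the highest score is the one with "optimal influence in both groups" in the sense described above.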

Try convincing my PhD adviser of this, who for whatever reason doesn't believe it. lol.

Here's everything you need with Java and Neo4j. You might need to swap out Google Cloud NLP, since it's prohibitively expensive for research. https://www.kennybastani.com/2019/09/sentiment-analysis-on-t...

After getting burned in 2014 and then again in 2016, I'm sure Facebook has learned its lesson: never let social science researchers anywhere near the company. They don't actually do anything (effect size in this study is 0.1%, even though, as I personally checked in 2014, the filtering it used was the absolute crudest, strongest possible), but they can generate an infinite amount of bad press.

You can bet that there are a lot of social science researchers within Facebook (I know at least two), but they've learned to never ever publish the results of that research and only use it to increase revenue.

This a million times. All the public hysteria does is punish publication. The work goes on.

Hysteria around product experiments has had a big effect in education. As we all know, most online software companies conduct A/B tests to improve outcomes. IMO, education companies should do A/B testing in order to optimize student outcomes. It is unethical to not run tests. But this can be described as "psychological manipulation" and "turning children into guinea pigs" -- and people freak out. After all, there is no informed consent in A/B tests. Since educational software optimization does not actually lead to greater profit (the edu market is weird), the optimization work stops and no science is published. I find this disturbing.


In my opinion, A/B testing is a flawed way to optimize a curriculum, even before touching the ethical concerns.

The mechanism is far too reductive and would only increase current problems in education. We know that a mentor for every child isn't logistically feasible but conducting these tests would just make sure we continue to work past the problem.

> It is unethical to not run tests

Certainly not.

I'd really appreciate knowing your specific objections to A/B testing in digital education.

For instance, I have an adaptive quiz system that delivers interventions when students struggle. An intervention might be, for instance, a video for teaching fraction addition. In that case, why not compare our video with a Khan academy video teaching the same topic, and see the results?

It feels unethical to NOT use A/B tests to improve digital education. Why only apply that technology to the improvement of advertising and YouTube recommendations, vs curricular recommendations?

For math there is certainly some leeway to use such tests for demonstration purposes, and I am not opposed to using digital tools to enhance learning. I even think it is mandatory to provide the best support, and that it can vastly improve classical learning.

My reservations are primarily that it mechanizes learning. Experience tells me that pedagogical measures will quickly be reduced to just testing pupils, which defeats the goal of tests becoming more individualized to the learning requirements of the kids in question.

Another problem is that the pupils might be smart. So I wouldn't be surprised if the tests show that the best form of test is the one they simply like the most or the one that can be done with the least amount of effort, because they take being tested for granted. And while teaching math, you might also teach them to apply A/B tests to other people, including teachers of course.

> In that case, why not compare our video with a Khan academy video teaching the same topic, and see the results?

I wouldn't have any issues here. Most of the criticism is moot as long as pupils get access to both resources, there is an analysis after the A/B test, and the students know about it. Would you still think it ethical, if you made it a competition between the tested groups?

Perhaps I am too cynical if I believe such measures will always be used as an excuse to delay employing more teachers, and that the results of the tests will be treated as gospel, even if the failure rate is known and most classes don't provide random samples. But aside from that, I don't think this form of test is a "technology", and there is a lot of criticism to be leveled against YouTube and the ad industry too, for that matter.

I love this thoughtful response!

>Would you still think it ethical, if you made it a competition between the tested groups?

I do -- but it's all about framing, transparency and perceived intent.

We are planning a participatory design session between teachers, product people and edu researchers -- to design the a/b tests and generate generalizable scientific knowledge.

Might you have any further advice for us?

The problem is that when you run a/b tests, the changed case is often worse. If the thing being tested is a website, that might not be so bad. If the thing being tested is your education, that might be horrible.

No need to a/b test when there are tens of thousands of universities in the world all doing slightly different things. The cream rises and then is copied.

For example, FSU recently went up in rankings from a ~#50 public university to a #18 public university year over year over the past decade. Whatever changes they put into place are now being studied by other universities and if there’s anything novel that they’re doing, it will likely become more widespread as other universities seeking higher rankings implement new things. Or, it could be possible that the university studied the success of other universities and implemented what worked best and stopped what wasn’t. You run an a/b test when there aren’t already tens of thousands of other colleges and universities like yourself out there that you can study and learn from.

Of course, rankings don't equal educational quality, but it's one example. Another example that may correlate better with educational quality (although part of the equation is the constitution of the cohort) is bar exam pass rates for law school grads, listed by law school.

> No need to a/b test when there are tens of thousands of universities in the world all doing slightly different things. The cream rises and then is copied.

Why would it be any different in webtech or advertising? Why wouldn't everyone just copy the best vs gathering empirical data?

Are universities not gathering empirical data when they study the results of others’ efforts at scale? Study being the key word in that scenario, it’s more than just copying what they see, but understanding why it’s effective, and if it would be for them.

And it may be quicker and more reliable with less risk than A/B testing in their setting. The most highly trafficked web tech companies can gather statistically significant feedback data about a change in moments. A/B testing a curriculum or educational practice could take a semester or more, and then the risks are higher — it would be a two-sided hypothesis test where the B group could not only do better, but could also do much worse, and it would reflect poorly on the institution if so. People are paying tens of thousands of dollars per seat per year for the best education they can get. Seeing how many items are in the shopping cart right on the “checkout” button doesn’t really reflect poorly on Amazon, but it could help Amazon increase conversion rates by .03%, which could mean millions of dollars at their scale, and they could also complete the test fairly quickly given their volume (in a day or so?) at a 99.7% confidence interval.
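A quick sketch of the arithmetic behind such a test, with entirely made-up conversion numbers: a two-proportion z-test on a 0.03-percentage-point lift over a 3% baseline. It illustrates just how much volume a tiny lift actually demands:

```python
from math import sqrt, erf

# Back-of-the-envelope two-proportion z-test for an A/B experiment.
# All numbers below are made up for illustration.
def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for a conversion-rate diff."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 3.00% vs 3.03% conversion, one million users per arm:
z, p = ab_z_test(30_000, 1_000_000, 30_300, 1_000_000)
print(round(z, 2), round(p, 3))  # z is roughly 1.24, p roughly 0.21
```

Even at a million users per arm, this lift doesn't reach conventional significance -- which is exactly why only the very highest-traffic sites can resolve such small effects quickly.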

With that being said, I'm sure that smaller-scale or faster-turnaround A/B tests are being run at universities.

That's true only if the system in question has already had a lot of optimization. By the time I got to Google about 90% of experiments failed to improve metrics. When they started A/B testing nearly every experiment yielded large improvements.

I would bet that education is much more like Google c. 2000 rather than Google c. 2010. A general rule of thumb is that in the absence of extensive training and repeated failures, human intuition is terrible, and that any system based on people's opinions without hard data has a lot of room for optimization.

So, how do you know it is worse? Isn't that the whole point of A/B testing?

It also leads people to leave the platform.

Ha! I can safely say that companies do not, in general, learn their lesson until it becomes an existential threat.

Companies are often filled with humans. It’s a human thing.

It's disappointing that so many comments are deriding the results of this study. I think this study is actually underestimating the effect of social contagion through social media, not just for emotional states but political views as well.

While I think this type of social contagion happens through all forms of media, social media is highly individualist so I think the effect is slightly different. I don't think it's a coincidence that the explosion of identity politics over the past ten years lines up with rise of social media. This individualism is especially obvious with post-modernist leftists - just think of how many Twitter bios start off with listing someone's gender identity, mental illnesses, ethnicity, etc. Leftist social media is more of a subculture than an actual political movement, with no discussion or debate before ideas are treated as wrongthink.

Periodically I've read articles / research that helps remind me that our physical bodies are the product of evolution that only recently began enjoying the fruits of a technological society.

This article helped me recognize that our emotional states are also the product of that same evolutionary process. Only a few decades ago it was unheard of that people would be regularly conversing with others all over the globe from the comfort of their homes (or even more recently on their phones). Yet humans have been living on this planet and evolving to deal with the (mostly harsh) realities of life on earth for hundreds of thousands of years.

Blaming this on social networks is overblown, in my view. What about TV, books, music, the arts, ideologies, religions? Yes, they play a part as massive as social networks do, and were used (and studied) as media for contagion.

Then there remains the question of what distinguishes an emotion that helps people from one that harms them. Social networks act in this way as a decentralised marketing tool, and one totally vulnerable to non-organic manipulation.

As a complete layman in this stuff I'm always surprised by the parallels between social networks today and the rise of mass media, radio and later TV / cinema, in the 20s through 50s. Especially the period of the 20s and 30s looks pretty similar. Also how some political movements / parties managed the new technology better than others. And what huge impact propaganda on a massive scale had.

And then mass media became the new normal. Maybe we are in this early phase with regards to social media.

Does anybody know if any scientific comparison has been done on that issue? Might be an important and useful thing!

The difference is that social networks can see in real time how users react and what works.

There’s always been a feedback loop and new innovations to tighten it.

~100 years ago Macy’s started a radio show that enabled them to quickly put out new messaging to their (mostly female) audience and see how it affected shopping behaviour pretty much “instantly” when compared to their previous paradigm of running newspaper ads. They also saved $100,000/yr doing so — the equivalent of ~$1M/year in today’s currency — demonstrating how radio had significantly tightened their feedback loop when compared to newspaper.

An artist can accomplish the same at a live show. With the artist seeing the response of a live audience (their focus group, in a way) while trying different things in real-time. This is how quite a few musical trends started.

TV, books, music, arts have all had cash flows that are responsive to decisions.

The person you're replying to still has a point, one that you probably reinforced with the example of Macy's: the time constants of a given feedback loop can and do greatly influence the necessary input (the money they saved) to get to a similar or better state (what you mention as tightening the feedback loop).

The current world brings it down to a really small reaction time (i.e. the system can change dynamically within 1s, give or take an order of magnitude). I believe there is still room for even faster feedback loops (say, when a google-glass-like device reads in real-time a person's biometrics and feeds that to a system that optimizes what the user is interacting with), and I would not be surprised if Facebook already had considered that since they deal with VR devices (Oculus).

Right on :)

But I guess what I’m getting at is, this isn’t anything new. The media has always been an effective social engineering tool, whether or not the social engineers at the time were aware of it. And the last few instances where faster feedback loops via more effective media helped one company get ahead didn’t spark the end of the world. Not unless newspapers were the beginning of the end. And then radio. And then video. And then TV. And then netflix and social media. And then the screens that we’ll put on behind our eyelids so that we can watch our favorite shows without opening our eyes, or whatever facebook is cooking up.

I guess it is just that the speed and the ability to instantly go viral make social networks more prone to bad press.

I think this is pretty normal human behavior. If someone says to you "My father died yesterday," your response wouldn't be "Wow! The weather is great today!"

Contagion implies the user has no control once "infected", whereas in reality we all have the ability to 1) not be uncontrollably reactive, 2) close the tab, and 3) think a little critically about the information being presented. I'm not saying the research is wrong, but I am saying that people can be a bit lazy about their mind.

> Contagion implies the user has no control once "infected"

No, it doesn't. Regular physical contagions leave the victim agency in treatments, lifestyle habits that improve/worsen prognosis, etc., much as you describe for the proposed emotional contagion. And are often worse in practice than they would be in a world of ideal people because people make suboptimal use of their agency, just as you describe is the case for the proposed emotional contagion.

So, I'd say the metaphor is reasonable.


> I am saying that people can be a bit lazy about their mind.

Blaming the victims will not solve the problem. This is a systemic problem, and only broad changes in the system can fix it.

I think that it helps if you realize that

1) People do not have access to the education that they need. We need to fix this.

2) People have many, many problems to solve, which is why it is difficult for them to focus on this one.

In isolation the problem seems simple enough. But taking all the factors into account, we cannot expect everybody to solve the problem by themselves. It does not seem very efficient either: spending millions of person-hours on this is a waste when it can be solved in a better way.

Past coverage (though using the abstract at the time):


I think this is one of the most important, period-defining papers of the past ten years, and it seems like a shame how few people have read it.

The effect size is barely a blip on the radar...

This study is barely worth the paper it's printed on. Everyone in my lab was left scratching their heads that PNAS would publish such a weak result.

If you think this is important or period-defining, you haven't been paying attention to the field.

They created consent where there wasn't any, and attempted to sway the emotions of hundreds of thousands of people at once for the negative.

It's not important for the effect, it's important and period-defining because A: it was the first publicly-admitted instance of Facebook actually using what they had to do harm (admitted to in a way that implied they didn't realize they had done harm), and B: because it laid out how the rest of the decade would look in regard to the behavior of tech companies. What you think about the study itself isn't the interesting part; statistics/psychology aren't actually science. It's the actions and motive found within of the researchers and company that are of interest.

>They created consent where there wasn't any

Well... maybe. The effect size is so weak, it barely did anything.

To give you an idea: people's hunger-levels probably influenced them about 10x more than whatever Facebook was doing.

I'm sorry, you're giving this study credit where none is due. Your conclusions are not supported by the evidence.

Maybe people have read it but don't think renaming "ideas" and "empathy" to "emotional contagion" is particularly groundbreaking or constructive.

The value of the paper is that it's where Facebook first started getting creative: a massive (N = 689,003!), barely-consented-to (successful) attempt to sway the emotional states of hundreds of thousands of people.

That's both groundbreaking and constructive, if constructive in a way that harms people who don't have Facebook stock.

It set the stage for so much of what's happening today. Acting like it's just another boring paper is baffling.

> barely-consented-to (successful) attempt to sway the emotional states of hundreds of thousands of people

If you read the newspaper, watch TV, or read articles online, I have some bad news for you.

You miss a crucial part - none of those is individually tailored to you. Social media is the first form of media that can deliver an individually customized payload.

Are you saying the paper itself is interesting, or that what the paper says about Facebook is interesting?

The actual result (people who see positive or negative messages are more likely to post the same) seems so obvious and uninteresting as not to be worth mentioning at all, though the fact that Facebook was willing to run the experiment and publish the result is perhaps more notable.

The context at the time was that some authors were making the argument that seeing positive posts on FB increased negative affect.

This study, which was much larger, showed that this was not the case.

Oh my. That would indeed be interesting, if only the method could even have a hope to show such a thing.

From the abstract:

> When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.

Well and good, when people see more positive posts, they post more positively. But then:

> This work also suggests that, [...] the observation of others’ positive experiences constitutes a positive experience for people.

What a leap into the dark that is!

This study cannot show anything about affect, only about what was measured, which is what people posted on Facebook.

Entirely agreed. But the previous study was a self-reported observational analysis, with approximately 300 (German) participants.

I'm not defending the study in general, but the context of the research is somewhat important.

The subject is interesting, but there's barely any effect. Look at the d scores!

>a massive (N=689,00!)

Your critique reads like someone who's never done any research or serious statistical work. Big Ns don't automatically mean the results are robust.

I'm sorry, this isn't the groundbreaking paper you want it to be.

Condescending remark aside, it's not important because of the effect or lack thereof; it's important because of how it was done.

Please consider that this stuff is very nuanced, and that while you clearly care about these issues a lot (as do I!), your analysis betrays a lack of understanding.

Any way you want to cut it, these results are not to be trusted. You cannot even be sure the conclusions are true. The effect sizes are within measurement error.

You can gmail me if you want a more receptive ear for your thoughts on this paper and related subject matter.

It has an impact if singular events (of which there are millions each day) can severely affect social cohesion and stability. Regardless of what you call it, the inevitability of emotionally triggering events each of which has the potential to reach everyone in real time and impact them will lead to extreme social instability.

Similarly, too many unregulated synaptic connections or overexcited neurons in the brain can cause seizures.

How does it bode for society if, for example, every single instance of racial animosity between people is broadcast for the whole world to see? Even if the rate of occurrence of these sorts of interactions is incredibly small, there will be many per day, driving entire segments of our society apart and causing even more such negative interactions in the future which are just fuel for the fire.

Crime is down but reporting is up, and therefore fear is too.

Just leave people a venue to vent. Trying to purify every platform made the problem much, much worse in my opinion.

I found the Editorial Expression of Concern on the last page of the post more interesting than the paper itself (the results might have been surprising in 2014, but in 2020 it reflects the mainstream view).

/msg Dave SCP-[REDACTED] is out again, can someone notify MTF-^$ to get on a disinformation campaign and cleanup?

This is a crappy psychology study where none of the researchers is a psychologist. PNAS is a big name in medical science (?), but an unimportant journal in psychology.

No serious psychology journal is going to publish this kind of stuff. Why? The first problem is measurement. What is the reliability of using positive/negative words to determine positive/negative emotional state? 70%? 80%?

The effect size of this study is 0.001, which would be way way way smaller than the measurement error. LOL. What a laughable "study".

Completely agree with you that the study is trash, however ...

>but an unimportant journal in psychology.

This is untrue. It's comically untrue.

I was doing my PhD in cogsci when this came out and everybody was surprised that PNAS would publish such a bad study, given that we were more used to seeing things like this: https://www.pnas.org/content/pnas/106/5/1672.full.pdf and this https://www.pnas.org/content/pnas/112/2/619.full.pdf.

PNAS has an _excellent_ reputation in psychology, especially in the psychophysics and EEG/MEG crowd.

PNAS does this sort of thing often enough, it doesn't really deserve its positive reputation (if any): https://statmodeling.stat.columbia.edu/?s=pnas

Adam Kramer has a PhD in psychology, for what it's worth.

Thanks. Good to know.

Cognitive science is its own field; it is somewhat related to psychology, but definitely not a subfield of psychology.

In the psychological sciences, it seems like you're damned if you do and damned if you don't.

When a phenomenon with a large effect size is demonstrated with tens or hundreds of participants, everybody crows about how the sample size should have been larger.

On the other hand, when a small effect size requires millions of observations to detect, now the criticism is that the effect is too small to matter.

At any rate, this effect is small -- but it is reliable. The only crappy part about this study is the ethical boundaries it crossed. In most other ways, this study was kind of amazing...

In statistics, there is an optimal sample size for avoiding type 1 and 2 errors.


There was a tiny difference detected with millions of observations. But it has no scientific meaning.
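The arithmetic behind that point: under the usual normal approximation, the sample size needed per group to detect a standardized effect d (at alpha = 0.05, 80% power) grows like 1/d^2, so a d near 0.001 only becomes "detectable" with millions of observations. A rough sketch:

```python
# Rough power calculation: sample size per group needed to detect a
# standardized effect size d with a two-sample test, using the
# normal approximation:  n_per_group ~ 2 * ((z_alpha/2 + z_beta) / d)^2
def n_per_group(d, z_alpha=1.96, z_beta=0.84):  # alpha = 0.05, power = 0.80
    return 2 * ((z_alpha + z_beta) / d) ** 2

print(f"{n_per_group(0.5):,.0f}")    # a "medium" effect: ~63 per group
print(f"{n_per_group(0.001):,.0f}")  # d = 0.001: ~15.7 million per group
```

Detectable and meaningful are different questions, though: the test tells you the difference is real, not that it matters.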

Humans can be easily influenced! Who knew!!! Next they’ll tell us not to lick metal poles when it’s cold out

