Hacker News
What our research says about teen well-being and Instagram (fb.com)
144 points by DLay 63 days ago | 173 comments

Yes, it's not uncommon for addicts to have a generally positive disposition towards their addiction [0]. It would actually be difficult to avoid this on an engineered platform. I'm sure FB is doing their best, though.

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3163475/

> I'm sure FB is doing their best, though.

Is it, though? Their entire business model is like alcohol: the higher the proof, the more money they earn. Their algorithms purposefully make people swing widely in their emotions to drive engagement in order to generate money. Blaming suicidal thoughts on the addicts isn't the solution; lowering the alcohol proof is. You can have freedom of speech without an algorithm purposefully pitting one person against another to drive engagement. Exposing teens to alcohol couldn't be a worse idea.

That was meant as sarcasm ;)

I understand the problem, but then shouldn't video games (for instance) also be controlled/regulated? They also provide high emotional swings, coupled with near-realistic virtual worlds that can mimic experiences far more vividly and could lead to serious addiction.

Even violent games like Call of Duty have never made players want to physically harm themselves or others. Video game developers set out to create an enjoyable experience for the players, because players are the users and, ultimately, the paying customers. People are addicted to video games because they enjoy them (positive emotion).

In contrast, Facebook's customers are not the users. Their customers are advertisers, and users are simply the product. They optimize their algorithms to satisfy those customers, and advertisers are addicted to Facebook because of how much user data it holds, which helps them sell better.

Since users aren't the customers, Facebook doesn't care whether users experience positive or negative emotions. In fact, a combination of both has been shown to increase engagement, and Facebook set out to optimize that combination with its algorithms.

So if video games did cause people to periodically experience negative emotions such as suicidal thoughts, and their makers were purposefully doing so to maximize profits from their customers, then yes, they should be regulated. But that isn't the case with video games.

Regardless of who the customer or product is, both are built on the capitalistic idea that more people = more money.

Hence everything is pretty much built to attract and keep people attracted for the most possible time.

Saying social media is different because of its ad-based sales model is a bit misleading. They're both essentially interested in addicting users to their product, regardless of users' behavior afterwards.

I am not arguing that both are not trying to attract people to become addicted. I am arguing why one causes users to have suicidal thoughts and the motivation behind it. In your terms, because it brings in more money.

Not sure why this is being downvoted.

What percentage of alcoholics would say alcohol makes them feel better?

There's a reason people become addicts. They must feel there's something to be gained or kept.

    This week I'm going to get my shit together, these will be my last cans of RedBull and my last all-nighter.

    New bottle... last one too. Fuck it, I really need a good night's rest. Everything will be much better after finally having had a chance to mention this insane workload during tomorrow's perf meeting.

    Alright, I've got this. At 35% leverage I'll have it all back in no time.

I've worked and spoken with many addicts. Even after they have been clean for years, many never seem able to shake that false idea of there being a light at the end of the tunnel / a benefit.

Just imagine there are people, some of whom are probably reading this too, whose job it is to artificially recreate or induce this dark and destructive part that exists in each and every one of us.

> Just imagine there are people, some of whom are probably reading this too, whose job it is to artificially recreate or induce this dark and destructive part that exists in each and every one of us.

I can imagine that there are people addicted to doing this engineering, maybe believing that with a tweak or two more the negative effects will disappear. I'm reminded of 'Crazy Eddie' from 'The Mote In God's Eye', believing there's a way to engineer our way out of every problem.

They feel they can "keep" themselves, while also remarking that their self is deteriorating. It's not so much a love for the thing but a fear of the unknown - and in alcohol's case it's also very very much a chemical addiction that bypasses logic.

> false idea of there being light at the end of the tunnel

What I meant to say was: light at the end of the tunnel if only they keep behaving or using as they are.

Can no longer edit my comment unfortunately.

> What percentage of alcoholics would say alcohol makes them feel better.

What percentage of people who drink alcohol are alcoholics?

It's an interesting comparison because the vast majority of people have no trouble moderating their consumption of alcohol and I'd argue that the same goes for social media. Yet we're so focused on the subset of social media users who are addicts that many here are convinced it's not possible to use social media without being an addict. I suspect if similar rhetoric was leveled against alcohol we'd see a strong backlash from people who enjoy alcohol responsibly, or from people who simply prefer freedom to choose their own actions.

>What percentage of people who drink alcohol are alcoholics?

In the US, roughly 7%, or 14 million people, depending on how you define alcoholism. Globally, about 3 million people die every year of consequences related to drinking; put differently, 5% of global deaths are attributable to alcohol consumption. [1]
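A quick back-of-envelope check that these figures hang together. The baselines here (roughly 200 million US adults and roughly 60 million global deaths per year) are my own assumed round numbers, not from the comment:

```python
# Consistency check of the cited alcoholism figures against
# assumed baselines (not from the comment itself).
us_adults = 200_000_000          # assumed US adult population
alcoholics_us = 14_000_000       # figure cited above
print(f"{alcoholics_us / us_adults:.0%}")  # 7%, matching the cited rate

global_deaths = 60_000_000       # assumed annual deaths worldwide
alcohol_deaths = 3_000_000       # figure cited above
print(f"{alcohol_deaths / global_deaths:.0%}")  # 5%, matching the cited share
```

Both ratios land on the cited percentages, so the absolute and relative numbers are at least mutually consistent.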

That is about as many people as COVID seems to have killed last year. You're actually right in drawing a comparison between social media and drinking, but I think you're wrong in concluding that we should take social media less seriously; rather, we should take drinking much more seriously. Certainly very few people would argue we should take less seriously a pandemic that costs hundreds of billions per year and kills millions, yet that is what alcoholism does as well.

As a society we are extremely negligent of threats that cause enormous social harm in the aggregate simply because they don't harm everyone they come into contact with.


> In the US about ~7% or 14 million people depending on how you define alcoholism. Globally about 3 million people die every year of consequences related to drinking, or put differently, 5% of global deaths are attributable to alcohol consumption.[1].

That's the statistics for Alcohol Use Disorder (or AUD) not alcoholism. AUD is definitely not what people think when they speak of alcoholism. That would be alcohol addiction.

In typical American fashion, AUD has an extremely broad definition. For example, if twice during the past year you went out with friends longer than you expected, had more beer than you were planning to, ended up hungover the day after, and thought "I should really stop drinking", you have mild AUD.

> AUD is definitely not what people think when they speak of alcoholism.

That's true, but what people think of when they think of alcoholism is pretty silly and it's not how any addiction specialists think of addiction today. There's an enormous gulf between what experts think about addiction and what laypeople picture when they think of an "alcoholic."

> For example, if twice during the past year you went out with friends longer than you expected, had more beer than you were planning to, ended up hungover the day after, and thought "I should really stop drinking", you have mild AUD.

And why not? If you took a drug that resulted in adverse consequences and your response to that is that it's no big deal and you'll probably do it again, then that suggests something that's at least mildly problematic.


WastingMyTime89 is likely referring to the DSM-5 definition of AUD here. It is rather broad, though a bit more narrow than they've outlined. Worth a look. I certainly have friends who would qualify.

You can also access the DSM-5 text directly through the Internet Archive, though it is sometimes unavailable depending on usage: https://archive.org/details/diagnosticstatis0005unse/page/49...

That's the definition used to calculate the percentage cited by the person I was replying to. Don't get me wrong, I don't doubt it's a useful diagnostic tool in the hands of a professional. It's just that the low threshold for mild symptoms makes it somewhat useless from a statistical point of view, in my opinion.

As far as I know, self-reporting is considered a poor tool for studying addiction anyway, because addicts are the least likely to agree to answer questions about their consumption, or tend to lie about it.

> As a society we are extremely negligent of threats that cause enormous social harm in the aggregate simply because they don't harm everyone they come into contact with.

Probably. But see also the harm of overly stringent regulation of said threats, with the obvious example being prohibition of alcohol in the US.

So as a society we take reasonable measures to prevent the worst outcomes. In the case of alcohol we have ratcheted up the consequences of drunk driving, limited the drinking age to 21+, etc.

By those definitions, at least 30% of people in the UK would be alcoholics. I hear that in Japan one is expected to drink, and life expectancy there is higher.

The difference is that (apart from original-formula Four Loko) there is no mystery about what goes into alcoholic beverages.

Whereas the specifics of what Facebook chooses to show you are either opaque or incalculable for people who visit their website.

This includes catastrophic mistakes in the algorithm, like the massive groups pushing QAnon early on.

When you buy Bud Light, there isn't some constantly shifting, emotionally persuasive, unlicensed content available to give a site visitor just the right drip just before their average session length is wrapping up.

I personally think alcohol is often destructive, like rot, and a sloppy tool to play with one's consciousness.

However, it is at least consistent and well understood. Facebook is neither, and can much more easily be linked with havoc in pursuit of ad spend.

>apart from original formula Four Loko

Death in a can.

Drinking two of those bad boys in an hour was like getting into a time machine. You'd wake up in an unfamiliar place with a trail of depravity and destruction in your wake like Hansel and Gretel on a bender.

I don't really think knowing those things would have much if any effect on users, or on the platform's impact on society, or on the ability to regulate these platforms.

What are you going to do, tell them they can't show you more content similar to things you or your friends or people with similar interests spend a lot of time on?

> This includes catastrophic mistakes in the algorithm, like the massive groups pushing QAnon early on.

Was this really a mistake? Or just something that caused a lot of outrage and discussion and views and clicks so therefore got amplified and spread by design?

> This includes catastrophic mistakes in the algorithm, like the massive groups pushing QAnon early on.

Is it “catastrophic mistakes” or is the algorithm working as intended to maximize engagement?

From Facebook’s point of view I’m not sure it’s a mistake - these Q people have “engaged” plenty and the conspiracy is still a wonderful generator of engagement, whether from the Q people’s side pushing endless conspiracies, from people trying to reason with them, or from those merely laughing at them.

The ratio of alcoholics to alcohol consumers is probably significantly higher for teens than for adults.

But this article is talking about teenagers. We don’t let teens drink in the US, because they aren’t mature enough yet.

Some would argue that the introduction of alcohol as a social norm earlier in life would reduce the amount of irresponsible use when those still-immature young Americans turn 21.

Edit: I feel the need to clarify that I am also an American and a parent of young kids, and I do not intend to allow them free access to social media anytime soon (oldest is 11).

Problem drinkers become extra problematic when their drinking harms others. That goes beyond just hurting yourself.

What percentage of FB/Insta users are truly addicts and harm themselves? Maybe moderately low, who knows. But, how likely is it that their social peacocking impacts others even if they themselves are not being hurt by it? The body image thing for teen girls is happening when other girls post their pictures. That’s what makes this whole thing very difficult to measure.

I’d venture most alcoholics will admit that alcohol makes them feel physiologically better, but know that it’s a poison that they wish to rid themselves of. So the positive disposition towards the intoxicant is gallows humor — a coping mechanism to rationalize a situation they can’t see a way out from despite it ostensibly being within their control.

On the other hand, a lot of smokers or drug users tell people to never touch that stuff, even while they're still using it themselves. At some point, the pain of quitting just becomes a barrier, despite you knowing that continuing is bad. And this description actually matches social media quite well, too.

A big thing with addiction is denial. A lot of alcoholics won't admit to being alcoholics.

Actually, with alcoholics, it is usually one of the greatest obstacles to overcome: getting them to be honest about the fact that they do drink.

Ambivalence is a key part of most mental health issues. If there were not positive aspects, people wouldn't do it. (The exception would be things that are very much not a choice, like schizophrenia or Alzheimer's).

I know "choice" is a bit of a dirty word when talking about mental illness, we can talk about free will vs predestination ... everyone is just a bunch of atoms bouncing around without "real" free will. But to the extent that people do have anything like a soul or consciousness, a lot of bad choices are still effectively choices, and for a bad choice to be made it needs to have an upside.

> I'm sure FB is doing their best, though.

You're more sure than most people.

Indeed, this seems like exactly the sort of situation where you need to ask the participants what effect it has on other people in order to get an honest answer.

Or study facts; we can monitor sleep vs instagram use, for example. We can figure out who has body image issues and compare that to who uses IG or other social media.

In the early 90s there was an advertisement on Dutch TV. It was about a company's toilet cleaning product, and during the ad some "scientists" said (literally translated) "We from Toilet Duck recommend Toilet Duck" (https://nl.wikipedia.org/wiki/Wij_van_Wc-eend_adviseren_Wc-e...). The slogan ended up being synonymous with organizations recommending their own products.

This article reminds me of that. Facebook will never admit their services are harmful, as that would be in direct conflict with their business. And because of that I would treat any claim they make about how good their services are as nothing more than just bullshit.

Exactly - corporations pretty much by definition will never do anything that challenges their core revenue generators.

I mean, just look at how Facebook fought tooth and nail against Apple's anti-tracking changes. Facebook still knows an absolute boatload about you, they can still show you extremely targeted ads on Facebook, but they treated a removal of their ability to track you on pretty much every site or app you visit (and not even a removal - just a question on whether you, the consumer, want this) as the end of the world. Their ads claiming how they were just sticking up for small business owners were nauseating and gross.

Since it's clear Facebook would never release a study, even if true, that said "Gosh, you know, we looked over this unbiased study that says that Instagram is extremely damaging to children and teens, so we're just going to prohibit access to Instagram for those under 18." then any communication Facebook releases on the subject can be summarily ignored.

Haha, first thing that pops into my mind! "Wij van WC-eend..." That's all you have to say in the Netherlands to label this behavior :) [0] (Today this would qualify as a meme).

I didn't remember the super German accent; it probably has to do with the public image of a scientist looking like Einstein :) (btw the commercial is from 1989, and in 2007 it won best slogan ever with 46% of the votes [1]!)

[0]: https://www.youtube.com/watch?v=YsvHeLUOoxs

[1]: https://nl.wikipedia.org/wiki/Wij_van_Wc-eend_adviseren_Wc-e...

If those were quacky scientists in toilet duck lab coats it would make it quite endearing.

Facebook should also show scientists in company uniforms, with fluffy hair and fake moustaches to announce their research while holding test tubes with smoke swirling out of them. It would give their research the kind of grandeur it deserves: We the scientists from Facebook recommend: Instagram.

Facebook as Toilet Duck - the analogy is nearly perfect.

Or Tobacco companies who knew their product was actively harmful to their customers.

Reading this, all I see is the tech equivalent of "Cigarette companies find no link between smoking cigarettes and lung cancer."

Well put. Thank you for smoking came to mind as well.

"Our products increase suicidal ideation for at least 200,000 teenagers in the United States, which we believe is not bad enough to worry about considering the revenue we will generate putting advertising in front of this demographic."

"The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced."

This is research 101. Of course kids that "Instagram hears from" are going to have positive reviews of the app. This is convenience sampling from your own users. Additionally, they interviewed kids 13+; do they really have the ability to identify the source of the issues they are experiencing?

That may be true, but the WSJ headline was Facebook ignored its own research that Instagram is harmful. This blog post is intended to contest what the research said. Whether the research was good or not is a different issue.

If 1% of the users of your app start having suicidal thoughts as a direct result of using your app, then guess what: your app is going to be responsible for a significant number of suicides. No matter how FB spins this, it’s just disgusting. Instagram is literally killing thousands of kids a year.

FTA: “What the data shows: When we take a step back and look at the full data set, about 1% of the entire group of teens who took the survey said they had suicidal thoughts that they felt started on Instagram.² Of course, even one person who feels this started on Instagram is one too many. That is why we have invested so heavily in support, resources and interventions for people using our services. In addition, some of the same research cited by the Journal in the slide above shows that 38% of teenage girls who said they struggled with suicidal thoughts and self harm said Instagram made these issues better for them, and 49% said it has no impact.”
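One number the quote leaves implicit: among the teen girls who said they struggled with suicidal thoughts and self-harm, 38% said Instagram made things better and 49% said it had no impact, which leaves a remainder the quote doesn't state. This assumes the three categories (better / no impact / worse) are exhaustive, which the excerpt does not say explicitly:

```python
# Implied "made it worse" share in the FB quote above, assuming
# better / no impact / worse are the only three answer categories.
better = 0.38      # "made these issues better for them"
no_impact = 0.49   # "said it has no impact"
worse = 1 - better - no_impact
print(f"{worse:.0%}")  # 13%
```

So under that assumption, roughly 13% of the girls already struggling with these issues said Instagram made them worse, a figure Facebook's framing leaves for the reader to compute.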

> If 1% of the users of your app start having suicidal thoughts as a direct result of using your app, then guess what: your app is going to be responsible for a significant number of suicides.

I'm not defending facebook, but even if starting to have suicidal thoughts does cause some suicides (and I'm not suggesting it doesn't), I don't agree it's quite that simple.

Internet browsers are not responsible for suicidal thoughts beginning with internet browsing, nor email clients for thoughts beginning with email. Nor with instant messaging, SMS, or telephone. For example if one of the causes was due to bullying on the platform, it's possible other platforms (or the real world) would have been used for bullying instead.

I wouldn't be surprised if 1% of children said they started having suicidal thoughts as a direct result of school, so you could also say the education system is literally killing thousands of kids a year (not that schools can't be improved in that regard or in instances a school may be culpable due to negligence or incompetence of staff).

It's important to try to really understand and be clear about what the issues are, and not be overwhelmed by emotions. If parents get scared and ban it without understanding the issues, their child can become isolated and marginalized, or it can encourage their child to be dishonest with them. It's not necessarily the best approach to take.

>Internet browsers are not responsible for suicidal thoughts beginning with internet browsing, nor email clients for thoughts beginning with email

Except browsers aren't "curating" content specifically designed to keep you obsessed and checking it constantly, which commonly includes things that will make users feel self-conscious (which teens will OF COURSE be susceptible to).

I'm just stating that those things are not necessarily responsible for suicidal thoughts. Not trying to claim they are exactly the same as instagram. Good point that all my examples are somewhat more passive though, I could add some others:

Making or curating content people like to consume does not necessarily make something responsible for those thoughts. Television stations, Hacker News, video games, magazines, books, movies, whatever.

I'm not saying instagram can't be criticized or improved, I'm saying it's not as simple as just assigning complete blame of all suicides to the place or thing where suicidal thoughts first arose. It's simplistic and I don't see how it is helpful or will lead to actual improvements.

Have you seen the efforts in chrome around the home page with ‘Discover’?

There is too much value in the curation business for an entity like google to ignore.

That's nowhere near the level of algorithmically-induced addictivity though. The goal may very well be the same, however.

Yeah, 1% actually seems shockingly high to me. Are there other products or entertainments in which 1 in 100 users have suicidal thoughts they attribute to the product? I was surprised to see Facebook bragging that it's "only" 1%.

At least drug companies give you a heads up that some products may give you suicidal thoughts. Seems shitty that Facebook hid this until they were caught.

The drug companies have legal requirements to publicize side effects.

Yea, definitely feels like there needs to be regulation around social media. Fb almost feels like the new Philip Morris. They may not give you cancer, but they have a product that affects the mental health of millions.

Not just mental health.

- Multi-level marketing ponzi schemes

- Anti-vaccination

- Far-right extremism

- Islamic extremism

- Totalitarian government political propaganda

- Censorship of anti-totalitarian movements leading to incarceration and torture.

All of these had an exponential-growth amplification platform; all of these would otherwise have had organic growth at best.

Teen suicide is just one negative aspect that happened to be studied in more detail probably due to CYA liability more than anything else.

Stop following idiots, I don't see any of that.

As long as /you/ are all right, there's no problem then. Done. Thanks.

edit: I'm sorry the tone of that obviously rubbed me the wrong way. Let me try again.

If you have personally been able to fully avoid direct contact with all the problems listed, that doesn't make those problems cease to exist for others. Such problems may also affect you quite profoundly in an indirect fashion, via people you love, or society at large, or a number of other vectors.

I'd suggest that these problems are extremely likely to exist and that more research is needed on their full effect, and indeed on the trade-offs involved in any suggested solutions. Your proffered solution of "don't follow idiots" probably isn't particularly useful to anyone. The majority of people would believe, just like you do, that they don't follow idiots.

If we can't trust people not to follow idiots, then we must find the idiots and stop them being publishers.

The point is that it's not the social media company's decisions that cause the problem; humans cause the problem.

So in the end it's an absence of government overseeing the health of its people, because it decided to be involved in as small a way as possible.

The internet does the same thing as any previous medium, just better and faster, and governments aren't keeping pace, or going global.

Facebook isn't broken, government is.

If you want Facebook to be the new government, that's an interesting question.

So you are saying that facebook has exactly zero moral obligation to do the right thing?

They make instagram, they know instagram can cause suicidal thoughts in teens, the government hasn't regulated it yet, so facebook is in the clear?

That sounds like a zuckerberg-y level denial and blame shifting. I don't think they should operate like philip morris, and I'd guess that some of their employees would agree or we wouldn't have even heard about this information.

Reality causes suicidal thoughts in teens though. Does Facebook cause more or less than anyone else? Fashion magazines, their peers, global warming, poverty, etc etc.

If there's evidence someone found out that doing X would cause more suicides, and they did it anyway, that's bad.

It's not that governments have to regulate Facebook specifically, it's they have to act more rapidly, and globally, and that's two things they're terrible at.

Facebook isn't the problem; it's a symptom of a rapidly, massively interconnected world that hasn't existed before.

It’s disgusting how they reduce human life to a relative percentage of 1%. In absolute terms, there are 1 billion users, meaning 10 million users have suicidal thoughts because of the app. That’s a large city's population. NYC has 8.5 million people; imagine every single person in NYC being suicidal. That’s a frightening thought.
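Spelling out the arithmetic in this comment (the 1 billion user count is the commenter's round figure, not an official number):

```python
# Back-of-envelope: absolute scale implied by the 1% survey figure,
# using the commenter's round user count.
users = 1_000_000_000          # commenter's round figure for Instagram users
rate = 0.01                    # 1% reporting suicidal thoughts that started on the app
affected = int(users * rate)
print(f"{affected:,}")         # 10,000,000

nyc = 8_500_000                # approximate NYC population
print(affected > nyc)          # True: more people than all of NYC
```

The point stands even if the user count is off by a factor of two in either direction; a small percentage of a platform this size is still a metropolis-scale number of people.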

Not because of the app though, while using the app. The app isn't causing those thoughts, society is.

Society is promoting being attractive/thin/wealthy/always happy/being envious

> The app isn't causing those thoughts, society is.

You are right in that the app isn't making it, but it is promoting it with their algorithm to maximize engagement.

Their algorithm to maximize engagement automatically promotes what's popular. What's popular is a side effect of society and how human brains work.

Every supermarket magazine is doing the same thing, just they're not as good at it.

> Their algorithm to maximize engagement automatically promotes what's popular.

No, their algorithm is maximizing conflict. There's no algorithm needed for what's popular. A 5 year old can write that algorithm. Go read up on research on what drives engagement on social media. Popular != maximum engagement.

Conflict is popular; there are more comments when people disagree. It's a very unintelligent way to measure popularity: number of comments, number of views.

That's why the rest of the Facebook posts you see are "what was your favourite meal as a child?" etc with thousands of comments. The ones people think are phishing scams.

Ask them the same thing about school, or living with their family members. Unfortunately environment can be extremely stressful to teens and it usually gets dismissed by adults because "they have been through it, it was not that bad" except that it was and still is a source of trauma for many people.

It doesn't seem high to me. Almost 30% of people 18-24 claim to have seriously considered suicide as a result of the pandemic, for example.


If your bar for a consumer product (intended for leisure!) is to be better than an actual global catastrophe, there's nothing I could add to make it clearer that something is seriously wrong.

>> Are there other products or entertainments

> the pandemic

You might be the first person on the planet who's found the pandemic to be entertaining (or a product...)

“If 1% of the people who go to college start having suicidal thoughts as a direct result of going to college, then guess what: higher education is going to be responsible for a significant number of suicides. No matter how educators spins this, it’s just disgusting. College is literally killing thousands of kids a year.”

And that would be absolutely true as well. If colleges were/are so horrible that 1% of people contemplate suicide, it is quite obvious that something needs to change.

I’m quite sure that this is already the case, and the numbers are higher than 1%. Modern society has a lot of pitfalls which we are blaming on FB/Insta instead.

These are additional suicides attributed to FB/Insta, right? We can and should blame FB/Insta for those.

The feelings are invoked by the other humans posting content on FB/Insta. Again, this is a societal ill, and nobody wants to acknowledge it.

A similar example is when newspapers report on a high-profile suicide, and you have multiple copycat suicides occur afterwards. You wanna ban news reporting too?

When Berkeley surveyed their graduate students, 47% were depressed and 10% had considered suicide... https://www.insidehighered.com/news/2015/04/22/berkeley-stud...

Now I agree with your conclusion that "it is quite obvious that something needs to change" with our graduate education system.

How many lives has education saved? Thanks to education we have developed science and technology that save and extend billions of lives.

And if you were to drop everyone out of college today, how would the suicide rate evolve over the long term? I'm pretty sure it would be worse.

You should finish your analogy and tell us what, exactly, is the long-term benefit of Instagram/FB that makes up for its short-term negative effects?

Life is responsible for all suicides, I still don't see your point.

College has a societal value, Instagram doesn't.

Pretty sure talking with your friends is societal value, and people do that on Instagram

People did that long before Instagram, and will do it long after, both online and off.

> Of course, even one person who feels this started on Instagram is one too many.

If we only accept risk-free things into the world, and claim that everything made is fully responsible for all that it begets, we are going to have to shut down, remove, bury a huge amount of the world.

Risk is a part of nature, and one of the best parts of being human. Good things have their shadows. We all become entailed in the world, in the things we care about, and our hopes for these things rarely fully materialize as we'd wished. That is distressing, but it's part of maturing, growing, aging, and learning to cope, being able to cope, with the vast great reality out there: that's life, that's living.

Social media is interesting, in that it exposes us to each other in a much more raw, lower-context form. We don't have socialization cues, we don't really witness the repercussions on other people of the actions we take. This is a very wild, very difficult field to navigate.

Even though social media is, I agree, dangerous, I reject the core premise. One person is not too many. That is absolutism. It doesn't try to see or understand, it doesn't accept the reality of the situation, doesn't consider potential: it simply dictates terms of discussion to the world, based off a maxim you personally hold. I cannot accept your starting position.

You are replying to a direct quote from FBs article. I don’t personally think that 1 person is too many, but clearly 1% of their user base is a lot of people.

I’m curious how they define suicidal thoughts. Almost all of us think about suicide. What matters (and what’ll get one sent to a psychiatric hospital) is thoughts that are actively planning a suicide like time and place. Are these users engaged in suicidal ideation? Or are they an active danger to themselves? I would assume not, because product research on actively suicidal people would likely exceed the scope of human subject review board parameters.

> Almost all of us think about suicide.

Really? Almost all?

Like, I've never ever considered committing suicide, but sure, I have wondered what it would be like, just like I have wondered what it would be like to fly to the moon or to do heroin - does that count as a "suicidal thought"? I'd hope not.

> your app is going to be responsible for a significant number of suicides. No matter how FB spins this, it’s just disgusting.

It being “disgusting” may well be true, but doesn’t follow logically, and therefore needs more argumentation than what you’ve given here. It’s entirely possible for something that causes deaths to be worth it from a utilitarian perspective. Cars (and industrial development in general) are the cliché examples.

Disclaimer: I worked for Facebook in the past, but my opinions are my own and haven’t changed much since before I worked there.

> It being “disgusting” may well be true, but doesn’t follow logically, and therefore needs more argumentation than what you’ve given here

Why doesn't it follow logically? Some teens are depressed, Instagram makes them even more depressed, they kill themselves, and it's disgusting that Facebook knew about it and hid it. Seems pretty logical to me.

I think their argument is that if IG makes the rest of their users much more happy, that happiness outweighs the cost of making a small portion of users suicidal.

First of all, it's not clear that this is the case -- that Instagram makes some teenagers so much happier that it outweighs the downside.

Second, any product with the capacity to make some population of its users die or harm themselves (1% suicidal maybe leads to 0.01% of people actually attempting suicide; not counting other detrimental effects like eating disorders or depression) should be regulated for safety. Cf. cars, guns, tobacco, alcohol, food, medicine, etc. Such a product should not be under the control of one private company.

I.e. it should not be Facebook's choice to create winners and losers for those who use Instagram. We need regulation from some independent entity that is not motivated (or as motivated) by Facebook's profit. The best candidate would probably be some independent government agency.

You are correct, but none of what you said contradicts what I said. I just said that the original comment was not a logically sound argument; I didn’t make any counter-argument of my own, nor did I even say that his conclusion was necessarily untrue.

Indeed it’s not at all clear without further study whether Instagram increases people’s happiness enough to be worth the downsides.

I quibble with your notion that this argument needs to be logically sound in the first place. Logic is about consequence; this is a question of values.

It is my value judgment that Facebook is disgusting, and my arbitrary desideratum for this judgment is that Facebook (allegedly) spun a narrative to cover their suicide machine.

You'd be right in saying that this line of thought is decidedly not sound; nor could it ever have been sound in the first place, because it's irrational and arbitrary. Does its irrationality and arbitrariness make it less valuable than logically sound arguments? If so, why?

Alcohol makes some depressed people more depressed, and they kill themselves. Is it disgusting that liquor companies know this yet keep manufacturing the product?

And this is why we don't let minors drink. Adults get to do lots of potentially dangerous things; as a society (generally speaking, the specific details differ from country to country, but the general principle seems universal) we let adults decide such things for themselves.

Yes, it is, but we as a society attempt to limit its impact on children. The fact that there are alcohol and cigarette companies that do shitty things should not give facebook a free pass to do shitty things as well.

It’s simple math: if even 1% of the 1% who have suicidal thoughts as a result of Instagram are successful in a suicide attempt, then based on the size of their user base that would mean thousands of people dying from suicides at least partly caused by Instagram.

1% may not be a bad estimate. This article [1] suggests that 1/31 individuals with suicidal ideation will attempt suicide.

[1] https://pubmed.ncbi.nlm.nih.gov/33351435/
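To sketch the "simple math" from the comment above: the sketch below uses the 1-in-31 attempt rate from the cited article, but the total user count, teen share, and fatality-per-attempt figures are all assumptions I've plugged in for illustration, not numbers from the thread or from Facebook.

```python
# Back-of-the-envelope estimate of deaths implied by the ~1% ideation figure.
# Only attempt_rate comes from the article cited above; every other input
# is an assumed ballpark, chosen just to show the order of magnitude.
users = 1_000_000_000      # assumed total Instagram users
teen_share = 0.10          # assumed fraction of users who are teens
ideation_rate = 0.01       # ~1% reported suicidal thoughts "started on Instagram"
attempt_rate = 1 / 31      # attempts per person with ideation (cited article)
fatality_rate = 0.05       # assumed deaths per attempt

deaths = users * teen_share * ideation_rate * attempt_rate * fatality_rate
print(f"{deaths:,.0f} implied deaths per cohort")  # on the order of thousands
```

Under these assumptions the estimate lands in the low thousands, consistent with the comment's claim, though every input except the attempt rate could easily be off by a factor of a few in either direction.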

If 1% of drivers died because of driving, then yes, it would be a big problem.

Nobody is talking about 1% of Instagram users dying, and furthermore, nobody ever said it’s not a big problem.

What you're saying is not true.

If your product causes 1% of people to have suicidal thoughts, and causes 2% of people not to have suicidal thoughts who otherwise would have, your product has prevented suicides overall.

In this case, the data is closer to the latter. Given the sample size of the survey and the background prevalence of suicide, the result should not be considered significant either way.

First, the article directly contradicts you on the number of people sampled:

“ ² According to the raw unweighted data, we surveyed 1,296 teens in the US and 1,309 teens in the UK and asked them if they experienced a range of feelings or experiences in the past month. Of those teens that self-reported struggling with suicidal thoughts, the survey then asked if the feeling started on Instagram. A very small percentage of the total number of teens surveyed (~1%) said they had these feelings and felt they started on Instagram.”
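As a quick sanity check on the sample sizes quoted above: with roughly 2,600 respondents and a ~1% point estimate, a normal-approximation confidence interval (my choice of method, not anything from the survey itself) is fairly tight around 1%, which bears on the "not significant" claim.

```python
import math

# 95% confidence interval (normal approximation) for the ~1% figure,
# using the US + UK sample sizes quoted from Facebook's footnote above.
n = 1296 + 1309          # teens surveyed in the US and UK
p = 0.01                 # ~1% said suicidal thoughts they felt started on Instagram
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {lo:.2%} to {hi:.2%}")  # roughly 0.6% to 1.4%
```

So sampling error alone doesn't make the ~1% figure vanish; the harder question, which no interval can settle, is whether self-report attributes causation correctly.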

Second, the CDC directly refutes you on the teenage suicide rate: https://www.cdc.gov/nchs/data/nvsr/nvsr69/NVSR-69-11-508.pdf

Suicides amongst adolescents have increased over 50% since 2005, which coincides perfectly with the rise of social media.

But teens weren't allowed to use Facebook until 2007, and Instagram wasn't popular until 2010, so by your own data the increase in adolescent suicide couldn't be caused by FB.

> Instagram is literally killing thousands of kids a year.

Even if I accepted your premise (which I don't). I don't think those words mean what you think they mean.

This is the same argument that people who have bullied others into committing suicide have used to try and avoid responsibility. The courts disagree with you.

My issue was more with the phrase "literally killing". Unless I'm grossly mistaken about the feature set of Instagram, I'm pretty sure the best you could charitably get is "directly responsible for", and I'd still need a lot of convincing to get past "could be an indirect or contributing factor".

About said court decisions: without knowing which court cases you mean specifically, I'm left assuming they were able to prove malicious intent, whereas I don't think you get that from software. Can you?

I'm a fan of your delivery; "Unless I'm grossly mistaken about the feature set of Instagram" gave me a hearty chuckle.

Anyway, if you could spare me a moment of charity and suppose that Instagram _is_ directly responsible for teen suicides, why shouldn't we consider that equivalent to "literally killing"? I'm unsure if there's real value to be preserved in splitting these hairs.

If I'm "directly responsible" for fatally hitting someone with a car, then I've "literally killed" that person.

What if someone pushes them in front of your car - did you still kill them? Is it still fair to say you "literally killed" them? What if they decided to kill themselves and intentionally jumped in front of your car? Same thing? Both sound more misleading than true. Technically true, granted, but I don't think the bar for reasonable should be "technically true, but misleading".

What about Planters nuts? Is it reasonable to say they are "literally killing" children/teens/people as well?

Well, I explicitly asked for your charity. If you're unwilling to grant me that, then we're already at an impasse.

Yes, what if what if what if, a hundred times over. Imagine whatever mitigations you deem appropriate to obviate this corporation. We obviously disagree. I'm not trying to argue you into my position, but can you understand where my ire comes from?

It's not that I've done the cost-benefit analyses and rationally found Facebook deficient. I'm saying that in this specific circumstance, with these specific actors, I find Facebook directly responsible for every suicide that any teenager wishes to blame on Instagram.

I value Instagram and their sniveling PR bureaucrats far less than I value the absence of any harm they've wrought from their profit motive.

I'm not stating that you hold the contrary opinion; I believe that there is a set of morals you can hold that allows you to not blame Facebook, while not precluding the goodness of your character. In fact, I'm willing to give you the charity of assuming that you hold some such set of morals; if I didn't think you were engaging in good faith, I wouldn't engage you at all.

I believe I understand what you're saying. I don't agree with your values, such as I understand them. So I'll ask again: will you grant me the charity of assuming, just this once and solely for the purpose of my argument, that Facebook is directly responsible, without mitigation? And, under this assumption, can you relate to me what value you find in delineating between "directly responsible" and "literally killing"?

If you can't grant me this charity, then that's entirely fine. I'll still believe we're engaging in good faith, with good humor; and, in fact, I'll enjoy our engagement all the same. But if you can't grant me this charity, may I ask: why?

I did grant your hypothetical though? If Facebook and Instagram software were the primary and root cause of the distress that drove someone to commit suicide, I still think it's misleading to say they "literally killed" them. The value in the distinction is exactly in avoiding that misleading framing.

Reusing my Planters nuts example: if Planters is "literally killing" people with their peanuts, the solutions will fit around that premise: how do we stop Planters from killing people? Contrast that with "peanuts are responsible for the death of people": how can we stop people from dying from peanuts?

If you still reject that argument because I'm not conceding the assumption of responsibility enough, then I apologize for misunderstanding your argument/question. And I can answer why I can't grant the charity: it's because it is so divorced from reality that I lack the context to speak intelligently. The amount my understanding of reality would have to change for a software application to literally kill somebody leaves me unable to reason about it logically.

No, I actually rescind what I said about you not granting me charity. With this further elaboration, I can better read your first response and see that you did indeed engage with me as I asked. I talked past you. Thank you for your engagement!

Just to be clear, I'm not saying Instagram the app is directly responsible. I'm saying Instagram the organization is directly responsible for killing people, by leveraging their app into a toxic cesspool so as to realize the most gains from it. From my perspective, Facebook chose to ratchet the dial on Instagram's (the app) addictive and harmful powers to a full 11. Teenagers seem to be paying the price. This, to me, is unacceptable. I'm not blaming the app for doing what it does, I'm blaming the people that make and deploy it.

Going to your Planters nuts example: if their salty nuts presented a significant health risk on the merits of what they are, and Planters reasonably disclosed that to customers, then I'm fine with it.

If Planters intentionally makes their product more harmful to get greater profits, but still reasonably discloses the risk of their product, then I'm angry but I understand.

If Planters chooses to create an extremely addictive product, hides its research acknowledging fatal risk, deceives Congress by way of technical half-truths, dedicates its platform to spreading propaganda about how good it is, and puts out PR spin to placate the seething masses... then all I can say is that they deserve Kaczynski.

We can’t start living in hamster balls just because something might be dangerous. Many of us grew up with a Facebook or a FB-like and survived just fine because our parents understood that it’s the parent’s job to parent, not Facebook’s. When are we going to start pointing the finger where it belongs - the deficient parenting?

Yeah kids, listen to your dang parents, not that hip social media.

"Why would anyone do drugs when they can just mow a lawn"

Parents these days are children of the hippie generation. "Hey kids, why mow a lawn when you can tear it up and replant it with weed? You'll make a lot more money that way..."

Are you so sure about your timeline? Don't most folks place hippies in the 60's and early 70's?

That's my mother's generation. People my age are having grandchildren - I'm in my mid 40's. We dressed up as hippies from time to time, though. A lot of today's parents were born in the mid 80's to 2000, approximately (a few stragglers on either side). I really don't think this is the hippie generation.

I'll add that smoking pot is much, much more widespread than anything "hippie", even if someone takes on part of hippiedom as their personality. In fact, a lot of folks look to older hippies and laugh because they were wrong about it never being legal again.

Meh, I'm in my 40s and have a young kid, and my parents were hippie-adjacent. Some people breed faster; generations are a horrible measure of time.

The point really is that most folks having children are younger: You aren't, and you still aren't the hippie generation.

Generations are a horrible measure of time, to an extent, but they do bind people together with events and movements at times and I'm pretty sure this is one of those. Plus, I'm not sure what would take their place that is actually better.

I'm concerned that WSJ cherry picked only the worst bits of information from the slides and did not give a balanced discussion or quote the more positive results. They intentionally only included the most negative results and intentionally avoided publishing the root slides so as not to show their hand that there were also mitigating stats as well. In my view this kind of cherry picking of information from the source material, then hiding the source material in order to spin the result in a specific way, is a breach of journalistic integrity.

1. Totally disagree: these slides should be treated like a hostile witness. This is one of the largest companies in the world on the back of advertiser and investor faith. Facebook's public image is paramount to its continued success, and data like this should come from a third party. The points that are good for FB should be treated with heavy skepticism, and the points that are bad for FB should be treated as the absolute floor of the negative impact.

2. Facebook themselves are cherry picking:

> WSJ said: “Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram"

> What the data shows: When we take a step back and look at the full data set, about 1% of the entire group of teens who took the survey said they had suicidal thoughts that they felt started on Instagram.

Suicide is the second leading cause of death for ages 13-19. 6-13% of teens saying Instagram made them suicidal is a huge deal. Trying to detract from that by showcasing that 99% of teens don't want to kill themselves is repulsive and dishonest.

It's absolutely fair that this type of data is better off coming from a third party, peer reviewed, and with a larger sample size. However, the slides here were never intended as any sort of definitive review, or especially not for PR consumption, they're kind of like an internal memo looked over by a few internal teams. So their data is not cherry picked - it shows the good and the bad uncovered about Instagram's effects, intended for internal discussion. Where I feel WSJ was dishonest is that it did not show all of the data uncovered in these slides, but only the most negative points, in order to get clicks and provide a nice loud framing for their article.

Any large-scale phenomenon like Instagram or Reddit or pizza or video games has both upsides and downsides. So I don't think it's fair that just because we've uncovered links from pizza to heart disease, from excessive video gaming to depression/anxiety/social withdrawal/suicide, or from Instagram to suicide, we should ban any of these things. They all have extremely positive effects as well. I love pizza. If we only harp on the negative aspects of every large-scale phenomenon, we're going to think everything is terrible, ban everything, then live in a box.

That is the sensationalist vibe which WSJ intentionally curated for their article by intentionally hiding any positive information found in Facebook's memos and selectively exposing only the bits which make people go "WOW NO" and make for nice clickbait. If they had included both positive and negative results contained in Facebook's slides, then I don't believe there would be a problem. I personally did not expect this kind of spin from WSJ and am disappointed.

That might be true, but in the end the study itself is a load of B. I mean, you asked 40 kids if they like Instagram and many said yes, so this trillion-dollar company is justified in their evil deeds?

Great discussion here already. I want to bring attention to a bit of additional research relevant here which I found to be a good read and was conveniently not mentioned by Facebook: a 10 year, 500 participant longitudinal study on social media use and suicide in teens. Published Feb 2021.

link: https://link.springer.com/article/10.1007%2Fs10964-020-01389...

Even if we take FB's word for it that the WSJ's interpretation of the data from FB's study was not generous, I would like them to respond to the critical external research too. FB's study had 40 participants who were directly asked their opinion about the causality between using Instagram and their well-being (no chance that a harmful addiction would be coloring their perception on that!). So, let's look at the longitudinal data also, FB.

Hardly a great discussion. It's the expected hateful and one-sided comments lacking any nuance.

At least the actual scientists in this field acknowledge social media's positive impact on teenagers while still talking about the problems. Here, it's all binary. Big Tech bad and evil. Period.


The post says the slide was missing context, in that it covered only girls who already had body issues.

Numerous studies show a lot of girls have body image issues. Further, in this post they don’t even indicate what the “correct” percentage might be.

I don’t really get their point. What am I missing?

It's a case of "correlation does not equal causation"

Neither the survey nor the slides suggested that Instagram was the root cause of the issues, but it does appear to be one of the triggers for people with these issues. Obviously, Instagram isn't the only media that aggravates those issues, as print media, television, movies, reality television, and any other source of imagery would be expected to induce similar problems in those who are susceptible. This problem existed in vulnerable populations long before social media was a thing.

Normally HN is very quick to point out when sources are conflating correlation and causation, but these slides were immediately assumed to indicate causality when in reality they only showed correlation.

I'm honestly of the opinion that the Internet in its present form is much harder on teens than what came before, and Instagram is only one of the myriad of places with potentially toxic cesspools in them. And not the worst.

For our teen, it was Tumblr that was the first and the worst -- content and conversations on there went dark really quickly, and without our knowledge. Instagram is actually where things were redirected after it became clear a blanket social media ban wasn't going to work. At least on there the focus has (for her) so far mostly been on visual art and self-expression and fandoms and it's a slightly more moderated platform.

I grew up with BBSs, MUDs, MOOs, IRC, etc. and figured I'd be set as a parent for moderating use of this stuff. But nope, it's been probably the worst struggle for us in parenting so far.

I think social media is fine, but it should not take precedence over physically meeting and interacting with people. Not to mention legal issues: it's been argued that your presence on the web can be used as a means to serve you (hasn't been truly tested yet, but it has been done and upheld before).

The weird thing I've noticed since growing up with the internet is that the internet used to be a destination (and still is), but it has transformed into a place where you interact in the present, with livestreaming, Discord, and whatever else along those lines. It's causing people to believe these forms of interaction can legitimately replace physical ones, when that is not the case at all.

Just in case anyone else is wondering: recent studies have said that up to 80% of nineteen-year-old women have body image issues. So maybe it's not 1/3 of all teenage girls having their body image issues worsened by Instagram, but it's safe to say it's still a lot.

“Teens say they like using instagram” doesn’t make it not toxic.

If you don't believe the surveyed teens, then why should we have been concerned by the original reporting in WSJ which claimed (misleadingly) that teens said they didn't like using Instagram?

If 99% of teens self reported that IG helped them cope with body image issues, would you still dismiss them as not knowing what's best for themselves? Why is 60-70% so different?

"More Doctors smoke Camels than any other cigarette."


This is a hilarious comparison because it really feels that way. Why would you as a company need to defend yourself against accusations like this if they aren't true? Why not sue WSJ for libel? It's not like Facebook doesn't have the cash to do so. Oh wait, that would mean they'd have to prove damages and have the spotlight shone on them further. So they don't do it.

Responding to the allegations is almost worse than remaining silent. It shows they are aware it's a problem and the government stepping in is the only way it'll stop.

> Why would you as a company need to defend yourself of accusations like this if they aren't true?

I'm wildly against Kafka traps like this. However, after reading the rest of your comment, I fully agree with the spirit of your post. I feel that this rhetorical question is superfluous and weakens the strength of your following rhetoric.

I _only_ mention this _because_ I like your rhetoric (in fact, I plan to steal much of it); my apologies if it comes across as tone policing, which is the opposite of my intention.

I'm used to most of HN downvoting my stuff or arguing with me even though it generates a major discussion in the thread. It's almost worse than reddit but on a more anal nitpicking "how dare you condemn WSJ!" version of it.

It's like Big Tobacco telling us about studies they have done on the damage caused by smoking.

They would NEVER say that their research shows Instagram is bad for teens.

Exactly. If you are going to ask a company to review its own products, you could save yourself some time and write the glowing review yourself. Even if it's a lie, they'll still say "our product good".

If they don't, that spokesperson is fired and replaced by someone who will say the thing correctly, after redacting the earlier statement.

> ... more teenage girls who said they struggled with that issue also said that Instagram made those difficult times better rather than worse

It's not good enough for more than 50% to feel better rather than worse. There are hundreds of millions of teenage girls on Instagram; even if Instagram made things worse rather than better for only a tiny minority in difficult times, that's still an awful lot of people.

Try that again with "cars". Clear positive to most people's lives, very very bad for some.

Your end point may or may not be right, but the logic here pretty clearly doesn't hold

Cars have advantages that a clear majority of drivers, wannabe drivers, and economists would tell you about: driving to work, for pleasure, shopping, freedom, etc. Motoring deaths per US inhabitant per year are about 0.01%.

It's very hard to claim that Instagram has anything close to the same advantages as motoring.
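For what it's worth, the "0.01%" figure above roughly checks out; the sketch below uses assumed ballpark inputs (approximate annual US road deaths and population), not numbers from the thread.

```python
# Rough check of the motoring-death rate quoted above, using assumed
# ballpark figures for annual US road deaths and population.
road_deaths_per_year = 38_000    # assumed, approximate
us_population = 330_000_000      # assumed, approximate

rate = road_deaths_per_year / us_population
print(f"{rate:.3%} of inhabitants per year")  # about 0.012%
```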

I gave FB the benefit of the doubt and read through all the discussion without being told up front which asserted facts were incorrect, namely that 11 out of 12 items helped teenage girls. Why not just say up front what those things were instead of forcing me to read their interpretation of them for a dozen paragraphs?

And then to say, almost gleefully, that they are releasing the "full slide" to tell the "full story" which WSJ didn't. How about releasing the full slide deck? Did I miss that?

Honestly, it feels a lot like propaganda to me, as if I'm only getting the part of the story that Facebook wants me to see. I know WSJ did that too, but let's at least be honest that both sides are selling something. And I'm not worried about my two daughters reading the WSJ.

Edit: read it again; I don't see a link to the full deck that contains the one slide in question. Why not release it in full?

According to a company that makes substantial profit off of a controversial product, said product is safe and effective. Science has spoken, folks, there is nothing harmful with young women trying to compete in an online beauty pageant for the entertainment of creepy strangers.

How do you fight it though? Take child beauty pageants, do those still go on? To most sensible people we know it’s disgusting. Yet, as a society, we are unable to develop a consensus sensibility on social media self aggrandizement.

I think we’re still at the tip of the iceberg. With over a billion people, there’s nothing stopping FB from using machine learning to identify people who literally look like you: ‘here’s a prettier version of you’. Then we’re in real Black Mirror territory. Perhaps they would even introduce slight variations if it triggers too much of an uncanny-valley effect.

How? We could invite the Taliban to govern the USA. Degeneracy in your home town? Start the beatings! Gambling, porn, and inebriety? Haram! They do seem to be able to manage challenging political situations much better than the US in Afghanistan. Think about it :)

My takeaway is that there probably isn't much signal in this particular dataset.

What we want to know as a society is the treatment effect of IG or FB usage. This doesn't really tell us much; self-reported impressions matter to FB because they're how their users see them, but outside of that it's pretty meh.

People who blame "algorithms" are completely missing the point.

Facebook/IG make some people feel bad, because they open the website and see people having fun, people out partying, people looking more attractive, and they feel "worse". You can't blame Facebook for that! Those people will exist regardless, and the "social competition" will happen regardless, simply through other formats. And by blaming Facebook, you're really blaming these "happy" people for making [you/others] feel worse by comparison. You will never be better than everyone alive at everything, and comparing yourself to others will always be pathological, leading to either a superiority or inferiority complex.

The point people miss: it is simply impossible to have a website where a billion people post pictures of themselves that doesn't "make people feel bad". You can argue the human brain isn't meant for that, but it's clearly non-pathological for the overwhelming majority of people, except those with preexisting low self-esteem and anxieties. People with inferiority complexes "externalise" their feelings onto Instagram when the core problem is their inability to deal with the awareness of other people's lives. Seeing a YouTube video of someone backstage at a fancy private concert would have the exact same impact.

Look at any social media websites without these "evil" algorithms: Mastodon, Gab, Parler. All equally toxic, though for very different reasons. First, you can't moderate at 1-billion-users-scale without moderation algorithms. Second, specifically, "discovery/recommendation algorithms" don't create toxicity; the toxicity is caused by flawed, sometimes-pathological humans, and amplified not by algorithms, but by the social dynamics among these flawed humans. Fixing that requires more "manipulative algorithms", not less!

The Facebook problem is clearly not, as millions believe, that Facebook creates toxicity from the top-down, and makes people feel bad. The core (and only) problem is other people. i.e.: their moderation needs to improve. Algorithms will inevitably recommend eating-disorder content based on common interests, the solution is to ban such content. Simple.

Same with the heavily edited IG modelling photos. Some people like that look, some people hate it. They should require disclosure when a photo is heavily edited, but the overwhelming majority of people will never feel insecure from that, maybe pity at most. Your own psychological makeup determines how you react, not any Facebook policy, and any such policy categorically cannot solve the core problem, which (I say this with respect and sympathy) is that a large amount of people have emotional wounds and need help.

People are throwing the baby out with the bathwater, believing that mass-scale social media is fundamentally nefarious. It only is for the same reason that large corporations become less innovative, or that any large social group becomes less cohesive: people's emotional problems get amplified the more complex a social group/organization is. Entropy sets in, the social dynamics become too complex, and conflict increases. The core problem, which everyone misses, is that this cannot be solved by the organization, at best mitigated. The root cause is in individuals. Society is blaming Facebook for its own defects, at great cost to its worst-off members.

So education and parents, especially the parents, are to blame, I guess.

>Facebook/IG make some people feel bad, because they open the website and see people having fun, people out partying, people looking more attractive, and they feel "worse".

Indeed, the Internet's standards are higher when it comes to basically anything, not only looks, but even things like programming and such.

I wonder how many problems Instagram creates and how many it just tunnels. Bullying has always been a problem and kids can be truly cruel; if Instagram is the platform kids use to talk and brag, it simply becomes the (literal) messenger.

That's not to defend FB/IG or to say that this study does not have glaring conflicts (in fact, it's shocking how bad it is despite the polish it surely got), but squashing IG might not make all of that go away.

Current controversy aside, can Instagram please be less like Facebook in its quest to dominate everything under the sun, give people back their chronological timeline (I can maybe understand this with Facebook, because they have to put ads in the feed), and also not require people to log in to view public photos?

While I don't doubt Instagram can derail a young girl's emotional reward system and screw up her life long term, I believe IG also derails young boys' sense of their position in the world and makes them feel miserable.

The corporate world and the media freak out on hearing even a hint that men can have problems too.

Surprising they decided to include those bar charts, because don't they suggest that social media is not only bad for the body image of 22% of teen girls, but even worse for loneliness, sadness, social comparison, and FOMO? For some items, more than 50% of respondents said "worse." That, paired with the title "But, we make body issues worse for 1 in 3 teen girls" (which comes off as a bit sardonic in text), reads as if FB is pointing to some slight positive sentiments as counterbalancing the negatives.

edit: ah, I might be misreading this; apparently it's a measure of, e.g., among people who felt sadness to begin with, whether IG made it more/less/no different.

Facebook/Instagram launched in different geographies at different times. Is there any study relating FB's launch/popularity to suicide/attempt rates in a given geography?

They will probably never say that their products are harming teens. Some factors are not presented in their data visualization.

Really reassuring to see every party to the dispute displaying so much integrity and good faith

'What our research says about gambler well-being and slot machines'

from your friends at Bally.

How about the fact that Facebook is doing psychological research on people with no oversight?

This ... is ... evil. Doubly so when children are involved.

This is the core problem with the social networks that promote "engagement". They are carrying out psychological research with no oversight whatsoever.

See: Stanford Prison Experiment for the kind of problems this creates.

They're not well. I have millennial friends still making bizarre posts; as someone not on social media, I only get wind of a post here and there.

The captions, the poses. Just doesn't seem normal.

Facebook having to write a whole blog post on trying to convince us that Facebook and Instagram are not harmful to the mental wellbeing of teens is proof that they are immensely harmful.

Delete Facebook.

Case closed.

I quit FB after they admitted to intentionally manipulating users' emotions for research purposes (how convenient).

The fact that Facebook even employs a person with the job title "Global Head of Safety" is telling. Sounds like they have a lot of unsafe stuff going on.

Sad; these people have no morality. Teen suicide rates have been on the uptick since the introduction of FB, and the rise has only accelerated.

One day we will look down on all the programmers who built these applications.

After a thorough investigation into ourselves, we found ourselves to be the bee's knees; now return to the feed, and feed on all these positive stories [1]

[1] https://www.thewrap.com/facebook-is-using-news-feed-to-promo...

How is it that nobody's noticed that Facebook's (and Google's) motto is basically:

"To Serve Man"

Spoiler alert -- it's a cookbook.

> What the data shows: What the study also said was

Giving yourself the prompt "What the data shows" and then, instead of discussing whether or not it shows the thing, discussing the fact that it also shows another thing, is a classic tactic of two categories of people: obfuscators, and those who can't converse logically.

While I agree that Facebook is lying, why do we want to hold Facebook accountable when it's the parents who are being negligent? Giving your children privacy is important, but that doesn't mean you have to let them use social media or have no idea who they're talking to. Talk to any kid on the internet today and they will tell you they encounter pedophiles, groomers, psychological torture, and bullying. Talk to the schools and they will tell you all their issues (fights, bullying) arise from Facebook. You can control what apps your children use; the technology is getting easier than ever, and if you can afford their iPhone, you can afford robust parental controls, or at least time out of your day.

We can't expect kids to win in a fight against trillion dollar conglomerates who want to create psychological addiction to their products, and we can't expect parents to effectively control their children at all waking moments.

This is a proper domain for governmental regulation.

The parents are addicted too; it's the same story as cigarettes.

There's a reason why the advertising is the same whether it be for Coca-Cola, cigarettes, or social media.

They don't know any better.

I agree with this.

> ² According to the raw unweighted data, we surveyed 1,296 teens in the US and 1,309 teens in the UK and asked them if they experienced a range of feelings or experiences in the past month ... then asked if the feeling started on Instagram

I'm not a statistician but isn't 1,296 (US teens) + 1,309 (UK teens) such a small sample size to make these conclusions with?

A quick Google shows:

> In 2019, approximately 21.05 million young people between the ages from 15 to 19 lived in the United States.

Source: https://www.statista.com/statistics/221852/number-of-youth-a....

EDIT: Added link to US teen numbers.

EDIT 2: Thanks for the replies and further reading it's more about confidence and margin of error.

No, it's plenty to measure the effect sizes reported. The larger the effect, the smaller the sample necessary to detect the effect. In this case the sample is large relative to the effect.

A bigger concern would be that the results don't generalize outside the US and UK, and my guess is that they probably don't.
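To make the sample-size intuition concrete, here is a rough sanity check, a sketch assuming a simple random sample and a 95% confidence level (the standard worst-case margin-of-error formula, not anything from Facebook's actual methodology):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a sample proportion,
    assuming a simple random sample. p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes quoted in the survey footnote: US, UK, and combined
for n in (1296, 1309, 1296 + 1309):
    print(f"n={n}: ±{100 * margin_of_error(n):.1f} percentage points")
# The single-country samples give roughly ±2.7 points, the combined
# sample roughly ±1.9 — tiny relative to effects like "1 in 3".
```

This is why the population size (21 million US teens) is largely irrelevant: for a well-drawn sample, precision depends almost entirely on n, not on the population it is drawn from.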

I'd love to hear from the downvoters of this comment, of which it appears there are several. The comment reflects my understanding too, but I'm not an expert on study design/statistics and would like to know if there's something I'm missing.

I worked at Facebook around the time this data was collected, and read these results internally as part of my job. I analyzed similar surveys to this one and worked on doing things like designing experiments based on predicted effect size.

Maybe it's because I didn't say this originally. The sample size agrees with my intuition about how large a study should be for the sorts of questions it wants to answer. If the proportion of people giving an important answer were very small, say 0.01%, then the sample size would need to be much larger.
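A back-of-envelope illustration of the point about rare answers (the proportions here are hypothetical, chosen only to contrast a common effect with a vanishingly rare one):

```python
def expected_respondents(n: int, p: float) -> float:
    """Expected number of sampled people giving an answer with true prevalence p."""
    return n * p

n = 1296 + 1309  # combined US + UK sample quoted in the footnote

common = expected_respondents(n, 0.22)    # hundreds of respondents back a 22% figure
rare = expected_respondents(n, 0.0001)    # well under one respondent: invisible

# Rough sample needed to expect at least 10 such rare answers
n_needed = 10 / 0.0001  # ≈ 100,000
print(common, rare, n_needed)
```

So a survey of ~2,600 is well-powered for proportions like 22%, but would indeed need to be orders of magnitude larger to say anything about a 0.01% phenomenon.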

Yes, that roughly matches my understanding as well, which is why I was curious what the actual criticism of your comment would be, in case you and I are both missing something.

From the fact that even that neutral curiosity was downvoted, I'm inclined to believe that there is no such substantive criticism beyond "Facebook bad, me angry". Sometimes I forget how poor the quality of the average HNer has gotten.

You can get a UK election pretty much bang on with a similar sample size. It's not really about the sample size but how you handle the data.
