“The research that we’ve seen is that using social apps to connect with other people can have positive mental-health benefits,” CEO Mark Zuckerberg said at a congressional hearing...
“We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues.
“Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.”
— from a March 2020 slide presentation that researchers posted to Facebook’s internal message board.
That statement cannot be false, because it is devoid of any content: it is a mere hypothetical. The fact that it is dressed up as "research" is meaningless as well; the sentence is like saying "the elements of the empty set fulfil all properties"...
So no, they are not contradicting themselves. They are simply and wantonly misleading people.
A lot of pharmaceutical advertisements do this too: this might help you, it might also kill you, you choose(!), but please ask your doctor about us anyway and throw us some money.
Therefore, if they say it "may help against X", then it probably doesn't, because if they had substantial evidence that it did, they would state that outright.
I make $BILLIONS from this drink.
(fill in whatever value of X, Y and Z makes this all ok)
The rest I would truly contest.
Prom is not an every-single-morning, universe-commenting event, no matter how the teenage brain may attempt to blow it up.
Cosmo and Vogue do not have the same continuous feedback loop, and they clearly have 100x (1000x? 1,000,000x?) less content being created, repeated, refreshed, etc. It's a magazine. If you want to invoke their IG handles, honestly, I could find hundreds of influencers with more reach and engagement.
How did you pull this from a rather benign statement about high school prom and magazines?
I believe other social phenomena are responsible for this, mainly forms of peer pressure. The stories influencers use to lure their audiences often invoke the threat of sexism, and that naturally increases fears that might lead to a feedback loop.
"Regular smoking use is associated with lower BMI, which is shown to correlate with improved heart health and lower mortality rates."
"Use of fossil fuels powers a number of ecology-preserving tasks, allowing us to care for the environment in a way we could not without this amazing source of Earth-loving fuel."
It still needs a name. Here let me try:
Disproportionate benefit insinuation error: implying that because something has some benefit, it is overall helpful, without explicitly stating that it has more benefits than disadvantages (because that would be a disprovable lie).
Or just "plausible deniability": factually stating the existence of a beneficial tree while omitting mention of the harmful forest.
Or just "cherry-picking"
Or to cover cases where a harm is insinuated/emphasized in the same way in order to discredit something (see: literally all the antivax data, antiscience, antimedicine... "trust doctors, you mean like the ones who prescribed thalidomide?")... "Misrepresenting the forest"
There's a conspiracy theory stating that he's wearing AR contact lenses such as those (https://www.wired.com/story/mojo-vision-smart-contact-lens/) and we know he's been working on this since 2017 (https://www.dezeen.com/2017/04/20/mark-zuckerberg-facebook-e...).
I think the problem is most people probably read the statement and don't realize just how weak the evidence required to make it true is.
1. I remember that while growing up, fashion magazines, actors/actresses, and, I want to say, peer pressure (though I'm fearful that's incorrect) were a big source of body image issues.
2. Maybe it's a side effect of our society, or simply "normal", to have insecurities while growing up. We've been having good childhoods for how long - 100 years? Maybe there are some feelings that we never had the option of expressing or feeling.
From the article, it seems that young girls know about the toxicity in the application, so why do they keep using it? And if their parents are aware too, why do they let their daughters use it?
But I'll not get in to that. :)
Facebook and Instagram are drug dealers. Sure, they're not physically distributing a substance, but the social interactions they provide are every bit as addictive. In time, society will abolish or tightly constrain them in the same way society has banned other highly addictive substances.
And this is why I don't blame the users.
getting off soapbox
I don't work in behavioral science, but this isn't my understanding. I was under the impression that research indicates peers are the larger influence, which can be tempered somewhat by parents. For an easy example, whether or not peers smoke is a better predictor than whether parents smoke.
Those are their peers.
They aren’t their friends. Kids make their own friends, but it’s unusual for them to be able to make lifestyle choices like where they live before adulthood.
To note, your peers aren’t people on Instagram per se. I’d guess a study on the issue would say that a peer group they associate with on daily basis like in school would have more of an effect on their choices than Instagram influencers.
True, but for most people this is only available within a subset of constrained choices. Parents in rural Appalachia or the rust belt probably aren't going to have a lot of Phillips Academies to choose from.
You might say that, but you would be wrong. Parents of teens have been the most influential people in their lives up to them becoming teens. Part of the process into adulthood is to break away from that pattern, to explore and build connections outside the family sphere. That's why teens are the most vulnerable demographic for a lot of things -- their brains are in the process of rewiring themselves for more personal responsibility and less parental oversight. So they're actively seeking to avoid parental control, but haven't yet learned to correctly weigh and assess long-term effects of their decisions.
I certainly think the internet gave me access to many more humans, ideas, and tribes than my parents' generation had access to, and it would be hard for me to see how it would have been possible for my parents to have as much influence on me as their parents had on them.
I would even say it is indisputable there are forces beyond parents’ control unless the parents opt to live an Amish lifestyle, such as using devices connected to the internet and various social networks. If you do not give it to your kid, someone at school will, and even more, you probably need to teach your kid how to play the game rather than have them start it blind while the other players have experience.
And not all parents realize, or can realize, everything that goes on in their children's lives.
That’s not true in any western country :)
There is no need to have any grudge against me (I hope). I get it. I never said being a parent is an easy job. I acknowledge how hard it is but...
> Facebook and Instagram are drug dealers
If you teach your kids not to consume drugs, why should Instagram be any different?
Imagine if all the other kids at school used drugs on a daily basis, that there was advertising plastered everywhere telling you that drugs are cool, that successful people use drugs and that your worth in this world can be directly tied to successful use of drugs.
I don't mean to sound mean here, but are you a parent? If not I suspect you're not really aware of the realities of parenting, particularly once a child becomes teenage. You can only do so much. And you certainly can't win against a multi-billion dollar enterprise determined to make your rebellion-inclined teenager do something.
Do you have any comments on the tactics used by cigarette companies -- specifically in the United States -- before the 1998 "Tobacco Master Settlement Agreement"? When I was a kid, the amount of advertising by tobacco was incredible. It was unavoidable and everywhere... and anyone cool or famous seemed associated with cigarette companies.
Even if you ignore smoking cigarettes, the topic of smoking marijuana will surely be a major issue for current and next gen parents. How would you parent around this issue? It is so complex.
The tactics worked? Rates of smoking used to be way higher in the past.
With this I just want to say that I know it's a difficult task, but presenting it as a lost battle, or as impossible, seems wrong to me.
"Peers’ smoking is the strongest predictor of adolescent smoking."
 Gecková, A.M., Stewart, R., van Dijk, J.P., Orosová, O.G., Groothoff, J.W. and Post, D., 2005. Influence of socio-economic status, parents and peers on smoking behaviour of adolescents. European addiction research, 11(4), pp.204-209.
The thing is, we made headway in the battle against smoking by disallowing people from doing all those things. You're not allowed to market to kids, you're not allowed to depict smoking in most TV that kids are likely to watch, you're not allowed to smoke in or near schools and teachers who do are frowned upon. Cigarettes themselves are taxed heavily specifically to price young people out of getting into the habit, vendors are required to card, and the cigarette makers are even required to pay into a fund that promotes anti-smoking messaging.
So yes it is definitely not an impossible task. But the things that make it possible require taking it seriously as a danger and addressing it collectively.
> But I was taught not to do it and never did it.
So not all young people in your time were taught similarly. Or if they were, those teachings didn't stick.
Ideally the number of minors smoking would be zero. I don't think that's a radical idea, and I hope it's something that everyone can agree on. "Teachings from parents" obviously didn't achieve that goal, from your own experience.
In many high schools this isn't far off the truth.
One thing that seems "obvious" for this generation of parents: work hard to educate your children on the dangers of traditional cigarette smoking. That is a seriously terrible habit for your health and well-being. I feel much less strongly about vaping (e-cigarettes), as the health effects of nicotine addiction are still far lower than those of traditional cigarette smoke inhaled into the lungs.
I think there's a certain nuance there. Drugs (which ones?) can (should?) be legal - or at least their consumption decriminalized. We've made good progress already and yet much more is to be made.
But none of this is to say that drugs should be pushed hand over fist down people's throats, or that billions should be spent on studying ways people can be encouraged to become drug users.
And this is what Facebook et al are doing. They are spending untold resources on devising the most efficient ways of making people addicted and ensuring that no other way exists of satisfying the cravings. They create echo chambers and push specifically the topics that get the most response out of people.
And this is completely legal (currently). Facebook, Instagram, Tiktok etc... are allowed to be cool, hip, desirable and consumed in ways that the tobacco industry couldn't imagine in their wildest dreams during their heydays.
These companies can tap into the deepest secrets and desires of vast swathes of people in ways that are unprecedented.
I feel that the vices of the old world, like drugs, are chickenshit compared to the power that the new vices can wield over their captives.
I hope not; the War on Drugs is one of the biggest disasters in modern history, directly linked to untold problems in society.
"I got it all, buddy. Instagram, Snapchat, Facebook...I can hook you up."
“We” certainly blamed the users of addictive substances and vilified, or at least looked down on, them. That is, until it was heavily white and middle-class-or-above people with opioids. Even then it was slow, but some things were done, far more than what was done with other highly addictive substances. It’s a disgrace.
I shudder to think of drug-banning history happening in any other context.
I know, because like most children, I was exposed to online porn at a young age and was addicted to it well into my adult life. These companies need regulation, because they are bad actors.
Nah. That's been studied heavily. There's an overview from the National Institutes of Health, and Wikipedia has one as well. The overall conclusion is that most of the research is of terrible quality and there's no big measurable effect.
>The proposed DSM-5, slated to publish in May of 2014, contains in this new addition the diagnosis of Hypersexual Disorder, which includes problematic, compulsive pornography use. Bostwick and Bucci, in their report out of the Mayo Clinic on treating Internet pornography addiction with naltrexone, wrote “…cellular adaptations in the (pornography) addict’s PFC result in increased salience of drug-associated stimuli, decreased salience of non-drug stimuli, and decreased interest in pursuing goal-directed activities central to survival.”
No, nothing needs regulation. Stop making the internet fucking worse. Can we go back to 2000 now (not that it was good then either since the internet was fundamentally broken already)? This is like the bat shit insane morons who think having a popup about cookies on every page is solving the """privacy""" issue.
Literally every single political issue on HN is bogus. Take the ad blocking issue for instance, nothing that has ads actually matters. Your "solutions" like Brave are pure garbage.
The "privacy" issue doesn't exist because if we were using sane tech instead of webshit, there wouldn't be any tracking since it wouldn't be conceptually possible. Why the hell can tech even track you in the first place for reading static documents? This is a poor analog that cannot even compete with paper newspapers (which are also much more legible because they are not on LCDs).
Net neutrality doesn't matter because nobody can ELI5 why I should care about it. Since the internet is all garbage, it shouldn't be an issue that it's expensive. Just don't use it. Make a free replacement. Cuban citizens have already done it.
Now let me try and list CURRENT_YEAR.addictions:
- Working out
- Social media
- TV (youtube or whatever you use now)
- HN (muh dunning kruger syndrome, imposter, et al)
- Lotto tickets
- Stock market
- Things that are sort of drugs but not
- Any substance whatsoever
- Literally any hobby
Oh look guys, HN needs to be regulated because I can come up with a person who has problems because of it.
Guys we need to regulate fat and high calorie food. Oh wait it grows on trees.
People who see a problem and immediately go "we need regulation to solve this" (and even proceed to come up with some ad-hoc hypothesis of how it solves the problem after it's been proven that it doesn't solve it in any substantial way) are morons. There is actually something wrong with their brains. They hold back progress. Every new law is a potential stumbling block for progress, and that is why new legislation should be avoided at all costs. See the "separation of mechanism and policy" article on Wikipedia to see how people in tech already knew about this 70 years ago.
This is a hyperbolic statement. Even if you are just talking about internet regulation.
Let's imagine for a moment that someone invented a hypnosis algorithm and hosted it on a website. Anyone going to this website went into spasms and died in front of their screen. Would we seek protection for our children and for the general public from such a website from internet browser companies, ISPs and the government? Yes we would. This is an extreme example but it illustrates a point. You can say the same thing about websites that prey on children, or the elderly.
I'm not advocating for the banning of pornography altogether. I am making the simple proposition that it be better regulated. People who distribute porn know full well that their content is seen by minors. Having a child check a box that says that they are over 18 is not good enough. If I hadn't seen porn as a minor, I might have had a better chance of avoiding the extremely negative impacts that it can carry with it. Don't believe me? Visit a support page like r/NoFap and read the hundreds of thousands of stories there.
I'm not going to touch any of the other subjects you raised because I'm not arguing for any of the things you listed.
Literally every social issue on the web for the last 20 years follows this one simple formula:
> X causes Y. Yes it sounds stupid, but read this long winded reasoning or spend the next 70 hours of your life going down my trail of studies to back this up
And nobody actually invests their lives in rebutting them, and they get bored and stop talking about it 5 years later.
I didn't say it did.
> You being unable to control yourself does not justify undue restrictions on other people.
I'm not talking about myself, I'm talking about minors. Protecting children from products that require an adult brain to ascertain harm is a positive function of government.
> The internet should not be ceded to nanny staters and morality police.
This is not about morality. Please don't read intentions where there are none.
1 - mandate disclaimers in front of all videos describing the possible negative effects of porn (there are concrete, well-studied effects). Cigarettes and tobacco have the same mandates, and they do have an overall positive effect on educating the public.
2 - hold video hosting sites liable if content is shown to minors. There is a reason why a bar can get closed down or a gas station attendant can lose his or her job if alcohol or tobacco is sold to minors. The same rules need to apply to sexually explicit material that is turbocharged to reach children.
Why would you think this will work? My parents, school, etc. already gave me a million false warnings about porn, and yet I looked at it. Did that even work for smoking? I think smoking only declined once vaping replaced it. Now I have to skip the intro logo as well as some stupid disclaimer, and producers have to waste more of their time on legal checkboxes. Great.
> 2 - hold video hosting sites liable if content is shown to minors.
That's not a concrete plan. Do we need photo ID here? Some experimental crypto to disclose your government certified age to the website so it can decide not to kick you off? What about a forum where anyone can post any image? Does the forum have to be legally liable to block minors if it has no rule against porn?
The internet worked perfectly in 2000. I got my porn when I was 13 and had no problem. There was not a single complaint aside from corporate scum trying to enforce DMCA crap (the multi-billion-dollar company was complaining, nobody else). Only when all you American idiots came in 2010 from faceberg did all these pretend social problems start existing. The internet is literally just data transmission, and this act could not be more harmless if you wanted it to be. Quite literally, the internet is the most harmless technology in existence. It cannot give you any disease, etc. It costs nothing, etc. What we are seeing here is the American art of being a professional victim. One should start by observing that almost every single complaint about the internet starts with "I read some text and now I am offended".
I envision the internet as community-run, and free. The current internet is all obsolete garbage. The problem is, we won't actually be able to build this new internet, because everything will be illegal by then. It will be illegal to run point-to-point to your neighbour because of some stupid porno law that has absolutely nothing to do with your application.
I'm not familiar with actual research in this space, but I would imagine that it's similarly addictive to being with friends IRL?
(To me) social media is a great tool to connect with friends and stay present, even after we move away, through work and pandemic quarantine. It's a supplement for IRL relationships. I used to live with friends in college, and I used to go out and get food or drinks or whatever almost daily to get my "fix" of socialization.
Are social networks really that different? I recognize that some people (teen girls?) might wish they looked like a supermodel on social media, or get jealous of the lives of influencers, but is that different from old magazines and movie stars? Is the mixture of "social" and "influencers" in one news feed detrimental? Is there more exposure? What makes social networking more "addictive" (and therefore dangerous) than actual socialization?
Not even a little? Users aren't brainless, as much as some would have you believe.
If you are not using it, uninformed people will laugh at you (peer pressure, network effect). Furthermore, because so many people use FB, many people will use FB to organize events, which one does not even know about, because of not being on FB. In the end, they will spin it (because they do not know better themselves) so that it is you who isolated yourself from the rest of "society".
I have experienced it many times. I have missed out on probably many things over time. Yet I refuse to be a part of FB and stuff like that. However, I am only one person. The peer pressure probably works on most people, because most are not as informed about FB (and Instagram and whatever else they own) and what it does as the crowd on HN is, for example. That means that the argument of "no one is forcing anyone" is a bad one, because you would need to add "but they will make your life worse if you do not join!". It is a kind of extortion, which an uninformed society unknowingly exerts on the individual, put in motion by dark patterns, privacy hostility and bad practices on the side of FB.
Their business model is poisoning the well of society, and strip-mining its value by breaking the bonds that hold it together.
The very best that can be concluded about its entire leadership is that they entirely lack any hint of a moral compass or sense of responsibility to the society from which they extract their wealth.
The more I observe their behavior, the more it looks like worse conclusions are supported by the data.
Just stop justifying things on technical bases. That is what FB is doing in the top post above.
They may know that it's bad for you, just as I expect most smokers to know smoking is bad for you.
But the "badness" isn't direct, it grows, and they could think "maybe I'll be able to control it / I can quit whenever I want", whereas if you don't follow current trends (or whatever it is people follow on instagram), you're left out immediately.
Of course, by the time you realize you can't quit, it's already too late.
Yeah, it is difficult for people to admit they're at fault.
Blame is justified only when someone is forcing you to do something or keeping you from something.
If FB/Instagram use is damaging a substantial percentage of your population, it's your problem, regardless of whatever moral frame you decide to put around it.
Unless telling people to take responsibility for themselves is an effective systemic intervention (it might be!) then it's not very useful, except as a PR strategy to deflect blame from Facebook.
This is a disheartening take.
For instance, I'm not sure becoming a social outcast is great for anxiety and depression issues.
And every time, somebody else must do something about it, in a way that affects everybody.
For example, take some 13 year old girl, probably a poor farmer, from Ireland in 1800. She will have seen almost no examples of the female body except her own, her peers rarely, and her familial elders. Then plop her down into today's world, and she'll definitely be barraged by all the most ridiculous images of flat stomachs, huge anatomical parts and tiny anatomical parts, immaculately photoshopped forms: she'll be made to feel less than them.
I remember reading somewhere, can't remember where, something that addressed this point.
Broadly speaking, the idea was that "people in magazines" weren't perceived as "peers", so you wouldn't compare yourself to them in the same way as you'd compare with a classmate, or some other "regular person", "just like yourself".
Instagram is an easy target, like video games, TV, cell phones, and the Internet were before it. But like those things, I don't think there's much causation going on.
About #1: From my childhood experience, this is accurate. Youth fashion magazines and network TV from 7PM to 10PM played an outsized role in influencing our (small) world view. However, I find it interesting that as social media has exploded, there is a parallel movement to reduce digital post-processing of models' photographs. It is still a moving target, but the trend is less and less processing of photographs in fashion magazines and adverts. (Note: this "commitment" varies wildly by region!)
About #2: This comment is so deep. I remember reading the novel "Orphan Train" a few years ago, and the author spent considerable time pulling you into the world within which these young people lived. Granted, the events occur about 100 years ago, but they help us understand the pressures of youth from four generations ago. By the end, you felt like a movie director, peering into their fractured lives. Generally speaking, I think of a "generation" as being 25 years. If we look back 100 years, to someone born in 1921, then each 25 years is a new generation. Assuming these people lived in relatively free and prosperous places... I dunno, pick Belgium or Argentina or Australia... each 25 years, children's lives would be hugely different from their parents' due to major social advances, improved education, and new media outlets. In my generation, the biggest concerns were "peer pressure" and "too much TV or video games". Some of that still exists, but it has morphed more towards bullying (including early-age homophobia and transphobia) and too much Internet / social media. An exciting question: What comes after this generation? Too much AR / VR!?
In some cases, it's justified, but the vast majority of time it's just avoiding responsibility.
However, if you are looking at a large percentage of people experiencing something, it isn't helpful to just say all those people need personal responsibility. You can point to individual failings when you talk about an individual, but if 75% of people are having individual failings it is symptomatic of something structural.
There's always going to be some of this kind of thing in everything. I'd accept the concentration of it in social media (TikTok is probably the worst of them) is not healthy, but the issues themselves are independent of the medium of the time imo.
Yes, 1/3 are affected badly, but we just say that it can have positive effects, omitting the fact that it often has a detrimental effect…
> Over everything—up through the wreckage of the city, in gutters, along the riverbanks, tangled among tiles and tin roofing, climbing on charred tree trunks—was a blanket of fresh, vivid, lush, optimistic green; the verdancy rose even from the foundations of ruined houses. Weeds already hid the ashes, and wild flowers were in bloom among the city’s bones. The bomb had not only left the underground organs of plants intact; it had stimulated them. Everywhere were bluets and Spanish bayonets, goosefoot, morning glories and day lilies, the hairy-fruited bean, purslane and clotbur and sesame and panic grass and feverfew. Especially in a circle at the center, sickle senna grew in extraordinary regeneration, not only standing among the charred remnants of the same plant but pushing up in new places, among bricks and through cracks in the asphalt. It actually seemed as if a load of sickle-senna seed had been dropped along with the bomb.
Never mind that the rest of the article will probably elicit a few spontaneous sobs from an empathetic reader.
It's only "much harder if not impossible" for lack of trying. And while yes, there are some out there that simply don't have such options, we also have people that need to cut down a tree but can't afford the work, and so simply hope it won't fall on their house the next time it gets windy.
Email lists and chat servers are just Facebook with extra steps. The only difference ends up being the fact that Facebook-like social networks suggest you people and content you might like from a giant pool, whereas the alternatives have rather limited pools and the signal to noise ratio is pathetic because you get literally everything that is posted and have to filter through it manually.
Don't get me wrong, I hate Facebook and only use it on maybe a handful of occasions per year, but you can't tell me that it didn't enable things that weren't possible for many before it was invented.
I do not have any accounts in my real name, and I had no problems finding others via our pseudonyms when I wanted to share them with people in the physical world.
In all discussions I've heard that mention "exclusive value" or a similar concept, the agreed-upon definition was always something like "where all others are orders of magnitude worse".
Why? Why is this socially acceptable? Is this different from the past, or are we just more aware of it?
Second, it's possible to objectively demonstrate whether a factual statement is correct or not. But whether a statement is disingenuous or misleading cannot be proven with the same level of certainty (absent evidence of intent). So bad-faith actors can always guarantee that disputing that contention will end, at worst, in an "agree to disagree" draw.
Running a company in today's environment feels very different than 20 years ago, and to make it clear - it's better for the world this way. Any employee - no matter how low or high ranking - has the ability to erase a significant amount of enterprise value from any business, no matter how large or small. This is driven by improved moral standards, but also extreme connectivity that we have in terms of obtaining information internally (slack, notion, google drive and thousands of other software solutions that have increased everyone's access to documents) and sharing it with the outside world (social media, easy access to journalists, readers' interest in holding companies accountable).
I would be surprised if we don't experience some type of a push back from the companies. At the very least, I imagine access to info will be reduced (already happened at Google), and also that we'll increasingly start seeing new ways of how employees are connected with each other (both in terms of policy as well as actual barriers that will prohibit anyone from reaching too many people). I imagine that companies will also start researching new candidates' propensity for activism by analyzing their social media content. For example, if you post "tax the rich!" on Facebook, I imagine that in the not so distant future that will have a negative impact on your market value.
If there's one thing that I've learned, it's that every action triggers a reaction (which is time-delayed in the sense that it arrives late and also overshoots the original target), and the constant yo-yoing between those two forces is what explains much of the irrational behavior in the world.
One thing could be to hire fewer people. Oddly large software companies are an evergreen topic on HN. Every middle manager wants to empire-build, but if each new hire is a potential leak, then that could be a brake on Google's relentless goal of hitting 1M employees by 2030: https://image.cnbcfm.com/api/v1/image/106318886-157800431271...
They reach a point where they're fundamentally unable to honestly ask themselves whether their "good thing" is actually good. I have a fairly respectable social graph on Facebook and Twitter, but far fewer actual friends than I did in the pre-social network days. A row in a database isn't connection, and it isn't good.
Is it the mission statement itself which is dangerous? Or is it possible to come up with a "good" one?
> It’s All About the Long Term
> We believe that a fundamental measure of our success will be the shareholder value we create over the long term. This value will be a direct result of our ability to extend and solidify our current market leadership position.
> The stronger our market leadership, the more powerful our economic model. Market leadership can translate directly to higher revenue, higher profitability, greater capital velocity, and correspondingly stronger returns on invested capital.
> Our decisions have consistently reflected this focus. We first measure ourselves in terms of the metrics most indicative of our market leadership: customer and revenue growth, the degree to which our customers continue to purchase from us on a repeat basis, and the strength of our brand. We have invested and will continue to invest aggressively to expand and leverage our customer base, brand, and infrastructure as we move to establish an enduring franchise.
> Because of our emphasis on the long term, we may make decisions and weigh tradeoffs differently than some companies. Accordingly, we want to share with you our fundamental management and decision-making approach so that you, our shareholders, may confirm that it is consistent with your investment philosophy: (...)
You don't have to operate your company in a way that maximizes shareholder profit from quarter to quarter. You do have a responsibility to shareholders, but that responsibility can be met by following your mission statement. If you believe your company should only (e.g.) use sustainable energy sources because it's important for the planet, or that you're going to invest for long-term growth rather than short-term profit, you can disclose that to your shareholders and still meet your obligations. Company directors have considerable leeway in how they meet their fiduciary responsibilities, and the popular idea that companies "must maximize profit at all costs" is simply incorrect.
Now, if your company is failing, the stock is falling, products are failing in the market, then your shareholders may press for a variety of things: a change in mission or direction, the resignation of leaders; or they may buy enough of the company to control enough board seats to oust and replace current leadership. But if you're successful at executing on your mission, have investors who are on board with it, and you're also making money (or growing, which is just as good or better), then you don't need to worry about that.
If your goal does not include making money for shareholders at any point, then you probably shouldn't organize as a for-profit corporation, though.
The fix here is that you must balance that mission statement with a well-defined set of Values as well. The mission is what you want to accomplish in the world, and the Values are the principles you plan to adhere to while doing so. The values can't just be some afterthought "check the box" exercise - they have to have significant decision-making weight in practice.
A set of (imperfect, I'm sure!) examples from the Wikimedia Foundation:
Have a mission statement, just don't go crazy with it.
(I was joking with the comment, but I'm not so sure now.)
> students who reported having sex with persons of the same sex or with both sexes (30.3%); and students who identified as lesbian, gay, or bisexual (23.4%)
Assuming that most people who have sex with others of the same sex do so because they are gay (rather than e.g. because they are experimenting), this indicates that people who are in denial about being gay are more prone to suicide.
It's reasonable to expect that if one gender is developing self-esteem issues from a social network, then the other gender _probably_ is too.
In 2018, "obesity prevalence was [...] 21.2% among 12- to 19-year-olds," according to the CDC. That's one in five being obese, not merely overweight, and the rate has more than tripled since the 1970s.
And then we start blaming the "evil screens" for people not finding themselves attractive.
Move fast and break Zuckerberg!
It just seems really silly to me to walk into this thread about how Facebook internally believes it's doing harm, and reply to a statement about how the harmed demographic feels it's being harmed, in the context of well-documented ways that harm occurs and the effects it has, with a statement doubting the harm exists at all and questioning the self-reported experience of it.
It’s like asking why studies using self-reported survey numbers on penis size cannot be treated as the ultimate truth, despite adults surely being able to use a measuring tape or a ruler. There is a reason the averages in self-reported studies run at least one standard deviation larger than the averages from studies that measured directly.
Does society value beauty? Sure. Do models set "unrealistic expectations"? Sure. Are some high school girls assholes to other girls? Absolutely. But once upon a time kids would go home and those people would be gone. Now it's a 24/7 feedback loop and it's completely unhealthy.
This is pretty much an internet problem, not a Facebook problem. The things people say to each other on forums or Twitter or Facebook, especially when anonymous (but not always), are quite often horrendous.
It's absurd to absolve a company as wealthy as Facebook, which optimizes for "engagement," of its externalities on children.
Is there something on the internet that isn't designed as such?
Maybe weather apps... Everything else, from Wikipedia to GitHub to Stack Overflow to the site you're on now, is yet another automated, massively multiplayer kudo-ranking system. The only meaningful difference appears to be demographic; Facebook is one of the places 'teen girls' spend their time. All I see here is evidence that clicks can still be had by ascribing some concern to the fate of young women.
Gamification is benign compared to an interactive news feed developed to drive ad sales. I find it silly to point at Wikipedia especially, but also GitHub, as being designed to maximize engagement in a way comparable to social networks. Neither has much to gain from addicted users; the same can't be said for Facebook, IG, or Reddit.
All three have various feed mechanisms. They all have various ranking systems. They all employ people (even if mere 'volunteers') to 'maximize engagement.' And every one of these systems has people obsessed with their profile. Every. Single. One.
They're just mostly not 'teen girls' and so the 'problem' makes for poor headline material.
Wikipedia is rife with obsessed 'editors' climbing the rungs of the interweb status ladder. They're easy to find. Look at their profile pages; filled with achievement badges (gamification) and vast profiles of their lives. Does Facebook have 'campaigns' for 'elections' that grant power on the platform? I honestly don't know because I spend no time there, but I know Wikipedia does, and we can only imagine the anxiety involved for these 'candidates.' Thankfully though, they're mostly not 'teen girls' so we're not going to worry about them.
Speaking to your point more directly, the massive, massive difference is the primary user of Instagram and typical social media vs the primary user of these other platforms. The primary user of Instagram is shown addictive feeds non-stop. It’s really not possible or enjoyable to use Instagram without interacting with designs deliberately and literally trying to get you addicted to the platform. This isn’t true for something like Wikipedia or Stack Overflow. The primary user is going to come across a few articles now and then or search something. But they are rarely going to be interacting with an algorithm or design trying to get you addicted for hours a day.
Think about it like gambling. The only way to use Instagram or Tiktok or Facebook or anything like that is by pulling the lever on a slot machine, trying to get a dopamine hit.
The inherent design and purpose of social media platforms is to addict people so they spend as much time on the platform as possible, and then show them ads. That’s not true of Wikipedia or Stack Overflow. Those examples are more about policies that encourage toxic behavior, or just crappy people on the internet, not a critique of the platform's inherent design.
"They came to the conclusion that some of the problems were specific to Instagram, and not social media more broadly. That is especially true concerning so-called social comparison, which is when people assess their own value in relation to the attractiveness, wealth and success of others."
"'Social comparison is worse on Instagram,' states Facebook's deep dive into teen girl body-image issues in 2020, noting that TikTok, a short-video app, is grounded in performance, while users on Snapchat, a rival photo and video-sharing app, are sheltered by jokey filters that 'keep the focus on the face.' In contrast, Instagram focuses heavily on the body and lifestyle."
So I suspect that their understanding of the problem is better than yours, and that there is something about Instagram that makes it worse than generic social media.
You are correct, however, that Twitter is also toxic, like all social media.
The problem with social media, then, is that it allows -- indeed encourages, because "engagement" -- all that toxicity to spread so much more widely and rapidly than ever before.
This post is about FB suppressing their own research, which showed that Instagram and its algorithm are toxic to teenage girls.
> “We make body image issues worse for one in three teen girls,”
> “Teens blame Instagram for increases in the rate of anxiety and depression,”
> “This reaction was unprompted and consistent across all groups.”
But a very small minority that doesn't have morals is creating the illusion that most people don't have them.
Right now we have a large percentage of people that refuse to get a vaccine to protect their neighbors.
Everyone might have morals, but the bar is low.
I don't know many such morally-absent folks. Perhaps the perverse incentives in our system mean a bunch of CEOs are quite sociopathic, but on the whole, people seem rather good intentioned to me. The internet certainly serves as a platform for many that would be otherwise shunned IRL, but I don't think they're the general majority...just the majority that decide to voice their opinions. The rest aren't even paying attention - only a small fraction of people have a Twitter account, for example.
Even reddit, which can be anonymous, is filled with good discourse, assuming you avoid certain subreddits, and sort by top. Moderation goes a long way to mimicking our more natural IRL tendencies to turn down the assholes. Twitter and Instagram generally lack those tools, so the assholes can be louder and _seem_ more prominent than they are.
Given how profitable that’s been it seems reasonable to expect them to be involved in fixing it.
Or they’re paid massive salaries not to do that.
And yes we need regulatory help. Tens of millions in this country are experiencing mental health issues as a result of these platforms. When we understood that other industries were causing a health crisis we regulated them, the same needs to occur here.
Note: I don't know what the regulatory solution should be, but we should be having that discussion.
I agree with the 2nd part of your comment. But this part is just dishonest, and sounds like a startup pitch about addressable market. Do you honestly believe that each teen girl is being followed by hundreds of millions of people who comment on every single photo of her? Almost everyone still lives in their social bubbles, and yes, it's easier to communicate and say mean things to each other (a truly 24/7 feedback loop), but they almost exclusively come from people you know, not hundreds of millions of internet randos.
Doesn't matter if it reaches 10 or 10000 people.
An individual girl in 1995 had enough shows on TV and enough magazines to tell her she is not looking good.
And that's just some of the content. The algorithm is also just as happy to feed you a continuous stream of outraging, extremist "punditry", manufactured drama, fake "crafting" videos, conspiracies, and paranoid, depressing "news" - anything to keep you engaged.
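The mechanism being described can be made concrete with a toy sketch. This is a minimal, hypothetical illustration (the item names, scores, and the `rank_feed` helper are all invented for the example, not anything from a real platform): if provocative content reliably earns more clicks and comments, then a ranker whose only objective is predicted engagement will place it on top, regardless of its effect on the viewer.

```python
def rank_feed(items):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(items, key=lambda item: item["predicted_engagement"], reverse=True)

# Hypothetical candidate posts with made-up engagement predictions.
candidates = [
    {"title": "local park cleanup recap", "predicted_engagement": 0.11},
    {"title": "outrage-bait pundit clip",  "predicted_engagement": 0.42},
    {"title": "friend's vacation photos",  "predicted_engagement": 0.23},
]

feed = rank_feed(candidates)
print([item["title"] for item in feed])
# The pundit clip lands on top purely because the objective is
# engagement, not viewer well-being.
```

Nothing in this objective distinguishes "engaging because it's useful" from "engaging because it's enraging," which is the core of the complaint above.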
I'm not really sure about it. When highschool girls wore the same clothes the Spice Girls wore, had the same haircut they had and replicated their dances, for me that's a lot of connection.
I'm sure Facebook/Instagram have a lot to be blamed for, but we (not just young girls; parents, teachers, etc.) need to take responsibility for our actions to some degree.
Those girls in 1995 got their media fix through magazines geared at women and teenaged girls, where they'd find an impossibly thin model on every other page. The Kardashians wouldn't have been well-received in the 90s, because all of them would have been considered fat by those standards.
I honestly can't tell if you're joking or not, but study after study has shown that dating was far more mentally healthy in 1995 than it is today. Online dating falls into almost the exact same category of causing anxiety and making it more difficult for younger generations to form meaningful relationships - just like instagram and facebook.
It's a genuine problem, in that it is fully exploitative of men, but not related to this imo.
If anything I'd argue that more than ever both women and men are searching for authenticity in a partner. Something increasingly difficult to come by in our social media fueled world.
Sure, there is a subset on both sides that has been completely sucked in by this culture and measures each other's worth by follower counts on Instagram, but I'd actually view this as a positive: it's really convenient to be able to identify and filter out these vain individuals early in the dating process.
Keep your head up nodejs_rulez_1, there are still plenty of good women and men out there.
Simply put, any man or woman with a lot of choices would be less invested in any of the options presented.
Also for all of the women I've seen flown out from Alabama, Mississippi, etc to NY, LA & Miami by the more affluent men in my social circles, I haven't seen a single long term relationship develop out of it. The wider net isn't leading to better outcomes.
Species, not culture.
The evolutionary explanations for such are not hard to come up with just with a bit of thought, and not hard to confirm in the literature either. Which also means the species isn't about to stop admiring those things any time soon.
Which means, rather than the "boil the ocean, then boil it a few more times again" plan of trying to somehow "fix" the species not to admire those things, one needs to pursue a plan of figuring out how to live within the existing constraints.
Which takes you right back to the old idea of one's rationality being a small human trying to corral the crazy elephant it is riding to go where we want it to go. The trick is to learn how to prevent the crazy elephant from even seeing the undesirable stimuli, rather than trying to deal with what happens after it does.
Unfortunately, explaining that to teenagers is a tough sell, especially when the alternative is tuning you out and going back to the highly-addictive social media.... even explaining it to adults can be a tough sell.
Facebook/YouTube/Twitter all exploit the dark side of human nature. They've been doing it for so many years, that it's difficult to imagine that it's unintentional.
>in Google’s effort to keep people on its video platform as long as possible, “its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with—or to incendiary content in general,” and adds, “It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content.”
In corporate America, it's hard for me to see any privately owned corporation doing what's in the public's best interest rather than its own pockets.
Kind of like how we said that the answer to alcohol & tobacco was to show self-control, not restrict sales and advertising? Expecting people to go one on one against enormous companies’ profit motives is a recipe for failures.
If people didn't want to see it, they wouldn't click on it. Banning it or regulating it is not going to change that, it's just going to cause it to shift elsewhere. It's human nature.
Regulations come in flavors other than draconian. Do you have a specific policy proposal which you're referring to? Otherwise it seems somewhat disingenuous to predict the failure of something which we don't even have enough detail to discuss.
There are some things which you could try regulating: for example, a lot of what drives the dubious aspects comes back to algorithmic promotion maximizing time on site and advertising views. Legislators could ban algorithmic promotion for children, require companies to identify and curb addictive levels of consumption, or require companies to put more effort into moderation on the posts & comments which they promote.
Similarly, I believe at least some countries are exploring requirements to clearly indicate photos which have been modified or retouched.
"Some people feel bad about their bodies after viewing social media" doesn't nearly meet the threshold of measurable harm that tobacco does. And algorithmic promotion can be positive, unlike the universally health-corrosive effects of cigarettes.
Where the impact of cigarettes is relevant is in the discussion of how _strong_ a particular regulation should be. A deadly threat certainly warrants stricter rules than something minor, just as we do not enforce zoning violations with the death penalty.
If there were a specific policy proposal, you could talk about whether you think it'd be effective or overkill, but instead you appear to be arguing that there's no need to even consider the range of policy options.
There are tons of great examples where an algorithmic feed is useful, particularly when your feed relates to activities such as cooking, music, and exercise.
In theory I think the concept of forcing the companies to behave a certain way is ideal but I'm still unsure of what type of legislation could be put in place to address the problem in the article.
I definitely agree that there isn't a proven solution for this problem — that's normal for major technological changes. We saw the same thing with printed books, magazines, and newspapers; radio; TV; the internet; etc. — not to mention things like cars which weren't communications technologies but definitely had major impacts on society. I think the best thing we can get right now is more of the data companies like Facebook and Google tend to avoid sharing, especially after various governments experiment with rules and it becomes possible to see what does and doesn't work.
I agree that this is a test of our society.
> ... and are capable of discipline and self-control
These organizations have enormous resources dedicated to exploiting our frailties and overcoming an individual's discipline and self-control. One option is to continue to expect every man, woman, and child to fight this battle alone in their own head every day.
But we'd probably achieve better results more efficiently by organizing ourselves as well. Then we can combat it collectively as a community and a society like we have done for other human frailties. This would mean things like education, societal pressure, and regulation.
These companies are already amplifying content they think will hold our attention, by leaning into people's worst instincts.
>In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.
> Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.
I think it's a losing game.
For as long as big social is allowed to be “free” with its paying customers being advertisers, it will keep benefitting from trolling and other unhealthy behaviours (narcissism?) that happen to drive up engagement and ad revenue. With user lock-in, full control over UI and algorithms at its disposal, big social has way too many tricks up its sleeve for your average tired-after-work-or-school, running-out-of-willpower, vulnerable user to consciously compensate for.
Normalising paid social (forcing interoperability, downgrading platforms to pipes) is probably the most straightforward way for us to finally gain the ability to vote with our wallet and to choose client software crafted with our needs in mind. (I’m not a proponent of regulation bloat or special rules for select big companies; I think a small but strategically focused general requirement could be enough for such a change to happen.)
The idea that this is just a lack of willpower on our end is total BS.
So maybe they understand the dangers of Instagram better than you do?
PS: Where are the people who used to proudly say "with great power comes great responsibility"?
Sometimes people are seeking something that is beneficial, and maximizing that is fine. Lots more times, people are mostly responding to angry posts, or falling down a conspiracy rabbit hole that they cannot critically think their way out of. Maximizing the attention of those people is clearly negative.
So does society share some blame? Sure. Does an algorithm that maximizes some people into very bad places share some blame? Absolutely.
That doesn't mean giant corporations should be allowed to exploit these mechanisms for profit.
Do we take the approach we take with opiates, and ban companies from allowing users to upload images?
Or do we take the approach we take with nicotine, and force companies to disclose the possible harms that can come from social interactions online?
Or is there another approach you'd propose?
Hopefully this doesn't come across as too confrontational - I genuinely am curious as to what solutions are viable, and I do recognize the harm social media can cause. But it seems to me that as long as there exists any platform where we can freely post photos, we'll have toxic comparisons, and I don't see more education changing this - comparing ourselves to others isn't a rational choice we make.
It was created much longer ago than that. We've had beauty standards for as long as humans have created art, and probably for as long as humans have existed. It's probably biological to a large degree, although the manifestation has changed over time. As with many of social media's deleterious effects, they hijack, amplify, and distort our natural inclinations for their own purposes.
Ironically it seems a totalitarian country like China is better equipped to deal with these things (see how they simply banned teens from gaming lately).
Liberal democracies' number one value is individual freedom; it works out great most of the time, but at other times we are not that great at handling our lives and using our time constructively. Some of us get bored, addicted, and obsessed under certain circumstances, and I don't see an easy solution for that.
Maybe social media should be age restricted like porn?
But anyway, I think the reason Facebook is aware of the problem yet doing nothing is not that they are cynically exploiting people, but rather that they don't know how to solve it.
I don't think the former scales, whereas the latter lends itself to amplification. Authenticity and community require much more complexity to uphold than anger, anxiety, and resentment.
I do take your point, and think that un-nuanced conversations about the evils of Social Media are unhelpful, but I think the scaling problem is the real danger. Negativity scales globally, positive sentiment and experience is limited to the individual or local level.
What do you think of any of this?
Probably, but because of salience asymmetry, we don't realise it. Like the good stuff is often more local as you say, and more distributed, while the bad stuff makes the news.
You are right that there will always be a different “drug” that exposes the same underlying societal issues. But given that FB+Insta are /algorithmically/ pushing these posts to users to increase engagement, it is their problem too.
Nobody is saying that this is not a general social media problem, just that this article is reporting on FB hiding crucial information.
> “They came to the conclusion that some of the problems were specific to Instagram, and not social media more broadly. That is especially true concerning so-called social comparison, which is when people assess their own value in relation to the attractiveness, wealth and success of others."
> "'Social comparison is worse on Instagram,' states Facebook's deep dive into teen girl body-image issues in 2020, noting that TikTok, a short-video app, is grounded in performance, while users on Snapchat, a rival photo and video-sharing app, are sheltered by jokey filters that 'keep the focus on the face.' In contrast, Instagram focuses heavily on the body and lifestyle."
That's needlessly rude and doesn't add anything to the conversation. Let's try to be nice :) You do have some valid points.
Not that it really matters, I don't think either would help you find a fix that doesn't involve destroying the whole thing.
If we figured too much TV was bad for people, "social" media is 100 times worse. How is it even possible to still feel good about yourself after spending time there? I don't use any of these services because I see them as obviously bad for our mental health.
Tackling this issue in a sustainable way is something I believe most companies are capable of with the right guidance.
Why "should" FB do so? It is doing the "right thing" for its stakeholders. Ideally they should, but realistically it's not going to happen.
It is us who need to protect our families because we are incentivized to do so. We have failed in protecting our own interests.
It is no different from the drug dealers in our town, we warn our children of the dangers, take steps within our means to reduce our children's interactions, etc.
Most users benefit from social media and you can't condemn them for not questioning other aspects.
Nevertheless, it is essential to question the intentions and procedures of the company behind it. I believe external observers and institutions are required for that to happen and to spread awareness when things go awry.
However, like so many things, software can be used for good, as well as for evil purposes. The difficult task is to define that boundary without compromising the utility for most users.
Related topics: Games, app stores, default browsers, etc.... How far does the state have to intervene? How much responsible behaviour can be expected from the user themselves?
What evidence supports this? The story is about evidence that this is in fact not true.
The software we are talking about is not dropped from the heavens. It is created by extremely large, powerful companies in pursuit of profit. If this software harms people, these companies are not neutral actors merely swept along with the tide of technology.
> How far does the state have to intervene? How much responsible behaviour can be expected from the user themselves?
Surely the most relevant question is how much responsible behavior can be expected from the companies themselves? They are the ones armed with research departments actually studying how their software affects people.
A great example is the girl who searched for exercise tips once and then her feed was algorithmically focused on weight loss tips and the like afterwards, which is not a cultural issue.
Don't you mean "as a species"?
Culture is one of the few tools we have to overcome these natural inclinations. Religion, too.
Most issues are human issues. But just because something is a human issue does not make it alright to exploit it at scale with such invasiveness, especially since they know it is harmful. It sounds like cigarettes marketed to children: it's not a cigarette-company issue, it's a human issue after all! We are not liable.
Yes it is Facebook's or Instagram's fault. We are endowed with our natural desires (which are even more fundamentally biological than cultural in this case), that we have no choice over, but we have the choice to remove bad technologies from our societies.
This is in fact the only choice we have. You can't cure people of their desire for power or beauty, but you can destroy the tools that amplify the worst instincts we have.
The message of the Web isn't looking so great, all around, I'd say.
“Just”? I would say this is plenty. And it is done willingly. So is it their fault? Yes (though not exclusively).
It's one and the same.