In reality, companies operating within a market are perfectly capable of imposing intrusive and arbitrary power on citizens, and collective action / government intervention is needed in order to protect our freedoms.
As others have pointed out, we don't need to 'ask', we need to legislate.
Congress will have a hard time passing legislation against this not only because of the filibuster (which encourages minority rule), but because of this fundamental confusion. Why would a Senator or Congressperson who believes only in the first type of free market freedom and is blind to the second type of freedom make moves against a corporation?
Anyway, I just started reading a great book called Freedom From the Market; these points are lifted straight from there: https://thenewpress.com/books/freedom-from-market
Legislate what, specifically? I always see calls for more legislation around social media, but I rarely see any actionable suggestions.
Frankly, I’m kind of shocked at the degree to which tech communities are rushing toward legislation on this topic. When I was growing up, I remember the big issue being violence in video games. Politicians went on crusades to demonize video games and demanded legislation to protect kids from any exposure to violent games, lest those games turn them into school shooters.
With video games, the online tech communities rallied against any legislation and wrote volumes about how kids were smart enough to manage themselves and not let violent video games influence their thinking. With social media, tech communities are doing the opposite: Writing volumes about how kids are incapable of managing themselves and will succumb to social media destroying their mental health.
The major difference between the two is that tech people (on average) like video games but they don’t like social media. So video games get a pass, but social media must be attacked with everything we’ve got. The secondary subtext buried in most of these articles is that young girls, specifically, are most vulnerable to social media issues, whereas violent video games were largely an issue for young boys. Apparently we’re okay letting young boys handle difficult content but we don’t give the same credit to young girls. The more I watch all of this play out, the more hypocritical it all feels.
I also suspect that if anti-social-media politicians see success that we’ll see a revival of the anti-violent-game politicians. After all, we need to regulate what the kids consume, right?
Because there is no evidence for this. In cases where there is, like in lootboxes, we see a push for legislation.
We have enough evidence to at least build correlation between social media use and decline of mental health. This doesn't call for banning or legislating the business to oblivion, but for talks in the society. We need to talk about social media use, our representatives have to discuss measures with the scientific community and promote research so that they can legislate based on data.
One thing is for certain, what we have now is unacceptable. What we will build in the future is up for debate and should be debated.
Again, eerily similar to the violent video game panic in the 90s.
The original congressional hearings on violence in video games followed record-high gun violence in 1993 ( https://en.wikipedia.org/wiki/1993_congressional_hearings_on... ). Politicians pushed a correlation between increasing popularity of violent video games and increasing gun homicides. It sounds dumb now, but video games were relatively new at the time.
The correlation felt right to many, especially those who hadn't grown up with video games in their own lives. I see this social media debate following the same pattern where adults feel that social media is evil and assume evidence will eventually support their feelings.
> Because there is no evidence for this. In cases where there is, like in lootboxes, we see a push for legislation.
That's rewriting history. Decades ago, there was a huge push for legislation against violent video games long before lootboxes were a thing ( https://en.wikipedia.org/wiki/1993_congressional_hearings_on... )
Violent video games have been called out as recently as 2019 by President Trump ( https://www.hollywoodreporter.com/news/politics-news/trumps-... )
Violent video games have been a political scapegoat for decades. The debate about social media is following in the same footsteps.
This is the best study I know of on digital media and adolescent wellbeing, and it finds an extremely small effect size (accounting for less than 0.4% of the variance in adolescent wellbeing).
This doesn't mean we shouldn't be concerned, especially if we are motivated to design better media culture. I can't stand Facebook.
No I don't think so.
I've played lots of first-person shooters and they're harmless.
But social media causing damaging self-worth worries? I experience that a bit myself, sometimes. It's a real problem, in my eyes.
(Loot boxes, and computer gaming addiction, are real problems with the games though.)
I never saw such a game. -- The FPS games I've seen, I'd compare more with watching a WW2 movie.
"The Last of Us" -- that game, maybe, could cause a bit of desensitization, like you wrote. The violence in that game is more realistically brutal than in the games I played as a kid.
Any games come to your mind?
For sure I wouldn't want military or policemen to play such games, and not kids either
If social media as it stands causes unreasonable harm to society, how would we know for sure? And how would we know we aren’t making the same mistakes made with demonizing video games?
Not only that, but anyone who has used social media can tell you that, especially younger people.
Meanwhile, studies show no correlation between violence and video games. And except for lag, you can't really find a lot of people who will tell you video games made them violent.
There are many ways to regulate social media:
- Determine constraints and objective statistical goals the ranking algorithm must meet
- Blocking and marking fake content
Facebook could choose to rank positivity higher but it does the opposite since it increases engagement.
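To make the first suggestion concrete: statistical constraints on a ranking algorithm could at least be framed as an auditable check. Here's a toy sketch in Python, where the post fields, the sentiment scores, and the 40% negativity cap are all invented assumptions for illustration, not anything a regulator has actually proposed:

```python
# Toy audit: verify a ranked feed satisfies a statistical constraint.
# Posts are dicts with an engagement score and a sentiment score;
# sentiment < 0 is treated as negative content.
# Hypothetical rule: at most 40% of the top-N feed may be negative.

def rank_by_engagement(posts):
    """The platform's (simplified) ranking: most engaging first."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def feed_meets_constraint(feed, top_n=10, max_negative_share=0.4):
    """Return True if the top-N slice of the feed stays under the cap."""
    top = feed[:top_n]
    if not top:
        return True
    negative = sum(1 for p in top if p["sentiment"] < 0)
    return negative / len(top) <= max_negative_share

posts = [
    {"engagement": 90, "sentiment": -0.8},  # outrage bait ranks high
    {"engagement": 80, "sentiment": -0.5},
    {"engagement": 70, "sentiment": 0.6},
    {"engagement": 60, "sentiment": 0.2},
    {"engagement": 50, "sentiment": -0.3},
]

feed = rank_by_engagement(posts)
print(feed_meets_constraint(feed, top_n=5))  # 3 of 5 negative -> False
```

The point isn't this particular metric; it's that "the ranking algorithm must meet constraint X" is the kind of rule an external auditor could actually verify, unlike vaguer demands to "rank positivity higher".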
Do we actually have these, or are you inferring we do from the general tone of the media conversation around this topic? I'm aware of some correlation studies, and think the topic bears keeping an eye on.
Would you mind sharing one of the "quite a few" causal studies you're referring to?
> Hell a genocide happened because Facebook fell asleep at the wheel.
This is a non sequitur. Facebook didn't cause the genocide any more than Marconi caused the Rwandan genocide or AT&T caused Watergate. Widespread communications technologies and platforms are deeply fundamental to human interaction in the modern age; the only model that marks Facebook as at all causal of Rohingyan persecution requires giving them a nonsensical amount of undue credit for the human interaction that occurs across their platform. Eg, you'd have to make obviously absurd claims like that Facebook is responsible for legalizing gay marriage in the US (activists heavily use FB and other social media platforms to organize).
To be clear, I think criticism of FB from the UN et al on the topic of Myanmar is warranted. The nature of the platform is such that the capability can be built out to direct and constrain the conversations that are had, and it's fair to say that Facebook needs to expand the manner in which it does so.
But this contradicts your point: the salient difference is the ability to control communication so that it stops violence, which in your framing is a _positive_ of social media.
(Note that I'm personally close to an unrestricted speech maximalist, but I'm taking your framing for granted in the above paragraphs)
Just as a quick gut-check, ask yourself this: if they didn't change your thinking, why would you play them? Assuming they don't offer any level of excitement or entertainment, what is the point?
Evidence: the US army uses FPS games to train people for combat, and there is a longer history of what it takes to train people to shoot people. Switching from round targets to human-shaped targets makes it a lot easier for soldiers to aim at a person before pulling the trigger.
Now clearly most people who play violent video games don't turn into mass killers, but to say that the deep immersion of a FPS doesn't change your thinking is farcical on the face of it.
Then feel free to post the research. This was the core topic of my studies, and to my knowledge there simply is zero evidence of a causal relationship.
The US indeed has a big violence problem, but there are actual, established causes, e.g. the high wealth inequality, which is a big predictor for violent crimes in most countries.
Treat it like digital tobacco or similar - a product with known mental health side effects engineered specifically to modify your behavior in the name of "engagement". Limit access to children, require warnings with every email and notification. etc. etc. etc.
It's not hard, as long as you keep focus on the problem. This isn't about guns.
(I can dream, can’t I?)
Why should we allow microtargeted advertising?
Having said that, I agree that _another_ core issue is the data collection and privacy issues. Certainly, viewing advertisers as the value-subtracting rent-seekers as I do, I wish for them to know as little as possible about the world in general and about me specifically.
I prefer not to have salespeople come to my door, I prefer to seek them out when I am inclined to.
Advertisers should be relegated to their own greasy pit, to speak only at my pleasure and where I know to find them. Any other attempt to engage with me, anytime and anyplace else except where I explicitly permit them, is nothing less than an unwelcome intrusion. Begone, rent-seekers! Trouble us with your A/B tests no more.
For example, an advertiser might be able to microtarget people with certain characteristics that correlate with voting for a particular political party, and bombard them with ads that discourage them from voting.
That's targeted, but not "relevant".
The goal of the advertiser isn't to create a win-win situation, where you get just the right ad at the right time, for a good deal on what you actually need. That would be nice, but it's leaving money on the table - and the goal of the advertiser is to make money. So the ads get optimized further: a better ad is one that gets you to buy something regardless of whether you need it, on a deal that's as bad for you as possible without burning you as a future customer.
 - Which does not imply the deal isn't absolutely shitty. There are ways to guarantee you'll buy again no matter how much you hate it. See e.g. the telecom industry: there are only a few players in the market, they're all equally shitty, and it doesn't matter, because people need phone service, and those abandoning telco A in anger are offset by those flocking to A from the other telcos.
Remembering how incredibly shitty people could be in middle and high school, it's not hard at all for me to see how constantly comparing oneself to one's peers could be detrimental to one's health. It was bad enough having to go to school every day, but (back in the day) at least I didn't have to have that shit follow me around everywhere else too. In short, yeah, I'm having a hard time buying any comparison between Doom and Facebook.
All that said, I also have a hard time seeing a legislative solution to this problem, especially given what the typical legislature looks like these days.
That's kind of my point: The tech community can relate to Doom in the context of their own childhoods because we grew up with it. Adults can't relate to Instagram in the context of childhoods because that came after our time. So without first-hand experience, they substitute whatever feels correct, as prompted by hyperbolic articles like this one.
Every generation thinks the way they grew up was correct, while subsequent social changes are bad. There has always been moral panic about what "kids these days" are doing, from social media to video games to portable music players to watching TV and so on.
It was the exact same story with video game violence: Adults at the time didn't grow up with video games, so they assumed the worst. Politicians and news media stepped in to seed the most dramatic possible interpretations, pointing to correlations with increasing gun violence (at the time). To adults who didn't grow up with the context and saw no personal upside to the video games, heavy-handed legislation felt right.
Back in the day we joked about "Evercrack" too, but addiction to it actually ruined lots of people's lives in a way Doom never did.
Also note that both Fortnite and World of Warcraft are entirely multiplayer games. The way I see it, their addictive nature comes from the same set of factors that make social media addictive. In terms of addiction, Fortnite and WoW are like Instagram, not like Doom.
"Gaming has also been associated with sleep deprivation, insomnia and circadian rhythm disorders, depression, aggression, and anxiety, though more studies are needed to establish the validity and the strength of these connections"
And let's get real, this is not just a FB problem. If you really care about this, you have to burn a whole lot more than just FB. And it's probably going to include most internet companies. Cyberbullying is going to happen wherever teens spend most of their time on the internet.
We have important societal issues to solve. c54's comment touched on that in an insightful way. There are other things to balance here. Marginalized groups have benefited a lot from social media much like geeks benefited a lot from video games, D&D, and board games. But we don't like to talk about that because in many minds FB == the devil, and mentioning any benefit means propping up the devil. Soon we'll be talking about TikTok in the same manner.
This should be investigated but it needs to be done with an objective point-of-view. You do not have an objective point-of-view. Facebook does not have an objective point-of-view. The media definitely does not have an objective point-of-view. Let's at least try to be scientific.
I'm unsure about the direct causal relationships between games and violence, but at the very least they're an expression of a violent culture. The glorification of shooters as 'lone wolves', the fact that a picture of a breastfeeding mother gets banned but someone's head blown off in a game is no big deal: it makes you ponder what priorities that creates in a culture.
And the criticism of the addictive nature of games, how gaming can isolate people, the general lack of sexual relationships among young men in particular but also young people in general: these concerns, written off as 'boomer mentality' or whatever, are completely valid.
>After all, we need to regulate what the kids consume, right?
I think it's completely obvious that you need to regulate what children consume, they are after all... children. Regulating what they do is kind of the point of raising them. This consumer culture that leaves everything to 'market choice' and where every intervention is seen as overreach is terrible. This is exactly the sort of pseudo-freedom that OP was talking about.
>the general lack of sexual relationships of young men in particular
That's because everything sexual is declared immoral and banned, including a breastfeeding mother. Men simply behave in accordance with what they are taught.
Not something like the fact that "social media" is a (very strange) form of communication, not a type of literal media that is produced, marketed, and consumed either as popular commercial productions or niche fan-productions?
I'm sure you could point out more meaningful differences than "tech bros stopped liking one of these" between:
- a form of fantasy entertainment, largely analogous to the violent novels, comic books, and movies/TV before it (which drew the same complaints in their day). Its only novel feature is interactivity, and no more of it than even-younger children get playing cops and robbers in a playground.
- a ubiquitous new form of peer communication that's full of novel features, from massive network effects and propagation of speech across and in and out of peer groups, gamification of peer approval, infinite-scrolling feeds and other dark patterns, widespread coopting by marketing and degenerate forms of "news" media...
Currently companies can do what they want in secret.
Perhaps they should be able to do what they want, but in the open so it can be examined and criticized.
That means their blacklists, algorithms, word lists, preferred user lists, and reasons for banning users become openly accessible.
Even without compelling them directly to change anything, compelling them to be transparent would help a lot.
I've got some suggestions:
- Any company with more than 1 million MAU that derives any income from online advertising may not combine online information about a user with physical, real-world data; nor may it sell data to a third party that will combine online and physical data
- Ban the algo: any user deriving more than $250 in value from an online service may not be banned by an algorithm. They must be given 60 days' notice and an opportunity to remove their data from the service
- No company may own more than 1 domain in the top 5,000 (as measured by monthly traffic). All domains must be consolidated into 1
- No more hiding behind scale: expand the carve-outs from Section 230 beyond child porn to definitely include any threats of violence, especially death threats
That's just a start...
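For what it's worth, a rule like the one-domain-in-the-top-5,000 cap is at least mechanically checkable. A toy sketch, where the traffic ranking and the ownership map are invented inputs (Meta's well-known properties are just a convenient example):

```python
# Toy compliance check for a hypothetical "one top-ranked domain per owner" rule.
from collections import defaultdict

def owners_in_violation(top_domains, owner_of, limit=1):
    """Given domains ordered by traffic rank and a domain->owner map,
    return owners holding more than `limit` domains in the ranking."""
    counts = defaultdict(int)
    for domain in top_domains:
        owner = owner_of.get(domain, domain)  # unknown owners count as themselves
        counts[owner] += 1
    return {o: n for o, n in counts.items() if n > limit}

# Invented example data:
top_domains = ["facebook.com", "instagram.com", "whatsapp.com", "example.org"]
owner_of = {
    "facebook.com": "Meta",
    "instagram.com": "Meta",
    "whatsapp.com": "Meta",
}

print(owners_in_violation(top_domains, owner_of))  # {'Meta': 3}
```

The hard part of such a rule isn't the check, of course; it's maintaining a truthful ownership map, which loops back to the transparency point above.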
I have a radical proposal: reduce social media's value proposition by breaking them up. Then users' interest will naturally decrease.
>So video games get a pass, but social media must be attacked with everything we’ve got.
It applies not to any social media, but only to centralized giant monopolistic services. Federated social media like email are fine.
Advertisement targeted at children and using children.
If your advert features children, or offers a product a child might be interested in, might purchase themselves, or might influence a parent's purchasing decision about, or is associated with a product children use or a service children frequent, then it's illegal. As simple as that.
You wouldn't believe how much such a ban would clear up the media sphere and improve children's health in all aspects.
I think the points about violent video games are good, but the OP here is about people self-reporting that Instagram use harms their mental health. That's distinct from the more nebulous and bygone arguments about violent video games.
Funnily, this is also top page HN right now:
"Is America Inc getting less dynamic, less global and more monopolistic?"
In short, negative liberty is freedom from external interference, while positive liberty is having the resources and power to actually accomplish things.
"Such theoretical shifts set the stage, for Berlin, for the ideologies of the totalitarian movements of the twentieth century, both Communist and Fascist–Nazi, which claimed to liberate people by subjecting – and often sacrificing – them to larger groups or principles. To do this was the greatest of political evils; and to do it in the name of freedom, a political principle that Berlin, as a genuine liberal, especially cherished, struck him as a ‘strange […] reversal’ or ‘monstrous impersonation’ (2002b, 198, 180). Against this, Berlin championed, as ‘truer and more humane’, negative liberty and an empirical view of the self."
The idea of positive liberty is that you are free to become the best man you can be, but there is someone or a group who defines what 'best' is. And that inevitably ends in dictatorships or similar types of abuse.
Also, thanks for the book reference.
You’re already free from Facebook if you want to be. Nobody will make you use it. You’re also free to parent your children in any way you want to. But freedom isn’t forcing everybody else to make the same decisions you do.
A proper free market requires that:
- People have a choice (no monopolies or oligopolies at any level),
- People can see problems and can therefore decide not to buy from a given company => i.e. transparency. We are very far away from this; with how the world and technology have moved on, the only way to achieve it is to require easily accessible transparency, especially of ownership of companies and land, and especially in the financial/stock markets. Even if this means that people privately investing more than small sums there lose some privacy (because if you move larger amounts of money you ARE basically a company!)
- If a company messes up, the competition should be able to pick up where the original company left off. I.e. patents and locked-down electronic devices are fundamentally anti-free-market.
- probably more stuff I missed.
Many of the points above are often not associated with a free market (at least in the US), sometimes even seen as the opposite, but they are IMHO essential for a free market to work as originally intended.
Or in other words, the "free market" the US has IMHO cannot work, in the sense that it can't have the positive effects a free market is supposed to have, because the dynamics in that market often outright conflict with the dynamics a free market is supposed to have, leading to a continuously degrading market (from a human/citizen/society POV).
I believe that legislation got us into the Facebook problem in the first place. If you disagree, please take the time to correct me instead of downvoting my opinion out of existence. I'm open to a change of mind. Here's how I see it:
All of these social media companies have one thing in common: they exist to sell ads. We don't really know what Facebook would have looked like as a subscription service, because it wouldn't have ever become dominant.
For ads to work best, they have to be targeted at users. This incentivizes data collection that helps with ad targeting, which is pretty much everything.
But for ads to work at all, the user has to actually see the ad. This incentivizes engagement, because you need attention on your ad for the ad to have value. If you need engagement, you have to find a way to reach users. This is quite good if your business is selling ads, as you already have all of the data. And so the cycle of collecting and using data for acquisition goes on forever.
Since the very beginning of the ad business, companies like DoubleClick (later acquired by Google) have stretched the limits of web technology to make all of this work. Pixel bugs, cookies, app SDKs, all kinds of tricks, anything they can do to get more data about the user, because that's the gas that makes the whole engine run.
Users, however, are prevented by government regulation from stretching the limits of technology to fight back against these companies.
Have you ever wondered why there isn't a large market for paid alternative Twitter clients that cut out ads and data collection? There used to be, back when Twitter had a very permissive view of their data. Then Twitter changed the rules and now it's a grey area if using their private API with an app-extracted key is a crime or not. It's certainly not something you can build a company on.
So the situation here is that social media companies rely on you not being able to "hack" because there are legal protections on how you can interface with their public-facing internet servers and what you can do with the data that those servers send in reply. Because of legislation like the DMCA, paid for by media companies, users never even got the chance to fight on the same technical battleground where they were already being exploited by ad companies.
As I see it, these companies are using public infrastructure when it suits them. If a computer client is allowing third-party cookies, well, that's just downright reasonable to use for ad tracking. But if you alter a query string, you might be a criminal. That's where regulation has got us.
If Facebook is to be regulated, you can be sure that they'll write the regulation.
My litmus test for if there's a free market situation here: Are you free to make an alternative ad-free Facebook client that uses private APIs extracted from the real Facebook app? Because if the answer to that is no, as it has been since these companies have existed, then no amount of regulation is going to improve the situation.
User rights and privacy is a fight we could win technically without some very specific computer and copyright laws. We need to get back to the idea of the internet being public infrastructure.
If there was some change to the law that suddenly let people build better Facebook clients without fear of being sued into oblivion or even arrested, then I think we could finally get these companies under control.
The only new legislation I'd like to see enacted is for these social media and data aggregators to be split into two companies: the interface company, which would exist to serve their interface, and a data utilities company, which would exist to serve the data they don't own. This would open their data market for hacking upon, in the spirit of what you suggest and enabled by the repeal of the legislation you've outlined.
The only way towards greater freedom is to make the world freer.
Harks back to the antitrust action splitting train lines from steel producers in the 1920s. You need steel to make rail, and you need rail to transport steel, and the steel barons got rich by integrating across these two domains and giving themselves massive discounts compared to participants who were only in one industry and not both.
I'm not sure exactly what the details or enforcement would look like here, but I do think it's an interesting line of thought. One thing Matt Stoller says with respect to antitrust legislation is that every company is a unique situation and should be handled as such.
How would any third-party client win the inevitable arms race this would initiate and what would stop your system from re-centralizing? Also how would any infrastructure become 'public' when it can't marshal the public interest or allocate resources via legislation?
Yes, I believe we lost the public nature of the internet by adopting regulation like the DMCA.
> How would any third-party client win the inevitable arms race this would initiate and what would stop your system from re-centralizing?
Attrition. Facebook would have to realize at some point that they can't stop their client from being reversed and can't prevent clones from popping up. Right now, they can stop that, but they stop it with laws not with technology. Without that protection, Facebook could never keep up with the whole internet collectively reverse engineering their apps out in the open. Also FB engineers would know that being on the DRM team is a waste of time and nobody would want to do the work.
> Also how would any infrastructure become 'public' when it can't marshal the public interest or allocate resources via legislation?
As another commenter pointed out, I shouldn't have used the term "public infrastructure" when referring to the internet, because it's largely privately owned and not regulated as a public utility, which is true.
What I mean is that if you're going to open up a public web server and publish a free app in an app store, I should be able to interact with your servers however that app does it without committing a crime. And I should be able to publish my own app that interacts with your servers too, without your claim to intellectual property standing in the way.
I think this would cause companies like Twitter and Facebook to lose their entire market, but the core of the service would be maintained by the community of people interested in the service.
So for example, if there were 10 Twitter clients allowed to exist and be above-board companies, and they captured 80% of the Twitter client market share, then there could be a serious push to invent or adopt a new protocol to serve their users and de-federate Twitter themselves if they don't want to go along with the protocol.
I think that's a really solvable technical problem and all the privacy activists would have great reason to pick up an editor and join the fight. But as it is now, regulation has made that type of activity illegal, because it would infringe on Twitter's rights.
One thing I'm cautiously interested in is the anti-corporate action taking place in China over the last couple of months. From my view on the sidelines as an American, the Chinese Politburo seems to have just up and deemed certain profitable business models counterproductive for their view of society, and laid down the hammer to say that those companies can't be profitable anymore. Specifically they did this for private test-prep/tutoring companies--they have to be nonprofits now.
I think the US doesn't have the state capacity or the political will to do something quite so hard hitting. Maybe we say that ad companies can only have X percent profit margin, and above that their tax rates increase? Force companies to invest their earnings in non-ad technologies? Not exactly sure.
Speculation aside, I like your point about a 3rd-party ad-free Facebook client. This mirrors what another commenter mentioned about the platform/participant distinction. It could also mirror the antitrust action taken against Microsoft in the 90s: maybe we say that Facebook has to open up their internal APIs such that third parties can make their own clients.
The weird thing here is that the client app isn't really facebook's business model, right? The marketplace facebook creates is to their ad buyers, the users are subjected to that on the back-end. Maybe the ad marketplace should be made public in some way? Could imagine at least forcing the ad marketplace to operate like a 'lit' exchange where bids and offers are visible by all participants. Not sure what this would actually solve though.
It's been said before but Facebook should've never been allowed to buy Instagram.
More concretely there's room for small incrementalist pieces of legislation which could help shift the landscape in beneficial ways, like Australia's laws forcing social media networks to pay news media sites for their content. 
Matt Stoller's not perfect but I like a lot of his stuff on antitrust news, some links below
Thanks for taking the time to reply, these discussions are much more interesting on HN than anywhere else on the internet.
Your line of thinking really drives me crazy because I feel like we're so close to agreeing. The horseshoe is almost touching in this thread.
I don't know enough about China to know if their societal engineering efforts are considered successful, but I do know enough to see the parallels to the regulatory solutions that you're floating.
I think these are all well intentioned but terrible ideas:
> can only have X percent profit margin
Profit margins can be engineered with creative accounting. You can't practically do this.
> Force companies to invest
I don't even know how you would force a company to invest in something. Generally you'd tax or subsidize them, but in either case you're disrupting market forces.
> maybe we say that facebook has to open up their internal APIs
This would actually help the situation in the short term, but it's still not a good idea. Facebook would continue asserting intellectual property rights on the data they serve and entrench them as a public utility. The goal here should be to create an environment that's hostile to Facebook by protecting them less.
> forcing the ad marketplace to operate like a 'lit' exchange
The ad marketplace doesn't need to be transparent, because Facebook can't actually compete for attention if they lose government protection of their IP or government protection from the harm caused by their UGC. If Facebook didn't have these protections, we wouldn't need to worry about the marketplace because their business would fail.
Though Facebook is currently in a position to easily commit undetectable fraud on ad buyers. The low quality of their market should be another reason to not use Facebook. The reason we don't have a transparent and competitive ad market is because Facebook gets to decide the rules for their marketplace.
> Facebook should've never been allowed to buy Instagram.
I don't think they should have been prevented from buying Instagram. If they didn't own them, they'd just share data anyway for mutual benefit. Who owns the stock is separate to me from the abusive products they produce.
> forcing social media networks to pay news media sites for their content
News media sites are their own problem, but they shouldn't get a special carve out for being paid to be indexed. If they don't want their content public on the internet, they shouldn't publish it. Once they do, they should assume it can be liberally indexed/mirrored/redistributed.
I think we'll ultimately see blockchain solutions take over the space, because fundamentally they work and are an unstoppable technology.
It's also possible that all these regulations don't have any of their intended effects, but are still successful in crippling Facebook by making its operation very restricted. In other words, use regulation to make Facebook a worse product, to the point where the users organically leave.
The best outcome would be if we simply got rid of all these silly intellectual property protections and let the chips fall where they may. That would end a lot of media companies, but it'd be better than the problems they've caused.
A good outcome is that social media companies eventually lose their users organically, through some combination of onerous regulations that can't apply to decentralized projects, and new technologies that don't fit their business model simply being better.
A bad outcome is if governments adopt social media in official ways, so that it becomes essential to government business and therefore gets regulated into permanence. This blurs the line between what it means to have a Facebook account as opposed to an internet connection. That's probably where Facebook will take us if they get to write the regulations. Maybe to vote or do some other government business, I'll need to let Facebook verify my identity, which will be fine, because it's totally regulated and normal.
I've read a lot of posts on HN about the "aha" moment of people using metamask for the first time. I think we're going to see a new wave of internet technologies that obsolete things like Facebook and Twitter very quickly. They might also be way more damaging and worse, but they won't be companies and they won't be able to be regulated. I'm not sure how that plays out, but we'll figure it out when we get there, because that's where we're going.
I agree with everything you've said except what's quoted above. How is FAANG using public infrastructure, when private companies own everything from the undersea cables to the servers?
Now sure, the federal government has subjugated all of the states and is in a position to enforce compliance with arbitrary whims, but since it also derives its power from the collection of states, it mostly respects the discretion provided by the 10th Amendment, to keep that power from dissolving.
I know it is tired to say this, but we don't have a free market while intellectual property exists. Facebook is also a natural monopoly, mostly due to network effects, but also because of its tendency to acquire the competition. Add to that that we don't put executives in jail for corporate wrongdoing, and that we treat corporations as people in terms of speech and political donations, and you end up with the present situation.
Especially when the community = everyone on the internet, that's a problem.
One of the reasons I left Silicon Valley was that, over the years, I came to see that even among the people who were uncomfortable with their work, or who knew that the things they worked on were a net negative for society, fewer and fewer were willing to leave, usually because of money.
It's Upton Sinclair's "It is difficult to get a man to understand something when his salary depends on his not understanding it."
I have newfangled ideas for serverless infrastructure, but I know it has a hidden cost, as it would be used for all sorts of evil.
Personally, I am a freelance artist who fights to get stupid corporate social media platforms to actually show her work to the people who've said they want to see it, so I'm somewhat less complicit in the creation of this cyberpunk dystopia than the average HN reader.
The interesting thing about all this is that the #1 competitive advantage of Instagram in the US that prevented it from completely losing the teen demographic to Snapchat was that teens felt less bullied and more safe on IG. I read literally dozens of surveys and studies that specifically asked about wellbeing and choices to use one platform over the other. Because of an intense and intentional push from FB leadership, Instagram was comfortably the safest destination for US teens.
Also there's certainly no evidence that Instagram usage has affected teen suicides in a significant way. The total number of teen suicides in the US per year is far too small to support any statistically significant claim of change. Any extrapolation or anecdote-based claim has to explain why IG doesn't seem to cause any decrease in self-reported wellbeing.
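To make the sample-size problem concrete, here's a rough back-of-envelope power calculation. The rates are my own ballpark assumptions (roughly 10 teen suicides per 100,000 per year, and a hypothetical 10% relative increase), not figures from the article; the formula is the standard two-proportion sample-size estimate:

```python
import math

# Assumed baseline rate (~10 per 100,000/year) and a hypothetical
# 10% relative increase we'd want to be able to detect.
p1, p2 = 10 / 100_000, 11 / 100_000
z_alpha, z_beta = 1.96, 0.84  # 5% two-sided significance, 80% power

# Standard two-proportion sample-size formula (subjects per group)
p_bar = (p1 + p2) / 2
n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)

print(f"Teens needed per group: {n:,.0f}")
```

Under these assumptions you'd need on the order of 16 million teens *per comparison group* — more than the entire US teen population split in two — so a statistically significant detection of an effect that size is essentially impossible from suicide counts alone.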
And what if the people making the decisions also share the profit?
I’m sure we can reach some kind of middle ground.
The thing is, I seriously wonder if there is a new evolutionary pressure unfolding, and whether or not we should fight it. Or will it take care of itself?
The article buries the actual data in a paragraph toward the end of the article:
> According to one slide, 32 percent of teen girls said the app made them feel worse about their bodies. Of those who’d experienced suicidal thoughts, “13% of British users and 6% of American users traced the desire to kill themselves to Instagram,” the Journal reported, citing another presentation.
We really need to see the survey questions and the data to understand what’s going on here. 6% or 13% of respondents tracing suicidal thoughts to Instagram is worth investigating, but I’m really curious what the other 87% or 94% of self-reported contributors were.
I say this as someone who uses no Facebook properties, simply on principle.
So even if Facebook succeeds in removing all predatory, malicious and disturbing content, are they going to eliminate all of the above from Zuck's kiddy-insta?
I see absolutely no reason why kids should be any different than grown-ups in this regard. Quite the opposite. Kids are absolutely obsessed with comparing themselves to other kids, and Zuck's Kiddy-insta is an absolute recipe for disaster.
I think the trend of hyper self-centric content did indeed begin with Instagram. For example, photo models and bodybuilders sharing photos of their bodies for no apparent reason besides to glorify and/or sell themselves.
Photos of food make no sense to me; photos of pets I can somehow understand. And yes, I can understand how looking at happy lives and happy experiences can make you feel miserable, but then again, Facebook and Instagram don't really know your mental health or the state you are in, and cannot magically remove "sensitive" content that would disturb you.
The best thing would be to stop using social media altogether, or to use it only to inform yourself about the world around you, in the form of text or perhaps audio/voice only.
So I’m curious if this was also seen outside of US? The root cause might not be social media itself, but just the fact that these apps allow amplification of unrelated damaging cultural aspects of youth as a female in the US.
It seems like a problem with ability to do x colliding with basic human behavior.
But anyway, it’s not reasonable to expect parents to know if this or that app is more likely to increase the risk of suicide. Fortunately I have a government that I pay for that can research this and help me with that.
There’s a balance to be had here. First there’s social responsibility: if there isn't something targeted at kids that is proven to harm their mental health, that is a net good.
There’s also parental responsibility: being a college student, I of course do not have the insight necessary here, but I feel like I would’ve made better decisions if my parents weren’t as controlling with tech. It was almost a game to me to see what I could get away with. Simple things like adding me to a web filter _when I was in high school_ eroded the trust I had with them. Granted, it took me < 5 minutes to bypass it, but I still felt wronged.
Parenting wise, again, I’m completely unqualified, but I think having an open and honest relationship with technology is a better way than what my parents did. Rather than harping about “everything you do is our business,” being allowed to have some degree of privacy would have fostered trust.
tl;dr there are ethics involved with shipping a product. Don’t offload these ethical decisions entirely to parents, because kids generally don’t give a shit.
source: am 19
They and their ecosystem are individually toxic and societally destructive; they are bad for innovation, bad for the marketplace, and most of all, bad for the humans caught in their sociopathic pursuit of money via "engagement" at all and every cost.
This week alone the number of truly damning stories is infuriating, yet these are now so continual and commonplace that it is hard to keep track of the myriad flavors of abuse, deceit, deception, double-dealing, and outright lies, including lies to putative oversight, e.g. the US Congress.
If you work for or with them, you should take a hard look at the real costs and consequences, and see if you find your soul in order.
Abuse and parasitism at the expense of the common good and of individual lives and wellbeing is real, literal evil;
and you don't need to believe in the divine to know that doing evil has a cost.
Get off the ecosystem and work against it.
Sends you emails and status reports about your children's mood and emotional changes. We call it Instaface Parent.
1. Do understand it is the same group of 5-14-year-olds who go on to become 15-24-year-olds, carrying all the life experiences that contribute to suicide.
2. Low rates among 5-14 might be precisely because they are currently sheltered from the shitstorm that is the world, and from the wider implications of losing that innocence. Mass adoption of social media by that age group might directly accelerate that loss, and at the least may cause a rise in suicide rates. Even a single life lost to such bullshit is devastating.
Please stop advocating for bs like social media that is really never required in our lives much less kids' lives.
Genuinely curious: I'd like to know the benefits of Instagram for kids (not to FB, but to society) in order to even contemplate the risk-benefit analysis. Do you have any sources at hand for this?
That plus the idea around tying it to a custodial "parent" account so that a parent can keep tabs on who they are following vastly improves the current situation.
AnecData alert: We tried YouTube Kids in 2019 for about a day (for my then 4-year-old son). We were so shocked to see suggestive content and other crazy stuff from "verified" and reputed channels (100k+ followers) that we uninstalled the app then and there. There's no worse insecurity than a false sense of security.
And yes, I've heard all the stories about YT Kids, and even took a brief look. We're staying away from it. Some research + youtube-dl is our way of choice for sourcing songs for our kid to listen to.
 - What do people expect? That content for YTKids / IGKids will be created by 9-13 years old?
Wouldn't it be ideal if our children didn't feel suicidal due to the technology and dark patterns that we make available to their young minds (which, in general, won't actually know what's good for them until well past 25)?
I suggest that even one suicide - especially in a child - is too high.
-- A Fan
Or are you saying suicidal ideation cannot be correlated to anything other than age? Which is also untrue since we have evidence social media (and other factors) can increase this risk.
If one even ignores the insane fact that that many young kids kill themselves every year, in what world is it okay to gleefully ignore an increase in suicides to any age group and the cause(s) of the increase?
Until now it seems