Ask HN: What do you think about “The Social Dilemma”?
331 points by messo 4 months ago | 248 comments
For those who have seen the film on Netflix already: what are your thoughts? I am especially interested in hearing from people involved in any algorithm-driven platforms.

This is the first time I have seen such a coherent, powerful and accessible explanation of the mechanics of these algorithms and the negative consequences of social media, and I wonder if this film could be the push that non-technical people need to take a step back and maybe even delete their accounts.

Anecdotally, it was my non-technical, Instagram-loving partner who saw this film first and recommended it to me. She re-watched it with me and is now asking serious questions about the platform and her continued use of it. She can't be the only one.

What will the cultural impact of this movie be?




Though there was hardly anything in it that I didn't already know, I liked the movie very much for its presentation, which drove the point home for people who may not have understood how these platforms work, the privacy implications, and the dangers therein. I've already recommended this movie to several more people.

As for the cultural impact of the movie, I don't think it's going to be much. Cambridge Analytica got enormous news coverage in its time (along with public hearings by lawmakers and documentaries about it), and still nothing material happened to the bottom line of these companies. They've actually grown bigger and become a lot richer since then.

1. Most people just wouldn't care enough to give up these platforms. While I've been enraged for a long time about these platforms, the big gap here is that there is no good answer to the question, "what are the better alternatives?" Don't tell me that Mastodon and Mastodon clones can be replacements for Twitter, Facebook, Facebook groups, Facebook events, Instagram, Snapchat, TikTok, etc. Where are the nice(r) mobile apps (not just some website designed for desktops) for any replacements?

2. Governments will not regulate these platforms in meaningful ways that create fundamental changes. Regulatory capture is what looms ahead: the current giants make the rules and ensure that nobody else can beat them.

I will keep pushing people to switch to better platforms (even if they seem deficient in comparison), but I'm sadly not very optimistic about big changes in the next decade or so.


> Don't tell me that Mastodon and Mastodon clones can be replacements for Twitter, Facebook, Facebook groups, Facebook events, Instagram, Snapchat, TikTok, etc.

Of course not. Mastodon is a federated microblogging service, and as such it is a good alternative to Twitter. My Mastodon client has a better UX than Twitter's: no algorithmic feed, no ads, no trackers, no distracting recommendations, and, importantly, a friendly, open and largely non-toxic culture.

Sure, it has only a tiny number of users in comparison to Twitter (about 4 million), but there is no reason this can't become many, many more. And it's hosted on a federation of thousands of servers, so there is much less risk of being de-platformed from your favorite walled garden for obscure reasons.

The Fediverse is only at the start of its evolution, and things are becoming exciting for us devs. Diverse applications (PeerTube, PixelFed, write.as, Lemmy, and many more) are starting to be integrated with one another. It's a gradual process, as the underlying standards mature and new ones (content addressing, object capabilities, datashards) are being developed. Besides federated ActivityPub, work is ongoing to extend to full p2p contexts.
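To give a flavor of how simple the plumbing is: cross-instance follows bootstrap off WebFinger (RFC 7033), which maps a handle to an ActivityPub actor document. A minimal sketch, assuming a made-up account and a hand-written sample response shaped like what real servers return:

```python
# Hypothetical sketch of fediverse account discovery via WebFinger.
# The handle "alice@example.social" and the sample response are illustrative.

def webfinger_url(handle):
    """Build the WebFinger lookup URL for a handle like 'alice@example.social'."""
    user, _, domain = handle.partition("@")
    return f"https://{domain}/.well-known/webfinger?resource=acct:{user}@{domain}"

def actor_url(webfinger_response):
    """Extract the ActivityPub actor URL from a WebFinger JSON response."""
    for link in webfinger_response.get("links", []):
        if link.get("rel") == "self" and link.get("type") == "application/activity+json":
            return link.get("href")
    return None

# Made-up example response, shaped like real server output:
sample = {
    "subject": "acct:alice@example.social",
    "links": [
        {"rel": "http://webfinger.net/rel/profile-page",
         "type": "text/html", "href": "https://example.social/@alice"},
        {"rel": "self", "type": "application/activity+json",
         "href": "https://example.social/users/alice"},
    ],
}

print(webfinger_url("alice@example.social"))
print(actor_url(sample))
```

Any instance can resolve any other instance's users this way, which is what makes the "federation of thousands of servers" workable in practice.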

I'd say you should have developer FOMO: don't miss out on the opportunities to stand out and to be part of a path that could lead to a solution to our social dilemma :)


> Much less risk to be de-platformed from your favorite walled-garden for obscure reasons.

Federation does absolutely nothing for this. The risk is probably even higher given how small most of these instances are. Much more likely to run into an admin/moderator.


I run a small instance and I don't think this applies. New signups are closed on my instance, and the only way to get on is for you to ask me for an invitation, or for a friend of yours who's on the instance to ask for you.

If you look like someone who I'd be likely to kick off my instance, I won't let you on in the first place. It's a private garden that I let my friends hang out in, not a public park where I have to let some asshole who thinks me and my friends should all be dead come in and rant and rave all they like.

"Oh I am totally being de-platformed! Free Speech! Filter Bubble!", you cry.

And then you discover that the Fediverse is split into multiple parts, and that there are instances that are delighted to let you say anything and everything you want as long as it is not explicitly illegal. You can even run one yourself if you have a bit of spare change! You can say anything you like on your own instance, you can follow anyone your heart desires. And anyone on another instance can follow you if they like. But no other instance is obligated to listen to you.


> But no other instance is obligated to listen to you.

So deleting DNS solves this for the Internet?


In the fediverse, you're more likely to be able to say "Hey Lisa, why did you ban me?" than you are to get in contact with the YouTube/Facebook/etc. powers that be.

That may or may not be better, because your admin may respond "Well, because I think you're a jerk and disagree with your political views", but at least you can still migrate to a new instance instead of just being banned for all eternity.


If you want to keep your followers you'd need to migrate before they ban you.

But yes, I guess you can start over somewhere else while on big ones you're out forever. So in that sense I guess it is a tiny improvement. But if most of your followers were concentrated on server[s] that don't like you, you may not be able to reach them anymore.


> if most of your followers were concentrated on server[s] that don't like you, you may not be able to reach them anymore

This seems extremely unlikely. If the server admin doesn't like you enough to ban you, I strongly doubt you're going to have many followers there.

In any case, however likely or unlikely it is in practice, the incentives here seem exactly right to me. If you want the benefits of being part of a community, you have to respect its norms in your interactions with others in the community. The possibility of losing connections with people in that community if you are forced to leave it is part of the deal.


If your followers are "real" followers, and if you are important to them, they'll get in touch outside the fed and will know where to find you. Maybe that's what we should strive for. I have 200+ followers on Twitter, but if you ask me, I can only name 15 with real, social interaction.

And I think that's great.


If that's how you define "real" followers then you might as well use an IM application instead. IMO the whole point of (Twitter style) social media is having effortless reach and interactions without explicitly sending stuff to anybody.

I had over 10,000 followers and had many interactions with people, but I wouldn't feel like bothering them on other platforms (unless it was of the same nature).

And the few people who bothered to contact me personally felt a little awkward too. I don't necessarily want to be talking to these people directly.

The magic is you just publish whatever you want and people can look at it. They can comment but there's no obligation to respond. Having actual conversations with that many people would be a huge chore. The whole thing isn't about having relationships with people at all but about spreading information. It could be about yourself in which case there would be some implicit one way relationship but that's just one use case.


Maybe that's why people are feeling lonely. They think their voice should reach thousands of people while they simply don't care about those thousands. How can I expect someone to care about me when I don't care about them beyond a useless number or marketing campaign?

People are thinking of others as consumers while expecting to be treated as a friend?


I second this. While I foolishly didn't export the list of people I was following when I decided to shut down my own instance and just use an already established one, I've seen other people switch between instances without really losing much in terms of interactions.


And if they don't agree with the censorship on their instance, they are free to migrate too.


I guess the questions this raises next for me are:

Would this scale if a given instance somehow gained the level of adoption people seem to value in a Twitter or a Facebook? It seems to me that the major draw and reason people stay on those services is the fact that they can communicate with essentially everyone in one place.

Also, to follow the first question, given that a major (if not the only) draw for most users is the ability to communicate with everyone, wouldn't that result in relatively few (or one) instance gaining dominance over time? You want to be connected to all of your friends and family, of course. But then all of them also want to be able to read what Beyonce or Trump or Biden or Bill Gates is saying. Their barbers and bartenders want to post the latest specials to all of their current and potential clients. If you all have to agree on an instance/sub-network/server/etc to use, why would people choose anything but the one everyone else is on?

I guess it might make it easier to migrate to a better-run or better-supported instance and add a bit of healthy competition to keep things in line, but from what I've seen of Facebook and Twitter, the near impossibility of getting people to switch to another competing service en masse always overpowers any individual's complaints about the current state of affairs. Facebook sucks? Well, why not swap over to that new ServiceX? Well...sure it's nicer but nobody is on it!


I agree. The issue with social media isn't the tech; it's the people and the power structures that form over social networks.

This whole thing, social media, derives from two types of issues: errors and flaws in natural thinking, and the malignant hacking of brains.

The second will always arise when you have more than 100 people in a group, and said group has some non-trivial impact on voting patterns.

At that point, the group is like an unprotected bank account, which will be hacked.

The normal response would be to protect this account and to target the hackers, but current social thought (excluding China) has huge issues when it comes to censoring speech, especially political speech.

Add to it layers of additional complexity (political parties can abuse the laws and claim they are being unfairly censored), and the distance between this solution and the underlying human problems seems as great as the distance between current social media and those same problems.


Good points. First of all, it doesn't matter which instance you're on. You can still follow your friends, influencers, etc. on other instances, unless your instance has the other instance in its blocklist (e.g. if it doesn't adhere to the moderation rules that apply on your instance).

Natural re-centralization is an issue, with mastodon.technology and mastodon.social being by far the largest instances. This could devolve into a situation like you have with email and Gmail.

Another challenge is that Mastodon, as an early adopter of ActivityPub, has made a number of implementation choices to fill in the blanks of the spec. It supports a bunch of custom properties in the message format, and it uses the Mastodon client API instead of ActivityPub client-to-server. Its dominance among running instances means the project has a big say in how the Fediverse evolves.
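To make the "custom properties" point concrete, here's a rough sketch of a plain ActivityStreams Note next to the kind of extra vocabulary Mastodon layers on top. Property names like "atomUri", "conversation", and the "toot" namespace are real Mastodon extensions, but the object itself is hand-written for illustration, not actual server output:

```python
# Illustrative only: a minimal ActivityStreams Note vs. (roughly) a
# Mastodon-flavored one. Other servers must tolerate or ignore the extras.

plain_note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "content": "Hello, fediverse!",
    "attributedTo": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

mastodon_note = {
    **plain_note,
    # Extra @context entry declaring Mastodon's "toot" vocabulary:
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"toot": "http://joinmastodon.org/ns#"},
    ],
    # Mastodon-specific properties beyond the base spec:
    "sensitive": False,
    "atomUri": "https://example.social/users/alice/statuses/1",
    "conversation": "tag:example.social,2020:objectId=1:objectType=Conversation",
}

# The delta other implementations have to cope with:
print(sorted(set(mastodon_note) - set(plain_note)))
```

Each extension is individually harmless, but together they mean "interoperates with the Fediverse" quietly becomes "interoperates with Mastodon".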

Luckily the project team is open-minded, and mostly on the same page as others with respect to fediverse innovation.


Thanks for the clarification. I'm not terribly educated on the state of Mastodon/similar federated social media.

Also interesting that you brought up Gmail/email. I've thought for a while (admittedly as someone far more on the "user" side than the "developer" one) about how email is an example of how things might work better. For as much as Gmail and a few others have captured the vast majority of the average userbase for personal email, it's still reasonably easy to choose a different provider (and you can even roll your own if you really thought it worthwhile).

Switching email providers doesn't stop me from emailing people who use Hotmail or Gmail (ignoring for a minute any issues of overzealous filtering of uncommon mail servers). The individual services can implement their various extra features like spam blocking, auto sorting, etc. but for the most part, the common protocol means that an email from me@me.server can still be sent to a list of addresses at gmail.com, aol.com, hotmail.com, etc. If I want to send a message to people on Facebook, I need an active "valid" Facebook account that can be tied to their social graph.
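A quick sketch of why the email comparison holds: one message can address recipients on any mix of providers, and the sending server routes each one independently by its domain. The addresses below are made up, and the actual send step is left as a comment:

```python
# Illustrative sketch: email addressing is provider-agnostic.
from collections import defaultdict
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@me.server"
msg["To"] = "friend@gmail.com, aunt@hotmail.com, boss@aol.com"
msg["Subject"] = "One message, three providers"
msg.set_content("No shared platform account required.")

# Group recipients by domain, as a mail server does before its MX lookups:
by_domain = defaultdict(list)
for addr in str(msg["To"]).split(", "):
    by_domain[addr.partition("@")[2]].append(addr)

print(dict(by_domain))

# Actually sending would just be, e.g.:
#   import smtplib
#   with smtplib.SMTP("smtp.me.server") as s:
#       s.send_message(msg)
```

No single provider sits in the middle of that loop, which is exactly the property Facebook-style messaging lacks.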


The draw of the fediverse is that you can follow cross-instance.

I can be in my local community instance and follow people on say a major software instance and also an art oriented instance. It doesn't really matter who is on what instance. Connecting instances up is trivial - you join into the feeds generally.

Your instance more dictates your general discovery patterns and local moderation. In picking a local instance, I'm going to generally see discovery material related to my local area. If I joined an art instance, it would be art material, etc.

Generally well behaved instances (ie not spammy) shouldn't have an issue with getting federated, even if they're relatively small.


Unless you build in, from the ground up, rules against these types of mental viruses, you are simply building a new circulatory/neuronal system without planning an antibody/immune system.

Reddit USED to be relatively federated: don't like the subreddit mods and their behavior? Make your own sub, release hundreds of posts announcing it to users, and then users chose which sub they preferred.

Automod came along, and whitelists/blacklists of words killed that system whole.

Now people send laborious messages on Reddit to let others know which new subversive subreddit has opened up, with all the credibility of a guy selling illicit goods on a street corner.

Not sure how you can prevent botting, and hence the whitelists/blacklists of words.


I'm not sure I agree there's less deplatforming risk. It's structurally harder to be completely excluded from the network, since each instance has its own banlist, but that won't be of much comfort if you're banned from the instance with all your friends in it.


Unfortunately, Mastodon doesn't have a lot of things Twitter has going for it, such as search or trends. These two things make it easier to find others and to get a pulse on what's happening around you.

I just tried searching for "seattle" and it just gives me a list of my own posts that match it and users with "seattle" in their name. In the past, search seemed to work differently (I'm not sure what changed), but when I searched for "seattle" then, I mostly got back a list of prostitutes who were hashtagging their posts with city names, which is not exactly what I was looking for.


> My Mastodon client has (...) no trackers

Perhaps your client hasn't, but how do you know that others don't track you through their clients (or perhaps even bots)?


While many of us (technologists) were unsurprised, is there any reaction from friends and relatives who say, "wait, you knew about this and you didn't say anything?"

This doc is way less dramatic, but when the Snowden leaks happened I responded to people with, "what parts of this could I have told you about that you would have believed if you hadn't seen it in the news?" Reality is, most people are more afraid of being thought of as an outsider than any conceivable consequence that befalls their perceived in-group.


> Reality is, most people are more afraid of being thought of as an outsider than any conceivable consequence that befalls their perceived in-group

I think that very nicely sums up the problem. Very few people are willing to socially isolate themselves this way. I've been working on an open-source self-hosted private blogging platform[1] that I have visions of being a Facebook replacement, but the truth is very few people want to tell their friends: "follow me on this website I built with these separate login credentials". It reminds me of stories I read on here about middle school kids being ostracized for sending text messages without iMessage and showing up as a "green bubble".

[1] https://simpleblogs.org


> "follow me on this website I built with these separate login credentials"

That does seem to be one of the biggest hurdles stopping migrations from inferior platforms to superior ones. (The other main hurdle being network effects / the lack of interoperability).

It's like the saying "If you can get your ship into orbit, you're halfway to anywhere.", i.e. "If we had a single sign-on system that worked across the whole of the web, users would be halfway towards all the best sites."


> While many of us (technologists) were unsurprised, is there any reaction from friends and relatives who say, "wait, you knew about this and you didn't say anything?"

This was basically the reaction my girlfriend had after watching it.


> Though there was hardly anything that I didn't already know, I liked the movie very much for its presentation […]

This is exactly the same impression I had after watching the movie. There was little new information in it for me, other than the interviews with leading developers who have now turned on their own creation. That was quite powerful to watch.

> As for the cultural impact of the movie, I don't think it's going to be much.

The cynical part of me agrees with you. The hopeful part of me is encouraged by the fact that this movie communicates rather technical and boring information in a very human way, and that it shows how the algorithms manipulate you, not some random folks who were targeted by Cambridge Analytica. This was much more relatable and personal than the news coverage of how Facebook and Twitter were (ab)used during the last election, and that gives me hope.

The discussion has so far largely been confined to political animals, nerds and idealistic people, and this movie feels like the first real breakout from that sphere into popular culture.


I'd slot the documentary right next to others that cover things like the horrors of factory farms, slave labor due to consumerism, etc.: while they can be incredibly moving, the impact on your day-to-day is so intangible that it's quite easy to ignore while making no changes to your habits.

It reminds me of when I was working at a grocery store in Texas (HEB) and they experimented with hanging the cow carcass that was being butchered in plain view, I guess in an effort to look fresh.

Well, people complained about it. They said that it was gross, and the store went back to butchering before opening. Just looking at meat as pink saran-wrapped blobs that appear on shelves lets people ignore inconvenient truths. And I think we have this type of preference in all parts of our lives from social media to ignoring death itself.


As long as people/corporations value convenience and profit, everything else is secondary.


>> but I'm sadly not very optimistic about big changes in the next decade or so.

Yeah. I liked it a lot as well, even though there was nothing new.

This documentary is like walking into a crack house and saying "Hey guys! I've got this great PowerPoint presentation on how bad crack is for you!"

Nice idea, I guess, but there's no reason at all I would think it has a chance of accomplishing anything.

These efforts appear, make a splash, and then ... dissolve. I wish it weren't so, but there are some extremely powerful social forces at work here. A TV show ain't going to do much.


I quit FB 5 years ago due to a couple other friends quitting and explaining their reasoning. If .1% of FB or Twitter users delete after this film, I'd say it's still a win.


This type of thing reads like "please compost, recycle, and take public transportation" stuff. That's what I thought of the movie too.

Andrew Keen has the right answer (https://www.youtube.com/watch?v=iRbI-Ui9vEI) but nobody likes it. We're still all stuck in the John Barlow trap, thinking that if we just tweaked things here and there all will be right.


TLDW: I think anonymity is a huge curse. If we can do away with anonymity, then we can begin to rebuild democracy. And of course, ultimately it comes down to all of us: we need to learn to be responsible, not to insult, not to propagandize. I think that will only be realized, just as when we look at each other as physical people in a place like this, if we can do away with anonymity. So: a new social contract, new ways of reinventing democracy through analog, and a more creative and responsible regulatory attitude towards big social media and internet companies.


cool story, bob29 the 18 day old account with no profile info ;)

I'm just teasing. HN survives on a strong culture, and a moderator who appears to be inhuman, or at least doesn't need the kind of sleep and bathroom breaks we associate with human beings.

Losing pseudonymity would be a high price to pay, for a bunch of reasons. Moderation, on the other hand, doesn't scale.

Perhaps that's a feature, not a bug. Maybe the scales moderation can't reach aren't scales we benefit from having.


It depends on whether you're a supporter or an enemy of the EFF's type of libertarian politics, which he definitely is, and so am I. I'd even put Jaron Lanier in that camp too. His solution is just to make everyone pay. But I guess in such situations you'd have to use your card, so you'd also lose anonymity, and then you'd have to pay for all your services, which I think is worse. I don't have gripes about the ad model at all; most people see ads as a nuisance anyway. Also, if AI is driven by the big data it gets from people, it's people who facilitate such trends, and that's what I am more worried about.

People will weaponize that power vacuum, and not everyone is an engineer who just wants to build something. There are people with nationalist interests, theocratic interests, religious interests, and profit interests all in the way of that, and this will never change. It's why the internet was far better when it was just engineers and smart people, and why America on paper looks awesome but in practice doesn't live up to the hype. It's why the freedom-of-thought internet got us nothing more than degeneration into fascist, theocratic, and communist trends. You may agree with the theory that anonymity, no government involvement of any kind, anarchist self-organizing systems, etc. outweigh the bad, but I don't. Not when the central argument for such a thing is turned on its head and totalitarianism becomes the "fight against the man", which in this case would be liberal democracy.


Agreed.

If nothing else, this makes for a great explainer video. Find somebody who needs to know it and show it to them. The key problem is that 99.9% of folks, even when they understand the problem, can't fix it.

Having said that, knowing beats not-knowing. There is good here.


> Don't tell me that Mastodon and Mastodon clones can be replacements for Twitter, Facebook, Facebook groups, Facebook events, Instagram, Snapchat, TikTok, etc.

It has been for me. But I am very aware that the fediverse is far from ready to capture the large masses currently using the regular social networks. And that's fine; if the fediverse were growing at the same rate as VC-funded, growth-hacked social media platforms, that would worry me greatly. It takes time to build tools and communities that are made by and for the users of said tools.

I find Mastodon a much better experience than Twitter, both on the web and in the apps available for my Android phone. I have tried to use Twitter again a few times in recent years, but it's not the same anymore. I used to be an active Twitter user. I prefer being part of my Norwegian Mastodon instance, in addition to one of the larger English-speaking instances.

> Where are the nice(r) mobile apps (not just some website designed for desktops) for any replacements?

I use Tusky (https://tusky.app/), but there are several other really nice apps to choose from. On my laptop I use Tootle (https://github.com/bleakgrey/tootle).


I feel like pointing out that the fediverse isn't really growing at any significant rate. Looking through some pretty-rudimentary-but-best-we-can-get stats (https://the-federation.info/), the number of users active in the last month hasn't really changed compared to a year ago, remaining steadily at 400k-500k.

I personally find that slightly demoralising, but as someone already pointed out, there are fediverse platforms that are inches away from being usable (Lemmy, Mobilizon, Zap, Epicyon).


On the mobile side, I can recommend Fedilab (available on F-droid) and Husky too, as alternatives to Tusky.


> While I've been enraged for a long time about these platforms, the big gap here is that there is no good answer to the question, "what are the better alternatives?"

This assumes we want something to replace these platforms. I for one want no social media. I admit I can't prove they are damaging no matter the implementation but I challenge anybody to prove the contrary.


"social media" doesn't have to mean "wall-to-wall influencers and hyper-targeted bad-faith political ads disguised as individual opinion".

The original promise (a more convenient+reliable way to contact infrequent acquaintances, plan events, read/report local events, and broadcast personal updates) is certainly something worth having.


IMHO social media has a natural tendency to mean "wall-to-wall influencers and hyper-targeted bad-faith political ads disguised as individual opinion". If you build it, they will come - there's a strong practical motivation for these people to try and be there in any social medium that gets popular.

In the absence of a strong mechanism to actively, effectively keep them out or kick them out (what would that be? It's not easy) social media will become like that unless that's a niche platform that almost nobody uses, so all the influencers and political ads don't care because they are somewhere else - but they will follow the eyeballs wherever they go.


Rather than surgical regulation, which as you rightly point out is often prone to capture, I would instead propose a cruder but effective governing instrument: the Pigovian tax.

Trace the chain of incentives further upstream, and what do we find? What animates the system? It's advertising revenue, of course. So let's tax the living hell out of advertising revenue.

Fundamentally decreasing the profitability of a market is a sure way to disincentivize profiteering interests in it.


We are building a solution with social.network, it's not fully completed yet but please keep an eye on it towards the end of the year.


I applaud every effort to compete with the big players, but I also hope that anyone who spends time trying to solve these problems builds interoperable systems (open protocols etc.).

What I really like about the ActivityPub protocol (though it is not perfect), is the way it makes permissionless innovation a thing again. People can build tools and communities to their liking and at the same time be a part of a much larger ecosystem. One example is a rather fresh project based on this protocol, trying to recreate reddit/HN's functionality, only federated: https://dev.narwhal.city/ (https://git.sr.ht/~vpzom/lotide)


Yeah, we will be open-sourcing it. Most of the governance/economic model will be built on Parity Substrate, so anyone can deploy or upgrade that system. I think solving the ad-driven business model is much more important than decentralizing the content servers; it's also hard to meet the UI/UX standards most people have when you go that route.


This is surprising to me. I didn't think that platforms that leverage network effects were typically well-suited to open sourcing unless it was something federated. Are you worried about the system being gamed or usurped by someone willing to be more exploitative?


I think proof of stake/liquidity providers with lockup creates enough incentive to keep the network effects within one system. Open sourcing the application side is a bit trickier.


Similar reaction from me. Every other day we see articles that focus on these things. Essentially nothing new; in fact, I cringed at some moments. I guess I/we are not the target audience.

As for cultural impact: nope. It's going to stay in people's minds for a few days, and then it's back to normal again.


what about something like a non-profit? I was quite disappointed that keybase.io sold to zoom. I always thought more of it like a neutral public utility.


wt.social ?


The regulatory capture is the key bit.


I think what I think about docudramas in general: Hyperbolic. I think the harms of social media are exaggerated, and not entirely unique to social media.

I don’t have a problem with Facebook knowing what content I interact with and using that to serve me ads, or content that I want to spend time consuming. I don’t agree with the characterization of publishing or volunteering information (likes, posting photos, profile details) as private information. Facebook makes clear what information is public, what is shared to friends, etc. I think most people are relatively aware their interactions determine ad selection, as it is intuitive. It’s really not scary to me that Facebook knows which city I live in to target ads.

I don’t agree with the characterization of engineering apps to provide entertainment or social value as “manipulation” in the harmful sense. The fact that something is enjoyable to use or is used a lot isn’t necessarily a bad thing, and is not necessarily an addiction. To use another example, I would want game developers to “manipulate” me to enjoy playing a game. That’s the point. The fact that people spend lots of time on social media just demonstrates that they derive something from it, not that it’s addictive or harmful. Is social media a worthwhile use of your time? Maybe not. Is it an intrinsically harmful activity? No.

I think the Internet in general allows for the more rapid, selective spread of information, and that it lowers the cost of publishing. It follows that ideas can spread quickly, including conspiracy theories. I don't agree that belief in pseudoscience, rumor, or superstition is any more prevalent than it was before social media. I think it's just more visible to everyone how many people do believe wacky ideas. Regardless, social media isn't going to suddenly make me believe the moon landing was fake, so I don't see it as being harmful.

In short, I don’t think ad targeting is harmful or violating my privacy. I don’t think social media is harmful but that passive consumption of media is not the best use of time, like video games or watching TV.


> The fact that people spend lots of time on social media just demonstrates that they derive something from it, not that it’s addictive or harmful.

I was totally with you until this moment. This is like arguing that people with alcoholism get something out of drinking too much.

It’s one thing to say that you’re comfortable with the amount of information that’s collected about you and how it’s used, but how can you ignore the fact that these companies hired experts in psychology and paid millions to keep people using their apps, just like child psychologists were deployed to TV advertising in the 80s and 90s? These apps are absolutely addictive because they were explicitly and intentionally designed to be.

Spending hours in an infinite scroll wormhole does not automatically mean you’ve derived something important or worthwhile. It just means you were entertained enough by the possibility something might be good if you just keep scrolling, and that’s a problem for some.


> The fact that people spend lots of time on social media just demonstrates that they derive something from it, not that it’s addictive or harmful.

I can't tell you how many times I hear my wife say "I need to get off Facebook" but never does. There's very much an addiction.

I use Facebook, but I've found that I spend nearly all my time in specific groups (for me that's motorcycle related groups) because it's a great source of help for "who wants to go riding" or "how do I fix this" or "can I borrow a tool". When it's very topic specific, social media can be really awesome.

What my wife complains about is "friends" that are posting about politics. She just wants to hear about their personal lives but wants them to shut the hell up about politics.


> I can't tell you how many times I hear my wife say "I need to get off Facebook" but never does. There's very much an addiction.

My wife says that too. Then while watching this, she said she wasn't that interested, got bored, and decided to use Facebook and Instagram next to me instead. It was surreal and disturbing to watch it in that situation. :/


I often derive the most enjoyment through more niche social media spaces, such as HN or certain subreddits. I think the difficulty I have is that you get pigeonholed. I don't use Reddit as much because I like a few posts on /r/Funny and then my feed is taken over. I don't always want to go there to learn technical stuff, some days I just want memes. Other days I don't want any memes.

Content discovery is a difficult challenge but I think it is hard when incentives aren't aligned.


> These apps are absolutely addictive because they were explicitly and intentionally designed to be.

Same goes with fast food, TV shows etc. It's your responsibility not to over-consume anything, and if you have children, teach them not to do that as well. The problem is not with FB, Google etc. The problem (which is over-consumption of everything) is cultural. Blaming some specific corporations is easy but it won't lead to any positive solutions. We desperately need stoicism in the west.


Speaking as someone who came from a rather colorful childhood of abuse, I'm wondering what your personal past is? You're assuming so much about the lives that folks lead, and to me it speaks of a lack of empathy for the personal struggles each individual is enduring.

Stoicism will fix things in the west? I assume you mean the USA, but as a person in the USA, I have to say that we are about as stoic as you can get. We lean on personal responsibility for every little thing.

"Oh, you went bankrupt from cancer. Why didn't you plan for that? Don't you know around 50% of folks get that?"

"Oh, you went to college for a useless degree? Why didn't you have someone in your life teach you that was a mistake? Why didn't you reject the counselors in public schools who encouraged you down that path?"

"Oh, you eat too much sugar? Why didn't you reject the industry sponsored studies that downplayed their health effects?"

"Oh, you got covid? Why didn't you stay at home? What do you mean you're an essential worker? Why didn't you take care of your health so you weren't predisposed to it?"

It's not stoicism the west needs, but a dire injection of empathy and community.


It's not just an argument about stoicism vs empathy, but freedom vs control. How much paternalism do you want from your politicians? How much control over your daily routine do you want to give some barely accountable millionaire legislator/governor/president whose loyalties are only to you based on how many votes their donors can buy (the rate differs greatly depending on the politician)? The more effectively money buys their votes, the less they give a crap about you.


We need a better word than "paternalism" for "not inclined to use a pandemic as an excuse to give trillions of dollars to the rich".


Stoicism would be helpful because of how common emotional manipulation is as a vector from sociopaths and how those they manipulate wind up /acting/ almost indistinguishable from sociopaths except they don't even remotely serve their own interests. Look at the Covidiots spreading disease because a politician doesn't want to admit the obvious - that they fucked up big time.

Or the long list of fucking stupid moral panics oriented around children, from people trying to get reelected, and everything "satanic", which ironically enough many children could have told them was just plain dumb. While empathy would be good, emotions alone aren't enough, as even "a desire to protect one's offspring" may be twisted horrifically. Community I am deeply skeptical of as a value in itself, given how often it winds up something deeply toxic and stupid, with people acting like dumb herd animals. Sure, internal investment would be good, but given their record, community itself can piss off.


Good old personal responsibility. It's your responsibility not to walk into traps laid out to catch you.

Why stop people from laying out the traps? They're just trying to make a living, it's perfectly fine!


Absolutely. Let's lay out traps for the trap layers then, and find a way to do it for a living. You're saying that's fine, so let's do it.


Except the trap layers are bigger (more money), stronger (good lawyers from that money), and there are more of them than you (lots of people and lawyers).

How do you fight that? With a bigger group of people. Since they are so large, that means a nation or state (if you're a big one, as if you're small they probably won't care, that's how big they are).

If there's a pride of man-eating lions roaming about, you don't set out by yourself to fix the problem. That's just an imaginative attempt at suicide. You get a large group of people. That's why regulation is what we need, because that's what it means when you get a really large group of people to impose their will in the modern age.


Except you don't lay out traps for the trap layers. You make it very clear that if they lay out traps then they will face consequences.

They are the ones acting in bad faith and trying to extract resources from you via subversion.


What do you propose to solve this problem besides teaching others to take personal responsibility?


Maybe we can look for historical examples of how legislation had a positive impact on a widespread harmful habit.

  Early on, the U.S. Congress adopted the Federal Cigarette Labeling and Advertising Act of 1965 and the Public Health Cigarette Smoking Act of 1969. These laws—

  * Required a health warning on cigarette packages
  * Banned cigarette advertising in the broadcasting media
  * Called for an annual report on the health consequences of smoking
https://www.cdc.gov/tobacco/data_statistics/sgr/history/inde...


Some basic rules and regulations would be great :)

This should of course not replace taking personal responsibility, but humans are way too easy to influence / hack to leave it entirely to market forces. Some common ground and basic rules are at the very core of any healthy society or community.

Speed limits, traffic lights and bike lanes create great conditions for a healthy marketplace. An absence of regulations benefits only the biggest and meanest, and usually ends up creating unsustainable systems that are crushed under their own weight.


> Some basic rules and regulations would be great :)

Any specific ones? I personally can't think of any at all that would work.


I can see two obvious interpretations of what you might be implying:

1. That, because you are unable to think of rules and regulations that would work, there must not be any.

2. That, in order for anyone to legitimately suggest that something might be worth attempting, they must already have a working solution.

Is it one of those two, or something else?


Some (including myself) would argue that a repeal of section 230 would go a long way towards solving some of these issues.

A large part of the problem is that social media companies algorithmically serve content to serve their purposes, but are not liable for the consequences of algorithmically served content.

To me, using an algorithm to select what content to show a user is exactly the same thing newspapers do: editorializing. I don't understand why it's treated any differently.


Those people who say that about section 230 are better known as either dishonest or complete fucking morons.

The entire mutually exclusive division between platform and publisher is a zombie propaganda lie that won't stay dead.


> I personally can't think of any at all that would work.

This rhetorical technique is called 'poisoning the well,' preemptively injecting doubt into whatever follows. The GP gave several examples of regulatory mechanisms that already work to some degree, and an argument for why, but you chose to ignore that completely for some reason.


The GP gave examples of regulatory mechanisms that work for other situations, but I also am unsure of what specific regulatory mechanism would work well for this, and those examples don't really bring anything to mind, even though I am in favor of a regulatory solution. I think coloring the question as poisoning the well might be a bit premature.

That is, what are the traffic lights and safety lanes for social networks? How do we enforce those for private products when generally they are things that apply to our public spaces? A social network is much more of a mall than a public park, so what restraints are we willing to impose on private property, and what will actually work? I think those are very valid questions, and while the argument from blueterminal may have gotten there in a roundabout way (and in a way that some consider not in good faith), I think they are well worth considering in detail.

To me it's blatantly obvious something needs to be done, I'm just not sure what that is, and am slightly afraid we'll implement a fix that if not as bad as the problem, is still much worse than it needs to be unless we consider it carefully.


It's the pre-emptive foreclosure of discussion I find objectionable, negating the attempt rather than exploring the problem space.

I would personally reduce the private property privileges substantially since a social network by definition derives its value from the number and variety of people that use it. I'd like if FB were at least as searchable to its users as to advertisers, for example; arguably FB knows more about many of its users than they know about themselves.


> I would personally reduce the private property privileges substantially since a social network by definition derives its value from the number and variety of people that use it.

That does make sense, and also fits with how private property can't entirely be viewed in isolation if it has negative externalities. I.e., you shouldn't be able to pollute the air immediately above your land as much as you want, because that pollution doesn't just stay on your land.

> I'd like if FB were at least as searchable to its users as to advertisers, for example; arguably FB knows more about many of its users than they know about themselves.

That might help in some small amount (as long as people actually reviewed the info and requested removal, and that had to be honored, it would help), but I'm not sure the companies in question wouldn't just turn their neuroscience divisions to the task of making people want to allow the data for some reason.

To me this feels more like a drunk driving type situation, or age of consent, or the ability to sign away your rights. We believe some things should be disallowed because the combined cost to society of allowing them is much greater than the sum of the individual costs of disallowing them. But even if that can be sold as a good idea for this, I'm not sure what specific things we would do to block it that aren't nebulous and can be gamed. Maybe disallowing advertising, but that seems to be targeting one industry, and I suspect something else with a similar negative and the same incentives might take its place in this area.


Yes, make Facebook's news feed default to a purely chronological order, and let users upload community-created filtering algorithms so they can decide whether they see certain news/sites etc. or just their grandkid's pictures (and if there are none, they would probably log off of FB quite quickly, which is why FB does not offer this as an option now)
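As a sketch of what that could look like: a feed whose only ranking is recency, with user-chosen filters deciding what gets through. All names here are hypothetical illustrations, not any real Facebook API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List

@dataclass
class Post:
    author: str
    topic: str
    created: datetime

# A community-created "filtering algorithm" is just a predicate over posts.
Filter = Callable[[Post], bool]

def build_feed(posts: List[Post], filters: List[Filter]) -> List[Post]:
    """Pure chronological feed: newest first, no engagement ranking.
    A post is shown only if every user-chosen filter accepts it."""
    kept = [p for p in posts if all(f(p) for f in filters)]
    return sorted(kept, key=lambda p: p.created, reverse=True)

# Example: hide politics, keep everything else.
posts = [
    Post("alice", "politics", datetime(2020, 9, 1)),
    Post("bob", "family", datetime(2020, 9, 2)),
    Post("carol", "family", datetime(2020, 9, 3)),
]
feed = build_feed(posts, [lambda p: p.topic != "politics"])
print([p.author for p in feed])  # ['carol', 'bob']
```

The point is that the selection logic is transparent and swappable by the user, rather than optimized for engagement by the platform.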


@blueterminal: There are many proposed solutions, some better than others. GDPR is a step in the right direction; it should not be easy or trivial to collect huge amounts of data on people. There is no good rational justification for this behavior. Consumer protections are important and should be extended to how algorithms are used. Fine-grained targeting of ads (political or not) should not be allowed, due to the obvious negative consequences for societies. An expert panel could be given the task of making some sensible compromises.

The irony is that such regulations would ultimately benefit the companies that are currently being criticized, making their long term survival more likely. Regular people would be better off too, and I would not need to discuss movies like The Social Dilemma on HN.


Stop people from laying out the traps, and say that it is not really fine. Guess that means regulation and enforcement thereof. Things go both ways, of course. There is a certain level of personal responsibility to teach and take into account.


> Stop people from laying out the traps, and saying it is not really fine. Guess that means regulation and enforcement thereof.

What kind of regulations would you impose? Any specifics?


How about stop the hysterical hyperbole of calling speech traps instead?


Not the original poster - just replying with some ideas...

I would propose that Google in particular be broken into many different companies. The fact that they own such a large part of mobile, browser, search, video, email, news aggregation, and online office products is hugely problematic. I believe one of the issues that was not highlighted is the overall reach of these companies; the movie focuses in on the mobile aspect, but there are huge swathes of data being collected via other means.

Search in particular should not have a bias in the autocomplete; search should be agnostic, like Wikipedia. When I search, it should show the same results as when you search.

Next, there should be a requirement that these companies provide access to a user's data back to them, along with the analytical output that creates their digital avatar. There should be the ability to FULLY delete the data per request.

Next, there should be a requirement that the sites provide the ability to utilize their services with browser anonymity enabled. Much of the data leakage is harvested automatically from the mobile device or browser; the ability to slow the trickle of information should be required by law. Perhaps the price that an advertiser paid for each ad should show up, with an opportunity for the user to opt out and pay for the service instead, at the same rates, in truly anonymized fashion.

More extreme options: restrictions on access based on age (we do this for cigarettes and alcohol; perhaps it should apply to social media as well), a throttle on data gathering or hard limits on how much data can be pulled in and kept, and maximum data retention periods to make the models dumber. We have data retention periods set for health data; why not for social media.


Time and time again I see it suggested that these "let's be evil" giant companies like Google be split up. I'm coming around to this. They aren't able to be accountable because they are far too giant (not necessarily in the head-count sense) to be influenced by human conscience. They take on a life of their own, unconstrained by their externalities, adapting like DNA, optimizing for their survival at any cost. Force these tech giants to split up, because we agree as a society that they are evil (or at least problematic). The individual people that make up the mini-corps will then have more influence to steer these mini-hydras. What about shareholders?


This is like arguing that the fast food industries, the sugar industry, and the ad industries have no role to play in diabetes because it's your fault you can't control yourself.

Never mind that you will have been exposed to tens of thousands of hours of literal behaviour modification programming. You should somehow magically still have the personal, emotional, and cultural resources to override it. And if you don't - it's your fault.

But curiously it's not the fault of the corporations, because there is no equivalent requirement on them to control themselves, or to take corporate-personal responsibility for the consequences of their actions.


How would you try to help an alcoholic? Would you tell him that it's the alcohol producers who are at fault? Or would you tell him to try and get help? Why would the advice be different for over-consumption of social media, coke, big macs, TV shows, casinos etc.?


The answer to that is extraordinarily complex, and dependent on the abuser's history and personal circumstances, so it's not really answerable here.

Cigarettes might be a more apt analogy IMHO. It's extraordinarily hard to quit, but at least we've come to a point where there is enormous social pressure to do so. And not just for health reasons: it's widely considered an unattractive habit.

Contrast that with the 50's where it was not only accepted by society but considered "glamorous". That's (sort of) where social media is today. Its image needs to go from fun and youthful to gross and harmful in the popular perception (barring that, the entire model needs to be re-invented).

Educating people on the long-term harms is the primary (perhaps only?) way forward at this point. This movie does that very well but it will be a long slog.

I'm not even sure this war can be won, but we must try.


That's weird framing. The individual needs help, but there are significant regulations on alcohol companies, too. We went too far, by prohibiting it altogether, but we still limit alcohol advertising, we have age limits, etc. None of that is "telling him" to blame the alcohol producers; we need both regulations against harms facilitated by corporations for profit and support for those harmed.


> Why would the advice be different for over-consumption of social media, coke, big macs, TV shows, casinos etc.?

Of course, different people deal with these kinds of issues differently, but it's also important to recognize the vast differences between consuming too much alcohol vs consuming too much tv. Firstly, if you've spent any significant amount of time around a recovering alcoholic and discussed the issue openly with them, you realize pretty quickly that it's usually not just a raw craving. A lot of alcoholics drink in response to trauma of various kinds, and your backstory as an alcoholic plays a fairly significant role in how to seek the best recovery strategy. There isn't a one-size-fits-all solution there.

TV is much different for several reasons. First, there's no universal societal acknowledgement that TV is addictive like alcohol. The US went through prohibition because alcoholism was rampant in society; TV hasn't disrupted society in the same way. That's not to say that TV isn't addictive or can't be abused; it's simply that TV is much more passive in terms of how it's consumed and its higher-order effects on people who consume too much. This is also true of social media.

Social media and TV don't usually force people between feeding their families or consuming endlessly the way that alcohol did in the 19th century, but it does encourage us to spend more time on autopilot and less time consciously thinking and being present.

I highly recommend David Foster Wallace's talks on entertainment addiction[0]. He predicted the genesis of a lot of these issues 20 years ago, and his commentary is still so relevant today, years after his death.

0: https://www.youtube.com/watch?v=DoAME7gPBaw


There are four trillion bits coming at you, 99% of them are shit


Not a great comparison, since if alcohol were invented today I'd expect it to be banned or at least much more tightly controlled. It is incredibly damaging to society and we only tolerate it for historical reasons.


All those examples you listed have restrictions and regulations in many places, in addition to services that provide help to the affected. It doesn't have to be one OR the other; we can do both.


> How would you try to help an alcoholic?

That's a distinct question from "How would you try to reduce the prevalence of alcoholism in your country?".

When considering a particular individual, yes you need to get them medical and therapeutic help, some kind of life coaching/counseling program etc.

When considering the whole population, you do need to think about regulation of selling, advertising, public health campaigns for prevention etc.

Similar with sugary junk food. You can tax it at higher levels, ban advertising to children etc. Be strict on forbidding misleading claims. Perhaps put highly visible "high sugar content" warnings on the product etc.

Yes, all this stuff is very un-American and too "socialist" I guess, which leaves you with the other option of being the most obese developed nation and having a substantial proportion of people be obese at a level of it becoming a legit disability. Which is fine for most people I guess, because it's not themselves, so who cares about other people anyway.


One of the main issues is regulation. With TV, government was able to regulate the content via the FCC. The movie's example of this was Saturday morning cartoons compared to YouTube for Kids. The latter is very much self-regulated by Google/YT and parents, rather than by government, which, given history, has the actual power to curb it. Giving the keys of regulation to the target of it is not the way to get it done. YT in this example will make its main goal driving its product to further profit over regulating content one way or the other.

I definitely understand your argument of "leave it to the parents" but why add to their ever growing pile of worries? What if they don't know any better b/c of technical illiteracy? This issue of an unregulated media line has been seen and dealt with before yet we're hesitant in doing anything about it.


The reason to be hesitant is /because/ we saw how it was dealt with before! If we had the same godawful, self-righteous, prudish standards, we would be censoring the very concept of LGBT people as "corrupting". It is no accident that acceptance's first step came from unregulated media that slipped through the cracks and was able to laboriously crack and pry through the societal bullshit.

Fuck the parents - they should be adults and taking responsibility for their life's direction if they have kids.


What "unregulated media" do you feel was responsible for "acceptance's first step"? I'm not an expert on this by any stretch, but I've studied both media history and LGBT issues, and the shows that get cited most often for doing a shocking amount of work in shifting American attitudes toward LGBT folk were, well, regulated media. Specifically, the network sitcoms "Ellen" and "Will & Grace," with earlier tentative steps in "L.A. Law" and, in the late 1970s, "Soap."

While "unregulated media" may have become leading edge in LGBT presentation in the years since, it's also become leading edge in reactionary backlash, from Fox to internet media to the right-wing propaganda pushed by Sinclair Broadcasting to their local TV stations -- which in many markets they wouldn't have been able to own if local TV markets were still more tightly regulated.

I'm not suggesting regulation is a failsafe panacea by any stretch -- I'm just suggesting that deregulation isn't, either. There is probably a balance to be struck, and it is not at all clear to me that the balance we have now is correct.


They are the Johnny-come-latelies - look at the various zines and very small newspapers published way back when homosexuality was actively illegal, for one, and would have been called obscene material, but regulation thankfully couldn't be meaningful. Although niche, they certainly had a role in growing networks of allies.

On the other end, Internet contact is a later example, and although nothing is concrete, it seems very non-coincidental that acceptance grew in generations where straight people could watch gay or lesbian porn, and where fanfics were ubiquitous enough that Draco and Harry ships were well known even among people who never read a single one due to lack of interest.

The "balance" is a golden mean fallacy. Best illustrated by a rhetorical question that sounds like a death threat: Is there a balance to be struck between you being alive and unharmed and stone dead?


I'm not sure we're using "balance" in the same way, precisely; I don't mean "balance" in the "present both sides of every issue as if they're equal" sense, I mean that there's a continuum of regulatory options in a given field and that "less regulation" is not always the correct direction. (Not that regulation is really a spectrum, for that matter, as you could conceivably have high levels of regulation in a given field that's still regulating badly.)

So, to answer that last rhetorical question with the way I'm intending "balance" to be used: I'm not at all opposed to regulations in food safety, wiring standards, water treatment, etc., that shift the balance of likely outcomes for me toward "alive and unharmed" and away from "stone dead." :)


This is an utterly unscientific and healthcare-ignorant view, on the level of "spirits cause tuberculosis". Science and medicine have long since moved past the model of personal responsibility in treating addictive behavior. We now know that a wide variety of factors completely out of the control of the user, such as trauma in childhood, genetic influence, isolation, depression, etc., contribute to someone's tendency to become addicted to something.

Additionally, we also know that it is possible to make something addictive by spacing out dopamine hits in such a way that can be designed to cause a person to pursue this over other necessary duties. Pairing this with people who are more vulnerable to addiction through no factors they could've possibly controlled essentially takes advantage of people.
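The "spacing out dopamine hits" described here corresponds to what behavioral psychology calls a variable-ratio reinforcement schedule: rewards arrive unpredictably, which makes the behavior unusually persistent. A minimal, purely illustrative simulation (not any real product's code):

```python
import random

def variable_ratio_rewards(actions: int, mean_interval: int, seed: int = 0) -> list:
    """Simulate a variable-ratio schedule: each action (a scroll, a refresh)
    pays off with probability 1/mean_interval, so rewards average one per
    `mean_interval` actions but arrive at unpredictable moments."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_interval for _ in range(actions)]

hits = variable_ratio_rewards(1000, 10)
print(sum(hits))  # roughly 100 rewards, unpredictably spaced
```

Contrast this with a fixed schedule (a payoff exactly every 10th action), which is far easier to walk away from precisely because the reward moment is predictable.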


I think this argument is odd. You are literally arguing that every individual should have more self control than can be overcome by a team of experts whose job it is to defeat your control. I'm sorry, but we're just outmatched.


One way of promoting personal responsibility is to make people aware of a problem. Some people may be able to figure it out on their own, others find out from someone close to them, yet others will learn of it from other sources (such as documentaries).

That being said, I believe that social responsibility also has to play a role. If companies were simply interested in making a product appealing, they would look at how well it is received. When companies start employing psychology, there is a strong possibility that they are doing so to override people's judgment (thus their ability to take personal responsibility).


> One way of promoting personal responsibility is to make people aware of a problem.

Totally agree. Movies like these or like "That Sugar Film" are extremely important.

> That being said, I believe that social responsibility also has to play a role. If companies were simply interested in making a product appealing, they would look at how well it is received. When companies start employing psychology, there is a strong possibility that they are doing so to override people's judgment (thus their ability to take personal responsibility).

They are going to inevitably do that, and that's what we should expect them to do. I don't think there are any solutions to that except building some new services / protocols.


> They are going to inevitably do that, and that's what we should expect them to do. I don't think there are any solutions to that except building some new services / protocols.

The trouble is interoperability. Creating open social media platforms has been tried before, yet uptake seems limited to those with strong ideological views. You would pretty much have to compel new and existing social media networks to support the services/protocols in order to give people a viable option. I'm not sure that many people are ready for that type of legislation, and I'm not even sure it is a good thing. At least existing social media services have some incentive, be it the company's reputation or the value of the data itself, to offer some degree of privacy protection.


I don't think it's a given that social media's harms are at the level of fast food. Figuring out where it falls on the "harmful to society scale" is a worthwhile discussion. Is it sugar, E. coli, or something in between?

We do have regulatory protections around food. We don't think it's pragmatic for everyone to run their own home lab to determine if there's poison in their food. It may also not be pragmatic to expect everyone to check sources or even be able to locate the truth of a source (I sure can't most of the time!).


'Personal responsibility' is often used to inhibit the formation of a consensus for collective action. The people who sell things like fast food or saturate TV with ads are acting collectively to profit themselves, but you're saying that the people who are wary of it should be stoical rather than cooperating among themselves to counter the unwanted behavior.


This argument, and arguments like it, suck because they fail to recognize that people who can't control themselves have the potential to cause harm to those who can. Stoicism would only work if everyone adopted it at once.

People who mal-consume social media contribute to cancel culture, spreading lies and propaganda, and providing the fuel for negative power-seeking behavior in political and business realms. These have very real consequences for those innocent parties that don't mal-consume social media.

Aligning with your libertarian pattern, you can say "Well it's up to the people who can control themselves to defend themselves from the people who can't", and then that's precisely why we would introduce laws/regulations limiting things.


I spend lots of time reading. Is reading an addiction? I don’t think spending time on something is necessarily an addiction. Can social media be an addiction? Sure, in the same way video games can be an addiction. I wouldn’t go as far as saying video games are harmful. Social media is also different in that it’s a means of communication. It has utility for communicating, whereas video games or drinking don’t.


It’s not about how you consciously choose to spend your time, it’s about being able to choose when to stop.

I commonly hear from alcoholics that the problem isn’t that they want 1 drink, the problem is that they want 10 drinks. They lack the ability to moderate consumption and be able to consciously stop.

Social media suffers from this exact same problem. It’s so easy to just keep scrolling, to just keep seeing what’s further down the feed. Maybe I missed a picture that I would like, or something. This FOMO is intentionally designed and baked into the apps to keep people from leaving them.

You can choose to put down the book whenever you want to or need to, and maybe you don’t have a problem putting down social media either. But many, many people do. Just because you don’t feel psychologically pulled in doesn’t mean that it’s not a real addiction.

There’s an easy way to test this out on yourself. Commit to going a whole day without checking your favorite social media apps and add the parental control time limit to those apps. If you find yourself unconsciously whipping out your phone and checking social media, you’ll see the prompt to input your parental control code. If you’re anywhere near the statistical average human, you’re gonna find yourself mindlessly checking social media multiple times that day. That’s what I’m talking about.


> I spend lots of time reading. Is reading an addiction? I don’t think spending time on something is necessarily an addiction.

Anything can become addictive, really, if we use ASAM's official definition: «People with addiction use substances or engage in behaviors that become compulsive and often continue despite harmful consequences.»

The difference between books and feed-scrolling is stark. Books are linear, static and finite, while your feed is ever-changing and adapting to your patterns of behavior – and practically infinite. These differences are huge when it comes to the potential for "continued use despite harmful consequences".


Addiction is defined by its consequences. If you're engaging in a regular activity, that's one thing, but if you can't stop yourself from doing it, even when it's harming you, that is addiction.


Is the book dynamically changing its content based on what it knows about your psychological profile in order to influence your beliefs and behaviours?


The deck of cards isn't changing its contents and yet gambling is an addiction. And if influencing behavior is such a sin, why are you speaking? Don't you know that could influence beliefs and behaviors?!


The outcome of the game is ever-changing. There is always the possibility that next hand (or dice roll) will give you the dopamine rush you crave. That is why it's addictive.

But... that has nothing to do with the premise that reading books can be compared to social media.

>And if influncing behavior is such a sin why are you speaking

Because I'm on a message board whose accepted purpose is to read and share thoughtful (your comment excluded) ideas and views.


>I don’t agree that belief in pseudoscience, rumor, or superstition is anymore prevalent than it was before social media. I think it’s just more visible to everyone how many people do believe wacky ideas.

I think this is a key point. In particular I think educated people (for want of a better term) are often dangerously out of touch with how many people (particularly internationally) don't share their worldview. I remember the first time having dinner with a family and finding out I was the only person at the table who believed in Darwinian evolution. I was shocked because these people weren't even particularly religious. Up to that point, except for strict religious people I don't think I'd even spoken to someone who didn't believe in evolution. Or at least, it never came up.

And this is the point really - what social media does do is increase social friction on these topics. I worked with a guy who didn't believe dinosaurs existed (also not religious). After the initial debates it was fairly easy to just let it rest, since he was a nice guy and completely sane on all other topics. But if this had been on social media, and I had seen anti-dinosaur posts on a regular basis, that would probably have gotten to me eventually and I would have blocked him. Even though, in social terms, it doesn't matter whether he believes in dinosaurs or not.

The other thing it does is make people feel less embarrassed of holding particular viewpoints. Lots more people than you might think believe in witches and curses but (quite rightly) think they'll be ridiculed if they discuss this too openly. But if you're on a Facebook group where everyone believes in witchcraft it starts to feel like a legitimate opinion. And then someone else sees one of your pro-witch posts and suddenly there's a discussion about where all this new belief in witches came from. This kind of conflict drives a shitload of activity and makes everyone involved feel bad one way or another.


> I think educated people are often dangerously out of touch with how many people don't share their worldview.

What you call above a wacky idea - is actually not that wacky. I may believe dinosaur X existed but not dinosaur Y. That’s perfectly ok. Much of this stuff is not “settled science”. I work in academia, and mostly it's a milder version of VC vacuousness. You occasionally throw stuff out & see what sticks. It’s plausible, but not proven one way or the other. So you publish something - the goal is mostly signaling and mindshare acquisition. If everyone waited until we knew for sure, with 100% consensus, then nothing would get published. That’s why a majority of academic research can’t be replicated, but it is still published and it is still legit.


Social media doesn't just increase social friction, but creates beneficial network effects for people who share particular views. That's often just harmless fandom or fads, but it can also be deadly serious when the views in question promote political violence.


> In social terms it doesn't matter if he believes in dinosaurs or not.

Yes, I can choose not to bring up that topic. And I choose to ignore it because I don't know what else to do. But, I can't completely let go of the idea that a person who doesn't believe in dinosaurs is voting on things like whether or not schools teach about dinosaurs, or whether or not kids should be vaccinated, or whether or not masks will cause CO2 poisoning.

I don't know exactly where to draw the line. I don't know how to put this without people reading something into it that's not there but, I'm perfectly ok with someone having arguable reasons for holding an opinion different than mine but I'm not so okay with them having nonsense reasons.

To give an example, for some reason my dad stopped believing in evolution in his 40s and will spout nonsense reasons like claiming eyes are impossible because it's too big a step, which is logically false. You can easily name the steps. I'm not saying I require him to believe in evolution, I only want him to give valid reasons. A valid reason could be "because god tells me so and to ignore all other input", but a valid reason is not "because evolution can't make eyes".


I know what you mean but...what can you do? In the instance you give, he's still your dad and if he's a generally good guy otherwise you're presumably not going to cut all ties over a disagreement about biology.


The harms of social media are understated. I live in Brazil. Social media was one of the main reasons for the election of our president, Bolsonaro. Now we have more than 130,000 deaths from coronavirus, the Amazon and Pantanal are burning, and our institutions are crumbling. My fellow software developers aren't just making some faraway teenagers depressed. This is real harm in the real world.


Most systems won’t change overnight because of the guy on top. What makes you so sure Brazil would have done better with the opposing party candidate as president?

Maybe the problem lies in a supposedly chronically underfunded healthcare system, underfunded by a number of previous administrations? In an electoral democracy, that would then reflect the citizenry’s misplaced priorities.


Man, this guy fired 2 health ministers during a pandemic. And so, during a pandemic, we don't have a health minister!


Most systems get built over time to be better.

Every crack pot on top harms the structure, until eventually One Guy does break the system.

Now if people want this outcome, it's great. If you don't want this outcome, it's not.


[flagged]


Nationalistic flamewar will get you banned here, so please don't post like this to HN.

https://news.ycombinator.com/newsguidelines.html


It has nothing to do with what nation they are - but an uncomfortable conclusion that the problem is in the voter base is self evident for elections.

1. Bolsonaro is a fascist and only terrible people would support him. (If supporters of genocidal fascists aren't terrible people, then who is?) 2. He won the election. 3. A plurality of votes is needed to win the election.

QED, the voting plurality are terrible people. I said nothing about other nations, and I thought it was abundantly clear that this was a lesson of experience: assuming that a large number of your voting countrymen aren't terrible people is sadly a mistake.


Sorry, but this sounds mean.


What's mean about it? Brazilian voters are simply reaping what they sowed - "social media" has nothing to do with it. It's called living in a democracy.


You should see “The Great Hack” on Netflix. It might change your mind.


I totally, 100%, agree with you. I worked at Twitter pre-IPO. There were no “expert psychologists” anywhere in the company at that time, unlike what people below claim. Whatever features we built were mostly organic (things like hashtags rose from the Twitter community itself, not from some internal mandate). The 140-char limit was a technical SMS limit, not some feature purposely engineered to get you to engage more. As for things such as the like button, heart button, retweet - for every “success” like this, there were numerous failures that simply did not catch on. All in all, it was a bunch of clever hacks.

The IPO happened, we sold our shares and moved on. Most of us deleted our Twitter accounts shortly after. It's not like any of us were actively tweeting anyway. Many of my colleagues went to work at Facebook. They have a barely functioning Facebook page, which they hardly ever post on. A few of my friends work at Insta. They don’t even have a profile - they said it’s mostly celebrities who post pictures about some luxury goods, not us developers who are checking in code with git. It has no relevance to us. Basically, a lot of us are completely bored with this social nonsense, and we’ve moved on to more important things. Social garbage is just not interesting. One of my friends works at LinkedIn on something called LinkedIn Stories. So I casually asked if he had ever written a LinkedIn story. He said you must be a bigtime loser to even think of using that garbage feature!

I guess once you know how the sausage is made, you lose the craving.


You don't need to have an "expert psych" initially.

Sitting with a friend, rehashing what we had studied about Skinnerian conditioning while playing a Diablo-like game, we dreamed up a Frankensteinian game that knew exactly when to hit you with a transaction to let you level up, get a new item, and so on. Years later, microtransaction-driven mobile games came out.

Adding simple psych based analysis to systems isn't hard, you would just pick it up from gaming/other industries and recreate it as a best practice.

People are interested in this, and are doing great amounts of research to figure out how to do it better.


There's a bit of Sisyphus in everyone's story it would seem.


You are vastly understating the volume and type of data that Facebook is gathering. You casually mention photos but there is a mountain of information in each and every photo. People upload tens of thousands of photos.

You also just hand wave away the real psychological manipulation that is available when bad actors have enough information on you. “I won’t be convinced the moon landing was fake.” Sure, that’s likely true, but you aren’t special or immune from manipulation. With enough data ANYONE can be surgically targeted and manipulated.

And the worst part of all of it is that you can’t opt out. You can upload a photo that I am in. Facebook can scan and get a ton of demographic information about me. Then they can continue to trace and build out my “invisible” profile, all without me needing an account.

It’s absurd.


I too feel the narrative is played up quite disproportionately (democracies are falling, dystopian world).

Doesn't any engaging TV show (Game of Thrones) do something similar: keep you engaged with twists and turns, experimenting/testing with new characters? Yet A/B testing on users is portrayed in the docudrama as experiments on lab rats.

And why is the ad-based business model a bad thing? If the platform targets specific customers without any misleading information, it can really help both the customers and the advertisers. Seems like a win-win.

I feel it is absolutely necessary for these platforms to understand us better, by capturing various data points and then filtering our feeds. There is so much content published every second that a genuine AI can really come in handy. If I feel that my feed is getting polarised and I am in an echo chamber, I can consciously subscribe to alternate and diverse opinions and people. In reality, "you are the average of the 5 people you surround yourself with".

On a side note, Netflix itself is a competitor to Facebook (content), Google (YouTube) and also for engineering resources. Netflix, too, is "competing for users' attention" with these tech giants. Netflix, in a way, was also getting you hooked on the documentary by playing around with tempo and music frequency, triggering some engaging chemical reaction.


I'm not super bothered much by it either. However, there is data to show, through Facebook's own research, that it is possible to manipulate people's sense of well-being and their perspective by controlling what is shown to them. For example, Facebook could show a person only posts from their friends expressing negative sentiment and put them in a bad mood.

Although these algorithms are quite sophisticated, there is no universally accepted heuristic for what is ideal for somebody to see. For example, let's say your friends have broad political leanings but you lean left. Should the algorithm only show you left-leaning political posts because that's what you like, or should it optimize for you engaging with a broader variety of content, but maybe increase your dissatisfaction? I think these are very difficult questions to answer.

Another issue is that people are very biased towards believing what is put in front of them. If your mind sees a pattern, it starts attributing truth to that pattern. This idea that you should just see all the facts and then "you decide" is a cop-out in my opinion -- your mind will have decided before you were ever able to apply rational thought to something.

Most people are used to seeing filtered information that has been vetted (newspapers, textbooks, teachers, experts, etc.), and the age of social media presents a problem insofar as this filter does not exist or exists in the court of public perception or algorithms. You can see evidence of this in that there are otherwise rational people who are being easily tricked into believing these whacky conspiracy theories and fake news, as you point out. I think that's a pretty big problem with serious societal repercussions.


> Should the algorithm only show you left-leaning political posts because that's what you like, or should it optimize for you engaging with a broader variety of content but maybe increase your dissatisfaction.

They should give that control to the user. Users should have the ability to select how much diversity they want and then provide feedback later to the content for the AIs to iteratively understand better, just like in Google news, Youtube. In addition to that, a report analysing the user's engagement with content and topics, for the user to reflect upon.


>The fact that people spend lots of time on social media just demonstrates that they derive something from it, not that it’s addictive or harmful.

I think the point of the movie is to show that it is addictive and harmful and it is built that way intentionally. And it gets worse for younger generations.

>Regardless, social media isn’t going to suddenly make me believe the moon landing was fake, so I don’t see it as being harmful.

Maybe not the moon landing, but it can make you believe many other things without you even noticing.

We've seen examples of this already, there's nothing new here.


The truth is that platforms like HN are the worst offenders. While consumption-oriented platforms are clearly consumption-oriented, the addiction from text-based platforms like HN is far more insidious.

People think they're making a difference when they come here, but nothing is happening. They get the dopamine hit of some karma, and replies, but they're in their trench and the other guy is in the other trench and the battle lines aren't moving.

HN is a pretty clever tool. It honeypots entrepreneurs who can't develop so they don't compete with the guys who can develop. The former category, instead of competing¹, talks here about how they can't finish their side projects and how they can't get motivation to work, and other unrelated stuff that is just addictive entrepreneur or tech porn.

This streamlines the experience for doers, who avoid the manipulative trap that is HN, and are busy building product.

¹ Competition is bad. cf. Zero to One


I am not categorically against ads on online platforms (or offline for that matter), but I do have a problem with the power these companies have gained by gathering and analyzing fine-grained data points on us and our behavior. The fact that most people only have a very vague idea about how these systems work and how much data is collected is worrying.

The fine grained control over who you can show ads to is a problem in itself and a tool ripe for exploitation by anyone who wants to nudge certain demographics in any direction they want, political or otherwise. We have seen several examples on how wrong this can go already.

> I don’t agree that belief in pseudoscience, rumor, or superstition is anymore prevalent than it was before social media.

I have seen up close and personal how a combination of Facebook and Youtube's algorithms have cleverly identified and massively amplified certain tendencies in people I'm close to. That is a personal, political and societal problem we should not ignore.


Like many privacy advocates, the documentary begs the question on most of the privacy issues it discusses. Some people seem to have an intuitive sense that they're being violated whenever anyone trades data about them or surveils them in public or aggregates their emails for crime-fighting purposes (or any of a dozen other examples), but I don't share it, so I'm not responsive to arguments that rely on the assumption that we share common ground on this point.


It was very ironic that Netflix, which once said that sleep is its biggest competition, released a documentary about how its rivals in the attention economy are harmful.

Yes, Netflix doesn’t advertise directly to me, but many of the effects are still present. The way I think and behave can be changed by programming I watch on Netflix. Netflix uses a recommendation engine to keep me engaged. Netflix has implemented several design patterns to keep people engaged, such as auto loading the next episode.

It was specifically ironic for them to attack YouTube for kids while Netflix has a children’s offering, and most of the known downsides of screen time for children apply whether or not advertisements are present.


I appreciate the irony, but as a parent I can say that I much prefer my children on Netflix to YouTube. (Caveat: there is more deeply interesting content on YouTube, but it's pieces of civilisational flotsam to be salvaged from a river of shit.)

Plus, I just don't agree with your last paragraph at all: ads aimed at kids definitely have a huge impact; it was not just moral panic that got them regulated in the TV space many years ago. YouTube is so full of ads and undeclared product placement (not that declaring it matters to kids, and it's doubtful it matters much for adults either).


Well, the difference is that YT needs you to watch more so it can show you advertising to make money. Netflix just needs you to pay your subscription. That is a big difference, as far as the payoff of using psychological hacks goes. I believe that if Netflix were ad-based, it would be a hell of a lot more addictive.


TV addiction is a huge problem in the Western world. The average American spends more than 2.5 hours a day watching TV, and Netflix is a large part of that. Even worse, Netflix promotes “binging”: watching TV for long stretches at a time.

Netflix is clearly addictive, and their business model is based on people spending large amounts of time watching their programming and talking about it.


Fortunately Netflix's recommendations are (almost) never compelling enough to keep me watching. :-)

I'm also much more cognizant of the time lost by committing to another 30-60 minute episode vs. a few minutes for another YT video.

But it does seem like some sort of regulation might be required to give consumers some control over dark patterns (i.e. force them to include settings to turn things like auto-play off).


> YouTube for kids while Netflix has a children’s offering,

Weird how a supermarket selling sugary cookies would criticize a man giving away speedballs in the park.


Isn’t that the essence of Netflix’s point? That Facebook, Twitter, YouTube, Instagram, etc are far worse for you than Netflix.

It’s hardly an unbiased position, considering Netflix is bidding on attention just like TikTok or YouTube is


I think there is a huge difference. Netflix gives me the freedom to cut out ads and watch what I want (from what is available anyway). That is why I have always loved Netflix and have always enjoyed not having cable tv (which always felt like it was a net negative experience).

I will admit that I have binge-watched episodes when I should have been doing something else, but I also believe Netflix provides a beneficial role in my life.

Also, it saves Netflix money if I don't binge-watch everything; as long as I still value the subscription and plan to come back, it doesn't matter to them how much I watch.

In fact, this might convince me to purchase Youtube Premium just to signal that I value ad-free paid-for-content and I believe we can build a world without ads.


Agreed, I'm starting to think the Internet is becoming a net negative in people's lives. Maybe if everyone reverted to 2G speeds for a few basic tasks, we could leave social media/games/video behind.


You can make real friends in games.


Man, it's gotten so bad that I rely on Netflix to help me sleep. I'm also surprised they would release this. I think they're hedging by being transparent about the effects of screen time on people: there's probably a political reckoning waiting for tech companies if Trump loses this election, so they're trying to get ahead of that possibility so they can stay under the federal radar and retain the ability to self-regulate.


The funny thing is, I always wondered what it would have been like to live in the 30s: mass media, populist movements and all of that. I kind of get it now. Social media is the new mass media. And as last time, smart populists used it to their advantage, industry supplied the tools, and we all know how that turned out. I'm not saying we are heading the same way just yet, but the possibility exists.

One could probably have made the same documentary about radio and TV 80 to 100 years ago. Just watching the film now; thanks for posting this question, I could have missed it otherwise.

EDIT: The guy who invented the LIKE button is right up there with the guy who invented CFCs. Not that either had any bad intentions. That's what worries me: a group of people with good intentions, doing good things, can easily create dangerous things that get exploited in the wrong people's hands.


Over 100 years ago.

Read the first paragraph from this famous document from 1891 to get a taste of the concern:

https://www.papalencyclicals.net/leo13/l13rerum.htm


I’ve often thought about this too, I think the printing press is a similar time to examine as well [1]. I do think the internet provides an entirely new scale, the dissemination is global and instantaneous, and that makes the previous problems that much more explosive and harder to contain.

[1]: https://www.theatlantic.com/magazine/archive/2020/01/before-...


Yeah, scale and speed are maybe the worst part compared to other periods of the past. Ways to manipulate the masses have always made some extremely powerful. Religion and church in medieval Europe for example.

Ultimately, we will figure it out. We found ways to cope, as societies, with mass media, radio, TV and so on. We will with social media as well. I'm just a tad worried about the path we have to travel to figure it out, and what might go horribly wrong along the way.

EDIT: They lost me a little bit on the AI scare part. Up to now, we're talking about ultra-complex algorithms. The kind of algorithms that screwed up Wall Street. Hard to understand, sure. But still "stupid" algorithms. The difference between Wall Street and social media? Wall Street has a public, closely monitored, real-time KPI to measure, indirectly, what the algorithms do. Social media has no such thing.

Oh, and we should read up on Goebbels theories on propaganda. The principles still apply, just the scale is so much bigger. The algorithms have no intentions yet. People using these algorithms do. And we have to find a way, as societies, to cope with that.


I don't work on algorithm-driven platforms, but I built a tool [1] to help deal with them.

I'm very glad to see this reaching the mainstream. Like most people in the tech industry with some understanding of how these things work, I've been increasingly worried about the harms it's doing to myself, others and society at large. Once you've seen the damage it's doing, you can't unsee it. It drives me crazy that others can't see the same things.

However I think this film did a great job at conveying to non-tech people how the systems work and take advantage of them. I think of it a bit like those documentaries that expose the worst of the meat industry, where it's not uncommon for viewers to become vegetarians.

My optimistic outlook is that more and more people will wake up and turn their backs on these harmful things, and the era of Facebook etc will, thankfully, be over.

[1] https://chrome.google.com/webstore/detail/news-feed-eradicat...


Thanks for NFE!! That and FB Purity have reduced the time I spend on Facebook to nothing, except for messages.

I put a bit of effort into finding extensions like this (and Distraction Free YouTube, Inbox When Ready, Amazon Lite, etc.) this year, and I've been shocked at how much less "sticky" I'm finding the internet. Willpower and personal responsibility are great, but better yet is not even having to try.

When I use my phone, the difference is absolutely incredible, since I can't bring this huge suite of extensions with me. As a result I've been uninstalling apps gradually over time -- you can disable surprisingly core functionality on Android, including the web browser.


I'm glad to hear it's helped you :)

> Willpower and personal responsibility are great, but better yet is not even having to try.

I think as humans we overestimate just how much power we have over our minds. Especially up against these algorithms that have been trained and honed with billions of compute hours by thousands of engineers, we just have no chance.


Social media is this generation's smoking issue. One of the predictions made in this movie was that humanity will not be able to survive if we continue to be blinded by our own willful ignorance due to these addicting sources.

I think they hit the nail on the head that ultimately the goals of the platforms are at odds with your own intrinsic goals and the goals at large for society. Something is going to have to change eventually.


Believing willful ignorance is due to any external X is itself willful ignorance. We have it documented as far back as the Bronze Age philosophers.


The problem is that many people, including my wife, are OK with being shown ads. After watching the documentary, I uninstalled Instagram and logged off Facebook, as I hated the thought of being manipulated. My wife (and her friends, for that matter), though concerned about the impact on kids, was back on Instagram about an hour later. "I'm ok with all these ads, some are useful".

So though it should help parents navigate this better for their kids' sake, I'm not sure it will impact everyone as much as we think it might.


I think that advertising is the main problem. Most issues caused by social media ultimately originate from this way to generate money out of wasting people's time. Cut that source of revenue and suddenly there is no longer any incentive to get people addicted, encourage outrage, promote misinformation/clickbait, etc.

Related: http://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html (I totally agree with all of it)


I think you may be right. I've always hated ads, but never thought about why.

They have never done anything good.

If I need something, I can search for it. If I don't need it, I don't want an ad to try to instill some desire in my heart.

If something is really useful, word-of-mouth is a good way to learn about it.

What would it be like to live in an ad-free society (that still promotes healthy mutually beneficial capitalism)? Is it even possible?


I will never understand how parents can be okay with ads that target children. It's like giving their children's minds to a stranger with candy.


Targeted ads are a relatively new phenomenon - many people (especially older generations/parents) just don't understand how poisonous they are. 70 years ago parents were okay with their kids smoking cigarettes in school. Ads and tobacco and sugar and whatnot are not good for the consumer, but they're addictive and lucrative, so they will be pushed on us until we stop buying.

1. People need to be educated on the manipulative and detrimental effects of ads, which is difficult because

2. There is a concerted effort by advertising companies to keep people in the dark in order to keep milking more money out of them.

Hopefully it won't take a generational mental health epidemic (as smoking was to lung cancer) for us to regulate Big Tech. We're already seeing depression/anxiety/suicide rates skyrocket, teen/young adult quality of life dropping, people of all ages addicted to dopamine gambling systems, etc.


I agree that many (most?) people will be back on their phone within hours, even after watching The Social Dilemma. At the same time, I find Jaron Lanier's comment compelling: even if only a few percent choose to get off algorithm-driven platforms after watching the movie, it is enough to wedge out room for a different conversation, and before long "everyone" knows at least one or two people who have made a conscious choice to distance themselves from surveillance capitalism.

In addition, even though most people will be back on Instagram in no-time, the movie offers a new reference point and some common ground to talk about the undeniable consequences linked to social media, without being instantly labeled a freak or conspiracy nut. Most people need time to digest information like this, and might come around further down the road.


I'm a fanboi! I found this film incredibly powerful (esp. the comments during the closing credits). I've become a real bore when chatting with friends and other parents about this subject. Now I can simply say "watch The Social Dilemma" and return to discussing the weather!

I found the tips on https://www.humanetech.com/take-control very helpful for interrupting my addiction to my phone (particularly the monotone one).


It is great indeed. The only thing that disappointed me was the way they presented the increase in depression data. They mention an increase of 100-and-something percent, but that's a relative change between two small percentages.

Other than that, I think that it brings to light what (possibly) happens behind the covers of those systems, as it's always trying to get you back on the platform. I remember that every time I get a notification about something.

This film and The Great Hack form a great duo, in my opinion.


Thanks for the link! Tristan, one of the main interviewees in the documentary, is a co-founder of that organisation (Center for Humane Technology).

Does anyone know of a good app/plugin that removes recommendations / the home screen feed from YouTube? I find that can be an attention-sink.


I couldn’t help thinking of the movie/documentary itself as an act of manipulation and an attempt to distract from the real issue.

The real problem is the way big venture capital is funding these companies, allowing them to become so big while suppressing all competition in order to concentrate profits. This is what has made these companies into the monsters they are now. In this era of big tech, focus has shifted, IMO, from creating and implementing fair and simple common standards to, among other things, walled gardens and egregious bullshit wherein people don’t even own most of the things they’re paying for.

The act of manipulating others into situations that are favourable to yourself is not something new and has probably been practiced for centuries. The documentary, which I admittedly couldn’t force myself to finish, seemed to gloss over the big money aspect to focus (in the most cringe way) on the psychological manipulation aspect, which made its insights unoriginal and shallow.


I'd wager a movie on a mass distribution platform like Netflix is going to be more effective at making people aware of the problem and help them mitigate its effects on an individual basis... than, say, trying to change the way capital flows.


My point is that it’s easier to make an argument about this aspect of SM than to talk about the money, trace it, and pinpoint blame. Nowadays there is plenty of evidence available about the psychological effects/harm of social media, so it’s easy/harmless to fund this kind of exposition rather than anything that talks about the finance side.


It told me many things I already knew... but it enabled me to summarize them in a new way. Facebook, YouTube, Twitter, etc. all exist to sell your attention to advertisers. This means they want to keep you on-site for as long as possible. The algorithms find the nearest rabbit hole and gently push you in that direction. The hole might lead to a new skill, or just plain fun, or it could be horrible for you and society. They don't care about the effects, they only care about the profits.

Evil isn't the intent, it's just the side effect.


Zuck isn't evil, he just wants to give people what they want and take no responsibility. "If they were not on Facebook they would be on Twitter or Fox News or MSNBC. I might as well get rich instead of someone else."


It was a good explanation of some issues with current social media platforms. But I don't think that ad-driven business models and algorithmic content manipulation are the biggest problems we are facing in all this. The fundamental problem is that we humans aren't wired to deal with information and social connection at this scale and speed. It's too easy for just about anyone to find and confirm anything they want to believe, along with so many others who believe it. On the other side of that coin, it's too easy for anyone to reach an audience with whatever they want.

Focusing on the algorithms and business models is a bit of hubris and a distraction from the harder problem of empowered mobs. I know people caught up in the conspiracy theories. They aren't finding them through Facebook. Some of it is through YouTube rabbit holes, but a lot of it is just websites found through Google (which is really just popularity ranking) or text messages shared between friends & family directly. People have these biases and want to believe these things, and access to these easy tools of connection and communication is amplifying us.

Maybe regulation, moderation, filtering and defensive algorithms can help? Maybe we will learn to deal with it better and society will change to accommodate it, but it's becoming a rough transition at the least.


I think you're underestimating the power of algorithmic manipulation.

Take a simple ecommerce A/B test as a starting point. Should our shopping cart look like A or B? Apply Bayes' theorem and it doesn't take that long to gather evidence that A increases conversions by 1%.

Now replace the simple A/B test with a multi-armed bandit.

Now run the multi-armed bandit continuously across all social media platforms and keep tweaking to maximize sentiment in your favor.

That's even without applying data science to the social network graph.
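The escalation sketched above can be made concrete. Below is a toy Thompson-sampling multi-armed bandit, a hypothetical illustration rather than any platform's real code; the conversion rates, trial count, and seed are all invented. Each arm keeps a Beta posterior, and every impression goes to the arm whose sampled rate is highest, so traffic drifts toward the winner with no fixed test phase:

```python
import random

# Made-up "true" conversion rates: B is two points better than A.
true_rates = {"A": 0.10, "B": 0.12}

# Beta(1, 1) priors: one pseudo-win and one pseudo-loss per arm.
stats = {arm: {"wins": 1, "losses": 1} for arm in true_rates}

random.seed(0)
for _ in range(100_000):
    # Sample a plausible conversion rate for each arm from its posterior...
    sampled = {arm: random.betavariate(s["wins"], s["losses"])
               for arm, s in stats.items()}
    # ...and serve the arm whose sample is highest (explore + exploit in one step).
    arm = max(sampled, key=sampled.get)
    if random.random() < true_rates[arm]:
        stats[arm]["wins"] += 1
    else:
        stats[arm]["losses"] += 1

# Impressions per arm (minus the two prior pseudo-counts).
shown = {arm: s["wins"] + s["losses"] - 2 for arm, s in stats.items()}
```

The parent comment's point is that nothing stops this loop from running forever, with "conversion" swapped for whatever engagement or sentiment metric the operator cares about.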

The "fundamental problem" that you lay out is definitely related. The "fundamental problem" is that we are so susceptible to such algorithmic manipulation.


I understand what you are saying and agree that algorithmic manipulation is a problem. I think it's very profitable for these companies, and people can be manipulated by it. I still think it's a small aspect of the problems of the network & scale of information access/publishing, which are leading to conspiracy content and mob amplification – problems which exist even without algorithmic manipulation being involved.

At this point, just due to the network of EVERYONE using mobile devices to connect & consume content, it's similar to the eternal September problem but at full scale and saturation. Is algorithmic manipulation needed for Fox News & other biased media to reach, feed & manipulate their audience? I think you could delete Facebook from everyone's phones and we'd still have a polarized world with these bubbles, conspiracies and mobs continuing, as long as everyone still has their smartphones.

I also think pagerank and similar popularity-based ranking is just as big of a problem as the algorithmic timelines, because it directly amplifies what we say and want to hear, even without the personalization-of-search aspect. I'm not defending Facebook and algorithmic manipulation; I just think there is too much focus on that aspect of the problem, and I don't think we can fix things with that focus.


Yup, after watching this I figured that we are eventually going to see a law which prevents behavioral management of humans through code; all recommendation and tailoring algorithms will be put into a central, transparent repository with public access and study of their impact.


Algorithmic manipulation happens on platforms outside of social media, it's just slower.

Anyway, algorithmic manipulation isn't limited to just Facebook; the data that enables it is generated on many other platforms.


Like a lot of other people that replied, I enjoyed the documentary because it provided the information in a more accessible way. I think the documentary does a good job explaining the current harms of these companies, but I think the problem can be viewed more clearly through the lens of externalities (https://en.wikipedia.org/wiki/Externality). As discussed in the film, these companies' curated feeds are shaping people's realities and creating a more divided society. As with most externalities, the way to account for this cost is through regulation. I think a tax that increases exponentially with each ad served (over a certain period of time, after which it resets) would be an interesting idea that could adjust the business model away from optimizing for addictive behavior.
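As a rough sketch of how such a tax could be shaped (every parameter below is invented for illustration, not a policy proposal): if the levy on the n-th ad in a window grows geometrically, the total is a geometric series, and the marginal cost of one more impression eventually swamps its revenue.

```python
# Hypothetical exponential ad tax: base and growth are made-up numbers.
def ad_tax(ads_served: int, base: float = 0.0001, growth: float = 1.00001) -> float:
    """Total tax owed for `ads_served` ads within one reset window.

    The n-th ad is taxed base * growth**n, so the total is the
    geometric series base * (growth**ads_served - 1) / (growth - 1).
    """
    return base * (growth ** ads_served - 1) / (growth - 1)

# Early ads are nearly free; doubling the volume more than doubles the bill,
# which is the property that would penalize engagement maximization.
```

Whether the real-world incentives would bend this way is of course an open question; the sketch only shows the convexity the comment is asking for.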


The movie does a great job at explaining the real problem behind our current version of online social networks. We need to start developing much more advanced systems that have the communication tools that Facebook and Instagram have, but also innovate on the financial markets/governance systems that the most advanced nation states have.

The only way out of this problem is to accept that governments and regulators are not going to be able to fix it, and to create a new system that is owned by the people. And as you see from the movie, the chances of us hitting the singularity with an AI coupled to an optimization function that domesticates humans for profit are high. It's probably one of the greatest challenges of our generation, and the hardest part is that, over time, the people you need to change the system are so addicted to the crack it gives them that they won't want to.

I do think movies like this are extremely important for getting the message out though, and I hope it enacts some changes. A few engineers and I are working on a solution at social.network; let me know if you are interested in helping out.


It was well done. It certainly understood how social media works far better than anything I've seen on HN, which is interesting when you are aiming at non-technical users.

The back end minions were well done as an explanation.

It did well to be bipartisan. "Extreme centre" was clever. Their examples were sometimes political but it's hard to know how to have bipartisan real world examples, but they seemed to try.

I think it'll have a far reach and it will affect people, but there's no solution: what would you protest, and where?

"Gasland", "Waco: The Rules of Engagement", "Blackfish", "What the Bleep Do We Know!?" all seemed to be actionable for instance. It's easy to buy guns, or go to a protest or ditch your meds.

For "The Social Dilemma" it's hard to change everyday life and it's hard to legislate algorithms where no one knows how they work.


I think the movie will be pretty major, since it summarizes many points that people following this stuff "know", but in one place. I essentially live and breathe this at this point, and yet it is ridiculously hard to talk about.

You can see it in the start of the show - when people are asked what the problem is. Everyone's brains just stall with the scope and challenge of putting it into words.

It is great at explaining concepts and problems to people, which are very hard to discuss just using conversation.

I was discussing polarization increasing since the year 2000, and the implications of that, with someone. It took me almost an hour of discussion to get an acknowledgement of the possibility.

One look at the Social Dilemma's animation on polarization trends over time? Never have to make that argument again.


Good movie overall.

I also suggest Ten Arguments for Deleting Your Social Media Accounts Right Now, by Jaron Lanier.

Jaron is in the documentary and makes additional points. "Social media is making you a jerk" is a great one.

While he doesn't directly critique online dating, he does mention catfishing and Ashley Madison being mostly bots. If you Google 'FTC sues Match', you'll find bots are very common across the industry.

At least now you can't detach online dating from social media; you need to get your matches to add you on Snapchat, so you never actually meet anyone. Just off pure stats, people are more alone than ever before. By design it doesn't work for most people. And what company would want to lose subscribers?

I found myself an anxiety-filled 'lab rat' when using that junk. I deleted all my social media a year or so ago, and I've had no issue meeting great folks since.

Generally I go out to do things I want to do. Everyone that's joined me along the way, including a fellow .net programmer, has been a bonus.

All that said, I see it getting much worse before it gets better. Social Media is eventually going to be regulated, but it's already destroyed a generation. Rates of suicide have exploded since Social Media became mainstream.


I'm actually worried about the similar issues with Google search results at the moment.

2 examples.

1) The new suggested answers on search results. As an example, search for "is salt bad for you". The top result might be correct, though for other searches I've seen answers I believe aren't. Under that is this new "People Also Ask" section listing all kinds of questions, each with a single answer. AFAIK the answers are just whatever Google's algorithms decided are the most popular. Given there is one answer for each question (and the one answer to the original search at the top), it all comes across as authoritative. I think if you try various search terms, you'll quickly find lots of examples where you vehemently disagree with the answers Google is presenting.

I don't know why the previous style of results felt better, but something about it (the style below the "People Also Ask" section) came across as "here are semi-random answers from unknown sources, so be aware", whereas, to me at least, the new top answers all come across as authoritative, and that scares the crap out of me.

2) Basically the same issue, but if you search for "<some phrase> in <some language>", Google will give you one answer, and in this case they will often claim it is "Community Verified", yet it is often flat wrong. Often the phrase is ambiguous, so there can be no "one true answer" without more context, and yet Google presents the result as "this is the one true answer because it's community verified". This basically proves my point in #1 above. The fact that Google shows these questions with just one answer seems like pure thought manipulation.


They include an example in the movie regarding search results (and auto-complete suggestions) to illustrate that the answer you get is influenced by the algorithm's knowledge of your world view. The example they use is "Climate change is …", where people are shown their "truth". I wonder how long it takes before Google makes a change to the results of that exact search string, hehe.


That's a similar but different problem IMO.

I think I mostly want Google to help me find things. If I search for "windows", I'm 99% more likely to be looking for Microsoft Windows, whereas some carpenter or interior decorator is probably looking for glass windows.

This new issue is Google claiming to tell me the right answer about any topic, when it's really Google's AI claiming to know the answer on any topic. I admit it might be me. The old presentation of webpage hits suggested that Google was not taking responsibility for the answers themselves, only for "search results". The new presentation makes it appear these are official answers. I know somewhere in the fine print Google will claim no responsibility, but the presentation suggests they are taking it.


I was somewhat disappointed that the movie didn't much discuss the downsides of regulating social media, e.g. censorship and slippery slopes. Like other commenters say, I didn't learn anything new, but I'm glad this kind of documentary is out in the open for non-techies to watch and consider.


It will have the same effect the DMCA had on piracy after Napster. That's what will happen. Even if the big companies come clean and start policing, that will just move the extremists who have already built their army onto decentralized encrypted chat platforms, and they will still recruit over those tools too.

You've gotta basically build the entire Internet from scratch, with some kind of government and some kind of identification system, so that if someone comes to take down social discourse they are removed not just from one platform but from all of them, and they can't come back without some kind of appeal. In addition, all political discourse should be required to adhere to scientific and scholarly standards. I have no problem with people who aren't from institutions challenging the status quo, and sometimes that has benefits, but if they aren't armed with a pile of evidence to prove why the status quo is wrong, they should not be amplified, and that regulation should be central.

Until you have both of these things, anything else you do will be a waste of time. It will either become like some sort of vegan alt-lifestyle or morph into yet another set of big platforms. Both are untenable, as fixing this requires the participation of the entire social system.


The movie does not argue for policing content, but directs criticism towards massive data gathering, analysis and micro-persuasion through algorithms.


Well, that's what I'm arguing. I think algorithms have less power than people think they do, even the computer scientists. Social sciences/economic theories that study how crowds work all say that crowds are driven by information. You don't need an algorithm for this; you just need someone in your group to share that information with you, and this is what is actually happening when you look at it and get beyond the data and charts.

People are leaving these algorithm-run places in droves, especially now, because they already believe that the government is working with these companies to track their movements and radical plans. They're already going to decentralized networks, and once that happens you will not be able to control the radicals, just like you couldn't control the pirates after they switched to decentralized piracy networks. This isn't trying to stop someone from downloading a couple of Metallica songs. This is trying to stop the proliferation of fascist/totalitarian thought and foreign actors stoking it up. It's much more dire, and the world of John Perry Barlow is dead. Not to mention the last few right-wing terrorist attacks weren't planned on Facebook, nor were the attackers radicalized there. Neither was QAnon. That all came from 4chan/8chan/etc. There are no algorithms there, no AI, and anyone could basically code something on that level of complexity in a day.

Information should not be free. It should have limits. Once such information threatens the proliferation of a free society and brings people back towards totalitarianism, the slippery-slope argument ("first they take away the press and then we get Hitler") falls flat on its face, because large swaths of the population freely gravitating towards Hitler gets you the exact same effect. And besides, a society trying to regulate information for its general health and a fascist dictator doing the same are completely different things.


> I think algorithms have less power than people think they do, even the computer scientists.

Algorithms are powerful because they can cause tiny, incremental and often completely unnoticeable changes in opinions and perception. These changes add up when they affect large enough populations. I agree that people who gather in groups and share information make up the bulk of this equation – algorithms do not operate in a vacuum.

I do think that the negative effects caused by algorithms are largely unintended consequences. Profit is the motive, not malice.

> People are leaving these algorithm-run places in droves especially now because they already believe that the government is working with these companies to track their movements and radical plans.

A tiny fraction of the technically literate escape the big platforms; the rest stay on, even though they are somewhat conscious of the negative effects and ruthless business practices.

> That all came from 4chan/8chan/etc. There are no algorithms there, no AI, and anyone could basically code something on that level of complexity in a day.

Many people joining extreme communities have been nudged by algorithms, especially Youtube's. What happens from there is usually just plain old groupthink and tribalism, feeling the comfort of finding a community to belong to. The point is that those nudges from suggested videos gently push you down a rabbit hole you otherwise would not be exploring in such a rapid and captivating manner.

Like they say in the film; algorithms are not evil on their own, they just tend to enable and amplify some of the worst tendencies in people who know how to exploit this tool for their own gain, political or otherwise.

> Information should not be free. It should have limits.

I disagree. What I do think is that the many "information outlets" should be held responsible for their editorialization. Newspapers, TV and social media platforms all editorialize; only social media has left this task to the algorithms – which is rather careless and naive. These companies have indeed moved fast and are now breaking things.


> Algorithms are powerful because they can cause tiny, incremental and often completely unnoticeable changes of opinions and perception.

Stafford Beer predicted in 1972 that increasing variety without regulatory variety to combat said variety would send society towards catastrophic collapse because there are so many possible states in the social system it becomes as complex as things like weather and wave formation.

https://archive.org/details/DesigningFreedom

But we got the American Skinner Box model, mixed with heavy doses of hipsterism instead.

> A tiny fraction of the technically literate people escape the big platforms

I suggest you look into it a bit more and read about the associations between people like Nick Land, Milo Yiannopoulos, Steve Bannon, etc. While it's true some of it was done by social media, conservative media has always been a close-knit juggernaut hype machine, ever since Rush Limbaugh appeared in the depths of AM-radio hell. To them, it's merely a faster way of organizing the way they have for years, because there isn't any cost: they no longer have to print 5000 copies of something. It's low-hanging fruit.

Most of the alt-right are ex Ron Paulers who were already into things like Bitcoin through the Libertarian Party since around 2012. They never "escaped the big platforms"; the platforms were always recruitment points for normies, which doesn't even require them to do anything but promote mainstream conservatism and then say "go here for more stuff" that can't be tracked by Facebook's AI. Maybe your grandma hasn't "escaped", but your grandma probably isn't on the streets shooting black people or lefties. Bitcoin was also transacted between these Russian/American groups, according to the Mueller report.

> Many people joining extreme communities have been nudged by algorithms, especially Youtube's. What happens from there is usually just plain old group think and tribalism, feeling the comfort of finding a community to belong to.

The algorithms merely Bayesian-filter a giant database of what people like and feed it back to them for increased engagement. These people would have chosen such content on their own. The fact that content exists which plays to their real views (not the views they show in public), views that have existed for hundreds if not thousands of years and are passed down through families (if your mom is a Dem or Rep or KKK, you're going to be a Dem or Rep or KKK 80% of the time), legitimizes the worst of human behavior that was previously socially unacceptable.
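The "filter what people like and feed it back" loop can be caricatured in a toy model. This is not any real platform's ranking code, and every number is invented; it only shows how a mild initial leaning, combined with a ranker that purely exploits click history, snowballs into a one-sided feed:

```python
import random
from collections import Counter

random.seed(1)

# A mild initial leaning: 12 fringe clicks vs. 10 mainstream clicks.
clicks = Counter({"mainstream": 10, "fringe": 12})

def recommend() -> str:
    # Serve whichever topic the click history favors (pure exploitation).
    return clicks.most_common(1)[0][0]

for _ in range(1000):
    shown = recommend()
    # Assume the user clicks whatever they are shown 90% of the time.
    if random.random() < 0.9:
        clicks[shown] += 1

# The two-click edge becomes near-total dominance: "mainstream" is never
# served again, so its count never moves.
```

Real recommenders are far more sophisticated, but the runaway dynamic both commenters describe is visible even at this crude level.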

But algorithms don't do that, communication without moderation does. You aren't going to get a Trump supporter to start watching CNN or MSNBC, and if the satellite stopped carrying it they would cancel their subscription and go somewhere else. So if the Internet offers media that has their views, they will seek it out and if the algorithms amplify this by sharing what other people in their social circles say, even better according to them. Saying that these people were innocent and normal before Facebook came along and made them radicals is just flat out wrong. They can just say the n word to each other with millions of people instead of having to hide it among close friends like they're smoking weed or something.

This polarization was happening long before Facebook and the Internet even existed, especially during the Clinton administration and the things that got Tim McVeigh to bomb Oklahoma City over Ruby Ridge, Branch Davidians, and the Assault Weapons ban. That's who these people are, always will be, and giving them a communication platform without strong information control is asking for trouble.

You aren't going to get away from business interests, religious interests, or racial interests it's basically impossible. You just need to create a sense of civil society via the web through ejecting bad actors from the public square. I know if I walked into a gay bar and started yelling homophobic slurs I probably wouldn't make it out of there alive, and if so I'd be kicked out for life with a nice bouncer at the door to greet me if I tried again. The Internet has no such protections.


People and groups with strongly held beliefs are not the most affected by algorithms, as you are pointing out. The worst-case scenario is that their echo chamber becomes more impenetrable or that they become even more extreme.

Algorithms do the most damage to people not currently holding any strong beliefs about a given topic, but who do have some vague leanings one way or another. These people can be swung, and their views amplified and radicalized, without much effort or financial input, as the film The Great Hack makes a good case for.

Google, Youtube, Facebook and Twitter (who all offer fine-grained ad targeting) can be weaponized to push "normal" people towards the extremes, pit groups against each other and affect democratic processes in a big way. This was Cambridge Analytica's business model, and it worked really well.


I think a lot of people confuse the appearance of a lack of strong beliefs with simple social filtering. There is no social filtering on the web because you don't have to worry about your reputation.

The thing people try hardest to prevent happening to themselves is social isolation and becoming an outcast. People will lie all the time just to get along with the crowd. You wouldn't know who holds strong beliefs unless you found out what was in their library. On the web, the filters are removed and everything is laid bare for the whole society to see, and it's ugly, and it always has been.

"In 1981, former Republican Party strategist Lee Atwater, when giving an anonymous interview discussing Nixon's Southern Strategy, said:[28][29][30]

You start out in 1954 by saying, "Nigger, nigger, nigger." By 1968, you can't say "nigger" – that hurts you. Backfires. So you say stuff like forced busing, states' rights and all that stuff. You're getting so abstract now, you're talking about cutting taxes. And all these things you're talking about are totally economic things and a byproduct of them is [that] blacks get hurt worse than whites. And subconsciously maybe that is part of it. I'm not saying that. But I'm saying that if it is getting that abstract, and that coded, that we are doing away with the racial problem one way or the other. You follow me – because obviously sitting around saying, "We want to cut this," is much more abstract than even the busing thing, and a hell of a lot more abstract than "Nigger, nigger."[31]"

https://en.wikipedia.org/wiki/Dog_whistle_(politics)


This is a great point. At the same time, I observe a night-and-day difference in how some people close to me behaved before and after they had access to Facebook and Youtube. Pre-social media, I could have a long and rather nuanced discussion on controversial topics, and we both left the conversation with slightly altered opinions and some new perspectives. Today, all nuance is gone and the conversation is scattered all over the place, spiced up with whatever conspiracy theory they "discovered" on Youtube lately.

Maybe one of the effects of algorithms has been to push people to extremes to such a degree that social filtering is discarded? People I know seem almost evangelical in their approach, as if their lives depended on convincing people of their views, with no care for the social consequences of constantly "preaching". This form of polarization is not healthy – basic respect for other people is lessened and listening to counter-arguments is considered a weakness.

This is just an anecdote from my life. From what I read and hear from others, this experience is not unique.


Doubtful it will have much impact. I don't buy the central premise of the film – that social media is so addictive, and people so easily manipulated by it, through the power of 'genius' algorithms working against vast stores of data. There is no mention of free will. Are we really so easily controlled? If this technology were so powerful and able to manipulate people, how are we not using it to end racism, fight climate change, teach basic math skills, fight obesity, etc.? The reason is that the technology isn't that good, and at its best it suggests things that appeal to our base nature. It doesn't control us or even come close.

Also, the fucking hubris of these people to think they really had that big of an impact is a bit much. Free will and human nature are real things, and social media isn't that powerful. And with free will comes personal responsibility – this film seems to suggest that we are such sad victims, and that the problem isn't us but the powers that be. I'm so weak that I can't help myself, so it must be social media's fault. Give me a break.


In every conversation like this I always bring up Cathy O'Neil's excellent Weapons of Math Destruction (https://www.bookdepository.com/Weapons-of-Math-Destruction-C...). I think it's an ethical must read for anyone in tech.


I also enjoyed Robert Elliott Smith's Rage Inside the Machine (https://www.rageinsidethemachine.com/) which is in a similar vein.


I personally never understood the value of ads. I instinctively turn off my brain and press skip whenever I encounter any kind of ad, because I know it was made to influence me into taking an action (and I don't want to waste any mental energy evaluating whether I am being tricked or not). And most ads are bad:

https://twitter.com/JonErlichman/status/1304793136494006272

So to me the whole thing simply doesn't make sense, who actually did buy something from an ad?

The only signal I value for trying out (buying) new things comes from real people.

The privacy implications of storing so much personal data are the other question; that data can be used for bad things beyond just serving ads. It would be nice to have a law mandating differential privacy for ad-based social networks.

--

Perhaps I am losing out by ignoring all ads wholesale like that. I even sometimes put products that do 'a lot' of advertising in a bad light and try to avoid them altogether.


I feel the same. Oddly enough, my wife is in marketing and looks at every billboard we pass. She sometimes mentions one and I'm like, "what billboard?", because I hardly notice them.

When it comes to online ads, if I get a very annoying ad or a pop-up on a store's website, I'm out and taking my money elsewhere. I also generally try to seek real-world reviews, but it is hard. So many Youtube videos, bloggers and even comments are just paid advertisement; it sucks.

I remember early days of the internet when the information was decent before the MBAs figured out how much money could be made from it.


I think it’s worth being humble and admitting that it is possible for ads to work on you, even if you think not.

Here are some examples:

For mass consumer products, let's say a smartphone: if an ad is shown every day, you may not click it, but that smartphone might now be on the list of brands you'd subconsciously consider.

Other ads are more direct. I see a lot of start-a-business ads that are very compelling. I ignore them, or watch them out of interest in HOW the ad persuades, and I can see why people would click them. They often offer a free thing that seems quite valuable, to prove themselves – a marketing tactic as old as the hills.

There are probably lots of other examples. Ads are powerful, and it's not hard to understand how they work. The human brain is a great ad blocker in general, but it's imperfect enough that the EPM of ads still makes them worthwhile, as evidenced by how hard it is to pay less than $1 a click in most places.

So know your enemy! ;)


I'm exactly like you, but I think your comment won't find an echo here where most people "need" ads to promote their businesses.


Many people buy things after seeing ads. I was running an e-commerce business where the majority of customers came either from direct ads or from indirect advertising like influencer marketing. It's a great way to gain new customers for a product business (although in my view the "lifestyle brand" space is pretty saturated now).


Also, there is nothing wrong with advertising as a concept. Advertising has existed as long as products have. The problem is tracking, and the collection and exploitation of data about some very granular aspects of users' lives.

Also, even if you take away advertising, the problem of addictive apps and 'persuasive' design will still remain. Tackling that is a whole different discussion altogether.


Guess I'm missing out by ignoring and blocking all the ads I see. I do think there are better ways to conduct business than basing an entire business model on ads.


I was surprised that given where the movie went, there seemed to be little discussion of the idea of banning personalized advertising. While it would be difficult to stomach for the companies who've thrived on that business model, the social consequences and business incentives of personalized advertising are corroding our society.


Read Bernays (or the Creel Report if you can find it) for an account of how personalised advertising and manual social graph exploitation worked about a century ago.

Of course, back then, they targeted with much less data. For comparisons from half a century ago to now, consider https://opendatacity.github.io/stasi-vs-nsa/english.html .


Never heard about the Creel Report before, but here it is: https://archive.org/details/completereportof00unit/page/8/mo... Thanks for the tip.


Just about nothing new wrt content.

I remember I kept thinking how naive, unimaginative and ignorant of history these people must be if they genuinely never thought critically about what they were doing – if no one ever saw any potential for abuse.

That, and the fact that they kept pushing unfounded conclusions: for example, that a correlation between diagnosed mental issues and the appearance of social media points to a causative relationship, or that this thing is so "new" and "different" from any other tool that has played on weaknesses inherent in the human mind that we just can't deal with it, etc.

I tend to think that proper education (including critical thinking skills such as logic and cognitive biases, and critique of media), as well as defining privacy more clearly as a human right (including explicitly stating that data subjects own data about themselves) and enforcing it hard, will solve many problems.


There's a great article with more technical exposition by Jeff Seibert, one of the interviewees: https://medium.com/@jeff_seibert/the-mechanics-and-psycholog...


I found it to be persuasive, but the after-school-special vibe of the scenes between interviews was corny.


I suspect there's a growing market for summer camps and other forms of tech hiatus. I would gladly subscribe to a service that forced me not to use social media or whatever, since I don't think my own brain can be trusted with that.


I think the presentation was kind of silly, to be honest. I liked the talks with the experts, etc., but the dramatizations felt so forced and silly.


It is a bit silly, but at the same time an effective way to illustrate some very technical concepts to non-techies. Judging from the response I got from my partner, it did the trick.


> What will the cultural impact of this movie be?

A few months back, a bunch of big Facebook advertisers (Nike, etc.) pulled back some of their ad spend to punish Facebook over its stance on certain cultural issues. What happened to Facebook? Nothing... revenues actually went up...

Facebook is an incredible tool and platform. No platform (none amongst the FANGs) has the same capabilities for tailoring your internet experience that Facebook does. As an advertiser, this is incredibly powerful: being able to put a targeted, tailored ad in front of exactly the demographic you want. No other platform even comes close.

The cultural impact will be zero. The dollars speak for themselves. FB users keep using the platform and advertisers will keep chasing them on it.

The only way to put this in check is for government to step in and put in a comprehensive set of laws and rules that governs how your data is shared. A market-driven response to Facebook's insidiousness will not happen by itself. Government needs to nudge it forward with a thoughtful set of policies that promotes competition and makes these big companies liable for the false information they promote.


The Facebook boycott was complete hypocrisy. It was framed as being in support of BLM & similar movements despite:

1) The companies involved in the boycott were often the ones with a terrible track record regarding their own treatment of minorities, so it was absolute bullshit virtue-signaling.

2) We were in the middle of a pandemic where the economy had essentially shut down, so it made total sense to pause advertising for the time being. This was purely a financial decision, framed as some kind of social-impact action by some terrible and disgusting companies.

Of course, as soon as the market started recovering and they saw a need to advertise again they were back in full force as if nothing happened.


I think everyone should watch it. I think a lot of good could be done on a platform like Facebook if its goal weren't to get you hooked, and if it had real moderation of content.

The proliferation of fake and misleading information has clearly caused a lot of harm to the United States and I'm sure other countries as well.


I think it does a great job of explaining to non-techies things I have explained for years without luck. The biggest points are about kids and about explaining echo chambers. I live in a very polarized country (Argentina), where everybody just follows the media and the people on one of the two sides.


Like most people here, I was already aware of the technical side (and I'm guilty of implementing several of the features such as infinite scroll).

The majority of users aren't giving up their addiction. I think that's a given. But I do hope parents are more aware of the effects on children.


It should be called something other than 'The Social Dilemma', because Netflix also does this to make sure people watch nonsense as much as possible...

Every company hiring the best algorithm developers wants people to spend as much time as possible on their 'thing'.

Not sure what to call this, though...


In the first 10 minutes of the movie they talk about what happened after what seemed like a revolution at Google. It was essentially nothing. People will not care due to how convenient staying the same is. It's sad really.


I think the main takeaway was the influence on children. I don't think there's precedent for providing kids with constant access to dopamine-driven, socially adjacent experiences. At least with TV and video games, they're finite activities with time limits (hopefully).

Yet giving a pre-teen a phone where they're constantly being spammed with addictive content is dangerous. I can't even imagine all the psychological and developmental implications.

In terms of adults being addicted to apps and their data being used for ad-targeting...eh. User beware. I think we have bigger issues facing us.


I do agree the main problem is the influence on children.

As for the adults, I believe the biggest takeaway is not advertising but the effect on information propagation. I have seen people close to me share clearly fake information that wouldn't have made the cut on most traditional media outlets.

People are giving way too much credibility to information that cannot be traced to a proper source. In the past people relied on the media to do the research, as biased as it may be. Now that research phase has disappeared.


I watched a little this morning with coffee. I am finding myself wanting to skip the scripted parts and just wanting to watch the interview parts. I'll try again to watch it when I have more time.


I wish they had presented more solutions. The way I see it, infinite scroll feeds generated by algorithms should be outlawed. Same with Like counts. (It seems Instagram has begun testing this https://techcrunch.com/2019/11/14/instagram-private-like-cou...)

Facebook wasn't that bad when you had to visit each person's wall or when the feed was in chronological order.


IMO a bigger problem is the effect these platforms have on children. There should be systems in place that extend the regulations that exist on TV to the social media experience.


I am thinking a lot about this problem space and trying to integrate some of my ideas into my project (https://www.confidist.com). I am also interested in solutions and alternatives.

Centralization of power

I think this is the difference between social media platforms being a good way to share information with friends and family, and major social media platforms dictating culture through self-interest. Taxing data collection and storage is a good idea presented by the film. I believe social media companies should stay smaller in size and scope. These companies should make guarantees and hold each other accountable, through regulation and organizations, about data usage and about adopting direct-to-user business models.

The attention economy

Again, I think some regulation is needed here. In addition to looking closely at infinite scroll feeds and the status information that is displayed publicly, we also need to look very closely at the nature of notification management and data ownership. Taking email as an example, we must have an accessible way to unsubscribe from everything. In parallel, all platforms should be required to have an accessible and straightforward unsubscribe-from-all-notifications toggle. The obvious attention loophole of just creating new notification categories to continuously alert users needs to be closed. And lastly, data ownership: all social platforms that collect data not only need to be accountable for the money trail your data takes, but should also allow users to export their data in a usable format. That means I should be able to export all my friends and connections from Facebook and delete my account. This would prevent the psychological gotcha that plays on the fear of losing what you have built on a given platform. It is also anti-competitive toward other social platforms if data is not easily exportable. All of these companies know this and make it very difficult.

Intermittent Variable Rewards

You hit on this with the infinite scroll, but we can likely do more. We need to help people be patient again. Guess what? Notifications can help manage our time and attention instead of hurting them, if used narrowly and as a service to the user. For example, if I make this post on Hacker News and have notifications disabled, I'll probably keep checking back to see if any updates occur. That involves the intermittent variable reward. However, if I know very clearly that I'll receive a notification when an update occurs, I can respond exactly when I need to. Companies like Apple and Google, which manage device notifications, should be responsible for their intermittent variability as well. We should require the ability to have "notification digests" instead of immediate notifications. We know notifications can cause traffic accidents, but they also waste a lot of time and are rarely all that important.

Echo Chambers

You spoke of the like counter. We all know status is a huge part of social media, and it is especially important to young people. Viewpoint diversity and acceptance play off of status as well. So how do we help remove the perception of our social media status? In the film, they spoke of requiring a minimum age under the law before social media is allowed. I think that is a good first step before we fully understand how to make improvements through the user experience. I am requiring 18 and over.

What I am doing with my platform is to privatize as much interaction as I can, balanced with moderation and a rating system that is not displayed back. Sometimes good old-fashioned one-on-one communication is the way to go. If that can be used more, in a distributed way, as a substitute for one-to-many interactions, I think that goes a long way toward easing the social pressure of the group.

I think special attention also needs to be paid to each type of media: pictures, videos, text, virtual reality, augmented reality, and more. I isolate interactions to text only, similar to here on HN. This removes social pressures and "shiny objects" from easily manipulating our attention, and it prevents every contact with another person from being about appearance and prejudices. But it also allows users to fill in the unknowns with blanket assumptions: if you are just a username, and maybe you said something I don't agree with, maybe you are everything I dislike in the world wrapped into one anonymous internet persona? To try and bring a little more empathy into the picture, I am trying out showing various attributes users have in common when they interact, as part of an introduction. In a virtual reality social situation we could show emotions from users in a more effective way without revealing their identity, which is pretty neat.

Pictures, especially of users, can enable targeted objectification and the hyped, unrealistic comparison with our peers that most often affects younger women. We should have regulations around pictures and videos that require a visible indication that a filter was used. I think allowing young people to share pictures and videos of themselves will become less of a problem the more decentralized social media becomes. We need to change the interaction from "wow, did you see the <media> of x on y?" to "Hey, I posted x on y, the niche platform I use, check it out". Sharing pictures and videos needs to be more of a niche hobby, and less the thing people need to do to be relevant.

Echo chambers, I think, also relate directly to growth and growth hacks. People interact more, and enjoy interactions more, when they are validated. 8 friends within 11 days. This gets users focused on that status number: the # of friends. Or maybe the # of connections, which made LinkedIn explode. Guess who people mostly know, and who can help grow the platform? People that are very similar to each other and that generally like each other. This is great for a niche platform where we stay connected with close friends and family, but it is a disaster for a platform that aims to connect everyone and is also a way to share news, opinions, etc. What about platforms that aim to form communities or groups based on a common interest or identity? People love forming their echo chambers, and I don't think there is anything we can do about that human desire. What we can do as system engineers is use our algorithms to promote diversity of thought and perspective within and among groups.

On my platform Confidist, I am working on a design for a connection system called "Orbits". Instead of encouraging connections with just about anyone you encounter, I want to leverage the data that users have volunteered to promote, similar to a dating platform, the diversity and health of your Orbit. This might equate to having multiple strong foundational connections, such as being introverted and from a rural community, while prioritizing other attributes that you have not yet been exposed to, such as someone who is socially liberal. Additionally, I created a spotlight event system that is tied to experience-based rewards. As the system designer, I can cross-promote these events from one community to another. The site can also encourage a spirit or culture of openness.

I did not touch on solutions and alternatives to the advertising model; however, I think plenty of people are experimenting. I am interested in hearing what others think are the most successful alternatives. But to me, the key is staying relatively small in size and scope, prioritizing the user directly with your business model, and being very conscious about how you grow and regulate your system. Don't make something you wouldn't want your kid to use.


It reads like you have put a lot of time and energy into thinking through the issues and possible solutions. Here is a tip: as a first-time potential user, I find the site gives off an awfully complex vibe, and it's hard for me to get the layout, what I should do next, etc. Compare that to the Facebook/Instagram sign-up / onboarding / first-steps experience. I suggest enlisting a UX researcher/designer who has done this before to tackle the problem.


Agree, thanks for taking the time to check it out and give some critical feedback. Much appreciated. Any UX'r who wants to help please email me info@confidist.com.


I see that Netflix's info page calls it a "documentary drama hybrid". That's a weird but accurate description, because as a documentary it's too biased and makes me feel that it tries too hard to push its agenda on me. I agree with most of its points (I deleted my Facebook account several years ago), but the way this "docu-drama" presents its points makes me unable to take it seriously, because there's just too much drama in it.


While I was watching it, I was thinking of ways to fix these problems from the user's perspective. It seems to me that users could employ the same technology to block these effects and help themselves avoid falling into these mind traps. Their use of this tech wouldn't be bound to others' interests, like selling ads, so maybe the system's optimization could work in their favor instead. But I don't know how a system like this could be implemented.


It was mentioned on my Nextdoor feed, so it has clearly reached even the tech novices :)

I haven't watched it myself yet, but I'm excited that it's getting such reach.


Honestly, I don't think most people care about privacy or advertising. For me, the biggest argument against SM (incl. HN) is the sheer amount of time it sucks up for very little long-term benefit. I complain that I don't have time to visit friends or exercise, but somehow I manage to spend way too much time on my phone, and that's even without Facebook or Twitter.


"We are more profitable to a corporation if we are spending our time staring at a screen, staring at an ad, than if we're spending our time living our life in a rich way." (Justin Rosenstein ~1:25:30)

This is ironic, because what is best for all of us as a whole, and what is best for each individual, is that each of us be productive, helpful, and loving to one another.

However, what is more profitable in the short term for a single person (or entity) is often destructive to others (a drug dealer is a good example of making a profit from the destruction of others).

Instead, we have to care about the long-term benefit and health of each person (our friends, family, community, and all humanity) and not just amass some money for ourselves so we can live selfishly.

Also, to be clear, I believe capitalism is a vital part of that - when it works with a win-win focus. (I innovate to provide a good and beneficial service that helps you, and you help me by providing me with resources I need, etc. - big win-win for everyone).

However, when we turn it to pure profit and divorce it from the human side of working together for mutual good, that is when it destroys instead of empowers. That is exactly why that quote above struck me.

If we use social media to enhance our human relationships, then it is beneficial. But if all we do is stay up late at night watching video after video of emotionally triggering content, then it's time to hit the delete button.


I doubt this will have the impact we wish it could have. Best case scenario, some people will moderate the number of notifications they receive. Other than that, I can't see anyone deleting their accounts, because social media is so interwoven into their everyday lives.


I read Shoshana Zuboff's book Surveillance Capitalism, which was the jolt I needed to quit Facebook. Prior to those reveals, I was aware that the FB experience was more dark than fun; the reveals of the exploitation just nudged me into leaving, because I don't like being exploited.

Then I found and joined HN. It's a better experience. I think the guidelines make it so. They aren't 100% followed 100% of the time, but the intent makes HN a vastly better experience than FB.

https://news.ycombinator.com/newsguidelines.html

Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.


I'm tired of how, every few months, we all have to rediscover that our complicated obsession with social media and the resulting data-collection-based advertising is leading to a total loss of individuality and privacy.

We (society) need to know these things and act accordingly.


I think this film can do just that, but for non-techies. We can't really blame non-technical people for not understanding or caring about these things, when the underlying mechanisms are of such a technical nature. That is why I really liked The Social Dilemma; I can watch it with family or friends and make them aware of the negative effects of algorithms that are less obvious from a regular user's perspective.


I bailed on the documentary after the first few interviews. The novelty of peeking inside internet giants was lost on me. It just felt like awkward confessions from some of my peers with ethical second thoughts.

I also feel like one method was singled out amongst others: the Facebook algorithm, which exploits its proprietary model of the links between individual humans to boost signal. I don't use the product myself, but I have to ask those who do use it regularly: how can any of Jaron Lanier's darkest predictions come as any surprise?

If you are in the mood for a free doc feature that will utterly blow your mind, I can sincerely give a high rec to The Real Story of Paris Hilton. It's actually a real-life horror movie that I found to have a similar energy to Satoshi Kon's Perfect Blue. But non-fiction ;)

https://www.youtube.com/watch?v=wOg0TY1jG3w


I think it's shallow, naive, simplistic, and achieves probably nothing, except making this particular narrative a more attractive social media loophole, and more money for netflix. It's based entirely on false premises.


Why was Netflix not there? How are they different from other addictive platforms?


Netflix is also guilty of using algorithms to recommend content, but to be fair, they do not gather nearly as much data on your behavior and life in general as, e.g., Google and Facebook/Insta/WhatsApp. They also have different incentives, as all of their users actually pay for the product and are not shown ads to the same extent (only product placements in movies, as far as I know).


Funny enough, of all the platforms I take part in (Facebook, Twitter, Reddit), Netflix is the one with the most notifications and emails. I imagine that's simply because I don't find them as offensive as the others, so I haven't neutered them yet.


There's no UGC (user-generated content) on Netflix.


There were many things in the documentary which I honestly did not know. Very educational and presented brilliantly.


I don't know what the cultural impact of this documentary will be since most won't care to change their behavior one bit. People are building walls around their echo chambers and are protective of it.

I was well aware of most of this and had been complaining lately about how I don't see a good ending in sight, considering how much growth there seems to be in the extreme right and extreme left. I've been finding it harder and harder to have conversations with many people I know, or even family, because of their extreme stances on many of today's core issues. Most if not all of their positions, or perceived understanding of these issues, come straight out of their Facebook echo chambers. They started with an ignorant stance and had all their thoughts and ignorance echoed and amplified back at them, empowering them to feel even stronger about it all. My feed is peppered with propaganda simply because I still bother to comment to family, trying to share facts, trying to pull them back a bit. How can they not know better? Right... the information never reached them, because they relied on social media to get the initial news, and once suckered in, they only sought to confirm their biases.

There's no questioning why we're in the world we're currently in. Nothing about this is normal, especially when we claim to have access to information.

I know better, yet I still find myself scrolling my feed robotically. I had just scrolled it a bit ago, but found myself scrolling it again. I didn't plan on it; I just had a blip in focus while I was watching something, and my newly programmed behavior was to pick up my phone and start scrolling.

When they say kids' mental health is severely affected, I'm not surprised one bit. I see it. My nieces and my gf's nieces are all hooked. I only see my nieces once or twice a year, since we live pretty far from the rest of my family, but I was shocked to see they knew how to find content and how to operate my brother's iPhone before they knew how to read. And if there was a YouTube video they talked about, they knew how to find it again. Again, they couldn't read or spell yet.

My girlfriend's older nieces are locked on TikTok. They determine how they feel based on others' perceptions of their online accomplishments. All these kids are trying to become influencers. The feuds that arise: who collaborated with whom, who commented what on whom, who bullied them, etc. Bullying in schools is nothing new, but at least it used to have a schedule. You got bullied in the hallways at school, or around town, or during recess. Now there's no turning it off. Kids get bullied around the clock, and some of it leaves a permanent mark online for every other kid to see. That's not healthy for kids. They don't have a safety net once they get home.

Politics is a clusterfuck right now. One can't keep up with events while trying to find the actual facts on everything and not simply believe everything being targeted at them. And we hope to have positive change soon? Come on. As long as people can be bought, this is not going anywhere.

The one thing the documentary didn't touch on (and I guess it's the cause of all this in the first place) is that our capitalist system is unsustainable. Everything must grow, indefinitely. Every quarter, companies must meet their growth targets or take a hit. At first you work on improving production and cutting waste. Then you optimize every other aspect you can. Eventually you cut corners; eventually that's not enough anymore, so you outsource everything and move production to the cheapest place you can. Then all these tools are available to market your products in the most targeted way possible. Facebook et al. don't care what you pay attention to, as long as you pay attention and they can throw ads at you. Companies will pour as much into this as they can get out. They'll squeeze every drop out of that lemon.

This whole system is like one big dirty coffee filter being wrung out too hard. Eventually it'll rip and people will fall out; civil war is definitely not that far out of sight. I think they touched accurately on the big picture and where things are headed if they don't change. What's sadder is that we know it, we see it, and we're unwilling to change it, because that would drastically affect the finances of too many parties, and there's one thing that rules above all: the almighty dollar.


Do you think watching The Social Dilemma with family or friends could help spark a conversation about how they are being manipulated by "Big Tech"? I plan on doing this with someone who has been greatly affected by the algorithms, and although I do not expect an immediate change, I do expect it to spark some new thoughts and questions.

Another comment here mentioned this site as a good starting point for taking tangible action against being influenced by algorithms: https://www.humanetech.com/take-control


I think it's a great way to start the conversation, as most will likely see behavior in there that they can identify with or see in their immediate family. The information is also presented in a way where everyone should be able to understand what's at stake.


[flagged]


As I mentioned in the first sentence of my post; "For those who have seen the film on Netflix already; what are your thoughts?"

It is a movie / documentary[0] which raises questions about the combination of massive amounts of user data and algorithms, and it does so in a way that most people can understand. Recommended :)

[0] https://www.imdb.com/title/tt11464826/



