Google is apparently taking down all/most Fediverse apps from the Play Store (qoto.org)
1313 points by mynameismonkey on Aug 28, 2020 | 935 comments



The rationale they gave is that hate speech appears on these apps, because some of the microblogging sites that can be accessed via Fediverse have this kind of content. Based on this rationale, I look forward to Google Play removing Chrome, Firefox, and all other web browsers from the store as well.


This sort of decision by Google does make me rather uncomfortable (the entire situation is uncomfortable... https://www.theverge.com/2019/7/12/20691957/mastodon-decentr...). But it's worth understanding why the situation may be a bit more complicated than is described above. What seems to be happening is not an absolute ban on Fediverse apps, but a ban on specific implementations that make it easy to join specific communities which encourage hatred and real-world violence. Other implementations block these instances, and I believe are not banned.

Whether or not this is a good thing is a complex question. If you happen to be the target of this hatred and violence, and feel it is an existential threat to your livelihood, you might believe that it is a good thing to make it more difficult for those who are engaging in this behavior to enlarge their communities. On the other hand, if you believe eliminating communities by platform fiat is an existential threat to your livelihood, this may seem like a very bad thing.

(You might also think it's hypocritical, since you can access most of these communities via a browser. Google also controls the browser, and does make it difficult already to access some sites https://developers.google.com/safe-browsing/v4 . However, it does seem to have a higher bar for browsers than for social apps (e.g. malware, csam, iirc); some have suggested that there are legal reasons for this, I'm curious to learn more on this, but I have not seen any substantiation yet.)


This justification still implies Chrome & Firefox also ought to be content-aware censorship machines.

This is grossly unacceptable. Apps need some safe harbor too. Apps cannot be responsible for every possible use of the app.


It's a lose-lose scenario for content providers. Lose if you censor (https://news.ycombinator.com/item?id=19274406), lose if you don't (https://www.reddit.com/r/videos/comments/artkmz/youtube_is_f...)


I think it's not apples to apples: one is about censoring applications, the other is about not censoring videos.


This isn't quite safe harbor. It's not like the app was removed for one user posting one piece of bad content. If what the poster above said is true, it's closer to an app having a user who regularly broke the rules and refusing to ban that person.


Agreed that safe harbor isn't really what's at stake.

I disagree with your comparison. This app can connect to arbitrary domain names. It is getting blocked because the developer is not proactively filtering the list of domains a user can connect to.

That's wild & I can think of zero precedent for it.


I'm not sure what you mean by justification. I think I simply lay out some context and a set of conflicting perspectives.

That said, if you don't want Chrome and Firefox to be content aware, then you should argue that safe browsing should be eliminated from Firefox and Chrome. That is a self-consistent position, but it may not be consistent with e.g. avoiding dramatic growth in botnets, ransomware, organized crime, etc.


Actual safe browsing comes from content-unaware tools like NoScript. And yes, I did spend half an hour going through about:config and neutering everything related to 'Safe' Browsing(R)TM(C)LLC.


> but a ban on specific implementations that make it easy to join specific communities which encourage hatred and real-world violence

So basically, Google only supports the Fediverse if, like itself, it engages in censorship. The Fediverse exists not to encourage hate speech, but to discourage censorship. Hate speech is the inevitable result of allowing humans to say what they like. Some people will choose to be nasty. Many people believe the greater good is the free flow of information, and that adults are more than capable of filtering out and avoiding those information sources which make them uncomfortable. Instead, Google wants to treat everybody like children, and be the helicopter parent that swoops in and removes anything objectionable.


>but a ban on specific implementations that make it easy to join specific communities which encourage hatred and real-world violence.

As you state, one can access these specific communities in a number of ways, including Google Chrome. If the community is the issue, go after the community, not an ActivityPub app that can access content from these and other communities.

Should Google also ban RSS reader apps that don't actively block RSS feeds from sites Google doesn't like?


Oh, please don't suggest banning RSS apps - Apple is already doing that: they removed Pocket Casts and Castro because they allow access to podcasts that offend Chinese censors, while Apple's own podcast app remains because it blocks those particular podcasts:

https://www.theguardian.com/technology/2020/jun/12/apple-rem...


> Apple is already doing that, they removed Pocket Casts and Castro

In China. That is an important note that you left out to make Apple seem worse.


You say they're making Apple seem worse, but "They're only censoring the Chinese" doesn't really make them seem any better.


Different countries have different cultural norms and laws.


And some of those norms and laws are different in the sense that they are objectively worse.


More like different countries don't empower their people and treat them like little children.


It's a bit silly to emphasize specific communities if this results in a ban of the entire app or network. ~all apps and networks have some communities like that. I don't think this is a complex question at all, this is just bad.


The same goes for Discord or Slack, which could be removed, or Facebook.


When the cathedral supports real-world violence it's good. When you support real-world violence it's bad. They want you dead, but will settle for your submission.


Bingo.


> Google also controls the browser, and does make it difficult already to access some sites https://developers.google.com/safe-browsing/v4

Safe Browsing doesn't include sites that encourage hatred and violence, etc. - only malware, social engineering, and "harmful"/"unwanted" applications. If they start including those sorts of sites in their Safe Browsing lists, that would make your point here more relevant.

(Of course, some people get hit by Safe Browsing unfairly. But I think in most cases it is because someone compromised their site and used it for a malicious purpose, and then they struggle to get Google to remove the listing within a reasonable timeframe.)
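
For illustration, a minimal sketch of what a Safe Browsing v4 Lookup API check looks like (assuming a hypothetical API key; note the requested threat types are limited to malware, social engineering, and unwanted software - there is no category for hateful or violent content):

    import requests

    API_KEY = "YOUR_API_KEY"  # hypothetical placeholder, not a real key

    def check_url(url_to_check):
        """Ask the Safe Browsing v4 Lookup API whether a URL is on its threat lists."""
        body = {
            "client": {"clientId": "example-client", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": url_to_check}],
            },
        }
        resp = requests.post(
            "https://safebrowsing.googleapis.com/v4/threatMatches:find",
            params={"key": API_KEY},
            json=body,
            timeout=10,
        )
        resp.raise_for_status()
        # An empty response ({}) means the URL is not on any of the requested lists.
        return resp.json().get("matches", [])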


> Safe browsing doesn't include sites for encouraging hatred and violence, etc.

Yet.


In other words, the developers of these apps need to all run their own Fediverse nodes, _but not federate them to any others_ because otherwise users may be able to access content from nodes that Google doesn't like! Because each dev having to vet every instance out there is the only other option and that's practically impossible.


It concerns me that the company taking down the apps also owns my entire mobile OS from browser to network stack, as well as the DNS resolver, the search engine, and the email client I use.


We've gone from wild west to company towns.


So Google somehow knows which apps block certain instances - instances which generally get a bad reputation among other instances & are quickly blocked? That's not believable.


Why isn't that believable?


How does Google know this unless they have some access to the databases of each of those sites/instances? Why would Google have that kind of access?


Such apps tend to advertise that they block instances. Tusky, for example, blocks all Gab instances and says so right in the FAQ [1].

[1]: https://github.com/tuskyapp/faq bottom of the page


Couldn't this be done as part of the manual review of an app's source code? It seems like this wouldn't necessarily have to be automated.


And right after that we can remove any FTP client that uses the FTP protocol to download content Google doesn't like. We should scan all apps that use a common, published protocol to make sure the protocol is not being used to consume objectionable content. /s

The app is not the service; the protocol is not the platform.


I think you might have misread my comment; I wasn't suggesting whether a course of action was correct or not, but just explaining how it could technically be feasible. I interpreted the comment I responded to as not understanding how it would be possible for Google to have done this a certain way, and I was theorizing one possible way they might have done it.


Ah, then yes, apologies - I did not mean to put words in your mouth. Technical feasibility is likely easier than imagined; most Mastodon services use the auto-generated list that appears on their "about" page - easily scraped if not available through the API. Here's the list on the instance I moderate, for example:

https://toot.wales/about/more#unavailable-content
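
For anyone curious how this could be checked programmatically: a minimal sketch in Python, assuming the instance runs a Mastodon version that publishes its moderated-server list via the /api/v1/instance/domain_blocks endpoint (older instances only show it on the about page, which would need scraping instead):

    import requests

    def fetch_domain_blocks(instance):
        """Fetch the list of servers an instance blocks or limits, if published."""
        resp = requests.get(
            f"https://{instance}/api/v1/instance/domain_blocks", timeout=10
        )
        resp.raise_for_status()
        # Each entry looks roughly like {"domain": "...", "severity": "suspend", ...}
        return resp.json()

    if __name__ == "__main__":
        for entry in fetch_domain_blocks("toot.wales"):
            print(entry.get("domain"), entry.get("severity"))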


I currently think this may be exactly what is happening. If I'm wrong, I'd love to know about that!


The problem I have isn't that Google bans these apps.

The problem is the fact that Google banning these apps borders on state censorship because of the monopolistic position Google has.

Busting up Google solves the correct problem.


That doesn't make sense. The apps can be made available outside of the Play Store. There's no state-level censorship here.


True. This is not censorship. People can still direct-download the APK from GitHub or from an alternative app store.


It wasn't that long ago when virtually everyone understood that "hatred" was completely subjective. Trying to remove all communication channels because of the potential for "hatred" means nothing but total silence.


Define "easy to join".

Because if it's "user types in the server URL and tries to log in", blaming the app is ridiculous.


They do it with YouTube now as well and demonetize ANYTHING with firearms in it. Doesn’t matter if you’re a hunter or trying to sell people on a new product.

All these disparate media sources that we yearned for back in the cable-only days have finally turned to dogshit.


I think the reason would be that with browsers they don't control the ecosystem enough to get away with it. I actually agree with the ban if your framing is correct (not having looked into it any further), but if they did this in Chrome, people would just use another browser to access these sites. You can sideload apps as well, of course, but it's much more of a hassle than doing it on a PC, where people are used to software distribution not being as centralized.


Are you saying the banned apps promote banned sites, or merely don't block banned sites?

There's a huge difference.


“Banned sites”. How short-sighted


Free Speech Extremist. Shitposters Club. No Agenda Social. Lets all love Lain.

There are a ton of instances which much of the Fediverse blocks, but if you set up your own server and follow people on those instances, it's not 80% hate speech and racism as others would have you believe. Yes, there is some of that, but there are also weebs, and anime, and political discussion, and weird gaming discussion, and videos not posted anywhere else, and memes, and the great diversity of thought we used to have on Reddit before it became a monoculture.

There are also straight up anarchist instances that justify violence and destruction of the state like Rage Love, Anticapitalist Party, and others.

It's a very big space, with new players entering and leaving every month.

Banning apps because they do or don't have block lists greatly misunderstands how the Fediverse works.


Or it exactly understands how it works and Google doesn't much like how it works.


Which I have always suspected was the real reason they killed off Google Wave in such a hurry, even though we were told they found it useful for collaboration within Google.


TYFYC.


Tyfyc2!


> If you happen to be the target of this hatred and violence, and feel it is an existential threat to your livelihood, you might believe that it is a good thing to make it more difficult for those who are engaging in this behavior to enlarge their communities.

I’m indeed being threatened by various hate groups (one of them actually tried, and almost succeeded, to kill an acquaintance), but strangely enough they are never removed by Google or any other big corporations. Worse, each time I voice any slight complaint about them, I am the one being censored. Some of those groups even sometimes get official support from the GAFAM. This is a really odd and unfair world.


Which groups?


Does that matter?

If op is lying, he or she is lying.

But are some groups ok to threaten? Are some people ok to threaten?


I think it matters because, sadly, I’m at the point where I need to evaluate a death threat for whether it is reasonable to fear it.

It’s really unfortunate when someone fears for their life and I don’t want that for anyone.

However, lots of people fear for reasons that I don’t think are actually from threats of violence.

I had a friend explain how they literally feared for their life. When trying to console them I learned that the thing that was making them afraid was a friend’s Facebook post about a restaurant that supported some Bible group. Their reasoning was that the Bible group was anti-gay, and they might end up killing them for being an ally of gay friends.

Because of this they feared for their own life and wanted the friend to stop talking about it.

Now of course, there are multiple lame things about Bible groups being jerks, but certainly nothing to make this person think their life was in danger or directly threatened.

I’m not sure how to specifically help that person, but after several episodes like this, I don’t pay much attention to them when they say that they get death threats.

Maybe I’m just jaded, but lots of people talk about death threats and I’m sure they perceive them as such. But having the details of the threat helps to differentiate the really dangerous people trying to kill others from the multitudes of people saying “DIAF” who aren’t trying to kill, just being jerks.


It does matter because I would like to avoid said groups.


Does Safe Browsing block sites that the user wants to visit?


Exactly. This is Google drawing the line on where this "hate speech" comes from, and they believe that such "content" can be accessed via the Fediverse.

To see how ridiculous this sounds, Google might as well completely take down the entire social media and internet browsing category on the Play Store since I keep seeing the same content from both extremes on all these platforms.

Just wait until you tell them to take down their own browser since you can find this "content" with a simple search. They will soon realise that "drawing the line on hate speech" is tougher than solving leetcode CS questions.


> Just wait until you tell them to take down their own browser

Well, they're taking the address bar away, bit by bit; they have SafeSearch; and they have AMP. It's a very slow erosion, but there will come a point at which going outside of the list of officially acceptable sites will become more difficult - first with mandatory warnings, then maybe with mandatory reporting to law enforcement or whomever, and eventually not at all.

Yes, it sounds like a "slippery slope" argument, but we're a few steps down the slope now, and any argument that encourages us to climb back up has to point out where things may go if we don't resist.

It sucks that this requires us to defend the rights of people to speak whom we may intensely disagree with, but that's the crux of the matter. Either we become mature enough to understand that people will have discourse we dislike, and avoid it or engage with it as we see fit, or we continue to hide behind authority figures who will purport to keep us safe by controlling what we can say and think.


The fascinating part of this is that Google has officially claimed the mantle of arbiter of what is allowed on the internet (they are not exactly a gatekeeper yet, but given how people have trouble accessing information outside the FB, Apple, and Google gardens, they are well on their way).

edit: Trouble in the sense that it is inconvenient for them.


You are absolutely right, and this is scary.

Absolute fear.


As the editor of the internet, are they not taking on full legal liability for anything they haven't blocked yet?


No, and there are laws and tomes of case law that reinforce that no matter how much curation they do, an interactive computer service will not be held liable for user generated content.


Yeah but that can be changed with a simple act of Congress and it should be. Their support is rapidly eroding the more they flex their power.


That would be an especially stupid move even for Congress. Expect to see either bland corporate content or goatse everywhere then.


I get what you're saying, but I don't think this is because Google cares about hate speech. Google is simply using hate speech as an excuse to get rid of apps that it doesn't like. Deciding which apps you like and which you don't isn't that hard of a line to draw.


Google is politically very very left. I'm guessing that someone at Google may have browsed these apps, decided they didn't like what they saw, and pressured to have the apps banned. Obviously they can't do that to large players like Facebook, but small apps, they can easily crush, and nobody's going to do anything about it.

Imagine if Google decided to just block certain websites on Chrome, or if big tech got domain registrars to drop 4chan, or whatever humor websites they don't find amusing.


What prevents Google from refusing to resolve some domains at 8.8.8.8?
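
One rough way to check for this would be to compare answers from different resolvers - a minimal sketch using the dnspython library (version 2.x assumed), with example.org as a stand-in domain:

    import dns.exception
    import dns.resolver  # dnspython >= 2.0

    def resolves_on(nameserver, domain):
        """Return True if the domain yields at least one A record via this resolver."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            return len(resolver.resolve(domain, "A")) > 0
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers, dns.exception.Timeout):
            return False

    if __name__ == "__main__":
        domain = "example.org"  # stand-in; substitute the domain you want to compare
        for ns in ("8.8.8.8", "1.1.1.1", "9.9.9.9"):
            print(ns, resolves_on(ns, domain))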


That already happened with certain extremist Islamic websites.


I wonder if Cloudflare is still protecting ISIS sites from DDoSes and the like.


That's precisely what they are doing. They don't care about hate speech. If they did, they wouldn't have Trump ads on the YouTube banner.

This is justification to get rid of apps they don't like.


Not allowing a party's political ads would be clear favoritism in a political situation. You might not like Trump, but it's quite the jump to say Republican ads are "hate speech"... in fact, that's quite literally weaponizing the term "hate speech" to censor political opinions you don't like.


Which is the whole point of hate speech laws. You can't market censorship of one party, but who could ever oppose censoring hate... then redefine hate to be anything you'd like censored, and... that's what we have now.

Any time anyone complains about censorship, roll out the excuse of Holocaust denial, regardless of what's actually being censored.


Yes, that's why freedom of speech is a thing. Ideas are meant to compete, and the power to decide which ideas are acceptable is an absolute power that completely corrupts a society.


[flagged]


[flagged]


> when they pretty much just stand for family, God, and loyalty to the country and ideals of individual freedom.

> You've flattened an extremely large diverse group of individuals into your caricature viewpoint.

I think you might've done that yourself.


I said "pretty much" to underscore that I was generalizing and that there are many nuances there. But that is how conservatives self-identify. You will find that in their music and their art and their culture in general. I'm not inventing hateful terms to turn you against them as OP was.


> Trump just negotiated a peace deal between Israel and UAE

There are lots of Christians who favor Israel but ultimately hold anti-Jewish opinions. This is weird, yes, but support of Israel should not be mistaken for support of Jews (the reverse is also, but only incidentally, true).

> Trump's son in law is Jewish.

So were Emil Maurice and Erhard Milch, at least according to German law. Didn't stop Hitler and Goering from making exceptions for them.

I have quite a few conservative friends. Most of them don't support Trump, because Trump isn't a conservative. He doesn't stand for God or individual freedom, nor does he stand for loyalty to the country. Don't conflate conservatives and Trump supporters. They're not the same thing, and trying to present a bait and switch between classical conservatism and Trumpism is a bad faith argument. This is why you see so many conservative politicians that are no longer in office supporting Biden over Trump. Because Trump doesn't extoll conservative values. Instead he represents a populist and proto-fascist wing with more than a couple white supremacist tendencies.


I travel all over the country. And I'm from a highly conservative area in Florida. Trump has 95% support in the Republican party. He's got tremendous support and enthusiasm.

Trump isn't racist at all. He was friends with Michael Jackson, Jesse Jackson, Whitney Houston, etc. He actively tried to keep Whitney from overdosing (https://www.rollingstone.com/music/music-news/mike-love-to-t...). He calls as many victims' family members as he can, regardless of race. He calls every fallen soldier's family, regardless of their race or creed. He just pardoned a black woman who was put away for a non-violent drug crime in the 90s. He signed criminal justice reform and established opportunity zones in poor neighborhoods to encourage business investment. He supports school choice to help poor communities who are stuck with broken schools. He's got 11 members of his cabinet who are Jewish (Mnuchin, Friedman, etc.).

He is pulling the troops out of Afghanistan: https://www.cnn.com/2020/06/26/politics/us-troops-afghanista...

There are more black Republicans running for office in 2020 than in 100 years. Trump has endorsed most of them, if not all of them.

Your tired talking points are just a symptom of a coordinated hit job and misinformation campaign.


And yet the rest of the category is still there. This is a great opportunity to put on your thinking hat.


Haven't podcast apps recently been removed as well? Something about it being possible to listen to stuff about covid on them.

Edit, found it: https://news.ycombinator.com/item?id=23219427

And well, they should probably remove the apps of Twitter, Facebook, Reddit, etc., as there's plenty of hate speech there too.


They didn't remove Podcast Addict, it's right here: https://play.google.com/store/apps/details?id=com.bambuna.po...

That was a mistake, presumably. It's likely this is too. The deep desire on the part of posters here to assume malice and scream CENSORSHIP is really off-putting.

I actually don't know anything about the fediverse, but if it's like other pseudonymous obscure communications media, it's probably filled with awful stuff. It's not that hard to imagine a naive reviewer who doesn't understand the architecture being confused if they get a report showing screenshots of the app with the content available in it.


The only reason Podcast Addict has been restored (multiple times) is that it's high-profile, and the owner raised enough stink to cause widespread (enough) outrage about this. Otherwise, whether through malice or incompetence, it would be gone forever.


>The only reason Podcast Addict has been restored (multiple times) is that it's high-profile, and the owner raised enough stink to cause widespread (enough) outrage about this. Otherwise, whether through malice or incompetence, it would be gone forever.

This is my concern. These apps are not content hosts, they are akin to Web browsers or RSS readers, but they are small, one-person endeavours that don't have the clout to get Google to notice the difference between the content providers (the individual Mastodon servers) and the ActivityPub client app that these apps represent.

I know one of the devs is thinking of not pushing the issue, as he's worried about his other apps on the same developer account.

The discussion has veered off into censorship issues, but this is a simple 230-ish problem, these apps are not the Mastodon servers that (presumably) some people have had issues with. They are agnostic client readers of the ActivityPub statuses.

There is no way, nor any legal requirement, for a browser-like app such as these to be held responsible for the million possible bits of content it could consume.

The app is not the service.


It's not a desire to scream about censorship. It's more about how the rules are arbitrarily enforced, and how every app's fate is in the hands of two big players, so you're SOL if they ban you. Even if the ban is a mistake, good luck getting it reversed unless you're going viral.


Honestly, it's somewhat telling that automation for app review can get messy fast and that Google should invest in Apple's approach to app review (but I also agree that the poster is extrapolating the app denial into something much more than it is).


The fediverse is very split, you have some servers that are run by people who post straight up Nazi symbolism on their admin accounts, and you have some servers that have admins who will happily participate in piling on someone for appearing Insufficiently Woke. I block both kinds on my server because I just want a nice quiet place to talk with my friends, and that's a definite segment of the Fediverse too.


This much is key to observe: this isn't a partisan maneuver by Google, as much as people may want to slot it into that category. It smells much more like a control maneuver against a perceived competitor.


A competitor to what? G+?


To the big tech cartel, period. Don't think for one second that Google, Apple, Facebook, Twitter, Microsoft, Adobe, and their friends aren't having one big handshake party over this kind of crap.


Really? It's not okay to say it's censorship when it is? I'll admit I'm wrong iff the apps are reinstated without having to impose additional restrictions.


There is a specific exception for web browsers, so Mastodon app(s) could probably qualify as one by prominently displaying the web URL of a post above the post.


:) I wonder how this fits into the Chromium team's insistence that URLs are user-unfriendly and that browsers ought to redesign them?


Unless the rule these apps actually broke is that you aren't allowed to do things that go against Google's profit interests.


Where do we find the rules about exceptions? Are group communication apps excepted as well?

It's not obviously in the Restricted Content policy page: https://support.google.com/googleplay/android-developer/topi...

The rules there are extremely general, and technically cover all sorts of things which are currently let into the Store.


Vague policies are very useful when those making them wish to engage in arbitrary and capricious enforcement.


One of the problems we have in tech (as an industry and on social media) is that we allow individuals who make poor decisions to hide behind the collective of a company/organization. And we continue applauding them for the great work they do in areas removed from the political. These days innovation acts as a shield: we let the innovators get away and reap praise as individuals (the inventors of golang, the teams who standardized QUIC, the guys doing Netflix propaganda about their simian-devops-army, Facebook's React, Amazon's DSSTNE...) - all of them have engineers who wear these things like a badge and are proud to give talks. Yet when they are responsible for projects that violate human rights, remove the Taiwanese flag from their app, or censor speech as in this case, then we're never talking about people; it's always the opaqueness of the firm that hides these abuses.

We need a list of these lizards so we know when to throw tomatoes and rotten eggs at them whenever they give a talk or share feel-good posts on LinkedIn.

People should be ashamed instead of proud when they write "disclaimer: I work at X".


When the hell has that /not/ been the case in society? Institutions have always been shields, and your dehumanization and desire for shaming ironically show exactly why they serve that function - they don't want to be subject to the whims of random mobs who aren't a part of them.


The tech industry is a place where people generally prefer to talk things out rather than yelling and shaming. I think that's worth protecting, even if we see short term gains that might be available from defection. After all, once Google realizes the norms have changed, won't they be able to leverage their resources to find people who yell louder and shame more frequently than you?


The old days are gone.


The old days were never as controversy-free as most people remember. There was a time not that long ago when common techie opinions like "Internet piracy isn't a big deal" or "shooter games are fun and kid-friendly" were seen as quite immoral in some circles, and calling your forum "Hacker News" was kinda subversive. If we're headed back to that kind of environment, just with a different set of moral issues enforced by a different set of people, that seems solidly OK.


I prefer the term "hated speech" since it's in the eye of the beholder.


That's needlessly confusing. Half the posts on https://www.reddit.com/r/TIHI/ are "hated speech" while being miles away from anything that would get called hate speech.


Don't forget Twitter & Facebook. But they won't, because that's not really the reason.


Guess they should ban Chrome, since hate speech appears on the web. All these Fediverse things can be accessed via web apps and progressive web apps (PWAs) too.


Following that logic, they should also remove Google search.


You don't even have to go that far because you can just find plenty on Twitter.


> Firefox

Don't give Google ideas.


Not even web browsers, Twitter should go first


And they'd replace it with a special browser that limits you to AMP-enabled sites only. This is so obvious.


It's pretty rich that Google claims to be removing these apps for hate speech when their own search engine so prominently returns results from sites like Kiwi Farms and Encyclopedia Dramatica about their victims.

(Throwaway account, since the first-named site searches for mentions of itself to find new targets.)


They wouldn't be doing this unless they had serious reason to do so. They aren't dumb, they know everyone is looking.

The far more likely reason is that they know we have an issue. They've been monitoring and they don't like what they've been hearing.


They should also ban chat apps because you never know, someone may one day say something that Google does not agree with. Let's be on the safe side.


Hate speech is bad.


Censorship is much worse.


Or, might Chrome start censoring...

If this is the Google policy, they may want to bake that policy into the way Chrome operates.


What do you think AMP is a prelude to?


That's coming soon enough.


Why not just get rid of that troublesome feature called the internet?


You joke about that, but I wouldn’t be surprised if in 5 years (or maybe 1 year?) open browsers are banned and only “allowed” browsers are used that allow access to “allowed” websites and content.


Finally, the killer app for mesh networks.


Why not go all the way and just ban all ISPs?


That actually might be an improvement over the current situation.


Twitter is literally a platform for hate speech right now, and people have been killed because of it. Will they be taking down Twitter?


I really hope they do.


Cool. So if the issue is hate speech I'll be waiting for Google to ban the FB app as well


Google doesn't protect users; they oppress wrongthinkers. Moderation of Facebook is delegated to Facebook.


Don't forget Twitter and Facebook. Twitter/Facebook basically created hate speech, by the way.


It always starts like that. People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government. Eventually the definition of hate or whatever it is that’s offensive is very removed from the original meaning, and now we all bear the brunt of the sensible people who with best intentions wanted to make things better.


Hanlon's razor has hobbled everyone's ability to see what the establishment is doing. This isn't about good intentions gone wrong. The loss of control of the narrative due to the internet has been a severe setback for the powerful, and they have been slowly clawing it back by limiting access to alternative media.

Various think tanks, NGOs, board members with multiple irons in the fire, foreign interests, and the government itself exert a lot of influence on large players to shut down harmful narratives. Most visible was when the deplatforming activity started with threats from lawmakers against outlets if they didn't remove certain content. You've also got various orgs with CIA connections acting as "fact checkers" on Facebook. The influence happens in subtle and many ways.


>The loss of control of the narrative due to the internet has been a severe setback for the powerful

How was the narrative controlled by the powerful before the internet became a part of everyday life? Specifically, how was the narrative controlled in the US in the decades before 1993?

I'm asking for recommendations of books written by historians, journalists and other serious people. (Understanding the situation decades ago is probably a lot easier than understanding the current situation -- partly because the powerful will take pains to hide their controlling actions from the public.)

In the US I get the general sense that politicians and holders of government offices have never been able to exert a lot of control of the narrative with the result that journalists and the prestigious universities have so much influence that they are best thought of as essentially part of the government.

That suggests that the efforts of the establishment to rein in the big social media companies will prove largely ineffective with the result that Facebook and Google will probably join the New York Times and Harvard as parts of the de facto governing structure of the US.

EDITED: changed "rein" to "rein in".


> If that is true then that suggests that the efforts of the establishment to rein the big social media companies will prove largely ineffective with the result that Facebook, Google, etc, will probably join the New York Times, Harvard, etc, as part of the governing structure of the US.

That's exactly what "reining in" looks like. Instead of being an alternative to, e.g. the New York Times and opposing the next Iraq war, social media just becomes yet another rah-rah cheerleading mouthpiece of whatever opinion the "serious people" hold.

I don't know if you consider Chomsky to be a "serious person", but Manufacturing Consent does go into how the people actually in charge of the government (professional civil servants, corporate lobbyists, etc) manage to make it seem as if their opinions are infallibly correct and countervailing opinions are thinly veiled crankery. What social media did (at least in its early days) was give everyone the ability to manufacture consent at a scale that previously was only the domain of the large media corporations. The establishment media is obviously threatened by this and are working to ensure that the new media follows the same guidelines as the old, even if that means censorship.

Of course, that's not how the establishment media phrases it (and probably not even how they believe it). They see it as "protecting" the people from unsavory "Russian fake news". In reality, though, that's just a lie they tell themselves and tell us to justify their continued hold on the ability to decide which opinions can be held by "right-thinking people". If they were truly interested in "the marketplace of ideas", they wouldn't be pushing so hard to make platforms as centralized, controllable and censorship friendly as they are.


Manufacturing consent online is dependent on the rules of the game. Russian-bot-syndrome is a fraud issue, in this case a state actor manipulating the 1-person-1-voice assumption of the game. If we're making a marketplace metaphor, then this market is being tilted toward actors with the resources to pay for bots, workers, or influencers. It's not like it's limited to Russia; Bloomberg was somewhat showy during the primary about paying people to tweet for him, and it's safe to assume any other well-resourced actor with an interest in manufacturing consent is doing the same thing.

The censorship debate is an indicator of game-rule collapse in social media. The platforms are reaching for top down control because they can't cook up a better way to reduce fraud and (let's call it) low-quality behavior. Ironically this method reduces the overall authenticity of the platforms and counteracts the intent of the censorship, and thus you get game-rule collapse.


> Bloomberg was somewhat showy during the primary about paying people to tweet for him

I'm genuinely curious, do you have any more details or sources on this?


This is what my googling pulled up, which matches my memory of it: https://www.latimes.com/business/technology/story/2020-02-23...

It was an interesting situation because in some ways, I prefer the transparency involved? But it's sort of like any sale of an account, even if temporary - it seems disingenuous by nature. Almost like how an MLM makes you sell to your friends.


Has anyone demonstrated how many people these "Russian bots" have influenced? There's enough craziness online before throwing in non-linear warfare. I'd imagine it's far less influence than, say, Charles Koch has. Why is it ok that he has an outsized influence on our "democracy" but such a terrible thought that Putin has influence? It's not like either person will act in our collective interest.


It's connected to the same reason we only allow US citizens to be involved in US politics. You at least assume that a citizen has a direct interest in the country's domestic well-being. We talk about russian bots nationally because violating that norm is a cultural scandal, but there's plenty of discussion around outsize influence by corporations and the rich in social media as well. It's not really okay for anybody.


Anyone who assumes that Charles Koch is acting in our collective interest is beyond nuts. Regardless, I appreciate the hand-holding explanation here.


TL;DR: You either die a hero or live long enough to become the villain.


The go-to text is Manufacturing Consent.


By the publishers of newspapers. The Winter Soldier hearings, essentially a mass confessional of war crimes witnessed and participated in during Vietnam, flat out weren't covered at all on the East Coast, for one.


"threats from lawmakers against outlets if they didn't remove certain content." Could you be more specific what that certain content was?



I really don't get that logic. Of course, everyone agrees that hate speech is bad (and so are a lot of things, but I digress). But how is it not censorship when one of the world's most powerful companies does it? Do they get a free pass because they are governed by shareholders and make a lot of money? I can see how it's not censorship if Bob does not want people to post things he disagrees with on his cat picture forum with 200 users. When a few massive companies effectively control the possibility of reaching 95ish% of the audience on the Internet, it's censorship in the very worst sense of the word, and I don't see how it's possible to support it without being an unequivocal opponent of free speech.


> everyone agrees that hate speech is bad

but does everyone agree on what hate speech is? That's the danger. You can just claim any opinion you don't like is hate speech. You can say endorsing a particular candidate is hate speech, and those people can justifiably be censored, their views invalid (and, in some places, they can justifiably be killed).

It was once considered offensive, in many places a crime, to say homosexuality is morally okay or that the Bible should be translated into German and English or to say God doesn't exist.

There is no real distinction between "free speech" and "hate speech," because carving out the latter requires you to qualify the former. There are exceptions in many countries, but they are for very specific things: child abuse and advocating specific violence against individuals.


Does everyone agree on what murder is? Of course not. Does that mean murder should be legal?


Well a bunch of people are running around now saying speech is violence...so in the not too distant future we might be saying someone was murdered by words.


It is interesting that you say that. In English, we do use phrases like "X was destroyed by Z" (I forget the exact idiom, but kids seem to be using it -- god I feel old), where no actual destruction beyond verbal attack took place.

I know you were referring to something else, but it got me thinking that we are already using the phrase. Our legal system just does not allow a lot of 'word damage' to be adjudicated.


There was also “sticks and stones may break my bones but words can’t hurt me” that now seems in practice to have gone by the wayside.


I'm not gonna lie, when I was a child decades ago it was well known, even in children's books at the time, that that line's a load of horse shit. There are tons of books where that exact phrase is used to show that ignoring verbal abuse is wrong and emotionally damaging.


You don't need any levels of indirection. https://old.reddit.com/r/murderedbywords

*Note, the sub isn't interesting, I'm just demonstrating that the phrase is already in use.


There have always been limitations on freedom of speech, including speech that incites violence. So your example, while deliberately hyperbolic (I don't think anyone would say that words literally murder people), has always been a normal thing.


> while deliberately hyperbolic

It's not hyperbolic at all, or have you not seen the "Silence is Violence" rhetoric everywhere? It could literally come from Orwell's world of "War is Peace, Freedom is Slavery, Ignorance is Strength"

The book, The Coddling of the American Mind, does a great job of showing how the goalposts for what is and isn't violent have been moved considerably in the past few years in academic circles.

Finally, violence is okay, so long as it's against the "wrong people," like the professor who was put on probation for assaulting an opposing party member with a bicycle lock, or the guy in Charlottesville who was fined $1 for assault:

https://battlepenguin.com/politics/war-is-hell/#the-normaliz...

> I don't think anyone would say that words literally murder people

There are people who are literally saying that now.


> It's not hyperbolic at all, or have you not seen the "Silence is Violence" rhetoric everywhere? It could literally come from Orwell's world of "War is Peace, Freedom is Slavery, Ignorance is Strength"

I’m aware of the “silence is violence” slogan. It means that inaction in the face of injustice is tacit support for the status quo. It doesn’t literally mean, for example, that all people are being violent while they are sleeping, or that people who are unable to speak are being violent. I’m sure there are some people who use the slogan in preposterous ways, but that’s true of all slogans. You’re looking into this way more than necessary. There’s a pretty clear reasonable interpretation of the slogan if you’re willing to look for that interpretation in good faith.


That interpretation is entirely too generous. The expression "Silence is Violence" is explicitly intended to compel speech, and its clear meaning is that if you stay silent, you are contributing to the violence against minorities.

https://twitter.com/KunkleFredrick/status/129834428507983872...

This is not an extreme example. The expression has always been used (at least in the current climate) to mean, you agree with us, verbally and visibly and loudly, or we attack you.

Edit: If you think the above example is not an example of what "silence is violence" means, by all means, explain why rather than just flyby downvoting.


That example is a crowd intimidating people with the intent to compel speech, of course, and they’re using the slogan “silence is violence.” But those are two different things. You could pick any slogan you want and have a mob recite it while intimidating people into agreeing. That’s not an indictment of the slogan.


That slogan specifically promotes this:

https://twitter.com/KunkleFredrick/status/129834428507983872...

and this

https://twitter.com/rawsmedia/status/1298055028213678082

It's not just a slogan. That is the actual end result of such an ideology.

Silence is not Violence. Silence is the opposite of violence. Silence is stopping, thinking, looking at all the evidence, carefully evaluating, and coming up with a sound decision.

This slogan says: "Be outraged immediately without knowing any real facts about the situation"

It's literally DoubleSpeak. You are literally, right now, using DoubleThink.


Silence is not the opposite of violence. Peace is the opposite of violence.

I interpret the quote "silence is violence" to mean by not speaking out against violence, you implicitly support or contribute to it. People may disagree if this is true, but it certainly doesn't feel Orwellian.


In Germany, instead of the "silence is violence" slogan, people often use the famous Niemöller quote/poem, but I have always understood the slogan to express the same sentiment.

First they came for the socialists, and I did not speak out— Because I was not a socialist.

Then they came for the trade unionists, and I did not speak out— Because I was not a trade unionist.

Then they came for the Jews, and I did not speak out— Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.


That would start the discussion of "when does an example become the standard" which I don't really want to go into. Suffice it to say I do not watch the news, I very rarely visit Twitter and do not follow anyone, and that is the only way I have ever seen that expression used - in the news, on Medium, on FB, on anywhere, when I've come across it. "Agree with us or you are violent."

I don't think there's a generous way to interpret that expression. Silence is de facto not violence. Violence requires physical action.


> Suffice it to say I do not watch the news, I very rarely visit Twitter and do not follow anyone, and that is the only way I have ever seen that expression used - in the news, on Medium, on FB, on anywhere, when I've come across it. "Agree with us or you are violent."

Have you Googled the term? Apart from the first page or so being dominated by that very recent event of the crowd intimidating people and many other people conflating that event with that slogan, you'll find plenty of articles about what it means: that choosing to not speak out about an issue helps support the status quo. In fact, I've generally seen it used to try to persuade people who don't want to support the status quo that staying quiet or trying to "not be political" is in fact supporting the status quo.


> that choosing to not speak out about an issue helps support the status quo

I mean that's just fine, and a perfectly fine point to make - and one with which in fact I agree; I have railed against police and prosecutors' offices for years, having been on the ass-end of their horror myself.

But if that's what one means to say, then say that; because the word 'violence' has a specific meaning not captured by "don't support the status quo".

This is a long way of saying I generally don't like slogans :/


>Does that mean murder should be legal?

The forms which have little agreement? Probably.

For example, some say that meat is murder. I don't think we should be outlawing meat, and thus in the eyes of the ones making such a statement, I'm supporting some forms of murder remaining legal.


But you aren't concluding, because people disagree on precisely what qualifies as "murder," that there should be no laws against murder. That is the argument proposed in the earlier comment about why there should not be laws against hate speech.


No, there's a set of statutes that lay out what murder looks like, and ultimately it is up to a jury of your peers to determine if what you did satisfies the criterion. That's in fact exactly why the jury system was invented, because reasonable people can disagree, so the assumption becomes that "if a reasonable plurality of people DO agree, there's a good chance it is a good enough standard by which to act."

The subject of murder is not an appropriate analogy here, really.


Why is that not perfectly analogous? The law can describe what is and isn’t hate speech, and courts and juries can decide individual cases when necessary. This is the same for all criminal laws. The fact that not all people will agree what is and isn’t a violation of a given law at a given time is simply not a valid argument for why a given law shouldn’t exist.


Yes of course. That is, there are some things that some people call murder that should be legal.

Such as steak.


That's not the question. The question is whether there should be any laws against murder, given that people disagree on precisely what constitutes murder.


It's a question of what is meant by the term censorship. In the strictest sense, moderation and censorship are very often the same thing. If, for instance, I post something terrible in a comment on here, and the administration of HN deletes that comment, then that's censorship.

However, when most people talk about censorship they're using it not in the strict sense, but rather as a shorthand for someone violating their first amendment right. In this case this is really only a crime when it's a government entity doing it, although people don't typically differentiate between the government and any large organization, which technically are legally allowed to censor you on their platform or property.

There's a larger discussion that needs to happen with regards to censorship. There are two extremes at play here, on the one hand there's the absolute freedom stance of literally nothing censored (only example I can think of for this is maybe the dark web, but really everyone censors if only a little), even shouting fire in a crowded theater or posting child pornography. On the other extreme is the absolute censorship of someplace like China, where only permitted thoughts and expressions can be posted. The US and most of the rest of the world tends to fall somewhere in the middle.

The big struggle right now is that everyone has recognized that there's clearly some kind of problem. We're seeing unprecedented levels of misinformation, and frankly a weaponization of social media both for profit and for international politics. I don't know that anyone has a good solution for how to address that problem, but the pendulum seems to be swinging towards a more censorship-focused response.


> other extreme is the absolute censorship of someplace like China, where only permitted thoughts and expressions can be posted. The US and most of the rest of the world tends to fall somewhere in the middle.

It's like other countries only exist as rhetorical devices for most of HN. If you actually used the fediverse you'll see that there are plenty of Chinese users on it criticizing the state. It's the Western fediverse users being censored for wrongthink this time. Even the creator of Mastodon straight up doesn't believe in free speech wrt. to certain far right beliefs.


> Of course, everyone agrees that hate speech is bad

As an extreme example: Do you really think the KKK believe hate speech is bad? Even if they do agree, do you think their definition looks anything like your own?

I find it hard to believe that someone who has lived through the last four years can say with a straight face that everyone agrees hate speech is bad. One would think the last US election cycle would have gone differently if that premise were true.


> Of course, everyone agrees that hate speech is bad

That sounds like an unjustified premise.

_Note to the casual downvoter not critically examining my argument: I am not saying that I personally do not think hate speech is bad._


Google, regardless of its size, is still private, and should be allowed to host whom it wants or doesn't, the same as you shouldn't be forced to host visitors you don't want.

Free speech means that there shouldn't be a law by a government to punish expression of ideas or opinions.

Citizens or companies should be allowed to host and not host whoever they want.


> citizens or companies should be allowed to host and not host whoever they want.

So should ISPs be allowed to not deliver a website (say Netflix's) content to you unless you pay extra?


In my opinion, yes.

I wouldn't like it, but it's their network. I would hope that it wouldn't be a good business decision and that their competitors would not do that.


> everyone agrees that hate speech is bad

The First Amendment would like to differ!

Anyway it's meaningless to believe hate speech is bad, because hate speech is an undefined term. It just means something someone somewhere would like to punish someone else for saying.


Do you consider it censorship if a huge Internet/media company removes illegal content like child pornography, explicit calls to "imminent lawless action," phishing/fraud attempts, explicit misinformation (like false claims that an election date has been moved), or content that goes against their own community guidelines (pornography, violence, etc.)? Do you consider those things censorship or opposition to free speech?


You're mixing up two different things: sites removing illegal content because they're mandated to do so, and sites removing legal content because they choose to do so.


Not all of the examples I gave were illegal content.


So like I said, you're mixing them up.


No, I'm not mixing them up. I'm asking questions about them to try to understand people's viewpoints.


> People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government.

We have two things to unpack here. First, hate speech. What is it? Who gets to decide what the word means and what is their procedure for deciding? Is the definition stable or fluid (or even very fluid)? Is hate speech universally wrong, or only wrong when issuing forth from certain speakers? If we all agree that it's wrong, then why are people engaging in it, even unintentionally?

Second, censorship. Is self-censorship not censorship? Why must the state be involved in order to censor? Were TV networks that for decades voluntarily forbade their programming from portraying homosexuals being censored or not? What is unique about state authority versus corporate authority as it relates to censorship?

"Hate speech is bad" is a very abstract statement. The sentence conveys almost no actual concrete meaning. It seems like a rational or sensible statement, but it delegates almost all of the actual work to feelings and emotions, and highly subjective ones at that. I don't find "wanna grab a cup of coffee" terribly hateful, but apparently some people do.

I get the overall sentiment of your post, and I think I mostly agree. Nevertheless, the way we stop this nonsense is to say at the beginning that it is an abstraction over extremely subjective feelings and emotion, and thus has no basis other than eventual mob rule authoritarianism.


> People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government.

This doesn’t seem very sensible to me, I would even say it seems to be the opposite of sensible, probably because I am not American.


Hate speech is a crime in many European countries too [0].

[0] https://en.wikipedia.org/wiki/Hate_speech_laws_by_country


That’s a non sequitur. What do you mean “too?” What does this have to do with the claim that “it’s not censorship if it’s not mandated by the government”?


I suspect they thought you were saying "hate speech is bad" sounds the opposite of sensible, when really you were referring to the part following the 'and'.


This is what I believe is the goal of placing restrictions on people based on “mental health”. It’s so open-ended and not easily verifiable that it becomes a sliding scale.


If I'm understanding you right, the idea is that some harms, physical ones, are fair game for the law to cover, but other harms (mental ones) or a collection of boundary cases are less (if at all) within the purview of legislation.

I think there's a good conversation to be had as to what in particular makes physical harms so special as compared to others, and how existing law in every country (including the US) can constitutionally include some non-physical harms within its legislation (such as laws against sending threatening letters, or child pornography law, or fraud).


> what in particular makes physical harms so special

Such harms can be reliably detected, with stringent enough criteria.

Mental harms, and the very notion of "normality", are much more nebulous.


>Such harms can be reliably detected, with stringent enough criteria.

Mental harms, in many cases, can also be detected by competent professionals; besides that, it is entirely possible for physical harms to heal and for supporting evidence of their infliction to be used to convict. Further, many physical harms depend at least partially on the victim's characteristics or situation; a concert pianist is arguably harmed more by someone cutting off his finger than a schoolteacher would be, for instance. Many physical harms that are rightfully legislated against often require the testimony of the victim for the case to succeed. For a wide class of 'mental harms' it is accurate to say that they are indeed physiological responses - from PTSD to lethargy and insomnia. This is in contrast to the caricature that mental harms are necessarily merely 'hurt feelings'.

I also question whether the difficulty of proving mental harms, or the fact that they are sometimes nebulous, should necessarily rule out such lawmaking. At the very least, the minimum for proving such harm should be set out by the legislators or judiciary, if the standard of evidence is the roadblock to legislation.

It's also worth remembering that we're talking about harms here, not mere hurts. Harms are much harder to fabricate than hurts are.


Here's where mental harms "detected by competent professionals" leads:

https://en.wikipedia.org/wiki/Political_abuse_of_psychiatry_...


It seems as though you're invoking a slippery slope fallacy; it's possible to say exactly the same about doctors working for the state who minimize or trivialize the examination of physical harm on dissidents. The fact that expert testimony can be bought off or compelled does not preclude expert testimony from being an important consideration in general. The opioid crisis, for instance, has shown there are many incompetent doctors, but I doubt you'd refuse the testimony of a doctor to help your case when you are injured by someone else.


You can prove physical harm beyond a reasonable doubt. Mental harm is frequently concocted as a bullying tactic, e.g. the recent NY Times editorial controversy where employees said running an editorial they disagreed with made them "feel unsafe." https://www.npr.org/2020/06/08/871817721/head-of-new-york-ti...


The article you link did not mention the employees saying running the editorial made them "feel unsafe". Neither the word safe nor unsafe appears in the article. It says the article "reportedly elicited strong objections" from the staff.


I don't see this as an argument against such legislation; consider that many physical crimes are also hard (or even impossible) to prove beyond a reasonable doubt, considered case-by-case. Rape very often qualifies here, as does the mens rea of various other crimes, which may rely upon testimony. Both actus reus and mens rea are required for a conviction, and while the actus reus may be easier to prove (but again, in many cases not beyond a reasonable doubt), we do not abolish the role of intention in the justice system simply because it's hard to prove.

Accusations of physical harm can also be concocted as bullying tactics, in which the harm was actually self-inflicted or inflicted by somebody else. Such cases can be thrown out due to insufficient evidence. I see no reason why the same cannot be said for a subset of mental harms, where there are equivalent doctors available to use their expertise to judge the harm.


Specifically, I’m concerned about government placing restrictions on individuals based on their mental health history.

What is the process to dispute it? You can’t just take a blood test to say this isn’t really a problem.


Some problems don’t have easy, simple solutions. Any answer will have some edge cases.

If a schizophrenic parent has in the past harmed someone, should a court ignore this when determining custody? It is unfair either way: if you err on the side of leniency, some people will be harmed; if you err on the side of stringency, others will be harmed.

Complex problems cannot be solved with ideology and maxims. All solutions will fail some people sometimes.


That example seems like an easy problem to solve, actually?

> If a [] parent has in the past harmed someone, should a court ignore this when determining custody.

There you go, no need to place restrictions on people based on mental 'health', just their actual actions.


Oh right, I don't think I understood what you were saying, then. That's also a good question.


Would lacking spirituality or belief in a higher existence make you mentally healthy or unhealthy?

I agree, the sliding scale only strengthens whoever is in power. In Florida, the Baker Act is used like this.


The persecution faced by doubters and non-believers is always surprising to me. I guess nothing should surprise me in the deep south though.


> People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government.

I'm paraphrasing what was here a few days ago:

Our banking partner is uncomfortable that the realistic sex toys modeled after magical creatures have the colors that strongly represent human organs. You will either have to change the colors or we will not be able to continue providing you with our services.


To be fair, they didn't claim that people don't also agree to very stupid and malicious things; in fact, they rather implied that that's the likely result of supposedly sensible starting points.


And that's why I'm incredibly cynical about politicians and activists who use amorphous political terms like "hate speech". It eventually becomes a club wielded by whoever is making the rules of today's Calvinball game.


"The road to hell is paved with good intentions"


I love that aphorism. Unintended consequences. We should teach unintended consequences in grade school and high school, and have advanced degrees in it: how to see them before they explode, how to mitigate them, and how to fix them once they're running at full steam.

Lately I've been imagining it along with the slowly boiling frog story and the crab-mentality too. As in some people can't tell we're headed to hell because it's coming so slowly, and some people will actively stop others from escaping hell or trying to fix the situation.


There's a hypothesis I came up with a while ago, and it seems to hold pretty true. If we talk about problems as O(n), where n is the causation distance[0], we've solved the vast majority of O(1) and O(0) problems. It makes sense that biologically we would be primed to think in this way, because those are decent approximate solutions for small groups. But the world we live in now is much more complex, many events are coupled, and the low-order approximations aren't good solutions. The problem I see is that people are treating O(5) problems like they are O(1). As a society we discuss things in this way instead of trying to understand the complexity, nuance, and coupling that exists in many of our modern problems. A good example of this is global warming. People treat it as "if we switch to renewables then we've solved global warming" when reality is substantially more complicated. But I don't know how to get people to realize that these are higher-order problems and that the first-order approximation isn't a reasonable solution.

[0] So O(0) means x causes itself. O(1) is y causes x. O(n) is n causes y causes ... causes x. This is just a simplified framework and not meant to be taken too seriously.


I agree, I've been thinking along these lines for a while too; thank you for phrasing it so clearly.

My current thinking has led me to conclude that we don't have sufficiently good tools[0] for modelling O(n) problems with n > 2, particularly when there are feedback loops involved (which your simplification doesn't capture).

Take this O(2) problem: x causes more y, y causes more z, z causes less y but much more x. Or in a pictorial form:

        (++)
     X<-------\
     |   (-)  |    
     |  ------Z
     \ /      ^
 (+)  v       | (+)
      Y-------/
You can't just think your way through that problem; you have to model it - estimate coefficients (even if only qualitatively), account for assumptions, and simulate the dynamic behavior.

I argue that we lack both mental and technological tools to cope with this.
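
For what it's worth, here is a minimal sketch (in Python) of what "modelling it" could look like for the X/Y/Z loop above. The update rule and every coefficient are made-up assumptions, chosen only to illustrate that the behavior falls out of structure plus parameters rather than out of intuition:

    # Toy simulation of the loop above: x boosts y, y boosts z,
    # z dampens y but strongly boosts x. All coefficients are
    # arbitrary assumptions, for illustration only.
    A_XY, A_YZ = 0.5, 0.5     # (+)  x -> y,  (+)  y -> z
    A_ZY, A_ZX = -0.3, 0.8    # (-)  z -> y,  (++) z -> x

    def step(x, y, z):
        # Crude discrete-time update; a real model would justify these forms.
        return (x + A_ZX * z,
                y + A_XY * x + A_ZY * z,
                z + A_YZ * y)

    x, y, z = 1.0, 0.0, 0.0
    for t in range(10):
        x, y, z = step(x, y, z)
        print(f"t={t + 1:2d}  x={x:8.2f}  y={y:8.2f}  z={z:8.2f}")

Tweak a single coefficient and the trajectory can change qualitatively (e.g. from decay to runaway growth), which is exactly the kind of thing you can't eyeball from the diagram alone.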

Speaking of global warming, a year ago I presented this problem: https://news.ycombinator.com/item?id=20480438 - "Will increase in coal exports of Poland increase Poland's CO₂ footprint?" Yes? No? How badly?

The question is at least this complicated:

     Coal exports
          ^
          | [provides Z coal to]
          |
          |     [needs α*X = A kWh for coal]
     Mining coal <---------------------\
          |                            |
          | [provides X coal to]       |
          v                            |
   Coal power plants                   |
    |     |                            |
    |     | [γ*X = Y kWh burning coal] |
    |     v                            |
    |  Electricity --------------------/
    |
    | [burned coal into β*X = N kg of CO₂]
    v
  CO₂ emissions
(Presented this way it not only tells you that, ceteris paribus, it will, but roughly by how much and what are the parameters that can be tweaked to mitigate it.)
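
To make the "roughly by how much" concrete, here is one possible toy reading of that diagram in Python. The numbers are placeholders I made up, and the closed form assumes all the electricity for mining comes from the coal plants in the loop - both are my assumptions layered on top of the sketch, not part of it:

    # Toy reading of the coal-export loop: exporting Z tonnes means mining
    # Z tonnes plus the coal burned to generate the mining electricity,
    # which closes into a geometric series. All numbers are placeholders.
    e = 30.0     # kWh needed to mine one tonne (the alpha-like parameter)
    g = 2000.0   # kWh produced per tonne burned (the gamma-like parameter)
    b = 2400.0   # kg of CO2 per tonne burned (the beta-like parameter)

    def extra_co2_kg(exported_tonnes):
        mined = exported_tonnes / (1 - e / g)   # M = Z / (1 - e/g)
        burned = e * mined / g                  # coal burned to power the mining
        return b * burned

    print(extra_co2_kg(1_000_000))  # extra CO2, in kg, for 1M tonnes exported

Ceteris paribus the answer is "yes, it grows with exports", and e, g, b are exactly the knobs the diagram says you can tweak to mitigate it.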

Why aren't we talking about climate change in these terms with the general public? Why aren't feedback loops taught in school?

--

[0] - Or, if they exist, they aren't sufficiently well known outside some think tanks or some random academic papers.


Your ascii art is much better than mine and I'm not going to attempt it, but I agree with everything that you've said except for

> I argue that we lack both mental and technological tools to cope with this.

I do think we have the tools to solve these issues. I do not think the mental tools are in the hands of the average person (likely not even most above-average people, because the barrier to entry is exceedingly high and trying to model any problem like this is mentally exhausting, so it never becomes second nature). Many of the subjects broached here aren't brought up until graduate studies in STEM fields, and even then not always. An O(aleph_n) problem is intractable, but clearly O(10) isn't. We should be arguing about what order of approximation is "good enough", but ignoring all the problems that arise is missing a lot of fundamental problem solving. Good for a first go, but you don't stop there. I think this comes down to people not understanding the iterative process: 0) Create an idea. 1) Check for validity. 2) Attack and tear it down. 3) If something remains, rebuild and goto 2, else goto 0. I find people stop at 1 on their own ideas but jump to 2 (and don't allow for 3) for others' ideas.

> Why aren't feedback loops taught in school?

I think 3 other things should be discussed as well. Dynamic problems (people often reduce things to static and try to turn positive sum games into zero sum. We could say the TeMPOraL component), probabilistic problems, and most importantly: an optimal solution does not equate to everyone being happy (or really anyone). Or to quote Picard:

> It is possible to commit no mistakes and still lose. That is not a weakness. That is life.

The last part I think is extremely important but hard to teach.

(I should also mention that I do enjoy most of the comments you provide to HN)


> Your ascii art is much better

A skill honed in deep procrastination :).

> I think 3 other things should be discussed as well.

Strongly agreed with all three.

> Dynamic problems (people often reduce things to static and try to turn positive sum games into zero sum.

That's what I implicitly meant by talking (again and again) about feedback loops; problems with such loops are a subset of dynamic problems, and one very frequently seen in the world. But you've rightfully pointed out the superset. I think most people, like you say, try to turn everything into a static problem as soon as possible, so they can have a conclusive and time-invariant opinion on it. But it's not the proper way to think about the world[0]!

(I only disagree with the "try to turn positive sum games into zero sum"; zero-sum games also require perceiving the feedback loops involved. And then there are negative-sum games.)

> probabilistic problems

Yup. Basic probability is taught to schoolchildren, but as a toy (or just another math oddity) rather than a tool for perceiving the world.

(Thank you for the kind words :).)

--

[0] - Unless your problem has a fixed point that you can point out.


> (I only disagree with the "try to turn positive sum games into zero sum"; zero-sum games also require perceiving the feedback loops involved. And then there are negative-sum games.)

This is a snipe I often make at people talking about economics (I do agree about the lack of mention of negative-sum games, but they also tend to be less common, at least in what people argue about). Like the whole point of the economic game is to create new value where it didn't previously exist (tangent).

> Yup. Basic probability is taught to schoolchildren, but as a toy (or just another math oddity) rather than a tool for perceiving the world.

I think this is where we get a lot of "I'm not good at math" and "what is it useful for" discussion. Ironically everyone hates word problems, but at the heart of it that's what it is about.


Great point. Those problems are really hard to reason about, partly because without specific knowledge of the coefficients, all you can expect a well-reasoned person to conclude is that "it can go either way". And even knowing the data, most practical problems in this category would take either computer modeling or simplifying assumptions to really draw conclusions about.

Worse, someone motivated to shape the story one way or the other can create a just-so story where they emphasize only one feedback path or the other, depending on what conclusion they want their audience to draw.

I think the best antidote, although by no means a cure, is to teach clear and specific examples early on so that everyone at least can have a mental category for this class of problem, if not the tools to work through them.

Jevons paradox is a great example of one which is both clear and counterintuitive: https://en.wikipedia.org/wiki/Jevons_paradox


There's a danger of bad reasoning being involved, but I argue that "well-reasoned people" and just-so stories are problems either way. But I think that attaching a specific model to a problem grounds the conversation in reality.

Taking the coal exports example I pasted: the model presented structurally tells you that the carbon footprint is going to grow with exports. We can haggle about "how much", but - under this model - not about "whether". You can tweak the parameters to mitigate the impact, you can extend the model with extra components and tweak those to cancel out the impact (and that automatically generates reasonable solution candidates for you!). Or, you can flat out say that the model doesn't simplify reality correctly, and propose an alternative one, and we can then discuss the new model.

The good thing is, at every point in the above considerations you're dealing with models of reality and somewhat strict reasoning, instead of endlessly bickering about whether A causes B or the other way around, or whether arguing A causes B is a slippery slope, or whatnot.

I strongly agree with teaching examples, both real (serious) ones and toy ones, to teach this kind of thinking.

Jevons paradox is indeed great to dig into, and I suppose it offers some sort of counterexample to what I'm talking about. The nature of the phenomenon is a feedback loop, and whether it turns out well or badly depends on the parameters (the increased use can reduce the value of the intervention, cancel it out, or even make it worse than doing nothing). But from what I hear, people sometimes pick one of the possible outcomes and use it as a thought stopper (e.g. "we shouldn't do X because obviously Jevons paradox will make things worse!").


> [0] - Or, if they exist, they aren't sufficiently well known outside some think tanks or some random academic papers.

Are you familiar with Judea Pearl's work regarding graphical analysis of causal problems? If not, he'd probably interest you. While he mostly falls into the category of "random academic papers" (and academic books), he has also co-authored a very readable (and enjoyable) popular science book. A review of that book is here: http://bostonreview.net/science-nature/tim-maudlin-why-world. And a more technical overview of his graphical approach is here: https://www.timlrx.com/2018/08/09/applications-of-dags-in-ca....


I'm familiar with the name and the titles of the works (and very roughly what they're about), but I haven't read any of it yet. Thanks for the links!


Any issue about which there's a cultural movement going on can serve as a handy pretext for measures that consolidate one's own power. It kind of seems obvious to point out, but nonetheless let's continue to be open to the possibility that such consolidation is not mere coincidence, or more to the point, that the measure is not even well-intentioned and in fact has nothing to do with the ostensible issue/reason (in this case, hate speech). The most cynical of power grabs are usually cloaked in the most noble of pretenses. That's how you make the unpalatable palatable.


>it’s not censorship if it’s not mandated by the government

Censorship can be conducted by private institutions, governments and other controlling bodies. https://en.wikipedia.org/wiki/Censorship


I recently watched an excellent speech concerning freedom of speech and freedom of protest by Rowan Atkinson, and feel it's very appropriate to share here: https://www.youtube.com/watch?v=BiqDZlAZygU


Censorship always starts with unpopular speech. Historically it's been raunchy porn, religious blasphemy, and direct opposition to the state. Hate speech seems to be a new one that works well to sell censorship to liberals (who should know better).


This is only "censorship" in the sense that Fox News censors me by not giving me a 5 minute slot to advocate for Medicare For All.

In reality, it's not state sponsored censorship at all, and it doesn't lead down any slippery slope.

These claims of censorship are extremely selectively applied, to only certain types of political speech. I wonder why that is?


I don't watch Fox News, but even I know that Fox invites Democrats on to talk about their opinions. Here is an interview where a BLM leader rants for 5 minutes: https://youtu.be/FTjBJiXalHU?t=59


And yet I have had my pleas for time go unanswered... which is "censorship" of me.


This is like having a bulletin board in your store and not being able to take anything down from it.

Yes, Google and Apple are big. You can say well, it’s different because in this world there’s only two boards for the entire country, that’s true! But it’s not a censorship problem, it’s an antitrust problem.


You hit the nail on the head. If they were just "more alphabets in the soup", people wouldn't have much of a leg to stand on; but because the internet has no analog to an actual PUBLIC square, when you have 1-3 companies that control access to what 95% of the internet sees, you have a problem - there's no alternative and no public square on which to register your complaint.


The real problem here is that we allow Google, Apple, or Facebook to control public discourse. We let companies decide what we are allowed to talk about.

Regardless of whether we agree with what's being removed or not, this can't be healthy.

I am not American, so my interpretation may be off, but here is how I understand the problem:

Many people would like hate speech to go away, yet the US Constitution prohibits government censorship, so the government can't do much about it. Instead, they rely on companies like Google and Facebook to do the work for them.

The companies are also compelled by public pressure to do what the government can't.

Contrast that with Germany, where hate speech like the promotion of Nazi ideas is outright illegal.

While I haven't verified this, that puts less pressure on companies to censor anything that isn't mandated by law.

Public demands for the control of speech can also be more easily translated into law, so that the public doesn't need to resort to pressuring companies. On the contrary, they expect the government to protect them from companies that act in bad faith.

It is hard to say which system is better. If there were many small companies, each making different decisions about public discourse, then things would be fine.

The problem is not so much the removal of outright hate speech, but the more subtle influence over, for example, what is allowed to be posted about the COVID epidemic, or other sensitive topics like political opinions and fact checking.

As it stands, I prefer that decisions about what speech is allowed be controlled by law, so that we can use legal means to combat abuse.


(We detached this subthread from https://news.ycombinator.com/item?id=24305296)


>It always starts like that.

Many places have cultures, and also laws, that for decades have worked perfectly fine at reining in the very worst forms of hate speech (say, Holocaust denial in my country) while not descending into a sort of activism that starts to get silly.

There's no automatic mechanism that turns sensible rules into insensible ones, and it need not be the case with sensible hate speech rules either.

With cases like Google's Play Store the issue seems more concrete. On the one hand, it's the overwhelming power and lack of due process that large firms have over software. Decentralise this and put authority into the hands of people who know their networks, and the situation will improve. Secondly, it also seems to be a very activist employee base at companies like Google that's gone somewhat overboard. Again, an accountability issue. If these things were decided publicly, it would moderate to reasonable levels.


Historical revisionism as a subgroup of hate speech is a prime example of the slippery slope and the shifting of definitions. I would add "fake news" to hate speech.


"Fake News" is merely the 7% of journalists who identify as Republicans.

https://en.wikipedia.org/wiki/Media_bias_in_the_United_State...

The problem is the political spectrum of journalists is wildly biased compared to the average citizen. This also apparently shows up with censorship of phone apps.


Why is that the case, though? Do conservatives not like going into journalism? Or does becoming a journalist turn one away from conservatism?


Extremely hostile workplace environment. Like the Southern Jim Crow experience.


Conservative journalists are lynched for sitting at the same lunch table as their non-conservative colleagues? I think I would've heard about that...


Sorry, I have no idea what you're trying to say. Holocaust denial is generally considered to be both historical revisionism and hate speech, the former being a tool for the latter. Is this just semantics?


Taking things out of context is the favorite go-to for most news orgs that want to push a particular set of ideals.


Yes, it is semantics, alright. Not taking the debate to the Holocaust: hate speech is no longer just incitement to hatred (quite broad) or incitement to crime (quite specific) against a group. It is like "fake news" in that sense, a term Trump comically turned against its creators.

I feel that when it comes to Google it's not about whether it is hate speech or not, but who controls it. I.e. Zuckerberg is fine, although there are multiple long-lasting Facebook groups that have been used to incite crimes, but Aaron Swartz would not be (today). It is quite amusing how Facebook is not shut down in Europe even though many European countries would shut down any local company that was as lax and arbitrary with moderation as Facebook.


>I would add "fake news" to hate speech.

What are you doing? Are you trying to ban speech you don't like? What body determines what is "fake news" and "hate speech"? It can't be done, which is why the only sane policy is free speech.

We have laws against violence, and it's a very clear line.


Sorry, it was meant as rhetorical stupidity.


And even that is destructive. The slippery slope of "speech I don't like" has a tendency to ever expand; not completely unlike, say, government. It is a very human tendency. This is the main reason even small encroachments should be pointed out.

I think we are in agreement on Google's case in particular being a little more straightforward.


Your suggestion sounds nice in principle, but how would you propose to create mandated democratic control of a corporate entity?

The only mechanism which exists I can think of would be to nationalize the corporate entity and have the folks controlling it be elected positions. That seems pretty extreme though as a response to a corporate entity becoming successful and growing enough that it influences the zeitgeist.


That does not sound extreme at all to me. It would not need to be controlled by people in elected positions - it could just be mandated that employees have to follow a specific charter and be as neutral as possible. A bit like many national news services, like the BBC. Considering how much influence Alphabet's products (especially Search) have gained over everyone's lives, I think something like that is much overdue and the only reasonable solution. That, or extreme regulation. At the very least, the search algorithm should be made fully public.


It also goes the other way.

Section 230 was about child pornography and became used as a safe harbor for anything.

I don't usually agree with the Trump admin, but they do have a point there.

In general, our thinking about freedom of speech is itself idiosyncratic in the same way. Human FREEDOM means doing what you want. It’s not the same as a right to a megaphone, maintained by thousands of employees and the infrastructure of large corporations, that gives you a platform to say anything unfiltered to 5 million people at once. I would argue that such interpretations of the First Amendment have been detrimental to society. Speech on giant platforms should be vetted like on Wikipedia’s Talk pages, where mutually distrusting people engage in responsible fact checking BEFORE the crowd sees the main page with these claims.

But hey, I also argue that the Supreme Court’s Heller decision similarly rendered the Well Regulated Militia clause irrelevant, so now anyone can have a gun regardless of whether they are part of any well regulated organization or not. No checks on individual action that can affect others.

Now we reap what we sow as a society. Yes, FREEDOM of speech is important, but what we call freedom today has greatly expanded, even to unlimited political donations by super PACs and so on. Again, a Supreme Court decision expanding freedoms, Citizens United, harms democracy. A win for ideological purity, I guess, but is society better off?

PS: Before someone objects with “who will be the fact-checkers/watchers?”, I will say it will be self-selected and self-policed like on Wikipedia. As long as there is a healthy mix of views, it’s better than one wacko with a megaphone. Who does this celebrity culture help? It further divides us. And that’s why we can’t have nice things!


>Section 230 was about child pornography

What? It most certainly was not!

The CDA as a whole was an attempt to regulate indecency and obscenity on the internet. Think "pornography that might be seen by minors"[0]. (Remember, this was the 90s.) Most of it (with the exception of section 230) was struck down in court for obvious first-amendment reasons. Section 230 was added later during the process by the House, after the bill had passed through the Senate, and was more about defamation than anything else[1].

[0] https://www.congress.gov/bill/104th-congress/senate-bill/652... - search for "This title may be cited as the ``Communications Decency Act of 1996''."

[1] https://www.theverge.com/2019/6/21/18700605/section-230-inte...


Life, liberty and the pursuit of happiness was chosen for a reason in contrast to the term Locke used (estate): individual freedom was supposed to be balanced with the interests of the society that allowed them to exist and be pursued in the first place, which are what courts consider fundamental rights.


Sadly, Wikipedia's been controlled by a whole number of wackos with megaphones for years. I wouldn't trust them to fact check my big toe.


> It always starts like that.

That's the "slippery slope" fallacy. There's a knee-jerk reaction in America that any censorship is bad and will somehow always lead to more censorship. But there are places where censorship has been implemented in small ways and it hasn't led to some sort of free speech apocalypse. In Germany or Israel, for instance, it's illegal to deny the Holocaust. That's a pretty sensible limit considering their history. All these years later, they're still free and functioning democracies.


I believe you will find that's the fallacy fallacy.


Kindly take this as a devil's advocate post before you reflexively downvote.

It looks like letting people say anything they want on major social media platforms is only having one major positive effect: a few advertising companies are becoming very rich.

The negative effects include:

- incited violence (gang-oriented gun crime in Chicago is often fanned by social media posts for example)

- bad medical decisions (vaccine/COVID misinformation)

- cancel culture/political manipulation (people taking other people's posts as facts when they are not)

I would like to uphold the principle of free speech and to force social media providers to be free speech agents even though they are private companies, but it's starting to get hard to defend. I am losing faith that strict adherence to free speech is going to result in a smarter, happier humanity. It might be better if fewer people speak their mind.


I think it's obvious that if fewer people speak their mind, we could have a smarter, happier humanity.

But I also think it's obvious that I shouldn't trust you to make the decision of who needs to shut up. And definitely not the government.


> But I also think it's obvious that I shouldn't trust you to make the decision of who needs to shut up. And definitely not the government.

Well right now social media companies seem to have that power. How is that better?

I hear this point all the time. Is there a better response than this?


> Well right now social media companies seem to have that power. How is that better?

It's not; we're attempting to fix that, and TFA is about Google maliciously attacking one such attempt.


Which powerful, privileged people should get to decide what we are allowed to hear about? When the power is inevitably abused, how can we address that abuse when we may not be allowed to know about it?

Is it even necessary for free speech to directly result in a happier humanity? What if it simply preserves the conditions that we need for progress, or merely keeps us from sliding backwards? Would that be enough to make it worthwhile for you?


> Which powerful, privileged people should get to decide what we are allowed to hear about?

Journalists and news media, bound by the respect and principles of their profession, fulfilled this function in the past. There seemed to be a time when the division between reporting and editorials was sharper. We've destroyed the institution of news media without a good replacement; now people are taking editorials (people's social media posts) as the equivalent of news.

> Is it even necessary for free speech to directly result in a happier humanity?

If not then what is it worth?

> What if it simply preserves the conditions that we need for progress

I'm simply not seeing how social media after a good 10 years of it is progressing anything other than the profits of its owners.


The respect and principles of the profession and a bucket of horse piss will get you a bucket of horse piss - they aren't worth anything, and certainly not as a check on power.

It is nonsensical to claim social media destroyed the news media. It was already dying before the internet, let alone social media. To put the blame on it is a blatant lie from the losers of the old era, who got regularly dunked on by bloggers and forum posters in basic fact checking.

Early Wikipedia ("not suitable for reports") clearly did a better job. They didn't catch up with the internet until it got basic enough for them to follow it with Twitter.


> I would like to uphold the principle of free speech

No, you're completely against that principle.


Everyone is constantly complaining about private censorship being a slippery slope...

And yet today, despite decades of "censorship" by Facebook and Google, you can see whatever porn you want, snuff films, terrorist propaganda, hate speech, libel/slander spread by instigators like Glenn Beck and Alex Jones. Just not on Google or Facebook.

Different private entities and people have different levels of tolerance. If you want filth, use Gab or 4chan/8chan. If you want forums that are partially moderated, use Facebook/Google/Reddit. If you want forums that are fully moderated, join a private or niche board like HN.


It might be Apple forcing them to do so. For example, you can't publish an app with porn content. Even if your app is some kind of forum, you're obliged at least to filter out explicit content in the app.

Browsers seem to be an exception.


I know nobody reads the articles but what about reading the title?


Does reddit filter out porn in the reddit iOS app? And what would Apple have to do with Google’s Play Store?


Yes. With the standard settings in place you have to navigate directly to NSFW subreddits, and even then there is an age restriction in place. There are no autofill suggestions or search results, and there is no NSFW content on the /r/all page inside the app.


Apple is forcing Google to remove apps from play store? How?


Let's see their owners first.

