Reddit and the Struggle to Detoxify the Internet (newyorker.com)
401 points by smacktoward 3 months ago | 717 comments



We tend to think we’re more in control of our behavior than we actually are. That is, our brains are operating from habit a large amount of the time, and the idea of the CEO brain consciously deciding our every action is largely an illusion.

This is why emergency exit doors must open outwards. People have died in fires in theaters trying to push on doors that said “Pull”. The crash bar on an emergency exit door is what’s known in design as an affordance.

When you present people with a door handle that affords “pulling”, they will pull even if the sign says “push”.

Now consider the fact that the primary affordance of social media is the “reaction”. Is it a surprise that content that garners a reaction will trend towards the outrageous?

If proactivity is the road to a more fulfilled life and a more civil society, maybe we need to rethink our affordances. Because we’ve made it awfully easy to be reactive, and awfully cumbersome to be proactive.

Edit: The comments below correctly point out an error; what I should have said was that there’s a danger in putting a “pull” handle on an emergency door that pushed outward, because of the confusing affordance.


This is an insightful analogy.

To stretch it a bit, I'd say there's an important difference between a fire door and reddit. The fire door knows its job is to prevent people burning to death. Reddit... does it know its job is to prevent outraged reaction?

I think this is one of the things that made facebook so problematic on politics: it can't tell the good likes, comments, and shares from the bad ones. I'm not sure they really had a concept of better and worse. Some stuff isn't allowed, but otherwise?

Imagine one post, where Mrs X invites neighbors to meet a local candidate at her house for revolutionary thoughts and biscuits. Another post, where Ms Y rants about Macron voters, Trump, taxes and kids these days. Both are political. One is actually democratic and participatory. The other is cheap, nasty, unproductive and divisive. Does Facebook, in any meaningful sense, value one over the other? Does reddit?

Reddit has its non-censorship values. I respect that. It's important that someone does. I also think they want to house the weird, and I respect that too. But, I think unrestricted speech may be an insufficient value, like nondiscrimination or atheism. It's not enough to build on. You need positive values too.

Free speech is also problematic, when taken as 'all speech is equal.'


>Reddit... does it know its job is to prevent outraged reaction? //

Ha, ha, ha.

Cause that'll increase pageviews.

Outraged reaction is exactly what all news media is going for, because people rant and rave and in passing see more adverts. It's just that they want to sanitise the topics according to their advertisers' wishes.

Reddit doesn't have non-censorship values, it's heavily censored; not all from the top admittedly, but the sanitisation that's gone on in the last few years is huge as Conde Nast have moved to make it a more tempting platform for advertisers.


Well, yes that's the obvious (and cynical) assumption. It's definitely true as a real pseudo-economic force on the internet and media generally, but let's not just assume reddit are following that "interest" blindly.

If you think reddit doesn't have those values, I guess we disagree. I don't see any way of coming to that conclusion except through fundamentalism: the idea that it's either absolute or it's bullshit.


I'm not a Reddit user but there are plenty of stories out there about how the corporate managers have permanently closed some offensive forums. To be clear, I'm not claiming that Reddit management did anything wrong; they're under no obligation to spend money spreading toxic content. But obviously they don't value non-censorship as a corporate value.


Let's see - Vanity Fair, Wired, GQ - there seems to be good reason to think that Conde Nast are willing to compromise and go with the clickbait, a lot.

https://www.wired.com/story/bad-actors-are-using-social-medi... .. is the first entry Reddit? Is it even in the list? Nope.

I think Conde Nast are prepared to stoop pretty low in the search for dollars before integrity.


To be fair, Conde Nast also owns Ars Technica, which, in my experience, has a fairly exemplary reputation as far as avoiding clickbait and running good articles.

That said, it certainly seems to me that, if not as an explicit business decision, Conde Nast at least has no problem with their properties sensationalizing their media for views/clicks/etc.


Increased pageviews, yet many users also hate constant outrage and leave the platform entirely. It is not obvious that a platform with civil, high-quality discussion would lose in the marketplace (not obvious it would win either, though).


> But, I think unrestricted speech may be an insufficient value

It looks like you are imagining that every piece of speech has an objective "value" which can be determined (maybe it's hard to do, but if we throw enough "big data" magic dust at it we can get at least close) and that speech can then somehow be sorted by value.

I think it's completely wrong from the premise up. The value of speech is a subjective measure. Some people value the invitation of candidate X, some people value the rants of Ms Y. In fact, if the salaries of talk radio hosts and late-night comedians tell us anything, way more people value rants than measured, polite discussion. Thus I think "value" exists only in the eye of the beholder, and trying to objectify it would only mean dismissing the values of part of the audience and emphasizing the values of some other part. I can see why a site may want to do it, and why one may want it to happen if he or she happens to belong to the latter part, but I see no real justification for it.


> Thus I think "value" exists only in the eye of the beholder, and trying to objectify it would only mean dismissing the values of part of the audience and emphasizing the values of some other part.

And yet we need to do it. As you observed, "way more people value rants than measured, polite discussion", and I'd claim that this is a problem. The "value" may be subjective, but the consequences of both "kinds of speech" are real, and so it would be great to incentivize the kind that leads to a more stable, more just society, and disincentivize the one that causes thoughtless destruction.

Yes, I'm aware that "disincentivize" is getting dangerously close to "ban", but I feel we need to try and walk that fine line, if we want to have a society that's better than just random.


> And yet we need to do it

Who are "we"? And what is "it"? Do you just take on yourself the mantle of decider for the whole world who is worthy and who is not? Or how is it determined? Who is worthy to wear that mantle? I don't think any human is.

> I'd claim that this is a problem

Maybe human nature is a "problem". But what are you going to do, replace humanity with a better species? Do you have one in mind? Beyond that, I don't see how declaring it "a problem" helps anything, unless there's a fix to this problem. History teaches me that all attempts to create a "new, better human race" didn't just end badly - they ended so terribly that when people point at it, other people get offended that you dare to compare their ideas to that. So, we have to deal with what we have now - and have had for millennia. Given what we did with it - well, not ideal, but certainly there has been some improvement.

> it would be great to incentivize the kind that leads to a more stable, more just society

How do you know what leads to more just society? Maybe rants would lead to more just society faster? As for stability, stability is only good when we're at the optimum. Are we?

> I feel we need to try and walk that fine line, if we want to have a society that's better than just random.

Which fine line? Everybody has their own fine line, that's the point. You could maybe find a bunch of people whose fine line is similar to your own, if you don't look too far into the future (and if you do, what you get is https://www.youtube.com/watch?v=WboggjN_G-4) - but pretending there's some line that's good for everybody is just willful blindness. And out of all possible ways of building a better society, I don't think dismissing people that have different views as something that doesn't matter is the best one to start with.


> Maybe human nature is a "problem". But what are you going to do, replace humanity with a better species?

I agree that the root of the "problem" currently lies within people and not process. I agree that changing people wholesale is not easy and not desirable. I agree that there has been some improvement.

I also think human nature is a product of the environment. The way people behave on different websites, in different countries, and in different social situations shows this rather clearly. There is no fixed set of anything that constitutes the whole of how people act. Put someone in a nudist colony, and the environment changes, and the way they act changes (with time). If a reddit user starts going to 4chan, the environment changes, and the way they act changes. Put someone who follows the "always defect" strategy in a community of "always cooperate" people, and the environment changes, and the way they act changes. Put a racist in a racially diverse community of acceptance, and the environment changes, and the way they act changes.

If you accept this idea, it follows that certain environments can be better for society as a whole. Case in point with Reddit: they decided an environment without bestiality and certain violent elements would be better. Maybe they were wrong, but I don't think so. I'm not suggesting I have a wonderful theory of what the best environment is, only that there are better and worse ones. The problem of what we value is hard, but that doesn't mean it's not worth trying.

This ties the loop back: human nature is a product of environment, and environment is a product of humans. We have the power to create environments that make thoughtful discussion easier and hate harder. We can put energy toward solving the problem by changing the environment. HN has an environment I am very fond of, despite being here for only a couple years. I appreciate the work that has gone into making the comments an insightful, respectful, and generally nice place.

We can't replace humanity. We can't change people without changing the conditions they exist in. We can change the circumstances of our struggle in order to grow together as a species.


Picking out only a very small thing that you said:

> I don't see how declaring it "a problem" helps anything, unless there's a fix to this problem.

This seems wrong to me.

The process of (as a group) clearly identifying things that are problems, and coming to agreement that they are problems, and coming to agree on whether they are important, is of fundamental value, regardless of whether we have solutions at hand.

Without this step, folks will either be ignoring problems because they don't know about them, or proposing "solutions" to things that others don't even see as problems, neither of which can lead to any good...


This post is basically saying the problem is not worth looking at because it's too scary...


No, it is not. It's saying that if you're looking at something as if your private point of view were objective reality, and dismissing the very existence of other points of view, then the problem may not be where you're looking for it.


"How do you know?", "What are you going to do?" "Everyone is different so we can't know anything for sure!" are very much defeatist, knee-jerk responses to someone doing something extremely important:

Presenting what the problem is.

The first step to solving a problem is understanding it. It's not solving it. Trying to solve a problem immediately is like trying to write code before you fully understand the requirements.

If you lack the mental fortitude to simply look at a problem without having an immediate solution to it, you're not going to be able to solve major, ugly, nasty, uncomfortable problems like this.

But, inevitably, these problems will show up and knock on your door. Running away from them is not a good plan.


What if I don't believe in trying to control people at such a low level? What if I think people should be allowed to be as ranty as they want without having to worry about being "disincentivized"?


Then I'd like to talk with you, see if you have an alternative way of keeping the society from self-destructing.


So we've got desires to identify with groups, and we've got desires to share criticisms of groups, and we struggle to find a balance. If we never criticize, we stagnate; if we never identify, we "self-destruct". We can come up with a "quick solution" such as keeping more of our criticisms to ourselves, but that's certainly not a goal you want to pursue overzealously. You need to be able to share criticisms.

We can tell people it's not attractive to be ranty, but I'm not very comfortable going further than that, at the risk of wandering into thought policing.

Look at some modern political opinion, I'm sure you've seen it as much as I have. Ranting is cool. Calling everything under the sun "problematic". The tricky thing is it's not always wrong. There's surely a nearly infinite list of "problems" one could identify, and some of them are truly important. But we need to turn down the heat on the criticism for just a second. And you can't just ask partisans to listen to each other more. We need to make it less cool to be blindly partisan. We need to increase the value of being able to identify with anyone. And we need to make it really uncool to judge hundreds of millions of people you've never met with deep assumptions.

I don't know, it's an interesting problem and I haven't thought of it this way very much. All I'm sure of is that this growing lack of interest in protecting free speech is about the only topic in modern politics that I give a shit about.


> We can tell people it's not attractive to be ranty, but I'm not very comfortable going further than that, at the risk of wandering into thought policing.

I don't want to see it going much further than that either. I was thinking more along the lines of making it so being thoughtful is "sexy" and being ranty isn't, the way today owning a car is "sexy" and smoking isn't.

Free speech has its positive and negative consequences on stability and happiness; I do not want to fight free speech, I'm looking for ways to reduce the negative consequences. I'll protect your (and my) right to rant about whatever you want, but I sure as hell would like the general policy discussion to involve fewer rants and more thoughtful cooperation.


> Free speech has its positive and negative consequences on stability and happiness; I do not want to fight free speech, I'm looking for ways to reduce the negative consequences. I'll protect your (and my) right to rant about whatever you want, but I sure as hell would like the general policy discussion to involve fewer rants and more thoughtful cooperation.

I agree that free speech is an essential aspect of what makes us human, and that it comes with many positives and negatives.

In implementing a plan to mitigate the negatives, though, I much more support a private entity such as Reddit censoring whatever they wish, as people who believe it has become too harsh can simply leave. I'm paranoid about allowing an entity like the government (whose constituents can't easily just leave) to get involved, as it allows for many conflicts of interest. These conflicts could be instances where the ruling party or minority parties push to label an opposing belief as more divisive, or where the ruling majority seeks to 'disincentivize' a minority or outside belief/religion by saying it is offensive to what they deem our values.

I feel like we should push for the civilizing of speech to be a societal change, not a policy-based change.

On a slightly different note, people have been saying that language and civil discourse have been going to hell for a very long time. George Orwell rather famously wrote an essay titled "Politics and the English Language" in the early-to-mid 20th century, wherein he detailed how society was moving towards using unclear and imprecise language to pander to the many without being forced to use falsifiable statements. Anthony Burgess wrote "A Clockwork Orange" in the 1960s, where he highlights the main characters' savagery in part through their usage of 'barbaric' dialect. William Langland wrote that “There is not a single modern schoolboy who can compose verses or write a decent letter.” in 1386. While civil and educated discourse is an important issue, people have been saying its decline will lead to the downfall of society for a very long time; in many cases it is just changing, and the entrenched powers dislike having to cope with that change.


I do. I propose we do exactly what we've been doing for the last 10000 years when human society didn't self-destruct. Do you have any other proposal that has a similar or better track record?


Human societies self-destructed plenty of times over the past 10 000 years.


Those 10,000 years have involved very, very little in the way of free speech for most people - I don't think going back to the days of lèse-majesté and the Inquisition is what you had in mind, even though those institutions certainly provided stability in a sense.


Facebook and YouTube already do stuff like this: "too many people are just having fun clicking like on funny images instead of typing long comments, so let's increase the newsfeed and recommendation penetration of posts and videos with more comments... oh, shit, but now we are promoting flame wars as the best way to get more comments is to troll people".


While my policy might seem similar to theirs, were I in their shoes I wouldn't be trying to promote content just for generating comments. The number of comments isn't a particularly useful measure of anything other than how much discussion the content creates; it's no indication of the quality of that discussion.
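
To make that concrete, here's a toy sketch (hypothetical weights and numbers, not any platform's real ranking) of how scoring purely on comment count surfaces flame wars, versus one possible discounting scheme:

    def score_naive(post):
        # Raw comment count: a 200-comment flame war outranks a calm,
        # broadly useful thread every time.
        return post["comments"]

    def score_discounted(post):
        # One alternative: weight by breadth of participation, so a dozen
        # people arguing in circles counts for less than many people each
        # saying something once.
        unique = post["unique_commenters"]
        return unique * (unique / max(post["comments"], 1))

    flame_war = {"comments": 200, "unique_commenters": 12}  # 12 users arguing
    calm      = {"comments": 40,  "unique_commenters": 35}  # broad, civil thread

    for score in (score_naive, score_discounted):
        print(score.__name__, round(score(flame_war), 1), round(score(calm), 1))
    # score_naive: 200 vs 40        -> the flame war wins
    # score_discounted: 0.7 vs 30.6 -> the calm thread wins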


>Reddit... does it know its job is to prevent outraged reaction?

Says who? Let's get a little more objective first. Its job is to make money, presumably, if not to encourage participation by any means necessary. Its job isn't to moderate. Its job is to allow the creation of sub-communities that can be moderated in any conceivable way. Most moderators aren't interested in reducing reactionism, they're just interested in reducing whatever they or their community doesn't like.


>unrestricted speech may be an insufficient value, like nondiscrimination or atheism. It's not enough to build on. You need positive values too.

I agree with this statement, except for the inclusion of atheism in the list. Atheism is literally the lack of a belief. You may as well say "The lack of belief in astrology isn't enough to build on. You need positive values too."

Unrestricted speech and nondiscrimination argue for something. Atheism literally argues for nothing.


> Free speech is also problematic, when taken as 'all speech is equal.'

Free speech is not problematic, as long as everyone has the right to speak and falsehoods can be debunked - there should be no "safe place" for the exchange of ideas, whether they are good or not. Starting by saying that 'free speech' has a problem is a very, very dangerous place to go to.


> there should be no "safe place" for the exchange of ideas

What do you think a "safe space" is? If you are arguing that there should be platforms where people can speak without being shouted down when the audience strongly disagrees with what they're saying, that is a safe space. In order to construct that space, you have to deny some rights of the audience to speak in that context.

And this is the whole problem with naive free speech advocacy. Unrestricted free speech is not possible any more than unrestricted freedom in general is possible. People cannot possibly hear every single person's viewpoint, so some people will always be denied a platform to speak to some other people.

The question is how best to structure our societal discourse. What values are important, and how do we protect them? And the question needs an answer more complicated and nuanced than "free speech". Because when we don't acknowledge the complexity of this question, we become blind to the de-facto decisions that we're making about which speech to prioritise.


> If you are arguing that there should be platforms where people can speak without being shouted down when the audience strongly disagrees with what they're saying, that is a safe space. In order to construct that space, you have to deny some rights of the audience to speak in that context.

No, I am not asking for that kind of platform. I am saying to let people express what they want to say, and the only restriction on Free Speech should be "direct incitement to violence" (such as publicly calling for someone to be lynched), the exception US courts have carved out of the First Amendment. Everything else should be able to be said and be heard, and debated between people as long as they want to debate. And of course you will be responsible for what you say, as an individual, and you will have to face the consequences of your words. But it goes both ways.

Restricting Free Speech puts power among the ones in control of Speech. Allowing Free Speech is the only thing you can do to allow even the marginal points of view, even unpopular ones, to be heard.


>Everything else should be able to be said and be heard

But what does that actually mean? If all you're saying is that the state should not stop them, then relatively few people disagree with that, but the argument usually goes further. There are many ways people's speech can be limited without the involvement of the state. Be that de-platforming, protests or economic or social limitations.

>Restricting Free Speech puts power among the ones in control of Speech.

This is true, but the reality is there will always be restrictions on speech. It is not physically possible to let everyone speak to everyone, or even just those willing to listen. However we structure our societal discourse, it will always privilege some speech over other speech.

We have to engage with, and be ready to criticise, the implicit decisions being made about what speech is privileged and why. Because if we don't, then we cede power to those who already restrict speech with these decisions.

Just saying "don't restrict speech", and thinking that gives everyone a voice, is incredibly politically naive.


> Just saying "don't restrict speech", and thinking that gives everyone a voice, is incredibly politically naive.

I don't see how something that is a fundamental piece of Western Civilization (at least in most of the English-speaking world) can be "incredibly naive".

> It is not physically possible to let everyone speak to everyone

No, but first not putting any filter on the nature and contents of speech, as long as it is not violent, is something we should stand for. The "How" is irrelevant.

> However we structure our societal discourse, it will always privilege some speech over other speech.

This should be an individual's choice to make, as in what "speech" you want to listen to. When you go on social networks, for example, it should be expected and natural to find people who hold different views, no matter how revolting they might appear to you. And we should find comfort in the fact that they are allowed to be expressed, because in turn we are allowed to express ourselves just as well. So in fact, there is no intervention needed by any state actor - on the contrary, free speech is the tool that enables us to discourse and experiment with ideas. Just shutting the door on, or filtering, inconvenient speech does not make it go away, and it certainly will have no positive effect on those who profess such speech, because it could further solidify their opinions and prevent them from being receptive in the future.


Free listening is just as important a concept as free speech. I get to decide who and what I listen to.

It may be good for me to hear an uncomfortable truth, but ultimately I get to decide whether I listen or not. If I want to live in an echo chamber, I can do so, but it should be my choice. Equally if I want to hear uncomfortable opposing views, I should be able to. (Somewhere. Not necessarily on Reddit.com. The site's owners can publish whatever they want.)

This is true in practice, because as a last resort I can put my fingers in my ears and say “la la la not listening” loudly. So frankly, advertisers should give up trying to force me to see their adverts — they can't — and persuade me to listen instead.


Actually, it's designed to open outwards because when there's a crush of people, there's no room for the door to open inwards.

Also, affordances aren't to support habit or unconscious behavior. Affordances are to imply meaning or how to use something by their design (whether digital or physical).


> Affordances are to imply meaning

That's actually called a signifier. https://ux.stackexchange.com/a/94270


I disagree. A signifier gets into semiotics, but more explicitly. e.g. the word "push" is a signifier, but a large bar that makes it seem like you can push it is a perceived affordance.


Speaking of them as signifiers isn't useful here, but yes, these are signifiers. You said yourself they convey meaning. The implication of their use wouldn't happen without signification.

One could even write a book about the discourse of affordances. It would be boring and irrelevant, but it wouldn't be wrong.

Sorry, just defending the mechanics of meaning.


Someone did (in a manner of speaking): https://en.wikipedia.org/wiki/The_Design_of_Everyday_Things


Good points in your comment, I'd probably use the words impulsive vs. self-controlled (instead of reactive/proactive).

Asking people to build user-interfaces that promote self-control seems a bit naive though. There's a reason self-control is identified as one of the primary "fruits" of a Christian life (Galatians 5:22-23), since it's exactly the opposite of default human behavior.


Crash bars offer an unthinking affordance: if a crowd presses up against crash doors, the doors open automatically, relieving the pressure (or at least they do as long as the doorway/passage itself is not overwhelmed).

Automatic negative feedback mechanism.



Yeah, I think this is close to the right interpretation.

Much of internet usage is, essentially, a giant slot machine. We've studied the things that make gambling addictive and intentionally incorporated them into social platforms to keep eyeballs on. The goal is to generate a reflexive subconscious impulse to check for new inputs with specific relation to the platform. A good high-level overview of this is the book Hooked, about how to build addictive computer interfaces. As we've "gamified" and "optimized for engagement", we've created an apparatus that many have difficulty understanding or overcoming.

This concept cemented more for me as I watched an Amazon FBA seller just impulsively hit "Match Price" over and over again, despite the fact that it was leaving him with margins of less than 25c per unit. He couldn't be reasoned out of this and insisted he had to match price with the lowest seller, even though his units would've sold for much higher margins over the ensuing weeks if he hadn't.

I realized then that really, he was simply gambling. He hits the button, sees a corresponding spike in sales, and sees a corresponding increase in his "income", despite the fact that letting it increase at a slightly more moderate rate would've easily netted 3x-4x more money (not talking years here -- just a few weeks to sell out as non-cheapest). He does it because he likes the impulse. He likes pressing the button and correlating that action with numbers that say he increased his money.

Facebook is the slot machine; Likes and Comments (broadly "engagement") are the currency; Your Posts are the input. You drop in a post and find that bland, mature, predictable posts don't generate a bright animation or engagement. You respond by posting more and more stuff that has high-engagement quality, which is, of course, the more controversial things like religion, politics, and identity.
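
To make the slot-machine mapping concrete, here's a minimal toy model (hypothetical probabilities and payoffs, not any platform's actual logic): each post is a pull of the lever, and the engagement payoff is intermittent and variable in size, which is the variable-ratio reward schedule that conditioning research finds hardest to extinguish:

    import random

    def post_payoff(controversy=0.0):
        # Chance of a "hit" and the size of the engagement burst both
        # grow with how divisive the post is (toy numbers).
        if random.random() < 0.2 + 0.6 * controversy:
            return int(random.expovariate(1 / (10 + 90 * controversy)))
        return 0  # most posts land flat, so you pull the lever again

    random.seed(1)
    print("bland:", [post_payoff(0.0) for _ in range(10)])  # typically mostly zeros
    print("spicy:", [post_payoff(1.0) for _ in range(10)])  # frequent, larger bursts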

The check on this is that one's own identity is tied back to it, but that generally just means you take the things you conceptualize as positive self-image to an extreme that warrants reaction, particularly in those controversial fields that generally have high-yield output from the slot machine.

So Republicans are put on a path toward cartoonish Republicanism, Democrats on a path toward cartoonish Democratism, and so forth. It has a distorting, reinforcing echo-chamber quality on everyone. It encourages people who are more moderate on certain issues to disengage, it encourages us to select them out and weaken our bonds, it makes it easier for our own little world to become all-consuming and feel all-encompassing. It has an overall corrosive effect on public discourse and relationships in general.

The more I think about it, the more I think the physical constraints of "real life" on dialogue and relationships, like being in a room with someone who can punch you if you sink too low, or making instinctive reactions and adjustments based on a conversation partner's body language, are much more valuable than we've assumed.


And for any that are skeptical about the gambling link, think about the now-ubiquitous pull-to-refresh gesture on mobile interfaces -- does it remind you of anything... like a slot machine perhaps?


Even worse than gambling - the output isn't random. You can actually affect the result, a lethal combination for gambling behaviors. With slot machines, it's at its worst when the gambler believes there's a system that works. In this case there actually is one, making it capable of drawing in folks who would not be at risk of being pulled into a slot machine.


> Now consider the fact that the primary affordance of social media is the “reaction”. Is it a surprise that content that garners a reaction will trend towards the outrageous?

Well, maybe, but is there a way it could be otherwise?


We are on rails. Even "The Road Less Travelled" is another road, ignoring the infinity of directions we could actually take.

We can't even notice the other possibilities, let alone consider them.


>This is why emergency exit doors must open outwards. People have died in fires in theaters trying to push on doors that said “Pull”. The crash bar on an emergency exit door is what’s known in design as an affordance.

Is this true? I always assumed it was because it's pretty much impossible to pull open a door when you are being crushed by people pressing against one another trying to escape.


Yes and recent.

>An inward-swinging door - three times cited as a code violation by West Warwick inspectors and three times replaced by club managers - was blocking the exit closest to the stage.

https://www.bostonglobe.com/metro/2013/02/15/series-errors-s...


For some context in case anyone is curious, two sentences later the article reports that "Bouncers opened it immediately after the fire broke out".

The door in question's main problem appears to have been that bouncers wouldn't let people use it, not its orientation.


AFAIK you're correct: a vertical handle is an affordance that indicates a door should be pulled, but the reason is mechanistic. A crush of people will open a door with a push-release but won't with a pull handle - even though the bar affords pushing, some numpty will try to pull it, but the people behind them will soon ensure that they push the door, with their broken ribs if necessary.



I think it's true to some degree, I've seen people push on doors that say "pull" even when there is no fire.

But I think your assumption is probably more correct, past a certain point they have no option to pull anymore.


The push/pull "mistake" is usually caused by poorly designed doors. Like putting a vertical grab handle on a door that pushes open.


It's the same sort of myth as "why are manhole covers round?" and there are lots of attributions for why that ignore history for a just-so story.


I don't understand the reference. Why are manhole covers round? I always thought it was to prevent the cover from falling into the hole. Is that not the case?


They are only round when they cover a round hole. I've seen a number of covers that are other shapes, often square.

There are engineering reasons to prefer a round hole, but there are sometimes other considerations which push a different shape.


> Why are manhole covers round?

Manhole covers exist in several shapes, as illustrated on the Wikipedia page.

https://en.m.wikipedia.org/wiki/Manhole_cover


That is the argument but actually any lipped shape can give this guarantee. Even historically manhole and utility covers had to be somewhat flush to the ground and so they've always had to be lipped for this.

You can see lots of utility covers that are hinged, square, or other options, but all are lipped for safety. While a circular profile makes the lipping easier, it doesn't seem to influence many utility covers.


You can easily put a square or a rectangular utility cover into the hole, because the length of the side is less than the diagonal. This isn't possible with a circular cover because the diameter is uniform.


The parent’s point is that a lip prevents this for every shape, by making the hole smaller than the side-length.


Only with a sufficiently large lip. The diagonal of the hole has to be less than the shortest side of the lid. As I picture it in my mind this is an excessively large lip.
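
A back-of-the-envelope check bears this out (a simplified 2D model that ignores lid thickness and odd tilt angles): a square lid tipped on edge slips through along the hole's diagonal, so the lip really does need to be large:

    import math

    s = 1.0                   # lid side length, normalized
    # Tipped on edge, the lid passes through along the hole's diagonal,
    # so the hole is safe only while its diagonal is shorter than the
    # lid's side:
    h_max = s / math.sqrt(2)  # largest safe hole side: h * sqrt(2) < s
    lip = (s - h_max) / 2     # lip needed on each edge

    print(f"max hole side: {h_max:.3f} * s")  # 0.707 * s
    print(f"lip per edge:  {lip:.3f} * s")    # 0.146 * s, ~15% of the side

    # A circular lid, by contrast, needs only an arbitrarily small lip:
    # every chord of the hole is shorter than the lid's uniform diameter.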


If you have a very rectangular shape then the lip would have to be large.

Stop for a moment next time you walk in an urban environment. Look at the utility covers you see: circular as opposed to square ones. Look at the features of the circular ones vs the non-circular ones.

One of the reasons this myth irritates me so much is that everyone is so certain that they know exactly the answer but their actual daily experience doesn't line up with the results at all.

I've heard several people offer explanations from a geometric safety option (which is attractive for free-standing covers) to the simplicity of manufacturing (e.g., that it's very easy to make circular molds and get even density compared to square molds) to simply what the contractor suggested. I've also heard people suggest that metal cylindrical templates were something very common to manufacture for a variety of industrial uses.


I'm not certain, that's why I asked the question in the first place.

I'm not sure what features I would look for because the lip on a rectangular cover would be under the lid. How do I know that the rectangular covers have sufficiently large lips to prevent the lid from falling in? I'm willing to accept that in some cases the lids are rectangular and there is a risk of them falling in.


If only this were true for certain utility holes in my neighborhood with rectangular lids. I've sprained my ankle a few times. I mean, they won't fall in accidentally, because of the lip. But you can lift one out and put it in the hole, due to negligence or a taste for vandalism.


> You can easily put a square or a rectangular utility cover into the hole

Then why do rectangular (including square) and triangular manhole covers exist? Rectangular ones are quite common, triangular less so.

Grandparent is correct: lips are what, in practice, prevent manhole covers from falling in, not being circular (which many are not).


Well maybe in some cases the risk of the lid falling in is acceptable. Or maybe there are other mitigations in place.


I don't know why rectangular utility holes exist, I assume cost.


FYI, I've personally dropped a rectangular manhole cover into a properly lipped hole. They do fit all too well.


Sewer grates are frequently square. For what it's worth.

(Not always, but quite frequently.)

They have the same gravitational-gradient dynamic as manholes.


What's the myth? There are good reasons for round manhole covers.

Also fun fact: That used to be a Microsoft interview question.


That is not where the myth originated, but it is where I heard it first.


Thanks - you correctly caught an error and I’ve edited the post to address it. The danger is in the confusing affordances, and the point was about that and not the direction of the door swing (which as you’ve correctly pointed out has other more primary considerations.)


Don’t worry about that. Just retweet it as a fact, pronto.


Now consider the fact that the primary affordance of social media is the “reaction”.

Questionable.


Ironic!


I can't wait for the word "toxic" to drop out of favor. It implies that the person is not just giving a bad opinion, but is in fact fundamentally flawed and dangerous. Arsenic isn't toxic because it had a bad day at work, arsenic is by its nature a deadly poison and you can't change it, only avoid it. The vast majority of people labeled "toxic", though, are just humans with a variety of opinions and beliefs, some you may agree with and some you surely disagree with. They may change these beliefs, but not by being vilified and told they are intrinsically bad.


Poisoning a conversation is a thing. And if you've participated in online discussions I don't see how you can honestly say it doesn't exist.

You're implying that in every thread where a troll shows up to derail the conversation we all have to stop what we're doing and give thoughtful responses to the troll to show them the error of their ways. But then the only possible conversation is debates with trolls. But that gives too much power to the trolls. Sometimes you just have to shut the trolls up so you can talk about what you want to talk about.


> Poisoning a conversation is a thing. And if you've participated in online discussions I don't see how you can honestly say it doesn't exist.

"Poisioning a conversation" is a bad metaphor that conflates two different things which do exist, but need to be dealt with in two different ways:

1. Baiting: trying to say something horrible to anger people for their own entertainment. The proper response to this is simply to ignore it: if you aren't entertaining the baiter gets bored.

2. People saying things they actually believe, even when those things are genuinely terrible. Responding to these people prevents their beliefs from going unchallenged, and is the only way we can possibly hope to change those beliefs.

If it were just group 1, you could just ban those people and that would be fine. But the problem with that is that sometimes people are actually in group 2, and engaging those people and correcting them is part of arriving at shared values in a functioning society. The tendency to accuse people who are genuinely expressing their (awful) opinions of simply baiting so that you can ban them is problematic for open discussion.

> Sometimes you just have to shut the trolls up so you can talk about what you want to talk about.

Contrary to what you're saying, I think it's very possible to have conversations about what you want to talk about while letting these people say what they want: isn't this what comment trees exist for? On Reddit and HN, if person A and person B want to have a conversation, person C can say whatever they want, and it doesn't affect the continuity of A and B's conversation as long as A and B respond directly to each other's posts and not to C's posts. Every platform I know of supports private messages.
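
In tree terms (a minimal sketch of threaded comments, not Reddit's or HN's actual data model), C's reply is just a sibling branch that the A-B exchange never has to touch:

    class Comment:
        def __init__(self, author, text):
            self.author, self.text, self.replies = author, text, []

        def reply(self, author, text):
            child = Comment(author, text)
            self.replies.append(child)
            return child

    root = Comment("A", "original point")
    b = root.reply("B", "substantive response")        # the A <-> B thread
    root.reply("C", "derailing remark")                # sibling branch, ignorable
    b.reply("A", "continues with B, unaffected by C")  # conversation goes on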

Underlying what you're saying is an assumption I'd like you to reconsider: why is it that you think that a public conversation in a forum where anyone can respond should only be about what you want it to be about?


It's not a bad metaphor. If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome. I've never observed that a no-moderation community is free of trolls, and ignoring trolls often results in them just taking over because they just start talking to each other and everyone else leaves.

These things are called poison because even a small amount can cause serious problems if left unchecked, and the effect creeps out across an area (how many people are hit) like poison spreading.

I think what you're missing is that a very large amount of people simply do not enjoy a certain style of discourse to the point that they'll opt out of a community that doesn't ban that discourse. You may not like it, you may think those people are weak or something, but the rest of us want to talk to them, and we want them to feel welcome.


> It's not a bad metaphor. If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome. I've never observed that a no-moderation community is free of trolls, and ignoring trolls often results in them just taking over because they just start talking to each other and everyone else leaves.

Which form of trolls from my post are you talking about? I insist that we not pretend these are the same group of people.

I think that good moderation filters out the baiters and lets the people who believe what they're saying stay. And contrary to what you're saying, I don't think that such communities end up with just trolls. There were plenty of reasonable conversations on Reddit before Conde Nast took over and dropped the banhammer.

> I think what you're missing is that a very large amount of people simply do not enjoy a certain style of discourse to the point that they'll opt out of a community that doesn't ban that discourse. You may not like it, you may think those people are weak or something, but the rest of us want to talk to them, and we want them to feel welcome.

I'm not missing that--in fact, I don't enjoy talking to people with hateful beliefs either.

But the alternative you're proposing is an echo chamber where you don't have to hear those people, but they still believe what they believe, and those beliefs become our leaders and laws. If we ignore bigots on the internet we get bigots in office.


I think you're very confused. The idea that rational people have to engage thoughtfully with irrational bigots is pure nonsense. It is not the duty or obligation of anybody to engage with those who hate them.

The reason we get bigots in office, by the way, is not because trolls are banned. It's because powerful interests want bigots in office. Bigotry sells. It's very easy to screw people over if you can distract them by having them hate on some out-group. It is naive to think that talking with bigots will change this.

And here is the point: social change doesn't proceed through rational discussion. Never has, never will. Real change requires organization and solidarity and protesting and marching and uncompromising demands.

If you want to waste time engaging with trolls have at it. You will find that these people have nothing but contempt for discussion and no interest in being swayed by logic. For the rest of us we have far better things to do and banning trolls and bigots is the obvious choice.


> I think you're very confused. The idea that rational people have to engage thoughtfully with irrational bigots is pure nonsense. It is not the duty or obligation of anybody to engage with those who hate them.

"Have to" and "obligation" in a general sense are things I try to avoid saying. They don't exist in my belief system, and I apologize if I mistakenly said otherwise.

What I'm saying is that if we want bigots to change, we can't just expect it to happen.

> The reason we get bigots in office, by the way, is not because trolls are banned. It's because powerful interests want bigots in office. Bigotry sells. It's very easy to screw people over if you can distract them by having them hate on some out-group. It is naive to think that talking with bigots will change this.

I think you've confused cause and effect here. Some powerful interests certainly see bigotry as an end goal, but I think most powerful interests who support bigotry see it as a means to an end. As you said, bigotry is a distraction to achieve other goals. Bigots are easily manipulated if you don't care about bigotry: you just pretend to be a bigot and that gets you power, and then you can do what you actually want to do. If there were not bigots to be manipulated, powerful interests wouldn't push bigots into power.

> And here is the point: social change doesn't proceed through rational discussion. Never has, never will. Real change requires organization and solidarity and protesting and marching and uncompromising demands.

Organization and solidarity and protesting and marching aren't incompatible with rational discussion, and in fact none of these things work if they aren't a means of putting forward a rational discussion.

Modern protest movements need to read Martin Luther King's writings and understand what he really did. Every single protest he led was carefully designed to make a point in the rational discussion of the time. The bigoted viewpoints of the time: that people of color were violent, dangerous, less intelligent, etc., were struck down one by one on public television by MLK's protests. Bigotry is based on lies, and MLK made it impossible for people not to see the truth. When bigots feared people of color would be violent, he showed them people of color peacefully being beaten. When bigots feared takeovers by blacks, he showed people of color only wanted normal things like sitting where they wanted on the bus and drinking from the same water fountains. He didn't simply try to talk over the people he disagreed with; he listened to their concerns and showed their concerns to be invalid.

Harvey Milk, as far as I know, didn't write about his tactics, but they are clear in what he did and said. When bigots saw homosexuality as a foreign, unusual, threatening thing, he encouraged people to come out so that bigots could see that gays were normal people all around them. When bigots saw homosexual culture as an invasion of their neighborhood, he showed it also brought economic benefits ("You don't mind us shopping at your liquor store." "We both pay taxes for your child's school").

Can you explain to me how you think protests work to change policy? If all they are is simply trying to yell your opinion louder than your opponent, why should people in power care? If protests don't persuade anyone, what's to stop everyone voting for the same people and getting the same bigots in power? If our only tool is escalation, they'll just escalate back, and they can escalate further because they have guns. :)

> If you want to waste time engaging with trolls have at it. You will find that these people have nothing but contempt for discussion and no interest in being swayed by logic. For the rest of us we have far better things to do and banning trolls and bigots is the obvious choice.

If by trolls you mean people who are saying inflammatory stuff to enrage people for their own entertainment, sure, engaging with them only entertains them.

But if you're talking about people who are just trying to live their lives and think that bigotry is the way to do that, I very much doubt you have tried talking to these people, because this has not been my experience at all. If you approach talking with someone about their bigotry as if they were a human, with compassion, and address the actual fears and hang-ups that cause them to be bigots in the first place, people do change. It doesn't always happen quickly or at all, but sometimes it does. And more importantly, I've never seen it happen any other way.


> Which form of trolls from my post are you talking about? I insist that we not pretend these are the same group of people.

I haven't found it particularly worthwhile to distinguish people who say terrible things to troll from people who say them because they believe them. They're very often the same group, because reasonable, empathetic people are going to neither say nor believe those terrible things.

If you cannot carry a conversation with consideration for the other people in it, I do not want you in my community.

> There were plenty of reasonable conversations on Reddit before Conde Nast took over and dropped the banhammer.

I perceive Reddit as a very good example of the kind of community I _don't_ want, because any discussion that is deep, complex, or otherwise not aligned with the popular view is impossible there, so I'm afraid we're at an impasse.

> But the alternative you're proposing is an echo chamber

Luckily, I haven't proposed an alternative, so it's rather interesting what imagined alternative you're making that statement about...


> I haven't found it particularly worthwhile to distinguish people who say terrible things to troll from people who say them because they believe them. They're very often the same group, because reasonable, empathetic people are going to neither say nor believe those terrible things.

> If you cannot carry a conversation with consideration for the other people in it, I do not want you in my community.

Okay, you can want whatever you want, and I understand why you want that. I also have the gut reaction, when someone says something bigoted, to avoid the person so I don't have to see it, or to respond with vitriol and ostracization, because that's what feels good in the moment. But if people continue to insist on putting their head in the sand and take actions that feel good rather than actions that actually address the problem, these problems are only going to get worse.

> I perceive Reddit as a very good example of the kind of community I _don't_ want, because any discussion that is deep, complex, or otherwise not aligned with the popular view is impossible there, so I'm afraid we're at an impasse.

I have had fairly in-depth conversations and said plenty of unpopular stuff on Reddit all the time, so I'm not sure what you're basing this on.

> Luckily, I haven't proposed an alternative, so it's rather interesting what imagined alternative you're making that statement about...

You said, "If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome."


> But if people continue to insist on putting their head in the sand and take actions that feel good...

I'm not sure how you go from "I don't want trolls in my community" to "it just feels good, you're putting your head in the sand". I'm not putting anything anywhere, I know exactly what I am doing. I don't want trolls in my community.

> these problems are only going to get worse

Not in my community they won't.

> I have had fairly in-depth conversations and said plenty of unpopular stuff on Reddit all the time, so I'm not sure what you're basing this on.

This is a subjective thing obviously, but it's not like it's some new sentiment I made up; plenty of people find Reddit a terrible place to have interesting conversations. In particular, a thing you'll see mentioned often is that shorter, less complex posts are often more liked than longer, more complex posts that require a lot of effort to write.

> You said, "If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome."

Which is not a proposal. It's a statement on consequences. A proposal looks like this: "To have a well functioning community, you need to have this, this, and this, and not that". I've said nothing of the sort. Communities are complicated and require design, and there's a lot of variety within communities besides just "free for all" and "echo chamber".

You have a conversational style which seems to like to presume that the person you're speaking to is doing something they never claimed they're doing (keeping their head in the sand, or suggesting an echo chamber), which might be why you find Reddit tolerable, because this is very much the kind of interaction I find really annoying and could do without. It's always easy to feel right about everything if you just put words in the other person's mouth.


> If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome.

I have never seen it play out that way in practice.

> I've never observed that a no-moderation community is free of trolls

I've never observed a community with moderation and no trolls. You will not ever eliminate them completely, so the better approach seems to be to ignore them and just ban outright spam.

And I did observe an almost unmoderated community that had trolls; nobody cared about them, and everything was fine.

> very large amount of people simply do not enjoy ...

I think you are doing a lot of projection here. Maybe people agree with you, but considering how broken your arguments are, I would not take your sweeping generalizations seriously.


The issue is that many in group 1, the baiters, intentionally and effectively mimic those in group 2, the earnest people. So sadly, although they are different, there is no way to reliably distinguish them by their posting behavior.


No, the parent was simply saying that “trolling” (or, having a bad opinion) doesn’t imply a character flaw, and that we shouldn’t use language that makes that implication, lest we start to think of people who have bad opinions as irredeemable. People can do wrong without being fundamentally bad, broken people.

If we treated, say, driving a car the way people treat Internet discourse, you would be dragged out of your car and stoned to death the first time you cut someone off.

Yes, sure, ignore posts that seem like trolling. Filter or block them, even. But perhaps you could give the people behind those posts a second chance, before writing them off for life for one post.


You know, nobody has actually been lynched for either trolling or reckless driving. Maybe some were killed, if they crossed the wrong kind of victim, but that's the risk you assume when being a jerk.

On the other hand, I would fully support a law that gave a temporary suspension of people's driving licenses if there were a reliable way to tell that they have a habit of cutting other drivers off, and made it a permanent ban on operating any kind of vehicle for repeat offenders. Again, it would be greatly inconvenient for them, but there must be some point where the rights of the public supersede the rights of individual assholes.


Is it not easier to ignore trolls and let them realize they will not get attention that way, even without being censored? And if they are not ignored, then perhaps they are more than the simple trolls we imagine them to be?


Implying dishonesty is toxic!

Oh haha well done...


I submit to you a subreddit I've found in the past week that is absolutely horrifying, and that I think stands quite counter to the "variety of opinions" model you put forth.

This isn't anything visually graphic, but the opinions expressed in this subreddit make me believe that these people are preparing for a violent insurrection, full stop.

https://www.reddit.com/r/CBTS_Stream/

CBTS stands for the Calm Before The Storm. These people follow an anonymous online poster named "Q" who posts vague, short posts on some other site (not sure where), and then lots of other subscribers repost these and form general conspiracy theories that all revolve around the deep state, the NWO, and other nefarious groups colluding to remove Donald Trump from office. It's not your standard fare related to the ongoing investigations in Congress or the special counsel; these people are the dangerous combination of paranoid, gullible, and angry. As an outsider just perusing, it's obvious that this place is crawling with charlatans and con artists who understand that they are addressing a crowd of people who are prone to believe an idea simply because of the tantalizing ramifications if it turned out to be true. Anyone can spout the most harebrained idea, and three people will show up to give vague, outrageous stories that confirm it.

I don't have an answer, but surely there must be a level of fomenting anger and general mob action that deserves some sort of modulation/regulation.


> the opinions expressed in this subreddit make me believe that these people are preparing for a violent insurrection, full stop.

What makes you think so? I've looked at it (admittedly, didn't spend too much time) and it looks pretty standard fare for a subreddit - or any other forum like it. Its slogan is "BE LOUD. BE HEARD.". People that are preparing violent insurrection don't want to be loud and heard. They want to be silent and invisible until they have enough people and materiel to overthrow the government. People that want to participate in a democratic debate want to be heard. There's no point in being heard by the other guy if the next thing you're planning to do is shoot him (well, maybe if you demand surrender, but I see no such demands there, and it would be weird to do it on Reddit). The only point I can see in being loud and heard is if you're trying to convince somebody, or at least gather support - e.g. for winning an election, or pressuring an elected representative into doing something by showing them how many people demand it. All that is part of the normal democratic process.

Even if the people there hold some unacceptable views (I have no idea if they do, but even if we assume for a minute that they do), that doesn't mean they are planning violence. What is the evidence that they are?


I mean preparing for a violent insurrection that they believe is about to be started by someone else. I don't think they are planning to start it. So many of the posts allude to events that are imminent. My fear is not that the logical progression of this group is a pre-planned and executed terror attack of some kind, but that they are being radicalized, and are not far off from a mob of Manchurian candidates. Perhaps that's a very loaded and hyperbolic phrase, but consider this: imagine a hypothetical situation where Donald Trump resigns, or is impeached, or for whatever reason leaves office on a day other than Jan 20, 2021 or Jan 20, 2025. The mass demonstrations that would inevitably ensue would be powder kegs. I don't think it's hyperbole to say that those protests could quickly devolve into mob violence on a large scale. This subreddit is essentially prepping people for that day, and, like a doomsday cult, insinuating that the day is coming very soon.


It may or may not be true that they are expecting a doomsday of sorts, but there are a lot of conspiracy theorists who have predicted imminent doom literally for decades, and when it never comes they aren't bothered by it even a little. If that day never comes - and most likely that's exactly what would happen, if history teaches us anything - there would be no harm done other than a bunch of folks wasting a lot of time on the internet talking about stuff. So far mob violence has been a rather rare occurrence in political demonstrations, actually - and the violence that happened lately was mostly driven by antifa. That's pretty much the only movement right now that openly uses mob violence and achieves some political success by it - cancelling speeches, shutting down events, etc. Are there any other examples?

I see a lot of explanations - especially on the left - of how expressing certain views is akin to violence. If we had tons of actual violence happening - or clearly imminent - we wouldn't need any speeches about how words are similar to violence. It would be clear to us that there's actual violence, and there would be a lot of pointing to it instead. So I take it as a sign that there's actually not much violence to be pointed at, if we are pointing at words instead.


I'll wholeheartedly agree that there is no smoking gun to any of my theories. I don't equate this speech with violence. My point is just that situations that result in mob violence can deteriorate at a rate far faster than rational voices can prevail. People are programmable. The level of fervor I see on some of these subreddits tells me that someone is programming these people. People are using the tools of psychology to achieve goals that would not be achievable otherwise. In the two years since the programming of this particular group began, all of the protests I've seen from Trump's base have not been airings of grievances, but more of "let's go gloat in public and see if we can trigger the left." Trump's base is triggered with schadenfreude, not anger. My point is that there is a singular event, Trump leaving office prematurely, that large groups of the right will inevitably interpret as a coup. If that very situation arises, then the calm, see-it-coming-a-mile-away, clear signs of trouble you envision are suddenly hundreds of thousands of people in crisis mode.

I realize that there is a spectrum of using psychology to influence people, and I can't tell you where the line is of too far, but can we agree that the line exists?


> I don't equate this speech with violence.

Oh I don't say you do. I am saying the need of so many people to do it suggests there's a distinct lack of violence to point at, otherwise they'd be pointing at it, instead of pointing at words. And since these people are highly motivated to find anything to point at, their failure to find it suggests maybe there's indeed not much political mob violence to find.

> The level of fervor I see on some of these subreddits tells me that someone is programming these people.

For some definition of "programming", maybe. But by that definition, everybody who debates on the internet "programs" everybody else participating in the debate. It's just a nefarious-sounding way of describing mundane things, just like writing "contains chemical compounds!" on food packaging.

> People are using the tools of psychology to achieve goals that would not be achievable otherwise

Not sure what you mean by "otherwise". People communicate. Some of them use knowledge of human psychology to make their message more persuasive. It's not something that appeared today or yesterday or this century or this millennium. Is it harder to convince somebody of something if you ignore human psychology? Of course. But there's nothing nefarious about it - it's like saying "people are using tools of physics and chemistry to achieve goals that would not be achievable otherwise". Sure they do, all power to them! That's why we spend all the big bucks financing the science!

> My point is that there is a singular event, Trump leaving office prematurely, that large groups of the right will inevitably interpret as a coup.

That would largely depend on the manner of said leaving, I'd assume. For example, if he becomes gravely ill or suddenly dies, that sounds unlikely. If Democrats win a majority in the Senate and House in the next election and decide to immediately impeach Trump "because he's bad", without proof of any real crime - that sounds much more likely. But that would be a consequence of highly inappropriate behavior resulting in a loss of trust in the democratic system by citizens. The cure for it is not to behave like that. If there's no such behavior, then history shows there would be no significant violence. I've heard rumors that Bush would cancel elections and institute martial law, then that Obama would cancel elections, and no doubt I'll hear about Trump cancelling elections, and then whoever is elected after Trump will cancel elections too. There's always talk like this, because it's easy.

> I realize that there is a spectrum of using psychology to influence people, and I can't tell you where the line is of too far, but can we agree that the line exists?

Not really. There's no "using psychology" but plain old persuasion, and no persuasion is "too far".

Well, of course, if you use violent methods like torture, you can also achieve psychological effects, but if we're talking about persuasive speech alone, then there's nothing "too far" in that. There's no words that can make robots out of people, and in fact convincing somebody to change one's mind on a political question by just throwing words at them is really hard. Possible, but hard. People may be "programmable", but not very easily. Usually if they become convinced in something, there are a lot of reasons for it and a lot of background for it, not just some nefarious article on some forum.


"There's no words that can make robots out of people"

How do you explain cults? How do you explain the effects of advertising? How do you explain the uniform levels of discipline achieved by basic training? How do you explain phone scammers? How do you explain the success of the public relations industry? How do you explain Bernie Madoff? How do you explain cigarette smokers? How do you explain the effects of what we refer to as echo chambers? Every one of these consists of people being programmed or brainwashed in one way or another.

You hear the word brainwashing and immediately think of someone that's hypnotized, or a zombie, the typical Hollywood trope. But it's a far more common thing.

If you're interested in reading about this, there is a book by a psychologist named Robert Cialdini, called Influence: The Psychology of Persuasion.

https://www.amazon.com/Influence-Psychology-Persuasion-Rober...

One interesting persuasion trick is to start with extreme opening bids in negotiation, and then back off to what you really want. This is how the actors of Watergate were able to convince others to go along with the plan to break into the Watergate. The original plan was far more involved, with a $1,000,000 budget, and included kidnappings. G. Gordon Liddy used this as an extreme opening bid, and eventually convinced everyone that what eventually took place was a reasonable compromise. After all, it's not like they kidnapped anyone, and they only needed $250,000.


Most of your litany of questions can be answered by saying that nobody became a cultist by only reading, on a screen, what the cult believes.

Take Scientology for instance. I have access to the whole of their printed words online, yet I am not a Scientologist. You wouldn't be either. You can sub out "Scientology" for any other cult or any other odious group (including race nationalists, terrorists, etc.) and the statement remains true.

The reason being that nothing is being engaged here other than reading, writing, and thinking. No money is changing hands, no leader is demanding my obedience at literal or metaphorical gunpoint. In the case of a web forum the absolute worst thing that could happen to me is that I'll get rude comments or be unable to participate further.

The only difference between discourse and propaganda is the aim of the people doing the talking. This is a subjective value judgment on what you think of the speech, and when it comes to analyzing it, this amounts to noise. Are you trying to propagandize at me? :)


The flaw in your argument is that the obedience must be demanded at gunpoint. In actuality, the obedience can be gained through a combination of continuously putting people under stress and framing yourself as the only viable solution to the stress. It's how you train dogs, and how you train Marines.

Narcissists and con artists don't need to threaten violence to achieve their means. The threat of violence is simply one of the most effective methods of placing someone under emotional stress. It is once these people are under emotional stress that their defenses are down, and they are vulnerable to brainwashing.


@Karunamon: I'm not sure why, but I have no reply link under your latest post, so I'm posting as close to it as I can.

I don't think it's controversial at all to say that both Facebook and Reddit are brainwashing people. Not to say that the companies themselves are doing the brainwashing, just that they are effective platforms for anyone to do it on. Do you really not know anyone who literally lives on one of the two sites? They are both highly addictive echo chamber services that people willingly return to, hoping that the next page load will contain their unicorn story that confirms everything they want to be true. It's the dopamine cycle that makes social media interesting in the first place. These are two of the four most visited sites in America.

I can't control for people's previous experiences, or say that Reddit or Facebook are the exclusive causal factors, but I think that just the fact that they are excellent at getting users to self-select for interesting content, at the expense of content that may challenge their opinions, is enough to count these services as brainwashing platforms. People are figuratively screaming into the void "I want to be entertained!!!" Trump answered the call. The repeated dopamine cycle of discovering outrageous content, and then eventually petering out to boredom, is the stress. Once you are sufficiently stressed out, you are far more susceptible to believing that the media is lying, the FBI is biased, and the intelligence community and government at large are filled with evil actors with their own agendas. I mean, how interesting would that all be, right?

Another point to consider is that I don't necessarily think any one organization has caused all of this to happen. Internet addiction has obviously existed nearly as long as the internet. My theory is that Facebook and Reddit effectively teed up millions of people for someone else to come along, and capture their minds.

As I've said throughout this thread, I'm not suggesting a minority report situation, or that circumstantial evidence should depose a president, or that censorship is the answer to any of this. What I'm more lamenting is that the charlatans are winning, and their methods are nearly impervious to defense in a modern democracy. As best I can tell, all I can do is try to convince people of what I think is at play, and hope that it resonates.


Re commenting: That happens after two or three replies - you have to click on the timestamp to get the permalink to the comment to be able to reply to it.

Re everything else: This is a reply that doesn't do justice to the effort you put into it, but I think with your definition of "brainwashing", we've made that term functionally useless, and I think bringing partisan politics into it apropos of nothing has made any further honest conversation on this matter impossible.


Fair, but how do you continuously put people under stress using only words on a screen? The comment you originally replied to asserted that "There's no words that can make robots out of people", and so far you've provided counterexamples that use many more things; none of which are restricted to internet comments.

Keeping the original topic in mind, we're talking about online comments. Not cults, not marines, not anything other than words on screens.


> Fair, but how do you continuously put people under stress using only words on a screen?

Once someone has become afraid of something, it doesn't just vanish after the event that caused it is over.


> How do you explain cults?

People have a strong drive to belong to an in-group. There are multiple experiments showing that assigning random markers to random people and making them participate in certain activities leads to "group cohesion" effects, and people start assigning deep meaning to those markers despite them being completely random. A cult is just when people take it to the extreme - likely because they didn't find satisfaction for their in-grouping drive elsewhere.

> How do you explain the effects of advertising?

Which effects, specifically? It's mostly brand recognition, aka availability heuristics - if you have brands A, B and C and you heard before that A is good, you're more likely to choose A than unknown B and C - and just plain informing people that things like brand A exist. And a touch of in-grouping ("if you drink Coca-Cola, you are in a group of cool people"). And a tad of signaling ("if we have money to buy an expensive ad spot on TV, we must be a successful company that can afford to create a good product, would not disappear tomorrow, and values its reputation, so we would not cheat you").

> How do you explain the uniform levels of discipline achieved by basic training

They are not that uniform, but again, in-grouping plus the fact that other guys will literally be shooting at you (though research shows a lot of this shooting is much less targeted than previously thought).

> How do you explain phone scammers? How do you explain the success of the public relations industry?

That's plain persuasion, with various ethical fences either present or absent.

> Every one of these consists of people being programmed or brainwashed in one way or another.

Again, if by "programmed" you mean "persuaded of something they were most likely inclined to believe from the beginning, due to selection and grouping effects", then sure. People can be persuaded to buy a shampoo, especially if they wanted to buy one already; people can be deceived, especially if they're already out looking for something the deceiver seemingly offers; and people can be blind to arguments, especially if those arguments challenge their prejudices.

The only thing different here is tobacco smoking - that's physical addiction, it is a different mechanism.

> If you're interested in reading about this, there is a book by a psychologist named Robert Cialdini, called Influence: The Psychology of Persuasion.

Yeah, I know about Cialdini. It has a lot of nice tricks, but it's not magic. It's much less magic than it's made out to be. And yes, reframing and anchoring is one of the tricks. If you watch Trump carefully, you can see all these tricks employed; he does it all the time. He didn't invent them, of course - expensive stores have been putting items with outrageous prices on prominent display for ages, to make regular item prices seem lower in comparison. It does confuse some heuristics for people. But anybody is capable of approaching the prices - or Trump - rationally and seeing the actual price. There's no "programming" to prevent it - if one makes minimal effort, one can always do it.


Most of your explanations amount to spreading the trees out far enough and saying, "see, there's no forest here". You agree that all of the underlying mechanisms perform their component tasks, but disagree that there are cumulative, persistent effects of prolonged exposure to combinations of them.

My point is that there are large swaths of conservatives that have been radicalized by the last 25 years of Fox News and Rush Limbaugh. And now they are being told that mainstream media is lying about everything.

It's so complete in many people that they don't hear themselves when they lay out their political motivations. The number of people whose primary goal in achieving any particular political end is to anger their political opponents is staggering. It's like a catch-all: if someone can't be convinced of the traditional conservative stance on an issue, they can fall back to "well, at least liberals will lose their minds".

This radicalization is primarily the cumulative effect of conservative media, and its vilification of the left.

Cults can't exist without a confident authority figure dominating the flow of information to the adherents. These things don't just happen out of a vacuum, without a leader in on the scam. Just because reading Dianetics is not 100% effective in converting people to Scientology, that does not mean that the book isn't an effective tool of persuasion that, combined with other effects like in-group psychology, can get people to join a cult that will bankrupt them without a second thought.

With regards to advertising, I mean the use of sex, patriotism, or other emotionally charged concepts to trigger positive associations with the subject being marketed. I don't claim that it works on all people in all instances; I mean that enough people are susceptible to it, and it has persisted over a long enough period of time, that it has produced a significant number of people with seriously warped views of how government works, how the 20th century played out, and what the powers that be are planning to execute imminently. I think there are literally people out there being primed to support whatever totalitarian aspirations Trump may have, and being convinced of the righteousness of his cause. Do you really think that hundreds of thousands of people are being facetious when they refer to Donald Trump as "God Emperor"?

With regards to basic training, the effectiveness can be attributed directly to the process I describe. Convince people that they are in danger for long enough; reinforce this with threats and screaming. After the people are sufficiently scared, give them a path to escape the danger. Apply the imaginary danger proportionally to the level people stray from your prescribed path. You posit that since the danger is controlled and not as real as the grunts are led to believe, the effect is somehow negated. It works in the military because you are isolated from any contrarian information by the military itself. There's no source of information to tell the trainees that it's a controlled, safe environment. It works in political echo chambers because the people have isolated themselves willingly. There's no _trusted_ source of information to temper the vitriol from the echo chamber.

Your model of people and how they take in and react to information ascribes a lot more agency and rational decision making than what I posit. Your argument is that since everyone technically has the tools available to them to become educated enough to not fall prey to devious persuasion, devious persuasion is automatically defanged, because all people avail themselves of all education.

Tobacco smoking, specifically nicotine addiction, is not a different mechanism at all. In fact it is a highly illuminating example that shows how insidious constant bombardment with persuasion can be. Nicotine has only the slightest physical withdrawal symptoms. The cravings smokers exhibit are a product of the brainwashing. People don't wake up in the middle of the night from cigarette cravings. They've convinced themselves that smoking a cigarette removes stress, instead of ensuring its perpetuation. When they crave the cigarette, they fantasize about how much they will enjoy it, but when they actually smoke it, addicts are surprised 20 times a day to find out they don't enjoy it at all. But this surprise doesn't free them from their mental prison. They are of the opinion that cigarettes modulate some completely unrelated stress in their life. Think of talking to a smoker and just telling them how dumb they are, by repeating obvious facts that we all agree upon. How does that work out for you? They immediately put up defense mechanisms and dig in. But in their private moments, they know that everything you said was true. I posit that many Trump supporters are in the same predicament. They are faced with the prospect that their entire world view is wrong, that they aren't the woke geniuses they've assured themselves to be, that they are just Bernie Madoff investors, tools of Vladimir Putin; the choice to double down on supporting Donald Trump is no choice at all.

Trump supporters are addicted to him like nicotine, and they will irrationally defend the nicotine even as they cough and wheeze, because after all, what's more emasculating and cuckold-like than admitting you've been catfished by a charlatan to the very people that have been screaming this fact at you for over a year?


I guess this was longer ago, but the Malheur National Wildlife Refuge takeover by the Bundys was essentially a militia challenging the government to armed conflict. I'm well aware that the case against these people ended in a mistrial because of exculpatory evidence that was withheld. That doesn't change the fact that conservatives have threatened violence in recent political protest.


I suspected somebody would bring up the Bundys. However, given that the whole dispute is about grazing fees on some piece of federal land, I can't really take it as a political statement. Yes, the Bundys proposed some political theories as justification for their actions, but if there wasn't $1M in grazing fees at stake, I don't think any of it would have happened at all. The fact that violence was threatened is bad - but it is hard to take this as a genuine example of politically-motivated violence.


Well, luckily for all observers, the Bundys erased any doubt that they were making political statements with their other armed standoff with federal authorities. You refer to the elder Bundy's grazing fees dispute, which played out at their Nevada ranch. This actually involved a grievance the Bundys themselves held. I refer to his son Ammon Bundy, who took up the cause of another father and son from southeast Oregon, who were jailed for starting fires on federal land while clearing brush. They were originally sentenced to some amount of time, which they served. A judge later determined, after they were already released, that their sentencing was somehow improper, and that they had to serve more time. The father and son willingly complied, and turned themselves in. Ammon Bundy and his fellow Yeehawdists took over a National Wildlife Refuge, and essentially tried to provoke another standoff. They want federal lands to be wide open to do whatever they want with, to strip resources, and effectively get something for nothing. They see their inability to do whatever they want, and the consequences of doing so, as tyranny of the government.


I submit to you, a subreddit I've found in the past week that is absolutely horrifying, and I think stands quite counter to the "variety of opinions" model you put forth.

I'm not familiar with that subreddit, but I think I'm familiar with the mindset you are describing. Nothing in your description seems to set it apart from the "variety of opinions" model the GP proposed. Other than your feeling that these ideas are "beyond the pale"[1], what exactly differentiates this subreddit from a group of people whose ideas you disagree with?

These people follow an anonymous online poster named "Q" who posts vague, short posts on some other site

I thought the comments by Occams-shaving-cream in this thread were a good take on Q: https://www.reddit.com/r/conspiracy/comments/82qpk5/in_case_.... He suggests that Q is essentially a marketing team within Trump's campaign, casting out ideas and seeing what resonates with potential voters. He hypothesizes that it may not have started this way, but given the obvious utility of having such a mechanism, it likely has become such by now.

I don't have an answer, but surely there must be a level of fomenting anger and general mob action that deserves some sort of modulation/regulation.

Maybe, but it seems possible that attempts to censor discussion may paradoxically fan the flames. Given a group of paranoid conspiracy theorists who believe that the "powers that be" want to silence them, "modulation" seems equally likely to make them believe even more fervently that there are powers who do not want certain knowledge to be known. And they're right! The difference would seem to be only that they believe this knowledge is truth, and you believe it is a dangerous lie. Public discussion seems like the best way we have of resolving this dispute. Trying to stamp out an idea like this seems more likely to produce violence than to prevent it.

[1] Did you know that this phrase originally meant "outside the reach of English law": https://englishhistoryauthors.blogspot.com/2013/03/the-origi...


I don't disagree that just about any regulation of these groups will be interpreted as more conspiracy.

I guess the crux of my argument is that these people are the victims of psychological warfare. I'm not throwing out the word brainwashing for emphasis; I truly believe a lot of these people are actually brainwashed. To give context, I don't believe it's controversial at all to say that boot camp in the American armed forces, and most assuredly in other countries as well, is brainwashing through and through. Put people in a prolonged state of stress. After sufficient time, tell these people you can end all of the stress if they just follow your directions. Run till you puke. Have trained soldiers screaming in your face. Be woken at all times of the night to both run till you puke and have trained soldiers scream in your face.

Instead, many conservatives in America have been fed a constant diet of outrage/dopamine cycles about how evil Bill and Hillary Clinton, Barack Obama, and anyone associated with them are. They're given 30,000 emails to peruse that are eventually framed to "unveil" a pedophile ring tied to all of the current players in the democratic establishment.

Then, most importantly, a billionaire with no political experience and no political capital to lose whatsoever comes along and trolls, and proves wrong, nearly the entire national media for over a year. Nearly every week, he makes some offensive remark that leads every veteran of any election anywhere to believe he has committed political suicide. Thus, a year and a half of reporting that Trump will soon quit the race and has no chance to win. But since, unlike all of those previous politicians, Trump has no "betters" to please, no one in American politics had anything with which to pressure Trump to do anything he didn't want to do. So when Trump won the election, the brainwashing was complete. Trump had led them out of the "prolonged stress" of Barack Obama's 8 years, and the year-and-a-half-long prospect of Hillary Clinton being President for 4 years after that. Everything he said about the lying media turned out to be "correct". Witnessing Trump win under the unique circumstances in which he won has had the psychological effect of making him a nearly god-like figure in the eyes of his base. He has cut through political correctness, sexual assault claims, and ethics concerns, taken his red meat to the Supreme Court and won. In the eyes of someone already inclined to pull for the guy on their side, Donald Trump became nothing short of Luke Skywalker. If you think this is hyperbole, here's another example of how people can be brainwashed by witnessing uncanny success against all odds, especially when that success stands to improve the brainwashed's lives immediately.

There is a known email scam where the scammer emails a sufficiently large pool of marks the winning team for a single National Football League game every week, ahead of the game, so the marks can bet on it. Every week (out of 16), the scammer simply takes the group of people who "won" the previous week's game and splits them in half, telling half that team A will win, and half that team B will win. Obviously, no one will listen to someone who picks losers, so assume that every week half of the marks leave the scam. If the pool starts out sufficiently large (32768 is enough), after 15 weeks there will still be a mark who has been given the winning team ahead of time, for 15 consecutive weeks, by some anonymous stranger. It's not controversial to see why that mark may have been brainwashed by the process, and would be willing to pay large sums of money for that 16th pick.
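
The arithmetic is easy to check; here's a quick sketch (Python, purely illustrative):

  # Start with 32768 marks. Each week, half are told team A will
  # win and half team B; only the half that saw a correct pick
  # stays in the scam.
  marks = 32768
  for week in range(1, 16):
      marks //= 2
      print(week, marks)
  # After week 15, exactly one mark remains who has received
  # 15 consecutive correct picks, since 32768 == 2**15.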

Obviously, Donald Trump didn't run this scam, but it goes to show that people can be casual observers of information, and be presented with just the right information to be essentially brainwashed.


Wow. You seem to be experiencing your own conspiracy theory. Not every popular idea has some puppet master controlling it. It's completely understandable that supporters of a widely ridiculed underdog would gloat about his success. It was just another election, and just another president, not the end of the world. America hasn't collapsed.

Nonetheless, I don't want to censor your ideas. Let people make their own minds up. McCarthyism censored communist ideas because they were too dangerous and people might get brainwashed. Was that a good idea too? I thought the whole free speech ideal of America was to keep political ideas out in the open where they can stand and fall on their own merits, not to silence groups of violent supporters like you get in Venezuela or Egypt, which leads to revolution after revolution.

For an example of what might be a dangerous idea of the left - how about the popular one that blacks are poor because whites oppress(ed) them? That sounds like a recipe for endless failure. Don't work to improve yourself - just get angry at the bogeyman. They did it in Zimbabwe and it ruined them. They're doing it now in South Africa. And American blacks are listening to the left's mantra of oppression too. All it can do is make people angry and hateful - what if it leads to race riots?


If you don't believe there are massive psychological exercises occurring on the internet, specifically on Reddit, that are successfully influencing and arguably radicalizing people, you're being willfully ignorant. Or a Trump supporter, but I repeat myself.

I realize that the topic of brainwashing is loaded enough that any rational discussion of it can easily be derailed by someone simply making the strawman you just made: comparing my arguments to claiming the end of the world or the collapse of America.

Isn't it ironic that you can easily spot a culture of victimhood in others - the psychological effects, the futility of it - but, I imagine, don't see the same thing in your average revanchist conservative? The level of hatred and frothing at the mouth over 8 years of Barack Obama, fed by people like Donald Trump, Sean Hannity, Alex Jones, and other charlatans, created a victim complex that seems to persist even now in many Trump supporters.


Wow... instead of debating the pertinent points raised, you resort to name-calling.


Let's not pretend you're not attempting to gaslight me. I _returned_ name calling, as my interlocutor decided the easiest way to dismiss my claim was to frame it as Chicken Little proclaiming that the sky is falling. Then I debated his points.

Let's break it down point for point:

- lopmotr framed my comments as a conspiracy theory. I retorted that there is obvious evidence all over reddit that supports my claims.

- lopmotr brought up examples of victimhood as the causes of the decline of states, and drew a line to what's happening in America with blacks as equivalent. I retorted that conservatism has this same victim complex, and that it's ironic that people can see it in others much more easily than they can see it in themselves.


As someone who is sympathetic to your argument, I disagree with your tactics. 'lopmotr' suggested that you might be falling prey to the same psychology that you are seeing in others. While it's probably impossible to be completely polite when doing this, I don't think he was trying to be rude. As you might guess, self-identified "conspiracy theorists" don't necessarily consider "conspiracy theory" to be purely an insult.

You, on the other hand, compromised your otherwise reasonable argument by flippantly claiming that approximately half the US voting public is "willfully ignorant". This is gratuitous "name calling", and el_cid was right to call your attention to this. I didn't vote for Trump, but have intelligent friends and relatives who did. Regardless of whether their choice was wrong, insults like this are counter productive to changing anyone's mind. So stop it.

Personally, I think you are right about much of the behavior you see on the right, but seem to be missing (or at least not mentioning) the equivalent online psychological tricks that mislead the left. As your penance, here's a story from someone on the right detailing how he sees some of the matters you are referring to: https://imprimis.hillsdale.edu/the-politicization-of-the-fbi.... I thought it was an interesting read that I haven't seen in the mainstream press.


I'll admit that this theory is pretty out there, and I'm not going to continue to expound about it here. I'll also admit that I can easily get pretty worked up, and veer into the partisan, hair on fire rhetoric that is the very subject of my ire. I pledge to do better, and read your article.

I developed the roots of this idea over December '16, after several conversations with one of my lifelong friends I hadn't seen since a few months before the election. He's an attorney, and someone I know to be extremely bright. He voted Trump. The level of spite and schadenfreude in all of his arguments, and him repeating the phrase "I've never been so sure of anything in my life" in regards to his confidence in Trump to fulfill his campaign promises, was very jarring. All of these traits were completely foreign to my friend before Summer '16. He hadn't gotten less intelligent in any other avenue of his life.

I only came to this theory through the realization that susceptibility to public relations tactics, weaponized persuasion, whatever you want to call it, is not a matter of intelligence at all.


"Let's not pretend you're not attempting to gaslight me." "you can easily spot a culture of victimhood in others"

I think you should focus on continuing to build your case on reddit, and then you should definitely return here in a few years once it's ironclad!


The subreddit you refer to has since been banned:

"This subreddit was banned due to a violation of our content policy, specifically, the prohibition of content that encourages or incites violence and the posting of personal and confidential information."

https://www.reddit.com/r/cbts_stream


wow. I wonder what falls out of that.


Exactly. Giving a massive benefit of the doubt that their complaints can be non-partisan, some alternative terms that might better capture their concerns are:

  - Polarizing
  - Extremist/Fundamentalist/Zealotry
  - Trolling
However, these sorts of things are legitimately "toxic" to a welcoming discussion environment (i.e. a large social site's revenue stream), but that's being conflated with being "toxic" to society in general, which is itself an extremist perspective. :-P


I would say that all of those things are toxic to society in general. Those behaviors just tend to be more muted out in the real world where acting shitty is more likely to have immediate consequences.

I can't think of any non-contrived situations where those behaviors aren't equally damaging to social bonds in any situation, whether it's an online community or a government or a family or an organization. They all reduce enlightenment, are always irrational, and always increase human misery, even if it's just a teensy bit in the most benign cases.


Yes, you agree that they're "toxic" to communication, which I stated. But I wouldn't say they, for example, reduce "enlightenment"; they're a product of not being "enlightened" (for whatever measure you use).

All of these are products of the failings/weaknesses of humanity, not causes of them. Their presence is to be expected; they crop up in everybody, not just in some "toxic" subset of people who can be excluded.


This is true, but “toxicity” seems to arise much more frequently in some people than others.

In Robert Sutton’s book, “The No Asshole Rule”, he describes what it takes to be a “certified asshole”:

> A person needs to display a persistent pattern, to have a history of episodes that end with one “target” after another feeling belittled, put down, humiliated, disrespected, oppressed, de-energized, and generally worse about themselves.

Put another way, in the series “Justified”, Raylan Givens opines (paraphrasing here):

> If you come across an asshole in the morning, well, you just met an asshole. If you’re coming across assholes all day long, maybe _you’re_ the asshole.

Assholes at work create a genuinely toxic work environment. People get sick, quit, and even commit suicide.

It can be argued, with some merit, that this differs from the Internet in that the assholes are usually in a position of power to abuse their subordinates, while on the Internet - at least in chat rooms and the like - people can withdraw from hateful environments by just closing the tab on the browser.

That’s not my point, though; I’m asserting that people who show a “persistent pattern” of promoting hatred of particular groups, inciting violence, and convincing people of harmful information through lies, half-truths, and myths, deserve to be labeled as “toxic”, and can be far more dangerous to society than a common or garden corporate asshole, because their messages can - and do - influence millions of people towards antisocial or, at very least, irrational thoughts and activities.


I think what this boils down to, and is extremely apparent in those quotes, is communication skills & style.

People with very controversial beliefs (and in a free & diverse society, everything is controversial along some axis, hence we need to extend measures of freedom to each other) can still act civilly, or they can flail and be problematic on forums. That's not a feature of their beliefs, but a feature of their behavior (stubbornness, arrogance, etc). People who have fully "correct thinking" in some scope can also be disruptive, poorly behaved members of discussion-based communities.


Spoken like someone who hasn't spent much time on reddit recently - that is, since the massive surge in white nationalist, alt-right, and other right-wing hate ideologies online (and particularly the creation of the_donald, a subreddit which has shockingly avoided the ban hammer for an inexplicably long time given its obvious explicit purpose: to be an echo chamber to spread hate, scream slurs, threaten people with death and genocide, and dox enemies).

In literally any part of reddit, if there is any post on anything that could be construed as racial or about any gender, even innocuous, hordes of extremist right wing trolls descend upon it and spew horrifying screeds of hate. They abuse people into silence. They are toxic. These things leak. And toxicity is real. Hate begets hate, saying nasty horrible things to people and advocating for genocide are not innocuous "beliefs that other people might have" they are unacceptable behavior in civil society.

If you do this in real life you are ostracized, beaten, you lose your job, you are abandoned by your family and friends. And this is good. Social signals and actions to prevent "toxic" behavior have existed since forever. But the Internet is the property of a few companies who are loath to enforce those same social rules. Sometimes, they get pushed far enough that they feel they have to.


You're mixing beliefs with tactics/behavior. The behavior you described is "toxic" no matter what your beliefs are. I agree that the lack of repercussions and social feedback online lead to an increase in people acting like this and it is a problem for pretty much all public forums. However, it is neither constructive, nor is it truthful, in my opinion, to attach this behavior to a single group, side, or set of beliefs. All you'll end up doing is driving moderates of said group further to the extremes. You can call out ideas you think are bad and you can call out behavior you think is bad, but "other-ing" an entire group based on the worst actions at the fringes of their membership just isn't going to change any minds. It only widens the divide.

Edit: The exception, of course, is if the behavior that is at issue is actually encouraged by a foundational belief of the group.


>Edit: The exception, of course, is if the behavior that is at issue is actually encouraged by a foundational belief of the group.

I'm glad you added this; I agree with you in general. I'm not interested in "other-ing" right-wing people, Republicans, moderates, or conservatives, but I'm very interested in "other-ing", e.g., neo-nazis or Klan members. I am not worried about neo-nazis becoming more extreme (is this even possible?) and I am also not willing to let their sensibilities or concern for their feelings dictate any part of my or society's behavior.


There are in fact system dynamics in which specific behaviours are unhealthy. Toxic, if you will.

Consider these, generally, as hygiene factors.

There are substances which are toxic in specific concentrations or circumstances, which are otherwise healthy: oxygen, CO2, water, vitamin D, salt, nutritional iron. Certain forms of discourse.

My view increasingly isn't that there are things, but interactions or behaviours. A thing isn't, but is what it does or how it behaves.

(This ... tends to simplify numerous ontological questions.)


> My view increasingly isn't that there are things, but interactions or behaviours. A thing isn't, but is what it does or how it behaves.

The "belief in things" is itself a cognitive simplification we make, probably for the sake of efficiency. To use a programming analogy, toxicity as a concept is a function of at least three arguments - toxic(what, to-what, context) - but we attach it as a label to the first argument and store it there.

Compare e.g. with beauty, itself a function of at least two arguments - beautiful(what, to-whom). But we usually assume to-whom = "human like me", and stick the whole thing as a label on a thing, because 99% of the time, that's the correct thing to do (incidentally, the quote "beauty is in the eye of the beholder" is literally a reminder that the concept of beauty is a function of multiple values).

Confusing the "arity" of concepts seems to be the cause of quite a lot of misunderstandings between people.
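
To make the analogy concrete, here's a minimal sketch (Python; the thresholds and names are made up purely for illustration) of the difference between storing "toxic" as a label on a thing and treating it as a relation:

  # "Toxic" as a stored label: the recipient and the context
  # are silently baked into a one-place property.
  LABELS = {"salt": "toxic"}

  # "Toxic" as a relation: the same substance gets different
  # answers depending on the dose and on who is exposed.
  def toxic(what, to_what, context):
      if what == "salt" and to_what == "human":
          return context["grams"] > 100  # made-up threshold
      return False

  print(toxic("salt", "human", {"grams": 1}))    # False
  print(toxic("salt", "human", {"grams": 250}))  # True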


What's your meaning of arity here?

"Things" with some bounded shape or form (or other properties) may be related to perceptual apparatus.

We see, or hear or smell or taste or feel, etc., the boundaries, emissions, or other perceptible manifestations of objects or phenomena. If you will, those are their interfaces.

As in other domains, an interface may reveal, or conceal, some more complex back-end, inner working, or larger system.


Arity, as in number of arguments to function.

As in, beauty(who to-whom &optional context) confused as beauty(who).


Got it.

I've been thinking on Aristotelian categories over the past couple of years. Thought occurs that all of them are relations, though with varying degrees of dependence on the observer.

Related, a ~1835 essay on value by W.F. Lloyd notes that all value is relative. That's not a universally held view, but is, I believe, correct.


I kind of agree with you, but I think there are two threads here that need to be teased apart.

I agree that people get considered toxic because they're expressing views that are unpopular within the group in question. All groups have their sacred cows and enemies (where the belief in the truth of these views is often spread socially rather than developed intellectually), and people tend to not look kindly upon someone going against the standard line.

The second thread is that toxicity is not just a matter of the opinions being expressed but also the way they're being expressed - e.g. when people are going out of their way to be rude to others, or when they're not arguing in good faith.


I suspect the two are being conflated, and not entirely accidentally. In many social circles, going against someone's sacred cow is viewed as essentially no different from going well out of your way to be rude or argue in bad faith. This offers a signal advantage - it means it becomes acceptable to silence people who you disagree with on the basis that they're behaving badly.


On that allegory though, the poison is in the dose.

Small amounts of copper are required in your diet to keep your body functioning. However, large doses of copper are toxic.

Same goes for most drugs.


I believe the metaphor of "poisonous chemical" has run away with you. Is it about the listener ingesting knowledge that is harmful ("grokking"), or is it about the speaker making a mistake and thus everything that speaker has done / will do is a mistake?


Totalitarian ideologies such as Communism, Fascism, and Islamic Fundamentalism are inherently, irretrievably toxic. They place the entire future of the human race at risk. I don't favor censoring them but we need to ceaselessly oppose them and shun their proponents.


For, uh, purely scientific purposes I've noticed the posting volume on pr0n subreddits like /r/gonewild and, uh, a few dozen similar subreddits is perhaps 100 to 1000 times the sheer posting volume of a controversial subreddit like /r/the_donald. The sheer volume of relatively R-rated pr0n is perhaps 1000 times the volume of everything else.

There seems to be tap dancing around the issue that reddit is a 1960s Playboy magazine fifty years in the future. There's just enough excellent articles to keep the advertisers amused, but the fact has to be faced that 99% of the traffic is young men looking at scantily dressed young women. You need a little submarine PR stirring controversy about subreddits that, statistically, nobody reads, to keep things looking legit, whereas all the traffic and money is over there at /r/randomsexiness.

Just like the old saying about Playboy, I only read it for r/ama not the nekkid ladies.

I'm not complaining; I'm just pointing out that reddit is THE most successful pr0n site out there with the most brilliant strategy I've ever seen. Please don't confuse it, and its achievements within the pr0n industry, with legacy news media or anything like that.


I think it's disingenuous to try to classify Reddit as only a porn site, or to report on its communities in that context - by virtue of its size and nature, individual (or groups of) subreddits are nearly as disparate as two separate websites, and should not be generalized based on Reddit as a whole; while the porn communities of Reddit may be the largest section, that does not exclude functionally distinct bodies such as writing groups, political activists, and racist gatherings from existing, each with their own identities.

While there is doubtless overlap that influences demographics and discussion, that influence does not preclude non-porn subreddits from relevance or discussion.


> I think it's disingenuous to try to classify Reddit as only a porn site

I completely agree; there are zillions of different experiences depending on the subreddit, and many of them can amaze you. For example, I have asked very specific questions in subreddits like sysadmin and networking on how AWS and Google Cloud handle layer 2 network protocols like Ethernet, and received the correct answers that I never got posting on the specific AWS or Google Cloud groups outside Reddit.


I run a Reddit post scheduler (https://laterforreddit.com/), and I have noticed a substantial bias towards posting in various adult subreddits.

I suspect in my case that this has a lot to do with how people who are browsing porn use Reddit, versus how people use the rest of the site at large. Most subs moderate posts and discussion actively and have norms (and moderators) that act against posting frequently, cross-posting and self-promotion. On the other hand, most porn subreddits are quite happy to accept x-posts, self promotion, etc, because their goal/approach is to provide more porn faster.

This seems to be an emergent property of Reddit rather than a deliberate strategy, but they seem happy enough to keep the revenue coming (as am I).


> reddit is a 1960s Playboy magazine fifty years in the future [...] Just like the old saying about Playboy, I only read it for r/ama not the nekkid ladies.

That's a thought-provoking way of putting it, though I'd add the /r/gonewild subreddit and its variants make it considerably more meta. A lot of people, particularly females, are posting explicit imagery of themselves - revealing a hidden culture of exhibitionism, as the vast majority of them do it for little profit beyond comments and upvotes. There are people on Reddit who do actually make money from posting explicit content, but the vast majority of users who do it are in it for a sense of self-worth from the "updoots" and the thrill of exposing themselves relatively anonymously.

Taking your observations about advertising into account, it's a complex ecosystem. The closest thing I've seen to Reddit is Usenet, but in an age where digital cameras are ubiquitous.


Have any data on it mostly being girls doing it for the thrill? It seems like every time I like a photo and look at the poster's profile, they've got something else going on where the free photo was just marketing. Tons of underwear selling.


> It seems like every time I like a photo and look at the poster's profile

No offence intended but could it be that well-proportioned, model-like subjects are more likely to have their profile viewed, so it's a case of sample bias?

While no, I don't have any hard data, so my points are anecdotal, I tend to be uninterested in subjects that are more model-like, as the point for me is to look at everyday people getting their kit off. And from what I can tell, those everyday people make up the lion's share of self-posted content.


> revealing a hidden culture of exhibitionism

Not only do many females seem to enjoy exhibitionism on subs like /r/gonewild, there's also a not-so-hidden culture of exhibitionism that takes place in real life that "strangely" seems to have so far escaped being raised in the ongoing gender wars discussions. Human beings are very complex beings.


> culture of exhibitionism that takes place in real life that "strangely" seems to have so far escaped being raised in the ongoing gender wars

I'm not sure what discussing a lack of clothing or dressing provocatively would achieve - it's never an adequate defence for poor behaviour or assault by a third party. Any nuanced debate about the topic is difficult and likely to be a no-win scenario.

One of the few reasonable points to be made is that revealing clothing can be inappropriate in professional settings, however I think other females need to be the ones who encourage appropriate attire in the office. A male delivering the message would lead to anger and resistance as it'll be viewed as a form of control rather than something formed from consensus.

This might be an overly simplistic way of looking at it, but female exhibitionism is not dissimilar from a guy flashing how much power and resources they have. It's a signalling system, and it attracts both wanted and unwanted attention, but in a civil society assault and harassment are never acceptable no matter how provocative someone's behaviour might seem.


[flagged]


> the same action (wearing revealing clothing) is considered just fine and don't you dare even question it you victim-blamer when committed by one gender, and sexual assault when committed by another.

Well, no, wearing revealing clothes isn't considered sexual assault when done by either sex. Though I suppose if you mean that male public upper body nudity is treated as acceptable whereas female public upper body nudity is treated as criminal and a sex offense, but were just using slightly hyperbolic language about “sexual assault”, you'd have a point.

Though that seems to be in the opposite direction of the fantasy you are trying to sell.


> It might open people's eyes to some of the hypocrisy in the current public dialogue, just one of many issues being that the same action (wearing revealing clothing) is considered just fine and don't you dare even question it you victim-blamer when committed by one gender, and sexual assault when committed by another. Oh and by the way, "all we want is to be treated equally'.

I'm not seeing apples for apples in your argument - it'd be valid to say the popular female opinion (whatever that is) is hypocritical if their stance was that men couldn't wear revealing clothing, but you're making the point that they're hypocrites for wearing provocative clothing and then being upset if they're treated indecently or worse. You could argue their exhibitionism is inappropriate, but that is subjective, shaped by culture, and entirely different to saying their position is hypocritical.

If I'm rich and wave my money as I walk down the street, it's still a crime to rob me even if you feel as though I was asking for it.


> I'm not seeing apples for apples in your argument - it'd be valid to say the popular female opinion (whatever that is) is hypocritical if their stance was that men couldn't wear revealing clothing

Isn't that their stance? If a man went out in public with 1/3 of the flesh of his penis showing, everyone would be cool with it? The reality is, he'd be arrested for indecent exposure. In this case, the equivalent of what women do on a regular basis is quite literally illegal.

> but you're making the point that they're hypocrites for wearing provocative clothing and then being upset if they're treated indecently or worse

No I'm not.

> If I'm rich and wave my money as I walk down the street, it's still a crime to rob me even if you feel as though I was asking for it.

I'm not saying otherwise.

This conversation is actually not a terrible example of my overall point.


It's not very obvious, to me at least, what your overall point is.

> If a man went out in public with 1/3 of the flesh of his penis showing, everyone would be cool with it? The reality is, he'd be arrested for indecent exposure. In this case, the equivalent of what women do on a regular basis is quite literally illegal.

I don't understand the specific double standard you're alluding to. I don't think a woman with exposed genitals is going to escape an indecent exposure charge.

Upthread you liken men wearing revealing clothing to sexual assault, but I've never seen this as a talking point in the gender discourse.


Is that relevant? If only 1% of Reddit traffic is destroying civilization, then civilization is still being destroyed.


> There's just enough excellent articles to keep the advertisers amused, but the fact has to be faced that 99% of the traffic is young men looking at scantily dressed young women.

Then again, these young men probably aren't paying attention to the ads (unless maybe they're porn ads!), so those subreddits aren't a target audience.


If I may ask, how do you jump from posting volume:

> 100 to 1000 times the sheer posting volume

to traffic:

> 99% of the traffic

?

Also, just to clarify - do you consider porn illegitimate or bad? Your tone suggests it, but I may have misread, so I just want to make sure. If so, why?


Porn is not a banned word.


That's legitimately hilarious. Thanks for sharing this tidbit.


What tidbit? His totally made up statistics?

Because I've noticed that the amount of posting on mainstream subreddits is maybe 5-7 gazillion times more than the porn ones, so he's full of shit.


Thanks for mentioning statistics and then using "gazillion" in your post; that really makes it seem more valid.


So we've tried:

- Real identities (Facebook comments)

- Voting and self moderation (Reddit, HN, etc.)

- Strong moderation (Reddit, HN)

They all result in toxic comments, trolling, an echo chamber, or worse, a complete lack of participation. There's no real solution to this problem. However, if you create consequences or a cost to commenting, you'd eliminate toxic comments and trolling at the cost of less participation and an echo chamber (though you could argue that not all participation is equal and you're mainly removing poor participants).

There's no perfect way to do this because even if you made a subscription necessary, for instance, you may just create an echo chamber. As part of the solution you'd need to prevent the creation of new accounts to circumvent any punishment received.

I'd say the most straightforward solution is that you have a forum and you get exactly one account: physical mail is sent to your house in order to verify it. Then regular moderation practices would be taken seriously, as there's no way to create another account. The community would be left with those who care enough not to be banned. The problem is that the moderators themselves may be corrupt or wrong.
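
A minimal sketch of what that one-account-per-address flow could look like (Python; all names hypothetical, and a real system would need proper address normalization plus an actual mailing pipeline):

  import uuid

  class MailVerifiedRegistry:
      """One account per verified postal address; bans stick to the address."""

      def __init__(self):
          self.account_by_address = {}   # normalized address -> account token
          self.banned_addresses = set()

      def request_account(self, postal_address):
          addr = " ".join(postal_address.lower().split())  # crude normalization
          if addr in self.banned_addresses:
              raise PermissionError("banned address: no new accounts")
          if addr in self.account_by_address:
              raise ValueError("an account already exists for this address")
          token = uuid.uuid4().hex[:8]   # would be printed on a mailed postcard
          self.account_by_address[addr] = token
          return token

      def ban(self, postal_address):
          addr = " ".join(postal_address.lower().split())
          self.banned_addresses.add(addr)
          self.account_by_address.pop(addr, None)  # the ban outlives the account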

Thoughts?


We really just need better public education, as well as clearer separation between information and entertainment.

Facebook/Reddit/Twitter/etc promote a regression towards the mean understanding... viral content reflects and amplifies the "average user's" sentiment and values. Acceptable for entertainment, but inherently prone to misinformation, propaganda, and demagoguery. Education requires valuing subject matter experts. Opinions which may not be widely held, or even popular, but supported by people who are vetted as being knowledgeable on the subject.

Traditional media could be regulated because they were largely centralized, but centralization also creates an establishment that regulates counter-culture ideas. In contrast, the internet is anarchic. Online anonymity impairs delegation of trust... any idea can be published, so every individual must rationally evaluate what they consume. Attempting to regulate away undesirable behavior on the anarchic internet is just cat-herding. At best, you create a walled garden for a select few.

As I see it, the paths forward are either:

* public education, emphasizing civics/rationality, to support distributed self-regulation

* centralizing with state regulations

I want the former but the latter seems most likely, considering how the underlying networks are consolidating, and increasing awareness of how amplified public ignorance creates political/economic instability that hurts those with power.


> We really just need better public education

That's great but it's a slow cultural change. Well-educated countries can still fall into extremism, which is driven by emotional and atavistic factors as well as economic and political ones, and can't simply be dispelled with doses of Rationality (tm). Arguably, the failure of rational utilitarianism to engage with this aspect of humanity and to simply dismiss everything that can't be quantified as irrationality exacerbates the growth of toxicity.

On a more practical level, the US is a country where part of the population rejects the theory of evolution on religious grounds, historical narratives are intensely contested, and political life is objectively and increasingly polarized. Educational change happens over generational timescales, and if it were as simple as making it available, all our social ills would have been dispelled long ago.

Of course education and critical thinking skills are essential for a healthy social body, but when I see people saying 'we just need better education' I feel like I'm on a bus that's headed towards a cliff edge and well-meaning people are suggesting that the solution to this is better driving lessons.


My intent was to describe the system (social media is a hyper-efficient anarchic consensus-based information exchange), and a root-cause for undesirable output...

There may not be a solution that preserves the open internet, if this system is fundamentally incompatible with social realities.


"We really just need better public education"

I agree 100% as long as you put me, or people of my worldview, in charge of the curriculum and personnel.


Yes. In today's age of near-universal literacy, "uneducated" is just a euphemism for views disliked by the side in control of the education apparatus.

While thought-policing media, schools, churches, and any other possible venue of "indoctrination" may "work" to a superficial extent, it mostly just completely destroys the credibility of your authority and leads to stunning implosion and destabilization. See: the Soviet Union.

Like it or not, you can't just beat "undesirable" views out of people, either with schools or social media moderation.


Education is not indoctrination.

Education teaches critical thinking, science, history, numerical literacy, and the general skill and toolset to differentiate fact from falsehood, rhetoric, and manipulation-- regardless of where it is coming from.

Education is an immune system for the mind. Generally it is the manipulators who don't like an educated populace, because it decreases their power. They tend to be the ones labeling education as "indoctrination".


>Education is not indoctrination.

Right, in principle, it's agreed that "education" is "good knowledge" and "indoctrination" is "bad knowledge" and/or "fake news".

As long as you think that the inoculations being administered in the school system are valid, you'll call it education. Once you stop thinking that, you'll call it indoctrination.

So you're not really arguing anything. Every side calls training that biases you toward their preferred narrative "education" and training that biases in the opposite way "indoctrination". Is your point that "sometimes people disagree"?


Do you understand the difference between knowledge and critical thinking?


> Education is not indoctrination.

This tends to last until someone realizes that an educational system is a wonderful indoctrination tool to advance their goals. Often enough this is followed by enacting that.

This isn't new. "Give me the child for the first seven years and I will give you the man."


Being able to distinguish fact from fiction is not a skill that is near universal, and it should be. Seems like a straw man to attack "thought policing" and "indoctrination." We're talking about critical thinking and logical reasoning.


This argument is completely disingenuous. Your average person is capable of critical thinking and logical reasoning -- those who aren't are generally wards under the care of another person. Normal people just think logically and critically in reference to local optima, and that's not something that we can or should try to program out of them.

That quality is also known as adaptability and it's crucial to successful survival and prosperity, for exactly the same reason that it's useful in mathematics: global optima are generally difficult to deduce, if they can be conclusively and authoritatively determined at all.

Saying Side X is "not being logical" or "can't think critically" is virtually always just a cop-out. It says you either a) don't understand or b) don't want to admit the validity of some of their concerns.

Most of the time when the other side's argument is understood, the disagreements are a matter of priority and/or credibility, not nonsensical thinking. And those priorities are usually determined intrinsically; values as such can't really be programmed or taught. They're the result of the years of experience each individual has endured in the real world.

A good example of this is that many engineers are known for a just-the-facts, no-frills approach. This is because engineers tend to prioritize facts and correctness over aesthetic and emotional value. Other people who don't do this aren't objectively wrong -- they just put different weights on the considerations, leading them to different conclusions.

Another example is outlet credibility. Your average Fox News viewer may believe that MSNBC is propaganda secretly dictated by the shadowy figures in the background, and vice versa. If you believe this, the logical conclusion is to dismiss or at least discount the perspective of the propagandist.

You cannot "prove" that one side is propaganda and the other side isn't, because it is impossible to definitely deduce the intentions and motives of other people. Reports that say reports from MSNBC were more frequently errant are of no value because you can just say "Oh yeah, says who? The same shadowy figures?" to that.

It is important to understand that humans hold a variety of totally non-falsifiable beliefs -- things that cannot be definitively proven one way or the other, even if you try, like the state of mind of the speakers we're around. These have to be approached from the subterranean to be understood, let alone addressed.

All we can do is understand that our own perspective is not the default or de-facto correct one, and that other people are entitled to their own assumptions and unfalsifiable opinions just as we are. They're entitled to their own credibility heuristics and decisions about who is worth trusting. People are free to make their own decisions and conclusions, whether we agree with them or not.

Understanding that is critical to learning that it's OK to disagree with people, without having to pretend that they're insane just to preserve your own ego and self-worth.


> All we can do is understand that our own perspective is not the default or de-facto correct one, and that other people are entitled to their own assumptions and unfalsifiable opinions just as we are. They're entitled to their own credibility heuristics and decisions about who is worth trusting. People are free to make their own decisions and conclusions, whether we agree with them or not.

For opinions, perhaps. There are also people who reject facts. I don’t consider rejection of evolution or young-earth views as legitimate. Thus, those who cling to these views are empirically wrong.


>There are also people who reject facts.

Most people don't reject facts, they reject certain interpretations of facts.

For example, some people believed epilepsy came from evil spirits. They didn't deny that the person was shaking on the ground. They just had a different explanation for it than we do now.


Empirically? One suspects a different adverb would have been more correct in that sentence.


Do you contend that human beings have not empirically measured the spherical, or roughly spherical shape of the Earth?


When you build a small house, you don't account for the curvature of the earth. Same for when you walk down the street.

When you build a runway for a plane or a long bridge, you do.

A model is not necessarily useful in all contexts. People still use the flat earth model in useful ways because it's simpler to assume the earth is flat in some situations. Of course, once you go beyond the capabilities of the flat earth model your numbers will wildly diverge into the realm of useless while the round or spherical models provide useful numbers for longer.


If parent had been talking about chemistry or physiology or any subject that can be explored via controlled experimentation, I wouldn't have complained. Instead the topics were geological and evolutionary history, which seem very much not "empirical". Not that I suspect that those sciences are wrong in any sense, but words have meanings.


I misread GP as saying "flat Earth", and not "young Earth". My apologies. I would agree that even if we can point to things like nylonase or the speed of light coupled with known distances to stars, those are deduced facts. Whereas astronauts have empirically observed the spherical nature of earth.


> A good example of this is that many engineers are known for a just-the-facts, no-frills approach. This is because engineers tend to prioritize facts and correctness over aesthetic and emotional value.

And yet engineers are over-represented (compared to people with other degrees) amongst Creationists and conspiracy theorists and, I would guess, terrorists. I think engineers value simplicity and direct causation more than facts or correctness.


No, it's not disingenuous at all. It is not a cop-out to say that people who believe in conspiracy theories, people who don't understand facts, people who are highly opinionated about things they don't understand, etc. are not behaving logically. They can have valid concerns and still be behaving irrationally. You clearly think these are mutually exclusive but they aren't.

Appreciate the multiple snide attacks, though.


> people who believe in conspiracy theories

Is there not such a thing as conspiracy fact? Aren't some conspiracies, in fact, real? It seems both sides of the political aisle have pet conspiracy theories these days, so it's really hard for this to hold water anymore.

> people who don't understand facts

As another commenter said, people will usually agree on the clear and present facts, e.g., Donald Trump won the presidency. Where you'll find more disagreement is on rationale: either he won because he gave a voice to the discontented American working class, or he won because he worked in cahoots with Vladimir Putin to subvert American democracy.

People don't refuse to acknowledge the obvious state of affairs. They have different interpretations, based on different values and credibility heuristics, of the likely impetus for that state of affairs.

>people who are highly opinionated about things they don't understand, etc.

aka virtually everyone. How many of us know enough to hold our own with the experts in something that we're "highly opinionated" on? If we can in anything, it's very narrow. Are all of our other opinions invalid now? Humans use credibility heuristics to try to determine who is right about something, and then they follow based on that.

> are not behaving logically

I dunno, it sounds logical to me, at least in the practical sense. If we pretend we live in a world of infinite resources and time, you might be right, but considering the constraints of reality, the logical approach seems to be to have and express opinions in the moment according to one's best judgment, since everyone else is going to be doing that too. Just gotta try not to be too haughty about it.

> They can have valid concerns and still be behaving irrationally. You clearly think these are mutually exclusive but they aren't.

I agree someone can have a valid concern and also behave irrationally. I don't agree this is what you started out saying, though.

>Appreciate the multiple snide attacks, though.

No offense intended. The edit deadline has passed, but I didn't think I'd put any such things in. My apologies if you felt I was being condescending or passive-aggressive.


I'd like your opinion about this particular subreddit:

https://www.reddit.com/r/CBTS_Stream/

These 20,000-odd people unequivocally lack the type of critical thinking skills GP is referring to. I find it hard to believe that they are all under professional care. These people are straight out of The Da Vinci Code or National Treasure. They truly believe that they have uncovered a massive conspiracy to overthrow the current American government, and they are organizing to stop it.

Many subreddits choose a sort of mascot that defines their subredditors. For instance, people who subscribe to the tongue-in-cheek /r/evilbuildings are "6509 villains plotting", where they post pictures of buildings that have a nefarious appearance, with no conspiracy in the comments. /r/CBTS_Stream has "21,333 Operators", as in mercenaries/militiamen. These people are rabid Trump supporters, seem to have a strong fundamentalist Christian bent, and appear to be extremely gullible and susceptible to any sort of theory that involves revenge upon the previous administration. They even have their own prophet, "Q".

Everything from occult references, to nazis, to big pharma killing off holistic doctors, to arranging Trump's tweets into an 11x11 grid and then playing word search to reveal a secret message. These people swear that Donald Trump's televised rallies are chock full of encoded messages and symbolism, both in what Trump is saying and in the clothes/posters of supporters in the background. These people buy toothpaste from Alex Jones because it doesn't contain fluoride. These people believe that all mainstream American history since the American Civil War is a lie created by the perpetrators of this current hoax they have uncovered. They also believe that Trump has already secretly met with Kim Jong Un, and will soon unveil a world-saving peace treaty, and that it will "make the libs' heads explode".

The truly sad part of this is that a lot of these people are also members of other subreddits dedicated to people who have escaped Mormonism, or Jehovah's Witnesses, or similar groups. So these people have already thrown off the shackles of psychological warfare once. But they believe now that they are "woke", and seem completely beyond talking down.

Good luck explaining to these people that they are being radicalized by Russians, or whoever. Good luck getting any of these people to not believe that any censorship is obvious proof that the sleuths are hot on the case, and that the global elite are silencing them.


> This argument is completely disingenuous. Your average person is capable of critical thinking and logical reasoning -- those who aren't are generally wards under the care of another person.

Oh please, you think the average person has sufficient critical thinking skills to read the newspaper and pick out the parts that are "stretching the truth", use specious reasoning or various other logical fallacies, etc? You must roll with a different crew than I.

> Your average Fox News viewer may believe that MSNBC is propaganda secretly dictated by the shadowy figures in the background, and vice versa.

If they had critical thinking skills, wouldn't they be able to get a pretty decent handle on the degree to which they are propagandists?

It sounds to me like what you're saying is, most things within this realm are not knowable, except for the parts that are. The world is complex and confusing, but I don't think it's that confusing.


I understand and agree with some points of your criticism, but I disagree with the part that we can't beat undesirable views out of people. Well, we can't do it completely, but it's not a binary thing, and I believe we really can do a lot to educate people. And not a political education, but teaching them about their own biases. Teaching them to be critical, to not just ignore evidence when it goes against their views, to be fair to others, etc.

I don't know, I don't think we have actively tried yet.


> Like it or not, you can't just beat "undesirable" views out of people, either with schools or social media moderation.

You absolutely can. An example: https://www.theguardian.com/commentisfree/belief/2012/sep/22...


That is indeed an inspirational example.


I don't mean it to be inspirational, only to indicate that persistent propaganda and organized information warfare can indeed drive ideas fully out of the population.


Broken clocks are right twice a day.

Religion sucks, but Soviet Communism's anti-religious nature doesn't excuse its foibles.


We've already got people with the correct worldview in charge of curriculum and personnel. The problem is that there are still dumb-dumbs that sometimes think there are valid alternatives to our worldview. That's why we need better education.


We should incentivize them to have the correct worldview. E.g. if you’re a CEO you should make sure that your workforce holds the correct opinions by reminding them that they can be fired on the grounds of being a bad “cultural fit”.


I can't tell if this post is ironic or not


"correct worldview".

People like you scare me.


Sorry--it is satire, but the comment represents literally how it comes across to me when I see claims that "better education" will effectively bring about less toxic discussions. The implication is clear: If only people were rational and educated, like me, they wouldn't think the way they do, and then we would all agree.


How about this: first teach advanced critical thinking skills, so people have the skills (if not the will, that's another problem) necessary to see through propaganda from both sides.

I have a feeling a lot of people would have issues with this approach though.


How do you teach critical thinking?


By giving people things to think critically about, and ensuring that they respond in an appropriately thoughtful manner.

Of course this doesn't work when politics is taboo.


I think emotional maturity is more important than critical thinking. People in our culture have this life-or-death anxiety over being right, especially in social groups. You see it all the time on social media. Person 1 makes a throwaway facebook post which contains some kind of factual error. Person 2 points this out. Person 1 feels personally attacked and becomes emotionally invested in "winning." The more pushback person 1 gets, the more they stand their ground and will scorch the earth to save face. Where is all this intellectual insecurity coming from?


It comes from the fact that when you say anything incorrect online, there's an infinite number of people who will call you out on it. Your intellect is always on trial. You have to convince a jury of the entire planet that your opinion is valid.

Take the same comment or opinion and air it among three friends in person (or a very tight social network). You only need to convince two or three people who likely trust and respect you already, and who are not inclined to want to spend an infinite number of hours debating such trivia across all time zones.


Why not just engage in conversations on the principle of charity and good faith? There's also the concept of steelmanning other people's arguments to help extend good faith.

Not every conversation has to become a burned bridges and salt the earth affair. If the other person is just trying to "win" then disengage from the argument. If the other person is arguing with you in good faith then maybe you're wrong or have something to learn from a new perspective.


But... an infinite number of people aren't reading every page on the web, all the time. Even on Reddit, you're only really interacting with the limited subset of users who choose to comment, out of the limited subset who read a thread - which is still possibly bigger than a circle of friends, but smaller than any significant fraction of the human population.

There is the perception that "the entire world" is watching you on the web, criticizing your every move, but that's not a fact.


I guess I'm just not sure what the curriculum would look like. Is there something you could point to as an example of a course doing this well?


I would start with someone who is skilled in both critical thinking and education, or am I misunderstanding the question?

Logical fallacies would be one place to start, you can see examples of this all day long on reddit for example.


Well, my understanding is that "critical thinking" is already very commonly considered to be part of various course curricula. If it's not being taught, then we'd need to do something differently.

I've been hearing claims of the need to teach "critical thinking" since I was in high school. To me it always came across as one of those things that can't easily be taught, particularly in a traditional academic setting. Everyone agrees it should be taught, but if there were a clear way of doing it, we would.


There's plenty of material out there that isn't remotely touched upon in a traditional education.

https://en.wikipedia.org/wiki/Classical_logic

https://plato.stanford.edu/entries/logic-classical/

https://distancelearning.ubc.ca/courses-and-programs/distanc...

> If it's not being taught, then we'd need to do something differently.

When reading the news, forums, or overhearing conversations, do you not regularly encounter people who obviously have no significant skills in critical thinking?


> There's plenty of material out there that isn't remotely touched upon in a traditional education.

Right. I took a logic class for my undergraduate degree. It's actually the source of the "modus" in my username. I guess to me that's a far cry from what people refer to as "critical thinking." Being able to identify textbook logical fallacies isn't the same thing as rationally and objectively forming a judgment about something.

It's certainly a helpful part, but I doubt most would remember it any better than geometry or 1800s history.

> When reading the news, forums, or overhearing conversations, do you not regularly encounter people who obviously have no significant skills in critical thinking?

I do, but it's rarely a clear-cut example of misunderstanding a logical fallacy. More often than not, it's the blind acceptance of supporting evidence while rejecting opposing evidence. Or assigning way too much value to a poorly-sourced news story. Or approaching the issue with a different worldview / values. Or any number of other biases that affect decision-making.

To be clear, though: I agree it's clearly not being taught. I'm just not convinced you can take a bunch of high schoolers, put them in a room, and after X weeks of doing something, they'll be critical thinkers. I agree you could probably teach them logical fallacies well enough to pass a test on them, but that's not the same thing.


Do you think we've reached the absolute apex of having a well-informed citizenry?

If not, if critical thinking doesn't work, what could we do to improve this situation?


> Do you think we've reached the absolute apex of having a well-informed citizenry?

Of course not.

> If not, if critical thinking doesn't work, what could we do to improve this situation?

I'm not sure "well-informed" and "critical thinking" are even relevant to each other, but putting that aside, I genuinely don't know. That's why I asked how you teach critical thinking.

It's possible people are bound to retreat to their biases and it's a futile effort. I'm just not convinced attempting to teach people "critical thinking" will work, because it hasn't.


> because it hasn't.

Implying it's been tried, and failed.

Where has widespread teaching of critical thinking been tried?


I've seen it in numerous course syllabi and mandates. I'm not sure how to cite that, though. Here are a few examples where it is assumed the existing education system / teachers claim to be teaching critical thinking.

> Public school teachers and administrators will tell you that one of the mandates of public education is to develop critical thinking skills in students. They believe that curricula are designed, at least in part, with this goal in mind. [1]

> Common Core, the federal curriculum guidelines adopted by the vast majority of states, describes itself as “developing the critical-thinking, problem-solving, and analytical skills students will need to be successful.” [2]

> Many teachers say they strive to teach their students to be critical thinkers. They even pride themselves on it; after all, who wants children to just take in knowledge passively? [3]

Are you willing to acknowledge educators / curricula commonly claim to teach critical thinking? To me it's always come across as something claimed to be taught pretty much everywhere. Yet we both seem to agree it's not working.

We could try teaching critical thinking differently and potentially meet some success, but that doesn't change how it's been claimed to have been taught for some time with poor results.

[1] http://argumentninja.com/public-schools-were-never-designed-...

[2] http://www.newsweek.com/youre-100-percent-wrong-about-critic...

[3] http://theconversation.com/lets-stop-trying-to-teach-student...


> Here are a few examples where it is assumed the existing education system / teachers claim to be teaching critical thinking.

> Are you willing to acknowledge educators / curricula commonly claim to teach critical thinking?

I'm not in denial of some sort, ffs; I'm frustrated at watching our society come apart at the seams because the vast majority of the population seems to be incapable of intelligently reading a newspaper article, and will fall for seemingly any trick in the book.

Of the examples of "critical thinking education" listed above, do any remotely approach the critical thinking specific education I'm talking about here?: https://news.ycombinator.com/item?id=16572861

People are absolutely inundated with propaganda nowadays, like no other time in history, with social media being the most powerful weapon by far. We are graduating our children and sending them intellectually defenseless into this new world, I don't know if the average human mind can be brought to a level sufficient to cope with the propaganda created by the world class experts in persuasion who are working for a variety of deep pocketed entities, but at least we could try.


> Of the examples of "critical thinking education" listed above, do any remotely approach the critical thinking specific education I'm talking about here?

Well no, but my claim wasn't that your suggestion has been tried. It's that other people have been claiming they've been teaching critical thinking for some time, and it's not working.

I agree it's a problem--I just don't think a class in logic will do it. I'm not sure it's teachable at all, and even if it is, I'm not sure those same skills won't be ignored the moment the argument questions one's identity or becomes emotional.

Is it worth trying? It's easy for me to say "sure," but it's not on me to implement, and I'm certainly not sure how to assess whether it'd be successful.


Judging solely on the number of HN commentators who are absolutely incapable of detecting irony or satire, and indeed who may feel those are entirely out of place on HN, the average citizen isn't capable of considering two mutually contradictory propositions at the same time, let alone becoming "well-informed". The various exhortations in this thread to "just teach them!" bespeak a similar innocence. We have a rather large number of trained professionals engaged in the teaching already, so such pleas should at the very least be accompanied by considerations of why those efforts have not yet sufficed.


Dialectics are hard, man.

But to be fair, the longer we get into the current era of politics, the harder it is to distinguish between earnestness and satire. Young people who watch the movie Network today don't see Howard Beale as satirical, because there are too many people like him today who are deadly serious.


Critical thinking, specifically, is taught on a widespread basis?

What country are you writing from, and could you give some specific examples?


Most of our high schools despair of teaching mathematics to the level of algebra, to most of their students. Many haven't yet despaired of conveying literacy to those same students, but the outcome is by no means certain. I would consider both of those prerequisites to "critical thinking", no matter what particular idiosyncratic definition of that phrase you might prefer. Therefore I suggest that we aim lower, for a sort of animal suspicion that comes naturally to all humans. The result, from the perspective of political harmony, will be the same: hundreds of millions of critical thinkers would not magically all arrive at the same conclusions on any set of topics. In a perfect world of critical education, not only would you still disagree with most people's conclusions, but you would also still disagree with how they arrived at those conclusions.


Philosophy


You’d think that, but Wittgenstein minted his career on calling philosophers out for not thinking critically enough in debates.


Even if you hadn't misinterpreted the GP, this crosses into personal attack, which is not allowed here. Please read https://news.ycombinator.com/newsguidelines.html and stick to those rules when commenting on HN.


I think moduspol was being sarcastic.


> .../Reddit/... promote a regression towards the mean understanding... viral content reflects and amplifies the "average user's" sentiment and values

Comparing the experience in a niche subreddit vs. a default subreddit, it's clear that the real problem is allowing casuals in. If people have to go out of their way to participate, you wind up with only the ones who care to do so. And they have, in their reputation, something they don't want to lose.

But if people are allowed to participate by default, you get the enormous masses of people who, collectively, by virtue of their shifting roster, are immune to moderation. And you get people who set out that morning to share as many opinions as possible, instead of the people who set out to participate in that community exclusively.


I’ve witnessed this in well-meaning subreddits. Lots of arcane rules for posting. No-warning bans and removals mean I don’t try again.

On the subreddits I manage, I allow broad participation with no barriers. But they aren’t popular enough to have enough trolling to break me down and add hoops.


The clear separation between information and entertainment is BRILLIANTLY said. Seems straightforward, but it's really important. Why is Twitter, a cesspool of memes, rap, sports, and Kardashians mostly used by kids under 25, the primary means of breaking stories? Why is so much news on there? Anytime profitability, entertainment, short attention spans, and news all get entwined, the resulting outcome is clear.

Facebook, Reddit, Twitter, and Snapchat all started as entertainment and switched to clickbait reaction/outrage-culture news, and normal news has turned into a mess as well.

I say regulation wiping the trending news feed off Facebook, Twitter, and Snapchat is a start. Perhaps any group that meets set-in-stone criteria, regardless of political affiliation, gets some federal funding to make up for the cost of real journalism. That journalism must be fact-checked, and we hold them accountable.

People are desensitized to everything now. When shocking news breaks every 15 minutes and the world is so connected, we become numb to so much, and that is incredibly dangerous.


I wish this were true, but the amount of fake news spread by trusted news networks, such as newspapers amplifying Russian propaganda bots, implies that education isn't sufficient. If a professional cannot discriminate between truth and lies on Twitter, how can an average person?


The economic incentives are misaligned. Fake news helps with page views and other KPIs. If there were an actual enforceable cost associated with misrepresenting the truth, we'd see a slowdown in fake news.

The restoration of the Fairness Doctrine would also help stymie some of the biggest promulgators like Fox News *

* you can search for "fox news viewers misinformed" and encounter studies and results like http://publicmind.fdu.edu/2011/knowless/


The analogy of food nicely shows that some sort of race-to-the-bottom does not necessarily occur: there’s still organic or other high quality fresh food even though McDonald’s has been around for a while.

Similarly, there are still excellent news sources. The Economist is often cited in these discussions, and the New York Times is also vigilant in their reporting and the correction of errors when they occur[0].

What we’ve seen is a breakdown in trust of institutions, largely disconnected from actual mistakes on their part. People will quickly demand proof and invoke conspiracy theories when, for example, the three-letter agencies accuse Russia of interfering in elections. They have learned to invoke the “appeal to authority fallacy” too well, without offering an alternative. Because you cannot evaluate a news story without in some way deferring to the reputation of the publisher.


The breakdown in trust of news institutions has many sources, so correcting factual faults only addresses one part. Omission and selective use of facts, misleading context, and misleading language seem to carry a higher penalty for trust in today's environment, where it is very easy to provide the original source whenever a slightly biased news article is published. A factual error is very binary, true or false, while omission and selective use of facts gives room for much more outrage and distrust of otherwise well-established news institutions.

The Economist and the New York Times may have good practices in regard to errors, but there is a clear difference between their reporting and the findings of independent fact-checking sites. To make matters worse, even those examples of "excellent" newspapers tend to have a clear and open political alignment. With increased political polarization, this then results in a rather natural distrust of news institutions, even those that are vigilant in correcting errors after they have occurred.


I disagree with the fast food analogy because that has obvious and direct personal costs, while infotainment negatives are subtle and externalized.

Re: trust and appeal to authority; your example made me realize people are drawn to grand conspiracies because unverifiable theories are infallible... luring in people unfamiliar with probabilistic reasoning and consilience.


Organic food is not higher quality. Organic just means they cannot use some arbitrary list of farming practices (some good, some bad).

Sure, McDonald's is not good, but there is also plenty of organic food that is equally bad (or worse).


> Organic food is not higher quality.

That depends. Sometimes European organic veg is preferable to Chinese industrially farmed veg when your local supermarket offers only those two choices. This is definitely true of garlic: Chinese garlic tends to be notoriously bitter and lack juice, but Spanish organic garlic is very sweet, pungent, and juicy. Now, the fact that the European organic choice was made according to the limitations of organic farming may well be irrelevant to its goodness, but there is a strong enough correlation with quality to guide consumers, and it was likely chosen by your supermarket as an alternative to the Chinese imported product precisely because they wanted to cover the organic segment.


That is not a property of organic, though. Non-organic farmers are able to produce at least as high a quality as organic farmers (nothing an organic farmer does is prohibited for the non-organic farmer, while there are a number of things the conventional farmer can do to increase quality that are prohibited to organic farmers). Of course, just because they can doesn't mean they do.


At the same time, I feel like we are seeing a breakdown of trust in institutions because of actual mistakes that, in years past, would have gone unnoticed.

Although this distrust does have negative impacts to our society, I view this distrust as an overall good thing.


>We really just need better public education, as well as clearer separation between information and entertainment.

nope, this won't work.

People will always need a place to voice their vile comments in a cowardly manner


[flagged]


Taxes don't pay for private education, let the free market decide how to run those schools.


Reddit does not actually have strong moderation.

Forum communities with actual strong moderation, e.g. Something Awful and ResetEra, have near-zero issues with hate speech, Nazis, and other things that reddit has let fester.


I'm a regular on Something Awful.

One of the major factors that keeps it relatively clean is that user registration costs $10. That's a strong financial disincentive against trolling, bots, etc.

I really believe that successful online communities of the future will have paid signups.


Also, SA's userbase consists largely of older, tech-savvy people. It's been around for nearly 20 years now and I bet their registration peak was ~2004 (:files:). So it's pretty likely that the median age of a poster there is ~35-40.

I'd totally pay $10 for a less shitty reddit clone.


The Something Awful forums provided my first real exposure to 'internet culture'. I find myself reading their forums more and more often lately because discussions there seem less likely to devolve into an echo chamber. An account there is well worth the registration cost imo.


I think part of it is the lack of voting and the presence of easily-identifiable avatars: participants have an incentive to post things that generate maximum engagement and discussion with other specific users as opposed to maximum instantaneous agreement. In this regard it mirrors real-world social interaction much more closely than reddit/fb/twitter.


The other factor is not being shy about banning people. Heavy moderation.


I believe you are correct. If I pay for something that suggests I am the customer and not the product.


Haha if you pay for SA it only suggests that you are a sucker who paid 10 bucks to post on SA for a couple of days before getting banned. They are very ban happy there.


I have had an SA account for years and never been banned. I don't appreciate being called a sucker and I think the price is fair for what I get.

When I use Facebook I am the product, not the customer. This means the platform is optimized to put my eyeballs on advertisements or provide data about me to marketers. It is not optimized to provide high quality conversation.


Good for you but it's hard to take you remotely seriously when you try to relate SA and high quality conversation.


Are you trying to be ironic on purpose?


SA is basically 4chan except you have to pay 10 bucks to join. I don't really see the joke here.


The "joke" is that you complain about the lack of high quality conversation on SA, and yet your posting style here is extremely shitty.

No wonder you got banned so quickly there...


What, am I supposed to list my reasons with citations (of course) to justify my opinion about some historical relic of a site? And I didn't get banned from SA; I just know a lot of people who did.


If you got banned after a couple of days that means you broke the rules in a big way.


Well, that's because reddit is simply a platform, right? reddit undeniably has tools to allow for strong moderation. see /r/AskHistorians for a perfect example of this.


Strong moderation would mean that moderation is not optional, always present and always consistent.

Having a toxic community that internally declines to moderate is not strong moderation.


"Tools to allow for" is a hell of a phrase. Yes, moderation is possible on reddit but the idea that the tooling for such is in any way "good" is in error. It's enough to make moderation possible with sufficient application of effort. The fact that it's so rare is a strong indication of how useful those tools are.


Reddit tries to be just a platform, but it fails in two ways:

1) Namespace of subreddits. The subreddit which snags the most obvious name for a topic has a much better chance of becoming canonical for that topic than competitors with worse names.

2) Cross-subreddit identity and supporting tooling. For example, I can easily search for all recent posts made by a particular user, but I can't easily search for all posts made by a particular user within one specific subreddit. This sort of thing promotes "cultural leakage" across subreddits and makes people think of all of Reddit as one community with one culture.

Related: see https://news.ycombinator.com/item?id=16573842 which summarizes a study that finds cultural leakage across subreddits.


We haven't really tried "real identities". We've come close.

Real identities would require verification, which sites like FB only do after the fact.

I fear that a huge repercussion of the election issues is that we will get there. A real ID may be required for you to post comments in all websites. And I'm not sure how I feel about that. Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know. But at what cost?

I really do wonder what the root of trolling is. What makes a "normal" person take an alter-ego/view/counter view, solely based on lack of face to face interaction...


Most of the problems of the Internet are mimicked in real life. That's why I don't think "real identities" would solve much, to be honest. Before trolling, for instance, there was the art of the prank (some harmless, and some downright mean -- just like trolls!), and the term "rabble rouser" seems to date back at least a couple hundred years. Some people in real life interact with others in rather toxic manners, in one form or another.

I mean, you get toxic behavior even on something like Nextdoor, where you pretty much know it's the neighbors across the street. Technology has just made things more convenient -- social media removes any curation, and technology also has made some means of harassment much easier to execute.

Myself, personally, I avoid social media that encourages toxic behavior (which usually means, smaller, special interest type sites; social circles that you know; etc.). This involves some degree of moderation or self-selection.

I don't see a good way around limited moderation for Reddit either, which is unfortunate in that it is hard to moderate something that size well (it's usually inconsistent and often arbitrary-ish).


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know.

That is not at all clear, especially as a percentage of comments. There are plenty of sociopaths who are perfectly willing to troll under their real name. Meanwhile, more reasonable people may quite rationally be worried that expressing any opinion on a controversial issue will lead to online mobs trying to get them fired from their jobs, kicked out of school, or otherwise ostracized. Not to mention scenarios like being a gay teenager in a very socially conservative environment.


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know.

I don't think it would. Lots of people post horribly objectionable material under their own names on a regular basis, depending on their level of financial security, peer group, and social milieu.

What makes a "normal" person take an alter-ego/view/counter view, solely based on lack of face to face interaction...

Some people are horrible, and are just as unpleasant in real life as they are online.


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know

Is this true? I've seen various studies saying that the opposite is actually true, but I can't currently find any of those studies. Does anyone have any sources?

EDIT: some sources, though I don't know the strength of their validity:

  - https://techcrunch.com/2012/07/29/surprisingly-good-evidence-that-real-name-policies-fail-to-improve-comments/

  - http://www.slate.com/blogs/future_tense/2014/07/17/google_plus_finally_ditches_its_ineffective_dangerous_real_name_policy.html


> A real ID may be required for you to post comments in all websites. [I]t would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know. But at what cost?

The cost is that it would prevent people from anonymously reporting abuses, which means that fear of retaliation will have a chilling effect. We've already seen this where people get death threats, houses burned down, etc, when they do things like report sexual assault.


People should be allowed to be anonymous but only explicitly so. The problem now is we have people lying about who they are and using multiple accounts and other forum moderation abuse to push viewpoints, often paid to do so.


>at what cost?

I can see it coming at the cost of making stolen identities worth even more.

As it is, websites completely suck at keeping our 'anonymous' identities secure. Our emails and passwords are hacked so commonly that there are websites dedicated to tracking it. Now you're just adding 'real identities' to the brokered data. Any real trolls will be able to use this data from the dark net while pretending to be you. Even worse, since real names are required, employers will look for you on the internet, see your "I'm an anti-gay right wing pro-Russian" profiles online, and say "you are not a cultural fit for our company". You will have to take the time and effort to clean up what is said about you. Since it's a real identity, it's not going to change, and they already have all the information they need on you.

Good luck in that terrible future.


A lot of this is not "trolling" but genuine hate speech. People will post hate speech under their own bylines on news sites, all day long.


There are two kinds of people who use real names on FB: those who never post offensive content anyway (the vast majority), and those who simply don’t care if offensive content is associated with their real identity (a small minority).

Those who post offensive content and do care, find it completely trivial to get a fake name. FB’s enforcement of their policy is basically nonexistent. This is the second-biggest group of users.


> A real ID may be required for you to post comments in all websites.

I honestly don't think this will ever be the case. Because there is a lot of profit from an unmoderated comment store. Also because it'd be monumentally hard to actually make all sites compliant.


> So we've tried

4chan also exists, and it works and is marvelously non-toxic once you realize that any insults hurled at you are impersonal, because they can only attack what you have just posted. Your attack surface is tiny, assuming basic information hygiene.


One of my biggest issues with reddit is post history. Oops, this person posted something we don't like 3 weeks ago; dismiss his post and attack him. Oops, this person posted somewhere we don't like; ban them from the 46 subreddits I moderate. Oops, this person posted today for the first time in a default sub they have been subscribed to for 3 years; ban him for brigading.


I imagine there are a few trade-offs when flicking the "post-history" switch either way.

Post history On: You get to follow a user's comment history. If you read an insightful comment by them, and want to read more, then having a history is nice. Users are people

Post history partially On: e.g. comments could decay to anon after some period; a sketch follows below. (Cue the sites that collect all post data and match it to users.) Slightly increases the cost of doing a deep dive on a user's history. Users are people, fading to ideas

Post history Off: Lowered attack surface for people who are actively trying to find an argument with you. Less pressure to have a persona consistent with the typical one of any particular community. Users are ideas
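
For the "partially On" option, here's a minimal sketch of comment decay (Python; the 30-day window and all names are assumptions, not anything an existing site does):

  from dataclasses import dataclass
  from datetime import datetime, timedelta

  DECAY_AFTER = timedelta(days=30)  # assumed window before authorship decays

  @dataclass
  class Comment:
      author: str
      body: str
      posted_at: datetime

  def display_author(comment, now):
      # Old comments detach from their author and read as anonymous.
      if now - comment.posted_at > DECAY_AFTER:
          return "[anonymous]"
      return comment.author

Note the decay is display-side only; the stored data still links author to comment, which is why the archive sites mentioned above would defeat it.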


There are more options.

For example you can make multiple identities easier. I make use of that here via firefox containers, it is quite nice.

You can make anonymous the default and track the user under the hood, allowing them to claim a post at a later point if they feel confident enough to attach their name to it.
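
A rough sketch of that claim-later model (Python; hypothetical names, assuming the server stores authorship privately and reveals it only when the author opts in):

  import secrets

  class AnonByDefaultBoard:
      def __init__(self):
          self.posts = {}       # post_id -> (body, hidden_author)
          self.claimed = set()  # post_ids whose authorship is public

      def post(self, body, author):
          post_id = secrets.token_hex(4)
          self.posts[post_id] = (body, author)  # tracked under the hood
          return post_id

      def claim(self, post_id, author):
          # The author later decides they're confident enough to sign it.
          if self.posts[post_id][1] == author:
              self.claimed.add(post_id)

      def display(self, post_id):
          body, author = self.posts[post_id]
          return body, (author if post_id in self.claimed else "[anonymous]")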

You could limit who can see the user identity. Or only display something more vague than a specific identity, a profile-lite behind a one-time ID.


The only real consistency I've seen between toxic communities and healthy ones is size. When you try to call 100k people a "community", it's a bad community where the toxic people have a louder voice than the good ones. If the community is a core group of <1000 participants and maybe 10,000 spectators (following the 90-9-1 rule of online communities), it can be good. When it grows larger than that, it needs to be split up or shut down.

The only way I see to save reddit is to set a maximum size for a subreddit, and shut down or otherwise isolate every subreddit that grows bigger than the maximum threshold.
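
To put rough numbers on the 90-9-1 split (just illustrative arithmetic, not anything reddit actually computes):

  def community_tiers(total_users):
      # 90-9-1 rule: lurkers / occasional contributors / core contributors
      return {
          "lurkers": round(total_users * 0.90),
          "occasional": round(total_users * 0.09),
          "core": round(total_users * 0.01),
      }

  # A 100k-subscriber subreddit implies ~1,000 core posters, already at
  # the ceiling proposed above for a healthy community.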


I believe the Something Awful forums involve a cost to participate, something like a one-time $10 fee that was designed to filter out low-content or negative participants. Might be an interesting case study of your hypothesis.


Unfortunately, while this works, it does not align with the goals of the companies running these forums-- to get as big as possible, to get the biggest valuation and/or slice of advertising dollars as they can. Even small monetary barriers to entry decrease participation substantially, and that's just not acceptable.

This is why I'm quite pessimistic about the current situation: the current in-vogue business model of surveillance/advertising capitalism demands massive size beyond what can be moderated, and thus makes this problem inevitable. And it only gets worse when the most toxic users are the most profitable, viz. Twitter's refusal to ban Donald, even though by any reasonable interpretation of their TOS, he breaks it every other day.


Worked, but not 100%. Many people were still willing to pay $10 over and over and over again, for whatever reason, to reregister and keep posting (badly) just to get banned again.


The number of people committed enough to being trolls that they will keep paying you is small enough that they don't dominate the discussion.


At least you get money out of the trolls, though. Just donate some of it to an anti-bullying campaign.


Metafilter does this as well. A one-time payment to post along with heavy moderation will remove/keep away most of the toxicity.


Ten bucks as ante for the entertainment?

Cheap, if you ask me.


See other comments about metafilter.


I think your suggestion would lead to an echo chamber. Take a look at metafilter and how samey they've gotten. One thing to remember is that even small barriers implicitly give mods much more power.


It just occurred to me that forum moderation is essentially politics, and the failure modes of web forums follow the failure modes of the political systems their moderation emulates.

The most common form is anarchic moderation (i.e. no moderation). When a forum's small, unwritten social rules keep things under control. However, as the forum grows, that breaks down, and things become more chaotic.

Metafilter essentially has an authoritarian moderation culture: the rules of discussion are both made and enforced (selectively or not) by one group upon another. There's a wall to keep outsiders out (the paywall). It avoids chaos, but its failure mode is ossification, devolution into an echo chamber, and eventually desertion, as public forum behavior comes to more-or-less rigidly reflect the opinions and preferences of the moderators.

Reddit's somewhere in the middle of the above two forms. There are anarchic hordes in the less moderated reaches, and little authoritarian kingdoms without walls to keep the hordes out.

I don't think anyone's tried real democracy in a forum (with elections, politics, checks and balances, and the time investment that all entails). It'd be interesting to see how such a forum would fare, and whether it could avoid chaos without becoming an echo chamber. Democracy isn't the public-opinion-style voting we see in forums today, but actual accountability of the moderators to the users.

Not claiming this is a novel insight, but it's new to me.


LambdaMOO tried switching to democracy around 1993, with "LambdaMOO Takes A New Direction". Basically, the mods ("wizards") instituted a petition system for technical changes and ceded all social decisions to a separate arbitration board. The resulting three and a half years of chaos is summarized in the last post on http://meatballwiki.org/wiki/LambdaMOO

In the end, the wizards published "LambdaMOO Takes Another Direction" and took back control, concluding:

> Over the course of the past three and a half years, it has become obvious that this [refraining from making social decisions] was an impossible ideal: The line between 'technical' and 'social' is not a clear one, and never can be.

A great deal has been written about this experiment (and a Web search will find much analysis, along with full text of LTAND and LTAND2), and there are a wide variety of perspectives on why LTAND failed, but one conclusion that nearly everybody seems to reach is that attempting to give a community democracy with no higher guidance is almost guaranteed to be a recipe for disaster.

Reflecting on all of this, I have no idea how the US founders managed to get something that worked at all, much less as well as it does. (And notice that it took them several tries to get it right.)


Freedom fighters rebelling against authoritarian regimes also start out with the goals of empowering the populace ("power to the people!"). They cast off their shackles, stage a coup and seize control...only to realize after some amount of chaos that people are incapable of governing themselves, and the former champion of freedom becomes the new dictator.


I completely agree. My premise is that you can only remove two of the four -- echo chamber, trolling, toxicity, community. If you accept this premise, the only logical conclusion is to remove trolling and toxicity. Without community the point is moot, after all, so really you're choosing two to remove from echo chamber, trolling, and toxicity.


Hmm, I think I would choose to remove toxicity and echo chamber. I don't mind a certain degree of bad-faith behavior if removing it would come at the cost of having discussions with people different from me.

I think a good solution might be to (1) pay mods and (2) make everything they do transparent. That would give you better mods to start out with, but would also give users the power to notice mod overreach before it spirals out of control.


What's the difference between trolling and toxicity? The words have seemed interchangeable in most contexts where I've heard them used.


Within a narrow community, a little bit of "echo chamber" is a fair price for the quality of commenting. I'd rather get downvoted a little and have to debate my minority opinion than have it drowned out by spam and bots here.


Well, your comment presupposes a trade between a little echo chamber and a lot of spam. Sure, but my comment is more about moving beyond pure spam to stuff like trolling - call it low-level bad-faith behavior. I'm okay with some of that in exchange for a community which is a bit more diverse.


An echo chamber is the only real option. Computers are unaware of concepts of good or evil, so the best a moderation algorithm can do is enforce a certain viewpoint. The question is: whose viewpoint? The consensus of the users, or the community leadership?


I don't think moderation should be done by algorithm; as soon as you give the task back to humans, you're much more capable of shades of gray and thoughtful, real moderation. Humans have been moderating public spaces for thousands of years; we're more than up to the task if a little bit of care is put into the implementation.


But humans are VERY very slow at this. Even our world-class moderation systems (the legal systems of many countries) are excruciatingly slow, often taking months or years for a single decision.


MetaFilter is more of a non-forum than an echo chamber. There's almost 0 discussion; people go there for the links, not the conversation.


Have you ever... looked at the comments on MeFi? Some posts get lengthy, complex discussions, on subjects related to the link at hand; some do not. There is also Ask MeFi, where you can ask questions and get answers from other users (ads shown against this section to visitors without an account used to be a major portion of MeFi’s revenue until Google did some stuff that lowered MeFi’s search ranking). And there’s MetaTalk, which is for talking about the site and has its fair share of “hey let’s hang out and talk” posts.

I mean, yeah, it’s structured mostly around links, and you can certainly use it as a source of Interesting Links. But there’s conversation and community there if you look around a little.


I was a member there for years. There's a community, which is highly normative, but they also resist implementing threading or commenting by reference precisely to keep the focus on submissions.


I'm still a member! For the record, we resist implementing threading or commenting by reference because it makes for an unreadable discursive shitshow when trying to follow busy, active discussions.

Staying relatively on topic is an unrelated aspiration and one that we mostly let flex a lot depending on the specific thread and context.


This is remarkable for the terse confidence with which it misapprehends the actual structure and content of the site.


There is a fundamental approach that has not been tried as yet

- Incentives

The real world figured this out long ago: if you find something that is truly useful and timely, you'll be willing to pay real money for it.

Google Answers (answers.google.com) tried an approach wherein a price can be put on a question, and any legitimate reply that answers it can claim the money. "Reputation" definitely still plays a role, but the system is flexible enough to let a newcomer attempt an answer and stake a claim to the funds.
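
A rough sketch of that bounty flow, with entirely hypothetical names (Google Answers is long gone, so this is not its real API):

  from dataclasses import dataclass

  @dataclass
  class Question:
      text: str
      bounty: float  # real money staked by the asker

  def accept_answer(q: Question, answerer: str):
      # Any legitimate answer, even from a newcomer, can claim the
      # funds once the asker accepts it; reputation is advisory,
      # not a gate.
      payout, q.bounty = q.bounty, 0.0
      return answerer, payout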

The real world has many of these aspects sorted out, like calling a plumber or a carpenter from your neighborhood to get your work done. The problem is that we have embarked on creating a "global" network (aka FB) without first adequately understanding how to create strong family and community networks.


Stack Overflow utilizes this really well: you get more privileges the more trusted you are.


skeptics stackexchange is a good example. Lots of people complain that too many comments are deleted, but it's an absolute lion's den of controversial issues. The mods do a very tough job and handle hate speech better than anywhere I've seen.


Reputation will mean different things to different people.


HN generally works very well. The echo chamber problem is due to allowing downvotes IMO. In my experience that simply leads to minority viewpoints being downvoted. Instead, downvotes should be removed, and people should be allowed to flag abusive comments.


I think Hacker News's "echo-chamber-ness" is extremely exaggerated. It only feels like that if you're in the minority viewpoint in a thread (which happens to me, too). However, it's not echo-chambery if dissident viewpoints live side by side with dominant viewpoints, even if the latter are 80% of the thread replies and upvotes.

An echo chamber is arguably more about actively suppressing dissident viewpoints. Reddit is infamous for moderators doing that simply by deleting comments under some pretext like the "spirit of the subreddit". With Facebook there's a first-and-last-name-and-picture-visible shaming that can be scary and damaging, repelling the opposite viewpoints. At the extreme, you can help foster an echo chamber by organizing a large group of people to scream and picket and threaten a speaker who has the wrong views, reminding all the others of what happens.

HN to me is an oasis. Even if I get downvoted when I have a minority view. I still feel as if intelligent arguments are considered.

With Reddit, how many intelligent comments are there? The English grammar alone is awful, full of shortcuts, cliches and new millennial-speak. But worse: the responses are short. One-liners. And even worse: argumentation is ad-hominem and emotive.

In summary, I think Reddit is about emotional expression, and HN is about (an attempt of) rigor and rationality.


I've seen a lot of very interesting, usually quite short comments on politics-related threads in the past few weeks, that were posted less than ten minutes prior and already grayed out and marked "[dead]". In each instance, the user was not using inflammatory language at all, yet HN was implicitly saying "yeah we're not going to allow discussion on this topic." I'm sure there's Very Good Reasons(TM) for this but it always feels like wasted opportunity for interesting, out-of-the-box discussion.


> In each instance, the user was not using inflammatory language at all,

Downvotes are not only for inflammatory language; a comment can be a negative contribution to the signal-to-noise ratio, and even violate the commenting guidelines, without using inflammatory language.


There might be a lot of reasons for that and we'd need to see specific links to say why, or make a good guess.


Ah. HN is special: they punish political discussions. It's unfortunate, even tragic in my opinion. They allow it sometimes if there's specifically a tech or science-related topic very very closely attached.

I avoid poking the moderator lions (I used to post political articles maybe a year or more ago), but I do wish HN would have another view of that particular topic. It's rather unavoidable that adults (and we are adults), highly-educated ones at that, would sometimes slip into politics when science or tech news (or legal news about tech or science) is discussed.

But yes, you're generally right about that.

I think emotive political discussion is useless, but rational policy discussions aren't useless.


On top of downvotes, you can say very toxic / abusive / condescending things and get away with it if you share the "correct" viewpoint, but unpopular views have to be exceptionally polite to avoid biased moderation. You can't bluntly refute or critique a questionable (but popular) argument without being accused of lacking civility...


> if you share the "correct" viewpoint, but unpopular views have to be exceptionally polite to avoid biased moderation

Where HN has been falling short (lately, in my observation) is where discussions about the ethics of certain business models get lost via the "buried" option or killed off completely.

You cannot come to HN to discuss the potentially negative ecological or economic impact of a YC company. The voting rings will literally send your comment or post to the void: buried or killed off completely. HN does still post lots of interesting links, but for truly interesting discussion that isn't (for lack of a better word) tainted by bias, I prefer Reddit these days.


Other areas where I see this happening on HN:

- discussing the risks of psychoactive drugs.

- pointing out flaws in overhyped press releases about the next wonder drug/treatment

I guess you're right that you can avoid getting downvoted by being exceptionally polite and spending about 15 minutes crafting a response saying "crap science, uncontrolled trial, possible placebo effect", but sometimes I just don't have the time and energy for that. I'd prefer it if people here didn't automatically assume I'm full of shit when I point out a flaw in an argument without writing my response absolutely perfectly the first time.


> "You cannot come to HN to discuss the potentially-negative ecological or economical impact of a YC company."

That's definitely not true! People say negative things on HN about YC companies all the time. We moderate HN less, not more, when YC or a YC-funded startup is at issue. That doesn't mean we don't moderate it at all—that would leave too much of a loophole—but we do moderate it less. This is literally the first principle that we tell everyone who moderates Hacker News. You can find many posts I've written about this over the years via https://hn.algolia.com/?query=by:dang%20moderate%20less%20yc....


Thanks for the reply.

What I meant is that one cannot start a discussion of such things without being willing to lose lots of points and karma: observations that AirBnB might be doing more harm than good to cities with "housing crisis" issues, or that Uber and Lyft are actually hurting public transportation ridership and putting more automobiles on the roads (creating congestion).

Those are two issues I've seen brought up here that get downvoted into oblivion. Why risk that? It's far easier for people to jump on the "attack the poster" bandwagon... as they have done to me in this thread.

Granted, I've been reading HN for over 11 years now, and the site is not the same as it used to be. A lot of interesting posters have left. Probably I need to lower my expectations for what to see when I come here.


It's hard to say why specific comments have been downvoted. Often it's because they break the site guidelines in ways the author didn't notice. Sometimes it's simply not fair, and other users need to (and often do) correct that by giving a compensatory upvote.

Plenty of comments arguing that Airbnb/Uber might be doing more harm than good routinely get heavily upvoted, so I'd question your overall generalization.


The real issue is the binary choice. I might disagree with a comment but acknowledge it's a valid, well-thought-out argument. On the flip side, I might agree but acknowledge it's a poorly formed argument.

Something might be totally off topic or funny, but if it made me laugh, do I downvote it?

Slashdot's model of tagging posts was a pretty good idea, I think, and allowed one to filter out the "funny" or "offtopic" comments.


> I might disagree with a comment, but acknowledge it's a valid well thought out argument.

So, upvote and respond.

If it's a net positive contribution, you shouldn't be downvoting.

> Slashdot's model of tagging posts was a pretty good idea

It's a good model for a customizable user experience, and a bad model for a community. Those two goals are often opposed.


I disagree. Upvoting is like a high five; downvoting, especially on Reddit, is used to bury things people don't agree with. Downvotes, IMO, should require some sort of intellectual effort as to why you are actively burying a comment or post, and thus should require a reply.

Dragon, you downvoted and commented on something you are doing right now, namely commenting on a comment system. No? At least you had the decency to reply, which most Redditors don't. That is what makes Reddit toxic.


I never downvote a comment, on reddit or here, simply because I disagree. I find such behaviour (subjectively) wrong, since it does not encourage discussion in an open, civilized manner (on reddit, all that happens is that you get 30 comments deep and just downvote each other's comments to 0 while being increasingly aggressive).

I reserve downvotes for when a comment is being needlessly toxic, doesn't contribute to the discussion or otherwise not helpful for an open discussion.

I think the best cure for "downvote to disagree" is to hold firmly to the principle that the opposing side of the argument has the best intentions to the extent of their knowledge, and that at the end of the discussion, all participants should have learned something. You should also always be willing to change your mind on what you argue about.

Always.


HN works very poorly, and worse by the year. I've been using HN since 2009 and have been on a wide variety of discussion platforms going back to usenet; HN has had its moment in the sun, and that has largely passed.

Voting on HN barely has an effect, and I suspect that the average votes per comment on HN has gone way down year over year. People just don't vote on posts as often as you'd think, not anymore. A related problem is that commentary doesn't go on for very long. In the usenet days you could have a good thread that lasted for months and months and continued to spawn good and interesting commentary; a flash-in-the-pan thread might only last a few days. On HN the window of commentary for a post is rarely more than a day, and typically only a matter of hours. It's just people strafing comments into the void and then disengaging. Long comments typically don't get read, don't get upvoted, don't get commented on, etc.


> ...I suspect that the average votes per comment on HN has gone way down year over year

OK, I'll bite! A cursory look at the data shows a clear increase in average votes on comments from 2007 until 2012, which is the only year with a dip, followed by steady growth until the present all-time high.


Huh. Is this total votes or votes per comment? I'm curious what the median number of total votes on comments that have at least one vote is.


I wonder if (or how) one should take population into account, too. We might figure that a majority of the people read only threads that are on the frontpage, so, if the number of people on HN has doubled, then each thread will get viewed by 2x as many people, and, if the new crowd has the same likelihood of voting on each comment as the old crowd, then you'd expect 2x as many votes, assuming no change in comment quality. Instead you might want "votes per comment, divided by number of users".

Of course, there are lots of "all else being equal" implicit assumptions there. First, if the population doubles but stories move off the frontpage in 0.7x the time, then you'd only get 1.4x as many votes—and this is one of InclinedPlane's points. Second, the newer crowd could be significantly more, or significantly less, active. To control for these two things, the measure you might use instead is "votes per comment per pageview", or "votes per comment per second a user spends on the page". Third, there might be more comments posted—well, duh, it would be weird if the new users never posted any comments.

Fourth—and I think this is another thing InclinedPlane wants to focus on—comment quality could have changed. Comment sorting is relevant too, because I'm sure lots of users don't read everything. If we suppose that, due to an increase in population, we get 2x as many comments but they have the same quality distribution, and if we suppose the best comments always go to the top, then the average quality of the top n comments should increase; you can see something like this in extremely popular Reddit threads, where the top several highly upvoted comments are clearly optimized for something (often clever jokes). If we suppose a decent population of users only read the top n comments, and always use the same function that maps "quality of a comment" to "probability of upvoting", then, when the set of comments doubles and (by assumption) the best rise to the top, we'd expect these users to generate more upvotes overall, and hence "average votes per comment viewed" should go up. (It's also possible that people's standards would rise. But I think people's changing standards would lag behind the changes in what they're viewing.) That said, for the comments that aren't in the top n, the fraction of people that view them (and consequently might comment on them) would go down.

The question of how long threads sit on the frontpage is relevant, both for comment exposure and for InclinedPlane's point about conversation longevity. (There are also pages like "new" and the no-longer-linked-at-the-top "best".) I wonder how best to quantify that... perhaps "the frontpage tenure of the thread with the longest tenure of all threads on that day".
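
To make the normalization idea above concrete, a toy calculation (all numbers invented) shows why dividing by exposure keeps pure audience growth from moving the metric:

  # Toy normalization: divide votes per comment by exposure, so that
  # audience growth alone doesn't change the measured engagement.
  def votes_per_comment_per_pageview(votes, comments, pageviews):
      return votes / comments / pageviews

  # Doubling the audience doubles raw votes (200 vs 100) but leaves
  # the normalized metric unchanged:
  assert votes_per_comment_per_pageview(200, 10, 2000) == \
         votes_per_comment_per_pageview(100, 10, 1000)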


> HN generally works very well

After reading a few discussions over the last few days, I was thinking to myself that HN was better than ever, and very good (with one serious shortcoming). Even the echo chamber is much better than I remember.

EDIT: The shortcoming, IMHO, is the abandonment of politics. HN is the ideal place to solve that problem, with a sophisticated audience open to and interested in experimentation and problem solving. The goal of propagandists is not to persuade you but to paralyze you; to shut down real discussion and debate. HN is, unwittingly, capitulating and cooperating with them. HN is another success for them.


> The shortcoming, IMHO, is the abandonment of politics. HN is the ideal place to solve that problem

No, it's really not, as HN demonstrates most of the time it interacts with politics.


> HN is the ideal place to solve that problem

That's an illusion, for reasons I attempted to describe here:

https://news.ycombinator.com/item?id=16443431


Thanks for responding.

> That's an illusion, for reasons I attempted to describe here

That implies that it's an unsolvable problem, if I understand correctly. There's no reason to think this problem is any more difficult than all the other "unsolvable" ones, and this one is particularly, I would even say extremely, valuable to work on.

I don't believe we could simply introduce political topics and it would work due to some HN magic. It would take serious work and experimentation to find a solution, but I think HN is better suited than other places to do that work. And a solution could change discourse in the country and the world, at a time when discourse on the Internet platforms SV has invented has become a very dangerous weapon for some, and is tearing society apart.

I realize that "we" means you and sctb more than anyone, and so it's a request and encouragement. I still think it's the most valuable thing HN could do, potentially world-changing. Previous generations had books and leaders that changed the course of history; this time it might be software or a software-based technique that turns the tide. I hope that at least you will keep it in mind.


I don't know that it's impossible. But if I'm certain about anything re HN, it's that it would be unwise to try to make it be that, for the same reason we don't do radical medical research on living humans.

Our first responsibility is to take care of what we have. The way to take care of a complex system is to be sensitive to feedback and adapt. We can apply that principle here. Look at what happens when the political taps get opened beyond a notch or two. Discussion becomes nasty, brutish, long, and predictable. That's what we want less of, so opening the taps all the way is not an option. For similar reasons, closing them all the way isn't an option either.

I don't disagree completely. I think there's a chance HN can slowly develop greater capacity in this area. But it would need to be very slow and not something we try directly to control. Anything as complex and fragile as HN needs a light touch.


Well... downvotes are definitely abused. They're not supposed to be used to express disagreement, and they are. All. The. Time. And it stinks to be on the receiving end of that.

But I'm not sure that they should be eliminated. The alternative is to leave moderation as the only way to deal with bad (abusive, off-topic, trolling, unintelligible) posts. I'm not sure that having people flag every bad post they think they see, and letting the moderators sort it out, is really the optimal way to do things.


> They're not supposed to be used to express disagreement

That's a common misconception. Downvoting for disagreement has always been ok on HN:

https://news.ycombinator.com/item?id=16131314

I think people have the wrong idea about HN downvotes because they think Reddit rules apply to HN. Sort of like how Canadians think we have Miranda rights because we've all watched American TV shows.


I would argue that while it's totally okay to do it, I'd find it better if people used it less for simply disagreeing. In my experience, the resulting discussion ends up being of poorer quality because of it (and gets less exposure, due to being pushed down and hidden once it hits a certain threshold).


Such evidence as I'm aware of points in the opposite direction: HN without downvotes would be like a body without white blood cells. Disease would quickly kill it.

The problem with your argument is that it doesn't reckon with just how lousy bad internet comments are, or how many of them there are, or how completely they take over if allowed to. To a first approximation, bad internet comments are the entire problem of this site.

It's easy to imagine an HN that would be just like the current HN, only with some negative X (e.g. bad downvotes) removed. Most of these ideas are fantasies, because the thing you'd have to do to remove X would have massive side effects. You can't hold the rest of the site constant and do that.


>bad internet comments

Bad != disagree. I think we all agree that it should be ok to downvote and hide "bad" comments. However the problem is that many good comments are downvoted simply because people disagree with them.

I think it might be better to remove the downvote and replace it with "flag", so people can flag bad comments (spam, abusive, pointless, etc). At least that way people would need to think a little before the comment gets flagged, which would hopefully result in fewer minority viewpoint comments getting hidden.


I'm not arguing that we should never downvote; if someone is writing garbage, I will happily downvote them. But maybe people are a bit too quick to downvote when they disagree...


Sure, but people have been saying that on HN for many years.

I think you'd have more success if you asked everyone else to upvote unfairly downvoted comments.


I try to do that, yes, though I always (wrongly) hope that people will change...


I stand corrected. I agree with zaarn, though - overuse of downvotes for disagreement is not helpful for having a real discussion.


Reddit does not have strong moderation.

Some subreddits do have strong moderation; some subreddits think they have strong moderation but they have fucking idiot mods who call down trolls; and there are some subreddits that have permissive moderation and those subreddits leak.

> However if you create consequences or a cost to commenting you'd eliminate toxic comments and trolling.

You genuinely don't. There are forums where you have to pay real money to be able to read and post, and where acting like a jerk will get you banned. They haven't eliminated the jerks. About the only advantage is paid mods, which ensure some consistency.


I apologize -- I meant that the platform itself offers strong moderation. Unlike Facebook, for example. I agree with your point, though.


It closes subreddits at the owners' whim; that seems strong. It doesn't need to be constantly exercising that power to have demonstrated that it has it and uses it.


This is why 4chan always had it right: anonymity + minimal moderation. Instead of echo chambers you get extreme contrarianism. The upside is that non-conforming viewpoints aren't buried and low-effort trolling just gets ignored.


My friend introduced me to a project that he'd been working on that touches on some of these issues.

https://doxa.network/

While I can't come to a conclusion on it (whether or not it's a good/bad idea) - I wonder what others in the HN community think about it. I apologize in advance if this is a bad place to comment, I've been mostly a lurker thus far!


I think this has been posted before, actually. The underlying concept, I get. But I'm not sure how it'll ever become a 'product'.

In addition to all that, I still fail to see how a blockchain would solve any of these fundamental issues.


Reddit and HN are both missing two key ingredients on the moderation side: Transparency and accountability.

There's nobody watching the watchmen, essentially. That leads to a lot of frustration, anger, mistrust, and abuse.


In particular, there is no guarantee that moderators are any good. Whoever registered a subreddit first "owns" it and moderates it themselves/chooses additional moderators. That's it.

This means there was a "gold rush" in the early days and if some shitter is sitting on prime real estate (like brand names of products) there's nothing you can do about it. If they decide to close the subreddit by making it private, nothing you can do about it. If they go inactive and are still squatting on prime digital real-estate, nothing you can do about it... if they later get hacked but are still inactive, doubly nothing you can do about it.


> Physical mail is sent to your house in order to get a single account.

That's how NextDoor works and it seems to work reasonably well.

There are some big communities that seem relatively healthy to me. For example, Instagram is easily my favorite social network. I see photos that my friends take and that's about it. I pull it up and am done with it in a minute or two.


Your list consists entirely of push media. Some examples of pull media that "play well with others", or at least are not actively anti-social, are podcasts and email listservs. Of course it's easier for advertisers to monetize push media.

In the long run push media is always going to be troll-ier and offend more people than pull media.


I believe that heavy moderation is necessary to weed out toxic stuff.

HN seems to be doing a great job in that regard (in my opinion), and it is a model other websites can learn from.


I think you're underestimating the cost to society of your parenthetical. To the degree that "the personal is political", sharing the details of one's circumstances — especially as a marginalized or disenfranchised individual — can reveal ostensibly unique struggles to be widespread societal problems. Twitter does this, and has been good for highlighting shared experiences. We'll lose a platform for that very important, seemingly trivial disclosure if we improperly disincentivize contributions. We need to keep "poor participants" in the common conversation.

Secondly, I notice no mention of deliberate, paid propagandizing, i.e. professionally divisive sock-puppets employed by sock-puppet firms.

Any serious discussion of threats to a healthy public discourse must address deliberate attempts to undermine the legitimacy of the common voice.


I think one way to help prevent echo chambers is to have term limits on moderators. So many moderators become sour towards their own communities but feel an obligation to stay involved. Term limits might help with that and encourage new users to become moderators themselves.


Relate 'karma' to the ability to post at all?

More karma, more posting; less karma, less posting. Everyone starts every month/week/day with only so much, with no roll-over per timeframe. Modify it so that "popular" threads cost more to post in. You can give karma to others via the upvote and take it away with a downvote, but still no roll-over. Troll/shill accounts would still get upvoted en masse, but less so, and it would be "easier" for mods to tell. (I'm sure you can model this without too much effort vis-à-vis the prisoner's dilemma.) You'd have to pick and choose which threads to comment in. Posting content would work similarly, with a slight modification to the cost of posting.
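
A minimal sketch of that budget, with invented numbers and the upvote/downvote transfers left out:

  # Toy karma budget: fixed weekly allowance, no roll-over, and
  # popular threads cost more to post in. All numbers are invented.
  BUDGET_PER_WEEK = 20

  def post_cost(thread_comments: int) -> int:
      return 1 + thread_comments // 100  # pricier as threads heat up

  class Account:
      def __init__(self):
          self.karma = BUDGET_PER_WEEK

      def new_period(self):
          self.karma = BUDGET_PER_WEEK  # no roll-over

      def try_post(self, thread_comments: int) -> bool:
          cost = post_cost(thread_comments)
          if self.karma < cost:
              return False  # out of budget until the next period
          self.karma -= cost
          return True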


The NRA has its own social network where users must perform certain tasks before they can post or comment. You have to tweet at your legislator or share things to your personal Facebook to show fealty to the community and earn enough karma to talk to others in it.

If you're curious: https://www.bloomberg.com/news/articles/2018-03-01/the-nra-h...


Jesus, that is effective.


Are you saying this would create less of an echo chamber?


I'd think so. More popular threads would "cost" more to post in, so there would be fewer posters in them as a result. "Brigading" would be difficult.


On the contrary: popular opinions would gather more karma, allowing their holders to share those ideas more often, while unpopular opinions would quickly have their posting ability removed via downvotes. Soon enough only the prevailing popular opinion would be found.

I had the same idea at first, but I don't think it would work in practice.


Hmmm, you're right. Perhaps a sliding scale then? The cost to up/downvote increases exponentially?
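
One way to read that sliding scale (purely illustrative): the n-th vote a user casts in a thread doubles in cost, so casual voting stays cheap while brigading gets expensive fast:

  def vote_cost(votes_already_cast: int) -> int:
      # 1st vote costs 1, 2nd costs 2, 5th costs 16, 11th costs 1024.
      return 2 ** votes_already_cast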


> However, if you create consequences or a cost to commenting you'd eliminate toxic comments and trolling at the cost of less participation and an echo chamber (though, you could argue that not all participation is equal and you're mainly removing poor participants).

Twitter actually does this, but only for some types of accounts. Ever see the "see more" button under replies? That's where people rated as low quality go. Twitter has a sort of rating for figuring out whether someone is low quality, and such accounts usually get hidden. Even their likes and retweets get hidden.

It hasn't seemed to help overall on Twitter, though I have noticed I get fewer assholes replying to me.


It is wildly inaccurate to call HN or Reddit “strong moderation” — the tooling alone is abysmally bad, plus Reddit can’t put the “post anything you want as long as it is not literally illegal” genie back in the bottle so easily after a decade.


I don't see the connection between the quality of the tooling and whether or not there's strong moderation. HN is pretty strongly moderated and reddit inherently allows for very strong moderation. Do you think HN is not strictly moderated or that reddit does not allow for strict moderation?

I'll agree that reddit doesn't seem to expose easy to use moderating tools, though.


I feel like HN is such an echo chamber it doesn't require strong moderation.


I'd say hierarchical credibility. More authority. Give the leaders power to bless others with disproportionate power, who in turn have the ability to bless others with disproportionate power.

Remember that Reddit ultimately only imparts one vote per person regardless of whether you're an admin or moderator or whatever. End that system. We've already established that these systems are not libertarian - the leadership has an opinion (in /r/Science the opinion is that you must post good science) and we want to empower them to enforce it.

Not only that, but provide negative feedback. If you endorse a terrible person, make it impact your credibility.


Real identities don't work when FB threads aren't indexed by Google.

If googling myself found comments I'd make on "platform X," you can be sure I'd carefully consider my comments on that platform.

Having anonymous commenting is important, but if you want a less toxic platform for mainstream comments, real identities + indexing is a good start.

The real WTF moment for online discussion is yet to come. When ML chatbots are able to comment indistinguishably from humans, then purely as a matter of time and scale, comment threads will eventually become chatbots arguing with chatbots, drowning out actual discussion by humans.


This nightmare scenario has natural stable states, I think. At some point, it will cease to be profitable to propagandize at machines, no? We should work to construct a society that negatively incentivizes bot proliferation, but it can't be a purely financial incentive. Free speech must not have a price tag, or else "stop, thief" may become impossible to shout.


IMO the problem is size/scale. If you're too small, then you just have people going through the motions of talking to each other in an attempt to make the platform seem real so more people will join it. If you're too big, you won't be able to curate effectively. There's a sweet spot in the middle, you start heading up around only maybe 20 or 30 users (the point at which it becomes impractical for everyone to know everyone), and when it starts heading back down depends on the culture, but go down it will.


I'd like to see what impact posting limits would have. For instance, let every user make 2 submissions and 10 comments in a month. One of the issues on every forum is that it takes a lot less time to make a ton of low effort comments than it does to make one thoughtful post. Another is that a small minority end up making the vast majority of the posts (even when that minority is well intentioned).
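
A sketch of that quota; the 2-and-10 numbers come from the suggestion above, and everything else is made up:

  from collections import defaultdict

  LIMITS = {"submission": 2, "comment": 10}  # per user per month
  usage = defaultdict(int)                   # (user, kind, month) -> count

  def may_post(user: str, kind: str, month: str) -> bool:
      key = (user, kind, month)
      if usage[key] >= LIMITS[kind]:
          return False  # quota spent; resets when `month` changes
      usage[key] += 1
      return True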


Another key factor is the community reaction to low-effort posts; Many users of social media are not looking for or willing to put in the time and effort to digest more in-depth discussion, and will happily skip over more thoughtful posts in favour of the next meme or bite-sized thought. In a platform such as Reddit, where votes per unit time are paramount, this results in the burying of most longer discussion.


Set up a forum and try it, then you too can be the mayor of an internet ghost town.


You still have Sybil attacks.


We’ve tried real identities plus serving as many ads against user conversations as possible (Facebook), and “algorithms” designed to keep users on as long as possible regardless of the cost to their mood, or the world around them. Perhaps “social media” should stop trying to be a way to make lots of money.

Naaah. Gotta monetize every inch of everyone’s life. It’s the American way.


I think Nextdoor does this model, and it's still notorious for trolling/internet fights.


I’m amazed at how much trolling there is on Nextdoor. It’s not hate speech level, and most posts seem pretty cordial but considering the community venue aspect I’m just floored at how reflexively shitty people can be the minute there’s a divergence of opinions.


Yeah, Nextdoor is astonishingly vile given the non-zero chance of actually running into one of your fellow members at the grocery store.

My hypothesis is that Nextdoor primarily appeals to those who already have that kind of territorial "they're all out to get us" mindset since territoriality is literally what the app is about. It's the angry "get off my lawn" old man of social networks and it attracts exactly those kinds of people. Mix in some Internet depersonalization effects and you get something pretty nasty.


It's a virtual home owners association in my experience.


My Nextdoor community gets a lot of hate speech. They have filters that catch the words you know, but they don't have a way to stop the ideas.

My neighborhood is 50%+ non English Speaking Chinese. There are posts almost everyday that say things like..

"Some people in this neighborhood need to OPEN THEIR EYES and stop hitting the gate with their cars. I'll post this in the appropriate languages so everyone and the community can appreciate the significance of my message. Ching-Chang Chow... Bang Bong, Bing bong bing."


That must depend on the neighborhood - I don't see any of that.


There was a podcast (I think Freakonomics?) saying that sports forums had the least toxic political discussions on the internet. This was probably because members' common interest in a sports team outweighed their political views.


>Thoughts?

Imagine a blank graph. Randomly place 1 million circles on it. No two with identical boundaries. This is our hypothetical Venn diagram of people's preferences for how communication happens online. Drop another million circles to represent how those people will actually act online.

It doesn't matter which points on the graph are labeled "toxicity," "echo chamber," "uncomfortably friendly," or "sparse comments, experts only." It only matters that the circles are not all identical.


Metafilter set the gold standard: accounts require a one-time payment to discourage sock puppets and they have full-time moderators who’ll jump in before behavior gets out of control and tell people to tone it down.

I don’t know an easy way to scale that model.


It seems like if moderators of subreddits were given the ability to (a) limit posting rights to paid users and (b) ban at will, most subreddits would see a much improved commenting culture.

It would also help reddit monetize.


> Real identities

Real real identities (i.e. government issued digital id) have never been done. I am sure they will come eventually. The political process is just very slow compared to the pace of technology.


The purpose of using real identities (government ID) is not to facilitate debate and sharing of opinions, but to punish and neuter debate and limit sharing of opinions.

Compare the outcomes of totally anonymous reputation-based forums such as HN, reddit or 4chan with near-real identity forums such as Facebook or Linkedin.

There is a very open flow of ideas and debate on 4chan and other reputation-based forums. There is at least as much hate speech and trolling going on at Facebook as on HN, yet Facebook has near-real identities. LinkedIn has at least as much spam and criminal phishing going on as HN, yet LinkedIn has near-real identities.

HN would not be a better forum if everyone had to register with their government ID. The main benefit would be to make it easier to ban one person from accessing the forum, and silence that individual.

My vote is for reputation-based forums.


South Korea tried this for a while, but I believe eventually gave up on it. China effectively manages it transparently.


No one will use it. Anonymous commenting is much more fun.


> There's no real solution to this problem.

I disagree. There IS a solution, but no one likes it.

Segway: Football aka Soccer had a problem back in the day. Games became (more?) boring because teams would go up a goal and just play "kick the ball to the goalie". The goalie would pick up the ball, bounce it, pass to a player who kicked it back to the goalie, rinse and repeat. Then they instituted the back-pass rule - https://en.wikipedia.org/wiki/Back-pass_rule - under which, basically, if the ball is passed back to the keeper he can't use his hands, only his feet. A generation of goalies had to learn to play with the ball at their feet, and the bit people enjoyed happened more often.

Rugby Union has had a similar evolution, although more intense. Rugby's goal is to have a game that can be played by people of all shapes and sizes, and they mostly succeed. George Gregan was a great player at 5'9", and most second rowers are over 200cm/6'7". But rugby always gets boring, because the coaches start playing boring rugby (lots of kicks, lots of positional play, less running). So rugby, every few years, overhauls the rules. For a year or two the game is exciting again, IMHO the best sport in the world; then it is boring all over again as coaches play it safe. We then get another overhaul.

I think the soccer back-pass rule and rugby's ever-changing rules are models of how changes in rules can dramatically affect the quality of an activity, but I don't think they are doable for online sites at scale. Instead, the only solution I can see working is to constantly change to new platforms.

I started in SEO circa 2001, and there were heaps of forums run on phpBB et al. They had all recently started, so they were figuring themselves out. The forums all interacted, but they all had their own feel and their own rules. Over a few years, they developed "personalities", and everyone could find a place that suited them. This "personality" then morphed into a kind of groupthink, and the forums grew into echo chambers where the rules were strictly enforced to keep out the others, each forum became hostile to all the evil others, and they devolved from fun places to hang out into the same old same old.

SM then came along, with sites like FB, Digg and reddit being born, and this process started again. A new place, no real rules yet, and it was the same exciting process of discovery. Over time, the bad parts set in, and these places became stale echo chambers filled with all the bad bits everyone talks about.

That's why I think the only real solution is to tear it all down and start afresh, because this process has repeated several times now. I think that partly explains why SnapChat and the other platforms exist, and why, IMHO, SnapChat et al will grow to a point and then fail to grow anymore, as the "freshness" fades and the nastiness intrudes. Unless sites can figure out a way to change this process, which after almost two decades is either unlikely (the pessimistic view) or too soon to tell (the optimistic view).

TL;DR when a platform gets entrenched, it starts to exhibit more nasty traits, and a new platform started afresh is the only solution.


just fyi I enjoyed your comment and anecdote but the word you're looking for is "segue" not "segway."


The real solution is making blatant lies and misinformation illegal.


Without perfect information and transparency there's no way to make lies illegal.


And who determines what's true and what's not?


At the risk of excessive red tape, the judicial system has been doing a reasonable job of determining facts in most modern countries.


It actually doesn't. Example: Did OJ kill anybody?

The judicial system engages in determining whether someone is guilty based on evidence that it itself filters.

What about more complex things? Like what if I say "P = NP"? Is that a lie?

And then, do you want a trial for every comment? That will not work even for a fraction of comments.


>Like what if I say "P = NP"? Is that a lie?

That can't be determined to be a lie unless someone solves the problem. It's only a conjecture or assertion until then.

>And then, do you want a trial for every comment? That will not work even for a fraction of comments.

It only really has to apply to political statements made by the most powerful office holders, and only when contested, and only when there is an imminent intent to deceive and impact policy.

It's not really an all-or-nothing situation. It's just a matter of how much can be achieved within a reasonable cost.


Not quite correct.

We've tried:

- Real identities (Facebook comments)

- Voting and self moderation (Reddit, HN, etc.)

- Strong moderation (Reddit, HN)

All within the context of rampant financialisation, land policies returning us to feudalism and continuous lying about the banking bailout including removing the one candidate who was going to take on the banks (Bernie).

Maybe it's not "the internet" that's the problem. Maybe the dissemination of information as we slide into this rentier hellpit is causing people to be pretty pissed off?


How can I read up on these land policies?


Here's what I know or believe I know about feudalism. It's almost entirely from the book https://www.amazon.com/Pre-Industrial-Societies-Anatomy-Pre-... , which while excellent does not deal with feudalism in any depth.

- European feudalism was an unusual system in that the government had no taxing power. The lord who owned land held the taxing power ("feudal dues") over that land, and the king funded himself by collecting feudal dues from land he owned personally, rather than e.g. by taxing the dukes. This might contrast with a more advanced state of civilization in which the caliph / emperor / whatever collected taxes directly from everywhere by virtue of being the supreme ruler, and paid a salary to his lower administrative functionaries. Or it might contrast with a system where the use of land wasn't much of a source of taxes. Or both of those latter things might be true simultaneously.

The US system of property taxes has a lot in common with the system I've described above, and some obvious differences. Similarities:

- The federal government ("king") can't assess property taxes. Only the states ("local lords") can do that.

- People other than the government cannot own land outright, but must pay the property tax ("feudal dues") for its use every year.

Of course, rather than the federal government receiving tax income based on federally owned land, it instead double-taxes the citizens of the states. (But on mercantile revenue rather than on land.) This is arguably worse than the feudal system.


How are devolved local states equivalent to private landlords? States capture land value via land tax and socialise this. Private landlords capture it and keep it.

Night and day.


Are you referring to a feudal lord as a "private landlord" or a "devolved local state"? Both would be more or less fully appropriate.

"Devolved local state" is slightly more accurate than "private landlord", because unlike a landlord in a more commercialized society, a feudal lord was not legally able to sell the land he owned. He was legally able to govern it.


My interest is that the feudal lord was living off the backs of others. The state taxing land to build hospitals is not the same thing.


You might enjoy Stop, Thief!: The Commons, Enclosures, and Resistance https://www.goodreads.com/book/show/17802312-stop-thief


Lords often got their start in lording by building a bridge.


I would read the heck out of a book that purported this.


I don't like "the economist"; however, a while back they did a piece on land value tax. Maybe HN will find it more palatable coming from this journal:

https://www.economist.com/blogs/freeexchange/2015/04/land-va...
