
There's a lot of consideration of whether we could, and precious little of whether we should. Unmoderated sites inevitably become breeding grounds for harassment and worse, especially as more mainstream platforms get better at driving this content away. I would be terrified to live in a world where Kiwi Farms and its ilk cannot be shut down.



Have you considered that maybe de-platforming people on the fringe and forcing them into their own private silos actually ends up boosting their message?

When you try to be the goody two-shoes and block anything and everything that could potentially be offensive to someone, you just make controversial opinions more intriguing. At least I want to see what the big fuss was about and determine for myself whether such banishment was actually justified, and more often than not it feels like too harsh a punishment, which in turn makes it more likely that I go look for the next banished person's shit, and so on.

One can then easily get directed to one of these silos where there are no opposing arguments at all. At least when someone acts out on a public forum, the majority of users can rein them back in line and avoid further indoctrination. However, this last part is a hard pill to swallow for most, since they don't want their own bad opinions to be called out.


I mean, it's worth _considering_, but it does not appear to be true. See Reddit; it has gone through a number of waves of banning abominable subreddits. In general the result is that the most extreme members create a nightmarish Reddit clone which no one else cares about, and the rest disperse.


How would you know when the very existence of these new silos is hidden from you?

This is like saying "our model recognizes 99% of the AI-generated images" while leaving out that you don't know the actual amount: when your model fails to recognize that an image in the wild was generated by AI, you don't know that it was generated by an AI.


... I mean, it's not hidden at all. If you want (you probably do not; they are astonishingly obsessive and horrible) to find the alternative reddit clones that people fled to when fatpeoplehate or the Nazi subreddits or the worst of the TERF subreddits or whatever were banned, well, they're right there, they are not a secret.


> When you try to be the goody two-shoes and block anything and everything that could potentially be offensive to someone

This is intentionally trivializing the actual approach and making it seem as arbitrary and low impact as possible. I certainly wouldn't support a ban on "everything that could potentially be offensive" and I haven't seen a proposal for one. I do support a ban on violent far right extremist movements on social media platforms though, because they coherently use these platforms as a venue for harassment, recruitment, and messaging.

The "marketplace of ideas" ideology or "don't feed the trolls" tactic don't actually work in practice. It's the sartre quote. Having a public policy debate with, for example, an ethnonationalist is a victory for the ethnonationalist in itself. They don't have to "win" the debate, they've won by getting you to have it in the first place.

Bans do work. Reddit used to have a serious problem with extremist antifeminists and literal, self-identified neo-Nazis brigading semi-related posts in other subreddits. Banning the extremist subs had a huge impact in reducing it! You don't have to give people a forum to self-organize against your other users.

Or like, what is Milo Yiannopoulos up to these days? His influence and reach shriveled into insignificance after he got banned from everything a few years ago. The idea that the best way to combat extremism is by discussing it with extremists is a particular ideology. It is not a pragmatic goal- or result-based approach to moderation, or an abstinence from making ideological decisions about moderation.


Deplatforming people doesn't boost their message, though. You don't get the Streisand effect when it's 1,000 trolls instead of one famous person. Also, the free market of ideas just hasn't proven effective at stopping harassment and worse. There's nothing illogical about what you've said, but the real-world data just doesn't support your conclusions.


I assume you’re terrified, then, because said farm hasn’t been shut down and apparently won’t be.


I mean yeah, it's a scary time to be a queer person. Lots of our rights and protections are under attack now in ways they weren't 5 years ago. I hadn't heard that Kiwi Farms was back up; that's deeply disappointing.


OP here. I don't expect these servers to become unmoderated. I think a lot of them will moderate as strictly as, or more strictly than, Twitter currently does.


This is something I said, but didn't explain well in the thread about Bluesky using domains as handles. You did a much better job of explaining it in your article. Being able to adjust the moderation rules to fit specific scenarios is useful.

I also think the use of domains could have a significant impact on the quality of online discourse because building a good reputation on a domain and having that transferable anywhere on the internet is a lot more valuable than a handle that's only usable within the silo of a single company.

Sub-domains add another layer, where the owner of the top-level domain has an incentive to make sure they're not bringing bad actors onto the network, because moderation could be enacted against the base domain, not just individual sub-domains.
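For illustration only (the function name and the blocklist mechanism are my own assumptions, not anything the protocol specifies), the cascading effect amounts to a simple suffix check:

    def is_moderated(handle_domain: str, blocked_domains: set[str]) -> bool:
        # A block on "badactor.example" also covers "spam.badactor.example",
        # so the base-domain owner carries reputational risk for its tenants.
        return any(handle_domain == d or handle_domain.endswith("." + d)
                   for d in blocked_domains)

    print(is_moderated("spam.badactor.example", {"badactor.example"}))  # True
    print(is_moderated("alice.example", {"badactor.example"}))          # False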

Domain-based attestation could also drive significant change. Imagine a system where spending money at a reputable company gave you a digital token / receipt that you could attribute to a domain (aka identity) as a way to attest to that domain being a good participant.

The attestation wouldn't cost anything beyond what you're already spending, but it's valuable because it demonstrates you're spending real money somewhere and attributing it to an identity. That doesn't scale well for bad actors running millions of bots, because someone like me might have thousands of dollars of spending per year that I can attribute to my reputation, or to the reputation of someone I've had a good interaction with, and bots can't throw that kind of money away. I.e., it's a good indicator that a domain / identity isn't a bot, spammer, jerk, etc.
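As a rough sketch of that idea (every name below is hypothetical, and a real design would presumably use public-key signatures and a shared schema rather than the shared-secret HMAC stand-in used here for brevity):

    import hashlib
    import hmac
    import json
    import time

    # Stand-in for a merchant's signing key; with public-key signatures,
    # anyone could verify receipts without holding the secret.
    MERCHANT_SECRET = b"demo-only-secret"

    def issue_attestation(domain: str, amount_cents: int) -> dict:
        """Merchant issues a signed receipt attributing real spend to a domain."""
        body = {"domain": domain, "amount_cents": amount_cents,
                "issuer": "shop.example", "ts": int(time.time())}
        sig = hmac.new(MERCHANT_SECRET,
                       json.dumps(body, sort_keys=True).encode(),
                       hashlib.sha256).hexdigest()
        return {"body": body, "sig": sig}

    def verify(attestation: dict) -> bool:
        """Check the receipt really came from the issuer, unmodified."""
        expected = hmac.new(MERCHANT_SECRET,
                            json.dumps(attestation["body"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, attestation["sig"])

    def attested_spend(attestations: list[dict]) -> int:
        """Total verified spend attributed to an identity, in cents."""
        return sum(a["body"]["amount_cents"] for a in attestations if verify(a))

    receipts = [issue_attestation("alice.example", 2500),
                issue_attestation("alice.example", 1200)]
    print(attested_spend(receipts))  # 3700 -> a cheap "probably not a bot" signal

The point is just that the receipt costs a genuine customer nothing extra, since it piggybacks on spending they were doing anyway, but is expensive to forge at bot scale.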


Oh interesting, so all the content is out there but each user gets to decide how what they see is moderated?


Why?

The vast majority of people aren't interested in terrorising minority groups, or anyone, or enacting violence of any kind on any people of any kind.

While situations like that, and sites like that, are an important issue, they're not the type of issue that spirals wildly out of control and takes over the world if left unchecked.

It doesn't mean they should be left unchecked - any such negative outcomes are horrific and awful and should be minimised as much as possible.

But it does mean we don't need to feel terrorised by them, which is good, as we'll make calmer, more rational, and therefore better decisions about them.


5% of the population is diagnosable with severe personality disorders that lead to anti-social behavior. Approximately 1% have NPD, a bit over that have diagnosable psychopathy, and there are several other disorders to round out that 5%. They aren't going to "terrorise" people so much as troll, manipulate, disrupt, and abuse. Any specific form of attack is often only a means to an end.

The vast majority of people (95%) are not like this; however, if you cannot police those 5%, your platform will become the playground of the malcontents. Moderation is non-negotiable for any social platform.



