
The problem is that if you publish the lists, it leads to more abuse. For example, if spammers find out which sites are banned, they just post other ones.



I think there are two different types of sites you are blocking: (1) those which are just pure spam; (2) news/opinion/etc. websites that you’ve decided are not suitable for HN for various reasons (such as being low quality and tending to produce more ideological flame-wars than curiosity), for example Breitbart.

I agree that publishing case (1) causes harm (spammers will just use a different domain if they know you’ve blocked theirs). But case (2) is rather different: I don’t think the same justification for lack of transparency exists there. And shadow-banning the submission in case (2) is not very user-friendly. It would be better to just display an error, e.g. “submissions from this site are blocked because we do not believe it is suitable for HN” (or whatever). A new user might post stuff like (2) out of a misunderstanding of what the site is about rather than malevolence, so it’s better to educate them directly than potentially leave them ignorant. Also, while Breitbart is rather obviously garbage, since we don’t know everything in category (2) on the list, there may be some sites on it whose suitability is more debatable or mixed, and whose inappropriateness is less obvious than Breitbart’s (hopefully) is.


That's a good argument and subtle enough that I'm not sure whether I agree or disagree.


> For example if spammers find out which sites are banned then they just post other ones.

I don't think that makes sense. The supposed spammers can just check whether their submissions show up when they're not logged in.


That also requires additional effort on the spammers’ part. Increasing cost of attacks is an effective defense strategy.


Increasing cost of attacks is effective against good faith people, not spammers.

Even Cory Doctorow made this case in "Como is Infosec" [1].

The only problem with Cory's argument is that he points people to the Santa Clara Principles [2]. The SCP contain exceptions that allow not notifying users about "spam, phishing or malware." But anything can be considered spam, and transparency-with-exceptions has always been platforms' position: they've always argued they can secretly remove content when it amounts to "spam," and nobody has challenged them on that point. The reality is that platforms that use secretive moderation lend themselves to spammers.

[1] https://doctorow.medium.com/como-is-infosec-307f87004563

[2] https://santaclaraprinciples.org/


In my experience, increasing cost or delay even a little bit cuts out a disproportionate amount of bad stuff.

I once had the domain 'moronsinahurry' registered, though not with this group in mind...


In your experience where?

No research has been done about whether shadow moderation is good or bad for discourse. It was simply adopted by the entire internet because it's perceived as "easier." Indeed, for platforms and advertisers, it certainly is an easier way to control messaging. It fools good-faith users all the time. I've shared examples of that elsewhere in this thread.


I think that you are reading this too narrowly. SPAMers etc. are often in a hurry. For example, simply not responding for a second or two to an inbound SMTP connection drops a whole group of bad email attempts on the floor while no one else even notices.[0] Another example: manually delaying admitting new users to a forum (and in the process checking for bad activity from their IP/email etc.) seems to shed another bunch of unwanteds, as does raising the cost a little with some simple questions on the way in. This point about small extra delays and effort disproportionately deterring bad behaviour is quite broad.

[0] https://deer-run.com/users/hal/sysadmin/greet_pause.html
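
For concreteness, here is a minimal Python sketch of the greeting-pause idea (a toy illustration, not sendmail's actual implementation; the port, hostname, and 5-second pause are made up): accept a connection, wait briefly before sending the SMTP banner, and reject any client that starts talking before the greeting arrives.

    import socket
    import select

    # Illustrative values; real GreetPause delays are configured in the MTA.
    LISTEN_ADDR = ("0.0.0.0", 2525)   # toy port, not 25
    GREET_PAUSE_SECONDS = 5

    def serve_once(server_sock):
        conn, _peer = server_sock.accept()
        try:
            # A well-behaved client waits for the 220 greeting; many spam
            # engines blast commands immediately ("slamming") to save time.
            readable, _, _ = select.select([conn], [], [], GREET_PAUSE_SECONDS)
            if readable:
                # Client spoke before the greeting: protocol violation, reject.
                conn.sendall(b"554 SMTP protocol violation: command before greeting\r\n")
                return
            conn.sendall(b"220 mail.example.com ESMTP ready\r\n")
            # ... a real server would continue the SMTP dialogue here ...
        finally:
            conn.close()

    if __name__ == "__main__":
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(LISTEN_ADDR)
            srv.listen(1)
            serve_once(srv)

The legitimate client never notices the pause, while the hit-and-run sender gets rejected before any mail is accepted.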


In your cost/benefit analysis, you overlook the harms created by secretive actions. That's why I asked for details about your experience.

The internet has run on secrets for 40 years. That doesn't make it right. Now that everyone and their mother is online, it's time to consider the harms that secrets create.


There are bad actors, and many of them are lazy/stupid. Their activity imposes a tax/harm on the rest of us. One way to minimise that harm to the good actors is to use some mildly covert measures. The sendmail GreetPause is hardly a secret, for example: it catches a common deliberate malicious protocol violation and is publicly documented. This is not unique to the Internet, nor new; see also banking and personal security and so on.


This subthread started with a discussion about how "HN itself also shadow flags submissions" [1]. That's a slightly different form of moderation than the t.co delays.

Another commenter argued "Increasing cost of attacks is an effective defense strategy."

I argued it is not, and you said adding a delay can cut out bad stuff. Delays are certainly relevant to the main post, but that's not what I was referring to. And I certainly don't argue against using secrets for personal security! Securitizing public discourse, however, is another matter.

Can you elaborate on GreetPause? Was it to prevent a DDoS? I don't understand why bad requests couldn't just be rejected.

[1] https://news.ycombinator.com/item?id=37130143


Here's another reasonable summary:

https://www.revsys.com/tidbits/greet_pause-a-new-anti-spam-f...

I get several thousand SPAM attempts per day: I estimate that this one technique kills a large fraction of them. And look how old the feature is...


Okay, so the requests do get rejected; the server just uses a delay to make that decision.

I don't consider GreetPause to be a form of shadow moderation because the sender knows the commands were rejected. The issue with shadow moderation on platforms is that the system shows you one thing while showing others something else.

Legally speaking, I have no problem with shadow moderation. I only argue it's morally wrong and bad for discourse. It discourages trust and encourages the growth of echo chambers and black-and-white thinking.


How do you view the rest of typical SPAM filtering, where the mail is apparently accepted for delivery but then silently thrown away? For simplicity assume a system such as mine where I control the MTA and client, so no one is making decisions hidden from me as the end user who wants to get the ham and see no SPAM. (I get tens of ham per day and many many thousands of SPAM attempts.)


With spam email, the recipient has a chance to recover the mail by looking in their spam folder.

No such spam folder is provided to the public on social media.


Note that in the GreetPause case the SPAMmer will not see the rejection errors, since they don't look at the response to their hit-and-run (i.e. no one gets to see any error, neither sender nor target), and a legitimate sender should never get the error, so even this may be messy by your criteria I think!


> even this may be messy by your criteria I think!

Only if the recipient sent a false response.

If the response were misrepresented then I would object to the technique. But it doesn't sound like that's what happens.


OK, thanks!


platforms that use secretive moderation lend themselves to spammers

how is that? i can understand it not being useful, but how would it help spammers?


Spammers game the system while good-faith users get edged out. Spammers are determined actors who perceive threats everywhere, whereas good-faith users never imagine that a platform would secretly remove their content. Today, you see low quality content on social media, not because the world is dumb, but because the people who get their message out know the secret tricks.

Secret suppression is extremely common [1].

Many of today's content moderators say exceptions for shadowbans are needed [2]. They think lying to users promotes reality. That's bologna.

[1] https://www.removednews.com/p/hate-online-censorship-its-way...

[2] https://twitter.com/rhaksw/status/1689887293002379264


so to spammers shadowbanning makes no difference, but good-faith users somehow get discouraged even if they don't know they are shadowbanned just because they get no reaction to their posts? how is an explicit ban any less discouraging?

i can't see how shadowbanning makes things worse for good-faith users. and evidently it does work against spammers here on HN (though we don't know if it is the shadow or the banning that makes it effective, but i'll believe dang when he says that it does help)


> how is an explicit ban any less discouraging?

It's about whose messages are sidelined, not who gets discouraged.

With shadow removals, good-faith users' content is elbowed out without their knowledge. Since they don't know about it, they don't adjust their behavior or take their comments elsewhere.

Over 50% of Reddit users have had content removed that they don't know about. Just look at what people say when they find out [1].

> and evidently it does work against spammers here on HN

It doesn't. It benefits people who know how to work the system. The more secret it is, the more special knowledge you need.

[1] https://www.reveddit.com/#say


It has made sense since the internet was invented: spammers need everything thrown at them because they will abuse every nook and cranny of your system to get paid one cent more.


You're correct again. Spammers and bots are the most determined actors, so these secretive measures don't impact them.

In fact, such secrecy benefits spammers. Good-faith users never imagine that platforms would secretly action content. So when you look at overall trends, bots, spammers and trolls are winning while genuine users are being pushed aside.

I argued that secrecy benefits trolls in a blog post, but I don't want to spam links to my posts in the comments.


Most spammers aren’t that competent. Hiding their posts without telling them used to be very effective on Reddit (now Reddit tells them). I guess it’s the same on HN.


Spammers are more competent than genuine users. They are advertisers, so they are more likely to be tracking metrics.


If that were right, then HN would be overrun by spam.


So you think secretive measures more often defeat spammers than trusting users? I'd argue HN's content could be a lot better than it currently is.

Content curation is necessary, but shadow moderation is not helping. When a forum removes visible consequences, it does not prepare its users to learn from their mistakes.

I'll admit, I find HN to be more transparently moderated than Reddit and Twitter, but let's not pretend people have stopped trying to game the system. The more secret the rules (and how they are applied), the more a system serves a handful of people who have learned the secret tricks.

Meanwhile, regular users who are not platform experts trust these systems to be transparent. Trusting users spend more time innovating elsewhere, and they are all disrupted by unexpected secretive tricks.


> So you think secretive measures more often defeat spammers than trusting users?

Yes. And it's really not a close question.

"Regular users" don't have to be platform experts and learn tricks and stuff. They just post normal links and comments and never run into moderation at all.


> They just post normal links and comments and never run into moderation at all.

On the contrary, secret suppression is extremely common. Every social media user has probably been moderated at some point without their knowledge.

Look up a random reddit user. Chances are they have a removed comment in their recent history, e.g. [1].

All comment removals on Reddit are shadow removals. If you use Reddit with any frequency, you'll know that mods almost never go out of their way to notify users about comment removals.

[1] https://www.reveddit.com/y/Sariel007/

archive: https://archive.is/GNudB


I'm talking specifically about HN, not reddit.



