Ultimately I dealt with those people by “greylisting” them: I added a sleep() of 5 to 25 seconds before each page render (actually it was more sophisticated and would stream chunks over TCP, so the slowness felt even more real).
Worked like a charm. A few days later, the recalcitrant users would no longer come to the website.
I called this “moderation by degradation of user experience”, and it was pretty effective, like the solution described in your post.
Think about page load times if you need to restrain visits.
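The chunked-streaming trick is easy to sketch. Here's a minimal, hypothetical Python version (all names are invented, and the per-chunk delay is far shorter than you'd use in practice, just to keep the sketch runnable):

```python
import time

def degraded_stream(body: str, chunk_size: int = 64, delay: float = 0.01):
    """Yield the page in small chunks with a pause between each,
    so the client sees a slow trickle rather than one stalled request."""
    for i in range(0, len(body), chunk_size):
        time.sleep(delay)  # in production this might be 0.5-2s per chunk
        yield body[i:i + chunk_size]

# The full page still arrives intact -- just painfully slowly.
page = "<html>" + "x" * 200 + "</html>"
chunks = list(degraded_stream(page))
```

Hooked into a real server you'd hand this generator to the framework as a streaming response body, so the browser's progress bar crawls along instead of the request simply hanging; it reads as "slow site" rather than "blocked user".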
We also had a community suffering from this problem (during the early 2000s). Bans would take care of a lot of problem users, but would just give energy to those who were truly out for blood, trolling, bored, or very immature.
We had one user get banned over a dozen times while we tried banning IPs, name regexes, or anything else we could think of. Finally, like you, we found that if we annoy them first, they get bored and shuffle off to some other, lower-barrier place.
Some of the nice features from that plugin (via the site) were:
1. Slow response (time delay) on every page (20 to 60 seconds default).
2. A chance they will get the "server busy" message (50% by default).
3. A chance that no search facilities will be available (75% by default).
4. A chance they will get redirected to another preset page (25% & homepage by default).
5. A chance they will simply get a blank page (25% by default).
6. Post flood limit increased by a defined factor (10 times by default).
7. If they get past all this okay, then they will be served up their proper page.
*  https://www.vbulletin.org/forum/showthread.php?t=93258
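The feature list above amounts to a probabilistic gate in front of the page handler. A rough Python sketch using the listed default odds (everything else -- names, return values -- is invented, and the delay is shortened so the sketch actually runs):

```python
import random
import time

# Odds taken from the defaults in the list above; all names are hypothetical.
DEFAULTS = {"busy": 0.50, "redirect": 0.25, "blank": 0.25}

def degrade(rng: random.Random, opts=DEFAULTS):
    """Decide how to mistreat a greylisted visitor on this request."""
    # Feature 1: always add a delay (20-60s in the plugin; tiny here).
    time.sleep(rng.uniform(0.001, 0.003))
    if rng.random() < opts["busy"]:          # feature 2
        return "server_busy"
    if rng.random() < opts["redirect"]:      # feature 4
        return "redirect_home"
    if rng.random() < opts["blank"]:         # feature 5
        return "blank_page"
    # Feature 7: they got through; search may still be disabled (feature 3)
    # and post-flood limits multiplied (feature 6) elsewhere.
    return "real_page"

rng = random.Random(42)
outcomes = [degrade(rng) for _ in range(400)]
```

Note the checks are independent rolls, so a targeted user still reaches the real page roughly a quarter of the time -- enough intermittent success to keep the degradation plausible as ordinary flakiness.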
> So, how do we escape this parasitical leech without triggering his vindictive rage? Gray Rock is primarily a way of encouraging a psychopath, a stalker or other emotionally unbalanced person, to lose interest in you. It differs from No Contact in that you don’t overtly try to avoid contact with these emotional vampires. Instead, you allow contact but only give boring, monotonous responses so that the parasite must go elsewhere for his supply of drama. When contact with you is consistently unsatisfying for the psychopath, his mind is re-trained to expect boredom rather than drama. Psychopaths are addicted to drama and they can’t stand to be bored. With time, he will find a new person to provide drama and he will find himself drawn to you less and less often. Eventually, they just slither away to greener pastures. Gray Rock is a way of training the psychopath to view you as an unsatisfying pursuit: you bore him and he can’t stand boredom.
I did this at my old workplace where people would get enraged. I learned not to interrupt and not even to try to be empathetic: no engagement at all.
People are like cats: they only react to stimulation.
Even for love, today being Valentine's Day, I recall a saying: "the opposite of love isn't hate, it's indifference".
It worked really well -- although not perfectly -- when I was being constantly annoyed by someone. I explicitly told them I didn't want to deal with them, but they kept coming. They stopped once I replied to their "how are you?" with how I really was.
That's a good thing, because it avoids stigmatising a real diagnosis that's given to a group of people who are already deeply stigmatised.
> he will find a new person to provide drama and he will find himself drawn to you less and less often
> he can’t stand boredom
Unnecessarily gendered language is jarring for the reader and also (possibly unintentionally) sexist. The singular they/them/their is generally acceptable to use in cases such as these.
In 2012, this was already well-known. The language is jarring and sexist for no apparent reason. It is not undue revisionism.
Any level of revisionism is undue.
The point of not using gendered language is to compensate for discrimination, not to hide problems.
‘He’ worked as the default pronoun for a long time in many languages all over the world. (And still does in many languages.) I find it curious how many people can be sold on the idea that a language can be considered sexist.
They exist. I'm not sure that they can de-gender a language, but they can swap word A for word B.
I was simply offering that I don’t perceive the language as sexist. You do. That’s fine.
Similarly, I would offer that I don’t find your sarcasm productive in a civil discussion.
Either that or a plugin to encourage productivity.
Don't block social media sites, just turn them all into slow ineffective shitty websites.
Often, the comments they're posting into the black hole are indicative of why they were banned in the first place. But if I see a dead comment that does add to the conversation, I vouch for it, which un-deads it.
Although I did see a fairly new account the other day with every comment dead, from the very first one, with no visible mod intervention; I figured it was from being a sockpuppet or someone with a track record.
p.s. I wouldn't want showdead on all the time! I use this "HN ungrey" bookmarklet to see whited-out comments on a page. Also it's useful to read comments that have just been voted down enough to be hard to read:
'Almost invariably'? Dude, I created my first account here a few months ago and was "shadow banned" within 15 comments, simply for expressing honest opinions, which somebody somewhere took anonymous offense to and then SHADOW BANNED ME. Is that fair? Do you expect to attract new faces here with such an attitude? I'm a Slashdot poster going way back, with a mid-6-digit UID there.
Then again, it's always possible that there's another, deeper "dead" in there, and only the really lousy spam shows up on "showdead". But that way lies paranoia.
Debug steps: turn off bitwarden, my only extension. Never helps. Ctrl+Shift+Del cookies. Never helps. Sigh, open chrome. Works first time.
Is it just me or did the web up and dump firefox just when it started to get good?
I asked them and they're like "yeah, it only works on chrome-based browsers". Or something to that effect. It's not like some CSS was wonky, or a bug somewhere... No, the default process of them building the SPA somehow yielded a completely non-functioning app for Firefox.
This is a change that’s been underway for years but came as a surprise when it actually shipped. I coordinated updates to ~40 packages owned by 5 different teams at my company, and had to put aside a good amount of other critical product work for about a week to ensure we didn’t encounter any customer issues.
The crux of the issue for maintainers is that auth flows that require cookies to be sent across different origins (e.g. OAuth with form_post) will no longer work unless they update the cookies to explicitly be SameSite=None and Secure. Chrome led the pack on shipping the change, but also implemented a special timeout rule that temporarily allows cookies that don’t meet the new spec to be set anyway, to try to ensure auth flows don’t break. Eventually they will lift this timeout. Firefox has shipped support but has not implemented such a timeout.
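Concretely, the fix is a one-attribute change on the cookie. A small illustration with Python's standard http.cookies module (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Cross-site auth flows (e.g. OAuth form_post) now require the cookie to
# opt in explicitly to being sent from another origin.
cookie = SimpleCookie()
cookie["session"] = "opaque-token"
cookie["session"]["samesite"] = "None"  # allow cross-origin POST redirects
cookie["session"]["secure"] = True      # SameSite=None is rejected without Secure
cookie["session"]["httponly"] = True    # keep it away from page scripts

header = cookie.output()  # renders a Set-Cookie header line
```

Without the explicit `SameSite=None`, browsers now default the cookie to `Lax`, which is exactly what silently breaks the form_post flow: the cookie never accompanies the cross-origin POST back to your site.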
For me it's usually extensions.
A truly deplorable act...
I also added about 30 seconds of latency to every page I visit, but for completely different reasons than the OP. Switching to Brave and blocking all cookies and JS by default meant I had to manually enable them for nearly every site that I actually wanted to use.
About a week later, Chrome was reinstalled. Maybe I'll try it again once I level up my willpower.
That explains a lot... I frequently have to solve 10+ captchas when I'm using Firefox, many of them rate-limited. It feels like a punishment for resisting surveillance. These things should be illegal due to the accessibility problems they cause, if not the fact that they're a nuisance.
I'm sorry if that's unnecessarily dystopian
Everything is okay and justified when rich corporations do it. "Normal" people just have to accept it without fighting back in any way. Company directly and openly transmits malware to people's browsers, collects all personal information and creates detailed profiles of people in order to sell to interested parties? If I did that, I'd no doubt get charged with some sort of crime. They just make it part of their terms of service which nobody ever reads much less agrees to and somehow everything is justified. Suddenly it's not malware but "surveillance capitalism", a totally legitimate activity. And if we try to resist in any way, they use the lack of tracking to say we're indistinguishable from the networks of bots spamming them or DDoSing them or whatever. Since it's part of their terms of service, any attempt on our part to circumvent their fingerprinting is abuse.
> we're going to correct your behavior by making your browsing experience miserable
Hopefully the only thing they'll achieve is the death of their own online community. Imagine if HN forced people to solve a captcha before every single post.
> Invent a better one and the world will throw money at you.
It already exists.
The abuse stems from the fact that servers connected to the wider internet are designed to respond to anyone who tries to talk to them. That's the fundamental problem with internet security today: computers talk to strangers they don't know, much less trust.
What if computers dropped all packets by default and networked only with authorized users? The risk of exploitation and abuse becomes negligible because to unauthorized users it's like the computer is not even there to begin with.
This can be done with single packet authorization. The internet would lose its mass market appeal but it's much better than normalized widespread surveillance.
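A toy version of the idea, assuming a pre-shared secret per authorized client (real SPA implementations such as fwknop are considerably more involved; all names here are invented):

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # pre-provisioned per authorized client (illustrative)

def make_knock(client_id: str, now: float, secret: bytes = SECRET) -> bytes:
    """Client side: build the single authorization packet."""
    ts = str(int(now)).encode()
    mac = hmac.new(secret, client_id.encode() + ts, hashlib.sha256).hexdigest()
    return client_id.encode() + b"|" + ts + b"|" + mac.encode()

def accept_knock(packet: bytes, now: float, secret: bytes = SECRET,
                 window: int = 30) -> bool:
    """Server side: silently drop anything that doesn't verify."""
    try:
        client_id, ts, mac = packet.split(b"|")
    except ValueError:
        return False  # malformed: drop without replying
    if abs(int(now) - int(ts)) > window:
        return False  # stale packet: crude replay protection
    expected = hmac.new(secret, client_id + ts, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(mac, expected)
```

The key property is that an invalid knock produces no reply at all: to a scanner, a port that drops everything is indistinguishable from a machine that isn't there.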
This works well if you have a network of fake accounts from a single "persona" or ring of personas -- by all indications, they can't see that their own posts are being ignored.
Notice: it's almost the exact response to the persona management software problem  (aka bots).
Give them their own queue with their own games with other cheaters to play against, and as long as nobody is cheating in a way that breaks the servers, they can play their own version of the game if they want without ruining the game for those who don't cheat.
Neither reddit nor HN makes any attempt to make it hard for sophisticated users to figure out they're shadowbanned.
Of course your main point - that this is all terribly imperfect and won't stop a determined, sophisticated user, who has realized what's happening - is spot on. That, however, is perhaps a rare combination, rare enough to simply continue dealing with manually.
IA was just an example, and Tor would be easier. But anyway, I think it shows the flaw in doing so:
> You could also shadowban IA.
If the spammer manages to get all the IPs hellbanned just by looking at things, he gets more eyeballs on his spam.
My point is, you can't get much better than normal shadowbans, which are trivially detectable by moderately sophisticated users (just log out and check your profile) but not by anyone else. "Hellbanning" is a stupid extension of this concept which only works in video games.
Also, shadowbanning is a spineless and deeply unethical move. If I get banned, I know what I did wrong and can reflect on that. If I get shadowbanned, I'm just screaming into the ether. That is not a Good Thing™, it is atrocious.
Depends on the use case. Once every 24 hours is a lot easier to moderate than a minute by minute spam wave.
> IA was just an example, and Tor would be easier.
TOR would indeed be easier... assuming it's not already blocked through other means, as it frequently is. There's a whole ecosystem around blocking TOR and other proxy mechanisms - imperfect and permeable though they may be.
> My point is, you can't get much better than normal shadowbans
Not sure I agree, or that you've supported your point -- however, even shadowbans are often unnecessary. The goal is never perfect moderation, merely to stack the deck in the moderators' favor: blocking problematic content cost-effectively until either the available moderation effort can handle it, or the spammer moves on to easier, more cost-effective targets (which even basic shadowbanning can achieve, mooting the need for better tools even if they're available).
> Also, shadowbanning is a spineless and deeply unethical move.
As a first line of defense against mere rules breakers, I might agree. As a second, third, or nth line of defense against particularly problematic ban evaders and spambots, I will gladly resort to such tools - or worse - and sleep soundly at night.
So then you will have no problem recognizing where I'm coming from when I say that the Hacker News 'mods' are some of the most evil Nazis on the planet. After all, I created my first account here only a few months ago, and was shadowbanned in less than 15 posts despite posting with honest intentions. Is that what decent people do to other people?
I mean, I wouldn't use your hyperbole, but I get the anger and frustration and the disagreement about how communities should be run. I'd certainly like to think I'd run any of my communities differently, given some of the outlier cases of automatic shadowbans I've heard of.
On the other hand, I also realize just how badly the moderation team is outnumbered. Once you get to a certain scale, you only have bad options at your disposal. Showdead and vouching at least add a twist to it that make it not quite as bad as some of the other options out there.
But back to the first hand - I've quit other communities over less. That's my preferred form of retaliation when I don't merely disagree with moderator decisions, but feel strongly that they're outright behaving poorly, mistreating those I care about, and not being reined in - let them reap what they sow. I'm happy to take my knowledge, advice, technical chops, and general aid elsewhere. The internet is vast and infinite, and there are communities out there with moderation styles to my liking, where my contributions will be appreciated.
And they are appreciated, at least from time to time. I've had people reach out to me in another community over a decade old abandoned and archived github project for questions and appreciation, to say nothing of my more current projects. I've had people credit me for teaching them the programming language they used to enter the industry - or perhaps blame me ;). I've helped more people find and understand their bugs than I can easily count. Which isn't to say I'm a perfect, always behaving individual, or that I won't cut moderators some slack for honest mistakes or just generally being human. But I will aim to aid the communities I appreciate, and withhold from those I don't, by voting with my feet.
I found this (related?) poll https://news.ycombinator.com/item?id=7818823
Incidentally, if a banned account is making only good posts, we're happy to unban it. I often look at the recent commenting history of banned accounts in the hope of finding such cases, and users sometimes email us about them (as mirmir mentioned elsewhere in this thread). That's super helpful!
One strange phenomenon is that there are banned accounts that post good comments, but revert to posting bad comments that break the site guidelines as soon as we unban them. Then we ban them again and their comments get good again...go figure. Any large-enough population sample includes a long tail of behaviors.
It would be somewhat ironic if re-enabling interaction with the community is what's driving them back to bad behaviour. You know, HN as a bad influence.
The problem here is your BONDAGE AND DISCIPLINE moderation system, as I explained to you. You make up all of these bullshit rules and then stand around with tasers and shotguns forcing people to follow them "OR ELSE." You do not exercise any discretion or any moderation whatsoever, but just go around shooting people in the face simply for opening their mouth to speak. And then you're surprised as shit when this behavior breeds nothing but contempt for you and your idiotic rules. "How could this be happening?"
Take me, for instance. Made my first account a few months ago, posting my opinions innocently. Shadow banned in less than 15 POSTS. Holy shit! Talk about being shocked. Did you care, when I brought this to your attention? Does it matter to you that you are driving good people away from here every day? Of course not. You're too self-absorbed to care.
And you even sit around having these long discussions, thinking up clever and creative ways to be even more of a hellspawn against these people whose opinions you hate so much. New ways to separate people from society and trap them inside a cage where you don't have to hear their thoughts or opinions. Has there ever been a more sick or contemptible society in the history of man?
Go ahead and ban/shadowban this account also, please. I will just make another. And another. And another. And another...........and another......... from now until the end of time. Eventually I will automate the process, and submit my comments in batches, at a far higher rate than your mods can find and hide them. In my irritation at your unchecked fascism, my comments will ALL be as sharp as a knife, to keep driving the truth home as frequently as possible, until these "mysteries" you "just don't understand" are finally and definitely solved in your mind.
Somebody should make a shrine of his lucid and technical posts. And also a book.
Anybody know of such things?
Isn't this a pretty mean thing to say? Just making an account from Tor is enough to get you shadowbanned on HN.
> and mark such accounts legit to immunize their future posts from those filters
I remember seeing you restore a post from someone who made their account via tor and their comments were auto-deleted. Their next post was autodeleted in the same way, so I presume that this feature is buggy (or was, as this was quite a while ago).
I'm glad you like the vouching system! I still feel like it's the best single change we've made to HN since pg retired.
And you can vouch for good comments from shadowbanned accounts, just as for manually flagged comments.
Being rather paranoid, I check this account occasionally via Tor.
Imagine Van Gogh on HN...
I agree with the sentiment; shadowbanning is a passive-aggressive way to sweep a community's problems under the carpet by hiding them from the public.
Banning is legit if done publicly with reasoning, so people can get a clear idea of what happened and why, like
> you broke rule number 3
not when it is done just to keep "the community clean" without any explanation whatsoever
I guess this is the reason why HN rules are so vague
I wasn't aware that some communities are now limited to the app only. I haven't run into any yet.
But that sucks.
Probably. As a shareholder I get it. The site needs to make money so it can keep providing the service it provides. And ads are the best way to do that.
I also get it from a development standpoint. It takes effort to maintain a frontend, and maintaining two of them takes twice as much effort. With limited resources, I can see why it makes sense to focus on the mobile interface and let the web interface fall by the wayside.
I'm not entirely sure I'd have made a different decision if I still worked there. I don't know enough about the internal structure or costs or revenue to say for sure.
I can say that I know the people in charge, and they are good people, and if this is the choice they made, it was probably for good reason.
Seriously, the number of times the Reddit app hasn't worked but Apollo has is kind of ridiculous.
When I had it installed (up to last week), it was using more than 10x anything else on my iPhone. I was definitely not using reddit enough to justify that.
I can even imagine it tamping down "reply wars" and long arguments since you get more decompression time between impressions.
The system "slows down" the user experience, and also introduces random timeouts and other unpleasant user experiences.
I can see it only working when users don't know it exists.
But when secret, I imagine this would be INCREDIBLY effective, as they'll blame the experience on the software, not their behavior.
What if a user pays for the content directly? Indirectly, they pay via ads.
And yet if applied to networking, with say QoS, it's suddenly a net neutrality issue.
This always reminds me of the "Trolldrossel" (troll throttle), which was/is a funny piece of research and engineering to reduce spam and troll comments.
It's in German, but maybe auto subtitles and DeepL can make you understand it: https://linus-neumann.de/2013/05/die-trolldrossel-erkenntnis...
Messengers made everybody wrte liek this... but what if you made the UI a bit difficult unless the author spells right? Maybe we'd see some global gain?
(Yes I know I'm banned with this account as well)
Nodes which are actually important are 'bridge' nodes that provide a means of moving between mostly-disconnected groups. I started wondering what these ideas looked like in an actual social graph, like society. What would 'bridge' nodes look like, what would eliminating the connection to them look like, and what effect would it have? I think a social bridge node would be something like a biker whose main social group is his motorcycle gang, but who also participates in his elderly aunt's knitting circle once a month. He provides a means through which ideas and concepts and information can flow from biker gangs, and those connected to them, to a group of elderly ladies and those they are connected to. They are, almost by definition, tenuous links -- ones which, if someone had influence over the communication networks being used, might be very easy to disrupt. What consequences would there be to breaking those links on a large scale? In the '6 degrees of Kevin Bacon' situation, you can get the average number of links needed to reach Kevin Bacon up over a dozen by removing only a couple handfuls of bridge nodes.
I think doing such a thing on a real social graph could be very quiet, possibly undetectable (drop messages from rarely-connecting pairs of users... they rarely connect, so how many of them will go through the trouble to re-establish contact? Have bridge nodes have something go haywire and they have to be issued a new phone number, 'their facebook got hacked', etc). And the consequence would be to freeze most things in place, or at least radically slow down any kind of large-scale social change. Disruption of the status quo on the scale of regime change in a government, say, requires buy-in from large and very mostly-disconnected segments of the population. If only pockets of people are interested in change, it doesn't matter how intensely they want the change to happen, it only matters if they can join forces with very disparate compatriots. If you had high-level control of communication networks and a vested interest in guarding the status quo against large-scale social upheaval, you could probably do it very quietly and without really needing anything more than the metadata of connectivity. No need to find out what ideas are being spread, you could just make sure ALL ideas remain trapped in their own little bubbles or that their spread is greatly contained.
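The bridge-node effect is easy to demonstrate on a toy graph: two tight clusters joined only through one node. A small Python sketch (pure stdlib; the graph is entirely made up):

```python
from collections import deque

def shortest_path_len(adj, src, dst):
    """BFS shortest-path length between src and dst; None if disconnected."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Two tight clusters joined only through a single bridge node, "biker".
edges = [("a", "b"), ("b", "c"), ("a", "c"),     # biker gang
         ("x", "y"), ("y", "z"), ("x", "z"),     # knitting circle
         ("c", "biker"), ("biker", "x")]         # the bridge

def build(edges, drop=()):
    """Build an undirected adjacency map, optionally removing some nodes."""
    adj = {}
    for u, v in edges:
        if u in drop or v in drop:
            continue
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

with_bridge = shortest_path_len(build(edges), "a", "z")
without_bridge = shortest_path_len(build(edges, drop={"biker"}), "a", "z")
```

Drop the single bridge and the clusters can no longer reach each other at all; in a bigger graph with a handful of redundant bridges the paths don't vanish, they just get much longer -- which matches the 'degrees of Kevin Bacon' observation above.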
For the past few years specifically it feels like a story gets a suspicious amount of immediate and very widespread reach when they’re on the topic of an outrageous member of some certain political or other identity group. Any group, as in this is occurring in all directions simultaneously. I felt this way just yesterday when I saw a Reddit thread about some transgender sports participation drama and the “Other Discussions” tab had fifty other identical threads making sure the “link breakage” you describe is broadcast as widely as possible. Jessica Yaniv is another recent example. I don’t doubt that those divisive people themselves are genuine, but the absolute fervor around these topics just feels so fake. I could see the argument that it’s a natural feedback loop of people becoming more aware of and attuned to certain topics, but the truly scary thing is there’s no way to know.