I've dealt with some spammers to various degrees. I think one of the most effective ways of dealing with spammers is to "shadowban" them: allow them to use your service, but don't indicate that you've identified them as malicious. For instance, when dealing with chat spammers, allow them to chat, but don't show their messages to other users. Another level would be to allow them to chat, but only show their messages to other shadowbanned users. For the author's use case, perhaps something like this: if the IP address that created the shortened link accesses it, serve the real redirect, and if a different IP address accesses it, serve the scam warning page. If the malicious actor doesn't know they've been marked as malicious, they don't know they need to change their behavior.
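The link-shortener idea above could be sketched roughly like this in Python. The record fields, warning path, and the IP-equality check are all my assumptions for illustration, not the author's actual implementation:

```python
# Hypothetical per-IP shadowban check for a link shortener.
# Field names ("target_url", "creator_ip", "shadowbanned") and the
# "/scam-warning" path are assumptions, not the real service's schema.

def resolve_redirect(link: dict, requester_ip: str) -> str:
    """Return the destination this requester should see."""
    if not link.get("shadowbanned"):
        return link["target_url"]
    # Shadowbanned: only the creator's own IP still sees the real
    # redirect, so the spammer believes the link works for everyone.
    if requester_ip == link["creator_ip"]:
        return link["target_url"]
    return "/scam-warning"

link = {
    "target_url": "https://example.com",
    "creator_ip": "203.0.113.7",
    "shadowbanned": True,
}
print(resolve_redirect(link, "203.0.113.7"))   # creator: real target
print(resolve_redirect(link, "198.51.100.9"))  # anyone else: warning page
```

Obviously a lone IP check is weak (the spammer can test from another address), which is why the later comments suggest combining it with other signals.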
The second most effective thing is making the malicious actor spend some sort of resource, such as a payment (as the author uses), a time commitment (e.g. new accounts can only create one link a day), or some other source of friction. The idea is that for legitimate users the friction is acceptably low, but for consistent spammers the cost becomes too high.
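The "one link a day for new accounts" kind of friction is easy to sketch. This is a toy in-memory version under my own assumptions (real services would persist counts and define "new account" properly):

```python
# Toy rate limit: new accounts may create at most one link per day.
# In-memory dict and the is_new flag are assumptions for the sketch.
from datetime import date

link_counts: dict = {}  # (account, day) -> links created that day

def may_create_link(account: str, is_new: bool, today: date) -> bool:
    if not is_new:
        return True  # established accounts are unthrottled here
    return link_counts.get((account, today), 0) < 1

def record_link(account: str, today: date) -> None:
    key = (account, today)
    link_counts[key] = link_counts.get(key, 0) + 1
```

A spammer who needs thousands of links now needs thousands of account-days, while a legitimate new user barely notices the limit.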
The third thing I've found effective exploits the fact that lots of spam comes from robots, or perhaps robots farming tasks out to humans. If you can determine how the traffic is coming in, and then filter that traffic effectively without indicating failure, the robots can happily spam away and you can happily filter away.
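"Filter without indicating failure" boils down to accepting the request and discarding it. A minimal sketch, where the bot-detection signal itself is assumed to come from elsewhere:

```python
# Silently drop suspected bot submissions while always reporting success.
# The looks_like_bot flag stands in for whatever detection you run upstream.
accepted = []  # stand-in for real persistence

def handle_submission(payload: dict, looks_like_bot: bool) -> dict:
    """Always return a success response; only store clean submissions."""
    if not looks_like_bot:
        accepted.append(payload)
    # The bot sees the same response a real user would, so it keeps
    # posting into the void instead of adapting.
    return {"status": "ok"}

print(handle_submission({"msg": "hi"}, looks_like_bot=False))
print(handle_submission({"msg": "BUY NOW"}, looks_like_bot=True))
print(len(accepted))  # only the legitimate submission was kept
```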
IPv6 doesn’t really solve this. You’ll still ban at least a /64, and you’ll switch to a /48 for the particularly nasty ones. There’s zero reason to ban a single IPv6 address.
> You’ll still ban at least /64 and you’ll switch to /48 for the particularly nasty ones.
The entire /64 will nearly always be a single ISP customer, not thousands of customers behind one address as can happen with IPv4. And you can start by banning the /64 and then widen the mask, say, 4 bits at a time if abusive traffic continues from an adjacent range. It's not that hard to automate. The /48 then gets blocked only if you see abusive traffic from multiple ranges within it, implying either that the whole range is controlled by the attacker, or that the ISP does nothing about abusive customers, which is nearly the same thing.
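The widening step maps directly onto Python's stdlib `ipaddress` module. A sketch, with the 4-bit step and /48 floor taken from the comment above:

```python
# Widen an IPv6 ban 4 bits at a time, from /64 toward a /48 floor.
import ipaddress

def widen_ban(current: ipaddress.IPv6Network, step: int = 4,
              floor: int = 48) -> ipaddress.IPv6Network:
    """Return the next wider ban range, never wider than /floor."""
    new_prefix = max(current.prefixlen - step, floor)
    return current.supernet(new_prefix=new_prefix)

# Example: escalate a ban if abuse keeps coming from adjacent ranges.
ban = ipaddress.ip_network("2001:db8:1234:5678::/64")
while ban.prefixlen > 48:
    ban = widen_ban(ban)
    print(ban)  # /60, /56, /52, then /48
```

Each widened network still contains the original /64, so existing offenders stay blocked as the net grows.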
That's actually a very interesting idea I hadn't seen before. It certainly makes it less obvious that one has been shadowbanned, and would probably help keep the non-bot spammers happy. I wonder if it'd be worth the investment to implement.
Shadowbanning is extremely hostile to users that have been mis-identified as spammers (which will happen) while spammers will quickly and easily figure out a way to determine if they've been shadowbanned. That approach needs to stop.
I've employed shadowbanning on an online service to deal with some deranged ban-evading individuals. It does help a lot. Granted, some of the more savvy users may figure out what you're doing, but you're often not dealing with the brightest minds. Given that your typical online service employs maybe one moderator per 100k users, any reduction in workload is welcome.
> Shadowbanning is extremely hostile to users that have been mis-identified as spammers (which will happen)
It should always be a manual action, and moderators should continue to see the messages of shadowbanned users. You can always lift it in case of a mistake.
If you're going to have a free tier, and your service has any sort of interaction between users that could be degraded by spammers and the mentally insane, you're going to need shadowbanning. It's either that or upping the hurdle to creating an account considerably.
I don't understand why shadowbanning would be so effective. It's trivial for any competent spammer to check their submissions from different IP addresses; they will very quickly discover if they are shadowbanned.
The risk of misidentifying legit users and shadowbanning them outweighs the potential gain.
I might be wrong about how these spam bots operate, but I assume someone (a human) has to write at least a few lines of script tailored to the form on the website to actually submit the spam. Adding a few more lines to also check that the submission went through doesn't seem like much effort.
> I don't understand why shadowbanning would be so effective
Because if done correctly, the user never knows they are shadowbanned. It sounds trivial when you know _how_ the shadowban is done. But instead of an IP check, perhaps it's a time check that only comes into play after 3 days, or a combination of different checks. So imagine you are accessing a service that appears to be working correctly: you would basically need to a) determine that the service even does shadowbanning, and b) think of the infinite ways you might be shadowbanned and try to determine whether any of them apply.
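The "comes into play after 3 days" variant is a one-function sketch. The grace period and flag field are assumptions taken from the example above:

```python
# Delayed shadowban: the ban only takes effect after a grace period,
# so a spammer who tests right after posting sees everything working.
from datetime import datetime, timedelta, timezone
from typing import Optional

GRACE = timedelta(days=3)  # assumed grace period from the comment

def is_effectively_banned(flagged_at: Optional[datetime],
                          now: datetime) -> bool:
    """True once the account has been flagged for longer than GRACE."""
    return flagged_at is not None and now - flagged_at >= GRACE

flagged = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(is_effectively_banned(flagged, flagged + timedelta(days=1)))  # False
print(is_effectively_banned(flagged, flagged + timedelta(days=4)))  # True
```

By the time the ban activates, the spammer's "does it work?" test is days old, so the delay alone defeats naive verification scripts.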
A legit user who is told they are banned can contact the site and try to resolve why they've been misidentified; a shadowban will possibly never get resolved.
If you had the time and inclination, you could even seed their account with mock stats. I.e. when the shortened link is accessed, correctly log all of the metrics to their account so they have solid metrics indicating it's working, but fail the actual consumer requests.
Logging their metrics correctly is going to take resources. Instead, just set a flag on their account which, if true, means they see some randomised junk stats.
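One wrinkle with junk stats: if the numbers change randomly on every page load, the fakery is obvious. Seeding the RNG with the link ID keeps them stable. A sketch, with all names my own assumptions:

```python
# Deterministic junk stats for shadowbanned accounts: seeding the RNG
# with the link id makes the fake click count stable across page loads,
# so it looks like a real (if static) metric. Ranges are arbitrary.
import random

def link_stats(link_id: str, shadowbanned: bool, real_clicks: int) -> int:
    if not shadowbanned:
        return real_clicks
    rng = random.Random(link_id)  # same link id -> same fake number
    return rng.randint(50, 500)

print(link_stats("abc123", shadowbanned=True, real_clicks=0))
```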