If a decentralised system is to stay decentralised, it needs to consider spammy bad actors.
Federated systems, like email, have used these anti-spam techniques for a long time.
Federated systems always evolve into an oligarchy, like email (Gmail/Hotmail/Yahoo/etc.) or banking (JPMorgan Chase/Goldman Sachs/etc.).
If you want decentralization, you'd be better off with something like https://notabug.io/ (a P2P Reddit), which uses the GUN protocol (mine), or any WebTorrent-based approach.
 - https://github.com/firehol/blocklist-ipsets
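The linked blocklist-ipsets repo publishes plain-text .netset files (one IP or CIDR per line, lines starting with "#" are comments), so consuming one is cheap. A minimal sketch, assuming you've already downloaded one of the lists; the file name here is just an example:

```python
# Minimal sketch: check an address against a FireHOL-style .netset file
# (one IP or CIDR per line, "#" lines are comments). The file name is an
# example; pick whichever list fits your threat model.
import ipaddress

def load_netset(path):
    nets = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            nets.append(ipaddress.ip_network(line, strict=False))
    return nets

def is_listed(addr, nets):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in nets)

nets = load_netset("firehol_level1.netset")
print(is_listed("203.0.113.7", nets))  # example address from TEST-NET-3
```

In practice you'd load this into an ipset/nftables set rather than scan it in application code, but the idea is the same.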
A single fediverse instance blocking Tor users doesn't make much of a difference: my instance still allows them, and I know of many that do.
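For what it's worth, an instance that does want to block Tor doesn't need anything fancy, since the Tor Project publishes its exit-node addresses. A rough sketch; the URL and the one-address-per-line format are my assumptions about the current endpoint:

```python
# Rough sketch: fetch the published Tor exit-node list and build a block set.
# URL and one-address-per-line format are assumptions about the endpoint.
import urllib.request

def fetch_tor_exits(url="https://check.torproject.org/torbulkexitlist"):
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode()
    return {line.strip() for line in text.splitlines() if line.strip()}

exits = fetch_tor_exits()
print(len(exits), "exit addresses known right now")
# An instance could then drop or rate-limit connections from these
# addresses; allowing them (as mine does) is simply the opposite policy choice.
```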
What if I own a server and connect it to an ISP under an agreement where the ISP is accountable for clearly malicious behavior coming from its connection (regardless of origin)?
Then that ISP requires the same agreement from me and from everyone else connecting to it, and so on down the chain.
Wouldn't we all be very active in policing bad actors in the networks we manage?
It also requires de-anonymisation (so you can identify who the bad actor actually is!) - you wouldn't be allowed to run a Tor exit node on this network, for example.
I think that "because it is a lot of work" can't be a reason for not taking preventative measures against something you own that is performing harm on another party.
As far as available technologies, "it isn't perfect" does not mean it isn't better.
If an automated ride-sharing service has customers who are shooting guns out their windows and damaging property, then maintaining the anonymity of the attackers is not a reason to permit them to engage in this behavior.
If Tor is permitting criminals to use the Tor tool, itself, to do harm to others, then it is up to the Tor project to remedy this. If I, as a network operator, do not want them damaging my network, then that is my choice. If I, as a customer of a network operator inform the network operator that I am willing to accept Tor and all liability, then this exception can be written into a contract.
2) This would require ISPs to do even more invasive monitoring of all traffic to be in compliance. They'd essentially have to DPI everything, or even break TLS between you and your destination, to know if your traffic was malicious. No thank you.
3) Many ISPs simply don't care. A lot of malicious traffic comes from countries where ISPs will just look the other way for a bit of cash. I suppose we could come up with a system that depeers bad ISPs, but this would cause tons of collateral damage to innocents as well as reintroduce the exact centralization we're trying to avoid (where's the "master list" of bad ISPs to depeer?)
Whatever the solution to bad actors online is, it isn't ISPs.
Yes, I would like that if something I own is, unbeknownst to me, harming others (beyond some de minimis level) through their service; per their contract they certainly have the right to refuse me service until the condition is rectified. Anyone relying on my service will either suffer or be owed something by me. Note that this isn't some arbitrary shuttering of a service. This is a harmful activity being blocked from doing harm, as spelled out in the contract clauses.
You make it sound as if this stuff is so hard, yet here we are discussing it in the comment section of a post by a person who doesn't seem to be employing highly sophisticated tools to identify the bad behaviors. All he would have to do in my dream world is show this behavior to his (contracted) service providers, and they would take it on up the chain.
But notice that this option is not available, so the only option is to use a centralized provider that is effectively big enough to completely absorb a huge percentage of bad activity. He even comments that owners of networks are only voluntarily providing responses and actions to these activities. They could just as well not be bothered, and then what?
If the ISPs don't care and people who don't want this traffic on their networks disconnect from them, this is bad? And, yes, whole countries may have problems connecting anywhere. Mind you, even those countries had some reason to connect to the World Wide Web (itself with a mountain of even just protocol requirements) in the first place, and it likely has to do with some minimal amount of trade with the outside world. To continue those trade communications, they will have to provide a service that others are willing to connect to.
It wasn't until this post that I realized the italics of your upstream post wasn't your original content. I don't find it nice to squint at the text to see when it stops being italicized to know when you've started your post. But a final ">" is easy to see.
Also, researchers of decentralized systems mostly ignored the existence of malicious actors only up until the very early 2000s; after that, everyone became well aware of them and started considering how to deal with them.
Well, don't leave us hanging, do enlighten us how
But in practice, datacenters, uplinks, and internet exchanges often are able to apply flowspec, firewall rules, a block of all UDP for a subnet across all the networks they have relationships with, etc. So plenty of those nodes can be behind ISPs that mitigate volumetric attacks automatically, and even simple DNS failover might then be good enough to protect against such attacks. It's not that hard. Layer 7 is where the hard part is.
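To make "simple DNS failover" concrete, here is a minimal sketch: probe the active origin and, if it stops answering, point the record at a healthy one. The origin addresses, record name, and update_a_record function are all hypothetical placeholders; the last one stands in for whatever API your DNS provider actually exposes.

```python
# Minimal DNS-failover sketch. Addresses, record name, and update_a_record
# are placeholders; substitute your real origins and provider API.
import socket

ORIGINS = ["198.51.100.10", "198.51.100.20"]  # placeholder addresses (TEST-NET-2)
RECORD = "www.example.org"                    # placeholder record name

def is_healthy(ip, port=443, timeout=3):
    # Crude health check: can we open a TCP connection to the origin?
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_a_record(name, ip):
    # Stand-in for your DNS provider's API call.
    print(f"would point {name} at {ip}")

def failover(current_ip):
    if is_healthy(current_ip):
        return current_ip
    for candidate in ORIGINS:
        if candidate != current_ip and is_healthy(candidate):
            update_a_record(RECORD, candidate)
            return candidate
    return current_ip  # nothing healthy; leave the record alone

failover(ORIGINS[0])
```

Run it from a cron job or a small daemon outside the attacked network and you have the "good enough" failover described above; it does nothing for layer-7 attacks, which is the point.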
Having had:

1. ~25 servers per datacenter
2. 5 datacenters (1 in Texas, 1 in Utah, 2 in California, 1 in Chicago)
3. 1 server in each location connected at 10 Gbit, the rest at 1 Gbit
I got to watch first-hand as DNS reflection attacks crippled our infrastructure one server at a time. Only 2 of the datacenters (1 in LA, 1 in Chicago) had the infrastructure to mitigate the DDoS without significantly affecting their operations. Even post-mitigation, the 2 datacenters that didn't end up blackholing our IPs at the edge still let so much malicious traffic through that only the 2 10-Gbit servers remained online, and they were nearly CPU-bound across 24 cores just handling all the SENDQ/RECVQ for the NIC.
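For a rough sense of why these attacks overwhelm even 10 Gbit links, the amplification arithmetic is brutal. A back-of-the-envelope sketch; the byte sizes are typical published figures for amplified DNS responses, not measurements from this incident:

```python
# Back-of-the-envelope DNS reflection math. Sizes are typical ballpark
# figures for large amplified responses, not numbers from this incident.
query_bytes = 64        # small spoofed UDP query
response_bytes = 3000   # large reflected response aimed at the victim
amplification = response_bytes / query_bytes   # ~47x

attacker_bps = 250e6                           # 250 Mbit/s of spoofed queries
victim_bps = attacker_bps * amplification      # ~11.7 Gbit/s at the victim

print(f"~{amplification:.0f}x amplification, ~{victim_bps / 1e9:.1f} Gbit/s at the victim")
```

A fairly modest botnet reflecting off open resolvers is already enough to saturate a 10 Gbit uplink.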
I mention this because it's sometimes easy to dismiss until you're in the situation and the realities of what you have control over are vastly different from what is technically feasible. The size and scope of modern DDoS attacks can easily overwhelm an entire uplink to a datacenter, even after pushing mitigations upstream. The reason reverse proxies from companies like Cloudflare have become so popular is that most operators will not have the raw resources required to mitigate this themselves. Even some larger datacenters don't have the resources.
I understand, but you are still talking about a situation where surviving a volumetric DDoS attack without a global centralized provider was possible. It wasn't smooth for you, but it could have been if things were done a bit differently.
Anyway, here on the other side of the world it's not like that; DDoS protection is more common. In the early days of DDoS attacks, with all the dreadful blackholing, one of the big European providers, OVH, invested in DDoS protection and kind of pushed the whole market to provide it too instead of blackholing.