The way the US got the safe harbor to begin with was as follows.
There was a court decision that essentially said you weren't liable if you were just carrying bits, but that if you did any moderation, you were.
The problem with this is obvious: you then either have to do no moderation at all, or your moderation has to be 100% perfect, because you're liable for everything you get wrong, and getting everything right isn't really possible. The result would have been that nobody did any moderation and everything got overrun by trolls and spam. To prevent that, Congress passed a safe harbor that allowed platforms to moderate without immediately ending up in court.
The problem now is that the Constitution and jurisdictional issues make it difficult for governments in the US to do the kind of censorship a lot of people now want somebody to do. So they're trying to get in through the back door by creating laws that force the tech companies to do it: on the one hand, the companies have minimal stake in hosting any given piece of information and will execute just about any takedown, no matter how ridiculous, if it reduces their liability; on the other hand, they're not bound by the First Amendment when they over-block protected speech. Imposing any kind of liability that causes them to execute spurious takedown requests is basically the censor's birthday wish, and it's even better if you can get them to over-block things ahead of time.
But there are solid reasons for the First Amendment to exist, and "governments shouldn't be able to erase evidence of their crimes" is pretty far up the list. So this ploy to relocate the national censorship authority to the offices of Facebook and Twitter really needs to be shut down one way or another, or we're in for a bad future.
I'm not sure. You can let the users moderate themselves; then you're still just carrying bits. That's where a pluralistic organisation of the platform comes in handy. Things like Reddit or image boards are not just one community, but a plurality of communities. None of those suit you? Go ahead and open your own subreddit, splitter! Then you can moderate there as you please. The problem, of course, is that the bit carrier can't expect an advertiser to agree with all the subcommunities. But that's a different, and solvable, problem. You need better targeting for ads, and you need to accept that some subcommunities will just not be attractive to any advertiser at all.
You don't actually have this imagined separation you imply: hosting fatpeoplehate means imposing a higher moderation burden on unrelated communities. And while the reason for hosting it might be the principle of free speech, in practice you are only defending the act of hating fat people.
But then, without a safe harbor, wouldn't the users doing the moderation be liable?
Out of curiosity, what was the court decision? I've been thinking about this recently from the same perspective, that once you start moderating you should lose the liability shield, but I wasn't aware this had been articulated by the courts previously.
CDA 230 was the legislative response to this, allowing companies to moderate content without assuming liability for it.
The DMCA was passed in 1998, implementing WIPO treaties from 1996.