Australia lacks the safe harbour laws that the US has had for some time. This has a chilling effect on certain types of platforms. Even something like a photo hosting site is very risky in Australia, as the content on your servers is your responsibility.

I feel like those safe harbour laws in the US are at risk, at least in practice, as the idea of a content "conduit" becomes less plausible as platforms moderate more and more.

Now, I am all in favour of moderation. Real-life communities are moderated, and all the strongest online communities have some kind of moderation. But there need to be protections for companies that are moderating as best they can and still end up with evil content on their sites. Otherwise these platforms won't be sustainable and will face heavy legal risk to boot. They won't make much sense anymore.




> I feel like those safe harbour laws in the US are at risk, at least in practice, as the idea of a content "conduit" becomes less plausible as platforms moderate more and more.

The way the US got the safe harbor to begin with was as follows.

There was a court decision that essentially said that you weren't liable if you were just carrying bits, but if you did moderation then you were.

The problem with this is obviously that you then either have to have no moderation at all or it has to be 100% perfect because you're liable for everything you get wrong, and getting everything 100% perfect isn't really possible. So the result would have been that nobody would do any moderation and everything would be overrun by trolls and spam. To prevent that, Congress passed a safe harbor that allowed platforms to do moderation without immediately ending up in court.

The problem now is that the Constitution and jurisdictional issues make it difficult for governments in the US to do the kind of censorship that a lot of people now want somebody to do. So they're trying to get in through the back door by creating laws that will force the tech companies to do it: on the one hand, the companies have minimal stake in hosting any given piece of information and will execute just about every takedown, no matter how ridiculous, if it reduces their liability; on the other hand, they're not bound by the First Amendment when they over-block protected speech. So imposing any kind of liability that will cause them to execute spurious takedown requests is basically the censor's birthday wish, and it's even better if you can get them to over-block things ahead of time.

But there are solid reasons for the First Amendment to be in effect, and "governments shouldn't be able to erase evidence of their crimes" is pretty far up there on the list. So this ploy to put the national censorship authority into the offices of Facebook and Twitter really needs to get shut down one way or another, or we're in for a bad future.


>The problem with this is obviously that you then either have to have no moderation at all or it has to be 100% perfect

I'm not sure. You can let the users moderate themselves, then you're still just carrying bits. That's where a pluralistic organisation of the platform comes in handy. Things like Reddit or image boards are not just one community, but a plurality of communities. None of those suit you? Go ahead and open your own subreddit, splitter! Then you can moderate there as you please. The problem, of course, is that the bit carrier cannot expect an advertiser to agree with all the subcommunities. But that's a different, and solvable, problem. You need better targeting for ads, and you need to accept that some subcommunities will just not be attractive to any advertisers at all.


Not long ago it was shown that hosting a hateful subcommunity led to overall higher levels of hate in unrelated communities.

The separation you imply doesn't actually exist: hosting fatpeoplehate means you impose a higher moderation burden on unrelated communities. And while the reason for this might be the principle of free speech, in practice you are only defending the act of hating fat people.


> You can let the users moderate themselves, then you're still just carrying bits.

But then without a safe harbor the users doing the moderation would be liable, no?


> There was a court decision ...

Out of curiosity, what was the court decision? I've been thinking about this recently from the same perspective, namely that once you start allowing moderation you should lose the liability shield, but I wasn't aware that the courts had previously articulated this.


The EFF has a good page on CDA 230 here [0]. The relevant court cases are the 1991 case 'Cubby, Inc. v. CompuServe, Inc.' [1] and 1995's 'Stratton Oakmont, Inc. v. Prodigy Servs. Co.' [2]. In 'Cubby', CompuServe was found to have no liability for content hosted on their site: they did no moderation themselves, had no knowledge of the specific content, and were thus not a publisher. In 'Prodigy', Prodigy Services was found liable because their moderation practices were judged to make them the publisher of the content.

CDA 230 was the legislative response to this, allowing companies to moderate content without assuming publisher liability for it.

0: https://www.eff.org/issues/cda230/legislative-history

1: https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

2: https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod...


I think DMCA in 1996?

[edit]

DMCA was passed in 1998, based on WIPO treaties from 1996.


The DMCA had the safe harbor provisions, which the GP claimed were added in response to a court case. It's that court decision I'm interested in learning more about, because it sounds like the court's reasoning mirrors my own thoughts on the issue, and I'm curious to see whether it does.



