Hacker News

Does this start to open CloudFlare up to legal issues for all the sites they host? I was under the impression they were more of an infrastructure/utility type service, and weren't liable for what took place, the same way gun manufacturers aren't liable for shootings or gas stations for car crashes.

But if now they're manually deciding who goes on their network and who doesn't, it seems like they're more responsible for everything else that's on it that they allow.

They're a private company and I support them choosing to do business with whoever they want, but I thought there was some sort of legal distinction if they were totally agnostic to what travels over their wires. Is that not the case?

You may be thinking of the "CDA 230" nonsense that will not die, where people claim that companies can't moderate their customers because they'd be liable for what they post.

The opposite is true. CDA 230 makes it clear that companies can moderate their content without becoming responsible for it.


I've never heard anyone claim that companies can't moderate content without becoming responsible for it. I've heard people say that if publishers show themselves to be capable of censoring, then the legal protections should be rescinded and they should decide if they are a platform or a publisher.

Are you sure you heard the argument correctly?

The techdirt article I linked cites examples of people doing exactly this.

> If Facebook were to start creating or editing content on its platform, it would risk losing that immunity


> If Facebook is going to behave like a media provider, picking and choosing what viewpoints to represent, then it’s hard to argue that the company should still have immunity from the legal constraints that old-media organizations live with.

This is all nonsense. Old-media organizations are protected by CDA 230 just like everyone else: they can host third party content like user comments without being liable for it.

Publishers being able to "censor" is the whole value proposition for having a publisher. You're paying for the NYT because it picks who to publish. Facebook has no special "platform" protections that anyone else doesn't get.

Many, many people seem to think that CDA 230 itself makes a distinction between "platforms" and "publishers". I even replied to someone here in this comment section:


The first one is fair - Vox got it wrong. That Vox got it wrong should surprise no one; Vox is lowest-common-denominator, agenda-driven garbage.

The second one is asking "should they" - it's asking a question, not positing a fact.

Should they get immunity for what's posted if it's clear they have the capacity to censor at will? Why should they, and not anyone else on the internet?

The CDA makes a distinction between publisher and platform, and the talk around this whole issue is that many people are saying these companies can clearly police their content, should be liable for it, and shouldn't be specially protected.

The first one was Wired, not Vox, and the second one was claiming in the prior paragraph that "The platforms are immune from such suits under [CDA 230]. The law treats them as a neutral pass-through" - which it doesn't. The law specifically says that providers can moderate any content they deem objectionable.

Where does the CDA make a platform/publisher distinction? What is the definition of the difference, and where is it in the law?


As Techdirt says, "This 'publisher' v. 'platform' concept is a totally artificial distinction that has no basis in the law." Are they wrong?

They've kicked people off their service before for content based reasons (eg, Daily Stormer), so this changes nothing. In any case:

> I thought there was some sort of legal distinction if they were totally agnostic to what travels over their wires. Is that not the case?

Not as far as I'm aware, no. The closest thing I can think of is if they were discriminating based on people's membership in a protected class, eg, if they announced a strict "no female clients" policy. This is clearly vastly different.

From a PR point of view, yes: every time they kick someone off for being bad, their failure to kick someone else off looks more like an implicit endorsement. But again, that ship has sailed.

They've also removed sex worker websites (including a forum that was just sex workers talking to each other), but for some reason no one complains about it.

I believe you'll find that this was driven by SESTA/FOSTA, rather than being a discretionary choice by Cloudflare, and if you hang out in the right circles, it gets complained about a lot. (EFF, ACLU, Wikimedia, and many more opposed it.)

I think it's unconstitutional and the worst thing to happen to the internet in many years, as well as one of the worst things to happen to civil liberties (which is a pretty high bar!). Unfortunately, it passed the Senate 97 votes to 2, which suggests legislative fixes will not be coming soon.

Congress recently added sex work as an exception to CDA 230 protections, and every provider scrambled to nuke everything remotely related.

That’s always the way. Censorship/moderation isn’t given a second thought when it’s just sex-related.

Interesting, never heard about that. Are there other things too?

> "no female clients" policy

For the record, gender isn't a protected class in a place of public accommodation, and it's why clubs in Las Vegas can charge men more than women.

You would think, but apparently that doesn't apply anymore. You can control the content and still get the protections of a common carrier.

I just know I'll remember that Cloudflare could pull the plug on my site if one of my users posts something they don't like. I don't think I can recommend their service to any of my clients because of that.
