I still think this is a business that Cloudflare shouldn't be involved in. There are very legitimate reasons for parents to filter Internet content. But Cloudflare is in a unique position here, they have a brand as a company that cares about free speech, and specifically because of who they are, they really shouldn't be making determinations about what is and isn't inappropriate content for kids.
When 1.1.1.1 for Families launched, it blocked access to GLAAD's site because Cloudflare didn't do a good enough job testing any of this stuff; they just pulled in filters from other parental-control companies, some of which turned out to be anti-gay. Cloudflare apologized and pushed a couple of fixes, but never actually took a step back and asked how this happened. Meanwhile, 1.1.1.1 for Families launched without blocking access to sites like Stormfront. Cloudflare didn't think it was appropriate for them to make a determination about whether that site was safe for kids.
I think that our society is just generally a lot less thoughtful about filtering adult content than it is about filtering other forms of content like political speech, and we don't think about adult content filters as having a downside, or being real censorship. So when 1.1.1.1 for Families was released, I came up with a challenge: https://danshumway.com/blog/sex-censorship-is-censorship/
I do think there are scenarios where it's completely appropriate to block content for children, and I do think families should always be able to make these kinds of determinations. People and communities have a fundamental Right to Filter (https://anewdigitalmanifesto.com/#right-to-filter). However, adult content isn't the only content that falls into the category of being harmful to children. It is utter hypocrisy for Cloudflare to launch a service that blocks adult content but not hate speech; both forms of content are legitimate for parents to want off of their networks.
My challenge is this: if Cloudflare is frightened of the implications of being the company that decides what is and isn't hate speech, then why isn't it also frightened of being the company that decides what is and isn't adult material? Why do we view accidental censorship of LGBTQ+ informational materials as less of an existential free speech risk than accidental censorship of political ideas or extremist groups? Over a year later, Cloudflare still doesn't have any clear documentation that I can find about what specific criteria it uses to make filtering decisions on 1.1.1.3, beyond that it "aims to imitate" Google Safe Search. Would people tolerate that kind of fuzziness if it were filtering hate speech or political extremism?
There is a reasonable debate people can have about whether or not it's appropriate for Cloudflare to be the company that carves out sections of the Internet that are inappropriate, even as an opt-in filter. I think both sides of that debate can make some good points, and reasonable people could go in either direction. But for me, the biggest question isn't really whether Cloudflare is the right company to build and maintain Internet filters. For me, the biggest question is about which subjects Cloudflare views as OK to moderate, and which communities Cloudflare is OK offloading the externalities of their moderation onto.
Because frankly, in free speech communities we do have a lot of hypocrisy about this. No one can seriously argue that extremist hate sites are any less dangerous to kids than pornography is. We should try to be more consistent about stuff like this. Are we OK with content moderation or not?
I think it’s up to the network owner to decide what should be blocked or allowed in their network.
1.1.1.3 (or .2) is a tool in the tool chest. Some people may find it too aggressive and not implement it; some may find it too conservative and implement more on top of it. No tool will be perfect for everyone, and if it doesn't hit the right balance for you, you don't have to use it. No one has to use it, and Cloudflare can release any free blocklist they want and call it parental blocking. It's free, it's a best-effort product that doesn't drive revenue, and it is up to each network owner to determine which blocks they want.
It would be a totally different story if the company was determining blocking for the US or people were forced to use it. But they aren’t.
I agree that for an optional tool, Cloudflare can make any blocklist they like. People have a fundamental Right to Filter. I personally don't think it's consistent with Cloudflare's brand or stated purpose to go down this route, but that's just my opinion, people can have other opinions.
I do want to question how egalitarian free speech communities really are about this stuff, though. I am fairly confident that if Cloudflare added hate speech to 1.1.1.3 or started adding misinformation to its filtering list, that would show up on HN and see debate. I think a lot of people on this site wouldn't see that as a neutral act; I think a lot of people would be on here arguing that it was a dangerous value judgment, or at the very least a dangerous behavior for Cloudflare to normalize.
We all have the right to filter content, and we all have the right to choose which filter lists we'll use. But is that actually our philosophy? Would we collectively as a community be applying those same standards if Cloudflare started blocking Covid misinformation or conversion-therapy sites from 1.1.1.3? The way society debates filter lists can sometimes betray our collective ideas about what kinds of information needs more or less protection.
> or people were forced to use it
There's a separate conversation to be had here about the fact that children are forced to use filter lists. This is exactly why Cloudflare reacted so quickly to stop blocking sites like GLAAD, and why, if it ever does offer the ability to choose custom categories, it's probably never going to offer an "LGBTQ+ information" category to block.
Cloudflare (to its credit) does at least recognize that child filters are often only semi-consensual and can be (and regularly are) abused at the network level.
That doesn't change the overall debate, and it doesn't mean that making a filter list is always evil; communities still have a Right to Filter. But it is important to bring up: kids at schools don't get to choose whether the filters on those networks are too conservative or too liberal with what they block.
Kids (necessarily by virtue of being kids) do not have agency to decide what networks they're a part of. There are good reasons for that, but it still puts kids into a somewhat more vulnerable position, and it means there are more dangerous implications for network-wide filters than there are for user-controlled filters. This is also something that kind of gets glossed over in these debates sometimes.