Conceptually, though, how would it work? Who becomes the arbiter of what's good and what's bad, and what can be shared and what cannot?
In no way, shape, or form am I defending the nasty stuff or saying that it should be allowed. But when defending privacy and liberty, there's a real question about how we deal with hidden lawlessness that uses the same tools people use for legitimate purposes, or, more importantly, how people use these tools in ways that a government views as illegitimate or as thoughtcrime, but that are in defense of liberty.
I think there's a notion of policing that comes out of this discussion that is not part of the technology but rather a complement to it. I don't know what shape that would take.
Well, that's where we have laws, and elections, and all that stuff.
We similarly have a complete ban on violence, except for the state which has a monopoly on it. This does get abused sometimes, the system isn't perfect. But it's better than allowing anyone to use violence whenever they want to resolve disputes.
I agree that there's a discussion that we technologists need to have about policing and censorship, which we're currently not having.
Such a complement cannot really exist. If it did exist and was actually effective, the technology as a whole would be pointless.
An analogy can be made to WhatsApp. It's known to be used to coordinate terrorist attacks in Europe, yet not a single government intelligence agency has managed to legislate Facebook into opening a back door, because a back door makes the encryption quite pointless.
Similarly with Apple, which categorically refuses to give authorities an unlock capability, and so far has won.
There's no public outrage. The public seems happy to be protected from the prying eyes of their governments. And I guess the public implicitly accepts that, as part of this protection, some very nasty stuff goes on over these same platforms.
That's why I believe we should separate eradicating content (which is effectively impossible) from hiding that content from view. The latter is doable and common.
For example, terrorist videos are almost immediately removed from social media platforms upon detection. This stops them from spreading, and their damage and shock effect is contained. However, if you specifically seek out such videos, they can still be found in several places, and you don't even need to go to the dark web.
Censorship, in the practical sense, should be seen as hiding from view, not deletion.
From what I’ve seen, you would basically just unpin or force unpin undesirable content. As long as no one requests that content, it will be garbage collected/deleted.
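To make the unpin-then-garbage-collect idea concrete, here is a minimal Python sketch of a content-addressed store with pinning. The ContentStore class, its pin set, and the gc() method are hypothetical illustrations of the model, not any real system's API (e.g. not the actual IPFS interface).

```python
# Conceptual sketch only: a toy content-addressed store with pinning and GC.
# Names (ContentStore, add, unpin, gc) are made up for illustration.

import hashlib


class ContentStore:
    """Blocks are stored under their content hash; unpinned blocks are reclaimable."""

    def __init__(self):
        self._blocks = {}   # content hash -> raw bytes
        self._pins = set()  # hashes this node has chosen to keep

    def add(self, data: bytes, pin: bool = True) -> str:
        """Store a block under its content hash, optionally pinning it."""
        cid = hashlib.sha256(data).hexdigest()
        self._blocks[cid] = data
        if pin:
            self._pins.add(cid)
        return cid

    def unpin(self, cid: str) -> None:
        """Stop vouching for a block; it survives only until the next gc()."""
        self._pins.discard(cid)

    def gc(self) -> list[str]:
        """Delete every block without a pin and return the removed hashes."""
        removed = [cid for cid in self._blocks if cid not in self._pins]
        for cid in removed:
            del self._blocks[cid]
        return removed


store = ContentStore()
cid = store.add(b"some undesirable content")
store.unpin(cid)  # "hide from view": this node stops hosting it
store.gc()        # the block is reclaimed locally
```

In a network of such nodes, the content only truly disappears once no node anywhere keeps a pin and no one re-requests it; unpinning is therefore closer to hiding from view than to deletion, which matches the distinction drawn above.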