Sadly, "end-to-end encryption" by itself says nothing about what happens before content is encrypted or after it is decrypted, meaning Facebook could still exfiltrate your private data through a side channel. There's also nothing stopping them from running an AI model over your (still-unencrypted) data before they send it.
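To make the point concrete, here's a minimal Python sketch of client-side scanning. All the names are hypothetical, and a toy XOR stands in for real encryption; the scan step and the side-channel report happen on plaintext, before any "end-to-end" guarantee applies:

```python
import hashlib

# Hypothetical blocklist of exact-match content hashes. Real systems use
# perceptual hashes; SHA-256 keeps this sketch self-contained.
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    # Runs on the plaintext BEFORE any encryption happens, so the
    # "end-to-end" guarantee never applied to this step.
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def send_message(plaintext: bytes, key: bytes) -> bytes:
    if client_side_scan(plaintext):
        # Hypothetical side channel: the client could phone home here
        # without ever touching the encrypted payload.
        pass  # e.g. report_match_to_server(plaintext)
    # Only now does "end-to-end encryption" begin (toy XOR stand-in).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
```

The ciphertext leaving the device can be perfectly encrypted while the scan result leaks everything the operator wanted to know.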
I agree with you that we shouldn't consider communications platforms "voluntary" anymore. In my opinion, that means we should hold them to stricter regulation, like we do with utilities such as electrical companies. I wonder if that could actually be used as an argument in favor of moving content scanning from what Facebook is already doing to a government agency. At least in theory, it should be easier for the public to hold a government agency accountable than, for example, Facebook's moderation team.
I appreciate your slippery slope argument, and I agree: allowing these tools to exist places a large amount of trust in governments. But to me, the only alternative looks like trusting private companies who are already doing such scanning, and that seems even worse.
Thank you for engaging so thoughtfully and providing interesting ideas despite my fairly oppositional debate style. That's uncommon, and it has made me rethink my approach. You raise some interesting points about private vs. public misuse of data that I don't think I can expand on right now, but I'm going to let them percolate.
That said, there's a world worth striving for in which the public services that are supposed to work for us do indeed work for us: acting as a force that seeks a safe and prosperous society without compromising on privacy. Ironically, introducing backdoors like this undermines our privacy in a way that could actually lead to a less safe society. Trusting governments to use these tools is trusting governments to distrust us, which seems like a fundamentally bad idea.