‘Many forms of secure messaging systems have been tried, and will be tried in this world of sin and woe. No one pretends that Signal is perfect or all-wise. Indeed it has been said that Signal is the worst form of messaging except for all those other forms that have been developed from time to time…’

Winston S. Churchill, 11 November 1947
Sure, but there's a small theoretical difference with democracy. You have to live under some system of government. You don't have to use a secure messenger. You can choose to have sensitive conversations in person or not have them at all.
I agree that in practice, a lot of people are going to use their phones for relatively sensitive conversations, and in practice, Signal remains the best choice for doing so. But there are a few real threat models where the options aren't Signal vs. SMS / Google Chat / Discord / etc.; the options are Signal vs. nothing. For instance, you could be a journalist deciding whether to ask clarifying questions to a government whistleblower via Signal or meet up with them in a park. You could be an activist/demonstrator under a repressive regime deciding whether to coordinate some action this weekend via Signal or hold off on it entirely and tactically preserve your freedom. And so forth.
For those people, if Signal isn't trustworthy (and to be clear this is a big "if"; this issue is one serious piece of evidence, but it's still inconclusive), it doesn't matter that Signal is the least-bad of the options.
(Also, it's not like Signal is the only e2e messenger around. There's iMessage/FaceTime, for instance. Churchill's claim was that the abstract idea of democracy was good, not that any concrete implementation like the British government was good.)
> You can choose to have sensitive conversations in person or not have them at all.
I don't think this is fair. In practice, most of that "solution" amounts to not having the conversations at all. That's not a good option, and it still doesn't solve your problem, since an in-person conversation can still be listened to.
> There's iMessage/FaceTime, for instance.
Which has also gotten into trouble recently with Pegasus, since there was a zero-click exploit in iMessage. Honestly, that is a far more serious issue than the one here. That being said, I still trust iMessage and believe the devs are doing the best they can. I just recognize that security is difficult and will always be a cat-and-mouse game. There is no such thing as perfect security.
I think what I'm trying to get at is that incorrectly believing you have access to a secure messenger can be worse than acting as if you don't, if those are your options. The whistleblower might choose not to make contact, but if the alternative is making contact and immediately going to prison (because someone else on their contact list saw a classified screenshot from them and told the authorities), maybe not making contact is better. The activists might choose not to protest, but if the alternative is being caught before they even start their protest (because their group was forwarding a little advertisement image or annotated map around among trusted people, and someone untrusted got it), maybe holding off is better.
Take Reality Winner, for instance (the mechanics of that case were entirely unrelated to secure messengers, but it's still a relevant example overall). The effect of her whistleblowing on the world seems to have been minimal, and the cost to her was significant. Was it worthwhile? If she had been told that the risk of the government identifying her was higher, and had decided not to leak anything, wouldn't that have been a better outcome?
I'm not saying there's perfect security. Vulnerable users absolutely need to be making risk assessments and deciding what they're comfortable with, and we should be clear nothing is risk-free. I'm just saying my sense of Signal's risk, in absolute terms, is higher than it was before I learned about this, and that matters to vulnerable users, not just the fact that it probably remains the lowest-risk messenger of the various options.
I agree with you overall, and the Pegasus exploit does reflect badly on Apple (and probably should reflect more badly on them than seems to be happening).
> I think what I'm trying to get at is that incorrectly believing you have access to a secure messenger can be worse than acting as if you don't, if those are your options.
For the average person, I do not believe this is true. For the non-average person, I believe you are correct, but most of those people are already aware of this and should be receiving constant training.
I'm not saying you're wrong; I'm saying that there are two different conversations to be had, and we need to know which one we're having. To me it looks like Signal is about as good as it gets, without loads of complication, for what it's meant to do.
> the Pegasus exploit does reflect badly on Apple
And same with this on Signal. I do believe we should hold these companies to high standards. But the point I'm trying to make is that these also aren't reasons to abandon the platforms completely (as many users are suggesting here). That's throwing the baby out with the bathwater.
Yeah, to be clear, I only mean this from the point of view of non-average Signal users.
It's a little weird that Signal is both the "baseline security that everyone should have" product (a la HTTPS or WPA2) and the "you are literally hiding from the government" product. Of course, the target market for the latter, when you are not another government yourself, is by definition mostly illegal activity (whether or not the laws are justifiable), so it makes sense that there isn't a good product just for that.
In this particular case, it also complicates things that people who are literally hiding from the government have normal, ordinary conversations with lots of people too. It helps for those ordinary conversations to happen on Signal as well, but this bug is particularly bad if you do that.
(I'm also not really sure where, say, people buying recreational drugs fit on the "average"/"non-average" axis. Is it a reasonable precaution to not text incriminating information to your drug dealer over Signal? It feels like it shouldn't be necessary, but I can see the argument for it.)
You make some fair points. But even through the eyes of those people, what is the alternative? Is iMessage guaranteed not to have any hidden exploits out there? And on the flip side, what do they lose out on by only having those conversations in person? Well, I'd argue that their world becomes a lot smaller, and their sources are instantly at higher risk.
If iMessage were sending photos to the wrong people (even with extremely low probability) for over half a year, there would be serious negative publicity for Apple, even if they had never implemented end-to-end encryption. Apple also has more software testers and more willingness to use telemetry. So while there are no 100% guarantees, I think the incentives are aligned with iMessage at least as well as they are with Signal.
Apple suffered negative publicity from the 2014 iCloud photo leaks, even though those were "just" phishing and not a vulnerability/bug in the strict sense. Tim Cook had to give statements to the media, and in fact Apple stepped up its phishing protection by pushing two-factor authentication and notifying users about additional iCloud logins.