But putting "VACCINES CAN KILL" as a headline in a mass-circulation paper would be just as wrong, and for the same reasons.
So you admit it's true, but it's wrong to write it as a headline?
It's interesting to see people bury their heads in the sand regarding the Signal/WhatsApp security flaws.
If something is actually true, then why is it preferable not to say it publicly (including in a headline)? As long as the qualifying statements are made alongside it (eg probabilities)?
Why should governments have so much classified information and secrets? What if they just operated entirely transparently? (Except for short term tactics for raids and investigations of suspected criminals and terrorists.)
People come away thinking "vaccines are bad". In this case, the Guardian put the tech equivalent of "vaccines can kill you" in a headline.
Heavily de-contextualised true information can be just as misleading as actual lies. Hence the vaccine example.
Don't they end up being lies of omission at that point?
The probability can be conditioned on some relevant assumption to make the story sensationalized. But as long as the probability and the assumption are clearly spelled out, I don't see what other context should be required for PUBLIC disclosure. Maybe also the total incidence number.
For example: Jihadist terrorists can kill you! With a 0.001% probability for every hour you spend in a large crowd. It has happened in two crowds so far.
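To see how a per-hour figure like that compounds, here is a minimal sketch using the hypothetical 0.001%-per-hour number from the example above (the number is made up, as in the example itself):

```python
# Hypothetical per-hour risk from the example above: 0.001% = 1e-5.
p_per_hour = 0.00001

def cumulative_risk(p, hours):
    """Probability of at least one incident over `hours` independent hours."""
    return 1 - (1 - p) ** hours

print(cumulative_risk(p_per_hour, 1))     # one hour in a crowd
print(cumulative_risk(p_per_hour, 1000))  # a thousand hours in crowds
```

Even a thousand hours in crowds leaves the cumulative risk around one percent, which is the kind of context the headline version drops.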
Or: 97% of climate scientists agree the rise in greenhouse gases is mostly attributable to human activities. That is 97% of 35 PEOPLE in the world.
(the climate scientist number is probably closer to 35,000; and when talking of surveys and samples, reporting the sample size causes everyone not professionally trained in statistics - that is, almost everyone - to underestimate the "power" of the survey.)
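The sample-size intuition can be made concrete: for a simple random sample, the uncertainty in an estimated proportion shrinks like 1/sqrt(n), so even a modest sample is informative. A minimal sketch (the 97% figure is from the example above; the sample sizes are purely illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes for a 97% agreement figure.
for n in (35, 1000, 35000):
    print(f"n={n}: +/- {margin_of_error(0.97, n):.4f}")
```

The jump from n=35 to n=35,000 tightens the margin by a factor of sqrt(1000), roughly 32x, which is why a raw sample-size number misleads readers about a survey's power.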
Just so you know, tptacek signed this letter. I did as well.
Calling it a backdoor was outright dishonest. I've written backdoors. I even won a cryptography backdoor contest at DEFCON with one of my designs.
If it's to be said that there is a vulnerability, it is simply this: if there are any messages that haven't been delivered yet and the recipient changes keys, the sender's client will re-encrypt them to the new public key before alerting.
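That behavior can be modeled roughly as follows. This is a hypothetical sketch of the sender-side logic described above, with made-up names (`Client`, `on_key_change`), not WhatsApp's actual code:

```python
class Client:
    """Toy model of the sender-side behavior described above."""

    def __init__(self):
        self.pending = []          # messages queued but not yet delivered
        self.recipient_key = "K1"  # last-seen public key for the recipient

    def on_key_change(self, new_key, notify_enabled=False):
        # Undelivered messages are re-encrypted to the new public key
        # and sent *before* the sender is alerted.
        delivered = [encrypt(m, new_key) for m in self.pending]
        self.pending = []
        self.recipient_key = new_key
        if notify_enabled:         # key-change notifications are opt-in
            print("Security code changed for this contact")
        return delivered

def encrypt(message, key):
    # Stand-in for real end-to-end encryption.
    return f"enc({message},{key})"
```

Note what a key-substituting attacker gets in this model: only whatever happened to be undelivered at that moment, nothing older.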
Okay, a lot of security experts wouldn't make that trade-off, especially if they were trying to compete with Signal. But WhatsApp isn't a Signal competitor. The alternative means of contacting someone you'd normally use WhatsApp for is SMS, because that's what people are using today.
Most WhatsApp users aren't interested in encryption. It just works for them. They may still need it, but they don't care about it.
Even if you could exploit this, you get:
- Any undelivered messages (if any)
- No past messages
- No stealth either; the user does get alerted
WhatsApp is not a secure communications platform. Claiming otherwise is disingenuous.
The user is only notified IF they have opted in. Most users do not know this option exists or care to enable it, so I'm not sure the above is strictly true.
I think the backlash against the security community's stance is that it gives the vibe that the flaw is being minimized. To the layman, every security flaw, every vulnerability is totally obscure, incomprehensible, jargon-filled techno mumbo-jumbo. Now we have the case that sometimes the experts go wild over the dangers of a flaw, and sometimes they say "Ah, better than nothing." It gives the impression that WhatsApp is being let off the hook.
I think the argument would have been better framed by stating simply that WhatsApp's decision is a security flaw, that it should be fixed from a security POV, but it's still better than most alternatives and leave it at that.
That whole dance about it being an acceptable compromise would have been better off omitted. The letter commits the same error as the Guardian: in trying to be too comprehensive and precise, it muddies its point.
They're not acceptable in general.
They're acceptable within the context of the problem they are trying to solve, which isn't "make crypto nerds happy". Their users, by and large, know absolutely nothing about cryptography. Any attempt to add key verification will either:
- Spook them
- Intimidate them
- Confuse them
- Make them move to an insecure platform that their contacts already use, i.e. SMS
- All of the above
"Use WhatsApp" is the correct recommendation for everyone who is not ready to follow opsec recommendations from grugq, just as "TLS everywhere" is a very good recommendation despite all the issues we know about TLS.
Let's make pervasive passive collection impossible. Then we can work on stuff like endpoint security for everyone.
The "appeal to authority" fallacy occurs when the person cited is not an expert, or is cited outside their area of expertise.
Qualified authorities in one century may later be shown to be completely wrong: bloodletting, phlogiston, or even the age of the Earth before radioactivity was discovered. Feynman talks about this in the context of measurements of physical constants: how each new measurement stays close to the previous ones until the values finally jump to the right one.
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." - Max Planck
So while, yes, he is right that our knowledge of the world and of science changes over time, that has nothing to do with the fallacies we're discussing.
(see my response here for the explanation of the scare quotes on security: https://news.ycombinator.com/item?id=13445337)