
This is not a fake issue.



Neither is the fact that vaccines can sometimes make you sick or kill you.

But putting "VACCINES CAN KILL" as a headline in a mass-circulation paper would be just as wrong, and for the same reasons.


>But putting "VACCINES CAN KILL" as a headline in a mass-circulation paper would be just as wrong

So you admit it's true, but it's wrong to write it as a headline?

It's interesting to see people put their heads in the sand regarding the Signal/WhatsApp security flaws.


You're saying the entire security community has put its head in the sand. Why do you think the top people in the field have signed this letter? I'm asking sincerely. What do you think you understand that they're missing?


I haven't heard or investigated much about this issue. But I want to ask about the general question.

If something is actually true, then why is it preferable not to say it publicly (including in a headline)? As long as the qualifying statements are made alongside it (e.g. probabilities)?

Why should governments have so much classified information and secrets? What if they just operated entirely transparently? (Except for short term tactics for raids and investigations of suspected criminals and terrorists.)


I think the vaccine analogy is really helpful here. You can make true statements, like "vaccines can kill you", that cause massive public harm if they're not correctly contextualized.

People come away thinking "vaccines are bad". In this case, the Guardian put the tech equivalent of "vaccines can kill you" in a headline.


> If something is actually true, then why is it preferable not to say it publicly

Heavily de-contextualised true information can be just as misleading as actual lies. Hence the vaccine example.


> Heavily de-contextualised true information can be just as misleading as actual lies.

Don't they end up being lies of omission at that point?


What context needs to be given, other than the probability with which "can" becomes "does"?

The probability can be conditioned on some relevant assumption to sensationalize the story. But as long as the probability and the assumption are clearly spelled out, I don't see what other context should be required for PUBLIC disclosure. Maybe also the total incidence number.

For example: Jihadist terrorists can kill you! With a 0.001% probability for every hour you spend in a large crowd. It has happened in two crowds so far.

Or: 97% of climate scientists agree the rise in greenhouse gases is mostly attributable to human activities. That is 97% of 35 PEOPLE in the world.


Some people are allergic to vaccines. That's a real story, too. But none of us have any trouble seeing the problem if The Guardian runs an RFK Jr. story saying "Vaccines Might Kill You".


Since when have you ever seen a probability in a newspaper?

(the climate scientist number is probably closer to 35,000; and when talking of surveys and samples, reporting the sample size causes everyone not professionally trained in statistics - that is, almost everyone - to underestimate the "power" of the survey.)



He signed the letter we're debating!


The people who signed this letter are the who's who of the study of cryptographic backdoors. What makes you believe you understand this subject better than they do?


I'll point out none of the security researchers in the article dispute the vulnerability is as described. They simply took issue with the appeal to leave WhatsApp as a result of the presence of the vuln. They essentially said, Yes, it's less secure to do it this way, but we think it's more important to make sure messages get delivered than to ensure absolute security.


> I'll point out none of the security researchers in the article dispute the vulnerability is as described.

Just so you know, tptacek signed this letter. I did as well.

Calling it a backdoor was outright dishonest. I've written backdoors. I even won a cryptography backdoor contest at DEFCON with one of my designs.

https://underhandedcrypto.com/2015/08/08/crypto-privacy-vill...

https://paragonie.com/blog/2016/01/on-design-and-implementat...

If it's to be said that there is a vulnerability, then it is simply, "If there are any messages that haven't been delivered yet, and the recipient changes keys, the client will re-encrypt to the new public key before alerting."
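For concreteness, here is a minimal sketch of the trade-off (hypothetical names and a deliberately simplified client, not WhatsApp's actual code), contrasting a block-until-verified policy with the re-encrypt-then-alert behavior just described:

  from dataclasses import dataclass, field

  @dataclass
  class Session:
      recipient_key: str                               # recipient's current public key
      undelivered: list = field(default_factory=list)  # queued, not-yet-delivered messages

  def encrypt(message: str, public_key: str) -> str:
      # Stand-in for real encryption; just tags the message with the key used.
      return f"enc[{public_key}]({message})"

  def alert_user(text: str) -> None:
      print("ALERT:", text)

  def on_key_change_blocking(session: Session, new_key: str) -> list:
      # Block-until-verified policy (roughly Signal's choice): hold undelivered
      # messages until the sender verifies the new key; nothing is re-sent
      # automatically.
      session.recipient_key = new_key
      alert_user("Recipient's key changed; verify it before queued messages are re-sent.")
      return []

  def on_key_change_resend(session: Session, new_key: str,
                           notifications_enabled: bool = False) -> list:
      # The behavior described above: re-encrypt any undelivered messages to
      # the new key and send them, alerting only afterwards (and only if the
      # user has opted in to key-change notifications).
      session.recipient_key = new_key
      resent = [encrypt(m, new_key) for m in session.undelivered]
      session.undelivered.clear()
      if notifications_enabled:
          alert_user("Recipient's key changed; queued messages were re-sent.")
      return resent

Either way, the only plaintext at stake is whatever happens to be sitting in the undelivered queue at the moment of the key change.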

Okay, a lot of security experts wouldn't make that trade-off, especially if they were trying to compete with Signal. But WhatsApp isn't a Signal competitor. The alternative means of contacting someone you'd normally use WhatsApp for is SMS, because that's what people are using today.

Most WhatsApp users aren't interested in encryption. It just works for them. They may still need it, but they don't care about it.

Even if you could exploit this, you get:

  - Any undelivered messages (if any)
  - No past messages
  - No stealth either; the user does get alerted

So, yes, we do dispute the vulnerability is as described, especially when it was called a backdoor.


It's a vulnerability. The article says only that it "could be used by government agencies as a backdoor", which is true of almost any vulnerability. If a vulnerability exists in a given cryptosystem then to a large extent it doesn't matter whether it's there by accident or design. At least, that's been the security community's line for decades prior to this particular incident.


I stand by my assertion that the vulnerability exists exactly as described. You have said nothing that disputes that. If I understand you correctly, you take issue specifically with the characterization of the vulnerability as a 'backdoor', which suggests intent. I don't know the dev's intent, so I'm unable to speak to the validity of that characterization in that context. That said, the Guardian corrected the story in this regard, and the text now reads 'vulnerability', which I think we all can agree on.

WhatsApp is not a secure communications platform. Claiming otherwise is disingenuous.


> - No stealth either; the user does get alerted

Only IF the user opts in do they get notified. Most users do not know this option exists or care to enable it, so I'm not sure the above is strictly true.


It still feels weird to have security experts characterize WhatsApp's choice as acceptable; it feels off to justify it by appealing to the rare occurrence that a message might need to be re-sent.

I think the over-reaction to the security community's stance comes from the impression that the flaw is being minimized. For the layman, every security flaw, every vulnerability, is totally obscure, incomprehensible, jargon-filled techno mumbo-jumbo. Now we have a case where sometimes the experts go wild over the dangers of a flaw and sometimes they say, "Ah, better than nothing". It gives the impression that WhatsApp is being let off the hook.

I think the argument would have been better framed by stating simply that WhatsApp's decision is a security flaw, that it should be fixed from a security POV, but that it's still better than most alternatives, and leaving it at that.

That whole dance about it being an acceptable compromise would have been better off omitted. The letter commits the same error as the Guardian: in trying to be too comprehensive and precise, it muddies its point.


> It still feels weird to have security experts characterize WhatsApp's choice as acceptable;

It's not acceptable in general.

It's acceptable within the context of the problem WhatsApp is trying to solve, which isn't "make crypto nerds happy". Their users, by and large, know absolutely nothing about cryptography. Any attempt to add key verification will either:

  - Spook them
  - Intimidate them
  - Confuse them
  - Make them move to an insecure platform that their contacts
    already use; i.e. SMS
  - All of the above


You've repeated this sentiment several times (and I believe you are right), but isn't it an appeal to authority?


Appeal to authority is not nothing. If there is a complicated issue and a majority of credible authorities on the subject agree on some conclusion, I should accept it unless I have studied it enough (advanced degree in the right field + study up on all recent publications/results) to be sure I know everything they know and fully understand their reasoning.

"use whatsapp" is the correct recommendation for everyone who is not ready to use opsec recommendations from grugq, same as "TLS everywhere" is a very good recommendation regardless of all the issues we know about TLS.

Let's make pervasive passive collection impossible. Then we can work on stuff like endpoint security for everyone.


Note that the grugq signed the letter as well.


There's nothing wrong with citing topical experts.

The "appeal to authority" fallacy occurs when someone is cited outside their area of expertise or is not an expert.


Yep, this exactly. Most books dealing with logic and fallacies now list it as "Appeal to unqualified authority" or similar.


I don't think using the word "qualified" to qualify the word "authority" makes the appeal to authority fallacy any less of a fallacy. Most appeals to authority are actually to qualified authorities - the fallacy is neglecting to provide any reasoning behind the conclusion. In fact the authority may not have even intended their statement to indicate great certainty, or they may have been influenced by sociological phenomena.

Qualified authorities in one century may be shown to be completely wrong. For example: bloodletting, phlogiston, or even the age of the earth before radioactivity was discovered. Feynman talks about this... how each new measurement stays close to the earlier ones until finally they jump to the right value.

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." - Max Planck


Yeah, but for every Einstein there are 10^8 cranks.




I would say it's not that he's incorrect, more that he's off-topic. The changing of truth doesn't really have anything at all to do with the appeal to authority fallacy, because said fallacy is judged against the thinking of the time. We can't possibly know how scientists are going to think 150 years from now, so we can't appeal to their authority.

So while, yes, he is correct that our knowledge of the world around us and of science changes over time, that has nothing to do with the fallacies we're discussing.


[flagged]


I downvoted this one just to push your buttons (it's a pune!). I didn't vote either way on the other comments.


Nobody owes you a reason for disagreeing with you and simply downvoting. The site guidelines do ask you not to go on about downvoting, though.


[flagged]


He has one dead comment which just complains about downvoting and contains no argument. I'm not sure what you mean by 'defunct'.


Situations where you are right and the top experts in a field are wrong are rare, and should give you pause. Moreover, these people are not arguing "because I said so", but giving a very long list of falsifiable reasons to back their claims.


Neither was the Johnny Can't Encrypt issue. Now 1/7th of humanity is using E2E encryption. Making that happen involved some "security" vs. usability tradeoffs.

(see my response here for the explanation of the scare quotes on security: https://news.ycombinator.com/item?id=13445337)



