
WhatsApp Security Vulnerability - c0rtex
https://www.schneier.com/blog/archives/2017/01/whatsapp_securi.html
======
aylons
While people debate a possible state actor strong-arming WhatsApp, and the
semantics of "backdoor", the "design feature" of not showing key changes is
claiming real victims, at least in Brazil:

The attacker first tries to duplicate the mobile phone number of the first
victim, probably by social-engineering their phone company. This part may look
difficult, but it is not hard once you realize you do not need to target
anyone special - everyone uses WhatsApp, so any number gives a high
probability of success.

After getting the first victim's number, the attacker installs WhatsApp, which
gladly verifies the user via SMS - WA has no login and no password, so anyone
receiving the SMS can impersonate anyone else.

As WhatsApp does not send any alert about a key change by default, the
attacker is free to impersonate the person - in this case, he simply asks for
some money to be transferred to a bank account as a loan, promising to pay it
back soon. The recipient has no reason to distrust the message - it is being
sent by his friend, in the same chat window they have always talked in; even
the logs are there. There is no message to warn about the potential issue, by
design!

This is no hypothesis - it has actually been happening for some time now.[1]
This design feature surely has some loyal users.

[1][http://www.correiobraziliense.com.br/app/noticia/cidades/201...](http://www.correiobraziliense.com.br/app/noticia/cidades/2016/05/11/interna_cidadesdf,531298/brasilienses-caem-em-fraudes-cometidas-atraves-de-aplicativo-de-celula.shtml)

~~~
StavrosK
Unfortunately, if WhatsApp did defend against this, it would be such a big
hassle that users would disable it. How many people do you know that wouldn't
just click "accept" on "this user's keys changed", or wouldn't just ask the
attacker "hey did you get a new phone?" "yes" "oh okay"?

People love to blame WhatsApp, but what can anyone realistically do?

~~~
aylons
It does not need to be a modal form - a notification message, embedded in the
chat log just before a "Hey, could you send me some money", could make some
people think twice before transferring:

"Wow, he is asking me in excess of USD500 just after WhatsApp warned me his
cell phone has changed. Weird".

The simple alert shown in moxie's own blog post [1], perhaps less cryptically
written, would probably do the job.

Heck, if this had happened between me and my girlfriend last week, I would
most probably have fallen for it, as I did not know this was disabled in
WhatsApp. Now, at least, I have turned the notification on.

[1] [https://whispersystems.org/blog/images/whatsapp-keychange.pn...](https://whispersystems.org/blog/images/whatsapp-keychange.png)

~~~
eridius
For the overwhelming majority of people, it would just lead to alert fatigue,
where users start ignoring the alerts because 99% of the time they're not
actually indicative of a problem.

~~~
eitland
As much as I agree that alert fatigue is a problem this shouldn't trigger it.

~~~
stouset
[citation needed]

~~~
eitland
I wrote shouldn't instead of won't.

That said, my reasoning went along the lines of:

Where I live, at least, people rarely switch phone numbers, and I have yet to
hear of a single person I know or have worked with who has had their phone
number hijacked.

So, let's say that other people are less lucky than me and this warning will
pop up twice a year - will that be enough to trigger warning fatigue?

IMO, probably not.

Will we still have a problem with warning fatigue? Yes. Why? Because of the
sticker and warning requirements created by American lawsuits and the EU
cookie law. (Oh, and IIRC my country isn't much better in this regard, just
smaller, so it's less of a problem.)

While not a citation I hope this explains my reasoning.

~~~
stouset
> Where I live at least people rarely switch phone numbers…

First, it's not about people switching phone numbers. It's about switching
_devices_. This can be something as innocuous as uninstalling/reinstalling the
WhatsApp app. Or upgrading their phone on a one or two year cycle. Or because
they broke their phone and are using a friend's old phone for a few weeks. Or
wanting to send and read messages on their laptop too. And their work laptop.
Except they also had their work laptop reinstalled because of a virus, or
because IT needed to do an upgrade, or whatever.

This shit happens all the time.

> …and I have yet to hear about a single person that I know or have worked
> with who have had their phone number hijacked.

I think this proves my point. The signal-to-noise ratio for this type of
message is precisely zero for the greater-than-99.999% of WhatsApp users who
are _not_ being singled out by a nation-state for surveillance. And the number
of these users who actually bother to confirm keys out-of-band is, while not
precisely zero, near enough as to make no difference.

For users who _do_ anticipate being singled out, there are two plausible
options: they are savvy enough to look into the settings and ensure the toggle
is enabled, or they're _not_ savvy enough to look for this type of option, and
they're probably screwed anyway because actually achieving practical privacy
against a highly-funded and highly-motivated governmental adversary is
brutally hard and requires significantly more active involvement than merely
toggling a switch on a messaging app.

> So, let's say that other people are less lucky than me and this warning
> will pop up twice a year

Twice a year times fifty contacts adds up to seeing this message frequently
enough that you learn to subconsciously ignore it. People _still_ try to
bypass virtually every TLS warning browsers throw at them even though that
number for most people is less than once per year, and even though browsers
have made it painfully difficult to do so.

------
ckastner
The article mostly just quotes two other sources that have already been
discussed here:

 _WhatsApp backdoor allows snooping on encrypted messages_ ,
[https://news.ycombinator.com/item?id=13389935](https://news.ycombinator.com/item?id=13389935)

 _There is no WhatsApp 'backdoor'_,
[https://news.ycombinator.com/item?id=13394900](https://news.ycombinator.com/item?id=13394900)

~~~
beambot
Yep, this is an analysis by a trusted individual in the security field. His
ultimate summary:

> [WhatsApp's representative is] technically correct. This is not a backdoor.
> This really isn't even a flaw. It's a design decision that put usability
> ahead of security in this particular instance.

~~~
frabbit
Or to re-phrase:

This security application is not secure but it is usable.

~~~
noja
There is no "secure" - it's a scale from "no security" up to "very high
security".

~~~
aidenn0
Security is neither a scale nor a binary; from one point of view, it's a
large number of binary values.

Either your security will or won't be compromised by a given threat model.
This is binary, but there's lots of different threat models one could have.

e.g. If you care about the Russian government impersonating you, it's a
different threat model than if you care about the US government reading your
communication, which is a different threat model than if you care about a
private actor encrypting all your data and holding it ransom.

This is then complicated by the fact that we can't see into the future
(sufficiently complicated code is likely to have bugs, we need to predict if
those bugs will be exploited before they are fixed; large government attackers
may or may not know about math that the public crypto community doesn't; which
governments will successfully compel a third party to do various things or
reveal various secrets &c.) so each binary value for the security becomes
probabilistic.

~~~
noja
Come on, a collection of binary values for all of the threat models on a
product used by millions of people is a scale by any other name. How good is
the product at covering each of the threat models?

------
agd
The question for me is that posed by the hacker who discovered the
vulnerability. Here's what he said [1]:

"He (Moxie) said: “The choice to make these notifications ‘blocking’ would in
some ways make things worse. That would leak information to the server about
who has enabled safety number change notifications and who hasn’t, effectively
telling the server who it could man-in-the-middle transparently and who it
couldn’t; something that WhatsApp considered very carefully.”

This claim is false. Those “blocking” clients could instead retransmit a
message of the same length that just contains garbage and this message would
just not be displayed by the receiver’s phone. Encryption guarantees the
garbage or real messages are indistinguishable in the encrypted form. Hence,
this technique would make identifying users with the additional security
enabled on a large scale impossible."

This was raised in the previous WhatsApp vuln thread but, as far as I'm
aware, Moxie has yet to address this criticism. It would be good to get a
response on this.

[1]
[https://www.theguardian.com/technology/2017/jan/16/whatsapp-...](https://www.theguardian.com/technology/2017/jan/16/whatsapp-vulnerability-facebook)
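The quoted counter-proposal - clients with the notification enabled retransmitting same-length garbage instead of the real message - rests on ciphertext indistinguishability. A toy stream cipher can illustrate the idea (a hypothetical, stdlib-only sketch, not Signal's actual construction, and it assumes the server does not hold the key):

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    # Illustrative SHA-256 counter-mode keystream -- a stand-in for
    # a real cipher, purely to demonstrate the principle.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR stream cipher: encryption and decryption are the same operation.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

key = os.urandom(32)
real = b"please resend this message"
garbage = os.urandom(len(real))  # same length, random bytes

ct_real = encrypt(key, real)
ct_garbage = encrypt(key, garbage)

# Same length on the wire; without the key, a passive server cannot tell
# which ciphertext carries the real message and which carries junk.
assert len(ct_real) == len(ct_garbage) == len(real)
```

Note that this only holds against a server that does not know the key - which is exactly the point of contention downthread.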

~~~
kemayo
I think you're using a different meaning of "blocking" than Moxie is. I
believe they mean "blocking" in the sense of waiting for the user to confirm
that the message should be re-sent -- i.e. blocking on the user's input.
Whereas you're using "blocking" to mean refusing to re-encrypt the message.

Presumably any message which would be detectable enough as garbage to not be
displayed on the reader's phone could be treated as them having this feature
enabled, allowing the information-leak Moxie mentioned.

(To be clear, I do think there's an argument to be had over which of these
leaks is _worse_. I just don't think this suggested approach actually
addresses Moxie's concern.)

~~~
grogers
Resending and re-encrypting the message mean the same thing here; I don't
understand the distinction. Once the user okays the key change, the message
can be resent, encrypted with the new key; before that, it would be blocked
from being resent.

I know nothing of the Signal protocol, but whether the server can tell the
message is garbage depends on what the receiver's client tells the server. An
ideal client would acknowledge receipt to the server but show the user an
error (or silently drop the garbage message). From the quote it seems like
this is the case, in which case the server can't tell a true message receipt
from a receipt of garbage, and the correlation doesn't work.

~~~
pfg
> I know nothing of the signal protocol, but whether the server can tell the
> message is garbage depends on what the receiver client tells the server.

You're forgetting that the server is the one telling the sender what the new
key is. If the key is under the control of the attacker/server, they can read
the message and determine if it's garbage or not.

------
wyldfire
Even if they changed this specific design decision/vulnerability, it seems
like there's a big gaping hole (or I'm missing something).

Given that WhatsApp brokers the initial key exchange, lawful interception can
take place at WhatsApp under subpoena. What we hope is that WhatsApp would
fight these orders in court, claiming that the keys are merely forwarded and
aren't stored, by design. But if they fought and lost, then presumably they'd
comply with the orders, including the provision not to reveal the order. Do we
really think that WhatsApp and/or Facebook have the conviction of Ladar
Levison?

It would seem that all new accounts created at WhatsApp after that theoretical
warrant is executed are at risk.

~~~
ikeboy
I'd assume the keys are generated on device.

~~~
wyldfire
Presumably the device generates a keypair and then needs to exchange it with
the remote device somehow? I assumed both devices connect to WhatsApp, which
delivers what are ostensibly the public keys of each party to the other?

~~~
pfg
This can be detected if the sender and the recipient attempt to verify their
keys out of band (i.e. in person or through some other trusted communication
channel). WhatsApp allows you to do that.
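The out-of-band check boils down to both parties independently deriving the same fingerprint from the identity keys and comparing it over a trusted channel. A hypothetical sketch (WhatsApp's real safety number is derived differently; this only illustrates the idea):

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    # Toy fingerprint: hash over both identity keys, sorted so that
    # both parties compute the same value regardless of direction.
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    # Group into short chunks for human comparison.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice_key = b"alice-identity-key"
bob_key = b"bob-identity-key"

# Both sides compute the number locally and compare it in person.
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)

# A MITM key substituted on either side changes the number, so the
# out-of-band comparison fails and the attack is detected.
assert safety_number(alice_key, b"mitm-key") != safety_number(alice_key, bob_key)
```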

~~~
dingaling
Out of band but not out of app. It's the WhatsApp app that generates and
presents the 'security code' or key fingerprint for comparison.

It's not like SSH, in which separate and discrete components generate the
keypair and verify the fingerprint on connection.

~~~
pfg
That's moving the goalposts. A backdoor in the app itself is a whole different
matter - both legally (give us these records/change these records in your
database vs. build software according to our spec and ship it to your
customers, which is similar to Apple vs. FBI and might not be constitutional)
and technically.

I also don't see the difference between this and SSH. If your SSH server or
client is backdoored/compromised, you have no control over what happens to
your plaintext, no matter what the fingerprint verification tells you. The
only difference is that one is open source, so the likelihood that a backdoor
is detected is probably higher - though I don't think this means that a) there
is no backdoor or b) a backdoor in a closed-source app cannot be detected.

------
eridius
If your threat model is the government compelling Facebook, then you should be
using a different product that's geared specifically towards security, such as
Signal. WhatsApp is a mass-market product aimed at the whole world, which
means it makes different tradeoffs, providing a less comprehensive threat
model in favor of higher usability. And that's a perfectly fine thing for this
app to do.

~~~
stouset
Yes, thank you. So many people in this thread are making the absurd assertion
that security is a binary thing — it's either totally secure against all
threats, or it's insecure.

What the security community has spent the last 20 or so years coming to grips
with is that it's _very hard_ to cover every attack surface and not wind up
with a product that nobody outside of a select few is smart or dedicated
enough to use (e.g., GPG), or whose endless warnings people just blindly
click through (e.g., the not-so-distant days of TLS). What we _can_ do is
make incremental improvements over the existing tools people use, by covering
more in the threat model or improving usability so that more people use them
and/or fewer people ignore important warnings.

As a mass-market anti-surveillance and privacy-enabling chat app, WhatsApp is
an incredible success. It's not replacing GPG with a carefully-curated web of
trust. It's replacing plaintext SMS.

There are better tools if you _know_ your threat model includes targeted,
high-budget attacks from the FSB, NSA, or CIA.

------
folex
I didn't quite grasp why the attacking entity (e.g. a government) has the
ability to read messages. What does "WhatsApp has the ability to force the
generation of new encryption keys for offline users" mean? Does it mean that
the WhatsApp backend can force the sender to use a pregenerated, compromised
key provided by the attacker? In terms of the WhatsApp security whitepaper,
does that mean the attacker can force the sender to use newly generated (by
the attacker) S_recipient, O_recipient and, most importantly, I_recipient?
I'm asking because "force the generation of new _encryption_ keys" doesn't
really specify who would generate the keys, or what happens to the identity
key that signs everything.

~~~
aidenn0
Let's say WhatsApp wants to read the next message sent to user X:

1) WhatsApp makes user X appear offline

2) User Y sends user X a message

3) WhatsApp sends user Y an indication that user X's key has changed, along
with the public key for which they have the corresponding private key

With these steps, user Y's message will be resent with the new key that
WhatsApp knows, and so they can read the message. There is a configuration
setting that will display a notification that the key changed, but no way to
prevent an undelivered message from automatically being resent with the new
key.
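The three steps above can be modeled in a few lines. Everything here (class names, the dict-based "encryption") is an illustrative toy, not WhatsApp's actual code:

```python
import os

class Server:
    def __init__(self):
        self.keys = {}      # user -> public key the server currently advertises
        self.queued = []    # undelivered ciphertexts

    def register(self, user, pubkey):
        self.keys[user] = pubkey

    def key_for(self, user):
        return self.keys[user]

class Client:
    def __init__(self, name, server):
        self.name = name
        self.key = os.urandom(32)       # stand-in for a real keypair
        server.register(name, self.key)

    def send(self, server, to, msg):
        # "Encrypts" to whatever key the server currently advertises.
        return {"to": to, "key": server.key_for(to), "body": msg}

server = Server()
alice = Client("alice", server)
bob = Client("bob", server)

# 1) Bob appears offline; Alice's message sits undelivered at the server.
ct = alice.send(server, "bob", "meet at noon")
server.queued.append(ct)

# 2) The server swaps in a key it controls for Bob...
mitm_key = os.urandom(32)
server.register("bob", mitm_key)

# 3) ...and Alice's client automatically re-encrypts and resends to the
#    new key, with at most a non-blocking notification shown to Alice.
ct2 = alice.send(server, "bob", "meet at noon")
assert ct2["key"] == mitm_key  # the server can now read the resent message
```

The crux is step 3: the resend happens without waiting for the user to approve the key change.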

~~~
folex
So the main problem is that, on an _identity_ key change, the new key isn't
required to be signed with the previous identity key? If so, that's plain
stupid, isn't it?

~~~
aidenn0
It's a natural consequence of "My phone got run-over/lost/stolen"
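folex's proposed continuity check, and why device loss defeats it, can be sketched as follows. HMAC stands in for a real asymmetric signature (e.g. Ed25519), so this shows only the control flow, not a sound design:

```python
import hashlib
import hmac

def sign(old_priv: bytes, new_pub: bytes) -> bytes:
    # Toy "signature" of the new identity key by the old private key.
    return hmac.new(old_priv, new_pub, hashlib.sha256).digest()

def continuity_ok(old_priv: bytes, new_pub: bytes, sig: bytes) -> bool:
    # Toy verifier (cheats by reusing the private key; a real system
    # would verify against the old *public* key).
    return hmac.compare_digest(sign(old_priv, new_pub), sig)

old_priv = b"old-device-private-key"
new_pub = b"new-device-public-key"

# Planned upgrade: the old device signs the new key, so contacts can
# accept the change silently.
assert continuity_ok(old_priv, new_pub, sign(old_priv, new_pub))

# Phone run over/lost/stolen: old_priv is gone, no valid signature can
# ever be produced, so every legitimate recovery looks exactly like an
# attack -- which is the "natural consequence" mentioned above.
assert not continuity_ok(old_priv, new_pub, b"\x00" * 32)
```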

------
arrakeen
Conspicuously missing from this discussion is the self-healing capability of
the Signal protocol, which as far as I understand is a major feature. When
Marlinspike says, "This is called a 'man in the middle' attack, or MITM, and
is endemic to public key cryptography, not just WhatsApp," I find it odd that
he doesn't even address the fact that the Signal protocol has protections
against this built in.

