WhatsApp backdoor allows snooping on encrypted messages (theguardian.com)
1332 points by katpas on Jan 13, 2017 | 321 comments

Current thread on the response to this article: https://news.ycombinator.com/item?id=13394900

Well, I kind of feel that I have to repost my comment from this old thread[1] regarding the government of Egypt blocking the Signal application:

"Isn't it "weird" that they chose to block Signal app and not the signal-protocol based Whatsapp? If Whatsapp really implements the same kind of security and privacy measures that Signal does, why is Whatsapp allowed to continue operating? If signal is preventing them spy on users and they ban it, is in't it safe to assume that Whatsapp is NOT preventing them spy on users, so they let it operate? Wouldn't you expect Whatsapp to be also targeted, especially considering the broad user-base it has compared to Signal? Yes, I know they had blocked Whatsapp in the past, but they didn't block it now. Which means that something has changed in the relationship of the Egyptian gov and Whatsapp since 2015."

1. https://news.ycombinator.com/item?id=13219304

Simple explanation would be that activists use Signal. [1]

They don't trust WhatsApp and rely on Signal for secure messaging. Blocking Signal means they are able to target activists without impacting much of the rest of the population.

[1] Many of the people I know who are activists in countries where they need to protect their identities use Signal

I wouldn't have trusted WhatsApp even before this revelation.

I would never trust a closed-source messaging app if I were an activist, regardless of what encryption they claim to implement.

I wouldn't trust anything owned by Facebook. Period.

The security of a system is only as strong as its weakest link, which in this case is the system software (OS and drivers) and hardware. Imagine that the baseband hardware has been fitted with a backdoor that simply says "encrypt all textual input and send it to this address". Even better, piggyback on a well-known endpoint, like Facebook, then compromise that (which is easy if you're a state actor). The only thing that really saves us is that it's just too much data! (Well, that and the fact that most of us are happily playing the games of commerce, and not particularly interesting to state security services.)

Good point. At least as a technical person, I would like to use an open-source messaging application.

Of course I'm not going to read the source code, but at least I can be reasonably sure the developers behind the app haven't opened a backdoor for someone else.

The mobile space is tricky. A source code dump doesn't really do much beyond "trust us, this is what you get from the App Store too". You also need the ability to build the software yourself, which includes things like API keys, before we're close to the assurances open source software used to give us.

The nice thing about a FOSS mobile app is that you can (in theory, at least) sideload it. A covert operation could just gather up everyone's devices, build a fresh copy of the app, and then sideload that copy for everybody.

Of course, for that to be feasible, the network architecture of the app must not require API keys—and so must either be purely peer-to-peer, or involve a FOSS server component that the developer can run an instance of themselves (as in the Matrix protocol.)

While I'm totally the same in this regard, this does feel a bit like an open-source version of the bystander effect.

I don't know what the bystander effect is, but I assume we're talking about the same thing: I often feel that everyone is, along with myself, thinking "great - open source! I'm sure someone's checking it."

Of course, the counter is that if you publish it, you run the risk that someone actually is checking.

Open beats closed, but we must be careful not to think it immediately makes the code sound.

I've been thinking about this recently in relation to Monzo, the soon-to-be bank. There's no web app and slow progress on the Android front. Lots of open-source effort, though, since they publish an API, but... that's my bank account I'm (not) giving open-source developers access to.

> but we must be careful not to think it immediately makes the code sound

Nobody is saying it's automatically sound, but open is the only option that makes any security analysis possible.

> open is the only option that makes any security analysis possible

I'm not disputing that. Let me repeat myself:

> Open beats closed

All I'm saying is that it doesn't stop there. Too often there's this complacent 'great, it's open source!' - I'm as guilty of it as anyone.

You're begging the question.


> open is the only option that makes any security analysis possible.

Many people are disputing that, and I'm coming around to that view. Closed doesn't mean you have nothing; it means you have the binaries, which you can disassemble and analyse. With open, you have a somewhat higher-level language, which you still have to analyse, and then you have to show that the binaries correspond to it.

> open is the only option that makes any security analysis possible.

Generations of crackers and security researchers have proven that incorrect. There are plenty of tools for dealing with compiled programs.

I suppose the difference is that the bystander effect has a connotation of the person stepping in not getting any real personal benefit (e.g. breaking up a fight), vs. here, where you would get some name recognition for calling out Signal (for example)

There is no logical way to verify that all activists (or even a majority of them) use Signal over WhatsApp. The perception that activists use Signal may have been enough to block them, but having a huge backdoor in WhatsApp is reason enough to not take action.

That's assuming it was a macro decision, and not a micro decision. The govt could have had specific intel on a particular activist, or cell that they knew were using Signal, and shut it down to deal with that situation at that time.

Signal actively promotes itself as an activist messenger by using the names of revolutionaries and anarchists (Makhno, Proudhon, Masha Kolenkina) all over its website. Just for example: https://whispersystems.org/blog/images/signal-faces.png

> Simple explanation would be that activists use Signal.

But why do activists simply not use WhatsApp instead of Signal? If both are supposed to be fully encrypted and secure, why not use the tool that is available? I assume the need for encryption is to prevent the government from snooping and eavesdropping on your plans, rather than "liking the UI/UX of one system over the other"?

Maybe the activists knew something we did not, and were right to be paranoid...

I think the rule of thumb around here is that any system that is closed-source must be treated as inherently untrustworthy from a security standpoint. WhatsApp has therefore always been untrustworthy for the scrupulous, regardless of the relatively flattering PR.

Based on news like this, rightfully so.

Facebook owns WhatsApp and has been increasingly hospitable to government intrusion on users' privacy. That seems like a good enough reason given that Facebook violated its pledge not to combine user data.

Also, the folks in the government doing the banning probably use WhatsApp themselves to conduct business and do their jobs.

WhatsApp is used by over a billion people. I'm sure some activists in Egypt use WhatsApp, too. That said, I think WhatsApp was blocked in Egypt, too, at least for a while. I don't know if they later "fixed" that or not, and how they did it.

A lot of Americans don't understand why messengers like WhatsApp are so popular around the world. The reason is that most carriers still extort users by charging text message fees.

In the US, everyone texts (or thinks they're texting when running iMessage) because most plans give unlimited voice and texts and charge by the GB of data.

Good point, but there is an explanation: blocking WhatsApp would lead to more intense backlash. See what happened in Brazil.

Not to say it isn't both, but the price of blocking (one of) the most popular messaging apps is higher for a government than blocking one in the low, low percentiles of usage.

What you say makes blocking Signal pointless.

If they blocked Signal just because it was less trouble to block than WhatsApp, then all the people who were on Signal will easily switch to WhatsApp... What you have at this point is a government paying the price of blocking a less popular messaging app it cannot control, while the people it is after can just switch to a MASSIVELY used messaging app the government can also not control and which, additionally, is too expensive to block.

If this were the case, it would actually work against the government. Do not underestimate government authorities; they are not THAT naive. If they had not blocked Signal at all, they could at least track Signal users and at least have that information: that this small group of people (Signal users) contains the group of people they are after. They could have their honeypot there. Mixing the "dangerous" Signal user base with the chaotic, massive user base of WhatsApp makes no sense, unless you really have WhatsApp on your side.

I hope you understand what I am trying to say.

edit: rephrasing

I wouldn't overestimate government authorities either. A report on a person of interest that says the person uses Signal, crossing the desk of a deputy minister, could be enough to get the application blocked in the country.

Elected officials and political appointees demand action on things that are counter to their interests all the time; the people who execute those orders (if they appreciate that the order is counter-productive in the first place) have to decide which measures are worth fighting and which are not.

Can you make an unblockable app?

An app that effectively used steganography[1] would probably come the closest to being an "unblockable app". As long as they don't detect that communication is going on, they can't usually block it -- short of blocking everything, which is rarely practical for long.

Some other interesting reading is: [2], [3], and [4]

[1] - https://en.wikipedia.org/wiki/Steganography

[2] - https://en.wikipedia.org/wiki/Covert_channel

[3] - https://en.wikipedia.org/wiki/Traffic_analysis

[4] - https://en.wikipedia.org/wiki/Anonymous_remailer
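To make [1] concrete, here's a minimal least-significant-bit sketch in Python. This is a toy, not a real stego tool (naive LSB embedding is precisely the pattern steganalysis tools are built to detect); it hides a secret in the low bit of each byte of some cover data, e.g. raw image pixels:

```python
# Toy LSB steganography: hide `secret` in the low bit of each cover byte.
# Illustration only -- plain LSB embedding is detectable by steganalysis.
def hide(cover: bytes, secret: bytes) -> bytes:
    # Flatten the secret into bits, least significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the low bit only
    return bytes(out)

def reveal(stego: bytes, n_bytes: int) -> bytes:
    # Collect the low bits back and reassemble the secret bytes.
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytes(range(256)) * 4            # stand-in for pixel data
stego = hide(cover, b"meet at noon")
assert reveal(stego, 12) == b"meet at noon"
```

Since only the low bit of each byte changes, the carrier looks statistically almost identical to the original, which is the whole point: a censor must detect the embedding, not just the traffic.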

It would quickly reveal itself, either by overtly disclosing its purpose on the app store it's indexed in or through HUMINT/leaks.

First, just because an adversary understands how a given steganography app works, or knows that it exists doesn't mean that they can detect the specific communication that's occurring, or will move to block that communication.

The canonical image-hiding stego applications are a case in point: the applications are widely distributed and understood, but in principle (if not in practice, due to steganalysis[1]) one could know of their existence and how they work, yet still be unable to detect that covert communication through them was going on, or to block that communication short of blocking all image posting.

Second, they need not be on any app store.

Third, any leaks about their existence, if they come at all, may come too late. As Napoleon said, it's not necessary to censor the news -- it's sufficient to delay it until it no longer matters.

[1] - https://en.wikipedia.org/wiki/Steganalysis

No, but blocking it could piss off a large part of your population.

It all depends on how far you are willing to push the blocking and how much you are willing to disable so you can block anything.

Signal is currently using domain fronting. (IIRC the app will soon test the network conditions before attempting to use domain fronting, but for now it checks the country code of your phone number.)

It will open an HTTPS connection to google.com, but after the connection is made it sends a Host header for something.appspot.com. In order to block that you need to MITM the connection or block google.com. (Not sure if DPI could be used to get the Host header; I never really looked into it personally. I know that SNI sends the hostname as part of the handshake so the web server knows which cert to present you with. Could it be extracted, checked against a list, and then have the connection reset, preventing the connection? Dunno, never played with it, but it's an idea off the top of my head.)
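As a rough illustration of that mechanism (a sketch only; the hostnames and path are made up, and a real client drives this through a TLS library rather than building requests by hand), the key is that two different names are in play:

```python
# Toy illustration of domain fronting. A censor watching the wire sees
# the DNS lookup and TLS SNI for the front domain; the Host header naming
# the real backend travels *inside* the encrypted tunnel.
FRONT_DOMAIN = "www.google.com"        # visible to the censor (DNS + SNI)
HIDDEN_HOST = "something.appspot.com"  # the service actually addressed

def build_fronted_request(path: str = "/") -> str:
    """Plaintext HTTP request sent inside the TLS connection to FRONT_DOMAIN."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN_HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

request = build_fronted_request("/v1/messages")
```

To actually send it you would open a TLS socket with server_hostname set to FRONT_DOMAIN (so that is what appears in the SNI) and write the request; the CDN then routes on the Host header. This is why the SNI-inspection idea above only gets the censor as far as the front domain: the Host line is never visible on the wire.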

(Now for some mild rambling :-p)

Let's say you can't MITM/DPI, so you just block Google; then they would have to use another CDN, so you block that one too. How many are you going to go through before your citizens get pissed off at you and do something?

But let's say your people really hated Gmail anyway and put up with not having Google just so this messaging app was blocked (and the creators don't just change CDNs); then you just force your people to install your own root cert, or they don't get any encrypted web traffic. Will people complain, or just install the cert and get their Facebook back?

So people switch to using personal networks (Bluetooth and WiFi hotspots when in a crowd of people); just jam cell/2.4GHz/5GHz. Will people complain they can't use their phones?

And it just escalates to the point where you need a doctor's note and a permission slip signed by your mum before you are allowed to make a phone call.

All the while, those who actually want to encrypt their messages use math they can do at a desk away from a computer or phone, and just use whatever method the government does allow / they can get away with (standard SMS, though the who and when can be got from the telcos; dead drops; IRL meetings), sacrificing their metadata in the process.

Nice description there. Google may not be pleased by this and may be under pressure to revoke their access, but hopefully they will make it clear that this shit doesn't fly. Nice workaround.

Check out Ricochet. If I recall correctly, it uses a blockchain-type transport over Tor.

How does that help? I think Tor can be blocked...

The GFW is able to recognise Tor usage.

> The firewall searches for a bunch of bytes which identify a network connection as Tor. If these bytes are found the firewall initiates a scan of the host which is believed to be a bridge. In particular the scan is run by seemingly arbitrary Chinese computers which connect to the bridge and try to “speak Tor” to it. If this succeeds, the bridge is blocked.


With all the things GFW does I wonder if they have some secret conferences or industry journals related to the firewall's algorithms and infrastructure.

Don't see why not? In Jason Scott's talk The Mysterious Mr. Hokum [0] he talks about the owner of an early ISP who, not long after selling it, was found dead. IIRC, during his time as owner he would often have regular meetings with FBI agents to basically discuss what was going on on the net.

Problem was, after he died it turned out he was actually on the run on fraud charges. I think Jason presumes he set up the ISP as another scam, but he started it at the perfect time and started actually making legit money instead. So (again, trying to recall the talk from memory; I must watch it again, as I enjoyed it) this ISP owner was having meetings with the FBI about his ISP, all the while the FBI also wanted him on fraud charges. So yeah, if the FBI don't mind having chats with ISPs just to see what's going on, I wouldn't be at all surprised if China had meetings with their ISPs too. From what I have read about the GFW, it seems that its infrastructure differs from ISP to ISP. Dunno if that's because it's left to the ISP to implement, or if the government issues "black boxes" to do the firewall work and it's just different versions of hardware/software depending on when the boxes were issued.

But yeah, I do like the idea of a secret DEF CON but kind of in reverse, one that discusses the tricks, infrastructure, and bypasses discovered in the past year, in order to better run the GFW. In my imaginary con they are all still getting drunk and hacking into the hotel signage for the shits and giggles of it, though.

[0] https://youtu.be/UTzQmhmgLC0

That same person developed ScrambleSuit[1], which is used as a pluggable transport to obfuscate traffic and prevent detection/active probing. Work is continuing to keep the GFW from being able to catch up [2][3].

[1] http://www.cs.kau.se/philwint/scramblesuit/

[2] https://github.com/Yawning/obfs4

[3] https://git.schwanenlied.me/yawning/basket2

I don't think Ricochet uses blockchain technology.

This is a nice idea, but it's also baseless speculation.

You're implying that WhatsApp, Inc. gave the Egyptian government the ability to remotely retrigger this backdoor whenever they want to (for those who haven't actually read the article: this backdoor only works when WhatsApp issues a key change for a conversation, and only then in certain circumstances). In other words, you imply that Egypt said "Hey WhatsApp, please actively hack into your Egyptian users' messages and send us the results" and WhatsApp said "ok sure here ya go".

It might be true, but Zuckerberg might be a FSB informant and I might be Elvis reincarnate. These are all baseless, yet not entirely implausible claims.

Well, it's only baseless speculation if you can provide at least one plausible alternative, so we can say "we don't know which is true".

niksakl's point is that the go-to "probably nothing going on" or the other "WhatsApp too popular to block so we block Signal instead" explanations are just not plausible at all.

So I don't think it's entirely baseless, and with this new information, even less so.

And as for Egypt making such a deal with a large company, you make it sound like you believe that's implausible, but this has in fact happened before: when Egypt hired Nokia and Siemens to develop, build and implement their DPI infrastructure, later claiming "gosh, we never expected they'd actually use this to hunt down, torture and kill dissidents". Maybe governments aren't that naive, but corporations surely will try to claim to be.

> You're implying that WhatsApp, Inc. gave the Egyptian government the ability to remotely retrigger this backdoor whenever they want to (for those who haven't actually read the article: this backdoor only works when WhatsApp issues a key change for a conversation, and only then in certain circumstances). In other words, you imply that Egypt said "Hey WhatsApp, please actively hack into your Egyptian users' messages and send us the results" and WhatsApp said "ok sure here ya go".

No, the private hackers governments hire were able to use an exploit to snoop on WhatsApp. That's very probable.

Yeah, but that's not how the exploit would work. If you read the article, the "backdoor" is that WhatsApp could "generate" a new private key without your knowledge. Except that instead of generating a key, they'd use a well-known key. From there, they could give that key to state actors, or they could decrypt the traffic themselves and give it to state actors.

Either way, you need server side control of WhatsApp.

Which you could get by hacking WhatsApp endpoints.

Is there any evidence that this happened?

It is speculation, but it is far from baseless.

Not all speculation is inappropriate; sometimes it is the seed from which a correct conclusion ultimately grows.

Of course it is only speculation, but this is my argument: https://news.ycombinator.com/item?id=13390564

> You're implying that WhatsApp, Inc. gave the Egyptian government the ability to remotely retrigger this backdoor

It doesn't have to be THIS particular backdoor. "Why build one when you can build two at twice the price? Only, this [second] one can be kept secret."

There's a cost/benefit tradeoff to blocking each service and different governments have different thresholds.

It is more likely that the cost of blocking Signal was negligible in contrast to the benefit, while blocking WhatsApp would likely have a huge cost, especially in a country that has only recently experienced a number of citizen-driven coups.

It is also possible that they're specifically targeting a group (Muslim Brotherhood, or Jund al Islam and other Sinai insurgency groups) that utilize Signal.

To add to those who have referenced the cost to the government: consider who else uses WhatsApp besides just activists - it's likely many government employees use WhatsApp as well.

Anecdotal tidbit: I worked at the Rio 2016 Olympics. My team consisted of Brazilians, Americans, Britons, and Koreans. WhatsApp was how we communicated[1], I'm sure the same was true for most of the other thousands of people working setup for the Olympics.

When a power-hungry judge forced WhatsApp to be blocked a couple weeks before the opening ceremonies, it was rather problematic for the Olympics staff. My first thought was "uhhh. This isn't going to last for long," and it didn't.

I can't say for sure that it's because the IOC president called up the Brazilian president, and the Brazilian president yelled at the judge, but I like to think that's what happened.

[1] Integrated language translation would be a FANTASTIC feature to add.

Possibly answers your question: https://news.ycombinator.com/item?id=13234211

There was a different commenter, possibly in a different HN thread, who explained that as an Egyptian resident he thought the government was blocking things like WhatsApp and Signal to protect one of the non-government employers in Egypt, the telecommunications industry, which makes money from charging for phone calls and SMS messages.

I remember receiving the downvote brigade[1] when Moxie himself said that I should trust WhatsApp without having the source code or the ability to put it on my device.

We (even a "smart" community like HN) clearly do not have the ability to think critically about security, and even when our leaders are sincere -- and I really don't mean to suggest Moxie/Signal was complicit in this move -- we still rush to defend our champions so quickly that we don't even think about what's going on.

However, something really important is that this might be mere incompetence: Facebook might not have any mechanism for launching this attack; they may just have thought the notification message was annoying, so they didn't display it. To that end, we need to be vigilant about stupidity as well.

Where does it end? Will we actually stop being okay with buffer overflows and sloppy programming? Or are we going to continue trying to "be safer" with "safe languages", continuing to try to solve the problem of too much code to read clearly with more code?

[1]: https://news.ycombinator.com/item?id=11669395

> when Moxie himself said that I should trust WhatsApp without having the source code and the ability to put it on my device.

What are you talking about? All I can see there is that you asked for the source code of the QR generator and he delivered. He does not say you should trust WhatsApp.

That's not what geocar asked. He didn't ask anything, actually.

Rather he pointed out that what you see in the WhatsApp UI is meaningless because you have no way of knowing that the app you're running matches the code Moxie linked, or that the code your friends are running does. Moxie replied with a link to the QR generation code but this didn't answer geocar's question, probably because there is no answer.

Here's a simple way to put it. End-to-end messaging security is (at least traditionally) assumed to be about moving the root of trust. Before, you had to trust Facebook. Now you don't. A closed-source app that can update on demand doesn't move the root of trust, and this probably doesn't match people's intuitive expectations of what it does.

Many people have pointed out similar things to what geocar has pointed out: E2E encryption is essentially meaningless if you can't verify what your computers are actually doing. Unfortunately fixing this is a hard problem.

I wrote about this issue extensively in 2015 in the context of encrypted email and bitcoin apps (where you can steal money by pushing a bad auto update):


I proposed an app architecture that would allow for flexible control over the roots of trust of programs using sandboxing techniques. Unfortunately there's been very little (no?) research into how to solve this problem, even though it's the next step in completing the journey that Signal has started.

By the way, just to make it super clear, the work Moxie has done on both Signal and WhatsApp is still excellent. It is, as I said, necessary work. But the Guardian has unfortunately not understood security well enough, nor have people from the cryptography community really helped journalists understand the limits of this approach. Nor has Facebook, I think.

> All I can see there is that you asked for the source code of the QR generator and he delivered.

Eh, I kind of agree with geocar's point in the original thread. Moxie shared source code to "a" QR generator. Is there any way to verify that this code is what's running inside of WhatsApp?

Yes. The most obvious one is to make it possible for me to build WhatsApp and install it on my phone; however, there are a lot of challenges with this.

A less obvious one is to make it possible to detect WhatsApp cheating. This isn't perfect, but someone only needs to detect it cheating once and then your name is mud.

One such way I proposed: if I can create my own key, then I can pass my public key out of band to someone who can MITM[1] themselves and verify that the message on the wire was encrypted with, and only with, my key; and I can MITM myself to verify that my device only ever sends things encrypted with my own key. Tooling for the protocol is non-existent, and despite someone claiming they could do it in 10 seconds, they never followed up with instructions on how I could do it in 10 seconds.

This also allows other, non-WhatsApp versions of the client, which may make things much more difficult for Facebook (since now they can't upgrade legitimate clients if they discover a protocol-level problem), but the Internet has some experience with protocols.

[1]: https://mitmproxy.org/

More interesting: I stand by my prediction that WhatsApp would have a backdoor in it or start selling information once Facebook acquired it, regardless of Moxie's improvements. Looks like I called it again. People need to stay away from this messenger unless they absolutely have to be on it for friends and family. Still, encourage them to download Signal for anything private.

Yup, the protocol might be secure, but the implementation might not be. Without the source code, you can only guess and hope for the best.

As many argue (e.g. tptacek), and I find myself increasingly convinced by this:

- source code can be looked at, even verified, but it's hard. (Remember the many bugs in OpenSSL, for example.)

- but binaries, too, can be disassembled, even verified. It might be harder, but it's shades of grey, not binary (ha).

- even if you have the source code, you have to ensure that the binaries actually distributed to your phone correspond to it. That muddles the issue further.
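The third point is what reproducible builds try to address: compile the app from the published source and check that the bytes match what was distributed. A toy sketch of just the comparison step, with stand-in file names and contents (real mobile builds need the toolchain to normalize timestamps, signatures, etc. before digests can ever match):

```shell
# Toy reproducible-build check (hypothetical file names): the store
# download and your own from-source build should be byte-identical,
# which we approximate by comparing SHA-256 digests.
printf 'release-bytes' > store.apk   # stand-in for the store download
printf 'release-bytes' > local.apk   # stand-in for your own build
a=$(sha256sum store.apk | cut -d' ' -f1)
b=$(sha256sum local.apk | cut -d' ' -f1)
if [ "$a" = "$b" ]; then echo "builds match"; else echo "builds differ"; fi
```

If the digests differ you still don't know *what* differs, only that the distributed binary is not the code you audited, which is exactly the gap the bullet list above describes.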

It's always hard to tell whether aggressive down voting is real people or digital marketing campaign driven.

I'd go further and say Moxie is complicit by way of negligence. It's unethical to assist in the implementation of your protocol when you can't guarantee its privacy protections will actually hold. Otherwise it's free PR for Facebook to tout "Snowden-approved crypto".

I have no doubt Moxie acted in good faith and wanted to expand encryption to a large number of users, but this is just another example of why proprietary software cannot be trusted.

Any and all proprietary implementations of the Signal protocol are now suspect. OWS should denounce these implementations at least as firmly as they do interoperable open source Signal client forks.

On a completely unconnected note, what was the name of that technique that GCHQ uses to disrupt online forums and subtly undermine people's reputations?

> On a completely unconnected note

You are not being sincere. You are implying that GP is a paid troll of spooks.

> OWS should denounce these implementations at least as firmly as they do open source Signal client forks.

They don't. Moxie does not want the forks to use his servers or the name of his app, that is all.

Well since the server for Signal is closed source, the signal client forks are pretty much useless (correct me if I’m wrong)?

The server for text messaging is open source; only calls and other stuff are closed.

> Moxie is complicit by way of negligence.

I just want to voice my opinion that maybe 1 in 100 people have Moxie's integrity and ethics.

One in a billion... he's the most ethical hacker and political actor I've encountered...

An error in judgment (naiveté) and integrity are not mutually exclusive.

> I'd go further and say Moxie is complicit by way of negligence

Your "further" stance is not supported by the evidence. You might disagree with the design choices, but they're not negligence or "complicity". Moxie answered, in the other thread, that

a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement. I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."


The vulnerability was found, published and reported without source and before your previous comment.

The vulnerability was found and published in April 2016, btw.


Not sure if I understood you well: do you imply that Moxie was involved in creating this backdoor?


Sorry, misread.

No, I do not mean to imply that at all.

That is why I said I really don't mean to suggest Moxie/Signal was complicit in this move

Some more background:

This was presented in the lightning talks at 33c3, starting around minute 48: https://media.ccc.de/v/33c3-8089-lightning_talks_day_4

Here's the congress wiki with some more links: https://events.ccc.de/congress/2016/wiki/Lightning:A_Backdoo...

And a blogpost: https://tobi.rocks/2016/04/whats-app-retransmission-vulnerab...

Thank you. The last link should be the source (a note to moderator).

It's news that Facebook still hasn't fixed it (and they're saying they won't fix it).

What do you call a known vulnerability that can be used for eavesdropping that a company refuses to fix?

1) A mistake

2) A bug

3) A backdoor

4) A deliberate UX trade-off that, while clearly suboptimal for the kind of people who read HN, still leaves WhatsApp's massive base of everyday users in a much better position than they were prior to integrating the Signal protocol: immune to the more mundane threats of passive mass surveillance and the exfiltration of message history from Facebook's servers.

Nicely put. Anyone who is so shocked, shocked about this is well advised to install Signal, Threema, or Wire, and furthermore to employ defence in depth.

The key part is this; it was apparently reported back in April 2016, with Facebook replying that it's "expected behavior". It's not something a general attacker can do, but it would enable WhatsApp/Facebook to read conversations:

> WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

It's worth noting, as the article says, that this is built on top of the Signal protocol. In Signal, a similar situation, where a user's key changes while offline, results in failure of delivery. Within WhatsApp, under Settings > Account > Security, there is an option to Show Security Notifications, which will notify you if a user's key has changed.

According to the article, however, the notification is given only after the messages are resent. There is seemingly nothing the user can do to prevent retransmission on a forced key change. The notification lets them avoid sending further information, but undelivered messages could still be snooped on.
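The behavioral difference described above can be modeled in a few lines. This is a hypothetical sketch, not WhatsApp's or Signal's actual code; all names and the `handle_key_change` function are invented for illustration.

```python
# Toy model of how the two clients (as described in the article) handle a
# recipient key change while messages are still undelivered.

def handle_key_change(client_policy, pending_messages, new_recipient_key):
    """Return (messages_sent, user_warned) under the given policy."""
    if client_policy == "signal":
        # Signal: block delivery, surface the key change, wait for the user.
        return [], True
    if client_policy == "whatsapp":
        # WhatsApp (as reported): silently re-encrypt everything still marked
        # undelivered and resend; the notification, if enabled, comes after.
        resent = [(msg, new_recipient_key) for msg in pending_messages]
        return resent, False
    raise ValueError(client_policy)

pending = ["hi, are you there?", "meet at 6pm"]

sent, warned = handle_key_change("whatsapp", pending, "attacker-key")
assert len(sent) == 2 and not warned   # messages leave before any warning

sent, warned = handle_key_change("signal", pending, "attacker-key")
assert sent == [] and warned           # nothing leaves until the user decides
```

The entire issue reduces to which branch the client takes: the second one makes the key-change notification arrive only after the damage is done.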

Sure, this could certainly leak some information, but it's hard to argue that this is a "backdoor".

A way of exploiting it is talked about in the article which would effectively allow Facebook/Whatsapp, or whoever is coercing them, to read all messages sent by a particular device.

It makes WhatsApp effectively not E2E encrypted. All messages can be recovered by Facebook. How is that NOT a backdoor?

No, all messages cannot be recovered by Facebook. Read the article - messages that are not yet delivered can potentially be read; if it has been delivered it cannot be retrieved.

You go read the article. The deciding factor is not whether the message has actually been delivered, but whether WhatsApp's servers report to the device that it has been delivered. There's nothing stopping them from claiming that no messages have been delivered and thus recovering all messages (as long as they had been preselected for false delivery reports), regardless of the true delivery status.

> but whether WhatsApp servers report to the device that the message has been delivered

It is hard to check what WhatsApp does, but in Signal it is not the server but the recipient who sends the delivery receipt. WhatsApp then has to either recognize encrypted receipts or allow only a one-way conversation during the attack. Carrying out the whole attack just to decrypt "hi, are you here?" is not really interesting.

The delivery receipt is the message sent immediately after the message has been delivered. It's not too hard to distinguish those from other text messages.

So they can recover the messages, right? However, wouldn't these messages still be encrypted? Sure, they force a key change, and the messages are encrypted using the new key and sent. Theoretically, an attacker could have multiple copies of the same message, but these messages would still be encrypted under a variety of different keys right? Wouldn't the content of the messages still be secure?

Unless the key-change forces the user to be using an insecure key-pair, but is that actually happening?

New encryption (public) key is selected by the attacker, so he knows the decryption (private) key. Basically attacker just puts real device offline and registers his own device.

Wouldn't the attacker need to be authenticated as the user of the real device for this to work?

All messages, sent while a person is offline. It is bad, but not nearly as bad as "all messages"

It is in fact all messages. They can simply not deliver the first message, force a resend, and record that message. Afterwards, they force another resend with the old encryption key and deliver that message. No one would get a notification.

I can see how you would leave the receiver in the dark by sending them the original, deferred message, but how would asking the sender's device to resend with a different key not result in a notification?

Furthermore, as soon as the sender attempts to deliver another message to the recipient, they would get another notification (because the encryption key changed back to the real key); alternatively the attacker could continue blocking (and reading) messages to the recipient, but the lack of delivery would be noticeable.

You could escalate it into a MITM rather easily, though, by attacking both ends; but again, a key change notification should be displayed to both parties.

Assuming the closed-source app works as advertised, obviously.

Yes, you are right. But I think most people did not enable the security option so they wouldn't detect any interception of messages.

Well, what you are describing is a regular MITM attack. Unless you validate fingerprints, this is a risk with _all_ public key-based protocols.

Can't they (WhatsApp) simulate a user being offline?

I happened to have the Security Notifications on for a while now. I see the message: "X's security code has changed." pretty often. Under what circumstances does a new pair of encryption keys get generated?

One circumstance is when you put your sim card in a different phone. The new phone recognises that you already have a WhatsApp account, as it's tied to your phone number, but it doesn't have your private key, so it will generate a new pair and start exchanging the public part.

Maybe the very fact that you want to be notified of key change events got you marked as suspicious ;-)

I think when people swap sims in their device it triggers a key change

A new pair of keys is also generated when a device is wiped and restored from backup (in my experience on iOS).

I don't think this is as serious as it seems; this exploit only applies to undelivered messages, which, granted, is not great, but it is at least limited.

And any WhatsApp update could potentially include code to snoop on decrypted messages, so exploits that can only be performed from the WhatsApp server side (i.e. the example in the article about snooping on entire conversations) are not really that relevant.

Having said that, it's disappointing and they should adopt Signal's approach.

Did you miss this from the article?

> Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”

In other words, what seems like "a vulnerability that only affects some messages" could be turned into a full blown interception capability with very little change.

It is just as easy to have the clients send a copy of all generated keys to a central server for storage, no need to bother with this re-transmission subterfuge at all.

What you are seeing is not some vast conspiracy, it is a compromise made by some back-end engineer to get a front-end product manager off their ass without anyone thinking through a better UI option.

"So if a user loses their phone you are telling me that all unread messages are lost forever?"


"That won't work. Users will complain, someone will have to deal with those complaints, this just won't work."

"Ok, maybe we just push the unread messages back to the sender's phone and automatically re-send when the recipient gets a new phone."

"Sure, that works. So, about this other problem..."

In retrospect, it is fairly obvious that the sender needs some control over retransmission, but if you have never been in a situation like this, it is only because no one uses your code.

If a user loses their phone, I think they have a lot more to worry about than a few missed WhatsApp messages anyway. I don't think this is a "common sense" compromise that WhatsApp made here, especially in the context of them promising end-to-end encryption.

It's kind of like that other nonsense tech companies are doing these days: supporting U2F auth, but then requiring you to also set up SMS auth in parallel, so that "if you lose your U2F key you can go back in with the SMS".

Yeah, except that completely eliminates the point of using a U2F key in the first place, since your security would be no better than when you're just using SMS auth.

Or we can go back to "security questions", which I think most agree now are just not worth it, despite the fact that they can help users "recover their passwords".

If end-to-end encrypted messages can be intercepted through this, then WhatsApp shouldn't be offering this feature. The downside is much greater than the upside.

If I lose my phone, I expect my new phone to have proper continuity on the messages. I'd rather have that than any encryption, to be honest. I don't care if the government spies on me. I do care if something someone sent me gets lost.

Then don't use a messenger that promises end-to-end encryption. Client side encryption is all about ensuring only clients that hold private keys can read messages delivered to them.

More like, you can stop using this messenger because, guess what? It does what I want and not what you want. You move.

You should at least have a backup of your private key so you can import it on your new phone rather than having the sender re-encrypt to whatever key your new phone decides to generate.

It's possible for any message that is not marked as delivered.

All Facebook has to do is not mark messages as delivered, i.e. lying to the device, which can probably be done easily. So they could ask a device to regenerate keys and send the same message again, over and over again.

This would just result in the same encrypted message being sent over and over, albeit encrypted with different keys each time, right? The only way the content of the message would be vulnerable is if one of the new keys are insecure/compromised, unless there's something I'm missing.

I suspect so; I think the server has the ability to request that any individual message be retransmitted with a new key.

As I mentioned in my comment, any exploit that can only be performed by the server is essentially irrelevant as we already can't have perfect trust in the server.

edit: I'll respond to everyone, as I worded this poorly. What I mean is that an attack that can only be performed by Facebook/WhatsApp (depending on whether you believe they are kept separate) is mostly irrelevant, as they could always push an update to the App/Play Store that sent all the decrypted messages to their servers anyway, and we'd be none the wiser as it's all closed source. So why would they use the vulnerability when an update is far simpler and could access far more messages?

I'll concede that it's worrying if their server somehow became compromised but I'm seeing that as being highly unlikely.

I thought that was the whole point of end-to-end. That you don't need trust in the server because the messages are opaque. If this is an exploit that can be performed by a compromised server, it's very much relevant

So, just to clarify my understanding:

Basically, what we have here is a weakness in the client, namely a provision that allows the server to send the client a fresh key and ask for re-encryption and re-sending with the new key. This, in turn, would allow for a good old MITM attack if the server were to be compromised.

This re-encryption and re-sending of messages would be without intervention by the user, though a message "new key" would be displayed to the user provided they had chosen the option to display such notifications (which are disabled by default).

What's unclear to me is whether only messages that have not yet been delivered would be affected, or all.

> if the server were to be compromised.

IMO they have been since they joined Facebook.

If you're serious about security, every computer outside your control should be considered compromised, regardless of what you think about the owning company.

With Signal, I have an E2E connection where if I trust both clients, I can trust the connection. WhatsApp, however, has client code that will essentially reveal any unsent messages to the server on request. And then you just have to trust this compromised computer with any message you send.

Aren't all messages undelivered, until they are?

Hehe, yes, but the point is this:

if you had verified fingerprints with Bob and are happily chatting with him, all the messages that reached him (two tick marks in WhatsApp) are safe.

Only those that have not yet been delivered (one tick mark) would, when the server sends you a new key, be re-encrypted and re-sent.

All of this, as usual, is predicated on the client behaving as promised.

From the news article:

Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”

I frankly didn't understand what was said here.

I guess what they're suggesting is to compromise the server in such a way that it does not send "delivered" receipts for any messages anymore (even though it actually delivers them to Bob, and Bob answers, and a "normal" conversation ensues).

Then, at some point later, Eve on the compromised server could send a "oops, here's a new key, send everything undelivered again" message. Then, the client, as it is now, would just re-encrypt and re-send all those messages it deems undelivered so far (and then pop up the "key changed" message, if you had requested it in the settings).

You'd recognise the attack by seeing only single ticks on messages, even if Bob had seen them and answered.
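The attack walked through above (suppress receipts, then force a key change to trigger a bulk resend) can be captured in a toy simulation. This is purely illustrative; the class and method names are invented and no real protocol details are modeled.

```python
# Toy simulation of the transcript attack: the server forwards messages but
# withholds delivery receipts, then forces a key change so the client
# re-encrypts its whole backlog to a key the attacker controls.

class MaliciousServer:
    def __init__(self):
        self.captured = []

    def deliver(self, plaintext):
        # Forward the message to Bob normally, but never send a "delivered"
        # receipt back to Alice, so her client keeps every message in the
        # "undelivered" set (only single ticks are ever shown).
        return None

class AutoResendClient:
    def __init__(self):
        self.undelivered = []

    def send(self, server, plaintext):
        self.undelivered.append(plaintext)
        server.deliver(plaintext)  # no receipt ever comes back

    def on_key_change(self, server, new_key):
        # Auto-resend policy: re-encrypt everything unacknowledged to the
        # new key (here, one the attacker controls) and send it again.
        for msg in self.undelivered:
            server.captured.append((msg, new_key))

server, alice = MaliciousServer(), AutoResendClient()
for m in ["msg1", "msg2", "msg3"]:
    alice.send(server, m)

# Later, Eve triggers the key change; the client obligingly resends everything.
alice.on_key_change(server, "key-known-to-eve")
assert [m for m, _ in server.captured] == ["msg1", "msg2", "msg3"]
```

The telltale sign from the user's side, as noted, is that every message stays on a single tick even though Bob is clearly receiving and answering them.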

The next question would be can the server mark a message as undelivered after it's already marked it as delivered.

If it can un-deliver all messages as well as have them re-sent with a new key then there may as well be no keys.

Is the server sending the client a new key to use? Or is the server telling the client to generate a new key?

We can't have perfect trust in the server, so this feature being supported by the client breaks the security guarantees of WhatsApp's E2E encryption.

Nicely put.

> "it's not something a general attacker can do but it would enable WhatsApp/Facebook to read conversations"

So it downgrades "end-to-end encryption" to "transport layer security".

Nothing to worry about according to Gizmodo:

  > The supposed “backdoor” the Guardian is describing is
  > actually a feature working as intended, and it would
  > require significant collaboration with Facebook to be 
  > able to snoop on and intercept someone’s encrypted
  > messages, something the company is extremely unlikely
  > to do.

I, for one, certainly cannot imagine Facebook collaborating to such an extent with the government.

There's a </sarcasm> there right?

What? You must be some sort of conspiracy theorist. Just be rational and extrapolate from your beliefs: if you admit that Facebook might do this, then why not Google, AT&T, Microsoft? There would be no end to it. Basically it would mean that all businesses are spying on you and handing the information over to the government.

I have complete faith that that is untrue based upon just the history of the last 5 years.

Sarcasm usually plays out very poorly on written mediums like this, but you nailed it.

Yes, well done.

I feel bad now. I'm just highly frustrated that everyone is not actuated by the idea "if they _can_ spy on you, then they _will_".

Any appeal to morals/integrity/laws are essentially moot in this area. We have the ability to protect ourselves and we should be using it.


"They have no reason to look" is to me equivalent to "I have nothing to hide".

Both may be true, but both willfully surrender control of the situation.

> We have the ability to protect ourselves and we should be using it.

I don't doubt that the technology exists, I just doubt the ability of the average person to be able to protect themselves. As someone who works in a technical field with decent computer literacy, I still have a hard time approaching this problem. Perhaps I'm just not as literate as I think I am.

> AT&T


I thought maybe after the last decade's revelations about the security apparatus, we might be beyond calling people who are a bit paranoid about their security "conspiracy theorists".

Ah, obviously you are a Russian troll trying to hack the US telecoms industry.

Serious question: why do people believe that corporations can somehow beat government/law? The government has absolute power and will always win.

I hope you are being sarcastic.

At the risk of stating the obvious: there is real benefit to using an entirely decentralised open source comms system like Riot.im (Matrix) or Conversations (XMPP), where you can pick precisely which app to run, who to trust to build that app, who to trust to advertise your public keys, and who to host your server.

It's inevitable that big centralised services like WhatsApp or even Signal are going to be under pressure from governments to support lawful intercept; in many countries it's essentially illegal to run a communication service that can't be snooped under a court order. Multinationals like Facebook are neither going to want to break the law (as it ends up with their senior management getting arrested: https://www.theguardian.com/technology/2016/mar/01/brazil-po...) - nor pull out of those territories (given WhatsApp market penetration in Brazil is 98.5% or similar).

oh, and one other thing - there's also real value to independently published public security audits of the crypto to pick up on things like WhatsApp's retransmission 'bug', at least as of a given snapshot of the codebase. E.g. https://www.nccgroup.trust/us/our-research/matrix-olm-crypto... for Matrix or https://conversations.im/omemo/audit.pdf for OMEMO & Conversations.

Off topic, but I like how their URL spells nccgroup trust us

No matter what IM service you use: As long as they manage the public keys for their users, they will be vulnerable to exactly this problem. This isn't just WhatsApp. This applies to iMessage and Signal too.

In all cases, we rely on the word of the service provider that they don't sneak additional public keys to encrypt for into the clients and in all cases we hear that doing so would cause a message dialog to appear, but we have zero control over that as this is just an additional software functionality (yes. Signal is Open Source, but do you know whether the software you got from the App Store is the software that's on Github?)

Also imagine the confusion and warning-blindness it would cause if every time one of my friends gets a new device I'd get huge warnings telling me that public keys have changed.

This is a hard problem to solve in a user-friendly way and none of the current IM providers really solve it. Maybe Threema does it best with their multiple levels of authenticity.

As such I think it's unfair to just complain about WhatsApp here.

'As such I think it's unfair to just complain about WhatsApp here.'

I disagree. WhatsApp have a known vulnerability which they won't fix (indeed they deliberately added this vuln on top of the Signal protocol), and no denial that they have used this vulnerability in the past.

They made a big PR song and dance about this feature only to backdoor it. That deserves criticism.

Exactly -- plus, the "notify key changes" setting is off by default. When I was looking through WhatsApp on my girlfriend's phone (it's useful to know what popular applications look like to be able to help others, even if I don't use them myself) I was very surprised to learn it was off by default. That's the same as disabling certificate checking on https and hoping that the pubkey you got is valid. It took me a while to believe it was actually the default and she hadn't turned it off herself (probably by accident), but it seems to be true. I just can't imagine how people call Whatsapp encrypted when whatsapp can push a new key à la "here, go encrypt your messages to this pubkey please".

> plus, the "notify key changes" setting is off by default

that's the thing: That setting is pure placebo security-theater. There's nothing to guarantee that this setting actually causes notification on all key changes, whether it's on or off.

Knowing that we all have trouble trusting Facebook, we can assume that all this setting does is inform users when their counterpart has a new phone (which in itself is a very slight privacy issue. I might not want you to know that I have a new phone / reinstalled WhatsApp).

It won't inform users when Facebook adds another public key for analytics and it also won't inform when the NSA adds a key through their special surveillance interface Facebook built for them.

That's the issue with all IM services that manage public keys for their users and thus, my original point was that it's pointless to rage against WhatsApp alone.

Worse: Let's say they change the default due to the present outrage: Then everybody will be pleased with them while the actual backdoor remains in place.

> There's nothing to guarantee that this setting actually causes notification on all key changes, whether it's on or off.

In that case the whole thing is a "placebo security theater": you cannot know whether WA implements the encryption at all. Even if you reverse engineer it and see "oh yeah there are all the necessary functions like hashing and encrypt_otr() and stuff" you still don't know whether they're actually used. Or if they're used, is there another thing that (perhaps only sometimes) sends data via another channel?

But if we trust them to operate in good faith, like Whatsapp's users do, then it should be secure since the protocol they claim to use is secure. But then they break it with a default setting like this. Wtf.

> I disagree. WhatsApp have a known vulnerability which they won't fix (indeed they deliberately added this vuln on top of the Signal protocol)

how would you fix it without causing notification-blindness?

There's no notification blindness. If a key was changed after a message was sent, then the sender would simply be notified and they could choose to resend using the new key. This is how Signal works.

Not automatically resending messages after keys have changed might be a good start.

Isn't this a warning that only shows up when someone gets a new phone? That shouldn't cause many notifications.

The encryption protects you against others snooping on your messages in transit which is what it is meant to do.

Absolutely nothing really stops any of WhatsApp, Apple, or even Signal itself from reading your messages if they want to or are compelled to. The only way to protect yourself against the service provider is to manage public keys yourself manually, using GPG-like workflows, which have proven to be unworkable.

The trade off is do you want free and easy to use messaging which protects you from other snoopers but not the service provider/government itself or do you want much more secure systems that no one outside the technology priesthood will use.

I don't think people switch phones often enough for the warnings to be a nuisance or be ignored.

I agree that a lot of people would be very confused when they see the error, though, and while it's easy enough to explain even in layman's terms, I don't think it would help.

I think that's it's totally fair to complain about WhatsApp, since the issue mentioned is separate from the more general problem you describe; they could easily have done it the way Signal does, and I suspect they opted to do it the way the do it for the same reason they don't have the security notifications on -- they don't want to deal with the confusion.

> I don't think people switch phones often enough for the warnings to be a nuisance or be ignored.

apparently in some countries they do, and that's a reason to compromise the rest of the world…

just summarising how bizarre this excuse really is.

Make it an opt-in setting. In some countries, reliable connectivity in a situation of frequently changing devices (the more I think about it, the more contrived it sounds) might be more important than privacy, but in others it very much isn't, and the consequences of failing privacy are much worse than missing a message between swapping devices.

that's not a tradeoff you should get to make for everyone.

the error message itself (I have it on) is not at all obtrusive btw, it's a friendly yellow (like the old Google ads) small type, which a user will either ignore or get a vague sense of unease about not being secure (which is exactly correct), I don't see how this can be further confusing.

> do you know whether the software you got from the App Store is the software that's on Github?

Yes: https://whispersystems.org/blog/reproducible-android/

... minus the libraries in native code which also are considerably harder to reverse-engineer than the java parts.

Also, unless you're suspicious and actually check, you could be served a special version by the App Store that was compiled only for you and contains the required add-a-key-but-dont-show-a-popup feature.

I'm not saying that Signal and/or Google are shipping a backdoor. I'm saying that we have to trust them that they don't.

Of course, if the software or hardware on your phone is compromised, all is lost. This is not specific to Signal though.

> I'm saying that we have to trust them that they don't.

This applies to anything. It is not feasible to build and/or check all software and hardware by yourself.

As long as they manage the public keys for their users, they will be vulnerable to exactly this problem.

Indeed, the most secure way is to generate and confirm each other's keys physically. The thought occurred to me that those whom you'd want to truly communicate securely with are likely people you have met via other means already --- including in person --- and so you should already have an effectively independent channel to share keys. It seems like the level of trust you have with someone is proportional to the probability of that being true: if you've never actually met someone in person, how do you know they are who they say they are? In some sense, you could say that, how secure the communication with someone is, doesn't matter if you don't already have that relationship of trust established.
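The in-person verification described above usually boils down to both parties computing a short fingerprint of the public key they see and comparing it over the trusted channel. This is a minimal sketch of that idea; real protocols (e.g. Signal's safety numbers) use a more involved construction.

```python
# Minimal sketch of out-of-band key verification: hash the public key you
# see for a contact and compare the result in person. Keys here are dummy
# byte strings, not real key material.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the first 32 hex digits into chunks that are easy to read aloud.
    return " ".join(digest[i:i + 8] for i in range(0, 32, 8))

alice_view_of_bob = b"bob-public-key"
bob_own_key       = b"bob-public-key"
assert fingerprint(alice_view_of_bob) == fingerprint(bob_own_key)

# A MITM substituting a different key is caught by the mismatch:
eve_key = b"eve-public-key"
assert fingerprint(alice_view_of_bob) != fingerprint(eve_key)
```

The point of the comparison is precisely that it happens over a channel the server cannot influence, which is why an existing real-world relationship makes it practical.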

What about political dissidents trying to organize some event in a group where different people are brought in by others in a web of trust.

If A and B have already mutually validated each others' keys, and B and C have, then B can act as an intermediary to relay the key fingerprints.

B is acting as the key exchange here, and you have to trust that B isn't lying about C's key when it tells A.

The solution to this is to have multiple independent clients all working with the same protocol. That way it doesn't matter if an IM service handles your public keys, because if it sends different ones, it can't prevent the client from notifying the user. It simply doesn't control the client.

In general, it is the control of FB over the whatsapp client where the vulnerabilities lie.

>As long as they manage the public keys for their users, they will be vulnerable

On the other hand, as long as users are required to manage their public keys, there won't be end-to-end encryption for the masses (which WhatsApp had declared as their goal and to some degree achieved).

At least until key management and other security basics are taught in elementary school, around the time the multiplication table is taught.

I'd be curious to hear HN's thoughts on what messaging apps they use/trust.

I've tried in the past to get friends to switch over to Telegram, but there are issues, since they rolled their own encryption protocol.

I've looked into using Mumble for voice, it seems quite secure because you host it yourself, and it's open source.

There's also a good list from the EFF: https://www.eff.org/node/82654

> As such I think it's unfair to just complain about WhatsApp here.

I think it would be wrong to start complaining about other apps. We don't know of vulnerabilities in other apps. We DO know of one in WhatsApp. Let's focus on what we know and take WhatsApp to task on it instead of wasting energy on what we don't know.

From the outset I've always expected that a backdoor was present in WhatsApp. In fact, I'd be surprised if they hadn't granted themselves some special capabilities with regard to the content of the communications. Touting their end-to-end encryption has enticed many people to trust the product, sometimes with strong conviction, while giving themselves a monopoly on access to communication perceived as secure by the end users. It stands to reason that claims about the security and privacy of an end product (the WhatsApp app), no matter how lofty the goals its creator (especially a murky company like Facebook) has purportedly set out to realize, cannot be verified without the product being completely open. There is software out there like OpenSSL that is developed by PhDs and is completely open and available to anyone who wishes to validate its security, yet vulnerabilities are found years after they've been introduced into the code. Claims about WhatsApp's security/privacy are preposterous a priori.

The fact that you have a PhD in cryptography doesn't necessarily mean you know how to write secure code, especially C code. A lot of people hated OpenSSL's code quality long before Heartbleed, but it took that vuln for people to actually realize how bad it was. I can imagine a good, secure SSL library being written by somebody without a PhD, in a safer language.

I'm not sure that security is fully correlated with the degrees held by developers. It seems to have more to do with their motives. WhatsApp is owned by Facebook, which is wholly motivated by profit and the aggrandizement of Zuckerberg, not by providing secure code.

What's most interesting to me is that for all the people who complain that C is insecure, I don't see any great, proven open source crypto implementations written in the "secure" languages.

As an aside to your aside, LibreSSL is certainly more secure than OpenSSL, and it is written in C. Theo de Raadt doesn't have a PhD (though obviously he's not the only one hacking on LibreSSL).

There are plenty of crypto implementations out there. What do you mean by proven?

> Especially C code

Isn't WhatsApp an Erlang app?

Whatsapp is both a server and a client. The server might be written in Erlang, but the client (where all the end-to-end encryption happens) is written in whatever the device can run.

The device runs machine code. Client code can be written in any language which can be either compiled or interpreted to machine code.

Android devices run Java, with an option for machine code for some functionality.

The socket layer at OS-level is written in C and Erlang probably uses an SSL library written in C. Turtles all the way down!

WhatsApp uses Erlang and various other languages. I doubt the encryption code is in Erlang. Erlang can call C code.

> In fact, I'd be surprised if they hadn't granted themselves some special capabilities with regards to the content of the communications. Touting their end-to-end encryption has enticed many people to trust the product, sometimes with strong conviction, while giving themselves a monopoly on access to communication perceived as secure by the end users.

It reminds me of this PR puff piece[1] by Google, banging on about how secure their data centre was, the limited access by employees, the amazing information security team, the underfloor lasers to detect intruders, etc. while totally ignoring the elephant in the room, i.e. NSA backdoors which Google is forced to comply with and can't reveal publicly when they do so.


There is no technical way to defend against the app itself (so WhatsApp/Facebook) and the operating system (Android/Google, iOS/Apple, etc). Transitively, this means no real defense against the US government when they can invoke the "national security" card.

Also, hardware and manufacturer cannot be defended against with software-only. This means Intel, Qualcomm, FoxConn, etc. Transitively, the Chinese Government.

I see no other possibility but to trust them or not use them.

(There is a technical way to fight the OS, but it is not mature/available yet. See Intel SGX.)

What you've said here, whether true or not, is a much more general notion than the linked topic, which is a specific, documented vulnerability in an app popular largely because it is ostensibly secure.

> I've always expected that a backdoor was present in Whatsapp

Yea, and this is exactly why I never understood why Whispersystems/Moxie cooperated with Whatsapp/Facebook: it gives people a false feeling of security (communicating via Whatsapp), and Whispersystems basically facilitated/made this possible.

It was so obvious...

"Asked to comment specifically on whether Facebook/WhatsApp had accessed users’ messages and whether it had done so at the request of government agencies or other third parties, it directed the Guardian to its site that details aggregate data on government requests by country."

This is why people should try and use Signal instead of WhatsApp. You can't trust Facebook to care about your privacy.

It seems hard to expect full privacy from any company based in the US given the government's tendency to force them to allow access.

When the US government asked OWS for data on some users, all they got was the telephone number and the date of the last login.

Ahem. Don't they also have contact information? From https://whispersystems.org/blog/contact-discovery/ and lack of future follow-ups on the subject, I believe they do. Possibly, hashed or obfuscated, but still recoverable.

Which must mean either I'm misunderstanding something (e.g. things have changed since the blog post was published and relevant GitHub issues were closed), or they had not disclosed some information they have to the US government, or they (or word of mouth, retelling the story) are misinforming users about what was disclosed.

(Upd: Yes, it would be a good idea to go through Signal source code and see what exactly is sent, before making any suggestions that may look like an accusation, but... sorry, the code is quite complicated and I don't think I can figure this out quickly. I found the ContactTokenDetails class, but lost my way trying to trace its usage and how it's wrapped/encrypted/etc.)

You can look at the published court documents:


The page I linked was the full data they disclosed.


Seems that they either somehow don't have contact info (but then - how is contact discovery working?) or they had failed to comply with the court order. Or I'm really not getting something, which is also a quite possible (and quite probable) explanation.

Upd: Hmm... or maybe the user had no contacts.

I think they just don't keep the contact list. It is uploaded, but only matched against the list of subscribed users at the time of the upload and then deleted. The only downside is that if a contact joins later and does not have you in their contact list, you don't get notified, or only when you recheck your contact list.

Your link above says at the end:

For TextSecure, however, we've grown beyond the size where that remains practical, so the only thing we can do is write the server such that it --- doesn't store the transmitted contact information ---, inform the user, and give them the choice of opting out.

Oh. I've completely missed/forgot this. Thanks for pointing out.

Yes, now it's all clear - they have contacts, but only ephemerally. Good.

I often give Signal another chance, but its UI is horrible. Some messages are not delivered, some are delivered only to Signal Desktop but not to my cell phone, so I'm only notified days after the message was sent...

Signal Desktop is pretty bad, but their Android client is top-notch. One of the best messaging apps I've ever used. It integrates with SMS seamlessly, does inline replies directly from the notification, smartwatch support, Giphy support, etc

I have also had my share of delivery problems. But I'm on iOS and there is no alternative to Signal. So I ended up using iMessages most of the time, and Signal only for confidential stuff or when the recipient is on Android.

What I want is: 1) desktop/tablet and phone message delivery, with sane notifications and reliability, 2) doesn't feed all my messages to an ad company, 3) works on my non-Apple devices (otherwise iMessage would be entirely sufficient), and 4) good enough that I can get people to switch (or transparently uses SMS, so it doesn't matter).

Signal fails 1 (the desktop app is pretty bad) and 4 (too many little problems, others won't switch). I'm starting to think Slack, of all things, might be my best solution. Really, I just want ICQ with smart phone/desktop notifications, and picture/video embedding, which doesn't seem like it should be a thing I ought to have any difficulty whatsoever tracking down in 2017.

Signal is bad as explained previously, it requires Google on your phone to even work.

If you think Google is more trustworthy than Facebook, sure go ahead and just use Hangouts or whatever.

We can't have nice, safe encrypted communication when geeks push Signal onto unsuspecting users, when the real option is to keep improving Tox.Chat and Bitmessage.

I guess it's worth mentioning that people are currently working on removing the Google services dependency in Signal: https://github.com/WhisperSystems/Signal-Android/pull/5962

That is good to know, thank you for sharing that! I'll be following this and try Signal again when it should finally work on my phone :)

Looks like they are waiting for the calling portion of the app to become open source. Any ETA on that?

"Signal is bad as explained previously, it requires Google on your phone to even work.

If you think Google is more trustworthy than Facebook, sure go ahead and just use Hangouts or whatever."

Every time Signal comes up on HN people make this point (Signal is bad) as if it is true.

And every time it is exposed as bs.

A legitimate criticism is that they make it hard for people who don't want to use Play Services to use their app. As for the privacy of the messages themselves, Google really cannot interfere, unlike WhatsApp/Facebook.

There are certainly people who want to use Signal without Google services.

I don't know how legitimate a complaint it is since Moxie et al have said that they would accept a well written pull request which provides similar functionality. But this just hasn't been forthcoming.

What I dislike about Signal mentions on HN is that aggressive posters conflate a number of different issues people have with Signal - lack of federation, reliance on Google push notifications, lack of SMS support, etc - and somehow lump them in together.

[Just to be clear - I am not saying you are doing this].

I don't have a lot of skin in the game, but I am genuinely curious as to what you mean. How else other than "lump[ing] them in together", would you comprehensively criticize it?

I mean, a good thing about Signal is that it lets you chat with friends and family in a secure manner.

There are the following issues though: it doesn't federate, it relies on Google Push, it doesn't support SMS. Also, I don't like how Signal does [...]"

Is that already an invalid way to make an argument?

You're correct. That would be a fine way to make a comprehensive criticism.

I was trying to express frustration with posters who start out with a nebulous complaint like "Signal is bad and OWS is evil". If called on this they come back with "It allows Google to spy on you", if countered they come back with "it doesn't allow freedom to federate" and so on.

Rather than being a multi pronged criticism it's more like a bait and switch, with each new argument being deployed when the previous one is rendered invalid.

AFAIK, Play Services is controlled by Google and has system-level permissions, so it could easily access Signal messages post-decryption if Google wanted it to.

The JVM is also under Google's control, so they could similarly access it there? Or is that open and audited? How do I verify which JVM my device runs?

EDIT: of course the fewer attack vectors the better

That part of Android is open source, so you could in theory audit it and build it yourself. I would be surprised if any big deliberate backdoors hid there. There are large downstream projects that use this source and builds on it which potentially would notice.

The Play Services however pretty much amounts to a remote root shell open at all times. Google can remove or modify code at will, and they have been known to do it in practice for spyware removal. I can understand how an activist finds that problematic.

This makes no sense in so many ways. I suggest you read more on exactly how Signal relies on Google. It does not at all compromise the encryption protocol. Also Tox? Good luck with that.

I am currently trying out tox with a small number of friends (ok, one friend). I am curious as to what your criticism of tox is. While it seems it's still a bit new, it seems it does all that it claims to do.

It is completely unusable on mobiles because it drains bandwidth and battery.

I use it all the time on desktop. Couldn't even get the mobile app to start.

> Signal is bad as explained previously, it requires Google on your phone to even work.

only for notification delivery. The message payload is not part of the push notification.

He never said it was. Google services aren't isolated like normal apps either; as far as I know they can access other apps' data (when installing CyanogenMod, I had to install Google apps in some weird way from the bootloader because it has to change protected things on the phone). His point about requiring Google's stuff to be installed is valid, even if he wrote it thinking the payload gets sent over it.

Even if you don't install Gapps, large parts of Android/CyanogenMod are from Google as well. How does installing Gapps make the security worse?

Good question.

Stock Android does not, by inspecting network traffic, contact Google servers.

Google play services and other GApps, do, and they can be exploited in this traffic, or told by Google to activate other backdoors.

Signal with GApps: Google can know which phones, and which users, are using Signal, and that's a security vulnerability. Google can infer from their Google messaging service that notifications are sent, and have a high probability of knowing whether they are to Signal. Who talks, and when, is leaked to Google.

>Stock Android does not, by inspecting network traffic, contact Google servers.

It does to check for internet access upon connecting to wifi.


Because it's likely someone would have noticed if every AOSP phone called third party servers (or something like that). Plenty of people made Android derivatives, modded it, or just compiled it to toy with. Of course there's no guarantee without a full audit (and even with an audit, they might miss something), but I trust AOSP a lot more than a closed source app suite.

More details in "WhatsApp Retransmission Vulnerability" [1] from April last year.

[1] https://tobi.rocks/2016/04/whats-app-retransmission-vulnerab...

This should be the top post - exactly this vulnerability was announced last year April; it's just that the Guardian picked it up now (with a somewhat clickbait-y headline, to boot).

I'm going to have to come to the Guardian's defence here. We may take issue with the term "backdoor" but, for a general readership, their headline is a good summary of the issue using appropriate language.

Doesn't backdoor imply malicious intent? (Of course you could argue that a good backdoor looks like a innocuous bug or even a feature...)

Also, if I have notification of key changes enabled and verify key fingerprints, at most one exchange could be snooped without me noticing. (If notification of key changes is not enabled and key fingerprints not verified, all bets are off anyways.)

C'mon we know this already, it's not a backdoor.

This has been known and is discussed in the protocol docs and forums as the trade-off between ease of use and validation. People who want security simply verify keys and warn on key change. People who don't care as much about verifying the recipient don't know about the feature and don't use it, but they still get pretty good security, and can upgrade to verifying if they choose, all without having to re-key or change protocols/messenger apps.

They should also add a toggle that prevents the client from retransmission to unknown keys without human approval.

But if WhatsApp owns the code, they don't need a backdoor. They can simply push an update that sends a copy of the message to whatever server they like.

Which is the case with Signal as well, and this "security" feature of Google Play Services is why the developer of Signal does not want Signal to be in f-droid.org's repositories. He wants to be able to push "updates" for any future "vulnerability" onto the users of Signal.

A big reason I don't trust Signal. Every single one of these apps has some bad side; Signal's is the reliance on Google.

Using Signal on iOS here - what is the reliance on Google?

Nothing, since you don't have Google Cloud Messaging on iOS, but it's a different situation. iOS is not open source software like Android (AOSP) is. If you want, you can have all the software on your phone be open source (save perhaps for drivers), even though most people will opt to install at least the Google Play Store, which requires you to install the whole Google suite (or at least it used to when I flashed CyanogenMod).

So anyone wanting a phone with open source products on it for security reasons totally can have one on Android, but it's impossible with iPhones. Signal probably relies on Apple's equivalent of Google Cloud Messaging, but since you'll always have that on your phone anyway, it makes no difference.

Without detection? Analyzing packets and such.

This is not a backdoor. It is a vuln, and it'd be nice if it wasn't there, but this is not a backdoor.

There is no reason to assume this was "snuck in" with an intent to deceive users. Retransmission has been known and discussed repeatedly, months ago, and Facebook acknowledged it. What happened here is a choice of UX over security, specifically, choosing not to break existing WA users as they move them over to the otherwise great Signal protocol.

When a key changes, the client can (1) just keep trying, (2) notify the user, or (3) drop everything on the floor. If you want the latter, use Signal.

It would be nice if WhatsApp made 2 the default, and 3 optional. Right now 1 is the default and 2 is the option. The trick is to get the UX somewhere where normal people can do something useful with that information.

If you are at all upset about this, you are not a target WhatsApp user. It'd be nice if they changed this, but for the love of all that is good and holy, stop calling it a backdoor, because it isn't. Words mean things.

> The desire to protect people's private communication is one of the core beliefs we have at WhatsApp, and for me, it's personal. I grew up in the USSR during communist rule, and the fact that people couldn't speak freely is one of the reasons my family moved to the United States

Jan Koum and Brian Acton, founders of Whatsapp

I think it's pretty obvious that we cannot trust any messenger app that is closed source or relies on some company's service infrastructure. If it's closed source, you cannot possibly know what it does. If it relies on a company's infrastructure, it's likely to be banned by oppressive governments (and that includes most of the so-called "free world"). In frustration over my own government (Norway), I started a project last year to launch a new IM client based on the legacy TorChat protocol (https://github.com/jgaa/darkspeak). It turned out to be way more work than I expected, so it's been on hold for a few months while I spend time on some more urgent projects. However, I think p2p IM software, based on open source, over Tor (or similar technologies) is the only way to preserve privacy and confidentiality in the future.

> If it's closed source, you cannot possibly know what it does.

You can set up a wifi and try to MitM yourself and see what packets WhatsApp is sending/receiving. Then you can try to snoop on them and test. The fact that it is closed source doesn't mean you can't analyze it, it just means it's a black box that you have to carefully dissect.

You can get some idea by looking at where the packets are going, but in today's IPv4 space, most p2p packets have to transit through some public IP addresses. That means that, unless you are able to decrypt the traffic, it will be difficult to know if someone is listening in on the conversation. Also, just by looking at the packets, you will not have any means to detect back-doors, unless they are accessed while you are looking. Back-doors requested by intelligence agencies for snooping on high-value targets are likely to go undetected.

So just look at the actual code executing. Should be fairly easy to tell if there's some huge secret function in the binary.

Well, you know, when you strip the symbols from the optimized binaries, the "huge_exploit_nsa_hook()" function kind of morphs into 0x66666666 or some other seemingly random number. Besides, I knew only one programmer who could read binary dumps of a program and instantly tell what it did. That was 30 years ago, when executables were measured in kilobytes.

Fortunately there are useful tools that'll help navigate binaries, like IDA Pro. They'll produce control flow graphs in addition to letting you annotate things. I've done this in a professional capacity a few times, though I'm not remotely an expert and barely know what I'm doing.

In Java, it's even easier due to JVM restrictions. I wrote an obfuscator for .NET, but Java offers fewer capabilities in its bytecode. I even used a commercial product that had been obfuscated. The obfuscator broke something on Mono; it took about an hour to write a small script to go through the binary and fix up the broken bits so other tools would work on it.

Good call on reversing; I'd written about it in a first draft but then scratched everything and started over again. Indeed, there are some good reversing tools. Still, his point about packet analysis being incomplete (unless you happen to see an interesting event) is right. I was thinking of a simpler test to see whether things are effectively encrypted, and how resistant the protocol is to cryptanalysis.

Has anyone heard anything from Moxie Marlinspike on this? Would be interesting to hear his perspective - Open Whisper Systems helped out with the encryption.

Well there are two possible scenarios I can envisage.

  a) The issue was an oversight and simply a bug that needs
     to be fixed. The question is why FB doesn't want it
     fixed.
  b) Moxie knew that this issue existed but was NDA'ed into
     leaving it there for nefarious purposes. Now it's public
     knowledge; where do we go from here?

This exploit is not in the original Signal protocol, and was introduced by WhatsApp. Signal discards undelivered messages when the encryption key changes, WhatsApp implemented re-transmission because they think it improves usability. It does do that, and it also introduces this security risk.

It says so right in the article. Stop spreading FUD.
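To make the difference concrete, here is a toy sketch (my own illustration, not actual Signal or WhatsApp code) of the two client policies for a message that is still undelivered when the recipient's identity key changes:

```python
# Hypothetical sketch contrasting the two client policies described above.
# Function and variable names are illustrative only.

def handle_key_change(pending_messages, new_key, policy):
    """Return (resent, dropped) for undelivered messages after a key change.

    policy="signal":   discard undelivered messages and warn the sender.
    policy="whatsapp": silently re-encrypt to the new key and resend --
                       whoever controls the new key can now read them.
    """
    if policy == "signal":
        return [], list(pending_messages)           # nothing is re-sent
    elif policy == "whatsapp":
        resent = [(m, new_key) for m in pending_messages]
        return resent, []                           # all re-sent to new_key
    raise ValueError("unknown policy: %s" % policy)

# If the "new key" actually belongs to a server-controlled device,
# the retransmit policy hands over the plaintext of pending messages:
resent, dropped = handle_key_change(["hi", "secret"], "attacker-key", "whatsapp")
```

The usability upside is the same mechanism: with an honestly re-registered phone, the retransmit policy means no messages are lost in transit.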

Moxie endorsed Whatsapp, though. We view Moxie as a trusted actor, so either he is untrustworthy which would SUCK or he didn't know that they did this.

If one of those were fact, I would guess "a".

It doesn't matter whether you use WhatsApp, Facebook Messenger "Secret Conversations" or even the Signal app (or PGP or any public-key-based communications system)!

If you are not verifying key fingerprints out of band, then you are potentially vulnerable to a malicious server MITMing new sessions.

If you want secure end-to-end messaging, verify keys out of band, do not solely trust a 3rd party for key exchange!

And you have to verify the software is using those verified keys for every message you send.
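As a rough illustration of what out-of-band verification buys you, here is a toy fingerprint derivation (loosely modelled on Signal's "safety numbers"; the real derivation uses iterated hashing and differs in detail, so treat this purely as a sketch):

```python
# Toy sketch: derive a short, human-comparable fingerprint from the two
# parties' public keys. If a server substitutes its own key in the
# exchange, the strings the two users read to each other won't match.
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    # Sort so both parties compute the same value regardless of order.
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).digest()
    # Render a few bytes as groups of digits suitable for reading aloud.
    groups = [str(int.from_bytes(digest[i:i + 2], "big") % 100000).zfill(5)
              for i in range(0, 12, 2)]
    return " ".join(groups)

# Both ends arrive at the same string only when they hold each other's
# real keys; a MITM'ed session produces a mismatch.
assert fingerprint(b"alice-pub", b"bob-pub") == fingerprint(b"bob-pub", b"alice-pub")
```

The point of the thread stands: the comparison has to happen over a channel the server doesn't control (in person, over the phone, etc.), otherwise the MITM can lie about the fingerprint too.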

Is anyone surprised? Facebook owns them, and Facebook has been in the back pocket of the intelligence agencies for at least half a decade.

Are there some landmark issues around this assertion? We know Yahoo backdoored email, and their head of security resigned as it happened behind his back. This doesn't mean agencies are successful at coercing every company by default.

Does anybody seriously still doubt that all the main US tech/communication products all have backdoors?

I think the shock here is that FB considers this a usability feature rather than a vuln. I can see both sides. If you want real security use something else. If you want privacy but not from state actors or companies, use WhatsApp.

The biggest security issue in WhatsApp is the backups, especially the cloud backups, not the protocol or this so-called "backdoor" itself. Pictures are not encrypted in backups, and the encryption keys for backups are stored on WhatsApp's side, which might or might not (???) have access to your cloud backups on Google Drive and iCloud. If a government (USA?) gets access to one of your or your friends' backups and the encryption key, it can see the whole conversation. This is for me the weakest point of WhatsApp.

I've noticed this as well. Do they even encrypt the backups they upload to Google Drive, and if so, with what key? If they use one key, then the advantages of perfect forward and future secrecy that the double-ratchet protocol provides are lost.

Messages and media backed up to Google Drive are "not protected by WhatsApp end-to-end encryption while in Google Drive" according the app.


You replied to me about a month ago concerned about my health when I was dealing with issues with my wife: https://news.ycombinator.com/item?id=13039203

I couldn't reply to that as it is too old, but wanted to tell you that we talked and we ended up parting ways. While things are still in a turmoil, it seems like some kind of window opened and hope is out there again.

Just wanted to thank you for the concern showed then.

Happy new year!!


Aside: anyone know why Facebook backs up on Google? That always struck me as strange.

"why don't you use whatsapp now that it has built in encryption like your Signal?"


I've used signal for quite a while but went back to Threema because the messages were delayed too often.

What are opinions about Matrix (matrix.org) used with the Riot client?

This combo checks all the boxes that Signal checks (including the Olm ratchet, a close relative of the Signal ratchet), and adds :

- decentralization (run your own server)

- no need to disclose your phone number

> Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”

Actually, it is not that easy. The Signal protocol [0] does not have any inherent delivery notification; it is implemented in the application [1]. If an attacker wants to deliver messages two-way without delivering receipts, it has to recognize them somehow. Of course you can try to guess, say by not delivering the first message sent back after each delivery, but that seems too unreliable for a backdoor.

[0] https://whispersystems.org/docs/specifications/doubleratchet...

[1] https://support.whispersystems.org/hc/en-us/articles/2125355...
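A toy illustration of why suppressing receipts is hard (my own sketch, not protocol code): because receipts travel inside the encrypted channel, the server sees only opaque envelopes with no plaintext type field, so it cannot reliably tell a delivery receipt from an ordinary reply.

```python
# Sketch: to the relaying server, an application-level delivery receipt
# and a normal reply are indistinguishable ciphertext blobs. The random
# prefix stands in for the fresh keys/nonces a real ratchet would use,
# so even identical plaintexts don't produce matching envelopes.
import hashlib
import os

def encrypt(payload: str) -> bytes:
    # Stand-in for real encryption; output is opaque to the server.
    return hashlib.sha256(os.urandom(16) + payload.encode()).digest()

outbound = [encrypt("receipt:msg-1"), encrypt("thanks!")]
# Both envelopes look like the same-sized opaque bytes to the server:
assert len(outbound[0]) == len(outbound[1]) == 32
```

So a malicious server that wants the sender never to see the double tick has to drop envelopes blind, which risks dropping real replies and tipping the users off.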

> in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.

That quote sounds even more alarming to me than the description of the backdoor. Because, as I read it: the unencrypted message is not stored on the device, but somewhere else. How else would they be able to still deliver a message, using a new encryption key, even after the sender switched to a new phone?

> Boelter reported the backdoor vulnerability to Facebook in April 2016, but was told that Facebook was aware of the issue, that it was “expected behaviour” and wasn’t being actively worked on. The Guardian has verified the backdoor still exists.

This is really damning on the part of Facebook and WhatsApp! How could they just brush this off as "expected behaviour" that wasn't being actively worked on? I guess their priorities are where a social media company like Facebook would have them be: making more avenues to monetize usage.

The initial response from the WhatsApp spokesperson is just PR speak, and really terrible for a response (until the direct question came up and another statement was issued).

It's sad that Signal and Open Whisper Systems are being dragged in here, because many people may just look at the headline, probably skim the beginning of the article a little bit and assume that the OWS implementation is the culprit or that OWS is somehow complicit in this.

Use Signal. Get everyone around you to use it. Seriously. Facebook is a for-profit that gets all of its money from ads (just like Google), would you seriously expect them to protect your privacy?

Why do we sit here and argue about whether people should use WhatsApp or Signal? It's Facebook. How can we talk about Facebook as a serious candidate for private end to end messaging when they're one of the world's biggest data brokers? Why wouldn't you just use Signal and recommend it to everyone?

I'm not a crypto guy, but I'm trying to understand how this backdoor could be used by governments or WhatsApp/Facebook itself. I'm not entirely sure how such an attack based on this backdoor would work.

The article says that WhatsApp servers have the ability to trigger the clients to generate new keys, but even with new keys how can the server read the messages at all? Has the server got a copy of the new generated keys?

Probably there is something big I'm missing.

I'm trying to understand this same thing. I don't see why triggering a client to generate new keys is a problem. Giving the client keys to use is a problem, but that's not what it's saying.

Edit: it is described much better here: https://tobi.rocks/2016/04/whats-app-retransmission-vulnerab...

The idea is that in addition to the keys being regenerated, the recipient phone is spoofed (a key point not mentioned). So the FBI could tell the Whatsapp company to generate a fake recipient phone and connect the sender phone to that phone instead.
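The flow as described can be sketched step by step (a hypothetical pseudo-flow of my own, not real WhatsApp internals; names are illustrative):

```python
# Hypothetical MITM flow: the server announces a "new device" key it
# actually controls, and the sender's client re-encrypts and resends
# pending messages to it, by default without warning the sender.

def mitm_flow(pending, auto_retransmit=True, notify_sender=False):
    attacker_key = "key-controlled-by-server"   # fake "recipient phone"
    log = ["server: recipient re-registered with " + attacker_key]
    if auto_retransmit:
        for m in pending:
            log.append("client: re-encrypted %r to %s" % (m, attacker_key))
    if not notify_sender:
        log.append("client: no key-change warning shown (default)")
    return log
```

With the "show security notifications" setting enabled, the last step changes and the sender at least sees that a key change happened, after the fact.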

Thank you, now I understand.

I had a conversation about whatsapp capabilities recently with an assistant state AG. This person debunked the notion that whatsapp is secure from government snooping and further intoned that you don't even need a FISA court to provide a warrant to get to the target's information. Any judge can issue the warrant for a line tap and the target would never be the wiser as they are sealed in secrecy.

For everyone who is (rightfully) upset about this: turn your anger into action, donate to people who are actually concerned about your privacy and who are taking action to defend it. I suggest OpenWhisperSystems:


It's a typical example of CONVENIENCE.

It's convenient to re-send the message.

No one serious of privacy would ever use Facebook / WhatsApp.

So the title is clickbait. The decision behind re-sending is based purely on convenience and cost-benefit analysis.

Actually I think they should display a notification / popup / warning whatever.

Without it being open source, who can assure us that messages aren't always also encrypted to a second, backdoor key?

I can't even easily see a hash of my key, so how do I know whether it has changed? It's pretty easy to have a feature that only shows some of the key changes and not all of them.

Even if it is open-source you still can't be sure unless you build the app yourself. Otherwise there's no way to know whether the source code you're reading is really the same code that's running on your phone.

How do I know that my Android phone doesn't have a backdoor keylogging everything that I type and uploading it to Google/NSA each night?

I haven't rooted this device and installed Wireshark on it, but even if I did, it could just not send anything while Wireshark is logging. Or it could be that Wireshark doesn't see everything. Or I just wouldn't notice, as there are many packets going back and forth between my phone and Google.

I suppose I could install Cyanogen and not install Gapps. But then, how do you know that Cyanogen isn't compromised?

Life's too short. Facebook Messenger is convenient and most of my friends use it, so I go with it. I just assume that all of my communication, and more seriously my location data for the last few years, is logged with the intelligence agencies.

"Life's too short to worry about privacy" is precisely the kind of attitude that normalizes increasingly invasive surveillance and inadvertently feeds into the desire of companies to glean as much information as they can from their users' data. Why does the convenience of Facebook messenger have to come at the cost of privacy?

I think it's an appropriate response to criticise a company for implementing what can only be generously interpreted as a bug, if not a backdoor, and dismissing concerns when it was pointed out to them, all the while making specious claims about being secure and lulling its users into a false sense of security. Public outrage is a powerful tool in ensuring that companies don't get too adventurous in spying on their users for fear of getting caught and called out on it.

At the risk of raising the spectre of authoritarianism, I think the folks who held on to their religious beliefs in countries that enforce/d a particular religion (or no religion), or secretly organised protests against communist regimes would gape in disbelief at the choices of the current generation to use always-on digital assistant devices, communication tools and social media platforms that have been shown to be linked with government surveillance programs. Sure, your government may be democratic and benevolent at present, but what would stop an authoritarian President from using troves of already collected data to purge the country of its "dissidents"? It's not a far-fetched concept - Why do the UK fire and rescue authorities need access to the browsing history of citizens [1]? It will be all too easy for a government with all kinds of data on its citizens to establish a "citizen value" score [2] and optimize access to healthcare and other services based on it. Just the possibility of such a dystopian future should be a cause for concern on our willingness to exchange privacy for convenience.

[1] - http://www.ibtimes.co.uk/big-brother-watching-you-every-orga...

[2] - http://www.independent.co.uk/news/world/asia/china-surveilla...

> […] In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability -- one that they would be forced to push onto the user's device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.

[1] https://www.schneier.com/blog/archives/2016/03/possible_gove...

In countries like Mexico, carriers do not charge for data used by FB and WhatsApp; they offer it as free social networking. I am sure the government is behind such goodwill to users from big companies. You get free communication in exchange for your privacy.

This headline is not the article headline. It's not a small change either. There is a huge difference between:

"WhatsApp vulnerability allows snooping on encrypted messages"


"WhatsApp backdoor allows snooping on encrypted messages"

Facebook 100% reads your "encrypted" WhatsApp messages. I had a conversation with someone about a very unusual topic on WhatsApp; 5 minutes later I see remarketing ads on Facebook about the same topic.

Well, if anyone is surprised by this... you really shouldn't have been.

I still use it. Lock in effect. But I never would have trusted their encryption nearly enough to send anything sensitive.

If only sensitive stuff is encrypted, encryption becomes suspicious.

Add "you don't have something to hide, right?" to using encryption only for sensitive stuff and you've got a 1984 sequel where encryption is banned or must contain backdoors.

Some of my friends refuse to use LINE, claiming WhatsApp is totally secure and LINE is really insecure.

If my messages are going to be read I would rather they be full of stickers.

I love LINE.

Look, there's no defense against the company WhatsApp itself. They are managing the public key infrastructure AND the message forwarding infrastructure.

The clients are not verifying the keys independent of WhatsApp. If WhatsApp have to (pushed by governments) or want to (FB advertising enrichment) they can always MITM conversations.

The question is whether others can read the data in transit - and the answer is still no.

This is why you should never trust proprietary secure messaging solutions that offer you both the client and the channel.

The future of trusted secure messaging will be open source, auditable, independent non-native clients that connect and send over third party message channels independently.

See https://www.seecret.io

I feel like this shouldn't surprise me, but I had a lot more faith than was probably warranted in the guys behind WhatsApp. Part of it was I was so impressed with their backend tech, I just felt these were people like me that had similar cares and concerns that I do, including security, privacy, and performance. So when they implemented the Signal protocol, it was like a sign that I really had been right to trust them.

This is a sad day, because BILLIONS of people use WhatsApp. I wish I could get everyone to convert to Signal, but as I travel around the world WhatsApp is the most used way to communicate with people. Just today I added two additional local contacts to my WhatsApp so I could communicate here with them.

I wish I had a clearer understanding of the incentives here. Is this pure government strong-arm style coercion with NSLs, or is this intentional malfeasance on the part of executive management hoping to data mine for their own profits? Is it an innocent mistake? The technical talent was there to do this right, and they flubbed it anyway. WhatsApp implementing the Signal protocol was one of our great hopes for having legitimate worldwide secure communications in the hands of everyone in the coming decade. Now it's all lost...


"Is this pure government strong-arm style coercion with NSLs, or is this intentional malfeasance on the part of executive management hoping to data mine for their own profits?"

Facebook is a surveillance company that sells profiles and/or data to 3rd parties for money. They own WhatsApp. That gives us a probable answer. Far as general case, the Core Secrets leaks indicate they both bribe companies & the FBI "compels" those that resist to "SIGINT-enable" the systems under "FISA" authority. The Yahoo case also indicated they fine companies enough to put them out of business. So, they can fine companies or possibly jail their executives if they don't put the backdoor in. It's also always secret with likely excuses that it's classified matter of national security, part of ongoing investigations, etc.

I am flagging this article, as the headline and first few paragraphs are very misleading, based on my understanding from: https://tobi.rocks/2016/04/whats-app-retransmission-vulnerab...

They make it sound like an intentional backdoor has been introduced to WhatsApp to facilitate monitoring.

Rather, it seems like there's a weakness in the implementation, where if a message is undelivered, an attacker could trick the sender's client into sending the undelivered message to a new key they control.

That does seem like a weakness, but not an intentional backdoor as the article initially led me to believe. I could see how someone would trade off ease of use and message delivery with security and make that call.
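The retransmission behaviour described above can be sketched roughly like this. Everything here is illustrative (the function names, the two "policies", the stand-in `encrypt`), not the actual WhatsApp or Signal code:

```python
# Illustrative sketch of the key-change trade-off; all names and the
# "policies" are hypothetical, not the real WhatsApp or Signal APIs.

class UntrustedIdentityError(Exception):
    """The recipient's key changed; the sender must approve before resending."""

def encrypt(message, key):
    # Stand-in for real encryption: just pair the plaintext with the key used.
    return (message, key)

def on_key_change(pending_messages, new_key, policy):
    """Handle undelivered messages when the recipient's identity key changes."""
    if policy == "auto-resend":
        # Reported WhatsApp behaviour: silently re-encrypt to the new key and
        # resend; the sender is warned (if at all) only afterwards, and only
        # if security notifications are switched on.
        return [encrypt(m, new_key) for m in pending_messages]
    if policy == "block":
        # Signal-style behaviour: refuse to resend anything until the sender
        # explicitly verifies and approves the new key.
        raise UntrustedIdentityError(new_key)
    raise ValueError(f"unknown policy: {policy}")
```

Under "auto-resend", a server that controls key distribution can register a new key for an offline recipient and collect the re-encrypted backlog; under "block", the same move just stalls delivery until the sender notices.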

Yes, it could be a subtle backdoor (with limited exploitation), and yes, open source clients would be great. But real end users use WhatsApp to encrypt their private messages on a scale never before achieved, because of the usability tradeoffs they've made. I think we should bear that in mind before describing any implementation tradeoff as a 'backdoor'.

Agree, and I also think it's likely this is an intentional trade-off: Alice sends Bob a message but Bob's phone is broken, so he gets a new one. The message is marked as not delivered. Since Bob's old keys are lost, WhatsApp needs to generate new ones. The trade-off here allows the client to accept new keys transparently in this scenario.

Not ideal from a security perspective but what would be the alternative? Bob meeting Alice so they can compare fingerprints? Bob sending Alice a PGP signed message?

>what would be the alternative

Alice getting a warning about key mismatch and a prompt for redelivery (or not) of the pending message. Bob-with-new-phone does not get to read Alice's messages to Bob without Alice at least having the ability to verify that Bob indeed changed phones. Yes, 99% of users will click "redeliver" without checking, but the ones for whom secrecy matters won't.

I think this is how Signal does it, and it is the only security conscious way to do it.

Yes, I personally prefer that too. But I understand if facebook decided this is too much to ask for their users.

Yeah, I've seen this behaviour with Signal. Its UI is somewhat confusing though; it took me a while to understand that I needed to re-generate keys so that I could "fix" my conversation. There was no redelivery of the messages that couldn't be decrypted though.

Could do it like Signal - always alert the user when a key changes, and not resend messages until the user approves it (having checked the key fingerprint via some other channel, if so inclined).

As it stands now, you have to trust both the client app on your phone, and the WhatsApp server. The idea with e2e encryption was that you only have to trust the client app on your phone.

I completely agree with you, and I am going to add that this scenario shows how difficult it is to make a one-to-one messaging system completely secure AND easy to use at the same time.

What if the originating client (Alice) was responsible for re-encrypting undelivered messages?

> They make it sound like an intentional backdoor has been introduced to WhatsApp to facilitate monitoring. Rather, it seems like there's a weakness in the implementation

If I wanted to install an intentional backdoor, I would do my best to make it look like merely a weakness in the implementation.

Therefore any time we see an implementation weakness it should be reported as an intentional backdoor? Is this really what we want?

What would we then say if we got proof that they actually put an intentional backdoor in? That's clearly a much more serious scenario (if the vendor is surreptitiously working against you, you are much more screwed than the one bug you've found), and it would be nice to be able to communicate it.

I thought that was why we had a word like 'backdoor' vs 'security bug'.

Sure, but this is now conspiracy theory logic that is impossible to disprove.

> subtle backdoor (with limited exploitation)

oh, please.

> The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent.

Surely this is backwards. It's the recipient who is notified about key changes when the relevant setting is enabled.

The complaint here seems similar to complaining ssl is insecure because the certificate authorities can create certificates at will.

WhatsApp can't do this without leaving traces, and if they did it on any larger scale, rather than only targeting people who don't bother to look for the signs, someone would be bound to find out.

Who would have guessed..

I think it is worth changing the behavior of the client to fix this. At time of sending the recipient's key is known -- there should be no circumstances where the message is re-encrypted for a different recipient without the sender's explicit involvement...

Seeing as we're on the topic of encrypted comms, anyone have an analysis/critique of SpiderOak's "Semaphor"?


Does the safety numbers verification do anything against this, or can they bypass that as well?

So this was on the front page with 1302 points at time of writing, and now it's nowhere to be found...

Is there a quirk with HN's algorithm that I'm not aware of, or is there something else afoot? A mass-flagging? A manual take-down of sorts?

Doesn't this mean that only subsequent messages can be decrypted? i.e. Whatsapp has provided forward secrecy (as long as they haven't been using this trick from the initial secrets that were set up)?

Pretty sure that is the case. The key is changed while you're offline, making any unsent messages use the new key that they know.

But if they can change the key while you're offline that means they can change the key and know everything from that point on.

Though you would get the key-change-notification (if you had enabled it, overriding the default), and could then verify fingerprints via some other channel.
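Verifying via another channel boils down to both parties computing a short digest of the key material and comparing it out of band. A toy version (the hashing and formatting here are illustrative; Signal's real safety numbers are derived differently):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short, human-comparable digest of a public key (toy scheme)."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Take the first 16 hex chars and group them for reading aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

# Alice and Bob each compute the fingerprint of the key their client shows
# and compare the strings in person or over a phone call; a mismatch means
# someone (a MITM, or the server itself) substituted a key.
```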

The moral of the story is, don't exchange messages electronically if you are expecting privacy.

The only real private way of exchanging information is face-to-face in a private place.

How does one protect oneself against this?

Wait... you mean key management is hard to get right with a large and distributed userbase? Who knew?

Doesn't this mean that only unsent messages are vulnerable, as they are sent with the new key?

Yeah, really pretty much confirms what everyone already believed.

I can't believe that so many apparently security-conscious people accepted WhatsApp as being OK. For years we've known and been told that any security software must have publicly available algorithms and source code. And then all of a sudden WhatsApp was lauded for protecting users' privacy when it is itself a proprietary, closed-source program, owned by a company notorious for not respecting user privacy.

I find this hardly surprising. Somehow the USA government is very, very good at convincing companies to spy on their users.

Many governments are good at spying on their citizens, but are you implying the USG forced/compelled/convinced WhatsApp to intentionally weaken their security?

Are there any recommendations for video chat / conference calling yet? Googling around leads one to believe that WhatsApp's is the most security-minded video calling option that's widely available...

If you're not paying for it, you're the product.

I am not a security expert, but for me Moxie has lost his credibility, even though he may be one of the best crypto experts out there.
