That seems to fit the currently accepted understanding of the word: https://en.wikipedia.org/wiki/Backdoor_(computing)
The point about this "backdoor" business is that the WhatsApp client does not even give the user the chance to make the mistake of skipping or mis-executing validation. Instead, it makes that mistake FOR you, every time, for your convenience!
That utter failure of design, and breach of trust, enables a remote actor (the WhatsApp servers) to access secure data. So yes, it is a "backdoor".
The linked piece is hard to critique because it's borderline incoherent. The "conclusion" is simply not a conclusion, particularly this passage:
> A provider always has the ability to intercept messages as long as the user does not verify fingerprints. With WhatsApp, it is even harder to make sure, no MitM takes or took place. WhatsApp is closed source, so who can tell, if WhatsApp just displays wrong identity keys and lets the user think that everything is perfectly OK ..?
The encryption in WhatsApp, Signal, and Apple's messaging is built to protect data from others in transit, not necessarily from the service provider itself.
No system where a central service provider manages both the key infrastructure and message delivery can ever be secure against MITM by that provider unless you do manual key verification through a different channel. Signal does provide the means to do so: physically meet the person and verify fingerprints, which is good. But are you truly going to be able to explain these concepts to anyone beyond techies?
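Out-of-band verification boils down to both parties independently deriving a fingerprint from the identity key material and comparing the result over a channel the provider does not control. A minimal sketch (illustrative only; real Signal "safety numbers" use a more involved construction over both parties' identity keys and user IDs):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key.
    (Illustrative: Signal's actual safety numbers are built from
    iterated hashing over both identity keys, not a single SHA-256.)"""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into 4-character chunks so two people can read it aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Both parties compute this locally and compare over a separate
# channel (in person, over the phone). A mismatch means the key the
# server handed out is not the key the other party actually holds.
alice_view_of_bob = fingerprint(b"bob-identity-key")
bob_own_key = fingerprint(b"bob-identity-key")
assert alice_view_of_bob == bob_own_key
```

The whole point is that the comparison happens outside the provider's channel; a fingerprint the app merely displays to you is only as trustworthy as the app itself.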
For instance, if you're on some list for message interception, they can give you MITMed keys when you first log in. Or they can insert a subtle signal that tells the app on your specific phone to ignore key changes and suppress the notification, in some way you would struggle to detect (closed-source, obfuscated code), etc. They could even show you the right key if you attempt verification but use a compromised one for communication. This particular vulnerability would be a ridiculously crude way to intercept messages.
To repeat: in any system where key distribution and message distribution are centralized, there is no way to protect against the service provider, or against anyone who co-opts the service provider (e.g. with a court order). The objective of the encryption is to protect against other actors snooping on you.
WhatsApp will re-transmit messages encrypted with a key provided by WhatsApp, without ever giving the user the option to verify that key first. Even with the opt-in enabled, the message is still re-transmitted; all the opt-in ensures is that you are notified of the key change, a notification that arrives after the message has already been re-sent under the new key.
If I read the post and came up with the thesis sentence myself, it would be "WhatsApp is vulnerable to MITM attacks because it tries to automate key changes by default"
In other words, this is not a crack because the glass is already broken.
I'm so relieved.
This is flat-out wrong. There is no opt-in feature that blocks sending when the key material changes; there is only an option to be notified when it changes.
And that is precisely the problem. For certain messages (those not yet delivered), WhatsApp can force re-transmission encrypted with a key of their choosing, and no setting will block that re-transmission.
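The notify-after-resend ordering being complained about here can be sketched as follows (hypothetical function and setting names, my own; this is a model of the described behavior, not WhatsApp's actual code):

```python
def deliver_pending(pending, new_key, trusted_keys, notify_enabled):
    """Sketch of the contested client behavior: when the server
    presents a new key for the recipient, undelivered messages are
    re-encrypted and re-sent automatically. The opt-in setting only
    controls whether the user is told afterwards, never whether the
    re-send happens at all."""
    events = []
    if new_key not in trusted_keys:
        trusted_keys.add(new_key)  # trust the server-supplied key unconditionally
        for msg in pending:
            events.append(("resend", msg, new_key))  # re-encrypt under the new key
        if notify_enabled:
            events.append(("notify", "key changed"))  # arrives after the re-send
    return events
```

A blocking design would instead queue the pending messages and refuse to re-send until the user approves the new key; the thread's complaint is that no such option exists.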
Also, the vulnerability matches one scenario perfectly: a person is in custody, law enforcement cannot unlock his phone, but they can create an account on a new device with his SIM card and continue the "trusted" chats.
I install WhatsApp. How do I roll over my identity?
The way I see it, WhatsApp is delegating the task of identity verification to the network provider (admittedly a weak link for the security-conscious). But it _is_ the easiest way for the average user to continue chats on a new phone.
If the default setting were reversed, HN would stop complaining, but the other 90% of users would start.
The most 'secure' means of communication is probably a one-time pad written on paper in magic ink that you then burn, or something. There is a cost to ease of use in many cases. I wish the conversation were less about right vs. wrong, and more about which tradeoffs should be made and where to draw the line.
To expand on the example given above: if the police get your phone, they can turn it off and wait a while, so a few undelivered incoming messages accumulate. They can then simply take the SIM, put it in a new phone, and register that with WhatsApp. They can then read every message sent to you since they turned your phone off.
This backdoor cannot be exploited by third parties, only by Facebook themselves, who already have much easier ways to intercept or manipulate communication. So although I don't think WhatsApp makes the right trade-off here (people get a new phone only once every few years, so why optimize for that edge case?), I'm not concerned about the privacy implications either.
I suspect other commenters here are confused about the nature of the Signal protocol and about whom you have to trust for the system to be secure. If you used to believe that Facebook is 100% unable to intercept or tamper with WhatsApp communication, then this would be upsetting. But since they're a trusted party already, this changes nothing.
Let's not get hung up on semantics, and focus on the HARM.
The article is factually wrong on this.