WhatsApp Encryption Security Flaws Could Allow Snoops to Slide into Group Chats (wired.com)
362 points by uptown 6 months ago | 166 comments



Here's how WhatsApp group messaging works: membership is maintained by the server. Clients of a group retrieve membership from the server, and clients encrypt all messages they send e2e to all group members.

If someone hacks the WhatsApp server, they can obviously alter the group membership. If they add themselves to the group:

1. The attacker will not see any past messages to the group; those were e2e encrypted with keys the attacker doesn't have.

2. All group members will see that the attacker has joined. There is no way to suppress this message.

Given the alternatives, I think that's a pretty reasonable design decision, and I think this headline pretty substantially mischaracterizes the situation. I think it would be better if the server didn't have metadata visibility into group membership, but that's a largely unsolved problem, and it's unrelated to confidentiality of group messages.
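The design described above can be sketched as a toy model (all names and the `enc(...)` placeholder are illustrative assumptions, not WhatsApp's actual code; the real client uses the Signal protocol):

```python
# Toy model: server holds group membership (plaintext metadata);
# clients do the e2e fan-out encryption themselves.

class Server:
    def __init__(self):
        self.groups = {}  # group_id -> set of member names

    def members(self, group_id):
        return set(self.groups.get(group_id, set()))

    def add_member(self, group_id, who):
        # a compromised server can call this for anyone, but it holds no
        # key that decrypts the resulting messages
        self.groups.setdefault(group_id, set()).add(who)


class Client:
    def __init__(self, name, keys):
        self.name = name
        self.keys = keys          # name -> public key (verified out of band)
        self.known = {name}       # members this client has been told about

    def send(self, server, group_id, plaintext):
        ciphertexts = {}
        for member in server.members(group_id):
            if member not in self.known:
                # point 2 above: the join notice is derived client-side
                # from the membership change, so it can't be suppressed
                print(f"[{self.name}] notice: {member} joined")
                self.known.add(member)
            # point 1: each copy is encrypted to one member's key
            ciphertexts[member] = f"enc({self.keys[member]},{plaintext})"
        return ciphertexts


server = Server()
for name in ("alice", "bob"):
    server.add_member("g1", name)
keys = {"alice": "pkA", "bob": "pkB", "mallory": "pkM"}
alice = Client("alice", keys)
first = alice.send(server, "g1", "hi")          # encrypted to alice, bob only
server.add_member("g1", "mallory")              # attacker inserts herself...
out = alice.send(server, "g1", "later message") # ...and the join notice fires
```

Past ciphertexts (`first`) were never encrypted to the attacker's key, which is exactly point 1.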

In contrast, Telegram does no encryption at all for group messages, even though it advertises itself as an encrypted messenger, and even though Telegram users think that group chats are somehow secure. An attacker who compromises the Telegram server can, undetected, recover every message that was sent in the past and receive all messages transmitted in the future without anyone receiving any notification at all.

There's no way to publish an academic paper about that, though, because there's no "attack" to describe, because there's no encryption to begin with. Without a paper there will be no talks at conferences, which means there will be no inflammatory headlines like this one.

To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.


> To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.

Honestly, this paper would be fine if it were just an analysis. The shitty thing about it is rather the prepped, buzzy Wired article.

EDIT: I just noticed that Matthew Green published a blog post about this titled "Attack ...". That's really surprising :/


>EDIT: I just noticed that Matthew Green published a blog post about this titled "Attack ...". That's really surprising :/

How so? He consistently sensationalises his stuff.


> In contrast, Telegram does no encryption at all for group messages, even though it advertises itself as an encrypted messenger, and even though Telegram users think that group chats are somehow secure. An attacker who compromises the Telegram server can, undetected, recover every message that was sent in the past and receive all messages transmitted in the future without anyone receiving any notification at all.

I'm going to be honest, moxie. I'm a big fan of your work, and I basically agree with everything you've stated here as someone who works in the security industry. I don't particularly like Telegram, and I encourage use of Signal where possible. Just last week I was defending Signal on HN[1].

However, I think you shouldn't be bringing up Telegram here. The article does not mention Telegram by name, and I think that bringing it up here, as one of the developers of the Signal Protocol, distracts from your point. Holy war threads between Signal and Telegram bubble up on occasion on Hacker News, and people are basically aware of who you are. As an outside observer, bringing up Telegram in the way you did comes across as preternaturally defensive whataboutism.

I think you could have expressed your points about the security industry's disincentives (which are legitimate observations, in my opinion) without using Telegram as an example. But bringing up Telegram instantly shifts the focus away from Whatsapp, Signal and latent problems in the security industry; instead, it becomes the usual Signal vs Telegram circus. I don't think that's a particularly persuasive way to forward your points.

To reiterate: I agree with what you're saying, but I think that it's very likely your comment will be perceived in a way that you don't intend, to the detriment of persuasion.

___________________________

1. https://news.ycombinator.com/item?id=16064932


> I think you shouldn't be bringing up Telegram here. The article does not mention Telegram by name, and I think that bringing it up here, as one of the developers of the Signal Protocol, distracts from your point.

He quickly dismissed the idea that this vulnerability is a real one, and explained why. In the end it looks like a minor issue, blown out of proportion by this article.

The problem is precisely that this article does not mention Telegram even though it is in direct competition with Signal. If I didn't know better, I would assume from the article (and the paper) that Telegram is not subject to this vulnerability, and is probably "still" secure (if I thought it was before). Moxie addresses the issue, so this is not whataboutism; he just hints at what the article should have mentioned, that experts have been recommending Signal (and, after it, WhatsApp) over Telegram for ages, and that even though this recommendation could now take a hit, it probably won't budge with a vulnerability that small.

> Holy war threads between Signal and Telegram

"vim vs emacs" is a holy war; the fact that Signal is more secure than Telegram is not, when there is a consensus among experts about the question. IMHO, calling it such is misleading.


There's a fine line between being outspoken [1] about one's concerns and hammering the same points at every opportunity. Telegram's marketing is heavy with weasel words that multiple people -- journalists, tech experts, crypto experts -- have called out as probably empty posturing, and their implementation is shrouded in opacity in exactly the ways it shouldn't be. No one except laypeople believes Telegram clears the bar set by Signal, Matrix, or any of the systems OWS consulted on, but there are lots of laypeople: millions of them.

When you're in the industry, especially as a leading innovator in the industry, it's infuriating to see an inferior product being recommended, one that you can credibly suspect doesn't even deliver on its promises; but in your attempts to discredit that product, you'll sometimes come off as a crusading zealot, to the detriment of the other content you've packaged with your commentary.

There was little need to call out Telegram by name in the post, because doing so instantly re-frames the conversation, and in a forum some of the conversation will continue down that new path, as it does now. That's a mistake in this format, and it does come off as a defensive misdirect made in the heat of argument. The place to reinvigorate this criticism in light of the new revelations is one's own personal -- or even professional -- blog, where you can start off on the high ground.

[1] https://hn.algolia.com/?query=by:moxie%20telegram&sort=byDat...


I don't think the link you posted helps your case that Moxie is "hammering the same points at every opportunity". The link shows that he mentions Telegram only a few times a year, often in response to a Telegram-specific article, and the last time before this one was two years ago.

Maybe it would seem that way to someone who's religiously following what Moxie says, but that's sort of like complaining of hearing "you should charge more" too often if you're religiously following patio11.

I also think he made a valid point in his most recent post, and mentioning Telegram added valuable context to his argument.


> "vim vs emacs" is a holy war; the fact that Signal is more secure than Telegram is not, when there is a consensus among experts about the question. IMHO, calling it such is misleading.

A holy war is determined by its propensity to raise "debates of attrition" in which both sides are so unyielding they may as well be (and sometimes are) ideological. Whether or not one side has a legitimate claim to superiority over the other is entirely orthogonal; such a debate is "holy" in nature because even if that superiority existed and was demonstrated, it would not be accepted. You cannot use reasoned expertise to decompose ideological adherence.

With respect to your other point:

> The problem is precisely that this article does not mention Telegram even though it is in direct competition with Signal. If I didn't know better, I would assume from the article (and the paper) that Telegram is not subject to this vulnerability, and is probably "still" secure (if I thought it was before). Moxie addresses the issue, so this is not whataboutism; he just hints at what the article should have mentioned, that experts have been recommending Signal (and, after it, WhatsApp) over Telegram for ages, and that even though this recommendation could now take a hit, it probably won't budge with a vulnerability that small.

I would have accepted this explanation, which is far more nuanced in presentation than the one we're discussing. You added all the context that would have safely negotiated those waters; but as stated, the comment does not achieve this purpose, in my opinion.


> I think it would be better if the server didn't have metadata visibility into group membership, but that's a largely unsolved problem, and it's unrelated to confidentiality of group messages.

Nitpick: Signal solves this problem just fine¹, by treating messages to a group as simple pairwise messages, encrypted similarly to pairwise messages, and sent separately to each member of the group. Group management is all done through these e2e-encrypted messages.

¹Signal also has a group messaging bug in that the app doesn't check that someone is a member of a group before accepting their group management commands, but that is trivial to fix.
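The pairwise design (and the membership check from the footnote) could be sketched like this. Everything here is a toy assumption, not Signal's actual code; real clients wrap each delivery in a double-ratchet envelope:

```python
# Toy model: a "group" exists only inside each client's state. Every group
# message, including membership changes, is just N pairwise e2e messages,
# so the server only ever sees pairwise traffic, never a membership list.

class PairwiseClient:
    def __init__(self, name):
        self.name = name
        self.groups = {}   # group_id -> set of members (client-side only)
        self.inbox = []

    def deliver(self, sender, group_id, body):
        # in the real protocol this arrives inside a pairwise e2e envelope
        kind, payload = body
        if kind == "update-members":
            # the footnote's fix: accept membership changes only from a
            # current member (bootstrapping trusts the first sender here,
            # which is a simplification)
            if sender in self.groups.get(group_id, {sender}):
                self.groups[group_id] = set(payload)
        else:
            self.inbox.append((group_id, sender, payload))

    def send(self, peers, group_id, body):
        for member in self.groups.get(group_id, set()):
            if member != self.name:
                peers[member].deliver(self.name, group_id, body)


alice, bob = PairwiseClient("alice"), PairwiseClient("bob")
peers = {"alice": alice, "bob": bob}
alice.groups["g"] = {"alice", "bob"}
alice.send(peers, "g", ("update-members", ["alice", "bob"]))
alice.send(peers, "g", ("msg", "hello"))
```

Note that the server (not modeled) never learns which pairwise messages belong to a group; that is exactly the metadata-hiding property being discussed.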


I'm guessing that moxie has a pretty good idea of how Signal works...


That doesn't hide the group in any meaningful way. All the same-size same-time messages to the same contacts, over and over, make the group clear.
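A toy illustration of that traffic analysis (the log format is made up): pairwise envelopes of identical size sent in the same instant to a stable recipient set cluster into an obvious "group".

```python
from collections import defaultdict

# (timestamp, size, recipient) tuples, as a compromised server would see them
log = [
    (100, 512, "bob"), (100, 512, "carol"), (100, 512, "dave"),
    (250, 380, "bob"), (250, 380, "carol"), (250, 380, "dave"),
    (400, 128, "erin"),   # an unrelated 1:1 message
]

# bucket deliveries that share a timestamp and size
clusters = defaultdict(set)
for ts, size, rcpt in log:
    clusters[(ts, size)].add(rcpt)

# recipient sets that recur across bursts reveal the group
groups = [sorted(r) for r in clusters.values() if len(r) > 1]
print(groups)   # the same {bob, carol, dave} set shows up twice
```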


"I think it would be better if the server didn't have metadata visibility into group membership, but that's a largely unsolved problem, and it's unrelated to confidentiality of group messages."

Although this is true, I think in this case it's not actually related.

As long as all communication between peers is e2e encrypted, I think this situation can be solved by peers advertising the people they have invited to the group; clients can then refuse to do a key exchange with any party that was not announced beforehand.

Or the server can send new-member join messages by relaying an invitation message signed by the admin (or by whoever invited the member).
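The relayed-invitation idea could look something like this (sketch only: HMAC with a shared key stands in for a real public-key signature, and all names are made up):

```python
import hashlib
import hmac

ADMIN_KEY = b"admin-secret"   # stand-in for the admin's signing key


def sign_invite(group_id, invitee):
    """Admin signs 'invitee may join group_id'."""
    msg = f"{group_id}:{invitee}".encode()
    return hmac.new(ADMIN_KEY, msg, hashlib.sha256).hexdigest()


def verify_join(group_id, invitee, sig):
    # A client refuses to do a key exchange with a party whose join
    # message does not carry a valid admin signature -- even if a
    # compromised server injected the membership change.
    expected = hmac.new(ADMIN_KEY, f"{group_id}:{invitee}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


sig = sign_invite("g1", "carol")
assert verify_join("g1", "carol", sig)          # legitimate, admin-invited
assert not verify_join("g1", "mallory", sig)    # server-injected member
```

In a real deployment the signature would be asymmetric, so that members can verify invites without being able to forge them.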


> As long as all communication between peers is e2e encrypted, I think this situation can be solved by peers advertising the people they have invited to the group; clients can then refuse to do a key exchange with any party that was not announced beforehand.

This breaks group join links.


It does not; the admin will just announce the invite link to the group, and clients will check that whoever joins used that invite link.

Of course WhatsApp could then reuse that link, but there is already a warning about invite links in the WhatsApp help.


Details?


See https://faq.whatsapp.com/en/android/23776567/?category=52452... for the details of the feature. The identity of the new member isn't known at invite time.


It doesn't have to be, though: if you create a join link, you could also advertise its code to the other participants. When the new member joins via the invitation link, the code is recognized by everyone.

(I've been downvoted for saying that, but the solution works)


What public key do you suppose would be used to encrypt those messages without giving the server the ability to read them?


Messages are already encrypted in the group chat, but there is another problem I didn't think about (see the other comment).


As soon as the code is redeemed, the server has access to the code.


1. When you create a joining link, it creates a secret uuid.

2. This uuid is shared with the rest of the group.

3. If Alice joins the group, every uuid created is shared with Alice (except the one Alice used, if Alice joined via a link).

4. When Bob attempts to join the group via the group id, if Bob does not present a known code, Bob is refused.

5. If Bob uses a known code, Bob is accepted and everyone deletes the code.

This does not prevent divergent participant views from being created, but that is already a problem in WA anyway.
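The five steps above might be sketched like this (a toy model of one client's view; in the real scheme the uuids would travel over the existing e2e channel, not in the clear):

```python
import uuid


class GroupState:
    """Per-client view of the group and its outstanding invite codes."""

    def __init__(self, members):
        self.members = set(members)
        self.codes = set()   # secret uuids for unredeemed join links

    def create_join_link(self):
        code = uuid.uuid4().hex       # step 1: secret uuid per link
        self.codes.add(code)          # step 2: shared with the group
        return code

    def sync_codes_to(self, other, used=None):
        # step 3: a new member learns every code except the one they used
        other.codes = self.codes - ({used} if used else set())

    def try_join(self, who, code):
        if code not in self.codes:    # step 4: unknown code -> refused
            return False
        self.codes.discard(code)      # step 5: one-time use
        self.members.add(who)
        return True


alice = GroupState({"alice"})
code = alice.create_join_link()
assert not alice.try_join("mallory", "guessed-code")  # refused
assert alice.try_join("bob", code)                    # accepted once
assert not alice.try_join("mallory", code)            # code already deleted
```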


(a) It means each code can only be redeemed once, which is a bit of a usability hit. (b) It still doesn't stop a malicious server, which can try to MITM the group join and proxy messages to the client.


If this is a usability problem then remove this line:

> and everyone deletes the code

I'm not sure I understand your attack in (b); the message is encrypted to the participants, so the server cannot relay or MITM it.


> All group members will see that the attacker has joined. There is no way to suppress this message.

The only issue I see here is that a large group may not notice that a user joined; above a certain group size, people tend to ignore those messages.

However, most large groups are not going to be as privacy-sensitive, so it is not really an issue.


yup, security goes out the window in large group chats


Briefly, why is an alternative not to have the administrator sign the membership list and then let the server pass it around? I realize this means you won't be able to join directly via a URL, but you could address that either by having different types of groups, or by having the admin verify each member after they apply to join via URL.
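One way the signed-membership-list alternative could work (a sketch under assumptions: HMAC with a shared key stands in for a real asymmetric signature, and the version counter guards against the server replaying an old list):

```python
import hashlib
import hmac
import json

ADMIN_KEY = b"admin-signing-key"  # stand-in for an asymmetric key pair


def sign_roster(version, members):
    """Admin serializes and signs the full membership list."""
    blob = json.dumps({"v": version, "members": sorted(members)}).encode()
    return blob, hmac.new(ADMIN_KEY, blob, hashlib.sha256).hexdigest()


class Client:
    def __init__(self):
        self.version = 0
        self.members = set()

    def accept_roster(self, blob, sig):
        ok = hmac.compare_digest(
            sig, hmac.new(ADMIN_KEY, blob, hashlib.sha256).hexdigest())
        data = json.loads(blob)
        # reject forged lists and rollbacks to an older roster
        if not ok or data["v"] <= self.version:
            return False
        self.version, self.members = data["v"], set(data["members"])
        return True


c = Client()
blob, sig = sign_roster(1, {"alice", "bob"})
assert c.accept_roster(blob, sig)               # valid admin-signed roster
forged, _ = sign_roster(2, {"alice", "bob", "mallory"})
assert not c.accept_roster(forged, "bad-sig")   # server can't add members
```

The server is reduced to a dumb relay: it can withhold rosters, but it cannot fabricate or roll back one.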

I feel like the article could have mentioned Telegram though, and I don't see why it couldn't have been mentioned in the paper too.


Thanks for the detailed response :)

I have a few questions:

- How does group messaging in Signal work?

- Does the server also hold group metadata?

- If there is a difference, why is there a difference?


> If someone hacks the WhatsApp server, they can obviously alter the group membership.

Of course an attacker can subscribe to the conversation if they own the server, but that doesn't make it "obvious" that they can actually read message contents from that point onwards without any sort of confirmation from the chat's participants.


All the chat's participants get a notice that the new member joined. Every time they've ever joined a group, that's been the behavior. They've never gotten a "yes/no" dialog for new group members (it's hard to see how that would even work in practice). I think the behavior here is in fact pretty obvious.


The confirmation would be a participant who invited them telling the other chat participants about it cryptographically (not by user interaction). That everyone rekeys (or whatever happens in the Signal protocol to allow future messages to be read by the new participant) on the server's word alone, I would see as a flaw of placing too much trust in the server.

A notification in a busy group gets lost, and in the scenario of an attacker owning the server, they could easily time it to coincide with a busy period.


How do you know the closed-source app will show you all of the group members? Also, anything to add about Signal in relation to this paper?


A reminder that machine code is not a black box; it just takes a tad more work to look at. Closed versus open source is not relevant here.


> and even though Telegram users think that group chats are somehow secure

How do you know what Telegram users think in that regard? Assumption? The mother of all f*ckups, as they say.




> There's no way to publish an academic paper about that, though, because there's no "attack" to describe, because there's no encryption to begin with.

Then why does the Telegram FAQ state that there's "server-client encryption" for group chats? [0] "Secret Chats" supposedly even use e2e encryption. [1]

Note: I've never used Telegram (Signal does the job for me, thank you!) and I'm no coder, but your comment makes me wonder if I'm missing something here?

[0] https://telegram.org/faq#q-so-how-do-you-encrypt-data

[1] https://core.telegram.org/api/end-to-end


"server-client encryption" means that the data is only encrypted between the client and the server (i.e. the server can read all messages). "Secret Chats" are only between 2 clients and are end-to-end encrypted which means the data is encrypted the whole time between the clients, and the server can't read any of the data.


It's likely that "server-client encryption" just means TLS to the server endpoint.


Ah right, so the data on the server itself isn't encrypted at all, which obviously is quite an issue if the server would get compromised.

Now I feel kinda stupid for asking the question; I guess the whole "there is no encryption" versus the Telegram FAQ saying "we have encryption!" threw me off. What's encrypted where is obviously the most important aspect.


Now you understand part of why Telegram alarms experts so much. No competent security engineer would claim a group chat was "encrypted" because it had TLS "server/client encryption". That's a property even AIM satisfied.


Imho part of the reason for that might be that Telegram has gotten the reputation of being the "encrypted terrorist IM app" [0], especially in the mainstream media. I've seen countless news pieces which blame Telegram's encryption for making it supposedly impossible to track down terrorists [1].

In contrast, Signal and WhatsApp don't get mentioned nearly as often, at least in regards to "encryption enables terrorists!" FUD.

[0] https://www.hsdl.org/c/extremism-and-encryption-terrorists-o...

[1] http://www.smh.com.au/technology/technology-news/telegram-th...


That's an important point. I think the thing about Telegram is that it would indeed cooperate much less willingly with governments than, say, WhatsApp or FB Messenger do, since they don't have much of a relationship with Western governments. However, that says nothing about whether it is theoretically secure against hacking, which is quite a different thing.


What does Telegram have to do with WhatsApp's problems?

The reasonableness of the WhatsApp design decision is ultimately to be determined by users. If users have no insight into the design then we can hardly say they have already decided on reasonableness. At best we can say they are ambivalent. (But if that were true, why would anyone responsible for the design care about the headline?)

Whether users get their insights into the design from WhatsApp, self-directed research, the work of "security researchers" or the media is perhaps an important issue.

If WhatsApp has made the "right" decisions then one would think they would be very forthcoming in subjecting them to review by users. If so, there would be very few surprises. WhatsApp could simply point to a detailed, public, technical document they released and say, "There it is. We tested or considered this before releasing the software and informed users about the risks, however remote. As such, nothing has been "discovered" by these researchers."

But I suspect if we went looking for this information we might only find marketing. Information promoting a "new feature", group chats.

If a WhatsApp/Facebook employee or contractor joins the chat, is that considered an "attacker"? Perhaps a silly question, but it still needs to be answered, lest some "security researcher" and the media produce an undesired headline.

Anyone designing a messaging system today should be aware that some people are going to ask these types of questions, sooner or later. Millions of people will not ask them and will use the software willingly, but does that necessarily mean they do not care about these questions if someone else asks them? If yes, then headlines about "non-issues" should be of no concern.


>>To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.

I am not sure your point is really reinforced by comparing a flaw to a bigger flaw. You admitted (regarding WhatsApp) that 'it would be better if...' and that it's an 'unsolved problem'. So why not focus as a community on solving that problem, instead of comparing it to a (in your opinion) bigger problem (meaning Telegram) to even out the score?

Additionally, Telegram did not leave cryptography out of everything. You might not agree with it, it might not be secure, it might not be available in group chats, but to say they have left it out completely isn't true [1]; you know that too.

[1] https://core.telegram.org/api/end-to-end


Even if this is true, don't forget that in WhatsApp you don't get the message history when you join a group. Plus, everybody sees that someone joined. That doesn't make the problem go away, but it makes the impact much smaller than it sounds.


Regarding "everybody sees that someone joined": I can see that xxx has joined, but is that message coming from the server or the client? If it comes from the server, then the attacker could block it, right?


I have looked at such implementations, though I don't know about WhatsApp specifically: you will see the notification no matter what. That makes this not really an attack. Sure, the server can modify the participant list, but everyone will see it, and everyone who then writes to the new group chat does so willingly.

When someone joins/leaves the group, every participant generates and distributes a new key. So as a rogue participant, the keys you'll receive won't allow you to decrypt past messages.
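The rekeying behaviour described here can be sketched as a toy (real implementations rotate Signal-style sender keys; the random bytes below just show why a freshly added member can't read earlier ciphertexts):

```python
import os


class Participant:
    """Toy participant holding a per-sender key (a stand-in for a Signal
    sender key); this is illustrative, not WhatsApp's actual code."""

    def __init__(self, name):
        self.name = name
        self.sender_key = os.urandom(16)
        self.history = []   # (key, message) pairs as they were sent

    def send(self, msg):
        # each message is bound to the key current at sending time
        self.history.append((self.sender_key, msg))

    def on_membership_change(self):
        # every participant rotates its key when anyone joins or leaves
        self.sender_key = os.urandom(16)


alice = Participant("alice")
alice.send("old secret")
old_key = alice.history[0][0]

alice.on_membership_change()   # mallory was added -> everyone rekeys
new_key = alice.sender_key     # the only key mallory ever learns

# past messages were sent under the old key, which mallory never receives
assert new_key != old_key
```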

PS: this article was posted before the speaker was even done with his talk. Funny, isn't it?


I don't see how you're arguing that this isn't an attack.

However, it's clear that the severity will depend on the particular group. A group of three friends will certainly notice when someone is added to the membership. A larger group of people who don't know each other personally but trust the administrator to administer membership will have to rely on the admin both noticing and taking action, and he may be hampered in doing so by the attacker.


It's more of a UX failure than an attack, imo. If you're paranoid, you will for sure see that someone has been added to the group. If you don't care, well...


That's coming from the client because the client has received a new public key with which to encrypt messages.


It's implied that that's generated by the client when it's going through the add member routine (exchanging keys etc).


> Plus, everybody sees that someone joined.

The article states that admins can suppress messages, including the ones that report that someone has joined.


Whatsapp disagrees, from the article:

...a WhatsApp spokesperson confirmed the researchers' findings, but emphasized that no one can secretly add a new member to a group—a notification does go through that a new, unknown member has joined the group.


Well, it has to send the new group member's public key to the client, so the client will encrypt messages for them with that key. So the server HAS to notify the client of a new member if it wants to intercept any messages.


No, the notification of a new member joining is always sent. Subsequent messages might be blocked.


Or, from what I gather, previous messages could be blocked (given that the server is compromised) and then delivered later (after the new-member notice).


> They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Given that WhatsApp isn't open source and so on, controlling the WhatsApp server, controlling some part of its code, or controlling the signing keys can all compromise the privacy and encryption.

This study shouldn't cause much surprise.


You can inspect the WhatsApp binary and prevent updates. Having to trust the server is a big deal.


And that's indeed the point of end-to-end encryption: that you don't have to trust the server.


You still trust the server, unless the encryption is done with code that wasn't delivered from the server. E2E prevents your content from being stolen in a data breach or from being accessed if the server was fine when you sent a message but compromised later.


Good point. The (variously named) security code should allow you to withdraw even that trust (assuming you verify the security code and the binary on your client...), right. Or does it? If the server knows the secret, it can invisibly MITM you, right?


Unless you're independently verifying the keys in meatspace you're still trusting the server.


How easy is this, in practice? Has anyone inspected the WhatsApp binary? How much of it?


I have. It is a pain in the ass but certainly doable.

I am experienced with iOS, but honestly it is a big app so those who are familiar with Android could do this better, as I believe they actually have decompilers versus needing to read the compiled ARM code.


But the fact that it causes surprise surprises me, even though it shouldn't, for the same reasons you exposed.


I just want to point out that WhatsApp controls the WhatsApp client, so who the fuck cares about the server?


Honestly, I think half of the security community just gave up and accepted that WhatsApp is probably the best we're going to get.


I don't know anybody in the security community who would prefer WhatsApp over Signal. Of course, my sample is limited.


But that in itself is also a problem, because people outside the security community do not like Signal.

I've tried so many times to convert people, but every non-security person prefers LINE, WhatsApp or Telegram. The reasons are simple: they are fun to use, have a good user experience, and, most importantly, "my friends use it".


In the past, I've thought of installing Signal, but never did because it identifies you by your phone number, which seems like a totally unjustifiable and idiotic idea. I'd like to keep using my internet-based service even if I switch phones. There is no reason for my phone number to be known to Signal at any point.

Just now, I tried to install Signal and WhatsApp for work communication. Neither is willing to let me do anything unless I grant them permission to read my contacts.

I use WeChat. (Not for security reasons, obviously.) Somehow, despite the fact that WeChat is heavily coupled to your phone, they've realized that it makes more sense to use a WeChat account than a phone number, and that it makes no sense to prohibit users from just adding contacts manually. Adding contacts manually is actually a preferred approach! (In the form "scan QR code".)


Yes, the "my friends use it" argument greatly impedes switching to another app, even when it is better. Anyway, slowly but surely I'm expanding my circle of Signal users. The current version of Signal is actually quite polished, at least no worse than WhatsApp was the last time I used it (before the acquisition by Facebook), and I rarely hear complaints about usability anymore from people who have actually tried it.


I'm trying to push Keybase within my friend group. I've yet to try Signal.


Ditto, but you'll have more luck with Signal. They'll see the value of it more quickly. I'm currently trying to get 3 people to use Keybase.

My girlfriend/fiancee: Lives in another country, we're sharing data involving PII to deal with immigration stuffs.

My sister: We have a side project and I'd like to use the team based, private git repos.

A colleague: Same as sister, plus he's generally interested in privacy/security. And it's still been like pulling teeth to get him to do it.


Yes, in Signal you for example still cannot share one photo with multiple recipients, or share multiple photos at once. I've installed it for all my relatives and they use it anyway; just those simple things would make it a bit better for the average "share pictures of my kids" type of use.


People say things like this about Signal but tend not to acknowledge why Signal is like that. Look at how Signal handles something as basic as user profiles, then compare it to how other applications address the same problems. I'll recommend Wire alongside WhatsApp any day, but keep in mind that Wire's servers apparently have a record of every conversation that has occurred between any two Wire users (not the content, mind you, just the link).

This is why I disagree with Matthew Green, do not think we've totally figured out secure messaging yet and that they're all "so good", and think that if you're serious about privacy --- enough to have strong opinions about WhatsApp vs. Signal, for instance --- that you should use multiple messengers:

- a "tier 1" secure messaging app like Signal that makes all reasonable tradeoffs in favor of security and privacy regardless of the UX cost, used when possible and for sensitive conversations.

- a "tier 2" secure messaging app like WhatsApp or Wire as your "daily messenger".

- "tier 3" messenger applications (including email) that you use mostly to rendezvous to a real messenger application.

In this scheme you can start to understand Signal as not just a decent messenger application with best-in-class security and privacy, but also as a laboratory for future privacy enhancements to messaging.


Ok, the problems I have outlined just seem to me like a UI hurdle. What's the problem with simply making a copy of the message and dispatching it to the contacts I choose, as if I had done it manually?


The problem is the problem definition.

Signal solves messaging.

But messaging is only a small part of WhatsApp's success. WhatsApp is a social network site, where you add friends by the bucket, even if you don't plan on messaging them, and share baby pictures with a circle of family/friends.


And maybe add some "defence in depth" by splitting really sensitive ("tier 0") stuff across multiple channels/apps, such that any one breach doesn't compromise it.


Doesn't this have the potential to inform an adversary about your habits regarding those tiers? "Ah, he's back on tier 1, so this might be juicy!"


This. I try to spend way more time being the haystack to keep my needle hidden best...


> Yes, in Signal you for example still cannot share one photo

Well, you can in a group chat.

I also wish that Signal had better search capabilities, and export/import on iOS.

And also support for non-phone accounts.


You can definitely share pictures in groups in Signal.


Groups, maybe, but sometimes I don't want groups, since the people I share photos with speak different languages, for example.


You can share photos 1-to-1 in Signal too. If the other person doesn't have Signal, then you have to set your provider's MMS server in the settings, but it's not very difficult.


If so, I think you're indeed just using it to send texts... But if the recipient is also on Signal it is very similar to WhatsApp, GIFs included.


I think I did send photos to Signal groups?


Signal has the perception of being more secure simply because it isn't owned by Facebook. I will never trust a Facebook server with private messages, despite both Signal and WhatsApp having the same security features on paper.


Same with me.

I refuse to think Mark Zuckerberg

* bought it for north of USD10Bn

* made it free

* stopped all other monetization efforts (paid api gateways etc)

just to provide free messaging service to everyone.

I have two explanations:

* either he felt it was a threat to his future messaging monopoly

or

* (and this is already not a secret anymore) they wanted to feed the data into their already huge tracking and ad serving network.

Both of those are good enough reasons for me to leave as I care about healthy competition and my future privacy.

But maybe the biggest reason why is that they lied to me: they promised to be a good messaging service in exchange for a modest fee. They were profitable, and yet they sold out.


Another possibility is that Zuckerberg's plans were destroyed (or at least delayed) by an EU ruling[0], so he didn't get what he actually expected.

0. https://www.theguardian.com/business/2017/may/18/facebook-fi...


?

You mean you agree he had an evil plan and it was thwarted?


"An evil plan" might be a bit of an exaggeration, but I think he might have expected more (read more private data or ad push) from WhatsApp users than he currently can get. Of course, that's just speculation, but as you've originally mentioned, it makes no business sense for Facebook to maintain a free messaging app with no strings attached.


> despite both Signal and WhatsApp having the same security features on paper.

Well, if you don't consider "being open source" a security feature...


Do you?

If you are going to audit something for its security features, you pretty much have to start with a disassembled binary, don't you?


Purely on the fundamentals, you could also start with the published source code, build it, and see whether the output matched the binary you're auditing.

In practice, that's unlikely to work, but it could work.


The problem isn't verifying that the source was used for some particular binary, the problem is that when you read source code, the names of functions and variables will impact your understanding of what they do. If you were to take the source code and remove all comments and randomize every symbol name, then you might be okay.


That's a problem, sure, but it doesn't suggest at all that "real security analysis" means starting with an obfuscated blob and reversing it. It suggests that you're better off doing both kinds of analysis. Variable names can lead your thoughts in certain directions and make it more difficult to see certain execution paths. Obfuscated blobs won't lead your thoughts much of anywhere they weren't already inclined to go -- but they make it much more difficult to see all execution paths.


Signal for Android does offer reproducible builds of the binary now. https://signal.org/blog/reproducible-android/


Not if it's open source and has reproducible builds...


Good luck not being influenced by comments (although they can be stripped) or by how things are named. I still think you are better off starting with a binary.


Which could still be done by compiling the source then disassembling it. You lose nothing by it being open source to begin with.


Like others, I specifically picked Signal over WhatsApp due to the problem of trusting proprietary apps. I think that the UK government ministers who use WhatsApp to coordinate their actions should switch to whatever GCHQ recommends for government business – I'm sure that they haven't approved WhatsApp.


I talk to more cryptographers over WhatsApp than Signal.


I'm sure your sample is bigger than mine, so maybe that's a regional thing (I'm in Europe)? Did you notice some trends?


Just that WhatsApp can be more reliable than Signal.


[flagged]


> "PR tour"

What PR tour?

All that I remember was that he helped with the implementation and that they have a functioning system that they consider correct.

I think he and everybody else knows the limits of the security model of WhatsApp. Just because they have a working setup, does not mean they are actually using it, or have not changed it without his knowledge.

His assertion was simply that WhatsApp was capable of offering the same service as Signal.

This is from memory, but I don't remember any 'PR tour'.


It doesn't matter about preference. Many security people will tell you "WhatsApp is encrypted" as if that makes it secure. My feeling is that rather than deriding it as the false sense of security that it is, the community prefer to trust its claims and believe that it's secure.


Yes, many security people will tell you "WhatsApp is encrypted", and that is true; it is simply your assertion that this implies they believe it is secure.

But it's not just a false sense of security. Yes, everybody who understands security will have issues with all the problems we all know exist. That, however, does not change the fact that for users there was a real increase in security.

I would say it is rather disingenuous of you to claim that it is all false security just because it is not what you would consider perfect.


> it is not what you would consider perfect.

And that is exactly what I mean by security people giving up. Twenty years ago, what I think of as perfect was merely the norm. Now WhatsApp is OK and it's just me who doesn't consider it perfect.


Best I can tell, people are still publishing papers on messaging. For example, this very paper was presented at RWC. Where have people given up?


Pretty arrogant of you.

I was just at a conference where new, more secure communication systems were one of the topics, and there are many people working on new apps, improving protocols, and figuring out flaws in existing products.

Everybody there understands the security constraints of Whatsapp, and believe me they can hardly shut up about it.


Exactly. WhatsApp "saying" they use e2e encryption is itself a suspicious thing. I don't know what algorithms they are using. I don't know that they're not taking fucking screenshots of the chat messages and piping them to Zuckerberg's personal machine. Talking about encryption in a closed-source app on the App Store or Google Play (which don't let you verify binaries yourself) is like discussing how well the infotainment system works on a flight with a burning jet engine.


So the problem is WhatsApp servers can add people to groups? I don't want to be a cynic but aren't groups per se stored on a centralised server? Or the definition of a group at least. So this isn't much of a surprise?


No. Cynic or not, I'm not sure why you're assuming the research results amount to nothing!

The administrator of a group should sign the "add member" messages, and group members should only believe that a member has joined if they have such a signed message authorizing the member. You are correct in so far as this is apparently not what happens at the moment.


Apps like Threema manage groups entirely on the client. Group messages are sent to every member separately. The server doesn't know what groups you have created, or who's in them.

This comes with some usability tradeoffs of course. But it's possible.


Yeah, I'm with you on that, but mostly because I grew up on shitty chat apps that allowed glitches and wouldn't handle participants correctly. For example, in MSN you could write as the other person in a 1-to-1 chat. The first time someone did that to me, I unplugged the ethernet cord from my computer.

Anyway, these are small issues, because the attack still doesn't reveal any messages to the attacker. These are more UI/UX issues.


Hah! Meanwhile in Facebook group chats, anyone can add anyone, and they get to see the whole conversation history immediately and completely irrevocably (even if someone removes them later, they get to keep the history until their removal, and you can’t delete Facebook messages either).


Hmm, this is interesting. I would expect that members would at least have some sort of room key (or at least signed assertion) that they would need to send to a new member, to ensure that the server couldn't unilaterally add participants.


They don't, but this could easily be fixed by having the admin send a special encrypted message to the group as proof that someone was added/removed.


A malicious server admin would probably be able to just intercept and stop that. I don't know how they do group encryption, but I imagine they either have a room key (although with forward secrecy that sounds unlikely) or they do 1:M sending. In any case, since the server doesn't have the group chat keys, it sounds like clients could just check for authorization from the admin (i.e. a signed message verifying that they're the one adding the user) before accepting a new user into the chat.


> A malicious server admin would probably be able to just intercept and stop that

Which would stop anyone from being invited to the group

> or they do 1:M sending

That's what they do. When you join a group you generate a key that you distribute to all the other participants via a 1-on-1 encrypted session, you then use it to derive keys in a normal chaining-key thingy to encrypt messages to all other participants.
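That distribute-then-chain scheme can be sketched roughly like this; the HMAC constants and key sizes are my assumptions for illustration, not WhatsApp's actual wire format:

```python
import hmac
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the sender-key chain one step.

    Returns (next_chain_key, message_key). Old chain keys are thrown
    away, so obtaining today's chain key reveals nothing about keys
    used for earlier messages.
    """
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

# The initial sender key would be distributed to each group member
# over a 1-on-1 encrypted session; fixed here so the demo is reproducible.
ck = b"\x00" * 32
for i in range(3):
    ck, mk = ratchet(ck)
    print(f"message {i}: key {mk.hex()[:16]}...")
```

Each outgoing group message uses the next message key in the chain, and the chain key is immediately replaced, which is why a late joiner (or the server) can't decrypt anything sent before they received a sender key.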

> they could just check for authorization from the admin

So you mean the admin would be in on it?


> Which would stop anyone from being invited to the group

No, just the malicious server adding the malicious user.

> So you mean the admin would be in on it?

I mean WhatsApp could patch this attack vector by requiring the new member to get a signed assertion from the group admin, proving to the other members that the group admin was the person who added the user.


> I mean WhatsApp could patch this attack vector by requiring the new member to get a signed assertion from the group admin, proving to the other members that the group admin was the person who added the user.

this is related to what I was talking about, except that in my scenario the admin distributes the proof


Ah okay, I think we're talking about the same thing then.


This is a big nothing burger; you could practically infer the design from the fact that they support encrypted group chats at all. Very sensible design, IMO.


WhatsApp has made some decent decisions, even with the added complexity of being tied to a single device at a time.

WhatsApp Web is essentially a hack where any message you send through the web app is _always_ routed through your phone (which is why it needs to be connected all the time).

Earlier, you could not even view media on the web client without first downloading it on your phone. But now it looks like they've hacked it further, so that you can view E2E encrypted media in the web app without downloading it to your phone. I guess it achieves this by shipping the decryption key to the web app (just for the current session), allowing it to decrypt messages in the browser while using the phone just as a router of sorts. This is just speculation based on apparent behavior, though.


Potential solution: all participants only start sending new messages to a new participant/key if their "joined" message was signed by the chat administrator (which they can verify). The server cannot fake this signature, as it does not have the administrator's key.
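A toy sketch of that signing idea, using textbook RSA with tiny parameters purely for illustration (a real implementation would hash the message and use a vetted scheme like Ed25519; the message encoding here is made up):

```python
# Toy textbook RSA signature -- tiny numbers, illustration only.
p, q, e = 61, 53, 17
n = p * q                            # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))    # admin's private exponent

def sign(m: int) -> int:
    return pow(m, d, n)              # only the admin can compute this

def verify(m: int, sig: int) -> bool:
    return pow(sig, e, n) == m % n   # anyone with (n, e) can check

# Hypothetical "add member" control message, encoded as a small number.
add_member_msg = 1234
sig = sign(add_member_msg)

assert verify(add_member_msg, sig)   # members accept the new participant
assert not verify(4321, sig)         # a server-forged add is rejected
```

The point of the sketch: the server only ever relays (message, signature) pairs, so without `d` it cannot produce a membership change the clients will accept.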


If you are using WhatsApp, make sure you deactivate your account if you change phone numbers. I recently got a new phone number, and when I logged in, I took over a non-deactivated profile previously attached to my new number.


In case someone is wondering: the unique group link is 22 characters long, drawn from A-Z, a-z, and 0-9. You can also refresh a link; there doesn't seem to be any limit on the number of refreshes, but you won't be able to exhaust 62^22.
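For scale, a quick back-of-the-envelope check on that keyspace:

```python
import math

alphabet = 26 + 26 + 10        # A-Z, a-z, 0-9
length = 22
keyspace = alphabet ** length

print(f"{keyspace:.2e} possible links")    # ~2.7e39
print(f"~{math.log2(keyspace):.0f} bits")  # ~131 bits of entropy
```

That's well beyond brute-force range, so the practical risk with group links is sharing them carelessly, not guessing.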


I don't get why people even consider using WhatsApp for secure messaging. Everybody knows it's owned by Facebook, and everybody knows how they operate.


> and everybody knows how they operate.

Yep! Painless to use, and truly cross platform. WhatsApp operates like a charm.


As much as I hate facebook, their social messaging products (IG, messenger, whatsapp) are top-notch, truly a cut above anything else.


How do other apps prevent this? Short of everybody in the group manually adding the new participant's key, I don't see why this flaw couldn't be replicated in other chat apps.


From looking at the paper [1], the basic mitigation seems to be that the group admin that adds someone to the group sends the "add this account to the group" message to all group members end-to-end encrypted.

In Signal, apparently some malicious users that are not group members (such as a former group member) could add users to a group. In Threema and WhatsApp, a malicious server can add further users, but Threema has apparently fixed that already.

[1] https://eprint.iacr.org/2017/713.pdf


Not 100%. In Threema, a malicious server could replay an old add-this-user message after that user has been removed from the group to re-add it to the group. But it could never add new users (since add messages are encrypted by the admin). Also, the server can't tell group control messages from regular text messages, making this even harder. Replay protection has been added in the meantime.


Requiring new participants to have an invitation signed by the administrator's private key seems like it would prevent this.


Interesting post! But the idea alone is kind of creepy... Anyway, if someone new enters the group, they are usually not able to see all the previous posts...


"Usually"?


Not just usually: it has never allowed new members to see the message history.


yes you can never see the old messages


The most interesting part of this (to me at least) is that even our 'secure' messaging systems rely entirely on trusted entities. I don't think Signal is immune to this problem either, as you still need to communicate with their servers. As trust systems, WhatsApp and Signal are single points of failure.

Messaging is analogous to money in a lot of ways. Perhaps we'll see a good distributed peer to peer messaging protocol at some point in the future.


> Perhaps we'll see a good distributed peer to peer messaging protocol at some point in the future.

The Matrix protocol is actually fairly promising, as long as you only use it for Matrix<->Matrix communication. Things fall to pieces when you try to bridge it with IRC.

It's certainly more user-friendly for non-tech. folks than, say, the awkward key exchange/setups of XMPP clients, but it's still a far cry from something like Signal in terms of 'ease of use'


> awkward key exchange/setups of XMPP clients

What do you mean by that? Conversations.im establishes e2e secure session without any fingerprint approving interaction whatsoever. Of course you can manually scan barcodes for paranoid mode but that's not necessary.

For me it's a perfect balance between convenience and security. Read more at https://gultsch.de/trust.html


Hmm, I'll check this out, thanks.

I was mainly speaking to previous experiences trying to set up Conversations on one device, Pidgin on another, there not being any good way to use a consistent key for my user on both, and trying to walk non-technical family members through that fiasco. This was about 2 years ago, maybe the situation has improved!


Keep in mind that Pidgin supporting many protocols means "lowest common denominator" in several cases. For OMEMO E2E I'd recommend Gajim or (still in alpha but promising) dino.im


You can check on the progress of Omemo support for the various XMPP clients here:

* https://omemo.top/


Doesn't matrix do everything through federated servers?


There are more entities we have to trust in this matter. For example when I download whatsapp on my phone I trust that Google hasn't injected its own spyware into the binary, because there is no way for me (as an end-user who doesn't want to hack my own phone to get root access) to verify that the binary came directly from whatsapp.

As much as Jobs revolutionized the smartphone industry, this is the long-lasting legacy we will have to live with for decades (or forever?).


I'm no expert but GNU Ring https://www.ring.cx does look interesting. It doesn't give the federated goodness that Matrix does, but is truly distributed P2P using DHT. It was absorbed by GNU in 2016, so it does have that going for it.


There is Briar which is peer-to-peer and uses Tor (or even just Wifi):

https://briarproject.org/


Decentralized private messaging doesn't sound too hard. With the e2e encryption that already exists, it doesn't seem too difficult to add more servers, as you don't really need to trust them in the first place.

I could be wrong though.


To maybe clarify what dasil003 means, state-of-the-art encryption for this requires that everyone has a pair of keys, one of which they keep a secret - their Private Key, and one of which they tell all of their friends, acquaintances, everyone who might ever want to communicate with them - their Public Key.

And then if you want to talk to them, you encrypt your message with their Public Key. Then only they can decrypt it with their Private Key (assuming no one else has their Private Key).

But if a hacker / NSA agent / script kiddo pretends to be your friend and tells you their Public Key, then they'll be able to decrypt your messages with their Private Key. You might never know that they aren't your friend.

You'll have to ensure that you actually have the Public Key of your friend by some other method. For example meet them in person to exchange Public Keys, or read it out over the phone, if you know what their voice sounds like on the phone.
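As a toy illustration of that flow, here is textbook RSA with deliberately tiny numbers (real messengers use vetted libraries with proper padding and much larger keys):

```python
# Your friend generates a key pair and publishes (n, e).
p, q, e = 61, 53, 17
n = p * q                                 # public: (n, e)
d = pow(e, -1, (p - 1) * (q - 1))         # their Private Key, kept secret

message = 42                              # a message encoded as a number < n
ciphertext = pow(message, e, n)           # you encrypt with their PUBLIC key
assert pow(ciphertext, d, n) == message   # only the PRIVATE key recovers it

# If an impostor hands you *their* (n, e) instead, they can read your
# messages -- which is why you must verify whose key you actually have.
```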


Ok, so the issue is authentication of the identities. This doesn't seem to be a solved problem in any space; even HTTPS is vulnerable to this kind of attack.

It could be decentralized through pluggable identity authorities which provide the public key transfer via a secure channel. Essentially you'd use the same protocol and select what servers (WhatsApp, Signal, maybe some sort of immutable ledger elsewhere) to use.


It is solved, but the solutions all involve a mutually trusted third party.


You always need trust, how do you know you're talking to your friend?


Side channels are the only way, practically speaking. Call them and have them read off the hash of their key and verify it against the key you received from "them". Or have them send a nonce value through the system to you that you provided over a side channel.

Any time you have a central authority providing that validation, you have to trust that your "friend" hasn't fooled them.
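The "read off the hash" step might look something like this; the truncation and grouping here are arbitrary choices for the sketch (Signal, for instance, uses a standardized safety-number format instead):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Short human-readable digest of a key, meant to be read aloud on a call.
    digest = hashlib.sha256(public_key).hexdigest()[:24]
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

key_you_received = b"friend-public-key-bytes"   # handed to you by the server
key_friend_reads = b"friend-public-key-bytes"   # read out over the phone

print(fingerprint(key_you_received))
# A mismatch here means the server (or someone in between) swapped the key.
assert fingerprint(key_you_received) == fingerprint(key_friend_reads)
```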


The theory with Signal is that you might verify their identity in meatspace, or some other way. But yes, it's definitely an interesting chicken and egg problem. As far as the thought exercise goes, it might be worth limiting your scope at this point.


Threema has a system where contacts are verified using a QR code scan: https://threema.ch/en/faq/levels_expl Three dots are shown next to a contact's identity throughout the UI, so you always see the trust level.

This is much easier to explain to a non-techie than comparing security numbers or hashes.

- Red = No trust

- Yellow = You trust that the server has verified the identity

- Green = You have verified the identity yourself

(Edit: Formatting)


Opinions about giving this trust matter to dedicated certificate authorities instead, like we do with HTTPS?

Edit: Found the problem with that. Certificate authorities are supposed to actually go out and confirm that the person who requested the certificate, is the person that they say they are. This is far too much work, if you want to verify each individual person who wants to use a messenger.


That anybody uses these proprietary "chat apps" for anything baffles me.


What else are you supposed to use to communicate on a phone? Unauthenticated cleartext SMS?


Some chat system that isn't an "app", but rather, where the "app" is just an open source client for a documented, reasonably secure system.

EFF's recommendation list is being remade: https://www.eff.org/secure-messaging-scorecard

But the old one is still reachable from it.

WhatsApp is the one that got bought by facebook, which should be a red flag for those sensible.


>How to use WhatsApp on Android or iOS.

From the EFF page you have linked. :D


EFF recommending Facebook for IM?

I want my donations back.


What do you use?


"If you build a system where everything comes down to trusting the server, you might as well dispense with all the complexity and forget about end-to-end encryption," says Matthew Green, a cryptography professor at Johns Hopkins University who reviewed the Ruhr University researchers' work. "It's just a total screwup. There's no excuse."

Someone was asking about blockchains. THIS is why you use blockchains. So you don't trust just the server. Actually everything on the Web trusts the server. That's just how the Web was designed.

Now, one way to mitigate this - and also improve security in open source projects - is to implement a blockchain hosted by many organizations which can't all be compromised easily.

At Qbix, we are working on a drop-in data structure that would implement arbitrary business rules in a secure way. The nice thing is you don't need everyone to adopt it, for it to help you secure your network.

For example, some guy starts a group, so he has all the access and privileges, and he uses them to invite others and assign privileges. Then he repudiates his own privileges in the group. Now everyone can verify and be SURE that everyone has the same privileges - rules are added for different types of Messages posted on the Stream and are enforced by the blockchain.

That is how you do governance. It ain't easy but there can be packages made, of different governance types.

PS: However when you have end-to-end encryption, you don't need the blockchain to be hosted on servers. You can have the server relay messages between clients and enforce rules on the clients.

If you don't need consensus, sometimes you don't even need a blockchain! For example, with Reddit, you can just have an append-only Merkle Tree and have clients pass each other comments.
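A minimal sketch of that append-only idea, using a plain hash chain rather than a full Merkle tree:

```python
import hashlib

def append(head: bytes, entry: bytes) -> bytes:
    # Each new head commits to the entire history before it.
    return hashlib.sha256(head + entry).digest()

GENESIS = b"\x00" * 32
head = GENESIS
for comment in [b"OP: thread start", b"alice: reply", b"bob: reply"]:
    head = append(head, comment)

# Any client replaying the same comments in the same order reaches the
# same head; a fork or a silently dropped comment changes it.
replay = GENESIS
for comment in [b"OP: thread start", b"alice: reply", b"bob: reply"]:
    replay = append(replay, comment)
assert replay == head
```

Clients exchanging comments only need to compare heads to detect that someone has been shown a different version of the thread.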

But what if you wanted to expand more comments? Do you have read permission? Do you have write permission?

One way to do it is on a per-thread basis. Each thread is owned by its OP. The rules are enforced by the OP and the messages are published by the OP. So then you trust the OP to be available (online) and accept and broadcast your reply etc.

But if you have MORE THAN ONE user involved in governance of a Stream (our terminology) then you need a blockchain. At the very least, to verify there are no malicious forks of a stream.

If you want to find out more you can look in my profile (about) and email me.


This was a deliberate design decision by WhatsApp. I even remember going to a talk by a WhatsApp engineer when they announced end-to-end encryption and he spent a fair amount of time detailing the method WhatsApp had implemented to allow for adding new users to groups while allowing these new users to read old messages. So I'm pretty sure this has been known from the start. Don't use Facebook products if you care about real security.


Whatsapp never allowed new users to read message history. Never!



