"Isn't it 'weird' that they chose to block the Signal app and not the Signal-protocol-based WhatsApp?
If WhatsApp really implements the same kind of security and privacy measures that Signal does, why is WhatsApp allowed to continue operating?
If Signal is preventing them from spying on users and they ban it, isn't it safe to assume that WhatsApp is NOT preventing them from spying on users, which is why they let it operate? Wouldn't you expect WhatsApp to be targeted too, especially considering the broad user base it has compared to Signal?
Yes, I know they blocked WhatsApp in the past, but they didn't block it now. Which means that something has changed in the relationship between the Egyptian government and WhatsApp since 2015."
They don't trust WhatsApp and rely on Signal for secure messaging. Blocking Signal means they are able to target activists without impacting much of the rest of the population.
Many of the people I know who are activists in countries where they need to protect their identities use Signal.
I would never trust a closed source messaging app if I was an activist, regardless of what encryption they claim to implement.
Of course I'm not going to read the source code myself, but at least I can be reasonably sure the developers behind the app aren't opening a backdoor for someone else, because anyone could check.
Of course, for that to be feasible, the network architecture of the app must not require API keys—and so must either be purely peer-to-peer, or involve a FOSS server component that the developer can run an instance of themselves (as in the Matrix protocol.)
Of course, the counter is that publishing the code doesn't guarantee that anyone is actually checking it.
Open beats closed, but we must be careful not to think it immediately makes the code sound.
I've been thinking about this particularly recently in relation to Monzo, the soon-to-be bank. There's no web app and slow progress on the Android front. Lots of open-source effort, though, since they publish an API, but... that's my bank account I'm (not) giving open-source developers access to.
Nobody is saying it's automatically sound, but open is the only option that makes any security analysis possible.
I'm not disputing that. Let me repeat myself:
> Open beats closed
All I'm saying is that it doesn't stop there. Too often there's this complacent 'great, it's open source!' - I'm as guilty of it as anyone.
Many people are disputing that, and I'm coming around to that view. Closed doesn't mean you have nothing; it means you have the binaries, which you can disassemble and analyse. With open, you have a somewhat higher-level language, which you still have to analyse, and then you have to show that the distributed binaries actually correspond to it.
Generations of crackers and security researchers have proven that incorrect. There are plenty of tools for dealing with compiled programs.
But why do activists simply not use WhatsApp, instead of Signal? If both were supposed to be fully encrypted and secure, why not use the tool that is more widely available? I assume the need for encryption is to prevent the government snooping and eavesdropping on your plans, rather than "liking the UI/UX of one system over the other"?
Maybe the activists know something we did not, and are right to be paranoid...
In the US, everyone texts (or thinks they are using texts when running iMessage) because most plans give unlimited voice and texts, and charge by the GB of data.
Not to say it isn't both, but the price of blocking (one of) the most popular messaging apps is higher to a government than blocking one in the low low percentiles of usage.
If they blocked Signal just because it was less of a trouble to block compared to WhatsApp, then all the people that were on Signal will easily switch to WhatsApp...
What you have at this point, is a government paying the price of blocking a less popular messaging app they cannot control, while the people they are after can just switch to a MASSIVELY used messaging app the gov can also not control and additionally, is too expensive to block.
If this was the case, it would actually work against the gov. Do not underestimate gov authorities, they are not THAT naive. If they had not blocked Signal at all, they could at least track Signal users and have that information: that this small group of people (Signal users) contains the group of people they are after. They could have their honeypot there. Mixing the "dangerous" Signal userbase with the chaotic massive userbase of WhatsApp makes no sense, unless you really have WhatsApp on your side.
I hope you understand what I am trying to say.
Elected officials and political appointees demand action on things that are counter to their interests all the time, the people that execute those orders (if they appreciate that the order is counter-productive in the first place) have to decide what measures are worth fighting and which ones are not.
Some other interesting reading:
 - https://en.wikipedia.org/wiki/Steganography
 - https://en.wikipedia.org/wiki/Covert_channel
 - https://en.wikipedia.org/wiki/Traffic_analysis
 - https://en.wikipedia.org/wiki/Anonymous_remailer
The canonical image hiding stego applications are a case in point, where the applications are widely distributed and understood, but in principle (if not in practice due to steganalysis) one could know of their existence and how they work but still be unable to detect that covert communication through them was going on, nor be able to block that communication short of blocking all image posting.
Second, they need not be on any app store.
Third, any leaks about their existence, if they come at all, may come too late. As Napoleon said, it's not necessary to censor the news -- it's sufficient to delay it until it no longer matters.
 - https://en.wikipedia.org/wiki/Steganalysis
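The LSB image-hiding idea those canonical stego apps are built on fits in a few lines. This is only the naive version, purely for illustration; real tools encrypt the payload and scatter the bits, precisely because plain sequential LSB embedding is what steganalysis catches first:

```python
# Naive LSB steganography sketch: hide a message in the low bits of
# "pixel" bytes. Illustrative only -- no encryption, no bit scattering.

def embed(pixels, message):
    out = bytearray(pixels)
    # Expand the message into bits, least significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover too small for message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    # Read the low bit of each cover byte and reassemble message bytes.
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytearray(range(256)) * 4          # stand-in for raw image pixel data
secret = b"meet at noon"
stego = embed(cover, secret)
recovered = extract(stego, len(secret))    # == b"meet at noon"
```

Visually the stego bytes differ from the cover by at most 1 per byte, which is why the blocking problem in the comment above reduces to "block all image posting."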
It all depends on how far you are willing to push the blocking and how much you are willing to disable so you can block anything.
Signal is currently using domain fronting. (IIRC the app will soon test the network conditions before attempting to use domain fronting, but for now it checks the country code of your phone number.)
It opens an HTTPS connection to google.com, but after the connection is made it sends a Host header for something.appspot.com. In order to block that you need to MITM the connection or block google.com outright. (I'm not sure if DPI could be used to get at the Host header; I never really looked into it personally. I do know that SNI sends the hostname as part of the handshake so the web server knows which cert to present you with. Could it be extracted, checked against a list, and the connection then reset? Dunno, never played with it, but it's an idea off the top of my head.)
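For what it's worth, the SNI part of that idea is workable in principle, because the server name travels unencrypted in the TLS ClientHello (the Host header, by contrast, is inside the encrypted tunnel). A rough sketch of building a minimal ClientHello and pulling the SNI back out, with hand-rolled offsets per the TLS record layout; a real censor box would use a proper parser:

```python
import struct

def build_client_hello(host):
    """Build a minimal TLS 1.2-style ClientHello carrying an SNI extension."""
    # server_name extension (type 0): list of (name_type=0, len, name)
    name = struct.pack("!BH", 0, len(host)) + host
    sni_ext = struct.pack("!HHH", 0, len(name) + 2, len(name)) + name
    body = (
        b"\x03\x03" + b"\x00" * 32                 # client version + random
        + b"\x00"                                  # empty session id
        + b"\x00\x02\x13\x01"                      # one cipher suite
        + b"\x01\x00"                              # null compression only
        + struct.pack("!H", len(sni_ext)) + sni_ext
    )
    hs = b"\x01" + len(body).to_bytes(3, "big") + body   # handshake: ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(hs)) + hs  # TLS record

def extract_sni(record):
    """What a DPI box would do: walk the ClientHello to the SNI extension."""
    p = 5 + 4 + 34                                  # record + handshake hdrs, version, random
    p += 1 + record[p]                              # session id
    p += 2 + int.from_bytes(record[p:p + 2], "big") # cipher suites
    p += 1 + record[p]                              # compression methods
    p += 2                                          # extensions total length
    while p < len(record):
        etype, elen = struct.unpack("!HH", record[p:p + 4])
        if etype == 0:                              # server_name
            nlen = int.from_bytes(record[p + 7:p + 9], "big")
            return record[p + 9:p + 9 + nlen].decode()
        p += 4 + elen
    return None

# The censor sees only the fronted domain, not the appspot.com Host header:
sni = extract_sni(build_client_hello(b"google.com"))  # -> "google.com"
```

So yes, resetting connections on a blocklisted SNI is exactly feasible, which is why domain fronting puts an allowed name in the SNI and hides the real destination in the Host header.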
(Now for some mild rambling :-p)
Let's say you can't MITM/DPI, so you just block Google; then they would have to use another CDN, so you block that one too. How many are you going to go through before your citizens get pissed off at you and do something?
But let's say your people really hated Gmail anyway and put up with not having Google just so this messaging app stayed blocked (and the creators don't just change CDNs); then you just force your people to install your own root cert or they don't get any encrypted web traffic. Will people complain, or just install the cert and get their Facebook back?
So people switch to using personal networks (Bluetooth and WiFi hotspots when in a crowd of people); fine, just jam cell/2.4GHz/5GHz. Will people complain they can't use their phones?
And it just escalates to the point you need a Doctors note and a permission slip signed by your mum before you are allowed to make a phone call.
All the while, those who actually want to encrypt their messages use math they can do at a desk away from a computer or phone and just use whatever method the government does allow / they can get away with (standard SMS, though who and when can be got from the telcos; dead drops; IRL meetings), sacrificing their metadata in the process.
> The firewall searches for a bunch of bytes which identify a network connection as Tor. If these bytes are found the firewall initiates a scan of the host which is believed to be a bridge. In particular the scan is run by seemingly arbitrary Chinese computers which connect to the bridge and try to “speak Tor” to it. If this succeeds, the bridge is blocked.
Problem was, after he died it came out he was actually on the run on fraud charges. I think Jason presumes he set up the ISP as another scam, but he started it at the perfect time and ended up making legit money instead. So (again, trying to recall the talk from memory; I must watch it again as I enjoyed it) this ISP owner was having meetings with the FBI about his ISP all the while the FBI also wanted him on fraud charges. So yeah, if the FBI don't mind having chats with ISPs just to see what's going on, I wouldn't be at all surprised if China had meetings with their ISPs too. From what I have read about the GFW, it seems that its infrastructure differs from ISP to ISP. Dunno if that's because it's left to the ISP to implement, or if the government issues "black boxes" to do the firewall work and it's just different versions of hardware/software depending on when the boxes were issued.
But yeah, I do like the idea of a secret DEF CON, but kind of in reverse: one that discusses the tricks, infrastructure and bypasses discovered in the past year, in order to better run the GFW. In my imaginary con they are all still getting drunk and hacking into the hotel signage for the shits and giggles of it, though.
You're implying that WhatsApp, Inc. gave the Egyptian government the ability to remotely retrigger this backdoor whenever they want to (for those who haven't actually read the article: this backdoor only works when WhatsApp issues a key change for a conversation, and only then in certain circumstances). In other words, you imply that Egypt said "Hey WhatsApp, please actively hack into your Egyptian users' messages and send us the results" and WhatsApp said "ok sure here ya go".
It might be true, but Zuckerberg might be a FSB informant and I might be Elvis reincarnate. These are all baseless, yet not entirely implausible claims.
niksakl's point is that the go-to "probably nothing going on" or the other "WhatsApp too popular to block so we block Signal instead" explanations are just not plausible at all.
So I don't think it's entirely baseless, and with this new information, even less so.
And Egypt making such a deal with a large company: you make it sound like you believe that's implausible, but this has in fact happened before, when Egypt hired Nokia and Siemens to develop, build and implement their DPI infrastructure, later claiming "gosh, we never expected they'd actually use this to hunt down, torture and kill dissidents". Maybe governments aren't that naive, but corporations surely will try and claim to be.
No, the private hackers Govs hire were able to use an exploit to snoop on Whatsapp. That's very probable.
Either way, you need server side control of WhatsApp.
Not all speculation is inappropriate; sometimes it is the seed from which a correct conclusion ultimately grows.
It doesn't have to be THIS particular backdoor. "Why build one when you can build two at twice the price? Only, this [second] one can be kept secret."
It is more likely that the cost of blocking Signal was negligible in contrast to the benefit, while blocking WhatsApp would likely have had a huge cost - especially in a country that has only recently experienced a number of citizen-driven coups.
It is also possible that they're specifically targeting a group (Muslim Brotherhood, or Jund al Islam and other Sinai insurgency groups) that utilize Signal.
Anecdotal tidbit: I worked at the Rio 2016 Olympics. My team consisted of Brazilians, Americans, Britons, and Koreans. WhatsApp was how we communicated, I'm sure the same was true for most of the other thousands of people working setup for the Olympics.
When a power-hungry judge forced WhatsApp to be blocked a couple weeks before the opening ceremonies, it was rather problematic for the Olympics staff. My first thought was "uhhh. This isn't going to last for long," and it didn't.
I can't say for sure that it's because the IOC president called up the Brazilian president, and the Brazilian president yelled at the judge, but I like to think that's what happened.
 Integrated language translation would be a FANTASTIC feature to add.
We (even a "smart" community like HN) clearly do not have the ability to think critically about security, and even when our leaders are sincere -- and I really don't mean to suggest Moxie/Signal was complicit in this move -- we still rush to defend our champions so quickly that we don't even think about what's going on.
However, something really important is that this might be mere incompetence: Facebook might not have any mechanism for launching this attack; they may just have thought the notification message was annoying, so they didn't display it. To that end we need to be vigilant about stupidity as well.
Where does it end? Will we actually stop being okay with buffer overflows and sloppy programming? Or are we going to continue trying to "be safer" and use "safe languages" and continuing to try to solve the problem of too much code to read clearly with more code.
What are you talking about? All I can see there is that you asked for the source code of the QR generator and he delivered. He does not say you should trust WhatsApp.
Rather he pointed out that what you see in the WhatsApp UI is meaningless because you have no way of knowing that the app you're running matches the code Moxie linked, or that the code your friends are running does. Moxie replied with a link to the QR generation code but this didn't answer geocar's question, probably because there is no answer.
Here's a simple way to put it. End-to-end messaging security is assumed to be (at least traditionally) about moving the root of trust. Before you had to trust Facebook. Now you don't. A closed source app that can update on demand doesn't move the root of trust and this probably doesn't match people's intuitive expectations of what it does.
Many people have pointed out similar things to what geocar has pointed out: E2E encryption is essentially meaningless if you can't verify what your computers are actually doing. Unfortunately fixing this is a hard problem.
I wrote about this issue extensively in 2015 in the context of encrypted email and bitcoin apps (where you can steal money by pushing a bad auto update):
I proposed an app architecture that would allow for flexible control over the roots of trust of programs using sandboxing techniques. Unfortunately there's been very little (no?) research into how to solve this problem, even though it's the next step in completing the journey that Signal has started.
By the way, just to make it super clear, the work Moxie has done on both Signal and WhatsApp is still excellent. It is, as I said, necessary work. But the Guardian has unfortunately not understood security well enough, nor have people from the cryptography community really helped journalists understand the limits of this approach. Nor has Facebook, I think.
Eh, I kind of agree with geocar's point in the original thread. Moxie shared source code to "a" QR generator. Is there any way to verify that this code is what's running inside of WhatsApp?
A less obvious one is to make it possible to detect WhatsApp cheating. This isn't perfect, but someone only needs to detect it cheating once and then your name is mud.
One such way I proposed: if I can create my own key, then I can pass my public key out of band to someone who can MITM themselves and verify that the message on the wire was encrypted, and only encrypted, with my key; and I can MITM myself to verify that my device only ever sends things encrypted with my own key. Tooling for the protocol is non-existent, and despite someone claiming they could do it in 10 seconds, they never followed up with instructions on how I could do it in 10 seconds.
This also allows other non-WhatsApp versions of the client, which may make things much more difficult for Facebook (since now they can't upgrade legitimate clients if they discover a protocol-level problem), but the Internet has some experience with protocols.
- source code can be looked at, even verified, but it's hard. (Remember many bugs in OpenSSL, for example.)
- but binaries, too, can be disassembled, even verified. It might be harder, but it's shades of grey, not binary (ha).
- even if you have the source code, you have to ensure that the binaries actually distributed to your phone correspond to the source code. That muddles the issue further.
I have no doubt Moxie acted in good faith and wanted to expand encryption to a large number of users, but this is just another example of why proprietary software cannot be trusted.
Any and all proprietary implementations of the Signal protocol are now suspect. OWS should denounce these implementations at least as firmly as they do interoperable open source Signal client forks.
You are not being sincere. You are implying that GP is a paid troll of spooks.
They don't. Moxie does not want the forks to use his servers or the name of his app, that is all.
I just want to voice my opinion that maybe 1 in 100 people have Moxie's integrity and ethics.
Your "further" stance is not supported by the evidence. You might disagree with the design choices, but they're not negligence or "complicity". Moxie answered, in the other thread, that
> a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement.
> I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
That is why I said "I really don't mean to suggest Moxie/Signal was complicit in this move".
This was presented in the lightning talks at 33c3, starting around minute 48: https://media.ccc.de/v/33c3-8089-lightning_talks_day_4
Here's the congress wiki with some more links:
And a blogpost:
What do you call a known vulnerability that can be used for eavesdropping that a company refuses to fix ?
1) A mistake
2) A bug
3) A backdoor
> WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.
It's worth noting, as the article says, that this is built on top of the Signal protocol. In Signal, a similar situation with a user changing keys offline will result in failure of delivery. Within WhatsApp, under Settings > Account > Security, there is an option, Show Security Notifications, which will notify you if a user's key has changed.
It is hard to check what WhatsApp does, but in Signal it is not the server but the recipient who sends the delivery receipt. WhatsApp would then have to either recognize encrypted receipts or allow only a one-way conversation during the attack. Carrying out the whole attack just to decrypt "hi, are you here?" is not really interesting.
Unless the key-change forces the user to be using an insecure key-pair, but is that actually happening?
Furthermore, as soon as the sender attempts to deliver another message to the recipient, they would get another notification (because the encryption key changed back to the real key); alternatively the attacker could continue blocking (and reading) messages to the recipient, but the lack of delivery would be noticeable.
You could escalate it into a MITM rather easily, though, by attacking both ends; but again, a key change notification should be displayed to both parties.
Assuming the closed sourced app works as advertised, obviously.
And any WhatsApp update could potentially include code to snoop on decrypted messages so exploits that can only be performed from the WhatsApp server side - i.e the example in the article about snooping entire conversations - are not really that relevant.
Having said that, it's disappointing and they should adopt Signal's approach.
> Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
In other words, what seems like "a vulnerability that only affects some messages" could be turned into a full blown interception capability with very little change.
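The flow Boelter describes can be simulated in a few lines. Everything here is illustrative (the "encryption" is just a (key, text) pair and the class names are made up), but it shows how withholding delivery receipts plus a single key-change message hands over the whole undelivered backlog:

```python
# Toy simulation of the retransmission issue. The point is the protocol
# flow, not the crypto: a client that silently re-encrypts on key change
# leaks every message the server chose not to acknowledge.

class Client:
    def __init__(self, recipient_key):
        self.recipient_key = recipient_key
        self.undelivered = []            # messages still on a single tick

    def send(self, server, text):
        ciphertext = (self.recipient_key, text)  # stand-in for real encryption
        self.undelivered.append(text)
        server.receive(ciphertext)

    def on_key_change(self, server, new_key):
        # The behaviour at issue: re-encrypt and re-send everything
        # undelivered, without waiting for the user to approve the new key.
        self.recipient_key = new_key
        for text in self.undelivered:
            server.receive((new_key, text))

class MaliciousServer:
    def __init__(self):
        self.captured = []

    def receive(self, ciphertext):
        self.captured.append(ciphertext)  # never forward, never send receipts

server = MaliciousServer()
alice = Client(recipient_key="bob-key")
alice.send(server, "hi, are you here?")
alice.send(server, "meet at noon")

# Server withholds receipts, then announces an attacker-controlled key:
alice.on_key_change(server, "attacker-key")
readable = [text for key, text in server.captured if key == "attacker-key"]
# readable now contains both messages, re-encrypted to the attacker's key.
```

With Signal's behaviour, `on_key_change` would instead halt and surface the failure, so the backlog never gets re-encrypted automatically.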
What you are seeing is not some vast conspiracy, it is a compromise made by some back-end engineer to get a front-end product manager off their ass without anyone thinking through a better UI option.
"So if a user loses their phone you are telling me that all unread messages are lost forever?"
"That won't work. Users will complain, someone will have to deal with those complaints, this just won't work."
"Ok, maybe we just push the unread messages back to the sender's phone and automatically re-send when the recipient gets a new phone."
"Sure, that works. So, about this other problem..."
In retrospect, it is fairly obvious that the sender needs some control over re-transmission, but if you have never been in a situation like this, it is only because no one uses your code.
It's kind of like that other nonsense tech companies are doing these days: supporting U2F auth, but then requiring you to also set up SMS auth in parallel, so that "if you lose your U2F key you can get back in with the SMS".
Yeah, except that completely eliminates the point of using a U2F key in the first place, since your security would be no better than when you're just using SMS auth.
Or we can go back to "security questions", which I think most agree now are just not worth it, despite the fact that they can help users "recover their passwords".
If end-to-end encrypted messages can be intercepted through this, then WhatsApp shouldn't be offering this feature. The downside is much greater than the upside.
All Facebook has to do is not mark messages as delivered, i.e. lie to the device, which can probably be done easily. Then they could ask a device to regenerate keys and send the same message again, over and over.
edit: I'll respond to everyone, as I worded this poorly. What I mean is that an attack that can only be performed by Facebook/WhatsApp (depending on whether you believe they are kept separate) is mostly irrelevant, as they could always push an update to the App/Play Store that sent all the decrypted messages to their servers anyway, and we'd be none the wiser, as it's all closed source. So why would they choose to use the vulnerability when an update is far simpler and could access far more messages?
I'll concede that it's worrying if their server somehow became compromised but I'm seeing that as being highly unlikely.
Basically, what we have here is a weakness in the client, namely a provision that allows the server to send the client a fresh key and ask for re-encryption and re-sending with the new key. This, in turn, would allow for a good old MITM attack if the server were to be compromised.
This re-encryption and re-sending of messages would be without intervention by the user, though a message "new key" would be displayed to the user provided they had chosen the option to display such notifications (which are disabled by default).
What's unclear to me is whether only messages that have not yet been delivered would be affected, or all.
IMO they have been since they joined Facebook.
With Signal, I have an E2E connection where if I trust both clients, I can trust the connection. WhatsApp, however, has client code that will essentially reveal any unsent messages to the server on request. And then you just have to trust this compromised computer with any message you send.
If you had verified fingerprints with Bob and are happily chatting with him, all the messages that reached him (two tick marks in WhatsApp) are safe.
Only those that have not yet been delivered (one tick mark) would, when the server sends you a new key, be re-encrypted and re-sent.
All of this, as usual, is predicated on the client behaving as promised.
Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
I frankly didn't understand what was said here.
Then, at some point later, Eve on the compromised server could send a "oops, here's a new key, send everything undelivered again" message. Then, the client, as it is now, would just re-encrypt and re-send all those messages it deems undelivered so far (and then pop up the "key changed" message, if you had requested it in the settings).
You'd recognise the attack by seeing only single ticks on messages, even if Bob had seen them and answered.
If it can un-deliver all messages as well as have them re-sent with a new key then there may as well be no keys.
So it downgrades "end-to-end encryption" to "transport layer security".
> The supposed “backdoor” the Guardian is describing is
> actually a feature working as intended, and it would
> require significant collaboration with Facebook to be
> able to snoop on and intercept someone’s encrypted
> messages, something the company is extremely unlikely
> to do.
I, for one, certainly cannot imagine Facebook collaborating to such an extent with the government.
I have complete faith that that is untrue based upon just the history of the last 5 years.
Any appeal to morals/integrity/laws are essentially moot in this area. We have the ability to protect ourselves and we should be using it.
Both may be true, but both willfully surrender control of the situation.
I don't doubt that the technology exists, I just doubt the ability of the average person to be able to protect themselves. As someone who works in a technical field with decent computer literacy, I still have a hard time approaching this problem. Perhaps I'm just not as literate as I think I am.
I thought maybe after the last decade's revelations about the security apparatus, we might be beyond calling people who are a bit paranoid about their security "conspiracy theorists".
It's inevitable that big centralised services like WhatsApp or even Signal are going to be under pressure from governments to support lawful intercept; in many countries it's essentially illegal to run a communication service that can't be snooped under a court order. Multinationals like Facebook are neither going to want to break the law (as it ends up with their senior management getting arrested: https://www.theguardian.com/technology/2016/mar/01/brazil-po...) - nor pull out of those territories (given WhatsApp market penetration in Brazil is 98.5% or similar).
In all cases, we rely on the word of the service provider that they don't sneak additional public keys to encrypt for into the clients; and in all cases we hear that doing so would cause a message dialog to appear, but we have zero control over that, as this is just additional software functionality. (Yes, Signal is open source, but do you know whether the software you got from the App Store is the software that's on GitHub?)
Also imagine the confusion and warning-blindness it would cause if every time one of my friends gets a new device I'd get huge warnings telling me that public keys have changed.
This is a hard problem to solve in a user-friendly way and none of the current IM providers really solve it. Maybe Threema does it best with their multiple levels of authenticity.
As such I think it's unfair to just complain about WhatsApp here.
I disagree. WhatsApp have a known vulnerability which they won't fix (indeed they deliberately added this vuln on top of the Signal protocol), and no denial that they have used this vulnerability in the past.
They made a big PR song and dance about this feature only to backdoor it. That deserves criticism.
That's the thing: that setting is pure placebo security theater. There's nothing to guarantee that it actually causes notification on all key changes, whether it's on or off.
Knowing that we all have trouble trusting Facebook, we can assume that all this setting does is inform users when their counterpart has a new phone (which in itself is a very slight privacy issue. I might not want you to know that I have a new phone / reinstalled WhatsApp).
It won't inform users when Facebook adds another public key for analytics and it also won't inform when the NSA adds a key through their special surveillance interface Facebook built for them.
That's the issue with all IM services that manage public keys for their users and thus, my original point was that it's pointless to rage against WhatsApp alone.
Worse: Let's say they change the default due to the present outrage: Then everybody will be pleased with them while the actual backdoor remains in place.
In that case the whole thing is a "placebo security theater": you cannot know whether WA implements the encryption at all. Even if you reverse engineer it and see "oh yeah there are all the necessary functions like hashing and encrypt_otr() and stuff" you still don't know whether they're actually used. Or if they're used, is there another thing that (perhaps only sometimes) sends data via another channel?
But if we trust them to operate in good faith, like Whatsapp's users do, then it should be secure since the protocol they claim to use is secure. But then they break it with a default setting like this. Wtf.
how would you fix it without causing notification-blindness?
Absolutely nothing really stops any of WhatsApp, Apple or even Signal itself from reading your messages if they want to / are compelled to. The only way to protect yourself against the service provider is to manage public keys yourself manually, using GPG-like workflows, which have proven to be unworkable.
The trade-off is: do you want free and easy-to-use messaging that protects you from other snoopers but not from the service provider/government itself, or do you want a much more secure system that no one outside the technology priesthood will use?
I agree that a lot of people would be very confused when they see the error, though, and while it's easy enough to explain even in layman's terms, I don't think it would help.
I think it's totally fair to complain about WhatsApp, since the issue mentioned is separate from the more general problem you describe; they could easily have done it the way Signal does, and I suspect they opted to do it the way they do for the same reason they don't have the security notifications on: they don't want to deal with the confusion.
Apparently in some countries they do, and that's a reason to compromise the rest of the world...
just summarising how bizarre this excuse really is.
Make it an opt-in setting. In some countries, reliable connectivity in a situation of frequently changing devices (the more I think about it, the more contrived it sounds) might be more important than privacy; but in others it very much isn't, and the consequences of failing privacy are much worse than a missed message between swapping devices.
That's not a tradeoff you should get to make for everyone.
the error message itself (I have it on) is not at all obtrusive, btw: it's a friendly yellow (like the old Google ads) in small type, which a user will either ignore or get a vague sense of unease from about not being secure (which is exactly correct). I don't see how this could confuse anyone further.
Also, unless you're suspicious and actually check, you could be served a special version by the App Store that was compiled only for you and contains the required add-a-key-but-dont-show-a-popup feature.
I'm not saying that Signal and/or Google are shipping a backdoor. I'm saying that we have to trust them that they don't.
> I'm saying that we have to trust them that they don't.
This applies to anything. It is not feasible to build and/or check all software and hardware by yourself.
Indeed, the most secure way is to generate and confirm each other's keys physically. It occurred to me that the people you'd want to communicate with truly securely are likely people you have already met by other means, including in person, so you should already have an effectively independent channel for sharing keys. The level of trust you have with someone seems proportional to the probability of that being true: if you've never actually met someone in person, how do you know they are who they say they are? In some sense, how secure your communication with someone is doesn't matter if you don't already have that relationship of trust established.
In general, the vulnerabilities lie in FB's control over the WhatsApp client.
On the other hand, as long as users are required to manage their public keys, there won't be end-to-end encryption for the masses (which WhatsApp had declared as their goal and to some degree achieved).
At least until key management and other security basics are taught in elementary school, alongside the multiplication table.
I've tried in the past to get friends to switch over to Telegram, but there are issues, since they rolled their own encryption protocol.
I've looked into using Mumble for voice, it seems quite secure because you host it yourself, and it's open source.
There's also a good list from the EFF: https://www.eff.org/node/82654
I think it would be wrong to start complaining about other apps. We don't know of vulnerabilities in other apps. We DO know of one in WhatsApp. Let's focus on what we know and take WhatsApp to task on it instead of wasting energy on what we don't know.
What's most interesting to me is that for all the people who complain that C is insecure, I don't see any great, proven open source crypto implementations written in the "secure" languages.
As an aside to your aside, LibreSSL is certainly more secure than OpenSSL, and it is written in C. Theo de Raadt doesn't have a PhD (though obviously he's not the only one hacking on LibreSSL).
Isn't WhatsApp an Erlang app?
It reminds me of this PR puff piece by Google, banging on about how secure their data centre was, the limited access by employees, the amazing information security team, the underfloor lasers to detect intruders, etc. while totally ignoring the elephant in the room, i.e. NSA backdoors which Google is forced to comply with and can't reveal publicly when they do so.
Also, hardware and its manufacturer cannot be defended against with software alone. This means Intel, Qualcomm, Foxconn, etc., and transitively the Chinese government.
I see no other possibility but trust them or don't use them.
(There is a technical way to fight the OS, but it is not mature/available yet. See Intel SGX.)
Yea, and this is exactly why I never understood why Whispersystems/Moxie cooperated with Whatsapp/Facebook: it gives people a false feeling of security (communicating via Whatsapp), and basically Whispersystems facilitated/made this possible.
It was so obvious...
This is why people should try and use Signal instead of WhatsApp. You can't trust Facebook to care about your privacy.
Which must mean either I'm misunderstanding something (e.g. things had changed since the blog post was published and relevant GitHub issues were closed), or they had not disclosed some information they have to the US government, or they (or word of mouth retelling the story) are misinforming users about what was disclosed.
(Upd: Yes, it would be a good idea to go through Signal source code and see what exactly is sent before making any suggestions that may look like an accusation, but... sorry, the code is quite complicated and I don't think I can figure this out quickly. I found the ContactTokenDetails class, but lost my way trying to trace its usage and how it's wrapped/encrypted/etc.)
The page I linked was the full data they disclosed.
Seems that they either somehow don't have contact info (but then, how does contact discovery work?) or they failed to comply with the court order.
Or I'm really not getting something, which is also a possible (and quite probable) explanation.
Upd: Hmm... or maybe the user had no contacts.
Your link above says at the end:
> For TextSecure, however, we've grown beyond the size where that remains practical, so the only thing we can do is write the server such that it *doesn't store the transmitted contact information*, inform the user, and give them the choice of opting out.
Yes, now it's all clear - they have contacts, but only ephemerally. Good.
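For context, the ephemeral contact-discovery trick described there can be sketched roughly like this (a simplification; Signal's real scheme sent truncated hashes and has since evolved):

```python
import hashlib

def contact_token(phone_number: str) -> str:
    """Truncated hash sent to the server instead of the raw number.

    Truncation limits what a leaked log would reveal, but phone numbers
    are low-entropy, so hashing alone is weak protection by itself.
    """
    return hashlib.sha1(phone_number.encode()).hexdigest()[:10]

def discover(registered_tokens: set, my_contacts: list) -> list:
    """Server-side match done in memory; nothing here is persisted."""
    mine = {contact_token(n): n for n in my_contacts}
    return sorted(n for t, n in mine.items() if t in registered_tokens)

# The server holds tokens of registered users, not a log of who asked about whom.
registered = {contact_token("+15551234567"), contact_token("+15559876543")}
print(discover(registered, ["+15551234567", "+15550000000"]))  # prints ['+15551234567']
```

This is why a subpoena can come back nearly empty: the server can answer "which of these contacts are registered" without ever storing the uploaded contact list.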
Signal fails 1 (the desktop app is pretty bad) and 4 (too many little problems, others won't switch). I'm starting to think Slack, of all things, might be my best solution. Really, I just want ICQ with smart phone/desktop notifications, and picture/video embedding, which doesn't seem like it should be a thing I ought to have any difficulty whatsoever tracking down in 2017.
If you think Google is more trustworthy than Facebook, sure go ahead and just use Hangouts or whatever.
We can't have good encryption and safe communication when geeks push Signal onto unsuspecting users, when the real option is to keep improving Tox.Chat and Bitmessage.
> If you think Google is more trustworthy than Facebook, sure go ahead and just use Hangouts or whatever.
Every time Signal comes up on HN people make this point (Signal is bad) as if it is true.
And every time it is exposed as bs.
I don't know how legitimate a complaint it is since Moxie et al have said that they would accept a well written pull request which provides similar functionality. But this just hasn't been forthcoming.
What I dislike about Signal mentions on HN is that aggressive posters conflate a number of different issues people have with Signal - lack of federation, reliance on Google push notifications, lack of SMS support, etc - and somehow lump them in together.
[Just to be clear - I am not saying you are doing this].
I mean, two good things about Signal are that it lets you chat with friends and family, and does so in a secure manner.
There are the following issues, though: it doesn't federate, it relies on Google Push, it doesn't support SMS. Also, I don't like how Signal does [...]"
Is that already an invalid way to make an argument?
I was trying to express frustration with posters who start out with a nebulous complaint like "Signal is bad and OWS is evil". If called on this they come back with "It allows Google to spy on you", if countered they come back with "it doesn't allow freedom to federate" and so on.
Rather than being a multi-pronged criticism, it's more like a bait-and-switch, with each new argument deployed when the previous one is rendered invalid.
EDIT: of course the fewer attack vectors the better
Play Services, however, pretty much amount to a remote root shell that is open at all times. Google can remove or modify code at will, and they have been known to do so in practice for spyware removal. I can understand how an activist would find that problematic.
only for notification delivery. The message payload is not part of the push notification.
Stock Android (judging by inspection of its network traffic) does not contact Google servers.
Google Play Services and other GApps do, and this traffic can be exploited, or they can be told by Google to activate other backdoors.
With Signal on GApps, Google can know which phones, and which users, are using Signal; that's a security vulnerability. Google can infer from its messaging service that notifications are being sent, and has a high probability of knowing they are for Signal. Who talks when is leaked to Google.
It does contact Google servers to check for internet access (captive-portal detection) upon connecting to Wi-Fi.
Also, if I have notification of key changes enabled and verify key fingerprints, at most one exchange could be snooped without me noticing. (If notification of key changes is not enabled and key fingerprints not verified, all bets are off anyways.)
This has been known and is discussed in the protocol and forums as the trade off in ease-of-use versus validation.
People wanting security simply verify keys and are warned on key changes. People who don't care as much about verifying the recipient don't know about the feature and don't use it, but they still get pretty good security, and can upgrade to verifying if they choose, all without having to re-key or change protocols/messenger apps.
So anyone wanting a phone with open-source products on it for security reasons totally can on Android, but it's impossible with iPhones. Signal probably relies on Apple's variant of Google Cloud Messaging (APNs), but since you'll always have that on your phone anyway, it makes no difference.
There is no reason to assume this was "snuck in" with an intent to deceive users. Retransmission has been known and discussed repeatedly, months ago, and Facebook acknowledged it. What happened here is a choice of UX over security, specifically, choosing not to break existing WA users as they move them over to the otherwise great Signal protocol.
When a key changes, you can just keep trying, notify the user, or drop everything on the floor. If you want the latter, use Signal.
It would be nice if WhatsApp made 2 the default, and 3 optional. Right now 1 is the default and 2 is the option. The trick is to get the UX somewhere where normal people can do something useful with that information.
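Those three behaviours could be sketched as a per-client policy (the names and callbacks here are illustrative; neither app exposes it quite like this):

```python
from enum import Enum

class KeyChangePolicy(Enum):
    RESEND_SILENTLY = 1       # current WhatsApp default
    NOTIFY_AND_RESEND = 2     # WhatsApp with security notifications on
    HOLD_UNTIL_VERIFIED = 3   # Signal-style: block until the user acts

def on_key_change(policy, queued_messages, notify, resend):
    """Decide what happens to queued messages when a contact's key changes."""
    if policy is KeyChangePolicy.RESEND_SILENTLY:
        for msg in queued_messages:
            resend(msg)
    elif policy is KeyChangePolicy.NOTIFY_AND_RESEND:
        notify("Safety number changed.")
        for msg in queued_messages:
            resend(msg)
    else:  # HOLD_UNTIL_VERIFIED
        notify("Safety number changed; messages held until you re-verify.")
```

The whole debate in this thread is about which of these should be the default and which the opt-in, not about whether the encryption works.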
If you are at all upset about this, you are not a target WhatsApp user. It'd be nice if they changed this, but for the love of all that is good and holy, stop calling it a backdoor, because it isn't. Words mean things.
Jan Koum and Brian Acton, founders of Whatsapp
You can set up a Wi-Fi access point and try to MitM yourself to see what packets WhatsApp is sending/receiving. Then you can try to snoop on them and test. The fact that it's closed source doesn't mean you can't analyze it; it just means it's a black box that you have to carefully dissect.
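The observation setup can be approximated with a tiny logging relay: point a client at it and record the bytes flowing through in both directions. For real app traffic you'd use something like mitmproxy and have to contend with TLS and certificate pinning; this is just the skeleton of the idea:

```python
import socket
import threading

def start_relay(target_host: str, target_port: int, log: list) -> int:
    """Listen on a local port and forward one connection to the target,
    appending every chunk of bytes (with its direction) to `log`.
    Returns the port the relay is listening on."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def pump(src, dst, direction):
        while True:
            data = src.recv(4096)
            if not data:
                break
            log.append((direction, data))  # the "snooping" part
            dst.sendall(data)
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate EOF downstream
        except OSError:
            pass

    def serve():
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pump, args=(client, upstream, "->"),
                         daemon=True).start()
        pump(upstream, client, "<-")

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]
```

With encrypted traffic the log only shows you sizes, timing and endpoints, which is exactly why metadata analysis remains possible even when content is unreadable.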
In Java, it's even easier due to JVM restrictions. I wrote an obfuscator for .NET, and Java offers fewer capabilities in its bytecode. I once used a commercial product that had been obfuscated; the obfuscator broke something on Mono, and it took about an hour to write a small script to go through the binary and fix up the broken bits so other tools would work on it.
a) The issue was an oversight and simply a bug that needs to be fixed. The question is why FB doesn't want it fixed.
b) Moxie knew that this issue existed but was NDA'ed into leaving it there for nefarious purposes. Now it's public knowledge; where do we go from here?
It says so right in the article. Stop spreading FUD.
If you are not verifying key fingerprints out of band, then you are potentially vulnerable to a malicious server MITMing new sessions.
If you want secure end-to-end messaging, verify keys out of band, do not solely trust a 3rd party for key exchange!
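A trust-on-first-use key store with change detection, which is the minimum a client needs for out-of-band verification to mean anything, can be sketched like this (illustrative, not Signal's actual implementation):

```python
import hashlib

class TofuKeyStore:
    """Trust-on-first-use: remember the first key seen for each contact
    and flag any later change; that flag is the moment when out-of-band
    verification actually matters."""

    def __init__(self):
        self.known = {}  # contact -> hex fingerprint

    def check(self, contact: str, public_key: bytes) -> str:
        fp = hashlib.sha256(public_key).hexdigest()
        if contact not in self.known:
            self.known[contact] = fp
            return "new"      # first sight: trusted blindly (the TOFU leap)
        if self.known[contact] == fp:
            return "ok"       # same key as before
        return "changed"      # new device, reinstall... or a MitM: verify!

    def accept_new_key(self, contact: str, public_key: bytes) -> None:
        """Call only after verifying the new fingerprint out of band."""
        self.known[contact] = hashlib.sha256(public_key).hexdigest()
```

The entire WhatsApp controversy in this thread is about what a client does between `"changed"` and `accept_new_key`: resend silently, warn, or hold messages.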