This is a great explainer:
I can't seem to find the PDF of the slides of this talk anymore.
These are also very useful for understanding:
- even if they have implemented it faithfully, you should compare fingerprints. If they don't line up, you might be subject to a MITM attack.
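To illustrate the point above, here is a minimal sketch (keys and grouping scheme are made up for illustration; real apps like Signal derive "safety numbers" their own way) of why comparing fingerprints out of band catches key substitution:

```python
import hashlib
import hmac

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into 4-char chunks so two people can read them aloud and compare.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

def fingerprints_match(mine: str, theirs: str) -> bool:
    """Constant-time comparison to avoid leaking where the strings differ."""
    return hmac.compare_digest(mine, theirs)

# Both parties see the same key -> same fingerprint.
alice_view = fingerprint(b"\x04" + b"\xaa" * 64)
bob_view = fingerprint(b"\x04" + b"\xaa" * 64)
assert fingerprints_match(alice_view, bob_view)

# If a MITM substituted its own key, the fingerprints diverge.
mitm_view = fingerprint(b"\x04" + b"\xbb" * 64)
assert not fingerprints_match(alice_view, mitm_view)
```

The comparison has to happen over a channel the server doesn't control (in person, over a phone call), otherwise a MITM could lie about the fingerprint too.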
I would argue that reading source code is much easier, though. For example, if you are auditing code written in a memory-safe language, you don't need to look for memory corruption bugs. You also have an audit trail for all source code changes.
Or indeed, if it were a patch that could be pushed as a forced update at will from the server side, would that again be clear to "people" inspecting the client-side binary?
These are the ways I imagine I'd snoop on supposedly end-to-end encrypted communication channels; there's probably something much cleverer, but again, we're saying that can be detected easily?
Genuine questions, not a programmer, not familiar with the state-of-the-art of reading machine code/interpreting network traffic, nor indeed with which watchdogs are guarding against abuse by the TLAs.
Do you have any response to the substantive point, or just quibbles on the semantics?
It's not hard to imagine how WA could be compromised without anyone knowing for many years.
That is, those who read WhatsApp disassembly and did a thorough enough review to warrant, with sufficient confidence, not raising any flags. It would also help to know which specific build they were looking at.
What you and other assembly verification supporters are proposing is an assembly version of the many-eyeballs hypothesis: the idea that, because many qualified people could inspect it, it must have been thoroughly analyzed in a way that produced actual security results. I see little to no evidence of that. Actually, I see the opposite, where malware hits binary systems in many ways that were easily preventable at the source or assembly level. So I don't trust the claim that assembly being verifiable means it was likely verified. More likely any findings were stockpiled into a 0-day collection.
You also can't trust something to be correct, reliable, and secure from assembly or binary alone. If you could, the high-assurance field would be all over that. Instead, the evidence indicates that the source... esp if requirements were encoded as formal policy... had more information to work with for analyzing potential compromises and information leaks. A lot gets lost in producing assembly. So, they instead analyze source for correctness in all states of execution for various properties, then verify that the assembly does the same thing, with some proof supplied to evaluators and/or customers. Much more trustworthy. Also almost non-existent for proprietary or FOSS software.
As it stands, average people (or average developers) have little incentive to go trawling through the existing body of open source code, mostly because they probably have better things to do with their time. In the commercial world, bug bounties attempt to skew the incentives to encourage the 'more eyes' part of the axiom 'given enough eyeballs, all bugs are shallow'.
And I'm pretty sure bug bounties would apply to Shellshock and Heartbleed, as long as you can find a company with a bounty program that also used OpenSSL or could be exploited via Bash.
Beyond that, despite the repeated claims of open source advocates, there's nothing preventing people from taking the app store versions of things like WhatsApp and reverse engineering them.
So, while being open source is not the complete answer, it certainly doesn't hurt.
I don't think we have good solutions for the problem of malicious updates in general.
The only one I can think of is a trusted hypervisor that hashes memory in the guest and reports on it. And even then, how do we trust that?
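The attestation idea above can be sketched in a few lines (a toy model: real hypervisor-based measurement works on physical pages with hardware support, and, as noted, this just moves the trust problem one layer down):

```python
import hashlib

def measure(memory_pages):
    """Hash guest memory pages in order, as a trusted hypervisor might."""
    h = hashlib.sha256()
    for page in memory_pages:
        h.update(page)
    return h.hexdigest()

# Baseline measurement of the known-good guest image.
clean = [b"\x00" * 4096, b"code" * 1024]
baseline = measure(clean)

# A malicious update that patches even a few bytes changes the measurement.
patched = [clean[0], b"code" * 1023 + b"pwnd"]
assert measure(patched) != baseline
```

The hard part isn't the hashing; it's knowing what the baseline should be for every legitimate build, and trusting whatever reports the measurement.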
So what do you do? It comes back to making sure that 'they' can only hack some of the people all the time, and all of the people some of the time. It's preventing them hacking all the people all the time I worry about.
However, open source does give users some real recourse in the event that the project moves in an undesirable direction. I don't like what's happening, I can fork it without your permission and still have access to the same development environment and build tools the original project had. I think that's important and valuable.
I can read a commented Python or BASIC program with almost no effort unless it's very sloppy. I can tell you a MISRA C or SPARK program with associated static checks is immune to entire classes of errors without analyzing the source myself. I can tell you what information flows can or can't happen in a language like SIF implementing information-flow security. I can do all of this while expending almost no effort. So, I'm much more likely to do it than if I had to decompile and reverse-engineer a binary from scratch with analyses using the tiny information in a binary.
So, every time you say that, what you're really saying is: "Anyone could do this if they spent enormous time and energy. Sort of like they could hand-compile their C code each iteration. They probably won't, but I'm countering your desire for source because in theory they could do all this with assembly, given enough effort."
It's definitely not true for correctness, as assembly lacks what you need to know it's correct. It's probably not true for security, as correctness is a prerequisite for it. In any case, economics matters: the effort required to achieve a thing determines whether someone will likely spend that effort, and for binary analysis the likelihood is apparently a lot lower than for source analysis.
They claim they use the Axolotl double ratchet, though Moxie/OWS claims Wire uses a variation of the protocol they don't recommend.
 (long but enlightening read) https://news.ycombinator.com/item?id=11725602
4 days ago, no follow-ups to questions for details.
...and which most of my friends used.
I'm OK with Signal's UX, but the problem is that I know exactly two people who use it too, everyone else is on WhatsApp. I mean: Writing this implementation would be hard, but still tremendously easier than getting enough traction to make it useful.
However it's hard to recommend because of the lack of features.
"centralised" and "server" are just about the worst words you want to hear in the description of a system like this and yet they are just thrown in there in a flippant comment at the end of a section?
I think a more detailed description of this glorified cache is warranted.
To cut a long story short: Alice gives the cache, aka Mallory, a set of secret data which, it is implied but not proven, Bob can use to create cryptographic text which Alice can decrypt but this magic cache, aka Mallory, cannot. This document provides few hints and no detail on how we can be assured that the magic cache, A.K.A. MALLORY, is unable to use the secret data provided by Alice (and "promised not to be shared") to make inferences about the cryptographic text provided by Bob.
So: making the whole thing completely public only enables an adversary to match session initializations with receivers, which the server can do by definition, as it has to route the messages to the correct recipient. (In the case without such a central server, anybody observing the traffic could do that, as another role of the central server is to mask sender addresses on the lower protocol layers.)
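The "by definition" point is worth making concrete: even with fully opaque payloads, a routing server necessarily sees the envelope. A minimal sketch (class and field names are made up):

```python
import time

class RoutingServer:
    """Even with end-to-end encrypted payloads, the router sees the envelope."""
    def __init__(self):
        self.metadata_log = []   # what the operator can observe and retain
        self.mailboxes = {}      # opaque ciphertexts queued per recipient

    def route(self, sender, recipient, ciphertext):
        # The payload is opaque, but routing requires the recipient address,
        # so sender/recipient/time metadata is visible by construction.
        self.metadata_log.append((sender, recipient, time.time()))
        self.mailboxes.setdefault(recipient, []).append(ciphertext)

server = RoutingServer()
server.route("alice", "bob", b"\x8f\x1c opaque ciphertext")
sender, recipient, _ = server.metadata_log[0]
assert (sender, recipient) == ("alice", "bob")
```

Publishing prekey material doesn't reveal anything the routing layer couldn't already correlate; the metadata leak is inherent to having a router at all.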
I know that doesn't sound quite so impressive, but that's because it isn't.
An additional, and to some extent non-trivial, difference from traditional OTR is in how these ephemeral keys are used in the key exchange, whose result depends not only on DH with ephemeral keys but also on DH exchanges that mix ephemeral keys with long-term ones. As a result, the passive side's ephemeral key does not have to be signed, and anyone can produce arbitrary session transcripts; both of these properties significantly reduce the size of the exchanged messages.
Or would have been, if they had any.
I have no idea if this solution to the problems posed is sound, because before I can even get to the source code there is no description of how, or even whether, it solves the fundamental problems in the cryptographic situation it describes (viz. a device which cannot straightaway engage in IP communication needs to receive unsolicited securely-encrypted messages).
Please elaborate and clarify.
I'm very interested in any potential implementation that allows asymmetric conversations without a dedicated third node (if we don't like the name server) somewhere in between.
There are people, and/or messages, for which a lack of civility needn't warrant a complete dismissal. They're relatively rare.
I'm willing to give a hearing here. Crypto is, for better or worse, an area in which there is a tendency toward both informed and abrasive contributions.
I'm aware HN doesn't take well to that. I've been arguing the opposite strategy for the past few days with a friend (he likes tossing bollocks about, I prefer avoiding that). I'd advise ChoHag to tone it down (and reconsider their chosen handle), but contribute. I did consider vouching the flagged/dead comment, but decided against in this case.
> If you can't see it by now then good luck with your life.
Please leave these out of comments on Hacker News, it's simply not OK.