1) They seem to have replaced TLS/SSL between client and server with "Noise Pipes". Based on a couple of minutes' Googling, this seems to be a brand new one-man protocol from Trevor Perrin (the same guy who did Axolotl, on which Signal is based). At least, I'd never heard of it. I wonder if this is the first inkling of a post-TLS future?
2) It's a shame to see key words be killed off by internationalisation concerns. 12 words seems so much more friendly, at least to English speakers, than a 50 digit number. In practice I doubt any non-trivial numbers of people will ever compare codes by reading out such a number. I hope further research here can develop better replacements for encoding short binary strings in i18n friendly ways (perhaps with icons instead of specific words? if you don't speak a common language with your chat partner then the app is useless anyway).
3) What's the next step? My feeling is that the next step is securing the build and distribution pipeline. WhatsApp could partner with security firms around the world, like Kaspersky Lab in Moscow, perhaps one in Germany and another in Iran, to make it harder for the software to be forcibly backdoored by a single decision of a single government representative. This would require splitting the RSA signing keys used by the app stores. I have some code in my inbox that claims it can do this (it's written by some academics and I obtained it after a bit of a runaround) but I never found the time to play with it.
Of course, getting a bunch of security firms to sign off on every update, no matter how trivial that update is, might prove politically difficult inside Facebook. If mobile platforms supported in-app sandboxing better then the app could slowly be refactored to be more like Chrome, where the base layer doesn't trust the upper layers. Those upper layers wouldn't have access to key material and could then be updated more freely than the higher privileged components.
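For the curious, the simplest version of splitting an RSA signing key is a 2-of-2 additive split of the private exponent. This toy textbook-RSA sketch (tiny primes, no padding, and surely far simpler than whatever the academics' code implements) shows why it works:

```python
# Toy 2-of-2 split of an RSA signing key: the private exponent d is shared
# additively between two parties, so a valid signature requires both.
# Textbook RSA with tiny primes and no padding -- purely illustrative.
import random

p, q = 1000003, 1000033           # toy primes (far too small for real use)
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)               # full private exponent (never held by one party)

d1 = random.randrange(1, d)       # party 1's share
d2 = d - d1                       # party 2's share

def partial_sign(share, digest):
    return pow(digest, share, n)

def combine(s1, s2):
    # m^d1 * m^d2 = m^(d1+d2) = m^d (mod n)
    return (s1 * s2) % n

digest = 123456789                # stand-in for a message hash
sig = combine(partial_sign(d1, digest), partial_sign(d2, digest))

assert sig == pow(digest, d, n)   # identical to signing with the full key
assert pow(sig, e, n) == digest   # verifies under the normal public key
```

Real threshold-RSA schemes additionally need to generate the key so that no single dealer ever sees the full `d`, which is where most of the complexity lives.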
WhatsApp was already using a custom protocol instead of TLS. We worked with them to transition over to Noise Pipes, which has some advantages over what they were doing before. Also, we've renamed Axolotl to Signal Protocol: https://whispersystems.org/blog/signal-inside-and-out/
Axolotl is cool in a techy/underground sort of way (if you're into that) but completely misses the boat on being 1) easy to spell, 2) easy to pronounce, 3) easy to understand.
Mainstream users would just say "wtf" and move on to whatsapp/telegram/facebook messenger.
"Axolotl" at least has serious pronunciation issues, so I am glad it is not used "user-side".
It's also just a cool name.
In fact, axolotls only metamorphose into salamanders if triggered by the external environment. If you don't stimulate them in the right way, they stay axolotls. For a long time they were thought of as two different types of creature completely.
There's a story of one of the very early researchers looking at his vivarium one day and discovering that it was now full of salamanders rather than the axolotls he was expecting. I would love to have heard the conversation...
They’re distinctively spelled, don’t collide with existing search terms, often have available domains, etc. Most importantly, they anticipated the web 2.0 trend of ending words with two consonants in a row. ;)
Finally, just look at this guy: https://upload.wikimedia.org/wikipedia/commons/f/f6/AxolotlB...
Edit: yes, it's a suffix (for Classical Nahuatl): "Non-possessed nouns take a suffix called the absolutive. This suffix takes the form -tl after vowels (ā-tl, "water") and -tli after consonants [...]"
I don't understand what you're thinking of here? Here are some Latin nouns, all in nominative case:
nauta (first declension)
> Most importantly, they anticipated the web 2.0 trend of ending words with two consonants in a row. ;)
But those words (coyote, mesquite, tomato, and avocado, unless I seriously miss my guess) all end in a vowel.
Because they were adopted by Spanish speakers. English products using these names could either adopt the Spanish form or, as the GP suggests, follow the 2.0 trend and recover the original form, publishing their webpage in the East Timor .tl domain.
"Signal" in IT usually means UNIX inter-process communication.
Noise starts from a clean state with modern knowledge of cryptography and modern cryptography. Much easier to understand and replicate, much harder to shoot yourself in the foot with.
TLS brings modularity and extensibility, much needed in a protocol operating at the scale of HTTP. In WhatsApp's case, WhatsApp controls both the server and the client; it is much easier to transition between versions because all the pieces are under its control. When you don't need what TLS brings anymore, it makes sense to discard it.
As another example: Tarsnap (https://www.tarsnap.com/) uses spiped (https://www.tarsnap.com/spiped.html), a very simple yet powerful mechanism to build an encrypted channel. Its protocol and proof fit in ~100 lines (https://github.com/Tarsnap/spiped/blob/master/README). When you don't need all the jazz provided by TLS (and when you're lucky enough to be able to pre-share keys, which helps a lot) then a simple protocol is good.
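To illustrate how much a pre-shared key simplifies things, mutual authentication collapses to an HMAC challenge-response. This is NOT spiped's actual protocol (spiped also runs an ephemeral DH exchange for forward secrecy); it's just the core idea, with all names made up:

```python
# Sketch of why a pre-shared key simplifies the handshake so much:
# each side proves knowledge of the key by answering a fresh challenge.
import hashlib, hmac, os

PSK = os.urandom(32)              # shared out of band, as spiped requires

def respond(psk: bytes, challenge: bytes) -> bytes:
    return hmac.new(psk, challenge, hashlib.sha256).digest()

# Each side issues a fresh random challenge and checks the peer's response.
client_challenge = os.urandom(16)
server_challenge = os.urandom(16)
server_resp = respond(PSK, client_challenge)   # server proves it knows PSK
client_resp = respond(PSK, server_challenge)   # client proves it knows PSK

assert hmac.compare_digest(server_resp, respond(PSK, client_challenge))
assert hmac.compare_digest(client_resp, respond(PSK, server_challenge))
```

With the key pre-shared there is no certificate chain, no negotiation, and no third parties to trust, which is most of what makes TLS big.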
You're right that it's not a good idea for generalists to pick up Noise and go to town with it. Noise isn't misuse-proof.
Nor is SSL. Couldn't an opinionated SSL library (supporting exactly one protocol version, one cipher only, etc) provide the same simplicity of use while remaining interoperable and reusing an already-validated protocol?
It's not the choice I would have made, but I am not as good as Moxie or Trevor.
The idea is more or less what you want, hardcode some known-good configuration for servers that can't or won't be upgraded every other month.
I would argue it's easier to shoot yourself in the face trying to re-implement/re-design something like TLS.
Maybe not all of WhatsApp's platforms support the latest TLS improvements, though, and thus it was easier to roll their own?
As an aside, what's with Nietzsche? He seems to crop up in your screenshots quite frequently.
(just a guess)
Is that true? Are the messages decrypted server-side for iOS users?
And for anyone else that hates visiting these social networks, Moxie's reply was, "I'm sure, but it'd be to everyone's benefit for you to verify."
Imho one thing that the Signal project suffers from -- and that a lot of open-source projects suffer from -- is poor documentation. They need to document their protocol better to make it easier for third parties to integrate with their system.
Signal still lacks a working desktop client (no, the one that you can use if you're running Chrome doesn't count), and I'm sure tons of people would be eager to do stuff like provide integration for Pidgin if only there was better documentation.
This would solve part of the trust issue both for proprietary and open source software (I mean, who's actually using or verifying reproducible builds?). It would also be a huge problem for intelligence agencies or other adversaries who want to do this undetected.
Certificate transparency works because detected problems are actionable: the bad certificates can be revoked and, if necessary, so can the entire CA that was compromised.
There is no way to revoke an app store, as there are only two that matter. So if the transparency system revealed subversion... then what? There'd be a big uproar for a few days, but nobody would know who the target was, or what the bad app was doing, and there'd be no action that could be taken.
These are the main scenarios where Binary Transparency might be of use:
1. An application developer's private key gets compromised (either through a breach or an inside job). Binary Transparency would allow the developer to monitor the log server for any unauthorized binaries and, once detected, investigate the source of the attack, rotate keys and possibly blacklist the affected binary (not sure if a mechanism like that exists at the moment, but it would make sense with Binary Transparency).
2. The application developer is coerced into signing a binary including a backdoor. Binary Transparency would guarantee that there's a public record of a modified binary being released. This alone would probably make any adversary think twice about doing this, especially if they want to stay unnoticed. In addition to that, gossip protocols and dedicated auditors could allow users to detect odd releases even if the developers were gagged. This might very well result in the app being effectively revoked because users stop trusting it.
Both scenarios would at the very least allow the backdoor to be reverse-engineered.
Ultimately, I'm afraid you're probably right and the limitations and complexity of this approach probably won't make it viable for a long time. Maybe the thinking will change if we see some high-profile cases where apps are backdoored - that's what got the ball rolling for Certificate Transparency, after all.
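For reference, the machinery Certificate Transparency uses (and Binary Transparency would presumably reuse) is an append-only Merkle log with inclusion proofs. A simplified sketch -- this pads odd levels by duplicating the last node rather than using RFC 6962's exact unbalanced-tree construction:

```python
# Append-only Merkle log of released binaries: the root is published and
# gossiped; anyone can check a given binary appears in the log.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _level_up(level):
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(b"\x00" + leaf) for leaf in leaves]   # domain-separated leaf hash
    while len(level) > 1:
        level = _level_up(level)
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level = [h(b"\x00" + leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = _level_up(level)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(b"\x00" + leaf)
    for sibling, sibling_is_left in proof:
        node = h(b"\x01" + sibling + node) if sibling_is_left else h(b"\x01" + node + sibling)
    return node == root

binaries = [b"app-v1.0", b"app-v1.1", b"app-v1.2"]   # stand-ins for binary hashes
root = merkle_root(binaries)
assert verify(b"app-v1.1", inclusion_proof(binaries, 1), root)
assert not verify(b"backdoored", inclusion_proof(binaries, 1), root)
```

The transparency comes from the root being gossiped: a binary pushed to one target but never logged simply won't verify, and logging it creates the public record described in scenario 2.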
I have a feeling that English speakers are the minority of WhatsApp users.
> if you don't speak a common language with your chat partner then the app is useless anyway
They do speak a common language, it's usually just not English.
How would whatsapp know what language to present the words in?
"Ok, your pattern is Bread Cloud Tiller..."
"Scheiße! We're being MITMed! My pattern is Brot Wolke Pinne..."
Alternatively, the party whose code is being confirmed would just select the language, and the other party's UI could automatically update to show the code in the chosen language. The first party would still have to pick a language both people understand but (as long as both parties have a working internet connection) there wouldn't be a sync issue.
> They do speak a common language, it's usually just not English.
I think you misunderstood what the original poster was getting at. Here's the full parenthetical: "(perhaps with icons instead of specific words? if you don't speak a common language with your chat partner then the app is useless anyway)". So replace the words that you're supposed to read with little pictures. Then it doesn't matter what language the people speak, since they'll be "reading" the images. You'll see a little house icon, and read "house" or "casa" or "jia" or whatever.
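As a sketch of what that might look like in practice (the icon table here is an arbitrary placeholder, a run of emoji code points, not a proposal for a real curated set):

```python
# Icon-based fingerprint: map each byte of a key fingerprint to one of
# 256 language-neutral pictures, then compare pictures, not words.
import hashlib

ICONS = [chr(0x1F400 + i) for i in range(256)]   # 256 distinct pictographs

def icon_fingerprint(public_key: bytes, length: int = 6) -> str:
    digest = hashlib.sha256(public_key).digest()
    return " ".join(ICONS[b] for b in digest[:length])

# Both parties display this for the key they see; the pictures must match,
# regardless of what language either person speaks.
fp = icon_fingerprint(b"alice-public-key-bytes")
assert fp == icon_fingerprint(b"alice-public-key-bytes")
assert fp != icon_fingerprint(b"mallory-public-key-bytes")
```

Note that truncating to a handful of icons trades security for usability; a real scheme needs enough icons (and enough of them shown) to make collisions impractical.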
Basically, users change their identity key in practice so often that a 3rd party can't be the source of truth on whether a MITM has happened. Only the user can audit that, and most users are not equipped to do so. So, towards the goal of universal e2e messaging, it's not clear that key transparency is a real win over the simplicity of trust-on-first-use.
In contrast, in CONIKS, a user had to check every single epoch. Which was impractical and meant you relied on third party auditors.
This doesn't deal with the restore-from-backup issue. I think given the FBI/Apple phone issue, though, maybe it's worth reevaluating the risk vs. reward here. Especially if user-visible warnings are opt-in.
I've wondered if things like multi-device support or deriving a private key from a passphrase (miniLock-style) will eventually make user identity far less likely to change. Then it might be safer to complain louder when you're in the situation that someone's identity has changed.
There's a QR code representation as well that can be scanned to verify.
They're unified, they're mostly language-agnostic, and there is a shitload of them, so the representation would be much more compact than a hexadecimal or base64-encoded string.
Put it on a .io domain for bonus points.
If only those were international, I'd seriously propose key fingerprints based on animated meme-filled "gifs", because that's what the common audience understands (or at least that's my impression). Not some weird-looking strings of meaningless digits and letters.
The notion of a cryptographic protocol framework, rather than a complicated negotiation scheme for a kitchen-sink protocol, is also somewhat novel.
I think it's actually a solid decision from WhatsApp since the majority of their users are from non English speaking countries.
One example: WhatsApp is one of the primary messaging services used in Brazil, with something like half the population using it, the vast majority of whom don't speak any language other than Portuguese.
When the courts in Brazil shut down WhatsApp for 48 hours, there was a huge outcry because it is THE primary means of communication for people, especially to talk to family who have gone abroad.
I rather like the Urbit way of encoding numbers. I can't remember the exact details, but it's something like: there are 256 unique three-letter words (all nonsense, but deliberately picked to be pronounceable). Each word is mapped to a byte, and then a blob of binary data becomes a string of nonsense words.
So for example "tasfyn-partyv" instead of "1242B24A".
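The byte-to-syllable mapping is easy to sketch. The syllable table below is generated on the fly and is NOT Urbit's actual @p table; it just shows the shape of the scheme:

```python
# 256 pronounceable CVC syllables, one per byte value, paired up into
# hyphenated six-letter "words" like the example above.
import itertools

CONSONANTS = "bdfghlmnprstvz"
VOWELS = "aeiou"
SYLLABLES = ["".join(t) for t in itertools.product(CONSONANTS, VOWELS, CONSONANTS)][:256]

def pronounceable(data: bytes) -> str:
    syllables = [SYLLABLES[b] for b in data]
    # join syllables in pairs, in the style of "tasfyn-partyv"
    return "-".join("".join(syllables[i:i + 2]) for i in range(0, len(syllables), 2))

word = pronounceable(bytes.fromhex("12a4b24a"))
assert word.count("-") == 1 and len(word) == 13   # two 6-letter halves
```

Four bytes become two short speakable words, which is much easier to read over a phone than eight hex digits.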
The first I encountered that was in SILC, where it was used to make the key fingerprint more or less pronounceable. It's interesting that the encoding scheme never really got wider attention, although it is available in both openssh and openssl.
Similar ideas come and go, and get rediscovered - so maybe now is the time for wider acceptance.
A bit of googling provided me with a nice starting point for implementations, so the Bubble Babble encoding has certainly been recognised in some circles.
That only works in English, again.
Of course that's not helpful when two people who don't speak a common language are trying to establish communication, but it probably isn't their biggest problem.
They already spent a bunch of time and effort finding these phonemes to build Esperanto, right?
Its choices are large, irregular, unclearly defined and basically Eastern Polish.
Lojban is designed to be well-defined on phonemes, but that still only works when knowing its (rather simple) pronunciation rules.
Is there any work on encrypting group chat sessions?
Maybe the next step is a more obvious indicator.
They do this to prevent degradation attacks.
Don't know what kind of funding they've been having, but for an open-source app, it's pretty polished.
So I fail to see the problem. It still likely protects you from nosy neighbors or nasty non tech savvy competitors, even if it doesn't protect you from state level actors or from Facebook itself.
The above, in addition to the fact that open source is neither required for auditing, nor a guarantee that proper auditing occurs (see OpenSSL having vulnerabilities for years before anybody disclosed them to the public).
1) I would assume Facebook still gets unencrypted access to my address book for use with their shadow profiles
2) We have zero control over what key the client encrypts the messages for. Is it only the other peer's phone? Or is it for the peer's phone plus Facebook for analysis of the messages?
Especially 2) is of some concern to me (against 1 I can't protect myself anyway, because if people add me to their address books I'm screwed regardless). From that perspective, I'm still inclined to trust Apple's iMessage a bit more, especially after recent events.
The only safe solution right now is to compile your own signal client and use that, of course at the cost of reach because nobody else is on Signal.
WhatsApp might be a good compromise at least for only semi-important messages: The probability that any of your contacts has WhatsApp is much, much higher than the probability of them having Signal running. On the other hand, whatever you're sending over WhatsApp is likely going to be used by FB (and then possibly handed out to governments and/or stolen by attackers).
Tier 1 secure messengers: all possible tradeoffs in favor of security made; use for worst-case adversaries:
Tier 2 secure messengers: serious secure messaging protocols that make some tradeoffs in favor of adoption and usability; use for normal messages of low sensitivity:
Tier 3 secure messengers: secure messaging protocols with flaws, limited cryptanalysis, only content-controlled browser clients, &c
- Redacted to avoid flamewars.
Everything else: insecure; use only to bootstrap conversations to a Tier 2 protocol (knowing that if you're a state-level target, you're exposed even after you "upgrade")
- Google Talk
- Facebook Messenger
I think WhatsApp would like to be a tier 1 secure messenger (they'd be the first mainstream tier 1 secure messenger!) and they have a shot at being that, but they're probably some years away from it.
† (I know PGP and OTR are cryptographically limited compared to Signal Protocol, but from an OPSEC perspective they're still a tier above iMessage.)
This means a malicious server could both store away your contacts and try to MITM you, risking detection if you do verify your fingerprint with the other party.
A messenger that really did "all possible tradeoffs in favor of security" would force you to verify the recipient, either in person or via a web of trust. I think not doing that is a perfectly acceptable tradeoff to protect 1 billion users from passive mass surveillance.
These are all UI issues though, the Signal protocol seems solid, and I'll be glad to see it used everywhere.
Hopefully we will get clients that you could recommend to a journalist or lawyer and have confidence that they will be able to use it for secure authenticated communication.
Reasonable people can disagree, of course.
I hope it's obvious that, since OTR is in tier 1, these tiers aren't an analysis of how much I like different messengers. :)
Well, OTR is just the protocol, implementations vary in how well they make you authenticate your conversation partner. And PGP will whine at you a lot if you haven't marked a key as trusted.
But somewhat agreed. A bad UI can ruin the security of otherwise solid crypto.
I just don't see how the security of Signal and WhatsApp are different. Assuming for a moment (very optimistically) that the user verifies the fingerprint and enables the alert for changed keys, they are using the same protocol with the same guarantees.
Are you factoring in some level of trust in their ability to write a secure client or run secure servers?
The last time this came up, inspired by a post by pbsd, I submitted a detailed write-up of a protocol Signal could use to avoid requiring users to submit their contact information: https://news.ycombinator.com/item?id=11289223
It's a solved problem.
> A messenger that really did "all possible tradeoffs in favor of security" would force you to verify the recipient, either in person or via a web of trust. I think not doing that is a perfectly acceptable tradeoff to protect 1 billion users from passive mass surveillance.
Except it introduces another point of security failure. I would prefer to use a system like SDSI/SPKI's friend-of-a-friend naming, with a contact-sharing mechanism so that I could, say, know that I was talking to 'Sheila's Dad' or 'My Boss's Jim's Phone' (presented in a somewhat more user-friendly way, maybe 'My Boss → Jim → Phone' or something).
Maybe in the future when prop224†† is implemented the encryption will be more solid.
That's what's so exciting about the WhatsApp announcement. WhatsApp is by all accounts a pretty great messaging application, and it doesn't just have decent encryption now; it has best in class encryption specifically designed to protect a messaging application, designed by experts who thought about this problem for a long time.
The nice thing about using the onion address (transport layer) is that you have mandatory e2e authentication with only one id that solves multiple real world problems with bgp/dns/tls.
How would you propose to go further from current state-of-the-art WhatsApp to stop leaking meta-data? I know Ricochet is open to use a stronger encryption layer on top of Tor †.
The article links to the technical white paper which explains why your points are invalid.
> I'm still inclined to trust apple's iMessage a bit more
Do you have any proof why iMessage is more secure or is that statement also baseless?
It states that:
- The private keys remain on the devices (which is good, but they don't need them to read your messages)
- The messages can't be read by WhatsApp (they don't say anything about their parent company Facebook or governments though).
If you read all the whitepapers and alert messages, you can totally read them in a way that allows Facebook to see, store and analyze the clear-text of every message you're sending, without anybody lying anywhere.
// edit: got my answer here: https://news.ycombinator.com/item?id=11432629
I'm curious, is that because of what they did in the FBI case, or for technical reasons? IIRC iMessage would allow Apple to add public keys which they (or the FBI/$ADVERSARY) control as a sort-of backdoor as well. I can't say that I've been keeping track of Facebook's position or history on this topic, so it might or might not be fair to say that Apple deserves more trust here, but on a technical level there's not much of a difference.
So, it feels like for the average consumer, a product like iMessage ticks all the necessary boxes. While WhatsApp adding features like the ability to compare your keys via a QR code looks cool, I have to wonder how many users will actually take advantage (and how would they even know if that number or QR code really represents what they think it does?)
I'm surprised by this statement. After recent events I'd say the iMessage protocol is a weird ad-hoc construction that failed to follow basic modern crypto constructions like authenticated encryption and forward secrecy. I don't expect anything like that from the Signal protocol.
So ignoring actual protocol and implementation flaws (I agree with you that WhisperSystems will probably be ahead of Apple there), both rely on the key management being done in a trustworthy manner.
And this is where I trust Apple more than Facebook, especially in light of the recent FBI events. If Apple says they are not maliciously injecting fake public keys for the purpose of surveillance and marketing, I tend to believe Apple.
If Facebook is silent about this, I assume they are doing it.
Also, iMessage's crypto protocol is cryptographically broken. It's been patched together in ways that prevent the obvious attack but don't actually fix the underlying issue; doing that would involve replacing the protocol. In the meantime, those patches are not foolproof and someone may get around them.
Does it? It allows me to verify a key. It doesn't give me any ability to control what other keys it's working with.
This only helps me to make sure that the message I just got, I actually got from the person it's claiming to be. It doesn't give me any other protection though.
I agree about the broken iMessage crypto and I hope to see Apple upgrade the protocol in a future OS release. With their quick uptake of new OSes, a fix could propagate relatively quickly.
One BILLION people just got their messages encrypted. Facebook was under no obligation to do this; for the vast majority of tech history the messages sent by 99% of people were very insecure, and when the tech giants responsible were asked their response was "ehh". This suddenly cuts into a huge portion of that. Pretty much everyone with a mobile phone in Europe or South America, as well as large parts of Asia, suddenly now has completely encrypted messages.
Kudos to Whatsapp for this fantastic move.
On a related note, if anyone is inspired by this announcement to start using encryption in other parts of their life, I have a handful of Keybase invites available (to bypass the 25k+ waiting list.) Keybase's security depends on lots of people tracking other people, so only ask for one if you'll track other people. I see too many people that make an account and nobody ever tracks them/they don't track anyone. My email's in my profile.
Does this actually protect you? If data goes through US FB servers, don't the NSA have access?
They need to add the name and job title of the blog post's author to the bottom.
To be fair: whatsapp mentioned it at the other two links.
Essentially, no. You have to trust that the app is really doing what it claims to be doing.
If it were open source, would it be different? Maybe. Signal Messenger is open source and the Android build is reproducible. However, the way you reproduce it is to run a Docker image, so that isn't really meaningful unless you audit the code that's used to build it. And then audit the code of the app itself. Fortunately, the Android app is written in Java, so you can easily do static analysis to narrow down what code needs to be audited. Unfortunately, it also includes native code that can reach into arbitrary parts of the heap if it were to try and steal keys. So you'd have to audit all of that. And then keep up with all the changes. Which you aren't going to do.
Even if you did all of that, at some point the naughty thought will occur to you that all your effort can be undone by an operating system update that contains more than it claims to. And then you'll give up.
In practice, it's impossible to really verify anything about how our modern computers work: there are too many people and companies who can silently access key material or just fool us in other ways.
So it's best to understand this work in context - this is not about allowing you to use WhatsApp if your threat model is that WhatsApp is entirely malicious. It's about discouraging governments from going to WhatsApp and saying "you need to give us all your data" because it creates the same situation Apple has: WhatsApp would need to write new software to undo the encryption, and that can be legally argued to be "compelled speech", and thus it can fall outside the legal powers granted under wiretap orders.
Consider all the work I listed above. Imagine you actually did it. Great. But ... you're using WhatsApp to communicate with someone else. Did they do the same work? No? What if their device received a different binary to yours?
You can't even solve this with clever auditing frameworks because then you're just shifting the trust to whoever writes and distributes the auditing tools.
That's why it's so important to understand that you cannot solve political problems with cryptography. At most, you can reconfigure the set of people you need to trust (hopefully by making it smaller). But if those people can be forced to act by a political entity, then no crypto will save you.
I wonder which of the two devices is lying.
What I'm wondering: my WhatsApp tells me it can't encrypt because the other person uses an outdated version, but the other person is told the chat is encrypted. What's the truth, then? Is it still doing crypto and my UI is denying it? Or is it not encrypted, and the other person has a false sense of security? That's at least a bit strange...
As good as this sounds on paper, I hesitate to trust Facebook to transmit my data without wanting to peek a bit. I currently use both Whatsapp and Signal and will probably continue to do the same unless there is a way for users to verify Facebook doesn't keep a copy.
Moreover, if reverse engineering is so easy, why not open-source it from the beginning?
I think the misconception some people here have about the necessity of source code is born out of the idea that a cryptographic backdoor would look something like a mysterious HTTP POST of your key or plaintext to some random endpoint (that POST, by the way, would be trivial to spot in the binary; you wouldn't even need to read assembly).
But real cryptographic backdoors can be extremely difficult to spot. A cryptographic algorithm that uses signatures, for instance, can be fatally compromised by breaking signatures (see: TLS). An injected cryptographic flaw that breaks signatures can be as simple as biasing a single-digit number of bits in a nonce; a bias can be as subtle as generating one less byte of randomness than the protocol requires.
Those aren't quite the same skill though. Some folks could have the skills to verify protocols from source, but not the skills to work with a non-obfuscated binary. Task 'A' being harder than task 'B' doesn't mean that everyone who can do 'A' (harder task) is capable of 'B (easier). Nor does the inverse follow at all.
If we admit that doing basic (non-messaging protocol impl) verification on a binary is difficult and doing messaging protocol impl verification is also difficult, it seems reasonable to presume that doing both will take more time, work, and as a result, allow for more errors in verification.
Essentially, verifying impls without source code is more difficult/time consuming/error prone than verifying ones with source code.
Of course, they could give someone the source code to verify without making it open source. But that requires that one trust this other party, selected by folks who have an interest in their protocol being reported as secure (whether it is or not).
The goal when folks are looking for freely available source code is to eliminate some of those needs for trust (by allowing a greater number of verifiers) and eliminate some of the conflict of interest (by removing some control the interested party has).
Sure, closed source bits that promise to be good and that we can (potentially) look at are OK. But having source code for them is still better.
You must verify the binary because you cannot trust the source, so it's a basic skill for anyone who has the competency to validate an encrypted messaging application.
Now, it's possible that the source, along with repeatable builds, makes verifying the binary easier for someone with the necessary skills, but even with those things, they still have to verify the binary.
How does the Signal project handle reports of potential vulnerabilities? I haven't seen any security contact information on the OpenWhisperSystems site.
> However, WhatsApp on iOS still backs up chat logs to iCloud, and despite any effort by Facebook, those could be given to a law enforcement agency. It's not known whether the backups are encrypted, but we've reached out to Open Whisper Systems and will update with any new information.
Apple stores iMessage backups unencrypted and hands them out when given a lawful request (per https://thehackernews.com/2016/01/apple-icloud-imessages.htm...). WhatsApp needs to store encrypted backups to prevent this attack.
Curious, because when sending a .webm video from an Android device to an iOS device, the video file was transcoded on WhatsApp servers and then delivered to the iOS device as H264/mp4 (since iOS cannot play .webm files).
This should no longer be working.
Are you sure encryption was active, both devices on the latest version?
Is there a chance that the conversion was done on either of the devices?
Otherwise, yeah, something isn't right.
- What if the government forces WhatsApp to write and push a targeted software update in order to compromise the end-to-end encryption (I'm of course thinking of the FBI vs Apple case)? Is there a way for the user to be notified?
- Does WhatsApp Auto Backup encrypt messages before sending them to Google Drive or iCloud?
- Would it be possible for WhatsApp Web to rely on backend servers storing an encrypted version of messages, instead of relying on a connection to the user's phone, and still be able to perform keyword search over the encrypted messages with something like github.com/strikeout/mylar?
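On the last point: Mylar does multi-key searchable encryption, which is considerably more involved, but the simplest flavor of the idea (a deterministic HMAC-token index under a single client-held key) shows how a server can match keywords it cannot read. A rough sketch:

```python
# Searchable-encryption sketch: the server stores only opaque tokens;
# the client sends a token to search for a keyword.
import hashlib, hmac, os

search_key = os.urandom(32)        # known only to the client

def token(word: str) -> bytes:
    return hmac.new(search_key, word.lower().encode(), hashlib.sha256).digest()

# Client-side indexing: for each message, upload per-word tokens.
server_index = {}                  # token -> message ids; server sees no words
messages = {1: "meet at noon", 2: "noon is fine", 3: "running late"}
for msg_id, text in messages.items():
    for word in text.split():
        server_index.setdefault(token(word), []).append(msg_id)

# Search: the client sends token("noon"); the server returns matching ids.
assert server_index[token("noon")] == [1, 2]
assert server_index[token("late")] == [3]
```

The catch: deterministic tokens leak which messages match and how often each term is searched, which is a big part of why schemes like Mylar are much harder than this sketch.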
So you make a backup and lose the phone. What about the key? Is that gone too? Without it, an encrypted backup is useless. How do you back up the key? You and I will maybe manage to do this, but what about grandma and all those people without anybody close who knows about this?
When Android backs up Wi-Fi passwords, those are encrypted with your Google password, of which Google has only a hash. Okay, it's easily interceptable the next time you log in if Google has a warrant. But it's still an option.
Great job from everyone, I'm glad WhatsApp has done this. I look forward to these features on my device.
I'm asking because I could imagine that Whatsapp might get banned in some countries soon (as recently happened in Brazil) and thus, lose market share.
Zuck is way too savvy to publicly reverse that announcement once the deal closed. Imagine how bad that would look...
Facebook's asset is the social network: your contacts. WhatsApp was a threat because they were building a social network to match. That network could then have been acquired (by Google), or WhatsApp could have developed a more social platform on top of it. By owning it, Facebook can keep WhatsApp limited to a messaging platform only, removing the risk and limiting the damage to Facebook itself. The next step is to nudge WhatsApp users onto Facebook for group-messaging-type services.
If I take a look at Axolotl, in the scenario where Alice sends a message to Bob while Bob is offline:
(1), (2)
MK = HMAC-HASH(CKs, "0") // (3)
msg = Enc(HKs, Ns || PNs || DHRs) || Enc(MK, plaintext)
Ns = Ns + 1
CKs = HMAC-HASH(CKs, "1") // (4)
We can see that Alice re-uses CKs to derive a new symmetric key.
So if an attacker gets CKs(n), they could easily compute CKs(n+1).
CKs is not a long-term key, but we cannot honestly call this _perfect_ forward secrecy...
One more thing: if I remember correctly, according to the definition of perfect forward secrecy, an implementation must NOT re-use a previous session key to derive a new one...
Am I wrong?
(1) Quoted from https://github.com/trevp/axolotl/wiki
(2) see session_cipher_get_or_create_message_keys (https://github.com/WhisperSystems/libaxolotl-c/blob/0640b5ac...)
(3) I think we should read MK = HKDF(HMAC-HASH(CKs, 0x01)); see ratchet_chain_key_get_message_keys (https://github.com/WhisperSystems/libaxolotl-c/blob/0640b5ac...)
(4) I think we should read CKs = HMAC-HASH(CKs, 0x02); see ratchet_chain_key_create_next (https://github.com/WhisperSystems/libaxolotl-c/blob/0640b5ac...)
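To make the property being debated concrete, here is a toy version of the symmetric (hash) ratchet quoted above, using the seed constants from libaxolotl-c (0x01 for message keys, 0x02 for the next chain key). The point: knowing CKs(n) does let you compute CKs(n+1) and everything after it, but it cannot be run backwards, so messages sent *before* a compromise stay protected — that is what forward secrecy promises. Protection of *future* messages comes from the separate DH ratchet, not from this chain:

```python
import hmac
import hashlib

def next_chain_key(ck: bytes) -> bytes:
    # CK(n+1) = HMAC-SHA256(CK(n), 0x02): easy forwards, infeasible backwards.
    return hmac.new(ck, b"\x02", hashlib.sha256).digest()

def message_key(ck: bytes) -> bytes:
    # MK(n) = HMAC-SHA256(CK(n), 0x01); it is deleted after use, so an
    # attacker who later steals CK(n+1) cannot recover MK(0..n).
    return hmac.new(ck, b"\x01", hashlib.sha256).digest()

ck0 = b"\x00" * 32          # toy initial chain key
mk0 = message_key(ck0)      # used to encrypt message 0, then erased
ck1 = next_chain_key(ck0)   # ck0 is erased after this step
ck2 = next_chain_key(ck1)   # compromise of ck1 yields ck2, ck3, ...
assert mk0 != message_key(ck1)  # ...but not mk0 or ck0
```

So the commenter is right that this chain alone gives no future secrecy; the DH ratchet, which injects fresh ephemeral key material whenever a round trip occurs, is what heals the session after a compromise.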
I am not a cryptographer.
"In cryptography, forward secrecy is a property of secure communication protocols in which compromise of long-term keys does not compromise past session keys."
If someone were to compromise your key, and they had a packet log of all your communication, then PFS, which Signal has, guarantees that they couldn't derive previous keys from the current key and decrypt messages from the packet log sent before the compromise.
The thing you're talking about can be resolved by revoking compromised keys but knowing when to revoke those keys is a whole other problem that hasn't been solved by anyone to my knowledge...
Actor is a messenger that seems to focus on federation, and they want to use the Signal Protocol as well. So maybe they will develop software for that. However, there is still the question of whether WhatsApp would want to do that.
It supported federation with Signal, but they removed it: maintaining the server was a hassle, the code was outdated, and everyone agreed that if you wanted that level of security, you should just install Signal yourself.
That seemed a bit hacky, though: not proper federation, and not designed to scale. Maybe I'm wrong; I'd like to hear from somebody who knows more.
However, I have always wondered one thing about WhatsApp: How does it generate any kind of meaningful revenue?
Apparently they've ditched the old $1 subscription model, and even that was so loosely enforced that I have never paid a single cent for WhatsApp in my life--and never will (I got it while it was free on the iOS App Store and now have a 'Lifetime' subscription, unless they change those at some point).
And even back then, maybe half of their 900m monthly active users were iOS users who paid only once, and the rest may have dodged the fee in various ways. I have a really hard time believing the revenue so gained could ever actually cover the cost of R&D (especially for so many platforms) and infrastructure (which should be huge, given the amount of data they shift). Now they say they want customers to use WhatsApp as a platform, the way Facebook Messenger does, but I'm not seeing any of those features implemented anywhere.
I always assumed there was some heavy data analysis going on behind the scenes--which would have been fair, I guess, since we're neither being shown ads nor really paying. Facebook's involvement added to that conviction. Now that they're encrypting everything (which, again, is wonderful), they can't analyze what is really, really interesting data anymore (keywords, etc.). And it's not like there was a public outcry for them to take this step--I would guess that not many end users actually appreciate the importance of E2E encryption.
So the question remains: How are they making money? You still have metadata (I presume), but then again, how do they use this data to make money if they can't always match it to a Facebook profile (where they can show you ads), and also, does this data really provide such a big improvement over all the data collected by Facebook and Facebook messenger? It just seems strange to me that WhatsApp apparently does not want to make any money.
Does anyone have any insight on this? What am I missing?
FB shelled out $19B for WhatsApp, and clearly the contents of these messages were a major factor in that valuation, especially once the subscription fee was phased out.
Now they're voluntarily encrypting. The only explanation I can come up with is that they realize they will not be able to keep the contents of these messages to themselves in the face of motivated nation-state actors, and so they're killing that golden goose rather than share it. This seems pretty extreme and makes me wonder what is really going on behind the scenes.
My assumption is that there is a contingency to monetize metadata (something I'm not seeing discussed here much), but I can't help but wonder if FB is now looking at the $19B as a huge overpayment.
The next step is some kind of noise injection into the metadata. There are almost certainly ways to look at who is chatting with whom, and when. It'd be fantastic to automatically generate realistic-looking traffic to hide the real traffic within. Plus, you'd be adding deniability to any communication you're having.
There are likely some pretty severe battery-usage issues with it. If you offload the metadata fuzzing to a proxy server of some sort, then you're adding a vector for filtering out that fuzz. It might be too big a technical tradeoff to be worthwhile.
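The cover-traffic idea above can be sketched very simply on the client side: send fixed-size packets at randomized times, filling them with a real message when one is queued and with random padding otherwise, so an observer can't tell the two apart from size or timing. A toy sketch (the parameters are illustrative, and the battery cost of waking the radio on this schedule is exactly the tradeoff mentioned):

```python
import os
import random
from collections import deque

def next_send_delay(mean_seconds: float = 60.0) -> float:
    # Exponentially distributed gaps make send times look like a Poisson
    # process, whether or not there is real traffic to carry.
    return random.expovariate(1.0 / mean_seconds)

def pick_payload(outbox: deque, size: int = 256) -> bytes:
    # Always emit a fixed-size blob: a padded real message if one is
    # queued, otherwise random bytes the recipient silently discards.
    if outbox:
        return outbox.popleft().ljust(size, b"\x00")[:size]
    return os.urandom(size)

outbox = deque([b"hi"])
packets = [pick_payload(outbox) for _ in range(3)]
assert all(len(p) == 256 for p in packets)  # same size, real or dummy
```

Since the dummies are end-to-end encrypted like everything else, even a proxy in the path only sees uniform blobs, though it could still drop the whole schedule if it could distinguish participating clients.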
Now, it's a shame WhatsApp doesn't have an open-source client as well. As much as I appreciate encryption, it still looks like we're not getting anywhere with a closed-source program that is a pain in the arse to run on Linux.
Then they would be at least somewhat legally committed to using end-to-end encryption for the foreseeable future. I'd have a little more trust that they aren't just going to drop E2E encryption for particular individuals after a phone call from government officials.
(I would assume so, but I would like to have it confirmed. From someone who actually knows what he's talking about.)
This is not something E2E protocols can protect you from. You'd have to audit every piece of firmware and software on your device to verify that's not happening.
This is kind of worrying. I'm sure it's not malicious but I have literally no idea if things are encrypted right now.
IMO it would be very nice if calling someone and verifying the short code also confirmed their text identity, and if, once someone's text identity is verified, voice calls to that person were protected by the verified text identity.
(IIRC the reason that Signal does not work this way is that texts use Axolotl whereas voice uses ZRTP and the key material is completely independent.)
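One way to get the single-verification property the parent wants is to derive one human-comparable fingerprint from the long-term text identity key, and have the voice layer authenticate its handshake with that same key. A simplified sketch of such a fingerprint (the iterated-hash-to-digits idea resembles what Signal later shipped as safety numbers, but the exact format, iteration count, and key serialization here are illustrative, not the real spec):

```python
import hashlib

def fingerprint(identity_pubkey: bytes, iterations: int = 5200) -> str:
    # Iterate a hash over the long-term identity key, then render the
    # result as digit groups two people can read to each other.
    digest = identity_pubkey
    for _ in range(iterations):
        digest = hashlib.sha512(digest + identity_pubkey).digest()
    number = int.from_bytes(digest[:16], "big") % 10**30
    text = f"{number:030d}"
    return " ".join(text[i:i + 5] for i in range(0, 30, 5))

alice_key = b"\x05" + b"\xab" * 32  # hypothetical serialized public key
print(fingerprint(alice_key))
```

If the voice handshake (ZRTP or otherwise) were signed by the same identity key, reading this one string aloud once would cover both text and calls, instead of maintaining two independent trust roots.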
We've long planned to do the same thing in Signal, but WhatsApp is ahead of Signal here. Axolotl is now called Signal Protocol, btw.