Of course, the good part about a crowd is that all views come out, so the closed-source side has its place too, but we should at least give them their due and some kudos. We know people will try to evaluate the implementation and see what happens. In this case it's just a PR article. Let's wait for them to release details and see if it stands out. Maybe the protocol alone is enough to give us confidence that their claim is true. We don't know yet.
No, open source does not guarantee they are using the secure algorithms advertised on their servers. But what open source does do is let you run your own server, one you can put much more trust in. People spin up their own Signal and Matrix servers all the time now for just that reason.
Neither Signal nor iMessage guarantees true privacy, as we can't see the servers.
For ordinary users the question is whether you trust the server operator; if not, consider your communications insecure. What they do about that is up to them, and it is up to these providers to earn that trust.
And there are no services that do that, since it doesn't seem to be possible. Open-sourcing the backend doesn't help here: you still have to blindly trust the server operators.
I mean, when a company does that, it makes it pretty clear that they want to monitor your location by making it annoying for you to turn it on and off.
And even then it's only four taps, which is one more tap than changing Wi-Fi networks, something a lot of people do.
I don't need to read Signal's code to have confidence that it works.
I roll an OwnCloud server for me and my family, and I know that I'm not invading their privacy, but I could very easily do so and not leave a trace of my having done it, even one that a pro user could find, let alone a layman. To say that Apple is less trustworthy than a given power user solely because they use closed source is ridiculous; there is zero correlation there.
Power users can make sure the code they're running is safe for them. There's no guarantee that Signal, for example, is running the same code they release to others.
Not sure about that. I think even if a power user were to say "this is safe", you still have a boatload of integrity problems with the software.
- How do we know the power users can be trusted?
- Even if they can, what guarantee do we have that the service provider we use is using the same open source code that the power user validated?
- Etc, etc, etc. Basically, a lot of trust issues.
I agree with asadlionpk, open source can generally only be proven to help power users in a trustless environment. And where security is concerned, we cannot ascribe the "safe" attribute to any system where that safety cannot be proven.
What do you define as true privacy? Why isn't other privacy "true"?
What do you mean by "see the servers"? Surely you can see them as computers at the other end of a TCP connection, and the server cannot read the cleartext of an E2E encrypted message.
For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.
If the server is assumed to be compromised by the threat model, then none of this matters.
> For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.
It is better to have to put your trust in any number of independent auditors than to have to put your trust in a single corporation.
Hell, I can't even trust that with my desktop computer. Further, how do I know the light in front of me isn't fabricated and that I'm not a brain floating in fluids connected to a simulation?
Realistically, I believe Apple's intentions are in the right place. And I believe that, for the most part, iPhone backdoors are not a thing yet. Being able to look at the client code is not something I believe will happen, but I believe it would be good if it did, because then the security of iMessage could be independently verified much more easily. It is true that Apple could just lie about it and publish different code, but assuming their intentions are in the right place, it seems like a win-win for everyone.
1. It's easier to audit, although binary analysis is still useful (and reverse engineers are often better at finding security holes than someone doing a source code review).
2. Reproducible builds.
Server software doesn't need to be open source. If you have E2E encryption in your open source client, you don't need to trust the servers at all: https://paragonie.com/blog/2016/03/client-authenticity-is-no...
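For anyone who hasn't internalized why, here's a minimal sketch of the property the linked post relies on, using PyNaCl (the names are made up for illustration): the server only ever relays ciphertext it has no key for.

    # Minimal E2E sketch with PyNaCl: the server handles only ciphertext.
    from nacl.public import PrivateKey, SealedBox

    # The recipient generates a key pair on her own device; only the
    # public half is ever published.
    recipient_key = PrivateKey.generate()

    # The sender encrypts against the recipient's public key.
    ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet at noon")

    # This opaque blob is all the (closed-source, untrusted) server sees.
    # Only the recipient's private key can open it.
    assert SealedBox(recipient_key).decrypt(ciphertext) == b"meet at noon"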
TL;DR: Keychain recovery relies on a cluster of hardware security modules to enforce the recovery policy. After 10 tries to guess your PIN, the HSM will destroy the keys. Apple support gates all but the first few of these tries. The paper also implies that you can use a high entropy recovery secret as an alternative, though I can't figure out how you would enable that.
This seems like a pretty reasonable point in the design space to me. Of course, you are relying on Apple's trustworthiness and competence to implement this design. But that is true without recovery, since the client software is also implemented by Apple.
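The guess-limiting policy itself is simple enough to model in a few lines. This is only the control flow the paper describes, not Apple's actual HSM firmware:

    # Toy model of guess-limited escrow: the HSM releases the escrowed
    # key for the correct PIN and irreversibly destroys it after 10
    # failures. Real HSMs enforce this in tamper-resistant hardware.
    import hmac

    class EscrowHSM:
        MAX_TRIES = 10

        def __init__(self, pin: str, escrowed_key: bytes):
            self._pin = pin
            self._key = escrowed_key
            self._failures = 0

        def recover(self, pin_guess: str) -> bytes:
            if self._key is None:
                raise RuntimeError("key destroyed after too many attempts")
            if hmac.compare_digest(self._pin, pin_guess):
                return self._key
            self._failures += 1
            if self._failures >= self.MAX_TRIES:
                self._key = None  # zeroize: recovery is now impossible
            raise ValueError("wrong PIN")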
As I speculated elsewhere in this thread, I think they're going to do it with multiple recovery keys ostensibly written down by the user and never transferred directly to Apple, which each then redundantly encrypt all user data before transmitting respective copies to iCloud.
That would pull it off, and it basically just shifts the trusted device redundancy problem to a trusted key redundancy problem. The only remaining usability obstacle is to make sure the user has safely recorded all recovery keys.
The option to record the keys yourself is also described in the whitepaper:
"If the user decided to accept a cryptographically random security code, instead of
specifying their own or using a four-digit value, no escrow record is necessary. Instead,
the iCloud Security Code is used to wrap the random key directly. "
As I said, I can't actually find this option in my iOS settings. Maybe you have to disable Keychain first?
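The wrapping step itself is conceptually simple. A rough sketch, where the KDF and its parameters are my assumptions rather than anything Apple has published:

    # Wrap a random keychain key under a user-held security code.
    import base64, hashlib, os
    from cryptography.fernet import Fernet

    keychain_key = os.urandom(32)           # the random key protecting the keychain
    security_code = "JQXR-2M7P-K9HD-4TQW"   # illustrative high-entropy code, shown once to the user
    salt = os.urandom(16)

    # Derive a key-encryption key from the code, then wrap the random key.
    kek = hashlib.pbkdf2_hmac("sha256", security_code.encode(), salt, 200_000)
    wrapped = Fernet(base64.urlsafe_b64encode(kek)).encrypt(keychain_key)

    # Only (salt, wrapped) needs to be stored server-side; without the
    # code the record is useless, so no HSM escrow record is needed.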
I'm no expert but this is my recollection.
In the scenario where you have 2 devices, one on iOS 9/10 that has migrated from 2SV to 2FA and the other on iOS 6/7/8, you can still access the recovery menus on the iOS 8 device, but it does weird things to the keychain if you mess about with it.
For secure, private messages, your sane current options are Signal, WhatsApp, and Wire. Signal is the best option, but you're going to make some UX sacrifices for security. WhatsApp and Wire are extremely comparable. If you worry about implementation or operational security flaws, WhatsApp has the Facebook security team behind it, and a long-term relationship with OWS; no cryptographically secure messenger is better staffed. If you're worried about Facebook seeing your metadata, which is a sane worry, Wire is approximately as slick and usable as WhatsApp with mostly the same underpinnings.
Regardless of the underlying cryptography, in the absence of a well-reviewed published crypto messaging protocol, iMessage is basically just an optimization over SMS/MMS. It's great for that, but it shouldn't be anyone's primary messenger.
Thomas addressed this:
> If you're worried about Facebook seeing your metadata, which is a sane worry, Wire is approximately as slick and usable as WhatsApp with mostly the same underpinnings.
> I'm not sure I'd elevate WhatsApp over iMessage
I would! Consider the following: https://blog.cryptographyengineering.com/2016/03/21/attack-o...
That sort of attack is not possible against WhatsApp.
> as implemented in versions of iOS prior to 9.3 and Mac OS X prior to 10.11.4
This fall, these will both be two major versions behind the current state of the software. Considering how openly pro-privacy/pro-security Apple has been over the past couple years, I'd expect they've fixed these issues and more by now.
I'm not making an argument that Apple is infallible; I acknowledge that it's possible there are still security flaws in iMessage. But I think that Apple is the only one of the really big tech companies who seem to be taking a privacy stand that the users can get behind, which is something to applaud.
Signal doesn't work without a phone number and it doesn't have a web app; otherwise it's great if you only use it on the phone.
Wire looks promising for desktop users.
I'd trust iMessage over WhatsApp any day of the week. The parent company doesn't make money by potentially knowing who I am and what I say.
https://www.ieee-security.org/TC/SP2017/papers/84.pdf is super interesting when thinking about secure messaging from the perspective of ordinary folk.
If they don't, I don't think it is that hard for Apple to extend their current security model to iCloud. They currently rely on senders encrypting messages with each destination device's public key, so they can store the individually encrypted messages separately in iCloud.
When a new device arrives, they could have an existing device perform re-encryption of the messages for it (after the user authorizes that the device should be added).
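A sketch of what that could look like, using PyNaCl sealed boxes as a stand-in for whatever primitives Apple actually uses (they haven't published this design):

    from nacl.public import PrivateKey, SealedBox

    devices = {name: PrivateKey.generate() for name in ("iphone", "ipad")}

    # The sender encrypts one copy per registered device key; iCloud
    # stores the individually encrypted copies.
    msg = b"hello"
    stored = {name: SealedBox(k.public_key).encrypt(msg)
              for name, k in devices.items()}

    # A new device is authorized: an existing device decrypts its own
    # copy and re-encrypts the history for the newcomer's public key.
    devices["mac"] = PrivateKey.generate()
    plaintext = SealedBox(devices["iphone"]).decrypt(stored["iphone"])
    stored["mac"] = SealedBox(devices["mac"].public_key).encrypt(plaintext)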
Even without the new iCloud functionality, Apple has always been in control over the key exchange, which would allow a malicious employee / government to write code that could add a new authorized device/key silently and thus allow Apple to eavesdrop from that point on in future conversations.
This is exactly what Apple fought to protect "recently". Once a government entity forces backdoors on citizens, privacy goes out the window. Apple knows this, and I think you can expect them to make a gigantic media mess if they were forced to comply. It would ruin their business worldwide.
Based on the company's movements towards public-key based two-factor authentication, I think they can reasonably get away with phasing out password-based account recovery by relying on two methods:
1) The user has more than one trusted device authenticated to the iCloud account; account recovery can take place using the other trusted device and passwords are not required
2) The user only has one trusted device; the user has a primary public/private key pair that encrypts all data on the client, but in addition there are 9 backup keys which are generated on the client, never transferred to Apple and (hopefully) written down by the user
In the second scenario, Apple bypasses the obstacle to full PKI-based access control by implementing authenticating key redundancy instead of authenticating device redundancy. User data can be end-to-end encrypted by each key, transferred to iCloud, and if the user loses access to the device they can recover their account data using one of the recovery keys.
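To make the second scenario concrete, here's a hypothetical sketch (Fernet as a stand-in cipher; none of this is from Apple) where each backup key independently wraps a single master key:

    from cryptography.fernet import Fernet

    master = Fernet.generate_key()
    backup_keys = [Fernet.generate_key() for _ in range(9)]  # user writes these down

    # Each backup key independently wraps the master key; only the
    # wrapped copies are uploaded, never the backup keys themselves.
    wrapped_masters = [Fernet(bk).encrypt(master) for bk in backup_keys]

    # All user data is encrypted client-side under the master key.
    data_blob = Fernet(master).encrypt(b"all user data")

    # Recovery with any single surviving backup key:
    recovered = Fernet(backup_keys[3]).decrypt(wrapped_masters[3])
    assert Fernet(recovered).decrypt(data_blob) == b"all user data"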
If Apple has changed backups to function in a more private manner, then they would announce that, not something exclusive to iMessage.
More detail: iMessage syncing has always been maximally private from day one. However a drawback to the current implementation is that new devices cannot sync message history. The reason is that each message is encrypted separately by senders for each currently registered device for the receiver. And yes that means if you have 3 devices on your iCloud account, whenever someone sends you an iMessage, 3 separately encrypted copies get sent. Apple has gone to great lengths to ensure that private keys are never shared by devices.
So what's new is apparently Apple's figured out a way to sync history via iCloud. I'm interested to hear the implementation details, but there can be no doubt that it still respects the design goal of never sharing private keys.
Now, the privacy goals for backups are different. You obviously want them to be as private as possible, but most people generally want to be able to recover their life in the event of a simple forgotten password. There are certainly scenarios where you want to encrypt your backups, but it always should be an informed, opt-in choice. You should clearly be aware that if you forget your password, you lose your backups. So generally it's desirable to default to having a fallback recovery method.
Like I said earlier, if Apple has figured out a fallback recovery method that somehow does not involve storing your data in a manner they can decrypt, that would be something they announce as part of iCloud Backup... not just for iMessage. But it seems almost a fundamental design constraint. You can either have something impossible for anyone else to decrypt or conveniently recoverable backups, not both.
No it hasn't and yes they can. I've done it several times. The ability to restore messages to a device is specifically what breaks the otherwise end-to-end encrypted iMessage architecture, which is why Federighi talking about the new iOS 11 capabilities is intriguing.
To your last point, my personal hypothesis is that Apple has designed a cryptosystem that uses a PKI with redundant key pairs to extend the redundant encryption. That shifts the recovery usability solution from redundant trusted devices to redundant keys that are written down.
Interesting. To be clear, you're not just talking about restoring messages to the same device, like after resetting it?
Looking more closely at Apple's security whitepaper, perhaps restoring history on a new device is possible if you enable iCloud Keychain. Looks like that would in fact share the private decryption keys among devices.
Ah, and that more clearly points at what this iMessage change may be: Mandatory iCloud Keychain, at least as far as iMessage keys are concerned. Which would suggest another, hidden improvement: no more need to redundantly encrypt a copy of every message for every recipient device!
I want to add however that this still does not suggest anything about changing the security of backups, which was the implication of the article. Nor would I necessarily characterize iCloud keychain as "breaking" encrypted architecture.
I just noticed that you misquoted me. I said iMessage syncing has always been maximally private, and drew a distinction between that and backups. The article you cite mentions the tradeoff between strong cryptography (maximal privacy) and user pain (losing your data forever). Apple has made the intentional design choice of enabling the former for syncing but allowing backups to survive an account password reset. I think it's pretty clear that's a good choice for a consumer device. You can always turn off iCloud backups and back up locally via iTunes if you want maximal backup privacy.
And I'm not sure what we "saw this week". If you mean the link posted below, that's unrelated to some "unencrypted Cloud" -- it seems to be about data on Apple customers (names, addresses, emails, numbers, etc.), from some POS/CRM system or such.
It says "The suspects allegedly used an internal company computer system to gather users’ names, phone numbers, Apple IDs, and other data, which they sold as part of a scam worth more than 50 million yuan (US$7.36 million)."
So this is about CRM style data being stolen.
Look at the "recent" FBI case. The insecurities became very apparent because Apple had to cooperate.
I would recommend having an option to generate keys based on something you have and something you know that you won't easily forget, such as a passphrase. That way you can always recover them later!
Good or bad idea?
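For concreteness, a sketch of what that derivation could look like, assuming PBKDF2 and a per-device secret as the "something you have" (both my assumptions, not a reference to any shipping design):

    import hashlib, os

    device_secret = os.urandom(32)                # held by the device: "something you have"
    passphrase = "correct horse battery staple"   # "something you know"

    recovery_key = hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode(),
        device_secret,    # the device secret doubles as the salt
        600_000,          # slow the derivation to resist brute force
        dklen=32)
    # Re-running the same derivation later reproduces the identical key,
    # so nothing secret ever has to leave the user's hands.

The trade-off: lose either component and the data is gone, and if the device secret leaks, a weak passphrase can be brute-forced.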
I don't recall any specific examples off the top of my head, but I believe it's probably happened and does happen. But backdoors are much more common; so much so, in fact, that I'm led to believe the NSA doesn't have significantly greater cryptanalytic capabilities than academia and industry these days, given that their modus operandi is usually to demand a backdoor rather than break the cryptography. Their advantages probably stem from access to superior computing power, or simply much more of it. I imagine a lot of the agency's research is in fundamental paradigm shifts that can broadly attack many algorithms (like quantum computing); my edit at the bottom gives an example of this.
To your (implied) second question: it's probably not true that zero days are easier. When a company like Apple develops a novel cryptosystem, the NSA is not likely to break it for years (barring conspiracy-theoretic capabilities that we have no way of verifying). Zero days incur massive amounts of research and development time to go from identifying a useful cryptanalytic weakness (i.e. getting an attack down from exponential, to sub-exponential, to quadratic time) to deploying an exploit. All the while, earnest cryptographers in industry and academia are attempting the same thing, except they'll publish their results. And if the NSA has a functional exploit, they will use it like you would a classified weapon: sparingly.
EDIT: Actually, your question reminded me of differential cryptanalysis. That's more of a paradigm of attacks against a variety of algorithms instead of a zero day against any one particular encryption algorithm; still, the NSA apparently developed differential cryptanalysis and maintained it as a classified capability before the public community independently came up with it. That probably qualifies for your question.
Color me skeptical.
I think the last remaining piece of trust is trusting that Apple's servers are correctly advertising your iMessage public keys, at least if I'm understanding how it works correctly.
After all, if you lose access to all devices you can still reset from scratch and get back on iMessage and at least receive messages. Your recipients won't know this happened. How would anyone know if Apple is performing a MITM using this mechanism?
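The residual trust fits in a few lines (PyNaCl again, names invented): whoever answers key lookups decides which public key the sender encrypts to.

    from nacl.public import PrivateKey, SealedBox

    alice = PrivateKey.generate()
    mallory = PrivateKey.generate()  # controls the key directory

    # An honest directory would return alice.public_key here; a
    # malicious one can silently substitute its own after a "reset".
    directory = {"alice@example.com": mallory.public_key}

    ct = SealedBox(directory["alice@example.com"]).encrypt(b"secret")
    print(SealedBox(mallory).decrypt(ct))  # b'secret': the MITM reads it, Alice never can

This is why Signal surfaces safety numbers for out-of-band key verification; iMessage has no equivalent UI.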
> "Our security and encryption team has been doing work over a number of years now to be able to synchronize information across your, what we call your circle of devices—all those devices that are associated with the common account—in a way that they each generate and share keys with each other that Apple does not have."
> It's unclear exactly how Apple is able to pull this off, as there's no explanation of how this works other than from those words by Federighi. The company didn't respond to a request for comment asking for clarifications. It's possible that we won't know the exact technical details until iOS 11 officially comes out later this year.
> Meanwhile, cryptographers are already scratching their heads and holding their breath.
This might be uncharitable, but to my mind this writing and presentation of facts (probably unintentionally) implies that this capability is novel, when it's not. Sharing keys between multiple devices is a straightforward problem if you're willing to make user-experience trade-offs. Cryptographers are not scratching their heads wondering how Apple could achieve E2EE with a network of devices; they're wondering how they did it without sacrificing account recovery. It's not clear to me that readers would automatically understand this, because the real head-scratcher isn't addressed until near the end of the article, which brings me to my next point:
> "The $6 million question: how do users recover from a forgotten iCloud password? If the answer is they can't, that's a major [user experience] tradeoff for security. If you can, maybe via email, then it's [end-to-end] with Apple managed (derived) keys," Kenn White, a security and cryptography researcher, told Motherboard in an online chat. "If recovery from a forgotten iCloud password is possible without access* to keys on a device's Secure Enclave, it's not truly e2e. It's encrypted, but decryptable by parties other than the two people communicating. In that sense, it's closer to the default security model of Telegram than that of Signal."*
I'm hesitant on how much faith to put in Apple's scheme here. On the one hand I generally trust Apple very highly when it comes to security and cryptography in particular. On the other hand I don't see them making account recovery impossible.
However, over the past few years they have been increasingly pushing two-factor verification, and then full two-factor authentication based on a network of trusted devices. The iCloud password used to be enough to manage the account's security and trust, but now it frequently defaults to requiring authenticated approval from a trusted device (instead of e.g. security question responses).
I could see Apple abandoning conventional account recovery if they keep proceeding down this path by providing a huge amount of access redundancy. For example, they could keep redundant copies of all user data synced in iCloud which are respectively end-to-end encrypted on the client with a user's backup keys. Each authenticated user device might have 10 backup keys, with a typical warning that they should be written down and will not be displayed again, etc. The keys could be downloaded from the device and stored by the user but never given to Apple, and would primarily be useful in circumstances where a user only has one trusted device authenticated to iCloud. Then if a user loses primary access to any given Apple device, the user has two ways to recover data:
1) Authenticated approval from another of the user's trusted devices, or
2) Use the backup keys, which do not provide a method of changing the account password, but which instead decrypt the redundant user data corresponding to the key.
The basic idea is that removing conventional password-based account recovery requires inordinate redundancy to counter the usability loss; you can do this with redundant authenticated devices (each with their own keys), or you can simulate it on one device with redundant keys that are ideally harder to lose.
> It's unclear exactly how Apple is able to pull this off, as there's no explanation of how this works other than from those words by Federighi.
All Apple says is "end to end encryption". From your phone to the cloud is 2 ends, and then from the cloud to the FBI is 2 more. Yay!
E2E here means the user:
1. encrypts her data at the source, i.e. on her own computer, and
2. sends the encrypted blob over the untrusted network, or so-called "dumb pipes".
The hardware company that makes the user's computer tries to dictate whether and how #1 can be done.
The software for doing #1 does need to be open source.
On mobile, does such software even exist?
And even if it does, is a mobile phone really the user's computer? It is an effectively locked enclosure containing several computers controlled by third parties.
The way to do secure mobile messaging would be to encrypt the message on a computer the user controls, then move the message to the "mobile phone" and then send to the untrusted network.
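Sketched out, that workflow is trivial (Fernet as an example cipher, and the mount point is purely illustrative):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # shared with the recipient out of band
    blob = Fernet(key).encrypt(b"the message, composed offline")

    # Write the blob to removable media; the phone then forwards it as
    # opaque bytes, so its baseband, OS vendor, and network can all be
    # untrusted for confidentiality.
    with open("/media/usb0/outgoing.bin", "wb") as f:
        f.write(blob)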
Alternatively, do not use a mobile phone for messaging if you are worried about others having access to the messages. Wait for a pocket-sized portable computer that can be tinkered with. No baseband, etc.
What's more, nothing requires these computers to be connected to the untrusted network. The ME requires an internet connection.
The message encrypted on the user-controlled computer can be moved to the "mobile phone" via a wired local network, serial link or removable media. Is such transfer to and from the device made difficult by the way these mobile phones are constructed and configured? Yes, and probably this is intentional. Companies want user data and the way they get it is by encouraging users to store data in the "cloud".
Most importantly, these Intel and AMD based computers are not the only computers capable of encrypting messages.
If the ME scares you, then do not use computers that have it.
If you have computers with the ME, then disconnect them from the internet and get a computer without it for your internet needs.
This message was typed on a computer that does not have the ME, would not be considered "desktop", and would probably not qualify as "modern". It makes no difference; no problems encrypting messages on it.