In general, I totally agree with the article; however, there is one thing:
> From what I can tell, the iMessage app gives the sender no indication of how many keys have been associated with a given iMessage recipient, nor does it warn them if the recipient suddenly develops new keys.
iMessage does warn you on all devices when a new device gets added to the account. Of course that's during normal operations. Unless somebody reverse-engineers one of the apps, we can't know whether the protocol has a provision for "please add this key but don't warn the user".
That's the thing with crypto in closed source software: it's completely useless because you have to trust the software vendor not to put any backdoors in.
And as of two weeks ago, unfortunately, the only thing we can trust is that the vendor actually DID add a backdoor.
The problem here is that you need to know the other person's encryption key. Since I'm not warned whenever you get a new device, I have no idea how many keys are assigned to you. Hence, the key distribution problem still remains.
Oh I agree with you. The public key directory thing breaks the whole system. No question there. I totally agree with the original article.
I just wanted to highlight the fact that devices DO inform the user when a new key gets added. So if the directory didn't serve fake public keys and if the devices didn't have code to not warn if certain keys were added, then you would have the guarantee to only talk to me.
Sure: I could add more devices, but I have the full control over them, so it doesn't matter to you how many devices your client encrypts the message for because I have the full control over what devices I authorize or not.
That of course would be the perfect world, when in fact Apple probably adds surveillance keys to the directory server and doesn't warn the user as such keys are added.
Don't use iMessages for anything you would not want Apple or the NSA or any other law enforcement agency to read.
But then again, don't use any of email (envelope in the clear), SMS (your carrier can read it), any other IM service (same issue as iMessage), snail mail (can be read by the post office and anybody opening your letter box when you aren't there).
> I just wanted to highlight the fact that devices DO inform the user when a new key gets added.
This is totally false. When you are sending iMessages to someone, you get no UI indication at all when they buy new devices and add them to their Apple ID.
You are confusing it with adding devices to your own account.
Well, they can find out, if they MITM their own connection to the APNS. Messages they send get encrypted to all the devices associated with your iMessage account.
The data's available to them from Apple, though there is no UI for it.
I would hope so. It's none of their business if I buy a new device. This is plain respecting my privacy.
BUT: If I can be sure that only my devices can decrypt data sent to them and you know that my account will only announce keys to devices I actually own, then you don't need a notification when I add a new device because however many devices I have, only I will be able to read your messages.
Of course, because we can't trust Apple not to silently add more keys to an account, all of this is moot anyway.
> I would hope so. It's none of their business if I buy a new device. This is plain respecting my privacy.
What about THEIR privacy? End-to-end encryption to device-specific keys means that they have no idea which devices THEIR messages to you are being encrypted to.
Why is everyone so critical? I'm far from an Apple fanboy, having opted to never buy one of their products again, but they deserve major props here. They have the constraint of "the system must be easy to use" and, from what I can tell, did pretty much as much as they could to add strong crypto to the mix at no benefit to themselves.
Apple could have just gone "fuck it, let's store it in plaintext, since it's never going to be good enough anyway", but adding strong, transparent encryption to one of the most-used messaging services in existence right now is a very good thing, in my book.
Sure, the transparency introduces some flaws, but Apple has protected against a whole slew of passive attacks, which, to my knowledge, is the kind of attack the mass-surveillance state uses, and what a large messaging service is most at risk of.
Getting a warrant and saying "hey, we have a warrant for X, can we look at their communications", and then having Apple MITM that person is much, much more preferable to me than having a government say "we need all your data" and then the provider just handing the data over, since it's all in plaintext.
As it stands, Apple has pretty much disabled large-scale surveillance of iMessage accounts, which is what PRISM and all the furore is about, and people still shit on them?
Can we have some perspective please? If we lambaste any provider who takes steps to safeguard our privacy (again, at no benefit to them, since most people who use iMessage don't even know what encryption is), then what do you think the next service to have the choice in a product will say? "Hmm, I could either store everything in plaintext and have governments strong-arm me into sharing it with them (which I couldn't really care less about), I could build really strong encryption into it and make it an unusable mess for no added benefit to me, or I could build as much encryption into it as doesn't impact usability, again for no benefit to me, and get shat on for not providing option #2. Gee, how can I ever decide?"
If you want strong privacy/security, use Silent Circle (disclosure: I work for them). If you care about your day-to-day privacy and want to talk to people you know, you could do much worse than iMessage.
> Apple could have just gone "fuck it, let's store it in plaintext, since it's never going to be good enough anyway", but adding strong, transparent encryption to one of the most-used messaging services in existence right now is a very good thing, in my book.
That's the point of the article: the messages stored on iCloud aren't encrypted. The claim that iMessage uses end-to-end encryption is worthless.
The messages are encrypted end-to-end. What you do with them once they've arrived (e.g. backing them up to iCloud) has nothing to do with the transport.
A co-worker and I send encrypted e-mails back and forth occasionally. If he decrypts my attachments and saves them to his hard drive to work with, that doesn't mean that e-mailing them in an encrypted form was worthless.
They might well be encrypted end-to-end, but who manages the keys? If Apple manages the keys on the server side, then that end-to-end encryption does not prevent Apple from accessing the messages.
If you read the article, you'll note that the author was successful with an attack that demonstrates that Apple can access the plain text of at least recently-delivered messages (and possibly all of the backed up messages).
Apple manages public keys, not private. They'd have to lie to you and give you their own public key, and then man in the middle your conversation to get it.
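That directory substitution is the whole game. Here's a toy sketch of the attack, using XOR against a random 32-byte key as a stand-in for real public-key encryption (names and keys are invented purely for illustration):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for public-key encryption (illustration only).
    return bytes(a ^ b for a, b in zip(data, key))

# Honest world: the directory maps users to their real keys.
bob_key = secrets.token_bytes(32)
directory = {"bob": bob_key}

# MITM world: the directory operator silently swaps in its own key.
mallory_key = secrets.token_bytes(32)
directory["bob"] = mallory_key

msg = b"meet at noon".ljust(32)
ciphertext = xor(msg, directory["bob"])  # Alice encrypts to "Bob's" key

# Mallory decrypts, reads, and re-encrypts to Bob's real key,
# so neither endpoint notices anything.
intercepted = xor(ciphertext, mallory_key)
forwarded = xor(intercepted, bob_key)

assert intercepted == msg                # Mallory read the plaintext
assert xor(forwarded, bob_key) == msg    # Bob sees a normal message
```

Without a key fingerprint the endpoints have no way to detect the swap, which is exactly why the lack of key verification in iMessage matters.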
Sure, but this is all transparent to the user. Apple heavily pushed iCloud backups and doesn't mention that they throw that data security out the window. You hear that iMessages are secure, and then you accept the "use iCloud backup" prompt that appears right after you get your device. Except now your messages aren't secure.
Apple did a great thing by making iMessage secure by default. They should do a similarly great thing by making iCloud backups secure or by making it very obvious to the user that they are not secure (and what that means for your other "secure" data).
Isn't it possible that iCloud backups are encrypted in storage? Apple could be holding the key in encrypted form and sending that to the device with a backup. If the user's password is used as the passphrase to the key, then only the user can use the key to decrypt it. Apple could hold a second key, with passphrases set to secret answers for reset scenarios - that would let them regenerate the normal password-encrypted key for the chosen changed password.
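A minimal sketch of that scheme, assuming PBKDF2 for key derivation and a toy XOR wrap standing in for a real AES key wrap (all passphrases and answers here are invented; this is a hypothetical design, not Apple's actual implementation):

```python
import hashlib
import os
import secrets

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    # Derive a 32-byte key-encryption key (KEK) from a passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def wrap(key: bytes, kek: bytes) -> bytes:
    # Toy XOR "wrap" (one-time-pad style) for illustration only;
    # a real system would use AES key wrap (NIST SP 800-38F).
    return bytes(a ^ b for a, b in zip(key, kek))

unwrap = wrap  # XOR is its own inverse

# One random content key actually encrypts the backup blob...
content_key = secrets.token_bytes(32)

# ...and is stored twice: wrapped under the account password, and
# wrapped under the secret answers (for the password-reset scenario).
pw_salt, qa_salt = os.urandom(16), os.urandom(16)
stored_pw = wrap(content_key, derive_kek("account-password", pw_salt))
stored_qa = wrap(content_key, derive_kek("first pet: rex", qa_salt))

# On reset, the provider unwraps via the secret answers and re-wraps
# the same content key under the new password; the backup data itself
# is never re-encrypted.
recovered = unwrap(stored_qa, derive_kek("first pet: rex", qa_salt))
new_salt = os.urandom(16)
stored_pw = wrap(recovered, derive_kek("new-password", new_salt))

assert recovered == content_key
assert unwrap(stored_pw, derive_kek("new-password", new_salt)) == content_key
```

Under this design, Apple never holds the content key in usable form unless it also stores (or can capture) the password or the secret answers.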
This seems likely given they offer to store your key for FileVault[1] under some secret answers. I think this delivers the same level of "secure" in the sense that, provided they don't store the passwords you submit (possible law enforcement request), then they shouldn't be able to decrypt your iCloud backups either.
Apple can still get at them because they can reset your password on you. Certain files in the backup will not be accessible to them if they were encrypted with a key that wraps the Device UID. iMessages don't seem to be one of these files, but Mail backups are clearly inaccessible.
I think the most important takeaway here is that the reactionary press release "Apple’s Commitment to Customer Privacy" was very carefully worded to give the customer a sense of security. It's definitely not telling the full story, but it serves its purpose of reassuring a concerned user base.
I'd like to see a more comprehensive test to prove that the messages are part of the backup. Since you have to sign in to iMessage, it's possible that these are being retrieved from the iMessage server in an encrypted form - "I restored an iPhone and saw old messages" is not proof that they are stored in plaintext.
Also, it's important to note that the iMessage client only retains about a day's worth of messages on the device. Earlier messages can be obtained via a "load earlier messages" button. This means that Apple does store the message history on the iMessage servers, and there is a way for the client to retrieve them in batches while the contents are not accessible to Apple.
It seems extremely likely that this 'iCloud Backup' vulnerability is a red herring. The messages are not stored in the regular backup but are fetched encrypted by the client when you sign in.
That's not what's happening with the "load earlier messages" button at all. The long term iMessages history is stored in an unencrypted sqlite database on the phone (which is included, unencrypted, in local iTunes backups).
The fact that the entire backup file could be encrypted doesn't change the fact that many people are (wrongly) speculating about things which are easily verifiable.
They're not loaded from the server but the sms.db (sqlite) database on your phone.
Tools exist to read the database from the phone directly, from backups, etc. Which is great when you accidentally delete them... I wouldn't know that though. :)
But what's the difference? Whether the messages are stored in plaintext, or whether they are stored as part of the iCloud backup, the author proved that Apple can get to them in order to restore your newly purchased phone.
It matters because in one scenario Apple itself can't get to them - it can only restore them to your phone where they are decrypted using your password.
Because Apple said they were doing end-to-end encryption:
"For example, conversations which take place over iMessage and FaceTime are protected by end-to-end encryption so no one but the sender and receiver can see or read them. Apple cannot decrypt that data. "
The author did demonstrate that recently-delivered messages may be accessed without device-specific or user-specific secrets in place. So, even if all the transports on all the hops between all the machines in the message delivery chain are encrypted, Apple is able to access the plain text of at least the recently-sent messages.
Actually the author didn't prove this. The author entered their account password to retrieve the backup. Restoring the most recent day of messages could be done from a plaintext backup or it could be done by loading an encrypted batch from the iMessage servers. We don't have enough information to know for certain either way.
The author said that he changed his password and then got a new device and accessed recent iMessages on it. So that rules out the password as the encryption key.
It rules out the password as the encryption key, but it doesn't rule out encrypted backups. Implementations of encrypted volumes, such as in LUKS on Linux, commonly use the password as something that unlocks the real encryption key. So during password change, the new password could be used to re-encrypt the backup encryption key, making it so that as long as Apple doesn't store the passwords, Apple can't have access to the encrypted backups.
However, this is really far into the weeds on a topic that doesn't matter so much, as none of it really relates to Apple's claims about iMessages.
He reset his password using the iForgot service. So either the data is encrypted under a key Apple has, or it's encrypted under a key derived from your secret answers (which is unlikely). In the latter case, Apple might as well have your key, given the limited entropy.
Or it could be encrypted by a random key with high entropy, that Apple does not have, because it is encrypted by a key derived from your password. See, for example, LUKS:
And that key got onto his new device how, precisely? The observed behavior was: reset password with questions/email, provision a new device, get old chats on the new device. If the new device had the old encryption key, how did it get there?
You present a false choice between security and ease of use. They could easily add a security layer that uses a short authentication code, letting people in physical proximity (say, spouses) verify their texts are secured. Just a simple optional feature.
But how much security can you guarantee with closed source anyway?
Why is everyone being so critical of Skype? Skype deserves major props here. "They have the constraint of 'the system must be easy to use' and, from what I can tell, did pretty much as much as they could to add strong crypto to the mix at no benefit to themselves."
Which is actually, as far as we know, true for Skype too. The reason is that both systems acquired a reputation (one that Apple's PRISM press release courted) of being three-letter-agency proof (at least against most TLAs).
That reputation, we now know, was erroneous for Skype long before Microsoft bought them, and might be for Apple too. Yes, Apple deserves major props for deploying end-to-end crypto, but iChat's security reputation is inaccurate.
> The sad thing is there's really no crypto to understand here. The simple and obvious point is this: if I could do this experiment, then someone at Apple could have done it too. Possibly at the request of law enforcement. All they need are your iForgot security questions, something that Apple almost certainly does keep.
It's an interesting meta-question about why Apple's claim was taken at face value. Not just by the public but by Apple's many scoffing critics? The Mat Honan incident should have been a clear sign that Apple's security mindset had a gaping hole:
> Apple tech support confirmed to me twice over the weekend that all you need to access someone's AppleID is the associated email address, a credit card number, the billing address, and the last four digits of a credit card on file. I was very clear about this. During my second tech support call to AppleCare, the representative confirmed this to me. "That's really all you have to have to verify something with us," he said.
That's a pretty grievous oversight. Why should Apple get the benefit of the doubt that they've solved the holy grail of security...that is, having lockbox security without impact to user convenience? In the above-cited case of AppleID access, they hadn't overcome that tradeoff and appear to have erred on the side of user convenience, at the cost of security.
To anyone who thinks this is just a theoretical problem: it's worth noting that law enforcement complained about both Skype and Hushmail being impossible to tap before tapping both. It seems vanishingly unlikely that they either haven't figured out a way to get Apple to help them or won't at some point. This just confirms it.
Also, for what it's worth, at least one of the iMessage handshake headers mentions FairPlay, which makes me suspect the system was done by Apple's DRM team. Given that DRM is typically an exercise in obfuscation more than actual crypto, this does not bode well.
There are lots of other things to worry about in the iCloud ecosystem anyway. The iCloud backups that devices make aren't encrypted or protected at all by Apple's two-factor authentication. Apple can pick and export these at will. Alternatively, plug in somebody's email and phished password, and you've got the entire history of their iMessage conversations in plain text, along with a complete copy of all their devices.
I'm much more worried about that happening than an evil-Apple scenario. I doubt many people have strong AppleID passwords, or particularly unique ones either. Heck, I know people that share accounts to avoid paying for apps.
They can sign their own alternate ramdisks, and the default PIN is only 4 digits, so that's not surprising really. It's been possible to load similar forensic software on A4 devices by anybody for years now.
The wipe after 10 attempts is moot anyway; we are talking about Apple loading new software into a ramdisk and brute-forcing it. I've personally done this at an owner's request.
A normal user could do it on any device with the A4 chip or prior, vulnerable to the limera1n exploit. Apple could do it with any device, as they own the signing keys for the bootloader.
I worked as an IT security consultant before and one of my co-workers had the task of brute-forcing the encryption. Conclusion was that with a newer generation (was it the iPad 3 or iPhone 4S? I don't remember exactly), brute-forcing was very infeasible. It took forever. My bet is on the back door.
How is locking/encryption actually implemented on the iPhone? If you use a 4 digit pin that seems incredibly easy to brute force, unless perhaps the actual encryption key is protected in hardware (and the "wipe after 10 failed attempts" policy is also enforced in hardware)
I don't believe the "wipe after 10 failed attempts" is enforced in hardware, but the actual encryption is. Everything got much better with the 4S/iPad 2 and later. So, if you can get the phone to boot from your replacement image, you should be able to run arbitrary attacks against the encrypted image. This would be complex. Much much easier for Apple, and still doesn't really count as a "backdoor" in the hardware key storage itself. If you could bypass the 10 tries lockout again, you can do ~12 tries/second, so 20 minutes to defeat a simple 4-digit passcode.
I suspect someone could decap the chip itself and extract the key, at the cost of the phone (and burning a bunch of whatever chip they're using in advance to prep for the attack). That probably would be worthwhile for a conference presentation. I'm confident any decent intelligence service which doesn't have Apple's cooperation can do this now, since extracting keys from hardware is a pretty key capability for anyone doing black bag stuff. A friend of mine just started a consultancy in this space, so maybe will look at this for a presentation in 2014 if there's time.
The rate limiting is the security element's speed (separate from the "too many tries" timeout and the "wipe after 10 tries" thing). Unless the security element itself has a backdoor, even Apple can't go faster than 12 tries per second. It is still about 20 minutes.
But a longer passphrase (admittedly a UX problem) helps. Plus, there are rumors of biometrics, which may or may not be implemented in a smart way. A biometric which allows a 4-digit PIN, or, if it fails, a 16-char alphanumeric passcode, would be proper.
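A back-of-the-envelope calculation, assuming the ~12 tries/second hardware-limited rate quoted above, shows why passcode length dominates here (this gives roughly 14 minutes worst case for a 4-digit PIN, the same ballpark as the ~20 minutes quoted once per-try overhead is included):

```python
# Estimate exhaustive passcode-search time at a hardware-limited rate.
RATE = 12.0  # tries/second, the figure quoted above (an assumption)

def worst_case_seconds(keyspace: int, rate: float = RATE) -> float:
    # Worst case: every candidate must be tried once.
    return keyspace / rate

pin = worst_case_seconds(10 ** 4)     # 4-digit numeric PIN
alnum = worst_case_seconds(36 ** 6)   # 6-char lowercase alphanumeric

print(f"4-digit PIN: {pin / 60:.1f} minutes worst case")
print(f"6-char alphanumeric: {alnum / 86400:.0f} days worst case")
```

Even a modest alphanumeric passcode pushes the search from minutes into years at this rate, which is why the rate limit only really protects long passcodes.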
To be honest, I don't remember exactly. I just talked to my colleague for a couple minutes and watched him using his brute forcer and I remember his conclusion that the old iPad could be easily brute forced, but the new one couldn't. Either because the iPad would be wiped after 10 attempts or because the hardware would allow re-entering a PIN only every few seconds. I remember that it was incredibly slow. Sorry ;(
iOS 3 was "instant bypass". iOS 4/5 on everything up to the iPad 2/iPhone 4S was still device-bound, at 4-10 tries/second, but the backoff and wipe logic could be bypassed.
There are no public ways to extract from the newest devices on ios 5 or 6, except for a few corner-case bugs (which were limited, and patched).
Plenty once you jailbreak the phone, or compromise a paired device. I believe the best practice among public attacks is to do that. There are probably various exploits in iOS itself which let you root phones remotely. Beyond that, either secret attacks or hardware attacks, or figuring out how to get the phone to boot from one ramdisk while talking to the security element from before (which is trivial if you can sign stuff as Apple, and may actually be possible otherwise.)
To me, the specific and carefully explained experiment is the big contribution here.
In other comment threads I'd heard the opposite: that when new iOS devices were activated, they didn't get access to past iMessage threads, just future ones. Perhaps that's because there's a "new device" activation flow and a special "replacement device" one. But that doesn't matter. Your iMessages are only as secure as the weakest link. OP has found a pretty weak one here.
Doesn't the experiment just show that Apple is storing the iMessages in some form that they can decrypt? The article mentions iCloud backup - it's possible iCloud takes a copy of decrypted messages first. This might be feasible given it has to back up SMS messages too.
Perhaps a better experiment would be to see if messages are readable after a restore (with and without password changes) if the device isn't backed up to iCloud.
iMessage uses the Apple Push Notification Service. This is like a secured email relay server where the sender & receiver IDs are ephemeral tokens. If you trust the CA & certificate chain, then it's reasonably confidential. The question is where you keep your historical messages - on your device, or on their cloud.
The long & short of it is that Apple can't get at your iMessage contents or history if you don't use iCloud backups and don't subscribe to Apple's desktop password recovery service (duh).
(a) iMessage is end-to-end encrypted. There is metadata about who is messaging whom because it is distributed through the Apple Push Notification Service. This identifying metadata consists of ephemeral tokens generated by the sending server and the receiving device. And there is a small amount of encrypted message history kept for the PNS to resolve pending messages to all subscribing devices.
(b) Some iOS Backup files are not accessible by anything but the original device because they are encrypted by a combination of the device UID and your key. Mail for example. No one can get at them other than someone with 1. your password and 2. your device.
(c) iMessage files aren't protected like this since you CAN restore them across devices. But this is by design -- only way for iMessage history to be retained is through a backup.
(d) There are two backup approaches: iTunes (on your PC/Mac), which encrypts via a password, or iCloud.
(e) With iCloud, Apple could get at your backups because they could reset your password.
(f) Therefore, make a determination as to what is more secure: your PC/Mac's password, or your iCloud password (and Apple's willpower not to reset it at the request of the NSA). Back up your iOS (and thus iMessages) devices there. Or don't backup at all, and no one will see any history.
It's a shame iCloud backups aren't encrypted, but the only information that seems newsworthy is the fact that it takes so little to reset an AppleID password. Although I suppose if the intruder knows the AppleID and last-four of my CC, then they probably know my name and birthday as well. :/
The rest of it though... iMessage is not perfect, but it's better than most alternatives.
* Plaintext logs of iMessages (iOS or OS X) allow you to view past messages. Duh. So do plaintext logs of OTR chats via Adium or Pidgin.
* You don't generate your own key nor verify the keys of others; you rely on a third party to do it. The effect is that the system is actually used by normals unlike, sadly, GPG or OTR. Heck, even many of the technical people I know give up on GPG and OTR because they're just too much work. The point is the messages are encrypted in transit and cannot be read by just anyone ... in contrast to SMS and typical instant messaging.
Only a very few niche services encrypt user data such that the service provider cannot read it, because then users lose everything just because they forget their password.
And said services are explicit about this upfront.
In short, Apple can't read your email backups, and neither can anyone attempting to restore your device from iCloud (they would need your email passwords). iMessage doesn't have the same protection.
I agree with you, but to clear up any confusion: what I'm implying is the same hypocrisy that Snowden is trying to thwart. By the government releasing a PR statement like the one I noted, confidence in iMessage is instilled, while the OP, you, and I have all noted that there are still vulnerabilities.
The second option (MITM through access to the public key server) is not comparable to the first. Targets would have to be singled out and surveilled in advance of the messages, which actually lends itself pretty well to the sort of due process surveillance that law enforcement is generally trusted with.
It's not unthinkable that the NSA could access iCloud backups (with some sort of FISA rubber stamp). Access to everyone's backups is much more conducive to dystopian mass surveillance than the key server's tradeoff of vulnerability to MITM.
The OP's first point is a lot stronger than the second. Distributed backups are probably bad from a privacy/security perspective. That seems like the better point to make, provided we understand that iMessage is not a guarantee of complete safety from any surveillance.
How do we even know that the backups aren't encrypted? For all I know, the author had a second iDevice and it shared the key with the first one when the latter came online. That way, the key never leaves your devices, but you can still sync your messages.
Is there a way to access the messages on iCloud itself (i.e. the web interface)? That would be much stronger evidence that Apple can read it.
So, can we say Apple is lying by saying "Apple cannot decrypt that data (iMessages)"? Is there a creative way of reading their press release that doesn't entail their dishonesty?
On a second reading the author did include this footnote:
"In practice it's not clear if Apple devices encrypt to this key directly or if they engage in an OTR-like key exchange protocol. What is clear is that iMessage does not include a 'key fingerprint' or any means for users to verify key authenticity, which means fundamentally you have to trust Apple to guarantee the authenticity of your keys. Moreover iMessage allows you to send messages to offline users. It's not clear how this would work with OTR."
So I guess the need to send messages to offline users ruled out the use of OTR.