Seems pretty clear how they do this -- booting the device off an Apple-signed image which just extracts unencrypted data and exports it. (I suppose it's possible there's some weird signed-apple-driver way to do this on a running phone, too.) Only Apple can do it because only Apple can sign an evil bootloader like this.
There is a lot of room to improve the iOS security model against Apple/USG threats, but otherwise, it's still pretty good.
A non-Apple-friendly intelligence agency would still probably be better off attacking the actual security element; DPA, physical attacks, etc. should be able to pull the key. I'd estimate this capability would cost $10mm to develop and maybe $10k per device to attack, which would be great if you knew your adversaries used iPhones.
The biggest ongoing risk I see is that Apple could push an "evil" OS update to specific users if it wanted; if it can get the users to install it, all the hardware protections are irrelevant; you just get the user to enter the passcode, decrypt, exfiltrate. Solving that problem is really difficult without somehow having your own organization handle all OS updates.
> Seems pretty clear how they do this -- booting the device off an Apple-signed image which just extracts unencrypted data and exports it. (I suppose it's possible there's some weird signed-apple-driver way to do this on a running phone, too.) Only Apple can do it because only Apple can sign an evil bootloader like this.
That would seem correct. However, since some of the data on the device is also encrypted with a key derived from the passcode, this limits the scope of data that could be retrieved.
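To make that concrete, here's a minimal sketch of what "encrypted with a key derived from the passcode" means, assuming the design Apple describes in its security guide. The function name and the use of HKDF are stand-ins for the example, not Apple's actual algorithm (the real derivation is also deliberately slow per guess, which HKDF is not):

```swift
import CryptoKit
import Foundation

// Conceptual sketch only -- not Apple's real derivation. The per-class key
// depends on BOTH the passcode and a device-unique secret (the UID) fused into
// the AES engine, which never leaves the silicon.
func passcodeClassKey(passcode: String, hardwareUID: SymmetricKey) -> SymmetricKey {
    let uidBytes = hardwareUID.withUnsafeBytes { Data($0) }
    return HKDF<SHA256>.deriveKey(
        inputKeyMaterial: SymmetricKey(data: Data(passcode.utf8)),
        salt: uidBytes,                    // the device-unique "tangling" secret
        info: Data("file-protection-class".utf8),
        outputByteCount: 32)
}

// Files in a passcode-protected class are wrapped with a key like this one, so
// they can't be recovered off-device at all, and can't be recovered on-device
// without the passcode -- hence extraction being limited to unprotected classes.
```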
> A non-Apple-friendly intelligence agency would still probably be better off attacking the actual security element; DPA, physical attacks, etc. should be able to pull the key. I'd estimate this capability would cost $10mm to develop and maybe $10k per device to attack, which would be great if you knew your adversaries used iPhones.
I expect you'd need physical access, and indeed that you'd probably need to decap the AES chip to retrieve the UID - barring any unknown vulnerabilities. Possible, but destructive and not cheap.
> The biggest ongoing risk I see is that Apple could push an "evil" OS update to specific users if it wanted; if it can get the users to install it, all the hardware protections are irrelevant; you just get the user to enter the passcode, decrypt, exfiltrate. Solving that problem is really difficult without somehow having your own organization handle all OS updates.
That's the problem. The secure boot chain gives Apple ultimate control over the software on the device, and that requires trust.
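As a toy illustration of why the boot chain concentrates that trust in Apple, here's one link of a verified boot chain in sketch form. The names and the use of P-256/ECDSA are assumptions for the example, not Apple's actual image format or key hierarchy:

```swift
import CryptoKit
import Foundation

enum BootError: Error { case untrustedImage }

// One link of a verified boot chain, heavily simplified. Each stage only hands
// control to the next image if that image verifies against a public key baked
// into the boot ROM or into the previous (already-verified) stage. That is why
// only the holder of the signing key -- Apple -- can produce an image the chain
// will accept, whether that image is benign or "evil".
func bootNextStage(image: Data, derSignature: Data,
                   trustedRootKey: P256.Signing.PublicKey) throws {
    let signature = try P256.Signing.ECDSASignature(derRepresentation: derSignature)
    guard trustedRootKey.isValidSignature(signature, for: SHA256.hash(data: image)) else {
        throw BootError.untrustedImage   // refuse to run anything Apple didn't sign
    }
    // execute(image)   // hypothetical hand-off to the now-verified image
}
```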
On #2, if you were the Ministry of State Security, you could also use 0-day code execution to obtain the same data the Apple process retrieves. Assuming the existence of 0-days is pretty reasonable, but I'd rather build the repeatable physical attack, since I could then pull a lot more than just the Dkey, and do it forever.
(I'm more interested in the "build an enterprise-root-of-trust mobile platform" side, but "build an awesome, repeatable attack against everything" is also pretty tempting, if only to sell the former.)
I don't believe iMessage or Facetime are safe. All this announcement says is that if you weren't a target, Apple can't retroactively gain access to previous sessions. Since the user can't see which keys are used, Apple can simply add itself to any conversation. And since Apple can do it, the FBI could secretly order them to do so.
I also think it's only a matter of time until companies like Apple are compelled to use their remote update capabilities to trojan target devices.
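For illustration, here's a simplified hybrid-encryption sketch of the directory-trust problem described above. This is not iMessage's actual wire protocol, and the function and parameter names are invented for the example; the point is only that the sender encrypts to whatever keys the directory returns, so a silently added key is a silently added reader:

```swift
import CryptoKit
import Foundation

// Simplified hybrid encryption, NOT iMessage's actual protocol. The sender asks
// the directory service (run by Apple) for the recipient's device public keys
// and encrypts the message to every key it gets back. The client gives the user
// no way to audit that list.
func sealMessage(_ plaintext: Data,
                 toDirectoryKeys recipientKeys: [Curve25519.KeyAgreement.PublicKey]) throws -> [Data] {
    try recipientKeys.map { deviceKey in
        // Fresh ephemeral key per recipient device: ECDH, then HKDF, then an AEAD seal.
        let ephemeral = Curve25519.KeyAgreement.PrivateKey()
        let shared = try ephemeral.sharedSecretFromKeyAgreement(with: deviceKey)
        let messageKey = shared.hkdfDerivedSymmetricKey(
            using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)
        // Ship the ephemeral public key alongside the ciphertext so the device
        // holding the matching private key can derive the same message key.
        var blob = ephemeral.publicKey.rawRepresentation
        blob.append(try ChaChaPoly.seal(plaintext, using: messageKey).combined)
        return blob
    }
}
```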
If you change the passcode from simple to complex and then set a 4-digit passcode, it doesn't automatically submit after you enter the 4th digit; it asks you to press OK. This means an attacker entering 4 digits has no way of knowing whether the passcode is 1 or 100 characters long, or whether it includes alphanumeric characters, before attempting to brute force it. (100 is just an absurdly high number; I don't know what the max passcode length is.)
If you have an all-numeric passcode which isn't a "simple passcode", it doesn't display the length, but it does show only the numeric keypad, indicating to an attacker that it's numeric-only.
I personally would rather enter a 12-digit numeric passcode than an 8-character alphanumeric one. (I also considered an 80-character passcode with full upper/lower case, numbers, and symbols, but that's unrealistic given how many keypresses are required for modifiers.)
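Some rough arithmetic behind that preference, assuming an attacker who has bypassed the retry limits and is guessing on-device at the key-derivation rate (taken here to be roughly 80 ms per guess; treat that figure as an assumption):

```swift
import Foundation

// Back-of-the-envelope comparison of the passcode options above.
let secondsPerGuess = 0.08

let options: [(name: String, alphabet: Double, length: Double)] = [
    (name: "4-digit PIN",           alphabet: 10, length: 4),
    (name: "12-digit numeric",      alphabet: 10, length: 12),
    (name: "8-char alphanumeric",   alphabet: 62, length: 8),   // a-z, A-Z, 0-9
    (name: "80-char full keyboard", alphabet: 95, length: 80),  // printable ASCII; overkill
]

for option in options {
    let keyspace = pow(option.alphabet, option.length)
    let worstCaseYears = keyspace * secondsPerGuess / (3600 * 24 * 365)
    print("\(option.name): \(keyspace) combinations, ~\(worstCaseYears) years to exhaust")
}

// At ~80 ms/guess a 4-digit PIN falls in about 13 minutes, 12 numeric digits
// (10^12) takes ~2,500 years, and 8 alphanumeric characters (62^8) is a couple
// of hundred times larger still -- so the 12-digit choice is already plenty.
```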
Specifically, Apple says it can extract active user-generated data from native apps on passcode-locked iOS devices, such as SMS, photos, videos, contacts, audio recordings, and call history.
I think you're misinterpreting the guidelines; specifically:
> Upon receipt of a valid search warrant, Apple can extract certain categories of active data from passcode locked iOS devices. Specifically, the user generated active files on an iOS device that are contained in Apple’s native apps and for which the data is not encrypted using the passcode (“user generated active files”), can be extracted and provided to law enforcement on external media.
In other words, the only data which can be extracted is data which is not encrypted on the device using the passcode. So I don't really think this qualifies as a backdoor; it's just that physical access to the device allows them to retrieve unencrypted data.
I was of the opinion that iOS uses full disk encryption, throwing away the key when the device is locked. This is further substantiated by the fact that a full reset is now instantaneous whereas it took a while in the old days.
In that case I wonder how some data can be instantaneously wiped yet not be encrypted.
Which is why I believe there to be a backdoor for the full-disk encryption on the device. That's the only way to reliably get access to the device when it's full-disk encrypted and the key is no longer in memory.
My understanding is that files under iOS are grouped into various classes with different levels of protection. Some data is under a class for which the key is discarded when the device is locked; this requires the passcode to be entered again before access can be gained.
Some data doesn't have full protection applied to it - for example, the phone must be able to display the name of a caller even when locked, so it's intuitively obvious that contact names/numbers can't have this applied.
I'm not sure exactly what the scope of each class is, but Apple do claim to be unable to retrieve things like mail, and if the implementation is as they describe then I see limited scope for backdoored encryption - obviously it's always possible, but none of the information they've released about their capabilities contradicts it.
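For reference, this is roughly what those classes look like from the developer side, using the standard Data Protection write options on iOS. The file names here are made up, and the mapping to what Apple can or can't extract is my reading of the guidelines rather than anything Apple has spelled out:

```swift
import Foundation

// A file written with .completeFileProtection has its per-file key wrapped by a
// class key derived from the passcode, so it's unreadable while the device is
// locked; .noFileProtection relies only on device keys, which is the sort of
// data the legal-process guidelines describe as extractable.
let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]

try Data("private notes".utf8).write(
    to: docs.appendingPathComponent("locked-when-locked.txt"),
    options: .completeFileProtection)

try Data("needed on the lock screen".utf8).write(
    to: docs.appendingPathComponent("always-available.txt"),
    options: .noFileProtection)

// Reading "locked-when-locked.txt" while the device is passcode-locked fails,
// because the class key needed to unwrap its per-file key has been discarded.
```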
That was my impression as well. So this means that if I sell my iPhone to someone (after resetting it, obviously), it's possible the buyer could retrieve my Contacts and SMS messages from the flash memory?
> Why do they go through all the trouble with their encryption when they leave themselves a backdoor.
I am by no means an expert on this, so someone else please correct me if I'm wrong, but I seem to remember reading that it's not so much a backdoor as the fact that there are only 10,000 possible 4-digit passcodes (and most people only use 4-digit passcodes). The iPhone hardware rate-limits attempts to prevent you from brute-forcing it, but Apple can reflash the firmware to get around this, allowing them to brute-force the PIN. Whether this counts as a "backdoor" or not, I'm not sure.
If that is true, then I imagine there'd be nothing they could do if you used a longer (random) password instead of a simple PIN.
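A toy model of the rate-limiting being described, with delay values approximating what Apple has documented (treat the exact numbers as an assumption). The point is that the limit is policy enforced around the key derivation, not math, so whoever can replace that policy can walk the 10,000-guess space directly:

```swift
import Foundation

// Toy model of the escalating lockout ("rate-limiting") around passcode guesses.
func lockoutDelay(afterFailedAttempts attempts: Int) -> TimeInterval {
    switch attempts {
    case ..<5:  return 0
    case 5:     return 60           // 1 minute
    case 6:     return 5 * 60
    case 7, 8:  return 15 * 60
    default:    return 60 * 60      // 1 hour per attempt from here on
    }
}

// With the optional "Erase Data" setting, the 10th failure wipes the device
// instead, which is the other half of why a 4-digit PIN survives at all.
```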
> Specifically, the user generated active files on an iOS device that are contained in Apple’s native apps and for which the data is not encrypted using the passcode (“user generated active files”), can be extracted and provided to law enforcement on external media.
The keywords are "data [which] is not encrypted using the passcode".
* You'll be notified unless they are prohibited from doing so
* The passcode lock means nothing.
* Apple can tap your email and information from native applications (so call history, photos, SMS, etc), excluding iMessage and Facetime (Apple can't even access those).
* They can't access information from third party applications
"Apple can tap your email and information from native applications (so call history, photos, SMS, etc), excluding iMessage and Facetime (Apple can't even access those)."
Misleading - Apple can extract that data from an iPhone at its Cupertino headquarters. Not tap.
A significant difference, in that for this LE would have to obtain/confiscate the device from you - this isn't remote data extraction.