
Apple releases guidelines for law enforcement data requests - RougeFemme
http://www.cnet.com/news/apple-releases-guidelines-for-law-enforcement-data-requests/
======
rdl
Seems pretty clear how they do this -- booting the device off an Apple-signed
image which just extracts unencrypted data and exports it. (I suppose it's
possible there's some weird signed-apple-driver way to do this on a running
phone, too.) Only Apple can do it because only Apple can sign an evil
bootloader like this.

There is a lot of room to improve the iOS security model against Apple/USG
threats, but otherwise, it's still pretty good.

A non-Apple-friendly intelligence agency would still probably be better off
attacking the actual security element; DPA, physical attacks, etc. should be
able to pull the key. I'd estimate this capability would cost $10mm to develop
and maybe $10k per device to attack, which would be great if you knew your
adversaries used iPhones.

The biggest ongoing risk I see is that Apple could push an "evil" OS update to
specific users if it wanted; if it can get the users to install it, all the
hardware protections are irrelevant; you just get the user to enter the
passcode, decrypt, exfiltrate. Solving that problem is really difficult
without somehow having your own organization handle all OS updates.

~~~
matthewmacleod
_Seems pretty clear how they do this -- booting the device off an Apple-signed
image which just extracts unencrypted data and exports it. (I suppose it's
possible there's some weird signed-apple-driver way to do this on a running
phone, too.) Only Apple can do it because only Apple can sign an evil
bootloader like this._

That would seem correct. However, given that some of the data on the device is
also encrypted with a key derived from the passcode, this limits the scope of
data that can be retrieved.
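To illustrate why passcode-derived encryption limits what even a signed boot image can pull: a key derived from the passcode never exists at rest on the device, so code booted from a custom image can't reconstruct it. This is a generic sketch using PBKDF2, not Apple's actual scheme (which entangles the passcode with a per-device hardware UID inside the crypto engine):

```python
import hashlib
import os

# Generic illustration only -- iOS actually entangles the passcode with a
# per-device hardware UID inside the AES engine; plain PBKDF2 stands in here.
def derive_key(passcode: str, salt: bytes) -> bytes:
    """Stretch a passcode into a 256-bit wrapping key."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = os.urandom(16)
key = derive_key("1234", salt)

# The same passcode + salt always yields the same key; without the
# passcode, nothing stored on the device lets you recompute it, so
# files wrapped with this key stay opaque to a signed ramdisk.
assert derive_key("1234", salt) == key
assert derive_key("4321", salt) != key
```

Files in passcode-protected classes stay sealed this way; the signed-image extraction can only reach data classes that are not wrapped with a passcode-derived key.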

 _A non-Apple-friendly intelligence agency would still probably be better off
attacking the actual security element; DPA, physical attacks, etc. should be
able to pull the key. I'd estimate this capability would cost $10mm to
develop and maybe $10k per device to attack, which would be great if you knew
your adversaries used iPhones._

I expect you'd need physical access, and indeed you'd probably need to decap
the AES chip to retrieve the UID, barring any unknown vulnerabilities.
Possible, but destructive and not cheap.

 _The biggest ongoing risk I see is that Apple could push an "evil" OS update
to specific users if it wanted; if it can get the users to install it, all the
hardware protections are irrelevant; you just get the user to enter the
passcode, decrypt, exfiltrate. Solving that problem is really difficult
without somehow having your own organization handle all OS updates._

That's the problem. The secure boot chain gives Apple ultimate control over
the software on the device, and that requires trust.

~~~
rdl
On #2, if you were Ministry of State Security, you could also use 0-day code
execution to gain the same data the Apple process retrieves. Assuming the
existence of 0-days is pretty reasonable, but I'd rather build the repeatable
physical attack, as I could then pull a lot more than just the Dkey, and do it
forever.

(I'm more interested in the "build an enterprise-root-of-trust mobile
platform" side, but "build an awesome, repeatable attack against everything" is
also pretty tempting, if only to sell the former.)

------
Canada
I don't believe iMessage or FaceTime are safe. All this announcement says is
that if you weren't a target, Apple can't retroactively gain access to
previous sessions. Since the user can't see which keys are used, Apple can
simply add itself to any conversation. And since Apple can do it, the FBI
could secretly order them to do so.

I also think it's only a matter of time until companies like Apple are
compelled to use their remote update capabilities to trojan target devices.

------
Evolved
If you change the passcode setting from simple to complex and then set a
4-digit passcode, it doesn't automatically submit after you enter the 4th
digit; it asks you to press OK. This means an attacker could watch you enter
4 digits and still not know whether the passcode is 1 or 100 characters long
(drawn from the full alphanumeric set; 100 is just an absurdly high number, I
don't know what the max passcode length is) before attempting to brute force
it.

~~~
rdl
If you have an all-numeric passcode which isn't a "simple passcode", it
doesn't display the length, but _does_ show only the numeric keypad,
indicating to an attacker that it's numeric-only.

I personally would rather enter a 12-digit numeric passcode than an 8-character
alphanumeric one. (I also included an 80-symbol alphabet of upper/lowercase
letters, numbers, and symbols, but that's unrealistic given how many
keypresses the modifiers require.)

<PRE>
  10^12 = 1000000000000
  36^8  = 2821109907456
  80^6  =  262144000000
  80^7  = 20971520000000

At 10 Hz, the first two options get you 3000 to 10000 years.

A reasonable lower bound, given 10 Hz on the tries, is

  10^8 = 100000000
  36^5 =  60466176
  80^4 =  40960000

47-116 days.
</PRE>
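The arithmetic above can be checked with a few lines. A quick sketch; the 10 Hz guess rate is the estimate from upthread, not a measured figure:

```python
# Worst-case brute-force time for various passcode alphabets and lengths,
# assuming a fixed guess rate (10 guesses/sec, per the estimate above).

SECONDS_PER_YEAR = 365 * 24 * 3600
SECONDS_PER_DAY = 86400

def crack_time(alphabet_size, length, guesses_per_sec=10):
    """Seconds to exhaust the full keyspace at the given guess rate."""
    return alphabet_size ** length / guesses_per_sec

# Long passcodes: centuries to millennia.
for base, length in [(10, 12), (36, 8), (80, 6), (80, 7)]:
    years = crack_time(base, length) / SECONDS_PER_YEAR
    print(f"{base}^{length} = {base ** length:>14d}  ->  {years:,.0f} years")

# The "reasonable lower bound" cases: weeks to months.
for base, length in [(10, 8), (36, 5), (80, 4)]:
    days = crack_time(base, length) / SECONDS_PER_DAY
    print(f"{base}^{length} = {base ** length:>10d}  ->  {days:,.0f} days")
```

Running this confirms the figures: roughly 3,000 to 9,000 years for the 12-digit and 8-character options, and about 47 to 116 days for the lower-bound cases.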

------
sprite
Specifically, Apple says it can extract active user-generated data from native
apps on passcode-locked iOS devices, such as SMS, photos, videos, contacts,
audio recordings, and call history.

Are they talking remotely or with device in hand?

~~~
pilif
According to

[http://www.apple.com/legal/more-resources/law-
enforcement/](http://www.apple.com/legal/more-resources/law-enforcement/)

 _> The data extraction process can only be performed at Apple’s Cupertino, CA
headquarters for devices that are in good working order._

this can't be done remotely. Still, why go through all the trouble with their
encryption when they leave themselves a backdoor?

Once a backdoor is in there, it'll be abused, either by disgruntled employees
or by everybody once it leaks.

~~~
Osmium
> Why do they go through all the trouble with their encryption when they leave
> themselves a backdoor.

I am by no means an expert on this, so someone else please correct me if I'm
wrong, but I seem to remember reading that it's not so much a backdoor as that
there are only 10,000 possible 4-digit passcodes (and most people only use
4-digit passcodes). The iPhone hardware rate-limits attempts to prevent you
from brute-forcing it, but Apple can reflash the firmware to get around this,
allowing them to brute-force the PIN. Whether this counts as a "backdoor" or
not, I'm not sure.

If that is true, then I imagine there'd be nothing they could do if you used a
longer (random) password instead of a simple PIN.
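To make the scale concrete, here is a rough comparison (my sketch; the 10-guesses-per-second rate is just the estimate from upthread, and real hardware rate limits would be far slower if not bypassed):

```python
import string

GUESS_RATE = 10  # guesses/sec, per the estimate upthread (firmware limits bypassed)

def worst_case_seconds(keyspace):
    """Seconds to try every candidate at GUESS_RATE."""
    return keyspace / GUESS_RATE

# 4-digit PIN: only 10,000 combinations -- minutes, not years.
pin_space = 10 ** 4
print(f"4-digit PIN: {worst_case_seconds(pin_space) / 60:.0f} minutes worst case")

# 10-character random password over letters and digits (62 symbols).
alphabet = len(string.ascii_letters + string.digits)  # 62
pw_space = alphabet ** 10
years = worst_case_seconds(pw_space) / (365 * 24 * 3600)
print(f"10-char random password: {years:.1e} years worst case")
```

The gap is the whole story: once the rate limiting is reflashed away, a 4-digit PIN falls in under 20 minutes, while a modest random password is out of reach.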

~~~
nanofortnight
> Specifically, the user generated active files on an iOS device that are
> contained in Apple’s native apps and for which the data is not encrypted
> using the passcode (“user generated active files”), can be extracted and
> provided to law enforcement on external media.

The keywords are "data [which] is not encrypted using the passcode".

------
zdiddy
How I read this ambiguous PR / carefully crafted legal language:

1.) Apple can only extract your data from [remotely, using a tool that only
resides at and can only be used from] their Cupertino HQ.

2.) Devices must be in good working order.

Meaning: the device doesn't have to be present to WIN.

------
Karunamon
tl;dr:

* You'll be notified unless they are prohibited from doing so

* The passcode lock means nothing.

* Apple can tap your email and information from native applications (so call history, photos, SMS, etc), excluding iMessage and Facetime (Apple can't even access those).

* They can't access information from third party applications

* iOS 4 or later only

~~~
djrogers
"Apple can tap your email and information from native applications (so call
history, photos, SMS, etc), excluding iMessage and Facetime (Apple can't even
access those)."

Misleading - Apple can _extract_ that data from an iPhone _at its Cupertino
headquarters_. Not tap it.

The significant difference is that law enforcement would have to
obtain/confiscate the device from you; this isn't remote data extraction.

