Why can't Apple decrypt your iPhone? (cryptographyengineering.com)
222 points by silenteh on Oct 4, 2014 | 127 comments



There's another technical surveillance method here that I feel more people should be talking about: monitoring iMessage communication.

iMessage is extremely secure[1], except for the fact that Apple controls the device list for iCloud accounts. The method would simply be for Apple to silently add another device, one under law enforcement's control, to a target's account. I say "silently" in that they would need to hide it from the target's iCloud management UI to stay clandestine, but that's it, just a minor UI change. iMessage clients will then graciously encrypt and send a copy of every message to/from the target to the monitoring device.

This would still work even with impossible-to-crack encryption. It wouldn't allow access to old messages, just stuff after the monitoring was enabled. It's the modern wiretap.

It mirrors wiretapping in that sufficiently sophisticated people could discover the "bug" by looking at how many devices iMessage is sending copies to when messaging the target (just inspecting the size of outgoing data with a network monitoring tool would probably suffice), but it would go a long way and probably be effective for a high percentage of cases.
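
To make the fan-out concrete, here is a minimal sketch of the idea in Python (hypothetical helper names, not Apple's actual API): every registered device key gets its own encrypted copy, so a silently added key just means one extra ciphertext going out.

    # One ciphertext per registered device key, including any hidden one.
    def send_imessage(plaintext, recipient_device_keys, encrypt_for):
        return [encrypt_for(key, plaintext) for key in recipient_device_keys]

    # The detection heuristic above: the number (or total size) of outgoing
    # ciphertexts should match the number of devices the target is expected to own.
    def looks_tapped(ciphertexts, expected_device_count):
        return len(ciphertexts) > expected_device_count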

The main thrust of the article is that encryption is not new, just the extent of it, particularly iMessage. Here's a way around that.

[1] http://images.apple.com/iphone/business/docs/iOS_Security_Fe...


> by looking at how many devices iMessage is sending copies to when messaging the target

Is iMessage centralised? I'm pretty certain it is, and if that is the case then you couldn't find out if you were tapped or not; one message gets sent to the server (perhaps with a list of the devices you want to send it to) and the server under Apple/LEO control sends a copy to their "device".


Each message is encrypted individually for each device that will be receiving the message. As a result, unless Apple slip a public key they have control over into the keys reported for the receiver, they cannot read your messages. (This is why abalone mentions that Apple do not have access to your old messages.)

http://blog.quarkslab.com/imessage-privacy.html goes into detail as to how the key exchange process works.


Whenever you add a device to your pool of iMessage devices, all of the other devices get a popup message telling them. This is the step where each of the other devices gets the decryption key for that device.

My hunch is that Apple doesn't have a way to prevent the popup messages so if they were forced to add a law enforcement iPhone to an iMessage pool then the target would notice an extra device was added.


Maybe I don't understand the problem, but it seems to me that it would be very simple for Apple to prevent the popup messages at their discretion, since they write and control the software that causes the popup to happen in the first place.


They would have to update the OS on the device first. And that's definitely not something that can be done silently.

I'm assuming here that the OS already doesn't have the ability to suppress the popup, and I think that's a safe assumption because Apple doesn't want to have this ability.


>They would have to update the OS on the device first. And that's definitely not something that can be done silently.

how do we know that the iMessage protocol doesn't already have support for a "silent" flag, which, if set, will not cause the message to appear?


I already answered that in the second sentence of my comment.


> And that's definitely not something that can be done silently.

Can you link me to a technical analysis about this?

It would be one of the few phones where the baseband/SIM couldn't make changes to the system.


iMessage has nothing at all to do with the baseband or SIM. And carriers definitely can't push updates to the OS. Only Apple has the technical capability to produce updates to the OS, if for no other reason than the fact that all OS code must be codesigned with Apple's certificate, and only Apple has the keys. But since the baseband can't update the OS anyway, that's a moot point.

I don't have a link for a technical analysis of this, but I shouldn't need one. It should be self-evident that updating the OS is a process that is very obvious to the user. It's also something that Apple has never done without explicit action by the user to perform such an update. It would in fact be incredibly dangerous to update the OS without explicit action, if for no other reason than the fact that this would not give the user the chance to back up their phone in case something goes wrong.


"It's obvious that neither of the other two processors in your phone with DMA could update your OS" is a statement that needs more justification than "it's obvious". Quite a number of backdoors from the baseband to the actual processor of the phone have been discovered over the last few years, including commands that seem to indicate that it could possibly write arbitrary memory under the right conditions.

Similarly, the conjectured scenario is that the government is trying to get in to your phone secretly: it's hardly unreasonable to think they could compel both Apple and your carrier to assist them, in the form of Apple generating a custom wiretap update and your carrier silently pushing it.

> It would in fact be incredibly dangerous to update the OS without explicit action, if for no other reason than the fact that this would not give the user the chance to back up their phone in case something goes wrong.

Again, we're talking about responding to a sealed court order to aid in tapping a telephone, and not about what makes good business sense under normal conditions.


> Quite a number of backdoors from the baseband to the actual processor of the phone have been discovered over the last few years

Every single one is a security vulnerability, not an intentional blessed mechanism by which the OS can be updated. And every single one is patched as soon as Apple learns of it.

> it's hardly unreasonable to think they could compel both Apple and your carrier to assist them, in the form of Apple generating a custom wiretap update and your carrier silently pushing it.

Except a) any known mechanisms by which this could be done would have already been patched, and b) I find it highly implausible that the government could compel Apple into deliberately breaking the fundamental security architecture of their product. If Apple already had the keys to decrypt the message, they could be compelled to hand them over, but that's very different than compelling them to actually modify their end-user software.

If the government did have the power to compel Apple to do something to aid wiretapping, it would be to compel them to add the ability to suppress the popup into a future OS update. It would certainly not be to compel them into creating and installing an OS update on the fly to a specific phone. Security implications aside, installing custom OS updates to specific phones would also have tremendous consequences on a lot of other stuff, including future OS updates (installing an OS update certainly can't brick the phone even if it's running an unknown custom OS), customer support (what if the target brings their phone in to an apple store?), even their internal build process.

That said, I don't believe the government can compel Apple to deliberately violate the advertised security guarantees of their product. Especially when such tampering is potentially visible to the target (which this would be; people have reverse-engineered the iMessage protocol, which means it's possible to intercept and analyze the traffic, which means it would be possible to write a tool to dump out the keys that your message is going to, which can be used to detect when new keys are added even if the OS doesn't alert you).

> Again, we're talking about responding to a sealed court order to aid in tapping a telephone

No we're not. iMessage isn't a telephone. The fact that the device you're using iMessage on almost certainly also has phone capabilities is irrelevant (and of course you can use iMessage without a telephone, by using a Mac or an iPod Touch).


> I find it highly implausible that the government could compel Apple into deliberately breaking the fundamental security architecture of their product.

Well I don't know what news you've been reading the past year, but personally I've been given the impression that your NSA will try and compel whoever to do whatever they please, be it through court order, economic/political pressure and/or psy-ops.

> If Apple already had the keys to decrypt the message, they could be compelled to hand them over, but that's very different than compelling them to actually modify their end-user software.

It's also very different from the NSA actively hacking into and breaking the security infrastructure of their ALLIES. Which is what they've done repeatedly.

It's also been shown that the NSA doesn't (can't or won't) really make a very fine distinction between who/what exactly are enemy, allied or US-targets. In particular if they really really want certain information that can be considered of high tactical value in their pursuit of foreign targets. Such as, say, private encryption keys for OS updates or whatnot.

While this is not proof that this happened or is happening, I'm arguing that there is very little stopping the NSA if they wanted to.


It's one thing to say "you have a communications product, give us access to the server". It's a rather different thing to say "you have a communications product where your server can't decrypt the messages; go radically break your security architecture in a software update".


Couldn't the change simply be included in one of the regular .x updates? They don't need to mention it in the changelog. It's not something significant enough that people would really notice.

I don't think Apple would want that capability either, but just for argument's sake.


They could, but that's not something that law enforcement can just ask them to do on the fly.


> that's definitely not something that can be done silently.

I hate to sound so paranoid, but what reason do you have to believe this?


That the OS can't be silently updated? Many years of experience working with Apple platforms, and iOS in particular.


That is probably a fair hunch. However I was thinking.. there may be a way to do this with stock iOS without tipping off the target device in any way.

Whenever an iMessage client wants to message the target, it gets a list of target device public encryption keys from Apple. It then sends a separately-encrypted copy of each message to each of those recipients.

What Apple could do is to "bug" the lists that are sent to the devices wanting to send something to the target, without modifying the list that the target device sees. So all incoming messages to the target would get cc'd to law enforcement. This could probably be done completely server-side.

As for outgoing messages, Apple could do the same thing in reverse: whenever the target device asks for the list of recipient devices, just add the monitoring device to it. Again, all by simply modifying the device directory server, no client changes.
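
A sketch of what that directory-side change could look like (hypothetical names; Apple's real directory service obviously isn't public): the server appends the monitoring key to any key list that involves the target, except when the target is looking at its own device list.

    TARGET = "target@example.com"                 # illustration only
    MONITOR_KEY = "law-enforcement-device-key"    # illustration only

    def device_keys(directory, account, requested_by):
        keys = list(directory[account])
        involves_target = account == TARGET or requested_by == TARGET
        is_targets_own_list = account == TARGET and requested_by == TARGET
        if involves_target and not is_targets_own_list:
            keys.append(MONITOR_KEY)   # cc the monitoring device
        return keys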

The bigger point here is that Apple claims their hands are tied due to the design of the encryption. But as long as the directory service is still under their central control, there is still a technical means for complying with law enforcement requests to monitor iMessage communication.

Perhaps law enforcement just needs to get more specific with their demand. Don't demand "decryption" anymore.. demand a wiretap.


That would only catch messages sent to the target, not messages that the target sent itself. For example, if I send an iMessage to a friend on my Mac, I won't see my sent message on my iPhone later.


iMessage syncs the sent messages across the devices as well. There might be some conditions on the sync (e.g. messages sent before the device was added to the account, or messages older than x months during which the device was offline, etc.) but I'm looking at my Mac app and I definitely see my iPhone messages.


Also mentioned in the security document is that if you do an iCloud backup, the iMessage security is sort of negated... the archive is "clear text" in that it's encrypted with Apple's keys and such instead of the device key. Hence why you can restore it to other devices.


> The Secure Enclave is designed to prevent exfiltration of the UID key. On earlier Apple devices this key lived in the application processor itself, and could (allegedly) be extracted if the device was jailbroken and kernel patched.

Speaking as a jailbreaker, this is actually incorrect. At least as of previous revisions, the UID key lives in hardware - you can ask the hardware AES engine to encrypt or decrypt using the key, but not what it is. Thus far, neither any device's UID key nor (what would be useful to jailbreakers) the shared GID key has been publicly extracted; what gets extracted are secondary keys derived from the UID and GID keys, but as the whitepaper says, the passcode lock key derivation is designed so that you actually have to run a decryption with the UID to try a given passcode. Although I haven't looked into the newer devices, most likely this remains true, since there would be no reason to decrease security by handing the keys to software (even running on a supposedly secure coprocessor).
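
A conceptual sketch of why that matters for brute force (not Apple's actual KDF, just the shape of it): every passcode guess has to round-trip through the hardware AES engine keyed with the UID, so the work can't be shipped off to GPUs or clusters that don't contain the chip.

    # hardware_aes_uid is the only thing that knows the UID; software just
    # calls into it and sees the output.
    def derive_unlock_key(passcode, salt, hardware_aes_uid, rounds=10000):
        key = salt + passcode.encode()
        for _ in range(rounds):
            # iteration count is calibrated so this whole loop takes ~80 ms on-device
            key = hardware_aes_uid(key)
        return key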


Is jailbreaking becoming more difficult and would that be a sign of iOS/iPhone becoming more secure?


It's a step towards becoming more secure. Having control of your operating system (that is, preventing other programs from taking control prior to the operating system starting up) is clearly desirable if you want to prevent applications like keyloggers, data sniffers, etc...


Yes, some other people who are familiar with the design have also corrected me. I've updated the post.


Technically, this sounds about right (am mostly a n00b though) but the comments on this thread seem to me terrifyingly naive for a post-Snowden world. Apple has semi-convincingly denied the presence of a few, very specific attack vectors, and the article is speculating about the details of those denials, which is all well and good.

But it is an absolute certainty that communications technologies built and operated by major American industry are wholly compromised. To believe otherwise is to grossly misunderestimate the nature of State intelligence actors. The historical record is clear that big telecom + hardware providers have always been in bed with State power, both in America and elsewhere, and the Snowden docs pretty clearly show that's still true today.

Maybe Apple's announcement means that the county sheriff can't read your teenage son's weed-dealing text messages. But if bin Laden had an iPhone, the men in the windowless buildings would beyond a shadow of a doubt be reading his communications, probably via seven or eight independent attack vectors (not counting the compromised publicly switched telephone network, over-the-air signals, etc.)

If you have secrets, keep them off of communication technologies run by large companies. Especially when those technologies are 100% closed source and the companies in question have openly admitted including backdoors in previous versions of the tech you're currently using.


Yes, the NSA has boasted of having a surveillance "partnership" with U.S. companies, but those would be telecommunications carriers -- AT&T, Verizon, etc., not Silicon Valley firms: http://www.cnet.com/news/surveillance-partnership-between-ns...

Also look at the sworn affidavit that EFF obtained from local SF bay area whistleblower Mark Klein -- an AT&T technician who revealed the existence of the NSA's fiber taps at the 2nd & Folsom Street SF facility.

There is no such entity as "major American industry." There are different companies with different incentives and different willingnesses to protect their users. Some companies do the right thing; others don't.


Now if only it was possible to turn off remote installation of applications on both iOS and Android devices, this kind of security would actually mean something.

Right now, you can do full disk encryption on an Android device (which seems likely to become hardware-assisted on future devices similar to the solution mentioned in the article). If you pick a sufficiently strong passphrase, that should keep your data secure even on devices without hardware assistance. However, if the device is turned on and locked (the common case), it's trivial to remote-install an arbitrary app, including one that unlocks the phone. (You can do this yourself with your Google account, which means anyone with access to that account can do so as well.)

It would help to be able to disable remote installation of any kind; that wouldn't have to come at the expense of convenience, because the phone could just prompt (behind the lockscreen) to install a requested app.


On Android devices at least, you can use the always-on VPN functionality with a VPN server and HTTP/S proxy to achieve this; it's actually not as hard as you might think.

For home users, Sophos has a Home Edition of their UTM that you can install on an old PC. The requirements are a bit high, there's an IP limit (that you could always overcome with NAT), and it doesn't allow dual-homed ISPs, but the UI is better than anything else I've tried (not saying there aren't plenty of warts). Once installed, you can set up a VPN and HTTPS proxy within literally 2 minutes.

Disclaimer, I worked there for a short time.


I've been thinking about the possibility of commercial software updates being used as an attack vector to overcome WDE. Could you imagine if the NSA went to Apple or Microsoft and said "push this compromising update to computers from this IP address/MAC address/serial number"?


That's quite possibly no longer a theoretical scenario at this point. It would really surprise me if you were the first person to think of that trick (it's pretty obvious) and that + gag orders would do nicely. Parallel construction to plug any holes in case someone wises up that this is already done in practice.


Parallel construction has to be one of the most unconstitutional things I have ever heard of. It's fraud and perjury.


But but but... they're criminals! The ends justify the means, right?

/s


As far as I know, it can install an app but not run it (EDIT: on Android, that is). So it shouldn't be able to do any such decryption.


The problem is that the attack surface for an app on a device vs. in the wild is much, much larger and much, much less secure.

Once it's on your phone, it can just screenshot things if nothing else.


I wrote myself a database that I use for passwords and the like. I get around this by masking text until I tap a button, and then waiting a random number of ticks before decrypting the value and progressively unmasking the text. It takes between ½ and two seconds. The problem I've yet to solve is hiding the text: leaving it on-screen for, say, 10 seconds before re-masking and encrypting is fine for now, but sometimes that's not long enough.

The other thing it does is with a tap and waiting for the same random number of ticks is to copy the value to the clipboard so that I can paste it into a browser.

Not perfect, but it's a version 1 at least.


You should try 1Password: https://agilebits.com/onepassword

It's the most recommended password manager on Hacker News.


Thanks, I'm aware of it but it doesn't work for me.

Beyond what I described above, I need something that works on Windows and Windows Phone, and I need it to securely sync between the two.


This implies that there is a version of 1Password for Windows Phone?

https://discussions.agilebits.com/discussion/12133/1password...



Can you use KeePass and just sync the database via dropbox?


Android requires root privilege and/or a system-level app to take screenshots. iOS is probably similar. I don't think an app could spy like this without the user enabling it (which is a valid but separate concern about how knowledgeable the average rooter/jailbreaker is).

[0] http://android.stackexchange.com/questions/10930/why-do-we-n...


From http://stackoverflow.com/questions/12462944/how-to-take-a-sc...:

======

Using hardware partners

Now we get into the solutions which require commercial action.

Talking to the Android chipset makers often presents a solution. Since they design the hardware, they have access to the framebuffer - and they often are able to provide libraries which entirely avoid the Android permissions model by simply accessing their custom kernel drivers directly.

If you're aiming at a specific phone model, this is often a good way forward. Of course, the odds are you'll need to cooperate with the phone maker as well as the silicon manufacturer.

Sometimes this can provide outstanding results. For example I have heard it's possible on some hardware to pipe the phone hardware framebuffer directly into the phone hardware H.264 video encoder, and retrieve a pre-encoded video stream of whatever is on the phone screen. Outstanding. (Unfortunately, I only know this is possible on TI OMAP chips, which are gradually withdrawing from the phone market).

======

Probably not something an average crapware author can do, but certainly within reach of the NSA.


On iOS it's possible to capture the framebuffers of other apps via the IOSurface APIs with an app that's running in the background. These APIs are not documented well, but it's certainly possible.


Maybe I'm not fully understanding here but the last two phones I've used do not require root or any app to take screenshots. Nexus 5 and Moto X.


By root, I meant that an app with root access could probably take screenshots, not that a root app is needed for the user to access the system's screenshot capability.


> Once it's on your phone, it can just screenshot things if nothing else.

If it can't run, how can it screenshot things?


It could be an update of an app that runs by default, or an update of a core component of the OS.


No apps "run by default", and updating the OS's core components requires user confirmation, as that's just a normal iOS update.


What about when you get a phone call -- the app that pops up the call UI could be updated, pushed, then triggered by calling the phone.


The obvious way to mitigate this threat vector is to not have a Google Account associated with your device that can install apps. At this point, a Google Account is just a huge privacy/security risk.


If someone obtains your phone, and prevents you from initiating a remote wipe (perhaps they have you in custody, or perhaps they have isolated the phone so that it cannot receive the wipe command), it sounds like this technology will do a good job of preventing them from decrypting your data from the phone if you have a decent passcode. They cannot throw GPUs or FPGAs or clusters or other custom hardware at the problem of brute forcing your passcode because each attempt requires computation done by the Secure Enclave using data only available in the Secure Enclave. That limits them to trying to brute force with no parallelization and 80ms per try [1].

However, assuming they have an appropriate warrant, can't they get your iCloud backups and try to brute force those? Maybe I'm being an idiot and overlooking something obvious, but it seems to me the encryption on the backups CANNOT depend on anything in the Secure Enclave.

That's because one of the use cases that iCloud backup has to support is the "my phone got destroyed, and now I want to restore my data to my new phone" case. To support this, it seems to me that the backup encryption can only depend on my iCloud account name and password. They can throw GPUs and FPGAs and all the rest at brute forcing that.

My conclusion then is that when I get a new iPhone, I should view this as a means of protecting my data on the phone only. It lets me be very secure against an attacker who obtains my phone, but not my backups, provided I have a good passcode, where "good" can be achieved without having to be so long as to be annoying to memorize or type. A passcode equivalent to 32 random bits would take on average over 5 years to brute force.

To protect against someone who can obtain my backups, I need a good password on iCloud, where "good" means something considerably more than equivalent to 32 bits.

[1] I wonder if they could overclock the phone to make this go faster?
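
For what it's worth, the 32-bit figure checks out at 80 ms per attempt with no parallelism (a rough average-case estimate, assuming the passcode is found halfway through the space):

    attempts = 2 ** 32 / 2                  # average-case number of guesses
    seconds = attempts * 0.080              # 80 ms per guess, strictly serial
    print(seconds / (365 * 24 * 3600))      # ~5.4 years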


Yes, currently iCloud backups are not encrypted so they can be extracted by law enforcement, but on the other hand they are not mandatory, as Apple also offers a full local backup solution through iTunes (albeit, admittedly, they could make it work automatically like Time Machine, instead of manually; I guess they'll get there, now that they're using privacy in marketing).

On the other hand, it is perfectly possible to devise a system to locally encrypt iCloud backups and still be able to restore them. Look at how iCloud Keychain works in the Apple documents: that data (= all your passwords and secrets) is synced through the cloud between your devices, but Apple can't access it. For iCloud Keychain, in case you lose access to all your devices, you need a master recovery key that's generated when you first activate it; if you don't have it, you lose the data.


> iCloud backups are not encrypted

This is wrong. iCloud uses AES 128 and 256 encryption:

http://support.apple.com/kb/HT4865?viewlocale=en_US&locale=e...


This doesn't really say much unless we also know how the AES key itself is derived.

If you don't have two-step authentication on, then Apple's password reset is based on asking some security questions and sending an e-mail challenge. Even if they actually derived the key from your security question answers (unlikely), that's the kind of thing law enforcement would have no trouble cracking. More likely, the whole authentication system simply returns true or false, and the key is stored in some separate place -- perhaps more secure than a random hard drive in the datacenter, but still somewhere where Apple can get it if forced.

If you do have two-step authentication enabled, Apple's docs imply that it is impossible to recover your account without at least two of:

* Your password.

* Your "recovery key" -- a secret that they give you and instruct you to print out and keep somewhere.

* Your phone (or some other device that can generate one-time codes).

They say that if you can't produce two of these then all your data is lost and you'll need to create a new Apple ID. This implies that they might indeed store your data encrypted by an AES key which is in turn stored encrypted in two different ways: once with your password, and once with your recovery key. Thus it would actually be impossible to recover your data without one of these, and Apple doesn't store either one on their servers, therefore Apple would not be able to produce your data for law enforcement.
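
One way such a scheme could look, sketched in Python (an assumption for illustration, not Apple's documented design): a single random data key encrypts the backup, and that key is stored only in wrapped form, once under a password-derived key and once under the recovery key.

    import os, base64, hashlib
    from cryptography.fernet import Fernet   # pip install cryptography

    def wrapping_key(secret, salt):
        raw = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
        return Fernet(base64.urlsafe_b64encode(raw))

    salt = os.urandom(16)
    data_key = Fernet.generate_key()   # the key that actually encrypts the backup
    stored = {
        "wrapped_by_password": wrapping_key("hunter2", salt).encrypt(data_key),
        "wrapped_by_recovery": wrapping_key("RECOVERY-KEY", salt).encrypt(data_key),
    }
    # The server keeps only the wrapped copies; lose both secrets, lose the data.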

That said, I would be very surprised (and impressed) if Apple actually does this. Consider that this would prevent their services from doing any offline processing of your data at all -- a compromise few product designers would be willing to make for a security guarantee that almost no user actually understands. More likely, the language in the documentation is a matter of policy -- Apple refuses to recover your account simply because that's the only way to guarantee that social engineering is impossible, not because they are technically incapable.

And anyway, even if your data is stored encrypted at rest with a key that is actually derived from your password, there's nothing stopping law enforcement from demanding that Apple intercept your password (or the key itself) next time you log in. That's basically what they did with Lavabit, after all.

(Things I think about while working on sandstorm.io...)


But Apple says "iCloud secures the content [backups] by encrypting it when sent over the Internet, storing it in an encrypted format, and using secure tokens for authentication"


That would be better and it would prevent certain types of attacks. But at the end of the day you cannot verify what software is running on the phone so circumventing encryption for targeted individuals remains trivial.


Agreed, but that's true of any system where you regularly install updates without checking and compiling them one by one, and that covers most computers nowadays.

Let's say I have infinite resources and I want to target your Debian server; it's sufficient to bribe one Debian maintainer of a default package and you're basically doomed. Until they get to the point of reproducible builds and embed something in apt to make sure the build is correct, you still need to trust the whole Debian community.

Any time you run an operating system released by a vendor, you're basically trusting the vendor. It doesn't strictly have to be like that for FLOSS systems, but it is like that right now.

So your comment is indeed correct, but doesn't specifically highlight a defect in iOS.


   1. [...]
   2. [...]
   3. [...]
   4. [...]
   5. The manufacturer of the A7 chip stores every UID for
      every chip.
I'm a total layman, but the UID has to be created at some point and so it can be known by someone. Wouldn't the easiest way be to just record it for every chip? Apple wouldn't even have to know about it.


This is the fundamental problem: unless you are rolling your own silicon, at some point you have to take some big corporation's word for it that a chip does what they say it does. This fundamental problem is the reason that nuclear launch codes are protected by a relatively low-tech solution:

http://en.wikipedia.org/wiki/Gold_Codes


But if this is the case, why bother with bullet points 1 to 4? The chip is probably manufactured in China, so why spend a thought on whether US law enforcement can somehow, via Apple, decrypt the data on my phone when the Chinese government can do it anyway?


I fear the American government's totalitarian/police state leanings far more than I fear the PRoC.

Though the Chinese government is undoubtedly an enthusiastic squasher of political dissent, and their secret police are surely quite brutal, and I hate commies with a passion, I am hardly ever likely to have a conflict with the Chinese state.

An Unconstitutional American surveillance state is a far more immediate problem. The NSA can break down my door tomorrow after reading my politically unpalatable text messages, and there's nothing I can do to stop them. So if I had to pick someone to have the keys to my phone's backdoor, I'd pick a "hostile" foreign power any day. Though of course it would be better to have no backdoors at all :)


I agree with you, but I would add that: if any power has the key to your phone's backdoor, there is a chance of the key getting into the NSA's hands.


> The chip is probably manufactured in China

Available info indicates that the A8 is fabbed on a 20nm process by both Samsung and TSMC [1]. For Samsung, that would indicate production in either the US or South Korea [2]; for TSMC, that would indicate production is in Taiwan [3].

[1] http://recode.net/2014/09/23/teardown-shows-apples-iphone-6-...

[2] http://www.samsung.com/global/business/semiconductor/foundry...

[3] http://www.kitguru.net/components/graphic-cards/anton-shilov...


Points 1-4 from the original article are based on the premise that the iPhone is in fact as secure as Apple claims it is, and try to reverse engineer how that could be done. Your point #5 is a possible way that the phone could in fact be insecure despite 1-4. My point is that unless you have your own silicon foundry you have no choice but to trust someone, or resign yourself to the possibility that your iPhone may not be secure despite what Apple says.


It could be generated by the chip itself with built-in hardware RNG. The outside world never needs to know what it is.


> (Apple pegs such cracking attempts at 5 1/2 years for a random 6-character password consisting of lowercase letters and numbers. PINs will obviously take much less time, sometimes as little as half an hour. Choose a good passphrase!)

Do not use simple pin passwords on your phone. In particular, if you use fingerprint access, there is no reason not to have a long, complex password.


Your advice is good; I'm not trying to dismiss or counter it.

> In particular, if you use fingerprint access, there is no reason not to have a long, complex password.

Very, very often my fingerprint isn't recognised properly and I have to type in my password. It has been getting better as of the last few updates, but I still need to input my password multiple times per day.


Try training it for a few minutes for each finger (go to the Touch ID setting, tap with your finger, see the background flash, repeat ad libitum, slowly rotating your finger in any direction; be patient; eventually it should recognize each finger even if rotated almost 90 degrees). It should get much better afterwards.


Or use multiple slots for the one or two fingers that you actually use to unlock. One doesn't need to have every finger's fingerprint stored.


There is an argument against using the fingerprint access and that is that a user gives up the right of consent while in custody. If law enforcement gets a judicial order to forcibly press the prisoner's finger to the sensor to unlock the device, then he or she has little recourse as the right to remain silent is not implicated. One cannot be similarly physically compelled to disclose a code only held in his or her memory.


> One cannot be similarly physically compelled to disclose a code only held in his or her memory.

You can in the UK.

https://en.wikipedia.org/wiki/Regulation_of_Investigatory_Po...


It's amazing how hard it can be to remember a password that one has not typed in a long time, especially after being subjected to solitary confinement.

"Grassian and Haney show that a cluster of different symptoms, which they refer to as SHU syndrome, occurred in something like 90 percent of the prisoners they studied. Included symptoms can be affective, like paranoia and depression; cognitive, like confusion, memory loss, perceptual distortions, hallucinations; or even physical, like headaches and insomnia. So there is documented, psychiatric evidence that even a comparatively short term in solitary confinement can have negative consequences" (source http://www.vice.com/en_au/read/solitary-confinement-is-a-leg...)


> One cannot be similarly physically compelled to disclose a code only held in his or her memory.

Are you sure? In the UK and I'm pretty sure in my native Australia, they definitely can, under pain of "contempt of court".


That gets into key disclosure law and it does vary by jurisdiction. Though I am not a lawyer, it is my understanding that this is an area of dispute in the United States with respect to the 5th amendment, which forbids the government the power to compel one to ever testify against him or herself.

Even under a mandatory key disclosure regime, it's still a choice to remain silent even if that means one remains jailed. That situation sure seems like a form of torture to extract information from the incarcerated individual.

This 2012 Forbes article is a good read on the matter: http://www.forbes.com/sites/jonmatonis/2012/09/12/key-disclo...


In the US it depends. If they know you had child porn on your phone they can force it. If they just suspect you have child porn, they can't.


If you have the ability to lawyer it enough, I think you'd have a few layers of court required to sort out whether this is allowed (assuming they don't have proof you have something on the phone). You are basically forcing self-incrimination, a violation of the Fifth Amendment.

I totally agree that, if you are really concerned about this, fingerprint is a bad idea, but the legal ramifications are interesting.

Not quite relatedly, does anybody know if there is a heat sensor? Or can I just cut somebody's finger off to use it? (We are obviously well outside of judicial channels here! :)


The Fifth Amendment protects people from being forced to witness against themselves, i.e. give testimony.

Evidence is not covered by the Fifth Amendment. If you have papers that would incriminate you, and a warrant is issued for those papers, you cannot refuse to surrender them. If you destroy them having received the warrant, you'll be prosecuted for obstructing justice.

Orin Kerr's interpretation[1] is that a phone is evidence, and contains evidence in the form of digital data. The only need for testimony is to establish that the phone is actually yours. You can't be forced to admit "yes that is my phone." But if the fact that it is your phone can be established in other ways (say, with the testimony of your wireless provider), then you can be forced to unlock it.

You can't be beaten or tortured until you type it in, of course, at least within the U.S. But you can be jailed for contempt of court for your refusal. And the limits of contempt imprisonment seem to be pretty murky.

[1] http://www.washingtonpost.com/news/volokh-conspiracy/wp/2014...

Edit: to clarify the source of my argument


> Or can I just cut somebody's finger off to use it?

You don't need a finger to circumvent touchid. You can just obtain your victim's fingerprint from anywhere (e.g. a glass that they used) and create a fake "finger" using household items:

http://www.ccc.de/en/updates/2013/ccc-breaks-apple-touchid


As long as you can lawyer up for 48 hours, the iPhone will time out and require a passcode, and ignore your fingerprint.


The obligatory xkcd reference had to be posted here. :)

http://xkcd.com/538/

"Drug him and hit him with this $5 wrench until he tells us the password."


Fingerprints should not be treated as a substitute for a password. A password is "what you know", and a fingerprint is "what you are", a biometric like retina scans or blood biometrics. A proper system has to combine at least two distinct factors. By allowing the fingerprint scanners to become ubiquitous, they (Apple and Samsung) are training their users poorly.

Now one could make the argument that the fingerprint is more secure than a poor password, for most use cases. I'd tend to agree with that, with the caveat that it is no longer your decision (read: subject to your interrogation resistance) to open your phone if you are captured.


I read through the article - including the hand-wavey "Apple has never been cracking your data" conclusion - but I don't understand what has changed since previous versions of iOS other than more data being encrypted.

Apple claims they can't decrypt data, but the article suggests that they can simply run the decryption on the local phone with custom firmware. Most people choose a 4-digit PIN, and at 80 milliseconds per guess, that means Apple should be able to crack your phone in about 13 minutes.

If you use a longer passcode, your data is more secure - but I thought that was always the story with Apple.

So what, if anything, has changed (other than more data being encrypted?)


The passcode and the PIN are not the same thing; most people don't have a passcode.


Re: "The passcode and the PIN are not the same thing; most people don't have a passcode."

On iOS I believe this is incorrect; the passcode is just another name for a PIN on the iPhone.

http://support.apple.com/kb/ht4113

You can have complex or simple passcodes, but everybody I've ever seen who bothers to have a passcode (myself excepted) sets it to a 4-digit code (aka a PIN). To make it worse, it's usually their ATM PIN.


Invasive attacks for extracting the UID depend on exactly how it's 'implemented in hardware'.

It could be a total lie, and hardwired or masked-rom per-revision (but I doubt that, too easy to discover)

It could be in a one-time programmable block somewhere that gets provisioned during manufacture - a flash block/eeprom with write-only access (externally at least), or a series of blowable fuses, or even laser/EB-trimmed traces.

All of those one-time-programmable methods are susceptible to the person operating the programmer recording the codes generated, although managing and exfiltrating that much data would make it rather tricky.

The method of storage also influences how hard it is to extract through decapping and probing/inspection.

If I had to design something like this (note: not a crypto engineer), I'd have some enclave-internal source of entropy run through some whitening filters, and read from until it meets whatever statistical randomness/entropy checks, at which point it is used to seed & store the UID into internal EEPROM. That way, there's no path in or out for the key material, except when already applied via one of the supported primitives.
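
Roughly what I mean, as pseudocode rather than silicon (my own sketch, not a description of any real enclave):

    import os, hashlib

    def passes_entropy_checks(candidate):
        # Placeholder bias test; real hardware would run proper randomness tests.
        ones = sum(bin(b).count("1") for b in candidate)
        return abs(ones - len(candidate) * 4) < len(candidate)

    def provision_uid(read_raw_entropy=os.urandom, nbytes=32):
        while True:
            whitened = hashlib.sha256(read_raw_entropy(64)).digest()   # "whitening"
            if passes_entropy_checks(whitened[:nbytes]):
                # stored into internal EEPROM, write-only from then on
                return whitened[:nbytes]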

Then you need to protect your secrets! Couple of layers of active masks (can they do active resistance/path-length measurements instead of just continuity yet? That would annoy the 'bridge & mill' types :)). Encrypted buses, memory, and round-the-houses routing are also pretty much par for the course, but I'm sure they too could be improved on.

IIRC there was someone on HN who was working for a TV smartcard mfgr who was reasonably confident they'd never been breached. Curious what he'd have to say (without an NDA :) )


Edit: I wonder if anyone is mask/shielding against back-side attacks yet? I did like the buried light-sensors dotted around some hi-sec core I was watching, although they didn't end up being particularly useful.

My understanding here is that this Enclave is just a specific part of the overall die, so they're somewhat constricted in the crazy-fabtech methods they might otherwise be able to consider.


> Apple doesn't use scrypt. Their approach is to add a 256-bit device-unique secret key called a UID to the mix, and to store that key in hardware where it's hard to extract from the phone. Apple claims that it does not record these keys nor can it access them.

Technically, this is where it breaks down. As in "Trust me I don't store the keys."

If that hypothesis is true (they don't store these keys), then they'll have a hard time breaking your encryption indeed. But you must trust Apple at that point.

If there was a way to buy an anonymously replaceable chip with this cryptographic key in it and replace it on the phone like a SIM, then we'd be much closer to stating "Apple can't decrypt your phone".


Right. We'd instead be having a discussion about how Atmel can decrypt your phone.


It was just an example of how you could detach the secret from the device made by Apple. I'm sure there are better ideas for that. The way this is configured now, Apple can decrypt any phone, which voids the argument made in the OP. But I can understand why this will be the unpopular opinion here.


> Secure Enclave allows firmware updates -- but before doing so, the Secure Enclave will first destroy intermediate keys. Firmware updates are still possible, but if/when a firmware update is requested, you lose access to all data currently on the device.

Given that the end-user has entered the passcode, it shouldn't be hard to retain the data: after upgrading the Secure Enclave firmware, simply decrypt all data using the old key and re-encrypt it using the new key (derived from the same passphrase but a new UID).

You can also use a "two stage" approach where the encryption key derived in hardware is only used to protect a secondary key. In this case you just reencrypt this secondary key which in turn protects the data.
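
In code terms, the "two stage" variant is just rewrapping one small key while the bulk data stays put (a conceptual sketch with hypothetical wrap/unwrap helpers):

    def rewrap_secondary_key(wrapped_key, unwrap_with_old_hw_key, wrap_with_new_hw_key):
        secondary_key = unwrap_with_old_hw_key(wrapped_key)   # needs the user's passcode
        return wrap_with_new_hw_key(secondary_key)            # data blobs are untouched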


Is Apple's "Secure Enclave" anything more than ARM's TrustZone?

http://www.arm.com/products/processors/technologies/trustzon...


Contrary to speculation (there are whole articles which "explain" the Secure Enclave to be ARM TrustZone), the Secure Enclave is documented (only very recently) to be a _separate_ coprocessor inside the A7 chip running its own L4-based microkernel. (From https://www.apple.com/privacy/docs/iOS_Security_Guide_Sept_2...)

" The Secure Enclave is a coprocessor fabricated in the Apple A7 or later A-series processor. It utilizes its own secure boot and personalized software update separate from the application processor. It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has 
 been compromised.

The Secure Enclave uses encrypted memory and includes a hardware random number generator. Its microkernel is based on the L4 family, with modifications by Apple. Communication between the Secure Enclave and the application processor is isolated 
 to an interrupt-driven mailbox and shared memory data buffers. "


That's pretty much exactly how AMD implements TrustZone. http://www.anandtech.com/show/6007/amd-2013-apus-to-include-...


It sounds more like they are using a Cortex-A5 to gain access to TrustZone with an existing x86 core.


And it sounds like Apple is using a separate unspecified ARM processor (probably a Cortex-A5 since that's the cheapest possible one) to gain access to an existing A7 or A8 core.


In Apple's case, they use the ARM ISA but implement their own microarchitecture and, from vvhn's comment, seem to also use a co-processor specifically for the Secure Enclave. But the link above on the TrustZone hardware architecture mentions that this isn't a requirement.

"TrustZone enables a single physical processor core to execute code safely and efficiently from both the Normal world and the Secure world. This removes the need for a dedicated security processor core, saving silicon area and power, and allowing high performance security software to run alongside the Normal world operating environment."

I guess since Apple use the ARM ISA, it's still binary compatible with ARM but with a different implementation. AMD uses an x86/ARM hybrid where the ARM part is an off-the-shelf Cortex-A5 which already contains TrustZone.


I highly doubt they use their own micro architecture. It'd be a lot cheaper to license Cortex-A5. Using their own micro architecture for the main processor gives them a huge competitive advantage. For the security co-processor, COTS would work fine.


What's stopping Apple from being coerced by the PTA (Peeping Tom Agencies) from installing an update to bypass the lock for specific targets?


Nothing.


> Apple has built a nuclear option. In other words, the Secure Enclave allows firmware updates -- but before doing so, the Secure Enclave will first destroy intermediate keys. Firmware updates are still possible, but if/when a firmware update is requested, you lose access to all data currently on the device.

That seems ideal. Let's hope Apple actually does that (probably not).


So the key is derived from the passcode? Isn't that 5 digits, which are easy to brute force?


[deleted]


> Also, it's my understanding that authentication can be trivially bypassed if the phone is kept powered on? (ie. authentication data is stored in memory? Seems to be the case for at least Touch ID)

It can be bypassed if the phone is kept powered on -and someone is able to start running code on it-. The normal approach of Apple signing a malicious recovery/update image would require rebooting first; there are probably exploits to be found somewhere, but Apple probably doesn't have any intentional mechanism for running code without either rebooting or unlocking the phone (at which point a lot more services open up including developer tools, etc.).

Also, Touch ID expires after a while, and (edit: I said the key is probably wiped, but that makes no sense if the phone continues to display contact names and such on the lock screen; there may be details in Apple's white paper, but since I don't remember any, disregard.)


Contact names are not displayed in the lock screen until the first unlock (new in iOS 8).

Notice how now all devices, not only ones with Touch ID, say that your phone needs to be unlocked for full functionality after a reboot.


5 digits would be easy. It would take a little over an hour on average. However, passcodes are not limited to digits. Use upper and lower case letters and digits, and then a 5-character passcode would take on average a little over a year. Make it 6 characters, and that's 72 years.
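
Those numbers line up with the 80 ms per guess mentioned upthread (average case = half the search space, no parallelism):

    def average_years(alphabet_size, length, seconds_per_guess=0.080):
        return (alphabet_size ** length / 2) * seconds_per_guess / (365 * 24 * 3600)

    print(average_years(10, 5))   # 5 digits        -> ~0.00013 years (about 1.1 hours)
    print(average_years(62, 5))   # 5 alphanumerics -> ~1.2 years
    print(average_years(62, 6))   # 6 alphanumerics -> ~72 years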


I'm guessing, based on my observations of 40-50 people entering passcodes, that north of 90% of people who bother to use a passcode (or care to) use a 4-digit PIN.

Even those who are super security conscious tend to just use numeric PINs. It's the very, very rare individual who enters an alphanumeric passcode.


Two questions:

* Regarding the fixed 80ms timing: has there been a study on the average time needed (aside from the WHY of 80ms instead of 70ms or 90ms)? I also want to ask for clarification: where is the entire PBKDF2-AES done? On the AES engine (which I believe is part of the A7 chip)? On a TPM chip (which might be a NO based on an unauthenticated source [1])?

* So this UID is created in every device and stored in the Secure Enclave, and there is a dedicated path between the SE and the AES engine. But can we conduct any side-channel attack? I am pretty much a noob with hardware security.


I wonder, is it in the realms of possibility for big-budget organizations like the NSA to simply read the UID from the silicon by means of physical analysis (e.g. a scanning tunneling microscope)?


It's very probably within the realms of possibility, yes.

It's very probably not within the realms of practicality just yet, however.


Go check out some conference presentations by Christopher Tarnovsky. He's made a career out of it, and acquired some very expensive toys (focused ion-beam equipment doesn't come cheap), but there are lectures of his explaining how he broke the (iirc) STMicro TPM chips for fun. These sorts of devices have all sorts of countermeasures against direct invasive attacks like these, but with enough cash and bricked test phones, I'd be greatly surprised if it wasn't entirely practical.

The only issue would be making the process so 100% reliable that you succeed first time, because a single mistake or misunderstanding could trash the single copy you have of the code.

I'm curious now if flylogic or chipworks have done any serious teardown of the 'secure enclave' stuff.


If the iPhone actually does become very popular, particularly with terrorists, it will be hard to imagine that the NSA doesn't just go develop this capability internally.

"enough cash and bricked test phones" - the great thing about this, is you can just buy the $650 phones - you can get a thousand of them for less than a million dollars, which probably is under your typical line managers budget in the NSA techOps group.

And, let's be realistic, Apple isn't trying to defend against the NSA or nation states, just your average hacker without access to $100mm+ in hardware.


The security is based on the premise that Apple is unable to decrypt because they do not keep a record of the device's unique ID that is the basis of all the cryptography.

What if that is not true? What if the device has a built-in keylogger to just grab all the crypto material from the user input, be it a passcode or a fingerprint?

Wouldn't it be partly better if this were based on truly public-key cryptography, with a randomly generated private key created each time the device is factory reset?


With all the talk about super crypto tech used in the iPhone, isn't cloning a phone's data as simple as declaring the phone lost, paying a mobile phone store clerk for a new phone, and resetting the password for the iCloud account? The new iPhone would have access to all your info, pictures, and message logs in 1-2 hours.

Isn't this something any reasonable PI, police department, FBI agent, or hacker can easily social engineer via legal and/or illegal means?


What about iCloud? It used to be that a user could reset their password and then restore from iCloud backup on a new phone... Is this no longer true?


From what I can understand, if you wanted to hypothetically maximize your security, it would mean turning off iPhone backups.

Apple could also have it set that you must have the iPhone passphrase to restore a backup but obviously those can be "easily" brute forced (because for the restore to work, it must mean you can bypass the old device's UID)


You can still fully backup your phone through iTunes, if you want to avoid iCloud backups, until those get encrypted as well.


If you're wondering when Secure Enclave first appeared, I looked up the A7 processor. It's in the iPhone 5S, the iPad Air, and the iPad Mini (2nd generation).


So the article's answer to "Why can't Apple decrypt your iPhone?" is: "Because Apple says that no software can extract the UID".

In our post-Snowden world this is just ridiculous and intellectually insulting. The author is either naive beyond belief or he got paid to write this PR shill piece.

cf. https://gigaom.com/2014/09/18/apples-warrant-canary-disappea...


And what about a backdoor? Code is not open-source. Just saying..


I'm skeptical as well. Seems to me that so many things on your phone are talking to Apple (and other 3rd parties) anyways, that this might not even matter?

Although the FBI seems to be not very happy about this (if it's not just "for show" that is)[1]. The FBI is using the age-old "Save/Protect the children" argument, literally.

[1] http://www.washingtonpost.com/business/technology/2014/09/25...


In particular, Apple provides photo backups and (speculation) may be doing something server-side to allow continuity features around text messaging from other devices.

This is getting into speculation about their role in PRISM, but I'm wondering how the iCloud encryption actually works. They say everything is encrypted while stored [0] but it's not clear (or I haven't found out) whether that's using a key derived from the password or something Apple controls. Either way I'm not entirely sure there's any way to stop Apple getting it if they're told to, given the lack of transparency.

[0] http://support.apple.com/kb/ht4865


Never mind that none of this matters if you have unlocked the phone and it's on (the default protection policy is "protect until first unlock", which happens right after turnup). That's gotta be 99.9% of cases. Once the police or Apple have a locked phone, all bets are off. Apple can just install an app remotely that gets them past the lockscreen, and unless this is from a cold boot, you have access to all apps and all data. This includes access to e.g. the logs of any configured Skype session, the company mails...

Add to that the usual closed software problems. Apple says they don't have a specific backdoor anymore (!), and they won't let you audit anything.


>> Apple says they don't have a specific backdoor anymore (!), and they won't let you audit anything.

Yes. It's amazing to me how eager some people are to take a corporation's spokespeople at their word.


RTFA, it's about backdoors.


Don't worry, the NSA can.


All the criticisms I've yet seen of Apple's iMessage security come down to "yes, it's probably completely locked down now and for all historical messages, but here's this obscure way they could open it up for messages in the future, therefore it's not secure".

Well duh! It's their software. Of course they could backdoor it in future, such as if required to by the government. That's true of any software. Apple are asserting that right now there are no such backdoors and iMessages are secure. I've not seen any credible argument that this is not the case other than "maybe they're lying". OK. What's the alternative? Run everything through OpenSSL? That didn't work out so well. Maybe we should run everything on Linux using Bash scripts. Oops again!

Maybe Apple are lying. Maybe they will sell us all out. But if they do, these things always have a tendency to come out in the open eventually. So far they've had a pretty good track record of being on the level. In the end it's their reputation, and their appreciation of its value, that is the best and really the only guarantee we have, as with anyone else we rely on.



