What Is the Secure Enclave? (mikeash.com)
163 points by ingve on Feb 19, 2016 | 57 comments



In a more ideal world, all devices would have something like the Secure Enclave, but with the hardware and software open sourced. There would be a public process for vetting and verifying the design, as well as for verifying manufactured instances. Ideally, it would be implemented in such a way that its security was mathematically provable. This would let the public have the benefits of trusted execution, which they could then use to protect their information in the hands of corporations and governments.

This is the exact same asymmetry embodied in openness/privacy/surveillance. When governments and corporations have unfettered access to people's private information, this is very bad for human rights and an open democratic society. On the other hand, when individuals have open access to information from government organizations and corporations, this is generally good for human rights and an open democratic society.

Organizations' use of trusted execution technologies against individuals has been a disaster for individual rights. However, empowering individuals to use such technologies to protect themselves against corporations would have tremendous benefits for individuals.


It may be achievable to build a phone case that connects over USB, with some storage and a touchpad, and achieves something similar to the Secure Enclave (and the passcode) in a more open fashion.


It isn't possible to open source the hardware. If the secure enclave is really secure, it has parts that by definition cannot be examined or verified. So hardware specs can be open, but you'll never have any guarantee that's what you're getting because hardware is always a compiled blob.


Can you list some of these parts that, by your definition, cannot be examined or verified? If you're talking about physical security features to make decapping difficult, surely a high-res xray would be good enough to verify that the hardware is as specified without being able to read the contents of nonvolatile memory, and without compromising physical security.


Question: could an x-ray device be made sensitive/high resolution enough to "read" whether a bit is flipped in silicon? (and extract the private key)


It's a tricky thing, and I fully admit that I have a limited understanding of the specific effects of x-rays on semiconductor devices. My understanding though is that you get higher resolution with higher-energy x-rays, and that as you increase the energy, you're more and more likely to damage the semiconductor (e.g. by inducing bit flips).

Here's some material from a flash memory manufacturer with more details: https://www.spansion.com/Support/Application%20Notes/X-ray_i...


Aren't the contents of NV memory pretty important though?

Anyway, it's just different than software. You can't SHA-sum your hardware to verify it's what you expected.

I'm not sure, maybe an x-ray could see inside. Not unlike how binary blobs can be examined, but usually that's not considered open.


Manufacturers already use x-rays to make sure they're not installing counterfeit chips. Maybe we can look at the verification techniques used for US military procurement or by NASA.

I'd wager that we can have open hardware, and that its correct manufacture can be verified to a satisfactory degree by a combination of auditing and testing without running into limits of the laws of physics or inherent design requirements.


I think one thing that needs to be said is that Mike Ash is just one of those superhumans where Apple & code are concerned.

I've been reading Friday Q&A for years and the posts never cease to amaze. He doesn't politicize or whine about anything. It's always thoughts from a teacher, a master.

I wish there were more people like him in the tech world. Thanks Mike!


That's really nice of you to say. I appreciate it.

I do actually whine a lot, I just try real hard to keep it off my blog. Complaining is fun, but it's not helpful.


So one thing this article kind of takes for granted / doesn't describe is the physical security of the Secure Enclave. What exactly is done (physically) to make it tamper-proof? How is the UID stored in such a way that the Secure Enclave can still read it, but somebody with access to unlimited resources can't dissect the processor die to read it out? I understand that there have to be some sort of countermeasures, but I haven't ever really seen anybody describe what they are.


I'm more familiar with HSMs (Hardware Security Modules), which are larger devices used to securely store and manipulate cryptographic data by certificate authorities and so on. But the security requirements are similar, and HSMs are designed to destroy their secured data (e.g. key material) if they're physically tampered with. If you physically breach their security envelope, then it protects the data by immediately wiping internal storage of all key material.

I suspect the iPhone's Secure Enclave is designed to self-destruct in a similar way.


> it protects the data by immediately wiping internal storage of all key material.

Yeah, so how, exactly, do they wipe their data? Is it a firmware process? What if they are unpowered while they are tampered with?

Or is the media attached in such a way that physically removing it would damage it physically?


A common HSM approach is to keep the key material in battery-backed SRAM so it evaporates when unpowered or tampered. The single-chip solution used in smartphones probably has no budget for extra parts just for key security, so the key will be fixed and stored in processor antifuses. You theoretically could get at them with a scanning electron microscope, but only with extreme difficulty and no guarantee of success on a single device. And it's a destructive process.

http://www.microsemi.com/document-portal/doc_view/132857-ove... : see page 5. That's Microsemi but the general approach of Apple/TSMC/Samsung is likely to be the same.


Do you have any idea of what success rate you would be looking at there? 99%? 50%? 10%?


I don't know, but evidently the manufacturers think it's "low enough". This is definitely the kind of security which is about increasing the resource spend per attack rather than guaranteeing impossibility.


All of the sibling comments have great explanations of common processes, but one is missing: a metal mesh as part of the top of the CPU silicon. It's talked about a little bit here: http://users.encs.concordia.ca/~clark/courses/1501-6150/scri... Essentially, there's a "trap" on the top of the chip that resets the memory if touched by a conductive probe of any kind. I don't know the specifics of how you'd construct such a thing, but it seems like it wouldn't be too complicated to do.


Electronic fuses and secure (fusing) EEPROMs are not uncommon in HSMs, but I honestly don't know what Apple is using.


Yes, my question exactly. I know one of the countermeasures devices like the RSA tokens use is to fill the body of the device with plastic or resin to make it really hard to pull apart, but I'm curious how it works for a microprocessor.


You might be interested in this old BlackHat presentation by Christopher Tarnovsky: https://www.blackhat.com/presentations/bh-europe-08/Tarnovsk...


> Given the goal of protecting the user's data, it would make a lot of sense for the Secure Enclave to refuse to apply any software update unless the device has already been unlocked with the user's passcode.

This is also speculation, but perhaps this is why you have to enter the passcode on device reboot. This may be simply a software protection (see talk about being compelled to provide a fingerprint), but it may actually be a necessary step for the secure enclave to boot as well.

I also suspect that Tim Cook's announcement doesn't mean to imply that such a theoretical attack currently exists, but rather that one may exist in the future that Apple could be compelled to comply with.


I take it these suspects didn't have any backups of their phone? It clearly cannot be the case that the backup can only be decrypted by the original device, since the entire point of the backup is to be able to restore it to a different device.


The suspects DID have backups of the phone on Apple's iCloud platform, and Apple already provided that to the FBI.

But that doesn't meet the FBI's actual needs. The FBI's ACTUAL needs are to have a case with a lot of public sympathy in which they can force a major tech company to very publicly comply with their order to add a backdoor to a phone (without calling it that) in order to influence the legal and legislative systems (and perhaps public opinion, if the FBI even cares about that).


Has the FBI (or Apple) been able to decrypt the backups from iCloud?


Apple has provided the FBI with decrypted copies of the iCloud backups. But the phone only backed up "intermittently", so recent activity would not have been included in those backups. (Well, it COULD have been, except that the FBI told San Bernardino County to change the password, which messed that up.)


iCloud encrypts backups at rest, but they're not encrypted with a key that only the user has; they're encrypted with Apple's key.


They probably have, but the whole point is they want a way to decrypt the device, to use as a precedent in future cases.


Today's editorial in NYT states that Apple has, in fact, provided the latest iCloud backup for the phone in question:

http://www.nytimes.com/2016/02/19/opinion/why-apple-is-right...


“[Apple] executives […] initially offered to help recover the iPhone's contents by connecting it to the Web from a network that the device had already accessed. That would have backed up the iPhone to iCloud and granted the FBI a way to obtain data without requiring it to crack a password, they said. […] The Friday DOJ filing to the court indicated that the county health department, which employed Farook and owned the phone, had remotely reset the password in an attempt to gain information, eliminating the possibility of an auto backup.”

Source: http://www.politico.com/story/2016/02/apple-iphone-privacy-j...


Indeed, you're supposed to encrypt the backups with a password. If you don't, some data like Wi-Fi passwords won't be backed up, as far as I recall. If they did have backups that were encrypted, the FBI could brute-force that password too (if it's not just sitting in the keychain).
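
For a sense of what "brute force that password" means in practice: an encrypted backup is an offline target, so the attacker just runs the key derivation against candidate passwords as fast as their hardware allows. A rough Swift sketch of such a dictionary attack (the salt, iteration count, and key check below are placeholders, not the real iTunes backup format):

    import Foundation
    import CommonCrypto

    // Placeholder salt, round count and "known" derived key -- this does not
    // reproduce the actual iTunes backup key derivation or file format.
    func pbkdf2(_ password: String, salt: [UInt8], rounds: UInt32) -> [UInt8] {
        var key = [UInt8](repeating: 0, count: 32)
        _ = CCKeyDerivationPBKDF(CCPBKDFAlgorithm(kCCPBKDF2),
                                 password, password.utf8.count,
                                 salt, salt.count,
                                 CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                                 rounds, &key, key.count)
        return key
    }

    let salt = [UInt8](repeating: 0, count: 16)               // placeholder
    let target = pbkdf2("hunter2", salt: salt, rounds: 10_000)

    // An offline search is limited only by how fast the attacker can run the
    // KDF, which is why the strength of the backup password matters.
    for guess in ["123456", "password", "hunter2"] {
        if pbkdf2(guess, salt: salt, rounds: 10_000) == target {
            print("recovered:", guess)
        }
    }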


Indeed - is the backup encrypted using the Apple account password or a user-specified password? I am just wondering whether, if a backup exists, it could be decrypted simply by doing a password reset (this would only work if it uses the Apple account password, not a specific password for the backup).


An iPhone backup file managed by iTunes is encrypted with a separate key, which can be stored in the OS X keychain. A paranoid user might opt not to do so. The OS X keychain could be accessed by compromising the security of OS X, depending on how much encryption the user opted for. It's entirely possible for a paranoid OS X user to make things difficult in a case like this, even for the FBI.

On the other hand, if the FBI had some lead time, all of the above could be circumvented without Apple's cooperation.


Couldn't we find out more about this by searching for patents around the Secure Enclave filed by Apple Inc.?


If "secure enclaves" are actually secure, companies manufacturing those "secure enclaves" could just log the encryption key(s) at fabrication time to render them useless?


The chips do not have a unique identifier on them. It would be impossible for the manufacturer to look at a chip and then query their DB for the encryption key. At best they would be able to provide all the encryption keys they have produced, possibly reducing the search space from 2^128 to ~10B. Also, the Secure Enclave manufacturing process is done in such a way that even the manufacturer does not know what the key is. They don't generate a key for each chip; a natural phenomenon which is truly random is used to burn in the encryption key.


That the enclave chips don't have a unique identifier on them after being installed in an Apple device does not mean that they didn't have a temporary identifier...

Also, are you saying that even the machines manufacturing them do not know what they are doing ("... secure enclave manufacturing process is done is such a way that even the manufacturer does not know what the key is.")? Sounds a lot like a PR campaign... I would be curious to know how this manufacturing process really works.


It sounds like the encryption key is generated when the device is running as a combination of the UID and passcode/password, so it's not quite as simple as being able to decrypt everything straight away if you are the manufacturer.

Even so, if they do have the UID, that greatly reduces the security of the encryption - especially if you are using a short passcode.
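
As a rough illustration of that "combination" (this is not Apple's actual construction; the KDF choice, inputs, and tuning below are all assumptions), entangling a device-bound UID with the passcode could look something like this in Swift with CryptoKit:

    import Foundation
    import CryptoKit

    // Stand-in for the per-device UID fused into the chip, modeled here as
    // just a random 256-bit value.
    let deviceUID = SymmetricKey(size: .bits256)
    let passcode  = Data("1234".utf8)

    // Illustrative derivation only; the real scheme is Apple's and differs.
    let derivedKey = HKDF<SHA256>.deriveKey(
        inputKeyMaterial: deviceUID,
        salt: passcode,
        info: Data("data-protection-key".utf8),
        outputByteCount: 32)

If the UID is known to the attacker, the only unknown input left is the passcode, so a 4-digit code contributes only about 13 bits of entropy to that derivation.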


The idea here is that the secure enclave is a small microprocessor with hardware RNG, encrypted memory, and encrypted storage. The secure enclave generates its own key and it never leaves the chip.
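
For a concrete sense of that "generated on-chip, never exported" model: Apple's CryptoKit lets an app ask the Secure Enclave for a P-256 key that can only be used by reference. A minimal sketch (assumes a device that actually has a Secure Enclave):

    import Foundation
    import CryptoKit

    // The private key is created inside the Secure Enclave; the app only
    // ever holds an opaque handle to it.
    let key = try SecureEnclave.P256.Signing.PrivateKey()

    let message = Data("prove this came from this device".utf8)
    let signature = try key.signature(for: message)

    // Only the public half is exportable; verification can happen anywhere.
    print(key.publicKey.isValidSignature(signature, for: message))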


Trusted computing modules have been broken before. Someone presented a working attack a few years ago at Black Hat, using an electron microscope to figure out how to crack the TPM in the Xbox.


Yes, but the process is destructive and quite time-consuming, requires expensive equipment and cannot really be automated.

That's not a problem if you want to, say, extract secret keys once from one device that you own in order to break DRM.

But it makes it completely impractical if your aim is to extract crypto keys from smartphones to decrypt people's data.


If you can back up the data ahead of time then it doesn't matter if the process is destructive. So couldn't they (in theory) dump the flash memory, then extract the key from the processor and use it to decrypt the memory?

Obviously not practical for mass surveillance, but it would work to read one particular person's phone, which is the issue at hand.


> Obviously not practical for mass surveillance, but it would work to read one particular person's phone, which is the issue at hand.

Eh. I mean, yes, if the issue at hand were ACTUALLY to read this one particular person's phone, that would probably be a valid avenue of attack.

But the actual issue at hand here is "establish precedent that you can use the All Writs Act to get a judge to hand you an ex parte order that lets you walk into a tech company and order them to build you (and, crucially, cryptographically sign) the tools you want so that you can get whatever data your heart desires".

Once that precedent is in place for this "just this one phone, we swear" order, nothing's stopping them from walking into Apple or Google or whoever with an order to build and sign a custom OS version that, say, copies all data to an FBI server, and to push it as an OTA update to a target.

Once All Writs has been expanded to mean "you have to build us signed, custom versions of your software to get us data we want", all bets are off.


No, because the data you actually need isn't the ciphertext, it's the key, and the key is stored in hardware; the process of trying to recover it through imaging is what's destructive and risky. If you ruin the device trying to decap a chip, you don't get a second crack at the key, and the ciphertext is forever useless.


It makes it harder or impossible to do covertly, and it may also not be allowed for legal reasons.


Is it possible that the enclave is in fact an ASIC with the crypto logic and UID burned into the silicon? That would ensure that it couldn't be updated or compromised, by Apple or anyone else.

This is pretty far outside my area of expertise, so that may be a very dumb question.


It's likely implemented as part of the System-on-Chip that Apple is using in their phones, yes, either reusing ARM's TrustZone extensions or something very similar to them.


Since the Apple chip is derived from an ARM design, it would make sense to have the secure enclave implemented with TrustZone rather than provided as a separate piece of hardware. Most probably a TEE (Trusted Execution Environment). Lots of TEEs are based on L4.


Nope, it's a separate core, according to Apple.


Educated guess: it may be that the application processor also has a trusted execution environment containing stubs that communicate with the Secure Enclave. This would prevent kernel-level exploits from writing to the shared memory and mailboxes.


On my Android phone, I have to unlock the screensaver and approve updates. Is it the same on the iPhone? If so, it seems as if the FBI are wasting their time asking Apple to update the phone, since they'd have to unlock it first.


According to the article, that isn't known.

>The first possibility is that the Secure Enclave uses the same sort of software update mechanism as the rest of the device. That is, updates must be signed by Apple, but can be freely applied. This would make the Secure Enclave useless against an attack by Apple itself, since Apple could just create new Secure Enclave software that removes the limitations. The Secure Enclave would still be a useful feature, helping to protect the user if the main OS is exploited by a third party, but it would be irrelevant to the question whether Apple can break into its own devices.

If we assume this is the case, it might explain what McAfee meant when he mentioned social engineering.


Don't forget that the phone the FBI wants to decrypt doesn't have a Secure Enclave. If Apple agreed, it would be trivial to write an OS without the exponential backoff when entering incorrect passcodes.
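
To put numbers on why that backoff matters: the passcode-to-key derivation itself is tuned to take roughly 80 ms per attempt in hardware (figure from Apple's iOS security guide), so with the software delays and the optional erase-after-10-failures gone, the remaining cost is just:

    // Back-of-the-envelope brute-force times, assuming ~80 ms per attempt
    // spent in the hardware key derivation and no other rate limiting.
    let secondsPerAttempt = 0.08
    let fourDigitWorstCase = 10_000 * secondsPerAttempt        // ~13 minutes
    let sixDigitWorstCase  = 1_000_000 * secondsPerAttempt     // ~22 hours
    print(fourDigitWorstCase / 60, "minutes;", sixDigitWorstCase / 3600, "hours")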


No, see earlier in the article. That is true for the 5S and later, but not for the device in question (5C) and older.


On an iPhone, you can update the OS from recovery (aka DFU) mode, without it being unlocked.


Which usually results in the data being deleted, though.


You have to hand it to Apple - they are awesome at naming.


...awesome at naming what, the industry standard term "Secure Enclave"?



