The iPhone 5C in question uses an A6 processor. It encrypts data by commingling the passcode with the unique device ID to create a strong 256-bit key, so you can't just pull the flash memory chip and brute-force it. Meanwhile, the OS will wipe the key if you guess a wrong passcode too many times, making the data forever inaccessible.
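To make the "commingling" concrete, here is a minimal sketch of the idea in Python. The real derivation runs inside the hardware AES engine and the UID never leaves the chip; PBKDF2 with the UID as salt is my stand-in, not Apple's actual construction.

```python
import hashlib

def derive_key(passcode: bytes, device_uid: bytes, iterations: int = 50_000) -> bytes:
    """Derive a 256-bit key by entangling the passcode with the device UID.

    Illustrative only: on the phone this happens inside the silicon, so the
    UID can never be extracted and used to run the derivation elsewhere.
    """
    # Using the UID as the salt makes the result device-specific: the same
    # passcode on a different device yields a completely different key.
    return hashlib.pbkdf2_hmac("sha256", passcode, device_uid, iterations, dklen=32)

key_a = derive_key(b"1234", b"\x01" * 32)
key_b = derive_key(b"1234", b"\x02" * 32)  # same passcode, different device
assert key_a != key_b
assert len(key_a) == 32  # 256 bits
```

This is why pulling the flash chip gets you nowhere: without the UID locked inside the processor, the passcode alone doesn't determine the key.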
However, there is one vulnerability with the A6 that some have theorized. If you could somehow get around the wiping, you could keep guessing passcodes; a typical 4- or 6-digit passcode could be guessed in under a day. So it may be possible to copy the flash memory into a soldered-in test rig that is effectively wipe-proof, restoring the contents every time they're wiped. That's the best guess I know of for what happened here.
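The "under a day" figure is just arithmetic. A quick back-of-the-envelope check, assuming roughly 80 ms per hardware-bound guess (a commonly cited figure for the on-device key derivation; treat it as an assumption):

```python
# Worst-case exhaustive search time for numeric passcodes,
# assuming ~80 ms per guess enforced by the key-derivation hardware.
SECONDS_PER_GUESS = 0.08  # assumption, not a measured value

four_digit = 10**4 * SECONDS_PER_GUESS   # 800 seconds
six_digit = 10**6 * SECONDS_PER_GUESS    # 80,000 seconds

print(f"4-digit worst case: {four_digit / 60:.0f} minutes")   # ~13 minutes
print(f"6-digit worst case: {six_digit / 3600:.1f} hours")    # ~22 hours
```

So even a 6-digit passcode falls within a day once the wipe-after-ten-failures backstop is removed, which is exactly why that backstop matters.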
But starting with the A7, Apple added the Secure Enclave. It now enforces, at the hardware level, an escalating time delay with each wrong passcode guess, going all the way up to a one-hour delay. That's also where the (unreadable) unique device ID resides, so there's no swapping the processor out for a rig: the key is forever wedded to this protection against brute forcing.
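The escalation schedule can be sketched as a simple lookup. The specific thresholds below are the ones Apple has published in its iOS security documentation for this era; take the exact values as illustrative:

```python
# Sketch of the Secure Enclave's escalating-delay policy.
# Thresholds follow Apple's published schedule; exact values illustrative.
def delay_after_failure(failed_attempts: int) -> int:
    """Return the enforced delay in seconds after the Nth wrong guess."""
    if failed_attempts <= 4:
        return 0           # first four misses: no penalty
    if failed_attempts == 5:
        return 60          # 1 minute
    if failed_attempts == 6:
        return 5 * 60      # 5 minutes
    if failed_attempts in (7, 8):
        return 15 * 60     # 15 minutes
    return 60 * 60         # 1 hour from the 9th attempt onward

assert delay_after_failure(3) == 0
assert delay_after_failure(9) == 3600
```

At one guess per hour, even a 4-digit space takes over a year to exhaust, which is the whole point of moving the counter into hardware.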
That is pretty darn spiffy from a security standpoint. If it works as designed, about the only hope anyone has of getting at data from an A7 or later device is through iCloud backups.
https://www.aclu.org/blog/free-future/one-fbis-major-claims-...
See also: http://blog.cryptographyengineering.com/2014/10/why-cant-app...
 Either that or some kind of peek into the secure enclave. It's specifically designed to inhibit this at a hardware level, but perhaps a nation-state could figure it out (e.g. verrrrry carefully grinding it down without destroying it and looking at state with an electron microscope).
Something that is tamper-resistant isn't tamper-proof.
It will likely take some time to develop a capacity to attack these chips, but it is not impossible in principle nor intractable.
In other words yes, I'd bet money there's a "secure enclaves" team at the NSA with a few million bucks to play with. Or teams.
One of the downsides of the way global manufacturing works today is that there are so many stages at which components can be intercepted.
You don't know that the device you bought actually consists of unmodified versions of the components that were part of the original design.
Could the NSA partner with the Korean government and Samsung to put a backdoor into components? I wouldn't rule it out.
The Secure Enclave isn't magic; it's just a secondary processor that handles cryptography, with its own memory to store variables such as the failed-attempt counter.
Attacking the SoC might be more complex and expensive, but ultimately it's the same as attacking the NAND or any other integrated circuit.
For all we know, the NSA could (and most likely does) develop its own in-circuit debuggers for common ASICs/SoCs, dump whatever unique values the target SoC stores, and take a crack at it.
It also isn't out of the realm of possibility for companies that specialize in in-circuit emulation, hardware design, and forensics to offer this as a turnkey solution.
Not really. Secure enclaves have added defenses that NAND does not. They don't have an API that lets you read their embedded secrets, for instance. You can't just hook up a debugger.
You'd have to try to get at the state of its transistors with an SEM or something. But additionally some have physical defenses against delayering that will self-destruct their contents in the event of a physical compromise. So while I ultimately agree that a nation-state could potentially craft an attack against a specific design, you're understating the difficulty.
There are other ways to attack hardware; you do not need an SEM (or an AFM, for that matter).
Devices that probe transistors on a microscopic level exist in the industry (e.g. http://www.tek.com/sites/tek.com/files/media/document/resour...), hence the more complex and expensive part.
Also, you cannot "desolder" the secure enclave and hook it up to a "mirroring" device. That attack requires the NAND to be encapsulated in a desolder-able memory chip that supports reading out state. Not the case with a secure enclave.
"The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing."
What this says to me is that while rewritable data storage is indeed kept in regular commodity flash memory chips, it's all encrypted by a unique device-specific key that is somehow burned into the secure enclave. So that one little secret kept inside the enclave would allow it to store everything else off-chip.
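That architecture, everything stored off-chip but sealed by one secret fused into the enclave, can be sketched as follows. This is a toy model: the keystream construction below is a deliberately simple stand-in for the hardware AES engine and must not be used for real cryptography.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher standing in for the hardware AES engine.
    NOT real crypto -- illustration of the architecture only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class Enclave:
    """Holds the fused UID; sealed blobs can live on commodity flash."""

    def __init__(self):
        self._uid = secrets.token_bytes(32)  # fused at manufacture, never exported

    def seal(self, plaintext: bytes) -> tuple:
        nonce = secrets.token_bytes(16)
        stream = _keystream(self._uid, nonce, len(plaintext))
        return nonce, bytes(a ^ b for a, b in zip(plaintext, stream))

    def unseal(self, nonce: bytes, ciphertext: bytes) -> bytes:
        stream = _keystream(self._uid, nonce, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, stream))

enclave = Enclave()
nonce, blob = enclave.seal(b"file system master keys")
# `blob` is safe to store on external NAND: without the UID it is noise.
assert enclave.unseal(nonce, blob) == b"file system master keys"
```

The point is that copying the NAND buys you nothing; the one secret that makes the blobs readable lives only inside the enclave.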
If you can envision a procedure for hacking around this I would love to hear it.
Today pretty much anyone can buy a probing station. These range from several thousand dollars for very basic ICs (such as the ones used on cheap smart cards) to hundreds of thousands or millions of dollars for something that can probe, say, any modern CPU/SoC.
Probing stations are used by chip manufacturers and designers, and quite often in post-production QA, where completed packages are depackaged and inspected with probes. This isn't "rocket science": there are plenty of people trained to operate such equipment, and the NSA is more than capable of hiring engineers from the semiconductor industry and contracting the most advanced probes out there to look into any chip it wants.
Heck, the NSA could easily afford cryogenic probes, which cool the IC down to very low temperatures. That isn't only needed to fully probe certain ICs that would otherwise fry without sufficient cooling; it also enables cryogenic attacks, in which you cool specific parts of the IC to a very specific temperature, one at which, for example, the IC can still read from its memory but write operations fail. In this case that might let an attacker get the Secure Enclave to keep generating keys while it is unable to store the failed-attempt counter in its own private memory.
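The logic of that counter-write attack is simple enough to sketch. This is a hypothetical model of the speculation above, not how the enclave actually behaves; the point is only that the lockout depends entirely on the increment persisting:

```python
# Hypothetical model: the lockout only works if the failed-attempt
# counter's increment actually persists to the enclave's private memory.
class CounterStore:
    def __init__(self, writes_succeed: bool = True):
        self.writes_succeed = writes_succeed  # False models the cryogenic attack
        self.failed_attempts = 0

    def record_failure(self):
        if self.writes_succeed:
            self.failed_attempts += 1  # under attack, this write silently fails

def guesses_before_wipe(store: CounterStore, limit: int = 10,
                        max_guesses: int = 1_000_000) -> int:
    """Count wrong guesses possible before the wipe threshold is reached."""
    guesses = 0
    while store.failed_attempts < limit and guesses < max_guesses:
        guesses += 1
        store.record_failure()
    return guesses

assert guesses_before_wipe(CounterStore(writes_succeed=True)) == 10
assert guesses_before_wipe(CounterStore(writes_succeed=False)) == 1_000_000
```

With the write path frozen, the counter never reaches the wipe threshold, so the attacker gets an unbounded number of guesses.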
Apple isn't magic. I know you like to think it is, but you really don't seem to grasp just how many kinds of physical attacks there are on ICs.
Or is there a different piece of tech that can self destruct when tampered with?
That doesn't get you the key stored in the target device.
It's not the same as attacking any other integrated circuit, because this one is rigged to blow away its secrets if you try to get instruments/debugging tools inside its enclosure in a way its designers thought of.
Apple already suspects that this is happening
Wouldn't Apple be able to detect such an attack, if they were looking (e.g. decap sample chips, image at high magnification, and compare to the original design files)?
I don't think such an attack would work very well as a tailored access sort of thing. If the backdoored chips got into the supply chain, the general public would be affected. If the NSA wanted to only target certain people, they'd have to have a huge amount of control over Apple's supply chain, which would surely be noticed.
If you want to ensure you get a phone from the standard supply chain, buy it in-person at a store where you can see someone take it off the shelf.
You are assuming it's perfect in conception, design and implementation. That is a very unlikely assumption for any system.
Is this burnt silicon, or just new configuration (genuinely curious)? I'd assume that anything but a smouldering chip is in the realm of circumvention.
Isn't that 5S or later? Touch ID and Secure Enclave were introduced w/ the 5S model.
In the '80s, one could do that by cutting a single line in the connector between the motherboard and the hard disk. I would guess something similar is possible between the motherboard and the flash. It would be more complex, because 'write' now isn't a single signal but part of a protocol, and because one would have to desolder the flash, but that shouldn't stop the FBI.
So, I guess one can get rid of the "restore the contents every time it's wiped" step, speeding up the process.
As to the A7: as you indicate, people will try to do essentially the same thing to the flash storage inside it. Cut one open and try to figure out where its memory is, try cutting that loose from the CPU part, etc. The scale will be smaller, and the challenge (a lot, if the A7 has anti-tamper devices on-board) harder, but impossible?
Does the use of the word 'commingling' here imply any kind of special combination technique?
source: just search "San Bernardino iCloud" and you'll get a whole bunch of articles on the subject
(1) Does the public accept that the FBI is even telling the truth on this (i.e., did they actually "break into" this iPhone)?
(2) If they did gain access to the iPhone's info, was it actually through the use of a vulnerability, or did they discover some other info that led them to the passcode?
It seems reasonable that the FBI could still notify Apple of its method of entry and then notify the public in due time.
The EFF, as much great work as they do, is showing a bit of impatience here. Perhaps they feel they have a bone to pick with the government, as the government seems to feel they have one to pick with tech. Neither party looks great by leveling such public complaints prematurely.
Not to mention that if this was indeed a physical attack, whether NAND mirroring or ASIC replication, there isn't really anything to inform Apple about.
Apple can't design a chip that won't be broken; all of them, including those with a Secure Enclave, can be broken by physical attacks.
Let me just say at the outset that I am entirely unsympathetic to the FBI with respect to the Apple case. I side with Apple unreservedly. But the FBI started this case because they claimed there was "no other way" to get into the phone. Then, lo and behold, it turns out that there was another way.
The next time the FBI tries this, I think the public reaction will be that the FBI can find a way, just like they did the last time. In other words, the FBI is now the Boy Who Cried Wolf.
If there is a known vulnerability, I'm willing to bet Apple will find it rather quickly. I'd imagine Apple has engineers poring over the source code now.
In fact, the contract they have with the FBI doesn't supply anything that can break into an iOS 9 device, so unless they had a separate product line not included in a contract only two months old, it's unlikely it was them.
They have a lot of turnkey solutions for various markets, as mobile forensics is only part of their portfolio, but quite a few of their forensic services and products are not publicly listed.
*I worked for an Israeli information security firm that is a research partner and service provider for cellebrite.
Years ago, I could never get a straight answer from them about whether they could extract information from encrypted devices. I guess we now know the answer: they can, if they really want to.
Did you see Charlie Stross's speculation that from Apple's end this is all about them becoming a retail bank via Apple Pay: http://www.antipope.org/charlie/blog-static/2016/03/follow-t...