Hacker News
The bumpy road towards iPhone 5c NAND mirroring (arxiv.org)
129 points by praving5 on Sept 15, 2016 | 64 comments

Probably useful to mention this in the comments here: this works for the 5c, but not the 6 and beyond, due to the addition of the Secure Enclave.

So the Secure Enclave must have embedded flash inside the processor that holds the PIN attempt counter -- meaning the counter would never be written to external flash at all, right?

There's a feature provided by a fair number of flash chips called a "Replay Protected Memory Block" or RPMB. The idea is that you provision your flash chip with a secret shared only with the secure enclave (presumably stored on fuses in the secure enclave) and then you can use that key to read and write a small block of storage on the flash chip.

It's "replay protected" because the crypto prevents an attacker from replaying old contents of the chip back to the secure element.
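The idea can be sketched in Python (a toy model, not the real eMMC RPMB frame format -- the key, block size, and counter width here are made up for illustration):

```python
# Toy model of RPMB-style replay protection: the flash keeps a monotonic
# write counter, and every write must carry an HMAC over (data, counter)
# under the key provisioned at manufacture time.
import hmac
import hashlib

class ToyRPMB:
    def __init__(self, shared_key: bytes):
        self.key = shared_key      # provisioned once, e.g. from SoC fuses
        self.counter = 0           # monotonic write counter
        self.block = b"\x00" * 16  # the small protected storage block

    def _mac(self, data: bytes, counter: int) -> bytes:
        return hmac.new(self.key, data + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()

    def write(self, data: bytes, mac: bytes) -> bool:
        # The MAC must cover the *current* counter value, so an attacker
        # cannot replay an old (data, mac) pair after the counter advances.
        if not hmac.compare_digest(mac, self._mac(data, self.counter)):
            return False
        self.block = data
        self.counter += 1
        return True

key = b"secret-from-fuses"
chip = ToyRPMB(key)
msg = b"attempts=1".ljust(16, b"\x00")
ok = chip.write(msg, hmac.new(key, msg + (0).to_bytes(4, "big"),
                              hashlib.sha256).digest())
# Replaying the same (data, mac) pair fails: the counter has moved on.
replayed = chip.write(msg, hmac.new(key, msg + (0).to_bytes(4, "big"),
                                    hashlib.sha256).digest())
```

The monotonic counter is the whole trick: without it, a valid MAC would stay valid forever and old flash contents could be played back.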

If Intel ever stopped mucking around with SGX licensing policy, SGX + RPMB would be a really nice combination on laptops.

From what Apple's released on how iPhone security works [1], it sounds like such keys are still written to external flash, just in a much more low-level way. So there may be a theoretical way to do this attack on a more recent iPhone, but you'd have to do a lot more reverse engineering to work through a few layers of undocumented proprietary protocols.

[1] https://www.apple.com/business/docs/iOS_Security_Guide.pdf

The basic issue with using external memory is that an attacker can simply become a man in the middle and control everything that happens on the bus. There is no magic "more low-level way" available on flash ICs. The flash device on any Apple device can be fully emulated, either by an FPGA or by a special high-speed setup that still has the flash IC attached to it. When the magic command comes in to write the attempt counter, you respond as if the value was written correctly, but don't actually write it.

Suppose the block being written carries a checksum that incorporates some value calculated on the local copy inside the secure enclave. The main problem would then be that the secure enclave may store that checksum locally in its own flash. When you attempt the next passcode, it reads the previous checksum from the external NAND flash IC, sees that the checksum matches, but notices that the attempt counter you stored doesn't sum up properly. So you would also need to reverse engineer their checksum process and tweak some other value to make the checksum add up.

The alternative, as I suggested, is to store the actual attempt counter in the internal flash of the secure enclave on the main A8 (or whatever) processor. That forces the attacker to be far more sophisticated and to take on real risk of damaging the chip: completely remove the chip, then FIB it to cut down to the proper layer. If Apple were smart they would bury the flash for the secure enclave under a bunch of important metal routing that would be super difficult to work around; then even a sophisticated nation-state actor would be highly challenged to make this modification.
Desoldering the flash IC and soldering in an interposer with an FPC that connects to a purpose-built FPGA could be done in 30 minutes or less. So if Apple wants to make this scheme difficult, they should embed the counter deep inside the main processor and not rely on the external flash at all.
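The man-in-the-middle trick described above boils down to a few lines of logic (a sketch only; the page address for the attempt counter is made up, and a real emulator would of course live in FPGA gateware, not Python):

```python
# Minimal model of a MITM flash emulator: pass all writes through,
# except the (hypothetical) page holding the attempt counter, which
# gets acknowledged but never actually persisted.
ATTEMPT_COUNTER_PAGE = 0x42   # made-up address for illustration

class MitmFlash:
    def __init__(self, backing: dict):
        self.backing = backing

    def write(self, page: int, data: bytes) -> bool:
        if page == ATTEMPT_COUNTER_PAGE:
            return True                # lie: ack the write, drop the data
        self.backing[page] = data      # everything else behaves normally
        return True

    def read(self, page: int) -> bytes:
        return self.backing.get(page, b"\xff" * 4)  # erased NAND reads 0xFF

flash = MitmFlash({ATTEMPT_COUNTER_PAGE: b"\x00\x00\x00\x00"})
flash.write(ATTEMPT_COUNTER_PAGE, b"\x00\x00\x00\x09")  # "9 failed attempts"
stuck = flash.read(ATTEMPT_COUNTER_PAGE)                # still reads zero
```

Which is exactly why the counter-plus-checksum state the secure enclave keeps internally, as described above, is what actually breaks this approach.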

Great description. You remind me of Joe Grand (L0pht Heavy Industries), Joe FitzPatrick (NSAPlayset), Hector Martin (fail0verflow) and Micah Scott (scanlime). All are brilliant Electrical Engineers with a security background. Do you have an EE degree too? Anyways, thanks for the detailed explanation. :)

Ha thanks--quite a nice set of compliments there! Yea, you figured me out, I'm an EE who grew up as a wannabe hacker.. :-)

From skimming the paper:

"Each Secure Enclave is provisioned during fabrication with its own UID (Unique ID) that is not accessible to other parts of the system and is not known to Apple. When the device starts up, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave’s portion of the device’s memory space.

Additionally, data that is saved to the file system by the Secure Enclave is encrypted with a key entangled with the UID and an anti-replay counter. "

This sounds like the secure enclave chip has a secret key, and all of its uses of external memory are encrypted using said key. This sounds like one would either need to break the crypto system itself, or compromise the secure enclave co-processor.
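The "entangled with the UID" part can be illustrated with a rough sketch (the real derivation is Apple-proprietary; PBKDF2 here is just a stand-in KDF, and all the inputs are made up):

```python
# Sketch of key "entanglement": the file key depends on the device UID,
# the passcode, and an anti-replay counter. Since the UID never leaves
# the chip, the derived key cannot be recomputed off-device.
import hashlib

def derive_key(uid: bytes, passcode: bytes, counter: int) -> bytes:
    # PBKDF2 stands in for whatever KDF the Secure Enclave actually uses.
    salt = uid + counter.to_bytes(8, "big")
    return hashlib.pbkdf2_hmac("sha256", passcode, salt, 10_000)

k1 = derive_key(b"device-uid", b"1234", counter=7)
k2 = derive_key(b"other-uid", b"1234", counter=7)   # same passcode, other device
```

Same passcode, different device, completely different key -- which is why a raw dump of the external flash is useless on its own.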

How would one provision a secure enclave at fabrication time? What does that even mean?

Most likely it's a fuse type thing: a one-time-programmable fuse that is set by some subroutine. Basically you execute that function and bam, you get back a key and a status saying it worked. Try to execute it again and the fuses have already been set, so you get the same key back. Many ICs have built-in one-time-programmable memories or fuses to be used during provisioning -- for example, to set the MAC address on an Ethernet or WiFi chip. It's literally burned into the chip in the factory and cannot be changed. It's a one-time thing and does not change after a reboot or anything.
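In software terms, the behavior is something like this (a toy model of the provisioning semantics, not any actual fuse controller interface):

```python
# Toy model of one-time-programmable fuse provisioning: the first call
# burns a random key; every later call returns the same key and refuses
# to overwrite it -- mirroring how a UID survives reboots unchanged.
import os

class FuseBank:
    def __init__(self):
        self._burned = None   # un-burned fuses

    def provision(self) -> bytes:
        if self._burned is None:
            self._burned = os.urandom(32)   # one-shot burn
        return self._burned                  # later calls: same key back

fuses = FuseBank()
first = fuses.provision()    # burns the key
second = fuses.provision()   # already burned: identical result
```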

In chip manufacturing, there's a way to make a "random" pattern on a chip, unknown even to the manufacturer. So a part of the SoC is random, but can be read by other parts. That becomes the random ID.

Imagine it like taking that piece of paper to give lottery numbers, throwing a couple darts on it, and then using that as the ID. Except the dart throwing happens in a way where you can't actually control/see the result during manufacturing.

There's a term for this that eludes me.

I'm curious what this process would be. My understanding was they'd typically have a small section of write-once fuses/PROM, and then some final process step to permanently program an ID into that area. That would mean the process to do so could possibly be recorded (or compromised), so I'm interested if there's a fabrication technique to reliably create random ROM sections.

Do you have any more info?

https://en.m.wikipedia.org/wiki/Physical_unclonable_function it's called a physically unclonable function.

The page is a bit obtuse, honestly I might have misunderstood a part of it.

I kind of doubt they would use a process like this for the iPhone; they probably program it in to avoid ID collisions. Yea, I'm also curious if there is a name for this scheme where it generates some random pattern -- I haven't heard of it myself, and I previously worked in the semiconductor device world. Not saying it doesn't exist, just that I haven't heard of it. There are plenty of random sources used to generate a random bit, such as thermal noise or clock jitter, but that is a single bit: you would then need a circuit to read that bit over and over to generate a random string like a UUID, and you would need some statistics proving that the UUID you generate over a finite time (probably a few milliseconds) is not highly self-correlated.

The UID is 256-bit. With a proper random number generator you don't do anything additional to avoid collisions. Hardware random number generators are standard for crypto processors like the secure enclave, and there are standard tests used to check the output of an RNG for problems like the one you're talking about (e.g. the NIST test suite).
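The simplest of those checks is easy to sketch (this is the idea behind the NIST SP 800-22 "monobit" frequency test, heavily simplified; real RNG qualification runs the whole suite):

```python
# Monobit sanity check: the fraction of 1-bits in the RNG output
# should be very close to 1/2. A grossly biased source fails.
def monobit_ok(bits: str, tolerance: float = 0.01) -> bool:
    ones = bits.count("1")
    return abs(ones / len(bits) - 0.5) < tolerance

balanced = "01" * 50_000                       # exactly half ones
biased = ("1" * 90_000) + ("0" * 10_000)       # 90% ones: clearly broken
```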

Stochastic process perhaps?

No, it does not rely on "undocumented proprietary protocols". The document you link clearly states that external flash contents are encrypted with a key that resides in the secure enclave, entangled with the user passcode. Furthermore the secure enclave has guards against brute forcing the passcode with an escalating time delay.

What's different in more recent phones (>=A7 processor) is the secure enclave enforces that time delay, as opposed to the operating system, which is the reason why this brute force attack works on the 5c/A6.
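That delay policy amounts to a simple lookup keyed on the failure count; the thresholds below are illustrative only, not Apple's exact schedule:

```python
# Sketch of an escalating-delay policy like the one the security guide
# describes. The (attempt limit, delay) pairs here are made up.
def lockout_delay(failed_attempts: int) -> int:
    """Return the enforced delay in seconds before the next attempt."""
    schedule = [(4, 0), (5, 60), (6, 300), (8, 900)]
    for limit, delay in schedule:
        if failed_attempts <= limit:
            return delay
    return 3600   # beyond the table: an hour per attempt
```

On the 5c this lived in the OS, where NAND mirroring could reset the state behind it; from the A7 on, the secure enclave enforces it itself.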

This is contradicted within the linked research:

>The same approach could be applied to the newer models of iPhone. The same type of LGA60 NAND chips are used up to the iPhone 6 Plus. Any attacker with sufficient technical skills could repeat the experiments. Newer iPhones will require more sophisticated equipment and FPGA test board

They are wrong. The A7 added a hardware passcode attempt counter that would defeat their method.[1]

And that's not all. With the introduction of Touch ID, Apple has shifted to 6 digit passcodes as the standard. The authors note that their method would not work so well, even if they had infinite time:

"Given six attempts per each rewrite this method would require at most 1667 rewrites to find a 4-digit passcode. For a 6-digit passcode it would require over 160 thousand rewrites and will very likely damage the Flash memory storage."
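The quoted arithmetic checks out: with six passcode attempts per NAND restore, the worst case is ceil(keyspace / 6) rewrites:

```python
# Worst-case number of NAND restore cycles for exhaustive search,
# given six passcode attempts between restores.
from math import ceil

rewrites_4_digit = ceil(10**4 / 6)   # 4-digit passcode space
rewrites_6_digit = ceil(10**6 / 6)   # 6-digit: "over 160 thousand"
```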

If you want to decrypt the modern iPhone you'd probably have to try to poke inside of the secure enclave with an SEM or something. And then you're getting into the realm of hardware defenses against this kind of intrusion, like physical self-destruction when probed.. not sure if Apple's doing anything there but it's crazy stuff.

[1] https://www.apple.com/business/docs/iOS_Security_Guide.pdf (page 12)

You don't need to constantly rewrite the flash; you can emulate it with an FPGA. In-circuit EPROM emulation is old as rocks.

That's a fair point, but it's not the method they used, and even if it were feasible to implement the secure enclave passcode attempt counter would still render it ineffective.

The iOS security guide would appear to contradict this:


> On devices with an A7 or later A-series processor, the delays are enforced by the Secure Enclave. If the device is restarted during a timed delay, the delay is still enforced, with the timer starting over for the current period.

This is not super specific, but would imply that the secure enclave has its own storage.

The secure enclave was added in the 5S.

Awesome that someone actually took the time to do what many engineers suggested could be done.

No one with the slightest technical background thought the phone was uncrackable -- maybe not directly by the FBI, but by pretty much any third-party contractor that does even basic flash data recovery. The FBI was looking to set a precedent in order to gain easy access to phones, one that could then also be used by law enforcement generally; a local sheriff's department is less likely to be able to contract it out for every case, and even if it became a commodity it would be price-prohibitive at scale. Overall, probably even iPhones with a secure enclave are not "immune" to physical attacks. It's not as easy as simple NAND mirroring, but if you can decap the main CPU without destroying it, you should be able to directly probe the secure enclave logic.

Also, while NAND mirroring does work, there is a good chance it wasn't used in that case. The current "theory" is that one of the since-patched USB DMA exploits was used to unlock the phone; NAND mirroring was too "risky" for the FBI to consider at the time, and it wasn't forensically sound. IIRC, one of the reasons for the very high price was the non-physical attack approach that the third-party provider they selected had in their toolkit.

I think this is provably false. The post to HN was flagged and people told the guy off. It's probably safe to assume several of those people have "slight" technical background.

Can you explain how the "USB DMA exploit" works? Also how would you run this exploit on the phone if you were locked out of it?

Like any other DMA exploit through the USB host or any other DMA enabled data connector/port.

Cellebrite has a few of them I know of a few for MediaTek chipsets that work through the USB host and the camera connector.

This isn't any different than the DMA "skeleton keys" for PCs and Macs that work via firewire/thunderbolt/pcie/expresscard/PCMCIA etc.

That's two of us:


Mobile phones aren't secure. Period. Anyone saying otherwise about theirs had better provide convincing evidence that they mitigated everything on that non-comprehensive list. Then we can talk about what I left off. When the FBI said they couldn't get in, I knew they were lying to try to establish a precedent and a technical method to increase their convenience. It was also the only time Richard Clarke's pronouncement was relieving, where he said (paraphrased) "I'd just ask the NSA to break into it." Because they or a contractor could, due to risks not mitigated. Obvious.

Honestly not sure if I'll ever trust a tiny, slim smartphone for high-security INFOSEC given the effect of EMSEC on size and hardware POLA on power consumption. If their physical specs are Apple-grade, then they cut corners on INFOSEC and some government hacker has job security.

I'm surprised the post was not unflagged. HN team wanna ring in?

When posts say "flagged", that means they crossed some threshold of user flags. The moderators didn't flag it.

Though it would never happen, would love an investigative piece that followed up on who flagged it, their background, why they flagged it, and what they have to say now.

I'm sure all the "experts" who were so vocal about how wrong you were will speak up now and admit they were wrong. Happens all the time around here. ;)

Can I come along and note that I was in that thread supporting that guy? Because that would be valuable for my self-esteem.

This is because many people don't care and/or switch to autopilot and double down on their wrong point instead of being able to consider alternative answers and admit they are wrong. It is quite frustrating to deal with these people[0].

[0] https://youarenotsosmart.com/2011/06/10/the-backfire-effect/

I wonder how much the entire rig cost him? Not $1 million, that's for sure!

< $50 in parts (just a PIC24)

It would have cost the agencies a lot more though (contracting fees), I highly doubt they'd have the technical capabilities to handle this. But it just goes to show that they didn't really reach out and were simply using it as a means to push their backdoor agenda. :(

What's worse is that they (or at least Comey) explicitly said that NAND mirroring does not work. When he was testifying before Congress, he was asked by several Congress members about NAND mirroring and (if I remember correctly) said that the technical experts had looked into it. This is either gross incompetence or outright perjury.

Well, for the FBI, NAND mirroring might not be a solution.

In the article the author mentions that this technique can damage the flash memory, because you can get into a state where you are causing wear through repeated writes.

In theory you can scale this up and copy the contents of memory into an FPGA which emulates NAND (both logically and physically) or just hold multiple copies on different chips but even the initial process is not without risk.

It's easy to say that the FBI lied or was incompetent because they had a motive (setting a precedent), but given the FBI's own rules for what counts as a forensically accepted method of extracting data, NAND mirroring can easily be considered something that "doesn't work".

Perhaps the FBI had a valid reason for not doing NAND mirroring [0]. However, I maintain that it is incompetent for the FBI director to testify before Congress on the case and not be prepared with a basic answer for why the most obvious technical approach would not work. Similarly, with him saying in a press conference that it would not work.

Of course, the FBI director does not need to know this level of detail about every case, but given how much he appears to have known, he should not have been speaking about the case beyond directing people to ask the person in charge of the case.

[0] Although, I would argue, in cases where no alternative exists, if the FBI policies prevent NAND mirroring, they should be revisited. Even if it has the potential to be destructive, they would only be destroying otherwise unusable information.


Eh? No -- destructive methods are a big no-no; this isn't a dichotomy.

The information isn't unusable: they can try to force Apple to unlock the phone, or wait until someone comes along who can do it without a destructive method.

You can't destroy evidence just because you can't do anything with it at the given time.

> In theory you can scale this up and copy the contents of memory into an FPGA ...

In practice it would have to be backed by external memory (DDR, flash, etc.) that the FPGA uses to serve transactions, because FPGAs don't have 32 GB of on-chip memory capacity. The problem then becomes meeting bus timings that may very well be tuned for the original PCB layout.

Yea, they definitely just wanted the GovtOS from Apple. Basically it would allow them to hack somebody's phone without them knowing, whereas requiring them to disassemble the phone would be more error-prone and take more time.

There are quite a few more parts to it: the scope, logic analyzers, and various other components.

It doesn't add up to $1M, but it's also not $50.

It's stated at the end of the paper: less than $100.

With access to a research-facility-quality electronics lab -- and they still hadn't managed to create a functional backup on a new NAND chip.

"Unfortunately, the 1:1 backup copy did not work in the iPhone 5c. Even the boot Apple logo did not appear on the power up. There were some references to hidden partitions used in iPhone NAND storage which makes cloning a challenging task."

... keep reading

I kept reading; from what I understood, they restored the copy onto the original chip, not onto a clone. They could use the backup process to restore a specific partition, but then you have the problem of wear due to writes -- they said a 6-digit passcode is unlikely to be brute-forceable on a single NAND chip without possibly damaging it through wear.

Edit: You are correct, I've misread it.

Less than $100. It is stated on page 9

Are you suggesting that they minimized overhead and maximized profits? Mao is going to be livid!

Interesting read

A note -- if you're linking to arXiv, it's better to link to the abstract (https://arxiv.org/abs/1609.04327) rather than directly to the PDF. From the abstract, one can easily click through to the PDF; not so the reverse. And the abstract allows one to do things like see different versions of the paper, search for other things by the same authors, etc. Thank you!

You can also see when the paper was submitted.

Oh, I find it so aggravating when I don't have a date. It seems like most academic papers I've found on interesting security and programming topics didn't have dates; I had to pull them off CiteSeerX or the school's site. You'd think it would be a requirement in the style guides, given the importance of when something was published for relevance or context. I've never heard an explanation for why this isn't so.

I do appreciate the authors that put one on there, though. Saves much Googling. :)

Conference papers include a copyright notice with the name (and year) of the conference.

Papers that aren't published at conferences (preprints, technical reports, etc.) don't have any standard format for this. Sometimes (especially for preprints) it's left out in expectation of adding the copyright notice in a future version; sometimes it's left out because the document is in a very preliminary stage and hasn't been officially published; and sometimes it's left out because people just didn't think of it, and there's no standard format that includes a date for things not published to a specific venue.

You don't need a standard format. You just need to put "DRAFT - [today's date]" somewhere in the document.

I'm not saying it has to be done a certain way. I'm saying my college required papers to have margins, a title, etc. I'm surprised most venues don't require a date of any kind somewhere on the document. They could write the year in size 8 in the upper-right corner in light grey and that would be better than many in my collection. ;)

The first four digits of arXiv papers links are the year and month of submission (to the arXiv). Just from this link I can tell that this was submitted in September 2016.
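Extracting that is trivial for post-2007-style arXiv identifiers (this sketch assumes the modern YYMM.NNNNN format, not the older archive-prefixed IDs):

```python
# Parse the submission year and month out of a modern arXiv ID,
# where the first two digits are the year and the next two the month.
def arxiv_submission_date(arxiv_id: str) -> tuple:
    yymm = arxiv_id.split(".")[0]
    return (2000 + int(yymm[:2]), int(yymm[2:4]))

year, month = arxiv_submission_date("1609.04327")   # this paper's ID
```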

For those who don't have a PDF viewer in their browser it's also quite annoying to have to download the paper just to see what it's about.

What modern browsers don't support PDF? (besides lynx)

This is a new result published yesterday, with no revisions. A direct link is extremely preferable to those of us who are interested in actually reading the paper. If you are doing homework, you can search the paper title.

