[flagged] The FBI can almost certainly crack the San Bernardino iPhone without Apple (rongarret.info)
124 points by lisper 570 days ago | hide | past | web | 32 comments | favorite



This assumption is not correct. The (4-digit) passcode is entangled with the device's unique ID, which is burned into the A6 application processor (on the other side of the logic board). It's theoretically possible to extract the UID from the A6 at the silicon level, but not realistic in practice.

I suggest you read the iOS Security Guide: https://www.apple.com/business/docs/iOS_Security_Guide.pdf


It doesn't matter that the UID is incorporated into the key. If you have a copy of the flash then you can restore the device to its current state, at which point you can brute-force the PIN. The only way this could not work is if the A6 has some non-volatile storage on-chip and it is used to prevent this kind of replay attack, but AFAIK this is not the case.


I think it does have non-volatile on-chip storage, which is used to store a randomly generated key that is encrypted with the key derived from the PIN and UID. It is that randomly generated key that is used to encrypt flash data.

I cannot find documentation to verify this. I presume the people downvoting you do, but unfortunately they've chosen to downvote instead of being useful and posting a link. (The only link I've seen covers A7 and later systems.)


What you describe would not defeat a brute-force attack on the PIN using a duplicate flash. The only way NV-storage on the A6 chip would do that is if it stored the attempt counter there.


All of you were wrong, and he was right.



The author is absolutely, totally wrong.

The PIN is entangled with the device's 256-bit "UID", which itself is on-die in the SoC/CPU and NOT extractable without either being able to run code on the CPU, or decapping the SoC, reverse-engineering its implementation, and extracting the UID from SEM imagery.

The PIN number and the UID are fed to key derivation code for strengthening; the result of that process is used to actually perform encryption of the data on the NAND.

The weak point here is the PIN number; the FBI would be extremely hard-pressed to brute force the derived AES keys.
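
For what it's worth, the tangling step can be sketched in a few lines of Python. This is purely illustrative: Apple's real construction iterates AES keyed with the UID inside the silicon, so PBKDF2-HMAC stands in for it here, and the `uid` value is made up.

```python
import hashlib

# Hypothetical 256-bit UID; on real hardware it is fused into the SoC,
# usable only as an AES key by on-die hardware, and never readable.
uid = bytes.fromhex("ab" * 32)

def derive_key(pin: str, uid: bytes, iterations: int = 50_000) -> bytes:
    """Tangle the PIN with the device UID via an iterated KDF.

    PBKDF2-HMAC is a stand-in for Apple's UID-keyed AES construction;
    the iteration count models the ~80 ms per guess tuning on-device.
    """
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), uid, iterations)

key = derive_key("1234", uid)
assert len(key) == 32  # 256-bit key protecting the filesystem keys
```

The consequence is that any brute force is bound to this device: a NAND image alone gives you ciphertext whose key depends on a UID you can't read.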

This is described in the "Tangling" definition on page 59 of the iOS Security Guide: https://www.apple.com/business/docs/iOS_Security_Guide.pdf

I flagged the article, as the entire argument is predicated on a factually false premise.


For what it's worth I unflagged/vouched for the article because I think the premise and the comments refuting it are an interesting discussion.

Even if he's wrong, the reason why he's wrong is informative.


Thanks! I really did want to respond to this.

It's possible that I'm wrong, but not for the reasons given so far. Once you have a copy of the flash you can always get back to the current state by installing a copy of the flash in its current state. Then you can simply brute-force the PIN.


Is that true though? Other commenters are arguing that the key encrypting that flash chip, while derived from the PIN, is sufficiently long to resist a brute-force attack. Is it possible that you're mistaken?


He's not claiming that you can brute-force the encryption key -- he's proposing another way to brute-force the PIN itself that won't trigger exponential timeouts or auto-wipe of the device. I don't know if he's right or wrong, but let's at least make sure we're clear what his argument is first.


Of course it's possible I'm mistaken. But not because of any of the reasons given here so far.


Just posted an update, but I thought I'd reiterate it here: it doesn't matter that the UID is used in the key derivation. Once you have a copy of the flash you can always get back to the current state by installing a copy of the flash in its current state. Then you can simply brute-force the PIN.


How do you brute force the PIN if the KDF process operates over the PIN, plus a 256-bit UID that's only accessible to signed code running on the SoC?

If you modify the kernel on the NAND, signature checks in the on-die bootloader will fail, and you won't get to run.

If you modify application binaries on the NAND, signature checks in the signed kernel will fail, and you won't get to run.
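
The chain of trust being described can be sketched like this. A pinned SHA-256 digest stands in for the real signature check (the Boot ROM actually verifies an RSA signature against an Apple root baked into the die):

```python
import hashlib

kernel_image = b"\x7fKERNEL..."  # stand-in for the kernel stored on NAND

# Immutable "Boot ROM" state: the measurement the next stage must match.
PINNED_DIGEST = hashlib.sha256(kernel_image).digest()

def boot_stage(image: bytes) -> bool:
    """Run the next stage only if its measurement checks out."""
    return hashlib.sha256(image).digest() == PINNED_DIGEST

assert boot_stage(kernel_image)                 # stock kernel boots
assert not boot_stage(kernel_image + b"patch")  # modified NAND is rejected
```

Each verified stage repeats the same check on the stage after it, which is why patching the kernel or application binaries on NAND gets you a brick, not code execution.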


Turn on the device. Make 5 PIN guesses. Replace the flash chip with a copy of the original. Reboot the device. Make 5 more guesses.
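
As a toy model (the function names are invented, and the reflash stands for the physical chip swap described above), the whole argument turns on the attempt counter living in NAND:

```python
class Phone:
    """Toy model: the attempt counter lives in NAND, so restoring a
    NAND image also restores the counter -- the crux of the argument."""
    def __init__(self, pin: str):
        self.pin = pin
        self.nand = {"attempts": 0}
        self.backup = dict(self.nand)  # flash image taken before guessing

    def try_pin(self, guess: str) -> bool:
        if self.nand["attempts"] >= 10:
            raise RuntimeError("wiped")  # key discarded after 10 tries
        self.nand["attempts"] += 1
        return guess == self.pin

    def reflash(self):
        self.nand = dict(self.backup)  # swap in the copied flash chip

def brute_force(phone: Phone) -> str:
    for i, guess in enumerate(f"{n:04d}" for n in range(10_000)):
        if i and i % 5 == 0:
            phone.reflash()  # reset the counter every 5 guesses
        if phone.try_pin(guess):
            return guess

assert brute_force(Phone("4821")) == "4821"
```

Whether this works on the real hardware is exactly the open question in this thread: it fails if the counter (or an erasable key) lives in non-volatile storage inside the A6 rather than in NAND.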


In other words, introduce a much, much longer iteration delay that makes the entire effort pointless, while risking destroying the device and all data on it by desoldering the (presumably BGA) NAND?


You wouldn't solder the chip in every time, you'd install a ZIF socket.


Due to the high density, a socket will not fit without making a pigtail to fit onto the board (commonly available sockets are fairly big).

The flash in question is a BGA part on a high-density double-sided board. It would definitely be hard to desolder the flash without disturbing anything else nearby (as a bonus, the CPU is directly on the other side of the PCB). Specialized machines exist for doing this, but they require a highly skilled operator.

They want something they can plug the phone into, so they can give every site one and not send it out to an expensive technician for modification and data extraction.


> Specialized machines exist for doing this, but require a highly skilled operator.

That's true. But come on, this is the FBI we're talking about.

> They want something they can plug the phone into, so they can give every site one and not send it out to an expensive technician for modification and data extraction.

Yes, of course that is what they want. That is the whole point.


Hmmm... the flash and the CPU are opposite each other? Interesting. I wonder if that was done on purpose to make it harder to mess with things, or if they just ended up that way because it made the layout and routing simpler.

If they were not opposite each other, then the following approaches to spying on or tampering with the system would be possible:

• Cut the write signal between the CPU and the flash so that the CPU cannot erase anything in the flash.

• Cut all the control, data, and address lines between the CPU and the flash and insert a man-in-the-middle device that can allow or block CPU access to the flash and that can read/write the flash itself.


That sounds time-consuming: 5,040 possible PINs with a 4-digit PIN and 151,200 with a 6-digit PIN.


I don't know how long it takes a 5C to boot, but my iPod takes 30 seconds. Add another 30 seconds to swap out the flash chip (I'm assuming a ZIF socket has been installed) and you can brute-force 150k PINs in 30k minutes, which is about 3 weeks.


How do you get those numbers? Isn't the number of possible PINs 10^4 = 10,000 and 10^6 = 1,000,000?
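
Taking these corrected key-space sizes together with the earlier estimate of one ~1-minute reflash cycle per 5 guesses (rough figures, not measurements), the arithmetic works out as:

```python
guesses_per_cycle = 5   # PIN attempts before the counter forces a reflash
cycle_minutes = 1       # ~30 s reboot + ~30 s chip swap (ZIF socket)

for digits in (4, 6):
    pins = 10 ** digits
    minutes = pins // guesses_per_cycle * cycle_minutes
    print(f"{digits}-digit: {pins:>9,} PINs, worst case "
          f"{minutes / 60 / 24:.1f} days")
# → 4-digit:    10,000 PINs, worst case 1.4 days
# → 6-digit: 1,000,000 PINs, worst case 138.9 days
```

So the "about 3 weeks" figure above actually understates the 6-digit case, which works out to roughly 4.5 months at the same rate.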


The flash is not wiped after 10 attempts. The key is thrown away after 10 attempts, and it's stored in the secure enclave. So flashing the memory to its last state does nothing; it's not even necessary. The actual key to the files is made of the device ID (can't be read, binds any attack to the device) + a random key (can't be read, can be securely thrown away) + file-specific data (so you need a different key per user file) + the user's passcode (the only component not on the device). It's a very elegant system, actually.
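
A loose sketch of the layered derivation this comment describes (the names and the hash-based KDF are illustrative; the real scheme uses AES key wrapping and is specified in the iOS Security Guide):

```python
import hashlib, os

def kdf(*parts: bytes) -> bytes:
    """Illustrative stand-in for the real AES-based key wrapping."""
    return hashlib.sha256(b"|".join(parts)).digest()

device_uid = os.urandom(32)  # fused into the SoC, never readable
class_key  = os.urandom(32)  # random; securely erasable for fast wipe
file_nonce = os.urandom(16)  # per-file component: one key per file
passcode   = b"1234"         # the only part not stored on the device

file_key = kdf(device_uid, class_key, file_nonce, passcode)

# "Wipe after 10 failed attempts" only has to discard class_key:
# every file_key then becomes underivable, whatever the NAND holds.
class_key = None
```

Whether that erasable key actually lives inside the A6, rather than in NAND, is exactly what the rest of the thread disputes.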


The 5c doesn't have a secure enclave. That came with the A7 (5s and later).


This author doesn't support the technical claims they're making with technical facts.

They point at the NAND chip and just say "pull off the encryption key, then decrypt the device," without establishing how the key is stored in the iPhone 5S, what kind of resources reversing it would take, or whether Apple has employed any anti-reverse-engineering measures.

For all we know Apple could have thrown different elements of the decryption key all over the device, the A7, the NAND, and elsewhere. Without more technical specifics it is hard to draw the conclusion this article draws.


The 5C has an A6, not an A7.


Misread it as a 5S, not 5C. So yes an A6 in the 5C.

What say you about my overarching point about the technicals of key extraction and decryption? Do you have any information on more specifics? I am not calling you wrong, I am saying you aren't showing us enough specifics to say you're right.


> They are not worried about the data on the San Bernardino iPhone, because if they were they would have had it by now.

I wish this were the point made by more mainstream media outlets. Not for the reasons the article's author posts, but rather because the SOP for Apple devices involves bringing the device to a known access point and powering it on to allow an iCloud backup to occur.


Unfortunately, the iCloud backup option is no longer available to the FBI, as they changed the iCloud password.


Precisely. From where I'm sitting, the disregard for standard forensic examination procedures [0] shown by the reset of the iCloud password suggests that their actual desire to obtain the data is not in proportion to their insistence that Apple give in to their demand.

Additionally, had they brought the device to a known access point and plugged it into a charger, they could have availed themselves of another handy thing: access to a computer synchronized with iCloud would've yielded an iCloud-specific token that could be used to download and extract the backup (without Apple's involvement), bypassing even TFA. [1]

[0]: http://www.nij.gov/publications/Pages/publication-detail.asp...

[1]: https://www.elcomsoft.com/eppb.html


This attack vector also assumes that 100% of the phone state is in the NAND chip -- is that known to be true? The phone has a lot of modules.

The article is interesting, but it presents a hypothesis with far more confidence than it deserves. People finding the article, and not this discussion, will be misled.



