FBI Breaks into iPhone. We Have Some Questions (eff.org)
323 points by panarky on Mar 29, 2016 | hide | past | web | favorite | 64 comments



Most likely it's already been fixed.

The iPhone 5C in question uses an A6 processor. It encrypts data by commingling the passcode with the unique device ID to create a strong 256-bit key, so you can't just pull the flash memory chip and brute-force it. Meanwhile the OS will wipe the key if you guess a wrong passcode too many times, making the data forever inaccessible.

However, there is one vulnerability with the A6 that some have theorized.[1] If you could somehow get around the wiping part, you could keep guessing passcodes, and a typical 4- or 6-digit passcode could be guessed in under a day. So it may be possible to copy the flash memory into a soldered-in test rig that is effectively wipe-proof: it would restore the contents every time they're wiped. That's the best guess I know of for what happened here.
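Back-of-the-envelope on the "under a day" claim, assuming the roughly 80ms minimum per passcode check that Apple's security guide attributes to the key-derivation hardware (overhead for restoring wiped flash in the test rig is ignored here):

```python
# Rough brute-force time estimates at ~80 ms per passcode attempt,
# the hardware floor cited in Apple's iOS Security Guide. The
# restore-the-flash overhead of the hypothetical rig is ignored.

GUESS_TIME_S = 0.080  # ~80 ms per attempt

def worst_case_hours(digits: int) -> float:
    """Hours to exhaust every numeric passcode of the given length."""
    return (10 ** digits) * GUESS_TIME_S / 3600

print(f"4-digit: {worst_case_hours(4):.2f} h")  # ~0.22 h (~13 minutes)
print(f"6-digit: {worst_case_hours(6):.2f} h")  # ~22 h
```

Either passcode length falls comfortably within "under a day" once the wipe is defeated.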

But starting with the A7 Apple added the secure enclave. This now enforces at the hardware level an escalating time delay with each wrong passcode guess. It goes all the way up to a one hour delay.[2] That's also where the (unreadable) unique device ID resides, so there's no swapping out the processor with a rig. The key is forever wedded to this protection against brute forcing.
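The escalating schedule from Apple's whitepaper can be modeled in a few lines (the delay values are from the iOS Security Guide; the function itself is just an illustration, not Apple's implementation):

```python
# Illustrative model of the Secure Enclave's escalating delays
# between wrong passcode attempts. Schedule per Apple's iOS
# Security Guide: no delay for attempts 1-4, then 1 min, 5 min,
# 15 min, 15 min, and 1 hour for every attempt from the 9th on.

def delay_minutes(failed_attempts: int) -> int:
    """Enforced wait before the next passcode attempt is allowed."""
    if failed_attempts < 5:
        return 0
    schedule = {5: 1, 6: 5, 7: 15, 8: 15}
    return schedule.get(failed_attempts, 60)
```

At an hour per guess from the ninth attempt on, exhausting even a 4-digit space takes over a year, which is why the hardware-enforced delay (rather than an OS check that can be bypassed) matters.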

That is pretty darn spiffy from a security standpoint. If it works as designed, about the only hope anyone has of getting at data from an A7 or later device is through iCloud backups.[3]

[1] https://www.aclu.org/blog/free-future/one-fbis-major-claims-... see also http://blog.cryptographyengineering.com/2014/10/why-cant-app...

[2] https://www.apple.com/business/docs/iOS_Security_Guide.pdf

[3] Either that or some kind of peek into the secure enclave. It's specifically designed to inhibit this at a hardware level, but perhaps a nation-state could figure it out (e.g. verrrrry carefully grinding it down without destroying it and looking at state with an electron microscope).


I wouldn't assume that nation-state attackers would be thwarted by the Secure Enclave. Organizations which can deploy taps on undersea fiber optic cables from nuclear submarines, attack centrifuges deep inside air-gapped underground nuclear facilities, etc would likely be able to mount the kinds of specialized attacks needed to break into HW trusted computing models.

Something that is tamper-resistant isn't tamper-proof.

It will likely take some time to develop the capability to attack these chips, but it is neither impossible in principle nor intractable.


Hey, I updated my comment with a footnote on exactly that. Not sure if you saw it. I agree it may be possible to develop an attack against the secure enclave with a big enough budget and access to lots of sample chips to practice on. But keep in mind secure enclaves are designed to self-destruct under these circumstances, so I almost think it may entail pushing the frontier of electron microscopy and/or techniques for exposing it that are more delicate than grinding down the chip.

In other words yes, I'd bet money there's a "secure enclaves" team at the NSA with a few million bucks to play with. Or teams.


Another possibility is a tailored-access type attack further upstream in the supply chain. For example, Apple relies on Samsung or TSMC to fab chips. It's possible the chips could be modified before manufacture, or after, to contain a flaw. I've read about such attacks being demonstrated in principle already. There was a worry about the Chinese government executing such attacks, for example.

One of the downsides of the way global manufacturing works today is that there are so many stages at which components can be intercepted.

You don't know that the device you bought actually consists of unmodified versions of the components that were part of the original design.

Could the NSA partner with the Korean government and Samsung to put a backdoor into components? I wouldn't rule it out.


This is a legitimate concern I think. The US government has designated secure foundries in the US to ensure the security of government silicon.

See[pdf]: http://www.ndia.org/Divisions/Divisions/SystemsEngineering/D...


It would be considerably cheaper to just replicate the ASIC you want to attack in your own silicon.

The secure enclave isn't magic; it's just a secondary processor that handles cryptography, with its own memory to store variables such as the failed-attempt count.

Attacking the SoC might be more complex and expensive, but ultimately it's exactly the same as attacking the NAND or any other integrated circuit.

For all we know, the NSA could (and most likely does) develop their own in-circuit debuggers for common ASICs/SoCs, dump whatever unique values the target SoC stores, and take a crack at it. It also isn't out of the realm of possibility for companies that specialize in in-circuit emulation, hardware design, and forensics to create this as a turnkey solution.


> it's exactly the same as attacking the NAND or any other integrated circuit

Not really. Secure enclaves have added defenses that NAND does not. They don't have an API that lets you read their embedded secrets, for instance. You can't just hook up a debugger.

You'd have to try to get at the state of its transistors with an SEM or something. But additionally some have physical defenses against delayering that will self-destruct their contents in the event of a physical compromise. So while I ultimately agree that a nation-state could potentially craft an attack against a specific design, you're understating the difficulty.


Hence why I said it was more complex and expensive; if you are going to quote someone, please do so in full. Additionally, NAND doesn't have an "API": NAND mirroring works by desoldering the memory, hooking it up to a device, and mirroring it to another chip by flagging the mirroring bit.

There are other ways to attack hardware; you do not need an SEM (or an AFM, for that matter). Devices that probe transistors at a microscopic level exist in the industry (e.g. http://www.tek.com/sites/tek.com/files/media/document/resour...), hence the "more complex and expensive" part.
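The NAND-mirroring attack discussed upthread boils down to a snapshot/restore loop wrapped around the auto-wipe. Schematically (pure illustration: read_flash, write_flash, and try_passcode are stand-ins for a hypothetical test rig, not any real interface):

```python
# Schematic of NAND mirroring: take a golden snapshot of the
# flash, burn through a few guesses, and write the snapshot back
# before the wipe/attempt-counter logic can trigger. The three
# callables are hypothetical stand-ins for the hardware rig.

def brute_force_with_mirroring(read_flash, write_flash, try_passcode,
                               guesses_per_restore=9):
    snapshot = read_flash()          # golden copy of the NAND state
    for start in range(0, 10_000, guesses_per_restore):
        for pin in range(start, min(start + guesses_per_restore, 10_000)):
            if try_passcode(f"{pin:04d}"):
                return f"{pin:04d}"
        write_flash(snapshot)        # roll back before the wipe fires
    return None
```

The point of the loop structure is that the device never sees more than guesses_per_restore consecutive failures, so the wipe threshold is never reached.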


The tool you linked to actually requires an SEM.

Also, you cannot "desolder" the secure enclave and hook it up to a "mirroring" device. That attack requires the NAND to be encapsulated in a desolder-able memory chip that supports reading out state. Not the case with a secure enclave.


The NAND is encapsulated in a desolderable memory chip that supports reading out state. There's an anti-replay counter, but supposedly that's just stored in another external NOR flash chip with the Secure Element having no onboard flash storage at all - the process Apple builds their chips on doesn't support on-chip flash memory even if they wanted it.


Interesting. What's your source? Apple's whitepaper suggests otherwise, to my reading:

"The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing."[1]

What this says to me is that while rewritable data storage is indeed kept in regular commodity flash memory chips, it's all encrypted by a unique device-specific key that is somehow burned into the secure enclave. So that one little secret kept inside the enclave would allow it to store everything else off-chip.

[1] https://www.apple.com/business/docs/iOS_Security_Guide.pdf
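That "one little secret anchoring everything off-chip" idea is essentially key derivation with a device-bound secret. A minimal sketch (the UID value and iteration count are placeholders, and Apple's real hierarchy of keys wrapping keys is considerably more involved):

```python
import hashlib

# Hypothetical fused 256-bit device UID (placeholder value only).
DEVICE_UID = bytes(range(32))

def derive_storage_key(passcode: str) -> bytes:
    # Entangling the passcode with a key that never leaves the chip
    # means brute force has to run on this device; the search can't
    # be offloaded to a GPU farm armed with just a flash dump.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               DEVICE_UID, 100_000)
```

Everything in commodity flash can then be encrypted under keys wrapped by this one, so only the fused secret needs to stay on-chip.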


That unique device-specific key provides no protection against replay attacks. So in practice, the newer Apple devices don't appear to provide any more protection against an attacker with physical access than the one that the FBI just cracked - they should be able to get everything they were demanding in their warrant without Apple's help on any iPhone.


Maybe I don't understand what you mean by "replay attack" in this context, but the secure enclave does in fact provide protection against brute forcing passcodes. It is detailed in Apple's security whitepaper (see p12). Basically, you have to give the passcode to the secure enclave to get the data decryption key which is derived from the device-specific key contained therein. And the enclave enforces time delays between wrong guesses.

If you can envision a procedure for hacking around this I would love to hear it.


Nanoprobes do not require an SEM to function; the SEM is only used to set up the probes initially. SEM probing is a different technique: it works because, when the circuit is active, the electrons emitted by the SEM pile up on the gates to balance out the charge. Nanoprobes hook wires directly to the components and can measure voltage and capacitance to read the exact state, which is effectively hooking up an oscilloscope at the transistor/logic-gate level.

These probes are used constantly in the industry during development and can read anything in the silicon. It doesn't matter whether you store the secrets in NAND, in some other type of NVRAM, or in a unique deterministic array for each chip (which Apple obviously won't do, since it would require a unique stencil for each processor and make a single A7 chip cost as much as a jet). If you have access to the silicon there is nothing anyone can do; extracting the private key from a FIPS-certified hardware token that isn't vulnerable to side-channel attacks can cost as little as $10,000, depending on the ASIC in question. In fact, to some extent the secure enclave can make physical attacks easier, since you know what to focus on and don't have to reverse engineer the entire SoC, just a single component.

Today pretty much anyone can buy a probing station[0]; these range from several thousand dollars for very basic ICs (such as the ones used on cheap smart cards) to hundreds of thousands or millions of dollars for something that can probe, say, any modern CPU/SoC.

Probing stations are used by chip manufacturers and designers, and quite often in the post-production QA process, where completed packages are depackaged and inspected using probes. This isn't "rocket science"; there are plenty of people trained to operate such devices, and the NSA is more than capable of hiring engineers from the semiconductor industry and contracting the most advanced probes out there to look into any chip they want.

Heck, the NSA could easily afford cryogenic probes, which allow you to cool the IC down to very low temperatures. That isn't only required to fully probe certain ICs that would otherwise fry without sufficient cooling; it also enables cryogenic attacks, in which you cool specific parts of the IC to a very specific temperature, one at which, for example, the IC can read from its memory but write operations fail. That might let an attacker make the secure enclave generate keys while being unable to store the failed-attempt counter in its own private memory.

[0] https://en.wikipedia.org/wiki/Mechanical_probe_station

Apple isn't magic, I know you like to think it is, but you really don't seem to grasp just how many types of physical attacks there are on ICs.


We already have self-destructing circuits[0], but they require glass, which I don't think Apple would go with. Not to mention that dropping the device the wrong way wouldn't just crack your screen anymore; it would destroy your precious data. Not something I'd think the average user would want.

Or is there a different piece of tech that can self destruct when tampered with?

[0]: http://www.extremetech.com/extreme/214185-new-xerox-parc-pro...


> just replicate the ASIC you want to attack in your own silicon.

That doesn't get you the key stored in the target device.

It's not the same as attacking any other integrated circuit, because this one is rigged to blow away its secrets if you try to get instruments/debugging tools inside its enclosure in a way its designers thought of.


http://arstechnica.com/information-technology/2016/03/report...

Apple already suspects that this is happening


The article contains two links to previous stories where this actually happened.

[0]: http://arstechnica.com/information-technology/2013/12/inside... [1]: http://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-...


> It's possible the chips could be modified before manufacture, or after, to contain a flaw.

Wouldn't Apple be able to detect such an attack, if they were looking (e.g. decap sample chips, image at high magnification, and compare to the original design files)?

I don't think such an attack would work very well as a tailored access sort of thing. If the backdoored chips got into the supply chain, the general public would be affected. If the NSA wanted to only target certain people, they'd have to have a huge amount of control over Apple's supply chain, which would surely be noticed.


Depends on how you buy it. If the phone is mail ordered, it would be possible to swap with a compromised device during shipping. NSA's Tailored Access Operations group is known to perform this sort of attack [1].

If you want to ensure you get a phone from the standard supply chain, buy it in-person at a store where you can see someone take it off the shelf.

http://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-...



> I agree it may be possible to develop an attack against the secure enclave with a big enough budget

You are assuming it's perfect in conception, design and implementation. That is a very unlikely assumption for any system.


> self-destruct

Is this burnt silicon, or just new configuration (genuinely curious)? I'd assume that anything but a smouldering chip is in the realm of circumvention.


> That is pretty darn spiffy from a security standpoint. If it works as designed, about the only hope anyone has of getting at data from an iPhone 6 or later is through iCloud backups.

Isn't that 5S or later? Touch ID and Secure Enclave were introduced w/ the 5S model.


Sorry, yes. Updated.


"If you could somehow get around the wiping part, you could keep guessing passcodes"

In the '80s, one could do that by cutting a single line in the connector between the motherboard and the hard disk. I would guess something similar is possible between the motherboard and the flash. It would be more complex, because "write" now isn't a single signal but part of a protocol, and because one would have to desolder the flash, but that shouldn't stop the FBI.

So, I guess one can get rid of the "restore the contents every time it's wiped" step, speeding up the process.

As to the A7: as you indicate, people will try to do essentially the same thing to the flash storage inside it. Cut one open and try to figure out where its memory is, try cutting that loose from the CPU part, etc. The scale will be smaller and the challenge harder (a lot harder, if the A7 has anti-tamper devices on board), but impossible?


Potentially this is not only possible for nation-state actors. This is not for an iPhone, but for a secure element (TPM): https://www.youtube.com/watch?v=h-hohCfo4LA https://en.wikipedia.org/wiki/Christopher_Tarnovsky#Acts_of_...


The Secure Enclave supposedly has the same problem - the information about how many guesses have been attempted is stored entirely in external flash memory that can be rolled back to give unlimited tries. Additionally, Apple have the ability to sign and load new software for the Secure Enclave that doesn't have the escalating time delay, leaving only the same 80ms per check hardware limit that the A6 had.


Just get the data while the device is in use and unencrypted, using a warrant? (wirelessly, before you apprehend the suspect)


Hmm the executive branch remains unfixed. Obama and the DOJ still think they can compel all encrypted messaging platforms to provide the US government with decryptable data.


Phone is just locked, key is not wiped.


>It encrypts data by commingling

Does the use of the word "commingling" here imply any kind of special combination technique?


I'm just using the term casually. Technically I think it's a hierarchy of keys wrapping keys. It's all explained here: https://www.apple.com/business/docs/iOS_Security_Guide.pdf


That's an interesting point that I haven't heard mentioned. What if the FBI got the data not from the phone but from iCloud? That honestly seems easier.


I think they attempted that; however, due to a mistake, they wiped the iCloud data.


It wasn't wiped, it was disabled. They actually did get the iCloud backup data, but from about a month and a half before the phone was seized. Instead of waiting for another iCloud backup to occur, they reset the password, which disables these backups. The FBI then lied about this, saying that resetting the password was done against their will by the county. The county disputed this claim, and the FBI eventually admitted that they had actually instructed the county to do so, leaving open the question of why they did this.

source: just search "San Bernardino iCloud" and you'll get a whole bunch of articles on the subject



They didn't wipe it. They got a six week old backup. What they did prevented triggering a newer backup.



They could have mentioned that just like they could have mentioned that they asked for the password to be changed.


Due to a mistake, or "accidentally on purpose" so they could punt with this lawsuit…


Two questions that I still have on this:

(1) Does the public accept that the FBI is even telling the truth on this (i.e., did they actually "break into" this iPhone)?

(2) If they did gain access to the iPhone's info, was it actually through the use of a vulnerability, or did they discover some other info that led them to the passcode?


And if they gained access, did they have it all along and lie about it all along? I suspect it is a career-limiting maneuver to prosecute the FBI.


Mainstream media seemingly prints anything said by any government official as fact, even when it is dubious at best.


I've gotten so used to gargantuan lies from both governments and corporations that worst-case scenarios now seem the most likely. It's easy for me to believe that Apple already did the work, with an agreement in place to keep it all under wraps by having the FBI drop the request.


> If the FBI used a vulnerability to get into the iPhone in the San Bernardino case, the VEP must apply, meaning that there should be a very strong bias in favor of informing Apple of the vulnerability. That would allow Apple to fix the flaw and protect the security of all its users. We look forward to seeing more transparency on this issue as well.

It seems reasonable that the FBI could still notify Apple of their method of entry and then notify the public in due time.

The EFF, as much great work as they do, is showing a bit of impatience here. Perhaps they feel they have a bone to pick with the government, as the government seems to feel they have one to pick with tech. Neither party looks great by leveling such public complaints prematurely.


The FBI said that they gained access to the phone via a third party, so most likely they have no way to inform Apple of anything. (Who the third party was is unconfirmed, though the fact that one was used is not; potentially Cellebrite.)

Not to mention that if this was indeed a physical attack, such as NAND mirroring or ASIC replication, there isn't really anything to inform Apple about.

Apple can't design a chip that won't be broken; all of them, including those that use a secure enclave, can be broken by physical attacks.


The FBI has shot themselves in the foot on this one, if you ask me.

Let me just say at the outset that I am entirely unsympathetic to the FBI with respect to the Apple case. I side with Apple unreservedly. But the FBI started this case because they claimed there was "no other way" to get into the phone. Then, lo and behold, it turns out that there was another way.

The next time the FBI tries this, I think the public reaction will be that the FBI can find a way, just like they did the last time. In other words, the FBI is now the Boy Who Cried Wolf.


Apparently a third party from Israel (http://www.cellebrite.com/ [unconfirmed]) helped the FBI, which raises the question: how did they do it? Do they have universal access to all iOS devices, or just this particular device?

If there is a known vulnerability, I'm willing to bet Apple will find it rather quickly. I'd imagine Apple has engineers poring over the source code now.


It is not known that Cellebrite helped them. The rumors that they did stem from a contract they have with the FBI that is unrelated.

In fact, the contract they have with the FBI doesn't cover anything that can break into an iOS 9 device, so unless they had a separate product line not included in a contract only two months old, it's unlikely it was them.


Cellebrite has a lot of products that aren't advertised directly, which are provided through their CAIS services (and there are also at least two additional unlisted service tiers).

They have a lot of turnkey solutions for various markets, as mobile forensics is only part of their portfolio, but quite a few of their forensic services and products are not publicly listed.

*I worked for an Israeli information security firm that is a research partner and service provider for Cellebrite.


If anyone was going to break into this device, it would have been them. Cellebrite have been the leaders in forensic data extraction from mobile devices for years. Most police departments have at least one of their readers for collecting information from seized phones.

Years ago, I could never get a good answer from them whether they could extract information from encrypted devices. I guess we now know what the answer is, they can if they really want to.


What is your source on that?


CNBC. Here is a link to the video where they mention Cellebrite.

http://video.cnbc.com/gallery/?video=3000505023


While stumbling upon different articles, I came across this video on YouTube of a guy in Shenzhen, China, who can replace the internal 16GB storage with a 128GB NAND, so it's highly possible that a modified version of this is what was done to the iPhone: https://www.youtube.com/watch?time_continue=198&v=2bGb5AOwp4...


There's no doubt that the FBI could arbitrarily edit/replace the contents of storage, the trick is getting it to boot after that. That's why they wanted Apple's cooperation to sign code or hand over the code signing keys.


This is so cool; I don't understand why you're getting downvoted. I thought we were on Hacker News.


It's going to be amusing when Apple tries to sue the FBI under the VEP and they're told to pound sand.


It'd make great Apple marketing PR though :-)

Did you see Charlie Stross's speculation that from Apple's end this is all about them becoming a retail bank via Apple Pay: http://www.antipope.org/charlie/blog-static/2016/03/follow-t...


Really interesting article, thanks.


How does a corporation "sue the FBI under VEP"? This sounds more than a little silly to me but I am not a lawyer. What is the cause of action here?


It doesn't have to be a vulnerability that got around it. I think the EFF is a bit too eager here, assuming that encryption is absolute when it is far from it. All encryption gets cracked sooner or later.


If the FBI pays a third party to break in, does VEP still apply?


FBI allegedly* breaks into iPhone.



