FBI unlocked iPhone 11 Pro via GrayKey, raising more doubts about Pensacola case (9to5mac.com)
235 points by miles on Jan 16, 2020 | 137 comments



For those who want to read the original journalism and not a blog's re-hash of someone else's hard work:

https://www.forbes.com/sites/thomasbrewster/2020/01/15/the-f...


I'll take the blog re-hash so long as Forbes insists on trying to force me to accept their (compromised once-monthly) ad network, but the effort in digging up and reposting the link is well-spent generally, so thanks.

(to be clear: if Forbes handled their ads on their own or used ad networks that invest in the security of their platform, I'd probably bite the bullet, but if I visit the site on mobile and end up being redirected to yet another "your phone is infected!" ad exploiting their current provider... I'll keep steering clear unless I have uBlock enabled.)


https://i.imgur.com/dmgV13H.png

Image copy of the article for the interested.


Thank you for saying this and explaining it. I've noticed that a lot with Forbes and a couple other sites.

So basically bad actors are hijacking the ad network that Forbes is using because it's a poorly secured ad network.

I didn't realize that was what was going on but that explains a great deal.




Also - besides its name and the company name - there is nothing AFAIK connecting it to the "black/gray market"; at least initially these were offered only to LEOs, and only in the US and Canada.

And the "long been used" is IMHO a tad exaggerated: it came out in the first months of 2018, and - setting aside the FBI and whatever other US three- or four-letter government agencies - it is not likely that US$ 15,000 or US$ 30,000 is something that any police department has in a drawer and can spend instantly; more probably it has taken at least a few months for most departments to get the expense authorized:

https://www.forbes.com/sites/thomasbrewster/2018/03/05/apple...

No idea on volumes of sale or how many departments got one, in either the "online" or "offline" version, but by now - if it was "widely" used - I presume we would have a lot of evidence based on unlocked iPhones in trials.


>it is not likely that US$ 15,000 or US$ 30,000 is something that any police department has in a drawer

I wish that were true. Police in any small town in America could start at lunchtime and have that amount by nightfall. Let me introduce you to "civil asset forfeiture" ( https://www.heritage.org/research/reports/2014/03/civil-asse... )

There are state and federal versions of this, and a "sharing" program to split the loot. It's pure corruption in blatant violation of the US Constitution, but it's good for government/police business (e.g. it buys a lot of helicopters, tactical/SWAT equipment, training, etc.), so it persists.


As a non-American, it's almost unbelievable that the police can basically rob you in the so-called land of the free and home of the brave.


Worldwide problem.

"The idea that the State originated to serve any kind of social purpose is completely unhistorical. It originated in conquest and confiscation - that is to say, in crime. It originated for the purpose of maintaining the division of society into an owning-and-exploiting class and a propertyless dependent class - that is, for a criminal purpose." -- Albert J. Nock


That may not be a safe presumption. Organisations like the FBI keep their investigative techniques secret as long as they can, to preserve their effectiveness in ongoing and future investigations. Trials are the exception; most cases end without one, and the decline of local reporting means the public is less informed about the daily business of the courts.


Yep, it is indeed quite doubtful.

I don't know; I simply have no idea how many units of the Gray thingy were sold, how much they might have been used, or how many crimes involve access to a (password-protected) iPhone. But unless (yet another piece of data I have no idea about) all suspects either use an Android (or some other non-iPhone) or give the police access to the device without the need to "crack" the access code, the numbers should be non-trivial.

I mean, I understand that it is a fraction (going to trial) of a fraction (criminal cases involving a phone) of a fraction (suspects not providing access to the device) of a fraction (suspects using specifically an iPhone); still, the ubiquity of iPhones and the large number of criminal cases should result in tens or hundreds of such cases, while AFAICT we have none, or maybe a few, actually documented/published.

The decline of local reporting may indeed be another factor, but there would certainly be lawyers making a fuss (rightly or wrongly) about this or that right being violated through the use of this or similar tools.

The contract with ICE:

https://www.forbes.com/sites/thomasbrewster/2019/05/08/immig...

is likely to produce no public court records, as is the case with many national agencies; still, the sheer fact that not a single actual court record of its use can be found (at least by me) could mean that the device is not used that much by local police - otherwise at least a few cases should have been brought to the attention of the public.


I would assume 90% of iPhone users, if not more, use 6-digit codes. So this is still a big win for the creators of GrayKey.


90% of iPhone users with a passcode probably use the (default) 6-digit passcode option, but I would be surprised if that many people set a passcode up at all.


The default is to create a passcode - you have to specifically opt out if you don't want one.

https://support.apple.com/en-gb/HT202033


A huge percentage of people use stupid passcodes, because convenience is more important than security for them.

Six ones, or up and down the middle column, that sort of thing.


Guilty. Convenience is much more important than security for me.


Back in 2016, it was estimated between 85-89% used a passcode [0]; I expect it's probably even higher today.

[0]: https://techpinions.com/apples-penchant-for-consumer-securit...


So what's the deal with this? Isn't it just brute force? I would've expected the NSA to have something like that since iPhones first hit the market.


The main trick is bypassing the increasing timeouts for multiple incorrect guesses.


Shouldn't you always be able to completely bypass that by taking out the chip or something and just probing it directly? I guess that might be a bit much for normal police work, but in terms of "can the government (read: NSA) get into this" this doesn't really seem to change anything.


The lockout delay is implemented in the chip itself in newer iPhones (I think starting with the iPhone 5s). So no, taking the chip out buys you nothing.

> To further discourage brute-force passcode attacks, there are escalating time delays after the entry of an invalid passcode at the Lock screen. ...On devices with Secure Enclave, the delays are enforced by the Secure Enclave coprocessor. If the device is restarted during a timed delay, the delay is still enforced, with the timer starting over for the current period.

https://www.apple.com/in/business-docs/iOS_Security_Guide.pd...
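
To make the arithmetic concrete, here is a minimal Python sketch of that escalating-delay policy (the delay values follow the table in the guide, but treat them as illustrative since they can vary by iOS version):

    # Sketch of the escalating delays enforced by the Secure Enclave,
    # per Apple's iOS Security Guide (values approximate/illustrative).
    DELAY_MINUTES = {5: 1, 6: 5, 7: 15, 8: 15, 9: 60}  # attempts 1-4: no delay

    def delay_before_attempt(n: int) -> int:
        """Minutes to wait before passcode attempt number n is allowed."""
        if n <= 4:
            return 0
        return DELAY_MINUTES.get(n, 60)  # 1 hour for every attempt past the 9th

    # Even ignoring the optional auto-wipe, exhausting all 10,000 four-digit
    # codes through this front door would take on the order of 10,000 hours.
    total_hours = sum(delay_before_attempt(n) for n in range(1, 10_001)) / 60
    print(f"~{total_hours:,.0f} hours to try every 4-digit code")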


The external interface of the chip does not let you just read its contents - that's the whole point of a secure enclave device. You'd have to decap it and make some extremely fine connections to the memory parts of the chip.


Possibly, but that still comes down to brute-forcing a long passphrase with a salt. If the passphrase is above a certain entropy and Apple derives the key with a key-stretching algorithm, then the timeouts won't matter.


That's exactly what I mean - the usual key complexity rules apply, and have always applied.


But they shouldn't. The key you enter as a user is actually only half of the key needed to decrypt the data. The other half of the key is generated by the secure enclave when the device was first powered on. This part of the key alone should be impossible to brute force and is stored in tamper resistant flash memory on the secure enclave co-processor.

So you would need to break open the co-processor without destroying its contents and read the secure enclave key before any brute forcing could happen directly on the encrypted data.
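
A minimal sketch of that two-part construction (purely illustrative, not Apple's actual implementation; names and parameters here are assumptions):

    import hashlib, os, secrets

    # Stand-in for the per-device secret generated inside the secure enclave;
    # an attacker who only has the encrypted storage never sees this value.
    DEVICE_SECRET = secrets.token_bytes(32)

    def derive_storage_key(passcode: str, salt: bytes) -> bytes:
        """Entangle the user's passcode with the device secret via a slow KDF."""
        # Key stretching makes every passcode guess expensive...
        stretched = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)
        # ...and mixing in the device secret means guesses are only meaningful
        # when run against this particular chip.
        return hashlib.sha256(DEVICE_SECRET + stretched).digest()

    salt = os.urandom(16)
    key = derive_storage_key("123456", salt)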

And no, you can't just try to brute force it by sending decryption requests to the secure enclave. The secure enclave itself implements an exponentially increasing lockout time and won't respond to decryption requests during it. [0]

[0] https://www.apple.com/in/business-docs/iOS_Security_Guide.pd...


If the secure enclave implements the time lockout (well, supposedly I guess considering it seems broken) then the device is indeed a game changer, thanks for that information!


> I would've expected the NSA to have something like this since iPhones first hit the market.

If they do, why would they reveal it for something as 'basic' as a single attacker? Or they may have broken it, and simply 'reverse engineered' the path to evidence with public means:

* https://en.wikipedia.org/wiki/Parallel_construction

Agencies don't want to reveal means and methods if they don't have to.


Once again, it seems that it was an excuse to weaken the security of all iPhone users rather than to get information from a specific device. In the case of San Bernardino, the FBI was able to use Cellebrite to crack the attacker's iPhone without Apple creating backdoors.


In that case, Apple already had a backdoor, or they wouldn't have been able to comply in the first place: the device in question did not yet have the "secure enclave" enforcement of pin code back-off, and it supported firmware updates by Apple without the user's pin code. Apple spending the time to make such a firmware - which incidentally anyone with Apple's private firmware signing key (the real back door) could easily have done, as the community had already produced custom firmwares for ssh bootstrap and pin code brute force - isn't them "creating" a backdoor, it is them "using" a backdoor. Thankfully, it is my understanding that Apple decided to fix both of these issues in subsequent devices, and so while there are clearly still bugs, there hopefully are no longer any obvious backdoors.


>Thankfully, it is my understanding that Apple decided to fix both of these issues in subsequent devices, and so while there are clearly still bugs there hopefully are no longer any obvious backdoors.

Does this mean there's a new unpatched exploit out there that GrayKey is using?


From what I can tell, it simply tries to brute force the password (perhaps with some informed suggestion). It does appear to have access to an exploit that bypasses/disables the encryption lock that wipes data off the phone after failed attempts, but it does not appear to utilize an exploit/backdoor to gain access to the device; it gains access the "legitimate" way.


The phone was released with iOS 13, which prominently featured enhancements to USB restricted mode[1] that were supposed to defend against GrayKey/Cellebrite attacks. It seems GrayKey can easily bypass that feature. That does not really inspire much trust in Apple's security team, as USB restricted mode was already a band-aid itself.

[1] https://blog.elcomsoft.com/2019/09/usb-restricted-mode-in-io...


Apple's security team has been given the near impossible task of defending a physical device in the hands of an attacker.

I wouldn't blame them for any lack of success. Perhaps instead blame them for suggesting to the user physical security is possible at all.


I think people don't fully realize the magnitude of such a task. We're not talking about something like consoles where the attack has to meet a higher bar to be viable (be persistent across reboots and upgrades, work over the internet against the provider's infrastructure, etc.). And as usual, the attacker only needs to get it right once and they can afford to wait for months or years to find the exploit on the particular device they have seized.


As far as I know, there is actually a pretty easy defense mechanism against this: strong alphanumeric password.

The hard part is trying to defend a physical device in the hands of an attacker while using a simple 6-digit passcode.


It doesn't need to be perfect. The main requirement is that a chip holds on to a secret key, releases it upon getting the correct pin, has a limit on attempts, and is resilient to voltage and timing attacks. That's difficult but not exceptionally difficult.

Apple has chosen to run a ton of code inside the secure enclave, and bugs from that are on them.


> has a limit on attempts

This is potentially a quite difficult problem IMO


Is it? Have nonvolatile storage inside the chip, and increment+verify the attempt counter before checking if the supplied PIN is correct. What do you need beyond that?
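
Sketched out (purely illustrative, not any real secure-element API), the ordering looks like this:

    from typing import Optional

    class SecureElement:
        """Toy model of a PIN-gated secret store with an attempt counter."""
        MAX_ATTEMPTS = 10

        def __init__(self, correct_pin: str, secret_key: bytes):
            self._pin = correct_pin
            self._key = secret_key
            self._failed = 0          # lives in on-chip nonvolatile storage

        def try_unlock(self, pin: str) -> Optional[bytes]:
            if self._failed >= self.MAX_ATTEMPTS:
                return None           # locked out (or data wiped)
            # Count the attempt *before* verifying it, so cutting power
            # mid-check can't earn the attacker a free, uncounted guess.
            self._failed += 1         # a real chip would flush this to flash here
            if pin == self._pin:
                self._failed = 0      # reset only on a successful unlock
                return self._key
            return None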


How do you know when to reset the counter? You don't want invalid attempts incrementing forever... there is your chink in the armor.


What’s wrong with resetting the counter when unlocked successfully?


What about unintentional passcode attempts such as a phone in a pocket getting butt-dialled?


The interface should be designed so that you can't butt-dial more than a couple attempts.

But if that does happen then the system of timeouts will prevent you from using up all the attempts.

None of that gets in the way of resetting the counter only when the user succeeds.


The difficulty (in my view) comes from ensuring that I can't just clone/replicate the state of the device from when I had more tries left and then try again.


As I understand it, you can't clone the secure enclave chip because it doesn't expose the key or its code externally.

The only way would be to physically decap the chip which would most probably destroy it.


> the near impossible task of defending a physical device in the hands of an attacker.

If you assume the device is off and the user chose a strong password, it's pretty easy to defend. You simply encrypt the data with a key which is encrypted with the user's password.

If you want to protect devices that are on, or want to protect devices with less than stellar passwords, then it becomes harder.


That is not very strong protection - it lets you perform dictionary attacks at high speed.

It is often more secure to generate a random, high-entropy key and store it in secure storage, which is what the iPhone does.


If you assume a strong password you don't need to worry about dictionary attacks.

There are 2 ways to slow down the attacks: key stretching and secure storage. Key stretching is a good idea.

I recommend not relying fully on secure storage, because I've heard of tons of hardware vulnerabilities (side channel attacks, undervoltage, electron microscopes, buggy implementations). I trust math more than a physical object.

In fact it seems impossible to me to build fully secure storage, because if someone has a delicate enough measurement tool to measure the atoms inside the storage, the data inside can be extracted.

If you store the password (or hashed password) as well as the key in the secure storage, and have it only return the key if the input password is correct, you run the risk of someone finding a bug in the storage that extracts the key without the password. Then you're compromised.

But you can build a system so that using the secure storage is never worse than regular crypto: do the encryption using a combination of the user's password and the output of the secure storage. That way, even if the secure storage is fully compromised, the password is still needed.


You can't really assume a strong password, because if you have to type in 12 characters, letters and punctuation marks every time you want to look at your phone, you're going to give up on the whole thing pretty quickly.

To be usable, phones need to allow relatively weak passwords.


I've had a password like that on my (Android) phone for ~7 years and haven't given up. I don't use punctuation though, it's not worth the extra taps to get to the punctuation keyboard for the entropy you gain. I've never had fingerprint or face ID enabled either.

12 characters gives about 62 bits of entropy. That's plenty if proper key stretching is in place.
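
(Back-of-the-envelope, assuming lowercase letters plus digits, i.e. a 36-symbol alphabet, which matches "no punctuation":)

    import math
    # 12 characters from a 36-symbol alphabet (a-z plus 0-9):
    print(12 * math.log2(36))   # ~62.0 bits; lowercase letters alone would be ~56.4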

Linus Sebastian says that when his phone got slower to open up, he got happier, because it caused him to use his phone less, cutting out the useless stuff. https://youtu.be/WGZh-xP-q7A?t=305


> If you assume the device is off

When was the last time a regular person turned their phone off? Not counting reboots or out of battery incidents I'm going to guess not since it was purchased.


Do you really think it's impossible to "defend a physical device" – prevent an attacker from accessing and decrypting the data stored on it? I believe it is possible, that's the promise of hardware security modules. The article is about a mostly secure physical system that Apple undermines by encouraging the use of easily-cracked numeric PINs. I am not sure if the implementations are there yet, but biometric authentication looks like a promising solution to this problem.


> biometric authentication looks like a promising solution to this problem

Yep, and if it gets hacked, then all you need to do is change your fingerprints.


You need to think about this a bit more: biometrics are bad if you pass them over the network, where an attacker can replay them, but it's different in a local context where they never leave the device. You get a high-entropy key, and an attacker who can get both your device and a sufficiently high-quality biometric scan can also simply do things like lock you in a room until you unlock the device. That seems like a reasonable compromise.


> it's different in a local context where they never leave the device

Until the next round of FBI tools, where they extract the fingerprints to their database as part of their unlocking process.


The handling of biometric data is designed to be secure between the sensors and the secure enclave. For example, data from the fingerprint sensor is encrypted when it is sent over the wires inside the device. The secure enclave does not store images of the fingerprint, but a representation of it which is not enough to reverse back into a fingerprint.

This is covered in the Apple Platform Security Guide.

https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app... (I believe this link can change when the guide gets updated)


I don’t think any of the existing exploits actually involve extracting keys from a hardware security module.


How do they extract fingerprints? All competent biometric implementations store hashes for exactly this reason.


Fingerprints are famous for being left behind on pretty much everything you touch...


What makes you think that's relevant to the discussion here? The person I replied to was under the incorrect assumption that someone in possession of a phone could extract stored fingerprint images, which is not true of any well-designed biometric system.

If you do a little bit of reading about the topic, too, note how well-designed biometric systems require more than a simple fingerprint or photograph — e.g. Apple's FaceID has liveness checks for eye motion and uses a 3D scan. None of these are impossible for a well-resourced attacker but that's true of the alternatives as well. This is why you need to think in terms of threat models — e.g. the attacker who can get a high-resolution 3d scan of your face can also watch you type your passcode in so the latter isn't more secure in practice.


> the attacker who can get a high-resolution 3d scan of your face can also watch you type your passcode in so the latter isn't more secure in practice

If an attacker watched you type in your passcode, what would you do about it?


Probably not notice, because it's a hidden camera or a drone well out of sight?


Well, too bad about your bank account. Better luck next time.


Biometrics shouldn’t be trusted this way. Biometrics are closer to a username, not a password.


Apple specifically allows biometric authentication (Face ID and Touch ID) except when the device is first powered on. This is (at least partly) because of US legal rulings that allow LEOs to compel you to provide a fingerprint and similar biometric ID but cannot compel you to provide a password.


Directv won that war years ago.


I think it's important to stress that we have essentially no solid information as to whether these attacks are real, and if they are, what methods they use, how much time they take, or what measures can be used against them.

Let’s not sing the requiem for their security team just yet.


from the GrayKey website:

>GrayKey is not for everyone. We kindly request that you tell us a bit about yourself and your organization.

This feels like it's begging for DMCA litigation, but it's likely Apple already knows how and why GrayKey works. Keeping GrayKey around serves Apple: law enforcement has (for now) an easy means of hacking some iPhones at an entry cost, while Apple gets to continue insisting their phones are just too secure to help hack.


TMK, there are multiple independent implementations of the attack their current-gen tech is based on. Any determined actor could hire a greyhat and have their own system.


>while permitting Apple to continue insisting their phones are just too secure to help hack.

How? If the FBI can get into someone's iPhone with their magic box and you can read about it in the news, how can Apple convince people that the iPhone is "too secure"?


How else will Apple market the iPhone 12 with Secure Enclave v2™?


Basically GrayKey can unlock numeric passcodes by bypassing the cumulative limits that engage when the wrong code is typed in. The process still takes a long-ish time.

It can't do anything for FaceID, TouchID or alphanumeric passcodes.


According to these estimates:

https://appleinsider.com/articles/18/04/16/researcher-estima...

4 digit = less than 15 minutes
6 digit = less than 24 hours
8 digit = less than 92 days

Besides those estimates, other sources talk of:
4 digit = a couple of hours
6 digit = some three days

https://blog.malwarebytes.com/security-world/2018/03/graykey...

even if those could be "lucky" events.
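
As a sanity check (my own back-of-the-envelope arithmetic, not a figure from Grayshift), the worst-case estimates above are all consistent with a raw rate of roughly a dozen guesses per second once the delays are out of the way:

    # Implied guess rates from the published worst-case estimates
    print(10**4 / (15 * 60))          # ~11 guesses/sec: 4 digits in 15 minutes
    print(10**6 / (24 * 3600))        # ~12 guesses/sec: 6 digits in 24 hours
    print(10**8 / (92 * 24 * 3600))   # ~13 guesses/sec: 8 digits in 92 days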

The "clever" bit, according to reports, is that the procedure "frees the device", i.e.:

1) you connect the iPhone to the Graykey

2) Graykey does something in a few minutes

3) you disconnect the iPhone and it is the phone itself that in due time unlocks itself

Which plainly means that several devices can be processed per hour (and then kept on a shelf, connected to a power supply as long as it is needed).


> 2) Graykey does something in a few minutes

It’s probably using some kind of exploit to upload a custom ramdisk (similar to jailbreaks). Except, unlike jailbreaks, it doesn’t jailbreak the device, but instead tests the passwords.

What’s also interesting is that the Secure Enclave (which holds the encryption key) is supposed to enforce the 10 password limit, so they’re doing something really clever here.


Occam's Razor here suggests that the SEP doesn't actually enforce the limit (it's clearly not enforcing the delays, either), and that the almighty Marketing Department defeated the security engineers once again.

It's much more likely that a marketing-driven product development culture would lie to its customers than it is that Grayshift engineers have compromised the SEP itself. Researchers have been "compromising" HSMs by attacking their external signaling implementations rather than the chips themselves for decades now.


Isn't numeric passcode the default fallback to biometrics that millions of users use? Also, isn't it supposed to be safer than FaceID or TouchID?

Apparently GrayKey can't crack long passwords, since it's essentially brute-forcing, but almost everyone I know uses a four-digit code.

Also, this is troublesome because in the US, we're told that cops can force you to hand over your fingerprint but not your passcode. It's a bit problematic if those passcodes are easy to crack.


> Also, this is troublesome because in the US, we're told that cops can force you to hand over your fingerprint but not your passcode.

It's not settled law that you can't be compelled to provide a passcode. In general, the 5th Amendment prohibits compelled testimony that is incriminating. Disclosing a password to your own phone is usually not per se incriminating. Contrast that with disclosing a password to a device you're accused of hacking, where showing knowledge of the password is evidence of guilt. Many (most?) courts haven't yet been prepared to defend such a fine distinction, and seem to be more comfortable with a simpler rule that prohibits compelling password disclosure, period. But that could easily change, especially at the Supreme Court.


> Disclosing a password to your own phone is usually not per se incriminating

That's why my passcode is always "I murdered her, officer".


Better to go with "them" to cover twice as many situations (or more, in the case of multiple victims).


My understanding is that it's settled law that Americans have the right to remain silent unconditionally.


It is not that straightforward. Generally you cannot be compelled to testify against yourself. But you can be legally compelled to perform certain actions, and held in contempt if you fail to do so. Whether or not disclosing a password is self incrimination, or something you are required to provide is an actively changing area of the law.


> But you can be legally compelled to perform certain actions, and held in contempt if you fail to do so.

IANAL, but isn't this basically slavery/forced indenture? Could a court of law compel Apple to write some software? If so, can a court of law require George R.R. Martin to write a novel and send him to prison if he declines?


Once you're in the court system of most any country, what things you legally can and can't do are a lot harder to enforce. You're essentially needing to fight a well-trained opponent on their home turf with biased referees and no real downside to them for cheating.

I'd imagine neither of your examples could or would happen given the power Apple and Martin's representatives have, and how absurd forcing someone to write a book as part of a court decision would be, but I'm fairly sure you could find examples of relatively similar things. I could certainly see something like a contract-related case being resolved by essentially legally compelling someone to write a book that they said they'd write, or else be fined/imprisoned.


> I could certainly see something like a contract-related case being resolved by essentially legally compelling someone to write a book that they said they'd write, or else be fined/imprisoned.

Again, IANAL, so I don't know the law, but contracts sound like a strictly civil (private party vs. private party) court case where there should be no possibility of imprisonment. If there is, it sounds like a bug to me, and we ought to amend the laws so that it is not possible.


Look up the All Writs act. It's an open question as to whether it actually applies, and Apple has successfully fought off the FBI on this, but it hasn't gotten a court decision that sets a solid precedent.

Conceptually I agree with you, but the law is not as clear as it should be.


Courts can not compel arbitrary things but they can compel participation in the trial process, in order to bring the trial to conclusion. Imagine what a miscarriage of justice it would be, if people had an unrestricted right to hold up the process.


For an example, Francis Rawls is (AFAIK) still in jail, will be there indefinitely, and has not been charged with a crime. So long as he refuses to divulge his password, he is held in contempt of court.


The "right to remain silent" comes from the Miranda warning, which is a simplified formulation of the constitutional right. Your "Miranda rights" are a superset of the rights directly guaranteed by the constitution. But they're judge-made (i.e. "activist judge") procedural protections which only exist in a narrow context. Importantly, the Miranda warning also says you have a right to an attorney. The notion is that until you have an attorney to explain your inviolable constitutional rights and to counsel you which questions you must, should, or should not answer, the court grants you a [mostly] unconditional right to silence.

The whole notion of Miranda rights is still highly contentious, especially among conservative jurists, and that's why an increasingly conservative Supreme Court has narrowed and carved out exceptions to Miranda rights. Because of the conservative exceptions, you should never just remain silent. You should politely ask for an attorney whenever a question is asked, but even then you might still get dinged for refusing to answer some types of questions. For example, if you're not yet in custody, or for simple questions like your name. It's complicated, which is precisely what the original Miranda rights were intended to safeguard against.

The irony is the right against self-incrimination, and many other rights copied from English law and enshrined in the constitution, were originally judge-made rules. By creating Miranda rights the "liberal" Supreme Court was following in the footsteps of traditional Anglo-American legal practice, and exercising their inherent powers. Courts have inherent, constitutionally protected authority to control what is and is not allowed to be presented in court. (Though it overlaps with legislative powers to control court procedures.) Almost all the rules for doing so were crafted by the English courts over nearly a millennium. It's not at all out of sorts for the highest court in the land to craft a new rule instructing lower courts that testimony is presumptively compelled if given before a Miranda warning and to reject it at trial.

To be fair, the conservative counter argument is that in England the highest court in the land was the House of Lords, which was also a House of Parliament, which was and remains the seat of legislative power. Procedural protections didn't always arise in the House of Lords, but the closer you get to 1776 the fewer instances there were of lower courts making such procedural rules. So similar to the Second Amendment, you can pick and choose a window of time that best support a claim to historical precedent.


4 digit codes are obviously easy to crack. That's a pathetically low number of combinations.


This isn’t really obvious. While there are clearly a limited number of codes with four digits, the idea is that the Secure Enclave can throttle and limit cracking attempts to the point that an attack would be infeasible.


Does the timeout between attempts increase with each failed attempt?



Just checked. The default passcode on my iPhone 8 is a 6-digit numeric code, not 4, although you can switch to 4 it seems.

Even 4 digits could be enough, given that iOS enforces (very long) delays after a few failed attempts.


> given that iOS enforces (very long) delays after a few failed attempts.

This is, apparently, the thing that GrayKey is able to bypass.


I'm guessing this means it also bypasses the setting to auto-erase my device when the code has been entered wrong a couple times?


Correct. It doesn't trigger the incorrect code functions. My guess is the device is somehow (through a private exploit) obtaining the hashed secret of your passcode and brute forcing that value instead of entering it through the iPhone itself.

That's just a guess though, it could be far more complex than that


To my knowledge there is no technical reason why a hashed value of the passcode should be stored on the iPhone in any way. And if a hashed value could be extracted from the device, there would be no reason to perform the brute forcing procedure on the device itself, which seems to be what GrayKey does.


How do you validate the passcode if you're not storing it or a hashed version of it on the device?


For example, if you encrypt your disk or a file using a password, you also do not store a hash; the decryption/encryption key is derived from your password using a key derivation function such as https://en.wikipedia.org/wiki/Scrypt
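
A toy sketch of that pattern (illustrative only; real disk encryption uses an authenticated key-wrapping scheme, not a bare XOR): a random master key encrypts the data, and the password only wraps/unwraps that key, so nothing resembling a password hash is ever stored.

    import hashlib, os

    def _kek(password: bytes, salt: bytes) -> bytes:
        # Slow, memory-hard derivation: each password guess costs real work.
        return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

    def wrap_master_key(password: bytes, master_key: bytes, salt: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(master_key, _kek(password, salt)))

    def unwrap_master_key(password: bytes, wrapped: bytes, salt: bytes) -> bytes:
        # A wrong password simply yields a key that fails to decrypt anything;
        # there is no stored hash to compare against.
        return bytes(a ^ b for a, b in zip(wrapped, _kek(password, salt)))

    salt = os.urandom(16)
    wrapped = wrap_master_key(b"hunter2", os.urandom(32), salt)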



My speculation is that it may try to crack the hashed value and input that to the phone directly, rather than interfacing with the password screen.

In other words: the encryption/wipe code may be a function of the password screen, but the phone may accept a hashed key as a valid unlock attempt through a different interface that does not contribute to the failed-attempts limit.


Of course it is highly unlikely that it interfaces with the password screen. My point is that if you could extract a hash from the secure enclave, it would make much more sense to brute force it on a powerful external cluster. However, this seems not to be possible, as decryption is only possible inside the secure enclave element unique to the device; thus decryption attempts have to be done on the device itself. GrayKey seems to have managed to circumvent the wrong-attempts counter and/or the triggering of subsequent protection mechanisms.


> So why are the FBI, President Trump, Attorney General William Barr, and others all calling on Apple itself to break iPhone encryption? The most obvious possibility...

The most obvious possibility is that Apple can one day succeed and the iPhone will become uncrackable by these tools.


The box on their website looks really similar to the iPhone unlock tools sold in the unlock industry.

Can it be that GrayKey is just a relabeled Chinese unlock box made to crack the lock screen?


The government is just trying to establish its preferred pecking order — government, oligarchs, then Soylent Green. Expect more High Publicity cases especially those involving child victims.


Isn't breaking encryption like this illegal under the DMCA? Wouldn't that mean that the FBI was actively breaking the law by using it?


Breaking encryption is not illegal under the DMCA except in the cases listed in the DMCA.

Lots of things are illegal without a warrant or without some other legal requirement.

The FBI breaking encryption in a legal investigation is not illegal.


> Breaking encryption is not illegal under the DMCA except in the cases listed in the DMCA.

isn't it the other way around?


No. The DMCA makes most breaking of encryption illegal when used to circumvent copyright. "Most" because it has exemptions even for copyright circumvention. So it is only illegal in a very limited set of circumstances.

I can probably think of hundreds of places you can still do it and it's not illegal. Some examples: in school classes to learn, at home for practice, researchers attacking each other's new algorithms, estate clearance to get a dead person's assets released, for interoperability in many cases, for archiving old things in many cases, and on and on.

In fact, almost any place I can think of where I might break encryption (which I have done as a hobby, professionally, and while a PhD student) is perfectly legal. It becomes illegal when used to do something else illegal, like violating copyright, or stealing things.

So no, breaking encryption is not illegal as a whole.

Here, break this: Lbh qvqa'g oernx gur ynj.


> "Most" because it has exemptions even for copyright circumvention.

Right that was what I meant, that the specific "list" I could think of is actually of exemptions. Irrelevant nitpick really, sorry.


For the curious: Caesar cipher with a -> n, i.e. ROT13.
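
One line of Python undoes it:

    import codecs
    print(codecs.decode("Lbh qvqa'g oernx gur ynj.", "rot_13"))
    # -> You didn't break the law.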


They have a court order to access the data; similarly, you're not burglarizing a house you have a warrant to enter.

That doesn't mean Apple has to accommodate them, though.


I suppose it's possible that in one case the device had USB Restricted Mode off, but it was on in the Pensacola case?


I fear where this will end up, or has already ended up, is in a place where Apple gets to say they are pro user privacy, doing everything they can to keep my data secure, etc., when in reality they could be purposefully not closing vulnerabilities so they have an out with the police. Until I see Apple genuinely go after companies like Cellebrite and GrayKey in court, I will remain skeptical.


I always considered the "you can't get at the data of an iPhone" a marketing slogan, not really worth my trust. I am still of the opinion that once you give up/lose physical access to a device, the game is over. Also, I don't see why this is such a big deal for so many people. Is the percentage of people that have the FBI as their adversary really so high? :-)


Physical access does not at all need to mean that the game is over, with today's architectures.

With an iPhone, any storage you can access with physical access is going to be encrypted. The encryption key lives in the secure enclave, which you do not have access to even with physical access, barring drastic measures like trying to take chips apart and scanning their contents, which may or may not be feasible at all.

What still may be possible is bugs and exploits, but that's up to whether you are lucky enough to have one that works against a particular phone and firmware.


I don't believe this explanation. Not because it comes from you; I never believed it. "Secure enclave", yeah, sure. My point is, if you have sensitive data that you don't want to lose, you should probably not carry it around with you all day. Apple is trying to convince us that they have invented a sort of security that hasn't existed before. They claim they have invented a piece of paper that nobody else can read. This claim is so old, and has never been true.

If I wanted to hide an important piece of information, I guess I'd find a physically obscure and secured place, like a hidden safe or whatnot. But I wouldn't trust crypto, no matter whether the key were placed in a "secure enclave" (haha) or not.


I don’t think Apple are claiming to have invented security that didn’t exist before.

The idea of a Secure Enclave has existed for years in the form of chip-and-PIN bank cards. Before that, HSMs were built on the same principle (a bank card is just a tiny HSM).

All of this works on the principle that it's very hard to reverse engineer silicon, and that you can make it harder by creating silicon designs that deliberately obfuscate their purpose.

The end result is a piece of hardware that is very difficult to take apart without irreversibly damaging it and destroying the data in the process. The attack itself would also require an extremely high level of skill.

Ultimately no security is perfect, it’s not meant to be. Security is just meant to skew the effort-reward equation enough that no one can be bothered to break into your thing.


No, Apple is using known technology that has existed for quite a while now. Secure enclaves aren't new or mysterious, they are established technology.


A piece of paper no one can read is extremely doable. If you encrypt your HDD at home, you can walk around with it as long as you like and no one will decrypt it. The problem is making it an accessible device you can comfortably use and take data in and out of without accidentally making it readable.


> This claim is so old, and has never been true.

Citation needed.

> If I wanted to hide an important piece of information, I guess I'd find a physically obscure and secured place, like a hidden safe or whatnot.

That's just basic threat assessment. Obviously if you have a chunk of data that's really sensitive, it would be best to "air gap" it from the internet.

> But I wouldn't trust crypto, no matter if the key were placed in a "secure enclave" (haha), or not.

Please review Apple's "Apple Platform Security" document before continuing to comment. [1]

[1] https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...


It's not possible to generate a computer system that will remain secure for the rest of time. All encryption can be decrypted.

Every phone ever sold can have its internal data read, it's not a case of if but when. This also applies to every HDD, SDD and any encryption software you may be using, it will be cracked eventually.


How do you read data inside the secure enclave, then?


Disassemble the chip, and use scanning electron microscopes.

https://www.cl.cam.ac.uk/~sps32/cardis2016_sem.pdf


Sure. That is incredibly expensive and time-consuming, and due to anti-tampering protections has a high possibility of destroying the data before it can be extracted.

So, not realistically viable in 99.99% of attack scenarios.


If the FBI can do it, that means any other third party is able to do it too; it is just a matter of time.


A phone thief may theoretically spend a few months to unlock your phone, but in practice they won't.


When an exploit is leaked and turned into software that can run on commodity hardware then the thief might only have to spend a few minutes.


Time also helps, because the software on the phone stops getting the fresh security fixes.


Any other third party with funding on the level of the FBI.


So long as the reports are "FBI can unlock iPhones by unknown means" we don't know the cost to execute the exploit.

Just because the FBI pays $$$$ per unlock doesn't mean the marginal cost to the supplier is $$$$; that could just be GrayKey trying to cover the $$$$$$$ they paid for an exploit, and make some profit.


This is the point. Third party being foreign government, or eventually your own government. You may not have anything to hide, but you’re only one law away from what constitutes ‘something to hide’.


So, I recommend: just don't lose your shit... Besides, the typical thief aims to resell the hardware. They don't care about the pics you have of your gf...


Completely locking down a physical device and making it unhackable is downright impossible. If the attacker has physical access to the device and the ability to regulate network access, then it's only a matter of time before the device gets hacked.

Right now they are using USB to gain access to the device. There is nothing stopping an attacker from actually prying open the device and fitting alternative components that can bypass security. Consoles are cited as an example of an unhackable device, which is not true: consoles were designed to be unhackable by cheap mass-market methods, but a determined hacker can still do it; it just costs more than a new console. The hacker in this scenario is the FBI, with a lot of money and resources and the physical device in their custody. I doubt the FBI is going to lose the ability to hack devices anytime soon.


Replacing components does not give you access to the secure enclave which is on-chip. And that is where the encryption key for the storage lives.


Lots of iPhone decryption kits, specifically kits which reset the timer or the limit on unlock attempts, worked (and still work) by unsoldering chips and processing them externally.


Is this true? You can buy iphone decryption kits that require desoldering chips?



