A Message to Our Customers (apple.com)
5771 points by epaga on Feb 17, 2016 | 967 comments

Huge props to Apple - here's hoping against hope that Google, Facebook, and Amazon get behind this.

One thing I was wondering is how Apple is even able to create a backdoor. It is explained toward the end:

"The government would have us remove security features and add new capabilities to the operating system, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by “brute force,” trying thousands or millions of combinations with the speed of a modern computer."

This is actually quite reassuring - what this means is that even Apple can't break into an iPhone with a secure passphrase (10+ characters) and disabled Touch ID - which is hackable with a bit of effort to get your fingerprint.

".. what this means is that even Apple can't break into an iPhone with a secure passphrase (10+ characters) and disabled Touch ID - which is hackable with a bit of effort to get your fingerprint."

That is not exactly true. They wrote the OS, they designed the phone, they know where the JTAG connectors are. Cracking the phone apart and putting its logic board up on a debugger would likely enable them to bypass security.

From what I understand, what Tim is doing, and what I greatly admire, is trying to avoid a judicial requirement that they be able to do this on demand: the so-called "back door" requirement. He knows, as others do, that such a feature would be used by more than the intended audience, and for more than the intended uses, to the detriment of Apple's users.

What I really find amazing is that I was at a talk hosted by the East-West Institute where the Air Force general of the new cyber command (whose name escapes me) complained that "we" (Silicon Valley) had let the government down by not writing strong enough crypto to keep our adversaries out. I remarked that it was the ITAR regulations and the Commerce Department, at the behest of the NSA, which had tied our hands in that regard, and that with free rein we would have done, and could do, much better. Whit Diffie was there and made the same point with them. And now, here we are 10 years later, we "fixed" it, and now it's our fault that they can't break into this stuff? Guess what? Our adversaries can't either!

The right to privacy, and the right of companies to secure that right with technology for their customers, is a very important topic and deserves the attention. I am really glad that one of the most valuable companies in the world is drawing a bright line in the sand. So I really support Tim's position on this one.

> That is not exactly true. They wrote the OS, they designed the phone, they know where the JTAG connectors are. Cracking the phone apart and putting its logic board up on a debugger would likely enable them to bypass security.

No, they can't. A quick update to recent hardware practices: modern SoCs like Apple's have something called "Secure Enclave Processor" that's on-die. This is the first thing to start when the chip is powered up, the thing that loads a cryptographically-signed bootloader, and the thing that gates a lot of IO with the outside world (like the NAND).

Hardware encryption on consumer hardware has existed for over a decade (look up Intel's TPM), and while it obviously hasn't taken hold in the more open Intel world, a locked-down, top-to-bottom hardware platform like Apple's has had much more liberty in implementing secure computing.

Furthermore, all debug/probing features can be disabled by fuses at the factory. The manufacturer can test the chip with those features on, and once verified, blow those fuses. No-one's JTAG-debugging that chip, not even Apple.

That said, Apple's focus on security and privacy has ramped up in recent years. You want more secure, get more recent hardware. The downside, of course, is that if even Apple can't hack the software... neither can you.

"Cryptographically-signed bootloader" - but does Apple have the private key? If so, why can't they create another bootloader?

As I understand it, if the code running in the "secure enclave" (containing the private keys) is ever upgraded, the hardware side intentionally deletes the private keys as part of the upgrade, whether the upgraded code would want it to or not.

Dan Guido just tweeted otherwise: https://twitter.com/dguido/status/699992079594864640

The reasoning there is far from conclusive. The argument is that the secure enclave has been updated in the past (to lengthen enforced delays) without wiping user keys.

However, without more information, this does not tell us whether it is possible in this case. The obvious implementation for a secure enclave resisting this sort of attack is to only allow key-preserving updates when already in unlocked state (which would be the case for any normal user update). All other cases should destroy the user keymat, even if the update is validly signed by Apple. This would be done by the hardware and/or previous firmware before it loaded the new firmware so you can't create an update that bypasses this step.

If this isn't how the secure enclave works now, I'll bet it will be in the next version (or update if possible).

You can't update the firmware on the SE without unlocking the device first.

We assume.

I'm also confused by a lot of this since don't you need the password anyway to upgrade?

> We assume.

I bet that if Apple is forced to comply with this order, they will find a way to design the iPhone such that they physically can't comply with similar requests in the future.

> No, they can't. A quick update to recent hardware practices: modern SoCs like Apple's have something called "Secure Enclave Processor" that's on-die.

Yes, they can. The particular phone in question predates the Secure Enclave Processor.

To play devil's advocate: If they can be compelled to produce de-novo firmware for the purpose of data extraction they could also be compelled to design the means necessary to extract the data from the secure enclave, e.g. by prying the chips open and putting it under a scanning tunneling microscope.

The packages on those chips are usually designed to be melded to the underlying structures enough that opening them destroys the keys they're attempting to hide.

I think you're missing my point here.

Some people trot the argument that it's OK for the government to compel apple to deliver the backdoored firmware because the measures it would circumvent are not of cryptographic/information-theoretical nature.

Then one could expand that argument by saying that compelling physical reverse-engineering is also OK because the devices are not built to be physically impossible (read: laws of nature) to pry open.

The devices ARE built to be physically impossible to open without destroying their contents.

In security, impossible usually means "there is no documented method yet".

> They wrote the OS, they designed the phone, they know where the JTAG connectors are. Cracking the phone apart and putting its logic board up on a debugger would likely enable them to bypass security.

When a passcode is entered, the SoC queries the Secure Enclave with the passcode. If the passcode is correct, the Secure Enclave responds with the decryption key for the flash storage.

The best Apple could do is sign a malicious update to the Secure Enclave firmware that either removes the time delays or dumps the keys. However, some people suspect the SE erases its secrets on firmware update, although this behavior isn't documented in Apple's security reports.
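The query-and-release flow described above can be sketched as a toy model. This is purely illustrative: the class name, the UID constant, the HMAC construction, and the attempt limit are all invented stand-ins, not Apple's actual design.

```python
import hashlib
import hmac

class ToySecureEnclave:
    """Hypothetical model of the flow described above: the application
    processor hands the enclave a passcode, and the enclave either
    releases the storage decryption key or counts a failed attempt.
    All names and constants here are invented for illustration."""

    MAX_ATTEMPTS = 10  # after this, a real device may wipe itself

    def __init__(self, passcode):
        # A real SE mixes in a factory-fused, hardware-only UID;
        # a constant stands in for it here.
        self._uid = b"fused-device-uid"
        self._passcode_tag = hmac.new(self._uid, passcode.encode(), hashlib.sha256).digest()
        self._storage_key = hashlib.sha256(self._uid + b"storage-key").digest()
        self.failed_attempts = 0

    def try_unlock(self, passcode):
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            raise RuntimeError("locked out: keys destroyed")
        tag = hmac.new(self._uid, passcode.encode(), hashlib.sha256).digest()
        if hmac.compare_digest(tag, self._passcode_tag):
            self.failed_attempts = 0
            return self._storage_key   # key released only on success
        self.failed_attempts += 1      # a real SE also escalates delays here
        return None

se = ToySecureEnclave("1234")
assert se.try_unlock("0000") is None      # wrong passcode: no key
assert se.try_unlock("1234") is not None  # right passcode: key released
```

The point of the model is that the OS never holds the storage key itself; it only gets it back if the enclave agrees.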

> The best Apple could do is sign a malicious update to the Secure Enclave firmware that either removes the time delays or dumps the keys.

Dumping the Secure Enclave would not result in the keys necessary to read the files on the filesystem. Each file has a unique key, which is wrapped by a class key, and for some classes, the class key is wrapped by a key derived from the passcode. If you don't have the passcode, you can't unwrap any of the keys (Page 12 of https://www.apple.com/business/docs/iOS_Security_Guide.pdf).
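That wrapping hierarchy can be sketched in a few lines. Note the "wrap" here is a toy XOR construction standing in for real AES key wrapping, and all key material is made up; only the file-key/class-key/passcode-key layering follows the guide.

```python
import hashlib
import os

def toy_wrap(wrapping_key, key):
    # Stand-in for real AES key wrapping: XOR the key with a digest
    # of the wrapping key. Illustrative only, not secure.
    pad = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

toy_unwrap = toy_wrap  # XOR is its own inverse

# The hierarchy sketched in the iOS Security Guide:
# file key <- class key <- passcode-derived key.
passcode_key = hashlib.pbkdf2_hmac("sha256", b"1234", b"device-salt", 1000)
class_key = os.urandom(32)
file_key = os.urandom(32)

wrapped_class_key = toy_wrap(passcode_key, class_key)
wrapped_file_key = toy_wrap(class_key, file_key)

# Without the passcode the class key stays wrapped, so every file key
# below it is opaque; with it, the chain unwraps top-down.
recovered_class = toy_unwrap(passcode_key, wrapped_class_key)
recovered_file = toy_unwrap(recovered_class, wrapped_file_key)
assert recovered_file == file_key
```

Dumping the wrapped blobs alone gets you nothing; the passcode-derived key is the root of the chain for those protection classes.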

You could bruteforce the passcode.

Yes, if you could recover the SE key (or factory-burned SoC key for older phones), you could crack the passcode on your own cluster rather than being limited to what the phone hardware/software can do.

> for older phones

To my knowledge, all keys are still wrapped with the UID, and the UID is still a factory-burned SoC key (not accessible to any firmware). Possible to extract, but not easy to do at scale.

The phone involved in the San Bernardino case is a 5C, which does not have a Secure Enclave.

Where have you heard about the erase-on-update feature?

The SE doesn't erase-on-update, but you can't update it without unlocking the phone first.

>>> I am really glad that one of the most valuable companies in the world is drawing a bright line in the sand. So I really support Tim's position on this one.

Tim's position today might not be Apple's position tomorrow. Apple is a large publicly traded company. They owe a duty only to shareholders. Fighting this fight will probably impact the bottom line, and Tim's tenure may turn on the outcome.

Cooperation may see Apple hurt. The perception of cooperation was part of RIM's fall from grace. Non-cooperation may also cause issues. Through its various agencies, the US government is Apple's largest customer, as it is Microsoft's. Large contracts might be on the line should Apple not play ball. Either way, this order has probably wounded Apple.

> the US government is Apple's largest customer

I'm having trouble finding numbers, but I seriously doubt this. The reason that's true (or more likely true) for Microsoft, is Windows. The US gov't has massive site licenses for Windows and most of MS's software portfolio. Apple is used where in the US government? Some cell phones? A few public affairs offices that convinced their purchasing officer to buy a Mac Pro for video editing? Maybe some labs that wanted a unixy OS and, again, convinced their purchasing officer to buy a Mac Pro?

Per: http://investor.apple.com/secfiling.cfm?filingid=1193125-14-...

The bulk of Apple's revenue comes from outside the US. Perhaps the US government is their largest single customer (I still hold this is a dubious claim), but it is not essential to their continued existence. They would do just fine without those sales.

I agree with your general point (which I take to be, Apple is not seriously threatened by loss of sales to the US Government).

But remember, iPhones and MacBooks are quite popular everywhere, including US government procurements (e.g., https://37prime.wordpress.com/2012/08/05/nasa-mars-science-l...).

My experience is primarily DoD, which accounts for the bulk of the government's spending on this sort of thing. For enterprise spending, Apple doesn't get much love, outside of a general trend toward iPhones as "company" phones. Even that's rare, with people often using their own phones instead of being issued one (they can get their plans paid for or subsidized if the phone is a requirement of the job). Labs often fall outside a lot of the enterprise-type procurements and so have a bit more leeway in that regard, but while they spend a lot of money, it's still a drop in the bucket for Apple.

I do agree with your general point.

Your characterization ("Maybe some labs that wanted a unixy OS and, again, convinced their purchasing officer to buy a Mac Pro?") is a little off -- where I work, MBP's for laptop replenishments are treated exactly the same way as Windows systems, you just tick a different button on the order form.

Fighting will probably impact the bottom line.

As in, improve it.

Tim Cook is probably more popular than Obama (and surely is WRT this issue.) Apple is about a thousand times more popular than the NSA and blessed with almost infinitely deep pockets and a very, very good marketing team.

Not to mention the fact that most of the people who use computers and phones don't even live in the USA.

Unfortunately, the NSA/FBI likely have far more money than Apple, and if Apple spends too much, their shareholders can demand that leadership backs down.

Tim Cook is responsible to his BoD and shareholders.

The NSA has an annual budget of ~$10B; the FBI's is even less.

In comparison, Apple had a net income of ~50B in 2015.

Let that sink in for a moment.

> Large contracts might be on the line should Apple not play ball.

That almost certainly isn't the case. It is doubtful whether any other government organization cares about how they handle this case. Heck the FBI likely doesn't care as long as Apple doesn't do anything illegal.

Lots of places OUTSIDE the US will care, though. This is exactly the sort of $hite that is causing European companies and governments to avoid dependencies on US providers: there's no way to guarantee freedom from US government surveillance.

They owe a duty only to shareholders.

Because Freedom Markets(tm), booyah!

Society can bumble along just fine without corporations. Corporations serve society.

Take away society, with its culture, laws, rules, regulations, courts, people, economy, markets, capital, etc, there can be no corporations.

The Shareholder Fallacy http://www.salon.com/2012/04/04/the_shareholder_fallacy/

Historically, corporations were understood to be responsible to a complex web of constituencies, including employees, communities, society at large, suppliers and shareholders. But in the era of deregulation, the interests of shareholders began to trump all the others. How can we get corporations to recognize their responsibilities beyond this narrow focus? It begins in remembering that the philosophy of putting shareholder profits over all else is a matter of ideology which is not grounded in American law or tradition. In fact, it is no more than a dangerous fad.

The Myth of Profit Maximizing

“It is literally – literally – malfeasance for a corporation not to do everything it legally can to maximize its profits. That’s a corporation’s duty to its shareholders.”

Since this sentiment is so familiar, it may come as a surprise that it is factually incorrect: In reality, there is nothing in any U.S. statute, federal or state, that requires corporations to maximize their profits. More surprising still is that, in this instance, the untruth was not uttered as propaganda by a corporate lobbyist but presented as a fact of life by one of the leading lights of the Democratic Party’s progressive wing, Sen. Al Franken. Considering its source, Franken’s statement says less about the nature of a U.S. business corporation’s legal obligations – about which it simply misses the boat – than it does about the point to which laissez-faire ideology has wormed its way into the American mind.

>>In reality, there is nothing in any U.S. statute, federal or state, that requires corporations to maximize their profits.

Laws and statutes don't enforce contracts, but courts do. You are trumpeting a theory I've heard many times before. Its creators lack a basic understanding of contract law or corporate organization. Look up "shareholder derivative actions".

In a lot of consumer devices, JTAG is at least partially disabled (sometimes they can throw a fuse that only lets you do boundary scan for manufacturing).

I would not be surprised at all if Apple's internal 'backdoor' (if you can call it that) is just resetting the secure enclave, essentially erasing everything on the NAND. That'd be fine for refurb/manufacturing; desirable, even, as it guarantees that a full system wipe happens before a refurb goes to a new customer.

There are no more actual fuses, but JTAG access can be limited or completely disabled on most (or possibly all) modern ARM chips by setting a value in the non-volatile (flash or EEPROM) memory. Even the IC manufacturer (Freescale, ST, etc.) can be locked out completely.

Most ICs can be completely erased to remove the limitations on access, but this usually requires a 'mass erase', where the entire non-volatile memory is erased (taking any codes, passwords, and encryption keys with it).

source: I am an embedded software engineer who works with these settings in bootloaders and application software.
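The lockout-plus-mass-erase behavior described above can be modeled in a few lines. This is a generic sketch of the concept, not any particular vendor's register interface; all names are invented.

```python
class ToyMcuFlash:
    """Model of the lockout described above: debug access is gated by a
    value in non-volatile configuration memory, and the only way to clear
    the lock is a mass erase that also destroys everything in flash."""

    def __init__(self):
        self.debug_locked = False
        self.contents = {"firmware": b"app-code", "keys": b"secret"}

    def lock_debug(self):
        self.debug_locked = True      # written to non-volatile config memory

    def attach_debugger(self):
        if self.debug_locked:
            raise PermissionError("JTAG/SWD access disabled")
        return self.contents          # full visibility when unlocked

    def mass_erase(self):
        self.contents = {}            # keys and code are erased together
        self.debug_locked = False     # lock clears only after the full erase

mcu = ToyMcuFlash()
mcu.lock_debug()
mcu.mass_erase()
assert mcu.attach_debugger() == {}   # debug works again, but nothing is left
```

The design point is the ordering: unlocking is only reachable through the erase, so there is no state where an attacker has both debug access and the secrets.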

> That is not exactly true. They wrote the OS, they designed the phone, they know where the JTAG connectors are. Cracking the phone apart and putting its logic board up on a debugger would likely enable them to bypass security.

Is this true? That would have to mean that either the passphrase is stored on the device or that the data is not encrypted at rest. Neither of these sounds likely, frankly.

It doesn't have to mean any of that. As long as the 10-mistakes limit is enforced in software, or in a separate chip that can be replaced without replacing the actual encryption key, it can be bypassed. Then it is a simple matter of brute-forcing the PIN code. Since these are usually 4 digits, there are only 10,000 possibilities, which is trivial for a brute-force attack.
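To see how trivial 10,000 possibilities is, here's a toy brute force against a PBKDF2-derived key. The salt, iteration count, and "unknown" PIN are all made up, and the iteration count is kept artificially low so the demo runs in seconds.

```python
import hashlib

SALT = b"per-device-salt"   # invented for the demo
ITERATIONS = 1000           # real devices use far higher counts

def derive_key(pin):
    # Derive a key from the PIN the way a software-only scheme might.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, ITERATIONS)

target = derive_key("4831")  # pretend this is the key we recovered

def brute_force(target_key):
    # Only 10,000 four-digit PINs exist; try them all.
    for candidate in range(10000):
        pin = "%04d" % candidate
        if derive_key(pin) == target_key:
            return pin
    return None

print(brute_force(target))  # prints "4831"
```

With the retry limit and delays out of the picture, nothing about a 4-digit PIN resists this; only hardware-enforced rate limiting (or a much longer passcode) does.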

They don't even have to do that. They wrote the OS, they have the signing key for OS updates. All they need to do is push an update to the device with a backdoor that allows reading off the unencrypted contents post-boot (possibly with the addition of a judicially compelled fingerprint scan or PIN brute force to get the encryption key out of whatever on-device escrow it's stored in).

The only way to secure the device against that would be to have the users manually memorize and key in a complete 128 bit encryption key, which they could then refuse to provide.

I think we tech folks were fooling ourselves with the idea that Apple had somehow delivered a "snoop-proof" device. They really didn't (no one can!) as long as they're subject to government or judicial control.

This is not really true. The secure enclave is a separate computer. It doesn't get software updates.

> possibly with the addition of a judicially compelled fingerprint scan or PIN brute force to get the encryption key out of whatever on-device escrow it's stored in

This is the whole problem. The keys are in the SE. You can't brute force the PIN because the SE rate-limits attempts (and that rate limiting cannot be overridden by an OS update because the SE is not run by the OS).

If you can get a fingerprint scan then all bets are obviously off, but then you don't need Apple at all.

The device in question does not have a secure enclave. It's a 5c.

But it's more interesting to think about the case where the phone does have a secure enclave.

and yet it would be less interesting to consider if the password was a "six-character alphanumeric passcode with lowercase letters and numbers" because even if the software rate-limiting was disabled with a rogue firmware update, the PBKDF2 or similar iteration count makes brute-forcing impractical.

> A large iteration count is used to make each attempt slower. The iteration count is calibrated so that one attempt takes approximately 80 milliseconds. This means it would take more than 5½ years to try all combinations of a six-character alphanumeric passcode with lowercase letters and numbers.

(Page 12 of https://www.apple.com/business/docs/iOS_Security_Guide.pdf).
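The quoted figure checks out with simple arithmetic: 36 characters (26 lowercase letters plus 10 digits), six positions, 80 ms per attempt.

```python
# Back-of-the-envelope check of Apple's "more than 5 1/2 years" figure.
attempts = 36 ** 6                       # 2,176,782,336 possible passcodes
seconds = attempts * 0.080               # 80 ms per attempt
years = seconds / (60 * 60 * 24 * 365)
print(round(years, 1))                   # prints 5.5
```

And that's the exhaustive worst case on the device itself; it assumes the attacker can't extract the UID-entangled key and move the search to faster hardware.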

In that case, they could just bring the phone down to the morgue and unlock it with touch id.

Touch ID does not work after a reboot, too many failed attempts, or a long delay.

Additionally, you don't have to use your thumb, so if you don't know which finger was used, you're out of luck.

If the phone is rebooted or if 48 hours have passed since the last passcode was entered, the touch ID can't be used to unlock the phone.

Touch ID requires the passcode after reboot or timeout. They'd need to do it very quickly.

Fingerprint scan really isn't enough though since after a restart, iPhones require the password to be entered.

Because fingerprints are not sufficient to generate / recover KEKs from.

Is that really true? The enclave's firmware is in ROM and non-upgradeable? I'd always assumed it got a signed blob like everything else does. Obviously it's possible to design a system like that, I just haven't seen it reported anywhere that it actually works like that.

Edit just to be clear: the requirement really is that the firmware be stored in a ROM somewhere, probably on the SoC. That's a lot of die space (code for a full crypto engine isn't small) to dedicate to this feature. Almost certainly what they did is put a bootstrap security engine in place that can validate external code loaded from storage. And if they did, that code can be swapped by the owner of the validation keys at will, breaking the security metaphor in question.

According to a former Apple engineer who worked on this stuff, the enclave's firmware is indeed a signed blob:


The key thing would be for it to lose all stored keys on update when the current passphase has not been provided, and it sounds like that may not currently be the case.

Maybe in this case, Apple could comply, but a simple tweak would make it impossible in the future?

But "on update" isn't really the issue. If the code can be swapped underneath it, how does it know an "update" took place? Again you're in the situation where all of that process would have to be managed by hardware, when what is really happening is that the enclave is a computer running signed software that can be replaced.

Sure, but the signed firmware could be written to delete any stored keys when it accepts an update and the phone's passphrase has not been provided. That's assuming that it manages its own update process, has the firmware securely stored within its own die, etc. It's entirely possible... but only Apple really knows.

I would not actually be shocked if they originally did wipe out stored info on firmware update, but had some issues with people updating their phone and losing everything, so they ifdef'd that particular bit out in the name of usability.

But... the firmware is stored in external storage. How does it even detect that an update was performed? You're saying that if the hardware had affirmative control over all the secure storage (e.g. the RPMB partitions on eMMC, etc...), then it could have been written to do this blanking.

Well, yeah. But then you'd have a system that couldn't be updated in the field without destroying the customer data (i.e. you'd have a secure boot implementation that couldn't receive bug fixes at all).

It's a chicken and egg problem. You're handling the problem of the "iPhone" not being a 100% snoop-and-tamper-proof device by positing the existence of an "interior" snoop-and-tamper-proof device. But that doesn't work, because it's turtles all the way down.

Ultimately you have to get to a situation where there is a piece of hardware (hardware on a single chip, even) making this determination in a way that has to be 100% right from the instant the devices go out the door. And that's not impossible, but it's asking too much, sorry. We're never going to get that.

It's certainly possible to design such a system with external firmware and still allow for secure updates.

The enclave would store (in secured storage) a hash of the last used firmware. Hardware would have a hash update capability, but this destroys all other stored information (i.e., keys) if used when the enclave is not currently in an unlocked state.

On boot, hardware verifies firmware signature as usual but also compares the firmware hash (already calculated for the signature check) to the stored value. If there is a mismatch, update the stored hash. Since the enclave is currently locked, the hardware clears the keys.

Since it's in hardware, you're correct that it would have to be 100% right, but that's quite feasible for a simple update mechanism (indeed, the most complicated bits are reused pieces from the signature check which already has this requirement).
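The scheme proposed above can be sketched as a few lines of pseudocode-style Python. This models only the hash-compare-and-wipe rule; signature verification and the secured storage itself are assumed to exist, and everything here is a hypothetical design, not Apple's documented behavior.

```python
import hashlib

class ToyEnclaveBoot:
    """Model of the update rule proposed above: a firmware change is
    key-preserving only if the enclave was unlocked when the new image
    arrived; otherwise the user keymat is destroyed before the new
    firmware runs."""

    def __init__(self, firmware):
        self.stored_hash = hashlib.sha256(firmware).digest()
        self.keys = b"user-keymat"
        self.unlocked = False

    def boot(self, firmware):
        # (Signature check against Apple's key would happen first; omitted.)
        new_hash = hashlib.sha256(firmware).digest()
        if new_hash != self.stored_hash:   # firmware changed since last boot
            if not self.unlocked:
                self.keys = None           # locked update: destroy user keys
            self.stored_hash = new_hash    # accept the new image either way
        self.unlocked = False              # every boot starts locked

enclave = ToyEnclaveBoot(b"fw-v1")
enclave.boot(b"fw-v2")        # update applied while the enclave was locked
assert enclave.keys is None   # keymat destroyed, as the scheme requires
```

A normal user update happens while unlocked, so keys survive; an update forced onto a locked, seized device wipes them even though the image is validly signed.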

Have it store the firmware itself encrypted with the UID. It never leaves the secure enclave so only the secure enclave itself could "sign" updates. You could still allow for recovery by providing a method to reset the UID.

Having the hardware require both the signature to pass as well as a valid PIN/fingerprint can't be that difficult.

Die space is cheap nowadays, especially stuff that doesn't need to be on all the time because of the death of Dennard scaling.

I'm not claiming it is in ROM or that it is not upgradeable if you are Apple and have physical access to the device. I'm not sure on that point. What I think must be the case is that Apple can't remotely upgrade the SE firmware as part of its iOS update mechanism. Although, to be perfectly honest, I have not seen this explicitly documented.

So... it sounds like you more or less agree with me. Apple can comply with this court order and open your secure device. We just differ as to whether they can do it over the air.

(FWIW: OTA firmware updates are routine in the industry. I've worked on such systems professionally, though not for Apple.)

The "bootstrap security engine" could store a hash of the blob in its secure flash storage, and only update the hash (then reload the firmware blob) if the user enters their PIN. If the stored hash and the blob's hash ever don't match on boot, wipe the keys.

How can they simply "push an update"? I've never seen iOS auto-update without first prompting the user, which I'm assuming is a very intentional limitation.

Probably by using DFU, and uploading the image via the lightning cable.

Except that wipes the device.

The Air Force and the FBI are different organizations with different missions. I suppose they are both under the executive branch.

"From what I understand, what Tim is doing, and what I greatly admire, is trying to avoid a judicial requirement that they be able to do this on demand: the so-called "back door" requirement. He knows, as others do, that such a feature would be used by more than the intended audience, and for more than the intended uses, to the detriment of Apple's users."

To be fair - the only reason he's doing it is because it would cause a significant drop in sales for Apple devices. People overseas would immediately stop buying because "The American government is listening", and that's assuming countries like China and Russia wouldn't ban them outright.

This is the big thing American politicians are missing or glossing over in their campaigns to get re-elected: Forcing American companies to compromise their products will result in a significant loss of revenue overseas. Microsoft, Google, et al, have already reported it and foreign governments have already started banning goods/services (due to the Snowden revelations).

"To be fair - the only reason he's doing it is because it would cause a significant drop in sales for Apple devices."

That's not being fair at all. To say the only reason he is doing it is to protect iPhone sales doesn't speak to Tim's character. Of course he cares about sales, but he also cares about privacy.

It's definitely one of those rare times doing the right thing is also the most profitable thing.

Doing the right thing is very often (if not almost always) the most profitable thing. It's very difficult to make a business out of serving your customers poorly; if you disagree, give it a shot, and let me know how it turns out for you.

Serving your customers well is good for profit, but it is not the same thing as doing the right thing. Ad-based companies have customers and users. By serving their customers too well they easily screw over the users, with obnoxious and even unethical ads.

I've got a set of special hex screws and soldered-on RAM to talk to you about...

That seems like the classic mistake of believing that the larger market assigns importance to repairability. I haven't seen much evidence this is true.

It's not necessarily the most profitable thing. Apple is picking a fight with a very big adversary. This takes some serious backbone.

I promise you, even a protracted legal battle is far cheaper than significant global sales losses.

Normally people on HN are much more skeptical about airy promises and assertions from corporate executives. I don't see what behavior on Tim Cook's part has indicated he's more to be trusted than anyone else.

Tim Cook was asked at a shareholder meeting to only do things that were profitable. He passionately rejected the idea, and named accessibility for the blind, environmental issues/climate change, and worker safety as areas where Apple invests because it's right, without considering the "bloody ROI". [1]

Compare to GE, which rolled over [2].

So I believe Mr Cook when he says his opposition to the FBI's request is rooted in a desire to do the right thing, and not the bottom line.

[1] http://www.macobserver.com/tmo/article/tim-cook-soundly-reje...

[2] http://www.nationalcenter.org/PR-GE_Climate_Change_022114.ht...

Just to sharpen your comment, here's the summary of that interaction with a shareholder:

[Cook] didn't stop there, however, as he looked directly at the NCPPR representative and said, "If you want me to do things only for ROI reasons, you should get out of this stock."

That's a very blunt statement that the immediate stock valuation is not Cook's only consideration.

Some people seem to have a hard time taking Cook at his word, but he's been quite consistent. This massive skepticism feels more like nostalgie de la boue than anything based in facts.

I think the only problem here is the strongness with which it is worded. It would be fair to say one of the reasons he is doing it is because of stock price, but to assert the only reason he is doing it is to cast him as completely uncaring about security and privacy. Without evidence to back this up (in this case, evidence that he doesn't care about security and privacy), it's an attack on his character. It's entirely possible his reasons include both, and that the moral sentiment behind the message is entirely truthful.

We should be careful about plainly stating what someone else's motivations are when it contradicts their own story.

Edit: s/it's/his reasons include/ for clarification.

He's an older gay man; I would be shocked if this didn't greatly influence his opinion in this realm. He's lived through some times that weren't too friendly to those of "his kind" who were open.

He's pretty much said so:

We still live in a world where all people are not treated equally. Too many people do not feel free to practice their religion or express their opinion or love who they choose. A world in which that information can make a difference between life and death. If those of us in positions of responsibility fail to do everything in our power to protect the right of privacy, we risk something far more valuable than money. We risk our way of life.

See pages such as http://qz.com/344661/apple-ceo-tim-cook-says-privacy-is-a-ma...

Growing up a target sure changes the way you look at everything.

Not more to be trusted than anyone else, perhaps. But the claim was, "the only reason he's doing it is because it would cause a significant drop in sales for Apple devices" (emphasis added). I can trust him no more than I trust anyone else, and still not be totally cynical as to his motives. They're mixed, they're not completely altruistic, but they're (probably) not completely mercenary, either.

"more to be trusted than anyone else"

Cook's responsibility is first and foremost to the stockholders, and secondarily to the customers. Decrypting the iPhone would seriously compromise the security of Apple's products, gravely damage the company's credibility, hurt sales, and drive the stock price down.

No CEO is going to take such a drastic step unless they are a craven, cowardly type who meekly obeys ask-for-the-sky demands from overbearing federal law enforcement types, and Cook surely did not rise to his current position by being a pushover.

That's not to say there won't be some kind of secret deal made behind closed doors, but secrets tend to get out. Apple would not be so foolish, I think. Yahoo? Microsoft? They just handed over the keys to their email to anyone who demanded it -- the Chinese government, the NSA -- but Apple has no history of this type of behavior. Surely Snowden would have revealed it if they had.

Why would it cause a significant drop? Where would those people go?

Not saying I necessarily agree (or disagree), but the premise is that right now iOS phones are the only ones that do security right, out of the box. They're worth the premium price for that feature. As soon as they no longer have that edge, there's no reason to choose them over any commodity Android device, so sales would drop as they lose their differentiating feature.

Honestly, I doubt that a significant number of people are buying the iPhone for its security features.

Yeah, I don't think so either. Just explaining the parent's line of reasoning.

Just like people buying Beats headphones for sound quality (they're not).

People that really care about security don't use smartphones.

No; they buy them because they're more intuitive to use, have a higher-quality app ecosystem, and "just work" for most people.

They'd go to Android. Apple only has a significant share of mobile users in the USA. Most other countries they are losing (globally they have something like 8% vs Android's 85% market share).

>Most other countries they are losing (globally they have something like 8% vs Android's 85% market share).

I'm not sure this is true once you factor in price range. The conventional wisdom is that a lot of those Android devices are sub-$250.

> Most other countries they are losing

Maybe by share of units shipped. By revenue share they dominate, and their margins are estimated to be very good.

You're probably right, but it's great to explain this decision in economic terms. It'll get through to people who think privacy is only for "good" actors, and to those who don't like the government hurting businesses.

Tim answers to shareholders. Shareholders look at the bottom line and not his character.

This is a good way to think about things skeptically, but to say it as though it is fact is misleading. Is it that unlikely that a business professional can frame his beliefs and knowledge in this domain in such a way to justify himself to the shareholders? Or are we all just too skeptical that people in power act against morality whenever easy or possible?

Shareholders, which includes Tim Cook, all get to choose what they care about. Humans have complex motivations.

Let's suppose you can see into Tim Cook's heart and this is true.


I understand wanting to know people's motivations, from both the perspective of predicting future action and just because we're nosy monkeys. But frankly, what's in Cook's heart doesn't matter. Actions do. And to date, in my view, he's done pretty much exactly the right thing on this issue all along.

Maybe he's defending customer privacy because he believes the Lizard People have religious objections to invading until all humans have Freedom of Math. It simply doesn't matter.

Being the vocal advocate to stand up to the gov't IS risky. The wording is carefully crafted to prevent spin damage (pro-terrorist, anti-law enforcement).

> the only reason he's doing it is because it would cause a significant drop in sales for Apple devices. People overseas would immediately stop buying because "The American government is listening", and that's assuming countries like China and Russia wouldn't ban them outright.

That hasn't happened with other devices or earlier iPhones that aren't as secure.

I don't see how this is "reassuring"; to me it's actually quite confusing (as mentioned in many other comments).

If Apple could in fact write a software backdoor, doesn't it mean that the backdoor exists, at least potentially?

And how can one be sure that Apple is the only company able to build that door? At the very least, couldn't the right Apple engineer be either bribed or forced (by terrorists or the government) to build it?

"Impossible" should mean "impossible", not "not yet done, but possible".

What it means is that the best the FBI can come up with is "Make a way for us to brute force attack the passphrase." And brute force attack is worthless for a strong enough passphrase. That's what is reassuring.
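To put rough numbers on that (a back-of-the-envelope sketch; the ~80 ms per attempt is the on-device key-derivation cost Apple's iOS Security Guide cites, and the passphrase alphabet is my assumption):

```python
SECONDS_PER_ATTEMPT = 0.08            # ~80 ms per passcode try, per Apple's iOS Security Guide
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def worst_case_years(keyspace: int) -> float:
    """Worst-case time to exhaust a keyspace at one attempt per 80 ms."""
    return keyspace * SECONDS_PER_ATTEMPT / SECONDS_PER_YEAR

pin4 = 10 ** 4          # 4-digit PIN: 10,000 combinations
passphrase = 36 ** 10   # 10 characters drawn from [a-z0-9] (assumed alphabet)

print(f"4-digit PIN:        {pin4 * SECONDS_PER_ATTEMPT / 60:.1f} minutes")
print(f"10-char passphrase: {worst_case_years(passphrase):.2e} years")
```

Even with the retry delay stripped out, the PIN falls in minutes while the passphrase stays out of reach unless the derivation can be run off-device at vastly higher rates.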

Not to mention that this is for the iPhone 5c. As other comments have mentioned, newer iPhones have the hardware-based Secure Enclave which add to the difficulty of breaking into the phone. https://www.apple.com/business/docs/iOS_Security_Guide.pdf

My reading of that PDF is that Secure Enclave software can be updated too; it simply does an independent verification of Apple's digital signature.

So while the Secure Enclave enforces the delay between brute force attempts, Apple could still release an update that removes that delay.

It could also be that the FBI has already obtained the info, or can obtain it through illegal means, and simply wants to use the issue to force legislative, policy, or public-opinion changes in their favor, and/or to legitimize their having the information.

In the world of cryptography, it is always possible, because you can always be lucky and guess the right "unlock" code. In fact, social engineering is normally used to find the right "unlock" code[0].

The FBI can also unsolder the components in the phone, make a full image of the contents, find the encrypted section, and then brute-force. This is what is done for SSDs: they do not power up the drive; they unsolder the memory modules, put them in a special reader, and copy the data before the SSD's controller can automatically wipe it during the optimization that follows a delete/TRIM.

[0]: https://xkcd.com/538/

It's kind of hard to social engineer dead people, though.

Hence the "normally" in my sentence. But the point I really wanted to make is that there is no "impossible" with encryption.

Dead people were once alive. Go through their pockets or their apartment.

You can go through each and every physical object I own or even was in contact with, but you won't find any of my passwords

What happens to all your stuff when you die?

I guess my family will take care of my physical stuff. For the online part, some of it can probably be handled through support (facebook, etc...), and the rest will stay as is until it is deleted for lack of use. Or never deleted. Both are okay.

Leaving a physical trace of my passwords is not only bad practice from security point of view, but quite useless since I know them. Also, my online accounts are useless if I can't use them because I'm dead, so I don't really care if no one can access them anymore. What happens to important things such as banking is already dealt with.

I just got a Facebook birthday notification for my cousin, who died three years ago. So, that has some impact on me and others in our extended family. Maybe that's ok, maybe it'll get weirder at some point; but it's definitely something to think about.

You can have FB set the page as a memorial, and I believe that also gives you the option to disable all those types of notifications.

That's a very real issue indeed. Sorry for your loss.

But I think the right solution here would be for Facebook to have a way of handling deceased people, not giving your password to everyone in case of sudden death.

They do. If someone contacts Facebook and lets them know that a person has died (with some sort of verification), they will memorialize the account.


If facebook doesn't want to delete his account, why don't you unfriend him?

You can have the account memorialized (and/or removed): https://www.facebook.com/help/150486848354038

Would you?

Why care about "stuff" once you are dead?

Because you don't want to make a difficult situation even harder for your relatives?

See, for example, the people who know they're going to die and who leave their iPads to their relatives in their wills. Apple doesn't take grants of probate as sufficient legal documents (everyone else, e.g. banks, does) and insists on a court order.


Significant other? Kids? Relatives?

Well your opinion is kind of moot at that point.

With a $5 wrench? What's that going to get you?

The media already did that :P

From a forensics point of view that is a very risky path. You risk destroying the evidence and you're tampering with it.

It's not a backdoor, it's a frontdoor. In cryptography, there's no way to make repeated attempts more computationally expensive. The lockout is just an extra feature Apple put on, that Apple could easily remove. If we're going to have 4- and 6-digit PINs, there is no way to stop a dedicated attacker from brute-forcing it. None.

You can't make crypto behave slower on repeated attempts but you can still make each attempt more expensive. For example: https://en.m.wikipedia.org/wiki/Pepper_(cryptography)
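A toy sketch of the "discarded pepper" idea from that link (the pepper size and PBKDF2 parameters are arbitrary choices for illustration): the hasher appends a small random value and throws it away, so every verification, legitimate or hostile, must grind through the whole pepper space.

```python
import hashlib
import os
import secrets

PEPPER_BITS = 8  # each check must try up to 2**8 peppers, multiplying the attacker's cost too

def hash_password(password: bytes, salt: bytes) -> bytes:
    """Hash with a random one-byte pepper that is deliberately discarded."""
    pepper = secrets.randbelow(2 ** PEPPER_BITS).to_bytes(1, "big")
    return hashlib.pbkdf2_hmac("sha256", password + pepper, salt, 10_000)

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    """Re-derive with every possible pepper; an attacker pays the same price per guess."""
    return any(
        hashlib.pbkdf2_hmac("sha256", password + p.to_bytes(1, "big"), salt, 10_000) == stored
        for p in range(2 ** PEPPER_BITS)
    )

salt = os.urandom(16)
stored = hash_password(b"hunter2", salt)
print(verify(b"hunter2", salt, stored))  # True
print(verify(b"wrong", salt, stored))    # False
```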

True. But Apple, with such a focus on UX, cannot reasonably afford more than ~200 ms when checking a password; and it still scales linearly, so the solution for concerned users still involves creating a more complex password. Doubling the amount of time it takes to hash a password has the same effect as adding one more bit of entropy to the password, which can easily be beaten by adding a single character to it.
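The arithmetic behind that claim, as a quick check (pure entropy accounting; the only assumption is a 26-letter alphabet):

```python
import math

# Doubling the per-guess hash time halves an attacker's guesses/sec:
# exactly the slowdown of one extra bit of entropy.
slowdown_bits = math.log2(2)

# One extra character from a 26-letter alphabet multiplies the keyspace by 26.
extra_char_bits = math.log2(26)

print(f"doubling hash cost = +{slowdown_bits:.0f} bit")
print(f"one more character = +{extra_char_bits:.2f} bits")  # ~4.70
```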

If you consider caching of keys, there's no reason that the first login attempt after a cold boot couldn't take 1-2s. Each subsequent login would be roughly instant.

"there's no way to make repeated attempts more computationally expensive"

That's not true actually. For example, the industry standard for storing passwords on a server (bcrypt) is specifically designed to slow down password match attempts.

It is true. You're confusing making _repeated_ attempts progressively more expensive with making all attempts more expensive to start with

Ah yes. You are right, I was confusing those two things. Thanks for the clarification!

Bcrypt isn't an industry standard.

  > no way to stop a dedicated attacker from brute-forcing it
Wipe after x incorrect? Can't stop the attacker, but you can make it futile, surely.

Nope. If the attacker (Apple in this case) can replace the OS, they will just do so before the phone gets wiped—replacing the OS will remove that wipe feature.

Not if the check and wiping is done in hardware as claimed by Apple for newer devices than the one in question here.

Meh. Then the attacker can simply replace the hardware. Remember, our attacker model is Apple; non-cryptographic security measures mean very little to a company with such complete knowledge of the hardware and software involved.

Nope. On newer devices the key is derived from a random key fused into the SE during manufacturing, a key fused into the ARM CPU, and a key randomly generated on-device during setup (derived from accelerometer, gyro, and altitude data) and stored in the SE. The SE's JTAG interface is disabled in production firmware and it won't accept new firmware without the passcode.

You can't swap the SE or CPU around, nor can you run the attempts on a different device.

Can't you? Seems like the kind of problem you can point an electron microscope at, and perhaps some very high precision laser cutting. In any case, I imagine if you are willing to spend resources on it, you could read the on-chip memory somehow and start cryptanalysing that.

Against a sufficiently capable adversary, tamper-resistance is never infallible, but good crypto can be.

    > Against a sufficiently capable adversary, tamper-
    > resistance is never infallible, but good crypto can be.
Nonsense, it all comes back to "sufficiently capable", every time.

To a sufficiently capable adversary, _all_ crypto is just "mere security by obscurity".

"Oh, mwa-haha, they obscured their password amongst these millions of possible combinations, thinking it gave them security - how quaint. Thankfully I'm sufficiently capable.", she'll say.

The point is that the key is stored there too (part of it burned into the silicon during production) and can't be read or changed.

Sure, if they wanted to they could implement a backdoor. But assuming they correctly created and shipped the secure enclave it shouldn't be possible to circumvent it even for Apple.

It's sounding like that's the problem. They left an opening for this sort of thing by allowing firmware updates to the secure enclave. That basically makes it a fight over whether the FBI can force Apple to use the required key to sign an update meeting the specifications the FBI directs.

Well, I read elsewhere in this thread that updating the firmware of the secure enclave wipes the private key contained within, which means you've effectively wiped the phone.

You need to unlock the phone before you can update the firmware on the secure enclave.

Secure Enclave is not really ‘hardware’; despite being isolated from the main OS and CPU, it is still software-based and accepts software updates signed by Apple.

If those software updates force it to erase its private key store, though, then it's functionally isolated from that attack vector. An updated enclave that no longer contains the data of interest has no value.

Ok, if that part is updatable you have indeed a backdoor.

In theory it should be possible to make it fixed (which Apple doesn't seem to have done).

Making it fixed just means you can't fix future bugs. The secure approach is to ensure that updates are only possible if the device is either unlocked or wiped completely.

The Secure Enclave enforces time delays between attempts.

Not possible without baking it into iOS via some update that the FBI claims will be used on one particular phone in this particular case.

Apple reasons that there is no way to guarantee no one will take the same update and apply it to other iOS devices. Or the government could take this a step further by forcing Apple to build it into a future update for the whole user base.

It's not a backdoor to the phone only being unlocked by the passphrase, but a backdoor to the number of attempts limitation.

This limitation must be built into the security hardware used by the iPhone, so that software can't do anything about it. I was under the impression that that's how the iOS security model works. If it's not, and this check is in fact implemented in iOS itself, it's much weaker protection and really looks like an intended backdoor from Apple.

It sounds like it is built into hardware with newer iPhones containing the secure enclave, but not for an older phone like the iPhone 5C.

It's not really built into ‘hardware’, it's enforced by the Secure Enclave, which is software-based and accepts software updates signed by Apple. It's secure against kernel exploits and third-parties, but not against Apple.

I'm really interested to know more about this. Does TouchId secure enclave really enforce the password attempt limits?

It's really a pretty impressive design. Android phones are lacking here.

They didn't make any mention of how feasible it would be, just that they wouldn't even try because it would threaten the security of their users.

Let me correct it for you. "Perceived security of their users".

A lot of it is about PR.

> If Apple could in fact write a software backdoor, doesn't it mean that the backdoor exists

Schroedinger's Backdoor? ;)

Only Apple has the ability to sign updates to software (barring jailbreak).

It sounds like it'd be trivial to nop out the timer on repeated passcode attempts. Which makes sense... Leaving any short passcode trivially crackable.

Yes, but where would you nop? You can't statically analyse the code because the image is encrypted at rest (and potentially partially in RAM too?).

The code which decrypts the system (and is responsible for wiping the drive on repeated failures) is definitely not encrypted. How would it otherwise be able to take the input in order to decrypt the drive?

Yes, but that code is all running in RAM, precluding static analysis. You can still dynamically analyse it, but that is much harder.

The way I understand it (and correct me if I'm wrong) is that the code flows from disk through the AES engine, where it is decrypted and then placed in a presumably interesting/hard-to-reverse place in RAM, at which point it is executed. I imagine even more interesting things are done to higher-value data in RAM, but that's not code; because, as you said, code has to be decrypted (at the latest) by the time it reaches the registers.

Their security PDF says that the system has a chain of trust established, anchored at an immutable loader residing inside the chip, and each step verifies the digital signature of the next step against the hardcoded Apple CA certificate.
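A toy model of that chain of trust (the real chain verifies asymmetric signatures against Apple's CA certificate; the HMAC here is just a stand-in so the sketch stays self-contained, and the stage names are illustrative):

```python
import hashlib
import hmac

ROOT_KEY = b"burned-into-the-chip"  # stand-in for the boot ROM's immutable trust anchor

def sign(blob: bytes) -> bytes:
    return hmac.new(ROOT_KEY, blob, hashlib.sha256).digest()

def boot(stages):
    """Verify each stage's signature before 'running' it; one bad stage halts the chain."""
    for name, blob, sig in stages:
        if not hmac.compare_digest(sign(blob), sig):
            return f"halt: {name} failed verification"
    return "booted"

chain = [(name, blob, sign(blob)) for name, blob in
         [("LLB", b"low-level bootloader"), ("iBoot", b"iboot"), ("kernel", b"xnu")]]

print(boot(chain))  # booted

# Swap in a tampered kernel that still carries the old signature:
tampered = chain[:2] + [("kernel", b"patched xnu", chain[2][2])]
print(boot(tampered))  # halt: kernel failed verification
```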

So to clarify you're saying that you can't just nop instructions, or it's not as simple as that, right?

I agree. I was under the impression that Apple's security was such that even they didn't have the power to decrypt a device because the crypto made it impossible without the password/pin/key. I'm interested to understand the reasons that it was not done this way.

The current implementation is done this way indeed: even Apple cannot decrypt without using the right password.

The right password can be obtained by either knowing it, or by guessing it.

As an additional security measure, the software shipped with the phone prevents brute force attacks by wiping the device after a given number of failed attempts.

Apple has been asked to modify the software so that it won't wipe the phone, thus allowing the authorities to try many passwords.

If anybody could circumvent this additional security measure, actual security would be lower. The authorities are not asking Apple to ship this change to all users: they only want to install it on the device in their possession.

However, Apple is concerned that once they provide the authorities such a modified software, it could be leaked and thus be used by third parties to breach the security of any Apple device.

It should be noted that all data encrypted on, for example, your laptop's hard drive is already subject to this kind of brute-force attack. It's a well-known fact that authorities or malicious users can already attempt brute-force attacks on encrypted data if they can access it on a passive device such as a hard drive.

It's important to understand that Apple (at least in this case) is not being asked to implement a backdoor in the encryption software. It's also important to understand that even if Apple were forced to install a backdoor, it would only affect the ability to access future data and would not help the investigation of the San Bernardino case.

This very request by the Authorities suggests that Apple does not currently install any backdoor on stock phones.

However, there is a logical possibility that the authorities are either not aware of any such backdoor or, in the worst case, are publicly requesting this feature just to hide the real reason for a possible future success at decrypting the phone: they could claim that Apple didn't have any backdoor and that they were just lucky at brute-forcing the device; after all, Apple wasn't even cooperating with them to relax the brute-force prevention limit, so they could claim they did it in-house.

(I'm personally not inclined to believe in such improbably well coordinated smoke and mirrors strategies, but they are a logical possibility nevertheless).

Why doesn't the FBI simply clone the current device, make brute force attempts and then clone again if locked out? Yes, lots of work but also doesn't force Apple to participate.

I thought @csoghoian's take was interesting. The FBI doesn't want one specific phone unlocked, they want precedent.


The iPhone uses AES encryption which would prevent cloning the flash storage [1]. There was an informative discussion of this over on AppleInsider - http://forums.appleinsider.com/discussion/191851

[1] http://www.darthnull.org/2014/10/06/ios-encryption

from that discussion, super informative details on how secure enclave is actually implemented:


Would it be infeasible to construct part of the private key out of hardware-specific id's + time-of-creation hashes? I assume it's not only the PIN?

That's what the A7 (iPhone 5S and later) design does:

“Each Secure Enclave is provisioned during fabrication with its own UID (Unique ID) that is not accessible to other parts of the system and is not known to Apple. When the device starts up, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave’s portion of the device’s memory space. Additionally, data that is saved to the file system by the Secure Enclave is encrypted with a key entangled with the UID and an anti-replay counter.”


The device in question is an iPhone 5C, which uses the older A6 design.

Thanks for the link! I knew there had to be more technical information out there but couldn't find it on an initial search.

Yeah, it wasn't exactly unknown before but it wasn't terribly common outside of certain security / compliance circles. I think I've seen more links today than in the previous year.

Each device has a device-specific AES key (UID) burned into it such that you cannot clone devices (or move flash chips between them).

Everything is encrypted with a derivative of this UID, and extracting the UID is not a thing you can do without destroying the device.
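A sketch of why that defeats cloning (the KDF and iteration count are placeholders; the real derivation runs inside the AES engine, per Apple's iOS Security Guide): the passcode key is entangled with the UID, so guesses only work on the original silicon.

```python
import hashlib
import os

DEVICE_UID = os.urandom(32)  # burned in at fabrication; never readable by software

def passcode_key(uid: bytes, passcode: str) -> bytes:
    """Placeholder KDF: the derived key depends on both the UID and the passcode."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid, 1_000)

key = passcode_key(DEVICE_UID, "1234")  # the key protecting the user's data

# On the original device, the 10,000-PIN space falls quickly...
on_device = any(passcode_key(DEVICE_UID, f"{p:04d}") == key for p in range(10_000))

# ...but cloned flash on other hardware means also guessing a 256-bit UID.
off_device = any(passcode_key(os.urandom(32), f"{p:04d}") == key for p in range(10_000))

print(on_device, off_device)  # True False
```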

I'm really curious as I have been hearing this many times.

"even Apple cannot decrypt without using the right password."

Could you please explain?

You can read more about it in this well written article: https://www.mikeash.com/pyblog/friday-qa-2016-02-19-what-is-...

It sounds like it has been built that way - the data cannot be decrypted without the correct passphrase - but the user secured the phone with a 4-digit PIN, giving a mere 10,000 possible combinations - easily brute-forceable.

The iPhone prevents this by locking up (and potentially erasing the phone) after 10 failed attempts, but this a restriction created in iOS. If they provision a new backdoored version of iOS to the phone, that restriction wouldn't apply any more, and they could brute-force away.

To clarify, on A6 and earlier this is enforced in software. On A7 and later this is enforced in hardware.

I think they're being asked for a few things—private keys and the ability to automate and not limit passcode entry attempts.

But why would you even think Apple, Google, or Facebook would be a good bet to defend your privacy in the first place? They have a terrible track record of not caring about it.

If you have things that you need to be private, don't put it on a smartphone.

I wish people would stop lumping Apple with Google/Facebook with regards to privacy.

Apple has implicitly for a long time, and lately much more vocally, cared about privacy. They don't have the same data-driven business model that Google and FB do.

> Apple has implicitly for a long time, and lately much more vocally, cared about privacy.

They say that. But with closed source software we can't verify that it's true. I'm not saying they don't care about privacy, only that we don't really know if they do or not.

With open source software, it doesn't appear that people can verify things are safe either given the long-term security issues with things like OpenSSL et al.

We found the bug in OpenSSL BECAUSE it was open source. If it weren't, nobody would have seen it.

Plus, with open source you can verify intent, which you can't with apple.

Which provides a device collecting your fingerprints, all your phone numbers, internet searches, bank details, some payments, network communications, voice communications, text communications, your location via GPS, WiFi, hotspots, and cell towers, and soon iHealth devices collecting body metrics.

And they are profit oriented, not people oriented.

> We found the bug in OpenSSL BECAUSE it was opensource.

Sure but they were there for years before anyone noticed. Same with PHP's Mersenne Twister code. Same with multiple other long-standing bugs. It's disingenuous to toss out "Oh, if only it was open source!" because reality tells us that people just plain -don't- read and verify open source code even when it's critical stuff like OpenSSL.

I never said they could. There is a better chance that they can, but that line of thinking ends up trying to prove a negative.

Actions speak louder than words. The most revealing test of the strength of a company's commitment to privacy is how it handles situations when privacy can conflict with profits. Privacy on the internet relies critically on browsers only trusting trustworthy certificate authorities. When CNNIC breached its trust as a certificate authority last year, Apple sat tight waiting for the furor to subside (https://threatpost.com/apple-leaves-cnnic-root-in-ios-osx-ce...).

I would argue that handling security problems in general has not been Apple's strength historically.

I agree that failing to fix a problem like this in a timely fashion is bad, but sins of omission are generally judged differently than sins of commission, for better or worse. Apple failing to apply proper prioritization to security holes isn't the same as Apple collecting data to be sold to the highest bidder.

So, again, Apple should not be treated as equivalent to Google and Facebook. Feel free to judge them harshly, but don't paint them with the same brush.

"...and disabled Touch ID - which is hackable with a bit of effort to get your fingerprint."

As long as we're on the topic of encryption, phones, and law enforcement it's worth keeping in mind that in the US at least courts can compel you to unlock your phone with Touch ID, even though they can't compel you to give them a password. Communicating a password is considered speech, so self-incriminating speech is protected by the fifth amendment. Physically holding your finger to a device is not considered speech and so it's not protected.

I think this is an interesting, and perhaps underappreciated, aspect of a shift from passwords to biometrics for verifying identity. It would shift the power dynamic between civilians and government a bit - here in the US at least.

Of course, hopefully no one is in a situation where they need to protect themselves against over-reaching or unjust government officials any time soon.


When entering any questionable law enforcement situation (TSA, walking near a protest, traveling internationally) I always switch my phone to not use TouchID. Say what you will.

Absolutely. I'm quite amazed they had the guts to go through with this and I applaud it. I will support them with my dollars as much as possible.

Talk is cheap and these internet posts - from Tim Cook, you, and me - are just talk. The security of this nation depends on Apple (and Google et al.) supporting us with their dollars as much as possible.

And I'm not optimistic that the stockholders care about anything more than doing the opposite.

Talk is not cheap when you have to back up such talk with a substantial legal defence.

And any stockholder with an ounce of intelligence will understand that Apple's choices here are both morally right and effective marketing. As the information age matures, privacy is becoming a valuable asset, and Apple is starting to gain a positive reputation in this space.

Totally agree. As someone who is currently an Android fan, I find that Apple's apparent commitment to privacy is appealing, and it could eventually mean I buy an iPhone. Assuming I'm not the only consumer who has made this connection, then Apple has very legitimate business reasons for continuing to trumpet their commitment to privacy.

>And I'm not optimistic that the stockholders care about anything more than doing the opposite

AAPL is one of the most mainstream, widely-held stocks on the planet. Assumptions about the opinion of some homogeneous "stockholder" are useless.

I'm positive HN is filled to the brim with AAPL shareholders who care deeply about this issue.

Some people already complain that the iPhone is too expensive, especially compared to non-equal Android phones. Not enough people complain about any company not fighting for their users' security.

Talk is all we need so far. Publicly saying no to the FBI is a step rarely taken.

I am but one data point, but I recently had the opportunity, as a stockholder in a fund, to cast a vote on a shareholder-led proposition. Voting in favour of the proposition would have removed a set of investment choices that the fund managers could pursue on our behalf. The fund managers (I forget the exact terminology for what they were; maybe the word "board" was used) recommended that shareholders vote against.

I voted for, and to my surprise so did a majority of other stockholders, and that avenue of opportunities was removed.

What it sounds like is they've been asked to prepare a new OS release that allows an unlimited number of attempts to enter the passphrase via some network link. The press release is written to sound like without a software release, it wouldn't be possible to mount this kind of attack, however attacks like this are generally possible regardless of having some specially modified and signed OS image: for example, by cutting power to the hardware precisely when it is clear a password was incorrect, before the hardware has time to implement any destructive actions. Attacks like this have been used against SIM cards since the 90s.

I'm ambivalent regarding Apple's stance. In principle they are doing the right thing, but in practice it seems they may be kicking up a whole lot of fuss over a relatively minor issue (with the exception that providing the authorities an easy means to brute-force a phone sets a horrible precedent). As for creating a universal backdoor, it seems highly unlikely they couldn't produce a signed OS/coprocessor firmware image locked to one of the various serial numbers associated with this particular device.

edit: as mentioned below, this order entirely originates with Apple's use of DRM to prevent software modification. Had users actual control over the devices they own the FBI wouldn't need to request a signed firmware in the first place. Please think twice about what Apple might really be defending here before downvoting

> with the exception that providing an easy means to brute force a phone to the authorities sets a horrible precedent

This is the entire concern (in my opinion and in my reading of Tim Cook's opinion). If the government can force Apple to backdoor this one iPhone (because terrorist), then they can force Apple to backdoor any iPhone for any person given a valid warrant, subpoena or otherwise granted power. Once the flood gates open...

It's worse than that. There's no guarantee that "the government" is "your government".

Imagine this scenario:

1.) Apple creates the custom iOS build for the FBI to use to decrypt this iPhone.

2.) China hacks into either Apple or the FBI and downloads this build. (We know they have the capability, because it's already happened. [1])

3.) A visiting U.S. diplomat, politician, or military officer has his iPhone pickpocketed while in China. (This also happens all the time.)

4.) The Chinese government uses this stolen software to brute-force the encryption on the device, finding access codes for classified U.S. military networks. (Because we know U.S. diplomats never use their personal email for state business [2], right?)

5.) Now a foreign power has access to all sorts of state military secrets.

The problem with backdoors is they let anyone in. Right now, there's a modicum of security for Apple devices because knowledge of how you would bypass the device encryption is locked up in the heads of several engineers there. The FBI is asking Apple to commit it to source code. Source code can be stolen, very easily. Tim Cook's open letter is making the point that once this software exists, there is no guarantee that it will stay only in the hands of the FBI.

[1] https://en.wikipedia.org/wiki/Operation_Aurora

[2] http://graphics.wsj.com/hillary-clinton-email-documents/

> knowledge of how you would bypass the device encryption is locked up in the heads of several engineers there

WARNING — THIS Apple Engineer IS CLASSIFIED AS A MUNITION

    --rsa--------------------------------8<-------------------------------------
    #!/usr/local/bin/human -s-- -export-a-crypto-system-sig -RSA-in-3-lines-HUMAN
    ($k,$n)=@ARGV;$m=unpack(H.$w,$m."\0"x$w),$_=`echo "16do$w 2+4Oi0$d-^1[d2%
    Sa2/d0<X+dLa1=z\U$n%0]SX$k"[$m]\EszlXx++p|dc`,s/^.|\W//g,print pack('H'
    ,$_)while read(STDIN,$m,($w=2*$d-1+length($n||die"$0 [-d] k n\n")&~1)/2)
    -------------------------------------8<-------------------------------------
    TRY: echo squeamish ossifrage | rsa -e 3 7537d365 | rsa -d 4e243e33 7537d365

FEDERAL LAW PROHIBITS TRANSFER OF THIS APPLE ENGINEER TO FOREIGNERS

That is the whole point of warrants. If Apple had made the iPhone with an unlocked bootloader, keeping intruders out would have been truly impossible.

It used to be possible to do what you describe (cutting power at the right moment), and there were even physical brute-force devices built to implement this, but it's all been hardened in iOS 8 and there are no currently known ways to brute-force a passcode. Obviously there might theoretically exist a software bug to do so, but there's no information around, and it sounds like even the FBI couldn't find it if it exists.

Could you elaborate on how that's prevented? I'm quite curious. [Or did someone already explain in some other post somewhere in this thread?]

edit: self-answer: the following post seems to have an answer: https://news.ycombinator.com/item?id=11115579, although it seems to describe a newer device than the one in the case; but I was interested in how such protection is possible at all, so that seems to answer it for me.

> they may be kicking up a whole lot of fuss over a relatively minor issue

I strongly disagree. They are taking a stance in the debate about government mandated backdoors in software.

Did you read the release? They are up front that it's entirely an issue about setting a bad precedent. It's completely and totally about the fact that it would be used over and over again, and nothing to do with the fact that it is possible. Your overly cynical stance on this is misguided, as you seem to not have grasped the information in the letter.

The court order says that the software must only work for the specific device in custody. Apple is not supposed to create a general tool.

If you believe that the FBI wouldn't abuse a tool like that after the past few years of coverage of the Security sector there is really very little hope for you.

Personal attack much?

The order says that Apple's exploit should only work on this specific, already existing device.

The problem is with the legal precedent this would set. Although right now it's limited to one specific use case, this ruling could and would be used in the future to require Apple (and other tech companies) to compromise security in ever-increasing scope.

> sets a horrible precedent

That's a large part of the fuss!

Hypothetically, it should be relatively simple to prevent a power-off-dodges-destructive-action attack, by simply making the operation (incrementing and storing the attempt counter, checking the password) an atomic operation.

So they would still get 10 bites at the cherry, and sure, on the tenth, they could depower the phone and prevent the wipe, but if each attempt is persistently stored before the password-check is carried out, depowering the phone wouldn't give them any more chances.
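A minimal sketch of that ordering (illustrative Python, not how iOS actually implements it): the incremented counter is committed to durable storage before the guess is checked, so a power cut after a wrong attempt cannot win back a try.

```python
# Illustrative sketch only (not iOS code): commit the incremented attempt
# counter to durable storage *before* checking the guess.

MAX_ATTEMPTS = 10

class PasscodeGuard:
    def __init__(self, secret, store):
        self._secret = secret
        self._store = store  # stands in for NVRAM/flash that survives power loss

    def try_unlock(self, guess):
        attempts = self._store.get("attempts", 0)
        if attempts >= MAX_ATTEMPTS:
            raise RuntimeError("attempt limit reached: device would wipe")
        # Step 1: durably record the attempt FIRST (a real device would
        # flush/fsync here before proceeding).
        self._store["attempts"] = attempts + 1
        # Step 2: only then evaluate the guess.
        if guess == self._secret:
            self._store["attempts"] = 0  # success resets the counter
            return True
        return False
```

Depowering the phone between steps 1 and 2 changes nothing: the attempt has already been spent.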

The government wants Apple to disable the auto-erase after so many unlock attempts. Apple argues in this letter that with modern computing power, this amounts to a backdoor.

The details of the gov't request are in another story on the HN front page


The encryption key used on the root filesystem is too hard to brute force. It's not based on some crappy password that a human created, it's some hash value stored in the hardware. In a scenario like that it is easy to create a key that would require all of the computers working till the heat death of the universe to crack.

They're not trying to brute-force the encryption key. They're attempting to brute-force the PIN lock on the phone. Currently they can't because the settings on the phone cause a timeout after each unsuccessful login, the phone wipes after 10 failed attempts, and the PIN cannot be accepted from anywhere except the device display. These are the security functions that the FBI wants Apple to remove. They want to be able to hook up a computer via the lightning cable to brute-force the PIN and give them access to the phone's contents.
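A quick back-of-the-envelope comparison of the two search spaces (the ~80 ms-per-attempt figure comes from Apple's iOS security guide; everything else here is illustrative):

```python
# Rough numbers for the two attack surfaces, assuming ~80 ms per attempt
# imposed by the key-derivation calibration. The point: the FBI wants to
# search the PIN space, not the AES key space.

GUESS_SECONDS = 0.080

def brute_force_seconds(search_space: int) -> float:
    """Worst-case time to try every candidate."""
    return search_space * GUESS_SECONDS

pin_space = 10 ** 4   # every 4-digit PIN
key_space = 2 ** 256  # every raw 256-bit AES key

print(brute_force_seconds(pin_space) / 60, "minutes")  # roughly 13 minutes
print(brute_force_seconds(key_space) / (3600 * 24 * 365.25), "years")
```

The first number is minutes; the second is astronomically beyond the age of the universe, which is why the PIN lock, not the cipher, is the target.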

Like you, I appreciate the sentiment - happy to hear Apple speaking up. However this shouldn't change how we use Apple products. I operate under the implication that the device is compromised from the factory. Closed source software cannot be trusted, good faith is not enough.

I can't disagree with your point of view; I merely point out that a world where everyone who wants a secure smartphone needs to own their own smartphone factory isn't a very practical world.

Only needs to have access to software source code, and presumably a way to verify the binaries match. Not an entire factory.

They also need to verify the hardware does what it's supposed to do, and the CPU doesn't have secret opcodes.

Considering how many massive, gaping security flaws have been found in Open Source software in the last year or so alone that have been in place for years or decades, I think we can say that open source software cannot be trusted either.

Open source software can be trusted as much as we can trust ourselves - and that's as good as trust gets. I'll take that any day over trusting one party with a vested interest in making money.

> here's hoping against hope that Google, Facebook, and Amazon get behind this

I would assume that Google, Facebook, and Amazon are already sending every single keystroke we type straight to all the three letter agencies pretty much in real time. (I also assume the same about Apple, so I'm not sure what to make of this open letter.)

Why would Google and Facebook get behind this? They store their customers' data in a way they can access and subsequently have to give it to persecuters when there's a court order

If government agencies have unfettered access to data, it undermines the reputation of these companies. For instance, in Europe, many companies and organisations are likely even more hesitant to use Google's services since the Snowden leaks. This is a large loss of potential revenue.

Google has been (even more) proactive about security and encryption since then, since part of their business model relies on trust.

> Why would Google and Facebook get behind this? They store their customers' data in a way they can access and subsequently have to give it to persecuters when there's a court order

Exactly like Apple. Or do you think that the emails in iCloud are not given to the prosecutors?

Not exactly like Apple.

Google and Facebook's core competency is using your personal data to sell ads.

Apple's core competency is selling you appliances. Yes, they wind up with some personal data because of the services they also provide, but it's far less valuable to them than Google or Facebook.

> They store their customers data in a way they can access and subsequently have to give it to persecuters

What does your post have to do with this claim?

Yes, exactly like Apple. To quote the link: "When the FBI has requested data that’s in our possession, we have provided it."

The point I was trying to make is that Google and Facebook have direct access to all of their customers' data and already provide access to government agencies. Unlike Apple, they don't store some of their customers' data safely on the device, which is what this case is about.

> The point I was trying to make is that Google and Facebook have direct access to all of their customers' data and already provide access to government agencies. Unlike Apple, they don't store some of their customers' data safely on the device, which is what this case is about.

Your point is wrong regarding Google and smartphones if the smartphone is encrypted

There's a huge distinction between Google (Android) and Apple (iOS) though: Apple affirms they don't have your keys, and this case bears that out (else the FBI would obtain the keys via subpoena to Apple rather than asking the court for a circumvention tool). Google is ambiguous about whether they have your Android keys; they claim they don't, however if you forget your device password it is possible to unlock your device via your Google account on a PC[1], and that alone is telling. If this were an Android device, the FBI would have already unlocked the phone with a simple subpoena.

Beyond that, Google definitely has the keys to your encrypted backups on their servers, so access to the phone might not even be necessary.

[1] http://visihow.com/Recover_Android_Device_in_case_of_Forgot_...

> For either company to unlock the device without the owner’s permission the smartphone or tablet must not be encrypted, according to the report.[0]

[0] http://www.theguardian.com/technology/2015/nov/24/google-can...

From your link:

"The situation is different for Android. Google’s version of Android, which runs on most Android smartphones and tablets in the western world, only implemented encryption by default with the latest version Android 6.0 Marshmallow released in October 2015."

That version of Android is only on a handful of devices, not even a full percentage point of global market share. Even on Lollipop and older devices that do support encryption, it has to explicitly be turned on by the user. And once again, Google is not expressly clear that they don't have your encryption keys on Lollipop and lower; they only claim not to have them for Marshmallow devices. They definitely have the keys to your encrypted data on their servers no matter what, which can include complete backups of your device.

> That version of Android is only on a handful of devices,


> Google is not expressly clear that they don't have your encryption keys on Lollipop and lower;

They have explicitly said that if the device is encrypted they don't have the key.

> They definitely have the keys to your encrypted data on their servers no matter what, which can include complete backups of your device

Source for that?

If you can still access your backups after changing your password, you don't control the keys.

In other words, if you can ever actually use something, it's probably not secure.

Was going to point out that you spelt "prosecutors" as "persecutors", but then realised you might have genuinely meant that spelling!

I'm not a native English speaker, and that actually was just a typo.

Don't worry about it. Your misspelling is actually quite apt!

Because their SSL certificate authority is going to be the next one to be legally compelled to sign something.

I'm afraid I'm too skeptical to get the same assurances as you.

Apple accuses the FBI of playing language games with the term "backdoor", but I think Apple has done the same. The fact that they can push weak OS updates to a locked phone is the backdoor. This means that they can already comply with the court order, and they likely will. This letter covers them from PR damage.

I'm not sure you can draw the conclusion that Apple can push OS updates to a locked phone.

What Tim Cook wrote is:

> "install it on an iPhone recovered during the investigation."

> "the potential to unlock any iPhone in someone’s physical possession."

So the FBI has the physical phone already. They can deliver it to Apple, who can disassemble it and either use a JTAG/flash programmer on an internal connector to manually write new software, or desolder the flash holding the old OS and replace it with a new one.

Both of these techniques are common enough in the embedded industry that I expect this is what Apple means. They probably can't push an OTA software update and force the install on a locked device.

The 5C at issue in this case does not have the modern secure enclave like the 5S and newer devices.

The newer devices run a special L4 kernel on the secure enclave. It is not updateable without providing the existing passcode. It enforces the attempt rate limiting and key deletion on too many attempts (if enabled). Special limited communication channels allow the CPU to talk to the SE. In production devices the SE has JTAG disabled. Encryption and decryption of the master keys happen inside the SE with its own private AES engine so even oracle/timing attacks on the main CPU are useless.

Why doesn't Apple just help hack this phone but wash their hands of newer devices and tell customers to upgrade? Because if the FBI and this court get away with using the All Writs act to compel Apple to write new software they'll eventually be forced to add a backdoor to SE-equipped devices too. Courts won't understand or care about the differences.

If the government forced them, Apple could insert a backdoor into the next major version of iOS or the hardware; then everyone inputs their passcode during the upgrade and the backdoor is deployed. Their primary defense against that so far (and the only real one you can have as a corporation) is to never build the capability in the first place. This judge's order is telling them to go build the capability (in theory for this one phone). The fact that you can't retroactively build the backdoor for 5S and newer devices isn't the main issue.

Better to fight every step of the way and draft as many pro-privacy people as possible into the fight to apply political pressure.

> Because if the FBI and this court get away with using the All Writs act to compel Apple to write new software they'll eventually be forced to add a backdoor to SE-equipped devices too. Courts won't understand or care about the differences

The whole point is that it doesn't matter what the court thinks if Apple cannot comply due to the laws of nature. That was their whole argument to begin with. Their argument now is pretty mushy in comparison.

"I'm not sure you can draw the conclusion that Apple can push OS updates to a locked phone."

The iPhone contains a SIM card.

A SIM card is a complete, general-purpose computer with its own CPU and RAM and the ability to run arbitrary Java programs that can be uploaded, without your knowledge, by your carrier.

You are owned. Deeply, profoundly, in ways that you have no way to manage/mitigate.

The real question, for me, is why authorities are dealing with Apple at all and not just working with the carriers who have proven to be their trusted allies.

> You are owned. Deeply, profoundly, in ways that you have no way to manage/mitigate.

The international legal framework of sovereignty basically says you are owned. (Not universally de jure, but pretty much de facto.) Whatever rights you have are effectively granted to you by your country. Unfortunately, this notion is seldom given any thought, and the current most visible proponents of such an idea are unpleasant angry underclass men using it as an excuse to behave badly. There are others who have given thought to this, however, and it is part of the motivation behind such things as The Universal Declaration of Human Rights.


I think the answer to your question is embedded in your assumption: that updating the SIM card would be sufficient to recover data from this iPhone.

In my experience, law enforcement does not make their own jobs harder on purpose. If there is an easy way to get that data, they would use that way to get it.

With a sensibly-built phone, that SIM card does not have the ability to access anything of value on the device.

Is there a list of sensibly built phones available? I'd like to buy a phone where the modem and SIM do not have access to main memory (AIUI most phones use a single-chip SoC with a built-in modem).

What's the point of accessing main memory in a locked and encrypted phone?

Main memory is rarely encrypted, unless you have special security features in your CPU to do so. Only the disk is encrypted; main memory is vulnerable while running. Also see https://en.wikipedia.org/wiki/Cold_boot_attack

So you don't want any hardware to have access to main memory if it doesn't need to. For instance, you can use an IOMMU to ensure that devices can only access the specific areas the OS wants to allow them to DMA to/from, not all of memory.

> What's the point of accessing main memory in a locked and encrypted phone?

The phone isn't always locked and encrypted; for example, whenever the user is using the phone it's unlocked and decrypted.

The iPhone, for one.

> why authorities are dealing with Apple at all

I'd guess "security by obscurity". Just because they have the device rooted via SIM card doesn't mean they have available a signed build of a multi-gigabyte OS with most security libraries expunged.

One possible answer to your "real question":

I want to disclaim that this is pure speculation. I have no insider knowledge or indeed any particular familiarity with the institutions in question.

The FBI may want this authority and this precedent and think that this is a good chance to get it. They may say, "Well, the San Bernardino case is a high-profile case that may sway people, including judges, who would otherwise be less inclined to back our request. Who knows when the next nationally-publicized case will be in which the likely perpetrator carries an iPhone?" They may also believe that the current political climate is good for their case.

And they probably also believe that there's no harm in trying. If the courts rule against them, they haven't lost anything. If the courts rule for them, they get a brand new tool.

A SIM card gets to send messages to the baseband in response to requests from the baseband. It doesn't have arbitrary memory access unless the baseband has really nasty bugs.

They need to break the boot trust chain to load unsigned code. Simply rewriting the flash isn't enough.

Why would the code be unsigned? If Apple wrote the backdoor OS, they could presumably sign it.

I totally misread the OP and thought they were talking about the FBI flashing it themselves, without Apple's help. Yes, of course Apple can sign it. I stand corrected but can't delete my comment.

To clarify, I agree that nothing they ask of Apple is technically impossible or even that difficult for Apple to pull off, probably via simple DFU without touching the flash at all.

Hence the need for a valid OS from Apple.

Why load unsigned code? Can't Apple sign it?

Or anyone else with the Apple key, such as the NSA.

The judge wouldn't need to ask Apple if that were the case.

Or the NSA doesn't want to reveal that they have Apple's code signing key.

True, but a judge wouldn't allow the NSA to decrypt with a stolen key either.

That wouldn't be valid evidence in court.

Pretty sure you can upgrade the OS on a locked phone if you have physical access to it.

Negative. You need the passcode.

If you lose the PIN on an iPhone you need to do a wipe and restore it from backup. You had better hope you remembered the backup password. You can't make a backup of a locked phone either.

The backup is probably easier to attack if you have it, since it doesn't have hardware imposed timeouts on password guesses. It may not be current however.

I'm not sure the backup would be much help even if you could break into it, I believe it's encrypted with the same method used to protect the keychain and is tied to the victim's Apple ID. I attempted to help a coworker restore their device with my Mac and couldn't because my iTunes was using a different Apple ID than the device and the device's backup.

There is no way to recover a phone if you lose the passcode?

If you have access to the iTunes account you can do a physical backup with iTunes and then erase and restore that backup. It won't be pin protected.

How would one connect the phone to iTunes to do this? You must enter the PIN on the phone to connect to iTunes iirc.

Not to do a backup and restore.

Not even sure what you mean by "iTunes account"?

Errr, sorry I think it's called iCloud now :P

Apple ID

Not that I know of. Like others have said you can pull a backup (but the machine backing up had to have been trusted prior or you're SOL) and then restore the phone.

Nothing's bulletproof but the iPhone is the most trustworthy IMHO.

I believe this is to deter theft.

I've locked out my Galaxy S6 using the wrong password, and it just wiped and reinstalled by itself. Then it allowed me to restore everything that was on cloud backup.

Nope. I've been there, did reasonable research, and had to start over.

+1. If it is possible to push software updates to a "locked" phone then is this not tantamount to remote code execution with root privileges, and hence the BACKDOOR ALREADY EXISTS?

"Locked" seems like an improper term for such a scenario.

I applaud Apple for appealing this case to the public; however, there is a HUGE HUGE difference between "we can't unlock" and "we shouldn't unlock". This distinction will likely be lost on the general public, unfortunately.

Nowhere in this letter do they say that it's possible, and it seems very carefully worded to avoid stating that. They say that even if it were possible, they wouldn't do it. That's an important legal and moral distinction.

To be fair, they could have stated it explicitly.

It's stated very clearly that they can push an update to an already existing device that would make it possible to retrieve "encrypted" data from said device.

If the data was truly encrypted, the concept of pushing an update or creating a master key would not be possible.

They state that they can push an update that makes brute-forcing possible by disabling software-enforced delays between attempts.

Apple's security PDF says that the iteration count is calibrated so that one attempt takes 80ms in hardware, so that's the hard limit on the brute forcing speed, regardless of any updates Apple releases.

This means that a long alphanumeric passphrase is secure, but a 6-digit passcode could be broken in half a day, and a 4-digit passcode would take just a dozen minutes.

It's so weird how hard it is for the brain to handle exponential growth. I was amazed that a 4-digit password can be cracked so quickly at 80ms a pop, but you're right. Just for the hell of it, here's how long it would take for different length passcodes for digits, digits plus letters (case insensitive), and digits plus letters (case sensitive):

    # characters  [0-9]         [0-9a-z]            [0-9a-zA-Z]
    1             0.8 seconds   2.9 seconds         5   seconds
    2             8   seconds   1.7 minutes         5.1 minutes
    3             1.3 minutes   1   hour            5.3 hours
    4             13  minutes   1.6 days            2   weeks
    5             2.2 hours     8   weeks           2.3 years
    6             22  hours     5.5 years           140 years
    7             1.3 weeks     200 years           9   thousand years
    8             13  weeks     7   thousand years  550 thousand years
    9             2.5 years     260 thousand years  34  million years
    10            25  years     9   million years   2   billion years
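For the curious, the table can be reproduced with a few lines (assuming the same 80 ms per attempt; the `.3g` formatting is just for compact output):

```python
# Reproduces the table above: worst-case exhaustive search at 80 ms/attempt.

GUESS_SECONDS = 0.080
ALPHABETS = {"[0-9]": 10, "[0-9a-z]": 36, "[0-9a-zA-Z]": 62}

def crack_time_seconds(alphabet_size: int, length: int) -> float:
    return (alphabet_size ** length) * GUESS_SECONDS

for length in range(1, 11):
    cells = [f"{crack_time_seconds(n, length):.3g}s" for n in ALPHABETS.values()]
    print(length, *cells)
```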

Does this consider the "too many incorrect attempts" lockout that iOS imposes though?

Nope, it's just 80ms multiplied by the number of possibilities.

According to Snowden, the NSA can brute force at the speed of over a trillion guesses a second, of course, they would need to be able to disable other security features first.

Ok, I read the entire thing once again. Nowhere in there do they state that they can comply with the request, only the consequences that would result if it were possible. In fact they say they "even put that data out of our own reach".

If it is stated very clearly, can you quote me a sentence?

In the security guide linked here it seems possible for this iPhone model but not later ones.

Edit: According to the discussion below Apple can ship updates to the secure enclave. I don't know if that's possible to a locked phone.

The word "push" appears nowhere in their letter. There is no way (in evidence) of "pushing" anything to the locked phone OTA. Physical access as a requirement? Sure, there's likely some way to get something onto the phone. But any DFU or JTAG-enabled update (likely the only vectors available on a passcode-protected device) would not be able to gain access to any appreciable fraction of the data on the phone, since doing so would invalidate the keys.

I wouldn't be surprised (It isn't stated in their iOS security doc) if the key generation uses a hash of the system files as part of a seed for the entropy source used for keys, though that's pure speculation on my part.

Edited for clarity regarding "push" vs physical access.

No, they state clearly that this is what the court ordered them to do. That doesn't mean it is possible. The court doesn't care whether something is possible or not.

That doesn't sound all correct. Assuming the phone holds an encryption key that can read/write local data, a software update could simply command it to decrypt all data and save it as a copy.

A device containing an encryption key that's just protected by a software password check would be absolutely useless. Part or all of the encryption key (maybe even the IV) is derived from the phone passphrase, this is why you can't just pop the NVRAM off a phone and try to find the key.

yes this is very interesting!

No. The word "remote" is not applicable to an attack that only works with physical possession of the device.

As far as I'm aware there is no known technique to prevent someone with physical access, a bunch of engineers, and the code signing keys from replacing firmware.

"locked" is a relative term. Anything encrypted can be broken with enough effort. But that is the semantic difference between leveraging a back door and brutally busting open the front door. I want a device where there is no back door. I hope you can appreciate that difference.

The iPhones with an A7 or later CPU should be secure against this. This whole thing is only an issue because the phone in question is an iPhone 5C, which uses an older CPU without the "secure enclave" system.

If it only applies to older iPhones, why did Cook write, "this software ... would have the potential to unlock any iPhone in someone’s physical possession"? (emphasis mine)

Because it's likely that it wouldn't end with "unlock this 5C" -- it would eventually extend to the government forcing Apple to either stop providing the additional security features in its newer models, or find ways to ship something that looks kinda like the security feature but isn't really.

Drawing the line in the sand at "the government can't force us to hack this guy's phone this time" thus ends up being "can't force us to provide features to hack anyone else's phone down the line".

I don't know. Either Cook is confused or I am. From everything else I've read, it's Cook. If I'm the one who's confused, then Apple really dropped the ball.

Would you expand upon this?

Sure thing.

Starting with the A7 CPUs, the iPhone CPU has a "secure enclave" which is basically a miniature SoC within the SoC. The secure enclave has its own CPU with its own secure boot chain and runs independently of the rest of the system. It runs a modified L4 microkernel and it does all of low-level key management.

The secure enclave contains a unique ID burned into the hardware. This ID can be loaded as a key into the hardware AES engine, but is otherwise designed to be completely inaccessible. Assuming AES is secure, that means the key can be used to encrypt data but can't be extracted, not even by the supposedly secure software running in the secure enclave. This key is then used to generate other keys, like the ones used to encrypt files. That means you can't extract the flash memory, connect it to a computer, and then try to brute force it from there. Or rather you can, but you'll be brute forcing a 256-bit AES key, not a 4-digit PIN, making it effectively impossible.

One of the secure enclave's tasks is taking a PIN (or fingerprint) and turning it into the encryption key needed to read the user's files. The main system just hands off the user's code to the secure enclave, and gets back either a key or a failure. The escalating delays with successive failures and wipe after too many failures are both done in the secure enclave. That means that updating the device's main OS won't affect it.
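The shape of that design can be sketched roughly as follows (illustrative only: `derive_file_key`, the PBKDF2 construction, and the parameters are my stand-ins, not Apple's actual scheme, which runs inside the enclave's dedicated AES engine and never exposes the UID):

```python
import hashlib, os

# Illustrative stand-in for enclave key derivation, not Apple's real code.

DEVICE_UID = os.urandom(32)  # unique per-device secret burned into silicon

def derive_file_key(passcode: str, iterations: int = 100_000) -> bytes:
    # Tangling the passcode with DEVICE_UID means the same passcode yields
    # a different key on every device, so the flash chips alone are useless
    # and brute forcing must happen on the device at hardware speed.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               DEVICE_UID, iterations)
```

Because the derivation is deliberately slow and bound to a secret that only the hardware holds, an attacker can't move the search onto a fast external machine.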

All of this is discussed in Apple's security guide here:


The one open question is software updates for the secure enclave. According to that guide, its software can be updated. Does that mean it can be updated with code that removes the restrictions and allows brute-forcing passcodes? The guide doesn't address how the updates work.

My guess, based on how meticulous Apple is about everything else, is that updates are designed to make this scenario impossible. The secure enclave must be unlocked to apply an update, or if updated without unlocking it wipes the master keys. This would be pretty simple to do, and it would fit in with the rest of their approach, so I think it's likely that this is how it works, or something with the same effect.

So after having asked you to expand upon the topic, I ended up reading a few articles on the matter. While they were all very thorough and informative, your summary above was by far the clearest and most succinct. Thanks very much!

Thanks for saying so, and I'm glad you found it helpful. Maybe I should expand this into a real article of my own.

Late reply, but yes, I think you could totally do so. You already have a lot of material off which to build.

You probably meant "this is tantamount". Otherwise, I don't understand what you're saying. Are you claiming the backdoor already exists, or that it does not?

Parent was asking a question, "is this not tantamount to x?" With the expected answer being "no, this is not not tantamount to x" which reduces to "yes, this is tantamount to x."

In this case, the intended answer/conclusion/implication is "yes, this is."

Freedom from forced update is one of Stallman's motivations for free software.

Just one terrorist attack + PR letter to customers + forced update away from losing encryption on your phone.

On the other hand, if iOS were open source / the iPhone were able to run unsigned code, there would be nothing stopping the FBI from doing what they've asked Apple to do themselves (assuming it's actually technically possible).

Open source doesn't preclude securing the boot chain. TianoCore implements UEFI Secure Boot, and it's BSD licensed. I think the bigger issue is the (unreasonably) low trust in things like secure elements and TPMs in open source, but that needs to change, and is rapidly off topic.

More on topic is whether Apple or even Google get out from under this if their on disk encryption mechanism is open source. If everyone owns e.g. LUKS (in a sense no one company owns/controls it) then can any one company be burdened by a court to effectively break everyone else's software by being told to create a backdoor?

In that case the FBI don't get a warrant and compel a company to comply, they put out an RFP and pay a contract company to comply.

It may be that only the 5C or older devices have the ability to push a custom OS update to a locked device.

The real problem is that you don’t want to set any precedent at all. Once it’s possible to do something for the 5C, weasel words can be introduced to make claims like “well: now you must maintain the current level of access by law enforcement”. Next thing you know, that excuse can be used to interfere with all future hardware designs.

And my understanding is that everything after the 5C is less vulnerable to even this attack (which itself may not be possible, even with a firmware update).

From what I heard later revisions won't accept a firmware update without either providing the passcode, or erasing the private key.

Arguably, the fact that the 5C accepts a firmware update without the passcode is a security vulnerability and ought to be patched.

The hardware required for enforcing this is not found on 5C or older devices.

This isn't the case with the newer devices though, where the delays and key destruction are enforced in hardware, and tampering would destroy the key.

It's not just PR. It's actually really hurting their company. Weaker security means less data can be stored on the iPhone which means less need for it, which means fewer sales.

Thank you for answering one of my questions: "Why does a multi-billion-dollar company give one lick about personal freedom?".

Companies exist to make money, not to protect our rights. It even crossed my mind that NSA et al. rooted these devices long ago, and that this whole "debate" is just a staged thing to make it appear as though we had any privacy, with feigned adherence to the democratic process.

But your point that, as a matter of policy, many organizations will simply not use the product if they know it has backdoors, relates it back to their capitalist motivation and makes any conspiracy less likely.

Apple is also looking to foster growth in foreign markets. That's likely going to take a hit if the U.S. Government has special access to the phone.

Same here. I find this doubly reassuring.

I thought the same thing when I read it. If Apple can do this, then it's already a backdoor, and all the publicity of "Even Apple can't hack your phone" was false. But it turns out this is an older 5C, not the new 5S and up. So those old phones were effectively backdoored.

I agree that Tim Cook has spun it as if it's not a backdoor when clearly it is. Still quite a hard-to-use one though. It seems like they need the physical phone and maybe Apple's private key for signing updates.

The idea that the act of writing software makes it insecure is silly though. The security doesn't come from no-one having made the appropriate changes to iOS. If it was, that would be security through obscurity and any motivated hacker or the FBI could modify iOS themselves. It must be about signing the update so that it can actually be installed.
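That signing point can be illustrated with a toy sketch (mine, not Apple's actual scheme; I'm using a symmetric HMAC as a stand-in for the asymmetric RSA/ECDSA signatures real boot chains use, and the secret is obviously made up):

```python
import hashlib
import hmac

# Stand-in for Apple's signing key. Real update signing is asymmetric
# (only Apple can sign, anyone can verify), but the gate works the same way.
SIGNING_SECRET = b"hypothetical-signing-key"

def sign_image(image: bytes) -> bytes:
    """What only the vendor can do: produce a valid tag for an OS image."""
    return hmac.new(SIGNING_SECRET, image, hashlib.sha256).digest()

def device_will_install(image: bytes, signature: bytes) -> bool:
    """What the boot chain does: refuse any image whose tag doesn't verify.
    The iOS source code could be fully public and this check would still hold."""
    return hmac.compare_digest(sign_image(image), signature)
```

A modified iOS built by anyone else fails `device_will_install`, which is why possession of the signing key - not secrecy of the code - is what the FBI needs Apple for.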

I found this article about this [1] to be rather enlightening.

1) http://blog.trailofbits.com/2016/02/17/apple-can-comply-with...

I'm afraid that I have to agree with you here. But Apple (and the FBI) recognize that this is the first salvo in a battle that will rage for MANY years. These early skirmishes could disproportionately affect the outcome - hence the ensuing PR battle.

This is not apparent from the article. Where could I find out about this ability?

If such an ability doesn't exist, then how does this potential threat follow?

(article) > But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices.

I am also missing a key part of the technical details involved in this situation.

I'd also like to point out that Apple has a record of not responding cautiously to security exposures, like _the inclusion of the CNNIC root certificate despite the public exposure of a security breach_.


Exactly. Plus, all backups of iPhones on iCloud are already unencrypted, so half of the phones are already indirectly unlocked.

EDIT: backups are encrypted, but Apple has the keys. See below.

iCloud backups are 100% encrypted. Only some iTunes backups are not, and only if the option to use encryption to protect the backup is not selected.

They are, but Apple has the keys: https://thehackernews.com/2016/01/apple-icloud-imessages.htm...

So basically, they could be in clear text, it's pretty much the same.

Please see this link[1], where Apple explains exactly how keys are stored in their datacenters (hint: it is not in clear text). They use HSMs, which destroy the user's key after 10 failed attempts.
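A toy model of that 10-try-and-destroy policy (just a sketch of the behaviour the security guide describes; the real enforcement happens inside the HSM, not in application software like this):

```python
class EscrowRecord:
    """Toy model of the escrow policy described in Apple's iOS Security
    Guide: the HSM destroys the escrowed key after 10 failed attempts."""
    MAX_ATTEMPTS = 10

    def __init__(self, passcode: str, escrowed_key: bytes):
        self._passcode = passcode
        self._key = escrowed_key
        self._failures = 0

    def try_unlock(self, guess: str):
        if self._key is None:
            raise RuntimeError("key destroyed")
        if guess == self._passcode:
            self._failures = 0
            return self._key
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._key = None  # the HSM destroys the escrowed key
        return None
```

After the tenth wrong guess the key is simply gone, so even the operator of the datacenter can't brute-force it offline.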

[1]: https://www.apple.com/business/docs/iOS_Security_Guide.pdf

This is only for Keychain escrow; device backups are not protected by HSMs.

>The fact that they can push weak OS updates to a locked phone is the backdoor.

This times nine hundred and eleven thousand.

Let's get behind Google, Facebook and Amazon to protect our privacy?

Let's get behind Google, Facebook, and Amazon in their desire not to hand over our data to the government.

Even with as much data as each of them has, it's still better that the data isn't given to yet another party (the government, or each other).

Is it "our" data? Or is it their data? You don't think they share this with their business partners?

What this says to me is that the current iPhone encryption is able to be broken by Apple. Note: "[the FBI is asking us to] make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation." I interpret that as there is already a backdoor, but one that is controlled by Apple. If that is the case, the gov't is asking for dual control of that pre-existing backdoor in addition to Apple. If I encrypt a message with a public key, and Phil Zimmerman can decrypt it (in addition to the holder of the private half of that public key), then there's a backdoor in PGP, regardless of who owns/knows about it. The only secure system is one where the data is well and truly unrecoverable if the private key is gone, missing, or unavailable/unwilling to be applied by the owner.

Does Apple do an amazing job protecting their users' privacy? Yes! But frankly in this case I find the FBI makes more sense than Apple.

Apple says:

All that information needs to be protected from hackers and criminals who want to access it, steal it, and use it without our knowledge or permission

As I understand, Apple complains about the introduction of this new threat model:

  1. criminal steals someone's iPhone,
  2. gets hold of a special iOS version that Apple keeps internally to assist the government,
  3. pushes the OS update to the iPhone by themselves,
  4. uses some tool to automate brute-forcing the user passcode
At least that's what I get from these excerpts:

Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation.

The government would have us remove security features and add new capabilities to the operating system, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by “brute force,” trying thousands or millions of combinations with the speed of a modern computer.

Now how serious is this threat? And how useful is it to have the FBI be able to look into a phone when they have a warrant? Does the balance between the right to privacy and the need to assist criminal investigation really tilt towards privacy in this specific case?
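Some rough arithmetic on that "brute force" point (my own back-of-envelope, not from the letter; the ~80 ms per attempt is the key-derivation cost Apple cites in its iOS Security Guide, and the attempt rate assumes only the artificial delays and auto-wipe are removed):

```python
# Assumed: ~12.5 attempts/sec once the ~80 ms key-derivation cost is
# the only remaining per-guess delay (no escalating delays, no wipe).
ATTEMPTS_PER_SEC = 12.5

def worst_case_days(keyspace: int) -> float:
    """Days to exhaust the full keyspace at the assumed rate."""
    return keyspace / ATTEMPTS_PER_SEC / 86400

four_digit = 10 ** 4   # default 4-digit PIN: 10,000 possibilities
six_digit = 10 ** 6
alnum_10 = 62 ** 10    # 10-char mixed-case alphanumeric passphrase

print(f"4-digit PIN : {worst_case_days(four_digit) * 24 * 60:.1f} minutes")
print(f"6-digit PIN : {worst_case_days(six_digit):.1f} days")
print(f"10-char pass: {worst_case_days(alnum_10) / 365:.2e} years")
```

Which is why the GP's point about a 10+ character passphrase matters: the FBI's requested tool makes short PINs trivial but does essentially nothing against a long passphrase.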

Meanwhile there are serious threats that do affect a lot of users in practice, where Apple does a good job but could do better still such as:

- Remote code execution on iOS (6 vulnerabilities in 2016 so far[1])

- Phishing, brute force and social engineering attacks (the improved two-factor authentication is not yet available to everyone[2])

Am I wrong in thinking that users are way more likely to be affected by these threats except when the government has a warrant?

If Apple is actually worried about the FBI getting access to the modified iOS version, they should focus their complaint on that and propose to do the whole data extraction in-house.

[1] http://www.cvedetails.com/vulnerability-list/vendor_id-49/pr...

[2] https://support.apple.com/en-us/HT204915

I think the question is less about whether or not a "correct / valid use" of this new technology is acceptable. The problem is that it's impossible (once it exists) to guarantee it won't be used in malicious ways.

Right, but isn't "impossible" too high a bar? Or is the ability to comply with a warrant worth nothing?

It's not like iOS is impossible to hack now and it would be terrible that it becomes possible. There are other aspects of the system that allow malicious exploits more easily than this theoretical threat. So it doesn't make sense to preserve at all costs (e.g. making warrants unenforceable) an "impossibilty" that never was.

> Right, but isn't "impossible" too high a bar?

No, not really.

> Or is the ability to comply with a warrant worth nothing?

What if I issued a warrant for you to give me a 3 headed dog? Is your inability to comply with a warrant worth nothing?

Important to remember: Anyone who has your phone also has an object covered in your finger prints. Don't rely on Touch ID for actual security.

Important to remember: All iPhones since the 3GS have oleophobic surface coating (which combined with cloth pockets means the screen stays relatively clean). Also important to remember, fingerprints on a random surface are VERY hard to copy onto latex milk or PCB substrate.

Please continue to think

> Huge props to Apple

They may be doing it for people right to privacy, but don't forget they might also be doing this because their image would become tainted irrevocably if they complied with this. Trust in Apple devices would be shattered (across those who currently trust Apple).

What puzzles me is why the government didn't impose a gag order like they sometimes do in cases like this.

Because a gag order would have to be absolutely legally watertight, which is likely impossible in this case.

> Huge props to Apple - here's hoping against hope that Google, Facebook, and Amazon get behind this.

Why against hope?

I know that it is an idiom; I'm asking why he has no hope.

From the link that was posted in reply to you, 'hope against hope' means 'to have hope even when the situation appears to be hopeless'. Which means he does have hope, even though the situation appears to be hopeless.

Why is the situation hopeless? Google, for example, has been undertaking efforts for years to improve security on the Internet, in part for the purpose of protecting privacy. Seven days ago they were featured in an article about how they're going to warn users of Gmail whose incoming emails are not protected by SSL: https://news.ycombinator.com/item?id=11067050 - this is part of their Safer Email initiative. Some time ago they released a Transparency Report as part of that initiative describing which senders to/from Gmail support encryption: https://www.google.com/transparencyreport/saferemail/ - and that's not to mention their efforts promoting HTTPS.

I see these efforts as Google going far out of its way to support privacy and security on the web.

I think Oletros understands that perfectly, but is wondering why OP feels that the situation appears to be hopeless.

Yeees, I'm asking why he thinks there is little hope that they will stand

Sorry, misinterpreted you, in over-literal mind mode right now!

No problem

When a US senator expressed upset with Wikileaks, Amazon shut them down. Not even a court order, just a tantrum from a politician. If you really think Amazon aren't cheerfully inviting the US government into their datacentres, you haven't been paying attention.


You should read your link, because this is not what it says

You're right, the headline is sensationalizing his actual words. I'm retracting that comment.

Google worked with the NSA and Facebook not only worked with the NSA, but now has ties to China and the Chinese government (which have the most advanced spying and censoring Internet technology around).

So, I'm doubtful.

it's probably pretty safe to assume google did the opposite

Isn't it odd that this press release spends so much time discussing encryption? Neither the FBI nor Apple will be touching any encryption functions of the device.

It seems the primary aim of this statement is to prevent rumors that would spook Apple users into thinking their data is fair game.

When I forgot the password for my iPhone, I kept trying until it said "connect to iTunes". When I connected to iTunes nothing happened, so I clicked restore, then realised I had not backed up my iPhone, so I clicked backup hoping that it would back up, but it said that I needed to sign in. This got around the barrier that stops you from typing the password in and holds you back. So I have created a program that repeatedly does that. (No need for fancy backdoors.)

What do you want those other companies to get behind? They don't manufacture phones like Apple does, right...?

Note that this letter says: "When the FBI has requested data that’s in our possession, we have provided it."

Seems like that detail is getting very little attention in this announcement. Really? If the FBI requests any data, they hand it over...?

Would you complain about Apple handing over information to solve the murder of a loved one? Why is it always the "bad government" argument? It's not that it doesn't happen, but usually those requests are aimed at more "mundane" cases.

For example, I have friends who work in law (though not in the US), and the number 1 data request - which is reviewed by a judge, and only then handed over by companies - is call logs from telephones (just from/to, date, time, duration, nothing fancy). And these are extremely helpful and information-rich, if you know how to use them.

I'm all for supporting law enforcement efforts, but the wording seemed to imply that Apple hands over any requested info to the FBI, which seems excessive.

But I was mainly pointing out that quote because it wasn't clear what lesson the parent commenter wanted Google, Facebook, and Amazon to be learning from Apple. I would have guessed it'd have something to do with protection of user data, but the letter says they turn over any user data they have!

They probably refer that they have handed all the information they could in this specific case. There surely was some kind of authorization (from a judge perhaps -- US law is almost unknown to me) too.

As for the comment of Google, Facebook, etc, learning, I agree with you.

Can the information bring my loved one back to life? Or will it just result in more terrible things happening to someone else's loved one?

Utilitarian philosophy argument. The benefit gained by solving one murdered loved-one case is badly offset by the loss accrued to everyone by no longer having the security protections they expected.

This specific case, in fact, is pretty close to "murder of a loved one;" the phone's owner killed people, and the FBI wants to find out if they were part of a bigger plot.

Google makes Android; Amazon makes FireOS. No idea what the deal with Facebook is.

> Really? If the FYI requests any data, they hand it over...?

They have to, there's no legal wiggle room here. Creating backdoors OTOH seems to be sufficiently legally questionable that Apple can risk noncompliance.

(Of course, Apple could not accumulate all that data in the first place, but that would be silly, obviously. Now please sync your wifi passwords to iCloud.)

... you can't just ignore a subpoena if you wish to do business in the US. Providing a backdoor is different.

With Google's Android, this issue will never arise because Android is open source. Any attempt to plant a backdoor will be outright monitored by the community.

Not too sure about this. Keep in mind that on most commercially sold Android phones, the closed-source, self-updating Google Play Services is installed by the manufacturer with system-level privileges. That alone is enough to create a not-insignificant backdoor.

Or, said differently, Play Services are already a backdoor. They can (and do) install updates or other software pushed by server automatically, without you being able to do anything about it. And they have access to anything on the phone.

And it can reportedly grant itself new permissions without the user's knowledge, bypassing a fundamental security mechanism of Android (any Android devs know how this is done?). http://arstechnica.com/gadgets/2013/09/balky-carriers-and-sl...

Google Play Store manages the permission dialog for apps installed via Play Store (Play Services is one). It just doesn't show it for Play Services and auto-accepts the installation.


But you can always root your phone and install Cyanogenmod from scratch, right? Agreed that this is not too trivial right now because of standardization issues, but it could be done if you really care about privacy.

Not all of CyanogenMod is free software (you still have a bunch of binary blobs, and everyone has to use Google Play Services anyway because every app seems to implicitly require it). Replicant would be a much better alternative if it actually supported anything newer than 2G.

> and everyone has to use Google Play Services anyway

This isn't true. You can stick to app repositories like F-Droid and use Raccoon to download Play Store apps via your desktop without using a Google account on your phone.

It is true that you can get Play Store apps without a Google Account, but the Play Services framework does a lot more than this. Many apps rely upon the framework for certain pieces of functionality from Google's libraries.

You mean Google Apps specifically?

I don't use them personally but I imagine Google Now, GMail and Google Maps would need Play Services.

The apps I do use (non-google) tend to function well enough without Play Services though.

Many Android apps include the Google Play Services framework in themselves, as it provides extra functionality that is not in the baseline Android API, e.g. a JSON parser.

Fdroid is wonderful. They even strip ads from otherwise foss projects since the licenses are usually not compatible.

Replicant supports 3G devices:


Nobody builds their Android from source. Nobody uses Cyanogenmod. And nobody runs Android on a phone where the entire stack is open source and blob free.

Anyone who does is a rounding error.

> And nobody runs Android on a phone where the entire stack is open source and blob free.

> Anyone who does is a rounding error.

I'm actually curious if there is literally anyone who uses no proprietary software, including the radios and the SoC, on their Android device.

My bet is that there's not even a single device out there for which this is possible. (If there is, I'd love to see it.)

Not quite, but you can come close. I have Cyanogen installed on all my Android devices and I try to use as little proprietary software as possible. However I am patiently waiting for the Neo900, which is a free (libre) hardware design based on the Nokia N900: https://neo900.org/

According to them, there are unfortunately no baseband modems on the market that can legally have their firmware distributed as free software. Their workaround is to keep the modem as isolated from the CPU/RAM as possible.

> According to them, there are unfortunately no baseband modems on the market that can legally have their firmware distributed as free software.

What is the legal restriction here? (It sounds like you're referring to some restriction beyond simple copyright protection on some of their components - are there FCC regulations regarding the firmware?)

EDIT: Ah, of course, the FCC needs to certify devices before they can actually be used.

From their FAQ: https://neo900.org/faq#peripherals

>We unfortunately cannot provide free baseband modem firmware, as there is no option available on the market which would be able to fulfil this requirement. Even if it existed, it would bring very little value to the users, as operating a radio device with modified firmware on public networks without recertification is prohibited in most jurisdictions of the world and privacy concerns in cellular networks are mostly related to what happens on the network side, not inside the device.

I don't have any more information than this. If someone can quote specific FCC regulations to back this up, I would find that very interesting :)

> Even if it existed, it would bring very little value to the users, as operating a radio device with modified firmware on public networks without recertification is prohibited in most jurisdictions of the world and privacy concerns in cellular networks are mostly related to what happens on the network side, not inside the device.

Not entirely true, publishing the code isn't the same as allowing its modification. Code signing can be used to limit which versions are allowed to run.

Reproducible builds of the source would allow one to ensure that the binary, certified version of the code their baseband processor is running is legit (i.e. not backdoored). It would also help audit the code and spot security holes.
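In sketch form, the check a reproducible build would enable (filenames here are hypothetical; the hard part is getting the rebuild bit-for-bit identical, not this comparison):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# With a reproducible build, anyone could rebuild from the published
# source and compare against the blob the vendor actually ships:
#
#   assert sha256_of("modem-firmware-vendor.bin") == \
#          sha256_of("modem-firmware-rebuilt.bin"), "shipped blob != source!"
```

If the digests match, the signed, certified binary on the baseband is provably the one built from the audited source - no re-signing or recertification needed.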

> Not entirely true, publishing the code isn't the same as allowing its modification. Code signing can be used to limit which versions are allowed to run.

If I can't run my home-compiled versions of your code - whether because of code signing restrictions or because of federal law prohibiting firmware that hasn't been certified - it's not free[0]. So without reproducible builds, providing the source code for the firmware provides very little benefit (since I have no way to prove that the code corresponds to what's actually running on the device, nor any legal way to install and run it on the device myself).

Reproducible builds could in theory work, but actually getting builds to be bit-for-bit reproducible is not an easy feat. I'd be very surprised if firmware were capable of this.

[0] This is a great example of why a free software license doesn't necessarily mean that the software is free. It means that the author has waived his/her ability to restrict your freedom to use/modify/distribute the software, but that doesn't mean that third parties (ie, the government, or a patent troll) have done the same.

Not free, but open. I'd argue the latter is significantly more important than the former if you're trying to protect against the code working against you. At least if the code is open, you can inspect it and verify its operation.

Although replicant currently can't do free software on the modem and bootloader, everything else is: http://www.replicant.us/supported-devices.php

That's the current state of affairs. I know of no baseband manufacturer who has ever offered (nor seemed open to the idea of releasing) source for their chips.

Basebands aside, the rest of the device is somewhat feasible to see being open.

> including the radios

We know for a fact that they don't, because that's illegal in the US.

It's not possible because pretty much all baseband processors (radios) run nonfree software.

What's the closest one can reasonably get?

Not necessarily.

Despite the openness of Android/AOSP, there are still, unfortunately, things like binary blobs for certain graphics chips and closed-source firmware for things like Wi-Fi chipsets. Given what we've seen agencies like NSA are capable of (intercepting hardware in transit to apply backdoors, paying off RSA to make Dual EC the default pRNG in their crypto libraries, etc.), them compelling a manufacturer of a component to include a backdoor in their closed-source blobs is no stretch of the imagination.

Apple even has this problem: basebands in cellular modems are notorious for being the source of exploits in otherwise-secure phones.

