Hacker News
Apple Is Said to Be Working on an iPhone Even It Can’t Hack (nytimes.com)
620 points by rquantz on Feb 24, 2016 | 401 comments

They're presumably already 99% of the way there. If the Secure Enclave can be updated on a locked phone, all they need to do is stop allowing that, right?

To me, the more profound consideration is this: if you use a strong alphanumeric password to unlock your phone, there is nothing Apple has been able to do for many years to unlock your phone. The AES-XTS key that protects data on the device is derived from your passcode, via PBKDF2. These devices were already fenced off from the DOJ, as long as their operators were savvy about opsec.
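Roughly, the derivation the parent describes can be sketched as follows. The parameters here are illustrative, not Apple's actual ones, and the real scheme is also entangled with a per-device hardware UID inside the Secure Enclave, which is what forces brute forcing to happen on-device:

```python
import hashlib, os

# Illustrative only: Apple's real KDF also mixes in a device-unique
# hardware UID that never leaves the Secure Enclave.
passcode = b"correct horse battery staple"
salt = os.urandom(16)

# Derive a 256-bit key from the passcode via PBKDF2-HMAC-SHA256.
key = hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000)

# A 4-digit PIN has only 10^4 possibilities; a long alphanumeric
# passcode pushes the search space beyond any practical brute force.
```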

The real linchpin here is not hardware, but iCloud. Apple can pull data out of an iCloud backup, and the only reason the San Bernardino case even got off the ground is that somebody at the county screwed up and effectively prevented the backup from occurring.

iCloud backups can be secured so that not even Apple can get into them, but they are fundamentally much harder to secure (the key can't be hardware-entangled and still restore to a new device), and it would significantly complicate iCloud password changes. I'm sure they are working on it, but it is nontrivial.

That (software) problem is the real reason 99% of users are still exposed; as you say, the hardware and Secure Enclave holes are basically closed.

> iCloud backups can be secured so not even Apple can get in... I'm sure they are working on it, but it is nontrivial.

There is no way they are working on this. It is an intentional design decision that Apple offers an alternative way to recover your data if you lose your password.

Or if you die without telling your next-of-kin your password. Most people do not actually want all of their family photos to self-destruct when they die because they didn't plan for their death "correctly". That would be a further tragedy for the family. (Most people don't even write wills and a court has to figure things out.)

Making data self-destruct upon forgetting a password (or dying) is not a good default. It's definitely something people should be able to opt-in to in particular situations, but only when they understand the consequences. So it's great news that in iOS 9.3 the Notes app will let you encrypt specific notes with a key that only you know. But it's opt-in, not the default.

Has Apple even given access of someone's iCloud account to next-of-kin after they died? I've never heard of this, and I don't expect Apple to be responsible to preserve photos. You already can have shared photo streams, and there are many solutions for other data that could be potentially lost that don't involve Apple getting directly involved in these cases.

The idea of Apple (or some other big corporation) providing my protected personal data to my next-of-kin is more frightening than the idea that the government has the ability to spy on me while I'm alive. It's the most morbid kind of subliminal marketing that could possibly exist.

"Hey, we're really sorry about fluxquanta's passing. Here is his private data which he may or may not have wanted you to see (but we'll just assume that he did). Aren't we such a caring company? Since we can no longer count on him to give us more money when our next product comes out, keep us and our incredibly kind gesture of digging through the skeleton closets of the dead in mind when shopping for your next device."

The thing is, you can opt in to destroy-when-I-die security. You can encrypt notes or use a zero-knowledge backup provider (Backblaze offers this). But for most people that's the wrong default for things like decades of family photos.

In absence of a will it would be terrible to assume that a person meant to have all their assets destroyed instead of handed down. It should be an explicit opt-in. The default should be, your stuff is recoverable and inheritable.

> But for most people that's the wrong default for things like decades of family photos.

That seems like a weird assumption, that there'd be a single person with access to an account containing the only copies of decades of family photos. If someone else has account access or if there are copies of the photos elsewhere, then "destroy-when-I-die" isn't a big problem.

On the other hand, it also violates the way that I think things would usually work in the physical world. That is, if there's a safe that only the deceased had the combination to, I can still drill it to access the contents.

Far from a "weird assumption", that is exactly how most families operate. There's a family computer with all the photos on it that's always logged in, but maybe only dad or mom knows the iCloud password ("hey mom what's the password again?..") Or maybe they are split between family member iPhones, and they just show them to each other when they want to see them.

It would be a pretty big bummer for most families if when a family member passed away so did all those memories. That's probably not what they would have wanted. Or even if they just forgot their password.. that when they reset it all their photos go poof.

You or I might understand the consequences, but for most people it should really be a clear opt-in: "you can turn on totally unhackable encryption, but if you lose your password you are totally screwed".

> that is exactly how most families operate.

Do you have non-anecdotal evidence for that? Among my own friends and family, there are some images that only exist on one device or account, but most of the stuff likely to draw interest ends up somewhere else (a shared Dropbox account, e-mail attachments, on Facebook, copied onto some form of external storage).

There are likely some demographic groups that are more likely to behave one way than the other, and that could perhaps account for our differing experiences.

On second thought, it is the easiest way to use the account (each person having an account on each device). I wonder what percentage of people who would benefit from it actually use the Family Sharing option?

I see what you're saying, and I know that I'm the odd man out here. My original comment stems mostly from my own messed up familial situation. My parents, (most) siblings and I don't get along very well, and I'm single.

If I were to die today I wouldn't want my personal photos, online history, or private writing to fall into the hands of my family. Hell, I don't really even want my physical assets to go to them (something I really should address in a will one of these days to donate it all to charity).

There has been a lot of fighting and backstabbing over who gets what when relatives have died in the past, and the more emotional items (like photographs) have been used to selfishly garner sympathy online through "likes" and "favorites" and it makes me sick. My position is that if you didn't make the effort to get to know a person while they were alive, you should lose the privilege of using their private thoughts for your own emotional gain after they're gone. And I do realize how selfish that sounds on my part, but in my current position I feel like it's justified. If I got a long term partner I would probably change my mind on that.

So yes, an opt-in would be ideal for me, but I don't think many online companies provide that right now.

That's pretty standard, though: once you no longer exist, all your private data, all your private money, all your private goods become part of your estate, to be disposed of by your executor according to your will.

Things like money and personal physical property, sure, I understand that. But I feel like personal protected (encrypted) data should be treated differently. I'm thankful Google at least has options[0] available for their ecosystem, but I guess I'm going to need a will to cover the rest.


Historical, pre-digital precedent:

In the case of sudden death, there would not have been any way to securely dispose of any private "data". So your private information, diaries, works you purposefully didn't publish, unfinished manuscripts you abandoned - everything was handed down to your estate, and more often than not used against your intent.

I'm not entirely clear whether your will could specify such disposal to be done, or could prohibit people from at least publishing these private notes and letters if not reading them, in any kind of binding and permanent way.

Yes. http://www.cnet.com/news/widow-says-apple-told-her-to-get-co...

Shared photo streams are only a solution if they are used. Most people don't even write wills.

If you fail to write a will should the state just burn all your assets, assuming that's what you meant? No, that's the wrong default. Burn-when-I-die should be opt-in for specific assets, not the default.

And the good news is Apple is providing opt-in options like secure notes. Perhaps even backups too (3rd parties already do). But only after presenting the user with a big disclaimer informing them of the severe consequences of losing the password.

> Farook disabled the iCloud backup six weeks prior to the attack


They did not even attempt to get it to send a fresh backup to iCloud before they reset the password, which made that impossible.

[0] http://daringfireball.net/2016/02/san_bernardino_password_re...

On the other hand, "turn it on and let it do its thing" is a terrible idea from a forensics standpoint. You want to lock the account down ASAP to prevent potential accomplices from remote wiping your evidence.

In an alternate universe it may have been a plausible deliberate measure, but in this universe, it was a fuckup.

That's exactly why I simply don't use iCloud backup.

Call me a cynic, but I'm not buying "somebody at the county screwed up".

Indeed, "The County was working cooperatively with the FBI when it reset the iCloud password at the FBI's request." https://twitter.com/CountyWire/status/700887823482630144

The "screwup" grandparent is suggesting is that the county didn't think to disable the setting that would let employees turn off iCloud backups for their devices, however many months or years ago, not that they've messed up during the investigation now.

No, they're probably referring to this, from the second letter,

"One of the strongest suggestions we [Apple] offered was that they pair the phone to a previously joined network, which would allow them to back up the phone and get the data they are now asking for. Unfortunately, we learned that while the attacker’s iPhone was in FBI custody the Apple ID password associated with the phone was changed. Changing this password meant the phone could no longer access iCloud services."


It's not 99%; adoption of iCloud backups is not nearly that high.

Uhh, well, it's probably pretty high, considering their adoption rate for new software sits somewhere around 95%. iCloud backups default to on - just like automatic updates - when the user sets up their phone. Not to mention most Geniuses would ask to turn on iCloud backup when upgrading the device, for convenience.

Well the specific phone that started this controversy didn't have any iCloud backups, so regardless of the percentage it doesn't pertain here.

It did have iCloud backups, but the latest was six weeks prior. The FBI requested the iCloud password be reset, which prevented a new iCloud backup they could have subpoenaed.

Naive question perhaps, but why wouldn't they be able to employ the same hardware in iCloud as on the phone?

Uploading the encrypted content has no value as backup, if you don't have keys that can decrypt it. If the keys are backed up as well, all security is gone.

Is it that hard to have the phone display an encryption key and have the user copy it to dead tree?

As above, not a good idea for a default, but don't see why it wouldn't be technically viable for opt-in protection.

The hardware key is designed to be impossible to extract from the device. That's part of the security, so you can't simply transfer the data to a phone where protections against brute-forcing the user key have been removed.

> An encryption key

To spell it out (1) request new encryption key from device (let's call it key4cloud); (2) encryption key generated, displayed for physical logging by the user, & stored in the secure enclave; (3) all normal backups to iCloud are now encrypted via key4cloud; (4) user loses phone; (5) user purchases new phone; (6) new phone downloads data; (7) user enters key4cloud from physical notes & decrypts backup

Yes, it requires paper and a pencil and user education (hence the opt-in). But it's also incredibly resistant to "Give us all iCloud data on User Y."
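A minimal sketch of that flow, under the assumptions above (the name key4cloud is the commenter's invention, and the toy XOR keystream here stands in for a real AEAD like AES-GCM):

```python
import hashlib, secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy counter-mode keystream, for illustration only; a real
    # implementation would use a vetted AEAD such as AES-GCM.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + nonce + block.to_bytes(4, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out += bytes(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)

# (1)-(2): device generates key4cloud and shows it to the user once.
key4cloud = secrets.token_bytes(32)
written_down = key4cloud.hex()  # user copies this to dead tree

# (3): all backups are encrypted with key4cloud before upload.
backup = b"decades of family photos"
nonce = secrets.token_bytes(12)
ciphertext = keystream_xor(key4cloud, nonce, backup)

# (4)-(7): on a new phone the user types the key back in and the
# backup decrypts locally; Apple only ever stored ciphertext.
restored = keystream_xor(bytes.fromhex(written_down), nonce, ciphertext)
```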

It can be the same hardware, but I believe that's not usually what's meant by "hardware-based encryption". The point is that the private keys never leave the hardware of the phone, thus making it secure. So they could employ the same hardware, but that hardware would not have the necessary keys.

Does Apple owning the iCloud data center have an impact?

Why would they have made the Secure Enclave allow updates on a locked device without wiping the key in the first place? Either they didn't think it through, assumed they would never be compelled to use it as a backdoor, or perhaps they were afraid some bug could end up having catastrophic consequences of locking a billion people out of their phones with no way to fix it? Do we even know for certain that the Secure Enclave on the 6s can be reflashed on a locked phone without wiping the key?

From what's been said, it seems like it was made to be updated so that Apple could easily issue security updates. They've already increased the delay between repeated attempts at password entry. Probably they were worried about vulnerabilities or bugs that hadn't been found and wanted to maintain debugging connections to make repairs easier. A tamper-resistant self-destruct mechanism with no possibility of recovery introduces extra points of failure, and it seems that until now, they didn't think it was necessary.

Look at the controversy over the phone not booting with third-party fingerprint reader repairs as an example. People were upset when they found out that having their device worked on could make it unbootable, but Apple was able to easily fix it with a software update. If it had been designed more securely, it might have wiped data when it detected unauthorized modifications, which would have meant even more upset people. Now that this has become a public debate, there will be a very different response to making it more secure.

How much easier? If all they had to do to not have access to it themselves is to ask the user for his password when there's a new update, that's hardly that inconvenient...

I'm not saying that it was the right thing to do in hindsight, but I get a little nervous even when updating a small web server, so I understand the tendency to leave repair options open on something as big as iPhones. Real hardware-based security is about more than just about asking for a password. It means making the device unreadable if it's been disassembled or tampered with, and that could have unintended side-effects if any mistakes are made or something is overlooked. It's definitely worth pursuing considering the political situation the world is in right now.

As I understand it, Secure Enclave firmware is just a signed blob of code on main flash storage that's updated along with the rest of iOS, which can be done via DFU without pin entry. I assume DFU updates are very low level, with no knowledge of the Secure Enclave or ability to prompt the user to enter their pin.

Making the DFU update path more complex increases the risk of bugs and thus the risk of permanently bricking phones.

You could imagine an alternative where on boot the Secure Enclave runs some code from ROM which checks that a hash of the SE firmware matches a previously signed hash, which is only updated by the Secure Enclave if the user entered their pin during the update. If it doesn't match, either wipe the device or don't boot until the previous firmware is restored.

This way Secure Enclave firmware updates and updates via DFU are still possible, but not together without wiping the device.
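That hypothetical boot-time check can be sketched like this (all names and behavior here are the commenter's proposal, not how the Secure Enclave actually works):

```python
import hashlib

class SecureEnclave:
    """Toy model: the enclave pins a firmware hash and only re-pins
    it when the user's pin was entered during the update."""

    def __init__(self, firmware: bytes):
        self.pinned_hash = hashlib.sha256(firmware).digest()
        self.wiped = False

    def update_firmware(self, new_fw: bytes, pin_entered: bool):
        if pin_entered:
            # Legitimate update path: re-pin the new firmware's hash.
            self.pinned_hash = hashlib.sha256(new_fw).digest()
        # DFU path (no pin): firmware on flash may change, but the
        # pinned hash inside the enclave does not.

    def boot(self, firmware: bytes):
        if hashlib.sha256(firmware).digest() != self.pinned_hash:
            self.wiped = True  # or refuse to boot until restored
            return "wiped"
        return "booted"
```

So a DFU reflash still works for recovery, but swapping the Secure Enclave firmware without the pin costs the attacker the keys.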

Let us direct our attention to the superhero Mike Ash and his latest post on the Secure Enclave. https://www.mikeash.com/pyblog/friday-qa-2016-02-19-what-is-...

Honestly, this is really the shit..

Yeah, the key question is how Secure Enclave firmware updates work, and whether they can be prevented without pin entry. One former Apple security engineer thinks they are not subject to pin entry: https://twitter.com/JohnHedge/status/699892550832762880

> or perhaps they were afraid some bug could end up having catastrophic consequences of locking a billion people out of their phones with no way to fix it?

That basically happened (at a smaller scale) just last week. When Apple apologized and fixed the "can't use iPhone if it's been repaired by a 3rd party" thing, the fix required updating phones which were otherwise bricked. It's not an unreasonable scenario.

If the device has a manufacturer's key and the user's key, then it's basically down to simple Boolean logic: does the innermost trusted layer allow something to be installed or altered if it is authorized by the manufacturer's key OR your key? Or the manufacturer's key AND your key? Or just your key? (With a warning if it has no other key?)
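Those policy choices reduce to a predicate over which keys signed the update. A sketch, with invented names:

```python
# Which combinations of signatures authorize an update, per policy.
def update_allowed(policy: str, mfr_signed: bool, user_signed: bool) -> bool:
    if policy == "mfr_or_user":
        return mfr_signed or user_signed   # manufacturer alone suffices
    if policy == "mfr_and_user":
        return mfr_signed and user_signed  # both must consent
    if policy == "user_only":
        return user_signed                 # manufacturer locked out
    raise ValueError(f"unknown policy: {policy}")
```

Under "mfr_or_user" the manufacturer can be compelled to push a backdoored update; under the other two it cannot act without the user.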

Underrated post.

>If the Secure Enclave can be updated on a locked phone, all they need to do is stop allowing that, right?

That probably also means removing most debugging connections from the physical chip, and making extra sure you can't modify secure enclave memory even if you desolder the phone.

A lot of that stuff was already in the original threat model for the Secure Enclave ("assume the whole AP is owned up").

No one has been talking about the fact that you can rebuild transistors on an existing chip. It's very high-tech stuff, the sort Intel painstakingly uses to repair early engineering samples, but it is used.

You decap the chip with HF to expose the die, and then, using focused ion beams and a million-dollar microscope setup, you can rearrange the circuits. So if the NSA absolutely had to have the data on the chip, they could modify it to make it sing. If, say, they knew an iPhone had the location of Bin Laden on it, they could get the goods without Apple.

They're not anywhere near 99% of the way there; they've destroyed the heterogeneous decentralized ecosystem that broad security requires.

Locking themselves out of the Secure Enclave isn't anywhere near sufficient. As long as the device software and trust mechanisms are totally opaque and centrally controlled by Apple, the whole thing is just a facade. There's almost nothing Apple can't push to the phone, and the auditability of the device is steadily trending towards "none at all".

If the NSA pulls a Room 641A, we'd never know. If Apple management turns evil, again, we'd never know. If a foreign state uses some crazy TEMPEST attack to acquire Apple's signing keys... again, we'd never know.

Then again, nobody is suing over Android phone crypto, and as recently as last November bugs were being discovered where things like entering an excessively long password let you bypass the lock screen.

In the Android world too many parties have the keys to the kingdom, and people who protect their devices take that into consideration. Also, once the bootloader is unlocked and custom firmware is installed, all bets are off. I have yet to see a viable attack against sufficiently strongly protected LUKS at rest.

I think from the context it's pretty clear that "hack" in this case is referring to "being forced to unlock". Yes, they could still deliberately break encryption for future OSes and phones, but the same could be said of any software, open or closed source.

I don't think acting like an open ecosystem is the be-all and end-all of security is productive. Most organizations (let alone individuals) don't have the resources to vet every line of every piece of software they run. Software follows economies of scale, and hard problems (e.g., TLS, font rendering) will only have one or two major offerings. How hard would it be to introduce another Heartbleed into one of those?

How does a 3rd-party researcher find the next heartbleed if they can't even decrypt the binaries for analysis?

Binaries can be converted back to assembly and quite often even back to equivalent C; bugs are most often found by fuzzing (intentional or not), which does not require source code. The difference between open and closed source is that open is more often analysed by white hats, who tend to publish vulnerabilities and help fix them, while closed is analysed by black hats, who tend to sell or exploit them in secret.

You misunderstand; if you can't even decrypt the binary, you can't disassemble, much less run a decompiler over it.

As someone who has done quite a bit of reverse engineering work, I have no idea how I'd identify and isolate a vulnerability found by fuzzing without the ability to even look at the machine code.

If it runs, it has to be decrypted (at the current level of cryptography); at most it is obfuscated and access is blocked by some hardware tricks which may be costly to circumvent, but there is nothing fundamental stopping you.

> don't have the resources to vet every line in every piece of software they run

I do not independently vet every line of source code I run, but I still reasonably trust my system magnitudes more than anyone could - and, I argue, anyone can - trust a proprietary system. That's because while I personally may not take the initiative to inspect my sources, I know many other people will, and if I were suspicious of anything I could investigate.

Bugs like Heartbleed just demonstrated... well, several things:

1. Software written in C is often incredibly unsafe and dangerous, even when you think you know what you are doing.

2. Implementing hard problems is not the whole story, because you also need people who comprehend said problems and the sources implementing them, and who have reason to do so in the first place.

Which I guess relates back to C in many ways.

I look forward to crypto implemented in Rust and other memory-, concurrency-, and resource-safe languages. There is always a surface for a mistake that can compromise any level of security; if you move the complexity into the programming language, the burden falls on your compiler. But in the same way that you can only trust auditable, heavily used production sources, nothing is going to be more heavily used and scrutinized, at least by those interested, than the languages themselves.

C is not the problem - you can write a bug in any language. Even with memory safety and a perfect compiler, a bug may direct the flow in a bad direction (bypassing auth, for instance) or leak information via a side channel.

We all understand that as long as Apple can update the phone, they can do all kinds of bad things.

The important thing about the Secure Enclave is that it pushes security over the line, so that the attacker has to compromise you before you do whatever it is that will get you on somebody's shitlist.

> if you use a strong alphanumeric password to unlock your phone, there is nothing Apple has been able to do for many years to unlock your phone

Is this true even if you use Touch ID?

Probably not. If you're dead, they probably have your fingers. If you're alive, they can compel you to unlock the device with your fingerprint.

The only point I'm making is that Apple already designed a cryptosystem that resists court-ordered coercion: as long as your passcode is strong (and Apple has allowed it to be strong for a long time), the phone is prohibitively difficult to unlock even if Apple cuts a special release of the phone software.

Using a strong PIN is pretty annoying, and entering one is a relatively visible signal when using the phone on the street, so it could serve as a filter (maybe via street cams) for flagging suspicious people, which isn't a bad goal for law enforcement.

That sounds good until you remember the Bayesian Base Rate Fallacy: there are very few terrorists (the base rate of terrorism is very low), so filtering on "people with strong passphrases" is going to produce an overwhelming feed of false positives.

Be careful not to take the base rate fallacy too far: with a large enough difference in likelihood, even a small base rate won't prevent an effect from being significant, and regardless of the base rate you'll still get some information out of it; it just might not be as much as you wanted.
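Both points can be made concrete with toy numbers (all rates here are invented for illustration):

```python
# Invented rates: how often do terrorists vs. everyone else use
# strong passphrases?
base_rate = 1e-6           # fraction of the population that are terrorists
p_strong_given_t = 0.5     # terrorists with strong passphrases
p_strong_given_not = 0.01  # everyone else with strong passphrases

# Bayes' rule: P(terrorist | strong passphrase)
p_strong = (p_strong_given_t * base_rate
            + p_strong_given_not * (1 - base_rate))
p_t_given_strong = p_strong_given_t * base_rate / p_strong
# ~5e-5: still overwhelmingly false positives, yet about 50x the prior,
# so the filter does carry some information.
```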

Nobody cares that you're using an alphanumeric passcode on your iPhone.

Some corps require or strongly encourage it. My employer does.

And most parents I know use alphanumeric to keep their kids from wiping their phones and iPads just by tapping the numbers. (A four digit number code auto-submits on the 4th tap, so all it takes is 40 toddler taps. An alphanumeric code can be any length and won't submit unless the actual submit button is tapped.)

Corporate email profiles on BYOD phones often enforce a long passcode requirement, so you've got a lot of Fortune 500 sales guys to screen out if you're stopping and searching anybody with a suspiciously long password.

I'm at a loss as to how an alphabet agency can determine whether a weak or strong passcode was used. How does a PIN get stored on the phone? Surely not as the plain text of a 4-digit PIN. If they apply any encryption to the 4-digit PIN, how would it appear any different from a significantly stronger passcode?

The grandparent post was about determining the complexity of a PIN/Passcode by watching it being entered - more screen interaction = more complex.

It uses a different screen. If you have a 4 digit pin, the entry screen looks a lot like the phone dialer, with the numbers 0-9.

If you have a stronger passcode, you see a full keyboard instead.

The prompt is different based on the type of code you use.

Except that with Touch ID, you only have to enter it when you reboot the phone, or if you've mis-swiped 5 times. I've had a strong pin for a couple of years, and really don't find it even a slight inconvenience (in the way that I use a super-weak password for Netflix, as entering passwords on an Apple TV is a real pain)

People who desire to be secure in their electronic papers and effects are not and should not be considered "suspicious people".

If they have access to a live finger for the TouchID, sure they can bypass - but they could do that with the $5 guaranteed coercion method as well [1].

Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard (doable, but you only get 5 tries), so if you screw up you've thrown away all the hard work of the print transfer.

All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).

[1] https://xkcd.com/538/ [2] https://blog.lookout.com/blog/2013/09/23/why-i-hacked-apples...

> All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).

Excellent advice. Even better, if you're about to pass through US customs and border patrol, backup the phone first, wipe, and restore on the other side. Of course, this depends on your level of paranoia. I am paranoid.

If you're paranoid, making a complete copy of all your secrets on some remote Apple or Google "cloud" where the government can get at it trivially is the exact opposite of what you want to be doing.

If you're paranoid, you don't have a cell phone.

Or you have several, and send them on trips without you, etc.

Well, yeah, if you back it up with a 3rd party backup tool, you are trusting the 3rd party.

I recommend you make a backup to your laptop, which you then encrypt manually. That way the trust model is: you trust yourself. Then you can do whatever you want with the encrypted file. Apple's iCloud is perfectly fine at this point.

The real challenge is to find a way to restore that backup, because you have to be on a computer you trust. If you decrypt the backup on a "loaner" laptop, your security is broken.

If you decrypt the backup on your personal laptop but the laptop has a hidden keylogger installed by the TSA or TAO, your security is broken.

It would be necessary to back up the phone on the _phone_ _itself_. Then manually encrypt the file (easy to do). Then upload to iCloud. At this time, no such app exists for iOS.

Since you plan to restore the backup to the phone anyway, it's no problem to decrypt a file on the phone before using it for the restore.

> I recommend you make a backup to your laptop, which you then encrypt manually.

You mean your laptop that was manufactured by a 3rd party, with a network card that was manufactured by a 3rd party? And you're using encryption software that, even if it's open source, you probably aren't qualified to code review. I'm not downplaying the benefit of being careful, but unfortunately you can keep doing that pretty much forever.

All laptops and cameras entering the US are subject to search and seizure.

Well, you can make an encrypted backup via iTunes (that would involve firing up iTunes, though... shudders).

There's a reason Google decided to encrypt all communication between machines inside their datacenters.

Are you sure it's not just communication between data centres?

Probably not. FB is doing the same thing. In most cases your app or service does not actually know whether the remote service it is talking to is local or in another DC. Yes, you can find out if you need to, but that requires contacting another service and introduces some delay and latency. You can use a service router to try to keep calls local to a rack or a DC, but if there are problems with local cells you might get routed across the country, so you start with the assumption that _all_ connections get encrypted, even if the connection is to localhost.

backup => zip/rar => encrypt with PGP or whatever => split => upload parts to different cloud storage providers => wipe device => pass checkpoint => download => combine => decrypt => uncompress => restore.

It's not trivial, but it's sure easy to do in this day and age.
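The split/combine legs of that pipeline can be sketched like this (the encrypt/decrypt steps would use a real tool such as gpg, which I won't imitate here; a hash verifies the reassembly):

```python
import hashlib

# Pretend this blob has already been compressed and encrypted.
blob = b"already-encrypted backup blob" * 100
digest = hashlib.sha256(blob).hexdigest()  # record to verify later

# Split into n parts for n different storage providers.
n = 3
size = -(-len(blob) // n)  # ceiling division
parts = [blob[i * size:(i + 1) * size] for i in range(n)]
# ...upload each part somewhere different, wipe device, cross border...

# On the other side: download, combine, verify, then decrypt/restore.
combined = b"".join(parts)
assert hashlib.sha256(combined).hexdigest() == digest
```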

What data is likely on someone's phone that is not also in the cloud one way or another?

I wonder this too. The only personal data on my phone are my text and email messages. I'm not sure how other data would get onto the phone.

Wiping the phone doesn't help you. Using the strong password renders the information inaccessible, at least as inaccessible as your phone backup is. Touch ID isn't re-enabled until the phone's passcode is used. Presumably if the authorities have access to your phone's memory they also have access to your laptops, and neither will do them any damn good.

And it's not paranoia if there's a legitimate threat; that's just called due diligence. ;)

> Touch ID isn't re-enabled until the phone's passcode is used.

Do the docs confirm that there is no way around this? I'd guess generating the encryption key requires the passcode, which is discarded immediately, and Touch ID can only "unlock" a temporarily re-encrypted version which never leaves ephemeral storage?

From the iOS Security Guide, "How Touch ID unlocks an iOS device":

  If Touch ID is turned off, when a device locks, the keys for Data Protection class
  Complete, which are held in the Secure Enclave, are discarded. The files and keychain
  items in that class are inaccessible until the user unlocks the device by entering his
  or her passcode.

  With Touch ID turned on, the keys are not discarded when the device locks; instead,
  they’re wrapped with a key that is given to the Touch ID subsystem inside the Secure
  Enclave. When a user attempts to unlock the device, if Touch ID recognizes the user’s
  fingerprint, it provides the key for unwrapping the Data Protection keys, and the
  device is unlocked. This process provides additional protection by requiring the
  Data Protection and Touch ID subsystems to cooperate in order to unlock the device.
  The keys needed for Touch ID to unlock the device are lost if the device reboots
  and are discarded by the Secure Enclave after 48 hours or five failed Touch ID
  recognition attempts.
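The wrap/discard behavior described in that quote can be modeled with a toy wrapping function (XOR against a hash-derived keystream here, purely for illustration; the real implementation uses AES inside the Secure Enclave, and all variable names below are invented):

```python
import hashlib
import os

def toy_wrap(wrapping_key: bytes, data_key: bytes) -> bytes:
    # XOR against a hash of the wrapping key; illustration only, not AES key wrap.
    stream = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(data_key, stream))

toy_unwrap = toy_wrap  # XOR is its own inverse

class_key = os.urandom(32)    # stand-in for a Data Protection class key
touchid_key = os.urandom(32)  # stand-in for the key given to the Touch ID subsystem

# Device locks with Touch ID on: keep only the wrapped form, discard the plaintext key.
wrapped = toy_wrap(touchid_key, class_key)

# Fingerprint match: the Touch ID subsystem supplies its key, unwrapping succeeds.
assert toy_unwrap(touchid_key, wrapped) == class_key

# Reboot / 48 hours / 5 failed attempts: touchid_key is discarded,
# leaving only the passcode path to re-derive the class keys.
```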

Touch ID, I believe, unlocks the passcode so the phone can use it to log in, but Touch ID itself is not enabled until you enter the passcode once, presumably because the passcode isn't actually stored on the device in a readable form.

OK, I guess the effect is the same (as long as the passcode isn't recoverable until after startup). Thanks.

Could the "code equivalent" of your fingerprint be stolen by a rogue app if it's allowed to read it? I don't have a Touch ID phone, but I have wondered what would happen if your "print" were stolen -- passwords can at least be changed.

Speaking as an App Developer, we cannot touch stuff like that. We're allowed to ask Touch ID to verify things and process the results, but we don't actually get to use the Touch ID system. It's similar to how the shared keychain is used: We can ask iOS to do things, but then must handle any one of many possible answers. We don't actually see your fingerprint in any way.

Now Cydia and 3rd party stuff? I have no clue.

iOS itself does not see fingerprints, it refers to SE.

Wouldn't surprise me if true, iOS as a whole is built in a very modular fashion when it comes to the different components of the OS and developers only get access to what Apple deems us worthy of, hehe. Not that I want access to Touch ID, I much prefer to not have access to that...

Can non-US citizens be coerced into giving up their passcode?

Depends on if they're at a border crossing or in the interior of the country. Laws apply to citizens and non-citizens alike. If you haven't been admitted to the country, about the most they can do is turn you away at the border checkpoint and put you on the next flight back to your home country.

and if you're a citizen of the country you're trying to enter...

Then the TSA drops a paper clip while you bend over and pick it up

No, at least, not by the DOJ, and not for any use in a court of law.

We wrote about this in our border search guide and concluded that there is a risk of being refused admission to the U.S. in this case (in the border search context) because the CBP agents performing the inspection have extremely broad discretion on "admissibility" of non-citizens and non-permanent residents, and refusing to cooperate with what they see as a part of the inspection could be something that would lead them to turn someone away. (However, this is still not quite the same as forcing someone to answer in the sense that they don't obviously get to impose penal sanctions on people for saying no.)

One reason I'll never visit the states.

If I absolutely had to I just wouldn't take a phone/laptop with me.

" they don't obviously get to impose penal sanctions on people for saying no"

I wonder if there are any negative effects associated with being refused entry by CBP. Could it be the case that if you are refused entry once, they will be more likely to refuse you entry in the future? If so, that's a fairly significant penalty/power that the CBP agent has.

> I wonder if there are any negative effects associated with being refused entry by CBP. Could it be the case that if you are refused entry once, they will be more likely to refuse you entry in the future? If so, that's a fairly significant penalty/power that the CBP agent has.

Yes, some categories of non-citizen visitors (I don't remember which) are asked on the form if they have ever been refused entry to the U.S. (and are required to answer yes or no). If they're using the same passport number as before, CBP likely also has access to a computerized record of the previous interaction.

Plenty of countries will ask if you've ever been refused entry to any country. And you're also generally automatically excluded from any Visa Waiver Programme from then on too. So it's a major issue.

> If they're using the same passport number as before, CBP likely also has access to a computerized record of the previous interaction.

(They might also be able to search their database by biographical details such as date of birth, so getting a different passport may not prevent them from guessing that you're the same person.)

It is not a good bet if you're pulled over by the authorities to be doing something with your hands that they can't reliably identify as different from preparing a weapon. Particularly if not white.

This would prevent people from recording police abuse ...

Power-cycling can be done relatively quickly - in 10sec with two fingers (no swipe), or 5 sec + swipe if you only have one hand available.

> "Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard, doable but you get 5 tries so if you screw up, you have thrown away all the hard work of the print transfer."

You get plenty of tries to perfect the technique, before using it on the actual device.

You acquire identical hardware and "dead finger countermeasures" (does the iPhone employ any? Some readers look for pulses and whatnot; I don't know if the iPhone does). You then practice reading the fingerprint on that hardware until you are able to reliably get a clean print and bypass any countermeasures. Only then do you try using the finger on the target phone.

You might still fuck it up, and you only get 5 chances on the target hardware. But with practice on the right hardware, I see no reason why you couldn't get it.

There's also a 48 hour window and touch ID doesn't work initially after booting.


Great design.

Not only the amount of work, technology and thought that have gone into this, but also how well this has been implemented is mind-blowing.

It really shows the staggering difference between having a Samsung phone with fingerprint security versus an iPhone.

Is it only five fails on Touch ID to delete data? I don't have the option to delete the data enabled on my iPhone... but it often takes more than five tries to get it to work with a finger that is legitimately registered in Touch ID.

After five failures you cannot use Touch ID to unlock the phone and will instead need the passcode. This means that any approach to fooling the fingerprint reader will need to succeed within five tries.

No, it's five fails before Touch ID stops working until after a passcode is entered again.

You should overtrain your TouchID: http://www.imore.com/touch-id-not-working-you-heres-fix

Given the 6 tries, is there any benefit to a strong password?

It's my understanding that the current battle is about the request to bypass the retry cap.

  All bets are off if the iPhone is power-cycled.
Except, you don't have explicit control over the iPhone's battery, so how do you know if the power is actually cycled?

If the phone has been switched off, or if >48h have passed since the last unlock.

Also remember that rubber-hose cryptanalysis is always an option.

Can you be convicted in the US based on evidence obtained with physical torture?

Edit: Looks like the answer is "it depends" rather than a resounding no.


Of course you can. As long as the courts can be persuaded that there is no causal nexus between the torture and the evidence, or the torture actually isn't legally torture. That assumes the defendant can show (or is even aware) that the torture actually took place.


* prolonged solitary confinement: not legally torture

* fellow prisoner violence: not legally torture, no nexus

* prolonged pre-trial confinement: not really torture, but we may as well include it

* waterboarding/drowning: not legally torture? (Supreme Court declined to rule)

* stress positions: cannot show it took place

* parallel construction: cannot show / not aware

No, you cannot. Evidence derived from facts learned from torture is also excludable.

Sure, you can. It all depends on who gets to define "torture."

If they can find a judge who believes the iron maiden isn't torture while the anal pear is, then guess what... the government will use the iron maiden.

Even if they can't find such a pliable jurist, they'll have no problem getting a John Yoo to write an executive memo that justifies whatever they want to do to you, and let the courts sort it out later. There's no downside from their point of view.

> getting a John Yoo to write an executive memo

The memos didn't provide de iure indemnity. There is no constitutional basis, in fact the proposition that a memo can supersede the Constitution is idiotic on its face.

The failure is the de facto doctrine of absolute executive immunity. It has two prongs: 1. "When the president does it, that means that it is not illegal." 2. When the perpetrator follows president's orders, also not illegal.

Nevertheless, since there is no legal basis, there is nothing preventing the next government from prosecuting them.

The memos didn't provide de iure indemnity. There is no constitutional basis, in fact the proposition that a memo can supersede the Constitution is idiotic on its face.

Yes, and that's what I meant by "let the courts sort it out later." The Constitution's not much help either way, being full of imprecise, hand-waving language and vague terms like "cruel and unusual." It was anticipated by the Constitution's authors that it would be of use only to a moral government.

Nevertheless, since there is no legal basis, there is nothing preventing the next government from prosecuting them.

I wonder if that's ever happened in the US? Does anyone know?

I would disagree. The Constitution is a bulwark against tyranny. The US has successfully prosecuted waterboarding in the past.

It usually only happens when the rule of law is suspended and then resumed. You're a young country, so maybe it hasn't happened before. Robert H. Jackson was an American, though ;-)

Torture to get detailed info, use details to establish plausible parallel construction.

Enter parallel-constructed information as court-sanitized evidence.

TouchID disables itself after 48 hours and requires the password again.

Also after 5 failed attempts - you can test with an unregistered finger

Or if the phone runs out of batteries and restarts.

Does TouchID have any protections against your finger unlocking your phone post-mortem?

No, although I'd love to see a HealthKit app that uses your Apple Watch as a dead man's switch, and disables Touch ID or powers the phone off in the event the watch is removed or your pulse is no longer detected.

That wouldn't work well with loose wrists and other similar edge cases.

Then those people could turn it off. But it would be a nice option.

Without a wristprint for the watch to read, what prevents somebody else from wearing it?

The pulse and skin conductivity might change, but are either of those reliable enough metrics for such an application?

If you take the watch off, it automatically locks. I wouldn't mind it also automatically locking my phone and requiring a passcode instead of TouchID.

There is a VERY limited window in which you can take the watch off and switch it to the other wrist (like milliseconds; you practically have to be a magician to switch wrists, which I do throughout the day).

Apple has the watch, they could use it to beef up security for those that want it.

I don't think "already fenced off if people were savvy" is really valid. That's the security equivalent of "no type errors if people were savvy", which is the same as "probably has type errors".

It was near-impenetrable, but it could have been truly impenetrable if it weren't for the fact that Apple could push OS updates without user consent. They could have made it impossible for anyone to get in even if your pin was 1234, but didn't.

Kind of disappointing given their whole thing about the Secure Enclave. Bunch of big walls in the castle, but they left the servant's door unlocked.

The Secure Enclave, per their docs, sounds just like their implementation of trust zone... err, "TrustZone", most likely following ARM's specs.

The main difference would be that everyone knows TrustZone through Qualcomm's implementation and software, as it's been broken many times. At the end of the day "it's just software" though, which runs on a CPU-managed hypervisor with strong separation ("hardware", but really, the line is quite a blur at this level).

What that means is that you need to be unable to update the Secure Enclave without the user's code (so the enclave itself needs to check that), which is probably EXACTLY what Apple is going to do.

Of course, Apple can still update the OS to trick the user into entering the code elsewhere, and the FBI could then use that to update the enclave and decrypt, though that obviously requires the user to be alive.

Past that, you'd need to extract the data from memory (actually opening the phone) and attempt to brute force the encryption. The FBI does not know how to do this part; the NSA certainly does; arguably, Apple might, since they design the chipset itself.

Secure Enclave is explicitly not TrustZone per Apple's iOS Security Guide. It's a separate core in the SoC running on L4.

Aww shit.. embedded crypto hypervisors all up in this hood.

Wopw wopw

I don't understand the whole debate about Apple security:

- Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?

- Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.

- Even if iCloud is supposed to be encrypted, they didn't open up that part to public scrutiny.

- Therefore perfect security around the SE only solves the problem of accessing a phone that wasn't backdoored yet. There is every reason for, say, Europe and the CIA to require phones to be backdoored by default for LE and economic intelligence purposes.

Apple is not required by any country to have a backdoor, and I am not aware of any agreement from Apple to install such a backdoor for anyone.

If the person knowing the passcode is around and you can fool them into using their passcode then yes, you could capture their passcode. Touch ID is even less of a problem because taking someone's fingerprints is a lot easier than taking a passcode out of their head.

But in both those situations the weakness is in the person, not the device. Apple devices still potentially have security weaknesses which the FBI is asking Apple to exploit for them. Apple wants to fix these weaknesses, to stop Apple being forced to exploit them.

Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?

I don't believe this is the case.

Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.

What you say about an on-screen passcode is likely true but the architecture of the secure enclave is such that the touch ID sensor is communicating over an encrypted serial bus directly with the SE and not iOS itself. It assumes that the iOS image is not trustworthy.

From the white paper [1]:

It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.


The Secure Enclave is responsible for processing fingerprint data from the Touch ID sensor, determining if there is a match against registered fingerprints, and then enabling access or purchases on behalf of the user. Communication between the processor and the Touch ID sensor takes place over a serial peripheral interface bus. The processor forwards the data to the Secure Enclave but cannot read it. It’s encrypted and authenticated with a session key that is negotiated using the device’s shared key that is provisioned for the Touch ID sensor and the Secure Enclave. The session key exchange uses AES key wrapping with both sides providing a random key that establishes the session key and uses AES-CCM transport encryption.

[1]: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
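The shape of that exchange (both sides contribute randomness, and the factory-provisioned shared key binds the result) can be sketched like this. The real protocol uses AES key wrapping and AES-CCM; everything below is an invented simplification of the structure only:

```python
import hashlib
import hmac
import os

# Stand-in for the device key provisioned for the Touch ID sensor and the SE.
shared_key = os.urandom(32)

def session_key(shared: bytes, rand_sensor: bytes, rand_se: bytes) -> bytes:
    # Both sides mix their random contributions under the shared key.
    return hmac.new(shared, rand_sensor + rand_se, hashlib.sha256).digest()

rand_sensor, rand_se = os.urandom(16), os.urandom(16)
k_at_sensor = session_key(shared_key, rand_sensor, rand_se)
k_at_se = session_key(shared_key, rand_sensor, rand_se)

# Both ends derive the same session key; an observer of the random values
# alone cannot, because it lacks shared_key.
assert k_at_sensor == k_at_se
```

The point the quote makes is that the application processor sits in the middle of this channel but, lacking the shared key, can only ferry ciphertext.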

I guess the last one percent is making sure you don't inadvertently brick customers' phones with a software update or fix.

Can the SE be updated on a locked phone? Because Apple's docs give the impression that it can't.

The only statement I could find from Apple was from the iOS security guide that states, "it utilizes its own secure boot and personalized software update separate from the application processor." I think we can both agree that's a pretty vague statement, if you have a better source I'd like to see it.

A former Apple engineer said on Twitter:

"@AriX I have no clue where they got the idea that changing SPE firmware will destroy keys. SPE FW is just a signed blob on iOS System Part"


Then Apple seems to confirm it:

"The executives — speaking on background — also explicitly stated that what the FBI is asking for — for it to create a piece of software that allows a brute force password crack to be performed — would also work on newer iPhones with its Secure Enclave chip"


I understand that the boot chain is the only way Apple may modify the behaviour of the Enclave, but how would the update be forced? DFU wipes the class key, making any attempt to brute force the phone useless. If debug pinout access is available, then why does the FBI need Apple to access the phone at all?

"These devices were already fenced off from the DOJ, as long as their operators were savvy about opsec."

I hate to be that guy, but if you have an op and you have any opsec, you aren't even carrying a phone.

Right ?

Like literally every other type of security, OpSec is not binary.

Any device that relies on hiding secrets inside the silicon itself is subject to hacking. Several secure-enclave-like chips have been hacked in the past using electron microscopes and direct probes on the silicon. If independent security researchers at the Black Hat conference have the resources to pull this off, Apple and the NSA certainly do. Exfiltrating the Enclave UID could be done by various mechanisms at the chip level, especially if you have access to the actual HW design and can fab devices to help.

I mean, we're talking about threat models where chip-level doping has been demonstrated as an attack. This just seems to be a variation on the same claims we've had forever about copy protection and tamper-resistant dongles: someone builds a secure system premised on a secret held in a tiny tamper-resistant piece, and the tamper resistance is eventually cracked.

It might even be the case that you don't need to exfiltrate the UID from the Enclave; what the FBI needs to do is test a large number of PIN codes without triggering the backoff timer or wipe. But the wipe mechanism and backoff timer run in the application processor, not in the enclave, and so they are susceptible to cracking attacks in the same way many copy protection techniques are.

You may not need to crack the OS, or even upload a new firmware. You just need to disable the mechanism that wipes the device and delays how many wrong tries you get. So for example, if you can manage to corrupt, or patch the part of the system that does that, then you can try thousands of PINs without worrying about triggering the timer or wipe, and without needing to upload a whole new firmware.

I used to crack disk protection on the Commodore 64 and no matter how sophisticated the mechanism all I really needed to do was figure out one memory location to insert a NOP into, or change a BNE/BEQ branch destination, and I was done. Cracking often came down to mutating 1 or 2 bytes in the whole system.

(BTW, why the downvote? If you think I'm wrong, post a rebuttal)

A couple issues:

* Decapping and feature extraction even from simpler devices is error prone; you can destroy the device in the process. You only get one bite at the apple; you can't "image" the hardware and restore it later. Since the government is always targeting one specific phone, this is a real problem.

* There's no one byte you can write to bypass all the security on an iPhone, because (barring some unknown remanence effect) the protections come from crypto keys that are derived from user input.

* The phone is already using a serious KDF to derive keys, so given a strong passphrase, even if you extract the hardware key that's mixed in with passphrase, recovering the data protection key might still be difficult.

No, the chief protection against the PIN code hacking comes from the retry counter. The FBI doesn't need the crypto keys, it just needs the PIN code. So it needs to brute force about 10,000 PIN codes.

Any mechanism that a) prevents the application processor from remembering it incremented the count, b) corrupts the count, or c) patches the logic that handles a retry count of 10 is sufficient to attack the phone.

Somewhere in the application processor, code like this is running:

  if (numTries >= MAX_RETRY_ATTEMPTS) { wipe(); }

or

  if (numTries >= MAX_RETRY_ATTEMPTS) { retryTime = retryTime * 2; }

Now there are two possibilities: either there are redundant checks, or there aren't. If there aren't redundant checks, all you need to do is corrupt this code path or memory in a way that prevents its execution, even if that means crashing the phone and triggering a reboot. Even with 5 minutes between crash-reboot cycles, they could try all 10,000 pins in about 35 days.
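A quick check of that arithmetic:

```python
# One crashed-and-rebooted phone per guess, 5 minutes per cycle.
pins = 10_000
minutes_per_attempt = 5
total_days = pins * minutes_per_attempt / (60 * 24)
print(f"{total_days:.1f} days")  # 34.7 days
```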

But you could also use more sophisticated attacks if you know where in RAM this state is stored. You wouldn't need to de-cap the chip; you could use local methods to flip the bits. The iPhone doesn't use ECC RAM, so there are a number of techniques you could use.


You aren't limited to 10,000 possibilities. You can use an alphanumeric passphrase. The passphrase is run through PBKDF2 before being mixed with the device hardware key.
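A sketch of that derivation with Python's stdlib. The salt, iteration count, and the HMAC mixing step are assumptions for illustration, not Apple's actual construction (on the real device the UID entanglement happens inside the silicon and the UID is never readable):

```python
import hashlib
import hmac
import os

def derive_key(passcode: str, device_uid: bytes, iterations: int = 100_000) -> bytes:
    # Stretch the passcode with PBKDF2-HMAC-SHA256...
    stretched = hashlib.pbkdf2_hmac(
        "sha256", passcode.encode(), b"per-device-salt", iterations
    )
    # ...then entangle it with the hardware UID (here via HMAC, as a stand-in).
    return hmac.new(device_uid, stretched, hashlib.sha256).digest()

uid = os.urandom(32)                   # stand-in for the fused, unreadable UID
k1 = derive_key("1234", uid)
k2 = derive_key("1235", uid)
assert k1 != k2                        # different passcode, different key
assert derive_key("1234", uid) == k1   # deterministic per device
```

Because the UID never leaves the chip, even a full dump of the encrypted flash gives an attacker nothing to brute-force the passphrase against off-device.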

On phones after the 5C, nothing you can do with the AP helps you here; the 10-strikes rule is enforced by the SE, which is a separate piece of hardware. It's true that if you can flip bits in the SE, you can influence its behavior. But whatever you do to extract or set bits in SE needs to not cause the SE to freak out and wipe keys.

We can still imagine a state actor spending the megadollars to research a reliable chip-cloning process, to bring parallel brute-forcing within reach. I wonder if the NSA have been on a SEM/FIB equipment buying spree lately.

The ultimate way to defeat physical or software attacks is to exploit intrinsic properties of the universe, which suggests finding a mathematical and/or quantum structure impervious to both.

Your reply is the kind of comment I come to HN for - we've started off talking about mobile device security and ended up discussing unbreakable quantum encryption.

I'm speaking of the case of the San Bernardino killers. Strong alphanumeric passphrases are anti-usability; the vast majority of people won't use them. Hell, the vast majority of people don't even have strong alphanumeric passwords for desktop services.

So it falls to either 2-factor or biometrics to avoid PINs. Biometrics, of course, have their own problems.

Perhaps people should really carry around a Secure Enclave on a ring or something, and with a button to self-destruct it in case of emergency. (e.g. pinhole reset)

You only need the strong alphanumeric pass phrases on device startup, then you can use TouchID. I bought an iPhone 6 for exactly this reason (employer required strong passphrase, was too annoying to type in on the Android device I had at the time).

In a way, that's even worse. You're more likely to forget a complicated passphrase when you only have to type it in so seldom.

You have to enter it every 48 hours.

Only if you don't unlock the phone in these 48 hours, no?

No, you have to enter it every 48 hours, regardless of what you have done with the phone in these 48 hours, and at every phone boot.

You seem to assume that corrupting the Secure Enclave firmware is easy, or that its RAM is exposed off-chip.

The entire point of a secure enclave is to completely enclose all the hardware and software needed to generate encryption keys in a single lump of silicon.

This means that all of its processing requirements are on chip (it's a complete co-processor), its RAM is on chip (not shared with the main CPU, and probably has ECC), and it uses secure boot to cryptographically verify that its firmware has not been tampered with before it starts executing. Additionally, it may even be possible to update its bootloader in the future to prevent further updates without a passcode.

The end result is that attacking a secure element is very difficult. There are few, if any, exposure points that would allow you to fiddle with its internal state, and any attempt to do so should result in the secure element wiping stored keys, making further attacks a moot point.

I don't make that assumption, I worked on developing TPM modules myself in the 90s at research labs, and our prototypes had even more anti-tampering than so far revealed about Secure Enclave/Trustzone: we had micro-wire-meshes in the packaging to self-destruct on drilling or decapping, we had anti-ultrasonic and anti-TEMPEST shielding. I'm pretty familiar.

The point is that state actors have vast resources to pull off these attacks. The NSA intercepted hardware in the supply chain to implant attacks as documented by Snowden. Stuxnet was a super-elaborate attack on the physical resources of the Iranian nuclear program, which was obviously carried out with supply chain vendors like Siemens. Apple uses Samsung as a supplier, and the US government has very high level security arrangements with the South Koreans, so how do we know the chips haven't been compromised even before they arrive at Foxconn for assembly?

Here's an example of a TPM module being decapped and hacked at Blackhat: https://redmondmag.com/articles/2010/02/03/black-hat-enginee...

Attacks have been shown using silicon doping, security fuse cutting, etc.

If the NSA really wanted to crack the Secure Enclave, I have very little doubt about their ability to carry it out.

> If the NSA really wanted to crack the Secure Enclave, I have very little doubt about their ability to carry it out.

Well they certainly really want to crack the Secure Enclave, so maybe this case is moot.

The NSA cracking the Secure Enclave is not the same as the FBI cracking the Secure Enclave.

If the NSA can't crack the Secure Enclave in a terrorism case, it's not super useful that the NSA can crack the Secure Enclave.

Perhaps the NSA is savvy enough to know that a heroic effort isn't needed, and that the FBI is mostly looking to set precedent rather than find anything worth the cost and risk of chip-hacking.

Interesting stuff, cool post.

Seems to me when we are at a point were every time the NSA wants to get at some data, the have to start a heroic effort of attacking low level hardware, we are in a pretty good state in terms of device security.

>its RAM is on chip (not shared with the main CPU, and probably has ECC)

Apple's security guide would indicate otherwise, look on page 7. The secure enclave encrypts its portion of memory, but it isn't built into the secure enclave itself.


Is there anything preventing them from imaging the parts of the device that store data? The data in the image would be encrypted, of course, but wouldn't this give them essentially unlimited (or up to their budget) attempts at getting to the data?

It's encrypted against an effectively random 128 bit AES key. Unlimited time is not enough.
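To put numbers on "unlimited time is not enough":

```python
# Exhausting a 128-bit AES keyspace at an absurdly generous
# trillion guesses per second.
keyspace = 2 ** 128
rate = 10 ** 12
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / rate / seconds_per_year
print(f"{years:.1e} years")  # on the order of 1e19 years
```

For comparison, the universe is on the order of 1e10 years old.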

The method I'm thinking of is:

1. Get a dump of the encrypted data.

2. Try to probe the hardware, potentially destroying it.

3. If the probe works, we're done. If not, put the encrypted data dump onto a fresh iPhone and repeat from step 2.

This way, you effectively get unlimited shots at an otherwise risky hardware probe.

If the encryption key didn't depend on the hardware this would work. Even the iPhone 5C that the recent court case is about relies on the hardware keeping a key secret and it doesn't contain the secure enclave. For an iPhone 5C, the encryption key is derived from the pin and a unique ID for the phone that the CPU itself can't read. The only thing that the application processor can do is perform some crypto instructions using the key, there isn't an operation that would just put the key into memory or a register that you can read from. Even if you have root and the phone in front of you with the password, there's nothing you can do short of decapping it to try to identify that key.

Unless there is a weakness in the PRNG/RNG that creates the fused key in the secure enclave itself. Which is not out of the question. I am not sure why the FBI didn't ask Apple politely how these keys are generated in the first place.

That seems excessively unlikely to me. The phone itself wouldn't have anything to seed a PRNG with, so the random number would need to come from an embedded hardware generator or a dedicated random number device in the factory, and both of those options would have huge amounts of engineering oversight.

>You may not need to crack the OS, or even upload a new firmware. You just need to disable the mechanism that wipes the device and delays how many wrong tries you get. So for example, if you can manage to corrupt, or patch the part of the system that does that, then you can try thousands of PINs without worrying about triggering the timer or wipe, and without needing to upload a whole new firmware.

I disagree. The pin validation is done within the secure enclave. You can't do it outside the secure enclave because the pin is combined with a secret that is burned into its silicon. The secure enclave can and will enforce timeouts for repeated failures, as well as refuse to process any pin entries after too many attempts. Disabling the wipe or bypassing the timer won't do you any good when you only have a few attempts.
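A toy model of that enforcement (all details invented). The point is that the counter lives inside the enclave, so patching code or RAM on the application processor cannot roll it back:

```python
class ToySecureEnclave:
    MAX_TRIES = 5

    def __init__(self, pin: str):
        self._pin = pin    # in reality: combined with a fused, unreadable UID
        self._fails = 0    # in reality: state internal to the enclave

    def try_pin(self, guess: str) -> bool:
        if self._fails >= self.MAX_TRIES:
            raise RuntimeError("locked out: no further attempts processed")
        if guess == self._pin:
            self._fails = 0
            return True
        self._fails += 1   # enclave-internal; the AP can't reset this
        return False

se = ToySecureEnclave("1234")
for wrong in ("0000", "1111", "2222", "3333", "4444"):
    assert se.try_pin(wrong) is False

# After five failures, even the correct pin is refused.
try:
    se.try_pin("1234")
    reached = False
except RuntimeError:
    reached = True
assert reached
```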


The state representing the number of attempts must be stored somewhere, and thus a determined adversary could eventually corrupt it.

Look, there's a big difference between trusting known ciphers that have been well studied by the world's top cryptographers, and a proprietary TPM chip that relies on security-through-obscurity.

The history of embedding secrets into black boxes is a history of them being broken. This isn't a theoretical concern, it's a very practical one.

The question is not whether a determined adversary could corrupt the counter. The question is whether they can corrupt the counter before they corrupt something else that causes total data loss.

Physical defenses are not security through obscurity, and why are you assuming they don't use known ciphers?

Kerckhoff's principle should be adhered to if truly secure encryption is desired; alas, then all sorts of hard obstacles pop up (UX becomes a SPOF, most commonly - a secret always needs to be stored somewhere, if only in the user's head).

OTOH, the practical purpose of encryption is to remain unbroken for long enough, not to be completely unbreakable. As seen here, security-through-obscurity is practical enough in cases where user-obtained key material is too weak to provide enough protection using strong publicized crypto. In other words, it's a two-part key: one is in user's wetware, the other in phone's hardware (as per obXKCD, it's usually easier to attack the former).

"(BTW, why the downvote? If you think I'm wrong, post a rebuttal)"

Don't discuss your (or others') votes.

Don't interrupt the discussion to meta-discuss the scoring system.

Isn't the point of their efforts to make it so that Apple doesn't know how to get into the phones? If someone from Black Hat or DEF CON can get in, the FBI should hire that person if they want access. The reason Apple is doing this is so that if they are served a court order, they can just say "we don't know how".

> You just need to disable the mechanism that wipes the device

Sure, to resist microscopic attacks, an IC must assert logical integrity to itself, i.e. that the gates and wires are not compromised by a microscopic attack.

But just because you and I haven't imagined it doesn't mean some kind of internal canary can't exist. Your naive counter (below) might instead be based on quantum cryptography, or on intrinsic properties of a function or algorithm which, if compromised, mean the SE cannot function at all.

The existence of one-time password schemes like S/KEY gives me hope, since it is a sequence generator that simply doesn't function without input of the correct next value (technically the previous value from the hash function). S/KEY itself is not the answer (wrong UX and no intrinsically escalating timer), but I wanted to illustrate that you can generate a self-validating sequence without tracking integer position.

Apple apparently has a motive and the war chest for the R&D. If they're hiring cryptographers (has anyone checked?), they're acting on it.
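To illustrate the S/KEY idea mentioned above: a hash-chain verifier stores only the last accepted value, so there is no mutable integer counter to corrupt; a candidate is valid iff hashing it once yields the stored value. A minimal sketch (Python, purely illustrative, not S/KEY's actual wire format):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Client side: compute seed, h(seed), ..., h^n(seed).
    Values are later revealed in reverse order."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

class Verifier:
    """Stores only the last accepted value; no position counter needed."""
    def __init__(self, final: bytes):
        self.current = final

    def accept(self, candidate: bytes) -> bool:
        if h(candidate) == self.current:
            self.current = candidate  # step one link down the chain
            return True
        return False
```

Note that replaying an already-used value fails automatically, because the stored value has moved past it.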

I upvoted because I think you are absolutely correct. Better security could be had with a two-factor system -- plug the phone into a cryptobox to decrypt. Having everything in one place is vulnerable.

If you need to plug the phone into the cryptobox to decrypt it, they're going to be in one place anyway.

I've been very impressed with what I've learned in the last few weeks regarding Apple's efforts to provide privacy for its customers using what seems to be some very robust engineering and design. I'm currently an Android user (Samsung S6 edge) but am seriously considering going back to the iPhone because of this.

The cynical side of me says that Apple's marketing tactics have worked. But I've got a feeling, heck, I want to believe, that this is actually driven by company values and not a short-term marketing benefit.

I wonder if Microsoft came out with Palladium (https://en.wikipedia.org/wiki/Next-Generation_Secure_Computi...) today, if it would be hailed as a great development for privacy or would still garner lots of criticism as it did 10 years ago.

Of Palladium, Bruce Schneier said:

> "There's a lot of good stuff in Pd, and a lot I like about it. There's also a lot I don't like, and am scared of. My fear is that Pd will lead us down a road where our computers are no longer our computers, but are instead owned by a variety of factions and companies all looking for a piece of our wallet. To the extent that Pd facilitates that reality, it's bad for society. I don't mind companies selling, renting, or licensing things to me, but the loss of the power, reach, and flexibility of the computer is too great a price to pay."

I think his fears have come true to some extent in iOS, but knowing what we know now about government surveillance of everybody, it may no longer seem like too great a price to pay. That is, if you trust the vendor. Apple seems to be worthy of that trust. But Microsoft...?

Edit: formatting

> I think his fears have come true to some extent in iOS, but knowing what we know now about government surveillance of everybody, it may no longer seem like too great a price to pay.

We're already paying that price, essentially. An iPhone won't run arbitrary code, a replacement OS, or accept code from arbitrary sources. It's already an exclusively vendor-curated platform. If you're already going to buy into that model, I don't see the point in not going for the greatest amount of protection that you can get. (OK, yes, a dev can compile their own code and push it to their own device. I'm actually not sure why I don't hear about this happening more often as a way to run "unacceptable" programs on iOS devices).

I thought that's the opposite of what Palladium did. Doesn't it make it so the apps and data on your computer aren't actually yours? Like Microsoft would have total control over what you put on your computer? I was under the impression it didn't do anything to protect your privacy: instead it actually put backdoors in your computer that Microsoft could access any time they wanted?

Wait, are you saying you trust apple yet not Microsoft or more than?

I do trust Apple more than Microsoft.

> Apple seems to be worthy of that trust.

Oh no... it's working...

Anything specifically missing on android side except the PR? Seriously asking if I'm missing something. The nexus series has comparable crypto hw and similar options for encryption + wiping.

Two things come to mind. First an equivalent of the secure enclave. Second a single company that is willing to go this far to protect its users. For Samsung this is complicated because both Google and Samsung are involved, and Samsung is not a US company so I'd expect them to cave in under pressure from the US govt more easily.

Edit: a Nexus device bought directly from Google with the right hw may address both points.

Looks like the Nexus 5X/6P have ARM TrustZone (http://phandroid.com/2015/09/30/nexus-fingerprint-security/).

Secure Enclave is a variant of TrustZone (http://www.iphoneincanada.ca/iphone-5s/apples-new-secure-enc...).

I have been looking at the Snapdragon 820, and at least on that level it does not seem that Android devices should be missing anything. The new Sense ID is an improved Touch ID, and I mean that both in terms of the fingerprint sensor itself and the hardware protection behind it. They implemented full UAF in the SecureMSM for the authentication. The best thing is that this is exposed to the layers above and can be leveraged in the growing FIDO ecosystem.

The major issue with Android systems does not seem to be lacking software and hardware, but rather the unwillingness of providers to push best practices as defaults to all users.

I somewhat agree and somewhat disagree with your analysis of the politics. There are advantages and disadvantages to both situations.

> For Samsung this is complicated because both Google and Samsung are involved, and Samsung is not a US company so I'd expect them to cave in under pressure from the US govt more easily.

Why? I'd expect just the opposite.

> Why? I'd expect just the opposite.

To many Americans, Apple is the example of American innovation and entrepreneurial spirit, and proof that the American model works. Apple employs tens of thousands of Americans directly, and probably provides jobs for hundreds of thousands indirectly. Going after Apple too aggressively, e.g. at the level where executives could be charged in court or products embargoed, would be a decidedly unpopular move with many voters and politicians. Samsung is a much easier target here.

Also, as an American company, Apple can legitimately enter the democratic debate; see the calls it makes to Congress. Samsung can't really do that. Imagine Samsung putting out a press release quoting the founding fathers or referring to the First Amendment. That would not be credible.

Edit: grammar

You are right to a certain extent! But let's not forget that Samsung is a huge company too and is registered under US norms. So the American executives of Samsung would be very much comfortable referring to either of them.


I'll repost a snippet from a post by merhdada that hints at the root of one of the problems with android security:

"This can happen only because of a design flaw in the security architecture of Android (L). Unlike iOS and like traditional PCs, the disk encryption key is always in memory when the device is booted and nothing is really protected if you get a device in that state. It's an all-or-nothing proposition."

Please read the entire thread, and check the links referenced in that thread, for information on how issues like these are mitigated.

That's only one issue though. There are a few more.

But none of that even matters a lot of times ... you really won't need to hack an android phone... because the data is also on corporate servers. So the FBI could get at it in any case most of the time.

Yeah, the problem is that Google's whole business model depends on uploading all your unencrypted data to their cloud, whereas Apple could probably decide to encrypt everything in iCloud so not even they could read it if any government/hacker came looking.

iPhone data is just as much on corporate servers.

Too bad Flock does not exist any more: a drop-in replacement for Google sync with end-to-end crypto. Very nice.

Of course it's a short-term marketing benefit. But, if the encryption is secure, then the marketing benefit is matching up with the customer benefit, so hey.

It's not clear that it IS a short-term benefit. Poll results are mixed (although the wording has a significant impact on the results) and a leading presidential candidate is calling for a boycott of their products. Marketing campaigns tend to be less polarizing. Also, I would imagine that losing the case would negatively impact sales more than if they had quietly complied.

Polls say whatever the poll-maker wants. Ask people if they support government surveillance, they say "Sure." Ask if they want the government to be able to access their Dick-Pics and the answer is a resounding NO[1]. Apple is on the right side of history here.

[1] https://www.youtube.com/watch?v=XEVlyP4_11M

Right, but the question here is of the obvious short-term marketing benefit, which to me is not that obvious. I think that in the short term Apple has more to lose financially by not aiding the FBI in an emotionally-charged request than if they had silently complied, particularly if they end up losing the case.

Maybe I'm being cynical, but if I were Apple and had just been forced to implement back doors with a gag order, I might be announcing that my new phones were unhackable too.

Is anyone aware of anything that makes this more than a leap of faith?

Do you really need such strong security? Or after the FBI forced Apple to apply their best engineering minds to crack your phone, they'd just find a grocery shopping list and pictures of your cats? Because this sounds a bit like Tesla's "operating room air quality" - something that might be useful 0.001% of the customers, and it's just marketing for the remaining 99.999%

How can you ask a question like this? Define "such strong" in this context. It's similar to asking "do you really need such strong free speech?" We're not talking about anything special here beyond a standard expectation of reasonable security. The fact that Apple is trying to make it "so secure even they can't hack it" is just a means for them to protect themselves that happens to align with the interests of the user.

General, unbreakable crypto security applied to all contents is a feature that very few people ever needed or even tried to achieve.

Until a few years ago you were perfectly content with keeping an agenda in your pocket and pictures in your living room's drawer. A minimum of privacy is of course needed and welcome; however, unless you're planning a major terror attack, or strategic war plans, or you have incredibly valuable industrial secrets (all cases in which you'll probably be using specialized SW to keep your information) you don't really need incredibly advanced security simply because nobody is going to spend vast amounts of time and resources to uncover your little secrets. The GP is talking about switching phone (spending money) to obtain a level of security that he won't need in a million years.

Your agenda in your pocket wasn't subject to unconditional dragnet surveillance. Copies of it weren't going to find their way on to security contractors' systems. Such copies wouldn't have then been stolen and distributed by whoever, and made available for search as you type. The intimacies of daily life are very precious.

For me it's not really about my personal security because, you're right, there's nothing interesting on my phone. My issue is with one entity having access to ALL of our phones. Have you read 1984? Because that's what that sounds like. It's too much power for the government to have.

I think I missed the part where anybody asked Apple to build a backdoor into every phone that could be accessed without appropriate control from the authorities and without passing through Apple each time. Of course I'm not saying that your data should be uploaded daily to a government's server for anybody with a badge and free time to spare to look through.

Yes, you did miss it. The FBI/etc are very clearly and deliberately looking to set a precedent for use in any and all future instances. Just because you don't seem to value personal privacy and security doesn't mean the rest of us are willing to throw it away for no good reason.




The FBI here only represents the 'legal' government and not the world of secret courts and the NSA.

The NSA did in fact try to build backdoors into important hardware and software standards. They did push companies into using worse crypto. They do massive port scanning and build themselves botnets from which they attack other nation states. And that's just a tiny fraction of what they do.

So yes, I absolutely do need computer hardware and software that even the manufacturer can't break. Low-level security for boot and authentication is only the first of many, many steps that we have to take, all the way up to improving usability in end-user applications to make it hard to do the wrong thing.

The FBI are not the only player; all governments want such control, and all governments have things like the NSA. Even private actors are getting better and better.

We do need better security to protect the integrity of all our data, including all our communication and even, if possible, the metadata that we produce.

Sorry but "unbreakable crypto" is the only right crypto.

Of course. And 11000 meters waterproof is the only waterproof acceptable for a watch. And operating room clean air is the only clean air. And obsidian blades are the only ones that deserve to be used in your kitchen. And triple malt, 60 years aged whiskey is the only whiskey. Etc.

The reason not everyone has the best watches, air conditioning, knives, or whiskey is that, for physical products, quality tends to cost more.

There is no reasonable argument to be made that people shouldn't have higher quality products when they _don't_ cost more^.

Apple only have to develop "unbreakable" encryption once and then it costs them no more to make it available in every iPhone than to only make it available in some of them. Indeed, it'd be cheaper than maintaining both breakable and "unbreakable" variants.

There are arguments to be made about the secure enclave hardware, since it presumably costs more to make it more tamperproof.

However, securing iPhones against this particular "attack" appears to be a software issue: iOS should never apply updates without an authenticated user approving them first.

^ For the avoidance of doubt, this includes externalized costs.

That argument doesn't hold water.

If you're using breakable crypto, you're not protected at any given time.

If you're using a watch that's waterproof up to 100m, you're safe up to 100 meters.

> If you're using a watch that's waterproof up to 100m, you're safe up to 100 meters.

To be pedantic, that's not exactly what is meant by 100m Water Resistant, but your point is valid.


I'm sorry, I might be wrong here, but I thought that any cryptographic system is breakable given enough time and resources. If this is true then, according to your statement, you're never protected. Therefore you might as well transmit and store plain data without any cryptography; isn't that the same?

Any watch can be breached by water, given enough time and pressure. Most watches would not survive very long at the bottom of the Marianas Trench. Similarly, most watches would not survive a few centuries in a shallow pool, even if rated for much deeper immersion.

Although no watch can be absolutely waterproof, not even at a given depth, there are levels of risk one can accept. A watch you can use at 100m for several hours a day is effectively waterproof if that's the harshest treatment the watch will receive.

Similarly, although no cryptographic system is absolutely unbreakable^, there are levels of risk one can accept. And, unlike with watches, we can design cryptographic systems which, except in the face of unforeseen mathematical breakthroughs, or bugs (or backdoors) in their implementation, cannot be broken in the next few hundred years even by a nation state-level attacker.

I think it is reasonable to describe a cryptographic system that can't be broken within the lifetime of anyone alive today as "unbreakable".

^ Except maybe one-time-pads, depending upon how "unbreakable" is defined.
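Some rough arithmetic behind that claim: even granting an attacker an implausibly generous 10^18 guesses per second (an assumed figure, far beyond any known hardware), sweeping a 128-bit key space is hopeless:

```python
# Back-of-envelope: time to exhaust a 128-bit key space.
key_space = 2 ** 128
guesses_per_second = 10 ** 18        # assumed, absurdly generous
seconds_per_year = 60 * 60 * 24 * 365

years = key_space / (guesses_per_second * seconds_per_year)
print(f"~{years:.2e} years to try every 128-bit key")  # ~1.08e+13 years
```

That's about a thousand times the age of the universe, which is why the practical attacks target the passcode or the implementation rather than the cipher.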

Your comment (and its sibling) substantially agree with what I wrote - there isn't absolutely unbreakable cryptography, only reasonably secure. Therefore the parent doesn't make sense.

Now, is a cryptography that can't be broken by anyone except maybe (that hasn't even happened yet) through a specific court order signed by a judge, reasonably secure? I think it qualifies as such. If you need even more security, I'm sure you can use specialized software to achieve it - I'm not saying you shouldn't be allowed to.

Strictly, it is not the cryptography being broken in this case. The FBI want to guess a (possibly) six-digit pin. The iPhone might have been configured to erase its data on 10 failed PIN attempts, so the current odds are not good. To this end, the FBI want Apple to produce a version of iOS that bypasses this restriction, and install it on the phone.
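A quick back-of-envelope on those odds, assuming a six-digit PIN, 10 tries before the (assumed) auto-erase triggers, and roughly 80ms per on-device key derivation (an assumed figure for the hardware-enforced floor):

```python
# Brute-forcing a six-digit PIN, with and without the attempt limits.
pin_space = 10 ** 6                  # 000000..999999
attempts_before_wipe = 10            # assumed auto-erase setting

p_success = attempts_before_wipe / pin_space
print(f"Chance of guessing before wipe: {p_success:.4%}")  # 0.0010%

# With the wipe and escalating delays disabled, only the assumed
# ~80ms key-derivation time limits the search:
guesses_per_second = 1 / 0.080
hours = pin_space / guesses_per_second / 3600
print(f"Full sweep without limits: ~{hours:.1f} hours")    # ~22.2 hours
```

Which is exactly why the FBI wants the limits removed: a one-in-100,000 gamble becomes a guaranteed result in under a day.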

Assuming I agree that a security system that can be turned off remotely by its vendor is reasonably secure, it is only a specific court order now. If Apple are successfully compelled to produce a version of iOS that bypasses PIN security, it will be much easier for the FBI to request that it be deployed on phones in the future - after all, that version of iOS will already exist then.

If Apple do make it, I am certain there will quickly be a slew of court orders regarding other iDevices that the authorities have in their possession, all of which are likely to be harder to defeat than the court order they would just have failed to defeat.

However, I don't agree that a security system that can be turned off remotely by its vendor is reasonably secure, anyway. There is nothing technically requiring Apple to wait for a court order: the phone will accept their new software whether or not it comes with a court order. Apple could decide to make PIN cracking available to anyone who can prove they own a given iPhone. Given their attitude, they probably won't, but the actual security mechanism is reliant on their goodwill for it to remain unbroken. I don't consider that reasonable.

> If Apple are successfully compelled to produce a version of iOS that bypasses PIN security

this would seem a rather scary precedent of forced, unwilling labor. i wonder if it could be construed as "involuntary servitude".

There's an idea used in crypto commonly called "reasonable security". Anything is possible given a computationally unbounded adversary, but the point of strong crypto is to make it such that cracking the crypto takes an "unfeasible amount" of time. Crypto isn't a spectrum like waterproofing is; it's binary: either it's broken or it's "will be broken".

Please see the reply to your comment's sibling, they say substantially the same thing.

>11000 meters waterproof is the only waterproof acceptable for a watch

It depends, how many meters does it have to claim before I can make sudden movements and god forbid press the buttons underwater?

> Until a few years ago you were perfectly content with keeping an agenda in your pocket and pictures in your living room's drawer.

only because they weren't (thought to be) subject to casual perusal by unknown entities. this is a silly thing to even mention.

> unless you're planning a major terror attack

ah, the "if you don't have anything to hide" rhetoric. do you really buy that?

> a level of security that he won't need

unless there is some nontrivial cost or burden associated, it's a red herring to belabor whether it's "too strong" or "more than needed".

I am not sure why this comment (and all Udik's comments) is being downvoted into oblivion. This is the view of the US government and quite likely a vast majority of citizens here (and, I would guess, in many countries).

This morning I was having a conversation with my fiancee, who said "if the US government gets a warrant they can open your mail, they can tap your phone calls, they can come into your house and search -- why should your phone be some sort of zone they cannot search even with a warrant?"

I happen not to agree but this is not some wacko view.

It might not be the most constructive way of doing things, but people tend to downvote comments they disagree with.

As to why they disagree: HN's audience is not representative of the general citizenry. We're better informed about technical security matters (or we like to think we are, at least). I suspect that correlates with being less willing to trust security to the goodwill of third parties.

You forgot about freedom of the press. Typical FUD:

"If you aren't doing anything wrong, what do you have to fear."

"If you do want something private then you must be doing something wrong, ARE YOU A TERRORIST!?!?!?!"

>Do you really need such strong security?

Do I really need a quad-core smartphone with a dedicated GPU, 3GB of RAM, a higher pixel density than I can possibly distinguish, etc. etc.?

Why would I settle for shitty crypto just because the information isn't a state secret?

Maybe you don't need it, but you'll have fun every day with it. While you'll never be able to enjoy the difference between "almost unbreakable" and "unbreakable".

And what's the cost differential to me as an end user?

And what's the difference to me between 452 ppi and 532 ppi? I'll never be able to enjoy the difference between the two, yet I would still go for the higher ppi, all else being equal.

It's never the case of "all other things being equal". The GP was saying that he switched from Apple to Android - presumably because there was a relevant difference between the two - but he's considering switching back to have a feature that he'll never use.

Of course there is always an appeal in the numbers. I'd go for a 40MP camera instead of a 20MP one - who cares if the quality of the lens is such that there is no difference beyond 10MP. It's marketing. It's curious how people so wary of being observed or exploited make themselves so prone to basic manipulation by entities who want to get their money.

Ah, I'm thinking more of something like WEP vs WPA2: why the heck would I want to downgrade my crypto?

I agree there may be other reasons the user switched, but maybe they switched to Android because they believed it to be more secure? Or maybe the user wants to vote with their wallet for the company they see as most in support of security/privacy.

I do agree, though, that switching for a feature you are unlikely to use is silly, but I think there are definitely reasons enough to make a switch like that from a "voting with your wallet" standpoint.

Do you drive around in a 1-litre car? Or do you buy a car with a bigger engine?

In both cases, when was the last time you drove it at its maximum speed all the time? Or ensured that you were using maximum torque at all times and always sitting in the maximum power band for the engine?

If you find that you haven't done these things, you probably should ask yourself why you have a car, right? After all, you're never going to drive the full speed of the car, so why have the car in the first place?

Yes. Because I've got plenty to lose from criminals too.

A lot of the comments on that article burn me up. People in the U.S. really think there's a terrorism problem here. The only problem is the government spending so much money on a non-issue! Politicians love to "debate" it because they know it is one of those things that looks good to naive citizens, but they really don't have to do anything because there's nothing to be done.

What really burns me is that this strategy is so well known. 1984 was written almost 70 years ago, and yet we have millions of people begging for persistent, unavoidable surveillance by authorities as part of a never-ending war with an ambiguous enemy that our own policies are strengthening.

Referencing 1984 is childish in this context, we're talking about obtaining a warrant for known suspects or already convicted persons. The enemy isn't ambiguous, you're purposely muddying their image.

I believe the GP was making a generality and not talking about just this specific scenario. "Terrorism" is an ambiguous enemy and while the number of deaths to terrorism is disheartening, it pales in comparison to many other problems (e.g. car accidents or heart disease).

Let's not forget that because terrorism is ambiguous, our own government can create mock attacks and blame them on 3rd parties. Furthering their own agendas. Invoking fear and loathing in the citizens.

Indeed. Even Bernie doesn't make this point (or at least, I haven't heard him make it). To stand up and say, "Actually, terrorism isn't a big threat to the US, especially compared to ..." would be political suicide. Why? Because terrorism isn't about any real threat; it's about hurt pride, outrage at being vulnerable, outrage at being hated, and underlying it all a cultural animosity that ranges from dispassionate concern to visceral hatred. Americans are very much doers and they want to "win the war on terror". Which of course is stupid, since terrorism has always been around and will always be around. (And in another twist of irony, I am positive that the American Revolutionaries were called terrorists by the British.)

Anyway, a rational politician would have a tremendous uphill battle against both Pride and Ignorance. He or she would have to have tremendous skill as a teacher and a leader, not to mention the emotional fortitude of a Buddha to endure the onslaught of hatred.

> Even Bernie doesn't make this point (or at least, I haven't heard him make it). To stand up and say, "Actually, terrorism isn't a big threat to the US, especially compared to ..." would be political suicide.

Sanders has expressly argued that climate change is a bigger national security threat than terrorism (or anything else), and did so in one of the Democratic debates, in response to a question on national security threats. While that may not be directly minimizing terrorism, it certainly is explicitly placing it behind other problems in terms of need for focus.

> (And in another twist of irony I am positive that the American Revolutionaries were called terrorists by the British.)

They absolutely were not; the term "terrorists" was first applied to the leaders of the regime of the Reign of Terror in the French Revolution (shortly after the American Revolution), and it was quite a long time after that before the term was applied to actors other than state leaders applying terror as a weapon to control their subject population.

it's an appeal to emotion and it's actually a bit disgusting to me. I wish my government would stop creating the terrorists that it wants to then fight.

> People in the U.S. really think there's a terrorism problem here

out of curiosity, what evidence is there that there isn't?

perhaps i should ask what you mean by "a terrorism problem" as well.
