Hacker News
Apple ordered to bypass auto-erase on San Bernadino shooter's iPhone (techdirt.com)
685 points by bgentry on Feb 17, 2016 | 348 comments



A thought experiment: Let's say the government creates hardware encryption standards, in the style of FedRAMP, that set requirements for preventing tampering by foreign governments. Then imagine that a consumer electronics company voluntarily makes all its devices comply with this standard. Could a court compel the company to defeat the very standards the government set as tamper-proof against governments?

A second: What happens if Apple states that it will take a 50-person team, with an average annual labor cost of $200K/person, approximately 5 weeks to fix the problem, with a 50% chance of success? Can Apple bill the court a million dollars to try to fix the issue?
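The arithmetic behind the "million dollars" in that second scenario can be checked directly; this quick sketch uses only the numbers given above:

```python
# Rough cost estimate from the hypothetical above: 50 engineers at an
# average annual labor cost of $200K each, working for 5 weeks.
team_size = 50
annual_cost_per_person = 200_000  # dollars
weeks = 5

weekly_cost_per_person = annual_cost_per_person / 52
total_cost = team_size * weekly_cost_per_person * weeks
print(round(total_cost))  # -> 961538, roughly the "million dollars" cited
```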

A third: Apple open-sources their encryption modules and firmware. They no longer have proprietary information for how to unlock the phone. Are they legally required to be the ones who defeat a system to which they hold no proprietary information?

A fourth: The small team that built the system no longer works for Apple. Perhaps their visa was revoked and they left the country, perhaps they were poached by a competitor, or perhaps they retired in the years since this module was published. Who is responsible for complying with the order?

A fifth: The data is actually corrupted. Apple presents this conclusion under penalty of perjury after a thousand hours spent on the project, and requests compensation for those hours.

A sixth: Apple requests that trading of its stock be frozen for one month while it expends considerable resources complying with an unexpected court order relevant to national security.


1: Yes.

2: Yes.

3: No, but they will probably be the ones asked anyway, and then yes, they would be legally required.

4: Apple.

5: What's the question? If the question is whether they will be compensated, then yes.

6: They can't. They don't own their stock. Bad PR is not a good enough reason.

You are treating the court like a mathematical proof and finding edge cases. I used to as well. But courts don't work that way at all - they don't care in the slightest about your proof. They analyze things on a human level, not a mathematical level.


>They analyze things on a human level

This (and the job prospects) is the reason I chose computer science/software engineering over law, even though most people who weighed in on what I should be said I should be a lawyer. The law requires not just thinking like a human, but accepting such thought as valid. To me, the logic is exactly the same as that of a racist who makes special exceptions for every minority person they know while still holding their view in general. In effect the system is built on a mix of emotion and logic, and in doing so it is insanely unjust to those who fall on the unpopular side. A President who admits to smoking pot leading a government that ruins the lives of people who use pot... it is just as bad as some older men I know who admit to having tried pot but think anyone currently trying it deserves prison.

Analyzing something on the human level is always a horrible standard when viewed outside the context of our own emotional fallacies.


There's an amazing meta-point here: the people on the other side of this debate seem to have no need for logical cohesion.

I might extend that a bit and ask: what are the consequences of such a lack of cohesion? First, agreement on anything has to be reached on close to a case-by-case basis, since adherence to standards is weaker. This can introduce favoritism (e.g. some people get hit for pot and others don't). Another consequence might be lower up-front costs: not needing to think deeply about edge cases clearly takes less time when running first-time-seen cases. This in turn means people won't hesitate as much to set new precedents, because they won't notice when they do. A further consequence is the inability to get consistency across an entire system, since not everything can be debated by everyone and there are no clear rules.


You NEED human emotion not just logic when making laws.

Pure logic is like 100% free markets with zero oversight and government interference, it looks good on paper but not in practice.

Human emotion in laws adds things like forgiveness, understanding and compassion, things that mathematics can't help with. I know it also adds greed, unfairness and discrimination, hence the constant struggle to find the right balance.


> They analyze things on a human level, not a mathematical level.

Courts analyse things on a political level. It has less to do with what normal humans think is right and more to do with causing the outcome the judge wants to occur.

The result is that the better judges are more consistent (and then get celebrated when the outcome in a specific case is politically popular but pilloried for creating "loopholes" when it isn't), whereas the worse judges find a way to make the politically popular thing happen no matter how tortured the logic required to get there. And the existence of the second class destroys the rule of law, because when you get in front of one of them it doesn't matter what you did; it only matters whether the judge likes you.


Thanks for answering. And the original question was great too. Are you a lawyer? Not trying to dismiss your answer because it seems logical, but asking so that I can ask something else.


I am not a lawyer. There was a time when I spent a lot of time reading verdicts from judges. (The higher the level of judge, the better the writing; they are surprisingly readable, not full of legalese as you might expect.)

Anyway, I used to think that law was mathematical, but after reading lots of verdicts I realized it was not.

> but asking so that I can ask something else

Ask. If I don't know, I'll tell you that.


I was going to ask what happens if Apple says no! I don't know what the FBI's next step would be. What do you think? Can Apple be sued over this? Can they say, okay, give us the device and we will give you the data but not the actual 'modified OS'?

(Or we'll just have to find out?)


> what happens if Apple says no.

They would be fined, or executives charged with contempt of court, with possible jail time. That would also make the judge mad at them, and unlikely to agree to anything they ask.

The Judge has a LOT of power - he can impose some really high fines, and that's just to start with. Apple will not mess with him.

> I don't know what the next step of FBI will be.

Not the FBI. If the judge says to do it (so far he has not), he will be the one making sure they do it. The FBI will just complain to the judge if necessary, not actually do anything.

> Can they say okay give us the device and we will give you the data but not the actual 'modified os'?

They can ask the judge if they can do that. It's up to the judge to say yes or no. They'll probably have to explain why it's better that way and that the end result would be the same. The FBI can then argue against it (or accept it). The judge will listen to both sides and explain his reasoning in a written opinion.

You might want to try to read all the stuff the Judge writes in this case. Ignore what the lawyers for Apple or FBI write, read just what the Judge writes (he will summarize what the lawyers wrote, so you won't miss anything).


Thank you for taking the time to reply. That is what I wanted to know.


Specifically regarding question 5 - how does the burden of proof work? How do statutes of limitations apply?


There is no burden of proof. Apple says it's corrupted, and that's about it. I'm sure the other party will ask for details. If they don't believe Apple, they'll explain that to the judge and he'll go from there, but otherwise no one will demand proof of anything.

I don't see how statutes of limitations are connected to the question.


re: statute of limitations - if we are talking exponential levels of difficulty to crack vs. exponential increases in computing power (Moore's law), I wonder whether "we won't have the computing power to crack this for X years, so a statute of limitations of (X-1) years means we are functionally unable to answer the question within the allotted time" becomes defensible as a reason to call something "impossible", i.e. it's improbable to solve within the statute of limitations.
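That reasoning can be sketched as a calculation. This assumes guessing power doubles on a fixed Moore's-law schedule; the doubling period and rates are made-up numbers for illustration, not real benchmarks:

```python
import math

def years_until_feasible(keyspace_bits, guesses_per_sec_today,
                         doubling_period_years=2.0):
    """Years until Moore's-law growth makes brute-forcing a keyspace of
    2**keyspace_bits take under one year. Illustrative only: assumes the
    guessing rate doubles every `doubling_period_years`, and a search is
    'feasible' once one year at the then-current rate covers the keyspace.
    """
    seconds_per_year = 365 * 24 * 3600
    # Need a rate r such that r * seconds_per_year >= 2**keyspace_bits
    needed_rate = 2 ** keyspace_bits / seconds_per_year
    if guesses_per_sec_today >= needed_rate:
        return 0.0
    doublings = math.log2(needed_rate / guesses_per_sec_today)
    return doublings * doubling_period_years

# A 128-bit key at a (generous) 10^12 guesses/sec today:
print(years_until_feasible(128, 1e12))  # roughly 126 years at these made-up numbers
```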


The statute of limitations only runs until the court date. Once that starts there is no statute of limitations, even if the case takes years.

The closest I can think of is a right to a speedy trial, but that has so many exceptions (for example if the person is out on bail) that I couldn't say how a judge would rule in this scenario. Especially if it was the defendant causing the delay by refusing to reveal the password and making the prosecutor get it the hard way. (Not applicable in this case, might be applicable in another.)


Specifically in this case, the only non-John Doe defendant I am aware of is dead, so those provisions do not apply.

The FBI is not doing this for proceedings-related evidentiary purposes but for investigative purposes. No one has been charged here (from what I know), so the issue of speedy trials is moot, while the statutes of limitations on the crimes under investigation are likely extremely long, if they exist at all.


A statute of limitations prevents you from charging someone with a crime once some period of time elapses. It says nothing about how long you have to prove your case in court. That's more likely to run afoul of the right to a speedy trial.


The fact that courts don't have ways to handle edge cases is a huge problem with our case law style legal system.


They do have ways to handle edge cases: judges.

They are imperfect and I'm sure you can find much wrong with the courts, but attempting to codify every edge case is not only impossible but dangerous. The more laws you create, the more likely you are to find yourself breaking one. It's not a perfect system, but a cornerstone of American democracy is that you are judged by people, people like you, not by an algorithm.


> ...people, people like you...

We've been reminded again and again by recent events how horribly wrong things can go when the system is applied to people who are not like the judge/jury/officers/etc. There's got to be a better way than relying on inherently biased people, especially for commonly persecuted groups (minority races/orientations/occupations).

I think when someone says they want an axiomatic or algorithmic legal system, they're saying people are not like them, and they would rather be judged by an algorithm. Also that they would rather know in advance what behaviors will be judged positively or negatively.


I understand this, and if I thought it were possible, I would agree.

However...

The law is essentially trying to codify a moral code, one that changes with time. We've seen how the law struggles with changes to technology, lifestyle and shifting attitudes. But sometimes the very axioms change (see slavery, suffrage, common law). Because the law cannot keep up with changes, we're stuck with messy human interpretation to smooth over some of those rough edges. Something like Brown v. Board of Education, which seems obvious in retrospect, may not have occurred in an axiomatic system (perhaps Plessy v. Ferguson may not have occurred either, though considering the primary axiom that American law is based on once said that blacks were 3/5 of a human, I find that unlikely).

I would argue that we ought to try to introduce algorithms into the enforcement of the law (policing, traffic enforcement, jury selection, public defenders, etc) rather than the interpretation of it. Of course, one could argue mass surveillance is exactly that, so I don't know.


You ever get in a fight with someone?

You ever get in a fight where it wasn't 100% clear who was right and who was wrong? You were both a little bit right and a little bit wrong?

Welcome to the problems our legal system has to deal with. The world isn't a mathematical equation. It's fuzzy. So is our legal system.


I think my comment was misinterpreted. Our legal system is a part of our government, which is a representative democracy. Part of the idea behind a representative democracy is that it is designed to protect the minority from majority mob rule. By definition, edge cases are cases that fall outside of the normal majority. Our legal system is fuzzy because it is operated by humans, not silicon.

The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward. So in a sense, case law is intended to be the legal equivalent of mathematical proofs in the courtroom. Obviously this analogy isn't 100% correct, but the main reason it can't be is because of the thousands of edge cases that come up in real life...edge cases that are minority cases intended to be protected by representative democracy, but the case law legal system has difficulty actually doing so because of the design of the case law system. See the conflict? Completely lost yet?


>The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward.

Sorry if this comes out as pedantic but I think it is important. Case law is not intended to minimize fuzziness but to address occurrences of fuzziness.


I disagree, I think precedent reduces fuzziness because it shows how the law has been interpreted in the real world.

No cases are identical, but knowing how a similar situation was handled in the past clarifies, not confuses the situation. It gets us closer to a consistent interpretation of the law, which is, in my opinion, paramount because consistency ideally means predictability and equality.

Indeed the roots of the common law go back to making sure all people are viewed equally under the law, rather than having random interpretations based on who's judging and who's being judged.


From my parent comment:

The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward.

Aren't we in agreement?


But you then go on to speak of edge cases that precedent can't deal with, and I would argue that your view of case law as essentially "algorithmic" is flawed. Obviously precedent cannot be exact, but using prior interpretations helps guide thinking.

For example, look at Katz v. US, one of the seminal cases that would inform this case. In Katz, the court starts with existing law, one that prevents illegal searches, and tries to decide whether that law can apply to electronics (such as tapping a phone). At this point the OCCSSA is in effect, but not really tested, so we've got a pretty fuzzy legal area despite the fact that phones are a well established technology. The court rules that even though law enforcement did not search or seize things, that privacy is still implied in electronic communications because of other acts a user takes surrounding the act of making a phone call (closing the door, making the phone call from home, etc.).

In fact, there's a whole legal concept at work here called lawful intrusion that is built upon every time a case is judged. These concepts, and human application of them to the case at hand, help attorneys, judges and juries deal with edge cases.


> You ever get in a fight where it wasn't 100% clear who was right and who was wrong? You were both a little bit right and a little bit wrong?

I'm not really sure what point you're trying to make. Yes, the world is not black and white, but neither is math.

If you have e.g. a car crash where one person was speeding and the other failed to yield the right of way, they're both at fault. Maybe one is more at fault than the other and therefore has to pay a higher percentage of the damages, but how is that not still an algorithm?

The hard part is designing the right algorithm ahead of time. But throwing up your hands and saying that problem is hard and therefore we should give up on having laws and just let judges decide everything on a case by case basis isn't democracy and isn't the rule of law. It's the rule of whatever the judge says it is today.
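The car-crash example above really can be written as a small function; the fault percentages below are hypothetical, a sketch of comparative negligence rather than any jurisdiction's actual rule:

```python
def split_damages(total_damages, fault_shares):
    """Comparative-negligence sketch: each party pays damages in
    proportion to their share of fault. `fault_shares` maps party
    name -> fault fraction; the fractions must sum to 1."""
    assert abs(sum(fault_shares.values()) - 1.0) < 1e-9
    return {party: total_damages * share
            for party, share in fault_shares.items()}

# Hypothetical: speeder 70% at fault, driver who failed to yield 30%.
print(split_damages(10_000, {"speeder": 0.7, "failed_to_yield": 0.3}))
```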


It's impossible to design an appropriate algorithm ahead of time capable of accepting all of the possible inputs that might occur.

Which isn't to say that we shouldn't do our best. We shouldn't just throw up our hands. But I think it's important to admit that judges will always be necessary to handle inevitable ambiguities.


It's completely possible to design an algorithm that will give a deterministic result in every case. It might not always be the result you want, but a judge might not give the result you want either. And at least with the algorithm the result is predictable (even if not predicted) and, if wrong, can be fixed and then applied consistently in the future, without making decisions based on politics or race or personal relationships.

Or to put it another way, we could have an algorithm make the decision but then have a judge whose job it is to find when the algorithm is wrong and then fix it for future but not past/current cases.


Yes. That's why I said "appropriate algorithm" and not just "algorithm."

There is a tradeoff between predictability and correctness. Where you come in on that tradeoff is pretty different from most other people.


> There is a tradeoff between predictability and correctness. Where you come in on that tradeoff is pretty different from most other people.

Because I don't see it as a tradeoff between predictability and correctness, I see it as a tradeoff between correctness for past acts and correctness forever in the future.

If you know a law is ridiculous but you also know that a judge will see the same thing and then not let you do that, you won't do it. Some people like that result. The problem is it causes the ridiculous law to carry on existing and not be fixed, because nobody is willing to challenge it when they know they'll lose and go to jail even though by the letter they shouldn't. So then nobody ever knows what the law actually is because it clearly isn't what it says it is, but if it isn't that then what is it?


[deleted]


Maybe when humans are axiomatic that might work.

Till then......

Edit: The deleted post was suggesting that it would be wonderful if the law would be entirely axiomatic. The post was getting a lot of downvotes.


Ah yes, that worked great for Asimov.


No robot ever lobbied government for special exemptions to robot-owned businesses, which it then used to completely trash the economy while accumulating vast amounts of wealth, which it in turn used to strengthen its grip on government. Also, no robot ever argued that it must own every single thing it says for life+95 years or it will stop speaking.

The occasional malfunction is to be expected.


That would be terrible


Why?


Even ignoring Gödel - take the stuff that's been legal in the past and isn't anymore.

Slavery? Restricted Suffrage? Genocide? Witch trials? Heroin?

Or stuff that's been illegal before

Inter-racial marriage? Hemp? Gay marriage?

Due process isn't enough for a legal system to work - the law has to change, and revocable axioms aren't exactly a thing.

In fact, my sister goes on and on about jurisprudence in such discussions - the letter of the law versus the spirit of it.


> revocable Axioms aren't exactly a thing

Aren't revocable axioms why there are different geometries?


Because designing a law that makes sense when interpreted strictly formally is a friendly-AI-equivalent problem, i.e. it requires figuring out what values humans actually care about and then implementing them in a system that can outsmart anyone trying to game it. Failing that, people will get hurt by corner cases in such a law, or suffer abuse from people gaming the system.


There is a different point to be made here though. You can't predict everything ahead of time, but what do you do when somebody finds an edge case?

For the future the answer is obvious. You consider the edge case, decide what to do when it happens, publish the decision and follow it from then on.

The real question is, what do you do for the person who fell into the edge case before it was decided?

And the problem with the existing criminal justice system is that the judge decides how that edge case should be handled and then imposes it on the defendant's behavior ex post facto. Which the courts don't allow Congress to do but seem to have no problem doing themselves.

There is a fair argument to be made that we should let the first defendant who does it get away with it, but then publish the decision saying that it was wrong and imposing that on the next one.

It's kind of like the exclusionary rule: Somebody getting away with something that one time would be the punishment Congress gets for not considering more of the edge cases from the outset (or passing simpler laws that are easier to reason about).


I think there's yet another point to be made here. In an imperfect world, where we can't design axiomatic systems that are corner-case free and where we can't get stuff right on the first try, the judge's flexibility seems like a necessary safeguard against the letter of the law totally missing its spirit. I like to imagine it as a grease, a lubricant - you need to apply it, or else minor imperfections will grind your machine down.

A fundamental question here is - do you want the law to be a completely trustless system? Sticking to the letter of the law and enumerating all imaginable corner cases are examples of going into that direction.

I'm not totally convinced trustless systems are a good idea for society, because of inefficiencies they introduce as a cost of not having to trust anyone.


It's just not possible.

It's like arguing that the developers of Windows or MacOS or Linux or whatever should just start over and do everything right this time. Or Intel and AMD should create a new CPU architecture and do it right this time. What you'll quickly find are a million reasons why things are done the way they are, and that this whole restarting project was a terrible idea.

Yeah, it sounds great, but anyone who knows what that actually means realizes it's a terrible idea.


It's not possible to make a perfect system. It's completely possible to let the first person to find an imperfection take advantage of it but then immediately fix it.

Which would create an incentive for finding imperfections and therefore fix more of them faster.


Because incompleteness.


6: Wrong. Yes they can; a company can suspend public trading of its stock any time it wishes.

https://www.sec.gov/litigation/suspensions.shtml


> The federal securities laws allow the SEC to suspend trading in any stock for up to ten trading days when the SEC determines that a trading suspension is required in the public interest and for the protection of investors.

> allow the SEC to suspend


My understanding is that it's the SEC who suspends trading on a stock, not the company itself.


Ironically, many .gov organizations are rolling out iPhones by the thousand because of the strong security controls available on the platform.


While I only have anecdotes, when I've encountered governments rolling out Apple devices it has been far more about wanting an Apple device than about any technical capability it provides. This comes from talking with those in charge of the rollouts, who admit as much. Granted, these are only 3 very much non-random anecdotes.


Cite your source(s), please.


I worked in government mobile security for many years, until last November. Many agencies are indeed rolling out iPhones and finally scaling back on BlackBerrys.


I'm not at liberty to say. :)

That said, Apple's enterprise business grew 40% in 2015, to $25B. (http://appleinsider.com/articles/15/10/27/apples-enterprise-...). .gov is a big part of that. When you attend Apple enterprise-focused events, the attendance has grown beyond the old media/field-service crowd to include finance, police and .gov attendees.

If you need to comply with various federally mandated compliance regimes (e.g. FIPS 140-2, CJIS, IRS Pub 1075, etc.), iOS is very clearly the easiest and most straightforward platform on which to achieve that compliance. It's not as trivial as it was with legacy BlackBerry, but it's pretty close.

With Android, the carrier and manufacturer variance makes compliance more difficult to achieve, demonstrate and maintain. That said, Android offers a number of advantages from an application perspective, but in production you'll commonly see licensed third-party software in place to perform typical mail and other functions.


Blackberry's collapsing business model.


Why is that?


BlackBerry failed to keep the positive aspects of its legacy platform functioning when modernizing the platform.

I used to run an environment with something like 25k devices... Provisioning and management were braindead simple with legacy BlackBerry, and its integration model for mail & calendar was robust and reliable. Most users considered BB more reliable than any other mail-access method. PIN messaging was device-centric vs. identity-centric, which made it attractive for many use cases. That also went away, along with the perceived and real security benefits.

That benefit became a liability - BlackBerry was very "enterprise telecom" and mail-centric, so the people responsible for BlackBerry were very much affected by tunnel vision. That's why BlackBerry was surprised by the market shifts - their customers were very happy, but were ultimately people with an overly narrow focus who were disconnected from the business.

When BlackBerry switched to the ActiveSync model, they became just another ActiveSync device, with the added "benefit" that the rest of the platform was a mess. They are now trying to leverage their past reputation for good management to become an MDM vendor for some reason.

iPhone slaughtered BlackBerry by attacking its strengths... BBM/PIN messaging with iMessage (with the added feature that the messages cannot be intercepted when it's enabled), the obvious improvements of iOS vs. BlackBerry OS, and an easy institutional management model.


Poor sales, BlackBerry's major advantages (like BBM) being overtaken by other services, being late to adjust to the times, etc.

I doubt you were suggesting that BB's downfall is rooted in their encryption, however. In the slim case that you were, they didn't speak up about backdoors and letting .gov into their devices until 2015, well after their market share had shrunk.


While I haven't seen heavy marketing in that area, there appears to be some level of continued work on BlackBerry's messenger: http://www.bbm.com/bbm/en.html


A seventh: What if there's influence from super-secret organizations to make those FedRAMP-style standards less "tamper-proof"? Then a court wouldn't even need to be involved: https://www.eff.org/deeplinks/2014/11/eff-joins-call-nist-we...


Courts already have ways to balance requests for information (by the government or another litigant), with the burden on third parties who might have to undertake expenses to get that information. Generally, third parties are required to take reasonable measures to help.


Remember, this is an iPhone 5C, which doesn't have Touch ID or the Secure Enclave; the security model for this phone is significantly different from that of more recent iPhones.

On phones with a Secure Enclave, the wipe-on-failure state is managed in the coprocessor (which runs L4) and is not straightforwardly backdoor-able.

If you're worried about the police brute-forcing your phone, enable Touch ID and set a passcode that is approximately as complex as the one on your computer.
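The wipe-on-failure behavior this thread keeps referring to can be sketched roughly. This is an illustration of the policy, not Apple's implementation; the delay schedule and 10-attempt erase limit are assumptions based on iOS behavior of the era:

```python
# Sketch of the wipe-on-failure policy discussed in this thread -- an
# illustration, not Apple's code. The delay schedule and the 10-attempt
# erase limit are assumptions.
ESCALATING_DELAYS = {5: 60, 6: 300, 7: 900, 8: 900, 9: 3600}  # seconds

class PasscodeLock:
    def __init__(self, passcode, erase_after=10):
        self._passcode = passcode
        self.failures = 0
        self.erase_after = erase_after
        self.wiped = False
        self.delay = 0  # forced wait before the next attempt

    def try_passcode(self, attempt):
        if self.wiped:
            raise RuntimeError("device erased")
        if attempt == self._passcode:
            self.failures = 0
            self.delay = 0
            return True
        self.failures += 1
        self.delay = ESCALATING_DELAYS.get(self.failures, 0)
        if self.failures >= self.erase_after:
            self.wiped = True  # encryption keys destroyed; data unrecoverable
        return False

lock = PasscodeLock("1234")
for _ in range(10):
    lock.try_passcode("0000")
print(lock.wiped)  # -> True
```

Removing the delays and the `erase_after` check is exactly the "backdoored OS image" the order asks for, which is why the whole fight is over who can ship such an image.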


Even without the Secure Enclave, is it even possible for Apple to do this? The article talks about how Apple could add a backdoor to the OS and update the software on the device in order to break this, but I'm not sure how anyone is supposed to update the software on the device while it's locked without erasing it in the process (assuming, of course, that the iPhone is running a relatively recent version of the OS to begin with). AIUI Apple used to have the ability to backdoor phones to comply with law-enforcement requests, but they removed that ability several years ago. And of course, if they still had that capability with this phone, nobody would need to order them to add a backdoor, since they could just bypass the passcode directly.


The court order refers to the need to load the custom OS image via DFU (device firmware upgrade) mode. I am not an iPhone user, but I'm assuming it is exactly what the name implies (some pre-boot recovery environment).


It's been a few years since I looked at DFU, but my impression was that installing a new OS via DFU would have the side-effect of erasing the device.


That's probably the case for you and me, but I'm sure that with sufficient knowledge of how DFU works (for example, by employing the engineers who designed it) you could persuade it to rewrite only particular blocks, leaving the data intact.


In my jailbreak days, the DFU update did indeed wipe the device. I don't know how it works.


Did it actually write zeros to the flash memory or did it leave the data sitting around somewhere?


The encryption key is deleted. All data can remain on the flash for years, but it's useless.


But if you're planning on brute forcing the encryption key, this side-effect doesn't matter.


The key is not the password. If it were, the phone would have to re-encrypt everything every time you changed the PIN or password. The password unlocks the key. And if you brute-force the key itself, it might take decades, maybe centuries.
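That key-wrapping idea can be shown with a toy sketch. This is deliberately not real crypto (a hash-derived XOR stream stands in for a proper cipher, and all names are invented for illustration): the data is encrypted under a random master key, and the passcode only wraps that key, which is why a PIN change never re-encrypts the data itself:

```python
import os, hashlib

# Toy illustration (NOT real crypto) of key wrapping.
def toy_encrypt(key, data):
    # XOR with a hash-derived keystream; applying it twice decrypts.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

master_key = os.urandom(32)                      # encrypts the actual data
ciphertext = toy_encrypt(master_key, b"secret file contents")

# The PIN only wraps the master key, not the data.
wrapped = toy_encrypt(hashlib.sha256(b"1234").digest(), master_key)

# Changing the PIN: unwrap with the old PIN, re-wrap with the new one.
unwrapped = toy_encrypt(hashlib.sha256(b"1234").digest(), wrapped)
rewrapped = toy_encrypt(hashlib.sha256(b"9999").digest(), unwrapped)

assert unwrapped == master_key                   # data never touched
assert toy_encrypt(master_key, ciphertext) == b"secret file contents"
```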


You could rewrite the bootloader and recovery firmware to really do anything. Resize some partitions and dual boot into a shell with some drivers for keyboard and wireless.

Basically jailbreak..


So all the FBI has to do is desolder the flash chips and hope it was a weak passcode?

Seems like they're just hoping to use this as an opportunity to set a precedent. Never let a serious crisis go to waste?

Also hasn't Apple been able (and previously willing) to unlock pre-Secure Enclave phones for law enforcement for... ever?


No, because the flash chips are only decryptable when they're installed in the phone. The user's passcode is tangled with a long key burned into the SoC, so you need both to decrypt the flash.
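A rough sketch of that "tangling", with PBKDF2 standing in as a slower, deliberate stand-in for the real on-device derivation (the UID handling here is an assumption for illustration; the real UID is fused into the SoC and never readable by software):

```python
import hashlib, os

# Sketch of passcode "tangling" (not Apple's actual KDF): the passcode is
# mixed with a long random UID key fused into the SoC. Without the UID,
# even a 4-digit PIN can't be brute-forced off-device, because the
# derived key depends on both inputs.
DEVICE_UID = os.urandom(32)  # burned into silicon; never leaves the chip

def derive_unlock_key(passcode: str, uid: bytes) -> bytes:
    # PBKDF2 stands in for the real, slower on-device derivation.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid, 100_000)

k1 = derive_unlock_key("1234", DEVICE_UID)
k2 = derive_unlock_key("1234", os.urandom(32))  # same PIN, different chip
assert k1 != k2  # guessing PINs is useless without this device's UID
```

This is why desoldering the flash doesn't help: the guesses have to run through this device's silicon.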


Is that true of non-Secure Enclave phones like the 5C?


Even with Touch ID, it would be of no use: Touch ID requires a password after 48 hours, or after the device resets.

Which is interesting. If you use Touch ID, is your best bet to hope a court will not be able to compel you to unlock it within 48 hours of arrest? That seems quite plausible.


After five failed fingerprint attempts, your password is required to unlock the phone. That seems pretty safe to me. If you're ever ordered to unlock the phone, just touch an unregistered finger to it. Fingerprint sensors aren't foolproof. It'd be hard to prove you deliberately sabotaged the effort.

Though, one feature I'd like would be to register a distress fingerprint. Then I could touch say... my left index finger to require a password unlock.
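The fallback rules mentioned in this thread (5 failed fingerprint attempts, the 48-hour window, a reboot) can be summarized as a small predicate; this is an illustrative sketch of the behavior described above, not iOS source:

```python
# Sketch (illustrative, not iOS source) of when Touch ID is accepted
# versus when the passcode is required, per the rules in this thread.
MAX_FINGER_FAILS = 5
TOUCHID_TIMEOUT_HOURS = 48

def touchid_allowed(finger_fails, hours_since_unlock, rebooted):
    if rebooted:
        return False  # passcode always required after a reboot
    if finger_fails >= MAX_FINGER_FAILS:
        return False  # too many failed fingerprint attempts
    if hours_since_unlock >= TOUCHID_TIMEOUT_HOURS:
        return False  # 48-hour window expired
    return True

print(touchid_allowed(0, 1, False))   # -> True: normal unlock
print(touchid_allowed(5, 1, False))   # -> False: passcode required
print(touchid_allowed(0, 49, False))  # -> False: window expired
```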


If you do this on purpose after being asked to unlock your phone, you will probably be charged with destruction of evidence or something like that.

However, while a court is (afaik) able to ask you to put your finger on the fingerprint reader, you do not need to tell them which of the fingers the correct one is. So instead of purposely using a wrong finger, I'd ask the court to explicitly tell me which of my fingers I should use to unlock the phone.


I think the court would similarly consider that obstruction and contempt. If they tell you to unlock your phone and you try to play some "first you have to guess which finger's the right one!" game, the judge will slap you with either contempt of court or refusal to comply with a subpoena.


IANAL, but I don't think there's much of a difference between asking someone to reveal the correct password and asking someone to reveal the correct finger. In both cases you would be asked to incriminate yourself.

If it would be lawful for a court to ask you to "unlock the phone with the correct finger" then they might as well also ask you to "unlock this harddisk with the correct keyboard keys pushed in the correct order (as a password)".


I don't think there's much of a difference between asking someone to reveal the correct password and asking someone to reveal the correct finger.

There's a huge difference. Authorities can force you to give up your fingerprint, but not your password[1].

1. http://jolt.law.harvard.edu/digest/telecommunications/court-...


It's the "which finger" that becomes similar to a password, not the fingerprint (ianal)


All the court cares about is for the person to supply the one that unlocks the phone, they're not going to play guessing games.


Courts care about precise distinctions of law (that's their purpose!). Seems clear that fingerprints aren't protected, basically the same thing as your face in terms of privacy given a good enough camera.

But they would effectively be asking you the question "which finger did you use to lock this phone" to which you may plead the 5th.


It'll be contempt and possibly more if you don't unlock the device with your fingerprint.

It's not hard; the "precise distinction of law" is "unlock this with your finger, whichever one does it." I don't know what complicated back-and-forth you're imagining, but it's never occurred in any case that I've heard of.

they would effectively be asking you the question "which finger did you use to lock this phone" to which you may plead the 5th.

We already covered this in the link above: the 5th Amendment covers passcodes, not fingerprints.


But, as Schrodinger says, this is not about your fingerprint; it is about a bit of information you have that the government does not have: which of your finger(s) this device knows about.

"A bit of information you have that the government does not have" is a password.


I really don't understand this line of thinking. What is the scenario you imagine where the question of "which finger?" is relevant? I'll lay out the beginning:

- Police want to get into your phone for some reason

- You refuse to help them based on the 5th Amendment or admiralty law or whatever

- They go to court for an order compelling you to operate the touch lock to open the phone

- You receive the order

- ?

Please lay out the "?" part, if you don't mind. I'm highly curious.


> We already covered this in the link above: the 5th Amendment covers passcodes, not fingerprints.

No, it is you who is not understanding schrodinger's assertion. The secret knowledge of which finger unlocks it is in itself a passcode and subject to 5th Amendment protection.


OK, whatever you say.


Imagine you were to take the example further, unlocking the device required a sequence of fingerprint reads, with a precise ordering. i.e. left-ring finger, right index finger, right little finger, etc... That sequence would be a passcode, just as a precise sequence of keypresses would be. The government can insist on all your fingerprints, but not (in this argument) the correct sequence of uses of those fingerprints to unlock it. If it's only a single finger this same argument could apply.


I wonder if we can then reduce this example to a 1-finger sequence. Could a court require you to turn over all of your fingerprints, but not identify the (1 character) sequence?


As other have noted, the distinction is between fingerprints (all of them) and the correct fingerprint (one of them).

If you would like to claim that there's no difference between the two, then you (and your hypothetical court) should have no problem with a user supplying copies of all their fingerprints when asked to unlock their phone.

That's obviously not what's being asked for, hence other people's distinctions.


As I said, the court doesn't care about the, "which finger?" question. If they tell you to unlock your phone with the fingerprint that unlocks it, you replying with "which finger?" isn't going to help you.

Of course "the finger," and "which finger," are different things, but that's irrelevant.


Yes, and when they do force you to give your fingerprints, they don't ask which finger you want to print; they tell you which finger and when.


You'd better hope they don't pick the one you designated to unlock your phone.


Use your middle finger to actually unlock and use your index finger to fail. Unless the phone retains the biometric data after erasing itself, it's deniable.


This might work in the short term, but if enough folks actually did it, there would soon be a law that you may only use your thumb to unlock your phone.


Why not just turn the phone off instead.


It would be a very long and involved case, anyway.


I only recently got a touchID iPhone so I'm still having fun with it. But I did my right thumb, index, and middle finger, and left thumb and index.

If under police duress I keep trying to unlock a phone with my pinkie finger I think that would be suspicious.

If I have that much access to the device I should just force-reboot it by holding the lock and home buttons for about 2 seconds. Or maybe have done that before being arrested.

Upon a reboot the iPhone will always require its passcode.


They would fingerprint the reader, and you, to get a rough idea of which finger is a likely candidate. Even an extremely partial print should be enough to narrow it down to 4 or 5. The reality is that all right-handed people hold phones in their left and so use fingers on their right hand. Lefties do the reverse. So it is already down to five candidates ... which might have something to do with why apple picked that number in the first place.


...where on earth did you get that idea from? I'm right-handed and both hold and unlock my phone with the fingers on my right hand.


It depends on the size of the phone. I too open/unlock my phone with the same hand that holds it (pattern unlock). But my phone is tiny. As a phone gets larger, so does the likelihood that it is manipulated with two hands. Having the scanner below the screen also requires some dexterity when used single-handed, increasing the likelihood that the thumb is the print finger.


Sidenote: Fingerprints should most certainly not be used as passwords at all and only serve as usernames (imo).

Mostly because it's pretty hard to change your fingerprint which is a desirable feature for passwords ;)


If one is really paranoid, register only a toe print. I've currently got my right big toe registered to my phone as an experiment and it works as often as finger scanning


A penis works as well. It makes checking email on the bus awkward though


Going through a few thought experiments, one might think they could adapt by dusting it for prints and requiring you to use the one they found there.

On the other hand, one might habitually touch, but not register, random fingers to it, while registering some fairly unusual finger as the real one, while using the dominant hand's index finger to "unlock" it.

Finally, someone might decide that if you fail the unlock, they'll just inspect the fingerprint module in isolation and if it works, they'll assume you did that deliberately.


The countermeasure is for the authorities to push your finger of their choice by force.

Thumb and index finger should cover 98% of people.


What if I bite the skin off my fingers? God, this conversation is turning into a horror movie.


Pineapple juice might be easier.

https://en.wikipedia.org/wiki/Bromelain


>After five failed fingerprint attempts, your password is required to unlock the phone.

How is that actually enforced? Is there an if statement and a counter somewhere? Couldn't that just be disabled by a sufficiently advanced attacker?


If they could do that, they could also read the memory for the encryption key stored in the enclave. No such attack exists that we know of.
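Right, and the point isn't that there's no counter; it's where the counter lives. A toy model of the idea (hypothetical names; the real Secure Enclave is dedicated silicon enforcing this, not OS software an attacker could patch):

```python
class ToySecureEnclave:
    # Toy model: the failure counter sits next to the key material, inside
    # the same tamper-resistant boundary, so the OS can't patch it out.
    # (Hypothetical names; illustration only.)
    MAX_TRIES = 5

    def __init__(self, passcode, secret):
        self._passcode = passcode
        self._secret = secret
        self._failures = 0

    def unlock(self, guess):
        if self._failures >= self.MAX_TRIES:
            # Biometric path disabled; the full passcode is now required.
            raise RuntimeError("locked out")
        if guess == self._passcode:
            self._failures = 0
            return self._secret
        self._failures += 1
        return None

enclave = ToySecureEnclave("1234", "data-protection key")
for _ in range(5):
    enclave.unlock("0000")  # five bad attempts burn the biometric path
```

Disabling the "if statement" would mean modifying code running inside that boundary, which is the same capability you'd need to just read the key out directly.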


As Apple says, it's not feasible to try 50,000 fingerprints since you only get 5 tries. But you can try a lot of passwords.

Anyway, how does any of this prevent rubber hose cryptography?


In this case, the part where the phone's owner is dead.


Do dead guys not have functional fingerprints?

(Yes, I really asked that. Yes, I'm really curious.)


Because I was interested and no one else chimed in, the answer is "Yes, but they require some additional effort to obtain due to post-mortem stiffening."

For those curious, here's a Northwestern scan of an article in The Journal of Criminal Law from the 70s (thanks, internet!): http://scholarlycommons.law.northwestern.edu/cgi/viewcontent...


He had a 5C, so wouldn't help here.


They first have to guess which body part needs to be placed on the TouchID sensor! :)


Having requested both an erase password and an erase fingerprint, I would not mind going one further: a setting which wipes the phone if neither is entered within a set amount of time. That would protect you when the phone is stolen or confiscated.


I don't know about that but I'd be fairly certain a court would just order you to unlock the phone regardless of whether it's your finger locking it or a password.


In the USA the courts treat passwords as testimony, and in most cases you can invoke your 5th amendment right and refuse to provide passwords or encryption keys, given the state does not already know the contents of the device. This same protection does not extend to physical keys, which I think fingerprints would fall under.

http://www.uclalawreview.org/the-fifth-amendment-encryption-...


That seems to be representative of the only actual ruling on this topic that I can find

>The Fifth Amendment to the U.S. Constitution gives people the right to avoid self-incrimination. That includes divulging secret passwords, Judge Steven C. Frucci ruled. But providing fingerprints and other biometric information is considered outside the protection of the Fifth Amendment, the judge said.

[1] http://blogs.wsj.com/digits/2014/10/31/judge-rules-suspect-c...


It would seem to me that the "fingerprint = key" analogy is flawed. Physical keys can be trivially copied, and even reverse engineered from the locking mechanism. A key is a physical object that is required to disengage a lock.

A fingerprint, when used on an electronic lock of this kind, is not a key. It's attributable to one person only, not trivially duplicated, and not able to be reverse engineered from a locking mechanism. It requires an action by a single person who cannot be forcibly relieved of their possession of their fingerprint.

Additionally, a key is specified during the manufacturing or assembly of a lock, and comes with the lock, since they are "paired" when the lock is made. However, a fingerprint or password are specified by the user at will after they've assumed ownership of the device. They "testified" their identity to their phone with a fingerprint, just like they did with their password.

If compelled to imprint a finger, it is the same sort of personal interaction that a password entails: the credential holder utters/presents their personal information - not a physical object, but a repeated testimony of the same content they previously and uniquely presented to their device. It should be protected as other self-incriminating testimony under the fifth amendment.


I think the courts have this one right. Your finger is a characteristic of you, not compelled speech.

If you can compel a suspect to stand in a lineup, or produce ID, there's no reason why the court shouldn't be able to compel you to produce a finger.

In technical terms, the finger is really a "something you have" second authentication factor. If you think of it on those terms, it's more like looking at someone's Hardware token than compelling a password disclosure.


>Your finger is a characteristic of you, not compelled speech.

So is all your knowledge. The technology to extract it doesn't yet exist, but once it does, should it be deployed by the courts without a challenge from the 5th Amendment?


Clearly there's going to have to be some evolution from a legal standpoint once accessing your thoughts is a mere fMRI away.


> If compelled to imprint a finger, it is the same sort of personal interaction that a password entails:

Not quite sure it is though. If they already arrested them, they already have the fingerprint don't they? That is different than key and is different than password.


Interesting. Being Australian, I don't have a 5th Amendment to protect me. Also, my understanding is that at least in Australia you'd likely be charged with obstruction of some sort; does that not fly in the US?


I was under the impression that law enforcement can coerce you to put your finger on the touch pad to unlock your device much easier than they could coerce you to provide the pass code. I have Touch ID disabled on my device for exactly this reason. Is that not the case?


The FBI is trying to set a precedent but the phone in question is an iPhone 5c.

If the device in question was an iPhone 5s or above, then all they'd need is the dead man's hands.


This seems more reasonable. Also typing out a complex password on the iPhone is a pain in the ass when you just want to reply to your buddy's text.


Law enforcement can legally force you to unlock your phone with your fingerprint, but cannot force you to reveal your passcode.

http://pilotonline.com/news/local/crime/police-can-require-c...


That's not correct. This is still an open question. Some courts have gone one way, some the other.


So with biometrics being adopted by the mainstream population, would the law eventually be expanded to include our DNA?


They can already compel you to give a DNA sample though I believe they typically need a warrant or something official to do so.


Wouldn't Apple (or the manufacturer) know the key of the security enclave?


No. From Apple's iOS security guide[1]:

> The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing. No software or firmware can read them directly; they can see only the results of encryption or decryption operations performed by dedicated AES engines implemented in silicon using the UID or GID as a key. Additionally, the Secure Enclave’s UID and GID can only be used by the AES engine dedicated to the Secure Enclave. The UIDs are unique to each device and are not recorded by Apple or any of its suppliers. ... Integrating these keys into the silicon helps prevent them from being tampered with or bypassed, or accessed outside the AES engine. The UIDs and GIDs are also not available via JTAG or other debugging interfaces.

Even for older devices like the iPhone 5C, if the owner chose a good passphrase, I doubt it can be decrypted even with Apple's help.

1. From the section on Encryption and Data Protection. Starts on page 10: https://www.apple.com/business/docs/iOS_Security_Guide.pdf


Thanks. So the only recourse for a highly resourced adversary would be to recover the key via hardware imaging (not sure if any research has been done on this), and even then they would still have to brute-force the passphrase used to secure the phone, the effectiveness of which depends on its entropy.

I wonder how Apple can help law enforcement here.


A lot of research has gone into information recovery from silicon inspection since it's tied closely to reverse engineering ICs. It's not the most trivial of pursuits but widely done.

There are some hardware HMACs (Atmel's in particular IIRC) where the process of opening the chip package destroys the area of silicon that encodes the private keys. I don't know if Apple used the same tech but if they did, any attempt to look at the private key storage would destroy it.


This kind of security is used in SIM-cards, access-cards for pay-TV, TPMs. Kind of standard with various variations.

Some criss/cross metal mesh as the topmost layer you would have to penetrate, or photodiodes that sense the light if you put a device under a microscope, ...


Quantum cryptography would be fullproof. Any attempt to view the algorithm instead of using it would render it useless.


That's not how quantum crypto works (it's based on observation of state, not the algorithm). Further, we've had cases of quantum crypto that just wasn't good enough to stop an observer from MITMing the internal state.


The usual expression is "fool proof", rather than "fullproof".


It sounds like since the UID is fused it cannot be erased; it's probably the GID that's erased, and it sounds like the GID is known to Apple.


None of the local passwords or keys inside the phone are known to Apple. Or so they say.

The FBI is hoping they do know something secret.

In most cases it would be easier to subpoena online accounts, but of course Apple says iMessage is also unreadable for different reasons.


But fuses can be blown. Simply blowing one of the fuses will change the key, and even a single-bit change means it's useless to authorities.


The fuses are only "blown" (i.e. the UID is burned into the chip) at manufacturing time, not when the device is erased.

When a device is first set up (or wiped) a random key is created and encrypted by the Secure Enclave with a key derived from the user's passcode and the device's UID. Since only that particular device's Secure Enclave has access to the UID, the user's passcode can't be brute forced by any other computer, which enables the Secure Enclave to enforce policies like the passcode attempt delay and the incorrect-passcode attempt limit. If the device needs to be wiped, the random key is simply erased by the Secure Enclave.

(Also, if you only changed 1 bit that would mean you only had to try 2 possible keys...)
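A toy sketch of that "crypto-erase" idea (illustrative names only; the real logic is enforced inside the Secure Enclave, not by the OS):

```python
import secrets

class ToyDevice:
    # Toy sketch of crypto-erase: wiping deletes only the key, not the data.
    # (Illustrative names; not Apple's actual implementation.)
    def __init__(self):
        self.media_key = secrets.token_bytes(32)  # random per-install key
        self.flash = b""                          # bulk storage (ciphertext)

    def store(self, plaintext):
        # XOR stand-in for real encryption under media_key (not secure).
        self.flash = bytes(p ^ k for p, k in zip(plaintext, self.media_key))

    def wipe(self):
        # Deleting 32 bytes of key makes all of flash permanently
        # unreadable; no need to overwrite gigabytes of data.
        self.media_key = None

d = ToyDevice()
d.store(b"contacts, messages, photos")
d.wipe()
```

This is why the wipe is effectively instant: the ciphertext is still sitting in flash afterward, but with the key gone it's indistinguishable from random noise.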


Thanks, that's what I was missing.


Nope. The keys are personalized to the device at manufacture and, according to Apple, not retained.


Interesting. Let's take the current tech you described and apply it to this situation; what happens?

You did some unspeakable act and you are dead.

* You have securely enabled touchID

Even if you have 10 tries, barring using random appendages your phone can be unlocked right?

Edit: on my phone so far it is password OR touchID. Making this easily defeatable with the physical device and my:

1. Hand if I am dead or alive.

2. A copy of my fingerprints on file with the gov't (true for me)

3. A non-gov adversary with literally anything i've touched.

Maybe I can enable both, but currently it is either or.


After 5 tries with your finger, an iPhone will require passcode for any further unlocks.

Also, after 48 hours an iPhone requires a passcode to be unlocked, even if TouchID is enabled.

So you have limited time, and have to hope you guess the right finger to use and it reads the finger within that 5 try period.


This is an interesting vector of attack. Could software modify those limits?

Instead of after 5 attempts, make it after 100; instead of after 48 hours, make it after 40,000.

And then, could a fake finger with the correct fingerprints be made to fool the reader? Let's say made of silicone, or gummy bears?


All I'm really saying is that the complexity of your passcode matters a lot in this scenario, so anything you can do to increase it will tend to pay off.


I am not saying you're wrong. I was hoping there was a way to provide more security that I overlooked, however that doesn't seem to be the case.

Most people likely use their dominant hand, probably thumb maybe pointer. In this threat model, someone lifts your phone and opens it with a fingerprint. Assuming they can completely replicate a print and get one (fairly non trivial assumption) they could probably get through with 5 tries.

Genuine question: How secure would you rate:

* TouchID

* 6 char password

* numeric 4 digit pin


Also, don't use your actual fingerprint, since it's becoming harder to avoid giving it to the government, even if you're not a criminal.


What is the alternative approach?


I've legitimately tried using my bellend and it doesn't work, unfortunately.


Some people are using their nose-print instead of a fingerprint. I'm assuming you can also use a knuckle or some other part of your anatomy.


Your toe?

I'm only half joking. I wonder if a knuckle or something would work.


If you read the iOS security guide, you'll know Apple built the phone in such a way as to wash its hands of these types of requests. They'll say it's impossible, and they won't be lying. Nothing is ever impossible, but it will be very impractical. The hardware and software are built to ensure this.

I think the real game here is to compel Apple to build a backdoor into future models. I expect to see a lot of rhetoric around this, until something forces Apple's hand.


That is possibly true for current models of the iPhone. It is significantly less true for the 5c in question, which has less robust security features. See other answers referring to the Secure Enclave.


The article at Errata Security [1] is better. There is an HN submission for it [2], but it hasn't drawn any attention.

In particular, it addresses technical issues not covered in the Techdirt article that are relevant to many of the existing comments here on HN.

[1] http://blog.erratasec.com/2016/02/some-notes-on-apple-decryp...

[2] https://news.ycombinator.com/item?id=11115251


Unfortunately, there's no good outcome here.

If Apple can unencrypt the phone, it will prove to everyone that backdoors exist. If they can't, and they tell the FBI as much, it will just give politicians more reasons sound off about how we have to have backdoors, because this shooter was a "terrorist" after all, and we just have to suck it up and do whatever is necessary to go after people like that.

Either way, we end up with backdoors.


Did you read the article? The court didn't order Apple to decrypt the phone. Instead, Apple has to disable the phone's feature that automatically wipes the hard drive after 10 failed password attempts. This is so that the FBI can brute-force its way into the data.


This could be example of parallel construction[1]. They may already have unencrypted it via a backdoor, but they wouldn't be able to use anything they find as evidence in court because they'd have to reveal the backdoor. If they can plausibly show they brute-forced it instead, they keep the backdoor hidden.

[1] https://en.wikipedia.org/wiki/Parallel_construction


"it could be parallel construction" is true in literally every instance since it's impossible to prove the negative case.

This is becoming my cue to stop reading the comments; when parallel construction is the most obvious argument, you've read the interesting ideas up thread.


However, the US Government is known to have gathered hidden evidence in drug cases, then used parallel construction to hide the violation. So the government's presentation of evidence should now always be considered in question. As it has shown dishonesty in the gathering and presentation of evidence, and there's nothing to say it has changed its unethical ways, how can it be trusted to present legally gathered evidence?

https://www.washingtonpost.com/news/the-switch/wp/2013/08/05...


Ok, but IMO this is a much lesser evil than (1) compulsory lawful-override of encryption/back door or (2) legislation to exclude devices which don't provide back doors.

Ultimately states will develop the capacity for brute forcing and you have relatively little recourse. While I hate the idea of a three letter agency doing this at any scale large or small, the potential for corrupt local LEOs to abuse their power with an encryption backdoor is very great.


Yes, and that's effectively the same thing. Bypassing controls counts as a "backdoor".


Brute forcing a password could take more time, with today's technology, than we have left on Earth, depending on its complexity and whether there are known vulnerabilities. I'm not sure I would consider this order, effectively, an order to "unencrypt".


Passcodes are only 4 or 6 digits.


After a few attempts the OS would rate-limit guesses to prevent exactly that. On some iOS versions it is possible to override this mechanism by cutting power at the right moment[1] but this exploit has been patched for a while and I doubt this device is vulnerable.

[1] http://observer.com/2015/03/watch-this-little-machine-brute-...


Hence this part of the order

(3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.


The encryption key is calculated from your passcode + the AES key etched into the chip inside the phone. There's no way to read that key directly, unless you do some crazy chip imaging where you read the actual electron state of the memory - could be done, but the chance of corrupting that memory is very high, and if they read even one bit wrong then the entire key is useless.

So there are two ways to go about this. They can either brute-force AES, which quite simply can't be done (and I don't mean can't be done with current computers; the number of possible keys is larger than the number of atoms in the universe, or something stupid like that), unless the NSA has a way to crack AES faster (but if they do, they won't make that knowledge public). Or they can try every passcode combination, going through Apple's full key-derivation algorithm, which takes about ~5 seconds to generate a key. So it's doable, but it would take some time.
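Back-of-the-envelope on that second route, taking the ~5 s/derivation figure above at face value (the real per-attempt cost on the hardware may be much smaller, so treat these as rough upper bounds):

```python
# Worst-case brute force of the passcode via the full derivation path,
# assuming the parent's ~5 s per attempt figure.
seconds_per_try = 5

for digits in (4, 6):
    worst_case_s = 10 ** digits * seconds_per_try
    print(f"{digits}-digit PIN: worst case {worst_case_s / 86400:.1f} days")
# A 4-digit PIN falls in under a day even at 5 s/try; 6 digits takes
# roughly two months. An alphanumeric passphrase changes the picture entirely.
```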


The 5C doesn't have a dedicated HSM.


Maybe? Do we even know what type of passcode they used? If they turned off simple passcode then they could have entered anything they wanted at varying lengths.


For me, the most interesting question I would have is absent from the article.

The court is basically ordering Apple to produce new firmware that doesn't block brute forcing. If Apple were to comply, who keeps this firmware after the fact?

There's no mention of this at all, but if the firmware image stays with the FBI then the implications are much more profound with regard to privacy.


They specify that the firmware will be locked to the device's unique ID, so it won't be usable on any other one.

But once it's established that it can be required from Apple, Apple has to comply, and Apple effectively does comply, other judges in other cases will be able to request other FW, hard-coded against other IDs, as needed.

The medium-term solution to this is for Apple's security team to protect against this threat model, and implement encryption in such a way that _they_ can't bypass it under compulsion. I'm not an Apple customer and I don't follow their products closely, but I understand that the iPhone 6 is already harder to bypass than the iPhone 5.


I would bet you all the money in the world that the very second such a firmware image was provided to the FBI it would find its way to the CIA/NSA. All with the assumption of course that the FBI has no rogue agents who work for foreign governments or criminal organisations.

Apple is right to be terrified at the thought of being asked to make such a firmware image.


Exactly this! Once it's in existence somewhere it's immediately part of the NSA/CIA/FBI basic iPhone toolkit.


>The SIF will be coded by Apple with a unique identifier of the phone so that the SIF would only load and execute on the SUBJECT DEVICE.

If I understand the cited order correctly the firmware is ordered to be constructed in a way that it runs only on the target phone.


I do wonder, though: had Apple not predicted this exact scenario ahead of time (likely), how would they control this?

It's unlikely they can rely on hardware protections to provide this device locking, so is it the case that they would build the unique identifier into the image?

Optimistically some obfuscation could help but are the FBI/CIA/NSA really more than a few hops away from opening the binary image in a hex editor and changing it by hand?

If Apple firmware images for the iPhone are signed per-device then fine, but is that the case?

I don't know this but it seems unlikely to me that a custom device-signed build of iOS happens for every iDevice, and if that's not the case, I can't see how Apple can reliably restrict this with confidence.
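One plausible answer (pure speculation on my part) is that Apple doesn't need a per-device build, only a per-device signature: put the device ID inside the signed payload, and any hex-edit of the ID invalidates the signature the boot ROM verifies. Toy sketch, using an HMAC as a stand-in for Apple's (actually asymmetric) code-signing key:

```python
import hmac, hashlib

# Hypothetical stand-in: the real scheme uses an asymmetric key pair,
# with only the public half in the boot ROM.
SIGNING_KEY = b"stand-in for Apple's private key"

def sign_firmware(image, device_id):
    # The device ID is *inside* the signed payload, not bolted on after.
    payload = device_id + image
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return payload, sig

def boot_rom_accepts(payload, sig, my_device_id):
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and payload.startswith(my_device_id)

payload, sig = sign_firmware(b"unlock-tool-image", b"DEVICE-AAAA")
assert boot_rom_accepts(payload, sig, b"DEVICE-AAAA")

# Hex-editing the ID to retarget another phone breaks the signature check.
tampered = payload.replace(b"DEVICE-AAAA", b"DEVICE-BBBB")
assert not boot_rom_accepts(tampered, sig, b"DEVICE-BBBB")
```

So the restriction only holds as long as the signing key stays with Apple; possession of the key, not of the image, is what matters.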


I agree with you that this will be difficult or maybe impossible to implement. However, the court has foreseen the upthread argument, as the order shows.

As many here I believe that once this backdoor exists it will be somehow exploited (at the very least by further orders).


> Apple ... will probably have little time to debug or test it overall, meaning that this feature it is being ordered to build will almost certainly put more users at risk.

Eh? They are not being asked to install it for the public at large, just on one phone.

Of all reasons to object, this reason makes little sense.


That's true. In fact, if it's possible for Apple to accomplish what DOJ is demanding of it, the best outcome would be for DOJ to succeed, and do so publicly:

* There is an authentic need to get at the data on that phone

* There's no likelihood at all that other users will be impacted by the backdoor

* We'll all be on the same page about how secure these phones are versus the USG.

It's possible that they can prevail against the 5C but not against the 5S or later, since the security architecture of the 5S is very different from that of the 5C.


> There is an authentic need to get at the data on that phone

What is the authentic need? The shooters are dead. Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?


If you shoot a bunch of people while declaring allegiance to an organized group known for shooting bunches of people then I think that pretty clearly demonstrates that reading your communications has a pretty high likelihood of turning up something useful in preventing future incidents. If this doesn't clear your hurdle for reasonable search then what would?

To be clear, I don't think the order to Apple is necessarily altogether a good idea or is even going to produce the desired results, but your complaint seems to be with the fact that this data is being pursued at all.

Edit to reply:

> Evidence of a conspiracy would help. You said they declared allegiance to an organized group. When they did that, did they say or hint that they had been in contact with that group, other than, say, watching public YouTube videos?

The woman in the couple declared it right before the shooting[0]. Do you want a notarized letter from the deceased?

> Would you agree that "high likelihood" is too low a bar for justifying searching the phones of people who live in high-crime neighborhoods?

I'm pretty sure neither "high likelihood" nor "authentic need" were being used as a term of art here, but I would bet that any judge would view the commission of murder declaredly for an organized militant group to be probable cause that there is information pertaining to more criminal activity by that group on these two's phones and in their communications.

Do you really view this as a government overreach or are you just trolling? Under what circumstances, if any, would you see as justified a search of someone's email? phone? house? So far you've drawn an equivalence between living in a bad neighborhood and committing murder-suicide.

[0] http://www.nytimes.com/2015/12/05/us/tashfeen-malik-islamic-...


In response to your (first?) edit:

> The woman in the couple declared it right before the shooting[0].

I'm not questioning that she declared allegiance. I'm asking if she was in private contact with anyone. If you were responding to that, can you show me where that is in the NYT article you linked? I don't see it.

> Do you want a notarized letter from the deceased?

Let's try to keep this civil, please.

> Do you really view this as a government overreach or are you just trolling?

I actually believe the things I am saying. I am not saying them to anger or upset you or anyone else. Please do not let the fact that we disagree about the scope of the 4th Amendment cause you emotional suffering.

I am not ready to declare it overreach, because I do not know all of the evidence yet. This is why I have been saying things like "Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?" and "did they say or hint that they had been in contact with that group" and "I have not followed the news on this shooting, so I would not be shocked if the answer were 'yes, there is some evidence of a conspiracy'."

If there is no such evidence, I do think it is overreach, but my opinions on policy are not fixed in stone, and I sometimes change my mind about them when presented with new arguments, ideas, or philosophies.

> Under what circumstances, if any, would you see as justified a search of someone's email? phone? house?

I doubt anyone has a complete enumeration of all circumstances under which they feel a search is justified. I would feel torn if there was lousy circumstantial evidence that the phone would solve or prevent crimes, I would be in support of a warrant if there was strong evidence, and I am opposed to a warrant with no evidence. One thing I would call strong evidence is a shooter having announced that he or she was part of a terrorist cell in the US.

I will no longer read or respond to your edits that are "edited to reply". If you want to discuss with me further, please use the "reply" button. I will not be editing any of my posts to "edit to reply".


You keep switching between legal and normative requirements. We disagree on the 4th amendment in the same way that scientists and climate change deniers disagree about global warming. You have a fringe understanding of it with no support from the relevant literature and your arguments about it are poorly structured, deny evidence, and rely on intentionally misunderstanding context and terms of art.

The legality of searching for evidence is pretty open and shut because you need probable cause. The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.

Shooting a bunch of people and saying you're with ISIS is plenty of probable cause for a search. I don't see how you're waiting for "all the evidence" here since all the relevant facts are in and they're sufficient. Whether or not she was conversing privately with ISIS counterparts would be the resulting information of the search.

> One thing I would call strong evidence is a shooter having announced that he or she was part of a terrorist cell in the US.

The only way to read this in light of our previous discussion is that saying "I'm in ISIS!" and then shooting up a bunch of civilians is insufficient to prompt a post-mortem search of the attackers' affairs, instead they need to say "I'm in ISIS and there are a bunch of us!" and then shoot a bunch of civilians.

Bravo sir, I have been well and properly trolled.


> You keep switching between legal and normative requirements.

If I did so, it was a mistake. My reference to the 4th Amendment, for instance, should have said "how the 4th Amendment ought to protect us". I did not mean to imply that I am trying to predict what warrants the justice system will or will not grant.

> You have a fringe understanding of it

I think I mentioned the 4th amendment just the once. I have been trying to stick to normative arguments.

> The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.

I think this is a point where we truly disagree. I think a system can function in which some evidence that a search will yield results is required before the search is conducted. I do not think that the evidence must be airtight. Note that I am speaking about what I think is possible and just and right, not what the law says now or the justice system does now.

> The only way to read this in light of our previous discussion is that saying "I'm in ISIS!" and then shooting up a bunch of civilians is insufficient to prompt a post-mortem search of the attackers' affairs

Did the shooter say she was "in ISIS", or that she pledged allegiance to the leader? There might be a difference in this case. I have read that there is religious significance to a pledge of allegiance in ISIS's theology that might make a pledge indicative of ideological alignment and a membership "in ISIS" indicative of being in actual conversations with ISIS.

> Bravo sir, I have been well and properly trolled.

Please, let's try to be civil.


> Did the shooter say she was "in ISIS", or that she pledged allegiance to the leader?

Either one would seem to constitute probable cause for an association. Of course we don't know if she was actually in ISIS, or just agreed with their beliefs. But how would we know without conducting further investigation? You seem to be demanding a somewhat unreasonably large burden of proof, when all that is needed in this case is probable cause. Frankly, even if she hadn't verbally declared allegiance to ISIS, I don't think it's a stretch to say there's probable cause for connection to other terrorist groups. The fact that she did say that makes it a slam dunk.

> I think a system can function in which some evidence that a search will yield results is required before the search is conducted. I do not think that the evidence must be airtight.

We do have such a system. The evidence you're describing is called probable cause, and that's the whole point. I'm not sure of any reasonable definition of probable cause that this situation wouldn't satisfy. Moreover, your objections seem to be in the form of vague misgivings rather than concrete arguments. You haven't precisely described what would constitute sufficient evidence for an investigation, but instead seem to just be saying "there's not enough right now." I think this is what's behind GPs frustrations responding to your posts.


The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.

That kind of reasoning allows wholesale collection of communications data by the NSA and other agencies. Since that practice has been widely criticized, there must be something missing from your argument.


> evidence that would be the result of a search

No one is advocating warrantless searches or not requiring reasons for warrants.

If I want to get a warrant to see who you're calling, it is inherently a broken system that requires the list of people that you called as cause to obtain that warrant.

Any kind of reasoning allows wholesale collection of communications if you misread it properly.


Ah, so you do agree with the premise that there must be compelling evidence to warrant a search?

In that case, all you're (plural) haggling over is the "price point" of how much evidence is required to support how invasive a search. I'm unsure how that results in the kind of heated debate that seems to happen here.

Oh well...


I got trolled :(


not at all. I'm just pointing out that you're both trying to make your points in such a convoluted way that neither is gaining any ground.


...pretty clearly demonstrates that reading your communications has a pretty high likelihood of turning up something useful in preventing future incidents.

It sounds like common sense, I guess, but has that ever worked, actually?

Similar "prevention" rationale is offered for governments to spy on virtually all telecom all the time, now. But this shooting happened anyway.


> Similar "prevention" rationale is offered for governments to spy on virtually all telecom all the time, now. But this shooting happened anyway.

1) Anyone with a plan promised to stop all terrorist attacks is lying to you, stupid, or both. You can't have a free society and a 0% chance of political violence.

2) Yes, searching the possessions and communications of dead terrorists is, unsurprisingly, substantially more likely to produce useful criminal leads than reading your metadata. A warrant to read this person's stuff isn't unreasonable in the slightest; an order forcing Apple to do shit might be, but that's a procedural thing unrelated to the core issue of "is there a good reason to read this person's stuff".


> reading your communications has a pretty high likelihood of turning up something useful in preventing future incidents

Would you agree that "high likelihood" is too low a bar for justifying searching the phones of people who live in high-crime neighborhoods?

> If this doesn't clear your hurdle for reasonable search then what would?

Evidence of a conspiracy would help. You said they declared allegiance to an organized group. When they did that, did they say or hint that they had been in contact with that group, other than, say, watching public YouTube videos?


I'm having a hard time believing that you're commenting in good faith here. Yes, the police will easily get warrants to search whatever property of a mass murderer's they feel would be productive to search. No, that does not mean they can randomly get warrants to search random houses in high-crime neighborhoods. Privacy rights for mass murderers: not a high priority of US constitutional law.

Is there some other issue we're missing here, or does that pretty much wrap it up?


> I'm having a hard time believing that you're commenting in good faith here.

You can feel free to disengage from this conversation if you find it troubling. If you are incredulous that someone might be concerned with the privacy of these people (and their friends and family) in the particular way I am, then I'm not sure what I can do to make you believe.

I am a person. These are my true thoughts. I actually and honestly believe them.

> Yes, the police will easily get warrants to search whatever property of a mass murderer's they feel would be productive to search.

As I said earlier to you in another branch of this discussion, I am not disagreeing that the police CAN get this warrant. They appear to HAVE gotten this warrant, so I guess that's a historical fact. I'm trying to have a discussion about what we think is just and fair and right, as well as trying to find out if there is any evidence of a conspiracy. I have not followed the news on this shooting, so I would not be shocked if the answer were "yes, there is some evidence of a conspiracy". This is why I asked the question, "Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?"

> No, that does not mean they can randomly get warrants to search random houses in high-crime neighborhoods.

I did not ask if police CAN randomly get warrants to search random houses in high-crime neighborhoods, I asked the commenter if he or she thought they ought to be able to do so. If the answer is "yes, they ought to", then we might have a different discussion than if the answer is "no, they should not". I have met people who would answer "yes" and I have met people who would answer "no". Neither answer will cause me to accuse the commenter of commenting in bad faith.

> Privacy rights for mass murderers: not a high priority of US constitutional law.

I'm not talking about what is and is not a high priority for the justice system. I'm trying to engage in a dialogue about what we think the requirements for a warrant SHOULD OR SHOULD NOT BE and whether or not there is any evidence that the phone will provide information that will help solve or prevent crimes.

> Is there some other issue we're missing here, or does that pretty much wrap it up?

I'm sorry if this conversation is upsetting or troubling to you.


..... I currently have a lot of time on my hands, so sure, I'll bite....

> I did not ask if police CAN randomly get warrants to search random houses in high-crime neighborhoods, I asked the commenter if he or she thought they ought to be able to do so

Given that the answer to CAN they is a solid no, and that random searches of homes is in no way related to searching devices used in a conspiracy to commit murder, what is the point of this? In one instance, someone has clearly committed a conspiratorial crime, in another instance, people are living in houses with low property value.

> if there is any evidence of a conspiracy

Conspiracy - a secret plan by a group to do something unlawful or harmful.

Point 1: A conspiracy took place. A plan to kill people was kept secret between multiple people until it was executed.

Point 2: Immediately prior to commission of the murders, one of the participants declared that they were part of a larger group, known for organized commission of murder and terrorist attacks.

Given these points, what information is missing that would motivate you to think that a search of the attackers' phones should be conducted? Are you really asserting that there is no evidence of conspiracy that extends beyond the deceased, despite the fact that they said they were doing this under the flag of a larger organization?

I don't see any room for a normative argument defending against a search. I don't imagine that you're arguing that the post-mortem privacy interests of the terrorists prohibit investigation. Are you suggesting that the risk from not knowing the contents of the phone is so low that it does not outweigh the privacy interests of anyone incidentally mentioned on the device?

Sorry for all the questions, what I'm trying to get at is that from a normative perspective, societies generally allow investigators to search the shit of known participants of violent criminal conspiracies in order to detect previously unknown elements or plans of those conspiracies. What is the moral base from which you are arguing that this nearly universally accepted standard is somehow deficient?


There is something I forgot to say in my other reply, and I didn't want to edit it and provide a moving target.

> Sorry for all the questions

There is no need to be sorry. They are useful for me to understand your POV and to have a conversation. I am happy to answer them as best I can.


> Given that the answer to CAN they is a solid no, and that random searches of homes is in no way related to searching devices used in a conspiracy to commit murder, what is the point of this? In one instance, someone has clearly committed a conspiratorial crime, in another instance, people are living in houses with low property value.

You said that the state should be able to search the phone because it was likely to have evidence of crimes. I am arguing that higher than normal likelihood, as you might expect in a high-crime neighborhood, is not sufficient to justify a search. Instead, I am arguing that evidence (indicative of finding things that will help solve or prevent crimes), not likelihood of finding such things, should be the standard for a warrant.

> A conspiracy took place.

I should have said "a conspiracy beyond the two dead perpetrators".

> one of the participants declared that they were part of a larger group

Did she? I thought she said she "pledged allegiance" to a larger group, like one might do to a Pope you have never met or spoken with.

> what information is missing that would motivate you to think that a search of the attackers' phones should be conducted?

I discussed this elsewhere in the thread, but you may not have seen that post yet. Here is a link: https://news.ycombinator.com/item?id=11115698

> Are you really asserting that there is no evidence of conspiracy that extends beyond the deceased

No, I am /asking/ if there is any such evidence.

> I don't imagine that you're arguing that the post-mortem privacy interests of the terrorists prohibit investigation.

No, I don't think it prohibits investigation, but I do think that state searches of their personal effects ought to require evidence that searching their personal effects would solve old crimes or prevent new ones.

> Are you suggesting that the risk from not knowing the contents of the phone are so low as to not rise to outweigh the privacy interests of anyone incidentally mentioned on the device?

I am suggesting that those privacy interests can be balanced against evidence that searching the phone would solve old crimes or prevent new ones. I do not believe that risk is the only question. That is what I was trying to get at with my distinction between "high likelihood of" and "evidence of", above.

> What is the moral base from which you are arguing that this nearly universally accepted standard is somehow deficient?

The reason I think that evidence of solving (or helping to solve) old crimes or preventing new ones should be required before searching the possessions of any person, living or dead, murderer or pacifist, is a traditional one about privacy, but it seems like the balance I use is different than your balance.

That "societies generally allow" the state to do something, or that societies "nearly universally" do so, is not a big factor in my feelings on whether or not it is fair and just.


Sure, of course that's a legitimate concern: they may have talked to other people planning attacks.


> they may have talked to other people planning attacks.

I'm not particularly concerned with crimes that we believe "may have" occurred. Of course, they may have. Anything may have happened -- I'm asking for more than just correlation that criminals know criminals. Do we have any evidence, or even any hints or clues, that the phone contains evidence that would help solve or prevent any crimes?


I'm sorry, but I don't understand what you're getting at here. The legitimate concern is prevention of future attacks. They may have collaborated with people who were never apprehended on the attack that actually happened.


> They may have collaborated with people who were never apprehended on the attack that actually happened.

Right, but we don't go searching everyone's papers just in case they are conspirators.

If Alice punches Bob in the face, then is hit by a bus and dies, we don't go searching through all of Alice's stuff just in case there might have been someone else involved with the Bob-punching incident, right?

Is there any evidence, any at all, that the shooters collaborated with anyone?

> The legitimate concern is prevention of future attacks.

I'm not questioning the "legitimate concern", but I don't think "legitimate concern" should be sufficient to get a warrant.


> If Alice punches Bob in the face, then is hit by a bus and dies, we don't go searching through all of Alice's stuff just in case there might have been someone else involved with the Bob-punching incident, right?

Wrong. If Alice announces that she's looking for people to attack, then of course we go looking to see why she's doing that and whether others are involved.

This wasn't some random emotional attack like a bar fight. Stop setting up nonsensical strawmen for your arguments, argue the hard cases not the easy ones.

If your argument/objection can survive the hard cases then you have something, arguing the easy cases is meaningless.


> Wrong. If Alice announces that she's looking for people to attack then of course we go looking so see why she's doing that and if other are others involved.

What, in your opinion, are the limits of that investigation?

So

1. Alice announces she's looking to attack someone.

2. She attacks Bob.

3. She dies.

I gather that, in your opinion, we can search her possessions. Let's say she has a living mother and a best friend who died a week before the attack. Can we search her mother's things? How about her best friend's?


Humans are not computers with fixed rules.

You look at the scenario at hand and see if it makes sense to search this person or that.

I said it to someone else today, I'll tell you too: The law is not like a mathematical proof with exact rules. People with STEM backgrounds tend to think of the law that way because it seems to be all about rules. But it's not. It's about human judgment and gut feelings.


Sorry, meant to say "ought", not "is". I'm mostly interested in what you think is appropriate, not what the justice system would do, today, with the laws and courts as they are.


I think it's appropriate to search anyone who might be relevant.

There is no blanket rule, each situation is different.

You need to have some reason to go searching someone, it doesn't have to be a great reason, but you do need a reason.


What should be sufficient to get a warrant? It seems to me that making warrants harder to get (or impossible), would actually be bad for society at large. In the case of warrant you have a targeted tool, one aimed at a specific person or group, and a demonstrated need approved by a third party that is not law enforcement.

The alternative is dragnetting. Or not having investigative tools. I don't know that I like either option as a citizen.


Well, it is.


> Well, it is.

I didn't say it isn't; I said it shouldn't be.


>evidence

14 dead people and a stack of unused guns and bombs.


two dead attackers, stack confiscated. case closed.


Exactly how do you know that the case is now "closed"? It sounds like you're just making an assumption. The state is too, but the cost of their assumption --- that there is valuable data to gather from the phone --- is very low, and the cost of your assumption, if you're wrong, is immense.


Please do keep up with the context :)

I didn't feel the need to elaborate because my parent thought one word was sufficient, but let me state the full context to my thought process:

> > > Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?

> > 14 dead people and a stack of unused guns and bombs.

> two dead attackers, stack confiscated. case closed.

If the "evidence" for future crimes is merely some dead criminals and the ammunition they didn't get to use, there is no probable cause for further investigation. Hence, the case should be closed.


Nice. Next time I'll refrain from explaining myself.


Apple will not do that. It's a slippery slope argument - yes, these shooters were very bad, but if Apple sets a precedent of breaking the backdoor for this crime, why can't other courts implore them to use the same technological means for lesser crimes?


> * There's no likelihood at all that other users will be impacted by the backdoor

That could not be further from the truth. They are trying to set a precedent that endangers the future of consumer end-to-end encryption.

They are trying to repurpose an 18th-century law (the All Writs Act) to force Apple to help them break iPhone encryption.

If this case creates precedent, what is to stop them from, say, forcing Signal and Google to work together to deliver a backdoored app update to a specific user?

You are not a lawyer. Nate Cardozo, staff attorney at the EFF, had this to say: https://twitter.com/ncardozo/status/699964225737748481


You and Cardozo are hyperventilating.

The DOJ can make this demand because Apple phones of this vintage are already breakable, and the DOJ is merely asking Apple to exercise a capability it already has.

Apple knows this, better than most, and has for many years. They have done security design work with governments as an explicit adversary. The 5C was insecure. The 5S is not: it has an entire additional processor, running an incredibly secure OS, whose entire job is to make sure that the phone keeps promises like these even if the DOJ orders Apple to sign bogus updates.

Apple is so committed to this that they've extended the "ten tries and you're out" promise all the way through their server infrastructure, so that if you escrow data into iCloud it will be nuked if someone tries to brute force it. Not only that, but after rigging their HSM cluster to operate that way, they burned the update keys, so that an attempt to change that rule will break all of iCloud.


You haven't responded to my comment at all.

You said:

> There's no likelihood at all that other users will be impacted by the backdoor

The backdoor, a "master key" as Tim Cook put it, opens all pre-5S iPhones and affects millions of other users.

And that's not even the main issue. The issue is the precedent it sets, which endangers a lot more than just a few million users or a few specific models of iPhone.


It's much more than that. If they do it "just this once", that means they can allow others to get at encrypted data, despite what they've been claiming all along. It's a very binary situation.


So if I get this right, they want to (1) disable the wipe-after-x-retries feature (therefore enabling unlimited retries) and (2) enable submitting passcode attempts via a connector, Wi-Fi, or Bluetooth (therefore enabling a brute-force approach). What good is an encrypted filesystem in that scenario?


Plenty of good if you have a reasonable passphrase and the vendor hasn't been compelled to assist.

"Can only try 10 times" isn't anything guaranteed by encryption. My laptop has an encrypted partition, but an attacker can brute-force it at will. Even if I had software to say "only let it happen 10 times, then erase the partition" the whole drive could just be cloned. That's why I have a 20+ character passphrase.


Apple goes way out of their way to avoid scenarios where they can be compelled to subvert iOS security. For instance, see pg44+ of the iOS security white paper:

https://www.apple.com/business/docs/iOS_Security_Guide.pdf

... the HSMs that manage the escrow scheme for credentials stored in iCloud are themselves rigged to blow up on 10 failed tries, and, not only that, but the code that implements that process is burned into the HSMs and the keys Apple would need to change that logic have been destroyed.


Thank you for the link. I was unaware how seriously Apple takes their security. Self-destructing HSMs to avoid brute-forcing is extremely impressive. THIS is a model of how to implement proper key escrow.


Informative. Thanks!


I understand how encryption works :)

I was talking specifically about this scenario, where the phone PIN may be 4 or 6 digits and Apple is helping them.


They want to brute-force the PIN code for the device... so a longer passphrase and an encrypted filesystem provide plenty of protection.


The key parts of the Federal order:

"Apple's reasonable technical assistance shall accomplish the following three important functions:

(1) it will bypass or disable the auto-erase function whether or not it has been enabled;

(2) it will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available on the SUBJECT DEVICE and

(3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.

Apple's reasonable technical assistance may include, but is not limited to: providing the FBI with a signed iPhone Software file, recovery bundle, or other Software Image File ("SIF") that can be loaded onto the SUBJECT DEVICE.

The SIF will load and run from Random Access Memory and will not modify the iOS on the actual phone, the user data partition or system partition on the device's flash memory. The SIF will be coded by Apple with a unique identifier of the phone so that the SIF would only load and execute on the SUBJECT DEVICE.

The SIF will be loaded via Device Firmware Upgrade ("DFU") mode, recovery mode, or other applicable mode available to the FBI. Once active on the SUBJECT DEVICE, the SIF will accomplish the three functions specified in paragraph 2. The SIF will be loaded on the SUBJECT DEVICE at either a government facility, or alternatively, at an Apple facility; if the latter, Apple shall provide the government with remote access to the SUBJECT DEVICE through a computer allowing the government to conduct passcode recovery analysis.

If Apple determines that it can achieve the three functions stated above in paragraph 2, as well as the functionality set forth in paragraph 3, using an alternate technological means from that recommended by the government, and the government concurs, Apple may comply with this Order in that way."


Robert Graham (Errata Security)'s notes on this: http://blog.erratasec.com/2016/02/some-notes-on-apple-decryp...


The implications are quite important for future technologies. Neural implants, for example, are currently used for prosthetics and paralysis. A forced backdoor would kill all research to develop a co-processor directly linked to the brain. Who would want a government backdoor directly into the brain?


If Apple is capable of compromising security on its devices (by using its root key to sign a custom version of iOS, or through some other method), then I see no way that they will avoid eventually being subject to a court order in some jurisdiction that compels this action. If that's true, then device security is already compromised and Apple knows this. Let's say the facts of the case were slightly different, that the FBI "knows" a terrorist attack is about to occur, and Jack Bauer-style demands that Apple assist in compromising a specific device that has the top seekrit plans on it. In that instance, do you think Apple would comply with a warrantless request for cooperation? Hm...

Reading Tim Cook's announcement in light of this thought experiment, methinks he doth protest too much! Apple does not have any objection to compromising user security at the root level, and in fact has already done so by creating a device that has some limited vulnerability to malicious action by the manufacturer signed with its root key. (By the way, no doubt every other manufacturer has done worse, so this is not to deprecate Apple vs. any other big company.)

I would speculate that Tim Cook's goals with this announcement are largely PR-based, and that the goal of Apple's legal strategy is not to avoid cooperation but rather to retain the ability to decide whether to cooperate, and/or to impose a higher perceived cost on the government for such requests. No doubt Apple is correct to say that once a precedent is established, then it will be widely used by law enforcement even in routine cases.

At the end of the day, I am not optimistic that we can avoid a world in which large device manufacturers are compelled (legally and practically) to build security flaws into their devices. Perhaps not the flaw of a back-doored crypto implementation, but other flaws such as those that have been identified in current iOS devices that allow the government (with commitment of sufficient resources) to chip away at some of the more superficial protections.


Why the worry about auto-wiping? Is it not possible to make a copy of the encrypted data and then play around with it as much as you want?


The data the DOJ wants is encrypted with AES, so all the phone has to wipe is the key; a copy of the encrypted data is useless.
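A toy illustration of why that works, using a SHAKE-256 keystream as a stand-in for AES (which isn't in the Python standard library) — a sketch of the idea, not real crypto:

```python
# Toy illustration: wipe the 32-byte key and any copy of the ciphertext
# becomes worthless. A SHAKE-256 keystream stands in for AES here; do
# not use this construction for real encryption.
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

key = os.urandom(32)                        # the "effaceable" key
ciphertext = keystream_xor(key, b"top secret plans")

backup = bytes(ciphertext)                  # investigators image the flash...
key = None                                  # ...but the phone erases the key

# The imaged data survives intact, yet without the key it is just noise:
# "wiping the phone" only ever meant erasing those 32 bytes.
```

Erasing a 32-byte key is instant and unrecoverable; erasing 16 GB of flash is neither, which is why the design hangs everything off the key.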


This seems like a key thing to convey to the courts: handing over a copy of the ciphertext is not the same as handing over the key that decrypts it. Who has to explain to the courts the complexity of breaking one versus the other?


Can someone answer this? Raw-read the memory to an external device and then brute-force that shit using supercomputers until it cries.


If you can break AES... then the NSA would love to have a word with you :P

The FBI is going after the lowest-hanging fruit: the user's password that was used to create the crypto key.


The user's password is not used to create the crypto key; the key is randomly generated and burned in at the factory.


It is used to create the crypto key: the user's password is fed into a password-based key derivation function (PBKDF), and the output of the PBKDF is the key used for encryption/decryption.

The device's unique key is mixed into that PBKDF. Without both parts of the equation, you have nothing.

For your reading enjoyment: https://www.apple.com/business/docs/iOS_Security_Guide.pdf

Specifically page 11 the diagram at the bottom.
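A rough sketch of that diagram in Python, with a made-up UID value. The real derivation is tangled with the UID inside the hardware AES engine, so this is a model of the idea, not Apple's implementation:

```python
# Toy model of iOS passcode tangling (not Apple's actual code): the
# derived key depends on BOTH the passcode and a device-unique secret
# that never leaves the CPU, played here by a hypothetical UID value.
import hashlib

DEVICE_UID = bytes.fromhex("00112233445566778899aabbccddeeff")  # made up

def derive_fs_key(passcode: str, iterations: int = 100_000) -> bytes:
    # Real iOS tangles the passcode with the UID inside the AES engine;
    # this sketch simply uses the UID as the PBKDF2 salt.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID,
                               iterations)

# Same passcode, different device secret => completely different key,
# so a dump of the encrypted file system alone gives you nothing to
# check guesses against.
on_device = derive_fs_key("1234")
elsewhere = hashlib.pbkdf2_hmac("sha256", b"1234", b"other-uid", 100_000)
assert on_device != elsewhere
```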


Damn, that's a great idea. Talk to you in a billion billion years.

http://www.eetimes.com/document.asp?doc_id=1279619


Yeah I've got the same question. Is there some hardware safeguard that prevents copying the memory itself? You'd think the first rule of crypto forensics is to work on a copy.


Even if there were no such safeguard, you'd still have to break AES encryption...


They "just" need to break the user's password. If it's a weak one, which is likely for most users, a simple dictionary attack would work.


The user's password goes through a password-based key derivation function; that function spits out a key that is used for the AES crypto protecting the file system.

Now, the PBKDF requires a secret that is only stored within the iPhone itself (within the CPU, even, where it can't be read out directly). So if we instead grab a copy of the data, all we get is an AES-encrypted file system.

We have two choices: 1. attack AES directly and attempt to brute-force the AES key, or 2. attempt to recover the AES key by brute-forcing the user's password.

2 is vastly simpler than 1. However, 2 becomes more difficult (though still far easier than cracking AES itself) when you need to run your brute-force guesses through the CPU within the iPhone.
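Choice 2, sketched with hypothetical key-derivation parameters (and a deliberately low iteration count so the demo runs quickly; real PBKDF iteration counts are tuned to make each guess slow):

```python
# Sketch of choice 2: brute-force the passcode, not AES. This only works
# offline if the attacker also has the device secret -- which is why the
# guesses normally have to run on the iPhone's own CPU.
import hashlib

DEVICE_UID = bytes.fromhex("00112233445566778899aabbccddeeff")  # hypothetical
ITERATIONS = 1_000  # kept low for the demo; real counts are far higher

def derive_key(passcode: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode, DEVICE_UID, ITERATIONS)

target_key = derive_key(b"1234")  # pretend this is the file-system key

def dictionary_attack(candidates):
    for guess in candidates:
        if derive_key(guess) == target_key:
            return guess
    return None

# A 4-digit PIN has only 10,000 candidates -- trivial once offline.
pins = (str(i).zfill(4).encode() for i in range(10_000))
print(dictionary_attack(pins))  # b'1234'
```

With the secret locked inside the CPU, those same 10,000 guesses have to go through the phone itself, which is where the escalating delays and the 10-try auto-erase counter come in.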


I guess that's my question... is there really no way to get the secret off the hardware so that the function can be reproduced "offline"?


This is exactly the threat model this was designed to defeat, so I would imagine not. At the very least, if there is a way to do it, Apple is probably unaware. If they are aware, they'll probably open themselves to a class-action suit, as it's one of the core features that they advertised.


restating your question: Has Apple been lying to its customers for years?

