A thought experiment: Let's say the government creates hardware encryption standards, in the style of FedRAMP, for preventing tampering by foreign governments. Then imagine that a consumer electronics company voluntarily makes all of its devices comply with that standard. Could a court then compel the company to defeat the very standards the government set as tamper-proof against governments?
A second: What happens if Apple states that it will take a 50-person team, with an average annual labor cost of $200K per person, approximately 5 weeks to fix the problem, with a 50% chance of success? Can Apple bill the court roughly a million dollars to try to fix the issue?
A third: Apple open-sources their encryption modules and firmware. They no longer have proprietary information for how to unlock the phone. Are they legally required to be the ones who defeat a system about which they hold no proprietary information?
A fourth: The small team that built the system no longer works for Apple. Perhaps their visa was revoked and they left the country, perhaps they were poached by a competitor, or perhaps they retired in the years since this module was published. Who is responsible for complying with the order?
A fifth: The data is actually corrupted. Apple presents this conclusion under penalty of perjury after a thousand hours spent on the project, for which it requests compensation.
A sixth: Apple requests that trading of its stock be frozen for one month while it expends considerable resources on complying with an unexpected court order relevant to national security.
3: No, but they will probably be the ones asked anyway, and then yes, they would be legally required.
4: Apple.
5: What's the question? If the question is whether they will be compensated, then yes.
6: They can't. They don't own their stock. Bad PR is not a good enough reason.
You are treating the court like a mathematical proof and finding edge cases. I used to as well. But courts don't work that way at all - they don't care in the slightest about your proof. They analyze things on a human level, not a mathematical level.
This (and the job prospects) are the reasons I chose computer science/software engineering over being a lawyer, even though most people who commented on what I should be say I should be a lawyer. The law requires not just thinking like a human, but accepting such thought as valid. To me the logic is exactly the same as that of a racist who makes special exceptions for all the minority people they know while still holding their view in general. In effect the system is built on a mix of emotion and logic, and in doing so it is insanely unjust to those who fall on the unpopular side. A President who admits to smoking pot leading a government that ruins the lives of people who use pot... it is just as bad as some older men I know who admit to having tried pot but who think anyone currently trying it deserves to be put in prison.
Analyzing something on the human level is always a horrible standard when viewed outside the context of our own emotional fallacies.
This is an amazing meta-point here; the people on the other side of this debate seem to have no need for logical cohesion.
I might extend that a bit to ask 'what is the consequence of such a lack of cohesion?' Perhaps first that getting agreement on anything needs to be done on a close-to-case-by-case basis, since there is weaker adherence to standards. This can introduce favoritism (e.g. some people get hit for pot and others don't). Another consequence might be lower up-front costs: not needing to think too deeply about edge cases and whatnot is clearly cheaper when running cases of a kind never seen before. This in turn means people won't hesitate as much in setting new precedent, because they won't notice when they do. Another consequence would be the inability to get consistency across an entire system, since not everything can be debated by everyone and there are no clear rules.
You NEED human emotion not just logic when making laws.
Pure logic is like a 100% free market with zero oversight and government interference: it looks good on paper but not in practice.
Human emotion in laws adds things like forgiveness, understanding and compassion, things that mathematics can't help with. I know that it also adds greed, unfairness and discrimination, hence the constant struggle to find the right balance.
> They analyze things on a human level, not a mathematical level.
Courts analyse things on a political level. It has less to do with what normal humans think is right and more to do with causing the outcome the judge wants to occur.
The result is that the better judges are more consistent (and then get celebrated when the outcome in a specific case is politically popular but pilloried for creating "loopholes" when it isn't), whereas the worse judges find a way to make the politically popular thing happen no matter how tortured the logic required to get there. And the existence of the second class destroys the rule of law, because if you get in front of one of them it doesn't matter what you did, it only matters whether the judge likes you.
Thanks for answering, and the original question was great too. Are you a lawyer? I'm not trying to dismiss your answer because it seems logical, but asking so that I can ask something else.
I am not a lawyer. There was a time when I spent a lot of time reading verdicts from Judges. (The higher the level of Judge, the better the writing, and they are surprisingly readable, not full of legalese as you might expect.)
Anyway, I used to think that law was mathematical, but after reading lots of verdicts I realized it was not.
I was going to ask what happens if Apple says no! I don't know what the FBI's next step will be. What do you think? Can Apple be sued over this? Can they say, okay, give us the device and we will give you the data but not the actual 'modified OS'?
They would be fined, or executives charged with contempt of court and possibly jailed. That would also make the Judge mad at them, and he would not be likely to agree to anything they ask.
The Judge has a LOT of power - he can impose some really high fines, and that's just to start with. Apple will not mess with him.
> I don't know what the next step of FBI will be.
Not FBI. If the Judge says to do it (so far he has not), he will be the one making sure they do it. The FBI will just complain to the Judge if necessary, not actually do anything.
> Can they say okay give us the device and we will give you the data but not the actual 'modified os'?
They can ask the Judge if they can do that. It's up to the Judge to say yes or no. They'll probably have to explain why it's better that way, and that the end result would be the same. The FBI can then argue against it, (or accept it). The Judge will listen to both sides and explain his reasoning in a paper.
You might want to try to read all the stuff the Judge writes in this case. Ignore what the lawyers for Apple or FBI write, read just what the Judge writes (he will summarize what the lawyers wrote, so you won't miss anything).
There is no burden of proof. Apple says it's corrupted, and that's about it. I'm sure the other party will ask for details. If they don't believe Apple they'll explain that to the Judge and he'll go from there, but otherwise no one will demand proof of anything.
I don't see how statutes of limitations are connected to the question.
re: statute of limitations - if we are talking exponential levels of difficulty to crack vs. exponential increases in computing power (Moore's law), I wonder whether "we won't have the computing power to crack this for X years, so a statute of limitations of (X-1) years means that we are functionally unable to answer the question within the allotted time" becomes defensible as a reason to call something "impossible", i.e. it's improbable to solve during the statute of limitations.
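To put rough numbers on that idea (purely illustrative: the guessing rate, doubling period, and keyspaces below are made-up assumptions, not anything from this case), here is a quick sketch of when a keyspace falls within a limitations period if attack throughput doubles every two years:

    # Illustrative sketch only: when is a keyspace exhausted if attack
    # throughput doubles every 2 years (a Moore's-law-style assumption)?
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def years_to_exhaust(keyspace, rate_today, doubling_years=2.0, max_years=300):
        done = 0.0
        for year in range(1, max_years + 1):
            rate = rate_today * 2 ** (year / doubling_years)  # guesses/sec during this year
            done += rate * SECONDS_PER_YEAR
            if done >= keyspace:
                return year
        return None

    # A 2^80 keyspace against a (generous) 10^12 guesses/sec attacker today:
    print(years_to_exhaust(2 ** 80, 1e12))    # ~27 years under these assumptions
    # A 2^128 keyspace only falls after more than a century, far outside any limitations period:
    print(years_to_exhaust(2 ** 128, 1e12))   # ~123 years under these assumptions

So for key sizes in common use, "improbable to solve during the statute of limitations" is an understatement; the interesting cases are weak passcodes, not the underlying ciphers.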
The statute of limitations only runs until the court date. Once that starts there is no statute of limitations, even if the case takes years.
The closest I can think of is a right to a speedy trial, but that has so many exceptions (for example if the person is out on bail) that I couldn't say how a judge would rule in this scenario. Especially if it was the defendant causing the delay by refusing to reveal the password and making the prosecutor get it the hard way. (Not applicable in this case, might be applicable in another.)
Specifically, in this case, the only non-John Doe defendant I am aware of is dead, so those provisions do not apply.
The FBI is not doing this for proceedings-related evidentiary purposes but instead for investigative purposes. No one has been charged here (from what I know), so the issue of speedy trials is moot, while the statutes of limitations on the crimes under investigation are likely extremely long if they exist at all.
A statute of limitations prevents you from charging someone with a crime once some period of time elapses. It says nothing about how long you have to prove your case in court. That's more likely to run afoul of the right to a speedy trial.
They are imperfect and I'm sure you can find much wrong with the courts, but attempting to codify every edge case is not only impossible, but dangerous. The more laws you create, the more likely you can find a way you're breaking one. It's not a perfect system, but a cornerstone of American democracy is that you are judged by people, people like you, not by an algorithm.
We've been reminded again and again by recent events how horribly wrong things can go when the system is applied to people who are not like the judge/jury/officers/etc. There's got to be a better way than relying on inherently biased people, especially for commonly persecuted groups (minority races/orientations/occupations).
I think when someone says they want an axiomatic or algorithmic legal system, they're saying people are not like them, and they would rather be judged by an algorithm. Also that they would rather know in advance what behaviors will be judged positively or negatively.
I understand this, and if I thought it were possible, I would agree.
However...
The law is essentially trying to codify a moral code, one that changes with time. We've seen how the law struggles with changes to technology, lifestyle and shifting attitudes. But sometimes the very axioms change (see slavery, suffrage, common law). Because the law cannot keep up with changes, we're stuck with messy human interpretation to smooth over some of those rough edges. Something like Brown v. Board of Education, which seems obvious in retrospect, may not have occurred in an axiomatic system (perhaps Plessy v. Ferguson would not have occurred either, though considering that the founding document American law is based on once counted black people as 3/5 of a person, I find that unlikely).
I would argue that we ought to try to introduce algorithms into the enforcement of the law (policing, traffic enforcement, jury selection, public defenders, etc) rather than the interpretation of it. Of course, one could argue mass surveillance is exactly that, so I don't know.
I think my comment was misinterpreted. Our legal system is a part of our government, which is a representative democracy. Part of the idea behind a representative democracy is that it is designed to protect the minority from majority mob rule. By definition, edge cases are cases that fall outside of the normal majority. Our legal system is fuzzy because it is operated by humans, not silicon.
The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward. So in a sense, case law is intended to be the legal equivalent of mathematical proofs in the courtroom. Obviously this analogy isn't 100% correct, but the main reason it can't be is because of the thousands of edge cases that come up in real life...edge cases that are minority cases intended to be protected by representative democracy, but the case law legal system has difficulty actually doing so because of the design of the case law system. See the conflict? Completely lost yet?
>The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward.
Sorry if this comes out as pedantic but I think it is important. Case law is not intended to minimize fuzziness but to address occurrences of fuzziness.
I disagree, I think precedent reduces fuzziness because it shows how the law has been interpreted in the real world.
No cases are identical, but knowing how a similar situation was handled in the past clarifies, not confuses the situation. It gets us closer to a consistent interpretation of the law, which is, in my opinion, paramount because consistency ideally means predictability and equality.
Indeed the roots of the common law go back to making sure all people are viewed equally under the law, rather than having random interpretations based on who's judging and who's being judged.
> The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward.
But then you go on to speak of edge cases that precedent can't deal with, but I would argue that your view of case law as essentially "algorithmic" is flawed. Obviously precedent cannot be exact, but using prior interpretations helps guide thinking.
For example, look at Katz v. US, one of the seminal cases that would inform this case. In Katz, the court starts with existing law, one that prevents illegal searches, and tries to decide whether that law can apply to electronics (such as tapping a phone). At this point the OCCSSA is in effect, but not really tested, so we've got a pretty fuzzy legal area despite the fact that phones are a well established technology. The court rules that even though law enforcement did not physically search or seize anything, privacy is still implied in electronic communications because of other acts a user takes surrounding the act of making a phone call (closing the door, making the phone call from home, etc.).
In fact, there's a whole legal concept at work here called lawful intrusion that is built upon every time a case is judged. These concepts, and human application of them to the case at hand, help attorneys, judges and juries deal with edge cases.
> You ever get in a fight where it wasn't 100% clear who was right and who was wrong? You were both a little bit right and a little bit wrong?
I'm not really sure what point you're trying to make. Yes, the world is not black and white, but neither is math.
If you have e.g. a car crash where one person was speeding and the other failed to yield the right of way, they're both at fault. Maybe one is more at fault than the other and therefore has to pay a higher percentage of the damages, but how is that not still an algorithm?
The hard part is designing the right algorithm ahead of time. But throwing up your hands and saying that problem is hard and therefore we should give up on having laws and just let judges decide everything on a case by case basis isn't democracy and isn't the rule of law. It's the rule of whatever the judge says it is today.
It's impossible to design an appropriate algorithm ahead of time capable of accepting all of the possible inputs that might occur.
Which isn't to say that we shouldn't do our best. We shouldn't just throw up our hands. But I think it's important to admit that judges will always be necessary to handle inevitable ambiguities.
It's completely possible to design an algorithm that will give a deterministic result in every case. It might not always be the result you want, but a judge might not give the result you want either. And at least with the algorithm the result is predictable (even if not predicted) and, if wrong, can be fixed and then applied consistently in the future without making decisions based on politics or race or personal relationships.
Or to put it another way, we could have an algorithm make the decision but then have a judge whose job it is to find when the algorithm is wrong and then fix it for future but not past/current cases.
> There is a tradeoff between predictability and correctness. Where you come in on that tradeoff is pretty different from most other people.
Because I don't see it as a tradeoff between predictability and correctness, I see it as a tradeoff between correctness for past acts and correctness forever in the future.
If you know a law is ridiculous but you also know that a judge will see the same thing and then not let you do that, you won't do it. Some people like that result. The problem is it causes the ridiculous law to carry on existing and not be fixed, because nobody is willing to challenge it when they know they'll lose and go to jail even though by the letter they shouldn't. So then nobody ever knows what the law actually is because it clearly isn't what it says it is, but if it isn't that then what is it?
No robot ever lobbied government for special exemptions to robot-owned businesses, which it then used to completely trash the economy while accumulating vast amounts of wealth, which it in turn used to strengthen its grip on government. Also, no robot ever argued that it must own every single thing it says for life+95 years or it will stop speaking.
Because designing a law that makes sense when interpreted strictly formally is a friendly-AI-equivalent problem, i.e. it requires figuring out what values humans actually care about and then implementing it in a system that can outsmart anyone trying to game it. Failing that, people will get hurt by corner cases in such a law, or suffer abuse from people gaming the system.
There is a different point to be made here though. You can't predict everything ahead of time, but what do you do when somebody finds an edge case?
For the future the answer is obvious. You consider the edge case, decide what to do when it happens, publish the decision and follow it from now on.
The real question is, what do you do for the person who fell into the edge case before it was decided?
And the problem with the existing criminal justice system is that the judge decides how that edge case should be handled and then imposes it on the defendant's behavior ex post facto. Which the courts don't allow Congress to do but seem to have no problem doing themselves.
There is a fair argument to be made that we should let the first defendant who does it get away with it, but then publish the decision saying that it was wrong and imposing that on the next one.
It's kind of like the exclusionary rule: Somebody getting away with something that one time would be the punishment Congress gets for not considering more of the edge cases from the outset (or passing simpler laws that are easier to reason about).
I think there's yet another point to be made here. In an imperfect world, where we can't design axiomatic systems that are corner-case free and where we can't get stuff right on the first try, the judge's flexibility seems like a necessary safeguard against the letter of the law totally missing its spirit. I like to imagine it as a grease, a lubricant - you need to apply it, or else minor imperfections will grind your machine down.
A fundamental question here is - do you want the law to be a completely trustless system? Sticking to the letter of the law and enumerating all imaginable corner cases are examples of going into that direction.
I'm not totally convinced trustless systems are a good idea for society, because of inefficiencies they introduce as a cost of not having to trust anyone.
It's like arguing that the developers of Windows or MacOS or Linux or whatever should just start over and do everything right this time. Or Intel and AMD should create a new CPU architecture and do it right this time. What you'll quickly find are a million reasons why things are done the way they are, and that this whole restarting project was a terrible idea.
Yeah, it sounds great, but anyone who knows what that actually means realizes it's a hopeless idea.
It's not possible to make a perfect system. It's completely possible to let the first person to find an imperfection take advantage of it but then immediately fix it.
Which would create an incentive for finding imperfections and therefore fix more of them faster.
> The federal securities laws allow the SEC to suspend trading in any stock for up to ten trading days when the SEC determines that a trading suspension is required in the public interest and for the protection of investors.
While I only have anecdotes, when I've encountered governments rolling out Apple devices it is far more about being able to have an Apple device than any technical ability it provides. This comes from talking with those in charge of the roll out who admit such. Granted, this is only 3 very much non-random anecdotes.
I worked in government mobile security for many years, until last November. Many agencies are indeed finally rolling out iPhones and scaling back on BlackBerrys.
That said, Apple's enterprise business grew 40% in 2015 to $25B (http://appleinsider.com/articles/15/10/27/apples-enterprise-...). .gov is a big part of that. When you attend Apple enterprise-focused events, the attendance has grown beyond the old media/field service crowd to include finance, police and .gov attendees.
If you need to comply with various federally mandated compliance regimes (e.g. FIPS 140-2, CJIS, IRS Pub 1075), iOS is very clearly the easiest and most straightforward platform on which to achieve that compliance. It's not as trivial as it was with legacy BlackBerry, but it's pretty close.
With Android, the carrier and manufacturer variance makes it more difficult to achieve, demonstrate and maintain compliance. That said, Android offers a number of advantages from an application perspective, but when in production, you'll commonly see licensed third party software in place to perform typical mail and other functions.
BlackBerry failed to keep the positive aspects of its legacy platform functioning when modernizing the platform.
I used to run an environment with something like 25k devices... Provisioning and management was braindead simple with legacy BlackBerry, and its integration model for mail & calendar was robust and reliable. Most users considered BB more reliable than any other mail access method. PIN messaging was device-centric vs. identity-centric, which made it attractive for many use cases. That also went away, along with the perceived and real security benefits.
That benefit became a liability -- BlackBerry was very "enterprise telecom" and mail-centric. So the people who were responsible for BlackBerry were very much affected by tunnel vision. That's why BlackBerry was surprised by the market shifts -- their customers were very happy, but were ultimately people with an overly narrow focus who were disconnected from the business.
When BlackBerry switched to the ActiveSync model, they became just another ActiveSync device, with the added "benefit" that the rest of the platform was a mess. They are now trying to leverage their past reputation for good management to become an MDM vendor for some reason.
iPhone slaughtered Blackberry by attacking their strengths... BBM/PIN messaging with iMessage (with the added feature that you cannot intercept the messages if the feature is enabled), the obvious improvements of iOS vs. Blackberry OS and an easy institutional management model.
Poor sales, the major benefit of BlackBerry (BBM) being overshadowed by other services, being late to adjust to the times, etc.
I doubt you were suggesting that BB's downfall is rooted in their encryption, however. In the slim chance that it was, they didn't speak up about backdoors and letting .gov into their devices until 2015, way after their market share had shrunk.
While I haven't seen a heavy market in that aspect, there appears to be some level of improvisation with BlackBerry's messenger: http://www.bbm.com/bbm/en.html
Courts already have ways to balance requests for information (by the government or another litigant), with the burden on third parties who might have to undertake expenses to get that information. Generally, third parties are required to take reasonable measures to help.
Remember, this is an iPhone 5C, which doesn't have Touch ID or the Secure Enclave; the security model for this phone is significantly different from that of more recent iPhones.
On phones with a Secure Enclave, the wipe-on-failures state is managed in the coprocessor (which runs L4), and is not straightforwardly backdoor-able.
If you're worried about the police brute-forcing your phone, enable Touch ID and set a passcode that is approximately as complex as the one on your computer.
Even without the Secure Enclave, is it even possible for Apple to do this? The article talks about how Apple could add a backdoor to the OS and update the software on the device in order to break this, but I'm not sure how anyone is supposed to update the software on the device while it's locked, without erasing the device in the process (assuming of course that the iPhone is running a relatively recent version of the OS to begin with). AIUI Apple used to have the ability to backdoor phones for use in complying with law enforcement requests, but they removed that ability several years ago. And of course if they still had that capability with this phone nobody would need to order them to add a backdoor anyway since they could just bypass the passcode directly.
The court order refers to the need to load the custom OS image via DFU (device firmware upgrade) mode. I am not an iPhone user but I'm assuming that is exactly what the name implies. (some pre-boot recovery environment)
That's probably the case for you and me, but I'm sure with sufficient knowledge of how DFU works (for example, by employing the engineers who designed it) you can persuade it to only rewrite particular blocks, leaving the data intact.
The key is not the password. If that were the case, the phone would have to re-encrypt everything every time you change the PIN or password. The password unlocks the key. And if you brute force the key itself, it might take decades, maybe centuries.
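That separation is the standard key-wrapping pattern. Here is a toy sketch (emphatically not Apple's actual scheme, just the general idea) of why changing the PIN only re-wraps a 32-byte key instead of re-encrypting all the data; PBKDF2 and a plain XOR wrap stand in for whatever slow KDF and key-wrap primitive a real implementation uses:

    # Toy key-wrapping sketch (NOT Apple's actual scheme): the passcode never
    # encrypts the data directly, it only unwraps a random data-encryption key.
    import os, hashlib

    def derive_kek(passcode: str, salt: bytes) -> bytes:
        # Deliberately slow KDF so each passcode guess is expensive.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000, dklen=32)

    def xor(a: bytes, b: bytes) -> bytes:   # toy stand-in for a real AES key wrap
        return bytes(x ^ y for x, y in zip(a, b))

    salt = os.urandom(16)
    data_key = os.urandom(32)               # the key that actually encrypts the data
    wrapped = xor(data_key, derive_kek("1234", salt))         # what gets stored on flash

    # Changing the PIN re-wraps the 32-byte key; the encrypted data is untouched.
    unwrapped = xor(wrapped, derive_kek("1234", salt))        # unwrap with the old PIN
    wrapped = xor(unwrapped, derive_kek("987654", salt))      # wrap with the new PIN
    assert xor(wrapped, derive_kek("987654", salt)) == data_key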
You could rewrite the bootloader and recovery firmware to really do anything. Resize some partitions and dual boot into a shell with some drivers for keyboard and wireless.
No, because the flash chips are only decryptable when they're installed in the phone. The user's passcode is tangled with a long key burned into the SoC, so you need both to decrypt the flash.
Even with Touch ID, it would be of no use. Touch ID requires the passcode after 48 hours, or after the device restarts.
Which is interesting. If you happen to use TouchID, is your best bet to hope a court will not be able to compel you to unlock it within 48 hours of arrest? That sounds very probable.
After five failed fingerprint attempts, your password is required to unlock the phone. That seems pretty safe to me. If you're ever ordered to unlock the phone, just touch an unregistered finger to it. Fingerprint sensors aren't foolproof. It'd be hard to prove you deliberately sabotaged the effort.
Though, one feature I'd like would be to register a distress fingerprint. Then I could touch say... my left index finger to require a password unlock.
If you do this on purpose after being asked to unlock your phone, you will probably be charged with destruction of evidence or something like that.
However, while a court is (AFAIK) able to ask you to put your finger on the fingerprint reader, you do not need to tell them which of your fingers is the correct one. So instead of purposely using a wrong finger, I'd ask the court to explicitly tell me which of my fingers I should use to unlock the phone.
I think the court would similarly consider that obstruction and contempt. If they tell you to unlock your phone and you try to play some "first you have to guess which finger's the right one!" game, the judge will slap you with either contempt of court or refusal to comply with a subpoena.
IANAL, but I don't think there's much of a difference between asking someone to reveal the correct password and asking someone to reveal the correct finger. In both cases you would be asked to incriminate yourself.
If it would be lawful for a court to ask you to "unlock the phone with the correct finger" then they might as well also ask you to "unlock this harddisk with the correct keyboard keys pushed in the correct order (as a password)".
Courts care about precise distinctions of law (that's their purpose!). Seems clear that fingerprints aren't protected, basically the same thing as your face in terms of privacy given a good enough camera.
But they would effectively be asking you the question "which finger did you use to lock this phone" to which you may plead the 5th.
It'll be contempt and possibly more if you don't unlock the device with your fingerprint.
It's not hard, the "precise distinction of law," is "unlock this with your finger, whichever one does it." I don't know what complicated back and forth you're imagining, but it's never occurred in any case that I've heard of.
> they would effectively be asking you the question "which finger did you use to lock this phone" to which you may plead the 5th.
We already covered this in the link above: the 5th Amendment covers passcodes, not fingerprints.
But, as Schrodinger says, this is not about your fingerprint; it is about a bit of information you have that the government does not have: which of your finger(s) this device knows about.
"A bit of information you have that the government does not have" is a password.
I really don't understand this line of thinking. What is the scenario you imagine where the question of "which finger?" is relevant? I'll lay out the beginning:
- Police want to get into your phone for some reason
- You refuse to help them based on 5th Amendment or admiralty law or whatever
- They go to court for a order compelling you to operate the touch lock to open the phone
- You receive the order
- ?
Please lay out the "?" part, if you don't mind. I'm highly curious.
> We already covered this in the link above: the 5th Amendment covers passcodes, not fingerprints.
No, it is you who is not understanding schrodinger's assertion. The secret knowledge of which finger unlocks it is in itself a passcode and subject to 5th Amendment protection.
Imagine you were to take the example further, unlocking the device required a sequence of fingerprint reads, with a precise ordering. i.e. left-ring finger, right index finger, right little finger, etc... That sequence would be a passcode, just as a precise sequence of keypresses would be. The government can insist on all your fingerprints, but not (in this argument) the correct sequence of uses of those fingerprints to unlock it. If it's only a single finger this same argument could apply.
I wonder if we can then reduce this example to a 1-finger sequence. Could a court require you to turn over all of your fingerprints, but not identify the (1 character) sequence?
As other have noted, the distinction is between fingerprints (all of them) and the correct fingerprint (one of them).
If you would like to claim that there's no difference between the two, then you (and your hypothetical court) should have no problem with a user supplying copies of all their fingerprints when asked to unlock their phone.
That's obviously not what's being asked for, hence other people's distinctions.
As I said, the court doesn't care about the, "which finger?" question. If they tell you to unlock your phone with the fingerprint that unlocks it, you replying with "which finger?" isn't going to help you.
Of course "the finger," and "which finger," are different things, but that's irrelevant.
Yes, and when they do force you to give your fingerprints, they don't ask you which finger you want them to print; they tell you which finger and when.
Use your middle finger to actually unlock and use your index finger to fail. Unless the phone retains the biometric data after erasing itself, it's deniable.
This might work in the short term, but if enough folks actually did it, there would soon be a law that you may only use your thumb to unlock your phone.
I only recently got a touchID iPhone so I'm still having fun with it. But I did my right thumb, index, and middle finger, and left thumb and index.
If under police duress I keep trying to unlock a phone with my pinkie finger I think that would be suspicious.
If I have that much access to the device I should just force-reboot it by holding the lock and home buttons for about 2 seconds. Or maybe have done that before being arrested.
Upon a reboot the iPhone will always require its passcode.
They would fingerprint the reader, and you, to get a rough idea of which finger is a likely candidate. Even an extremely partial print should be enough to narrow it down to 4 or 5. The reality is that all right-handed people hold phones in their left and so use fingers on their right hand. Lefties do the reverse. So it is already down to five candidates ... which might have something to do with why apple picked that number in the first place.
It depends on the size of the phone. I too open/unlock my phone with the same hand that holds it (pattern unlock). But my phone is tiny. As a phone gets larger, so does the likelihood that it is manipulated with two hands. Having the scanner below the screen also requires some dexterity when used single-handed, increasing the likelihood that the thumb is the print finger.
If one is really paranoid, register only a toe print. I've currently got my right big toe registered to my phone as an experiment, and it works about as often as finger scanning.
Going through a few thought experiments, one might think they could adapt by dusting it for prints and requiring you to use the one they found there.
On the other hand, one might habitually touch, but not register, random fingers to it, while registering some fairly unusual finger as the real one, while using the dominant hand's index finger to "unlock" it.
Finally, someone might decide that if you fail the unlock, they'll just inspect the fingerprint module in isolation and if it works, they'll assume you did that deliberately.
Having made the request for both an erase password and an erase fingerprint, I would not mind going one further: a setting which wipes the phone if neither is entered within a set amount of time. That would protect you when the phone is stolen or confiscated.
I don't know about that but I'd be fairly certain a court would just order you to unlock the phone regardless of whether it's your finger locking it or a password.
In the USA the courts treat passwords as testimony, and in most cases you can invoke your 5th amendment right and refuse to provide passwords or encryption keys, given the state does not already know the contents of the device. This same protection does not extend to physical keys, which I think fingerprints would fall under.
That seems to be representative of the only actual ruling on this topic that I can find.
>The Fifth Amendment to the U.S. Constitution gives people the right to avoid self-incrimination. That includes divulging secret passwords, Judge Steven C. Frucci ruled. But providing fingerprints and other biometric information is considered outside the protection of the Fifth Amendment, the judge said.
It would seem to me that the "fingerprint = key" analogy is flawed. Physical keys can be trivially copied, and even reverse engineered from the locking mechanism. A key is a physical object that is required to disengage a lock.
A fingerprint, when used on an electronic lock of this kind, is not a key. It's attributable to one person only, not trivially duplicated, and not able to be reverse engineered from a locking mechanism. It requires an action by a single person who cannot be forcibly relieved of their possession of their fingerprint.
Additionally, a key is specified during the manufacturing or assembly of a lock, and comes with the lock, since they are "paired" when the lock is made. However, a fingerprint or password are specified by the user at will after they've assumed ownership of the device. They "testified" their identity to their phone with a fingerprint, just like they did with their password.
If compelled to imprint a finger, it is the same sort of personal interaction that a password entails: the credential holder utters/presents their personal information - not a physical object, but a repeated testimony of the same content they previously and uniquely presented to their device. It should be protected as other self-incriminating testimony under the fifth amendment.
I think the courts have this one right. Your finger is a characteristic of you, not compelled speech.
If you can compel a suspect to stand up on a lineup, or produce id, there's no reason why the court shouldn't be able to compel you to produce a finger.
In technical terms, the finger is really a "something you have" second authentication factor. If you think of it on those terms, it's more like looking at someone's Hardware token than compelling a password disclosure.
>Your finger is a characteristic of you, not compelled speech.
So is all your knowledge. The technology to extract it doesn't yet exist, but once it does, should it be deployed by the courts without a challenge from the 5th Amendment?
> If compelled to imprint a finger, it is the same sort of personal interaction that a password entails:
Not quite sure it is, though. If they've already arrested them, they already have the fingerprint, don't they? That is different from a key, and different from a password.
Interesting. Being Australian, I don't have a 5th Amendment to protect me. Also, my understanding is that at least in Australia you'd likely be charged with obstruction of some sort; does that not fly in the US?
I was under the impression that law enforcement can coerce you to put your finger on the touch pad to unlock your device much easier than they could coerce you to provide the pass code. I have Touch ID disabled on my device for exactly this reason. Is that not the case?
> The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing. No software or firmware can read them directly; they can see only the results of encryption or decryption operations performed by dedicated AES engines implemented in silicon using the UID or GID as a key. Additionally, the Secure Enclave’s UID and GID can only be used by the AES engine dedicated to the Secure Enclave. The UIDs are unique to each device and are not recorded by Apple or any of its suppliers. ... Integrating these keys into the silicon helps prevent them from being tampered with or bypassed, or accessed outside the AES engine. The UIDs and GIDs are also not available via JTAG or other debugging interfaces.
Even for older devices like the iPhone 5C, if the owner chose a good passphrase, I doubt it can be decrypted even with Apple's help.
Thanks. So the only recourse for a highly resourced adversary will be to extract the key via hardware imaging (not sure if any research has been done on this), and after that they will still have to brute force the passphrase used to secure the phone, the effectiveness of which depends on the entropy of the passphrase.
I wonder how Apple can help law enforcement here.
A lot of research has gone into information recovery from silicon inspection since it's tied closely to reverse engineering ICs. It's not the most trivial of pursuits but widely done.
There are some hardware HMACs (Atmel's in particular IIRC) where the process of opening the chip package destroys the area of silicon that encodes the private keys. I don't know if Apple used the same tech but if they did, any attempt to look at the private key storage would destroy it.
This kind of security is used in SIM-cards, access-cards for pay-TV, TPMs. Kind of standard with various variations.
Some criss/cross metal mesh as the topmost layer you would have to penetrate, or photodiodes that sense the light if you put a device under a microscope, ...
That's not how quantum crypto works (it's based on observation of state, not the algorithm). Further, we've had cases of quantum crypto that just wasn't good enough to stop an observer from MITMing the internal state.
The fuses are only "blown" (i.e. the UID is burned into the chip) at manufacturing time, not when the device is erased.
When a device is first set up (or wiped) a random key is created and encrypted by the Secure Enclave with a key derived from the user's passcode and the device's UID. Since only that particular device's Secure Enclave has access to the UID, the user's passcode can't be brute forced on any other computer, which enables the Secure Enclave to enforce policies like the escalating delay between passcode attempts and the limit on incorrect attempts. If the device needs to be wiped, the random key is simply erased by the Secure Enclave.
(Also, if you only changed 1 bit that would mean you only had to try 2 possible keys...)
All I'm really saying is that the complexity of your passcode matters a lot in this scenario, so anything you can do to increase it will tend to pay off.
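Here is a toy model of the scheme described a couple of comments up, with stand-in primitives (an HMAC plays the role of the hardware AES engine keyed by the fused UID; none of this is Apple's real code). It shows the two properties being discussed: guesses have to run on the device because the UID never leaves it, and a "wipe" is just erasing one small wrapped key:

    import os, hmac, hashlib

    DEVICE_UID = os.urandom(32)   # stand-in for the key fused into the SoC at manufacture

    def tangle(passcode: str) -> bytes:
        # Every guess must involve this device's UID, so it can't run on a GPU farm.
        return hmac.new(DEVICE_UID, passcode.encode(), hashlib.sha256).digest()

    def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

    class Device:
        def __init__(self, passcode):
            class_key = os.urandom(32)                        # random key protecting user data
            self._wrapped = xor(class_key, tangle(passcode))  # only the wrapped form is kept
            self._check = hashlib.sha256(class_key).digest()  # lets us detect a correct unwrap
            self._failures = 0

        def try_passcode(self, guess):
            if self._wrapped is None:
                raise RuntimeError("device wiped")
            candidate = xor(self._wrapped, tangle(guess))
            if hashlib.sha256(candidate).digest() == self._check:
                return candidate                      # correct passcode: data key recovered
            self._failures += 1
            if self._failures >= 10:                  # the limit the court order targets
                self._wrapped = None                  # "wipe" = forget one small key
            return None

In this toy model the attempt counter and the per-guess cost are enforced by the same trusted component, which is why the order asks Apple to change that component's behavior rather than attack the ciphertext.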
I am not saying you're wrong. I was hoping there was a way to provide more security that I overlooked, however that doesn't seem to be the case.
Most people likely use their dominant hand, probably the thumb, maybe the index finger. In this threat model, someone lifts your phone and opens it with a fingerprint. Assuming they can completely replicate a print and get one (a fairly non-trivial assumption), they could probably get through within 5 tries.
If you read the iOS security guide you'll know Apple built the phone in such a way as to wash its hands of these types of requests. They'll say it's impossible and they won't be lying. Nothing is ever impossible, but it will be very impractical. The hardware and software are built to ensure this.
I think the real game here is to compel Apple to build a backdoor into future models. I expect to see a lot of rhetoric around this, until something forces Apple's hand.
That is possibly true for current models of the iPhone. It is significantly less true for the 5c in question, which has less robust security features. See other answers referring to the Secure Enclave.
If Apple can decrypt the phone, it will prove to everyone that backdoors exist. If they can't, and they tell the FBI as much, it will just give politicians more reason to sound off about how we have to have backdoors, because this shooter was a "terrorist" after all, and we just have to suck it up and do whatever is necessary to go after people like that.
Did you read the article? The court didn't order Apple to decrypt the phone. Instead, Apple has to disable the phone's feature that automatically wipes the hard drive after 10 failed password attempts. This is so that the FBI can brute-force its way into the data.
This could be example of parallel construction[1]. They may already have unencrypted it via a backdoor, but they wouldn't be able to use anything they find as evidence in court because they'd have to reveal the backdoor. If they can plausibly show they brute-forced it instead, they keep the backdoor hidden.
"it could be parallel construction" is true in literally every instance since it's impossible to prove the negative case.
This is becoming my cue to stop reading the comments; when parallel construction is the most obvious argument, you've read the interesting ideas up thread.
However, the US Government is known to have gathered hidden evidence in drug cases, then used parallel construction to hide the violation. So now the government's presentation of evidence should always be considered in question. As it has shown dishonesty in the gathering and presentation of evidence, and there's nothing that says it has changed its unethical ways, how can it be trusted to present evidence legally gathered?
Ok, but IMO this is a much lesser evil than (1) compulsory lawful-override of encryption/back door or (2) legislation to exclude devices which don't provide back doors.
Ultimately states will develop the capacity for brute forcing and you have relatively little recourse. While I hate the idea of a three letter agency doing this at any scale large or small, the potential for corrupt local LEOs to abuse their power with an encryption backdoor is very great.
Brute forcing a password could take more time, with today's technology, than we have left on Earth, depending on its complexity and whether there are known vulnerabilities. I'm not sure I would consider this order effectively an order to "unencrypt".
After a few attempts the OS would rate-limit guesses to prevent exactly that. On some iOS versions it is possible to override this mechanism by cutting power at the right moment[1] but this exploit has been patched for a while and I doubt this device is vulnerable.
(3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.
The encryption key is calculated from your passcode + the AES key etched into the chip inside the phone. There's no way to read that key directly, unless you do some crazy chip imaging where you read the actual electron state of the memory - could be done, but the chance of corrupting that memory is very high, and if they read even one bit wrong then the entire key is useless.
So there are two ways to go about this - they can either brute force AES, which, quite simply, can't be done (and I don't mean can't be done with current computers; the number of possible combinations is larger than the number of atoms in the universe or something stupid like that), unless the NSA has a way to crack AES faster (but if they do, they won't make that knowledge public). Or try every passcode combination going through Apple's full algorithm, which takes about ~5 seconds to generate a key. So it's doable, but it would take some time.
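Rough arithmetic for that second path, taking the ~5 seconds per derivation from the comment above at face value (the passcode lengths are just examples):

    # How long does trying every passcode take at ~5 seconds per on-device derivation?
    SECONDS_PER_ATTEMPT = 5
    for name, space in [("4-digit PIN", 10**4),
                        ("6-digit PIN", 10**6),
                        ("8-char lowercase+digits", 36**8)]:
        worst = space * SECONDS_PER_ATTEMPT
        print(f"{name}: up to {worst/3600:,.0f} hours (~{worst/86400/365:,.1f} years)")
    # 4-digit: ~14 hours; 6-digit: ~58 days; 8 alphanumeric characters: ~450,000 years.

So whether this amounts to an order to "unencrypt" really does depend on what kind of passcode was set.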
Maybe? Do we even know what type of passcode they used? If they turned off simple passcode then they could have entered anything they wanted at varying lengths.
For me, the most interesting question I would have is absent from the article.
The court is basically ordering Apple to produce new firmware that doesn't block brute forcing. If Apple were to comply, who keeps this firmware after the fact?
There's no mention of this at all, but if the firmware image stays with the FBI then the implications are much more profound with regard to privacy.
They specify that the FW will be locked to the unique device's ID, so it won't be usable on any other one.
But once it's established that it can be required from Apple, Apple has to comply, and Apple effectively does comply, other judges in other cases will be able to request other FW, hard-coded against other IDs, as needed.
The middle-term solution to this is for Apple's security team to protect against this threat model, and implement encryption in such a way that _they_ can't bypass it under compulsion. I'm not an Apple customer and I don't follow their products closely, but I understand that the iPhone 6 is already harder to bypass than the iPhone 5.
I would bet you all the money in the world that the very second such a firmware image was provided to the FBI it would find its way to the CIA/NSA. All with the assumption of course that the FBI has no rogue agents who work for foreign governments or criminal organisations.
Apple is right to be terrified at the thought of being asked to make such a firmware image.
I do wonder, though: had Apple not predicted this exact scenario ahead of time (likely), how would they control this?
It's unlikely they can rely on hardware protections to provide this device locking, so is it the case that they would build the unique identifier into the image?
Optimistically some obfuscation could help, but are the FBI/CIA/NSA really more than a few hops away from opening the binary image in a hex editor and changing it by hand?
If Apple firmware images for the iPhone are signed per-device then fine, but is that the case?
I don't know this but it seems unlikely to me that a custom device-signed build of iOS happens for every iDevice, and if that's not the case, I can't see how Apple can reliably restrict this with confidence.
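One plausible way to restrict the image without building per-device binaries (a hedged guess, not a claim about how Apple's signing actually works): embed the target device's unique ID in the signed payload, and have the boot ROM refuse to run the image unless the signature verifies and the embedded ID matches its own. Hex-editing the ID then invalidates the signature. In the sketch below an HMAC with a stand-in secret plays the role of Apple's real (asymmetric) code signature:

    import hmac, hashlib

    APPLE_SIGNING_SECRET = b"stand-in for Apple's private signing key"

    def sign_firmware(image: bytes, target_device_id: bytes) -> bytes:
        # The device ID is covered by the signature, so it can't be swapped out later.
        sig = hmac.new(APPLE_SIGNING_SECRET, target_device_id + image, hashlib.sha256).digest()
        return target_device_id + sig + image

    def boot_rom_accepts(blob: bytes, my_device_id: bytes) -> bool:
        embedded_id, sig, image = blob[:8], blob[8:40], blob[40:]
        expected = hmac.new(APPLE_SIGNING_SECRET, embedded_id + image, hashlib.sha256).digest()
        return hmac.compare_digest(sig, expected) and embedded_id == my_device_id

    fw = b"custom DFU image with the retry limits removed"
    signed = sign_firmware(fw, target_device_id=b"DEVICE_A")

    print(boot_rom_accepts(signed, my_device_id=b"DEVICE_A"))      # True: runs on the target
    print(boot_rom_accepts(signed, my_device_id=b"DEVICE_B"))      # False: refused elsewhere
    # Hex-editing the embedded ID doesn't help: the signature no longer verifies.
    print(boot_rom_accepts(b"DEVICE_B" + signed[8:], b"DEVICE_B")) # False

A real boot ROM would check an asymmetric signature against Apple's public key, so merely possessing the image would not let anyone re-target it; the open question is whether the signing process actually covers a per-device field like this.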
I agree with you that this will be difficult or maybe impossible to implement. However the court has foreseen the upthread argument as the order shows.
As many here I believe that once this backdoor exists it will be somehow exploited (at the very least by further orders).
> Apple ... will probably have little time to debug or test it overall, meaning that this feature it is being ordered to build will almost certainly put more users at risk.
Eh? They are not being asked to install it for the public at large, just on one phone.
Of all reasons to object, this reason makes little sense.
That's true. In fact, if it's possible for Apple to accomplish what DOJ is demanding of it, the best outcome would be for DOJ to succeed, and do so publicly:
* There is an authentic need to get at the data on that phone
* There's no likelihood at all that other users will be impacted by the backdoor
* We'll all be on the same page about how secure these phones are versus the USG.
It's possible that they can prevail against the 5C but not against the 5S or later, since the security architecture of the 5S is very different from that of the 5C.
> There is an authentic need to get at the data on that phone
What is the authentic need? The shooters are dead. Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?
If you shoot a bunch of people while declaring allegiance to an organized group known for shooting bunches of people then I think that pretty clearly demonstrates that reading your communications has a pretty high likelihood of turning up something useful in preventing future incidents. If this doesn't clear your hurdle for reasonable search then what would?
To be clear, I don't think the order to Apple is necessarily altogether a good idea or is even going to produce the desired results, but your complaint seems to be with the fact that this data is being pursued at all.
Edit to reply:
> Evidence of a conspiracy would help. You said they declared allegiance to an organized group. When they did that, did they say or hint that they had been in contact with that group, other than, say, watching public YouTube videos?
The woman in the couple declared it right before the shooting[0]. Do you want a notarized letter from the deceased?
> Would you agree that "high likelihood" is too low a bar for justifying searching the phones of people who live in high-crime neighborhoods?
I'm pretty sure neither "high likelihood" nor "authentic need" were being used as a term of art here, but I would bet that any judge would view the commission of murder declaredly for an organized militant group to be probable cause that there is information pertaining to more criminal activity by that group on these two's phones and in their communications.
Do you really view this as a government overreach or are you just trolling? Under what circumstances, if any, would you see as justified a search of someone's email? phone? house? So far you've equivocated between living in a bad neighborhood and committing murder-suicide.
> The woman in the couple declared it right before the shooting[0].
I'm not questioning that she declared allegiance. I'm asking if she was in private contact with anyone. If you were responding to that, can you show me where that is in the NYT article you linked? I don't see it.
> Do you want a notarized letter from the deceased?
Let's try to keep this civil, please.
> Do you really view this as a government overreach or are you just trolling?
I actually believe the things I am saying. I am not saying them to anger or upset you or anyone else. Please do not let the fact that we disagree about the scope of the 4th Amendment cause you emotional suffering.
I am not ready to declare it overreach, because I do not know all of the evidence yet. This is why I have been saying things like "Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?" and "did they say or hint that they had been in contact with that group" and "I have not followed the news on this shooting, so I would not be shocked if the answer were 'yes, there is some evidence of a conspiracy'."
If there is no such evidence, I do think it is overreach, but my opinions on policy are not fixed in stone, and I sometimes change my mind about them when presented with new arguments, ideas, or philosophies.
> Under what circumstances, if any, would you see as justified a search of someone's email? phone? house?
I doubt anyone has a complete enumeration of all circumstances under which they feel a search is justified. I would feel torn if there was lousy circumstantial evidence that the phone would solve or prevent crimes, I would be in support of a warrant if there was strong evidence, and I am opposed to a warrant with no evidence. One thing I would call strong evidence is a shooter having announced that he or she was part of a terrorist cell in the US.
I will no longer be reading or responding to your edits that are "edited to reply". If you want to discuss this with me further, please reply by using the "reply" button. I will not be editing any of my posts to "edit to reply".
You keep switching between legal and normative requirements. We disagree on the 4th amendment in the same way that scientists and climate change deniers disagree about global warming. You have a fringe understanding of it with no support from the relevant literature and your arguments about it are poorly structured, deny evidence, and rely on intentionally misunderstanding context and terms of art.
The legality of searching for evidence is pretty open and shut because you need probable cause. The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.
Shooting a bunch of people and saying you're with ISIS is plenty of probable cause for a search. I don't see how you're waiting for "all the evidence" here since all the relevant facts are in and they're sufficient. Whether or not she was conversing privately with ISIS counterparts would be the resulting information of the search.
> One thing I would call strong evidence is a shooter having announced that he or she was part of a terrorist cell in the US.
The only way to read this in light of our previous discussion is that saying "I'm in ISIS!" and then shooting up a bunch of civilians is insufficient to prompt a post-mortem search of the attackers' affairs, instead they need to say "I'm in ISIS and there are a bunch of us!" and then shoot a bunch of civilians.
> You keep switching between legal and normative requirements.
If I did so, it was a mistake. My reference to the 4th Amendment, for instance, should have said "how the 4th Amendment ought to protect us". I did not mean to imply that I am trying to predict what warrants the justice system will or will not grant.
> You have a fringe understanding of it
I think I mentioned the 4th amendment just the once. I have been trying to stick to normative arguments.
> The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.
I think this is a point where we truly disagree. I think a system can function in which some evidence that a search will yield results is required before the search is conducted. I do not think that the evidence must be airtight. Note that I am speaking about what I think is possible and just and right, not what the law says now or the justice system does now.
> The only way to read this in light of our previous discussion is that saying "I'm in ISIS!" and then shooting up a bunch of civilians is insufficient to prompt a post-mortem search of the attackers' affairs
Did the shooter say she was "in ISIS", or that she pledged allegiance to the leader? There might be a difference in this case. I have read that there is religious significance to a pledge of allegiance in ISIS's theology that might make a pledge indicative of ideological alignment and a membership "in ISIS" indicative of being in actual conversations with ISIS.
> Bravo sir, I have been well and properly trolled.
> Did the shooter say she was "in ISIS", or that she pledged allegiance to the leader?
Either one would seem to constitute probable cause for an association. Of course we don't know if she was actually in ISIS, or just agreed with their beliefs. But how would we know without conducting further investigation? You seem to be demanding a somewhat unreasonably large burden of proof, when all that is needed in this case is probable cause. Frankly, even if she hadn't verbally declared allegiance to ISIS, I don't think it's a stretch to say there's probable cause for connection to other terrorist groups. The fact that she did say that makes it a slam dunk.
> I think a system can function in which some evidence that a search will yield results is required before the search is conducted. I do not think that the evidence must be airtight.
We do have such a system. The evidence you're describing is called probable cause, and that's the whole point. I'm not sure of any reasonable definition of probable cause that this situation wouldn't satisfy. Moreover, your objections seem to be in the form of vague misgivings rather than concrete arguments. You haven't precisely described what would constitute sufficient evidence for an investigation, but instead seem to just be saying "there's not enough right now." I think this is what's behind GPs frustrations responding to your posts.
> The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.
That kind of reasoning allows wholesale collection of communications data by the NSA and other agencies. Since that practice has been widely criticized, there must be something missing from your argument.
No one is advocating warrantless searches or not requiring reasons for warrants.
If I want to get a warrant to see who you're calling, it is inherently a broken system that requires the list of people that you called as cause to obtain that warrant.
Any kind of reasoning allows wholesale collection of communications if you misread it properly.
Ah, so you do agree with the premise that there must be compelling evidence to warrant a search?
In that case, all you(pl.)'re haggling over is the "price point" of how much evidence is required to support how invasive a search. I'm unsure how that results in the kind of heated debate that seems to happen here.
...pretty clearly demonstrates that reading your communications has a pretty high likelihood of turning up something useful in preventing future incidents.
It sounds like common sense, I guess, but has that ever worked, actually?
Similar "prevention" rationale is offered for governments to spy on virtually all telecom all the time, now. But this shooting happened anyway.
> Similar "prevention" rationale is offered for governments to spy on virtually all telecom all the time, now. But this shooting happened anyway.
1) Anyone with a plan promised to stop all terrorist attacks is lying to you, stupid, or both. You can't have a free society and a 0% chance of political violence.
2) Yes, searching the possessions and communications of dead terrorists is, unsurprisingly, substantially more likely to lead to useful criminal leads than reading your metadata. A warrant to read this person's stuff isn't unreasonable in the slightest; an order forcing Apple to do shit might be, but that's a procedural thing unrelated to the core issue of "is there a good reason to read this person's stuff?"
> reading your communications has a pretty high likelihood of turning up something useful in preventing future incidents
Would you agree that "high likelihood" is too low a bar for justifying searching the phones of people who live in high-crime neighborhoods?
> If this doesn't clear your hurdle for reasonable search then what would?
Evidence of a conspiracy would help. You said they declared allegiance to an organized group. When they did that, did they say or hint that they had been in contact with that group, other than, say, watching public YouTube videos?
I'm having a hard time believing that you're commenting in good faith here. Yes, the police will easily get warrants to search whatever property of a mass murderer's they feel would be productive to search. No, that does not mean they can randomly get warrants to search random houses in high-crime neighborhoods. Privacy rights for mass murderers: not a high priority of US constitutional law.
Is there some other issue we're missing here, or does that pretty much wrap it up?
> I'm having a hard time believing that you're commenting in good faith here.
You can feel free to disengage from this conversation if you find it troubling. If you are incredulous that someone might be concerned with the privacy of these people (and their friends and family) in the particular way I am, then I'm not sure what I can do to make you believe.
I am a person. These are my true thoughts. I actually and honestly believe them.
> Yes, the police will easily get warrants to search whatever property of a mass murderer's they feel would be productive to search.
As I said earlier to you in another branch of this discussion, I am not disagreeing that the police CAN get this warrant. They appear to HAVE gotten this warrant, so I guess that's a historical fact. I'm trying to have a discussion about what we think is just and fair and right, as well as trying to find out if there is any evidence of a conspiracy. I have not followed the news on this shooting, so I would not be shocked if the answer were "yes, there is some evidence of a conspiracy". This is why I asked the question, "Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?"
> No, that does not mean they can randomly get warrants to search random houses in high-crime neighborhoods.
I did not ask if police CAN randomly get warrants to search random houses in high-crime neighborhoods, I asked the commenter if he or she thought they ought to be able to do so. If the answer is "yes, they ought to", then we might have a different discussion than if the answer is "no, they should not". I have met people who would answer "yes" and I have met people who would answer "no". Neither answer will cause me to accuse the commenter of commenting in bad faith.
> Privacy rights for mass murderers: not a high priority of US constitutional law.
I'm not talking about what is and is not a high priority for the justice system. I'm trying to engage in a dialogue about what we think the requirements for a warrant SHOULD OR SHOULD NOT BE and whether or not there is any evidence that the phone will provide information that will help solve or prevent crimes.
> Is there some other issue we're missing here, or does that pretty much wrap it up?
I'm sorry if this conversation is upsetting or troubling to you.
..... I currently have a lot of time on my hands, so sure, I'll bite....
> I did not ask if police CAN randomly get warrants to search random houses in high-crime neighborhoods, I asked the commenter if he or she thought they ought to be able to do so
Given that the answer to CAN they is a solid no, and that random searches of homes are in no way related to searching devices used in a conspiracy to commit murder, what is the point of this? In one instance, someone has clearly committed a conspiratorial crime; in the other, people are living in houses with low property value.
> if there is any evidence of a conspiracy
Conspiracy - a secret plan by a group to do something unlawful or harmful.
Point 1: A conspiracy took place. A plan to kill people was kept secret between multiple people until it was executed.
Point 2: Immediately prior to commission of the murders, one of the participants declared that they were part of a larger group, known for organized commission of murder and terrorist attacks.
Given these points, what information is missing that would motivate you to think that a search of the attackers' phones should be conducted? Are you really asserting that there is no evidence of conspiracy that extends beyond the deceased, despite the fact that they said they were doing this under the flag of a larger organization?
I don't see any room for a normative argument defending against a search. I don't imagine that you're arguing that the post-mortem privacy interests of the terrorists prohibit investigation. Are you suggesting that the risk from not knowing the contents of the phone is so low that it fails to outweigh the privacy interests of anyone incidentally mentioned on the device?
Sorry for all the questions, what I'm trying to get at is that from a normative perspective, societies generally allow investigators to search the shit of known participants of violent criminal conspiracies in order to detect previously unknown elements or plans of those conspiracies. What is the moral base from which you are arguing that this nearly universally accepted standard is somehow deficient?
> Given that the answer to CAN they is a solid no, and that random searches of homes is in no way related to searching devices used in a conspiracy to commit murder, what is the point of this? In one instance, someone has clearly committed a conspiratorial crime, in another instance, people are living in houses with low property value.
You said that the state should be able to search the phone because it was likely to have evidence of crimes. I am arguing that higher than normal likelihood, as you might expect in a high-crime neighborhood, is not sufficient to justify a search. Instead, I am arguing that evidence (indicative of finding things that will help solve or prevent crimes), not likelihood of finding such things, should be the standard for a warrant.
> A conspiracy took place.
I should have said "a conspiracy beyond the two dead perpetrators".
> one of the participants declared that they were part of a larger group
Did she? I thought she said she "pledged allegiance" to a larger group, like one might do to a Pope you have never met or spoken with.
> what information is missing that would motivate you to think that a search of the attackers' phones should be conducted?
> Are you really asserting that there is no evidence of conspiracy that extends beyond the deceased
No, I am /asking/ if there is any such evidence.
> I don't imagine that you're arguing that the post-mortem privacy interests of the terrorists prohibit investigation.
No, I don't think it prohibits investigation, but I do think that state searches of their personal effects ought to require evidence that searching their personal effects would solve old crimes or prevent new ones.
> Are you suggesting that the risk from not knowing the contents of the phone is so low that it fails to outweigh the privacy interests of anyone incidentally mentioned on the device?
I am suggesting that those privacy interests can be balanced against evidence that searching the phone would solve old crimes or prevent new ones. I do not believe that risk is the only question. That is what I was trying to get at with my distinction between "high likelihood of" and "evidence of", above.
> What is the moral base from which you are arguing that this nearly universally accepted standard is somehow deficient?
The reason I think that evidence of solving (or helping to solve) old crimes or preventing new ones should be required before searching the possessions of any person, living or dead, murderer or pacifist, is a traditional one about privacy, but it seems like the balance I use is different than your balance.
That "societies generally allow" the state to do something, or that societies "nearly universally" do so, is not a big factor in my feelings on whether or not it is fair and just.
> they may have talked to other people planning attacks.
I'm not particularly concerned with crimes that we believe "may have" occurred. Of course, they may have. Anything may have happened -- I'm asking for more than just correlation that criminals know criminals. Do we have any evidence, or even any hints or clues, that the phone contains evidence that would help solve or prevent any crimes?
I'm sorry, but I don't understand what you're getting at here. The legitimate concern is prevention of future attacks. They may have collaborated with people who were never apprehended on the attack that actually happened.
> They may have collaborated with people who were never apprehended on the attack that actually happened.
Right, but we don't go searching everyone's papers just in case they are conspirators.
If Alice punches Bob in the face, then is hit by a bus and dies, we don't go searching through all of Alice's stuff just in case there might have been someone else involved with the Bob-punching incident, right?
Is there any evidence, any at all, that the shooters collaborated with anyone?
> The legitimate concern is prevention of future attacks.
I'm not questioning the "legitimate concern", but I don't think "legitimate concern" should be sufficient to get a warrant.
> If Alice punches Bob in the face, then is hit by a bus and dies, we don't go searching through all of Alice's stuff just in case there might have been someone else involved with the Bob-punching incident, right?
Wrong. If Alice announces that she's looking for people to attack, then of course we go looking to see why she's doing that and whether there are others involved.
This wasn't some random emotional attack like a bar fight. Stop setting up nonsensical strawmen for your arguments, argue the hard cases not the easy ones.
If your argument/objection can survive the hard cases then you have something, arguing the easy cases is meaningless.
> Wrong. If Alice announces that she's looking for people to attack, then of course we go looking to see why she's doing that and whether there are others involved.
What, in your opinion, are the limits of that investigation?
So
1. Alice announces she's looking to attack someone.
2. She attacks Bob.
3. She dies.
I gather that, in your opinion, we can search her possessions. Let's say she has a living mother and a best friend who died a week before the attack. Can we search her mother's things? How about her best friend's?
You look at the scenario at hand and see if it makes sense to search this person or that.
I said it to someone else today, I'll tell you too: The law is not like a mathematical proof with exact rules. People with STEM backgrounds tend to think of the law that way because it seems to be all about rules. But it's not. It's about human judgment and gut feelings.
Sorry, meant to say "ought", not "is". I'm mostly interested in what you think is appropriate, not what the justice system would do, today, with the laws and courts as they are.
What should be sufficient to get a warrant? It seems to me that making warrants harder to get (or impossible) would actually be bad for society at large. In the case of a warrant you have a targeted tool, one aimed at a specific person or group, and a demonstrated need approved by a third party that is not law enforcement.
The alternative is dragnetting. Or not having investigative tools. I don't know that I like either option as a citizen.
Exactly how do you know that the case is now "closed"? It sounds like you're just making an assumption. The state is too, but the cost of their assumption --- that there is valuable data to gather from the phone --- is very low, and the cost of your assumption, if you're wrong, is immense.
I didn't feel the need to elaborate because my parent thought one word was sufficient, but let me state the full context to my thought process:
> > > Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?
> > 14 dead people and a stack of unused guns and bombs.
> two dead attackers, stack confiscated. case closed.
If the "evidence" for future crimes is merely some dead criminals and the ammunition they didn't get to use, there is no probable cause for further investigation. Hence, the case should be closed.
Apple will not do that. It's a slippery slope argument - yes, these shooters were very bad, but if Apple sets a precedent of building a backdoor for this crime, why can't other courts compel them to use the same technological means for lesser crimes?
> * There's no likelihood at all that other users will be impacted by the backdoor
That could not be further from the truth. They are trying to set a precedent that endangers the future of consumer end-to-end encryption.
They are trying to repurpose an 18th-century law (the All Writs Act) to force Apple to help them break iPhone encryption.
If this case creates precedent, what is to stop them from, say, forcing Signal and Google to work together to deliver a backdoored app update to a specific user?
The DOJ can make this demand because Apple phones of this vintage are already breakable, and the DOJ is merely asking Apple to exercise a capability it already has.
Apple knows this, better than most, and has for many years. They have done security design work with governments as an explicit adversary. The 5C was insecure. The 5S is not: it has an entire additional processor, running an incredibly secure OS, whose entire job is to make sure that the phone keeps promises like these even if the DOJ orders Apple to sign bogus updates.
Apple is so committed to this that they've extended the "ten tries and you're out" promise all the way through their server infrastructure, so that if you escrow data into iCloud it will be nuked if someone tries to brute force it. Not only that, but after rigging their HSM cluster to operate that way, they burned the update keys, so that an attempt to change that rule will break all of iCloud.
> There's no likelihood at all that other users will be impacted by the backdoor
The backdoor, a "master key" as Tim Cook put it, that opens all pre-5S iPhones affects millions of other users.
And that's not even the main issue. The issue is the precedent it sets, which endangers a lot more than just a few million users or a few specific models of iPhone.
It's much more than that. If they do it "just this once", that means they can allow others to get to encrypted data, despite what they've been claiming all along. It's a very binary situation.
So if I get this right, they want to (1) disable the wipe-after-x-retries feature (therefore enabling unlimited retries) and (2) enable passcode attempts to be submitted via a connector, Wi-Fi, or Bluetooth (therefore enabling a brute-force approach). What good is an encrypted filesystem in that scenario?
Plenty of good if you have a reasonable passphrase and the vendor hasn't been compelled to assist.
"Can only try 10 times" isn't anything guaranteed by encryption. My laptop has an encrypted partition, but an attacker can brute-force it at will. Even if I had software to say "only let it happen 10 times, then erase the partition" the whole drive could just be cloned. That's why I have a 20+ character passphrase.
Apple goes way out of their way to avoid scenarios where they can be compelled to subvert iOS security. For instance, see pg44+ of the iOS security white paper:
... the HSMs that manage the escrow scheme for credentials stored in iCloud are themselves rigged to blow up on 10 failed tries, and, not only that, but the code that implements that process is burned into the HSMs and the keys Apple would need to change that logic have been destroyed.
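For readers trying to picture the mechanism, here is a toy sketch of a try-limited escrow record, purely to illustrate the "ten tries and the key is gone" rule described above. This is an assumption-laden simplification in Python, not Apple's HSM firmware; the names and the limit constant are made up for illustration.

    class EscrowRecord:
        """Toy model of try-limited escrow: the secret self-destructs after
        too many bad guesses, and the limit itself has no update path."""
        MAX_TRIES = 10  # illustrative; in the real system this rule is burned in

        def __init__(self, secret, expected_code):
            self._secret = secret
            self._expected = expected_code
            self._failures = 0

        def attempt(self, code):
            if self._secret is None:
                return None                  # key already destroyed
            if code == self._expected:
                return self._secret          # correct code releases the escrow
            self._failures += 1
            if self._failures >= self.MAX_TRIES:
                self._secret = None          # destroy the escrowed key for good
            return None

The interesting design choice in the real system, as described above, is that the counter logic lives in hardware whose update keys were destroyed, so not even the vendor can raise MAX_TRIES after the fact.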
Thank you for the link. I was unaware how seriously Apple takes their security. Self-destructing HSMs to avoid brute-forcing is extremely impressive. THIS is a model of how to implement proper key escrow.
"Apple's reasonable technical assistance shall accomplish the following three important functions:
(1) it will bypass or disable the auto-erase function whether or not it has been enabled;
(2) it will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available on the SUBJECT DEVICE and
(3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.
Apple's reasonable technical assistance may include, but is not limited to: providing the FBI with a signed iPhone Software file, recovery bundle, or other Software Image File ("SIF") that can be loaded onto the SUBJECT DEVICE.
The SIF will load and run from Random Access Memory and will not modify the iOS on the actual phone, the user data partition or system partition on the device's flash memory. The SIF will be coded by Apple with a unique identifier of the phone so that the SIF would only load and execute on the SUBJECT DEVICE.
The SIF will be loaded via Device Firmware Upgrade ("DFU") mode, recovery mode, or other applicable mode available to the FBI. Once active on the SUBJECT DEVICE, the SIF will accomplish the three functions specified in paragraph 2. The SIF will be loaded on the SUBJECT DEVICE at either a government facility, or alternatively, at an Apple facility; if the latter, Apple shall provide the government with remote access to the SUBJECT DEVICE through a computer allowing the government to conduct passcode recovery analysis.
If Apple determines that it can achieve the three functions stated above in paragraph 2, as well as the functionality set forth in paragraph 3, using an alternate technological means from that recommended by the government, and the government concurs, Apple may comply with this Order in that way."
The implications are quite important for future technologies. Neural implants, for example. Neural implants are currently used for prosthetics and paralysis. A forced backdoor would kill all research to develop a co-processor directly linked to the brain. Who would want a government backdoor directly to the brain?
If Apple is capable of compromising security on its devices (by using its root key to sign a custom version of iOS, or through some other method), then I see no way that they will avoid eventually being subject to a court order in some jurisdiction that compels this action. If that's true, then device security is already compromised and Apple knows this. Let's say the facts of the case were slightly different, that the FBI "knows" a terrorist attack is about to occur, and Jack Bauer-style demands that Apple assist in compromising a specific device that has the top seekrit plans on it. In that instance, do you think Apple would comply with a warrantless request for cooperation? Hm...
Reading Tim Cook's announcement in light of this thought experiment, methinks he doth protest too much! Apple does not have any objection to compromising user security at the root level, and in fact has already done so by creating a device that has some limited vulnerability to malicious action by the manufacturer signed with its root key. (By the way, no doubt every other manufacturer has done worse, so this is not to deprecate Apple vs. any other big company.)
I would speculate that Tim Cook's goals with this announcement are largely PR-based, and that the goal of Apple's legal strategy is not to avoid cooperation but rather to retain the ability to decide whether to cooperate, and/or to impose a higher perceived cost on the government for such requests. No doubt Apple is correct to say that once a precedent is established, then it will be widely used by law enforcement even in routine cases.
At the end of the day, I am not optimistic that we can avoid a world in which large device manufacturers are compelled (legally and practically) to build security flaws into their devices. Perhaps not the flaw of a back-doored crypto implementation, but other flaws such as those that have been identified in current iOS devices that allow the government (with commitment of sufficient resources) to chip away at some of the more superficial protections.
This seems like a key thing to convey to the courts. One piece of encrypted data is supplied, rather than another. Who has to explain to the courts the complexity of breaking one versus the other?
It is used to create the crypto key via a password-based key derivation function: the user's password is fed into the PBKDF, and the output is the key used for encryption/decryption.
The user's device key is mixed into that PBKDF. Without both parts of the equation, you have nothing.
Yeah I've got the same question. Is there some hardware safeguard that prevents copying the memory itself? You'd think the first rule of crypto forensics is to work on a copy.
The user's password goes through a password-based key derivation function; that function spits out a key that is used for the AES crypto protecting the file system.
Now, the PBKDF requires a secret that is only stored within the iPhone itself (within the CPU even, where it can't be read out directly). So if we instead grab a copy of the data, all we get is an AES encrypted file system.
We have 2 choices. 1. We can attack AES directly and attempt to brute force the AES key, or 2. We can attempt to get the AES key by brute forcing the user's password.
2 is infinitely simpler compared to 1. However, 2 becomes more difficult (but still less so than just cracking AES itself) when you need to run your brute force guesses through the CPU within the iPhone.
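A minimal sketch of the idea being described, assuming PBKDF2 and an HMAC mix-in purely for illustration (the actual algorithm, iteration count, and the way the device secret is entangled are not things this sketch claims to know):

    import hashlib
    import hmac

    def derive_fs_key(passcode: bytes, device_uid: bytes, salt: bytes,
                      iterations: int = 100_000) -> bytes:
        # Stretch the (weak) passcode so that every guess is expensive.
        stretched = hashlib.pbkdf2_hmac("sha256", passcode, salt, iterations)
        # Entangle the result with a per-device secret that never leaves the
        # hardware; an offline copy of the flash alone is useless without it.
        return hmac.new(device_uid, stretched, hashlib.sha256).digest()

Without the device secret, guessing passcodes against a dumped flash image yields nothing, which is why option 2 still has to run on the phone itself.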
This is exactly the threat model this was designed to defeat, so I would imagine not. At the very least, if there is a way to do it Apple is probably unaware. If they are aware, they'll probably open themselves to a class action suit, as its one of the core features that they advertised.
Yeah. This may end up costing a huge amount with no results. Apple (and its employees) doesn't have much incentive to crack the device, although someone might be curious and use this as a good time to look for security holes on the government's dime.
"""To the extent that Apple believes that compliance with this Order would be unreasonably burdensome, it may make an application to this Court for relief within five business days of receipt of the Order."""
If what Apple's security guides claim is true, "unreasonably burdensome" should be an easy standard to meet on practical technical feasibility grounds. The issue is whether they'll want to challenge this on non-technical grounds.
If I were the NSA, I certainly wouldn't be exposing the fact that I could crack AES for the sake of a trivial (to the NSA) crime like this.
What I would do, however, is engineer a situation where the FBI could believably brute force the password, then let them have at it for a few days until I supply them with the actual password. Then the FBI tells the public that the passphrase on the phone was actually something trivially guessable like 000000.
The PIN is an input to the HSM, where the key actually lives. HSMs are designed from the silicon up to resist inspection or modification of internal state. There are certainly no baked-in debugging/JTAG interfaces, and the hardware is designed to blow away the key if the chip is under physical or logical attack.
So, Apple says that "the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation."
If it's possible to make such a "backdoored" build of iOS, then there are state actors who will be throwing $Millions at doing it already, with or without any willing help from Apple.
I thought this was an excellent write-up regarding how the iOS security platform (recent iPhone models) works from someone obviously in the know, as posted in the forums of Apple Insider. (Source: http://forums.appleinsider.com/discussion/191851)
" Apple uses a dedicated chip to store and process the encryption. They call this the Secure Enclave. The secure enclave stores a full 256-bit AES encryption key.
Within the secure enclave itself, you have the device's Unique ID (UID). The only place this information is stored is within the secure enclave. It can't be queried or accessed from any other part of the device or OS. Within the phone's processor you also have the device's Group ID (GID). Both of these numbers combine to create 1/2 of the encryption key. These are numbers that are burned into the silicon, aren't accessible outside of the chips themselves, and aren't recorded anywhere once they are burned into the silicon. Apple doesn't keep records of these numbers. Since these two different pieces of hardware combine together to make 1/2 of the encryption key, you can't separate the secure enclave from its paired processor.
The second half of the encryption key is generated using a random number generator chip. It creates entropy using the various sensors on the iPhone itself during boot (microphone, accelerometer, camera, etc.) This part of the key is stored within the Secure Enclave as well, where it resides and doesn't leave. This storage is tamper resistant and can't be accessed outside of the encryption system. Even if the UID and GID components of the encryption key are compromised on Apple's end, it still wouldn't be possible to decrypt an iPhone since that's only 1/2 of the key.
The secure enclave is part of an overall hardware based encryption system that completely encrypts all of the user storage. It will only decrypt content if provided with the unlock code. The unlock code itself is entangled with the device's UDID so that all attempts to decrypt the storage must be done on the device itself. You must have all 3 pieces present: The specific secure enclave, the specific processor of the iphone, and the flash memory that you are trying to decrypt. Basically, you can't pull the device apart to attack an individual piece of the encryption or get around parts of the encryption storage process. You can't run the decryption or brute forcing of the unlock code in an emulator. It requires that the actual hardware components are present and can only be done on the specific device itself.
The secure enclave also has hardware enforced time-delays and key-destruction. You can set the phone to wipe the encryption key (and all the data contained on the phone) after 10 failed attempts. If you have the data-wipe turned on, then the secure enclave will nuke the key that it stores after 10 failed attempts, effectively erasing all the data on the device. Whether the device-wipe feature is turned on or not, the secure enclave still has a hardware-enforced delay between attempts at entering the code: Attempts 1-4 have no delay, Attempt 5 has a delay of 1 minute. Attempt 6 has a delay of 5 minutes. Attempts 7 and 8 have a delay of 15 minutes. And attempts 9 or more have a delay of 1 hour. This delay is enforced by the secure enclave and can not be bypassed, even if you completely replace the operating system of the phone itself. If you have a 6-digit pin code, it will take, on average, nearly 6 years to brute-force the code. 4-digit pin will take almost a year. if you have an alpha-numeric password the amount of time required could extend beyond the heat-death of the universe. Key destruction is turned on by default.
Even if you pull the flash storage out of the device, image it, and attempt to get around key destruction that way, it won't be successful. The key isn't stored in the flash itself; it's only stored within the secure enclave, whose storage you can't remove or image.
Each boot, the secure enclave creates its own temporary encryption key, based on its own UID and a random number generator with proper entropy, that it uses to store the full device encryption key in RAM. Since the encryption key is stored in RAM encrypted, it can't simply be read out of the system memory by reading the RAM bus.
The only way I can possibly see to potentially unlock the phone without the unlock code is to use an electron microscope to read the encryption key from the secure enclave's own storage. This would take considerable time and expense (likely millions of dollars and several months) to accomplish. This also assumes that the secure enclave chip itself isn't built to be resistant to this kind of attack. The chip could be physically designed such that the very act of exposing the silicon to read it with an electron microscope could itself be destructive."
Someone pointed out that the device in question doesn't have all of the features described above as it is an iPhone 5c. My apologies. I'll leave my comment as some may still find it interesting. Relevant-ish, perhaps.
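For a feel for the numbers in the quoted delay schedule, here is a back-of-the-envelope worst case, taking the quoted one-hour-per-attempt delay at face value and ignoring the faster early attempts. This is just arithmetic on the figures above, not a claim about any particular device or delay schedule.

    def worst_case_hours(pin_digits, delay_hours=1.0):
        # Exhausting the whole PIN space at one attempt per delay period.
        return (10 ** pin_digits) * delay_hours

    print(worst_case_hours(4) / 24)          # ~417 days for a 4-digit PIN
    print(worst_case_hours(6) / (24 * 365))  # ~114 years for a 6-digit PIN

The totals obviously shift with whichever delay schedule you assume, which is presumably why different write-ups quote different figures for the same hardware.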
> The UIDs are unique to each device and are not recorded by Apple or any of its suppliers.
> The GIDs are common to all processors in a class of devices (for example, all devices
> using the Apple A8 processor), and are used for non security-critical tasks such as when
> delivering system software during installation and restore.
The 'not recorded' explicitly refers only to UID, not GID. This means that in theory the GID is accessible and knowable to/by Apple. With this information, it should be possible to use a different processor in conjunction with the secure enclave that spoofs the correct GID.
Correct me if I'm wrong, but isn't this sufficient to bypass the time-delay and thereby unlock the phone?
There are probably Android devices which make use of ARM's TrustZone [1]. Apple's Secure Enclave is a bit more thorough, though, because it actually uses a physically separate co-processor running a custom L4-based microkernel with a secure boot process. It is hardware isolated from the rest of the system, and uses a secure mailbox and hardware interrupts to communicate. Whereas ARM TrustZone appears to be implementable entirely on a single CPU.
There is a back door already, called iCloud Backups. If it's turned on, all the user data is sent to a remote server and can be restored to a different phone. So that data is obviously not encrypted using the same highly secure hardware-based encryption.
The problem becomes harder. The (comparatively weak) passcode gets run through a large number of hashing rounds, which takes about five seconds of real time to complete. The result of that hash is the actual key to a very strong algorithm.
It makes more sense to try brute forcing the hash, even with the delay time.
>>A magistrate judge, an Apple employee, and an FBI agent agree to meet at a local bar. Only the Apple employee makes it. Why? Because the bar didn't have a back door.
The phone in question was not owned by the shooter, but by his employer, who has consented to the search. This seems like a poor basis to contest protecting someone's privacy.
Big problem. People will generally stop using phones of which they suspect that they are back-doored. At the same time, it would be a hopeless endeavour for law enforcement to get a swarm of (Chinese or other Asian) companies to help with unlocking their phones. They would literally not even answer the phone. Therefore, this may very well spell the end of highly centralized, Apple-style companies that can be effectively pressured and browbeaten into "compliance".
Fortunately for the future, this kind of attack can be thwarted through key stretching (making each attempt intrinsically long to perform, by making it computationally expensive).
I expect to see an optional, configurable key stretching setup in future phones, for those whose privacy is worth a couple of seconds' delay when unlocking their phones.
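As a rough sketch of what "a couple of seconds' delay" could look like in practice, here is one way to pick a PBKDF2 iteration count so that a single unlock attempt costs a target amount of time on the device's own hardware. The algorithm, parameters, and target time are illustrative assumptions, not a description of any shipping phone.

    import hashlib
    import os
    import time

    def calibrate_iterations(target_seconds=2.0, probe=100_000):
        # Time a probe run, then scale the iteration count to hit the target.
        salt = os.urandom(16)
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"probe-passcode", salt, probe)
        per_iteration = (time.perf_counter() - start) / probe
        return int(target_seconds / per_iteration)

    print(calibrate_iterations())  # every guess now costs roughly 2 seconds

The point of stretching is that the legitimate user pays the cost once per unlock, while an attacker pays it once per guess.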
Nobody would have to go to jail. They'd hold Apple in contempt and charge them a non-trivial sum of money for every day they refuse. If they continue refusing, the amount increases exponentially until the company is threatened with bankruptcy.
In short: they can't refuse without a justified explanation of why they are unable to comply.
In which case the best way to proceed would be to say that they will try to do it and after few months of investigation say the data became corrupted before they even got the phone. Who's going to verify it? No one.
The headlines keep changing: is it crypto, is it auto-erase, is it a purple unicorn? Sheesh, let's muddy up the actual issue with incomplete or inconsistent data. It's the American way!
It's at times like these they're surely knocking on the door of every company whose R&D in quantum computing, information theory, and algorithms they've been funding for at least the past two decades. "So, is it ready yet?"
I'm assuming you are referring to quantum computing for its computational speed? That wouldn't make a difference here. They have only X amount of tries before the phone locks them out. It is the number of tries that is the issue here.
No, what I am assuming is the company will be compelled to provide the data on the phone without the potential for lock-out or erasure during brute-force. Then, in all likelihood, state-of-the-art methods in brute-forcing AES (with the best theoretical speed-up up to and including quadratic due to Grover's algorithm on a quantum computer, or some unknown state-of-the-art slower than that on a classical computer) will be employed until the data is ultimately decrypted.
It's not really a single operation though. How would that quantum computer test against an iPhone when each test is a mark against the 10 max test count? An iPhone is still based in standard silicon using bits; you can't exactly pop a process that uses qubits into it and expect it to work.
Nope. For general search operations (modeling your 128-bit cipher as a black box) the best we know how to do is Grover's algorithm, which gives you a quadratic speed up. Your 128 bit problem is now a 64 bit problem (which is of course still quite good).
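Back-of-the-envelope for that claim: Grover's algorithm needs on the order of the square root of the keyspace in oracle queries, so a 128-bit key costs roughly 2^64 quantum queries instead of up to 2^128 classical guesses. A tiny illustration (the pi/4 constant is the textbook query count; the enormous real-world cost per quantum query is not modeled here):

    import math

    def grover_queries(key_bits):
        # Optimal Grover search: about (pi/4) * sqrt(N) oracle queries.
        return (math.pi / 4) * math.sqrt(2 ** key_bits)

    print(f"{grover_queries(128):.2e}")  # ~1.4e19 queries vs ~3.4e38 classical worst case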
It bothers me that Tim Cook lied: he stated in his open letter that if they provided the modified OS it could be used on other phones, but the court order specifically says Apple should make the software only work on the specific phone in question.
The 5th amendment protects evidence inside the brain of the accused. As devices become more and more an extension of the brain, the more I think we'll need to adjust the rule of the 5th amendment to cover things outside of the brain.
Think about what we want as a society though. If you did commit a crime, we want to convict you. The reason the 5th protects you from compelled self-incrimination is to prevent unjust interrogation and investigation techniques from the cops[0], not to make the overall likelihood of conviction lower, although it incidentally has this effect.
From a policy standpoint the ideal world would be one where all criminals are convicted and no innocents are convicted. In our system we try [see 0 again] to err on the side of letting criminals go rather than convicting innocents.
One of the interesting things about the new normal of everything being recorded always and everywhere is how this might change our laws. Laws are supposed to accomplish something, speeding law is supposed to improve road safety for example. If automatic enforcement instead imposes massive taxes on everyone while not improving road safety, society (should) modify the law since it didn't get them to where they wanted to go.
Sorry, long-winded way of saying: our electronic artifacts aren't really all that different than letters. If I wrote an email to my friend demonstrating premeditation in murdering your puppy, you and everyone else have an interest in that email being valid evidence. That's a wholly separate discussion from the encryption debate. I definitely think you're right that we need to think about it, but I just wanted to play devil's advocate here for why we might want to think twice.
[0] Yes, shit still happens, but the system is SUPPOSED to limit these abuses.
>If you did commit a crime, we want to convict you.
There are too many bad laws out there. If I meet a bad guy, I can defend myself. If I cross the government, I'm doomed. I'd much rather support things that increase my likelihood of meeting a bad guy I can defend myself against while reducing my risk of crossing the government and being doomed without recourse. This is the whole reason I justify that it is better to let 99 guilty go free than to jail 1 innocent (assuming the government doesn't also take away my ability to defend myself, which it does seem intent on doing).
It is unjust to jail Rick for jay walking and ignore that Morty jay walks. A system of partial enforcement is not intrinsically more just because we haven't jailed Morty. It's actually unjust because if Morty is free to jay walk then how can we justify locking up Rick if jay walking is the crime? We probably just like Rick less.
Partial enforcement is a practical reality because we're willing to accept the injustice of partial enforcement rather than live in an Orwellian police state (a decision I'm thrilled with!).
> There are too many bad laws out there.
That's the point! If bad laws stay on the books and aren't generally enforced then they can be enforced capriciously as punitive weapons by government officials. If bad laws are fully enforced, they won't stay on the books for very long.
This is why it is in the best interest of a society to have as high an enforcement rate as possible before increasing false positives. It discourages bad laws from existing in the first place and surviving in the second.
>Partial enforcement is a practical reality because we're willing to accept the injustice of partial enforcement rather than live in an Orwellian police state (a decision I'm thrilled with!).
False dichotomy. If all laws are fully enforced, bad laws will soon be done away with by popular demand. If the government won't do so willingly, then they will be forced to do so by the people.
Can you clarify? I just said that bad laws would go away with full enforcement, which is the core of why it is good for a society to enforce its laws evenly and thoroughly.
I don't think there's any false dichotomy produced by simultaneously noting that on a practical level you will never achieve literally 100% enforcement of laws without some serious damage to civil liberties which is why we err on the side of partial enforcement as the lesser of two evils.
Edit: Is the issue the use of the phrase "accept the injustice of partial enforcement"? I was trying to communicate that there is some injustice in not fully applying the law (the fact that some murderers go untried is unjust, for instance). However, we're willing to accept that because getting literally 100% enforcement would require things we don't want. When I say full enforcement as something to aspire to, I'm not suggesting literally 100%; I'm referring to the way we roughly fully enforce murder laws and do not fully enforce drug laws. One of those crimes (more or less) gets treated as a crime regardless of who commits it, the other does not. I don't think we'd tolerate drug laws as they are if they were enforced with the same degree of universality as murder laws.
>I don't think there's any false dichotomy produced by simultaneously noting that on a practical level you will never achieve literally 100% enforcement of laws without some serious damage to civil liberties which is why we err on the side of partial enforcement as the lesser of two evils.
My point being there is the third option that the laws will be removed. And it seems like in part of your post you agree with this, but in part of your post you act like this isn't an option. I'm kinda confused in that regard.
>If you did commit a crime, we want to convict you.
That's a massive assumption in an age when there is a law, somewhere, that makes you and me criminals. It's a small ask that laws which are impractical to enforce remain impractical to enforce.
Email is communication between two people, which is a completely different matter from a person writing in their iOS Notes "Feb 17, I've been fantasising about eating BWStearns's dog. He looks yum. Should I or should I not?".
And if your dog really got murdered, that note will implicate whoever wrote it, even if they didn't do it! Thinking about performing a criminal act is not a crime. Conspiring to do it is.
But let's say that whether I/you killed your/my dog is not in question. The killer's notes on dog killing are relevant to the case. Your iOS notes are no closer to being part of your brain than a dead-tree journal is. Other than (as someone else brought up) a hypothetical brain-scanning technology that claims to reveal thoughts[0], I can't think of a category of digital evidence that is so substantively different from analogue predecessors as to warrant folding the whole category into some protected class of information.
[0] I do think that precedent regarding the inadmissibility of the polygraph will probably protect us on that front for at least a while after someone pulls it off.
Oh yeah, sorry. Distinguishing between premeditation or not in order to charge and sentence properly. Without premeditation there may be no crime at all. I did not spell that out well and only touched directly on it in the first post. The point is that it's evidence and not really different than if I wrote it in my dead-tree diary. I'm hobbled a bit here because I'm trying to make the very unsexy point that most electronic things are not unique because they're electronic (kinda like how "doing it on a computer" shouldn't be sufficiently novel to justify patent protection). The extent to which "devices become more and more an extension of the brain" is at the moment not far enough to justify unique treatment of your iOS notes.
Wouldn't that also apply to, say, a small paper notebook that the accused carries with him/her? Surely there's been plenty of people who used such paper notebooks as an "extension of the brain"...
Regardless, this wouldn't apply in this case. The device wasn't the shooter's to begin with. He didn't own it. His employer did. They've already given permission to access the device.
Let's go off the deep end, shall we, and speculate? Consciousness (the process that manages your evidence) is already encrypted. It's encrypted by quantum events occurring in the brain. The going conjecture is that microtubules in brain cells provide a latticework to maintain and force collapse of quantum states which somehow drive neuron activity. Regardless of whether you believe that, it is now known that brains store information in DNA and that DNA processes appear to be governed by quantum events. It's a reasonably palatable conjecture which may be testable in the next few years, and it's speculation for the hell of it here anyway.
Having essentially hardware-equipped encryption, your brain ends up being completely unique and probably uncopyable in total from a quantum-level process, given it lives half here and, ahem, half there. Any connection to it by non-quantum devices, especially high-speed ones, is probably a bad idea, as it increases the exposure surface area beyond what nature has likely already secured. Given your brain is reprogrammable, it's probably not a good idea to hook up weird shit to it until we know more about how the universe and it work. That includes any shit a government asks you to hook up to yourself now and again when you pass through its borders.
The 5th amendment being out of date is the least of our problems. Our government in the US, in its valiant attempt to protect us, has somehow decided it needs to keep increasing the efficiency of protection in response to what appear to be escalating abilities of the bad guys. The problem, of course, is that eliminating all suffering is completely impossible, and that the idea of what is a decent amount of suffering to endure is constantly shifting as it's being squabbled over by people infected with angry cooperative memes... memes which have forgotten that this country was founded on freedom to do whatever you wanted, when you wanted to do it, as long as you weren't shitting on your neighbor when you did it.
Frankly, I'm more concerned about cloud services nowadays than phones. Seems as if a phone is just a viewport to the cloud and with so many services and apps accessing my phone's data (seems like every day another apps asks for more data) that the exposure area of my phone's cloud footprint is probably easier to hack than the phone itself.
I always wondered why more people don't go around bricking iPhones by entering the wrong pin several times. Same goes for any other lockout. Why not do this to someone famous by constantly logging in as them from a botnet?
I believe there are exponential delays enforced on attempts, so it would take hours to enter 10 incorrect PINs. Not something you can do while they're at the bar.
Why is no one attacking at the hardware level? Cut open the processor to get the GID and UID, dump the flash, pregenerate rainbow tables for the PIN, power the flash chip externally, and try the codes...
Yeah, it is expensive, but I would not be surprised if there were labs that could provide such a service. Why does the FBI go through such pains?
While I'm also curious why this isn't possible (as per everyone else's comments), the phone in question doesn't even have the same level of security so does it even have a dedicated chip with the GID and UID?
As I've understood from other posts in the thread, you can't reproduce the hardware security module on the chip. It has a unique per-device key that cannot be read out (except perhaps with an electron microscope).
> Can somebody explain to me how this warrant is not a direct violation of this individual's 4th amendment rights?
The person is dead, it's reasonable, and the 4th isn't even applicable (depending on the interpretation of "make a backdoor in a product" as opposed to "we're looking at someone's data they own on a device they own"). Even the very liberal interpretation of the 4th doesn't apply here, where the rights violation would be the act of accessing the data, not just asking to have a way to look at it.
Techdirt is full of fiction by design, so I'm not surprised by the confusion.
Ok someone murders a bunch of defenseless people... Why is Apple dragging their feet? This is tasteless. I'm NOT for backdoors, but this is ridiculous.
I think you've just learned that you are, in fact, for backdoors. A backdoor doesn't become any less of a backdoor when it's only used against bad people.
No, don't put words into my mouth. I think Apple should go to some extraordinary means to assist here. Any system is hackable if you have physical possession and control over the input.
It's true that we haven't seen perfection yet, but there are tamper-resistant devices where the above is not trivially true. If each device were protecting the same keys or we just needed to break the platform, I might be more inclined to agree, but given that they have individual keys and a failed break could leave the data unobtainable, I'm not so confident.
That said, I think this particular phone doesn't have the secure enclave, so it may be breakable here.
It's more than just that. It's about setting a precedent. If you let the government walk all over your security, then where do you draw the line? What's the point of security at that point? Security doesn't favor good or bad.
Great point about good/bad, so here's the line: systems should resist and prevent mass warrantless surveillance. We are innocent by default; a government should not be able to dragnet its citizens.
However I don't understand why everyone on hn thinks it's impossible to design things in this manner.
Judges are not strict compilers. They will interpret "submit passcodes" to mean what a lay person would read it to mean: if the submitted passcode is correct, the phone will be unlocked.
Games like this will get you contempt of court. I don't know if an obstruction of justice charge could come out of this, but I wouldn't test it myself...