A second: What happens if Apple states that it will take a 50-person team, with an average annual labor cost of $200K per person, approximately 5 weeks to fix the problem, with a 50% chance of success? Can Apple bill the court a million dollars to try to fix the issue?
A third: Apple open-sources their encryption modules and firmware. They no longer have proprietary information for how to unlock the phone. Are they legally required to be the ones who defeat a system to which they hold no proprietary information?
A fourth: The small team that built the system no longer works for Apple. Perhaps their visa was revoked and they left the country, perhaps they were poached by a competitor, or perhaps they retired in the years since this module was published. Who is responsible for complying with the order?
A fifth: The data is actually corrupted. Apple presents this conclusion under penalty of perjury after a thousand hours spent on the project, which it requests are compensated.
A sixth: Apple requests that trading of its stock is frozen for one month while it expends considerable resources on complying with an unexpected court order relevant to national security.
3: No, but they will probably be the ones asked anyway, and then yes, they would be legally required.
5: What's the question? Is the question will they be compensated? Then yes.
6: They can't. They don't own their stock. Bad PR is not a good enough reason.
You are treating the court like a mathematical proof and finding edge cases. I used to as well. But courts don't work that way at all - they don't care in the slightest about your proof. They analyze things on a human level, not a mathematical level.
This (and the job prospects) is the reason I chose computer science/software engineering over being a lawyer, even though most people who weighed in on what I should be said I should be a lawyer. The law requires not just thinking like a human, but accepting such thought as valid. To me the logic is exactly the same as that of any racist who makes special exceptions for all the minority people they know while still holding their view in general. In effect the system is built on a mix of emotion and logic, and in doing so it is insanely unjust to those who fall on the unpopular side. A President who admits to smoking pot leading a government that ruins the lives of people who use pot... it is just as bad as some older men I know who admit to having tried pot but who think anyone currently trying it deserves to be put in prison.
Analyzing something on the human level is always a horrible standard when viewed outside the context of our own emotional fallacies.
I might extend that a bit to ask: what is the consequence of such lack of cohesion? Perhaps first that getting agreement on anything needs to be done on a close-to-case-by-case basis, since there is weaker adherence to standards. This can introduce favoritism (e.g. some people get hit for pot and others don't). Another consequence might be lower up-front costs: not needing to think too deeply about edge cases and whatnot clearly costs less time when running first-time-seen cases. This in turn means people won't hesitate as much in setting new precedent, because they won't notice when they do. Another consequence would be the inability to get consistency across an entire system, since not everything can be debated by all and there are no clear rules.
Pure logic is like a 100% free market with zero oversight and government interference: it looks good on paper but not in practice.
Human emotion in laws adds things like forgiveness, understanding, and compassion, things that mathematics can't help with. I know that it also adds greed, unfairness, and discrimination, hence the constant struggle to find the right balance.
Courts analyse things on a political level. It has less to do with what normal humans think is right and more to do with causing the outcome the judge wants to occur.
The result is that the better judges are more consistent (and then get celebrated when the outcome in a specific case is politically popular but pilloried for creating "loopholes" when it isn't), whereas the worse judges find a way to make the politically popular thing happen no matter how tortured the logic it takes to get there is. And the existence of the second class destroys the rule of law, because you get in front of one of them and it doesn't matter what you did, it only matters if the judge likes you.
Anyway, I used to think that law was mathematical, but after reading lots of verdicts I realized it was not.
> but asking so that I can ask something else
Ask. If I don't know, I'll tell you that.
(Or we'll just have to find out?)
They would be fined, or executives charged with contempt of court and possible jail time. That would also make the Judge mad at them, and not likely to agree to anything they ask.
The Judge has a LOT of power - he can impose some really high fines, and that's just to start with. Apple will not mess with him.
> I don't know what the next step of FBI will be.
Not FBI. If the Judge says to do it (so far he has not), he will be the one making sure they do it. The FBI will just complain to the Judge if necessary, not actually do anything.
> Can they say okay give us the device and we will give you the data but not the actual 'modified os'?
They can ask the Judge if they can do that. It's up to the Judge to say yes or no. They'll probably have to explain why it's better that way, and that the end result would be the same. The FBI can then argue against it, (or accept it). The Judge will listen to both sides and explain his reasoning in a paper.
You might want to try to read all the stuff the Judge writes in this case. Ignore what the lawyers for Apple or FBI write, read just what the Judge writes (he will summarize what the lawyers wrote, so you won't miss anything).
I don't see how statutes of limitations are connected to the question.
The closest I can think of is a right to a speedy trial, but that has so many exceptions (for example if the person is out on bail) that I couldn't say how a judge would rule in this scenario. Especially if it was the defendant causing the delay by refusing to reveal the password and making the prosecutor get it the hard way. (Not applicable in this case, might be applicable in another.)
The FBI is not doing this for proceedings-related evidentiary purposes but instead for investigative purposes. No one has been charged here (from what I know), so the issue of speedy trials is moot, while the statutes of limitations on the crimes under investigation are likely extremely long, if they exist at all.
They are imperfect and I'm sure you can find much wrong with the courts, but attempting to codify every edge case is not only impossible, but dangerous. The more laws you create, the more likely you can find a way you're breaking one. It's not a perfect system, but a cornerstone of American democracy is that you are judged by people, people like you, not by an algorithm.
We've been reminded again and again by recent events how horribly wrong things can go when the system is applied to people who are not like the judge/jury/officers/etc. There's got to be a better way than relying on inherently biased people, especially for commonly persecuted groups (minority races/orientations/occupations).
I think when someone says they want an axiomatic or algorithmic legal system, they're saying people are not like them, and they would rather be judged by an algorithm. Also that they would rather know in advance what behaviors will be judged positively or negatively.
The law is essentially trying to codify a moral code, one that changes with time. We've seen how the law struggles with changes to technology, lifestyle and shifting attitudes. But sometimes the very axioms change (see slavery, suffrage, common law). Because the law cannot keep up with changes, we're stuck with messy human interpretation to smooth over some of those rough edges. Something like Brown v. Board of Education, which seems obvious in retrospect, may not have occurred in an axiomatic system (perhaps Plessy v. Ferguson may not have occurred either, though considering the primary axiom that American law is based on once said that blacks were 3/5 of a human, I find that unlikely).
I would argue that we ought to try to introduce algorithms into the enforcement of the law (policing, traffic enforcement, jury selection, public defenders, etc) rather than the interpretation of it. Of course, one could argue mass surveillance is exactly that, so I don't know.
You ever get in a fight where it wasn't 100% clear who was right and who was wrong? You were both a little bit right and a little bit wrong?
Welcome to the problems our legal system has to deal with. The world isn't a mathematical equation. It's fuzzy. So is our legal system.
The design of a case law system, however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward. So in a sense, case law is intended to be the legal equivalent of mathematical proofs in the courtroom. Obviously this analogy isn't 100% correct, but the main reason it can't be is the thousands of edge cases that come up in real life... edge cases that are minority cases intended to be protected by representative democracy, which the case law system has difficulty actually doing because of its design. See the conflict? Completely lost yet?
Sorry if this comes out as pedantic but I think it is important. Case law is not intended to minimize fuzziness but to address occurrences of fuzziness.
No cases are identical, but knowing how a similar situation was handled in the past clarifies, not confuses the situation. It gets us closer to a consistent interpretation of the law, which is, in my opinion, paramount because consistency ideally means predictability and equality.
Indeed the roots of the common law go back to making sure all people are viewed equally under the law, rather than having random interpretations based on who's judging and who's being judged.
> The design of a case law system however, which the U.S. operates under, is intended to minimize fuzziness by looking at case precedent for guidance on issues moving forward.
Aren't we in agreement?
For example, look at Katz v. US, one of the seminal cases that would inform this case. In Katz, the court starts with existing law, one that prevents illegal searches, and tries to decide whether that law can apply to electronics (such as tapping a phone). At this point the OCCSSA is in effect, but not really tested, so we've got a pretty fuzzy legal area despite the fact that phones are a well established technology. The court rules that even though law enforcement did not search or seize things, that privacy is still implied in electronic communications because of other acts a user takes surrounding the act of making a phone call (closing the door, making the phone call from home, etc.).
In fact, there's a whole legal concept at work here called lawful intrusion that is built upon every time a case is judged. These concepts, and human application of them to the case at hand, help attorneys, judges and juries deal with edge cases.
I'm not really sure what point you're trying to make. Yes, the world is not black and white, but neither is math.
If you have e.g. a car crash where one person was speeding and the other failed to yield the right of way, they're both at fault. Maybe one is more at fault than the other and therefore has to pay a higher percentage of the damages, but how is that not still an algorithm?
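That apportionment can itself be written down mechanically. A minimal sketch of proportional damage-splitting; the two-driver scenario and the 60/40 fault shares are invented for illustration, and no jurisdiction's actual comparative-negligence rule is implied:

```python
def apportion_damages(total_damages, fault_shares):
    """Split damages among parties in proportion to their share of fault."""
    assert abs(sum(fault_shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {party: round(total_damages * share, 2)
            for party, share in fault_shares.items()}

# One driver was speeding (60% at fault), the other failed to yield (40%).
owed = apportion_damages(10_000.00, {"speeder": 0.60, "non_yielder": 0.40})
print(owed)  # {'speeder': 6000.0, 'non_yielder': 4000.0}
```

The hard part, as the next paragraph says, is not running this function but agreeing on the shares that go into it.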
The hard part is designing the right algorithm ahead of time. But throwing up your hands and saying that problem is hard and therefore we should give up on having laws and just let judges decide everything on a case by case basis isn't democracy and isn't the rule of law. It's the rule of whatever the judge says it is today.
Which isn't to say that we shouldn't do our best. We shouldn't just throw up our hands. But I think it's important to admit that judges will always be necessary to handle inevitable ambiguities.
Or to put it another way, we could have an algorithm make the decision but then have a judge whose job it is to find when the algorithm is wrong and then fix it for future but not past/current cases.
There is a tradeoff between predictability and correctness. Where you come in on that tradeoff is pretty different from most other people.
Because I don't see it as a tradeoff between predictability and correctness, I see it as a tradeoff between correctness for past acts and correctness forever in the future.
If you know a law is ridiculous but you also know that a judge will see the same thing and then not let you do that, you won't do it. Some people like that result. The problem is it causes the ridiculous law to carry on existing and not be fixed, because nobody is willing to challenge it when they know they'll lose and go to jail even though by the letter they shouldn't. So then nobody ever knows what the law actually is because it clearly isn't what it says it is, but if it isn't that then what is it?
> allow the SEC to suspend
That said, Apple's enterprise business grew 40% in 2015 to $25B. (http://appleinsider.com/articles/15/10/27/apples-enterprise-...). .gov is a big part of that. When you attend Apple enterprise-focused events, the attendance has grown beyond the old media/field service crowd to include finance, police and .gov attendees.
If you need to comply with various federally mandated compliance regimes, (ie. FIPS 140-2, CJIS, IRS Pub 1075, etc) iOS is very clearly the easiest and most straightforward platform to achieve that compliance on. It's not as trivial as it was with legacy BlackBerry, but it's pretty close.
With Android, the carrier and manufacturer variance makes it more difficult to achieve, demonstrate and maintain compliance. That said, Android offers a number of advantages from an application perspective, but when in production, you'll commonly see licensed third party software in place to perform typical mail and other functions.
I used to run an environment with something like 25k devices... Provisioning and management was braindead simple with legacy BlackBerry, and its integration model for mail & calendar was robust and reliable. Most users considered BB more reliable than any other mail access method. PIN messaging was device-centric vs. identity-centric, which made it attractive for many use cases. That also went away, along with the perceived and real security benefits.
That benefit became a liability -- BlackBerry was very "enterprise telecom" and mail-centric. So the people who were responsible for BlackBerry were very much affected by tunnel vision. That's why BlackBerry was surprised by the market shifts -- their customers were very happy, but were ultimately people with an overly narrow focus who were disconnected from the business.
When BlackBerry switched to the ActiveSync model, they became just another ActiveSync device, with the added "benefit" that the rest of the platform was a mess. They are now trying to leverage their past reputation for good management to become an MDM vendor for some reason.
iPhone slaughtered Blackberry by attacking their strengths... BBM/PIN messaging with iMessage (with the added feature that you cannot intercept the messages if the feature is enabled), the obvious improvements of iOS vs. Blackberry OS and an easy institutional management model.
I doubt you were suggesting that BB's downfall is rooted in their encryption, however. In the slim chance that it was, they didn't speak up about backdoors and letting .gov into their devices until 2015, well after their market share had shrunk.
On phones with a Secure Enclave, the wipe-on-failures state is managed in the coprocessor (which runs L4), and is not straightforwardly backdoor-able.
If you're worried about the police brute-forcing your phone, enable Touch ID and set a passcode that is approximately as complex as the one on your computer.
Seems like they're just hoping to use this as an opportunity to set a precedent. Never let a serious crisis go to waste?
Also hasn't Apple been able (and previously willing) to unlock pre-Secure Enclave phones for law enforcement for... ever?
Which is interesting. If you happen to use TouchID, is your best bet to hope a court will not be able to compel you to unlock it within 48 hours of arrest? That sounds very probable.
Though, one feature I'd like would be to register a distress fingerprint. Then I could touch say... my left index finger to require a password unlock.
However, while a court is (afaik) able to ask you to put your finger on the fingerprint reader, you do not need to tell them which of the fingers the correct one is. So instead of purposely using a wrong finger, I'd ask the court to explicitly tell me which of my fingers I should use to unlock the phone.
If it would be lawful for a court to ask you to "unlock the phone with the correct finger" then they might as well also ask you to "unlock this harddisk with the correct keyboard keys pushed in the correct order (as a password)".
There's a huge difference. Authorities can force you to give up your fingerprint, but not your password.
But they would effectively be asking you the question "which finger did you use to lock this phone" to which you may plead the 5th.
It's not hard, the "precise distinction of law," is "unlock this with your finger, whichever one does it." I don't know what complicated back and forth you're imagining, but it's never occurred in any case that I've heard of.
they would effectively be asking you the question "which finger did you use to lock this phone" to which you may plead the 5th.
We already covered this in the link above: the 5th Amendment covers passcodes, not fingerprints.
"A bit of information you have that the government does not have" is a password.
- Police want to get into your phone for some reason
- You refuse to help them based on 5th Amendment or admiralty law or whatever
- They go to court for a order compelling you to operate the touch lock to open the phone
- You receive the order
Please lay out the "?" part, if you don't mind. I'm highly curious.
No, it is you who is not understanding schrodinger's assertion. The secret knowledge of which finger unlocks it is in itself a passcode and subject to 5th Amendment protection.
If you would like to claim that there's no difference between the two, then you (and your hypothetical court) should have no problem with a user supplying copies of all their fingerprints when asked to unlock their phone.
That's obviously not what's being asked for, hence other people's distinctions.
Of course "the finger," and "which finger," are different things, but that's irrelevant.
If under police duress I keep trying to unlock a phone with my pinkie finger I think that would be suspicious.
If I have that much access to the device I should just force-reboot it by holding the lock and home buttons for about 2 seconds. Or maybe have done that before being arrested.
Upon a reboot the iPhone will always require its passcode.
Mostly because it's pretty hard to change your fingerprint which is a desirable feature for passwords ;)
On the other hand, one might habitually touch, but not register, random fingers to it, while registering some fairly unusual finger as the real one, while using the dominant hand's index finger to "unlock" it.
Finally, someone might decide that if you fail the unlock, they'll just inspect the fingerprint module in isolation and if it works, they'll assume you did that deliberately.
Thumb and index finger should cover 98% of people.
How is that actually enforced? Is there an if statement and a counter somewhere? Couldn't that just be disabled by a sufficiently advanced attacker?
Anyway, how does any of this prevent rubber hose cryptography?
(Yes, I really asked that. Yes, I'm really curious.)
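To the genuine question: conceptually, yes, it really is little more than a counter and an if statement. The security comes from where that logic lives (inside the Secure Enclave coprocessor, per the comments above) rather than from the logic itself, which is why an attacker can't simply patch it out of the main OS. A toy model, purely illustrative and not Apple's code:

```python
class PasscodeGate:
    """Toy wipe-on-failures gate: the 'if statement and a counter'."""
    MAX_ATTEMPTS = 10

    def __init__(self, correct_pin):
        self._pin = correct_pin
        self._failures = 0
        self.wiped = False

    def try_unlock(self, guess):
        if self.wiped:
            return False
        if guess == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.wiped = True  # stands in for erasing the encryption key
        return False

gate = PasscodeGate("1234")
for guess in ["0000"] * 10:   # ten wrong guesses
    gate.try_unlock(guess)
print(gate.wiped)               # True
print(gate.try_unlock("1234"))  # False: even the right PIN no longer helps
```

Run on ordinary hardware this check could indeed "just be disabled"; running it in a separate coprocessor that holds the key material is what makes disabling it non-trivial.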
For those curious, here's a Northwestern scan of an article in The Journal of Criminal Law from the 70s (thanks, internet!):
>The Fifth Amendment to the U.S. Constitution gives people the right to avoid self-incrimination. That includes divulging secret passwords, Judge Steven C. Frucci ruled. But providing fingerprints and other biometric information is considered outside the protection of the Fifth Amendment, the judge said.
A fingerprint, when used on an electronic lock of this kind, is not a key. It's attributable to one person only, not trivially duplicated, and not able to be reverse engineered from a locking mechanism. It requires an action by a single person who cannot be forcibly relieved of their possession of their fingerprint.
Additionally, a key is specified during the manufacturing or assembly of a lock, and comes with the lock, since they are "paired" when the lock is made. However, a fingerprint or password is specified by the user at will after they've assumed ownership of the device. They "testified" their identity to their phone with a fingerprint, just as they did with their password.
If compelled to imprint a finger, it is the same sort of personal interaction that a password entails: the credential holder utters/presents their personal information, not a physical object, but a repeated testimony of the same content they previously and uniquely presented to their device. It should be protected like other self-incriminating testimony under the Fifth Amendment.
If you can compel a suspect to stand up on a lineup, or produce id, there's no reason why the court shouldn't be able to compel you to produce a finger.
In technical terms, the finger is really a "something you have" second authentication factor. If you think of it on those terms, it's more like looking at someone's Hardware token than compelling a password disclosure.
So is all your knowledge. The technology to extract it doesn't yet exist, but once it does, should it be deployed by the courts without a challenge from the 5th Amendment?
Not quite sure it is, though. If they already arrested them, they already have the fingerprint, don't they? That is different from a key and different from a password.
If the device in question was an iPhone 5s or above, then all they'd need is the dead man's hands.
> The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing. No software or firmware can read them directly; they can see only the results of encryption or decryption operations performed by dedicated AES engines implemented in silicon using the UID or GID as a key. Additionally, the Secure Enclave’s UID and GID can only be used by the AES engine dedicated to the Secure Enclave. The UIDs are unique to each device and are not recorded by Apple or any of its suppliers. ... Integrating these keys into the silicon helps prevent them from being tampered with or bypassed, or accessed outside the AES engine. The UIDs and GIDs are also not available via JTAG or other debugging interfaces.
Even for older devices like the iPhone 5C, if the owner chose a good passphrase, I doubt it can be decrypted with Apple's help.
1. From the section on Encryption and Data Protection. Starts on page 10: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
I wonder how Apple can help law enforcement here.
There are some hardware HMACs (Atmel's in particular IIRC) where the process of opening the chip package destroys the area of silicon that encodes the private keys. I don't know if Apple used the same tech but if they did, any attempt to look at the private key storage would destroy it.
Some criss/cross metal mesh as the topmost layer you would have to penetrate, or photodiodes that sense the light if you put a device under a microscope, ...
The FBI is hoping they do know something secret.
In most cases it would be easier to subpoena online accounts, but of course Apple says iMessage is also unreadable for different reasons.
When a device is first set up (or wiped) a random key is created and encrypted by the Secure Enclave with a key derived from the user's passcode and the device's UID. Since only that particular device's Secure Enclave has access to the UID the user's passcode can't be brute forced by any other computer, which enables the Secure Enclave to enforce policies like the passcode attempt delay and incorrect passcode attempt. If the device needs to be wiped the random key is simply erased by the Secure Enclave.
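The effect of that UID entanglement can be sketched roughly as follows. PBKDF2-HMAC stands in for Apple's actual (unpublished) key-derivation function, and the iteration count is invented; this is an assumption-laden illustration of the principle, not Apple's implementation:

```python
import hashlib
import os

# Stands in for the UID fused into the silicon: no software can read it,
# and Apple keeps no record of it.
DEVICE_UID = os.urandom(32)

def derive_wrapping_key(passcode):
    # Mixing the passcode with the device-bound UID means no other computer
    # can reproduce this derivation, so guessing must happen on the device
    # itself, at whatever pace the Secure Enclave permits.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 100_000)

k1 = derive_wrapping_key("1234")
k2 = derive_wrapping_key("1235")
assert k1 != k2 and len(k1) == 32  # nearby passcodes yield unrelated keys
```

Wiping then reduces to securely erasing the random file-system key this wrapping key protects, which is why a wipe is instantaneous.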
(Also, if you only changed 1 bit that would mean you only had to try 2 possible keys...)
You did some unspeakable act and you are dead.
* You have securely enabled touchID
Even if you have 10 tries, barring the use of random appendages, your phone can be unlocked, right?
Edit: on my phone so far it is password OR touchID. Making this easily defeatable with the physical device and my:
1. Hand if I am dead or alive.
2. A copy of my fingerprints on file with the gov't (true for me)
3. A non-gov adversary with literally anything I've touched.
Maybe I can enable both, but currently it is either or.
Also, after 48 hours an iPhone requires a passcode to be unlocked, even if TouchID is enabled.
So you have limited time, and have to hope you guess the right finger to use and it reads the finger within that 5 try period.
Instead of after 5, make it after 100; instead of after 48 hours, make it after 40,000.
And then, can a fake finger with the correct fingerprints be made so it fools the reader? Let's say made of silicone or gummy bears?
Most people likely use their dominant hand, probably thumb, maybe pointer. In this threat model, someone lifts your phone and opens it with a fingerprint. Assuming they can completely replicate a print and get one (a fairly non-trivial assumption), they could probably get through within 5 tries.
Genuine question: How secure would you rate:
* 6 char password
* numeric 4 digit pin
I'm only half joking. I wonder if a knuckle or something would work.
I think the real game here is to compel Apple to build a backdoor into future models. I expect to see a lot of rhetoric around this fact, until something forces Apple's hand.
In particular, it addresses technical issues not covered in the Techdirt article that are relevant to many of the existing comments here on HN.
If Apple can decrypt the phone, it will prove to everyone that backdoors exist. If they can't, and they tell the FBI as much, it will just give politicians more reason to sound off about how we have to have backdoors, because this shooter was a "terrorist" after all, and we just have to suck it up and do whatever is necessary to go after people like that.
Either way, we end up with backdoors.
This is becoming my cue to stop reading the comments; when parallel construction is the most obvious argument, you've read the interesting ideas up thread.
Ultimately states will develop the capacity for brute forcing and you have relatively little recourse. While I hate the idea of a three letter agency doing this at any scale large or small, the potential for corrupt local LEOs to abuse their power with an encryption backdoor is very great.
(3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.
So there are two ways to go about this. They can either brute force AES, which quite simply can't be done (and I don't mean it can't be done with current computers; the number of possible keys is larger than the number of atoms in the universe, or something equally absurd), unless the NSA has a way to crack AES faster (but if they do, they won't make that knowledge public). Or they can try every passcode combination through Apple's full algorithm, which takes about ~5 seconds to generate a key. That's doable, but it would take some time.
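Some back-of-the-envelope arithmetic for the second approach, using the ~5-seconds-per-guess figure from the comment above (real per-guess timing on the hardware may well differ):

```python
SECONDS_PER_GUESS = 5  # figure quoted above; actual hardware timing may differ

for digits in (4, 6):
    combos = 10 ** digits
    worst_case_days = combos * SECONDS_PER_GUESS / 86_400
    print(f"{digits}-digit PIN: {combos:,} combinations, "
          f"worst case ~{worst_case_days:.1f} days")
# A 4-digit PIN falls in well under a day; a 6-digit PIN takes roughly
# two months at worst. A long alphanumeric passphrase blows this up entirely.
```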
The court is basically ordering Apple to produce new firmware that doesn't block brute forcing. If Apple were to comply, who keeps this firmware after the fact?
There's no mention of this at all, but if the firmware image stays with the FBI then the implications are much more profound with regard to privacy.
But once it's established that it can be required from Apple, Apple has to comply, and Apple effectively does comply, other judges in other cases will be able to request other FW, hard-coded against other IDs, as needed.
The middle-term solution to this is for Apple's security team to protect against this threat model, and implement encryption in such a way that _they_ can't bypass it under constraint. I'm not an Apple customer and I don't follow their products closely, but I understand that iPhones 6 already are harder to bypass than iPhones 5.
Apple is right to be terrified at the thought of being asked to make such a firmware image.
If I understand the cited order correctly, the firmware is ordered to be constructed in a way that it runs only on the target phone.
It's unlikely they can rely on hardware protections to provide this device locking, so would they have to build the unique identifier into the image?
Optimistically, some obfuscation could help, but are the FBI/CIA/NSA really more than a few hops away from opening the binary image in a hex editor and changing it by hand?
If Apple firmware images for the iPhone are signed per-device then fine, but is that the case?
I don't know this, but it seems unlikely to me that a custom device-signed build of iOS happens for every iDevice, and if that's not the case, I can't see how Apple can reliably restrict this with confidence.
As many here I believe that once this backdoor exists it will be somehow exploited (at the very least by further orders).
Eh? They are not being asked to install it to the public at large, just one phone.
Of all reasons to object, this reason makes little sense.
* There is an authentic need to get at the data on that phone
* There's no likelihood at all that other users will be impacted by the backdoor
* We'll all be on the same page about how secure these phones are versus the USG.
It's possible that they can prevail against the 5C but not against the 5S or later, since the security architecture of the 5S is very different from that of the 5C.
What is the authentic need? The shooters are dead. Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?
To be clear, I don't think the order to Apple is necessarily altogether a good idea or is even going to produce the desired results, but your complaint seems to be with the fact that this data is being pursued at all.
Edit to reply:
> Evidence of a conspiracy would help. You said they declared allegiance to an organized group. When they did that, did they say or hint that they had been in contact with that group, other than, say, watching public YouTube videos?
The woman in the couple declared it right before the shooting. Do you want a notarized letter from the deceased?
> Would you agree that "high likelihood" is too low a bar for justifying searching the phones of people who live in high-crime neighborhoods?
I'm pretty sure neither "high likelihood" nor "authentic need" were being used as a term of art here, but I would bet that any judge would view the commission of murder declaredly for an organized militant group to be probable cause that there is information pertaining to more criminal activity by that group on these two's phones and in their communications.
Do you really view this as a government overreach or are you just trolling? Under what circumstances, if any, would you see as justified a search of someone's email? phone? house? So far you've equivocated between living in a bad neighborhood and committing murder-suicide.
> The woman in the couple declared it right before the shooting.
I'm not questioning that she declared allegiance. I'm asking if she was in private contact with anyone. If you were responding to that, can you show me where that is in the NYT article you linked? I don't see it.
> Do you want a notarized letter from the deceased?
Let's try to keep this civil, please.
> Do you really view this as a government overreach or are you just trolling?
I actually believe the things I am saying. I am not saying them to anger or upset you or anyone else. Please do not let the fact that we disagree about the scope of the 4th Amendment cause you emotional suffering.
I am not ready to declare it overreach, because I do not know all of the evidence yet. This is why I have been saying things like "Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?" and "did they say or hint that they had been in contact with that group" and "I have not followed the news on this shooting, so I would not be shocked if the answer were 'yes, there is some evidence of a conspiracy'."
If there is no such evidence, I do think it is overreach, but my opinions on policy are not fixed in stone, and I sometimes change my mind about them when presented with new arguments, ideas, or philosophies.
> Under what circumstances, if any, would you see as justified a search of someone's email? phone? house?
I doubt anyone has a complete enumeration of all circumstances under which they feel a search is justified. I would feel torn if there was lousy circumstantial evidence that the phone would solve or prevent crimes, I would be in support of a warrant if there was strong evidence, and I am opposed to a warrant with no evidence. One thing I would call strong evidence is a shooter having announced that he or she was part of a terrorist cell in the US.
I will no longer read or respond to your "edited to reply" edits. If you want to discuss further, please use the "reply" button; I will not be editing any of my posts to "edit to reply".
The legality of searching for evidence is pretty open and shut because you need probable cause. The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.
Shooting a bunch of people and saying you're with ISIS is plenty of probable cause for a search. I don't see how you're waiting for "all the evidence" here since all the relevant facts are in and they're sufficient. Whether or not she was conversing privately with ISIS counterparts would be the resulting information of the search.
> One thing I would call strong evidence is a shooter having announced that he or she was part of a terrorist cell in the US.
The only way to read this in light of our previous discussion is that saying "I'm in ISIS!" and then shooting up a bunch of civilians is insufficient to prompt a post-mortem search of the attackers' affairs, instead they need to say "I'm in ISIS and there are a bunch of us!" and then shoot a bunch of civilians.
Bravo sir, I have been well and properly trolled.
If I did so, it was a mistake. My reference to the 4th Amendment, for instance, should have said "how the 4th Amendment ought to protect us". I did not mean to imply that I am trying to predict what warrants the justice system will or will not grant.
> You have a fringe understanding of it
I think I mentioned the 4th amendment just the once. I have been trying to stick to normative arguments.
> The point of a search is to gather evidence, requiring the evidence that would be the result of a search is obviously a non-starter as a system.
I think this is a point where we truly disagree. I think a system can function in which some evidence that a search will yield results is required before the search is conducted. I do not think that the evidence must be airtight. Note that I am speaking about what I think is possible and just and right, not what the law says now or the justice system does now.
> The only way to read this in light of our previous discussion is that saying "I'm in ISIS!" and then shooting up a bunch of civilians is insufficient to prompt a post-mortem search of the attackers' affairs
Did the shooter say she was "in ISIS", or that she pledged allegiance to the leader? There might be a difference in this case. I have read that there is religious significance to a pledge of allegiance in ISIS's theology that might make a pledge indicative of ideological alignment and a membership "in ISIS" indicative of being in actual conversations with ISIS.
> Bravo sir, I have been well and properly trolled.
Please, let's try to be civil.
Either one would seem to constitute probable cause for an association. Of course we don't know if she was actually in ISIS, or just agreed with their beliefs. But how would we know without conducting further investigation? You seem to be demanding a somewhat unreasonably large burden of proof, when all that is needed in this case is probable cause. Frankly, even if she hadn't verbally declared allegiance to ISIS, I don't think it's a stretch to say there's probable cause for connection to other terrorist groups. The fact that she did say that makes it a slam dunk.
> I think a system can function in which some evidence that a search will yield results is required before the search is conducted. I do not think that the evidence must be airtight.
We do have such a system. The evidence you're describing is called probable cause, and that's the whole point. I'm not sure of any reasonable definition of probable cause that this situation wouldn't satisfy. Moreover, your objections seem to be in the form of vague misgivings rather than concrete arguments. You haven't precisely described what would constitute sufficient evidence for an investigation, but instead seem to just be saying "there's not enough right now." I think this is what's behind the GP's frustration in responding to your posts.
That kind of reasoning allows wholesale collection of communications data by the NSA and other agencies. Since that practice has been widely criticized, there must be something missing from your argument.
No one is advocating warrantless searches or not requiring reasons for warrants.
If I want to get a warrant to see who you're calling, it is inherently a broken system that requires the list of people that you called as cause to obtain that warrant.
Any kind of reasoning allows wholesale collection of communications if you misread it properly.
In that case, all you (plural) are haggling over is the "price point" of how much evidence is required to support how invasive a search. I'm unsure how that results in the kind of heated debate that seems to happen here.
It sounds like common sense, I guess, but has that ever worked, actually?
Similar "prevention" rationale is offered for governments to spy on virtually all telecom all the time, now. But this shooting happened anyway.
1) Anyone with a plan promised to stop all terrorist attacks is lying to you, stupid, or both. You can't have a free society and a 0% chance of political violence.
2) Yes, searching the possessions and communications of dead terrorists is, unsurprisingly, substantially more likely to lead to useful criminal leads than reading your metadata. A warrant to read this person's stuff isn't unreasonable in the slightest; an order forcing Apple to do shit might be, but that's a procedural thing unrelated to the core issue of "is there a good reason to read this person's stuff".
Would you agree that "high likelihood" is too low a bar for justifying searching the phones of people who live in high-crime neighborhoods?
> If this doesn't clear your hurdle for reasonable search then what would?
Evidence of a conspiracy would help. You said they declared allegiance to an organized group. When they did that, did they say or hint that they had been in contact with that group, other than, say, watching public YouTube videos?
Is there some other issue we're missing here, or does that pretty much wrap it up?
You can feel free to disengage from this conversation if you find it troubling. If you are incredulous that someone might be concerned with the privacy of these people (and their friends and family) in the particular way I am, then I'm not sure what I can do to make you believe.
I am a person. These are my true thoughts. I actually and honestly believe them.
> Yes, the police will easily get warrants to search whatever property of a mass murderer's they feel would be productive to search.
As I said earlier to you in another branch of this discussion, I am not disagreeing that the police CAN get this warrant. They appear to HAVE gotten this warrant, so I guess that's a historical fact. I'm trying to have a discussion about what we think is just and fair and right, as well as trying to find out if there is any evidence of a conspiracy. I have not followed the news on this shooting, so I would not be shocked if the answer were "yes, there is some evidence of a conspiracy". This is why I asked the question, "Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?"
> No, that does not mean they can randomly get warrants to search random houses in high-crime neighborhoods.
I did not ask if police CAN randomly get warrants to search random houses in high-crime neighborhoods, I asked the commenter if he or she thought they ought to be able to do so. If the answer is "yes, they ought to", then we might have a different discussion than if the answer is "no, they should not". I have met people who would answer "yes" and I have met people who would answer "no". Neither answer will cause me to accuse the commenter of commenting in bad faith.
> Privacy rights for mass murderers: not a high priority of US constitutional law.
I'm not talking about what is and is not a high priority for the justice system. I'm trying to engage in a dialogue about what we think the requirements for a warrant SHOULD OR SHOULD NOT BE and whether or not there is any evidence that the phone will provide information that will help solve or prevent crimes.
> Is there some other issue we're missing here, or does that pretty much wrap it up?
I'm sorry if this conversation is upsetting or troubling to you.
> I did not ask if police CAN randomly get warrants to search random houses in high-crime neighborhoods, I asked the commenter if he or she thought they ought to be able to do so
Given that the answer to CAN they is a solid no, and that random searches of homes is in no way related to searching devices used in a conspiracy to commit murder, what is the point of this? In one instance, someone has clearly committed a conspiratorial crime, in another instance, people are living in houses with low property value.
> if there is any evidence of a conspiracy
Conspiracy - a secret plan by a group to do something unlawful or harmful.
Point 1: A conspiracy took place. A plan to kill people was kept secret between multiple people until it was executed.
Point 2: Immediately prior to commission of the murders, one of the participants declared that they were part of a larger group, known for organized commission of murder and terrorist attacks.
Given these points, what information is missing that would motivate you to think that a search of the attackers' phones should be conducted? Are you really asserting that there is no evidence of conspiracy that extends beyond the deceased, despite the fact that they said they were doing this under the flag of a larger organization?
I don't see any room for a normative argument defending against a search. I don't imagine that you're arguing that the post-mortem privacy interests of the terrorists prohibit investigation. Are you suggesting that the risk from not knowing the contents of the phone are so low as to not rise to outweigh the privacy interests of anyone incidentally mentioned on the device?
Sorry for all the questions, what I'm trying to get at is that from a normative perspective, societies generally allow investigators to search the shit of known participants of violent criminal conspiracies in order to detect previously unknown elements or plans of those conspiracies. What is the moral base from which you are arguing that this nearly universally accepted standard is somehow deficient?
> Sorry for all the questions
There is no need to be sorry. They are useful for me to understand your POV and to have a conversation. I am happy to answer them as best I can.
You said that the state should be able to search the phone because it was likely to have evidence of crimes. I am arguing that higher than normal likelihood, as you might expect in a high-crime neighborhood, is not sufficient to justify a search. Instead, I am arguing that evidence (indicative of finding things that will help solve or prevent crimes), not likelihood of finding such things, should be the standard for a warrant.
> A conspiracy took place.
I should have said "a conspiracy beyond the two dead perpetrators".
> one of the participants declared that they were part of a larger group
Did she? I thought she said she "pledged allegiance" to a larger group, like one might do to a Pope you have never met or spoken with.
> what information is missing that would motivate you to think that a search of the attackers' phones should be conducted?
I discussed this elsewhere in the thread, but you may not have seen that post yet. Here is a link: https://news.ycombinator.com/item?id=11115698
> Are you really asserting that there is no evidence of conspiracy that extends beyond the deceased
No, I am /asking/ if there is any such evidence.
> I don't imagine that you're arguing that the post-mortem privacy interests of the terrorists prohibit investigation.
No, I don't think it prohibits investigation, but I do think that state searches of their personal effects ought to require evidence that searching their personal effects would solve old crimes or prevent new ones.
> Are you suggesting that the risk from not knowing the contents of the phone are so low as to not rise to outweigh the privacy interests of anyone incidentally mentioned on the device?
I am suggesting that those privacy interests can be balanced against evidence that searching the phone would solve old crimes or prevent new ones. I do not believe that risk is the only question. That is what I was trying to get at with my distinction between "high likelihood of" and "evidence of", above.
> What is the moral base from which you are arguing that this nearly universally accepted standard is somehow deficient?
The reason I think that evidence of solving (or helping to solve) old crimes or preventing new ones should be required before searching the possessions of any person, living or dead, murderer or pacifist, is a traditional one about privacy, but it seems like the balance I use is different than your balance.
That "societies generally allow" the state to do something, or that societies "nearly universally" do so, is not a big factor in my feelings on whether or not it is fair and just.
I'm not particularly concerned with crimes that we believe "may have" occurred. Of course, they may have. Anything may have happened -- I'm asking for more than just correlation that criminals know criminals. Do we have any evidence, or even any hints or clues, that the phone contains evidence that would help solve or prevent any crimes?
Right, but we don't go searching everyone's papers just in case they are conspirators.
If Alice punches Bob in the face, then is hit by a bus and dies, we don't go searching through all of Alice's stuff just in case there might have been someone else involved with the Bob-punching incident, right?
Is there any evidence, any at all, that the shooters collaborated with anyone?
> The legitimate concern is prevention of future attacks.
I'm not questioning the "legitimate concern", but I don't think "legitimate concern" should be sufficient to get a warrant.
Wrong. If Alice announces that she's looking for people to attack, then of course we go looking to see why she's doing that and whether others are involved.
This wasn't some random emotional attack like a bar fight. Stop setting up nonsensical strawmen for your arguments, argue the hard cases not the easy ones.
If your argument/objection can survive the hard cases then you have something, arguing the easy cases is meaningless.
What, in your opinion, are the limits of that investigation?
1. Alice announces she's looking to attack someone.
2. She attacks Bob.
3. She dies.
I gather that, in your opinion, we can search her possessions. Let's say she has a living mother and a best friend who died a week before the attack. Can we search her mother's things? How about her best friend's?
You look at the scenario at hand and see if it makes sense to search this person or that.
I said it to someone else today, I'll tell you too: The law is not like a mathematical proof with exact rules. People with STEM backgrounds tend to think of the law that way because it seems to be all about rules. But it's not. It's about human judgment and gut feelings.
There is no blanket rule, each situation is different.
You need to have some reason to go searching someone, it doesn't have to be a great reason, but you do need a reason.
The alternative is dragnetting. Or not having investigative tools. I don't know that I like either option as a citizen.
I didn't say it isn't; I said it shouldn't be.
14 dead people and a stack of unused guns and bombs.
I didn't feel the need to elaborate because my parent thought one word was sufficient, but let me state the full context to my thought process:
> > > Do we have reason to believe that there is evidence of any pending crimes or any old unsolved crimes on the phone?
> > 14 dead people and a stack of unused guns and bombs.
> two dead attackers, stack confiscated. case closed.
If the "evidence" for future crimes is merely some dead criminals and the ammunition they didn't get to use, there is no probable cause for further investigation. Hence, the case should be closed.
That could not be further from the truth. They are trying to set a precedent that endangers the future of consumer end-to-end encryption.
They are trying to repurpose an 18th-century law (the All Writs Act) to force Apple to help them break iPhone encryption.
If this case creates precedent, what is to stop them from, say, forcing Signal and Google to work together to deliver a backdoored app update to a specific user?
You are not a lawyer. Nate Cardozo, staff attorney at the EFF, had this to say:
The DOJ can make this demand because Apple phones of this vintage are already breakable, and the DOJ is merely asking Apple to exercise a capability it already has.
Apple knows this, better than most, and has for many years. They have done security design work with governments as an explicit adversary. The 5C was insecure. The 5S is not: it has an entire additional processor, running an incredibly secure OS, whose entire job is to make sure that the phone keeps promises like these even if the DOJ orders Apple to sign bogus updates.
Apple is so committed to this that they've extended the "ten tries and you're out" promise all the way through their server infrastructure, so that if you escrow data into iCloud it will be nuked if someone tries to brute force it. Not only that, but after rigging their HSM cluster to operate that way, they burned the update keys, so that an attempt to change that rule will break all of iCloud.
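The "ten tries and you're out" escrow policy described above can be modeled as a toy attempt counter. This is purely an illustrative sketch of the policy as the comment describes it, not Apple's actual HSM code; all names here are made up:

```python
class EscrowGuard:
    """Toy model of a '10 failed tries and the record is destroyed' policy,
    as described for Apple's iCloud escrow HSMs (details assumed)."""
    MAX_ATTEMPTS = 10

    def __init__(self, secret: bytes):
        self._secret = secret
        self._failures = 0

    def unlock(self, guess: bytes) -> bool:
        if self._secret is None:
            raise RuntimeError("escrow record destroyed")
        if guess == self._secret:
            self._failures = 0  # correct guess resets the counter
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._secret = None  # irreversibly nuke the escrowed record
        return False
```

The point of burning the update keys, as the comment notes, is that even Apple can no longer ship new logic that removes the counter: the policy is as fixed as the code in the HSMs.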
> There's no likelihood at all that other users will be impacted by the backdoor
The backdoor, a "master key" as Tim Cook put it, that opens all pre-5S iPhones would affect millions of other users.
And that's not even the main issue. The issue is the precedent it sets, which endangers a lot more than just a few million users or a few specific models of iPhone.
"Can only try 10 times" isn't anything guaranteed by encryption. My laptop has an encrypted partition, but an attacker can brute-force it at will. Even if I had software to say "only let it happen 10 times, then erase the partition" the whole drive could just be cloned. That's why I have a 20+ character passphrase.
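A back-of-the-envelope calculation shows why a cloned drive favors long passphrases over short PINs. The guess rate below is a hypothetical figure for illustration, not a measured one:

```python
def seconds_to_exhaust(keyspace: int, guesses_per_second: float) -> float:
    """Worst-case time to try every candidate at a given guess rate."""
    return keyspace / guesses_per_second

# A cloned disk image can be attacked offline at hardware speed,
# with no 10-try limit in the way.
offline_rate = 1e9            # assumed: a billion guesses/sec on dedicated hardware
pin_space = 10 ** 4           # 4-digit numeric PIN
passphrase_space = 62 ** 20   # 20 chars drawn from [a-zA-Z0-9]

pin_time = seconds_to_exhaust(pin_space, offline_rate)
passphrase_years = seconds_to_exhaust(passphrase_space, offline_rate) / (3600 * 24 * 365)
```

At that rate the PIN falls in a tiny fraction of a second, while the 20-character passphrase keyspace takes an astronomically large number of years, which is the whole argument for the long passphrase when software rate limits can be bypassed.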
... the HSMs that manage the escrow scheme for credentials stored in iCloud are themselves rigged to blow up on 10 failed tries, and, not only that, but the code that implements that process is burned into the HSMs and the keys Apple would need to change that logic have been destroyed.
I was talking specifically about this scenario where the phone pin may be 4 or 6 numbers and Apple is helping them.
"Apple's reasonable technical assistance shall accomplish the following three important functions:
(1) it will bypass or disable the auto-erase function whether or not it has been enabled;
(2) it will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available on the SUBJECT DEVICE and
Apple's reasonable technical assistance may include, but is not limited to: providing the FBI with a signed iPhone Software file, recovery bundle, or other Software Image File ("SIF") that can be loaded onto the SUBJECT DEVICE.
The SIF will load and run from Random Access Memory and will not modify the iOS on the actual phone, the user data partition or system partition on the device's flash memory. The SIF will be coded by Apple with a unique identifier of the phone so that the SIF would only load and execute on the SUBJECT DEVICE.
The SIF will be loaded via Device Firmware Upgrade ("DFU") mode, recovery mode, or other applicable mode available to the FBI. Once active on the SUBJECT DEVICE, the SIF will accomplish the three functions specified in paragraph 2. The SIF will be loaded on the SUBJECT DEVICE at either a government facility, or alternatively, at an Apple facility; if the latter, Apple shall provide the government with remote access to the SUBJECT DEVICE through a computer allowing the government to conduct passcode recovery analysis.
If Apple determines that it can achieve the three functions stated above in paragraph 2, as well as the functionality set forth in paragraph 3, using an alternate technological means from that recommended by the government, and the government concurs, Apple may comply with this Order in that way."
Reading Tim Cook's announcement in light of this thought experiment, methinks he doth protest too much! Apple does not have any objection to compromising user security at the root level, and in fact has already done so by creating a device that has some limited vulnerability to malicious action by the manufacturer signed with its root key. (By the way, no doubt every other manufacturer has done worse, so this is not to deprecate Apple vs. any other big company.)
I would speculate that Tim Cook's goals with this announcement are largely PR-based, and that the goal of Apple's legal strategy is not to avoid cooperation but rather to retain the ability to decide whether to cooperate, and/or to impose a higher perceived cost on the government for such requests. No doubt Apple is correct to say that once a precedent is established, then it will be widely used by law enforcement even in routine cases.
At the end of the day, I am not optimistic that we can avoid a world in which large device manufacturers are compelled (legally and practically) to build security flaws into their devices. Perhaps not the flaw of a back-doored crypto implementation, but other flaws such as those that have been identified in current iOS devices that allow the government (with commitment of sufficient resources) to chip away at some of the more superficial protections.
The FBI is going after the lowest-hanging fruit: the user's password, which was used to create the crypto key.
The device's unique key is mixed into that PBKDF. Without both parts of the equation, you have nothing.
For your reading enjoyment: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
Specifically, see the diagram at the bottom of page 11.
Now, the PBKDF requires a secret that is only stored within the iPhone itself (within the CPU even, where it can't be read out directly). So if we instead grab a copy of the data, all we get is an AES encrypted file system.
We have two choices: (1) attack AES directly and attempt to brute-force the AES key, or (2) attempt to recover the AES key by brute-forcing the user's password.
Option 2 is vastly simpler than option 1. However, option 2 becomes more difficult (though still far easier than cracking AES itself) when you have to run your brute-force guesses through the CPU inside the iPhone.
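The tangling described above can be sketched with a standard PBKDF2 call. Everything here (names, iteration count, the stand-in for the fused hardware key) is illustrative, not Apple's actual implementation:

```python
import hashlib
import os

# Stand-in for the per-device secret fused into the CPU at manufacture,
# which (per the iOS Security Guide) cannot be read out directly.
DEVICE_UID = os.urandom(32)

def derive_fs_key(passcode: str, iterations: int = 100_000) -> bytes:
    # Mixing the passcode with the device secret means an attacker who
    # clones the flash storage still cannot brute-force offline: every
    # guess must be run on (or with the cooperation of) this device.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, iterations)

key = derive_fs_key("1234")
assert derive_fs_key("1234") == key   # deterministic on this device
assert derive_fs_key("1235") != key   # a wrong guess yields a different key
```

The iteration count is tuned so each guess is deliberately slow; Apple's guide cites roughly 80 ms per attempt on-device, which makes exhausting a 4-digit PIN a matter of minutes but a 6-character alphanumeric passcode a matter of years, hence the FBI's interest in removing the retry limits rather than attacking AES.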