> There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
Crazy if true.
Doesn't this also create a weird incentive problem where the FBI (or any other law enforcement agency) who would normally be tasked with helping Apple with this doesn't actually want to?
Elbit is a big defense contractor and basically wants to simultaneously be a competitor of General Dynamics and Palantir.
Listen carefully to the choice and emphasis on words.
Second hand knowledge: Within international organizations that have worked extensively in the Israel-Lebanon border area it is well known that Israel has pwned most of the Lebanese telecoms and ISPs quite thoroughly. To the extent that Hezbollah started laying its own fiber optic cables.
It's not due only to Israeli resources. Much of Israel's defense budget comes from the U.S., plus there is much more support, including technology transfer, that isn't provided in cash.
Why would the moderators get involved anyway? Should they also moderate criticism of Russia whether right or wrong? of North Korea?
How is it criticism? Which facts were made up?
And if you're wondering about the discrepancies with the article above: that one is from 2012, the budget is larger now, and the US dollar has devalued against the Israeli shekel by 15% since then. In fact, fluctuations in US Foreign Military Financing (FMF) as a portion of the Israeli defense budget are often due to currency exchange, since much of the Israeli budget allocated in local currency is spent locally, while the FMF is spent over in the US in dollars.
It's also important to note that the Israeli budget allocation does not include FMF or any other aid. So when Israel allocates, say, the equivalent of 18 billion USD in local currency, that is the amount the government will fund the defense ministry with. Beyond that, the defense ministry has its own internal budget, which is funded via the Israeli government budget, FMF, and any additional revenue streams of the defense ministry, such as rent and dividends from its shares in now-privatized national defense contractors like IWI (formerly Israel Military Industries), IAI, etc.
Apple is an American company, but they have foreign subsidiaries like Shazam Entertainment, which is a British company. If Apple Singapore is a full subsidiary, then sure, that's a Singaporean company, but Apple Inc wouldn't be.
they don't play by their own set of rules.
The Israel == Guantanamo thing doesn't exactly make sense to me either, but now you're arguing nonsense. Certainly, we all know that the government, including law enforcement, doesn't always follow the law.
That's almost the entire argument for putting any limits on governmental power at all.
(That's not to say we should restrict them from doing this, though; if they can crack a phone, good for them, I suppose. But it's another thing to be concerned about.)
Here, the "Israel == Guantanamo" thing doesn't make sense if you assume that the government is using the Israeli hacks to break iPhones it has in custody because of a warrant or arrest. You can speculate that the government is stealing peoples' iPhones and breaking into them without a warrant, but it's an actual logical fallacy to point to different things the government is doing to argue that the government is doing this thing too.
Well, OK, now you know:
The US government seizes phones and laptops, "without showing reasonable suspicion of a crime or getting a judge’s approval", on a regular basis, and has done so for a number of years.
(et fuckin' cetera...)
You might not like it, but border searches aren't illegal, and the government doesn't need to go to Israel to do them.
> This technology isn’t being used to break into phones at surprise checkpoints, it’s being used to search phones of people who have been arrested.
> As far as I know, the government isn’t stealing peoples iPhones to search them.
That implies it's nothing to worry about if you aren't being arrested, which is wrong.
First, you don't know when this technology is being used. It would be prudent to assume the US government could use this technology on any phone it seizes.
Second, even if it's not "stealing" when government agents seize your phone at a border (or yes, at a surprise checkpoint, which they can and do use), from a security standpoint, it's the same thing.
The legality of these searches is not that interesting to me (witch-burning and slavery were legal too). What's interesting is that this new exploit, assuming the story is accurate, allows the government to search the data of phones that they seize.
Why should we worry about that? Because, as we have already established, they seize phones routinely, and not necessarily in conjunction with an arrest or even suspicion of criminality.
Yes, it's legal (in many cases, anyway). But before this new phone-cracking capability, it probably wasn't effective. The security on the Apple iPhone was believed to be good enough to stop such intrusion; now (again, assuming this article is accurate) we know it isn't.
2018: FBI smuggles Apple engineers out of the US
It might seem logical to those unfamiliar with how these hacks work, but consider that such hacks do not depend on the secrecy of the design. Also consider that Apple hires regular software and hardware engineers, who do their best to design a system, and Apple then hires hackers (both internally, as well as external consultants) to find weaknesses in their designs. These weaknesses are then fixed before the product ships, meaning even those who were paid to break it no longer know how. This alone should tell you that people who know the system intimately are not the ones who understand how to break it.
Put another way, if I need to make this product, and there are two candidates I can hire, one person who wrote the software and one who knows nothing about the software but is demonstrably skilled at finding exploits in similar systems, I’ll take the latter in a heartbeat.
Also don't forget the hardware. As with most/all other phone vendors, iPhone chips aren't made in the US; most of the design may be, but the chips themselves are not, and finding a Chinese hardware engineer happy to help would be even easier, because all he'd need to implement is a covert channel to tunnel sensitive data (passwords?) to a known place. If you have access to the hardware, that should be trivial to do: implement a small undocumented flash memory area anywhere, and when the user taps in a password, an equally undocumented firmware routine (a few hundred bytes, very easy to conceal) adds the password to that small memory, which can be read only under certain conditions (say, connecting power + tapping a bossa rhythm with the phone screen facing down + disconnecting power). That sounds crazy, but you get the idea: every sensor is a switch, and any switch can be used to enter a code. A few spare kilobytes of memory here and there would allow this and other spying mechanisms, so I wouldn't be surprised at all if some big agency attacked the hardware/firmware rather than the software.
I sincerely apologize for being this blunt, but you clearly have no clue what you’re talking about. Ask anyone who’s shipped any piece of hardware they helped design, let alone a processor, and they won’t be able to answer because they’ll be laughing so hard. Adding persistent hardware based spying like you describe into a design is anything but trivial.
5x pay sounds quite regruntling, as in P.G. Wodehouse's "I could see that, if not actually disgruntled, he was far from being gruntled."
[ https://www.goodreads.com/quotes/26678-i-could-see-that-if-n... , https://en.wiktionary.org/wiki/regruntle ]
Jailbreak requires a reboot of the device and after a reboot the encryption key for the useful data on the device is not available. If the device uses a strong passcode (as opposed to a numeric code) it cannot practically be brute forced even if you force the device to allow you to enter many attempts quickly.
70^8 = 576480100000000 // 8 chars of upper/lower case, numbers, symbols
4000^4 = 256000000000000 // 4 words pulled from a vocabulary of 4000 words
staple (not in the first 4000)
There is, but that part isn't the important part. The important part is using a long pass phrase rather than a multi digit number.
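The keyspace arithmetic above is easy to verify with a quick script (a sketch using the same figures as the comment: a 70-symbol alphabet and a 4000-word vocabulary):

```python
# Compare the keyspace of an 8-char mixed password vs. a 4-word passphrase,
# using the figures from the comment above (70 symbols, 4000-word list).
import math

mixed = 70 ** 8        # 8 chars of upper/lower case, numbers, symbols
phrase = 4000 ** 4     # 4 words pulled from a 4000-word vocabulary

print(f"8-char mixed:  {mixed:,} (~{math.log2(mixed):.1f} bits)")
print(f"4-word phrase: {phrase:,} (~{math.log2(phrase):.1f} bits)")
# Both land in the ~48-49 bit range, vastly larger than a
# 6-digit PIN's 10**6 (~20 bits) keyspace.
```

Either choice dwarfs a numeric code; the passphrase is simply easier to remember at the same strength.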
Authentication is necessary because the prosecutor has the burden of proof, and part of meeting that burden is making a facially sound case about the authenticity and reliability of each piece of evidence. But unlike, say, an illegal search, failure to meet that burden doesn't poison derivative evidence as long as that evidence is independently admissible.
Importantly, you don't need to authenticate evidence _before_ getting a warrant to take possession of the phone; at least not to the extent required at trial. And as far as I know there are no laws limiting how the government can extract information from a phone it legally possesses for investigatory purposes, which means any technical process would be entirely irrelevant to the legality of the search. So there's no way to force the government to divulge the process as long as they don't try to submit the information gained by that process directly as evidence.
But maybe I'm missing something.
They gather the evidence illegally, and then conjure up a legal means to re-find the evidence they already know exists.
But the reason most of us care about having our data encrypted is not actually because we are committing heinous felonies, and want our phones to hide the evidence from legitimate cops (though of course sometimes that’s the case).
It’s because we don’t trust the authorities to follow the law. If they can crack your phone legally, they can also crack it illegally. (Say, after seizing it within 100 miles of the border, which they can do any time they please for whatever reason (including no real reason)).
So even though this ability isn’t necessarily illegal in and of itself, it’s certainly of interest to those of us who are concerned about the threat vectors that are presented by government forces that do engage in illegal practices.
Is there anything you can provide to convince me it's remotely possible?
you use illegal methods to get X without a warrant. but you can't use that information legally. so you use your knowledge of X to find a legal way of learning X, after the fact.
then you go to the courts saying you found X the legal way.
but you didn't.
What would you say, exactly?
If you said:
"your honor, breathalyzers can be tampered with to provide false readings"
It seems quite easy for anyone to respond with "how so?". Do you refer to "this one time in New Jersey"?
Well, this is a more general issue (not limited to this case, or to sending a device to Cellebrite or another external laboratory): once a chain of custody is formally valid, it has only as much integrity as the people who had physical access to or worked on the device.
Still - thankfully - "planting" evidence on a modern file system and OS (provided that the end result is an actual physical extraction) is not as easy as it may seem.
Definitely possible, but extremely difficult to achieve without leaving any trace behind.
In the end, the only options are: bruteforcing passcodes on the original device while attempting to trick the device into allowing more than 10 failures, or prying open the Secure Enclave to obtain the Unique ID — both options a lot more complicated than just cloning the data and trying passcodes on it.
People have been cracking secure coprocessors of the type used in payment cards, TPMs, and the like for a long time, dare I say even those which were designed to a higher level of security than Apple's. The fact that there is an entire phone attached to it doesn't make much of a difference, but the technology behind this (FIB, microprobing, etc.) has been steadily dropping in price and increasing in availability for a long time.
But if they cracked the SE, and kept that fact to themselves, they would be making even more money because every government on the planet would be coming to them. This is provided they kept it to themselves.
It would mean a significant spike in the number of phones being cracked and people being arrested/charged/hanged/etc. That statistic would jump off the charts and prompt Apple to develop a fix straight away.
The only way this would work is if they had cracked the SE and are doing an Enigma: keeping it top secret and only cracking very high profile targets with the technology, which I guess is possible.
Of course. The "big guns" are not to be used lightly, as the saying goes.
How, though? If the only information Apple has is that their SE scheme is broken, how is that supposed to help them develop a solution?
Isn't it more of a cat and mouse? The defences also drop in price and increase in availability?
As usual, old tech becomes vulnerable. Hopefully most people will get the chance to upgrade to the latest and greatest before attacks get too easy and render the old device unsafe.
I wonder what Chris Tarnovsky is up to these days...
Previous iOS versions used to have a small window where you could race and power off after trying a passcode but before the enclave had incremented the counter, but that bug was fixed long ago. Maybe there are other unknown bugs of a similar kind.
I skimmed the secure enclave documentation at https://www.apple.com/business/docs/iOS_Security_Guide.pdf (page 5, 14, 15), and I can't find anything to confirm that.
It was also confirmed explicitly during the Q&A at the blackhat talk in 2016, which I believe is on YouTube.
doesn't look like it. https://www.blackhat.com/docs/us-16/materials/us-16-Mandt-De...
Edit: this one. https://youtu.be/BLGFriOKz6U
Edit 2: the question was asked at 47:20
I wouldn't rule out some kind of electrical glitching attack.
If I'm understanding correctly, obtaining the unique ID would simply mean the strength of the AES key becomes the point of failure? (So a strong password means FBI doesn't get in)
I think the Secure Enclave has independent built-in mechanisms for keeping track of the number of times things have happened though.
(If that sounds sarcastic it's not meant to be; it's a genuine question, as I don't know much about this stuff!)
That would not be good. Since we're bound to have that cost-of-unlock war anyway as new workarounds are found, the cost should at least be higher. I'd hope for $50k+, so that if it's really needed, it goes through several levels of approval.
Forbes would unlock one phone and then alert Apple by way of their story. Cellebrite's ideal customer pays for lots of iPhones to be unlocked.
Presumably they weren't, because there's nothing in it for them. And Forbes ran the story anyway. Would it really have been better for Forbes to not run it?
No such thing exists
> If you don't do a lookup every X days, it'll expire from the cache
Requerying a cached DNS record doesn’t extend its TTL. TTL is based on when it was first added to the cache.
Think about that from a technical perspective and you’ll realize the flaw. :)
You can’t cycle to a new set unless the authoritative server is still responding with the key. And if the authoritative server still has it, what difference does it make that a caching name server has it too? Furthermore, there are zero guarantees a caching resolver will cache for the length of the specified TTL, so you literally have a land mine that’ll explode randomly and cause you to lose your data.
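The TTL semantics in question can be sketched with a toy cache (a simulation, not real resolver code; `ToyDnsCache` is an illustrative stand-in): the expiry is fixed when the record is first cached, and re-reading the entry never pushes it back.

```python
import time

class ToyDnsCache:
    """Toy model of a caching resolver entry: absolute expiry, set once."""
    def __init__(self):
        self._store = {}  # name -> (record, absolute expiry time)

    def put(self, name, record, ttl):
        # Expiry is fixed at insertion time, relative to when it was cached.
        self._store[name] = (record, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        record, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[name]  # expired; a real resolver would re-query
            return None
        return record              # note: reading does NOT extend the expiry

cache = ToyDnsCache()
cache.put("example.com", "93.184.216.34", ttl=0.1)
assert cache.get("example.com") == "93.184.216.34"
time.sleep(0.15)                   # repeated gets before this would not help
assert cache.get("example.com") is None
```

No number of intermediate `get` calls keeps the record alive past the original TTL, which is the flaw being pointed out.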
> As the GP hints
Read it again. GP’s Github link doesn’t allude to what you imply it does. Storing arbitrary data in DNS is of course possible and others will cache it for you, but implying anything like what you described as feasible just doesn’t hold merit.
> but calling it "factually not correct" is unfair.
This entire theory you posted originally doesn’t hold up to even basic technical review. It’s nothing against you personally, the idea simply doesn’t provide any actual benefit and very fairly is factually incorrect.
Yes - during a refresh you'd (re-)add the key to an authoritative server that you control, query it from X open resolvers that you do not control, then delete the records from the authoritative server that you control, such that the only remaining copies are held by the open resolvers. Care would be taken to make the key forensically unrecoverable on the authoritative resolver whenever it's not participating in this refresh process.
> there’s zero guarantees a caching resolver will cache for the ... [full] TTL
To deal with that you can test them first using useless/random data, and use many (10+) to deal with the risk that policy changes after your test. The hope being that it's unlikely for all 10+ to go offline, time out, etc, before your next refresh. But it is true that some availability risk is the price you must pay for the "unrecoverability after X seconds, using HW you don't own" benefit of the scheme.
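The refresh cycle described above can be simulated end to end (purely illustrative; `AuthServer` and `Resolver` are toy stand-ins for real DNS software, and the record name is made up):

```python
# Toy simulation of the "store a key only in open-resolver caches" scheme:
# publish the key briefly, seed many caches, then wipe the authoritative copy.
class AuthServer:
    def __init__(self):
        self.records = {}

class Resolver:
    def __init__(self):
        self.cache = {}
    def query(self, name, auth):
        if name in self.cache:
            return self.cache[name]
        rec = auth.records.get(name)
        if rec is not None:
            self.cache[name] = rec  # held for up to the record's TTL
        return rec

def refresh(key, auth, resolvers):
    auth.records["key.example.com"] = key    # 1. (re-)publish briefly
    for r in resolvers:
        r.query("key.example.com", auth)     # 2. seed each open resolver
    del auth.records["key.example.com"]      # 3. wipe the authoritative copy

auth = AuthServer()
resolvers = [Resolver() for _ in range(10)]  # 10+ to hedge against eviction
refresh("key-material", auth, resolvers)

# The key now lives only in the caches, not on hardware you control.
assert "key.example.com" not in auth.records
assert all(r.cache["key.example.com"] == "key-material" for r in resolvers)
```

In the real scheme each cache entry also carries the TTL countdown, so missing a refresh window makes the key unrecoverable everywhere at once: that is both the feature and the availability risk.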
A better idea would be to put an RTC + watchdog timer into the security chip that holds the keys and power it continuously with a small amount of external power. The power must be available and the watchdog timer must have time remaining to disconnect a circuit that pulls the memory holding the keys to ground.
More advanced types of tamper resistance and self-destructing chips are possible, but they tend to have significant downsides.
Presumably, this would automatically delete your G Suite email, contact, calendar, and other data from your device.
Using Apple Configurator, you can change the number of unsuccessful attempts allowed before an iPhone is erased to anything between 2 and 10.
That option has been built in to the iPhone pretty much since the beginning.
After all, when was the last time you spent three days without fondling your phone?
This is a planted marketing/PR piece.
Hmm, how did Apple get cleared for DOD use?
What process would be required for that? Hmmmm.
All I want is my every day encryption to be a big enough pain in the butt to crack that the feds can't break it without spending a medium amount of money.
HN: Apologies, this was lost in my subtlety, but consider the game-theory aspects. Your best bet is to be _just enough_ of a pain in the butt that it's difficult to reach you, but you certainly don't want to be singled out on a national stage either. Maybe I'm the only one who considers this angle?
Presumably a security professional selling a usable exploit to a company like Cellebrite pays far better than the $0 that comes from releasing it as a "jailbreak" to the general public.
All of the jailbreaks have just involved tethering your phone to iTunes or visiting a particular website or app. There's never been a need to do any desoldering.
The difference is just that those exploits which were difficult enough that they required soldering generally weren't released or didn't get much traction.
That being said, I remember soldering a modchip on my original xbox 16 years ago