
Why can't Apple decrypt your iPhone? - silenteh
http://blog.cryptographyengineering.com/2014/10/why-cant-apple-decrypt-your-iphone.html
======
abalone
There's another technical surveillance method here that I feel more people
should be talking about: monitoring iMessage communication.

iMessage is extremely secure[1], except for the fact that Apple controls the
device list for iCloud accounts. The method would simply be for Apple to
silently add another device to a target's account which is under law
enforcement's control. I say "silently" in that they would need to hide it
from the target's iCloud management UI to stay clandestine, but that's it,
just a minor UI change. iMessage clients will then graciously encrypt and send
a copy of every message to/from the target to the monitoring device.

This would still work even with impossible-to-crack encryption. It wouldn't
allow access to old messages, just stuff after the monitoring was enabled.
It's the modern wiretap.

It mirrors wiretapping in that sufficiently sophisticated people could
discover the "bug" by looking at how many devices iMessage is sending copies
to when messaging the target (just inspecting the size of outgoing data with a
network monitoring tool would probably suffice), but it would go a long way
and probably be effective for a high percentage of cases.

The main thrust of the article is that encryption is not new, just the extent
of it, particularly iMessage. Here's a way around that.

[1]
[http://images.apple.com/iphone/business/docs/iOS_Security_Fe...](http://images.apple.com/iphone/business/docs/iOS_Security_Feb14.pdf)
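
The attack hinges on the sender trusting Apple's device list. A toy sketch (not Apple's real protocol; all names hypothetical) of per-device fan-out encryption, showing both the attack and the traffic side-channel that could reveal it:

```python
import hashlib

def encrypt_for(device_id: str, plaintext: str) -> str:
    # Toy stand-in for real per-device public-key encryption.
    return hashlib.sha256((device_id + ":" + plaintext).encode()).hexdigest()

device_list = ["alice-iphone", "alice-ipad"]    # device list served by Apple
device_list.append("intercept-device")          # silently appended by Apple

copies = {d: encrypt_for(d, "hi!") for d in device_list}
# The sender uploads one ciphertext per listed device -- so counting (or
# sizing) the outgoing copies reveals how many devices are listening.
assert len(copies) == 3
```

The client behaves correctly and honestly; the compromise is entirely in the directory it queries.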

~~~
dantiberian
Whenever you add a device to your pool of iMessage devices, all of the other
devices get a popup message telling them. This is the step where each of the
other devices gets the decryption key for that device.

My hunch is that Apple doesn't have a way to prevent the popup messages so if
they were forced to add a law enforcement iPhone to an iMessage pool then the
target would notice an extra device was added.

~~~
jshevek
Maybe I don't understand the problem, but it seems to me that it would be very
simple for Apple to prevent the popup messages at their discretion, since they
write and control the software that _causes_ the popup to happen in the first
place.

~~~
eridius
They would have to update the OS on the device first. And that's definitely
not something that can be done silently.

I'm assuming here that the OS already doesn't have the ability to suppress the
popup, and I think that's a safe assumption because Apple _doesn't want_ to
have this ability.

~~~
pilif
_> They would have to update the OS on the device first. And that's definitely
not something that can be done silently._

How do we know that the iMessage protocol doesn't _already_ have support for a
"silent" flag which, if set, will not cause the message to appear?

~~~
eridius
I already answered that in the second sentence of my comment.

------
comex
> The Secure Enclave is designed to prevent exfiltration of the UID key. On
> earlier Apple devices this key lived in the application processor itself,
> and could (allegedly) be extracted if the device was jailbroken and kernel
> patched.

Speaking as a jailbreaker, this is actually incorrect. At least as of previous
revisions, the UID key lives _in hardware_: you can ask the hardware AES
engine to encrypt or decrypt using the key, but not to reveal what it is. Thus
far, neither any device's UID key nor (what would be useful to jailbreakers)
the shared GID key has been publicly extracted; what gets extracted are secondary
keys derived from the UID and GID keys, but as the whitepaper says, the
passcode lock key derivation is designed so that you actually have to run a
decryption with the UID to try a given passcode. Although I haven't looked
into the newer devices, most likely this remains true, since there would be no
reason to decrease security by handing the keys to software (even running on a
supposedly secure coprocessor).
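
The tangling comex describes is what forces guesses on-device. A schematic sketch (every name here is hypothetical, and a toy keyed hash stands in for AES-256 under the UID key):

```python
import hashlib, os

class HardwareAES:
    """Models an AES engine that will *use* the UID key but never reveal it."""
    def __init__(self):
        self.__uid = os.urandom(32)            # fused at manufacture, unreadable
    def encrypt_with_uid(self, block: bytes) -> bytes:
        # Toy keyed transform standing in for AES under the UID key.
        return hashlib.sha256(self.__uid + block).digest()

engine = HardwareAES()

def passcode_key(passcode: bytes) -> bytes:
    # Stretch the passcode, then tangle it with the UID. Because the UID
    # never leaves the engine, every guess must invoke this hardware.
    stretched = hashlib.pbkdf2_hmac("sha256", passcode, b"salt", 50_000)
    return engine.encrypt_with_uid(stretched)

k = passcode_key(b"1234")
assert passcode_key(b"1234") == k and passcode_key(b"0000") != k
```

Offline cracking of secondary keys buys an attacker nothing, because reproducing the derivation requires the engine itself.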

~~~
pasta_1
Is jailbreaking becoming more difficult and would that be a sign of iOS/iPhone
becoming more secure?

~~~
ghshephard
It's a _step_ towards becoming more secure. Having control of your operating
system (that is, preventing other programs from taking control before the
operating system starts up) is clearly desirable if you want to prevent
applications like keyloggers, data sniffers, etc...

------
JoshTriplett
Now if only it were possible to turn off remote installation of applications on
both iOS and Android devices, this kind of security would actually mean
something.

Right now, you can do full disk encryption on an Android device (which seems
likely to become hardware-assisted on future devices similar to the solution
mentioned in the article). If you pick a sufficiently strong passphrase, that
should keep your data secure even on devices without hardware assistance.
However, if the device is turned on and locked (the common case), it's trivial
to remote-install an arbitrary app, including one that unlocks the phone. (You
can do this yourself with your Google account, which means anyone with access
to that account can do so as well.)

It would help to be able to disable remote installation of any kind; that
wouldn't have to come at the expense of convenience, because the phone could
just prompt ( _behind the lockscreen_ ) to install a requested app.

~~~
icebraining
As far as I know, it can install an app _but not run it_ (EDIT: on Android,
that is). So it shouldn't be able to do any such decryption.

~~~
opendais
The problem is the attack surface for an app on a device vs. in the wild is
much, much larger and much, much less secure.

Once it's on your phone, it can just screenshot things if nothing else.

~~~
scintill76
Android requires root privilege and/or a system-level app to take
screenshots.[0] iOS is probably similar. I don't think an app could spy like
this without the user enabling it (which is a valid but separate concern about
how knowledgeable the average rooter/jailbreaker is.)

[0] [http://android.stackexchange.com/questions/10930/why-do-we-n...](http://android.stackexchange.com/questions/10930/why-do-we-need-a-rooted-phone-to-capture-screenshots)

~~~
c_c_c
Maybe I'm not fully understanding here but the last two phones I've used do
not require root or any app to take screenshots. Nexus 5 and Moto X.

~~~
scintill76
By root, I meant that an app with root access could probably take screenshots,
not that a root app is needed for the user to access the system's screenshot
capability.

------
tzs
If someone obtains your phone, and prevents you from initiating a remote wipe
(perhaps they have you in custody, or perhaps they have isolated the phone so
that it cannot receive the wipe command), it sounds like this technology will
do a good job of preventing them from decrypting your data from the phone if
you have a decent passcode. They cannot throw GPUs or FPGAs or clusters or
other custom hardware at the problem of brute forcing your passcode because
each attempt requires computation done by the Secure Enclave using data only
available in the Secure Enclave. That limits them to trying to brute force
with no parallelization and 80ms per try [1].

However, assuming they have an appropriate warrant, can't they get your iCloud
backups and try to brute force those? Maybe I'm being an idiot and overlooking
something obvious, but it seems to me the encryption on the backups CANNOT
depend on anything in the Secure Enclave.

That's because one of the use cases that iCloud backup has to support is the
"my phone got destroyed, and now I want to restore my data to my new phone"
case. To support this, it seems to me that the backup encryption can only
depend on my iCloud account name and password. They can throw GPUs and FPGAs
and all the rest at brute forcing that.

My conclusion then is that when I get a new iPhone, I should view this as a
means of protecting my data on the phone only. It lets me be very secure
against an attacker who obtains my phone, but not my backups, provided I have
a good passcode, where "good" can be achieved without having to be so long as
to be annoying to memorize or type. A passcode equivalent to 32 random bits
would take on average over 5 years to brute force.

To protect against someone who can obtain my backups, I need a good password
on iCloud, where "good" means something considerably more than equivalent to
32 bits.

[1] I wonder if they could overclock the phone to make this go faster?
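
A quick back-of-the-envelope check of the figures above (assuming the 80ms-per-attempt floor holds and attempts cannot be parallelized):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
GUESS_TIME = 0.080  # seconds per attempt, enforced by the Secure Enclave

def avg_crack_years(keyspace: int) -> float:
    # On average half the keyspace must be searched, serially, on-device.
    return (keyspace / 2) * GUESS_TIME / SECONDS_PER_YEAR

print(avg_crack_years(10**4))  # 4-digit PIN: cracked in minutes
print(avg_crack_years(2**32))  # 32 random bits: ~5.4 years on average
```

This is why the hardware rate limit, not the passcode alone, carries most of the security.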

~~~
giovannibajo1
Yes, currently iCloud backups are not encrypted so they can be extracted by
law enforcement, but on the other hand they are not mandatory, as Apple also
offers a full local backup solution through iTunes (albeit, admittedly, they
could make it work automatically like Time Machine, instead of manually; I
guess they'll get there, now that they're using privacy in marketing).

On the other hand, it is perfectly possible to devise a system that locally
encrypts iCloud backups and still allows restoring them. Look at how iCloud
Keychain works in the Apple documents: that data (all your passwords and
secrets) is synced through the cloud between your devices, yet Apple can't
access it. For iCloud Keychain, in case you lose access to all your devices,
you need a master recovery key that's generated when you first activate it; if
you don't have it, you lose the data.

~~~
gilgoomesh
> iCloud backups are not encrypted

This is wrong. iCloud uses AES 128 and 256 encryption:

[http://support.apple.com/kb/HT4865?viewlocale=en_US&locale=e...](http://support.apple.com/kb/HT4865?viewlocale=en_US&locale=en_US)

~~~
kentonv
This doesn't really say much unless we also know how the AES key itself is
derived.

If you don't have two-step authentication on, then Apple's password reset is
based on asking some security questions and sending an e-mail challenge. Even
if they actually derived the key from your security question answers
(unlikely), that's the kind of thing law enforcement would have no trouble
cracking. More likely, the whole authentication system simply returns true or
false, and the key is stored in some separate place -- perhaps more secure
than a random hard drive in the datacenter, but still somewhere where Apple
can get it if forced.

If you _do_ have two-step authentication enabled, Apple's docs imply that it
is impossible to recover your account without at least two of:

* Your password.

* Your "recovery key" -- a secret that they give you and instruct you to print out and keep somewhere.

* Your phone (or some other device that can generate one-time codes).

They say that if you can't produce two of these then all your data is lost and
you'll need to create a new Apple ID. This implies that they might indeed
store your data encrypted by an AES key which is in turn stored encrypted in
two different ways: once with your password, and once with your recovery key.
Thus it would actually be impossible to recover your data without one of
these, and Apple doesn't store either one on their servers, therefore Apple
would not be able to produce your data for law enforcement.
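
If Apple did implement it that way, the scheme would be ordinary envelope encryption: one random data key, wrapped independently under each secret. A schematic sketch (XOR-wrap as a stand-in for a real AES key wrap such as RFC 3394; nothing here is Apple's actual code):

```python
import hashlib, os

def derive_kek(secret: bytes, salt: bytes) -> bytes:
    # Stretch a low-entropy secret into a 32-byte key-encryption key.
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

def wrap(data_key: bytes, kek: bytes) -> bytes:
    # XOR-wrap: fine as a schematic; real systems use authenticated key wrap.
    return bytes(a ^ b for a, b in zip(data_key, kek))

unwrap = wrap  # XOR is its own inverse

data_key = os.urandom(32)              # the AES key protecting the backup
salt_pw, salt_rk = os.urandom(16), os.urandom(16)

blob_pw = wrap(data_key, derive_kek(b"user-password", salt_pw))
blob_rk = wrap(data_key, derive_kek(b"printed-recovery-key", salt_rk))

# Either secret alone recovers the same data key; the server stores only blobs.
assert unwrap(blob_pw, derive_kek(b"user-password", salt_pw)) == data_key
assert unwrap(blob_rk, derive_kek(b"printed-recovery-key", salt_rk)) == data_key
```

Losing both secrets really would lose the data, which is exactly the behavior Apple's two-step docs describe.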

That said, I would be _very_ surprised (and impressed) if Apple actually does
this. Consider that this would prevent their services from doing any offline
processing of your data at all -- a compromise few product designers would be
willing to make for a security guarantee that almost no user actually
understands. More likely, the language in the documentation is a matter of
policy -- Apple refuses to recover your account simply because that's the only
way to guarantee that social engineering is impossible, not because they are
technically incapable.

And anyway, even if your data is stored encrypted at rest with a key that is
actually derived from your password, there's nothing stopping law enforcement
from demanding that Apple intercept your password (or the key itself) next
time you log in. That's basically what they did with Lavabit, after all.

(Things I think about while working on sandstorm.io...)

------
weinzierl

       1. [...]
       2. [...]
       3. [...]
       4. [...]
       5. The manufacturer of the A7 chip stores every UID for
          every chip.
    

I'm a total layman, but the UID has to be created at some point, so it can be
known by someone. Wouldn't the easiest way be to just record it for every
chip? Apple wouldn't even have to know about it.

~~~
lisper
This is the fundamental problem: unless you are rolling your own silicon, at
some point you have to take some big corporation's word for it that a chip
does what they say it does. This fundamental problem is the reason that
nuclear launch codes are protected by a relatively low-tech solution:

[http://en.wikipedia.org/wiki/Gold_Codes](http://en.wikipedia.org/wiki/Gold_Codes)

~~~
weinzierl
But if this is the case, why bother with bullet points 1 to 4? The chip is
probably manufactured in China, so why worry about whether US law enforcement
can somehow decrypt the data on my phone via Apple when the Chinese government
can do it anyway?

~~~
avn2109
I fear the American government's totalitarian/police state leanings far more
than I fear the PRoC.

Though the Chinese government is undoubtedly an enthusiastic squasher of
political dissent, and their secret police are surely quite brutal, and I hate
commies with a passion, I am hardly ever likely to have a conflict with the
Chinese state.

An Unconstitutional American surveillance state is a far more immediate
problem. The NSA can break down my door tomorrow after reading my politically
unpalatable text messages, and there's nothing I can do to stop them. So if I
had to pick someone to have the keys to my phone's backdoor, I'd pick a
"hostile" foreign power any day. Though of course it would be better to have
no backdoors at all :)

~~~
jshevek
I agree with you, but I would add that: if _any_ power has the key to your
phone's backdoor, there is a chance of the key getting into the NSA's hands.

------
philip1209
> (Apple pegs such cracking attempts at 5 1/2 years for a random 6-character
> password consisting of lowercase letters and numbers. PINs will obviously
> take much less time, sometimes as little as half an hour. Choose a good
> passphrase!)

Do not use simple PIN passcodes on your phone. In particular, if you use
fingerprint access, there is no reason not to have a long, complex password.

~~~
rietta
There is an argument against using the fingerprint access and that is that a
user gives up the right of consent while in custody. If law enforcement gets a
judicial order to forcibly press the prisoner's finger to the sensor to unlock
the device, then he or she has little recourse as the right to remain silent
is not implicated. One cannot be similarly physically compelled to disclose a
code only held in his or her memory.

~~~
girvo
_> One cannot be similarly physically compelled to disclose a code only held
in his or her memory._

Are you sure? In the UK and I'm pretty sure in my native Australia, they
definitely can, under pain of "contempt of court".

~~~
rietta
That gets into key disclosure law, and it does vary by jurisdiction. Though I
am not a lawyer, it is my understanding that this is an area of dispute in the
United States with respect to the 5th Amendment, which denies the government
the power to compel anyone to testify against him or herself.

Even under a mandatory key disclosure regime, it's still a choice to remain
silent even if that means one remains jailed. That situation sure seems like a
form of torture to extract information from the incarcerated individual.

This 2012 Forbes article is a good read on the matter:
[http://www.forbes.com/sites/jonmatonis/2012/09/12/key-disclo...](http://www.forbes.com/sites/jonmatonis/2012/09/12/key-disclosure-laws-can-be-used-to-confiscate-bitcoin-assets/)

------
ghshephard
I read through the article - including the hand-wavy "Apple has never been
cracking your data" conclusion - but I don't understand what has changed
since previous versions of iOS other than more data being encrypted.

Apple claims they can't decrypt data, but the article suggests that they can
simply run the decryption on the local phone with custom firmware. Most people
choose a 4-digit PIN, and at 80 milliseconds/guess, that means Apple should be
able to crack your phone in about 13 minutes at worst.

If you use a longer passcode, your data is more secure - but I thought that
was _always_ the story with Apple.

So what, if anything, has changed (other than more data being encrypted?)

~~~
callesgg
The passcode and the PIN are not the same thing; most people don't have a
passcode.

~~~
ghshephard
Re: "The passcode and the PIN are not the same thing; most people don't have a
passcode."

On iOS I believe this is incorrect; "passcode" is another name for the PIN on
the iPhone.

[http://support.apple.com/kb/ht4113](http://support.apple.com/kb/ht4113)

You can have complex or simple passcodes, but everybody I've ever seen who
bothers to have a passcode (myself excepted) sets it to a 4-digit code (aka a
PIN). To make it worse, it's usually their ATM PIN.

------
shabble
Invasive attacks for extracting the UID depend on exactly how it's
'implemented in hardware'.

It could be a total lie, and hardwired or masked-rom per-revision (but I doubt
that, too easy to discover)

It could be in a one-time programmable block somewhere that gets provisioned
during manufacture - a flash block/eeprom with write-only access (externally
at least), or a series of blowable fuses, or even laser/EB-trimmed traces.

All of those one-time programming methods are susceptible to the person
operating the programmer recording the codes generated, although managing and
exfiltrating that much data would make it rather tricky.

The method of storage also influences how hard it is to extract through
decapping and probing/inspection.

If I had to design something like this (note: not a crypto engineer), I'd have
some enclave-internal source of entropy run through some whitening filters,
and read from it until the output meets whatever statistical randomness/entropy
checks, at which point it is used to seed & store the UID into internal EEPROM.
That way, there's no path in _or_ out for the key material, except when it's
already applied via one of the supported primitives.
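
That provisioning flow might look something like this sketch (purely illustrative; every name here is hypothetical, and a monobit count stands in for real statistical test suites):

```python
import hashlib, os

def passes_entropy_checks(block: bytes) -> bool:
    # Toy check: rough monobit balance. Real silicon would run a battery
    # of NIST SP 800-22-style statistical tests on the raw source.
    ones = sum(bin(b).count("1") for b in block)
    return abs(ones - len(block) * 4) < len(block)

def provision_uid() -> bytes:
    while True:
        raw = os.urandom(64)                  # stand-in for the on-die TRNG
        uid = hashlib.sha256(raw).digest()    # whitening filter
        if passes_entropy_checks(uid):
            return uid                        # burned into internal EEPROM;
                                              # never readable from outside

uid = provision_uid()
assert len(uid) == 32
```

The point of the design is the last comment: the UID is born inside the enclave and only ever *used*, never exported, so there is nothing for the factory operator to record.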

Then you need to protect your secrets! A couple of layers of active masks (can
they do active resistance/path-length measurements instead of just continuity
yet? That would annoy the 'bridge & mill' types :)). Encrypted buses, memory,
and round-the-houses routing are also par for the course, but I'm sure they
too could be improved on.

IIRC there was someone on HN who was working for a TV smartcard manufacturer
who was reasonably confident they'd never been breached. Curious what he'd
have to say (without an NDA :) )

~~~
shabble
Edit: I wonder if anyone is mask/shielding against back-side attacks yet? I
did like the buried light-sensors dotted around some hi-sec core I was
watching, although they didn't end up being particularly useful.

My understanding here is that this Enclave is just a specific part of the
overall die, so they're somewhat constrained in the crazy-fabtech methods they
might otherwise be able to consider.

------
josefonseca
> Apple doesn't use scrypt. Their approach is to add a 256-bit device-unique
> secret key called a UID to the mix, and to store that key in hardware where
> it's hard to extract from the phone. Apple claims that it does not record
> these keys nor can it access them.

Technically, this is where it breaks down. As in "Trust me, I don't store the
keys."

If that hypothesis is true (they don't store these keys), then they'll indeed
have a hard time breaking your encryption. But you must trust Apple on that
point.

If there was a way to buy an anonymously replaceable chip with this
cryptographic key in it and replace it on the phone like a SIM, then we'd be
much closer to stating "Apple can't decrypt your phone".

~~~
tptacek
Right. We'd instead be having a discussion about how Atmel can decrypt your
phone.

~~~
josefonseca
It was just an example how you could detach the secret and the device made by
Apple. I'm sure there are better ideas for that. The way this is configured
now Apple _can_ decrypt any phone, which voids the argument made in the OP.
But I can understand why this will be the unpopular opinion here.

------
bjornsing
> Secure Enclave allows firmware updates -- but before doing so, the Secure
> Enclave will first destroy intermediate keys. Firmware updates are still
> possible, but if/when a firmware update is requested, you lose access to all
> data currently on the device.

Given that the end-user has entered the passcode it shouldn't be hard to
retain the data: after upgrading the Secure Enclave firmware simply unencrypt
all data using the old key and reencrypt it using the new key (derived from
same passphrase but a new UID).

You can also use a "two stage" approach where the encryption key derived in
hardware is only used to protect a secondary key. In this case you just
reencrypt this secondary key which in turn protects the data.
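
A minimal sketch of that two-stage rewrap, assuming a hypothetical UID+passcode derivation (XOR stands in for real key wrapping; none of this is Apple's actual scheme):

```python
import hashlib, os

def kdf(uid: bytes, passcode: bytes) -> bytes:
    # Stand-in for the tangled UID+passcode key derivation.
    return hashlib.pbkdf2_hmac("sha256", passcode, uid, 10_000)

xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

passcode = b"123456"
old_uid, new_uid = os.urandom(32), os.urandom(32)
media_key = os.urandom(32)          # secondary key that encrypts the bulk data

wrapped = xor(media_key, kdf(old_uid, passcode))    # at rest, before update

# Firmware update with the passcode present: unwrap under the old derivation,
# rewrap under the new one. The bulk data never needs re-encryption.
recovered = xor(wrapped, kdf(old_uid, passcode))
rewrapped = xor(recovered, kdf(new_uid, passcode))
assert xor(rewrapped, kdf(new_uid, passcode)) == media_key
```

Only the small wrapped blob is rewritten, which is why the two-stage design makes the migration cheap.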

------
higherpurpose
Is Apple's "Secure Enclave" anything more than ARM's TrustZone?

[http://www.arm.com/products/processors/technologies/trustzon...](http://www.arm.com/products/processors/technologies/trustzone/index.php)

~~~
vvhn
Contrary to speculation (there are whole articles which "explain" the Secure
Enclave to be ARM TrustZone), the Secure Enclave is documented (only very
recently) to be a _separate_ chip inside the A7 running its own L4-based
microkernel. (From
[https://www.apple.com/privacy/docs/iOS_Security_Guide_Sept_2...](https://www.apple.com/privacy/docs/iOS_Security_Guide_Sept_2014.pdf))

"The Secure Enclave is a coprocessor fabricated in the Apple A7 or later
A-series processor. It utilizes its own secure boot and personalized software
update separate from the application processor. It provides all cryptographic
operations for Data Protection key management and maintains the integrity of
Data Protection even if the kernel has been compromised.

The Secure Enclave uses encrypted memory and includes a hardware random number
generator. Its microkernel is based on the L4 family, with modifications by
Apple. Communication between the Secure Enclave and the application processor
is isolated to an interrupt-driven mailbox and shared memory data buffers."

~~~
bryanlarsen
That's pretty much exactly how AMD implements TrustZone.
[http://www.anandtech.com/show/6007/amd-2013-apus-to-include-...](http://www.anandtech.com/show/6007/amd-2013-apus-to-include-arm-cortexa5-processor-for-trustzone-capabilities)

~~~
adsr
It sounds more like they are using a Cortex-A5 to gain access to TrustZone
with an existing x86 core.

~~~
bryanlarsen
And it sounds like Apple is using a separate unspecified ARM processor
(probably a Cortex-A5 since that's the cheapest possible one) to gain access
to an existing A7 or A8 core.

~~~
adsr
In Apple's case, they use the ARM ISA but implement their own
microarchitecture, and from vvhn's comment it seems they also use a
co-processor specifically for the Secure Enclave. But the link above on the
TrustZone hardware architecture mentions that this isn't a requirement.

"TrustZone enables a single physical processor core to execute code safely and
efficiently from both the Normal world and the Secure world. This removes the
need for a dedicated security processor core, saving silicon area and power,
and allowing high performance security software to run alongside the Normal
world operating environment."

I guess since Apple use the ARM ISA, it's still binary compatible with ARM but
with a different implementation. AMD uses an x86/ARM hybrid where the ARM part
is an off-the-shelf Cortex-A5 which already contains TrustZone.

~~~
bryanlarsen
I highly doubt they use their own micro architecture. It'd be a lot cheaper to
license Cortex-A5. Using their own micro architecture for the main processor
gives them a huge competitive advantage. For the security co-processor, COTS
would work fine.

------
hadoukenio
What's stopping Apple from being coerced by the PTA (Peeping Tom Agencies)
from installing an update to bypass the lock for specific targets?

~~~
foobarqux
Nothing.

------
higherpurpose
> _Apple has built a nuclear option. In other words, the Secure Enclave allows
> firmware updates -- but before doing so, the Secure Enclave will first
> destroy intermediate keys. Firmware updates are still possible, but if /when
> a firmware update is requested, you lose access to all data currently on the
> device._

That seems ideal. Let's hope Apple actually does that (probably not).

------
venomsnake
So the key is derived from the passcode? Isn't that 5 digits that are easy to
brute force?

~~~
tzs
5 digits would be easy: it would take a little over an hour on average.
However, passcodes are not limited to digits. Use upper and lower case letters
and digits, and then a 5-character passcode would take on average a little
over a year. Make it 6 characters, and that's 72 years.
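
These estimates check out at 80ms per serial attempt (62 = 26 lowercase + 26 uppercase + 10 digits):

```python
# Average time to brute force: half the keyspace at 80 ms per attempt.
avg_secs = lambda n: (n / 2) * 0.080
year = 365.25 * 24 * 3600

print(avg_secs(10**5) / 3600)   # 5 digits: ~1.1 hours
print(avg_secs(62**5) / year)   # 5 alphanumeric chars: ~1.2 years
print(avg_secs(62**6) / year)   # 6 alphanumeric chars: ~72 years
```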

~~~
ghshephard
I'm guessing, based on my observations of 40-50 people entering passcodes,
that north of 90% of people who bother to use a passcode (or care to) use a
4-digit PIN.

Even those who are super security conscious tend to just use numeric PINs.
It's the very, very rare individual who enters an alphanumeric passcode.

------
yeukhon
Two questions:

* Regarding the fixed 80ms timing: has there been a study on the average time needed (aside from why 80ms instead of 70ms or 90ms)? I also want to ask for clarification: where is the entire PBKDF2-AES done? On the AES engine (which I believe is part of the A7 chip)? On a TPM chip (which might be a NO based on an unauthenticated source [1])?

* So this UID is created in every device and stored in the Secure Enclave, which has a dedicated path to the AES engine. But can we conduct any side-channel attack? I am pretty noob with hardware security.

------
SynchrotronZ
I wonder, is it in the realms of possibility for big-budget organizations like
the NSA to simply read the UID from the silicon by means of physical analysis
(e.g. a scanning tunneling microscope)?

~~~
rosser
It's very probably within the realms of _possibility_ , yes.

It's very probably _not_ within the realms of _practicality_ just yet,
however.

~~~
shabble
Go check out some conference presentations by Christopher Tarnovsky. He's made
a career out of it, and acquired some very expensive toys (focused ion-beam
equipment doesn't come cheap), but there are lectures of his explaining how he
broke the (iirc) STMicro TPM chips _for fun_. These sorts of devices have all
sorts of countermeasures against direct invasive attacks like these, but with
enough cash and bricked test phones, I'd be greatly surprised if it wasn't
entirely practical.

The only issue would be making the process so 100% reliable that you succeed
first time, because a single mistake or misunderstanding could trash the
single copy you have of the code.

I'm curious now if flylogic or chipworks have done any serious teardown of the
'secure enclave' stuff.

~~~
ghshephard
If the iPhone actually does become very popular, particularly with terrorists,
it's hard to imagine that the NSA won't just go develop this capability
internally.

"enough cash and bricked test phones" - the great thing about this is that you
can just buy the $650 phones. You can get a thousand of them for less than a
million dollars, which is probably under your typical line manager's budget in
the NSA TechOps group.

And, let's be realistic, Apple isn't trying to defend against the NSA or
nation states, just your average hacker without access to $100mm+ in hardware.

------
triski
The security is based on the premise that Apple is unable to decrypt because
they do not keep a record of the device's unique ID that is the base of all
the cryptography.

What if that is not true? What if the device has a built-in keylogger to just
capture all the crypto material from the user's input, be it a passcode or a
fingerprint?

Wouldn't it be partly better if this were based on truly public-key
cryptography, with a random private key generated each time the device is
factory reset?

------
tkinom
With all the talk about super crypto tech used in the iPhone, isn't cloning a
phone's data as simple as declaring your phone lost, paying a mobile phone
store clerk for a new phone, and resetting the password for the iCloud
account? The new iPhone would have access to all your info, pictures, and
message logs in 1-2 hours.

Isn't this something any reasonable PI, police officer, FBI agent, or hacker
could easily social engineer via legal and/or illegal means?

------
vexelized
What about iCloud? It used to be that a user could reset their password and
then restore from iCloud backup on a new phone... Is this no longer true?

~~~
nemothekid
From what I can understand, if you wanted to hypothetically maximize your
security, it would mean turning off iPhone backups.

Apple could also require the iPhone passphrase to restore a backup, but
obviously those can be "easily" brute forced (because for the restore to work,
you must be able to bypass the old device's UID).

~~~
giovannibajo1
You can still fully back up your phone through iTunes, if you want to avoid
iCloud backups, until those get encrypted as well.

------
pronoiac
If you're wondering when Secure Enclave first appeared, I looked up the A7
processor. It's in the iPhone 5S, the iPad Air, and the iPad Mini (2nd
generation).

------
moe
So the article's answer to "Why can't Apple decrypt your iPhone?" is: "Because
Apple says that no software can extract the UID".

In our post-Snowden world this is just ridiculous and intellectually
insulting. The author is either naive beyond belief or he got paid to write
this PR shill piece.

cf. [https://gigaom.com/2014/09/18/apples-warrant-canary-disappea...](https://gigaom.com/2014/09/18/apples-warrant-canary-disappears-suggesting-new-patriot-act-demands/)

------
PaulSec
And what about a backdoor? The code is not open-source. Just saying...

~~~
Alupis
I'm skeptical as well. Seems to me that so many things on your phone are
talking to Apple (and other 3rd parties) anyways, that this might not even
matter?

Although the FBI seems to be not very happy about this (if it's not just "for
show" that is)[1]. The FBI is using the age-old "Save/Protect the children"
argument, literally.

[1]
[http://www.washingtonpost.com/business/technology/2014/09/25...](http://www.washingtonpost.com/business/technology/2014/09/25/68c4e08e-4344-11e4-9a15-137aa0153527_story.html)

~~~
waps
Never mind that none of this matters if you have unlocked the phone and it's
on (the default protection policy is "protect until first unlock", which
happens right after boot). That's got to be 99.9% of cases. Once the police or
Apple have a locked phone, all bets are off: Apple can just install an app
remotely that gets them past the lockscreen, and unless this is from a cold
boot, you have access to all apps and all data. This includes access to e.g.
the logs of any configured Skype session, the company mails...

Add to that the usual closed software problems. Apple says they don't have a
specific backdoor anymore (!), and they won't let you audit anything.

~~~
jshevek
>> Apple says they don't have a specific backdoor anymore (!), and they won't
let you audit anything.

Yes. It's amazing to me how eager some people are to take a corporation's
spokespeople at their word.

------
antocv
Don't worry, the NSA can.

------
simonh
All the criticisms I've yet seen of Apple's iMessage security come down to
"yes, it's probably completely locked down now and for all historical
messages, but here's this obscure way they could open it up for messages in
the future, therefore it's not secure".

Well, duh! It's their software. Of course they could backdoor it in the
future, such as if required to by the government. That's true of any software.
Apple are asserting that right now there are no such backdoors and iMessages
are secure. I've not seen any credible argument that this is not the case
other than "maybe they're lying". OK. What's the alternative? Run everything
through OpenSSL? That didn't work out so well. Maybe we should run everything
on Linux using Bash scripts. Oops again!

Maybe Apple are lying. Maybe they will sell us all out. But if they do, these
things always have a tendency to come out in the open eventually. So far
they've had a pretty good track record of being on the level. In the end it's
their reputation, and their appreciation of its value, that is the best and
really the only guarantee we have, as with anyone else we rely on.

