
Author isn't very clever about crypto attacks.

The sending device grabs all of the recipient's public keys (as well as all of the sender's own keys for their other devices, which lets the conversation be replicated across those devices too) from Apple. The sending device has no way to verify those keys belong to the intended recipient. The user has no way to verify which devices, or how many, they are sending to. The user doesn't even know if the recipient is mysteriously using a different key that has never been seen before. The sending device does not display any information about how many keys it grabs.

Apple wants to read your messages? They drop one of their public keys in the list. Apple gets a warrant? They drop the FBI's key in the list. You'll never know that you're CCing the FBI device keys on all of your messages.

What's more, these keys are provided by Apple over TLS without certificate pinning. So anyone who can mint certificates from a CA trusted by the device can simply assume Apple's position. You don't need to hack or legally compel Apple in order to eavesdrop.

If your iDevice is managed by your company IT department, it can be silently fed a certificate without compromising a CA.[1]

Finally, if you did not apply the goto fail update from a few days ago, it's trivial to break that TLS channel and "misconfigure" those keys. That hole has been there since September 19, 2012, by the way.

Basically, iMessage has been securing you against someone who knows how to run wireshark or tcpdump, but not much else.

[1] http://blog.quarkslab.com/imessage-privacy.html




>Apple wants to read your messages? They drop one of their public keys in the list. Apple gets a warrant? They drop the FBI's key in the list.

If they were doing this, it would come out real quick. You'd just send a message to a different account you control, then see how many keys you're getting and how many encrypted copies you're sending. Someone like Applebaum, who knows he's under surveillance and has the crypto/networking chops to dig into it, could verify it quite quickly.
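The check described above can be sketched in a few lines. Everything here is hypothetical: `query_directory` stands in for whatever lookup API Apple's servers actually expose (in practice you'd capture the response off the wire), and the key names are made up.

```python
# Sketch of the detection idea: fetch the key list for an account you
# control and compare it against the devices you actually own.
def query_directory(account):
    # Stand-in for Apple's (undocumented) key lookup; pretend response.
    return ["key-my-iphone", "key-my-macbook", "key-unexpected"]

my_devices = {"key-my-iphone", "key-my-macbook"}
fetched = set(query_directory("me@example.com"))
extra = fetched - my_devices
if extra:
    print("directory returned keys for devices I don't own:", extra)
```

The catch, as the thread goes on to note, is that this only detects blanket interception; a targeted attacker would serve the extra key only to the surveilled party's contacts, not to researchers probing their own accounts.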


GP's point is sound: the weakness in public-key crypto is key exchange. Unless you can independently verify that the keys you're getting from Apple belong to the intended recipient, the system doesn't work. You don't even have to add a key to the pool; you simply replace a key in the pool with one generated by Apple. When the message arrives at the server, it's decrypted and sent to anyone at all, then re-encrypted with the recipient's key and sent along as normal. Even signatures aren't really a barrier here, since Apple can MITM your public key (used to verify the signature) on its way to the recipient as well.

Any crypto system whose security is predicated on a trusted server might as well be compromised. It's way too easy for servers to be subverted, either technologically or (il)legally.
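The substitution attack is easy to see with a toy key exchange. This is a deliberately insecure, small-number Diffie-Hellman sketch (not iMessage's actual scheme) purely to show why an unverifiable key lookup hands the directory server a MITM position:

```python
# Toy Diffie-Hellman MITM: if Alice can't verify that the key she
# fetched really belongs to Bob, the directory can substitute its own.
P, G = 2087, 5  # tiny prime and generator -- NOT real crypto

def pub(priv):
    return pow(G, priv, P)

def shared(priv, their_pub):
    return pow(their_pub, priv, P)

alice_priv, bob_priv, server_priv = 17, 29, 41

# Honest exchange: both sides derive the same secret.
assert shared(alice_priv, pub(bob_priv)) == shared(bob_priv, pub(alice_priv))

# Malicious directory: it hands Alice its own key, labeled "Bob's key".
key_alice_sees = pub(server_priv)
alice_secret = shared(alice_priv, key_alice_sees)

# The server can now decrypt everything Alice sends...
server_secret_with_alice = shared(server_priv, pub(alice_priv))
assert alice_secret == server_secret_with_alice

# ...and re-encrypt it to the real Bob, who notices nothing.
server_secret_with_bob = shared(server_priv, pub(bob_priv))
assert server_secret_with_bob == shared(bob_priv, pub(server_priv))
```

Nothing in the protocol lets Alice distinguish the two runs, which is exactly the point: without out-of-band key verification, the directory's word is all she has.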


If you have access to the private keys (which I'm assuming, since I'm not talking about users trusting the system but rather researchers identifying malicious use), then it's trivially easy to verify that the public keys being used are those, and only those, that correspond to the correct devices.

It's not a good system, but it's also not one that is impervious to detection of malicious activity.


Right, but isn't the point that they could do it in a targeted fashion? Granted, it beats the dragnet situation we have now, but it's not quite at the point of "Apple couldn't intercept your messages even if they wanted to".


Granted. That said, "Apple couldn't undetectably intercept your messages even if they wanted to" is surprisingly close.


See rpdillon's comment above.


Well, classic key distribution problem is classic. You have to anchor your trust somewhere. And whatever you choose, somebody will complain.

Cue web of trust PSA.


Well, maybe not. Web of Trust has serious scaling issues. https://bitcointalk.org/oldSiteFiles/byzantine.html

Blockchain based key distribution may let you anchor trust to a decentralized process.


I am really curious why/how you think the Byzantine link relates to key distribution and the web of trust?


The Byzantine trust issue is really just about everyone agreeing on a value. A handle/name and public-key pair is such a value. We can distribute public keys in a blockchain. This pushes MITM attacks down to the last mile between a blockchain server and its client, and to the identifier exchange. The user can run their own blockchain server and authenticate via sneakernet if need be, and we can at least be assured that you are communicating with the person whose handle matches the key (even if it might not be the person we think it is).

--------------------

A blockchain makes a good backend for a web of trust because it lets us use the above to link an identifier with a public key and server. One of the primary scaling issues with web-of-trust-based authentication is that names get long fast: PersonA.PersonB.PersonC.Otherguy. A blockchain allows for a universal naming scheme and a central place to store high-confidence links, which is faster than querying the peers I trust and asking if they have a link to an arbitrary node.


I can't understand anything that you wrote. Yes, you can distribute keys in a blockchain. After Alice grabs a key out of the blockchain, how does she know it is Bob's key and not Eve's?


This is a better summary than I can hack together: http://namecoin.info/

I have issues with their implementation, but in general the message is right.


How does Alice attest to Carol's key so that Bob (who trusts Alice) can identify/use Carol's key for communication?


Alice just signs Carol's name. Bob looks up Carol's name in the blockchain. The primary application is TLS-style authentication, so the user ideally already knows the name of their intended recipient.
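That lookup could be sketched like this. A toy append-only, hash-chained log stands in for the blockchain, and the `signer` field stands in for a real public-key signature (a real system would verify actual signatures); all names and fingerprints are made up:

```python
import hashlib
import json

# Toy Namecoin-style name registry: an append-only, hash-chained log.
chain = []

def publish(name, key_fpr, signer):
    """Append a record attesting that `name` owns key `key_fpr`."""
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"name": name, "key": key_fpr, "signer": signer, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def lookup(name, trusted_signers):
    """Verify the hash chain, then return a key attested by a trusted peer."""
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        assert digest == rec["hash"] and rec["prev"] == prev, "tampered chain"
        prev = rec["hash"]
    for rec in chain:
        if rec["name"] == name and rec["signer"] in trusted_signers:
            return rec["key"]
    return None

publish("carol", "fpr-carol-001", signer="alice")
# Bob trusts Alice, so he accepts Alice's attestation of Carol's key.
carols_key = lookup("carol", trusted_signers={"alice"})
```

Note this still only answers "which key goes with this name," not "is this name the person I think it is," which is the objection raised above.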


I found a comment on this blog [1] by someone claiming to have reverse engineered the iMessage protocol. [2]

In his comment, he says: "OS X 10.9 (and probably also iOS 7) does certificate pinning for the push connection, I don't know about the directory lookup."

[1] http://blog.cryptographyengineering.com/2013/06/can-apple-re...

[2] https://github.com/meeee/pushproxy/blob/master/doc/apple-pus...


Users get both a push notification and email when a new device is added to the key bag. So additionally this step would need to be maliciously skipped.


That would only be the case if someone does it by logging in to your iCloud account, which applies to none of these attack scenarios. Even then, it only notifies the owner of that account, not the party who is sending messages to unverified keys.

If you're spoofing the keyserver, it's invisible to Apple. If you're the FBI and serving a warrant on Apple, Apple does not send the message telling you that "FBI's MacBook Pro has logged in to your iCloud Account."


Pardon my ignorance not being a security expert, but given that all endpoints here (software and hardware) are Apple-controlled, just how trivial is spoofing a keyserver? Genuinely curious.


Assuming (perhaps incorrectly) that Apple does not use certificate pinning for its iMessage servers, and that the adversary controls a trusted CA (likely for secretive government agencies) and a router somewhere between the device and Apple (also likely for government agencies), it's trivial.

It's a simple matter of redirecting traffic using something like iptables.


Even more trivial if the iOS device is managed by an enterprise admin, who can tell the device to trust any arbitrary certs without the user ever knowing.


Based on the quarkslab link upthread, Apple wasn't using certificate pinning for iMessage at least as recently as October 2013. Even worse, also as of that article, Apple was sending the AppleID and cleartext password over this MITM-vulnerable SSL connection as part of iMessage login.


And since Apple owns the infrastructure and we can't validate anything about it, there is little reason to believe they couldn't maliciously skip steps.


Actually, is it not possible to do a count on the key bag?


A custom-written third party client could theoretically do this (and mitigate most of the other attacks here). What Apple provides won't help with this though.

Think of an SSH client that never verifies host keys and never warns you when they change. That's basically what iMessage is.
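The missing SSH-style behavior is just trust-on-first-use key pinning. A hypothetical client-side sketch (fingerprints are made up; a real client would refuse to send, or at least warn loudly, when new keys appear):

```python
# Trust-on-first-use (TOFU) pinning, the check iMessage's client skips.
known_keys = {}  # recipient -> set of pinned key fingerprints

def check_keys(recipient, fetched_keys):
    """Return (ok, new_keys). new_keys being non-empty after first
    contact means the directory served a key we've never seen."""
    pinned = known_keys.setdefault(recipient, set())
    new = set(fetched_keys) - pinned
    if not pinned:               # first contact: pin whatever we see
        pinned.update(fetched_keys)
        return True, new
    pinned.update(fetched_keys)  # remember the new keys, but flag them
    return not new, new

ok_first, _ = check_keys("bob", ["key-bob-iphone"])
ok_second, extra = check_keys("bob", ["key-bob-iphone", "key-unknown"])
# ok_first is True; ok_second is False because "key-unknown" appeared.
```

TOFU doesn't stop a MITM who is present from the very first message, but it converts silent ongoing interception into a visible key-change event, which is exactly the property being asked for here.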


Even a custom-written third party client still doesn't eliminate the requirement to trust Apple. They're delivering your closed source operating system, which could very easily include hostile behaviors like key logging and screen recording.

At least with iMessage, the entire ecosystem is managed by one company, so you only have to trust one company.


> Even a custom-written third party client still doesn't eliminate the requirement to trust Apple. They're delivering your closed source operating system, which could very easily include hostile behaviors like key logging and screen recording.

It actually does eliminate the requirement to trust Apple. That's the point of (real) end-to-end encryption. With a custom client, you could use them as a conduit to deliver your message without using any Apple software on either end if you wanted to, as long as the users can verify the keys.

> At least with iMessage, the entire ecosystem is managed by one company, so you only have to trust one company.

I just explained at least 3 different reasons why that is not true. You can fully "trust" Apple and still compromise an iMessage conversation.


>> Even a custom-written third party client still doesn't eliminate the requirement to trust Apple. They're delivering your closed source operating system, which could very easily include hostile behaviors like key logging and screen recording.

> It does eliminate the requirement to trust Apple actually.

As the parent comment indicated (emphasis added), in an extreme hypothetical scenario, Apple could (be compelled to) write a backdoor specifically for your device that captures all text and touch input events, or periodically takes screenshots and sends them somewhere.


You're missing the part where "custom client" means you aren't using an Apple device at all, if you don't want to be.

This is like sending a PGP message over Gmail using mutt on Linux via IMAP. Google owns Gmail, but they can't backdoor you or read your message in this scenario. Any non-broken secure messaging protocol should have this property, regardless of whether people use it that way. This is what end-to-end encryption means.


> Any non-broken secure messaging protocol should have this property

Okay then, tell me how you could build a system that

* Guarantees secure transmission of public keys

* Can be installed safely on a closed-source operating system

* Is impervious to the whims of the hardware vendor or the operating system vendor

* Is sufficiently straightforward that mums everywhere will happily use it

* Doesn't require users to become security experts

Given the practical realities Apple has to contend with, I think they've done a pretty good job -- certainly a lot better than what anyone could have expected, arguably a lot better than anything comparable that has come before it.


No, Apple has definitely done worse than what we have expected. Worse, they haven't provided what they claimed: a secure end-to-end messaging protocol.

This discussion about backdooring applications isn't even interesting or relevant here. The protocol itself is not secure.


Well you can kind of say that about the entire internet. Possibly "Evil" and/or compromised companies control everything, right?


At a certain point people who are worried about the 'much else' part have to take personal responsibility for their paranoia. Apple cannot prove they aren't reading your iMessages. Maybe they are beaming them into deep space because they want to help aliens come get you. Who knows? It's hard to disprove. Anyone who reaches that level of paranoia should not be using these services.


Apple cannot prove they aren't reading your iMessages.

Sure they could. They could change the protocol and UI so that the users could verify and pin keys, and they could publish the iMessage source[1] to allow people to check it and compile it themselves.

Whether they should be expected to do these things is a different issue, but they could.

[1] not necessarily under a liberal license - for example, PGP was proprietary but allowed people to download its sources


They could publish the source, but how do you know that source is running on your iPhone?


Compile it yourself, or get some trusted party to do it for you, and sideload it. "But you can't sideload iOS apps!" Well, then we're back to what Apple could do.


Which was my point...


What was your point? All I said was that they could prove it. If allowing sideloading is required, so what?


In reply to:

    Apple cannot prove they aren't reading your iMessages.
You said:

    Sure they could. They could change the protocol and UI so that the users could verify and pin keys, and they could publish the iMessage source[1] to allow people to check it and compile it themselves.
My point is that publishing the source is not enough since you cannot verify that is the source running on your iPhone. Even if we concede that Apple could let you sideload your own compiled binary (which is not the case today), you still couldn't prove it since you couldn't know that the iPhone is running your binary and not a backdoored binary in the OS.

None of which is to say that Apple should have to prove any of this. If you cannot trust your phone vendor, then you cannot use any function of that phone, be it purportedly-secure messaging or location services or even the web browser.



