Apple Explains How Secure iMessage Is (techcrunch.com)
202 points by prateekj on Feb 27, 2014 | 120 comments



Author isn't very clever about crypto attacks.

The sending device grabs all of the recipient's public keys (as well as all of the sender's own keys for their other devices, which allows the conversation to be replicated across those devices too) from a directory hosted by Apple. The sending device has no way to verify those keys belong to the intended recipient. The user has no way to verify which devices, or how many, they are sending to. The user doesn't even know if the recipient is mysteriously using a key that has never been seen before. The sending device does not display any information about how many keys it grabs.

Apple wants to read your messages? They drop one of their public keys in the list. Apple gets a warrant? They drop the FBI's key in the list. You'll never know that you're CCing the FBI device keys on all of your messages.

What's more, these keys are provided by Apple over TLS without certificate pinning. So anyone who can mint certificates from a CA trusted by the device can simply assume Apple's position. You don't need to hack Apple or legally compel it in order to eavesdrop.

If your iDevice is managed by your company IT department, it can be silently fed a certificate without compromising a CA.[1]

Finally, if you did not apply the goto fail update from a few days ago, it's trivial to break that TLS channel and also "misconfigure" those keys. That hole has been there since September 19, 2012, by the way.

Basically, iMessage has been securing you against someone who knows how to run wireshark or tcpdump, but not much else.

[1] http://blog.quarkslab.com/imessage-privacy.html
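The key-injection attack described here can be sketched as a toy model. The crypto is stubbed out and all names are invented for illustration (this is not Apple's API); the point is the protocol flow: the sender encrypts one copy of the message per public key the directory returns, with no way to tell whether every key really belongs to the recipient.

```python
def encrypt(message, pubkey):
    # Stand-in for real public-key encryption.
    return {"to_key": pubkey, "ciphertext": f"E[{pubkey}]({message})"}

class Directory:
    def __init__(self):
        self.keys = {"bob": ["bob-iphone-key", "bob-ipad-key"]}
        self.extra = []  # keys a malicious operator slips in

    def lookup(self, account):
        return self.keys[account] + self.extra

def send(directory, account, message):
    # The sender blindly encrypts to every key the directory hands back.
    return [encrypt(message, k) for k in directory.lookup(account)]

d = Directory()
print(len(send(d, "bob", "hi")))    # 2 copies: one per legitimate device

d.extra.append("eavesdropper-key")  # warrant served, key injected
print(len(send(d, "bob", "hi")))    # 3 copies; the sender's UI shows nothing
```

Since the client never surfaces the key count, the third ciphertext is indistinguishable from a new iPad.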


>Apple wants to read your messages? They drop one of their public keys in the list. Apple gets a warrant? They drop the FBI's key in the list.

If they were doing this, it would come out real quick. You'd just send a message to a different account you control and then see how many keys you're getting and how many encrypted copies you're sending. Someone like Appelbaum, who knows he's under surveillance and has the crypto/networking chops to dig into it, could verify it quite quickly.


GP's point is sound: the weakness in public key crypto is key exchange. Unless you can independently verify that the keys you're getting from Apple belong to the intended recipient, the system doesn't work. You don't even have to add a key to the pool, you simply replace a key in the pool with a key generated by Apple, and when the message arrives on the server, it's decrypted and sent to anyone at all, and then re-encrypted with the recipient's key and sent along as normal. Even signatures aren't really a barrier in this case, since Apple can MITM your public key (used to verify the signature) to the recipient, as well.

Any crypto system whose security is predicated on a trusted server might as well be compromised. It's way too easy for servers to be subverted, either technologically or (il)legally.


If you have access to the private keys (which I'm assuming, since I'm not talking about users trusting the system but rather researchers identifying malicious use), then it's trivially easy to verify that the public keys being used are those, and only those, that correspond to the correct devices.

It's not a good system, but it's also not one that is impervious to detection of malicious activity.


Right, but isn't the point that they could do it in a targeted fashion? Granted, it beats the dragnet situation we have now, but it's not quite at the point of "Apple couldn't intercept your messages even if they wanted to".


Granted. That said, "Apple couldn't undetectably intercept your messages even if they wanted to" is surprisingly close.


See rpdillon's comment above.


Well, classic key distribution problem is classic. You have to anchor your trust somewhere. And whatever you choose, somebody will complain.

Cue web of trust PSA.


Well, maybe not. Web of Trust has serious scaling issues. https://bitcointalk.org/oldSiteFiles/byzantine.html

Blockchain based key distribution may let you anchor trust to a decentralized process.


I am really curious why/how you think the byzantine link relates to key distribution and the web of trust?


The Byzantine trust issue is really just about all agreeing on a value. A handle/name and public key pair is such a value. We can distribute public keys in a blockchain. This pushes MITM attacks down to the last mile between a blockchain server and client, and to the identifier exchange. The user can run their own blockchain server and authenticate via sneakernet if need be, and we can at least be assured that you are communicating with the person whose handle matches the key (even if it might not be the person we think it is).

--------------------

A blockchain makes a good backend for a web of trust because it lets us use the above to link an identifier with a public key and server. One of the primary scaling issues with web-of-trust-based authentication is that names get long fast: PersonA.PersonB.PersonC.Otherguy. A blockchain allows for a universal naming scheme and a central place to store high-confidence links, which is faster than querying the peers I trust and asking if they have a link to an arbitrary node.
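The core rule behind this kind of blockchain naming (first come, first served registration) can be shown in a few lines. A real system would store these records in a blockchain so that no single server controls the mapping; this toy just enforces the registration rule, and the names are made up.

```python
class NameRegistry:
    def __init__(self):
        self._records = {}

    def register(self, name, pubkey):
        # First registration wins; later claims on the same name are
        # rejected, which is what stops Eve from overwriting Bob's entry.
        if name in self._records:
            raise ValueError(f"{name!r} is already registered")
        self._records[name] = pubkey

    def lookup(self, name):
        return self._records[name]

reg = NameRegistry()
reg.register("bob", "bob-public-key")
print(reg.lookup("bob"))                   # bob-public-key

try:
    reg.register("bob", "eve-public-key")  # Eve tries to squat Bob's name
except ValueError as e:
    print(e)
```

Note what this does and doesn't give you: Alice learns the key of whoever registered "bob" first, not necessarily the key of the Bob she has in mind, matching the caveat above.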


I can't understand anything that you wrote. Yes, you can distribute keys in a blockchain. After Alice grabs a key out of the blockchain, how does she know it is Bob's key and not Eve's?


This is a better summary than I can hack together: http://namecoin.info/

I have issues with their implementation, but in general the message is right.


How does Alice attest to Carol's key so that Bob (who trusts Alice) can identify/use Carol's key for communication?


Alice just signs Carol's name. Bob looks up Carol's name in the blockchain. The primary application is TLS-style authentication, so the user ideally already knows the name of their intended recipient.


I found a comment on this blog [1] by someone claiming to have reverse engineered the iMessage protocol. [2]

In his comment, he says: "OS X 10.9 (and probably also iOS 7) does certificate pinning for the push connection, I don't know about the directory lookup."

[1] http://blog.cryptographyengineering.com/2013/06/can-apple-re...

[2] https://github.com/meeee/pushproxy/blob/master/doc/apple-pus...


Users get both a push notification and email when a new device is added to the key bag. So additionally this step would need to be maliciously skipped.


That would only be the case if someone does it by logging in to your iCloud account, which applies to none of these attack scenarios. Even then, it only notifies the owner of that account, not the party who is sending messages to unverified keys.

If you're spoofing the keyserver, it's invisible to Apple. If you're the FBI and serving a warrant on Apple, Apple does not send the message telling you that "FBI's MacBook Pro has logged in to your iCloud Account."


Pardon my ignorance not being a security expert, but given that all endpoints here (software and hardware) are Apple-controlled, just how trivial is spoofing a keyserver? Genuinely curious.


Assuming (perhaps incorrectly) that Apple does not use certificate pinning for its iMessage servers, and that the adversary controls a trusted CA (likely for secretive government agencies) and a router somewhere between the device and Apple (also likely for government agencies), it's trivial.

It's a simple matter of redirecting traffic using something like iptables.


Even more trivial if the iOS device is managed by an enterprise admin, who can tell the device to trust arbitrary certs without the user ever knowing.


Based on the quarkslab link upthread, Apple wasn't using certificate pinning for iMessage at least as recently as October 2013. Even worse, also as of that article, Apple was sending the AppleID and cleartext password over this MITM-vulnerable SSL connection as part of iMessage login.


And since Apple owns the infrastructure and we can't validate anything about it, there is little reason to believe they couldn't maliciously skip steps.


Actually, is it not possible to do a count on the key bag?


A custom-written third party client could theoretically do this (and mitigate most of the other attacks here). What Apple provides won't help with this though.

Think of an SSH client that never verifies host keys and never warns you when they change. That's basically what iMessage is.


Even a custom-written third party client still doesn't eliminate the requirement to trust Apple. They're delivering your closed source operating system, which could very easily include hostile behaviors like key logging and screen recording.

At least with iMessage, the entire ecosystem is managed by one company, so you only have to trust one company.


> Even a custom-written third party client still doesn't eliminate the requirement to trust Apple. They're delivering your closed source operating system, which could very easily include hostile behaviors like key logging and screen recording.

It does eliminate the requirement to trust Apple actually. That's the point of (real) end to end encryption. With a custom client, you could use them as a conduit to deliver your message without using any Apple software on either client if you wanted to. As long as the users can verify the keys.

> At least with iMessage, the entire ecosystem is managed by one company, so you only have to trust one company.

I just explained at least 3 different reasons why that is not true. You can fully "trust" Apple and still compromise an iMessage conversation.


>> Even a custom-written third party client still doesn't eliminate the requirement to trust Apple. They're delivering your closed source operating system, which could very easily include hostile behaviors like key logging and screen recording.

> It does eliminate the requirement to trust Apple actually.

As the parent comment indicated (emphasis added), in an extreme hypothetical scenario, Apple could (be compelled to) write a backdoor specifically for your device that captures all text and touch input events, or periodically takes screenshots and sends them somewhere.


You're missing the part where "custom client" means you aren't using an Apple device at all, if you don't want to be.

This is like sending a PGP message over gmail using mutt on linux via IMAP. Google owns gmail, but they can't backdoor you or read your message in this scenario. Any non-broken secure messaging protocol should have this property, regardless of if people use it that way. This is what end-to-end encryption means.


> Any non-broken secure messaging protocol should have this property

Okay then, tell me how you could build a system that

* Guarantees secure transmission of public keys

* Can be installed safely on a closed-source operating system

* Is impervious to the whims of the hardware vendor or the operating system vendor

* Is sufficiently straightforward that mums everywhere will happily use it

* Doesn't require users to become security experts

Given the practical realities Apple has to contend with, I think they've done a pretty good job -- certainly a lot better than what anyone could have expected, arguably a lot better than anything comparable that has come before it.


No, Apple has definitely done worse than what we expected. Worse, they haven't provided what they claimed: a secure end-to-end messaging protocol.

This discussion about backdooring applications isn't even interesting or relevant here. The protocol itself is not secure.


Well you can kind of say that about the entire internet. Possibly "Evil" and/or compromised companies control everything, right?


At a certain point people who are worried about the 'much else' part have to take personal responsibility for their paranoia. Apple cannot prove they aren't reading your iMessages. Maybe they are beaming them into deep space because they want to help aliens come get you. Who knows? It's hard to disprove. Anyone who reaches that level of paranoia should not be using these services.


> Apple cannot prove they aren't reading your iMessages.

Sure they could. They could change the protocol and UI so that the users could verify and pin keys, and they could publish the iMessage source[1] to allow people to check it and compile it themselves.

Whether they should be expected to do these things is a different issue, but they could.

[1] not necessarily under a liberal license - for example, PGP was proprietary but allowed people to download its sources


They could publish the source, but how do you know that source is running on your iPhone?


Compile it yourself, or get some trusted party to do it for you, and sideload it. "But you can't sideload iOS apps!" Well, then we're back to what Apple could do.


Which was my point...


What was your point? All I said was that they could prove it. If allowing sideloading is required, so what?


In reply to:

    Apple cannot prove they aren't reading your iMessages.
You said:

    Sure they could. They could change the protocol and UI so that the users could verify and pin keys, and they could publish the iMessage source[1] to allow people to check it and compile it themselves.
My point is that publishing the source is not enough, since you cannot verify that it is the source running on your iPhone. Even if we concede that Apple could let you sideload your own compiled binary (which is not the case today), you still couldn't prove it, since you couldn't know that the iPhone is running your binary and not a backdoored binary in the OS.

None of which is to say that Apple should have to prove any of this. If you cannot trust your phone vendor, then you cannot use any function of that phone, be it purportedly-secure messaging or location services or even the web browser.


It's worth re-reading this post by Matthew Green, "Can Apple read your iMessages?" [1]

For one, if you back up your device with iCloud, then yes, Apple can read your iMessages. This has been verified by experiment.

Second, Apple operates a central directory of iMessage public keys mapped to accounts, and this enables various kinds of MiTM attacks. Contrast this with the way TextSecure / RedPhone does contact discovery using blinded signature queries [2].

Third, iMessage and iOS are closed source. Ultimately, closed source can do whatever the heck it wants. Not just what they're telling you it does.

All the same, we now have some new details on iMessage from Apple [3], and I'm looking forward to hearing the crypto experts pick it apart.

[1] http://blog.cryptographyengineering.com/2013/06/can-apple-re...

[2] https://whispersystems.org/blog/contact-discovery/

[3] http://images.apple.com/iphone/business/docs/iOS_Security_Fe...


It's still way better than SMS, and Facebook loves to use that kind of information to sell ads.

This is actually more secure than I expected.


And it's far better than other messaging services where messages can be read by the server (e.g. Google Talk). If they or the government wanted to read your messages, at least they'd have to jump through some hoops.


A single, very easily jumpable hoop: sending a lawyer to the nearest friendly judge, asking him/her to sign a warrant.


I don't have a problem with a judge granting a warrant for a search. That's the whole point of warrants. The problem is the warrantless dragnet surveillance.


Better than no hoop.


The way Apple could "read" the messages is by sending a keybag down to the person sending the messages with another public key, one that Apple holds the private key for.

For example, if you have 3 devices (iPhone, iPad, MBP) and someone goes to send you a message, they have to encrypt the message three times, because Apple would have sent them three public keys.

Now if Apple were evil because of a government order, they could send down four public keys, the three ones for the devices you own, and the one public key that Apple has the private key for. At that point once they receive the message they can read it.

Any system that distributes public keys like this can be compromised the same way.

---

The only real way to stop something like this is to make sure that the person you are talking to holds the keys, OTR does this for example by allowing both parties to verify the fingerprint...
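The OTR-style fingerprint check mentioned here is simple to illustrate: each side hashes the public key it received, and the two users compare the short digests over a channel the server doesn't control (in person, over the phone, etc.). This sketch uses made-up key bytes; the grouping format is just for readability, in the spirit of OTR/GPG fingerprints.

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # Group the first 16 hex chars for easier reading aloud.
    return ":".join(digest[i:i + 4] for i in range(0, 16, 4))

bob_real_key = b"bob-public-key-bytes"
key_alice_received = b"bob-public-key-bytes"
key_mallory_swapped = b"mallory-public-key-bytes"

print(fingerprint(bob_real_key) == fingerprint(key_alice_received))   # True
print(fingerprint(bob_real_key) == fingerprint(key_mallory_swapped))  # False
```

A swapped key changes the fingerprint, so a MITM on the key server is detectable as long as the comparison happens out of band.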


Lawful intercept isn't evil.


When there are secret courts and laws, it is evil. It is indistinguishable from the surveillance state.


Hardly. I think people's concern is with the use of warrants that apply to a massive range of people to justify (legally) data collection en masse without being backed by intelligence and probable cause. Declaring that all warrants for communications material are evil because a certain subset are evil does nothing to improve our situation; it is a tabloid-worthy knee-jerk reaction, and while such populism is common here, it is not constructive.


Let's say you are the German foreign ministry's IT guy. Are you really going to say "Hey, it's OK to use services and equipment from the companies on the PRISM participants list because we can trust the denials that they work with the NSA and we can trust LI not to be a gateway for spying <cough>Athens Affair</cough>."

This is not a matter of populism or even of principle. US technology can't be trusted any more. How are you going to restore that trust without making the technology verifiable and without providing simple, reliable ways for end users to routinely put their communications and data out of reach of surveillance?

Where is the "populism" in this? This is about many tens of billions of dollars in revenue lost, and even more in lost shareholder value and opportunity.


That depends on whether the law is moral or immoral. Anything can be turned into law.


Who decides if they're moral?


You do, or at least I hope you do.


Would you suggest that Apple be the arbiter of morality?


Are the NSA's intercepts lawful?


The NSA's intercepts are a lot different from this. The problem with PRISM and the FISA courts is that the NSA is intercepting all the data it thinks it'll ever need, and then it needs a warrant to query it. That inverts the intention of warrants, which is to require just cause before any surveillance happens.

With iMessage, if the FBI gives Apple a warrant to include their snooping pubkey as an additional encryption endpoint for all messages for a user, by definition it only gives access to messages sent from then on, which is in keeping with how I would expect a search warrant to work.


My comment to which you replied was tongue-in-cheek, responding to the fellow who was defending lawful intercept. The point is that the NSA's intercepts, though controversial, have not halted nor have they been declared unlawful. (Not talking here about 215 programs.)

You're correct about iMessage, which is an important point to make. But you're not correct about PRISM; see my article from last summer: http://news.cnet.com/8301-13578_3-57588337-38/


This is a stale thread but I thought I should reply:

I made no claim about direct access to servers, but since the rumors of direct server access and "PRISM" are synonymous in popular news articles, I guess it was misleading to use the term. My point still stands if you replace "PRISM" with "NSA's dragnet surveillance," which is surely happening.


And it would also only be for a specific user, rather than the dragnet surveillance (biggest fishing trip in history) that the NSA, et al, are conducting.


And I have a bridge in Brooklyn I'd like to sell you.


Security is not a subset of morality.


Says spooky23.


My iPhone gives me a loud, blaring notification whenever my keybag gets changed, so it wouldn't necessarily be easy to modify my keybag without me knowing, unless they already have code in place with that specifically in mind.


Unless Apple is omitting something or there’s some backdoor tucked into their many-layers-deep encryption (which, while unlikely, isn’t inconceivable) they really can’t read your iMessages without a fairly insane amount of effort.

That is, assuming that there isn't some code in the app that allows Apple to request that the app send your private key up to the server. It's conceivable, for example, that in order to comply with law enforcement, Apple could just tell the app to send up your private key so that it can decrypt any messages they have stored.

There's also no way to verify that your messages have, in fact, been removed from their services.


> (which, while unlikely, isn’t inconceivable)

Not only do I think it's not unlikely, I actually think it's pretty much a certainty that Apple has a backdoor in their code. After the slides detailing how easy it is for NSA to break into Apple phones I'd be simply shocked if they hadn't inserted such a vulnerability.

Sounds to me like the author is applying a nice coat of white wash.


After reading a post by a Google engineer on this issue, I'm going to err on the side of believing that Google, Apple, and others aren't actually actively inserting back doors into their code.

https://plus.google.com/108799184931623330498/posts/SfYy8xbD...


By all accounts, these secret court orders often come with gagging clauses - which would likely extend to even gagging one employee from telling another. For all I know the guy next to me could be under a secret court order forcing him to insert backdoors.

Of course, as there's no way to disprove this hypothesis, and there's no proof of it, you can still err any way you like :)


So why would Apple voluntarily issue an update to fix the problem if they were gagged under a secret court order?


> After the slides detailing how easy it is for NSA to break into Apple phones I'd be simply shocked if they hadn't inserted such a vulnerability.

The slides pertained to physical access to an iOS 6 device, which was publicly known as insecure before the NSA revelations:

http://apple.slashdot.org/story/13/08/01/2024212/iphone-hack...


Obviously this system has limitations and entirely relies on your ability to trust Apple. But there's quite a few things to consider here:

* Text messages and most other chat protocols require that you trust multiple hardware vendors, multiple software vendors, and multiple telcos. By comparison, iMessage only requires that you trust a single company, Apple.

* As long as the operating system and messaging software is closed source, it would be impossible to eliminate the requirement to trust Apple anyway. If you really need serious security, you shouldn't be relying on any closed source third party systems, period.

* This is about as secure as it could ever get without requiring users to be educated about security principles. Given that iMessage is foremost a seamless alternative to text messages, it's difficult to imagine how they could make it more secure without compromising utility.

* The implementation details mean that any Government snooping must be done with Apple's knowledge, and will require the blessing of Apple's legal department. This might not be a particularly high bar to cross, but it does mean that Governments aren't running rampant, analyzing every message sent.

* The United States government isn't the only bad actor out there. The level of security appears to be extremely good against entities that hold no sway with Apple's legal team. It's also presumably impervious to a hostile network, or hostile foreign governments.


Most of these points are completely wrong.

> * Text messages and most other chat protocols require that you trust multiple hardware vendors, multiple software vendors, and multiple telcos. By comparison, iMessage only requires that you trust a single company, Apple.

This is incorrect. It requires you to trust Apple, every company who operates a CA, every government who can compel any CA to mint a certificate (read: you trust the Turkish and Chinese governments), your IT department, and any hacker who has access to any of those. If you didn't install the patch this week, and for the last year and a half, it required you to trust everyone on every network segment you have ever connected to from any Apple device.

> * This is about as secure as it could ever get without requiring users to be educated about security principles. Given that iMessage is foremost a seamless alternative to text messages, it's difficult to imagine how they could make it more secure without compromising utility.

It could very easily be more secure. For example, certificate pinning would be a decent start. It could also allow users to view key fingerprints if they choose to. Many users wouldn't understand the purpose of that exercise, but at least the option exists. More paranoid users could enable warnings if a user's key changes. See also: Whisper Systems
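Certificate pinning, the first fix suggested here, is conceptually tiny: in addition to normal CA validation, the client checks the server certificate's hash against a value baked into the app, so a forged-but-CA-signed certificate still fails. The certificate bytes and pin below are made-up placeholders, not Apple's real certificate.

```python
import hashlib

# In a real client this constant ships inside the app binary.
PINNED_SHA256 = hashlib.sha256(b"apple-ids-cert-der-bytes").hexdigest()

def connection_allowed(cert_der: bytes) -> bool:
    # Runs *after* normal CA validation; a CA-minted forgery fails here.
    return hashlib.sha256(cert_der).hexdigest() == PINNED_SHA256

print(connection_allowed(b"apple-ids-cert-der-bytes"))   # True: pinned cert
print(connection_allowed(b"forged-but-ca-signed-cert"))  # False: pin mismatch
```

With a pin in place, the "anyone who controls a CA" attack upthread stops working; only someone who can modify the app itself can swap the pin.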

> * The implementation details mean that any Government snooping must be done with Apple's knowledge, and will require the blessing of Apple's legal department. This might not be a particularly high bar to cross, but it does mean that Governments aren't running rampant, analyzing every message sent.

Incorrect, as explained above. There are attacks that do not require governments nor do they require any assistance or blessing from Apple.

> * The United States government isn't the only bad actor out there. The level of security appears to be extremely good against entities that hold no sway with Apple's legal team. It's also presumably impervious to a hostile network, or hostile foreign governments.

It's not impervious to hostile networks if those networks include your corporate IT network or if you have used it in the past year and a half on any network. Even with the latest patches, it is not impervious to hostile foreign governments. Check your browser's CA list and take a look at how many different countries are in there. For a more complete list, take a look at this: https://www.eff.org/files/colour_map_of_CAs.pdf


Every one of your objections relies upon the assertion that iMessage is vulnerable to certificate forging attacks. Do you have a citation? Has this been demonstrated, or is it theoretical?


Now you're just being willfully ignorant.


Or rather, you're being willfully obtuse.

Citation, please?


Among other things, the system requires you to not only trust Apple, but the hardware of the device itself. In particular, the random number generator in the iPhone/iPad, since the keys are generated on the device.


The whole document was an interesting read.

http://images.apple.com/iphone/business/docs/iOS_Security_Fe...


Agreed. I found this bit about Touch ID interesting:

> With one finger enrolled, the chance of a random match with someone else is 1 in 50,000. However, Touch ID allows only five unsuccessful fingerprint match attempts [...]

I assumed it was more accurate than 1 in 50,000. Then again, I don't know what a normal fingerprint sensor is capable of. Does anyone know the accuracy of the sensor in the new S5, or of the sensors that IBM/Lenovo/etc. have put on laptops?
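As a back-of-the-envelope check on Apple's two numbers: with a 1-in-50,000 false match rate per attempt and 5 attempts allowed, the chance a random finger gets in at all is roughly 1 in 10,000.

```python
p_single = 1 / 50_000                       # per-attempt false match rate
p_within_5 = 1 - (1 - p_single) ** 5        # at least one match in 5 tries
print(round(1 / p_within_5))                # ~10000, i.e. about 1 in 10,000
```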

Also:

> The 88-by-88-pixel, 500-ppi raster scan is temporarily stored in encrypted memory within the Secure Enclave while being vectorized for analysis, and then it’s discarded after.

I wonder what kind of neat stuff people could come up with if we had raw access to that kind of sensor data.


As I understand it, most fingerprint scanners can only distinguish between on the order of tens of thousands of fingerprints. 50,000 is higher than other numbers I've heard for fingerprint scanners (30-40k).


If all that is true, it sounds perfectly secure against anyone other than Apple and whatever law enforcement agencies they comply with requests from.

So, you know, really not secure at all.


More so than its rivals, Apple has consistently put forth the effort to explain its technology to its customers. Apple has remained keen to point out the difficulties of hardware and software development. Perhaps this is one reason why people outside of the technology sector perceive Apple products as superior: people think Apple has gone the extra mile.


Perhaps not how I'd phrase it, but I do love reading Apple's documents and whitepapers. They're all designed and laid out so well for the reader, which is amazingly rare for that kind of material. I read the entire document and found it fascinating.



And they usually go the extra mile. Have you got examples where someone else 'went the extra mile' where Apple didn't? Most other companies just like to talk big, like Google with their fantasy projects, but most of their actual released products are mediocre.


As mentioned in other comments, you have to trust Apple to hand you the correct public keys. They could easily MITM you and decrypt the messages on the server if they misrepresent the other party's public key. Additionally, the iMessages you send are signed by your private key, which is probably not something you want.


More to the point, they could be forced under court order (or FAA 702 order etc.) to MITM. This is a cousin to the Lavabit scenario.

Nothing wrong with end-to-end encryption, folks. Why don't we have more of it?


> Nothing wrong with end-to-end encryption, folks. Why don't we have more of it?

Complacency and laziness. There is no excuse.


Well said! :)


> iMessages you send are signed by your private key, which is probably not something you want.

Why not?


Because they can be used against you: you won't be able to deny that the message originated from you. Deniability is a desirable security/privacy property in a messaging setting, and OTR, and mpOTR[1] in particular, try hard to remove non-repudiability from messages.

[1]: http://www.cypherpunks.ca/~iang/pubs/mpotr.pdf


I always thought non-repudiability was desirable and what whistleblowers need is anonymity. Thanks for that link.


To be clear, it's a desirable property that parties engaging in a conversation are able to verify the authenticity of messages, i.e. that they indeed originated from their authors. It is undesirable that the parties to the conversation are able to definitively prove to a third party, not involved in the conversation, that a message indeed originated from a specific person.
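The standard way to get exactly this combination is a MAC over a shared key rather than a signature: both parties hold the same key, so either of them could have produced any given tag. Bob can verify a message came from Alice, but he can't prove that to anyone else, because he could have forged the tag himself. A minimal stdlib sketch (the key and messages are invented):

```python
import hashlib
import hmac

shared_key = b"negotiated-session-key"  # known to both Alice and Bob

def tag(message: bytes) -> bytes:
    return hmac.new(shared_key, message, hashlib.sha256).digest()

alice_msg = b"the documents are real"
alice_tag = tag(alice_msg)

# Bob verifies authenticity...
print(hmac.compare_digest(alice_tag, tag(alice_msg)))  # True

# ...but Bob can mint an identical tag for any text he likes, so a tag
# proves nothing to a third party about who actually wrote the message.
forged = tag(b"words alice never said")
print(hmac.compare_digest(forged, tag(b"words alice never said")))  # True
```

Contrast this with iMessage's per-message signatures, which any holder of your public key can later use to attribute the message to you.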


Coming from a background of using cryptography regularly (far from an advanced user), this revelation seems... not surprising. It's practically the equivalent of using SSL for viewing webpages. I say practically because, for some mind-boggling reason, using standard crypto practices seems to be novel for messaging services on the Internet.


It is closed source anyway, so who knows if the document is correct, and who knows if there is a backdoor or a bug...


They don't have to have your private key to pull off a MITM.

In reality, it is probably secure enough against most adversaries. State-level adversaries are a different story; for that you need OTR and key verification in person.


If the above reports of Apple not pinning their certificate are true, then this isn't secure enough against any adversary that can compel or hack a CA into issuing a certificate in Apple's name. The latter appears to be not as hard as you might imagine.

And the recent fail in their SSL/TLS library means you don't even need the help of a CA to create a certificate Apple software would consider valid.


And if Apple's servers lie to you and tell you there's a device with a private key they generated?

They may never have your private key, but you are still trusting them to deliver the correct public keys to other users.


No end-to-end, no bueno.

Standard SSL even when done right isn't enough to guard against our current privacy-abusing GO's.


The new "Security through trust in big corp" model.


Also known as feudal security, where the serfs just have to trust that their lord knows what's best for them.


Everyone needs to start caring a lot more about verification and authenticity of keys (even public keys). iMessage anchors all trust in Apple Inc., with no means to verify that your public key has not been swapped.

If you can't verify and pin keys, then assume there is no encryption.
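Key fingerprints are what make that verification practical: both sides hash the public key their client is actually using and compare a short digest over some out-of-band channel. A rough sketch (the key bytes are hypothetical):

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # First 16 hex chars, grouped for reading aloud over a phone call
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

# Each party computes this locally, then compares out of band;
# a mismatch means someone swapped the key in transit.
print(fingerprint(b"hypothetical public key bytes"))
```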


Can't they man-in-the-middle the encryption? If there's a key exchange, how do clients verify that the keys they get are legitimate? SSL/TLS uses trusted authorities to verify the public key.


The combination of dislike for Apple and paranoia in this thread is pretty potent. Every communication channel has its flaws. Once upon a time, the post office was opening mail to read it, or there were wiretaps on telegrams and telephones. Now it's iMessage. Every channel has potential exploits, and if you can't accept the ones a channel comes with, don't use it. iMessage is optional. SMS is optional. Don't open your mail. Whatever.


What would be more interesting to me would be a comparison between the security of the iMessage protocol and similar competing facilities like SMS and Google Hangouts.


This means nothing without the full source code to prove it.


Apple is able to do this today because instant messaging services are not (yet) covered under CALEA (the Communications Assistance for Law Enforcement Act). If CALEA is updated to include instant messaging services, Apple would be legally obligated to have a method of intercepting these messages, possibly via a separate public key as discussed in other comments.


CALEA can't compel you to hand over anything you don't have access to. If cleartext in your chat system can't be accessed because you don't hold customers' private keys, that's not illegal.


Apple could mitigate most of the security concerns listed in this thread by listing the trusted devices which you're encrypting against. This solves the "extra encryption key" angle. You'd still have to trust your recipient to be just as mindful of this as you to prevent the vulnerability in the other direction though.


> by listing the trusted devices which you're encrypting against

More precisely, on a user's device: list the contact's devices and key fingerprints. When adding a contact's device, show the fingerprint on first exchange, ask the user to trust it, then pin the pubkey. Warn if the key ever changes. What remains is for the parties to exchange fingerprints over a peer-to-peer side channel (possibly even physically, via Bluetooth or a QR code).
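That pin-on-first-use flow can be sketched roughly like this; the store and the confirmation callback are hypothetical names for illustration, not anything iMessage actually exposes:

```python
import hashlib

pinned = {}  # (contact, device_id) -> pinned key fingerprint

class KeyChangedError(Exception):
    """Raised when a previously pinned key unexpectedly changes."""

def check_device_key(contact, device_id, pubkey_bytes, confirm_first_use):
    fp = hashlib.sha256(pubkey_bytes).hexdigest()[:16]
    slot = (contact, device_id)
    if slot not in pinned:
        # First sight of this device: show the fingerprint, ask for trust.
        if confirm_first_use(fp):
            pinned[slot] = fp
            return True
        return False
    if pinned[slot] != fp:
        # Never silently re-pin; surface the change to the user.
        raise KeyChangedError(f"key for {contact}/{device_id} changed")
    return True
```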


Excuse me if this sounds ignorant, as I am not a security expert, but isn't there a flaw in using a public key for continuous messaging? Shouldn't public-private key crypto be used only to establish a symmetric key? The system was originally designed for symmetric key exchange, no? Does using it this way present some flaw?
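In practice it is used that way: the public key only wraps a small per-message symmetric key, and the bulk data is encrypted symmetrically (Apple's document describes iMessage doing roughly this with RSA plus AES, though not with these exact parameters). A generic hybrid-encryption sketch using the third-party `cryptography` package:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term key pair (normally only the public half is shared)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: fresh symmetric key per message, bulk-encrypt with AES-GCM,
# then wrap only the small symmetric key with the recipient's public key.
aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"hello", None)
wrapped_key = recipient.public_key().encrypt(aes_key, oaep)

# Recipient: unwrap with the private key, then decrypt the bulk data.
plaintext = AESGCM(recipient.decrypt(wrapped_key, oaep)).decrypt(
    nonce, ciphertext, None)
assert plaintext == b"hello"
```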


That sounds great and super secure, but all I wanted was a single extraneous goto statement fixed ASAP. It took forever and left my phone, tablet, personal laptop, and the gifts I gave for Christmas insecure for a long time.


I'm afraid they were as secure as Apple crumble, indeed.


Much better than I expected. It may not be perfect, but it seems like the most secure of the mainstream chat services. I would love to also have seen forward secrecy, but that's asking for quite a bit.


I see that a lot of the comments relate to key exchange.

Just wanted to mention that key verification over SMS is possible. An SMS can even carry a temporary key for encrypting the key transfer.


Does this explain why, when I get an iMessage, it appears on my MacBook, my iPad, and my iPhone, no matter what?


This article is clearly timed to offset the bad security PR generated by the goto fail SSL flaw.


Talk is cheap, show the code


This is great security for what it is. Probably enough to keep you 98% secure.

Which is still exactly 0% secure as far as I'm concerned.

All in all though - in general - I'll be more than happy to continue using iMessage and feel at peace. As a general rule, however, never send anything electronically that may screw you over later.


If you see security as boolean, you're going to have a rough time...


Security is only as strong as its weakest link. If there is a weak link, there's no security. So yes, it is boolean.


Any more fuzzy platitudes for us, Geee?

The best encryption we have today is still bound to time and technology. Without a threat model, it's pointless to discuss security. To the NSA, not much of anything is secure on the Internet. There is a lot you can do when you have total visibility into all traffic. But your psychotic girl/boyfriend who barely understands what Wi-Fi is? They probably won't be eavesdropping on your iMessages.



