Hacker News
If You Can’t Break Crypto, Break the Client: Recovery of Plaintext iMessage Data (bishopfox.com)
282 points by cgtyoder on Apr 8, 2016 | 81 comments



For the depressing truth on the crypto wars: https://news.ycombinator.com/item?id=7757978 (Crypto won't save you either [PDF])

...or to paraphrase Jeff Atwood: "I love crypto, it tells me what part of the system not to bother attacking"


That's my favorite quote from Atwood! People are so prone to forget that while cryptographic algorithms are provably secure (under practical constraints) in a mathematically rigorous way, their implementations are subject to all of the shortcomings of any engineering practice. Makes quick work for an attacker trying to figure out where to start.


It's my understanding that most (all?) public key cryptographic algorithms aren't provably secure, but are conjectured to be. They are reliant on some problem being hard to solve (factoring of large integers, discrete log, etc.).

Something like a one time pad is provably secure, however.
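
The OTP's guarantee is easy to see in code. A minimal sketch (not a production implementation): XOR the message with a truly random pad of at least the same length, use the pad only once, and the ciphertext carries no information about the plaintext.

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, at least as long as the message,
    # and never reused -- those conditions are what the proof relies on.
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))   # OS-provided randomness, not a seeded PRNG
ct = otp_encrypt(msg, pad)
assert otp_encrypt(ct, pad) == msg    # XOR is its own inverse, so decryption = encryption
```

Because every possible plaintext of the same length is equally consistent with a given ciphertext, no amount of computation helps the attacker.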


This is a common misconception. The algorithm itself is provably secure, in the sense that violating the stated security guarantees of the algorithm is equivalent to solving a problem that's considered to be computationally intractable. The only part that isn't 'provable' is the basic assumption that the problem is intractable in the first place.


Didn't you just agree with him but substitute 'hard to solve' with 'computationally intractable'?

Yes, based on our understanding today these things are computationally expensive (i.e., not feasible), but they could theoretically be easy to crack given a mathematical breakthrough.

Am I misunderstanding?

As the field of mathematics advances there's a chance that current crypto will be broken. Why is this a misconception to point out?

Why is it not, on some level, conjecture to say these systems are secure?


What you are saying was true until this February, when this paper came out: https://eprint.iacr.org/2016/170

It hasn't been implemented yet in any practical crypto system that I know of, but it certainly seems like we are finally going to have actual, provably hard problems to build our security on.


I believe you made an unfortunate typo, substituting "probably" when you meant "provably".


Even auto correct finds it surprising :-)


The scheme in this paper is in the bounded-storage model...


It says you either have to use exponential time or quadratic storage. Schemes based on high memory requirements have actually been sought for a while, since (apparently) memory is considered less scalable than computation.


No, I mean your original comment is inaccurate. The paper presents a time-space lower bound for parity learning, but the encryption scheme based on this result is only 'unconditionally secure' in a model where the adversary is restricted to having at most (n^2)/25 bits of storage. This isn't a general-purpose unconditionally secure encryption scheme, which is what your original comment implied.


All proofs have axioms if you chase them all the way down. Given the axiom that solving the math in the crypto is intractable, the crypto algorithm is proven secure. But only so long as the axiom holds.

For example, quantum computing may break the axiom, and then the proof will be invalidated.

It might be more correct to say assumption rather than axiom here.


It looks like 'provably secure' is defined in cryptography to mean that breaking the algorithm is equivalent to solving the underlying intractable problem [0]. In my mind, provably secure meant that the problem was actually intractable (which is not the convention).

0. https://en.wikipedia.org/wiki/Provable_security


That is the conventional meaning of 'provably secure' in every text and research paper on modern cryptography.


You're falling victim to the same misconception. It is not a contradiction to say both that a cryptographic scheme is provably secure and that its security relies on a conjecture about the hardness of a computational problem.


If the conjecture turns out to be false, is the scheme still secure?

If so - interesting, how does that work?

If not - then doesn't that mean it's not provably secure?


Ehhhh.... well, it's complicated. For most cryptosystems, the answer is no, because if you can solve the underlying problem efficiently you can break the security of the scheme as defined. It turns out that this isn't always a 'break' in the sense that most people understand it. For example, a 'break' might just mean the ciphertext is no longer indistinguishable from random noise, but it might be possible to prove meaningful security in a weakened model that doesn't require ciphertexts to look like random noise but, for example, requires that no bits of the plaintext are leaked with high probability. Cryptographers build schemes with very strong, conservative security guarantees for this exact reason.


True and fair. I overstated my point. More appropriate to say "provably secure (under practical constraints)", and that's a rather significant caveat.


Given the breadth of ways to leak information about the private keys--side channel attacks, physical attacks, userspace issues (allocator, random number generator)--this would be extremely difficult (impossible?) to prove.


It's a well known result that one time pads are provably secure. (Also, no PRNGs are involved.)


The PRNG is certainly involved in the generation of the one-time pad. You could tell if they used the ASCII text of Hamlet.


Using a PRNG to generate an OTP is called a stream cipher, and then it isn't an OTP. :)

When using an OTP, you have to use non-pseudorandom values; otherwise it's just a stream cipher. And if you're using a PRNG anyway, you can skip sharing the pad and just share the initial state of the PRNG.

If you go to the trouble of sharing the pad, go to the trouble of using random data within it. :)
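
The point about sharing a seed instead of the pad can be sketched quickly. Here a deliberately non-cryptographic PRNG (`random.Random`, purely for illustration) regenerates the same keystream from a shared seed, which is exactly a stream cipher, not an OTP:

```python
import random

def keystream(seed: int, n: int) -> bytes:
    # Deterministic: both parties regenerate the same bytes from the seed,
    # so the shared secret is the seed, not a full-length pad.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"meet at noon"
seed = 0xC0FFEE  # the only secret the parties need to share
ct = xor(msg, keystream(seed, len(msg)))
pt = xor(ct, keystream(seed, len(msg)))
assert pt == msg
```

The security then rests entirely on the PRNG's strength (and a Mersenne Twister like Python's has none), which is why this construction no longer enjoys the OTP's proof.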


Could be actual random data is used, rather than a PRNG


I think you misunderstand him? If the crypto of a system is assumed to be correct, then that is the last place to look for a vulnerability.

Better to look at what lies on either side without relying on the pipe being vulnerable.


"Don't bother trying to break encryption, break [whatever] implements the encryption instead"

that's how i interpreted it


Even cryptography relies on unproven assumptions; we just consider them trustworthy enough to rely on.


Cryptographic algorithms are generally not 'provably' secure, because most are based on an underlying assumption that some problems are hard, and this is not proven, just assumed. Also, there are mathematically verified implementations, and tools for verifying existing implementations that are about as 'provably' secure as the specifications of the algorithms.


There is obviously a place for cryptography in both personal and commercial communications. I'm always curious, when I hear a politician moaning on about the 'dangers' of cryptography, whether they understand how intricately intertwined cryptography is with the modern economy. Something is either cryptographically secure or it isn't; there is no middle ground. If you intentionally break a cryptography system, you're going to disrupt trillions of dollars of commerce.

But crypto is of course not a magic pill. There are political issues that need to be addressed as well. This was a theme touched on in Bruce Sterling's SXSW keynote this year.


Just the idea that crypto CAN be fought indicates a misunderstanding of the technology. You can't break a technology implemented decently on every consumer computer on the planet and with many open source implementations.


This is why I always eyeroll when people complain about GPG user interface weaknesses.

The only way to really ensure the integrity of encrypted communications is by isolating and keeping the endpoints away from prying eyes. If your personal, business, political or criminal activity is such that you're concerned about third party interference with your clients, you have no business using iMessage -- which is protecting you from snooping network admins and carriers.

The beauty of a complex but powerful tool like GPG is that you can completely isolate your online activity from secure activity. There's nothing preventing you from printing cipher text and using a scanner attached to an air gapped computer without any network connection.

If your health and safety depend on secure communications to avoid extraordinary threats, don't use off the shelf tools that you don't understand. If you don't understand any tools, follow "the Godfather's" advice and avoid telecom-based communication.


The PDF you link attributes that quote to Drew Gross



oops - my bad, you're totally right... too late to edit though :(


The fake URL in a JavaScript comment in the JavaScript URI is a hilarious and neat trick.

    javascript://bishopfox.com/research?%0d%0aalert(1)
gets interpreted as:

    //bishopfox.com/research?
    alert(1)
Fortunately most browsers prevent you from pasting JavaScript URIs in the URL bar these days.
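
You can reproduce the comment trick with a one-line decode; `%0d%0a` is an encoded CRLF, which splits the URI into a `//` comment line and a fresh line of executable JavaScript:

```python
from urllib.parse import unquote

uri = "javascript://bishopfox.com/research?%0d%0aalert(1)"
decoded = unquote(uri)
# Everything before the CRLF becomes a // comment;
# alert(1) lands on its own executable line.
assert decoded == "javascript://bishopfox.com/research?\r\nalert(1)"
```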

It's a little surprising Apple overlooked not one but two fairly obvious major holes: allowing JavaScript URIs, and the lack of same-origin policy. I wonder how many other applications are similarly vulnerable.


Well, the lack of SOP is by design; since it's not a browser visiting multiple sites, the idea of an "origin" doesn't always make sense. This is part of a larger body of work we've been researching, and we found much more than this one bug (all known bugs have been patched; that's why we've been waiting to release this). We'll be submitting the full body of work to DEFCON/Black Hat and a few other cons. Hopefully we'll get accepted; be on the lookout if we do!


This is the article that years ago convinced me it's not worth obsessing about my own technological privacy: http://www.gaudior.net/alma/johnny.pdf

I despise the "if you have nothing to hide..." argument for the surveillance state. And I argue against it every chance I get.

But, practically speaking, I don't have much to hide. I also realized that one can draw more attention to oneself by taking drastic measures to preserve one's own privacy.

I know, citation needed... I believe FB (or a related party) released some research about detecting "holes in the social network". Browser fingerprinting is another front on which I've probably made myself more unique to trackers.


On the other hand, if encryption is the default then there is no obscurity in not using it.


Don't we all want to hide our payment information when we buy stuff online? Modern commerce is built on identity assertion and securing payments between two parties over the wire.


No doubt. I'd prefer my information stay private. But that liability lies with the party that loses my data.


Apple use a web view for messages? I would've thought they'd use native UI. I guess it's easier to handle text properly with HTML.


Yes, surprisingly the OS X Messages app doesn't seem to share a lot of UI code with the iOS version. You can easily tell that it's a simple WebView from the way text selections behave.


Strangely Apple seem to build everything twice, once for iOS, once for OS X, even if they have the same appearance. It'll bite them some day.


Meaning how you can select text across messages?


Yes, that and how some of the whitespace between the messages gets selected as well.


Yes it surprised us too, we didn't even think about looking into Messages until Shubs jokingly started sending payloads to me using it.


Man, that's depressing. It's fairly easy to prevent this particular kind of injection—you just have to add a Content Security Policy to the HTML page. The appropriate value for web pages running from file://, with no expectation of downloading and executing remote JavaScript, is: `script-src 'self';`
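
For illustration, that policy could be delivered with a `<meta>` tag in the local HTML page the WebView loads (a sketch, assuming meta-delivered CSP, which WebKit supports; delivering it via an HTTP header or WebView configuration may be preferable where available):

```html
<!-- Hypothetical local page loaded into the embedded web view -->
<meta http-equiv="Content-Security-Policy" content="script-src 'self';">
```

With this in place, inline script from a `javascript:` URI would be refused by the engine rather than executed.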

Really sad to see that Apple is using embedded web views without these sorts of basic protections. I bet worse exploits than this are possible, given that they probably expose parts of the Objective-C layer through the JavaScriptCore bridge.


It gets even scarier with frameworks like nw.js where you can just execute native code directly from the DOM.


It looks like the code was pulled from GitHub

https://github.com/BishopFox/cve-2016-1764


Sorry about the mix-up, the code is available here: https://github.com/moloch--/cve-2016-1764

and here: https://github.com/BishopFox/cve-2016-1764


Simplified POC:

  javascript://%0aprompt()


You need the full a://a.a/%0a to match the URL pattern, but that's the gist of it.


I thought so too but a quick test showed that the code from Capira does indeed work


Where did you test it? In a VM?

If you have a Mac that's on OS X 10.11.3 or earlier, you're running an unpatched system, and you shouldn't be.


Ah okay, we have a few other vulns (that have been patched) in various other messaging apps; some required the full pattern, others not.


Does anyone know how they managed to open the console / inspector inside iMessage.app?



We’ll see a lot more of this soon, considering more and more software is moving to webkit UIs, often with similar flaws.


Implementing CSP and other mitigations for these types of same origin bypass attacks is relatively easy. I'm shocked that Apple didn't check this. I couldn't imagine Google ever making this mistake, their web security teams are solid.

Apple really needs to invest heavily in bug bounties and internal security audits. This is 101 type of stuff when implementing any user-controllable embedded web content.

The bar should never be this low for critical OS apps like iMessage.


> I couldn't imagine Google ever making this mistake, their web security teams are solid.

You haven’t seen their XML bugs in Google Toolbar’s web gallery in 2013, have you? Full access to the whole file system of their servers via XML includes.

A bunch of security researchers managed to dump /etc/passwd as a sample to get the bug bounty.

Google’s security isn’t that much better either...


In the case of Android, all you need is an application that can read notifications (and has notification/accessibility permissions). E.g. all WhatsApp messages go through it...


A typical fanboyism argument when one's favorite company screws up. Just mention the other rivals and add zero insight into the original idea being discussed.

> In case of Android what you only need is that your application can read notifications

This "only" is much harder to do than sending a JavaScript URL.


This means a user would have to install the app. Once your local machine is owned, it's over.


that's pretty much the approach to all crypto: crack the implementation, not the algorithm.


It's rather a "steal the message after it's decrypted" scenario than cracking the implementation.


I think they were referencing, "Most crypto is bypassed rather than broken."


exactly


"Never do anything against conscience even if the state demands it." --Einstein


I had a similar thought with WhatsApp's Signal announcement. I believe that on iOS, by default all WhatsApp messages are backed up to iCloud Drive. So that would seem to be an easier attack vector.


Not just the messages - the key too. Just imagine the outcry from someone who breaks his iPhone and then can't restore his messages because of the introduction of end-to-end encryption.

That way, you don't even have to attack WhatsApp itself, and they have all the plausible deniability they need.


The cryptosystem in use by Signal and now WhatsApp does not work this way. It offers forward secrecy, where recovery of the long-term keys will not allow decryption of past intercepted encrypted messages.


Would love to see a source on this. Every time I restore an iOS device, I need to manually re-enter anything secret like passwords.


Why would the key be backed up to iCloud? Do you have a source for this?


Does iOS have a way to designate parts of app data as secret/transient/do_not_back_up? I suspect not.

EDIT: It appears that I was incorrect.


Yes, the NSURLIsExcludedFromBackupKey file property.



The only text/voip app that securely stores its data is Biocoded (https://biocoded.com/home). Even if the local on-device database gets copied elsewhere, it will be undecryptable outside of that device.


What's wrong with Signal's encrypted storage? AFAIK if you set a password it will be used to encrypt local storage of your messages.


It's not that difficult to break. Anything encrypted with a password is not all that secure. Someone can clone your device, or use a security hole in the device to get at that storage blob, and eventually crack it in reasonable time.


That entirely depends on how much entropy is in your passphrase. It's entirely possible to use one that cannot be cracked in reasonable time.
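
A back-of-the-envelope calculation makes the entropy point concrete. This sketch assumes a Diceware-style passphrase and a generous, hypothetical attacker guess rate; the guesses-per-second figure is an assumption for illustration, not a measured number:

```python
import math

words = 6
wordlist_size = 7776  # standard Diceware list: 6^5 words
entropy_bits = words * math.log2(wordlist_size)  # ~77.5 bits

guesses_per_second = 1e12  # assumed attacker capability (illustrative)
# Expected work is half the keyspace.
seconds = 2 ** (entropy_bits - 1) / guesses_per_second
years = seconds / (365 * 24 * 3600)
assert years > 1000  # thousands of years even at 10^12 guesses/sec
```

Of course this ignores key-derivation cost (which only helps the defender) and assumes the passphrase was actually chosen at random, not by a human.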


Signal on Android offers password protected encryption for chat logs


This isn't the only thing you can do without breaking crypto. If exploits are too hard, or you are lazy like me, check out CreepyDOL.



