MTProto, the symmetric encryption scheme used in Telegram, is not IND-CCA secure (iacr.org)
181 points by bpierre on Dec 11, 2015 | 62 comments



For those wondering:

IND refers to "indistinguishability"; it is the property of not being able to pick a matching plaintext/ciphertext pair out of a lineup of ciphertexts.

CCA stands for Chosen Ciphertext Attacks, and refers to the class of attacks where the cryptanalyst alters ("chooses") the ciphertext before it's decrypted by the victim, and is then able to learn things from the victim's behavior. (There's also CCA2, the adaptive variant, in which the attacker can keep submitting new ciphertexts and adjust each one based on the victim's responses, even after seeing the challenge ciphertext.)
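
For concreteness, here is a minimal sketch of the IND-CCA game in Python. It is illustrative only: encrypt, decrypt, and the attacker object are hypothetical stand-ins, not anything from MTProto or the paper.

    import secrets

    # Toy IND-CCA game: the attacker picks two messages, receives the
    # encryption of one of them, may query a decryption oracle on any
    # *other* ciphertext, and has to guess which message was encrypted.
    # A scheme is IND-CCA secure if no attacker wins meaningfully more
    # than half the time. encrypt/decrypt/attacker are hypothetical.
    def ind_cca_game(encrypt, decrypt, attacker):
        b = secrets.randbelow(2)                   # challenger's hidden bit
        m0, m1 = attacker.choose_messages()        # two equal-length messages
        challenge = encrypt(m0 if b == 0 else m1)

        def decryption_oracle(ct):
            if ct == challenge:                    # the only forbidden query
                raise ValueError("cannot decrypt the challenge itself")
            return decrypt(ct)

        guess = attacker.guess(challenge, decryption_oracle)
        return guess == b                          # True = attacker wins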

TLS's Mac-then-encrypt CBC was known not to be IND-CCA secure, and a few years later Thai Duong and Juliano Rizzo turned that property into BEAST. Then there was CRIME, then Lucky13, and finally TLS MtE CBC had to be put to sleep.

The easiest way to lose IND-CCA security is to fail to MAC your ciphertext. This is such a common flaw that Moxie Marlinspike, a co-author of TextSecure/Signal, which you should use in preference to Telegram, named it "the cryptographic doom principle".

This paper is a building block for a much more significant attack, which is the author's master's thesis:

https://news.ycombinator.com/item?id=10713064


BEAST and CRIME are both chosen-plaintext attacks. They don't rely on the improper MAC composition in TLS CBC.

Lucky13 and POODLE are the chosen-ciphertext attacks.

EDIT: Some more details:

BEAST takes advantage of predictable IVs in SSLv3 and TLS 1.0. In these protocols, the IV for each new record is simply the last block of the previous record. An attacker monitoring traffic on the wire can use this predictability to build an encryption oracle and guess-and-check the contents of ciphertext blocks.
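
To make that concrete, here is a toy block-level version of the guess-check that a predictable IV allows. This is just the core trick, not the full browser-based exploit, and it assumes pycryptodome is available for AES:

    import os
    from Crypto.Cipher import AES

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key = os.urandom(16)
    secret_block = b"secret cookie!!!"     # 16-byte plaintext to test a guess against
    iv_prev = os.urandom(16)               # the block chained in front of it (C_{i-1})

    # Victim encrypts the secret; the attacker sees the ciphertext on the wire.
    c_secret = AES.new(key, AES.MODE_CBC, iv_prev).encrypt(secret_block)
    iv_next = c_secret                     # TLS 1.0: next record's IV = last ciphertext block

    # Attacker injects P* = guess XOR C_{i-1} XOR IV_next as the next plaintext block.
    guess = b"secret cookie!!!"
    p_star = xor(xor(guess, iv_prev), iv_next)
    c_test = AES.new(key, AES.MODE_CBC, iv_next).encrypt(p_star)

    # The IV cancels out, so c_test == c_secret exactly when the guess was right.
    print("guess correct:", c_test == c_secret)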

CRIME uses plaintext compression to its advantage. A message with longer common substrings will compress slightly better than one without, and this is reflected in the ciphertext length. An attacker can make adaptively chosen guesses at substrings included in the message to recover, e.g., session cookies.
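
A toy illustration of the length leak, using zlib as a stand-in for TLS-level DEFLATE (the secret and cookie format here are made up):

    import zlib

    secret = b"Cookie: session=7f3a9c"

    def compressed_len(attacker_guess):
        # Attacker-controlled data gets compressed alongside the secret.
        return len(zlib.compress(attacker_guess + secret))

    # A guess that shares a longer prefix with the secret typically compresses
    # a byte or so better, and that difference shows up in the ciphertext length.
    print(compressed_len(b"Cookie: session=7"))    # mostly matches the secret
    print(compressed_len(b"Cookie: session=X"))    # wrong final character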


Great takeaways from BEAST and CRIME.

For BEAST, my takeaway was to never introduce a dependency into a cryptosystem of any kind without thinking very carefully about the implications -- and if in doubt, don't do it. I also took away "don't be creative" -- boring crypto is best. Generating random IVs from a CSPRNG is boring. Reusing the last block is interesting, but there be dragons. Crypto should be boring, straightforward, and directly based on current state-of-the-art best practices, without introducing anything that hasn't been subjected to rigorous analysis. Complexity equals bugs, etc.

For CRIME the takeaway is the concise "never compress something secret together with something attacker-influenced." It means one must be careful when combining compression with encryption, and if one is highly paranoid, it is probably best to leave compression out entirely to err on the side of caution.


Nit: chaining IVs was the norm, used in a bunch of protocols, until the mid-2000s. It's obvious why you'd want to do it (saves space, and, conceptually, the IV is the negative-oneth ciphertext block anyways).


D'oh. Sorry. It's release day here.

POODLE is the best example, I think.


It happens. :)

I agree, POODLE is a close analog.


> TLS's Mac-then-encrypt CBC was known not to be IND-CCA secure, and a few years later Thai Duong and Juliano Rizzo turned that property into BEAST. Then there was CRIME, then Lucky13, and finally TLS MtE CBC had to be put to sleep.

One important thing to note, however, is that most (all?) of the TLS attacks rely on opening many thousands or millions of connections via scripting. Without this, these attacks would not be possible. A messaging system like Telegram is not scriptable and hence should not be exploitable in the same sorts of ways.

Of course, as you say, better solutions like Signal and OTR exist.

I'm really disappointed with the project management of Signal though: the shitstorm with F-Droid, Moxie's strange obsession with using third-party analytics software in privacy software, the fact that they pulled the socialist millionaire protocol (zero-knowledge-proof-based verification) and left only automatic "let's trust the server" and QR-code verification, the strong reliance on phone numbers, which still hasn't been dropped even with the desktop release, etc... ChatSecure and similar might be a better option.


Are you here to discuss the cryptography of Telegram, or just to slag Signal?


> This is such a common flaw that Moxie Marlinspike, a co-author of TextSecure/Signal, which you should use in preference to Telegram, named it "the cryptographic doom principle".

He also wrote a good post explaining it:

http://www.thoughtcrime.org/blog/the-cryptographic-doom-prin...


> The easiest way to lose IND-CCA security is to fail to MAC your ciphertext

Or the easiest way to get IND-CCA is to add a MAC to your ciphertext 8)

Actually, it's not that your scheme magically gains IND-CCA when you add the MAC; it's that the attacker effectively loses the CCA attack model (they can't test arbitrary decryptions anymore).
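
For reference, a minimal encrypt-then-MAC sketch of that generic composition (not Telegram's or Signal's actual code; AES-CTR here comes from pycryptodome, which is assumed to be available):

    import hmac, hashlib, os
    from Crypto.Cipher import AES

    def encrypt_then_mac(enc_key, mac_key, plaintext):
        nonce = os.urandom(8)
        ct = AES.new(enc_key, AES.MODE_CTR, nonce=nonce).encrypt(plaintext)
        tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def decrypt(enc_key, mac_key, blob):
        nonce, ct, tag = blob[:8], blob[8:-32], blob[-32:]
        expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):    # constant-time comparison
            raise ValueError("bad MAC")               # reject before decrypting anything
        return AES.new(enc_key, AES.MODE_CTR, nonce=nonce).decrypt(ct)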


Unless you fail to verify the MAC in constant time. Then you've just made the attack less convenient.
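
In Python terms, the difference is between something like these two checks (hmac.compare_digest from the standard library being the safe one):

    import hmac

    def verify_tag_badly(tag, expected):
        return tag == expected                      # comparison time can depend on where
                                                    # the first mismatch is: a timing leak

    def verify_tag(tag, expected):
        return hmac.compare_digest(tag, expected)   # constant-time comparison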


Does this affect only the standard Telegram chats, or "Secret Chats" as well?


My understanding is that the "secret chats" (which should be, but aren't, the default) use the same bulk encryption construction.


What a surprise, not. Many respected experts have criticised Telegram for implementing their own cryptography and using meaningless buzzwords, while also making encryption opt-in. Telegram is clearly not a privacy-motivated platform and anyone who thinks so is deluding themselves.

Even the closed source WhatsApp (uses ETE from the Signal guys) and iMessage are arguably less likely to contain cryptographic flaws than Telegram.


> Even the closed source WhatsApp (uses ETE from the Signal guys) and iMessage are arguably less likely to contain cryptographic flaws than Telegram

Even though you used the words "arguably less likely" to soften your statement, there's no way to check what WhatsApp or iMessage are doing because they're not open source. So even if security researchers want to look at the code or build their own clients, it's not possible. Reverse engineering is possible, but it's tedious compared to having the source code. At least the Telegram client code is open source, which makes examining it easier.

That said, the criticism of Telegram for using home brewed encryption is appropriate and needs to be mentioned often (hopefully Telegram will change the protocol). Even the authors of this paper state:

> The take-home message (once again) is that well-studied, provably secure encryption schemes that achieve strong definitions of security (e.g., authenticated-encryption) are to be preferred to home-brewed encryption schemes.


> no way to check what WhatsApp or iMessage are doing because they're not open source

> Reverse engineering is possible, but is tedious

I think that for a security-related application, trusting that the code you have is the code being built and distributed as the binary is a huge oversight. I'd argue that sniffing packets and stepping through code is the proper way (of course, having the code /does/ help with this). Consider: what idiot would put a backdoor in plain sight?


The original commenter specifically mentioned the use-case of building clients from the source. Your comment doesn't counter anything that was said. Btw, most backdoors are disguised as 0-days for corner cases for deniability. They are primarily errors breaking memory safety or side channels. One can also exploit compiler properties but I've never seen that in the wild. Would be easier on iOS, though, due to standardized tools & platform details.

So, no, sniffing packets or stepping through code isn't the best way to do it. The best way is combining docs, source code, covert channel analysis, and execution traces, and looking at them all for issues. That's still not even the minimum requirement for high-assurance security, but it's how many of the problems that exist in low-assurance, source-and-binary-distributed software get caught.


The point I'm trying to make is that you can audit some copy of the Telegram source code all you want, but you have no idea whether that's what's deployed in the app store, and thus what's on everyone's phone. So it makes sense to audit what's actually being distributed to end users.


That should definitely be audited on top of the source code. You have no disagreement from me, there.


The Android Telegram app uses a closed source blob for messaging, so we can't check what it does either.

The server software is closed source too, so we can't check what it does either.

What is your point then when both ends of the communication are closed source?


Is the Android Telegram client not open: https://github.com/DrKLO/Telegram ? I may be missing something.

It seems to me that we could just fork the app and add in any ETE encryption we want. I get that everyone is annoyed at the Telegram people, but there are a bunch of open clients and the encryption we'd want must be independent of the server. For instance I could paste in PGP encrypted messages. Maybe there is some technical reason this wouldn't work?


I think people would rather develop their own app from the ground up instead of piggybacking on another app/network, when all the users have to download a custom app anyway.


There's "no way" to check what iMessage is doing?


I guess it’s possible on a jailbroken device. (Perhaps even on a non-jailbroken device…?)


All you need is to capture and view the traffic at the network level. You can easily do that off-device when using wifi.


That's not enough. The keys could be distributed later when it's not obvious. System or network timing channels can be used. Subversion is a very difficult problem to deal with. Having the source code is a start on it. Not having the source code is a no-go for trustworthiness if malicious insiders exist.


That way you can't know for sure what is happening all the times you aren't watching (maybe the client is coded with "use shitty encryption [when client receives message X from server / the year is 2016 / your message includes a word on a blacklist]").

In theory you could reverse engineer the binary (which is compiled code); this is how security firms try to understand malware (like Stuxnet). But this is pretty hard to do.


Your ability to reverse engineer a binary isn't "theoretical", nor is it hard. These programs haven't been obfuscated.

The hard part of evaluating cryptographic messaging services isn't binary reversing; it's that evaluating cryptographic constructions is hard. The flaw we're discussing today in Telegram is evident from the documentation, but despite the fact that every cryptographer who has commented on Telegram has had nothing but bad things to say about how it does crypto, nobody connected these particular dots.

Crypto is hard. Next to crypto, reversing a program compiled with a normal compiler is just a speed bump.


Of course it's possible, just not as easy. The binaries are available, you decompile them, and you step through the resulting low-level code. I bet many people have done that already.


Sorry, I was doing multiple edits on my comment in the last few minutes without noticing the replies. I did mention that reverse engineering is possible, but it's really tedious and not as easy to interpret all the code flows compared to having the source code.


WhatsApp encryption is broken.

WhatsApp can just turn off encryption when they want, without users knowing:

http://heise.de/-2630361

That's probably what happened in June:

"Investigators said earlier they had detained 16 people in the anti-terror raids after working with U.S. authorities to monitor suspects' communications on WhatsApp Inc.'s messaging service."

http://www.bloomberg.com/news/articles/2015-06-08/belgium-ar...


I'm saddened by this. I'm a heavy user of Telegram, and I think it's in many respects superior to WhatsApp and iMessage. The weird stance on cryptography is such a shame.


Please consider switching your team over to Signal. Or at least give it a try if you are on an iOS or an Android phone.


Any suggestion for Windows Phone 8 users?


How recent is your device? Is it scheduled to get Windows 10? I'd say our best bet at this time is to push for a Universal Windows Platform app.

How good is your C#?


I think quite good although I haven't tried to write anything in Universal Windows Platform yet. Sorry for the late reply.


I can't find any official statements by WA/FB that WhatsApp uses ETE.

Do you have further sources aside from [1]?

[1] http://www.wired.com/2014/11/whatsapp-encrypted-messaging/


Why would they use ETE? Isn't the whole point of WA/FB to sell user-created data to third parties?

Edit:

Given the down vote(s), I guess I'm missing something obvious.

What I'm getting at is this: If our activities were completely encrypted and we appeared anonymous to WA/FB they couldn't hope to make money.

Would anyone smarter than me care to explain why this isn't the case?


It's a good question. But I think the contents of their users' communications are less important for their revenues than the metadata.


Is there a description of this somewhere? (Such as in public filings.)

Is the advertising so coarse as to ignore the content of what people say? Are they only propagating information about purchasing and browsing history through the metadata offered by the social network information in order to drive advertising?

All the questions should suggest that I'm genuinely curious and looking to understand, so if you read this and get riled up, instead of downvoting please take twenty seconds to guide me to the truth...


Someone is going to write a book about how backwards and stupid person-to-person communications were from the year 1990 to, at the earliest, the year 2020.

We have all these proprietary, awful protocols meant to lock you into an app or a platform, and we threw away standards years ago. We "trust" proprietary services with security while ignoring the existence of decades-old, proven, open secure communications like OTR, mainly because the proprietary vendors want to shove their crap down everyone's throat to be able to read all their communications, or in Telegram's case to profit off buzzword security rather than real security. And the joke is that it's working fantastically. Someone wrote a non-fiction novel on doublethink and we're the setting.


> proven open secure communications like OTR

Axolotl has significant advantages over OTR, such as providing forward secrecy and deniability while operating in an asynchronous context.

It is necessary that the global standard for secure messaging be decided by rough consensus and running code, not by a standards committee.


This article is based on Jakobsen’s master’s thesis: https://news.ycombinator.com/item?id=10713064


> According to the official description of MTProto the padding length is between 0 and 120 bits, which means that in the best case scenario the above attack succeeds with probability 2⁻⁸. However experimental validation shows that this is not the case and the length of the padding is in fact between 0 and 96 bits (this is due to the way the objects are serialized in Java). So we conclude that in the best case this attack succeeds with probability 2⁻³²...

With a bit more knowledge about Java object serialization, could one be a bit more clever about picking the random block?


Full disclosure: I worked at Telegram.

That's why, after quitting Telegram and starting to build Actor, we decided not to implement their encryption: it always looked like spaghetti encryption.


It's fair to say that this is still a theoretical attack. As the authors of this paper mention, they don't see a way of turning the "flaw" into a real exploit.


It's not a theoretical attack.

https://news.ycombinator.com/item?id=10713064


The article says:

"We stress that this is a theoretical attack on the definition of security and we do not see any way of turning the attack into a full plaintext-recovery attack."

... which appears to be wrong, especially since it was published after the other paper?


And the article linked by tptacek (by the same authors) builds on this and shows practical attacks.


Theoretical attacks have a way of turning into weaponized exploits.

For example, check out https://www.openssl.org/~bodo/tls-cbc.txt. This is a document published by Bodo Moeller in the early 2000s that details multiple theoretical weaknesses in the CBC mode used in TLS. Read it top to bottom and see how many practical attacks on TLS you can count.


This one was turned into a further-weaponized attack, published in the author's master's thesis, which is in the bibliography for the paper.

I don't know why this paper was published independently, as it's a building block for the other attack.


What other attack?



Well, a theoretical attack is worse than no theoretical attack. Especially if there are perfectly fine protocols available that are IND-CCA2 secure.


Despite the severity of this finding, it still doesn't qualify for the empty "break our crypto" contest that Telegram put out.

STOP USING TELEGRAM!


Good, MTProto is a giant PITA anyway. Why they felt the need to roll their own crypto is beyond me, and I really wish people would stop doing so because inevitably it ends up broken.


I still don't really understand all this discussion around Telegram. MTProto is used just for the regular chats, and they decided to use a custom protocol "In order to achieve reliability on weak mobile connections as well as speed when dealing with large files (such as photos, large videos and files up to 1,5 GB)" (from the FAQ). There is also a pretty long discussion here http://unhandledexpression.com/2013/12/17/telegram-stand-bac... between Telegram developers and another "security expert", where the practical reasons for their choices are explained (it might be outdated, but it's still interesting to see why certain more secure algorithms might not always be the best solution). Providing cloud services, robustness and reliability can come at a cost, unfortunately.

If you care about having a really private conversation, just use a Secret Chat. It's state of the art in terms of security. Here's just one of the many benchmarks: https://www.eff.org/secure-messaging-scorecard.


I don't think it's true that the "secret chats" don't use MTE IGE; I think the "secret chats" are the same construction, but keyed with a separate DH handshake.


But it still asks the server to pick the group in which to do DH. This has to be exploitable somehow...


I agree that's a very dumb design decision, but supposedly the clients are expected to validate the DH parameters.


True, it needs to be a safe prime, but not all 2048-bit safe primes are equally good. At the very least there are those for which the Special Number Field Sieve applies, though that's likely still infeasible.

There might be very rare 2048-bit DH groups using safe primes for which the DLP problem is easily solvable, I wasn't able to find any research into whether or not those can exist.


You also have to validate the generator, which is trickier than the prime.
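
Roughly the kind of client-side check being discussed (a sketch, not Telegram's actual validation logic; sympy.isprime is assumed to be available for primality testing):

    from sympy import isprime

    def validate_dh_params(p, g, peer_public, bits=2048):
        # Expect a safe prime of the right size: p = 2q + 1 with q prime.
        if p.bit_length() != bits or not isprime(p):
            return False
        q = (p - 1) // 2
        if not isprime(q):
            return False
        # Range checks; fully validating the generator (which subgroup g
        # actually generates) takes more work, as the parent comment notes.
        if not (1 < g < p - 1):
            return False
        if not (1 < peer_public < p - 1):       # reject 0, 1, p-1 and out-of-range keys
            return False
        if pow(peer_public, q, p) != 1:         # peer's key must lie in the order-q subgroup
            return False
        return True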



