
MTProto, the symmetric encryption scheme used in Telegram, is not IND-CCA secure - bpierre
https://eprint.iacr.org/2015/1177
======
tptacek
For those wondering:

IND refers to "indistinguishability"; it is the property of not being able to
pick a matching plaintext/ciphertext pair out of a lineup of ciphertexts.

CCA stands for Chosen Ciphertext Attacks, and refers to the class of attacks
where the cryptanalyst alters ("chooses") the ciphertext before it's decrypted
by the victim, and is then able to learn things from the victim's behavior.
(There's also CCA2, "adaptive" CCA, in which the attacker can keep submitting
chosen ciphertexts for decryption even after seeing the challenge ciphertext,
adapting each query based on the previous responses.)
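
The IND game described above can be sketched as a toy experiment. Everything here is illustrative (the schemes are stand-ins, not MTProto): a deterministic scheme loses the game outright because the attacker can simply re-encrypt a candidate plaintext, while a one-time pad reduces the attacker to coin-flipping.

```python
import hashlib
import os
import secrets

def otp_encrypt(m: bytes) -> bytes:
    # One-time pad under a fresh random key: the ciphertext is uniform
    # whatever the plaintext, so nothing distinguishes the two messages.
    key = os.urandom(len(m))
    return bytes(a ^ b for a, b in zip(m, key))

def det_encrypt(m: bytes) -> bytes:
    # Deliberately broken: deterministic "encryption" (a keyed hash).
    # The adversary can re-encrypt a candidate plaintext and compare.
    return hashlib.sha256(b"fixed-key" + m).digest()

def ind_game(encrypt, guess, m0: bytes, m1: bytes) -> bool:
    # One round: the challenger encrypts m0 or m1 at random, and the
    # adversary wins if it identifies which one was encrypted.
    b = secrets.randbits(1)
    ct = encrypt((m0, m1)[b])
    return guess(m0, m1, ct) == b

def det_adversary(m0, m1, ct):
    # Against a deterministic scheme: just re-encrypt m0 and compare.
    return 0 if det_encrypt(m0) == ct else 1

def blind_adversary(m0, m1, ct):
    # Against the one-time pad, nothing beats a coin flip.
    return secrets.randbits(1)
```

Run many rounds and the deterministic scheme loses every time, while the adversary facing the one-time pad hovers around 50%.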

TLS's MAC-then-encrypt CBC was known not to be IND-CCA secure, and a few years
later Thai Duong and Juliano Rizzo turned that property into BEAST. Then there
was CRIME, then Lucky13, and finally TLS MtE CBC had to be put to sleep.

The easiest way to lose IND-CCA security is to fail to MAC your ciphertext.
This is such a common flaw that Moxie Marlinspike, a co-author of
TextSecure/Signal, which you should use in preference to Telegram, named it
"the cryptographic doom principle".
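
Encrypt-then-MAC, the composition the doom principle points at, looks roughly like this. A minimal sketch: the keystream construction is a toy stand-in for a real cipher, and the key/nonce sizes are arbitrary choices, but the essential point is that decrypt() verifies the tag over the ciphertext before touching any plaintext.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. A stand-in for a real
    # stream cipher, used only to keep the sketch dependency-free.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # MAC the ciphertext (and nonce), not the plaintext.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    # Verify the tag first; reject before doing anything with the data.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Any tampering with the ciphertext is rejected before a single byte is decrypted, which is exactly the property that denies the attacker a decryption oracle.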

This paper is a building block for a much more significant attack, which is
the author's master's thesis:

[https://news.ycombinator.com/item?id=10713064](https://news.ycombinator.com/item?id=10713064)

~~~
sdevlin
BEAST and CRIME are both chosen-plaintext attacks. They don't rely on the
improper MAC composition in TLS CBC.

Lucky13 and POODLE are the chosen-ciphertext attacks.

EDIT: Some more details:

BEAST takes advantage of predictable IVs in SSLv3 and TLS 1.0. In these
protocols, the IV for each new record is simply the last block of the previous
record. An attacker monitoring traffic on the wire can use this predictability
to build an encryption oracle and guess-and-check the contents of ciphertext
blocks.
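
That encryption oracle can be demonstrated with a toy CBC. The block "cipher" below is just a keyed hash (fine here, since only the encryption direction is needed) and the secret and guess values are made up; the point is that knowing the next IV lets the attacker cancel it out and test a guess against an observed ciphertext block.

```python
import hashlib
import os

BLOCK = 16

def ecb(key: bytes, block: bytes) -> bytes:
    # Toy 16-byte block "cipher": a keyed hash. Only the encryption
    # direction is needed for this demonstration.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)

# Record 1: the victim CBC-encrypts a secret block under IV iv0.
iv0 = os.urandom(BLOCK)
secret = b"password=hunter2"  # exactly one block; made up for the demo
c1 = ecb(key, xor(iv0, secret))

# The SSLv3/TLS 1.0 flaw: the next record's IV is the last ciphertext
# block of the previous record, so the attacker sees it on the wire.
iv1 = c1

# The attacker injects P' = iv1 XOR iv0 XOR guess. CBC then computes
# E(key, iv1 XOR P') = E(key, iv0 XOR guess), which equals c1 exactly
# when the guess matches the secret.
guess = b"password=hunter2"
probe = xor(xor(iv1, iv0), guess)
c2 = ecb(key, xor(iv1, probe))
```

With random per-record IVs the attacker cannot construct `probe`, which is why unpredictable IVs kill this attack.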

CRIME uses plaintext compression to its advantage. A message with longer
common substrings will compress slightly better than one without, and this is
reflected in the ciphertext length. An attacker can make adaptively chosen
guesses at substrings included in the message to recover, e.g., session
cookies.
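
The length side channel is easy to reproduce with zlib. The cookie value and record layout below are invented for the demo; the assumption is a stream cipher, so ciphertext length equals compressed length.

```python
import zlib

# The secret the attacker wants to recover; the transport compresses
# attacker-controlled data together with it. Values are made up.
SECRET_HEADER = "Cookie: session=f81d4fae7dec11d0a76500a0c91e6bf6"

def record_length(attacker_body: str) -> int:
    # With a stream cipher, ciphertext length equals compressed
    # length, so the attacker reads this straight off the wire.
    record = attacker_body + "\r\n" + SECRET_HEADER
    return len(zlib.compress(record.encode(), 9))

# A guess that duplicates the secret compresses into a back-reference;
# an unrelated string of the same length does not.
matching = record_length("Cookie: session=f81d4fae7dec11d0a76500a0c91e6bf6")
non_matching = record_length("ZqVw81LmNpRt64KsJhGfDeCbAy20XuOiTrEw95PlMnBvCxZa")
```

Adaptive recovery then proceeds byte by byte: extend the guess with each candidate character and keep the one that yields the shortest record.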

~~~
api
Great takeaways from BEAST and CRIME.

For BEAST, my takeaway was to never introduce a dependency of any kind into a
cryptosystem without thinking _very_ carefully about the implications -- and
if in doubt, don't do it. I also took away "don't be creative" -- boring
crypto is best. Generating random IVs from a CSPRNG is boring. Reusing the
last block is interesting, but there be dragons. Crypto should be boring,
straightforward, and directly based on the current state-of-the-art best
practices, without introducing anything that hasn't been subjected to rigorous
analysis. Complexity equals bugs, etc.

For CRIME the takeaway is the concise "never _compress_ something secret
together with something attacker-influenced." It means one must be careful
when combining compression with encryption, and that if one is very highly
paranoid it is probably best to leave compression out, to err on the side of
caution.

~~~
tptacek
Nit: chaining IVs was the norm, used in a bunch of protocols, until the
mid-2000s. It's obvious why you'd want to do it (saves space, and,
conceptually, the IV is the negative-oneth ciphertext block anyways).

------
HappyTypist
What a surprise, not. Many respected experts have criticised Telegram for
implementing their own cryptography and using meaningless buzzwords, while
also making encryption opt in. Telegram is clearly not a privacy-motivated
platform and anyone who thinks so is deluding themselves.

Even the closed-source WhatsApp (which uses E2E encryption from the Signal
guys) and iMessage are arguably less likely to contain cryptographic flaws
than Telegram.

~~~
newscracker
> Even the closed-source WhatsApp (which uses E2E encryption from the Signal
> guys) and iMessage are arguably less likely to contain cryptographic flaws
> than Telegram

Even though you used the words "arguably less likely" to soften your
statement, there's no way to check what WhatsApp or iMessage are doing,
because they're not open source. So even if security researchers want to look
at the code or build their own clients, it's not possible. Reverse engineering
is possible, but it's tedious compared to having the source code. At least the
Telegram client code is open source, which makes examining it easier.

That said, the criticism of Telegram for using home-brewed encryption is
appropriate and needs to be mentioned often (hopefully Telegram will change
the protocol). Even the authors of this paper state:

> The take-home message (once again) is that well-studied, provably secure
> encryption schemes that achieve strong definitions of security (e.g.,
> authenticated-encryption) are to be preferred to home-brewed encryption
> schemes.

~~~
makmanalp
> no way to check what WhatsApp or iMessage are doing because they're not open
> source

> Reverse engineering is possible, but is tedious

I think that for a security-related application, trusting that the code you
have is the same code being built and distributed as a binary is a huge
oversight. I'd argue that sniffing packets and stepping through the binary is
the proper way (of course, having the code /does/ help with this). Consider:
what idiot would put a backdoor in plain sight?

~~~
nickpsecurity
The original commenter specifically mentioned the use case of building
clients from the source. Your comment doesn't counter anything that was said.
Btw, most backdoors are disguised as 0-days in corner cases, for deniability.
They are primarily errors breaking memory safety, or side channels. One can
also exploit compiler properties, but I've never seen that in the wild. It
would be easier on iOS, though, due to the standardized tools & platform
details.

So, no, sniffing packets or stepping through code isn't the best way to do
it. The best way is combining docs, source code, covert channel analysis, and
execution traces, and looking at them all for issues. That's still not even
the minimum requirement for high-assurance security, but it's how many of the
problems that exist in low-assurance source- and binary-distributed software
get caught.

~~~
makmanalp
The point I'm trying to make is that you can audit some copy of the Telegram
source code all you want; you still have no idea whether that's what's
deployed in the app store, and thus what's on everyone's phone. So it makes
sense to audit what's actually being distributed to end users.

~~~
nickpsecurity
That should definitely be audited on top of the source code. You have no
disagreement from me, there.

------
zanny
Someone is going to write a book about how backwards and stupid person-to-
person communications were from the year 1990 to, at the earliest, the year
2020.

We have all these awful proprietary protocols meant to lock you into an app
or a platform, and we threw away standards years ago. We "trust" proprietary
services with security while ignoring the existence of decades-old, proven,
open secure communications like OTR, mainly because the proprietary vendors
want to shove their crap down everyone's throat to be able to read all their
communications -- or, in Telegram's case, to profit off buzzword security
rather than real security. And the joke is that it is working fantastically.
Someone wrote a non-fiction novel on doublethink, and we're the setting.

~~~
Canada
> proven open secure communications like OTR

Axolotl has significant advantages over OTR, such as providing forward
secrecy and deniability while operating in an asynchronous context.

It is necessary that the global standard for secure messaging be decided by
rough consensus and running code, not by a standards committee.

------
delan
This article is based on Jakobsen’s master’s thesis:
[https://news.ycombinator.com/item?id=10713064](https://news.ycombinator.com/item?id=10713064)

------
jessaustin
_According to the official description of MTProto the padding length is
between 0 and 120 bits, which means that in the best case scenario the above
attack succeeds with probability 2⁻⁸. However experimental validation shows
that this is not the case and the length of the padding is in fact between 0
and 96 bits (this is due to the way the objects are serialized in Java). So we
conclude that in the best case this attack succeeds with probability 2⁻³²..._

With a bit more knowledge about Java object serialization, could one be a bit
more clever about picking the random block?

------
ex3ndr
Full disclosure: Worked at Telegram

That's why, after quitting Telegram and starting to build Actor, we decided
not to implement their encryption: it always looked like spaghetti
encryption.

------
paulmillr
It's fair to say that this is still a theoretical attack. As the authors of
this paper mention, they don't see a way of turning the "flaw" into a real
exploit.

~~~
tptacek
It's not a theoretical attack.

[https://news.ycombinator.com/item?id=10713064](https://news.ycombinator.com/item?id=10713064)

~~~
illumen
The article says:

"We stress that this is a theoretical attack on the definition of security and
we do not see any way of turning the attack into a full plaintext-recovery
attack."

... which appears to be wrong, and was even published after the other paper?

~~~
detaro
And the article linked by tptacek (by the same authors) builds on this and
shows practical attacks.

------
sarciszewski
Despite the severity of this finding, it still doesn't qualify for the empty
"break our crypto" contest that Telegram put out.

STOP USING TELEGRAM!

------
snuxoll
Good, MTProto is a giant PITA anyway. Why they felt the need to roll their own
crypto is beyond me, and I really wish people would stop doing so because
inevitably it ends up broken.

------
sirnicolaz
I still don't really understand all this discussion around Telegram. MTProto
is used just for the regular chats, and they decided to use a custom protocol
"In order to achieve reliability on weak mobile connections as well as speed
when dealing with large files (such as photos, large videos and files up to
1,5 GB)" (from the FAQ). There is also a pretty long discussion here
[http://unhandledexpression.com/2013/12/17/telegram-stand-back-we-know-maths/](http://unhandledexpression.com/2013/12/17/telegram-stand-back-we-know-maths/)
between Telegram developers and another "security expert", where the
practical reasons for their choices are explained (it might be outdated, but
it's still interesting to see why certain more secure algorithms might not
always be the best solution). Providing cloud services, robustness and
reliability can unfortunately come at a cost.

If you care about having a really private conversation, just instantiate a
Secret Chat. It's state of the art in terms of security. Just one of the many
benchmarks:
[https://www.eff.org/secure-messaging-scorecard](https://www.eff.org/secure-messaging-scorecard).

~~~
tptacek
I don't think it's true that the "secret chats" don't use MTE IGE; I think the
"secret chats" are the same construction, but keyed with a separate DH
handshake.

~~~
xnyhps
But it still asks the server to pick the group in which to do DH. This has to
be exploitable somehow...

~~~
tptacek
I agree that's a _very_ dumb design decision, but supposedly the clients are
expected to validate the DH parameters.

~~~
xnyhps
True, it needs to be a safe prime, but not all 2048-bit safe primes are
equally good. At the very least there are those to which the Special Number
Field Sieve applies, though that's likely still infeasible.

There might be very rare 2048-bit DH groups using safe primes for which the
DLP is easily solvable; I wasn't able to find any research into whether or
not those can exist.

~~~
tptacek
You also have to validate the generator, which is trickier than the prime.
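
The validation being discussed can be sketched generically (this is a standard safe-prime/subgroup check, not Telegram's actual client code; the Miller-Rabin round count is an arbitrary choice): verify that p is a safe prime, and that g actually generates the prime-order subgroup, so a malicious server can't supply a weak group or a small-order generator.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    # Miller-Rabin primality test; the round count is an arbitrary choice.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def validate_dh_params(p: int, g: int) -> bool:
    # p must be a safe prime: both p and q = (p - 1) // 2 prime.
    if not is_probable_prime(p):
        return False
    q = (p - 1) // 2
    if not is_probable_prime(q):
        return False
    # g must be in range and generate the order-q subgroup, so the
    # server cannot confine shared secrets to a small subgroup.
    if not 1 < g < p - 1:
        return False
    return pow(g, q, p) == 1
```

The tiny safe prime 23 works for illustration; real parameters would be 2048-bit groups, and as noted above, even a valid safe prime can still be a weak choice (e.g., SNFS-friendly).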

