IND refers to "indistinguishability"; it is the property of not being able to pick a matching plaintext/ciphertext pair out of a lineup of ciphertexts.
CCA stands for Chosen Ciphertext Attack, and refers to the class of attacks where the cryptanalyst alters ("chooses") ciphertexts before they're decrypted by the victim, and is then able to learn things from the victim's behavior. (There's also CCA2, the adaptive variant, in which the attacker keeps submitting new ciphertexts to the victim even after seeing the challenge ciphertext, tuning each query based on the previous responses.)
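The game-based definition is easy to sketch in code. Below is a toy Python model of the adaptive (CCA2) game; the unauthenticated, deliberately malleable stream cipher and the names (`Challenger`, `dec_oracle`) are my own illustrative choices, not anything from the paper:

```python
import hashlib, secrets

def _stream(key, nonce, n):
    # Toy keystream: SHA-256 in counter mode. Illustration only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

class Challenger:
    def __init__(self):
        self.key = secrets.token_bytes(32)
        self.b = secrets.randbelow(2)   # hidden bit the attacker must guess
        self.challenge = None

    def _enc(self, m):
        nonce = secrets.token_bytes(16)
        ks = _stream(self.key, nonce, len(m))
        return nonce + bytes(x ^ y for x, y in zip(m, ks))

    def _dec(self, c):
        nonce, body = c[:16], c[16:]
        ks = _stream(self.key, nonce, len(body))
        return bytes(x ^ y for x, y in zip(body, ks))

    def challenge_enc(self, m0, m1):
        assert len(m0) == len(m1)
        self.challenge = self._enc([m0, m1][self.b])
        return self.challenge

    def dec_oracle(self, c):
        # CCA2 (adaptive): decryption queries are allowed even after the
        # challenge is issued -- anything except the challenge itself.
        if c == self.challenge:
            raise ValueError("may not decrypt the challenge ciphertext")
        return self._dec(c)

# A malleable scheme loses the game: flip one bit of the challenge,
# ask the oracle, and the answer reveals which message was encrypted.
ch = Challenger()
c = ch.challenge_enc(b"attack at dawn", b"attack at dusk")
p = ch.dec_oracle(c[:-1] + bytes([c[-1] ^ 1]))   # != challenge, so allowed
guess = 0 if p.startswith(b"attack at daw") else 1
assert guess == ch.b                             # attacker wins every time
```

The bit-flip query is exactly the kind of "chosen ciphertext" the definition is about: the oracle's answer on a mauled ciphertext distinguishes the two candidate plaintexts with certainty.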
TLS's MAC-then-encrypt CBC was known not to be IND-CCA secure (and, with its chained IVs, not even IND-CPA secure), and a few years later Thai Duong and Juliano Rizzo turned those weaknesses into BEAST. Then there was CRIME, then Lucky13, and finally TLS MtE CBC had to be put to sleep.
The easiest way to lose IND-CCA security is to fail to MAC your ciphertext. This is such a common flaw that Moxie Marlinspike, a co-author of TextSecure/Signal (which you should use in preference to Telegram), named it "The Cryptographic Doom Principle".
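A sketch of the doom principle done right: encrypt-then-MAC, with the tag verified before any decryption work happens. The SHA-256-counter stream cipher below is a toy placeholder of my own (real code should use an authenticated mode like AES-GCM or ChaCha20-Poly1305); the ordering is the point:

```python
import hmac, hashlib, secrets

def _keystream(key, nonce, n):
    # Toy stream cipher: SHA-256 in counter mode. Placeholder only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key, mac_key, msg):
    nonce = secrets.token_bytes(16)
    ks = _keystream(enc_key, nonce, len(msg))
    ct = bytes(a ^ b for a, b in zip(msg, ks))
    # Encrypt-then-MAC: the tag covers everything the receiver will parse.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(enc_key, mac_key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    want = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    # Doom principle: verify the MAC before touching the ciphertext.
    if not hmac.compare_digest(want, tag):
        raise ValueError("bad MAC")
    ks = _keystream(enc_key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
blob = seal(ek, mk, b"hello")
assert open_(ek, mk, blob) == b"hello"

# Any tampered ciphertext is rejected before decryption ever runs, so
# the attacker gets no decryption oracle to probe.
forged = blob[:16] + bytes([blob[16] ^ 1]) + blob[17:]
try:
    open_(ek, mk, forged)
    assert False, "forgery accepted"
except ValueError:
    pass
```

Note the `hmac.compare_digest` call: a naive `==` comparison could itself leak a timing side channel during tag verification.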
This paper is a building block for a much more significant attack, which is the author's master's thesis:
Lucky13 and POODLE are the chosen-ciphertext attacks.
EDIT: Some more details:
BEAST takes advantage of predictable IVs in SSLv3 and TLS 1.0. In these protocols, the IV for each new record is simply the last block of the previous record. An attacker monitoring traffic on the wire can use this predictability to build an encryption oracle and guess-and-check the contents of ciphertext blocks.
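That guess-and-check trick is small enough to demo. Here's a toy Python model (my own construction: a SHA-256 "block cipher" used forward-only, which is all CBC encryption needs) showing how the predictable IV becomes an encryption oracle:

```python
import hashlib, secrets

BS = 16

def _block(key, b):
    # Toy 16-byte PRF standing in for the block cipher. Illustration only.
    return hashlib.sha256(key + b).digest()[:BS]

def cbc_encrypt(key, iv, pt):
    out, prev = [], iv
    for i in range(0, len(pt), BS):
        prev = _block(key, bytes(a ^ b for a, b in zip(pt[i:i+BS], prev)))
        out.append(prev)
    return b"".join(out)

class Tls10Victim:
    """Chains IVs like SSLv3/TLS 1.0: next record's IV = last ct block."""
    def __init__(self):
        self.key = secrets.token_bytes(32)
        self.iv = secrets.token_bytes(BS)   # only the first IV is random
    def send(self, pt):
        ct = cbc_encrypt(self.key, self.iv, pt)
        self.iv = ct[-BS:]                  # predictable to a sniffer
        return ct

v = Tls10Victim()
secret = b"cookie=sessionid"                # one block the attacker wants
iv_for_secret = v.iv                        # attacker sniffed this
c_secret = v.send(secret)

def check(guess):
    # Craft a plaintext so that, after XOR with the known *next* IV, the
    # cipher input equals secret XOR iv_for_secret -- then the ciphertext
    # blocks match exactly when the guess is right.
    iv_next = v.iv                          # predictable!
    probe = bytes(g ^ a ^ b
                  for g, a, b in zip(guess, iv_for_secret, iv_next))
    return v.send(probe)[:BS] == c_secret[:BS]

assert not check(b"cookie=sessionXX")
assert check(b"cookie=sessionid")           # guess confirmed via the oracle
```

Real BEAST refines this into byte-at-a-time recovery by controlling block alignment; random per-record IVs (as in TLS 1.1+) kill the whole technique, since `iv_next` is no longer predictable.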
CRIME uses plaintext compression to its advantage. A message with longer common substrings will compress slightly better than one without, and this is reflected in the ciphertext length. An attacker can make adaptively chosen guesses at substrings included in the message to recover, e.g., session cookies.
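The length leak is easy to reproduce with zlib. This sketch models the ciphertext length of a stream cipher as the compressed length (the cookie value and request shape are invented for illustration); real CRIME refines the comparison into byte-at-a-time recovery of the secret:

```python
import zlib

# Invented secret for illustration.
SECRET = b"Cookie: session=7f3a9c1d"

def ciphertext_len(attacker_path):
    # A stream cipher hides content but not length, so the compressed
    # length before encryption is what the attacker sees on the wire.
    record = b"GET /" + attacker_path + b" HTTP/1.1\r\n" + SECRET + b"\r\n"
    return len(zlib.compress(record, 9))

# A guess that shares a long substring with the secret gets turned into
# a back-reference by DEFLATE and compresses measurably better than an
# unrelated guess of the same length.
right = ciphertext_len(b"Cookie: session=7f3a9c1d")
wrong = ciphertext_len(b"Janitor: basement=Zq8wRt")
assert right < wrong
```

The attacker controls `attacker_path` (say, via injected requests from the victim's browser) but never sees `SECRET` directly; the length difference alone is the oracle.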
For BEAST, my takeaway was to never introduce a dependency into a cryptosystem of any kind without thinking very carefully about the implications -- and if in doubt, don't do it. I also took away "don't be creative" -- boring crypto is best. Generating random IVs from a CSPRNG is boring. Reusing the last block is interesting, but there be dragons. Crypto should be boring, straightforward, and directly based on current state-of-the-art best practices, without introducing anything that hasn't been subjected to rigorous analysis. Complexity equals bugs, etc.
For CRIME the takeaway is the concise "never compress something secret together with something attacker-influenced." It means one must be careful when combining compression with encryption, and if in doubt it is probably best to leave compression out entirely, to err on the side of caution.
POODLE is the best example, I think.
I agree, POODLE is a close analog.
One important thing to note, however, is that most (all?) of the TLS attacks rely on opening many thousands or millions of connections via scripting. Without this, these attacks would not be possible. A messaging system like Telegram is not scriptable and hence should not be exploitable in the same sorts of ways.
Of course, as you say, better solutions like Signal and OTR exist.
I'm really disappointed with the project management of Signal, though: the shitstorm with F-Droid, Moxie's strange obsession with using third-party analytics software in privacy software, the fact that they pulled the socialist millionaire protocol (zero-knowledge-proof-based verification) and left only the automatic "let's trust the server" and QR-code verification, the strong reliance on phone numbers which still hasn't been broken even with the desktop release, etc. ChatSecure and similar might be a better option.
He also wrote a good post explaining it:
Or the easiest way to get IND-CCA is to add a MAC to your ciphertext 8)
Actually, it's not that your scheme magically gains IND-CCA when you add the MAC; it's that the attacker loses the CCA attack model (they can't test random decryptions anymore).
Even the closed-source WhatsApp (which uses end-to-end encryption from the Signal guys) and iMessage are arguably less likely to contain cryptographic flaws than Telegram.
Even though you used the words "arguably less likely" to soften your statement, there's no way to check what WhatsApp or iMessage are doing, because they're not open source. So even if security researchers want to look at the code or build their own clients, it's not possible. Reverse engineering is possible, but tedious compared to having the source code. At least the Telegram client code is open source, which makes examining it practical.
That said, the criticism of Telegram for using home brewed encryption is appropriate and needs to be mentioned often (hopefully Telegram will change the protocol). Even the authors of this paper state:
> The take-home message (once again) is that well-studied, provably secure encryption schemes that achieve strong definitions of security (e.g., authenticated-encryption) are to be preferred to home-brewed encryption schemes.
> Reverse engineering is possible, but is tedious
I think for a security-related application, trusting that the code you have is the code being built and distributed as the binary is a huge oversight. I'd argue that sniffing packets and stepping through code is the proper way (of course, having the code /does/ help with this). Consider: what idiot would put a backdoor in plain sight?
So, no, sniffing packets or stepping through code isn't the best way to do it. The best way is combining docs, source code, covert channel analysis, and execution traces, and looking at them all for issues. That's still below the minimum requirement for high-assurance security, but it catches many of the problems that exist in low-assurance software distributed as source and binaries.
The server software is closed source too, so we can't check what it does either.
What is your point then when both ends of the communication are closed source?
It seems to me that we could just fork the app and add in any end-to-end encryption we want. I get that everyone is annoyed at the Telegram people, but there are a bunch of open clients, and the encryption we'd want must be independent of the server. For instance, I could paste in PGP-encrypted messages. Maybe there is some technical reason this wouldn't work?
In theory you could reverse engineer the binary (which is compiled code); this is how security firms try to understand malware (like Stuxnet). But this is pretty hard to do.
The hard part of evaluating cryptographic messaging services isn't binary reversing; it's that evaluating cryptographic constructions is hard. The flaw we're discussing today in Telegram is evident from the documentation, but despite the fact that every cryptographer who has commented on Telegram has had nothing but bad things to say about how it does crypto, nobody connected these particular dots.
Crypto is hard. Next to crypto, reversing a program compiled with a normal compiler is just a speed bump.
WhatsApp can just turn off encryption when they want, without users knowing:
That's probably what happened in June:
"Investigators said earlier they had detained 16 people in the anti-terror raids after working with U.S. authorities to monitor suspects' communications on WhatsApp Inc.'s messaging service."
How good is your C#?
Do you have further sources aside from ?
Given the down vote(s), I guess I'm missing something obvious.
What I'm getting at is this: If our activities were completely encrypted and we appeared anonymous to WA/FB they couldn't hope to make money.
Would anyone smarter than me care to explain why this isn't the case?
Is the advertising so coarse as to ignore the content of what people say? Are they only propagating information about purchasing and browsing history through the metadata offered by the social network information in order to drive advertising?
All the questions should suggest that I'm genuinely curious and looking to understand, so if you read this and get riled up, instead of downvoting please take twenty seconds to guide me to the truth...
We have all these proprietary, awful protocols meant to lock you into an app or a platform, and we threw away standards years ago. We "trust" proprietary services with security while ignoring the existence of decades-old, proven, open secure communications like OTR, mainly because the proprietary vendors want to shove their crap down everyone's throat to be able to read all their communications, or in Telegram's case, to profit off buzzword security rather than real security. And the joke is, it's working fantastically. Someone wrote a non-fiction novel on doublethink, and we're the setting.
Axolotl has significant advantages over OTR, such as forward secrecy and deniability while operating in an asynchronous context.
It is necessary that the global standard for secure messaging be decided by rough consensus and running code, not by a standards committee.
With a bit more knowledge about Java object serialization, could one be a bit more clever about picking the random block?
That's why, after quitting Telegram and starting to build Actor, we decided not to implement their encryption: it always looked like spaghetti encryption.
"We stress that this is a theoretical attack on the definition of security and we do not see any way of turning the attack into a full plaintext-recovery attack."
... which appears wrong, and even published after the other paper?
For example, check out https://www.openssl.org/~bodo/tls-cbc.txt. This is a document published by Bodo Moeller in the early 2000s that details multiple theoretical weaknesses in the CBC mode used in TLS. Read it top to bottom and see how many practical attacks on TLS you can count.
I don't know why this paper was published independently, as it's a building block for the other attack.
STOP USING TELEGRAM!
If you care about having a really private conversation, just start a Secret Chat. It's state of the art in terms of security. Here's just one of many benchmarks: https://www.eff.org/secure-messaging-scorecard.
There might be very rare 2048-bit DH groups using safe primes for which the discrete logarithm problem is easily solvable; I wasn't able to find any research into whether or not such groups can exist.
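For what it's worth, checking that a group modulus is a safe prime is cheap, though it says nothing about the hypothetical weak groups above. A minimal probable-safe-prime test using Miller-Rabin (my own sketch, stdlib only):

```python
import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin; error probability <= 4**-rounds per composite n.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def is_safe_prime(p):
    # p is "safe" when (p - 1) / 2 is also prime, so the subgroup of
    # quadratic residues has large prime order and small-subgroup
    # confinement attacks don't apply.
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)

assert is_safe_prime(23)        # 23 = 2*11 + 1, and 11 is prime
assert not is_safe_prime(29)    # 29 = 2*14 + 1, and 14 is composite
```

Running this against a real 2048-bit modulus (e.g. one of the RFC 3526 MODP groups, which are published as safe primes) takes a fraction of a second with Python's native `pow`.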