You may be right, but it galls me that you are.
Rant over. I wish Telegram all the success in the world. I hope they bulletproof their crypto and add another slick option to the growing list out there to stick it to the man.
Must have gotten out of the wrong side of the bed this morning.
If Telegram were pitched as a messaging proposal, I'm sure the attitude would be more straightforward (with the same end result of "you're not ready to handle this yet"). Instead, it's billed as a definitely secure system.
Seriously, read the previous thread: https://news.ycombinator.com/item?id=6913456
Things aren't fair. Hard work does not automatically grant your result any status.
You know what's also unfair? Pitching a product that's supposed to do something but doesn't.
Maybe you can try that with your folks on android?
Yeah, the alpha-geek passive-aggression is a tiresome aspect of our industry. Usually that smug self-assuredness is worn down by harsh experience, but heaven help you if you meet a 40-year-old alpha geek who has not yet learned that they too can be wrong. (I work with one; he makes some shocking mistakes but cannot admit it, so when he launches at you with a holier-than-thou attitude, scorn and contempt are all that arise.)
> What happened to good grace and manners
Exactly. When you're totally right and someone's totally wrong, if you want to effect actual change (assuming that doing so requires their coöperation), grace is precisely what's required. You can have all the technical correctness in the world behind you, but the moment you put someone's back up with your poor attitude, the chance of them rationally engaging with your argument drops dramatically.
I've had this argument before with some people on Reddit, and I received the retort "But who cares if I was rude, _they_ were wrong, I was telling them how to fix it!" ad nauseam from one particular person who was astounded that no one took his advice.
If they presented it as an attempt and asked for critique, then this would have been a fine thing to publish. They would have got chewed out by a professional, fixed it or removed it, and everyone would go away learning something new.
The issue comes when they don't accept critique and make claims that don't stack up in reality. Doing that is how people get hurt.
Why? Because with any other service, the people using it are going to be able to judge whether it is effective. If Word crashes when I open it, or is buggy, I'll know that and I can choose to use something else.
But with a product that is offering security, there is no way for an "ordinary" end user to know whether it really works or not. As is shown time and time again, providing good security is freakin hard. So you'd better be on your game if you're claiming to provide that.
And honestly, if someone is donating their time to check the security of other people's applications, and all they ask in return is to be a little smug when they point out problems, I think that's a fair trade-off.
> We grew tired of “experts” too lazy to read the full documentation.
The "smugness" is pointing out problems in their implementation, despite them going on about having "ACM champions" that took two years to design it, so it obviously must be perfect.
If someone pitched a database that simply mmap'd a file and then called it "fully transactional and safe", they'd get a lot of strong criticism.
This kind of attitude (trust us, we have plenty of credentials to show) seems to be a universal problem with people in academia. Come on, don't do this. We all understand that you have to bluff a bit to get your paper published, but that is not going to work for getting a product accepted.
There, I said it.
I usually have no temper problem, but I lose it every time someone proposes a new "secure" system and ignores the holes people point out, because that means I will need to spend months explaining to people that they should not use it, even if it is hyped.
So, yes, again, you are right, my apologies.
Only noticed your reply now, two days late. That apology is big of you. Keep up the good work, you've got tons of visibility - I'd return to your site again, that's for sure - and I'd listen carefully to what you've got to say.
Probably a response to the bold, did-no-wrong, we-worked-hard-and-that-should-count and generally arrogant attitude of the people they're criticizing.
The fact that they hired experts to make rookie mistakes is just beyond belief.
It's like a pro painter who covers the walls in oil before putting a coat of latex on, then goes and tells the world he's discovered a way to make paint stick indefinitely to walls, then a week later the paint peels right off.
> The rest looks like matters of taste as opposed to objective reasoning. Can you name an actual attack?
The response to that is, "I shouldn't have to!" Anything that replaces a proven secure component with something that we haven't (yet) found an attack on is grounds for suspicion at the very least.
SHA-1 isn't a MAC. It's not that hard to turn it into one (HMAC), but Telegram hasn't.
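For anyone unfamiliar: HMAC is the standard construction for turning a hash like SHA-1 into a MAC, and it's a one-liner with Python's standard library. A minimal sketch (the key and message are just illustrative):

```python
import hmac
import hashlib

key = b"shared-secret-key"   # illustrative key
msg = b"attack at dawn"      # illustrative message

# HMAC-SHA1: the standard way to build a MAC out of SHA-1 (RFC 2104)
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()
print(tag)  # 40 hex chars (160-bit tag)

# Verification must use a constant-time comparison
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
```

That's the entire cost of doing it properly, which is why "we used bare SHA-1 instead" draws criticism.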
> Again, we do not use MAC-then-encrypt. Our scheme is closer to MAC-and-encrypt with some essential modifications.
Out of the three options (MAC-then-encrypt, encrypt-then-MAC, and MAC-and-encrypt), only encrypt-then-MAC is generically secure (http://cseweb.ucsd.edu/~mihir/papers/oem.pdf). I don't care if they've made "essential modifications"; they're replacing a component that is provably secure with one that may or may not be secure.
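To make the ordering concrete, here's a minimal encrypt-then-MAC sketch. The "cipher" below is a stand-in XOR keystream purely for illustration (it is NOT a secure cipher); the point is the structure: the MAC covers the ciphertext, and it's verified before any decryption is attempted.

```python
import hmac
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher for illustration only (NOT secure): XOR against a
    # SHA-256-derived keystream. A real system would use AES-CTR or similar.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(data, stream))

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    ct = toy_encrypt(enc_key, plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()  # MAC covers ciphertext
    return ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    # Verify FIRST: forgeries are rejected before the cipher is ever touched
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")
    return toy_encrypt(enc_key, ct)  # the XOR stand-in is its own inverse

ek, mk = os.urandom(32), os.urandom(32)
blob = encrypt_then_mac(ek, mk, b"hello")
assert decrypt(ek, mk, blob) == b"hello"
```

With MAC-then-encrypt or MAC-and-encrypt, the receiver has to run the decryption code on attacker-controlled bytes before authenticating them, which is exactly the surface padding-oracle-style attacks live on.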
"Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well."
Are they a charity designed to squirt ciphertext around the globe? Monitoring my behavior to monetize it? Freemium?
This bit clears it up: PRIVACY: We take your privacy very seriously and will never give third parties access to your data!
Oh. So they're marketing an app under the banner of privacy, but architected to enable data collection from me. I can opt into having slightly less data collectable with additional encryption (of questionable quality) on top of their standard service. In an age where the greatest threats to privacy are corporate surveillance and the NSA's legal authority to collect data from businesses, they protect my privacy on exactly zero fronts.
Their messaging is either oblivious or underhanded: "We built Telegram to make messaging safe again so you can take back your right to privacy."
They hired some smart people who are not cryptographic experts and now their cryptography is broken. Which is pretty much the story for anyone who has ever created a custom cryptographic system (and the story of SSL for the first few years).
It looks like a bunch of convoluted code that doesn't actually accomplish anything.
sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);
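For concreteness, that pseudocode translates to roughly the following Python (my own rendering with dummy key material; x is the direction-dependent offset from the MTProto docs, and substr(s, off, len) is s[off:off+len]):

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def derive_aes_key_iv(auth_key: bytes, msg_key: bytes, x: int):
    # Direct rendering of the pseudocode quoted above
    a = sha1(msg_key + auth_key[x:x+32])
    b = sha1(auth_key[32+x:48+x] + msg_key + auth_key[48+x:64+x])
    c = sha1(auth_key[64+x:96+x] + msg_key)
    d = sha1(msg_key + auth_key[96+x:128+x])
    aes_key = a[0:8] + b[8:20] + c[4:16]                     # 8+12+12 = 32 bytes
    aes_iv  = a[8:20] + b[0:8] + c[16:20] + d[0:8]           # 12+8+4+8 = 32 bytes
    return aes_key, aes_iv

auth_key = bytes(range(256))        # dummy 2048-bit auth key
msg_key = sha1(b"message")[4:20]    # dummy 128-bit msg_key
key, iv = derive_aes_key_iv(auth_key, msg_key, 0)
assert len(key) == 32 and len(iv) == 32
```

Note that both the key and the IV are deterministic functions of msg_key and long-term auth_key material; there's no per-message randomness anywhere in this derivation.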
sha1_a = SHA1 (msg_key + auth_key);
sha1_b = SHA1 (sha1_a);
sha1_c = SHA1 (sha1_b);
sha1_d = SHA1 (sha1_c);
aes_key = SHA1(sha1_a+sha1_b+sha1_c);
aes_iv = SHA1(aes_key+sha1_d);
aes_key = RANDOM
aes_iv = RANDOM
To me it looks like repeatedly sending the same message, or messages whose hashes vary by a few bits, would leak part of the auth key. Basically, the person probably doesn't know what they're doing and is just adding extra 'stuff' to assure themselves it's secure.
The threat model is the antidote to this tomfoolery. Before you even start to touch a primitive, sit down and think very hard about who you're trying to be secure against and what their available options for attacking you are. Then write this analysis down and make it a living document, referring to it constantly to make sure that what you're writing is actually protecting you from something and what and how it's doing that.
I'd go so far as to say that without such a document, a system cannot possibly be secure because it doesn't know what it's doing. What's more, the poor team in charge of maintaining and updating it is going to be forever behind the curve, reduced to putting out fires whenever they pop up, be it from external actors actually breaking their code or from their own ignorance introducing vulnerabilities that weren't there before because their updates didn't take into account the non-existent threat model.
" We grew tired of “experts” too lazy to read the full documentation. "
That was their response when asked about a diagram. With that attitude, it seems incredibly unlikely they'll ever have a secure product.
They could have made something like: the client generates a key pair, encrypts the public key with the server's public key, sends it to the server with a nonce, and the server sends back the nonce encrypted with the client's public key. Simple and easy.
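A toy sketch of that handshake, using textbook RSA with tiny hardcoded primes purely to show the message flow (no padding, parameters nowhere near real-world size; do not use this as actual crypto):

```python
import secrets

# Textbook RSA, illustration only: real code needs large random primes
# and OAEP padding.
def make_key(p, q, e=65537):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))   # (public, private)

server_pub, server_prv = make_key(1000003, 1000033)
client_pub, client_prv = make_key(1009, 1013)

enc = lambda pub, m: pow(m, pub[1], pub[0])
dec = lambda prv, c: pow(c, prv[1], prv[0])

# 1. Client encrypts its public modulus under the server's key and
#    attaches a fresh nonce (the exponent e is fixed by convention here).
nonce = secrets.randbelow(client_pub[0])
to_server = (enc(server_pub, client_pub[0]), nonce)

# 2. Server recovers the client's key and echoes the nonce back,
#    encrypted so only the client can read it.
n_client = dec(server_prv, to_server[0])
reply = pow(to_server[1], 65537, n_client)

# 3. Client checks the echoed nonce: a matching echo shows the server
#    could decrypt step 1, i.e. that it holds its private key.
assert dec(client_prv, reply) == nonce
```

Whether this exact flow is sufficient depends on the threat model (see the caveat about verifying the server key in the next comment), but the point stands that the building blocks are simple.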
You need cert pinning or a foolproof trust system to verify the server key.
Still, nowhere near as bad as Whatsapp I'm assuming?
Whatsapp is a joke, but lower friction matters much more for their market.
Then again, I'm a happy user under the assumption that everybody everywhere could read my messages if they actually made an effort.
I just don't send anything over whatsapp that I really care about - if the world discovers that I regularly tell my girlfriends I love them, that really doesn't bother me.
TextSecure is where it's at. Hopefully the iOS release comes out soon.