Telegram, a.k.a. “Stand back, we have Math PhDs” (unhandledexpression.com)
93 points by nwh on Dec 17, 2013 | 53 comments

Why so smug? I've seen this attitude in all types of IT heads (sys admins easily being the worst of the bunch) over the years. What is it about techies that they come over so smug when pointing out deficiencies in others? I still have the shame and anger provoking memory of two techies sniggering not-so-openly at me because I didn't know about name service switch in UNIX because guess what, I don't know everything. What happened to good grace and manners or is there no space left in your brain after it is filled up with all the l33t tech guru knowledge?

You may be right, but it galls me that you are.

Rant over. I wish Telegram all the success in the world. I hope they bulletproof their crypto and add another slick option to the growing list out there to stick it to the man.

Must have gotten out of the wrong side of the bed this morning.

Because Telegram is acting carelessly, and ignoring actual criticism, while touting claims that do not appear to be true? It's close to lying and scamming users.

If Telegram was a messaging proposal, I'm sure the attitude would be a more straightforward one (with the same end result of "you're not ready to handle this yet"). Instead, it's billed as a definitely secure system.

Seriously, read the previous thread: https://news.ycombinator.com/item?id=6913456

Look at the threads regarding CryptoCat (which was quite broken, despite the author fighting folks off and people cheering him on), or that guy who insisted he had secure JavaScript crypto that ultimately turned out to be no more secure than the server delivering it.

So which efforts are leading the pack at this stage that I could recommend to my entire family who practically daily urge me to join them on Viber? Years ago I would have said Skype. Now I just don't know. I'm saddened because I thought Telegram looked pretty sweet and a serious new contender. Practically calling them liars after that much hard work is unfair.

How is it unfair? It's essentially factual. They make tons of strange decisions, using things known to be broken, then hand wave over it with "you don't understand" and "ACM champions".

Things aren't fair. Hard work does not automatically grant your result any status.

You know what's also unfair? Pitching a product that's supposed to do something but doesn't.

If none of the available options are acceptably secure, it doesn't do anyone any favors to point to the one that's least broken and tell them to use that. That just leads to people putting sensitive information into a tool that can't protect it. It's better for everyone to just say that there is no secure choice currently and leave it at that.

The last thread on this alerted me to https://www.whispersystems.org/ (also on github https://github.com/WhisperSystems/)

Maybe you can try that with your folks on android?

TextSecure seems to be pretty popular.

> Why so smug?

Yeah, the alpha geek passive-aggression is a tiresome aspect of our industry. Usually that smug self-assuredness is worn down by harsh experience, but heaven help you if you meet a 40 year old alpha geek who has not yet learned that they too can be wrong. (I work with one, and he makes some shocking mistakes but cannot admit it, so when he launches at you with a holier-than-thou attitude, scorn and contempt are all that arise).

> What happened to good grace and manners

Exactly. When you're totally right and someone's totally wrong, if you want to effect actual change (assuming that doing so requires their coöperation), grace is precisely what's required. You can have all the technical correctness in the world behind you, but the moment you put someone's back up with your poor attitude, the chance of them rationally engaging with your argument drops dramatically.

I've had this argument before with some people on Reddit, and I received the retort "But who cares if I was rude, _they_ were wrong, I was telling them how to fix it!" ad nauseam from one particular person who was astounded that no-one took his advice.

The issue is that they claim their system is secure. People's lives can literally depend on encryption actually doing what it says on the tin; this software does not. It's the epitome of home-grown encryption.

I hear you. Agreed. Surely every decent crypto system started off home-grown?

There's a line with this sort of stuff, and the crossing of it is what I have an issue with.

If they presented it as an attempt and asked for critique, then this would have been a fine thing to publish. They would have got chewed out by a professional, fixed it or removed it, and everyone would go away learning something new.

The issue comes when they don't accept critique, and make claims that don't stack up in reality. Doing that is how people get hurt.

Exactly. Being wrong is fine. Being stubbornly wrong is not.

Sure, in the old days someone had to be the first to invent stuff. But today when we already have well-reviewed protocols you shouldn't deploy something until after it has survived review.

Usually I'd agree with you, but when it comes to offering a service that provides security, it is different.

Why? Because with any other service, the people using it are going to be able to judge whether it is effective. If Word crashes when I open it, or is buggy, I'll know that and I can choose to use something else.

But with a product that is offering security, there is no way for an "ordinary" end user to know whether it really works or not. As is shown time and time again, providing good security is freakin hard. So you'd better be on your game if you're claiming to provide that.

And honestly, if someone is donating their time to check the security of other people's applications, and all they ask in return is to be a little smug when they point out problems, I think that's a fair trade-off.

I initially had the same reaction as you, but reading the discussion between the author and Telegram in the comments (both on the linked article and the original HN thread), it seems to me this is largely deserved given the way Telegram has been responding. They've essentially refused to justify any of their (curious) choices, relying on arguments from authority instead:

> We grew tired of “experts” too lazy to read the full documentation.

This has nothing to do with smugness. If you market your product as a secure alternative, it better be secure.

I understand the importance of security. I do hope Telegram can get it right, and I acknowledge it is probably not as secure as they claim right now. Still, the article author's attitude rubs me up the wrong way, perhaps because of the mood I'm in. The smugness (and I hope I'm not imagining it) is definitely a side issue, but one I felt I had to get off my chest. I don't want to make a big issue of it and detract too much from the critique, but I felt I had to say it, even though HN may not be the bestest place in the world to voice such things. I hear you though.

Given their attitude, it's unlikely they'll get it right. They'll keep telling people "that's not a real attack" and making up excuses. Then someone will publish a real attack, and they'll patch over it. Repeat.

The "smugness" is pointing out problems in their implementation, despite them going on about having "ACM champions" that took two years to design it, so it obviously must be perfect.

If someone pitched a database that simply mmap'd a file and then called it "fully transactional and safe", they'd get a lot of strong criticism.

I am with you. ACM championships mean shit to me, and I have enough publications in crypto and secure communication to claim that I am an "ACM champion" myself. BTW, I have a PhD actually related to security and crypto. But NONE of my credentials is a valid argument for how secure my protocols are.

This kind of attitude (trust us, we have enough credentials to show) seems to be a universal problem with people in academia. Come on, don't do this. We all get the sense that you have to bluff to get your paper published, but this is not going to work for a product to be accepted.

In general, it's largely due to insecurity. I'm not saying that this is the case here, as I didn't pick up on it, but your comment definitely resonated with me, and this is something that bugs me more than I'd like on an ongoing basis. Over the last few years, I've been trying to become better at saying "I don't know", which can sometimes be really difficult in some tech circles.

I have never used Node.js

There, I said it.

Heyyy, it was a joke! Cut me some slack. I did have to install _Node_ to get at _npm_ to get at _bower_ to install _bootstrap_ in a _Rails_ app so can we say I've used it by proxy? :)

Did not pick this up as being smug, did pick it up as some guys stepping into a complicated field, and messing it up royally.

I wrote that article, and I think you are right. That was not the right tone for a review of a new product, even if Telegram's attitude (in particular here https://news.ycombinator.com/item?id=6913456 ) has been of denial and arrogance.

I usually have no temper problem, but I lose it every time someone proposes a new "secure" system and ignores the holes people point out, because it means I will need to spend months explaining to people that they should not use it, even if it is "hyped".

So, yes, again, you are right, my apologies.

Hi there geal,

Only noticed your reply now, two days late. That apology is big of you. Keep up the good work, you've got tons of visibility - I'd return to your site again, that's for sure - and I'd listen carefully to what you've got to say.

doffs cap

What is it about techies that they come over so smug when pointing out deficiencies in others?

Probably a response to the bold, did-no-wrong, we-worked-hard-and-that-should-count and generally arrogant attitude of the people they're criticizing.

They did what pretty much everyone tells people not to do.

The fact that they hired experts to make rookie mistakes is just beyond belief.

It's like a pro painter who covers the walls in oil before putting a coat of latex on, then goes and tells the world he's discovered a way to make paint stick indefinitely to walls, then a week later the paint peels right off.

Elitism is found in all specialists, not just IT. I've seen it from doctors, mechanics, finance folks, dentists, police, even unskilled jobs like packaging dispatch staff.

If someone started pitching IV Diet Coke as a cure for malaria, they'd deserve the ridicule and "elitism".

Smugness aside, I am actually concerned about this implementation and Telegram's response:

> The rest looks like matters of taste as opposed to objective reasoning. Can you name an actual attack?

The response to that is, "I shouldn't have to!" Anything that replaces a proven secure component with something that we haven't (yet) found an attack on is grounds for suspicion at the very least.

SHA-1 isn't a MAC. It's not that hard to make it so (HMAC), but Telegram hasn't.
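To illustrate the point (a stdlib Python sketch, not Telegram's actual code; the key and message bytes here are made up): turning SHA-1 into a proper MAC via HMAC is a one-liner, and the naive hash-of-key-and-message construction it replaces is exactly the kind of thing length-extension attacks break.

```python
import hashlib
import hmac

key = b"0123456789abcdef0123456789abcdef"  # hypothetical shared key
msg = b"attack at dawn"

# Plain SHA-1 over key || message is NOT a MAC: Merkle-Damgard hashes
# like SHA-1 are vulnerable to length-extension attacks here.
naive_tag = hashlib.sha1(key + msg).hexdigest()

# HMAC-SHA1 (RFC 2104): the standard way to build a MAC from SHA-1.
hmac_tag = hmac.new(key, msg, hashlib.sha1).hexdigest()

# Verification should always use a constant-time comparison.
expected = hmac.new(key, msg, hashlib.sha1).hexdigest()
assert hmac.compare_digest(hmac_tag, expected)
```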

> Again, we do not use MAC-then-encrypt. Our scheme is closer to MAC-and-encrypt with some essential modifications.

Out of the three options (MAC-then-encrypt, encrypt-then-MAC, and MAC-and-encrypt), only encrypt-then-MAC is secure (http://cseweb.ucsd.edu/~mihir/papers/oem.pdf). I don't care if they've made "essential modifications"; they're replacing a component that is provably secure with one that may or may not be.
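For the curious, here is a toy sketch of the encrypt-then-MAC ordering in stdlib Python. The `toy_encrypt` stream cipher (SHA-256 in counter mode) is a hypothetical stand-in for a real cipher like AES-CTR, so this is illustrative only; the important part is that the MAC covers the ciphertext and is checked *before* decryption.

```python
import hashlib
import hmac
import os

def toy_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 keystream in counter mode), standing in
    for a real cipher such as AES-CTR. Do not use this in practice."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: the tag covers the ciphertext, so a forged
    # message is rejected before any decryption code runs.
    nonce = os.urandom(16)
    ct = nonce + toy_encrypt(enc_key, nonce, plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")  # reject before touching the cipher
    nonce, body = ct[:16], ct[16:]
    return toy_encrypt(enc_key, nonce, body)  # XOR stream: decrypt == encrypt
```

Flipping a single bit anywhere in the sealed blob makes `open_sealed` fail closed, which is exactly the safety margin the other two orderings don't guarantee.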

Agreed. To quote Colin Percival's excellent article which addresses both topics:

"Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well."


What's the business model? The app is free, I'm not going to install it but they claim to be ad free too.

Are they a charity designed to squirt cyphertext around the globe? Monitoring my behavior to monetize it? freemium?

This bit clears it up: PRIVACY: We take your privacy very seriously and will never give third parties access to your data!

Oh. So they're marketing an app under the banner of privacy, but architected to enable data collection from me. I can opt into having slightly less data collectable with additional encryption (of questionable quality) on top of their standard service. In an age where the greatest threats to privacy are corporate surveillance and the NSA's legal authority to collect data from businesses, they protect my privacy on exactly zero fronts.

Their messaging is either oblivious or underhanded: "We built Telegram to make messaging safe again so you can take back your right to privacy."

Typical NIH syndrome.

They hired some smart people who are not cryptographic experts and now their cryptography is broken. Which is pretty much the story for anyone who has ever created a custom cryptographic system (and the story of SSL for the first few years).

Read the comments under TFA. Nothing was broken. The author was quick to write the article without understanding the protocol first.

The analysis looks sound to me. Using message content to derive the key is generally bad form, especially when messages are known to attackers and sent repeatedly (e.g. "Hi").
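A quick demonstration of why that matters (a deliberately simplified stand-in for the real scheme, stdlib Python; `msg_key` and the derivation below are hypothetical approximations, not MTProto's exact formulas): if the per-message key is derived from a hash of the plaintext, identical plaintexts produce identical keys and IVs, so an eavesdropper can tell when the same message is re-sent without breaking AES at all.

```python
import hashlib

auth_key = b"A" * 256  # hypothetical long-term key shared with the server

def msg_key(plaintext: bytes) -> bytes:
    # Simplified stand-in for a message key computed as a hash of the message.
    return hashlib.sha1(plaintext).digest()[:16]

def derived_aes_key(plaintext: bytes) -> bytes:
    # Key material derived from msg_key + auth_key, in the spirit of the
    # quoted scheme: fully determined by the plaintext and long-term key.
    return hashlib.sha1(msg_key(plaintext) + auth_key[:32]).digest()

# Sending "Hi" twice yields identical key material (and hence identical
# ciphertexts), which is observable on the wire.
assert derived_aes_key(b"Hi") == derived_aes_key(b"Hi")
assert derived_aes_key(b"Hi") != derived_aes_key(b"Hi!")
```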

It looks like a bunch of convoluted code that doesn't actually accomplish anything.

Why is

  sha1_a = SHA1 (msg_key + substr (auth_key, x, 32));
  sha1_b = SHA1 (substr (auth_key, 32+x, 16) + msg_key + substr (auth_key, 48+x, 16));
  sha1_c = SHA1 (substr (auth_key, 64+x, 32) + msg_key);
  sha1_d = SHA1 (msg_key + substr (auth_key, 96+x, 32));
  aes_key = substr (sha1_a, 0, 8) + substr (sha1_b, 8, 12) + substr (sha1_c, 4, 12);
  aes_iv = substr (sha1_a, 8, 12) + substr (sha1_b, 0, 8) + substr (sha1_c, 16, 4) + substr (sha1_d, 0, 8);
better than

  sha1_a = SHA1 (msg_key + auth_key);
  sha1_b = SHA1 (sha1_a);
  sha1_c = SHA1 (sha1_b);
  sha1_d = SHA1 (sha1_c);
  aes_key = SHA1 (sha1_a + sha1_b + sha1_c);
  aes_iv = SHA1 (aes_key + sha1_d);
and more importantly why is it better than

  aes_key = RANDOM
  aes_iv = RANDOM
Are there special properties of the random bits of parts of various hashes that makes it more 'random'?

To me it looks like repeatedly sending the same message, or a message whose hash varies by a few bits, would leak part of the auth key. Basically, the person probably doesn't know what they are doing and is just adding extra 'stuff' to assure themselves it's secure.
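The "RANDOM" alternative above really is this simple with a CSPRNG (stdlib Python sketch; the key sizes are the usual AES-256/CBC choices, assumed here for illustration):

```python
import os

# Fresh key and IV per message, straight from the OS CSPRNG: no structure
# for an attacker to exploit, and no dependence on message content.
aes_key = os.urandom(32)  # 256-bit AES key
aes_iv = os.urandom(16)   # 128-bit IV

# The random key must then itself be protected in transit, e.g. encrypted
# under a long-term key or agreed via Diffie-Hellman -- but that is a
# well-understood problem with well-reviewed solutions.
```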

The comments under TFA don't actually say anything; they're full of "you are wrong, because you don't understand it" without debating the technical details.

That they have not made available a threat model is the kicker for me. Every time people start talking about security, whether it's their email or their house, inevitably they start going nuts with all the possibilities. They then leave real security against real threats at the door in order to deal with imagined ones that require far greater resources to secure against. I'd bet that many fortunes were made by security 'professionals' catering to these whims.

The threat model is the antidote to this tomfoolery. Before you even start to touch a primitive, sit down and think very hard about who you're trying to be secure against and what their available options for attacking you are. Then write this analysis down and make it a living document, referring to it constantly to make sure that what you're writing is actually protecting you from something and what and how it's doing that.

I'd go so far as to say that without such a document, a system cannot possibly be secure because it doesn't know what it's doing. What's more, the poor team in charge of maintaining and updating it is going to be forever behind the curve, reduced to putting out fires whenever they pop up, be it from external actors actually breaking their code or from their own ignorance introducing vulnerabilities that weren't there before because their updates didn't take into account the non-existent threat model.

There is also a dialogue between author and Telegram happening in the comments as of right now.

Yes, with gems like this one:

" We grew tired of “experts” too lazy to read the full documentation. "

When asked about a diagram. With their attitude, it seems incredibly unlikely they'll ever have a secure product.

The author isn't exactly friendly either...

And after reading that conversation, I can't help feeling sympathy for Telegram.

The Math PhD who designed the protocol should write it up as a paper and submit it to top conferences like CCS and S&P. Then he will see how badly the reviewers dump on his protocol.

  They could have made something like: the client generates a key pair,
  encrypts the public key with the server’s public key, sends it to the server
  with a nonce, and the server sends back the nonce encrypted with the
  client’s public key. Simple and easy.
Just out of curiosity, is it really that easy if you're using verified components? I would naively assume (and hope) so, but then everyone tells me that crypto is fraught with subtle ways to shoot your users in the feet, including if you misuse verified primitives. If yes, I'll continue making pie-in-the-sky plans using the magic of public-key cryptography. :)

Not really, that is also vulnerable to man in the middle attacks.

You need cert pinning or a foolproof trust system to verify the server key.

Not if the server key is just hardcoded into the app, which can be appropriate in some cases; of course, that raises the question of how to secure the app download, but that's usually done over a separate protocol anyway.

In fact, the app ships with a bunch of RSA 2048 keys that are used for that part. But there is no revocation support.

Yeah that's generally the downside of cert pinning / hardcoding. The upside is people can't install extra certs.
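A minimal sketch of what pinning amounts to, assuming the app ships with a SHA-256 fingerprint of the server's public key (all names and byte strings below are hypothetical; stdlib Python):

```python
import hashlib
import hmac

# Hypothetical pin baked into the app at build time: the SHA-256
# fingerprint of the server's DER-encoded public key.
PINNED_FINGERPRINT = hashlib.sha256(b"server-public-key-der-bytes").hexdigest()

def key_matches_pin(server_key_der: bytes) -> bool:
    """Accept the server only if its public key hashes to the pinned value."""
    fingerprint = hashlib.sha256(server_key_der).hexdigest()
    return hmac.compare_digest(fingerprint, PINNED_FINGERPRINT)

# A MITM presenting a different key is rejected, even if it carries a
# certificate the OS trust store would otherwise accept. The flip side
# is exactly the revocation problem: rotating the pinned key requires
# shipping a new build of the app.
assert key_matches_pin(b"server-public-key-der-bytes")
assert not key_matches_pin(b"attacker-key")
```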

Well that escalated quickly.

Still, nowhere near as bad as Whatsapp I'm assuming?

Just use RedPhone. Moxie knows what he's doing and it's auditable + good technology.

Whatsapp is a joke, but lower friction matters much more for their market.

I'm a happy user of whatsapp.

Then again, I'm a happy user under the assumption that everybody everywhere could read my messages if they actually made an effort.

I just don't send anything over whatsapp that I really care about - if the world discovers that I regularly tell my girlfriends I love them, that really doesn't bother me.

Wickr seems like a good goto, though I haven't had the need for it yet, so I haven't done all the research I could.

Wickr is closed source.

TextSecure is where it's at. Hopefully the iOS release comes out soon.

