Telegram founder: US intelligence tried to bribe us to weaken encryption (fastcompany.com)
654 points by anjalik on June 15, 2017 | 206 comments



>"It would be naive to think you can run an independent/secure cryptoapp based in the US."

This seems to be a shot at WhatsApp and Signal, implying that they have loopholes that allow the FBI to snoop. I'm not sure how true that is. It might be an attempt to deflect from the fact that Telegram uses a home-baked encryption protocol which might be insecure, while WhatsApp uses the OWS protocol.


Pavel Durov wants everyone to think security is about trust in people. Most companies in that business do the same, because it's easier than building something that doesn't require trust in people. The way Pavel Durov and others like him present "trust" is (ironically) shady corporate structures[1], shell companies, or use of the word "Switzerland."

They want people to think like that because they've built businesses that require it. Telegram stores the messages you send/receive unencrypted on their servers. That's not good, unless you've been trained to think that "privacy" is just about choosing the company, government, or legal jurisdiction that gets total access to your data.

Security professionals know that's not how we should think about security (never trust people!), because Durov is leaving a lot out: there aren't safe jurisdictions, servers get hacked, and centralized databases will get compromised. Logic, though, is probably no match for conspiracy theories.

1: https://www.washingtonpost.com/news/the-intersect/wp/2015/11...


I read the WP article you cited, titled, "The secret American origins of Telegram, the encrypted messaging app favored by the Islamic State".

If Telegram isn't that secure, then why are extremists like IS using it over Signal or WhatsApp? I know Telegram has better features for big groups and much better multi-platform support, so is that the reason? I'm legitimately asking without any snark.


You highly overestimate the people who are in positions of power.

I am talking about politicians, heads of terrorist groups, etc.

While saying that top politicians are complete idiots is probably wrong - then again maybe not, I am not sure anymore - the fact that H. Clinton ran the most expensive campaign in history, with backing from all major tech corps, and that her staff didn't bother to use encryption at any scale, let alone run a mail server properly (God knows what kind of software the server was running, whether it was OpenBSD or Windows Server 08), says a lot to me about how flawed these people's and their consultants' understanding of today's world is.

Watching "House of Cards" everybody seems incredibly smart, driven, etc. but the politicians I see in real life on average and on the not-very-smart side and the few I've met in person are clueless beyond salvation.

Ps. Sorry for possible mistakes, I'm reading from mobile.


Over 99% of people have no clue how computers work, in the most basic sense of the word "understanding", and most are proud of their ignorance.


I've always heard that while House of Cards might be more "serious", Veep is actually much closer to real life. Veep has politicians making stupid mistakes on Twitter, life-and-death decisions made on a whim, and just absurd amounts of pettiness.


Haha, good points. Thankfully savvier viewers and/or just people in general know House of Cards is all show and not realistic, even though it acts like it is with its seriousness.


I think "Borgen" is far more realistic when it comes to politics, the "shadow play" and intricacies


Marketing.

Sadly, one of the conclusions you can draw from Telegram is that a security business is almost better off with no security.

Pavel Durov says that Telegram is "heavily" encrypted over and over again; journalists who don't know better take that at face value and encode it into every headline ("Telegram, the encrypted messaging app, ..."), and meanwhile, since there is no encryption, there is no risk of anyone finding or publishing vulnerabilities in it.

Since cryptography already seems unreal to most people (and is always just a matter of getting the smart IT guy to 'crack' it on television), everyone's already predisposed to thinking that security = Switzerland. Unless there's a major change, the charlatans will win every time.


Because they have fallen for the marketing hype and are not experts in secure messaging systems? We know some extremists still use SMS, which is definitely not secure.

Also, is there evidence that extremists aren't using Signal and WhatsApp?


Them falling for the marketing makes sense.


Telegram offers at least some degree of anonymity: you can join a group chat without exposing your phone number. And even if the Telegram administration closes the group, they are unlikely to report all members to the police.

WhatsApp and Signal offer non-anonymous groups only. They are probably used by people who already know each other.


> Telegram stores the messages you send/receive unencrypted on their servers.

Anyone who has used Telegram for more than 5 minutes knows there are secret chats. The effort being made to rig the information against Telegram also says a lot about its relevance.


The secret chats are almost useless. Even setting aside the impossibility of syncing between devices, the support is severely lacking: Telegram Web? No[1]. Telegram Desktop? Nope[2]. telegram-cli? Yes, but the client is completely broken and abandoned[3].

The main Telegram Desktop developer said[4]: "Well, I'm afraid there isn't. They are just not in high priority, many other important things first."

It seems stickers, funny bots and GIFs take priority over security. I think this says a lot.

[1]: https://github.com/telegramdesktop/tdesktop/issues/871

[2]: https://github.com/zhukov/webogram/issues/126

[3]: https://github.com/vysheng/tg/issues/1189

[4]: https://www.reddit.com/r/Telegram/comments/2z24ly/how_to_sta...


Oddly enough, they did implement encrypted calls in Telegram Desktop, and relatively quickly after the release of the feature on mobile.


I think part of the issue in examples like this one is that journalists read (legitimate) criticism about e.g. Telegram and simply don't report all the nuance (whether due to an actual agenda or honest misunderstanding).

Secret chats are not enabled by default, and most users don't enable them; so yes, messages are generally stored without end-to-end encryption on the Telegram servers. That's a more neutral way of presenting the information.


My understanding was that non-encrypted messages were split across multiple countries/jurisdictions, making it impractical (not impossible) to snoop.


You mean you only have to be able to hack their servers in one of their multiple sites to look at any user? Cool, so all the snoops can look at every citizen's data.


Secret chats that aren't even supported by the official desktop client? Telegram has some things going for it, but privacy and security are not among them.


> Pavel Durov wants everyone to think security is about trust in people

I think his point is the reverse: people trust Signal because of Moxie & Trevp.

*puts tinfoil hat on* It's possible that Trevp is not "that" involved with Signal because he doesn't want to be involved with the government and backdoors.


To say that Signal is secure because of the people, or to say that WhatsApp isn't secure because of Facebook only feeds the logic of Pavel Durov and all the charlatans of the world.

The point of all of this should be that it's not about people, places, or jurisdictions. If you make it about that, the charlatans who want you to think that Switzerland = secure will win every time.


> Telegram stores the messages you send/receive unencrypted on their servers.

They don't. From the Telegram FAQ:

"Cloud chats are stored encrypted in the Telegram Cloud, and the keys needed to decipher this data are kept in other data centers in different jurisdictions. This way, local intruders or engineers can't access this data, and several court orders from different jurisdictions are required to force us to give up anything. Thanks to this structure, we can ensure that no single government or block of like-minded countries can intrude on people's privacy and freedom of expression."[1]

1: http://telegra.ph/Telegram-Security-FAQ


Please don't spread disinformation.


They have the keys, they have the data - end of story (and of security).


Bingo. The CEO of Telegram likes to spread conspiracy theories that are good for business. In a recent tweet he claimed that Signal is "funded by the US government", [1] citing a ridiculous hit piece on OWS and Moxie. [2]

[1] https://twitter.com/durov/status/872891017418113024

[2] https://surveillancevalley.com/blog/government-backed-privac...


Even wikipedia say it's founded by the governement ...

As of October 2016, the project has received an unknown amount of donations from individual sponsors via the Freedom of the Press Foundation.[100] Open Whisper Systems has received grants from the Knight Foundation,[101] the Shuttleworth Foundation,[102] and the Open Technology Fund,[103] a U.S. government funded program that has also supported other privacy projects like the anonymity software Tor and the encrypted instant messaging app Cryptocat.

https://en.wikipedia.org/wiki/Signal_(software)


The tweet he was replying to said: "I've heard various reports from people I trust that Signal is compromised. Any comments/thoughts/reliable rumors?"

With his reply, Durov is implicitly claiming that Signal must be compromised because it received some funding that can be traced back to the US government. This is the same kind of FUD that has often been used to smear Tor in the past (and indeed, the piece he cites describes Tor as "a federal weapons contractor" and "a foreign policy weapon, a soft power cyber weapon").


That fits the profile of a conspiracy theory.


Assuming without thinking that "conspiracy theory" equals "false" -> you watch too much TV.


Funded by the government != The government has their hands all over it


Exactly, the government is extremely large and not all of it is the NSA. Sometimes the left hand doesn't know what all the thousands of right hands are doing, and can't possibly care.


To some extent that's even true within the NSA.


The internet is a government conspiracy!


The government is a government conspiracy!

...wait.


There's a crazy logic to this that I appreciate.


Even TOR was originally a government funded project


Even "the Internet" was originally a government funded project


and yeah, currently the Internet is a spying tool of the USA.


It's a spying tool for American corporations far more than it is one for the government.


It's a spying tool for American corporations which is very convenient for the government


And Tor is no longer a secure tool (was it before? I don't know).


It still gets a large portion of its funding from the USG


And there are lots of Tor posters hung up at USG labs (e.g., https://www.torservers.net/wiki/_media/tor-relay-poster.resi...). I was pretty surprised, at first.


Tor has probably saved more than a few CIA agents' lives, not to mention free access to the Internet being a powerful tool for undermining authoritarian regimes.


> Tor has probably saved more than a few CIA agents' lives

Tor's main purpose is to allow agents of US intelligence agencies to communicate back with the home base.

"Giving free access to the Internet being a powerful tool for undermining authoritarian regimes" makes it harder for an adversary to focus solely on intelligence agency agent traffic.

I'm not saying Tor is backdoored or monitored, though it's heavily monitored.

When enjoying the benefits of Tor, it's wise to also remember Tor's main purpose.


Yes, OWS received funding from the US government via the Open Technology Fund, which is organized under Radio Free Asia.

I assume that "founded" is a typo, but to be clear, there is no evidence that the US government either encouraged the creation of OWS or contributed any actual work/code to OWS.

The conspiracy-theory part here is that taking funding from the US government (any part of the US government, whether the NSA or Radio Free Asia) means that your protocols or your implementations have backdoors, even if those protocols and the source code implementing them are public and well-reviewed.


Supporting Cryptocat only makes sense if they don't do due diligence or if they're trying to subvert security.


I worked on the OTF-funded security audit of Cryptocat. (The report: https://leastauthority.com/static/publications/LeastAuthorit...) While I wasn't involved in arranging the work, my vague memory is that a publicly disclosed audit was a condition of the funding -- I think the OTF had that policy in general. (It's been a few years, and I haven't kept up.)

Even the author of CryptoCat now thinks you should not use CryptoCat, but I would not rush to assume ill of the OTF funding it back then.


> Even wikipedia say it's founded by the governement ...

I bet you meant to type "funded".

Most of the funding for the foundation that funds Wikipedia comes from small donations from users.


For all we know, Signal itself could be one big CIA operation. It wouldn't be beyond their capabilities. But at least the protocol and encryption are open source.


Good for business, but what business? As far as I know he is running Telegram with his own money, and he has no business plan and doesn't want one.

Only the Telegram server is closed source (for now); the Telegram client and protocol are open source.


How about a business of selling troves of data to the Russian government, for example? The only thing that makes such a "business" seem unlikely is trust in Durov, which can be unfounded given claims of his like [1]

> No, because I never took money from the government. I left Russia and lost a $3bn business there because I defended users' privacy from it.

while he is often spotted in Saint-Petersburg, Russia. Moreover, Telegram's developers themselves sit next door to Vkontakte's (the company that was allegedly taken from him), so he's hardly a dissident.

I believe no Western media have tried to look behind his bravado, and this saddens me. I'll be happy to help with local sources if someone is interested, though.

[1]: https://twitter.com/durov/status/872902721401237506


Could you provide a little context to your comment?

I'm not well read on this (e.g., what company he lost, how he claims he lost it, etc.) -- you seem to be responding in the context of a story, but not a story I know about.


You can find a dry summary here [1]. Basically there is a nice story that Durov constructed over the years: the story of a tough libertarian who stood against the Russian government and was forced out of his business (Vkontakte), having to sell it for a fraction of its true price. He stands by his fellow users, willing to risk his business over their privacy, and now he runs Telegram out of his own pocket because… reasons. He is also in (implied) exile from Russia; he bought citizenship in some island nation and is now a digital nomad.

I believe that's the gist of his "official" image. However, there are cracks in that story to which I alluded in the comment.

[1]: https://en.wikipedia.org/wiki/VK_(social_networking)#2013.E2...


If I'm not mistaken, Telegram has already been proven insecure once in the past. WhatsApp is a zuck property now, you can consider it insecure.


When has Telegram been proven insecure? In every thread about Telegram people keep saying it is not secure, but I have failed to find info about how insecure their secret chats are. Maybe you can help me?


Their homebrew crypto seems very shaky:

https://cs.au.dk/~jakjak/master-thesis.pdf

https://eprint.iacr.org/2015/1177.pdf

tl;dr: they tried to use SHA-1 as a MAC. This is something of a crypto 101 mistake. Had they even used HMAC they'd be in much better shape. Worse, even after this was pointed out to them and people started writing papers about potential attacks, they have stood by their shaky design, refusing to update it or even admit mistakes were made in the initial design.
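To make the class of mistake concrete, here's a minimal sketch (my own illustration, not MTProto's actual construction): a bare hash like sha1(key || msg) is length-extendable, which is exactly the gap HMAC was designed to close. Both primitives ship in Python's standard library:

    import hashlib
    import hmac

    key = b"shared-secret"
    msg = b"amount=100&to=alice"

    # Naive "MAC": sha1(key || msg). An attacker who sees this tag can
    # extend msg via length extension without ever knowing the key.
    naive_tag = hashlib.sha1(key + msg).hexdigest()

    # HMAC-SHA1: the nested, keyed construction defeats length extension.
    tag = hmac.new(key, msg, hashlib.sha1).hexdigest()

    # Always compare MACs in constant time to avoid timing side channels.
    assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())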

But even worse than that, end-to-end encryption is off-by-default, and users must opt into it. Why?

https://telegram.org/faq#q-why-not-just-make-all-chats-secre...

"This allows Telegram to be widely adopted in broad circles, not just by activists and dissidents, so that the simple fact of using Telegram does not mark users as targets for heightened surveillance in certain countries. We are convinced that the separation of conversations into Cloud and Secret chats represents the most secure solution currently possible for a massively popular messaging application."

Putting aside the fact that, if Telegram's cryptography were properly implemented, an outside observer shouldn't be able to tell whether or not end-to-end encryption is being used (i.e. Telegram does not provide proper separation of data-at-rest versus data-in-motion), this is essentially the "I don't need encryption because I have nothing to hide" argument, perpetrated by what's allegedly supposed to be a secure messenger.

The real reason why Telegram doesn't enable end-to-end encryption is pretty clear: they don't have the features to provide a good end-to-end encryption experience. They don't support end-to-end encrypted group chats, and they don't have encrypted backups like Signal and WhatsApp.


> But even worse than that, end-to-end encryption is off-by-default, and users must opt into it. Why?

Because it is unusable. It has no synchronization between devices, only 1 device to 1 device. And if you accidentally close the chat, you have to verify it again. You can't store a trusted key fingerprint.

It is not suitable for mobile devices. Secret chats are like OTR, but with bad crypto. Signal and WhatsApp use the same protocol that OMEMO is based on, originally designed for Signal.

> so that the simple fact of using Telegram does not mark users as targets for heightened surveillance in certain countries

And yet Telegram is associated with terrorism more than, e.g., WhatsApp.


I think he's spreading a lie, based on what I've seen at least.

The worst you can say against it, I believe, is that it's a protocol not yet proven secure. I'm not a security guy, mind you; I'm just parroting what I keep seeing. Rightfully so: the lack of a deep audit is a very valid reason to worry. Yet "worrying and untrusted" is vastly different from "actively exploitable / broken".


Encryption should probably be considered broken until demonstrated otherwise.


By that logic all encryption is broken.


Many prominent and respected HNers have laid out the case for why Telegram has security issues. If you use the search function at the bottom, you will easily find these issues raised. I don't personally possess the knowledge to determine the soundness of Telegram's crypto, but there have been enough red flags raised on HN over the years to merit skepticism.


tbh, people like tptacek and his buddies have a clear agenda and attitude of their own. I'm definitely not sated by their pronouncements alone.


While I absolutely agree with you about the "tptacek & friends" agenda, that really has no bearing whatsoever on the actual problems they've raised with MTProto, their weird KDF, and IGE in general.

Whenever one can consider a person's arguments in isolation from one's opinion of said person, it's wise to do so. This is one of those opportunities.


Can you elaborate on what that agenda is?


Is that how you prove something - by reading comments on HN? Oh, how about this: all these red flags are raised to scare people away from using Telegram, which is really secure?


> Is that how you prove something - by reading comments on HN?

I enjoy the comments of tptacek and ryanlol


I don't think you answered my question.


I don't base my life on them, but I think tptacek and ryanlol know more about computer security than I do.


> WhatsApp is a zuck property now, you can consider it insecure.

Another day on HN...


Why? Do you have any proof WhatsApp is insecure?


I think the question should be do we have proof that it is secure?


Moxie Marlinspike's reputation is dependent on his ability to deliver a secure product. Signal/Signal Protocol development is funded by donations and grants from groups like the Freedom of the Press Foundation, EFF, etc., and those groups desire a secure messaging product. Moxie has staked his reputation on WhatsApp's implementation of the Signal Protocol multiple times on the OWS blog, which he would not do if he disapproved of WhatsApp's implementation. There's no hard proof (except what can be gained through analysis of the executables), but there's no incentive for Moxie to lie; therefore, WhatsApp's implementation should be reasonably secure.


And you call that a proof? :-)


I did say "there's no hard proof". This was purely an analysis of incentives.


I think "speculation" is a better word here. :-)


As long as it's closed source, we have to assume that it's insecure.


Man, I've got some horrible news about your CPU.




That article is false. A large number of people in the security community have spoken out against that article, which had the effect of convincing people in dangerous situations to switch to less secure communication methods.

"Security researchers call for Guardian to retract false WhatsApp backdoor story" [1]

The Guardian claims they have offered to let Zeynep Tufekci write a rebuttal; according to Tufekci, they have repeatedly delayed and are not taking the offer they made seriously.

If you have spread this misinformation in other places, you might want to follow up with the people you misled.

[1] https://techcrunch.com/2017/01/20/security-researchers-call-...


There is absolutely nothing wrong with the article, claiming that it is FUD will not change the fact that WhatsApp can re-send messages encrypted with different keys at will (which makes it ABSOLUTELY USELESS for people who actually care about their privacy). The argument against the article seems to be around "we can trust whatsapp not to abuse their ability", which blows my mind. You should not have to trust anyone with cryptography.

> WhatsApp does not give governments a “backdoor” into its systems and would fight any government request to create a backdoor

The problem is that I have to take their word for that, while they have the ability to activate the backdoor at will.

I linked to HN for a reason, so that people could see what other people think concerning the article. Here is the HN version of that opinion https://news.ycombinator.com/item?id=13394900

The misleading part is that people are told that proprietary and centralised messaging services such as WhatsApp can guarantee security - the truth is that they probably can't.


The Guardian has apologized for the bullshit story that you love, and completely retracted the claim of a "backdoor".

Do you still believe HN users instead of security researchers?

Do you still give security advice based on randos on HN instead of security researchers?


I'm going to go out on a limb and say that Zeynep Tufekci knows more people who need crypto to survive than the average HN user, and understands their trust model better.


You can read about the court filings [1] for WhatsApp metadata (which is unencrypted), but the actual text communication from WhatsApp is secure - or at least not returned by subpoena.

I think Signal doesn't even have the metadata available, but I'm not sure about that.

[1] (Horrible Forbes Link)

https://www.forbes.com/sites/thomasbrewster/2017/01/22/whats...


Someone already tried requesting data about two suspects from Open Whisper Systems. What they got was "one number is not a Signal user, while the second one created his account on X day at Y time."[0]

So yeah, Signal doesn't retain metadata. All they can provide is whether you're using Signal or not and when you've started using it.

[0] https://whispersystems.org/bigbrother/eastern-virginia-grand...


Indeed, looks like a shot at Signal. I was under the impression that Signal is much more secure than WhatsApp, at least in terms of getting away from the prying eyes of Facebook. In reality both of them use the Whisper protocol underneath, AFAICT, for end-to-end encryption.


Even if the protocol is mathematically sound, there is no way to verify that it's implemented faithfully by closed source software such as WhatsApp.

The Telegram protocol is open and has an open source reference client. While not as good as having both the server and the client open source, it at least allows independent verification that the protocol is being implemented faithfully.


How do we know what WhatsApp uses? And do they use it in a secure way?


I agree that it's somewhat of a deflection, too, and I wish the Telegram team weren't so stubborn about at least studying the Signal protocol and perhaps making their own version of it, if they don't trust it by default. I don't know if their "math Ph.D.s" would know how to do that, though.

That said, I think it's pretty clear that WhatsApp does have a way to allow the FBI to snoop on users:

https://www.theguardian.com/technology/2017/jan/13/whatsapp-...

I know it's controversial and many don't believe WhatsApp would actually use it like that, but I don't buy it. Either they already use that "feature" as a lawful intercept, or they're about to do it, at least in the UK, where the Snoopers' Charter passed and the law is vague enough to force them into something like that. It may happen in Australia, too, if the new anti-crypto bill passes.

I still trust Signal for now, but I'm not sure their server-based solution will be backdoor-proof against future bills, and I think they should be prioritizing decentralizing Signal somehow.

EDIT: Sigh, Germany, too:

http://fortune.com/2017/06/14/germany-fingerprint-children-s...


I am not sure about the claim here, but the FBI has always been all over cryptography companies and products, and this was well before Snowden; Phil Zimmermann (PGP) knows about this.

In 2003-2006, we built a service - a financial system to exchange financial data through various means, including AS/2 EDI over HTTP, with big companies and government suppliers such as AAFES (Army and Air Force Exchange). Initially we had RSA, PGP and a custom encryption scheme in there, the latter two for other features besides EDI. We got a letter from the FBI asking us to switch only to RSA; they wanted to know about our use of PGP and wanted to see our custom encryption if we continued to use it. Being a small/medium company, we switched to just RSA to avoid any issues. It was an odd day: when I came into the office they told me I had an FBI letter on my desk, and you can imagine what happens around an office when something like that happens. Very strange day indeed.

Moral of the story: if you create your own crypto, or aren't using the algorithms you are supposed to use, in any capacity, expect some knocking.


Did the FBI give any reason/leverage as to why you should comply with their ask?

If you are writing about it here, I'm assuming it wasn't an NSL (national security letter) and so would you be open to publishing a copy of it publicly? Would be great to get sunlight on that.


It's probably in an old desk somewhere. It wasn't an NSL, but I wish I had held onto it; I would have captured it on my phone if smartphones had been around then. It basically said that if we were going to keep using PGP or custom encryption for the app, they would like to meet with us to discuss it, since we were connecting to government financial endpoints. Then it said that if we used RSA, this discussion would not be needed. It was very strange and eerie all in all, and we did not want to rock the boat.

Since we were a small/medium agency/company we just complied, as we were just helping smaller/medium companies sell stuff like ukuleles and hats to AAFES, so it was not a big issue. The key part of the app was the EDI/AS2 integration, which didn't use the PGP or custom crypto, as it had to use certain algos (RSA/DSA/TripleDES, FIPS-approved, strong hashing) for Drummond Group interoperability certification. The certification of interoperability and crypto communication was required to trade with Wal-mart, govt/AAFES etc.


To be fair, naively and given the use case you described, it sounds to me like in this particular instance they were trying to ensure that the encryption being used was "secure enough" for the govt. rather than "not too secure". It just so happens that the bureaucracy's definition of "secure enough" is a keyword whitelist that has 'RSA' on it and not 'PGP'.


Wickr CEO claims they were also approached to backdoor their product:

http://securitywatch.pcmag.com/security/319544-what-it-s-lik...


Interesting. That strongly implies that RSA has a flaw, which is news to me.


If the USG were aware of a secret flaw in RSA, they wouldn't be tipping their hand about it to random small companies. Come on.

But: avoid RSA anyways. It's inferior to modern curve crypto.


I think the only reason they contacted us was that we were one of the 25-ish companies certified to communicate with government endpoints early on in HTTP-based EDI. It was right at the moment that EDI went from faxing to FTP to HTTP/email with AS2/3/4. Others were Oracle, Microsoft, IBM, EDS, Axway, /n software, etc.

It wasn't until we had to connect to AAFES that it was a problem. We were small but we were sending a good amount of orders and financial data from Wal-mart, AAFES, and other gov't sources that had early EDI over HTTP/S going. They were also recommending us to vendors on their approved EDI software list and probably wanted to recommend ones that played nicely.


This is entirely plausible. If you're doing crypto on behalf of USG data, they have all sorts of dumb CYA rules.


It could be the opposite. Perhaps the FBI wanted to ensure this firm, which was moving government data, had a properly implemented security system.

I'd hazard a guess that 'custom encryption' would be a big red flag if the FBI was doing a security audit of who has access to government data.


PGP uses RSA, so anyone in possession of an RSA exploit could exploit it just the same, which would make this a doubly ridiculous way for them to show their hand. I think it's more likely that the issue was the opposite: the FBI didn't think PGP was secure enough or certified correctly for this work, or they believed there might be legal issues around PGP.


Weren't there reports that the NSA paid RSA to make flawed crypto the default?

https://arstechnica.com/security/2013/12/report-nsa-paid-rsa...


Or that the FBI was confident enough in their ability to steal private keys, or that they believed the keys were low-enough strength that they could crack the keys and decrypt the intercepted data at some point in the future.


It doesn't. Maybe they were looking out for customers? Custom encryption is usually frowned upon because it's not tested.


We used Bouncy Castle and the Microsoft CryptoAPI for our RSA usage; who knows if RSA, CryptoAPI or the others had trapdoors, but they can neither confirm nor deny.


What encryption scheme were you using in PGP?


We just used it for a trusted whitelist of users to send encrypted emails to one another, usually within the same company, and to forward invoices/purchase orders to be sent. We used the default PGP setup with symmetric keys (which used IDEA internally, I think). We switched this portion to Bouncy Castle with RSA.

We were just using it as an additional way to send financial docs, usually invoices/purchase orders and acks/receipts, using PGP where users traded keys, in places where the company was too small to warrant EDI, and to send invoices to/from the system. PGP was being productized at the time, and people were excited about encrypted email for a while, so we added that feature in.


But you weren't using RSA for the asymmetric PGP keys?


Probably. PGP used RSA for the asymmetric keys and session keys before IDEA; I can't recall exactly, but it was a short-lived part of the app, and PGP was changing during that time - it has since changed to support most algos with OpenPGP. I think we only added PGP at the time because it was a hot item / had some buzz about it.

In our app, we switched to Bouncy Castle for digital signatures after removing PGP from the other features.


Read the replies from all the serious crypto security people on Twitter and you will see the overwhelming consensus: the FSB/Spetssviaz and FBI/NSA probably love Telegram for its roll-your-own crypto and server-mediated group chats.

One also has to wonder if the FBI considers the Telegram team to be essentially undeclared Russian agents, and hence fair game.


Indeed. Despite what they say, we only have their word for it - there's no server source code to check whether they accepted these requests from the snoopers in the USA.


I thought they open sourced the code?


Yeah, it's open and the encryption is done in the client, so it should be available on GitHub.

[1] - https://telegram.org/apps


Cryptography experts like Matthew Green were having fun with some of his claims on Twitter a couple of days ago. I would read whatever Durov claims with a large amount of skepticism.

ex: https://twitter.com/matthew_d_green/status/87369621172278476...


If there are any professional cryptographic engineers that take this seriously, I'd be interested in hearing from them. From what I can tell, the response to this has been pretty much unanimous.


Governments never believed in privacy. Before, they were opening envelopes and tapping phones; now they are trying to keep up with technology, and given the sheer scale of resources, manpower and power at hand, operating 24/7, they will prevail.

A journalist like Poitras is on all sorts of lists and is incessantly harassed. There are secret courts, secret laws and secret processes at play, and beyond this the power of harassment, intimidation, blackmail and bribery. Individuals and even organizations cannot prevail against this array of capabilities.

It's nice to think of democratic theory and rights, but these only exist as talking points, when not exercised. The moment you start exercising them you end up on all sorts of lists, marked for harassment, and basically have a target on your back. Dissent is squashed even before it can formulate.


Assuming this isn't just PR, in some ways this is scary and disheartening.

But my first reaction was "Cool, our government really cares, is creative and has the necessary power to get things done."

For those of you who've worked with government, you've seen how insanely difficult the procurement process is - as specific as needing to get competitive bids for toilet paper purchases, etc. So the fact that they could get potentially large amounts of bribe money means (a) this goes to high levels in the organization, and (b) they've probably done this before.

I wonder how much they offered?

And I wonder how many other pieces of software have backdoors. I would think the first things they would try to get access to are (a) certificate issuers and (b) VPN software.

Do we know that GoDaddy, LetsEncrypt, OpenVPN, Cisco VPN, Juniper, etc don't have backdoors?


> Cool, our government really cares

I wonder what they really care about? Liberty, individual rights, security, or more power? They're people too, so I'm sure they believe in the first three, but the last one is more seductive and drives a lot more of the bad we see governments do; worse, the drive for more power is frequently justified as "we need this power to protect our nation".

> has the necessary power to get things done

This actually frightens me a bit. The US government has the ability (and has displayed the willingness) to absolutely destroy people's lives in their pursuit of "national security". What checks and balances are in place to keep this kind of power in check?


One... to see if you are for sale. If you can be bought, other state actors will try, and may succeed, whether the US actually bribes you or not. The US needs to know whether a successful app - including its top developers individually - is open to taking bribes, because millions of Americans, including people in positions of power, might be using that app.

Two... essentially, this is the power of the people applied. There is a lot of interest in keeping the status quo of power, of narrative, of money. They want to have dirt on everyone and then choose when to use it. Whether to apply pressure can be decided later; dirt will be collected by default. No point in digging the well when you are already thirsty - you need to dig it well before the need arises.

>What checks and balances are in place to keep this kind of power in check?

Either play the game, or be too big to fail and lobby the government for your needs. I think the best one can do is choose one's masters: the USA, China, Russia, or to some extent India or the EU's main players.


I think that the first point is a legitimate one - if your own government can buy you, so can someone else. Of course, when it comes down to it, everyone has a pressure point. In my thoroughly uninformed opinion, it would be better to identify and protect against such tactics than do them yourself and watch it backfire like this.

The rest, I can't argue with, though it doesn't really make me feel any better.


The paradox is many people feel the US government should have done more about 9/11 or other such events. Further, people can't get upset about surveillance that they are not aware of.

So, the push is always going to be to collect more data (barring leaks) and it's only people aware of what's going on and thus inside the system that can really resist that pull. Which means the government and politicians are often the most effective supporters of privacy.


> The paradox is many people feel the US government should have done more about 9/11 or other such events.

Certainly. After all, they chose money instead of keeping the one tool[1] they already had that was able to tell them about 9/11 before it happened and save all those people who died in 9/11 and other attacks. Negligence is complicity when a government chooses money over its citizens, so in the case of 9/11, I'd say the US government is as guilty as the attackers.

A great documentary about NSA and ThinThread is A Good American[2], and a good follow-up is The Maze[3].

[1]: https://en.wikipedia.org/wiki/ThinThread

[2]: http://agoodamerican.org

[3]: https://vimeo.com/ondemand/themaze


I agree with you completely. I think it's good that they aren't sitting behind a desk doing nothing, but I'm glad these stories come out so we can have the proper conversations and keep privacy-security tradeoffs at a comfortable level for society.


"Do we know that [...] Cisco VPN, Juniper, etc don't have backdoors?"

Wasn't it published in the NSA leaks that Cisco and Juniper do/did have backdoors?

We can trust particular crypto algorithms, but I'd bet money that for all complex, popular software there are not only vulnerabilities found by various agencies, but also vulnerabilities intentionally inserted there.

Looking back at known historical intelligence operations from the Cold War, it'd be ridiculous to assume that no one has tried and succeeded in getting some technician (or manager) to insert backdoors for them. Compared to other intelligence activities, the required effort is so small and the benefit so large that I'd assume multiple nation-states have inserted their own backdoors into various key components of global infrastructure.


> LetsEncrypt

Let's Encrypt is a CA plus tooling, mostly built by third parties.

The centralized CA system is imperfect, but it certainly can't have a "backdoor" in it, especially now that Certificate Transparency is heavily enforced.


> Cool, our government really cares

Government == Individuals that work in public positions and do not think as one.


The ridiculousness of it all is that it's fundamentally unreasonable to try to prevent encryption in the name of safety and security from violence.

Sure, you can lock up all communication for privacy reasons, and the government can spend all kinds of resources trying to control, prevent or circumvent encryption - but it's a waste of resources, as it's simply a band-aid.

If I wanted to do something violent or evil, I could simply have regular meetings and use paper communication - the old spy-style stuff. Of course those networks can be infiltrated by governments with the resources, and they can maintain that presence by allowing certain acts within the networks to occur versus deciding which ones to stop; it's how the war against Hitler was won once their encryption was broken - watch the very well-done The Imitation Game (http://www.imdb.com/title/tt2084970/) for a reference.

The only real solution is dealing with the root causes. I heard an analyst on TV (a rare occasion for me) mention, after Trump's Saudi visit and speech, that he didn't say the Saudis should look into the root causes of why terrorist activity is growing in their countries; of course a lot of it is historical karma and rage from violent acts against their families, however a lot is because people's basic needs aren't being met which prevents the higher levels of Maslow's Hierarchy of Needs from being reached and maintained.

There's a solution, and it requires building real community, locally, where you are now - and striving for people to become healthy so they don't develop biases and other coping mechanisms which prevent empathy, understanding and therefore compassion. Preventing responsible ownership of weapons isn't useful either; not developing and supplying weapons en masse would be beneficial, though most attacks recently have been with vehicles or knives.

Universal Basic Income would also get us closer to a truly free labor market, and things can evolve from there, giving people the time to do what they feel is most important in the moment, without being forced to work in a shitty environment with shitty managers or co-workers; the health improvement and increased productivity alone are worth it.


> however a lot is because people's basic needs aren't being met which prevents the higher levels of Maslow's Hierarchy of Needs from being reached and maintained.

This is contradicted by a lot of evidence. Terrorists are most commonly middle class members of their society and often well educated. If anything, terrorism is a powerful means of satisfying the higher levels of 'needs', e.g. meaning, purpose, community.


Interesting, and perhaps a fair point. We'd have to understand what quality of life being 'middle class' means in each society, what well-educated means, and there are more factors of course. If you hurt someone enough, either directly or indirectly by killing family and friends' family members, that will start to outweigh the feeling of being in a safe environment - which doesn't simply come from being 'middle class' or well-educated. Safety is the basic need, and if a world power is killing 100,000s of your fellow citizens, your future isn't going to feel safe. I could mention PTSD and such, though I think that minimizes and simplifies it too much.


"a few months later i was offered an interview for a position at the fbi office for cyber-warfare in nyc who as well offered to fix my immigration status"

and, "before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar"

http://mickey.lucifier.net/b4ckd00r.html


Would you mind clarifying what your quotes and the linked wall of text have to do with the story?


Seems to be a similar story by a security guy who wrote crypto code for OpenBSD. The larger quote:

> about the same time at the bazaar show in nyc i was contacted by a representative of us-ins and a ukrainian millitary attache at un. both investigating my involvement with openbsd. a few months later i was offered an interview for a position at the fbi office for cyber-warfare in nyc who as well offered to fix my immigration status (or none thereof at the time ;) with a greencard. nonetheless for the lack of credibility from the future employer i refused to talk to them as for me deportation did not sound like an acceptable worst-case scenario.

> before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar. later on my way back during a short stay in chicago also randomly appearing fbi agent. fellow was ordering food and beer for me and just like his navy pal gave me a warning to keep my mouth shut!

He was a foreign national visiting the US who probably got targeted by various agencies after attending some security conferences.


Ah, thank you.


But wouldn't it be in the interests of mass surveillance to herd people toward a chat option that isn't secure, or that the surveillants have a backdoor to? You get two benefits: 1) you can read chat that people think is secret, and 2) people self-identify as selectors / targets by choosing to try to hide their communications - which you can actually read.

And if such PR herding worked, wouldn't the surveillants be prepared to pay for such efforts to make their job easier?

So what seems readily apparent is: Telegram takes state money to offer an insecure option, while telling the world that it's a) secure and b) turning down state money all the time.

I know why this perspective isn't discussed in the MSM, but I don't get why it's not discussed more here. It seems obvious to me. And personally, IMHO, I think that's a good thing: catch more criminals / terrorists.


Can someone correct me if I am wrong, but it seems relatively easy to make an encrypted peer-to-peer messaging system.

I mean, simply use a public/private encryption algorithm that has proven to be highly secure:

- Share your public key openly

- Anyone can send a message to you using your public key to encrypt the message

- You decrypt with your private key on device

Do all the encryption/decryption on device and voila, secure messaging. (This is basically how https works.)

Of course this only allows a single device the ability to decrypt the message.

However, if you want to allow multiple devices to share a private key, they can simply send each other their private keys using the same encrypted protocol.

In addition, for super-paranoid use, a master password could be used to salt/encrypt the private key, so that the password would be required along with the private key to enable decryption. (Which is similar to how password keepers basically work.)
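To make it concrete, here's a minimal sketch of the scheme I have in mind, using PyNaCl's SealedBox (assuming that library; any sound public-key encryption primitive would do):

    from nacl.public import PrivateKey, SealedBox

    # Each user generates a keypair on device; only the public half leaves it.
    sk = PrivateKey.generate()
    pk = sk.public_key  # share this openly

    # Anyone can encrypt to the recipient using the public key...
    ciphertext = SealedBox(pk).encrypt(b"hello, world")

    # ...and only the holder of the private key can decrypt, on device.
    assert SealedBox(sk).decrypt(ciphertext) == b"hello, world"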

What am I missing?


Which public key algorithm? In what mode of operation?

What are you going to use to actually encrypt messages? You don't want to directly use the public key primitives to do this.

In what mode of operation are you going to use that second, bulk encryption algorithm?

How are you going to authenticate messages?

What will you do to validate the public keys of your peers? When you close the application, will it forget everyone's keys? How do you prevent MITM on first contact?

What happens when your peers change devices, and thus public keys? How do you authenticate those changes? If you get any of this wrong, remote attackers can MITM messages.

How will you handle file transfers (and images and videos and voice, which will probably need yet another cryptosystem)? How will you cryptographically bind those transactions to the (presumably, somehow) authenticated chat session you set up?

What happens when someone's device is compromised? Is every chat they've ever sent also compromised?

What happens if someone is briefly compromised? Is every message they send in the future also necessarily compromised?

How will you handle updating your software when, inevitably, someone finds a vulnerability in it? What happens if you have to upgrade the whole protocol?

None of this is easy. Most of these problems by themselves are hard in their own right, but there's a combinatorics to them as well.


Thanks!

This is an excellent list of potential issues.

"What are you going to use to actually encrypt messages? You don't want to directly use the public key primitives to do this."

I'm not sure what you mean by this.

Could you explain why there is a need for another encryption protocol beyond public/private key encryption?

If the protocol is secure against brute force attack, both the public key and the encrypted messages could be open and would not create a vulnerability to the private key.

What am I missing?


* Asymmetric transforms are much, much slower than the AES transform or any other block or stream cipher.

* In most cases, an asymmetric transform gives you a deceptively small amount of headroom within which to fit your data before losing security.

* Asymmetric transforms are less safe to implement than simple authenticated symmetric ciphers.

* For that matter, cost-effectively authenticating messages will require "symmetric" primitives anyways.

* Modern asymmetric algorithms (like Curve25519) don't "directly" support encryption.

That's just off the top of my head. It is hard to think of a single competent public key cryptosystem that encrypts directly with the asym transform.
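For the curious, a minimal sketch of the standard hybrid pattern (illustrated with PyNaCl; the names are mine, not a spec): the asymmetric primitive only wraps a small symmetric session key, and a fast authenticated symmetric cipher carries the bulk data.

    import os

    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox

    recipient_sk = PrivateKey.generate()

    # Bulk data goes through a fast authenticated symmetric cipher...
    session_key = os.urandom(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(session_key).encrypt(b"a large message body ...")

    # ...and only the 32-byte session key is wrapped asymmetrically.
    wrapped_key = SealedBox(recipient_sk.public_key).encrypt(session_key)

    # The recipient unwraps the session key, then decrypts the bulk data.
    key = SealedBox(recipient_sk).decrypt(wrapped_key)
    plaintext = SecretBox(key).decrypt(ciphertext)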


> What happens if someone is briefly compromised?

Could you give an example please? How do current protocols deal with that?



I know about forward secrecy, but my question is how you could protect future sessions, not past sessions.


Negotiate new keys with Diffie-Hellman periodically, I think.
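Roughly like this, as a sketch with X25519 from the Python "cryptography" package (the session label is made up for illustration):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates a fresh ephemeral key pair per interval and
    # exchanges only the public halves.
    a_priv = X25519PrivateKey.generate()
    b_priv = X25519PrivateKey.generate()

    # Both sides derive the same shared secret from their own private key
    # and the peer's public key.
    a_shared = a_priv.exchange(b_priv.public_key())
    b_shared = b_priv.exchange(a_priv.public_key())
    assert a_shared == b_shared

    # Run the raw secret through a KDF before using it as a message key.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"session-42").derive(a_shared)

    # Deleting the ephemeral private keys afterwards is what limits the
    # damage of a later compromise to a single interval.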


For example you're missing forward secrecy: Do old messages stay secure or not if a key is leaked?


The public/private keys could be changed periodically, and old private keys deleted. Once deleted, access to the messages they decrypt would be permanently lost (no searching of message history).


One thing missing is that most users cannot be trusted not to lose their key, and they still want a way to recover it.

LastPass, for example, provides ways to do that, such as by using devices you have used recently, but I don't think it is particularly secure.

Spreading the key to multiple devices so that you have a copy of it on another device helps obviously, as does allowing an unencrypted backup of the key, for example on a USB key you store securely.

The other problem is paying for it. To deliver messages quickly to all devices, even when they are offline, the messages obviously have to be stored server-side, which takes up space and bandwidth.

A federated system - where a user is on a particular server, you deliver messages to that server, and it delivers them to their devices (possibly when they come back online) - makes paying for it easier to manage: you can get other people to host it, or those who are able to can host it themselves. It also removes the single central point of failure.


Yes, if you lose your private key, it is gone forever. Otherwise there is no security. (Backup options would depend on the use case.)

Yes, payment is a separate issue. It would be assumed that there is value in having this system available to the users that would be outside their messaging needs.


Who do you trust to distribute the keys? This person is empowered to MITM.

Who do you trust to know the metadata about how encrypted messages are flowing?

Who do you trust to get the crypto implementation details right?

How will you support use of multiple devices? People generally expect seamless switching between phone and laptop these days.


Public keys are open; they can be distributed any way (they could be published to a hosted directory or shared wherever). Only the device that owns the private key can decrypt the message.

In fact, encrypted messages could be stored publicly anywhere and only the intended recipient could read them.

The flow of messages is not encrypted, the system only encrypts message contents. (However, there are options to make it very difficult to trace, but that is a different issue.)

Trust is up to the user to decide and is always necessary.

Multiple devices are easy enough: a device can encrypt its own private key and send it to another device (using the target device's public key). The target device would then have 2 private keys for decryption.


Public keys are open, but the hard part is mapping real people to their public keys. How do you know that the public key they send is the real one, and wasn't modified en route? For people you know IRL, this is easy, but for strangers?
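One common mitigation (a generic sketch, not any particular app's scheme) is to derive a short fingerprint from the public key bytes and have both parties compare it over a channel an attacker can't rewrite - in person, or over a phone call:

    import hashlib

    from nacl.public import PrivateKey

    pk_bytes = bytes(PrivateKey.generate().public_key)

    # Hash the public key and render a short, human-comparable fingerprint.
    digest = hashlib.sha256(pk_bytes).hexdigest()
    fingerprint = " ".join(digest[i:i + 4] for i in range(0, 16, 4))
    print(fingerprint)  # e.g. "3f9a 1c0e 77b2 d4a5"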


You could use OpenPGP public keys, so you can use the web of trust that comes with them.


You are missing the fact that the OS running the system is already compromised and has government backdoors. No matter if you use Telegram / WhatsApp / whatever - Windows / Android / iOS / you name it already have a million ways to be compromised and backdoored if needed.


There aren't a lot of places that are embracing truly end-to-end encryption for the masses. I think it would be tough in the U.S., but it's not clear to me where the better place would be.


Something like Iceland or Sweden?


Option 1: Could be Russian/Telegram propaganda.

Option 2: Could be true because seriously, who trusts the FBI/NSA not to violate our privacy anymore?

Really not sure what to believe about this one.


Option 2 requires you not just to believe that the FBI and NSA would want to compromise Telegram (I have no trouble believing that) but that they'd be comfortable disclosing that to Pavel Durov. I have a very hard time believing that any branch of the US IC is comfortable trusting secrets to Pavel Durov.


A person born in Russia says something and you think it's "Russian propaganda".

McCarthy would be proud.


Both the US and Russia are well known for interfering with foreign governments. Hell, even America has its own foreign propaganda service, the "Voice of America" radio.

Also I have no problem with communism, I have problems with corruption and government over-reach/attempts to influence populations.

I'm not new to this. I've worked with people who flood social networks with bullshit to sway public opinion.

Always be skeptical, of both sides. And at this point I don't trust either the NSA or its Russian equivalents.


Surely it's not just an issue of location but of scale? Unless there is a huge team reviewing code, an individual or small team could be paid off by an agency to provide a backdoor. For the right combination of large-scale app and small team, there'd have to be a price at which many individuals would capitulate. If the backdoor is somehow revealed: "doesn't matter, got my money".

I used to wonder whether some success of social media companies couldn't be explained by secret payments for backdoor access. You could be operating out of Europe or Africa and still get offered money, and other pressure carefully applied.

You might think you'd hold true to your plan of privacy-for-all, but if they offer $x00m or more?


Sounds like a reasonable exit if no one wants to buy your popular e2e encrypted chat app: take the bribe, shut down, and move on to the next iteration.


1) Open source the codebase pre-backdoor 2) Take the bribe 3) Insert the backdoor 4) Close the company


truecrypt?



Pavel Durov is the guy who started vk.com which stores passwords in plain text: https://thehackernews.com/2016/06/vk-com-data-breach.html He has no clue about security. Whatever he claims, I would never use Telegram.


"Never do anything against conscience even if the state demands it." --Einstein


There isn't even any need to weaken the homebrew encryption - good luck using it at all. I don't even have the option on the Linux desktop client, at least: the "secret chat" feature isn't available.


This is pretty alarming stuff.

Especially considering that competitors like Signal are US-based. Signal is owned by Twitter, which is by no means a small player, so it isn't likely to fly under anyone's radar.


If Wikipedia is current on the topic [1], then Twitter owns Whisper Systems (RedPhone, TextSecure, ...), but Moxie runs Open Whisper Systems (Signal).

[1] https://en.wikipedia.org/wiki/Open_Whisper_Systems


It's not a stretch to believe that people working on Signal have been offered bribes.

But it's definitely a stretch to believe that they've accepted them, or that it's worked. The protocol and the source code are public and well-reviewed (Telegram has a much less well-reviewed algorithm and does code drops on GitHub); it'd be a challenge to successfully fit a back door in Signal.

The thread that @durov was replying to was portraying being hired away from a public crypto project as a "bribe". Certainly that's a thing you could successfully do, but it also doesn't weaken Signal's encryption as long as there are still people working on it.


Signal is not owned by Twitter. Moxie works there, but that doesn't mean his code is owned by Twitter.


I hate to pedantically bore people, but Moxie does not work at Twitter. He did for a short period of time after Twitter acquired his two-person startup.


> Signal is owned by twitter

No. OWS != WS


There is no need for US intelligence to do that, looking at the choices Telegram made on its own.


The government has progressed from banning encryption to trying to subvert it :/


That's actually a central tenet of the NSA's mission. But that mission predates the internet and public-key crypto. Now it's like the gas company running around drilling holes in gas pipes.


How is Telegram making money?


Good PR.


[flagged]


We detached this comment from https://news.ycombinator.com/item?id=14560721 and marked it off-topic.


[flagged]


You couldn't be more wrong or uninformed. Russia-bashing seems to be a common and accepted theme now. He actually left Russia and his old company because the Russian government tried to censor it (VK) and he refused to comply. So basically he and Putin are in some kind of fight/disagreement.

https://en.wikipedia.org/wiki/Pavel_Durov#Dismissal_from_VK


He didn't leave Russia as in "exile"; he left Russia as in "it's nice to live somewhere else". Here [1] is a news report about him throwing away the phone of someone who tried to take a photo of him in one of Saint Petersburg's malls. Here [2] is an article from 2014 stating that he was visiting Telegram's office daily. By the way, Telegram's developers sit in the same building as VK's (also in [2]).

Durov seems very keen on supporting the myth of his "dissent" (to the point of outright lying) and very shy about the actual location of Telegram's developers and servers. Guess why.

[1] https://lenta.ru/news/2017/03/20/durov/ (in Russian)

[2] https://tjournal.ru/p/durov-back-in-ussr (in Russian)


I never stated it was for exile.

But you are right: article [2] talks about him being in Russia some time in early 2014. It also states that one of the reasons he was there was to sell a datacenter. He left shortly after that with the comment that he had no intention of returning to Russia, arguing that the country is incompatible with doing internet business.

> I left Russia and I am not planning to return. Unfortunately, at the moment the country is incompatible with doing internet business.

As for the office being in the same building as VK's: that is definitely weird, even though I know they also have offices in Berlin and London. The same goes for the fact that they have multiple datacenters around the world for speed and security; there will probably also be a DC in Russia. I don't feel like my data is unsafe because of that.

And where did he lie about the location of developers or servers? I'd be very interested in seeing that. With that you could convince me that something fishy is going on.


> A Russian

Russophobia seems to be the only acceptable form of xenophobia these days. And a wildly popular one, at that.


Is it bad to be afraid when there is reason to be?


Of course not, but there should be more to it than merely seeing the word 'Russian'. I see this all the time now in left-leaning US papers (NYTimes, WaPo), which immediately imply that an association with a Russian means something bad is happening (he spoke with a Russian!). Much like the other popular generalization of 'Muslim' without qualifying which kind of Muslim, when significant subsets (Sufism vs. Wahhabism, for example) have never been involved with terrorism.

There are proponents who want to eliminate any type of generalization based on nationality/religion/etc., which is also heavily flawed. Somewhere in between is the rational approach.


The stuff in the papers is not taking "Russian" to mean "bad." It's looking at serious allegations of collusion with the Russian government, known interference in the 2016 election by the Russian government, statements by various administration officials that they had no contact with the Russian government, and then using meetings with high-level officials in the Russian government to imply something bad happening.

With this comment, the idea is that crypto software built in Russia could easily be compromised by the rather un-free Russian government, which seems pretty reasonable. Of course, the fact that this fellow no longer lives in Russia scuttles it, but that's just not being fully informed, not "xenophobia."


> The stuff in the papers is not taking "Russian" to mean "bad." It's looking at serious allegations of collusion with the Russian government

That's a very generous assessment.

If only accusations of "speaking to a Russian" were qualified by the conversation actually involving something malicious or illegal before suggesting it was improper then I'd be in much more agreement.

The fact is, besides Michael Flynn, who was immediately fired once people with real power found out, nothing has come out that was bad on the administration's part. Even the accusations against Kushner seemed to be the handiwork of Flynn, who was present at the only meeting Kushner had with a Russian official.

Even the level of manipulation of the election results was seemingly marginal. If leaking the Clinton/Podesta/DNC emails was the limit of the interference (assuming they even leaked the Podesta emails to Wikileaks), despite most Americans on the left falsely believing it involved manipulating actual votes, then I really don't think it was as bad as people seem to think.

At most, the worst that can be said is that the Trump/RNC emails weren't also leaked, but everyone knows Trump doesn't use email, so the damage would have been limited to the RNC, which Trump attacked on multiple occasions.

That would be a preferred scenario to Clinton/DNC emails never being leaked IMO.

So overall I don't see the total effect of Russian manipulation as having been a deciding factor. Especially considering that the reason he won lay in poorly educated industrial swing states, where things like the latest Wikileaks Podesta email aren't as big news as a Trump rally getting people excited over populist messaging.


The allegations are serious enough for there to be multiple ongoing official investigations, and were enough to get Flynn fired; I don't see what's so generous about that.

Lying under oath to Congress is pretty bad, and when you lie about never meeting Russian officials, and then it turns out you met Russian officials, it's even worse.

Most of your comment is downplaying the consequences of Russian interference, which doesn't make any sense to me. I don't care if Russian interference resulted in +10% of the vote for Hillary, a foreign power screwing with our democracy is capital-B Bad, and if there was in fact coordination between the Russian government and the Trump campaign, then it's capital-T Treason.


> Is it bad to be afraid when there is reason to be?

We also have a reason to connect "American company" with "NSA spy"... and it's probably not really true for all US companies, is it? :)

It's kind of ironic that Telegram is being attacked for being Russian by people coming from US of all places.


Falling down a staircase does not always hurt you seriously, but I'm still afraid of getting hurt when falling down one. That's why I avoid falling down staircases as best I can.

I also avoid, as best I can, giving any data to US companies or buying privacy-sensitive products from them.


If there's a reason to be afraid, give it. Nationality is a weak reason.


There's a difference between "afraid of" and "don't trust closed-source crypto from".


There should still be a reason at the end of that: don't trust closed-source crypto from [] because [].


Putinophobia is more accurate, and it's somewhat justifiable albeit probably a bit overblown.


While I'm certainly not a fan of custom-made crypto, Telegram is also open source, and at least they don't pretend there is "end-to-end encryption" unless you actually did the key exchange verification on your own.
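
For the unfamiliar: "doing the key exchange verification on your own" boils down to both parties comparing a key fingerprint over a channel the server doesn't control. A minimal sketch in Python, assuming plain SHA-256 fingerprints (this is an illustration, not Telegram's actual key-visualization format):

    import hashlib

    def fingerprint(key_bytes):
        # Short, human-comparable digest of the negotiated key material.
        digest = hashlib.sha256(key_bytes).hexdigest()
        return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

    # Each side computes this locally and compares it in person or over a
    # phone call; a mismatch means someone swapped keys in the middle.
    print(fingerprint(b"example negotiated key bytes"))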

> Don't suppose this would be one of Putin's patriotic citizen artists spreading fake news do you?

You might have missed it, but Pavel lost the business he built to one of Putin's oligarch friends.


> Pavel lost business he built to Putin's oligarch friend

Source?



The server is closed source.


It doesn't matter if the server code is proprietary if E2E keys never leave the device. And an open-source server changes nothing, since you can't verify what code is running on the actual production server anyway.

And messages that are not E2E encrypted are unsafe by default.
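
As a minimal sketch of that "keys never leave the device" property (generic X25519 + AEAD via the pyca/cryptography library, chosen here purely for illustration; this is not Telegram's actual MTProto scheme): each device generates its private key locally, and the server only ever sees public keys and ciphertext.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Private keys are generated on each device and never uploaded.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    def derive_key(my_priv, their_pub):
        # Both ends derive the same symmetric key from the DH shared secret.
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"e2e-demo").derive(shared)

    key = derive_key(alice_priv, bob_priv.public_key())
    assert key == derive_key(bob_priv, alice_priv.public_key())

    # Encrypted on Alice's device; the server can only relay/store this blob.
    nonce = os.urandom(12)
    blob = AESGCM(key).encrypt(nonce, b"meet at noon", None)
    print(AESGCM(key).decrypt(nonce, blob, None))

The catch is key distribution: if the relay can substitute public keys, it can sit in the middle, which is why verifying keys out of band matters.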


Durov is the founder and ex-CEO of Russia's (and Europe's) largest social network site, Vkontakte. He is one of the few tech billionaires from Russia.

A few years ago, the Kremlin tried getting its hands on Vkontakte (to be able to more easily monitor and censor it), but Durov kept rebuffing them. Eventually, they intimidated him into selling his majority stake to one of their pet oligarchs. As you can imagine, Durov is pretty bitter about his company being taken away from him (and generally bitter about security services trying to mess with people's rights). After cashing out, he moved to the EU and became a consistent critic of the Russian government.


He is not known to be a patriotic Putin supporter, quite the opposite.


The problem I have as an end user is that I want the infrastructure protecting me to be invisible. Let's return to this after the following paragraphs, in which I will draw some pretty far-reaching conclusions.

I think we can all agree that if some totally below-the-radar crypto-anarchist, who happens to have a few million dollars from bitcoins, figured out that they actually have enough access via the dark web to bribe a few Russian generals and, long story short, detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way. This will seem like a made-up example to you, but I purposefully don't want to confuse the issue with practical examples. We can all agree that at some point this should be stopped.

A reasonable time to stop it might be when intelligence agencies get a literal screenshot from a dark-web chatroom (from a concerned participant who thinks the others are really going too far) where this is being planned in exacting detail, but more information is needed to be precise. (For example, suppose the source of the nuclear bomb were not Russia, but not enough information was given to identify it. There are actually quite a few nuclear states and many of them are quite corrupt; a short list includes India, North Korea, and Pakistan.)

I would think that this kind of actionable, urgent intelligence should unlock whatever privacy safeguards are in place. The issue is that if there is a correct "technical" solution (if the cryptography works "correctly" and is not broken, in the academic sense), then there is no technical possibility of unlocking anything. If Tor, cryptocurrencies, and encryption "work" (in a binary sense: yes it works, or no, it's broken), then following the receipt of such a screenshot there is no technical means of taking any further step.

Here I'm going to be philosophical for a second. The future of technology is nearly infinite human power. You can already, in the next few seconds, initiate a cryptocurrency transfer to anyone anywhere in the world, who can receive it without any banking infrastructure or oversight.

The arc of technology has been personal human enablement. When individuals become nearly God-like and all-powerful, it is dangerous to be in a position where, like the Muslims reporting the madman banned from his U.K. mosque for radical insanity, the status quo is that you report your friend to the authorities saying, "My online friend, God-like in his powers, is planning to murder a million people just for shits and giggles, and he's kind of insane. Unfortunately, I don't know where he is or what he's doing, but I'm pretty concerned. He has a lot of money from a few Ponzi schemes he ran. It's pretty credible for the following specific reasons (screenshots, quotes, etc.)." And the only response from the authorities is, "Thanks for all this. We don't know where he is either; in the grand scheme of things a million deaths isn't that much, and if it happens we will look at preventing another such case."

That's a pretty silly response, isn't it? That the only possible response is, sorry, nothing can be done.

Okay, now I've laid out why there should probably be some infrastructure on the back-end.

What I don't like is that this translates to humans literally reading people's private correspondence, web searches, etc. It's not very good.

What is a good middle ground?

Can't the NSA make things that run locally, so that no human is reading your correspondence or web traffic, but as you start researching nuclear weapons and making plans on how to murder a million people, and start making those transactions, all this starts adding up and, to quote the Constitution, its tools can receive instructions "particularly describing the place to be searched, and things to be seized", so that after such a report, its perpetrator can be found, or at least enough information can be collected to stop it if it is actually taking place?
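
Purely as a sketch of what "runs locally" might look like (every signal name and threshold below is hypothetical, invented for illustration, and this solves none of the hard problems of who writes and audits the list):

    # Hypothetical client-side monitor: nothing leaves the device below the
    # threshold, so no human ever reads the underlying traffic.
    RISK_SIGNALS = {
        "enrichment schematics": 5,
        "detonator wiring": 5,
        "large anonymous transfer": 3,
    }
    ALERT_THRESHOLD = 10

    def local_monitor(events):
        # Scores accumulate locally -- the "all this starts adding up" idea;
        # at most a minimal alert is ever emitted, never the raw data.
        total = sum(RISK_SIGNALS.get(event, 0) for event in events)
        return "alert: threshold crossed" if total >= ALERT_THRESHOLD else None

    print(local_monitor(["enrichment schematics", "detonator wiring"]))

Even as a toy, this just moves the trust problem into whoever ships the signal list and the code.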

I think that all of us here could be okay with being stopped at some point between purchasing a hundred million dollars in anonymous currency, and detonating a nuclear bomb. It's sensible. That can be part of the social contract.

It's difficult. Nobody wants to live with a judge, jury, and executioner in their home looking at everything they are doing in case they break some law.

I am glad that I personally don't have to answer these questions. But we can all agree on the need for privacy (no human looks at what you're doing), and also on the reasonableness, as each individual online progresses toward infinite personal power, of protecting the rest of society from credible, immediate, specific threats.

I agree with cryptographers who think of cryptography as a tool that is either working or broken. (If it has a back door, it's 'broken').

Perhaps if tools included a certain portion that runs locally they could increase the extent to which the tools are not actually 'broken' (i.e. they are actually working, and actually not backdoored), while also increasing the safety every single person has from other individuals being able to plan or pay for their specific death anonymously, and with impunity.

I realize that my suggestions here are not specific enough to be actionable, they are not clear recommendations. But I don't even see these possibilities being discussed (at least publicly), so I wanted to at least move the conversation a bit in this direction.

EDIT:

---

I'm getting downvoted pretty heavily. Let me ask point-blank: are you okay with someone being able to spend two weeks on the dark web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you, being an innocent victim of their detonating the results? Or would you want that person to be stopped at some point after they started doing that? The future of information is that it is ubiquitous and easy to access. [I edited this paragraph from first to third person.]

Actually secure communications would mean that it is technically impossible to see whether someone has started communicating with people at ISIS who have overseen and helped people blow themselves up. I am not saying communication should be weak and insecure, but should I really, practically, be able to start doing that if I want?

This is not some kind of false example, either.

Also, for downvoters: I think it is easier for you to agree with the other half of my statement, that nobody should be looking at our web traffic and correspondence, and that it should be actually secure, and also actually private.


> are you okay with me being able to spend two weeks on the dark-web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you, being an innocent victim of my exploding the results, or would you want me to be stopped at some point after I started doing that?

Yes, of course. Just as I am okay with people being able to spend ten minutes taking a knife from the kitchen and killing me. People are able to do all kinds of bad stuff; that doesn't mean they actually do it. The only alternative to a world where people are able to do bad stuff is totalitarianism, which is itself bad stuff (if only because I don't get to say what is considered good and what is considered bad), so it is not actually a solution either.

Also, what you seem to have in mind is just a logical impossibility. Either your communication can be read by third parties or it cannot. You cannot build crypto that only leaks nuclear bomb plans but keeps your medical information uncrackable.

And you might be interested in some of Cory Doctorow's talks (or blog articles or maybe even books) on the general topic of "the war against general-purpose computing", like maybe one of these:

https://www.youtube.com/watch?v=gbYXBJOFgeI

https://www.youtube.com/watch?v=HUEvRyemKSg


Ted Kaczynski, is that you? When did they give you Internet access? Wait, I thought you hated the Internet?

> I think we can all agree that if some totally below-the-radar crypto anarchist who happens to have a few million dollars from bitcoins figured out that they actually have enough access via the dark web to bribe a few Russian generals and long story short detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way.

Seriously, listen to yourself. Chop up that sentence, analyze it, and hear how silly you sound.

Why didn't you throw Jason Bourne or James Bond in there while you were at it?


> I think we can all agree that if some totally below-the-radar crypto anarchist who happens to have a few million dollars from bitcoins figured out that they actually have enough access via the dark web to bribe a few Russian generals and long story short detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way.

I agree with this, and nuclear disarmament seems to be the best way to stop it.

Any approach involving changes to electronic communications seems unlikely to be effective: people have been trying to bribe Russian generals for centuries, well before the internet.

> Let me ask point-blank: are you okay with me being able to spend two weeks on the dark-web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you, being an innocent victim of my exploding the results, or would you want me to be stopped at some point after I started doing that?

How is this different from you spending one day reading the 1971 Anarchist Cookbook? In the almost half-century since it was released, we don't seem to have had an epidemic of homemade bombs, so I don't seem to have an evidence-based reason to object to people being able to read that book. Is the dark web different?

Also, I live in America. You could just literally go buy a gun at Wal-Mart, and you have the legal, constitutional right to do everything you do up to the second where you point it (intentionally or not) at me or one of my loved ones and fire: you cannot be stopped. Shouldn't I be worried about that instead?


It seems the problem logicallee is working on is the massively growing destructive power available to individuals or small groups.

Technology is accelerating to the point where the destructive power that was formerly available only to state actors with proper command & control systems is now available to small states, groups, and even individuals -- chemical, bioweapons, delivery by drone, etc. It is now possible to mail-order custom gene sequences for garage bioengineering (yes, they do try to filter the requests against homebrew bioweapons, but the operative word is 'try'). Even computing power -- I'd be surprised if a random dozen people on this forum, properly motivated and funded, could not take down the US power grid within a year.

Mass destruction in the hands of individuals is a far greater problem, in scale and scope, than the ability of any nutjob to go to WalMart and buy a hunting rifle to point at you, me, or a Congressman.

It is the kind of real problem that keeps serious security pros up at night. And there are many of these scenarios becoming more real all the time, even if logicallee's nuke example seems too fictitious for you.

The real question he's posing is whether it's feasible to build an automated system that's sufficiently private and intelligent that it could scan communications without violating privacy while alerting only on genuine threats.

I think it's an interesting idea, but even if it were implementable, it would fall to the "who guards the guards?" problem. What is to prevent the people who build, maintain, and operate the watch-system from abusing it? Nothing but the same level of ethical training we have now, so this simply adds one level of indirection.


But crypto is built from math, which is available to anyone who possesses a brain. Even if you locked down all the academic output related to encryption, you can't ensure no one will discover another way to hide and transmit secrets, either around or through your usually-private-except-for-serious-threats communications network. You'd have better luck trying to lock down harmful bioagents and fissionable materials, but as you alluded to, as long as these technologies exist the world faces a security threat. It seems to me the only way to combat these threats is to construct a society where individuals never feel the need to leverage their increasing power; a police state where anyone's communications can be inspected or abused at will by an agent, a hierarchy, or an algorithm seems antithetical to such a society.


Sure, but that destructive power has nothing to do with online communications. I sort of buy your argument about the power grid, but I am absolutely confident that a dozen motivated and funded people from this forum could easily build their own darknet, operating over some combination of radio waves, dark fiber, and sneakernet, that is completely invisible and unknown to law enforcement.

Again, there have been conspiracies (and armies) in human history for centuries, and most of them didn't have realtime messages in people's pockets. They had letters carried on horseback, and it worked just fine.

People use encrypted chat apps over the internet because it happens to be easy enough and reliably secure enough. If it weren't, there's no inherent reason to keep using the internet for this. There's enough other ways to communicate.



