This seems to be a shot at WhatsApp and Signal, implying that they have loopholes that allow the FBI to snoop in. I'm not sure how true that is. This might be an attempt to deflect from the fact that Telegram uses a home-baked encryption protocol which might be insecure, while WhatsApp uses the OWS protocol.
They want people to think like that because they've built businesses that require it. Telegram stores the messages you send/receive unencrypted on their servers. That's not good, unless you've been trained to think that "privacy" is just about choosing the company, government, or legal jurisdiction that gets total access to your data.
Security professionals know that's not how we should think about security (never trust people!), because Durov is leaving a lot out: there aren't safe jurisdictions, servers get hacked, and centralized databases will get compromised. Logic, though, is probably no match for conspiracy theories.
If Telegram isn't that secure, then why are extremists like IS using it over Signal or WhatsApp? I know Telegram has better features for big groups and much better multi-platform support, so is that the reason? I'm legitimately asking without any snark.
I am talking about politicians, heads of terrorist groups, etc.
While saying that top politicians are complete idiots is probably wrong - then again, maybe not, I'm not sure anymore - the fact that H. Clinton, who ran the most expensive campaign in history with backing from all major tech corps, AND her staff didn't bother to use encryption at any scale, let alone run a proper mail server (God knows what software that server was running - was it OpenBSD or Windows Server 08?), says a lot to me about how flawed these people's and their consultants' understanding of today's world is.
Watching "House of Cards", everybody seems incredibly smart, driven, etc., but the politicians I see in real life are on average on the not-very-smart side, and the few I've met in person are clueless beyond salvation.
Ps. Sorry for possible mistakes, I'm reading from mobile.
Sadly, one of the conclusions you can draw from Telegram is that a security business is almost better off with no security.
Pavel Durov says that Telegram is "heavily" encrypted over and over again, journalists who don't know better take that at face value and encode it into every headline ("Telegram, the encrypted messaging app, ..."), and meanwhile since there is no encryption there is no risk of anyone finding or publishing vulnerabilities in it.
Since cryptography already seems unreal to most people (and is always just a matter of getting the smart IT guy to 'crack' it on television), everyone's already predisposed to thinking that security = Switzerland. Unless there's a major change, the charlatans will win every time.
Also, is there evidence that extremists aren't using Signal and WhatsApp?
WhatsApp and Signal offer non-anonymous groups only. They are probably used by people who already know each other.
Anyone who has used Telegram for more than 5 minutes knows there are secret chats. The effort being made to rig the information against Telegram also says a lot about its relevance.
The main developer of Telegram Desktop said:
"Well, I'm afraid there isn't. They are just not in high priority, many other important things first."
It seems stickers, funny bots, and GIFs take priority over security. I think that says a lot.
Secret chats are not enabled by default, and most users don't enable them; so yes, messages are generally stored without encryption on the Telegram servers. That's a more neutral way of presenting the information.
I think his point is the reverse: people trust Signal because of Moxie & Trevp.
*puts on tinfoil hat* It's possible that Trevp is not "that" involved with Signal because he doesn't want to be involved with the government and backdoors.
The point of all of this should be that it's not about people, places, or jurisdictions. If you make it about that, the charlatans who want you to think that Switzerland = secure will win every time.
They don't. From the Telegram FAQ:
"Cloud chats are stored encrypted in the Telegram Cloud, and the keys needed to decipher this data are kept in other data centers in different jurisdictions. This way, local intruders or engineers can't access this data, and several court orders from different jurisdictions are required to force us to give up anything. Thanks to this structure, we can ensure that no single government or block of like-minded countries can intrude on people's privacy and freedom of expression."
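Telegram hasn't published how this key splitting actually works, but the general idea — several independently stored shares that must all be combined before anything decrypts — can be sketched with simple XOR secret sharing. This is an illustrative toy, not Telegram's implementation:

```python
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; each share alone is uniformly random."""
    share1 = os.urandom(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(32)      # hypothetical 256-bit data key
s1, s2 = split_key(key)   # store s1 and s2 in different jurisdictions
assert recombine(s1, s2) == key
```

Note that this only protects against an attacker who compromises one data center; the operator, who can fetch both shares at will, can always decrypt — which is exactly the criticism being made here.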
As of October 2016, the project has received an unknown amount of donations from individual sponsors via the Freedom of the Press Foundation. Open Whisper Systems has received grants from the Knight Foundation, the Shuttleworth Foundation, and the Open Technology Fund, a U.S. government funded program that has also supported other privacy projects like the anonymity software Tor and the encrypted instant messaging app Cryptocat.
With his reply, Durov is implicitly claiming that Signal must be compromised because it received some funding that can be traced back to the US government. This is the same kind of FUD that has often been used to smear Tor in the past (and indeed, the piece he cites describes Tor as "a federal weapons contractor" and "a foreign policy weapon, a soft power cyber weapon").
Tor's main purpose is to allow agents of US intelligence agencies to communicate back with the home base.
"Giving free access to the Internet being a powerful tool for undermining authoritarian regimes" makes it harder for an adversary to focus solely on intelligence agency agent traffic.
I'm not saying Tor is backdoored, though it is heavily monitored.
When enjoying the benefits of Tor, it's wise to also remember Tor's main purpose.
I assume that "founded" is a typo, but to be clear, there is no evidence that the US government either encouraged the creation of OWS or contributed any actual work/code to OWS.
The conspiracy-theory part here is that taking funding from the US government (any part of the US government, whether the NSA or Radio Free Asia) means that your protocols or your implementations have backdoors, even if those protocols and the source code implementing them are public and well-reviewed.
Even the author of CryptoCat now thinks you should not use CryptoCat, but I would not rush to assume ill of the OTF funding it back then.
I bet you meant to type "funded".
Most of the funding for the foundation that funds Wikipedia comes from small donations from users.
Only the Telegram server is closed source (for now); the Telegram client and protocol are open source.
> No, because I never took money from the government. I left Russia and lost a $3bn business there because I defended users' privacy from it.
Meanwhile, he is often spotted in Saint Petersburg, Russia. Moreover, Telegram's developers themselves sit next door to VKontakte's (the company that was allegedly taken from him), so he's hardly a dissident.
I believe no Western media tried to look behind his bravado and this saddens me. I'll be happy to help with local sources if someone is interested, though.
I'm not well read on this (eg, what company he lost, how he claims he lost it, etc) -- you seem to be responding in the context of a story, but not a story I know about.
I believe that's the gist of his "official" image. However, there are cracks in that story to which I alluded in the comment.
tl;dr: they tried to use SHA-1 as a MAC. This is something of a crypto 101 mistake. Had they even used HMAC they'd be in much better shape. Worse, even after this was pointed out to them and people started writing papers about potential attacks, they have stood by their shaky design, refusing to update it or even admit mistakes were made in the initial design.
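The difference is small in code but large in security: a naive `H(key || message)` tag built from a Merkle–Damgård hash like SHA-1 is vulnerable to length-extension attacks, while HMAC (RFC 2104) is not, even with SHA-1 underneath. A minimal sketch of the two constructions:

```python
import hashlib
import hmac

key = b"shared-secret-key"
msg = b"attack at dawn"

# Naive MAC: H(key || message). With a Merkle-Damgard hash like SHA-1,
# an attacker who sees this tag can append data and compute a valid tag
# for the extended message without ever knowing the key.
naive_tag = hashlib.sha1(key + msg).hexdigest()

# HMAC: the standard keyed-hash construction, which nests the hash twice
# and is not vulnerable to length extension.
hmac_tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Always compare MACs in constant time to avoid timing side channels.
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
assert hmac.compare_digest(hmac_tag, expected)
```

The point of the criticism is that picking HMAC over a raw hash is covered in any introductory applied-crypto course, which is why "SHA-1 as a MAC" reads as a crypto 101 mistake.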
But even worse than that, end-to-end encryption is off-by-default, and users must opt into it. Why?
"This allows Telegram to be widely adopted in broad circles, not just by activists and dissidents, so that the simple fact of using Telegram does not mark users as targets for heightened surveillance in certain countries. We are convinced that the separation of conversations into Cloud and Secret chats represents the most secure solution currently possible for a massively popular messaging application."
Putting aside the fact that if Telegram's cryptography were properly implemented, an outside observer shouldn't be able to tell whether end-to-end encryption is being used (i.e. Telegram does not provide proper separation of data-at-rest versus data-in-motion), this seems to be the "I don't need encryption because I have nothing to hide" argument - perpetrated by what's allegedly supposed to be a secure messenger.
The real reason Telegram doesn't enable end-to-end encryption is pretty clear: they lack the features to provide a good end-to-end encryption experience. They don't support end-to-end encrypted group chats, and they don't have encrypted backups like Signal and WhatsApp.
Because it is unusable: there is no synchronization between devices, only 1-device-to-1-device chats. If you accidentally close the chat, you have to verify it again, and you can't store trusted key fingerprints.
It is not suitable for mobile devices. Secret chats are like OTR, but with bad crypto. Signal and WhatsApp use the same protocol that OMEMO is based on, originally designed for Signal.
> so that the simple fact of using Telegram does not mark users as targets for heightened surveillance in certain countries
And yet Telegram is associated with terrorism more than, e.g., WhatsApp.
The worst you can say against it, I believe, is that it's a protocol not yet proven secure. I'm not a security guy, mind you; I'm just parroting what I keep seeing. Rightfully so - lack of a deep audit is a very valid reason to worry. Yet being worried and untrusting is vastly different from it being actively exploitable / broken.
Whenever one can consider a person's arguments in isolation from one's opinion of said person, it's wise to do so. This is one of those opportunities.
I enjoy the comments of tptacek and ryanlol
Another day on HN...
"Security researchers call for Guardian to retract false WhatsApp backdoor story" 
The Guardian claims they have offered to let Zeynep Tufekci write a rebuttal; according to Tufekci, they have repeatedly delayed and are not taking the offer they made seriously.
If you have spread this misinformation in other places, you might want to follow up with the people you misled.
> WhatsApp does not give governments a “backdoor” into its systems and would fight any government request to create a backdoor
The problem is that I have to take their word for that, while they have the ability to activate the backdoor at will.
I linked to HN for a reason, so that people could see what other people think concerning the article. Here is the HN version of that opinion https://news.ycombinator.com/item?id=13394900
The misleading part is that people are told that proprietary and centralised messaging services such as WhatsApp can guarantee security - the truth is that they probably can't.
Do you still believe HN users instead of security researchers?
Do you still give security advice based on randos on HN instead of security researchers?
I think Signal doesn't even have the metadata available, but I'm not sure about that.
 (Horrible Forbes Link)
So yeah, Signal doesn't retain metadata. All they can provide is whether you're using Signal or not and when you've started using it.
The Telegram protocol is open and has an open-source reference client. While not as good as having both the server and the client open source, it at least allows independent verification that the protocol is being implemented faithfully.
That said, I think it's pretty clear that WhatsApp does have a way to allow the FBI to snoop on users:
I know it's controversial and many don't believe WhatsApp would actually use it like that, but I don't buy it. Either they already use that "feature" as a legal intercept or they're about to do it at least in the UK, where Snoopers' Charter passed and the law is vague enough to force them to do something like that. It may happen in Australia, too, if the new anti-crypto bill passes.
I still trust Signal for now, but I'm not sure whether or not their server-based solution will be so backdoor-proof against future bills, and I think they should be prioritizing decentralizing Signal somehow.
EDIT: Sigh, Germany, too:
In 2003-2006, we built a financial service that exchanged financial data through various means, including AS2 EDI over HTTP, with big companies and government suppliers such as AAFES (Army and Air Force Exchange Service). Initially we had RSA, PGP, and a custom encryption scheme in there, the latter two for features besides EDI. We got a letter from the FBI asking us to switch to RSA only; they wanted to know about our use of PGP and wanted to see our custom encryption if we continued to use it. Being a small/medium company, we switched to just RSA to avoid any issues. It was an odd day: when I came into the office, they told me I had an FBI letter on my desk, and you can imagine what happens around an office when something like that happens. Very strange day indeed.
Moral of the story, if you create your own crypto or aren't using the ones you are supposed to use, in any capacity, expect some knocking.
If you are writing about it here, I'm assuming it wasn't an NSL (national security letter) and so would you be open to publishing a copy of it publicly? Would be great to get sunlight on that.
Since we were a small/medium agency/company, we just complied - we were only helping small/medium companies sell stuff like ukuleles and hats to AAFES, so it was not a big issue. The key part of the app was the EDI/AS2 integration, which didn't use the PGP or custom crypto, as it had to use certain algorithms (RSA/DSA/TripleDES, FIPS-approved, strong hashing) for Drummond Group interoperability certification. That certification of interoperability and crypto communication was required to trade with Wal-Mart, the government/AAFES, etc.
But: avoid RSA anyways. It's inferior to modern curve crypto.
It wasn't until we had to connect to AAFES that it was a problem. We were small but we were sending a good amount of orders and financial data from Wal-mart, AAFES, and other gov't sources that had early EDI over HTTP/S going. They were also recommending us to vendors on their approved EDI software list and probably wanted to recommend ones that played nicely.
I'd hazard a guess that 'custom encryption' would be a big red flag if the FBI was doing a security audit of who has access to government data.
We were just using it for additional ways to send financial docs - usually invoices/purchase orders and acks/receipts - with PGP, where users traded keys, in cases where the company was too small to warrant full EDI, and to send invoices to/from the system. PGP was being productized at the time, and people were excited about encrypted email for a while, so we added that feature in.
In our app, we switched to Bouncy Castle for digital signatures after removing the custom crypto from the other features.
One also has to wonder if the FBI consider the Telegram team to be essentially undeclared Russian agents, and hence fair game.
 - https://telegram.org/apps
A journalist like Poitras is on all sorts of lists and incessantly harassed. There are secret courts, secret laws and secret processes at play. And beyond this the power of harassment, intimidation, blackmail and bribery. Individuals and even organizations cannot prevail against the array of capabilities.
It's nice to think of democratic theory and rights, but these only exist when not exercised beyond talking points. The moment you start exercising them, you end up on all sorts of lists, marked for harassment, with a target on your back. Dissent is squashed even before it can formulate.
But my first reaction was "Cool, our government really cares, is creative and has the necessary power to get things done."
For those of you who've worked with government, you've seen how insanely difficult the procurement process is - down to needing competitive bids for toilet paper purchases, etc. So the fact that they could get potentially large amounts of bribe money means (a) this goes to high levels in the organization, and (b) they've probably done this before.
I wonder how much they offered?
And I wonder how many other pieces of software have backdoors. I would think the first things they would try to get access to are (a) certificate issuers and (b) VPN software.
Do we know that GoDaddy, Let's Encrypt, OpenVPN, Cisco VPN, Juniper, etc. don't have backdoors?
I wonder what they really care about? Liberty, individual rights, security, or more power? They're people too, so I'm sure they believe in the first three, but the last one is more seductive and drives a lot more of the bad we see governments do; worse, the drive for more power is frequently justified as "we need this power to protect our nation".
> has the necessary power to get things done
This actually frightens me a bit. The US government has the ability (and has displayed the willingness) to absolutely destroy people's lives in their pursuit of "national security". What checks and balances are in place to keep this kind of power in check?
Two things... Essentially, this is the power of people applied. There's a lot of interest in keeping the status quo of power, of narrative, of money. They want to have dirt on everyone and then choose to use it when needed. Whether to apply pressure can be decided later; the dirt will be collected by default. No point in digging the well once you're already thirsty - you need to dig it well before the need arises.
>What checks and balances are in place to keep this kind of power in check?
Either play the game, or be too big to fail and lobby the government for your needs. I think the best one can do is choose one's masters: the USA, China, Russia, or to some extent India or the EU's main players.
The rest, I can't argue with, though it doesn't really make me feel any better.
So the push is always going to be to collect more data (barring leaks), and only people who are aware of what's going on - and thus inside the system - can really resist that pull. Which means the government and politicians are often the most effective supporters of privacy.
Certainly. After all, they chose money over keeping the one tool they already had that was able to warn them about 9/11 before it happened and save all those people who died in 9/11 and other attacks. Negligence is complicity when a government chooses money over its citizens, so in the case of 9/11, I'd say the US government is as guilty as the attackers.
A great documentary about NSA and ThinThread is A Good American, and a good follow-up is The Maze.
Wasn't it published in NSA leaks that Cisco and Juniper do/did have backdoors?
We can trust particular crypto algorithms, but I'd bet money that for all complex, popular software there are not only vulnerabilities found by various agencies, but also vulnerabilities intentionally inserted there.
Looking back at known historical intelligence operations from the Cold War, it'd be ridiculous to assume that no one has tried and succeeded in getting some technician (or manager) to insert backdoors for them. Compared to other intelligence activities, the required effort is so small and the benefit so large that I'd assume multiple nation-states have inserted their own backdoors into various key components of global infrastructure.
Let's Encrypt is a CA, and the tooling is mostly by third parties.
The centralized CA system is imperfect, but it certainly can't have a "backdoor" in it, especially now that Certificate Transparency is heavily enforced.
Government == individuals who work in public positions and do not think as one.
Sure, you can lock up all communication for privacy reasons, and the government can spend all kinds of resources trying to control, prevent, or circumvent encryption - but it's a waste of resources, as it's simply a bandaid.
If I wanted to do something violent or evil, I could simply have regular meetings and use paper communication - the old spy-style stuff. Of course those networks can be infiltrated by governments with the resources, and they can maintain that presence by allowing certain acts within the network to occur rather than stopping every one; it's how the war against Hitler was won once the German encryption was broken - watch the very well-done The Imitation Game (http://www.imdb.com/title/tt2084970/) for a reference.
The only real solution is dealing with the root causes. I heard an analyst on TV (a rare occasion for me) point out, after Trump's Saudi visit and speech, that Trump never suggested the Saudis look into the root causes of why terrorist activity is growing in their countries. Of course a lot of it is historical karma and rage from violent acts against their families, but a lot is because people's basic needs aren't being met, which prevents the higher levels of Maslow's Hierarchy of Needs from being reached and maintained.
There's a solution, and it requires building real community, locally, where you are now - and striving for people to become healthy so they don't develop biases and other coping mechanisms that prevent empathy, understanding, and therefore compassion. Preventing responsible ownership of weapons isn't useful either; not developing and supplying weapons en masse would be beneficial, though most attacks recently have been with vehicles or knives.
Universal Basic Income will also get us closer to a truly free labor market, and it can evolve from there - giving people the time to do what they feel is most important in the moment, without being forced to work in a shitty environment with shitty managers or co-workers; the health improvement and increased productivity alone are worth it.
This is contradicted by a lot of evidence. Terrorists are most commonly middle class members of their society and often well educated. If anything, terrorism is a powerful means of satisfying the higher levels of 'needs', e.g. meaning, purpose, community.
and, "before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar"
> about the same time at the bazaar show in nyc i was contacted by a representative of us-ins and a ukrainian millitary attache at un. both investigating my involvement with openbsd. a few months later i was offered an interview for a position at the fbi office for cyber-warfare in nyc who as well offered to fix my immigration status (or none thereof at the time ;) with a greencard. nonetheless for the lack of credibility from the future employer i refused to talk to them as for me deportation did not sound like an acceptable worst-case scenario.
> before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar. later on my way back during a short stay in chicago also randomly appearing fbi agent. fellow was ordering food and beer for me and just like his navy pal gave me a warning to keep my mouth shut!
He was a foreign national visiting the US who probably got targeted by various agencies after attending some security conferences.
And if such PR herding worked, wouldn't the surveillants be prepared to pay for such efforts to make their job easier?
So, what seems readily apparent is: Telegram takes state money, to offer an insecure option, while dissimulating to the world that it's: a) secure and b) turning down state money all the time.
I know why this perspective isn't discussed in MSM. But I don't get why it's not discussed more here. It seems obvious to me. And personally IMHO, I think that's a good thing. Catch more criminals / terrorists.
I mean, simply use a public/private encryption algorithm that has proven to be highly secure:
- Share your public key openly
- Anyone can send a message to you using your public key to encrypt the message
- You decrypt with your private key on device
Do all the encryption/decryption on device and voilà, secure messaging. (This is basically how HTTPS works.)
Of course this only allows a single device the ability to decrypt the message.
However, if you want to allow multiple devices to share a private key, they can simply send each other their own private keys using the same encrypted protocol.
In addition, for the super paranoid, a master password could be used to salt the private key, so that the password would be required along with the private key to enable decryption. (Which is similar to how password keepers basically work.)
What am I missing?
What are you going to use to actually encrypt messages? You don't want to directly use the public key primitives to do this.
In what mode of operation are you going to use that second, bulk encryption algorithm?
How are you going to authenticate messages?
What will you do to validate the public keys of your peers? When you close the application, will it forget everyone's keys? How do you prevent MITM on first contact?
What happens when your peers change devices, and thus public keys? How do you authenticate those changes? If you get any of this wrong, remote attackers can MITM messages.
How will you handle file transfers (and images and videos and voice, which will probably need yet another cryptosystem)? How will you cryptographically bind those transactions to the (presumably, somehow) authenticated chat session you set up?
What happens when someone's device is compromised? Is every chat they've ever sent also compromised?
What happens if someone is briefly compromised? Is every message they send in the future also necessarily compromised?
How will you handle updating your software when, inevitably, someone finds a vulnerability in it? What happens if you have to upgrade the whole protocol?
None of this is easy. Most of these problems by themselves are hard in their own right, but there's a combinatorics to them as well.
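On the key-validation question specifically, one common answer is out-of-band fingerprint comparison (what Signal calls safety numbers): both parties derive a short code from the conversation's public keys and compare it over some other channel. A toy sketch of the idea — not any real app's actual format:

```python
import hashlib

def fingerprint(pubkey_a: bytes, pubkey_b: bytes) -> str:
    """Derive a short, order-independent comparison code from two public keys."""
    # Sort so both parties compute the same code regardless of argument order.
    material = b"".join(sorted([pubkey_a, pubkey_b]))
    digest = hashlib.sha256(material).digest()
    # Render the first 10 bytes as groups of decimal digits, readable aloud.
    return " ".join(f"{b:03d}" for b in digest[:10])

alice_view = fingerprint(b"alice-public-key", b"bob-public-key")
bob_view = fingerprint(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # both users see the same code to compare
```

Even this simple piece has subtleties — the code must change whenever a key changes, the UI has to surface that change, and users have to actually check — which is the broader point of the list above.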
This is an excellent list of potential issues.
"What are you going to use to actually encrypt messages? You don't want to directly use the public key primitives to do this."
I'm not sure what you mean by this.
Could you explain why there is a need for another encryption protocol beyond a public/private key encryption?
If the protocol is secure against brute-force attack, both the public key and the encrypted messages could be public without creating a vulnerability for the private key.
* In most cases, an asymmetric transform gives you a deceptively small amount of headroom within which to fit your data before losing security.
* Asymmetric transforms are less safe to implement than simple authenticated symmetric ciphers.
* For that matter, cost-effectively authenticating messages will require "symmetric" primitives anyway.
* Modern asymmetric algorithms (like Curve25519) don't "directly" support encryption.
That's just off the top of my head. It's hard to think of a single competent public-key cryptosystem that encrypts directly with the asymmetric transform.
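The standard pattern here is hybrid encryption: use the asymmetric primitive only to establish a random session key, then encrypt the bulk data with a symmetric authenticated cipher. The following is a deliberately simplified, stdlib-only toy showing just the structure — a real design would use X25519 plus AES-GCM or ChaCha20-Poly1305 via a vetted library, and would derive separate encryption and MAC keys:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. Illustrative only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    # Encrypt-then-MAC: authenticate the nonce and ciphertext together.
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# In a real protocol, session_key would come from an asymmetric key
# exchange (e.g. X25519); here it is just random bytes.
session_key = os.urandom(32)
blob = encrypt(session_key, b"the bulk message body, any length")
assert decrypt(session_key, blob) == b"the bulk message body, any length"
```

This sidesteps the headroom problem (the symmetric layer handles messages of any length) and gives you message authentication, which raw public-key encryption alone does not.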
Could you give an example please? How do current protocols deal with that?
LastPass, for instance, provides ways to do that - e.g. by approving from devices you've used recently - but I don't think it is particularly secure.
Spreading the key to multiple devices, so that you have a copy on another device, obviously helps, as does allowing an unencrypted backup of the key, for example on a USB stick you store securely.
The other problem is paying for it. To deliver messages quickly to all devices, even when they are offline, the messages obviously have to be stored server-side, which takes up space and bandwidth.
A federated system - where a user is on a particular server, you deliver messages to that server, and it delivers them to their devices (possibly when they come back online) - makes paying for it easier to manage: you can get other people to host it, or people can host it themselves. It also removes the single central point of failure.
Yes, payment is a separate issue. It would be assumed that there is value in having this system available to the users that would be outside their messaging needs.
Who do you trust to know the metadata about how encrypted messages are flowing?
Who do you trust to get the crypto implementation details right?
How will you support use of multiple devices? People generally expect seamless switching between phone and laptop these days.
In fact, encrypted messages could be stored publicly anywhere, and only the intended recipient could read them.
The flow of messages is not encrypted, the system only encrypts message contents. (However, there are options to make it very difficult to trace, but that is a different issue.)
Trust is up to the user to decide and is always necessary.
Multiple devices is easy enough: a device can encrypt its own private key and send it to another device (using the target device's public key). The target device would then have 2 private keys for decryption.
Option 2: Could be true because seriously, who trusts the FBI/NSA not to violate our privacy anymore?
Really not sure what to believe about this one.
McCarthy would be proud.
Also I have no problem with communism, I have problems with corruption and government over-reach/attempts to influence populations.
I'm not new to this. I've worked with people who flood social networks with bullshit to sway public opinion.
Always be skeptical, of both sides. And at this point I don't trust either the NSA or the Russian equivalents.
I used to wonder whether some success of social media companies couldn't be explained by secret payments for backdoor access. You could be operating out of Europe or Africa and still get offered money, and other pressure carefully applied.
You might think you'd hold true to your plan of privacy-for-all, but if they offer $x00m or more?
Especially considering that competitors like Signal are US-based. Signal is owned by Twitter, which is by no means a small player, so it isn't likely to fly under anyone's radar.
But it's definitely a stretch to believe that they've accepted them, or that it's worked. The protocol and the source code are public and well-reviewed (Telegram has a much less well-reviewed algorithm and does code drops on GitHub); it'd be a challenge to successfully fit a back door in Signal.
The thread that @durov was replying to was portraying being hired away from a public crypto project as a "bribe". Certainly that's a thing you could successfully do, but it also doesn't weaken Signal's encryption as long as there are still people working on it.
No. OWS != WS. Open Whisper Systems is not Whisper Systems, the company Twitter acquired.
Durov seems very keen on supporting the myth of his "dissent" (to the point of outright lying) and very shy about the actual location of Telegram's developers and servers. Guess why.
 https://lenta.ru/news/2017/03/20/durov/ (in Russian)
 https://tjournal.ru/p/durov-back-in-ussr (in Russian)
But you are right - article 2 talks about him being in Russia some time in early 2014. It also states that one of the reasons to be there was to sell a datacenter. He left shortly after that, commenting that he had no intention of returning to Russia, arguing that the country is incompatible with doing internet business.
> "I left Russia and do not plan to return. Unfortunately, at the moment the country is incompatible with doing business on the internet." (translated from Russian)
As for the office being in the same building as VK's - that is definitely weird, even though I know they also have offices in Berlin and London. The same goes for the fact that they have multiple datacenters around the world for speed and security; there will probably also be a DC in Russia. I don't feel like my data is unsafe because of that.
And where did he lie about the location of developers or servers? I'd be very interested in seeing that. With that you could convince me that something fishy is going on.
Russophobia seems to be the only acceptable form of xenophobia these days. And a wildly popular one, at that.
There are also people who want to eliminate any generalization based on nationality/religion/etc., which is just as flawed. The rational approach lies somewhere in between.
With this comment, the idea is that crypto software built in Russia could easily be compromised by the rather un-free Russian government, which seems pretty reasonable. Of course, the fact that this fellow no longer lives in Russia scuttles it, but that's just not being fully informed, not "xenophobia."
That's a very generous assessment.
If accusations of "speaking to a Russian" required that the conversation actually involved something malicious or illegal before it was called improper, I'd be in much more agreement.
The fact is, besides Michael Flynn (who was immediately fired once people with real power found out), nothing has come out that was bad on the administration's part. Even the accusations against Kushner seem to be the handiwork of Flynn, who was at the only meeting Kushner had with a Russian official.
Even the level of manipulation of the election results was seemingly marginal. If leaking the Clinton/Podesta/DNC emails was the limit of the interference (assuming they even leaked the Podesta emails to Wikileaks), despite most Americans on the left falsely believing it involved manipulating actual votes, then I really don't think it was as bad as people seem to think.
At most, the worst that can be said is that the Trump/RNC emails weren't also leaked, but everyone knows Trump doesn't use email, so the damage would have been limited to the RNC, which Trump attacked on multiple occasions.
That would be a preferred scenario to Clinton/DNC emails never being leaked IMO.
So overall I don't see the total effect of Russian manipulation being a deciding factor, especially considering he won because of poorly educated industrial swing states, where the latest Wikileaks Podesta email isn't as big news as a Trump rally getting people excited over populist messaging.
Lying under oath to Congress is pretty bad, and when you lie about never meeting Russian officials, and then it turns out you met Russian officials, it's even worse.
Most of your comment is downplaying the consequences of Russian interference, which doesn't make any sense to me. I don't care if Russian interference resulted in +10% of the vote for Hillary, a foreign power screwing with our democracy is capital-B Bad, and if there was in fact coordination between the Russian government and the Trump campaign, then it's capital-T Treason.
We also have reason to connect "American company" with "NSA spy"... and it's probably not really true for all US companies, is it? :)
It's kind of ironic that Telegram is being attacked for being Russian by people from the US, of all places.
I also avoid, as best I can, giving any data to US companies or buying privacy-sensitive products from them.
> Don't suppose this would be one of Putin's patriotic citizen artists spreading fake news do you?
You might have missed it, but Pavel lost the business he built to one of Putin's oligarch friends.
And messages that are not E2E encrypted are unsafe by default.
A few years ago, the Kremlin tried getting its hands on Vkontakte (to be able to more easily monitor and censor it), but Durov kept rebuffing them. Eventually, they intimidated him into selling his majority stake to one of their pet oligarchs. As you can imagine, Durov is pretty bitter about his company being taken away from him (and generally bitter about security services trying to mess with people's rights). After cashing out, he moved to the EU and became a consistent critic of the Russian government.
I think we can all agree that if some totally below-the-radar crypto anarchist who happens to have a few million dollars from bitcoins figured out that they actually have enough access via the dark web to bribe a few Russian generals and long story short detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way. This will seem like a made-up example to you but I purposefully don't want to confuse the issue with practical examples. We can all agree that at some point this should be stopped.
A reasonable time to stop it might be if intelligence agencies get a literal screenshot from a darkweb chatroom (from a concerned participant, where the participant thinks they're really going too far) where this is being planned in exacting detail but more information is needed to be precise. (For example, suppose the source of the nuclear bomb were not Russia but not enough information was given to identify it. There are actually quite a few nuclear states and many of them are quite corrupt. A short list includes India, North Korea, Pakistan.)
I would think that this kind of actionable, urgent intelligence should unlock whatever privacy safeguards are in place, but if there is a correct "technical" solution (if the cryptography works 'correctly' and is not broken, in an academic sense), then there is no technical possibility of unlocking anything. If Tor, cryptocurrencies, and encryption "work" (in a binary yes-it-works or no-it's-broken sense), then after receiving such a screenshot there is no technical means of taking any further step.
Here I'm going to be philosophical for a second. The future of technology is nearly infinite human power. You can already in the next few seconds initiate a crypto currency transfer to anyone anywhere in the world, who can receive it without any banking infrastructure or oversight.
The arc of technology has been personal human enablement. When individuals become nearly God-like and all-powerful, it is dangerous to be in a position where, like the Muslims reporting the madman banned from his U.K. mosque for radical insanity, the status quo is that if you report your friend to the authorities saying, "My online friend, God-like in his powers, is planning to murder a million people just for shits and giggles, and he's kind of insane. Unfortunately, I don't know where he is or what he's doing, but I'm pretty concerned. He has a lot of money from a few ponzi schemes he ran. It's pretty credible for the following specific reasons (screenshots, quotes, etc)." And the only response from the authorities is, "Thanks for all this. We don't know where he is either, in the grand scheme of things a million deaths isn't that much and if it happens we will look at preventing another such case."
That's a pretty silly response, isn't it? That the only possible response is, sorry, nothing can be done.
Okay, now I've laid out why there should probably be some infrastructure on the back-end.
What I don't like is that this translates to humans literally reading people's private correspondence, web searches, etc. It's not very good.
What is a good middle ground?
Can't the NSA make things that run locally, so that no human is reading your correspondence or web traffic, but as you start researching nuclear weapons and making plans on how to murder a million people, and start making those transactions, all this starts adding up and, to quote the Constitution, its tools can receive instructions "particularly describing the place to be searched, and things to be seized", so that after such a report, its perpetrator can be found, or at least enough information can be collected to stop it if it is actually taking place?
I think that all of us here could be okay with being stopped at some point between purchasing a hundred million dollars in anonymous currency, and detonating a nuclear bomb. It's sensible. That can be part of the social contract.
It's difficult. Nobody wants to live with a judge, jury, and executioner in their home looking at everything they are doing in case they break some law.
I am glad that I personally don't have to answer these questions. But we can all agree on the need for privacy (no human looks at what you're doing), and also on the reasonableness, as each individual online progresses toward infinite personal power, for protecting the rest of society from credible and immediate, specific threats.
I agree with cryptographers who think of cryptography as a tool that is either working or broken. (If it has a back door, it's 'broken').
Perhaps if tools included a certain portion that runs locally they could increase the extent to which the tools are not actually 'broken' (i.e. they are actually working, and actually not backdoored), while also increasing the safety every single person has from other individuals being able to plan or pay for their specific death anonymously, and with impunity.
I realize that my suggestions here are not specific enough to be actionable, they are not clear recommendations. But I don't even see these possibilities being discussed (at least publicly), so I wanted to at least move the conversation a bit in this direction.
I'm getting downvoted pretty heavily. Let me ask point-blank: are you okay with someone being able to spend two weeks on the dark web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you being an innocent victim when they detonate the results, or would you want that person to be stopped at some point after they started? The future of information is that it is ubiquitous and easy to access.
[I edited the paragraph above from first person to third person.]
Actually secure communications would mean that it is technically impossible to see whether someone has started communicating with ISIS members who have overseen and helped people blow themselves up. I am not saying communication should be weak and insecure, but should I really, practically, be able to start doing that if I want to?
This is not some kind of false example, either.
Also, for downvoters: I think it is easier for you to agree with the other half of my statement, that nobody should be looking at our web traffic and correspondence, and that it should be actually secure, and also actually private.
Yes, of course. Just as I am okay with people being able to spend ten minutes to take a knife from the kitchen and kill me. People are able to do all kinds of bad stuff, doesn't mean they actually do. The only alternative to a world where people are able to do bad stuff is totalitarianism, which is itself bad stuff (if only because I don't get to say what is considered good and what is considered bad), so not actually a solution either.
Also, what you seem to want is a logical impossibility. Either your communication can be read by third parties or it cannot. You cannot build crypto that leaks only nuclear bomb plans but keeps your medical information uncrackable.
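The all-or-nothing point can be seen even in a toy cipher. Below is a minimal sketch (NOT real cryptography, just a one-time-pad-style XOR demo with made-up message names) showing that decryption is purely mechanical: a key recovers whatever was encrypted with it, with no way for the scheme to inspect content and release only the "bad" messages.

```python
# Toy illustration (NOT real cryptography): a one-time-pad XOR cipher
# with a fresh random pad per message. Whoever holds a pad reads that
# message in full; without it, nothing is readable. The math has no
# notion of "leak only the nuclear-bomb plans".
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """Encrypts and decrypts alike: XOR is its own inverse."""
    return bytes(d ^ p for d, p in zip(data, pad))

medical = b"my private medical record"   # illustrative plaintexts
plot    = b"plans for something awful"

# One fresh pad per message, each as long as its plaintext.
pad1, pad2 = os.urandom(len(medical)), os.urandom(len(plot))

ct1 = xor_bytes(medical, pad1)
ct2 = xor_bytes(plot, pad2)

# The exact same mechanical operation recovers both plaintexts; the
# cipher cannot refuse to decrypt the "bad" one while allowing the other.
assert xor_bytes(ct1, pad1) == medical
assert xor_bytes(ct2, pad2) == plot
```

Any scheme that could selectively expose "bad" content would have to make that decision somewhere outside the math, i.e. a party holding the keys, which is exactly the back door the parent comment rules out.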
And you might be interested in some of Cory Doctorow's talks (or blog articles or maybe even books) on the general topic of "the war against general-purpose computing", like maybe one of these:
> I think we can all agree that if some totally below-the-radar crypto anarchist who happens to have a few million dollars from bitcoins figured out that they actually have enough access via the dark web to bribe a few Russian generals and long story short detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way.
Seriously, listen to yourself. Chop up that sentence, analyze it, and hear how silly you sound.
Why didn't you throw Jason Bourne or James Bond in there while you were at it?
I agree with this, and nuclear disarmament seems to be the best way to stop it.
Any approach involving changes to electronic communications seems unlikely to be effective: people have been trying to bribe Russian generals for centuries, well before the internet.
> Let me ask point-blank: are you okay with me being able to spend two weeks on the dark-web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you, being an innocent victim of my exploding the results, or would you want me to be stopped at some point after I started doing that?
How is this different from you spending one day reading the 1971 Anarchist Cookbook? In almost half a century since it was released, we don't seem to have had an epidemic of homemade bombs, so I don't have an evidence-based reason to object to people being able to read that book. Is the dark web different?
Also, I live in America. You could just literally go buy a gun at Wal-Mart, and you have the legal, constitutional right to do everything you do up to the second where you point it (intentionally or not) at me or one of my loved ones and fire: you cannot be stopped. Shouldn't I be worried about that instead?
Technology is accelerating to the point where the destructive power that was formerly available only to state actors with proper command & control systems is now available to small states, groups, and even individuals -- chemical, bioweapons, delivery by drone, etc. It is now possible to mail-order custom gene sequences for garage bioengineering (yes, they do try to filter the requests against homebrew bioweapons, but the operative word is 'try'). Even computing power -- I'd be surprised if a random dozen people on this forum, properly motivated and funded, could not take down the US power grid within a year.
This scale of mass destruction in the hands of individuals is a far greater scale and scope of problem than the ability of any nutjob to go to WalMart and buy a hunting rifle to point at you, me, or a Congressman.
It is the kind of real problem that keeps serious security pros up at night. And there are many of these scenarios becoming more real all the time, even if logicallee's nuke example seems too fictitious for you.
The real question he's posing is whether it's feasible to build an automated system that's sufficiently private and intelligent that it could scan the comms without violating privacy while alerting only on genuine threats.
I think it's an interesting idea, but even if it were implementable, it would fall to the "who guards the guards?" problem. What is to prevent the people who build, maintain, and operate the watch-system from abusing it? Nothing but the same level of ethical training we have now, so this simply adds one level of indirection.
Again, there have been conspiracies (and armies) in human history for centuries, and most of them didn't have realtime messages in people's pockets. They had letters carried on horseback, and it worked just fine.
People use encrypted chat apps over the internet because it happens to be easy enough and reliably secure enough. If it weren't, there's no inherent reason to keep using the internet for this. There's enough other ways to communicate.