I don't trust Signal (drewdevault.com)
523 points by Bl4ckb0ne 71 days ago | 455 comments



Drew DeVault doesn't trust Signal because its Android incarnation uses the Google Play Store --- the app market virtually all of its real users use --- and not F-Droid. DeVault would also like it if Signal would interoperate with other chat programs.

Instead, DeVault would prefer that you use Matrix, a system for which end-to-end encryption is (according to its own website) "in late beta", offered on a select subset of clients, and "not enabled by default"†.

This argument is clownish and we should be embarrassed it's on the front page.

There are people in the world that want to sysadmin their phones. It's a life choice they are free to make and I don't hold it against them. But the vast, overwhelming majority of users do not want to make the app market on their phone work more like Debian and less like the Play Store. Signal, to put it bluntly, does not care about the desires of the phone sysadmins. Even if they caved to the sysadmins, the application would, for virtually all its users, be no more secure. This bothers DeVault a lot, enough that he's constructed an entire psychoanalysis of Moxie Marlinspike to explain to himself how it could possibly happen that someone else on the Internet doesn't agree with him.

Also, just as a note to DeVault: the point of end-to-end encryption is that you don't have to trust Signal's server. All it does is arrange for the delivery of messages, which are secured client-to-client. Compare Signal's server to Wire's, which --- last I checked --- retains a record of every pair of users who have communicated in the past.

When this was pointed out downthread, DeVault responded: "[o]ther alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring". Telegram is a particularly funny reference to make, because not only is E2E not the default there, but --- last I checked --- it can't even do E2E group chat. Telegram's owners are adamant that TLS is adequate for group secure chat.


I feel like you didn't actually read the article or my comments in this thread.

>Drew DeVault doesn't trust Signal because its Android incarnation uses the Google Play Store --- the app market virtually all of its real users use --- and not F-Droid

It should use both.

>the point of end-to-end encryption is that you don't have to trust Signal's server. All it does is arrange for the delivery of messages, which are secured client-to-client. Compare Signal's server to Wire's, which --- last I checked --- retains a record of every pair of users who have communicated in the past.

My point is that Signal could just as easily keep a record of every pair of users who have communicated. We can't be sure because we can't run our own servers. I spoke about this in detail in the article.

>† When this was pointed out downthread, DeVault responded: "[o]ther alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring". Telegram is a particularly funny reference to make, because not only is E2E not the default there, but --- last I checked --- it can't even do E2E group chat. Telegram's owners are adamant that TLS is adequate for group secure chat.

Thanks for omitting all of the context which clarified that I hadn't researched them in depth and wasn't explicitly endorsing any of them, and the comment where I clarified that E2E encryption is enabled by default on Matrix.


I read your article, carefully, twice, once this morning (I briefly tweeted about it but didn't feel like I could do it justice and deleted the tweet) and again before writing this.

I've read all of your comments in this thread to date and, as you can see, replied to some of them.

I feel like I have fairly summarized your arguments.

"It should use both", you say. Signal disagrees. That makes Signal evil, according to your argument. "That's not how the world works" is my rebuttal.

Signal could easily keep a record of every pair of users. So can every other mainstream chat application --- and several of them do. Signal doesn't. My reply on the subthread about this issue explains what Signal does differently here, and it's not "publish the source code of the server".

People can simply read your comment on the thread --- I made clear where the quote came from --- to see exactly what you said about Wire and Telegram and Tox and Ring. I'm satisfied that I've represented your argument well.


>"It should use both", you say. Signal disagrees. That makes Signal evil, according to your argument.

You're oversimplifying this. For the full rebuttal, refer to the article.

>Signal doesn't.

You cannot know this. We don't need to have this conversation in two places; I'll just link it for others who want to follow along:

https://news.ycombinator.com/item?id=17726574

>I'm satisfied that I've represented your argument well.

I don't think so.

>People can simply read your comment on the thread

Fair enough: https://news.ycombinator.com/item?id=17724300

Full disclosure: I added the text in parentheses and the second paragraph of this comment about an hour after it was initially posted.


I don't think you understand that I can, in fact, just observe that Signal disagrees with you, without making a point-by-point rebuttal of your argument. Similarly, you don't indicate anywhere that you understand that Moxie can do the same without acting in bad faith, which is something you accuse him of doing.

You don't get to demand from strangers a debate on terms of your choosing.


[flagged]


We need you to present your own arguments civilly and substantively, according to the guidelines. Perhaps we could also please request “calmly”.

https://news.ycombinator.com/newsguidelines.html


I basically agree here. Some people, like DeVault, don't seem to think that there is any other threat model than "protect sysadmins from nation-states like the US or big corporations or Mossad".

I find such viewpoints rather disappointing because I myself, as a sysadmin, don't hold them. My threat model is "someone steals my phone" and "someone (with <$1M in funding) tries to hack me". I don't particularly care that I don't know if the sand that was used to make silicon for my phone was properly sourced and audited for backdoors or plastic shovels.

I want me and my family to be reasonably secure against the background noise of the internet.

And of course not suck the battery dry like some thirsty vampire who was offered a bag of O-negative.

For this task, Signal is fully sufficient (until another messenger does it better or Matrix fixes the long list of problems I have with it).


Additionally, I'm pretty sure it's trivial to verify the APKs that Google Play serves are identical to the ones the devs published.


That's not the interesting question. How easy is it to verify that the APKs are built from the published source code, without any added funny business?

The F-Droid devs put a lot of work into reproducible builds. Not all software complies, but for a project with an interest in information security there's no excuse not to.

That's the use case of F-Droid, and comparing it to self publishing APKs without even as much as a GPG signature is so beside the point it borders on deceptive.



That blog post is deceptive. Their instructions only reproduce the Java part, which is pretty easy to do. But Signal requires libraries written in C (aka "native code"), and they do not have that building reproducibly. The only Android messenger really doing reproducible builds is Briar.
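
For anyone curious what such a check boils down to, here is a minimal sketch (assumptions: you already have the Play-delivered APK and a locally rebuilt one on disk, and skipping META-INF/ is enough to ignore the signature; this mirrors the general idea, not Signal's exact tooling):

  import java.security.MessageDigest
  import java.util.zip.ZipFile

  // Compare two APKs entry by entry, skipping META-INF/ (the signing block always
  // differs). With a fully reproducible build nothing else should differ; per the
  // comment above, the bundled native libraries (lib/*.so) are what won't match.
  fun sha256(bytes: ByteArray): String =
      MessageDigest.getInstance("SHA-256").digest(bytes).joinToString("") { "%02x".format(it) }

  fun entryDigests(path: String): Map<String, String> = ZipFile(path).use { zip ->
      val digests = mutableMapOf<String, String>()
      for (entry in zip.entries()) {
          if (entry.isDirectory || entry.name.startsWith("META-INF/")) continue
          digests[entry.name] = sha256(zip.getInputStream(entry).readBytes())
      }
      digests
  }

  fun main(args: Array<String>) {
      val fromPlay = entryDigests(args[0])   // APK pulled from the device
      val rebuilt = entryDigests(args[1])    // APK built from the published source
      val diffs = (fromPlay.keys + rebuilt.keys).filter { fromPlay[it] != rebuilt[it] }
      if (diffs.isEmpty()) println("contents match (signature excluded)")
      else diffs.forEach { println("differs: $it") }
  }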


There is nothing wrong with F-Droid. The problem isn’t that F-Droid is toxic. People can disagree without either side being at fault... is a point I am at pains to make in this thread.


Tell that to the VLC developers.


Not interested. I'm not litigating F-Droid and don't need to. F-Droid advocates, and some F-Droid critics, disagree: if F-Droid is implicated in an argument, we must fully adjudicate all its pros and cons. No, that's not how the world works. I'm sufficiently well informed about F-Droid to know --- and I mean this in a benign sense, the same way I feel about OCaml or slab allocator design --- that I don't care.


What kind of monster doesn't care about slab allocator design?


That one, presumably.


But that's not the argument the author makes. He is worried about the apps getting compromised at the platform level.


That's a security concern he feels he can address for himself if Signal is made available to him on F-Droid. But for the overwhelming majority of Signal users, there isn't even in theory a security benefit, because they're exposed to their platform vendor no matter what Signal does.

Signal has decided --- sensibly, I think! --- to focus on the needs of the "normie" users. DeVault disagrees with that decision. He is welcome to do so, but it was Signal's decision to make, not his.


He is indeed welcome to do so. It is perfectly rational for him to choose some other software.

It's far from something that warrants a character assassination. Specifically, it's not something "clownish" that we should be "ashamed" to have on the front page. We get the community we deserve.


Please read what I wrote more carefully. Him wanting to use different software isn't clownish. I respect the prerogative of the phone sysadmins.

Him saying that Moxie Marlinspike is untrustworthy because of a disagreement, and then urging people to use Matrix --- that's a clownish argument. And it is the bulk of his argument, paragraph by paragraph: all the reasons why the only rational explanation for someone disagreeing with Drew DeVault is that they are sneakily trying to screw people over.


Literally nowhere in the article does he say that. The article presents two main arguments for why the author chooses not to use Signal (and why, by extension, other people with the same interests should do the same): that it requires Google services with root privileges, and that it doesn't federate or interoperate. There are no personal attacks on Signal's author; the author presents his reasons in a fairly objective light, so there should be no reason to chastise him.


[flagged]


Matrix? E2E isn't even enabled by default anywhere, the server uses a ridiculous amount of resources, alternative servers support half of the protocol, I cannot easily discover people I know from my contact lists via mail/phone/address/ICQ number/etc., 70% of the bridges I would need hover between "barely functional" and "the compiler isn't complaining", guilds are not easily possible as in other chat applications, and their clients draw more battery than any other chat program I tried, including Skype, which I think qualifies for an achievement award of some kind.

I think I'll stick with services that offer E2E and/or sensible feature-sets.

Once Matrix fixes all their problems I'll gladly install a homeserver and run it open for other users to sign up for; until then I'm on Signal.


Some version of this post seems to circulate every few months or so. This one is more direct in its accusations of Moxie acting in bad faith. I think this is disingenuous. Moxie has been very clear[0] about the tradeoffs that Signal has made and the reasons for them. It's fine to be dissatisfied with those choices. It's another thing entirely to accuse Moxie of dissimulating.

Personally, I'd like to see Signal replace WhatsApp. That's why I support the path Signal took, and why I also have a distaste for the author's snarky dismissals of features like GIF search.

[0] https://signal.org/blog/the-ecosystem-is-moving/


But in the linked post he does not explain why he does not maintain an F-Droid repository for people who do not trust Google, nor why the original Signal client does not connect to Signal forks, even if they keep everything else the same. Security reasons? Ordinary smartphones are full of rootkits anyway, so someone using a forked Signal version is probably better off anyway, as he knows a bit more about what he is doing.

So the base argument holds in my opinion: Moxie's main focus is Moxie being in control, not making Signal the best and most secure it could possibly be.

So I also use Signal, but as soon as Matrix gets stable, I am gone.


> Moxie forbids you from distributing branded builds of the Signal app ...

Having multiple branded builds to choose from would be a terrible thing and would easily allow fake apps to gain traction.

> ... and if you rebrand he forbids you from using the official Open Whisper servers.

This seems pretty fair to me. Not only could you abuse their resources, it would also greatly hinder their ability to make changes and respond to protocol-level security threats. They aren't in the API business; controlling their ecosystem allows them to make forward progress without concern for 3rd parties that they have no control over. And still there is the issue of 3rd parties abusing their server resources.


The F-Droid argument is the strongest and most compelling of all. I don't trust Google, I don't trust Play.

The main point is, Moxie could take the wind out of the sails of literally every argument on this page by publishing Signal on F-Droid, but he just won't.

This alone is enough for me to lose trust in Signal.


They've already made the APK available directly on their website for over a year now.[0][1] It works just fine (albeit a little heavy on battery usage) without the Google Play Store or Google Play Services. What more do you really want?

[0]https://signal.org/android/apk/ [1]https://whispersystems.discoursehosting.net/t/how-to-get-sig...


>What more do you really want?

For it to be on F-Droid. I think that much was clear.


You can get Signal from the developers, the Play Store or the Apple App Store. Nowhere else is genuine; everything else is possibly backdoored. Why dilute that message for the 0.01% of users who would use F-Droid?


The article is clear: downloading from the Signal site is not secure.


Based upon what? The download is served via HTTPS, and offers a checksum also secured via HTTPS. Are we entertaining security models in which PKC is considered "not secure"?

Or are we just going by the author's ignorant or disingenuous (depending on how you interpret his words) statements?


Checksummed but not signed.
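
For what it's worth, here is roughly what that checksum verification amounts to (a minimal sketch; the digest and file path are whatever you copied from the download page, so this only proves the file matches what that same HTTPS endpoint advertises, not that the endpoint itself is honest):

  import java.io.File
  import java.security.MessageDigest

  // Check a downloaded APK against the SHA-256 fingerprint shown on the download page.
  // Both the file and the fingerprint travel over the same HTTPS channel, so this
  // catches a corrupted or substituted download in transit; it does not protect
  // against a compromised server, which is what a detached signature verified
  // against a key obtained out of band would add.
  fun main(args: Array<String>) {
      val publishedDigest = args[0]             // hex digest copied from the download page
      val apkBytes = File(args[1]).readBytes()  // path to the downloaded APK
      val actual = MessageDigest.getInstance("SHA-256")
          .digest(apkBytes)
          .joinToString("") { "%02x".format(it) }
      println(if (actual.equals(publishedDigest, ignoreCase = true)) "checksum matches" else "MISMATCH")
  }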


SSL provides integrity guarantees.


Only a bit of transport-level integrity. But it doesn't turn your average hosting provider into a high-assurance one, or its servers, OSes, software stacks, etc. A well-known problem since the cryptocurrency era.


Which is maybe why Moxie is encouraging people to rely on Google Play Store.

If you are so concerned about state-level actors that the Play Store is untenable to you, Signal and Android on commodity hardware are probably not the solutions you want anyway.


And f-droid somehow is immune to this?


[flagged]


Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."

https://news.ycombinator.com/newsguidelines.html


Maybe Moxie doesn't see it as his problem to address the concerns of non-contributing critics.

Are there any identified, non-state-level actor threats here, or is this just an ideological rant against proprietary software? If state-level actors are your concern, using android means you have already lost.


[flagged]


This kind of innuendo is beneath this thread.


What kind of innuendo? I don't understand.


This is a logical fallacy


Do go on.


It seems pretty odd to me to distrust someone because they aren't using the platform that you'd like them to use. Aren't there other issues with F-Droid? You have to root your device to run it and allow third-party code. Those are all security concerns too.

It was posted elsewhere but here's Moxie's take: https://github.com/signalapp/Signal-Android/issues/127#issue...


You don't need root to run F-Droid, just enable 'Untrusted Applications' in Settings, which I think is what you meant by third party code. The core reason I prefer F-Droid is that apps on it are not bundled with Play Store APIs, and thus you can run them on Android devices that don't have GApps installed.


> You have to root your device to run it

wtf. I have been using F-Droid for many years, and this has not been the case. As far as I know, this has never been the case, as Android has always had functions for third-party app stores. In fact, even today, F-Droid recommends not using root for installs, since then you don't get the screen showing permissions.

> allow third party code

that's called running apps.

tl;dr nice FUD.


'Allow third party code' means code which is not signed. Once you tick that, any unsigned code can run, not only the app you downloaded. That makes exploitation significantly easier. It would be better if Android forced you to explicitly select which code could run, but that's too hard for most users.


Then you "untick" it until the next time you need to install something.

This is what I do on lineageOS. I don't regularly install new apps.

Side rant: This marketer-driven "install an app for everything" is a threat to the open internet and privacy. Usually the only reason is to extract more personal info.

Already, young people barely use a web browser. That appears to be the future. Now get off my lawn or I'll start talking about the war.


You might do it, but thousands wouldn't.

Android could undoubtedly be stronger in this regard, and in permission control, firewall, ad blocking etc, but it's not going to happen.

Apps wouldn't be so bad if they were actually sandboxed properly, but yeah, they suck.

I was interested in Copperhead OS as an alternative, but it seems to have fallen into a greed induced mess.


You just shot your own argument against his point in the foot. Thousands of people don't even make up a percentage point of Android users. Most people on Android use Google Play because it's sufficient against the threats that they need it to be sufficient against. Most people are okay with the risk/reward ratios that come with using commercial software because they then don't have to think about it. Signal provides a nearly turnkey level of protection above and beyond standard messaging in an easy enough format for most people to use.


IDK what you're going on about. What is your argument?

I am arguing Play store is fine, and side loading is bad policy.

I argued that for every person who will take the time to micromanage permissions, thousands wouldn't.

So what are you talking about?


We're about to see what happens with APK side loading at scale... Fortnite is bypassing Play store. I predict massive pwnage.

https://twitter.com/APKMirror/status/1027580291374702592?s=1...


Android explicitly forbids running unsigned APKs, it is not possible to do.


> that's called running apps.

And you accuse me of making things up? "Allow third party code" is not called "running apps"


> You have to root your device to run it, allow third party code.

Most likely one of those, yes. Though on Android 8+ you can grant the allow-installs-from-unknown-sources permission to F-Droid alone.

Also, both Copperhead and Fairphone Open ship with the F-Droid privileged extension by default, allowing you to keep that setting entirely disabled.


> It seems pretty odd to me to distrust someone because they ...

... are using a platform "you" don't trust.

Really? That's not really odd.

At least, it's not odd if that usage and what it entails is the defining part of the persona in question.


What's odd is calling an application "not secure" because it uses the platform's software distribution channel.

Here's a thought. If you are so concerned about the NSA that you think Google's cloud is a problem, why are you running the OS developed by Google?


The open-source base OS is fine, the closed-source services layer and cloud platform are not.


Sounds like you're confident you (or the FOSS community collective) can spot cleverly hidden backdoors.

I'm not, and I find that position naive. For the overwhelming majority of people who are not a cross between Bruce Schneier and Linus Torvalds, a threat model that tries to protect against the NSA and GRU and MSS pretty much requires avoiding anything with a network connection. If you have a smartphone, you should probably just use its default application store.


The application uses google play, which almost every other android application in the world uses by default. And somehow that makes them untrustworthy?


This seems more like "I disagree with Moxie," not "I don't trust Moxie," unless there's some allegation that Moxie is in cahoots with Google.

I can trust people that I think made incorrect technical decisions, because I can see that they made a decision for technical reasons and have different priorities and reasoned soundly.


Nah, people will just accuse him of publishing different binaries, or find something else to be upset about. You can't win against concern trolling by giving in to their demands. They just manufacture new demands, and all you do is waste your own resources. Haters are gonna hate, so you're better off ignoring them and focussing on your own vision.


What I don't understand is why he's not doing that in spite of the fact that he could still keep control over Signal easily. One minor modification of the app and a forced upgrade would do the trick and render the F-Droid copies useless. The competition has been forcing users to upgrade in this way for a couple of years now. So I don't understand what he's afraid of.


Are you going to pay for him to do that?


The Signal Foundation has 50 million dollars.


> Are you going to pay for him to do that?

I donated some money to them a while back. How hard could it be to push the binaries out to a second app store?


I published an app on F-Droid once: it is a very lightweight process. One gives them access to the open-source repo where the code lives, along with build instructions, and they then take care of pulling updates and publishing new versions.


That process can be more complicated than it seems.

I tried to publish my open-source game on F-Droid, but the build process involves building native components with a specific third-party version of the NDK toolchain, as well as shell scripts to move files around, so it never made it to the store.


And who is paying for Signal to be on Google Play?


Google, Facebook, WhatsApp, Microsoft. Basically the companies that pay for Signal end-to-end encryption in their chat apps.

There are, I'm sure, apps that are better, and that's never been Moxie's goal. He's said over and over that he'd rather have encryption for the masses than the perfect messaging app. It seems disingenuous to assume that he's acting in bad faith when he's clearly doing exactly what he said he wanted to do.

If you want to make the perfect, self-hosted chat ecosystem, fire up that Matrix server and invite your non-tech friends to join. I'm sure that will work out incredibly well.

In the meantime, Moxie seems to realize that to accomplish his goal and make communication incrementally more secure for average users, he needs to go where the users are.

It's crazy to me that people still think that secure communication is a technical problem. We've had GPG for the competent for a long time. The hard problems in secure communication are about using the ecosystems that are available to large groups of average users and still being secure.


Pushing an app out to F-Droid is trivial. There is literally no defense for only using Google Play and unsigned binaries on his own website.


I believe that to use the same app, he requires Google Play? Am I missing something? You can export the app yourself from Google Play and it doesn't run unless you have Google Play installed.

Am I missing something?


Does "the same app" refer to Signal? If so, you an run it without having Google Play installed (though it will be more battery-hungry), and download the APK yourself.


Build the code yourself


Google Play is the default.


"Having multiple branded builds to choose from would be a terrible thing and would easily allow fake apps to gain traction"

In this particular case, not likely. People who are into more secure communication do not randomly click on anything. They know what they are doing, or get it installed by people they trust. And if they don't - their fault. Not Signal's.

And Signal can continue to work and introduce breaking changes whenever they want. They simply only support the official build of Signal. Any person using anything different cannot complain if things stop working. (They will anyway, sure.)

And the resource abuse. Can this really be a thing? I don't know in detail how the protocol works, but what can I do with the servers that I can't do with Signal anyway? Sending (encrypted) data from A to B. I can already abuse that today, if I want.


> People who are into more secure communication do not randomly click on anything

I think you overestimate people. I told my wife to install Signal because she needed a password for something and it was way too complicated for her to remember. I know what the Signal app is and could likely avoid fakes - she would not. I think it is often the case that only one party to the conversation is security-minded, while the others just trust that person.

> And the ressouce-abuse. Can this really be a thing?

You know NTP, the protocol for sharing what time it is? That gets abused badly [1]. If you have an open service, there are ways it can be abused. This would without a doubt lead to DDoS-like resource abuse where lazy clients don't cache things properly and just hog server resources. There are ways to limit things like that, but they aren't always simple. Also, like I said before, Signal isn't in the API business.

[1] https://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse


People who do click on the wrong "Signal" probably clicked on many other wrong things, too. Their security is nonexistent anyway, even if they accidentally use the official Signal build.


So because a user clicked on a wrong thing we should completely abandon them and ignore all their security and privacy needs?


Nope, but we should not limit the freedom of other users who know what they are doing because of some who do not.


I heavily disagree. If we want to protect as many people as possible, we will have to limit the freedom of some users to do as they want, because other users will accidentally or intentionally abuse the freedoms given.

Not limiting the freedom of users who know what they are doing, at the expense of others, maximizes the protection and freedom of a small group, not society as a whole.


> People who are into more secure communication do not randomly click on anything

Even if that wasn't wrong, it would be a fatal limitation for a social app which relies on network effects. Even if you were actually super-humanly capable of not making mistakes you'd end up using the apps that everyone else you know is actually on.


" People who are into more secure communication do not randomly click on anything."

That's not the sole market of people who would try Signal, though.


He has previously explained his reasoning: https://github.com/signalapp/Signal-Android/issues/127#issue...


"why he does not maintain a F-Droid repository for people who do not trust google"

Are you paying him to do that? No? Well, there you go. It's more work, for what appears to be very little benefit.


As another person said:

> The Signal Foundation has 50 million dollars.


None of it came from the phone sysadmins. Rather, it came from a benefactor that wanted to work with Moxie and Trevor.


Doesn't matter. F-Droid is more work to support.


I would pay a share, and I'm sure others would too. Unless he has set a price and there were no takers, that argument is fallacious.


I am not willing to support Signal if they are unwilling to federate the system.

Sure, it doesn't allow them the flexibility they'd like to have to move forward, but in a way it won't be their fault if federated servers aren't keeping themselves up to date when there's a major protocol change and they get temporarily split from the pool.


It doesn’t matter whose “fault” it is — the issue is the practical effect that will result. This is exactly his point with the SMTP and GitHub analogy from the “moving ecosystems” post. Why do you think the split would be “temporary”?

As long as “email” is a thing, as in “just send me an email”, and it’s a federated set of randomly updated servers, “email” will never have end-to-end encryption, because the first version of SMTP didn’t have it, and the user will still expect to send messages to a server running that version.

Similarly, if “Signal” is going to be a thing, as in “contact me on Signal”, the entire network effectively has to operate at the level of the least up-to-date server — otherwise it’s not one network, and the product is therefore unreliable. But there’s no way to enforce that all the federated servers update themselves in any amount of time.


Also, it's not like Moxie hasn't discussed this. At length. GP deliberately ignores Moxie's reasoned responses to exactly this issue.


Try to look at it from a user perspective, though. It's not a question of whose fault it is when something doesn't work, it simply doesn't work.

Signal is successful in large part because it provides complex functionality (secure messaging) in a package that "just works". Federation complicates that significantly.


Your "in a way" is totally opposed to what normal users see. They will blame OWS when their Signal client that hasn't been updated in two years won't work.


Completely agree, I have had several people move from various chat apps and texting to Signal largely because of the iMessage like features. Ultimately, I am now able to have more secure discussions with largely non-technical users which is good for everyone.


It’s not surprising. The people most interested in encryption want to trust nobody. That’s the goal of strong encryption, but as Ken Thompson says it’s not possible. Between the US Government and Moxie I’d trust Moxie but I’d rather trust neither.


Agreed, though personally I find any support of animated gifs in the year 2010 and beyond to be counterproductive.


How much non-geek non-privacy-activist socialising have you done via Signal?

Without emoji and animated gifs, I suspect 70% of my Signal contacts wouldn't use it at all. It's hard enough to convince some of my friends to use it in the first place: "Can't I just Facebook message you?"

For me, amongst my group of friends - it seems Moxie is making all the right security/usability tradeoffs.

If you don't trust PlayStore, it seems not much of a jump to say you also shouldn't trust Android.

If you're _rightly_ that concerned (and I'll note that Snowden recommends Signal, so I wonder what it is you're up to that makes you more of a nation-state target than him), I don't have a clue what your options are - I suspect they start with "don't use the internet at all"...


>I wonder what it is you're up to that makes you more of a nation-state target than him

I don't know what it is you're talking about. The entire point of my comment was that I don't like animated gifs. They're distracting, bandwidth intensive, and could be easily replaced by a dozen better image formats.


Sorry - I hadn't intended to aim that accusation at you specifically, but at people who are "more of a nation-state target" than Snowden. Poor sentence construction on my part there...

And while I agree with you about animated gifs, I understand WhisperSystems' reluctance to become the force that pushes non-geek non-privacy-activist users (which they and I are hoping will widely adopt Signal) to find WebP or flif or whatever alternatives to giphy or wherever else they're finding their reaction gifs and topical memes and funny cat-riding-a-roomba animations. That's how a _huge_ percentage of users want to communicate with each other. Moxie is trying to give them a secure way to communicate how they want to, not attempting to force them into new ways of communicating that have boring justifications like "bandwidth saving" or "animation format technical merit" as the only reasons why they can't have vast libraries of funny animations to send their friends... If they can't quickly reply with Ru Paul doing fingersnaps in Signal, they'll go do it on Facebook instead.

For that reason, I'm of the opinion that _not_ supporting animated gifs is significantly more counterproductive, if you're trying to become "the secure messaging mechanism the whole world will use" or if (like me) you'd like more and more personal communication to be exclusively between the participants, and not include advertising networks and data miners and sentiment analysers (and, yeah, law enforcement and government bureaucracy)...


> I'd like to see Signal replace WhatsApp

And suppose one day we realise it was indeed something nefarious - let's assume something of this sort happens in the future - and then we rue that we didn't act when people were saying something was amiss.

There is one line in the article that says it well:

> Truly secure systems don’t require trust.

edit:

I have supported Matrix and Firefox among others (both in code as an Android dev and with modest donations - stopped using Firefox after Pocket). But no, not Signal. I'd wait for federation (if at all).


I would agree with you if only Signal would not ask for so many permissions on my phone.


Can you elaborate? I just set it up on a new phone yesterday and all it asked for on mine was: contacts (makes sense), files (to send pictures, files, etc.), receiving and sending texts (if you want it as your default texting app / to validate your phone number via SMS), and access to the camera and microphone (for calls and in-app photography).

These all seem like reasonable permissions for the features available.


Is sharing your contacts mandatory in Signal? I don't use WhatsApp because I can't without sharing them.


No. That’s the primary reason I use Signal rather than WhatsApp. With Signal, you just need to give it the contacts that you want to chat with via Signal.


no, it isn't.


It would be great if it asked for those permissions when it needed to do something - for example ask for mic permission at the point you want to make your first voice call. I see some apps going that direction and it's refreshing.

edit: Apparently, Signal does this for some things? See comment-replies.


On Android that's version dependent.

Older Android versions only had the idea of the app declaring "I need to be able to use your Camera, read your Contacts, and make $$$ phone calls" and then you pick "No" and don't get the app or you pick "OK". This more or less railroads users into pressing "OK", except for the most security conscious, who go without the app.

A few releases back Google had an unofficial feature that let you switch off features an app had, and it would get some dummy replacement, e.g. if it had Contacts access but you switched that off, it would see no Contacts at all. If it had Camera access, but that was switched off, it would always be told your Camera was busy in another app. Once word about this hidden feature got out, Google disabled it.

Recent releases (certainly on my Nexus 5X, for example, which is a while back) enable an app to ask at runtime. If you said "No" the app gets a second chance to explain itself, and then if you keep saying "No" the feature is just disabled and Android stops prompting you. The app might not work after that of course. Like the disabled older feature, the Settings pages for apps let you undo previous authorizations; again, this may make certain apps malfunction - a map app with no GPS is merely crippled, but a "barcode scanner" with no Camera access is junk.

However, apps built for an older phone don't prompt; the older Android can't handle it, so for them you still have to make the decision at install time.


The permissions system that Android apps use is entirely dependent on which API version you target. Last I understood, if you made a new app today and purposely chose to target an old API version, you could force it to use the "all or nothing", user-hostile permissions query you described.


I believe Google is asking developers to support a recent API version to push updates to the Play Store.


Yeah - as of 1 Aug new apps or 1 Nov for updated apps - you need to "target" API level 26 (Android 8), but you can still set your minSdkVersion to whatever you like (we're still publishing version 16/Android 4.4 updated).

https://developer.android.com/distribute/best-practices/deve...

@mrguyorama - this means you can force the old landgrab, user-hostile permissions onto people running old Android versions, but you cannot force them onto users running Android 8.


It does. Signal supports runtime permissions[1] and it requests them dynamically while you are using the app (e.g. the camera permission prompt appears the first time you try to take a picture).

1: https://developer.android.com/training/permissions/requestin...
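
For reference, the runtime-permission pattern being described looks roughly like this (a minimal sketch using the androidx compat APIs; the activity, request code, and openCamera() are made up for the example):

  import android.Manifest
  import android.content.pm.PackageManager
  import androidx.appcompat.app.AppCompatActivity
  import androidx.core.app.ActivityCompat
  import androidx.core.content.ContextCompat

  class CameraActivity : AppCompatActivity() {
      private val cameraRequestCode = 1

      // Called the first time the user tries to take a picture: the permission is
      // requested at the moment the feature is used, not at install time.
      private fun takePicture() {
          if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                  != PackageManager.PERMISSION_GRANTED) {
              ActivityCompat.requestPermissions(
                  this, arrayOf(Manifest.permission.CAMERA), cameraRequestCode)
              return
          }
          openCamera()
      }

      override fun onRequestPermissionsResult(
          requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
          super.onRequestPermissionsResult(requestCode, permissions, grantResults)
          if (requestCode == cameraRequestCode &&
              grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED) {
              openCamera()
          } // otherwise the camera feature simply stays unavailable
      }

      private fun openCamera() { /* start the camera UI */ }
  }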


Last time I had it installed it scanned my contacts a half dozen times while I slept. I removed it in the AM.


The Contacts permission is completely optional, and contact information is never stored: https://signal.org/blog/private-contact-discovery/


This seems like smoke and mirrors to me:

  Traditionally, in Signal that process has looked like:
  
    The client calculates the truncated SHA256 hash of each phone number in the device’s address book.
    The client transmits those truncated hashes to the service.
    The service does a lookup from a set of hashed registered users.
    The service returns the intersection of registered users.
The phone number space is really not that big.
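
To make the objection concrete: there are only on the order of 10^10 plausible phone numbers, so whoever holds those truncated hashes can enumerate the space and reverse them (a rough sketch; the truncation length and number range here are illustrative):

  import java.security.MessageDigest

  // Truncated SHA-256 of a phone number, as in the old contact-discovery scheme.
  fun truncatedHash(number: String): String =
      MessageDigest.getInstance("SHA-256")
          .digest(number.toByteArray())
          .take(10)                               // truncation length is illustrative
          .joinToString("") { "%02x".format(it) }

  fun main() {
      val observed = truncatedHash("+15551234567")   // a hash the server (or an attacker) holds
      // Brute force: hash candidate numbers until one matches. A full 10-digit space
      // is ~10^10 hashes, which is entirely feasible offline; this demo scans a slice.
      for (n in 5_551_230_000L..5_551_239_999L) {
          if (truncatedHash("+1$n") == observed) {
              println("recovered: +1$n")
              break
          }
      }
  }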


They acknowledge that in literally the next paragraph, and the entire post is about how they improved on that old state of things. How is that "smoke and mirrors"?


Right, my bad. This does look like a sensible solution


Is this solution deployed in production? The source code says "beta".


To be fair, it does that to match you with the contacts who are also on Signal, just like WhatsApp or Telegram does. Though you could also use Telegram with just usernames and without the phonebook permission, you still need to use your actual phone number for signing up.


This is a really poor post. Lots of in-the-weeds long-running-feud grudge holding snark, but no real examination of the issues at hand. And his assertions don't make sense in any case. You can't trust the Google Play store because a malicious actor might have swapped out the trusted roots on you. But then why should we trust F-Droid's signing infrastructure?

Then he gripes that the posted APK has to be manually checksummed to use it. If you are truly paranoid, trusting a checksum you get from the same page you get a binary is as secure as ignoring the checksum altogether. But why would you trust a hidden signature process you can't see any more? How do you know your F-Droid binary was secure?

But worst of all is this pointless assertion: "Truly secure systems don’t require trust."

There are no truly secure systems. Malicious actors could replace your Matrix app with a lookalike clone. Your phone could have a hidden keylogger built into the OS. Or the hardware. The person's phone on the other end of your communication could have been compromised. You could be being monitored by all sorts of undetectable means.

Perfect security is an unattainable goal, but good security requires acknowledging and enabling trust to play a role in the protocols and systems we develop.


The post is literally responding line by line to a post from 5 years ago. Very poorly thought out article.


But we have to trust that Moxie is running the server software he says he is. We have to trust that he isn’t writing down a list of people we’ve talked to, when, and how often. We have to trust not only that Moxie is trustworthy, but given that Open Whisper Systems is based in San Francisco we have to trust that he hasn’t received a national security letter, too (by the way, Signal doesn’t have a warrant canary). Moxie can tell us he doesn’t store these things, but he could. Truly secure systems don’t require trust.

We have at least one data point that says that Signal stores exactly two integers about you, or did when the subpoena was issued: https://www.aclu.org/open-whisper-systems-subpoena-documents

Things can always change, but that's evidence submitted in court under penalty of perjury, which is a fairly strong claim.


I am happy to see I am not the only person in the world that feels like this about Signal.

The interesting fact is that I Ctrl+F'd this page for Wire and saw nothing, even though this comment is about something that made me switch from Signal to Wire: to date, it's the only instant messenger that has FOSS'ed both the server and the clients. (OK, the article also mentions Matrix.)

I admire Wire for a number of reasons, but certainly FOSS'ing all their code is one of the main reasons. (The other is... Haskell! And also Rust.)

And just to point out: Wire not only uses the Signal protocol, they also bug-fixed the library implementation of it. And their web interface is very good!

Oh, yes... And they are not based in the USA.

EDIT: I am not affiliated with Wire, but just a happy customer. :)


> it's the only instant messenger that has FOSS'ed both the server and the clients.

Signal's server code is open source as well: https://github.com/signalapp/Signal-Server

And apparently the client can verify that the server is running that code: https://signal.org/blog/private-contact-discovery/#trust-but...


The server and the contact directory service are two different things. The remote attestation is just for a small part of the code (SGX code has a lot of constraints, including size). Also, I don't know if it has changed, but the new contact discovery service was not in production last time I checked.


Note: my understanding of SGX is that this still requires trusting Intel. Still, far better than nothing.


It also requires there to not be a hardware side-channel exploit on the device running SGX... such as the variants of Spectre that have been trickling out every few weeks all year... so yeah: Moxie's trust of SGX is pretty damning.


As I understood it, the "trust" comes down to: "it has some limits, but it's strictly better than nothing". I wouldn't call that damning.


Signal, in an indirect way, requires 1/6th of the world's population to part with their biometrics. It requires a phone number, and in India, getting a phone number requires Aadhaar, which is a centralized, biometrics-based ID (photo, fingerprints AND iris at the moment).


Are you seriously telling me that everyone in India is using the phone number which matches their biometrics? Lol.

Besides which, there are well established ways to get a Signal number online, like Google Voice or VoIP telephony companies.


I see this as a good thing because it effectively rate-limits the creation of spam bots.


I think there are better solutions for that such as CAPTCHAs, rate limiting, etc. We don't need to resort to giving out biometrics to solve spam bots.


Ya really think phone numbers are hard to get?


Since there is someone above claiming that getting a phone number in some countries "requires" biometric identification, I'd say yes.


As usual with restrictions and regulations, it's easy for the bad guys to obtain and not easy for the good guys.


I was on Wire for a time and even got other people on board. Then the experience suddenly degraded out of nowhere. The desktop and web clients would never finish syncing. Some other stuff I don't remember. I really liked the app though.

I mean the unfortunate reality of chat programs is that there are so many that when I'm having weird problems I'm not gonna spend time opening issues on GitHub and sending logs; I'll just go back to what works. That's even more true for my non-technical friends.


This. User experience is everything.

In my case, not Wire, but Signal. My Signal contact list is exactly two people long. Trying to get people to move from WhatsApp elsewhere is hard, especially if some of them even additionally installed Threema a few years back when Facebook bought WhatsApp.

But at least two people. With one of them I was regularly conversing on Signal, so much that she even preferred it to WhatsApp.

One day she messaged me on WhatsApp again. She said Signal had not set notifications for my messages a few times now, and she's fed up.

I never had that problem. Maybe she misunderstood or misconfigured something. But it doesn't matter. Signal is dead to her.

It's really easy to lose perspective as a developer on how much this usability stuff matters. Sure, we all pay lip service to it, tell each other how bad the UI on some tool is, and fake humility about our missing sense for anything design-related (including UI design).

But we don't really get it.


Wire mobile apps have been reliable through several years of daily use. It does not require a phone number, and Wire is working on interoperability with other E2E messaging services via the IETF MLS protocol.


I'm happy for you. But that doesn't undo what happened to us.


Have you filed a bug report on github?


I already mentioned in my comment above that when a chat app misbehaves I don't spend time doing that and that my friends are even less likely to. It might seem unfair to you but that's just how it is. If it doesn't work I move on. Best of luck to Wire though.


I love Wire. I never have any issues with the desktop client or the mobile app. Even though it's Electron, it seems to be extremely focused on security. Unfortunately, it suffers from the same metadata issues as Signal. If you really need higher-security messaging, p2p is your best bet, but then you may face correlation attacks by ISPs and whatnot... Maybe we all just need to get our ham radio licenses :)


Well, Electron is a security trash fire. I wouldn't use it for anything that needs to be remotely secure. Hopefully it will be replaced by WebAssembly in the near future, though that will likely have problems too.


How is HAM secure? (I don't know anything about HAM)


It's not secure by definition, since encryption is forbidden in ham radio.


An open-source server is certainly a step up from Signal, but since Wire doesn't support federation (either in their ToS or in practice) I'd favour Matrix.


> An open-source server is certainly a step up from Signal

https://github.com/signalapp/Signal-Server.

You are spreading a lot of incorrect or misleading information about Signal in this thread. That makes it difficult to assume that you're arguing in good faith here.


I stand corrected - though, as another reply said, it makes little difference if you can't actually use a forked server in practice.

I don't know what I could say to convince you I'm just an ordinary person concerned about my privacy, but ultimately it doesn't matter: you should definitely consider the possibility that I'm a bad actor and take nothing on faith. Equally, you shouldn't trust that Marlinspike hasn't been compromised either.

A little thought experiment: Put yourself in the NSA's position in 2013. GPG has been out there for years and, despite your best efforts, you can't break it directly when users follow proper security practices. (You have to compromise those users' computers instead, and that's vastly more expensive; every time you use one of your rootkits or exploits you run the risk of burning it, so they're reserved for high-value targets). The world is suddenly a lot more interested in privacy, and while popular culture doesn't grasp the intricacies of key exchange or forward secrecy, there are enough cryptography experts around that any obvious downgrade from GPG will be noticed and picked up on (this is just after the conclusive failure of your Dual_EC_DRBG efforts). What do you do? How do you get the public to accept something easier to compromise?

My answer is: you find a different front to attack GPG from. You talk up different kinds of attackers. You dangle a new, desirable security property that GPG doesn't have, and a theoretically clean construction - and then you compromise the metadata subtly, down in the weeds of usability features, letting you identify the higher-value targets. You get people used to using a closed-source build that auto-updates, and have a canned exploit ready (a compromised PRNG or similar) to use on those targets. And you get people to enter their phone numbers so that you can always track their location and what hardware they're running if you do have to attack their device more directly.

Maybe I'm being paranoid, but it seems distinctly odd that we see such a push behind an app that compromises so many features that were previously thought essential to security, just as the move for encryption is finally gaining momentum.


GPG has an infinitesimally small user base. Many tech savvy users still struggle to use it correctly. Moxie has explicitly stated that his aim is not to build the perfect secure messenger app, but a messenger app that provides the greatest amount of security to the greatest number of users. He has explicitly stated that he has made some design decisions that slightly compromise the ultimate security of Signal, but are necessary to establish a wide user base and avoid traps that could drastically compromise the security model because of user error.

Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.


> GPG has an infinitesimally small user base. Many tech savvy users still struggle to use it correctly. Moxie has explicitly stated that his aim is not to build the perfect secure messenger app, but a messenger app that provides the greatest amount of security to the greatest number of users.

But what's the threat model where Signal makes sense? For a less-than-nation-state attacker, basic TLS as virtually all messengers support is surely adequate. For a nation-state attacker, phone-number-as-ID is a bigger vulnerability than anything Signal helps with, and central servers mean that Signal can simply be blocked outright in any case. If we're talking about, say, Turkey cracking down on protesters, they would probably rather those protesters were using Signal (where arresting one means you get the phone numbers - and therefore locations - of all their friends) than the likes of Facebook or Discord or what-have-you.

> Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.

I'd be fine with that if Marlinspike didn't also trash-talk those more secure tools.


There are nation-state attackers and nation-state attackers. Most oppressive regimes don't have the technological resources to perform complex attacks on relatively tough systems, but they can perform pervasive monitoring on layer 1, use dodgy certificates to undermine TLS and bribe or coerce corporate actors. Most people in functioning democracies aren't particularly worried about becoming the target of the full might of a three-letter agency, but they might be worried about bulk collection via L1 or PRISM intercepts.

Signal is a vast improvement over SMS, plaintext email or any commercial messaging application, but it's no more difficult to use. It's relatively foolproof, in that user error can't fatally undermine the security model in most cases. It's not perfect, but it's easily the most secure chat app that I could confidently persuade non-techies to actually use. A highly secure app that you don't know how to use offers you no security at all.


> they can perform pervasive monitoring on layer 1

Indeed (particularly as telecoms are often state-owned in those regimes), which is what makes phone-number-as-ID such a bad idea.

> use dodgy certificates to undermine TLS

Difficult in these days of certificate transparency and HPKP.

> bribe or coerce corporate actors

If that's your worry surely you want to rely on a big corporation rather than Signal. Look at e.g. Brazil having to block WhatsApp entirely because Facebook wouldn't play ball with them. Facebook has deep pockets that mean they can afford to do that kind of thing.

> Signal is a vast improvement over SMS, plaintext email

Agreed

> or any commercial messaging application,

Not convinced that there's a significant improvement here. Plenty of commercial messaging applications have encryption. If the server is under an attacker's control then you're vulnerable, but I'm not convinced that isn't the case with Signal too.


The threat model is "the backend server has a security flaw and gets exploited, dumping a bunch of information about my chats" or "the backend server is run by a company that wants to use the contents of my messages for analytics and I don't want that" or "a rogue employee with access to databases but not enough access to ship rogue code wants to read my messages".


The server might as well be closed source. We have no guarantees that Moxie is actually running this in production and he refuses to federate with third-party servers.


It's funny you put it that way. You're right: the server might as well be closed source. That is the point of end-to-end encryption.


This is addressed in https://signal.org/blog/private-contact-discovery/ – using Intel SGX, it's possible for the clients to verify that the server is running the code it should be running. I'm not sure whether this is already deployed, but it refutes any claim that Signal isn't serious about your concern.

I don't see how federation is related to this at all. We know you're bummed about it, you don't need to inject it into every subthread.


> This is addressed in https://signal.org/blog/private-contact-discovery/ – using Intel SGX, it's possible for the clients to verify that the server is running the code it should be running

Just the code inside the enclave and that's a very small amount of code, definitely not the entire server.


SGX is not a magic bullet; it's only part of a secure system. It's also come under some fire; check out this paper:

https://www.blackhat.com/docs/us-17/thursday/us-17-Swami-SGX...

SGX alone cannot solve this problem. Even in the idealized case, you can sniff traffic on the router to find out which user IPs are talking to each other and when.


Did you read that work? I did: I was on the review board that made the decision to accept it for Black Hat. Could you map Yogesh's research to something Signal is actually proposing to do and explain in any detail what the actual threat you're talking about is? Thanks.


I could, but I'm probably ill informed. I want to hear your specific rebuttal to this:

>Even in the idealized case, you can sniff traffic on the router to find out which user IPs are talking to each other and when.


That's a string that appears nowhere in Yogesh's research. Are you at this point conceding that the link you cited has nothing to do with this thread?


I concede that this:

>I did: I was on the review board that made the decision to accept it for Black Hat.

Probably makes you more qualified to talk about SGX than me, so yes, I concede that the paper may not be relevant because your understanding of it is probably better than mine.

With that, I asked you a more fundamental question, given that you are knowledgeable about this and may be able to provide an answer.


What I'm asking is whether you had a coherent argument about weaknesses in SGX based on Yogesh's research that would keep it from functioning in Signal's contact discovery scheme, or whether you saw the letters "SGX" in a thread and posted the first link you could find about issues with SGX. Maybe you should find a better link? They're out there.


There is a German guide to privacy I read that has some real issues with Wire, most of which I agree with [0].

I will Google translate it for you (ironic):

> "10/06/2017: Wire.com operational Security

> Wire.com is referred to as a new star among crypto messengers. I briefly looked at the (experimental) Linux version of Wire.com and found some significant security flaws:

> Mannings Bug: Wire.com has good end-to-end encryption based on Axolotl. The chats are all but unencrypted (!) Logged on the hard disk of the computer. The logging can not be switched off.

> The unencrypted storage of encrypted communication is not a bug but an epic FAIL! > Access data: (Account name, password) to Wire.com account are also stored somewhere unencrypted on the hard drive. When starting the person does not have to authenticate in front of the screen but is automatically connected to all accounts.

> This is not a bug but a FAIL! Remote Code Execution: The Linux Wire Client contains a lot of Javascript code. Updates for the Javascript code will be downloaded and executed via HTTPS from the Amazon Cloud. After a superficial examination, the authenticity of the executed code is not additionally cryptographically verified (this is a guess, not checked in the code!). Security therefore depends solely on HTTPS encryption. The HTTPS encryption of the contacted wire servers app.wire.com, prod-assets.wire.com and prod-nginz-https.wire.com does NOT meet the BSI's requirements for secure HTTPS encryption. These servers are Amazon Cloud Server and Cloudfront Server (not your own infrastructure).

> DANE / TLSA or HPKP are NOT used to validate SSL certificates of HTTPS connections. In addition, no CAA record is defined in the DNS, which should actually be mandatory for HTTPS for a month. The security of the transport encryption between client and server thus does not correspond to the feasible state of the art.

> Potent attackers could attack with fake but valid SSL certificates as man-in-the-middle the communication to the wire servers, in combination with the remote code execution possibly also attack the end-to-end encryption (assumption!), and there are enough potent attackers who want to attack the encryption of crypto messengers. The domain wire.com is not signed DNSSEC. Instead of the privacy-friendly OCSP.Stapling, OCSP.Get is used and several CAs are contacted to verify SSL certificates via OCSP. OCSP.Get can be easily tricked, as M. Marlspike demonstrated in 2009. It contacts third party servers that are not under the control of the operators (maps.googleapis.com, images.unsplash.com) to download anything.

> Conclusion: The operational security of Wire.com is not (yet) suitable for security-critical applications after a small, superficial test. In particular, whistleblowers should learn from Manning's example and not use it.

> Disclaimer: this is NOT an audit but a short test of the Linux version."

[0] https://www.privacy-handbuch.de/diskussion.htm


To be fair, they started pinning their certificates about 3 years after claiming their VoIP stuff was encrypted. So there was only A FRIGGING THREE YEAR WINDOW when someone in your root cert db could MITM your VoIP connection. So not something your friendly neighbourhood hacker Joe could do, but definitely a nation state or malicious employer.
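
Pinning, for what it's worth, is conceptually simple. A rough Python sketch of the idea (hostname and fingerprint are made up; real apps pin the key or certificate of their own endpoints):

    import hashlib
    import socket
    import ssl

    # Hypothetical pin: the SHA-256 of the DER-encoded certificate we expect to see.
    PINNED_SHA256 = "put-the-known-good-sha256-fingerprint-here"

    def connect_with_pin(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()          # normal CA validation still runs
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der_cert = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("certificate does not match the pinned fingerprint")
        return sock

    # connect_with_pin("voip.example.com")  # placeholder host, not Wire's real endpoint

With a pin like this in place, a rogue or coerced CA in your root store is no longer enough for a MITM; the attacker also has to get hold of the pinned key.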


> We have at least one data point that says that Signal stores exactly two integers about you

For people wondering what they are:

> "The only information responsive to the subpoena held by OWS is the time of account creation and the date of the last connection to Signal servers for account [redacted]. Consistent with the Electronic Conununications Privacy Act ("ECPA"), 18 U.S.C. § 2703(c)(2), OWS is providing this information in response to the subpoena."

Their response to the subpoena then goes on to object to its overly broad scope, which asked for things that require a court order or a search warrant. They also object to the scope of the nondisclosure order included in the subpoena.


Is it even possible for the Signal servers to keep track of who you talk to, when, and how often? I was under the impression that those two data points they stored were the only thing they _could_ store, because the rest is sent to the servers encrypted.

Edit: Yes, apparently they have a method of doing private contact discovery and, IIUC, even a method for the client to verify that the server is running the source code they expect: https://signal.org/blog/private-contact-discovery/#trust-but...


Signal has to be able to route messages from user a to user b through their centralized server, so at a minimum they are capable of logging "This user sent a message that we relayed to this other user".
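
To make that concrete, here's a toy relay in Python (purely illustrative, not Signal's server code): it never sees plaintext, but "who sent how many bytes to whom, and when" is sitting right there for the operator to log if they choose to.

    import time

    mailboxes = {}     # recipient -> pending ciphertexts
    metadata_log = []  # what a curious (or compelled) operator could retain

    def relay(sender: str, recipient: str, ciphertext: bytes) -> None:
        # The ciphertext is opaque to the server (E2E-encrypted client-to-client),
        # but sender, recipient, message size, and timing are all visible here.
        metadata_log.append((time.time(), sender, recipient, len(ciphertext)))
        mailboxes.setdefault(recipient, []).append(ciphertext)

    def fetch(recipient: str) -> list:
        return mailboxes.pop(recipient, [])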


The signal server is responsible for doing the trust-on-first-use key exchange and letting you send offline messages to your counterparties. If your client could somehow be tricked into thinking that the other side was offline and you'd already sent them a large number of messages while they were offline, or that this was the first time you were talking to them, your client would leak a lot of metadata to the server. And given that the server is in charge of routing your messages to your counterparties, it seems like there's a lot of potential for a hostile server to trick the client in that way.

Theoretically it might be possible for a sufficiently paranoid client to cover all the bases. But it's certainly a huge attack surface.


If Open Whisper Systems had received a national security letter requiring them to collect more information and keep it secret that they were doing so, how would you expect them to have responded to that subpoena?


This is a good question, because it illuminates design decisions that Signal has made that confuse nerds. For instance: Signal only recently got user profiles --- user profiles! --- because they took their time figuring out how to deliver them without sacrificing privacy. Take some time to look at how they implemented Giphy sharing to get an idea of what I'm talking about.

Any information the Signal client reveals to the server is, indeed, something the USG could make a legal claim on. Probably even if Signal's server code doesn't now even collect it. The Signal team has been sounding that alarm for years: when you look at a chat program with fancy whiz-bang features, consider what those features expose (traffic-analytically, even). "Maybe have fewer features until we figure this out", Signal says.

The market does not agree, and that has been rough for Signal, and they deserve credit for the stand they are taking on it.


An NSL can't require you to collect new business records. It can only compel you to disclose business records that you already have.

This is beyond the legal authority of an NSL.


> This is beyond the legal authority of an NSL.

If the legal authority can't be challenged in public, how are you so sure this legal authority hasn't been skirted plenty? In an opaque system, what they can and can't do is only theoretical. Only sometimes are things disclosed well after the fact and often only in aggregate (such as numbers about how many NSLs are greenlit). This is what the author means by no trust required. They can't be asked to subvert it, even via extralegal means.


The Core Secrets leak said the FBI "compels" U.S. companies to "SIGINT-enable" their products if they don't take money. SIGINT-enable means installing backdoors. So, yeah, they can. They also do this with classification orders mandating secrecy, from organizations and people that are immune to prosecution. In the Lavabit case, they wanted a device attached to the network to do whatever they wanted with, while the company was ordered to lie to customers that their secrets were still safe behind the encryption keys. That's always worth remembering for these discussions. Plus, most companies or individuals won't shut down their operation to stop a hypothetical threat.

So, you have to assume they'll always get more power and surveillance over time via secret orders if there are no consequences for them demanding it, while people on the other side can be massively fined or do time for refusing. Privacy-protection organizations simply shouldn't operate in police states like the U.S..


If for some reason those methods fail, they can use BULLRUN, which has a much larger budget[1] and is specifically tasked with "defeat[ing] the encryption used in specific network communication technologies"[2].

[1] "The funding allocated for Bullrun in top-secret budgets dwarfs the money set aside for programs like PRISM and XKeyscore. PRISM operates on about $20 million a year, according to Snowden, while Bullrun cost $254.9 million in 2013 alone. Since 2011, Bullrun has cost more than $800 million." ( https://www.ibtimes.com/edward-snowden-reveals-secret-decryp... )

[2] https://en.wikipedia.org/wiki/File:Classification_guide_for_...


That is a major accusation, can you provide source(s) to read more about this claim?

It is at odds with known cases, such as the fight with Apple over iPhone encryption.


It's not at odds if you know how they work. The U.S. LEOs are multiple organizations with different focuses, legal authority, and so on. They also regularly lie to protect illegal methods and activities. Let's look at some data.

Now, first indication this isn't true was Alexander and Clapper saying they didn't collect massive data on Americans. If they did, they could've solved a lot of cases by your logic of action vs capability being contradictory, right? Yet, Snowden leaks showed they were collecting everything they could: not just metadata, not just on terrorism, and were sharing it with various LEO's. So, they already lie at that point to hide massive collection even if it means crooks walking.

Next, we have the umbrella program called Core Secrets. See Sentry Owl or "relationships with industry." It says Top Secret, Compartmented Programs are doing "SIGINT-enabling programs with U.S. companies." In same document, even those with TS clearance aren't allowed to know the ECI-classified fact that specific companies are weakening products to facilitate attacks.

https://theintercept.com/2014/10/10/core-secrets/

https://theintercept.com/document/2014/10/10/national-initia...

For Lavabit trial, see Exhibit 15 and 16 for the defense against pen register. Exhibit 17 makes clear the device they attach records data live and claims constitutional authority to order that. They claim only metadata but they lied about that before. Exhibit 18 upholds that the government is entitled to the information, Lavabit has to install the backdoor, the court trusts FBI not to abuse it, and they'll all lie to Lavabit customers that nobody has access to their messages (aka secrecy order about keys).

https://edwardsnowden.com/wp-content/uploads/2013/10/redacte...

That the judge asked for a specific alternative was hopeful, though. I came up with a high-assurance, lawful-intercept concept as a backup option for event where there was no avoiding an intercept but you wanted provable limitation of what they were doing.

https://www.schneier.com/blog/archives/2014/09/fake_cell_pho...

They regularly hide what techniques they have via parallel construction or dropping cases.

https://www.eff.org/deeplinks/2013/08/dea-and-nsa-team-intel...

https://arstechnica.com/tech-policy/2015/04/fbi-would-rather...

So, you now have that backdrop where they're collecting everything, can fine companies out of existence, can jail their executives for contempt, are willing to let defendants walk to protect their secret methods, and constantly push for more power via overt methods. In the iPhone case, even Richard Clarke said he and everyone he knows believed the NSA could've cracked it. Even he, previously an ardent defender of the intelligence community, says the FBI was trying to establish a precedent to let them bypass the crypto with legal means in regular courts.

https://www.newsweek.com/former-white-house-offiical-nsa-cou...

So, the questions would be:

(a) can they already do that legally or technically using methods like attaching hardware and software to vendors' networks/apps like in Lavabit trial?

(b) can the NSA or third parties bypass the security on iPhones publicly or in secret? Or did Apple truly make bulletproof security?

(c) did all this change just because FBI said they were honest, powerless agency hampered by unbreakable security in a press release?

I didn't think anything changed. I predicted they'd crack that iPhone the second they were blocked in court. They did. They knew they could the whole time. They lied the whole time. They wanted a precedent to expand their power like they did in the past. That simple.


I’m pretty sure that isn’t true. They can be used to compel you to build interception capabilities.


source?


It's complicated, but that's kinda what happened to the Lavabit "secure" webmail service. When the owner wouldn't install a backdoor, the FBI sought the private encryption keys so they could MITM the whole site.

https://www.newyorker.com/tech/elements/how-lavabit-melted-d...


That isn't building an interception mechanism, it's revealing some private information Lavabit held.


The big Apple vs FBI high profile lawsuit for one. Granted that was a telegraphed precedent seeking exercise by "accidentally" losing access to the work phone after the terrorists destroyed their personal phones before the attack.


That was not an NSL.


True - they are less publicly tested but it is suggestive in its own way. If they could just NSL their way to access why bother with precedents? It is wild speculation but that is what lack of transparency has wrought.


I really can’t say. Take for what it’s worth.

Edit: I may be thinking of a FISA order as opposed to an NSL. Doesn’t matter though, obviously the concern is that they would be served with whichever does allow that.


Does the NSL have any legal authority? Is it actually bound by laws? From what I understand, an NSL is "do what we tell you, because we're the NSA".


precisely the way they did, which was to challenge the gag order, and win. That’s where the documents above come from, in fact.


NSLs don’t allow that.


If I read that document correctly, it's more accurate to say that Signal stores at least two integers, rather than exactly. Signal did not provide information that does not fall under certain information categories, and they say that a court order would be needed to force them to disclose any of that information (if they possess any, which they deny).


But they did deny possessing any of that information in the subpoena reply, so if you trust that they are replying truthfully it is exactly 2.


So, nobody has ever lied in a subpoena reply before?


Uhm... this is what would happen if it was a govt job


> P.S. If you’re looking for good alternatives to Signal, I can recommend Matrix.

Yes, if you're looking for alternatives to Signal, you should totally use a solution that hasn't rolled out end-to-end encryption by default[0]. /s

...and that only two clients have implemented so far, out of 50ish that they list on their website.

[0] https://matrix.org/docs/guides/faq.html#what-is-the-status-o...


That ticks me off too. I'd rather suggest Tox.

For all the hate it gets, it only has one mode of communication: end-to-end encrypted, to your contact (people's addresses are pubkeys), and with forward secrecy.

Most "secure" IM systems fail this basic test. When proper end-to-end encryption is optional, guess what happens.


Well, I'd rather not have anyone suggest Tox. The whole "we use NaCl so we are safe" attitude from a couple of years ago still seems to be around. Good on them for using NaCl; a shame they don't seem to realize that you can write bad crypto with it.

The forward secrecy question still seems to be unresolved. They have session keys, but other than that there is no rekeying.

And then we have the whole issue with it relying on supernodes for much of its functionality (offline messages, mobile phone clients), which leads to it having a subset of the issues many have with Signal.


>The whole "we use nacl so we are safe" attitude from a couple of years ago seems to still be around.

Citation needed.

>Well. I'd rather not have anyone suggest tox.

I'm repeating myself, but for all the hate it gets, I'm unable to come up with a better suggestion than Tox. There's always some kind of flaw: Centralized, no forward secrecy, end to end encryption optional, no way to verify contacts and so on.


Regarding the citation, it is just a matter of reading how the core devs reply to potential security issues. Sure, it has gotten better, but it is not too far from the good old https://github.com/irungentoo/toxcore/issues/121

The fact is we mostly know what kinds of attacks are possible on Signal. We know metadata is a potential problem. We know what kind of tradeoffs we get with a centralised architecture. We know how that works and how to mitigate some things. Open Whisper Systems has been clear about what Signal provides and what it does not provide (even the wording around disappearing messages is well chosen not to confuse people).

With Tox there is a lot we don't know. Are the supernodes an attack vector? Why aren't the devs clear about using a less distributed architecture for mobile clients? Why do they claim to provide proper forward secrecy when they don't? There might be lots of strange properties to doing crypto in a distributed setting.

I would not recommend it for secure communication until the amount of unknowns is smaller. It is really that simple.


>Sure, it has gotten better, but it is not too far from the good old

So your citation for sticking to their old ways is pointing to some old example? That's not what I was looking for.

>Why aren't the devs clear about using a less distributed architecture for mobile clients?

Because it's not a priority to them; Or to me, for that matter. (I don't IM on the phone)

>I would not recommend it for secure communication until the amount of unknowns is smaller.

I don't see how the likes of Signal or the popular Telegram are any better in that regard. Also, adding more features (your phone suggestion) wouldn't help.


Signal is better because people who know their shit have vetted the design, and quite a bit of research on the protocol has been done. We are quite certain of the security properties of the Signal protocol.

Tox has not had this amount of attention, and it is written by people who seemed to sincerely believe that using NaCl/libsodium made Tox safe. If that is not a huge red flag, then I don't know what is.

Telegram is not end-to-end encrypted by default, and when it is it uses a weird protocol, which people warned about from the beginning. The devs were cocky even after a probably unintentional backdoor was found that would have let the server MITM every encrypted communication.

The difference between these three for secure communication is huge.


>Signal is better because people who knows their shit has vetted the design

So basically, argument from authority. No Good.


Then I ask you: as a non-cryptographer who knows enough to understand RSA, how should I verify cryptographic claims? People I find trustworthy claim Signal is good, which is further confirmed by studies and third-party audits. I know this is an appeal to authority, but nobody can personally verify every claim made to them. I believe I have done due diligence.


It helps that they used NaCl. When cryptographers looked at it, they found an issue: People can impersonate your friends if they get their hands on your private key.

This is pretty bad, but so is having had your private key compromised in the first place. It should be fixed next time they do a flag day.

Other than that, and the rekeying issue (keys are only renewed when the client is closed; they should be rotated periodically to make forward secrecy really effective), nothing else bad was found.
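
The fix is conceptually small; something like this sketch (purely illustrative, not toxcore's actual design): rotate the ephemeral session keypair on a timer instead of only at shutdown, so a key compromised today doesn't expose last week's traffic.

    import time
    from nacl.public import PrivateKey, Box

    REKEY_INTERVAL = 600  # seconds; made-up number, not a toxcore constant

    class Session:
        def __init__(self, peer_pubkey):
            self.peer_pubkey = peer_pubkey
            self._rotate()

        def _rotate(self):
            # Fresh ephemeral keypair; the old private key is dropped, so past
            # traffic stays unreadable even if the current key later leaks.
            self.ephemeral = PrivateKey.generate()
            self.rekeyed_at = time.time()

        def encrypt(self, plaintext: bytes) -> bytes:
            if time.time() - self.rekeyed_at > REKEY_INTERVAL:
                self._rotate()  # a real protocol would also announce the new public key to the peer
            return bytes(Box(self.ephemeral, self.peer_pubkey).encrypt(plaintext))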

Notice that, for a couple of years now, effort has gone into polishing (TokTok) toxcore and documenting things, rather than anything else (e.g. adding fancy features). That's a good thing.

Tox might be understaffed and progressing slowly, but there's nothing fundamentally bad about it. They got a bunch of important things (really distributed, DHT, public keys as addresses, temporal keys for forward secrecy, always end-to-end encrypted) right. I'm not aware of any other project that got this much right, unfortunately.

If only Tox got more attention, it'd gain developers, donations, and the possibility of getting a proper audit done.


Fwiw, Matrix E2E actually exists in separate codebases in: Riot/Web, Riot/iOS, Riot/Android, nheko, matrix-python-sdk, libpurple (in PR), and shortly in Fractal (thanks to https://gitlab.gnome.org/jhaye/olm-rs etc). So yup, it sucks that it's not turned on by default in private rooms, but we're working away as fast as we can.


I love your project and am really looking forward to replacing a lot of stuff with Matrix-powered chat.

However, I strongly disagree with the author that Matrix should be recommended as a secure communication platform until E2E is stable (and, from what I understand about your project, you'll enable it by default as soon as you consider it stable enough).


For the record, E2E is currently in libpurple master (only decryption not encryption) and in PR on matrix-python-sdk. Just a slight flip there.


my bad; i thought all the python ones had landed now. in practice the work is done.

Also, since writing this list yesterday, another client has got E2E running: Seaglass (a native Cocoa macOS client): https://neilalexander.eu/seaglass/


Last time I tried Matrix (this spring) with a group of peers, my E2E rooms were full of random "failed to decrypt" errors and lots of out-of-band communication: "hey, are my messages working for you today?"

Yes, we had one Synapse server running on a resource-constrained machine that sometimes "fell behind" the rest of the network. I believe that is what caused such issues. Still, the fact that things easily break under server load or network issues means there is something faulty about the protocol. Resilience and reliability are no less important than security.


That mirrors my experience with the matrix.org homeserver. It has not improved much over all the months.


We've been working away fixing causes of 'unable to decrypt' bugs, to the extent that (as a poweruser) I almost never see them. If you see them, please report them via bug report so we can dig into them.

Meanwhile, the performance of the matrix.org server has definitely improved massively over the last few weeks. We hit a performance ceiling from May through mid July, but since July 19th or so we've finally got CPU headroom back again thanks to stuff like https://twitter.com/matrixdotorg/status/1019957885026144257 and https://twitter.com/matrixdotorg/status/1022095383978233856.


Author here, this is a fair criticism. Other alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring (not an endorsement of any of these). I'm an old curmudgeon who just uses IRC+OTR and GPG, though, so I have to depend on others for recommendations.

Also, Matrix enables end-to-end encryption by default on clients that support it.


According to friends in the cryptography space and from a cursory reading of wikipedia, Telegram is a shitshow compared to Signal.

It's one thing not to trust Signal. It's another to recommend alternatives that are far worse.


(Totally unrelated, sorry, but about a month ago you and I had a brief exchange about the placebo effect.[1] I got distracted and missed your last reply and never responded. Sorry about that. It's too late to reply on the original thread, and I don't want to hijack this one, but while looking for an old comment I saw your reply and felt compelled to mention it to you. It was a good exchange IMO and I didn't mean to "ghost" on it.)

[1] https://news.ycombinator.com/item?id=17457085


I appreciate the apology, but I don't feel ghosting on an online conversation should require an apology.

People have other lives, most often online discussions go way past their due date (I actually like the fact that HN doesn't give you a notification when somebody replies to your comment).

It was a good discussion though, the placebo effect is fascinating.


Yeah, I almost let it go but then I figured, what's the harm? It was a good discussion. Well met.


You suggest Telegram over Signal? Now we know you're spreading FUD.


I don't actually endorse any of the alternatives, just listing them. I haven't had time to research them in depth. I have worked with Tox before and I know some of the guys behind it, though, but I think it's dead in the water.


[flagged]


We've banned this account.


Why would you rely on others for recommending those, but not for Signal? Just don't recommend anything then, or don't criticise Signal. What's the point of convincing people to move to worse alternatives?


I personally reviewed Matrix before recommending it on my blog. The other alternatives I listed here are not endorsements.


Then why would you recommend a solution that hasn't even enabled end-to-end encryption by default?


When did I do this?


You recommended Matrix in your blog which, according to the first comment in this thread, is not end-to-end encrypted by default (which you acknowledged in your first response). Why would you recommend that instead of Signal?


Oh, I should have caught that and addressed it in my earlier response. Matrix is end-to-end encrypted by default on clients that support it. The valid criticism is that some clients don't.


Because for some that's not a show-stopper.


In the context of this article, framed as an alternative to Signal, I think it should be.


Why?

Let's compare it to F-Droid: He isn't saying that Signal should be distributed via F-Droid by default, just that there should be the option. So the article doesn't seem to say that the defaults are what matter.


But in the article he's saying not to use Signal because he doesn't consider it secure enough, and to consider Matrix as an alternative. His criticism of Signal has nothing to do with defaults, and everything with security.


Let's also not forget XMPP+OMEMO.


I really like Wire, for a large number of reasons - privacy and user experience among them. I've talked about it in the past[1].

If you get the time, please give it a proper whirl. With the recent open-sourcing of their server and proper E2EE by default[2] (but abstracted away from the "regular user"), it's shaping up to be a really solid application. As far as I'm aware they use the same double-ratchet protocol as Signal, but you don't need your cellphone number to register (a big thing for some people - myself included).

1. https://news.ycombinator.com/item?id=16655743

2. https://wire-docs.wire.com/download/Wire+Security+Whitepaper...


I'd say Wire is a good substitute.


Nonsense. You can run your own Matrix server and set whatever defaults you want.


A Matrix server can force E2E on all messages passing it?


Yes, it can - for instance the Matrix servers that the French government runs do this by default already. We haven't exposed this as a config option and pushed it into mainstream Synapse yet because of https://github.com/vector-im/riot-web/issues/6779, but the tweak for doing it is:

https://github.com/matrix-org/synapse/pull/3426/files


Is this true? As far as I know, even in rooms with m.room.encryption, you can still send normal m.room.messages. Or does the config option you're describing automatically set up room PLs such that m.room.message can't be sent?


technically you can still send normal m.room.messages (or anything else), but any spec-compliant client will refuse to do so in a room where m.room.encryption has been enabled.
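
In other words the guard lives in the client, roughly like this (hypothetical client object, not any particular SDK's API; m.room.encryption is the real Matrix state event type):

    def send_message(client, room_id: str, body: str) -> None:
        # A spec-compliant client checks the room state and refuses to send
        # plaintext once encryption has been enabled for the room.
        if client.get_state_event(room_id, "m.room.encryption") is not None:
            client.send_encrypted(room_id, body)   # Megolm-encrypted event
        else:
            client.send_plaintext(room_id, body)   # ordinary m.room.message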



That looks like a no given the article has a "Create a new secure room" section where you have to explicitly enable encryption for that specific room.


e2e, by nature, is client specific. So Matrix, as a connectivity glue protocol, has nothing to do with it.

Synapse, the reference server, can handle rooms, and force encryption on the rooms.

E2E is client side. Riot, as the reference client, is the one that takes care of this, and, if I get everything right, it is on by default.


Regardless of the truth of your assertions, the article you provided as evidence does not support them. At no point does it demonstrate that a matrix server can force E2E on all communications, that synapse can be configured thus, or that riot can require that configuration.


Or conversations.im? Matrix + Riot leaves a heap of metadata about you on the federated server. If that server is compromised, so are you.


Surely Matrix doesn't leave any more metadata than XMPP with MAM enabled (which presumably most XMPPers use these days)?


> Off the bat, let me explain that I expect a tool which claims to be secure to actually be secure. I don’t view “but that makes it harder for the average person” as an acceptable excuse. If Edward Snowden and Bruce Schneier are going to spout the virtues of the app, I expect it to actually be secure when it matters - when vulnerable people using it to encrypt sensitive communications are targeted by smart and powerful adversaries.

I'm not so sure about this. I don't think Snowden and Schneier are praising it because it is the most secure application available that works for every threat model; I think they're doing it because it's the best attempt to up the security of the masses. In other words: there's a limit to its threat model. Signal makes it harder to do mass-scale surveillance, and allows e.g. whistle-blowers to contact journalists without standing out because they're using an encrypted messaging app.

Yes, it's important to highlight those trade-offs, and one can always do better, but as far as I can see Moxie has always justified the trade-off with arguments that were not based on being self-serving. You might not agree with his conclusions, but I think it's unfair to accuse him of being self-serving. (Unless you mean "thinking about the consequences for the success of Signal" by "self-serving". It's not really clear how it serves Moxie otherwise, and the author doesn't go into detail about that.)

In the end, I think it comes down to the author expecting different goals from Signal than the project itself has - as implied by his disdain for GIF search. Obviously Signal isn't only implementing features just to get more secure - it also wants to be widely adopted. It's just that the author apparently doesn't consider that as important.


I think Signal does a very good job at providing easy security for the masses. But for journalists and sources it can be dangerous since it is based on real phone numbers, and those phone numbers are sent to the server to be matched up. It is especially dangerous if the journalists and sources believe Signal is protecting them in that use case.


Signal is not for state-proof encrypted communication. Not large states like the USA or Russia. If you think it is, you've been misinformed. For state actor proof communications you need to evaluate every action you take and think:

"What are the assumptions that I'm making here?"

One assumption is that you're not currently on anyone's radar. Are you willing to bet the entire enterprise on this assumption? How certain are you? Are you 99.999% certain?

Another assumption is that the operating system you are running the app in is not compromised on either end of the communication. 99.99%?

Another assumption is that the screen isn't viewable by other devices. Another is that the frequencies of your key taps aren't picked up by a mic and then turned into intelligible letters.

Another assumption is that the encryption algorithms you're utilizing haven't been subtly chosen to be intelligible to a single actor or that they'll stay secure once we have quantum computers.

Etc. Etc. Etc.

Signal is good because it raises the bar. Stock traders buying black information probably won't get your communications. They won't be scooped up in an email server leak. They won't be visible to your wife when she enters your phone's unlock code, because they auto-delete and they don't get pushed to your iPad the way FB Messenger messages do[0].

But if you want to go up against James Bond, and you're already on his radar, you need to give up the illusion that anything computer related is fully trustable. Just pre-arrange some code words or OTPs and meet in person in an area without electronics or go even more old school and use dead drops with hand written communication.

[0] I personally know 3 people that were caught cheating this way.


> Signal is not for state-proof encrypted communication. Not large states like the USA or Russia. If you think it is, you've been misinformed.

Ok, but in that case what does Signal offer that any random messenger with transport encryption doesn't? If your threat model doesn't include state actors then you can probably trust a) the HTTPS certificate infrastructure b) an international corporation like Facebook, so you can probably assume that no-one would tap your FB messenger messages in transit. "Not pushed to your iPad" sounds more like a bug than a feature - I want to be able to read my messages anywhere that I'm logged in as me (at least while I have my yubikey or what have you plugged into that device). Automatic deletion... eh, I would rather make a deliberate decision about when to delete things, personally.


1. The HTTPS infrastructure is downgradeable and relies on DNS and a multitude of certificates. And not all the ciphers are safe. Yes it can be done securely-ish, but unless you're layering another level of encryption over HTTPS it isn't fully secure. Layering is what the CIA does, according to the Snowden leaks.

2. As for the rest of it: Cool man, that sounds like you want a normal chat app that is more usable and less secure. I use Messenger too for things that don't matter.


> The HTTPS infrastructure is downgradeable and relies on DNS and a multitude of certificates. And not all the ciphers are safe. Yes it can be done securely-ish

There's no reliance on DNS. We know what the right way to do HTTPS is, and an app that doesn't have to maintain compatibility with ancient browsers can use a strictly secure profile (no old ciphers, no downgrades etc.). HTTPS is older and more complex than the Signal protocol, but it's also extremely widely deployed and gets a huge amount of attention from security researchers. I think actual attacks on the protocol are less likely with HTTPS than with Signal.

> unless you're layering another level of encryption over HTTPS it isn't fully secure.

Nonsense. Two layers of valid encryption are no more secure than one, and two layers of flawed encryption will almost certainly still be flawed.

> 2. As for the rest of it: Cool man, that sounds like you want a normal chat app that is more usable and less secure. I use Messenger too for things that don't matter.

It's not that my chats don't matter. It's that I don't think autodeletion or one-device-only represent a meaningful security improvement.


> There's no reliance on DNS.

In practice there is for most situations. Are you going to get a static IP and go through the work of finding one of the rare cert authorities to get an HTTPS cert for it authorized?

> Nonsense. Two layers of valid encryption are no more secure than one, and two layers of flawed encryption will almost certainly still be flawed.

I hate arguing about this because I feel like there is a difference between how mathematicians think and how engineers think. I agree that one of the layers should be HTTPS if the context allows for it, because it has a lot of eyes on it, as you mention; but I fail to see how layering encryption is bad from a privacy standpoint.

Mathematically, this statement:

> Two layers of valid encryption are no more secure than one.

Is only true if there are no implementation mistakes and if breaking the first layer already requires more operations than are feasible in the universe.

But why should we, a priori, assume that there are no mistakes? We have hundreds of examples of thought-to-be-secure ciphers / one way hashes ending up in the trash heap. Look at things like Cloudbleed. In reality things break. In reality cert authorities get moled or hacked. If you've been using layered encryption you're safer. Also, HTTPS basically mandates that you use TLS, which for some contexts doesn't work because we'd prefer a one-way (i.e., connectionless) channel to communicate to stop inbound traffic at the physical layer.
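
Concretely, the belt-and-braces layering I mean is just this (illustrative Python using Fernet from the cryptography package, with two independently generated keys; ideally you'd even use two different implementations or algorithms):

    from cryptography.fernet import Fernet

    outer_key, inner_key = Fernet.generate_key(), Fernet.generate_key()
    outer, inner = Fernet(outer_key), Fernet(inner_key)

    message = b"meet at the usual place"

    # Encrypt-then-encrypt: an attacker now has to break (or find a bug in)
    # both layers; a flaw in either one alone no longer exposes the plaintext.
    wrapped = outer.encrypt(inner.encrypt(message))

    assert inner.decrypt(outer.decrypt(wrapped)) == message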

> It's that I don't think autodeletion or one-device-only represent a meaningful security improvement.

It's helped plenty of people who have had their phone seized at the border or their other device seized by the police. Sometimes you don't know that information is sensitive until later, and sometimes choosing to delete it at that point is illegal or impossible.


> In practice there is for most situations. Are you going to get a static IP and go through the work of finding one of the rare cert authorities to get an HTTPS cert for it authorized?

You don't need DNS to check whether the server purporting to be messenger.com has a valid certificate for messenger.com. An attacker who controls the network can of course cut you off entirely, but an attacker who controls DNS can't intercept your messages, because that doesn't get them any closer to having a certificate.

> I agree that one of the layers should be HTTPS if the context allows for it, because it has a lot of eyes on it, as you mention; but I fail to see how layering encryption is bad from a privacy standpoint.

Do you feel safer behind two locked doors than one? I guess it can't hurt, but the effort would surely be better spent on virtually any other aspect of the system. E.g. if you double the key length in a single layer of encryption you've made it 2^128 (or whatever your key length was) times harder to crack, whereas if you stack two layers then you've only made it twice as hard.

Beyond that my argument would be: many security breaches happen because someone got confused about where the security boundary was. If you use one layer of encryption then everyone knows that the encrypted data is untrusted and the decrypted data is trusted. If you have two layers it's very easy to get lazy and introduce a small hole into one layer assuming the other will cover it, then you do the same for the other layer, and then an attacker figures out how to connect those two holes in a way you hadn't thought of and suddenly you're doomed.


Well if you’re ignoring state actor security and trusting a provider, what does signal offer over irc/tls?


> Truly secure systems don’t require trust.

This is a chat app so, by definition, security requires trusting at least one other person. Also, I think experience shows that secrets can often be least trusted to those who have some interest in/use for them, with the secret owner often being the least trustworthy of all. So I'd say that if you trust yourself you're already probably trusting one of the weakest links in whatever chain of trust you would have.

But seriously, pretty much every secure system requires trust, and the more it relies on technology, the more trust is required. You need to trust there are no backdoors or holes in a long chain of hardware and software that no one person can possibly verify, and if they hypothetically could, they could only do so with the help of verification software that they could not themselves verify, at least not without dedicating a lifetime to that goal. Trustless security does not exist, and attempting to achieve it by adding more technological layers and more complexity reduces rather than enhances security. We should make it easy for us to choose whom to trust, not work on a futile attempt to take trust out of the system.


> Trustless security does not exist, and attempting to achieve it by adding more technological layers and more complexity reduces rather than enhances security.

How so? If you can minimize trust to the point where you only have to trust someone to properly design a federated or peer-to-peer open protocol, and trust that others will participate and oversee the process, that's one thing, as there is no control or power to go around. Open and secure enough implementations from other parties can emerge, with more parties verifying them and the possibility to switch in case someone does something sneaky. But if you also have to trust the same organization with implementation, infrastructure, and distribution, there is not much security to talk about. There is no way to even verify claims that the thing they open-sourced is the same thing they compile and distribute. And so much centralized power makes the organization a lucrative target for state actors, with no realistic possibility to defend.

The more centralized trust you have, the less secure the system can be. It's like an upper bound on security.


I understand your argument, but it cannot be shown to be more valid than the complete opposite: the less centralized a system is, the more complex it is in terms of protocols, and you need to trust many more people to design it correctly than you would need to trust to operate a centralized system. In fact, it could be argued that beyond some complexity level, an unbreakable design is virtually impossible, even in principle.

Your argument about an appealing target could also be used to show the exact opposite: decentralized systems are much harder to upgrade, and so they become attractive targets which you need to break much less frequently (especially considering that the internet backbone itself is pretty centralized), and so it makes even very expensive cracking more affordable. The argument about open-source applies pretty much equally to the centralized and decentralized case.


> the less centralized a system is, the more complex it is in terms of protocols, and you need to trust many more people to design it correctly

I disagree with that. The more centralized a system is, the fewer trust boundaries it has and the more vulnerable and insecure it is, because penetrating one trust boundary gives access to everything. Security always requires additional complexity, and decentralization forces you to take that complexity seriously for once, something you neglect, rather than simplify, in centralized insecure designs. Being forced to deal with trust explicitly and systematically leads to much more secure designs.

Other than that, decentralized systems are exactly the same as centralized ones, just with more players, more choices, and incentives not to break anyone's trust. The only problem is all that embrace-and-extend crap large corporations always attempt to pull off to recentralize everything.


> because penetrating one trust boundary gives access to everything

The same could be true for a decentralized system if the flaw is in the centralized backbone or the shared protocols/algorithms.


I like Linus' argument, if you don't work with a web of trust then you're doing it wrong. In the context of mobile secure messaging the web of trust includes: I'm trusting every hardware component on my phone, I'm trusting Apple, I'm trusting the iOS code, I'm trusting the TLS protocol, etc.


> I like Linus' argument, if you don't work with a web of trust then you're doing it wrong

I can't find the source for this; could you tell me where you took it from? (Not saying it's not true, just curious to read the full text.)


It was a video of him talking to students, where he was asked about security in the kernel, IIRC. I'm on my phone now, but if you find it please post the link :)


I also reacted to this line, because there is no security without trust, even if it's only trust in the security system itself.


My new secure chat app encrypts your message then send it to /dev/null.


I guess that's acceptable if you trust the NSA not to replace /dev/null with an email forwarder.

My new secure chat app, on the other hand, encrypts your message in memory, then zeros out the bytes.


In "memory". Sheeple!


Well, you seem like a potential user for my secure messenger. It is a one-person Faraday cage in a pitch-black room in my basement. Don't forget to bring a gun as a failsafe.
