I don't trust Signal (drewdevault.com)
523 points by Bl4ckb0ne on Aug 9, 2018 | 455 comments



Drew DeVault doesn't trust Signal because its Android incarnation uses the Google Play Store --- the app market virtually all of its real users use --- and not F-Droid. DeVault would also like it if Signal would interoperate with other chat programs.

Instead, DeVault would prefer that you use Matrix, a system for which end-to-end encryption is (according to its own website) "in late beta", offered on a select subset of clients, and "not enabled by default"†.

This argument is clownish and we should be embarrassed it's on the front page.

There are people in the world that want to sysadmin their phones. It's a life choice they are free to make and I don't hold it against them. But the vast, overwhelming majority of users do not want to make the app market on their phone work more like Debian and less like the Play Store. Signal, to put it bluntly, does not care about the desires of the phone sysadmins. Even if they caved to the sysadmins, the application would, for virtually all its users, be no more secure. This bothers DeVault a lot, enough that he's constructed an entire psychoanalysis of Moxie Marlinspike to explain to himself how it could possibly happen that someone else on the Internet doesn't agree with him.

Also, just as a note to DeVault: the point of end-to-end encryption is that you don't have to trust Signal's server. All it does is arrange for the delivery of messages, which are secured client-to-client. Compare Signal's server to Wire's, which --- last I checked --- retains a record of every pair of users who have communicated in the past.
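
To make that concrete, here is a toy sketch of the end-to-end property: the relay server forwards opaque blobs and never holds a decryption key. This is purely illustrative; real Signal uses the double-ratchet protocol, not a single static AES key, and every name here is invented.

    import javax.crypto.Cipher
    import javax.crypto.KeyGenerator
    import javax.crypto.spec.GCMParameterSpec

    // The server's entire view of the world: recipient IDs and ciphertext blobs.
    class RelayServer {
        private val mailboxes = mutableMapOf<String, MutableList<ByteArray>>()
        fun deliver(to: String, blob: ByteArray) {
            mailboxes.getOrPut(to) { mutableListOf() }.add(blob)
        }
        fun fetch(user: String): List<ByteArray> = mailboxes.remove(user) ?: emptyList()
    }

    fun main() {
        // Key material lives only at the endpoints (toy: one shared AES key).
        val key = KeyGenerator.getInstance("AES").generateKey()
        val enc = Cipher.getInstance("AES/GCM/NoPadding").apply { init(Cipher.ENCRYPT_MODE, key) }
        val iv = enc.iv

        val server = RelayServer()
        server.deliver("bob", enc.doFinal("hi bob".toByteArray()))  // server sees only ciphertext

        val dec = Cipher.getInstance("AES/GCM/NoPadding").apply {
            init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
        }
        println(String(dec.doFinal(server.fetch("bob").first())))   // prints "hi bob"
    }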

† When this was pointed out downthread, DeVault responded: "[o]ther alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring". Telegram is a particularly funny reference to make, because not only is E2E not the default there, but --- last I checked --- it can't even do E2E group chat. Telegram's owners are adamant that TLS is adequate for group secure chat.


I feel like you didn't actually read the article or my comments in this thread.

>Drew DeVault doesn't trust Signal because its Android incarnation uses the Google Play Store --- the app market virtually all of its real users use --- and not F-Droid

It should use both.

>the point of end-to-end encryption is that you don't have to trust Signal's server. All it does is arrange for the delivery of messages, which are secured client-to-client. Compare Signal's server to Wire's, which --- last I checked --- retains a record of every pair of users who have communicated in the past.

My point is that Signal could just as easily keep a record of every pair of users who have communicated. We can't be sure, because we can't run our own servers. I spoke about this in detail in the article.

>† When this was pointed out downthread, DeVault responded: "[o]ther alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring". Telegram is a particularly funny reference to make, because not only is E2E not the default there, but --- last I checked --- it can't even do E2E group chat. Telegram's owners are adamant that TLS is adequate for group secure chat.

Thanks for omitting all of the context which clarified that I hadn't researched them in depth and wasn't explicitly endorsing any of them, and the comment where I clarified that E2E encryption is enabled by default on Matrix.


I read your article, carefully, twice: once this morning (I briefly tweeted about it but didn't feel like I could do it justice and deleted the tweet) and again before writing this.

I've read all of your comments in this thread to date and, as you can see, replied to some of them.

I feel like I have fairly summarized your arguments.

"It should use both", you say. Signal disagrees. That makes Signal evil, according to your argument. "That's not how the world works" is my rebuttal.

Signal could easily keep a record of every pair of users. So can every other mainstream chat application --- and several of them do. Signal doesn't. My reply on the subthread about this issue explains what Signal does differently here, and it's not "publish the source code of the server".

People can simply read your comment on the thread --- I made clear where the quote came from --- to see exactly what you said about Wire and Telegram and Tox and Ring. I'm satisfied that I've represented your argument well.


>"It should use both", you say. Signal disagrees. That makes Signal evil, according to your argument.

You're oversimplifying this. For the full rebuttal, refer to the article.

>Signal doesn't.

You cannot know this. We don't need to have this conversation in two places; I'll just link it for others who want to follow along:

https://news.ycombinator.com/item?id=17726574

>I'm satisfied that I've represented your argument well.

I don't think so.

>People can simply read your comment on the thread

Fair enough: https://news.ycombinator.com/item?id=17724300

Full disclosure: I added the text in parentheses and the second paragraph of this comment about an hour after it was initially posted.


I don't think you understand that I can, in fact, just observe that Signal disagrees with you, without making a point-by-point rebuttal of your argument. Similarly, you don't indicate anywhere that you understand that Moxie can do the same without acting in bad faith, which is something you accuse him of doing.

You don't get to demand from strangers a debate on terms of your choosing.


[flagged]


We need you to present your own arguments civilly and substantively, according to the guidelines. Perhaps we could also request “calmly”.

https://news.ycombinator.com/newsguidelines.html


I basically agree here. Some people, like DeVault, don't seem to think that there is any other threat model than "protect sysadmins from nationstates like the US or big corporations or Mossad".

I find such viewpoints rather disappointing because, as a sysadmin myself, I don't hold them. My threat model is "someone steals my phone" and "someone with less than $1M in funding tries to hack me". I don't particularly care that I don't know whether the sand used to make the silicon for my phone was properly sourced and audited for backdoors or plastic shovels.

I want me and my family to be reasonably secure against the background noise of the internet.

And of course it shouldn't suck the battery dry like some thirsty vampire who was offered a bag of O-negative.

For this task, Signal is fully sufficient (until another messenger does it better or Matrix fixes the long list of problems I have with it).


Additionally, I'm pretty sure it's trivial to verify that the APKs Google Play serves are identical to the ones the devs published.
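
For what it's worth, the comparison itself is a few lines. A sketch, assuming you have pulled the installed APK off a device (e.g. with adb) and downloaded the developers' published APK; the file names are hypothetical:

    import java.io.File
    import java.security.MessageDigest

    // Hex-encoded SHA-256 of a file's bytes.
    fun sha256(file: File): String =
        MessageDigest.getInstance("SHA-256")
            .digest(file.readBytes())
            .joinToString("") { "%02x".format(it) }

    fun main() {
        // Hypothetical paths: e.g. pulled via `adb shell pm path ...` + `adb pull`.
        val fromPlay = File("signal-from-device.apk")
        val fromDevs = File("signal-published.apk")
        println(if (sha256(fromPlay) == sha256(fromDevs)) "identical" else "DIFFERENT")
    }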


That's not the interesting question. How easy is it to verify that the APKs are built from the published source code, without any added funny business?

The F-Droid devs have put a lot of work into reproducible builds. Not all software supports them, but for a project with an interest in information security there's no excuse not to.

That's the use case of F-Droid, and comparing it to self-publishing APKs without so much as a GPG signature is so beside the point it borders on deceptive.



That blog post is deceptive. Their instructions only reproduce the Java part, which is pretty easy to do. But Signal requires libraries written in C (aka "native code"), and they do not have that building reproducibly. The only Android messenger really doing reproducible builds is Briar.


There is nothing wrong with F-Droid. The problem isn’t that F-Droid is toxic. People can disagree without either side being at fault... is a point I am at pains to make in this thread.


Tell that to the VLC developers.


Not interested. I'm not litigating F-Droid and don't need to. F-Droid advocates, and some F-Droid critics, disagree: they think that if F-Droid is implicated in an argument, we must fully adjudicate all its pros and cons. No, that's not how the world works. I'm sufficiently well informed about F-Droid to know --- and I mean this in a benign sense, the same way I feel about OCaml or slab allocator design --- that I don't care.


What kind of monster doesn't care about slab allocator design?


That one, presumably.


But that's not the argument the author makes. He is worried about the apps getting compromised at the platform level.


That's a security concern he feels he can address for himself if Signal is made available to him on F-Droid. But for the overwhelming majority of Signal users, there isn't even in theory a security benefit, because they're exposed to their platform vendor no matter what Signal does.

Signal has decided --- sensibly, I think! --- to focus on the needs of the "normie" users. DeVault disagrees with that decision. He is welcome to do so, but it was Signal's decision to make, not his.


He is indeed welcome to do so. It is perfectly rational for him to choose some other software.

It is far from something that warrants a character assassination. Specifically, it's not something "clownish" that we should be "ashamed" to have on the front page. We get the community we deserve.


Please read what I wrote more carefully. Him wanting to use different software isn't clownish. I respect the prerogative of the phone sysadmins.

Him saying that Moxie Marlinspike is untrustworthy because of a disagreement, and then urging people to use Matrix --- that's a clownish argument. And it is the bulk of his argument, paragraph by paragraph: all the reasons why the only rational reason anyone could disagree with Drew DeVault is if they are sneakily trying to screw people over.


Literally nowhere in the article does he say that. The article presents two main arguments for why the author chooses not to use Signal (and why, by extension, other people with the same interests should do the same): that it requires Google services with root privileges, and that it doesn't federate or interoperate. There are no personal attacks on Signal's author; the article presents its reasons in a fairly objective light, so there is no reason to chastise him.


[flagged]


Matrix? E2E isn't even enabled by default anywhere; the server uses a ridiculous amount of resources; alternative servers support half of the protocol; I cannot easily discover people I know from my contact lists via mail/phone/address/ICQ number/etc.; 70% of the bridges I would need hover between "barely functional" and "the compiler isn't complaining"; guilds are not easily possible as in other chat applications; and its clients drain more battery than any other chat program I've tried, including Skype, which I think qualifies for an achievement award of some kind.

I think I'll stick with services that offer E2E and/or sensible feature-sets.

Once Matrix fixes all their problems I'll gladly install a homeserver and run it open for other users to sign up for; until then I'm on Signal.


Some version of this post seems to circulate every few months or so. This one is more direct in its accusations of Moxie acting in bad faith. I think this is disingenuous. Moxie has been very clear[0] about the tradeoffs that Signal has made and the reasons for them. It's fine to be dissatisfied with those choices. It's another thing entirely to accuse Moxie of dissimulating.

Personally, I'd like to see Signal replace WhatsApp. That's why I support the path Signal took, and why I also have a distaste for the author's snarky dismissals of features like GIF search.

[0] https://signal.org/blog/the-ecosystem-is-moving/


But in the linked post he does not explain why he does not maintain an F-Droid repository for people who do not trust Google, nor why the official Signal client does not connect to Signal forks, even ones that otherwise work identically. Security reasons? Ordinary smartphones are full of rootkits anyway, so someone using a forked Signal build is probably better off regardless, since they know a bit more about what they are doing.

So the basic argument holds, in my opinion: Moxie's main focus is keeping Moxie in control, not making Signal as good and as secure as possible.

So I also use Signal, but as soon as Matrix gets stable, I am gone.


> Moxie forbids you from distributing branded builds of the Signal app ...

Having multiple branded builds to choose from would be a terrible thing and would easily allow fake apps to gain traction.

> ... and if you rebrand he forbids you from using the official Open Whisper servers.

This seems pretty fair to me. Not only could third parties abuse their resources, it would greatly hinder their ability to make changes and respond to protocol-level security threats. They aren't in the API business; controlling their ecosystem allows them to make forward progress without concern for third parties they have no control over.


The F-Droid argument is the strongest and most self-evident of all. I don't trust Google, I don't trust Play.

The main point is, Moxie could take the wind out of the sails of literally every argument on this page by publishing Signal on F-Droid, but he just won't.

This alone is enough for me to lose trust in Signal.


They've already made the APK available directly on their website for over a year now.[0][1] It works just fine (albeit a little heavy on battery usage) without the Google Play Store or Google Play Services. What more do you really want?

[0] https://signal.org/android/apk/ [1] https://whispersystems.discoursehosting.net/t/how-to-get-sig...


>What more do you really want?

For it to be on F-Droid. I think that much was clear.


You can get Signal from the developers, the Play Store, or the Apple App Store. Nowhere else is genuine; anywhere else is possibly backdoored. Why dilute that message for the 0.01% of users who would use F-Droid?


The article is clear: downloading from the Signal site is not secure.


Based upon what? The download is served via HTTPS, and offers a checksum also secured via HTTPS. Are we entertaining security models in which PKC is considered "not secure"?

Or are we just going by the author's ignorant or disingenuous (depending on how you interpret his words) statements?


Checksummed but not signed.
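
The distinction, sketched: a checksum fetched from the same HTTPS page only proves the file matches that page, while a detached signature verified against a publisher key obtained out of band proves who produced the file. The algorithm and key handling below are illustrative assumptions, not Signal's actual distribution mechanism:

    import java.io.File
    import java.security.KeyFactory
    import java.security.MessageDigest
    import java.security.Signature
    import java.security.spec.X509EncodedKeySpec
    import java.util.Base64

    // Integrity only: anyone who can alter the download page can alter this digest too.
    fun matchesChecksum(file: File, expectedHex: String): Boolean =
        MessageDigest.getInstance("SHA-256").digest(file.readBytes())
            .joinToString("") { "%02x".format(it) } == expectedHex

    // Authenticity: verify a detached signature against a publisher key you already trust.
    fun matchesSignature(file: File, sigB64: String, pubKeyB64: String): Boolean {
        val key = KeyFactory.getInstance("EC")
            .generatePublic(X509EncodedKeySpec(Base64.getDecoder().decode(pubKeyB64)))
        return Signature.getInstance("SHA256withECDSA").run {
            initVerify(key)
            update(file.readBytes())
            verify(Base64.getDecoder().decode(sigB64))
        }
    }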


SSL provides integrity guarantees.


Only a bit of transport-level integrity. It doesn't turn your average hosting provider into a high-assurance one, nor its servers, OSes, software stacks, etc. This has been a well-known problem since the cryptocurrency era.


Which is maybe why Moxie is encouraging people to rely on Google Play Store.

If you are so concerned about state-level actors that the Play Store is untenable to you, Signal and Android on commodity hardware are probably not the solutions you want anyway.


And F-Droid is somehow immune to this?


[flagged]


Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."

https://news.ycombinator.com/newsguidelines.html


Maybe Moxie doesn't see it as his problem to address the concerns of non-contributing critics.

Are there any identified, non-state-level threats here, or is this just an ideological rant against proprietary software? If state-level actors are your concern, using Android means you have already lost.


[flagged]


This kind of innuendo is beneath this thread.


What kind of innuendo? I don't understand.


This is a logical fallacy.


Do go on.


It seems pretty odd to me to distrust someone because they aren't using the platform that you'd like them to use. Aren't there other issues with F-Droid? You have to root your device to run it, allow third party code. Those are all security concerns too.

It was posted elsewhere but here's Moxie's take: https://github.com/signalapp/Signal-Android/issues/127#issue...


You don't need root to run F-Droid, just enable 'Untrusted Applications' in Settings, which I think is what you meant by third party code. The core reason I prefer F-Droid is that apps on it are not bundled with Play Store APIs, and thus they run on Android devices that don't have GApps installed.


> You have to root your device to run it

Wtf. I have been using F-Droid for many years, and this has not been the case. As far as I know, this has never been the case, as Android has always had support for third-party app stores. In fact, even today, F-Droid recommends not using root for installs, since then you don't get the screen showing permissions.

> allow third party code

that's called running apps.

tl;dr nice FUD.


'Allow third party code' means code which is not signed. Once you tick that, any unsigned code can run, not just the app you downloaded. That makes exploitation significantly easier. It would be better if Android forced you to explicitly select which code could run, but that's too hard for most users.


Then you "untick" it until the next time you need to install something.

This is what I do on LineageOS. I don't regularly install new apps.

Side rant: this marketer-driven "install an app for everything" culture is a threat to the open internet and privacy. Usually the only reason for the app is to extract more personal info.

Already, young people barely use a web browser. That appears to be the future. Now get off my lawn or I'll start talking about the war.


You might do it, but thousands wouldn't.

Android could undoubtedly be stronger in this regard, and in permission control, firewalling, ad blocking, etc., but it's not going to happen.

Apps wouldn't be so bad if they were actually sandboxed properly, but yeah, they suck.

I was interested in Copperhead OS as an alternative, but it seems to have fallen into a greed-induced mess.


You just shot your own argument in the foot. Thousands of people don't even make up a percentage point of Android's user base. Most people on Android use Google Play because it's sufficient against the threats they need it to be sufficient against. Most people are okay with the risk/reward ratios that come with using commercial software because they then don't have to think about it. Signal provides a nearly turnkey level of protection above and beyond standard messaging, in a format easy enough for most people to use.


IDK what you're going on about. What is your argument?

I am arguing Play store is fine, and side loading is bad policy.

I argued that for every person who will take the time to micromanage permissions, thousands wouldn't.

So what are you talking about?


We're about to see what happens with APK side loading at scale... Fortnite is bypassing the Play Store. I predict massive pwnage.

https://twitter.com/APKMirror/status/1027580291374702592?s=1...


Android explicitly forbids running unsigned APKs; it is not possible to do.


> that's called running apps.

And you accuse me of making things up? "Allow third party code" is not called "running apps".


> You have to root your device to run it, allow third party code.

Most likely one of those, yes. Though on Android 8+ you can grant the allow-installs-from-unknown-sources permission to F-Droid alone.

Also, both Copperhead and Fairphone Open ship with the F-Droid privileged extension by default, allowing you to keep that setting entirely disabled.


> It seems pretty odd to me to distrust someone because they ...

... are using a platform "you" don't trust.

Really? That's not really odd.

At least, it's not odd if that usage, and what it entails, is the defining part of the persona in question.


What's odd is calling an application "not secure" because it uses the platform's software distribution channel.

Here's a thought. If you are so concerned about the NSA that you think Google's cloud is a problem, why are you running the OS developed by Google?


The open-source base OS is fine, the closed-source services layer and cloud platform are not.


Sounds like you're confident that you (or the FOSS community collectively) can spot cleverly hidden backdoors.

I'm not, and I find that position naive. For the overwhelming majority of people, who are not a cross between Bruce Schneier and Linus Torvalds, a threat model that tries to protect against the NSA and GRU and MSS pretty much requires avoiding anything with a network connection. If you have a smartphone, you should probably just use its default application store.


The application uses Google Play, which almost every other Android application in the world uses by default. And somehow that makes them untrustworthy?


This seems more like "I disagree with Moxie," not "I don't trust Moxie," unless there's some allegation that Moxie is in cahoots with Google.

I can trust people that I think made incorrect technical decisions, because I can see that they made a decision for technical reasons and have different priorities and reasoned soundly.


What I don't understand is why he's not doing that in spite of the fact that he could still keep control over Signal easily. One minor modification of the app and a forced upgrade would do the trick and render the F-Droid copies useless. The competition has been forcing users to upgrade in this way for a couple of years now. So I don't understand what he's afraid of.


Nah, people will just accuse him of publishing different binaries, or find something else to be upset about. You can't win against concern trolling by giving in to their demands. They just manufacture new demands, and all you do is waste your own resources. Haters are gonna hate, so you're better off ignoring them and focussing on your own vision.


Are you going to pay for him to do that?


The Signal Foundation has 50 million dollars.


And who is paying for Signal to be on Google Play?


Google, Facebook, WhatsApp, Microsoft: basically the companies that pay to use Signal's end-to-end encryption in their chat apps.

There are, I'm sure, apps that are better, and that's never been Moxie's goal. He's said over and over that he'd rather have encryption for the masses than the perfect messaging app. It seems disingenuous to assume that he's acting in bad faith when he's clearly doing exactly what he said he wanted to do.

If you want to make the perfect, self-hosted chat ecosystem, fire up that Matrix server and invite your non-tech friends to join. I'm sure that will work out incredibly well.

In the meantime, Moxie seems to realize that to accomplish his goal and make communication incrementally more secure for average users, he needs to go where the users are.

It's crazy to me that people still think that secure communication is a technical problem. We've had GPG for the competent for a long time. The hard problems in secure communication are about using the ecosystems that are available to large groups of average users and still being secure.


Pushing an app out to F-Droid is trivial. There is literally no defense for only using Google Play and unsigned binaries on his own website.


I believe that to use the same app, he requires Google Play? You can export the app yourself from Google Play, but it doesn't run unless you have Google Play installed.

Am I missing something?


Does "the same app" refer to Signal? If so, you an run it without having Google Play installed (though it will be more battery-hungry), and download the APK yourself.


Build the code yourself


Google Play is the default.


> Are you going to pay for him to do that?

I donated some money to them a while back. How hard could it be to push the binaries out to a second app store?


I published an app on F-Droid once: it is a very lightweight process. You give them access to the open-source repo containing the code and build instructions, and they then take care of pulling updates and publishing new versions.


That process can be more complicated than it seems.

I tried to publish my open-source game on F-Droid, but the build process involves building native components with a specific third-party version of the NDK toolchain, as well as shell scripts to move files around, so it never made it to the store.


"Having multiple branded builds to choose from would be a terrible thing and would easily allow fake apps to gain traction"

In this particular case, not likely. People who are into more secure communication do not randomly click on anything. They know what they are doing, or get it installed by people they trust. And if they don't: their fault, not Signal's.

And Signal can continue to work and introduce breaking changes whenever they want. They simply only support the official build of Signal. Anyone using anything different cannot complain if things stop working. (They will complain anyway, sure.)

And the resource abuse: can this really be a thing? I don't know in detail how the protocol works, but what can I do with the servers that I can't already do through Signal? Sending (encrypted) data from A to B. I can already abuse that today if I want.


> People who are into more secure communication do not randomly click on anything

I think you overestimate people. I told my wife to install Signal because she needed a password for something and it was way too complicated for her to remember. I know what the Signal app is and could likely avoid fakes; she would not. I think it is often the case that only one party to the conversation is security-minded, while the others just trust that person.

> And the resource abuse: can this really be a thing?

You know NTP, the protocol for sharing what time it is? That gets abused badly [1]. If you have an open service, there are ways it can be abused. This would without a doubt lead to DDoS-like resource abuse, where lazy clients don't cache things properly and just hog server resources. There are ways to limit things like that, but they aren't always simple. Also, like I said before, Signal isn't in the API business.

[1] https://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse
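
For illustration, the simplest of those limits is a per-client token bucket; this is a generic sketch, not anything Signal's server is known to run:

    // A minimal token-bucket rate limiter a server could apply per client
    // to blunt the resource abuse described above.
    class TokenBucket(private val capacity: Long, private val refillPerSecond: Long) {
        private var tokens = capacity
        private var lastRefill = System.nanoTime()

        @Synchronized
        fun tryAcquire(): Boolean {
            val now = System.nanoTime()
            val refill = (now - lastRefill) * refillPerSecond / 1_000_000_000
            if (refill > 0) {
                tokens = minOf(capacity, tokens + refill)
                lastRefill = now
            }
            if (tokens == 0L) return false
            tokens--
            return true
        }
    }

    fun main() {
        val perClient = TokenBucket(capacity = 5, refillPerSecond = 1)
        // The first 5 rapid requests pass; the rest are rejected until tokens refill.
        repeat(8) { println("request $it allowed: ${perClient.tryAcquire()}") }
    }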


People who do click on the wrong "Signal" probably clicked on many other wrong things, too. Their security is nonexistent anyway, even if they accidentally use the official Signal build.


So because a user clicked on a wrong thing we should completely abandon them and ignore all their security and privacy needs?


Nope, but we should not limit the freedom of users who know what they are doing because of some who do not.


I heavily disagree. If we want to protect as many people as possible, we will have to limit the freedom of some users to do as they please, because other users will accidentally or intentionally abuse the freedoms given.

Not limiting the freedom of users who know what they are doing, at the expense of everyone else, maximizes the protection and freedom of a small group, not of society as a whole.


> People who are into more secure communication do not randomly click on anything

Even if that weren't wrong, it would be a fatal limitation for a social app that relies on network effects. Even if you were actually superhumanly capable of not making mistakes, you'd end up using the apps that everyone else you know is actually on.


" People who are into more secure communication do not randomly click on anything."

That's not the sole market of people who would try Signal, though.


He has previously explained his reasoning: https://github.com/signalapp/Signal-Android/issues/127#issue...


"why he does not maintain a F-Droid repository for people who do not trust google"

Are you paying him to do that? No? Well, there you go. It's more work, for what appears to be very little benefit.


As another person said:

> The Signal Foundation has 50 million dollars.


None of it came from the phone sysadmins. Rather, it came from a benefactor who wanted to work with Moxie and Trevor.


Doesn't matter. F-Droid is more work to support.


I would pay a share, and I'm sure others would too. Unless he has set a price and there were no takers, that argument is fallacious.


I am not willing to support Signal if they are unwilling to federate the system.

Sure, federation doesn't allow them the flexibility they'd like to have to move forward, but in a way it won't be their fault if federated servers don't keep themselves up to date when there's a major protocol change and get temporarily split from the pool.


It doesn’t matter whose “fault” it is — the issue is the practical effect that will result. This is exactly his point with the SMTP and GitHub analogy from the “moving ecosystems” post. Why do you think the split would be “temporary”?

As long as “email” is a thing, as in “just send me an email”, and it’s a federated set of randomly updated servers, “email” will never have end-to-end encryption, because the first version of SMTP didn’t have it, and the user will still expect to send messages to a server running that version.

Similarly, if “Signal” is going to be a thing, as in “contact me on Signal”, the entire network effectively has to operate at the level of the least up-to-date server — otherwise it’s not one network, and the product is therefore unreliable. But there’s no way to enforce that all the federated servers update themselves in any amount of time.
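
A toy sketch of that least-common-denominator effect; the server names and version numbers are made up:

    // A feature is only usable end-to-end if every server on the path
    // supports it, so the network's effective protocol version is the
    // minimum over all federated servers involved.
    data class Server(val name: String, val protocolVersion: Int)

    fun effectiveVersion(path: List<Server>): Int =
        path.minOf { it.protocolVersion }

    fun main() {
        val path = listOf(
            Server("alice.example", protocolVersion = 7),
            Server("bob.example", protocolVersion = 3),  // not updated in years
        )
        // Anything that shipped in version 4+ is unusable on this path.
        println("network operates at protocol v${effectiveVersion(path)}")
    }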


Also, it's not like Moxie hasn't discussed this. At length. GP deliberately ignores Moxie's reasoned responses to exactly this issue.


Try to look at it from a user perspective, though. It's not a question of whose fault it is when something doesn't work, it simply doesn't work.

Signal is successful in large part because it provides complex functionality (secure messaging) in a package that "just works". Federation complicates that significantly.


Your "in a way" is totally opposed to what normal users see. They will blame OWS when their Signal client that hasn't been updated in two years won't work.


Completely agree. I have had several people move from various chat apps and texting to Signal largely because of the iMessage-like features. Ultimately, I am now able to have more secure discussions with largely non-technical users, which is good for everyone.


It’s not surprising. The people most interested in encryption want to trust nobody. That’s the goal of strong encryption, but as Ken Thompson says, it’s not possible. Between the US Government and Moxie, I’d trust Moxie, but I’d rather trust neither.


Agreed, though personally I find any support of animated gifs in the year 2010 and beyond to be counterproductive.


How much non-geek non-privacy-activist socialising have you done via Signal?

Without emoji and animated gifs, I suspect 70% of my Signal contacts wouldn't use it at all. It's hard enough as it is to convince some of my friends to use it: "Can't I just Facebook message you?"

For me, amongst my group of friends, it seems Moxie is making all the right security/usability tradeoffs.

If you don't trust PlayStore, it seems not much of a jump to say you also shouldn't trust Android.

If you're _rightly_ that concerned (and I'll note that Snowden recommends Signal, so I wonder what it is you're up to that makes you more of a nation-state target than him), I don't have a clue what your options are - I suspect they start with "don't use the internet at all"...


>I wonder what it is you're up to that makes you more of a nation-state target than him

I don't know what you're talking about. The entire point of my comment was that I don't like animated gifs. They're distracting, bandwidth-intensive, and could easily be replaced by a dozen better image formats.


Sorry - I hadn't intended to aim that accusation at you specifically, but at people who are "more of a nation-state target" than Snowden. Poor sentence construction on my part there...

And while I agree with you about animated gifs, I understand WhisperSystems' reluctance to become the force that pushes non-geek, non-privacy-activist users (whom they and I hope will widely adopt Signal) toward WebP or FLIF or whatever, as alternatives to Giphy or wherever else they're finding their reaction gifs and topical memes and funny cat-riding-a-roomba animations. That's how a _huge_ percentage of users want to communicate with each other. Moxie is trying to give them a secure way to communicate how they want to, not attempting to force them into new ways of communicating that have boring justifications like "bandwidth saving" or "animation format technical merit" as the only reasons why they can't have vast libraries of funny animations to send their friends... If they can't quickly reply with RuPaul doing fingersnaps in Signal, they'll go do it on Facebook instead.

For that reason, I'm of the opinion that _not_ supporting animated gifs is significantly more counterproductive, if you're trying to become "the secure messaging mechanism the whole world will use" or if (like me) you'd like more and more personal communication to be exclusively between the participants, and not include advertising networks and data miners and sentiment analysers (and, yeah, law enforcement and government bureaucracy)...


> I'd like to see Signal replace WhatsApp

Suppose one day we realise something nefarious was indeed going on. Then we'll rue that we didn't act back when people were saying something was amiss.

There is one line in the article that says it well:

> Truly secure systems don’t require trust.

edit:

I have supported Matrix and Firefox among others (both in code, as an Android dev, and with modest donations; I stopped using Firefox after Pocket). But no, not Signal. I'd wait for federation (if it ever comes).


I would agree with you if only Signal didn't ask for so many permissions on my phone.


Can you elaborate? I just set it up on a new phone yesterday, and all it asked for on mine was: contacts (makes sense), files (to send pictures, files, etc.), receiving and sending texts (if you want it as your default texting app / to validate your phone number via SMS), and access to the camera and microphone (for calls and in-app photography).

These all seem like reasonable permissions for the features available.


Is sharing your contacts mandatory in Signal? I don't use WhatsApp because I can't without sharing them.


No. That’s the primary reason I use Signal rather than WhatsApp. With Signal, you just need to give it the contacts that you want to chat with via Signal.


No, it isn't.


It would be great if it asked for those permissions when it needed to do something; for example, asking for mic permission at the point you make your first voice call. I see some apps going in that direction and it's refreshing.

edit: Apparently, Signal does this for some things? See comment-replies.


On Android that's version-dependent.

Older Android versions only had the idea of the app declaring "I need to be able to use your Camera, read your Contacts, and make $$$ phone calls", and then either you pick "No" and don't get the app, or you pick "OK". This more or less railroads users into pressing "OK", except for the most security conscious, who go without the app.

A few releases back, Google had an unofficial feature that let you switch off permissions an app had, and it would get some dummy replacement: e.g. if it had Contacts access but you switched that off, it would see no Contacts at all; if it had Camera access but that was switched off, it would always be told your Camera was busy in another app. Once word about this hidden feature got out, Google disabled it.

Recent releases (certainly on my Nexus 5X, for example, which is a while back) enable an app to ask at runtime. If you say "No", the app gets a second chance to explain itself, and then if you keep saying "No" the feature is just disabled and Android stops prompting you. The app might not work after that, of course. Like the disabled older feature, the Settings pages for apps let you undo previous authorizations; again, this may make certain apps malfunction: a map app with no GPS is merely crippled, but a "barcode scanner" with no Camera access is junk.

However, apps built for an older Android of course don't prompt (the older Android can't handle it), so for them you still have to make the decision at install time.


The permissions system that Android apps use is entirely dependent on which API version you target. Last I understood, if you made a new app today and purposely chose to target an old API version, you could force it to use the "all or nothing", user-hostile permissions query you described.


I believe Google is requiring developers to target a recent API version in order to push updates to the Play Store.


Yeah: as of 1 Aug for new apps, or 1 Nov for updated apps, you need to "target" API level 26 (Android 8), but you can still set your minSdkVersion to whatever you like (we're still publishing updates with minSdkVersion 16, i.e. Android 4.1).

https://developer.android.com/distribute/best-practices/deve...

@mrguyorama - this means you can force the old land-grab, user-hostile permissions onto people running old Android versions, but you cannot force them onto users running Android 8.


It does. Signal supports runtime permissions[1] and it requests them dynamically while you are using the app (e.g. the camera permission prompt appears the first time you try to take a picture).

1: https://developer.android.com/training/permissions/requestin...
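
For reference, a minimal sketch of the runtime-permission flow the linked documentation describes, assuming an androidx-based app; this is illustrative, not Signal's actual code:

    import android.Manifest
    import android.content.pm.PackageManager
    import androidx.appcompat.app.AppCompatActivity
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat

    // Ask for the camera permission the first time the user taps the camera
    // button, instead of demanding everything at install time.
    class ChatActivity : AppCompatActivity() {
        private val cameraRequestCode = 42

        fun onCameraButtonTapped() {
            if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED) {
                openCamera()
            } else {
                // Shows the system dialog; the answer arrives in the callback below.
                ActivityCompat.requestPermissions(
                    this, arrayOf(Manifest.permission.CAMERA), cameraRequestCode)
            }
        }

        override fun onRequestPermissionsResult(
            requestCode: Int, permissions: Array<out String>, grantResults: IntArray
        ) {
            super.onRequestPermissionsResult(requestCode, permissions, grantResults)
            if (requestCode == cameraRequestCode &&
                grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED) {
                openCamera()
            }
        }

        private fun openCamera() { /* launch the in-app camera */ }
    }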


Last time I had it installed it scanned my contacts a half dozen times while I slept. I removed it in the AM.


The Contacts permission is completely optional, and contact information is never stored: https://signal.org/blog/private-contact-discovery/


This seems like smoke and mirrors to me:

  Traditionally, in Signal that process has looked like:
  
    The client calculates the truncated SHA256 hash of each phone number in the device’s address book.
    The client transmits those truncated hashes to the service.
    The service does a lookup from a set of hashed registered users.
    The service returns the intersection of registered users.
The phone number space is really not that big.
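
To see why, a sketch of the brute-force inversion being gestured at: precompute the truncated hashes for a number range, then invert any observed hash by table lookup. The prefix, the 4-digit suffix space, and the 10-byte truncation are arbitrary demo choices:

    import java.security.MessageDigest

    // Truncated SHA-256 of a phone number, hex-encoded. The truncation
    // length here is an assumption for the demo.
    fun truncatedHash(phone: String, bytes: Int = 10): String =
        MessageDigest.getInstance("SHA-256")
            .digest(phone.toByteArray())
            .take(bytes)
            .joinToString("") { "%02x".format(it) }

    fun main() {
        val prefix = "+1415555"  // enumerate a tiny 4-digit suffix space
        val table = (0..9999).associateBy(
            { truncatedHash(prefix + "%04d".format(it)) },
            { prefix + "%04d".format(it) })

        val observed = truncatedHash("+14155550123")  // hash the server would see
        println(table[observed])                      // recovers +14155550123
    }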


They acknowledge that in literally the next paragraph, and the entire post is about how they improved on that old state of things. How is that "smoke and mirrors"?


Right, my bad. This does look like a sensible solution.


Is this solution deployed in production? The source code says "beta".


To be fair, it does that to match you with the contacts who are also on Signal, just like WhatsApp or Telegram do. Though you can use Telegram with usernames only, without the phonebook permission, you still need your actual phone number to sign up.


This is a really poor post. Lots of in-the-weeds, long-running-feud, grudge-holding snark, but no real examination of the issues at hand. And his assertions don't make sense in any case. You can't trust the Google Play Store because a malicious actor might have swapped out the trusted roots on you? But then why should we trust F-Droid's signing infrastructure?

Then he gripes that the posted APK has to be manually checksummed to use it. If you are truly paranoid, trusting a checksum you get from the same page you get a binary is as secure as ignoring the checksum altogether. But why would you trust a hidden signature process you can't see any more? How do you know your F-Droid binary was secure?

But worst of all is this pointless assertion: "Truly secure systems don’t require trust."

There are no truly secure systems. Malicious actors could replace your Matrix app with a lookalike clone. Your phone could have a hidden keylogger built into the OS. Or the hardware. The person's phone on the other end of your communication could have been compromised. You could be being monitored by all sorts of undetectable means.

Perfect security is an unattainable goal, but good security requires acknowledging and enabling trust to play a role in the protocols and systems we develop.


The post is literally responding line by line to a post from 5 years ago. Very poorly thought out article.


But we have to trust that Moxie is running the server software he says he is. We have to trust that he isn’t writing down a list of people we’ve talked to, when, and how often. We have to trust not only that Moxie is trustworthy, but given that Open Whisper Systems is based in San Francisco we have to trust that he hasn’t received a national security letter, too (by the way, Signal doesn’t have a warrant canary). Moxie can tell us he doesn’t store these things, but he could. Truly secure systems don’t require trust.

We have at least one data point that says that Signal stores exactly two integers about you, or did when the subpoena was issued: https://www.aclu.org/open-whisper-systems-subpoena-documents

Things can always change, but that's evidence submitted in court under penalty of perjury, which is a fairly strong claim.


I am happy to see I am not the only person in the world who feels this way about Signal.

The interesting thing is that I Ctrl+F'd this page for Wire and found nothing, even though this comment is about what made me switch from Signal over to Wire: to date, it's the only instant messenger that has open-sourced both the server and the clients. (OK, the article also mentions Matrix.)

I admire Wire for a number of reasons, but certainly open-sourcing all their code is one of the main ones. (The other is... Haskell! And also Rust.)

And just to point out: not only does Wire use the Signal protocol, they have contributed bug fixes to its library implementation. And their web interface is very good!

Oh, yes... And they are not based in the USA.

EDIT: I am not affiliated with Wire, but just a happy customer. :)


> to date, it's the only instant messenger that has open-sourced both the server and the clients.

Signal's server code is open source as well: https://github.com/signalapp/Signal-Server

And apparently the client can verify that the server is running that code: https://signal.org/blog/private-contact-discovery/#trust-but...


The server and the contact discovery service are two different things. The remote attestation covers just a small part of the code (SGX code has a lot of constraints, including size). Also, I don't know if it has changed, but the new contact discovery service was not in production last time I checked.


Note: my understanding of SGX is that this still requires trusting Intel. Still, it's far better than nothing.


It also requires that there be no hardware side-channel exploit on the device running SGX... such as the variants of Spectre that have been trickling out every few weeks all year... so yeah: Moxie's trust of SGX is pretty damning.


As I understood it, the "trust" comes down to: "it has some limits, but it's strictly better than nothing". I wouldn't call that damning.


Signal, in an indirect way, requires 1/6th of the world's population to part with their biometrics. It requires a phone number, and in India getting a phone number requires Aadhaar, which is a centralized, biometrics-based ID (photo, fingerprints AND iris at the moment).


Are you seriously telling me that everyone in India is using the phone number which matches their biometrics? Lol.

Besides which, there are well-established ways to get a Signal number online, like Google Voice or VoIP telephony companies.


I see this as a good thing, because it effectively rate-limits the creation of spam bots.


I think there are better solutions for that such as CAPTCHAs, rate limiting, etc. We don't need to resort to giving out biometrics to solve spam bots.


Ya really think phone numbers are hard to get?


Since there is someone above claiming that getting a phone number in some countries "requires" biometric identification, I'd say yes.


As usual with restrictions and regulations, phone numbers are easy for the bad guys to obtain and not easy for the good guys.


I was on Wire for a time and even got other people on board. Then the experience suddenly degraded out of nowhere. The desktop and web clients would never finish syncing. Some other stuff I don't remember. I really liked the app though.

I mean the unfortunate reality of chat programs is that there are so many that when I'm having weird problems I'm not gonna spend time opening issues on GitHub and sending logs; I'll just go back to what works. That's even more true for my non-technical friends.


This. User experience is everything.

In my case, not Wire, but Signal. My Signal contact list is exactly two people long. Trying to get people to move from WhatsApp elsewhere is hard, especially if some of them additionally installed Threema a few years back when Facebook bought WhatsApp.

But at least it's two people. With one of them I was conversing regularly on Signal, so much so that she even preferred it to WhatsApp.

One day she messaged me on WhatsApp again. She said Signal had failed to show notifications for my messages a few times now, and she was fed up.

I never had that problem. Maybe she misunderstood or misconfigured something. But it doesn't matter. Signal is dead to her.

It's really easy to lose perspective as a developer on how much this usability stuff matters. Sure, we all pay lip service to it, tell each other how bad the UI on some tool is, and feign humility about our missing sense for anything design-related (including UI design).

But we don't really get it.


Wire's mobile apps have been reliable through several years of daily use. It does not require a phone number, and Wire is working on interoperability with other E2E messaging services via the IETF MLS protocol.


I'm happy for you. But that doesn't undo what happened to us.


Have you filed a bug report on GitHub?


I already mentioned in my comment above that when a chat app misbehaves I don't spend time doing that and that my friends are even less likely to. It might seem unfair to you but that's just how it is. If it doesn't work I move on. Best of luck to Wire though.


I love Wire. I never have any issues with the desktop client or the mobile app. Even though it's Electron, it seems to be extremely focused on security. Unfortunately, it suffers from the same metadata issues as Signal. If you really need higher-security messaging, p2p is your best bet, but then you may face correlation attacks by ISPs and whatnot... Maybe we all just need to get our ham radio licenses :)


Well, Electron is a security trash fire. I wouldn't use it for anything that needs to be remotely secure. Hopefully it will be replaced by WebAssembly in the near future, though that will likely have problems too.


How is ham radio secure? (I don't know anything about it.)


It's not secure by definition, since encryption is forbidden in ham radio.


An open-source server is certainly a step up from Signal, but since Wire doesn't support federation (either in their ToS or in practice) I'd favour Matrix.


> An open-source server is certainly a step up from Signal

https://github.com/signalapp/Signal-Server

You are spreading a lot of incorrect or misleading information about Signal in this thread. That makes it difficult to assume that you're arguing in good faith here.


I stand corrected - though, as another reply said, it makes little difference if you can't actually use a forked server in practice.

I don't know what I could say to convince you I'm just an ordinary person concerned about my privacy, but ultimately it doesn't matter: you should definitely consider the possibility that I'm a bad actor and take nothing on faith. Equally, you shouldn't trust that Marlinspike hasn't been compromised either.

A little thought experiment: Put yourself in the NSA's position in 2013. GPG has been out there for years and, despite your best efforts, you can't break it directly when users follow proper security practices. (You have to compromise those users' computers instead, and that's vastly more expensive; every time you use one of your rootkits or exploits you run the risk of burning it, so they're reserved for high-value targets). The world is suddenly a lot more interested in privacy, and while popular culture doesn't grasp the intricacies of key exchange or forward secrecy, there are enough cryptography experts around that any obvious downgrade from GPG will be noticed and picked up on (this is just after the conclusive failure of your Dual_EC_DRBG efforts). What do you do? How do you get the public to accept something easier to compromise?

My answer is: you find a different front to attack GPG from. You talk up different kinds of attackers. You dangle a new, desirable security property that GPG doesn't have, and a theoretically clean construction - and then you compromise the metadata subtly, down in the weeds of usability features, letting you identify the higher-value targets. You get people used to using a closed-source build that auto-updates, and have a canned exploit ready (a compromised PRNG or similar) to use on those targets. And you get people to enter their phone numbers so that you can always track their location and what hardware they're running if you do have to attack their device more directly.

Maybe I'm being paranoid, but it seems distinctly odd that we see such a push behind an app that compromises so many features that were previously thought essential to security, just as the move for encryption is finally gaining momentum.


GPG has an infinitesimally small user base. Many tech savvy users still struggle to use it correctly. Moxie has explicitly stated that his aim is not to build the perfect secure messenger app, but a messenger app that provides the greatest amount of security to the greatest number of users. He has explicitly stated that he has made some design decisions that slightly compromise the ultimate security of Signal, but are necessary to establish a wide user base and avoid traps that could drastically compromise the security model because of user error.

Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.


> GPG has an infinitesimally small user base. Many tech savvy users still struggle to use it correctly. Moxie has explicitly stated that his aim is not to build the perfect secure messenger app, but a messenger app that provides the greatest amount of security to the greatest number of users.

But what's the threat model where Signal makes sense? For a less-than-nation-state attacker, basic TLS as virtually all messengers support is surely adequate. For a nation-state attacker, phone-number-as-ID is a bigger vulnerability than anything Signal helps with, and central servers mean that Signal can simply be blocked outright in any case. If we're talking about, say, Turkey cracking down on protesters, they would probably rather those protesters were using Signal (where arresting one means you get the phone numbers - and therefore locations - of all their friends) than the likes of Facebook or Discord or what-have-you.

> Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.

I'd be fine with that if Marlinspike didn't also trash-talk those more secure tools.


There are nation-state attackers and nation-state attackers. Most oppressive regimes don't have the technological resources to perform complex attacks on relatively tough systems, but they can perform pervasive monitoring on layer 1, use dodgy certificates to undermine TLS and bribe or coerce corporate actors. Most people in functioning democracies aren't particularly worried about becoming the target of the full might of a three-letter agency, but they might be worried about bulk collection via L1 or PRISM intercepts.

Signal is a vast improvement over SMS, plaintext email or any commercial messaging application, but it's no more difficult to use. It's relatively foolproof, in that user error can't fatally undermine the security model in most cases. It's not perfect, but it's easily the most secure chat app that I could confidently persuade non-techies to actually use. A highly secure app that you don't know how to use offers you no security at all.


> they can perform pervasive monitoring on layer 1

Indeed (particularly as telecoms are often state-owned in those regimes), which is what makes phone-number-as-ID such a bad idea.

> use dodgy certificates to undermine TLS

Difficult in these days of certificate transparency and HPKP.

> bribe or coerce corporate actors

If that's your worry surely you want to rely on a big corporation rather than Signal. Look at e.g. Brazil having to block WhatsApp entirely because Facebook wouldn't play ball with them. Facebook has deep pockets that mean they can afford to do that kind of thing.

> Signal is a vast improvement over SMS, plaintext email

Agreed

> or any commercial messaging application,

Not convinced that there's a significant improvement here. Plenty of commercial messaging applications have encryption. If the server is under an attacker's control then you're vulnerable, but I'm not convinced that isn't the case with Signal too.


The threat model is "the backend server has a security flaw and gets exploited, dumping a bunch of information about my chats" or "the backend server is run by a company that wants to use the contents of my messages for analytics and I don't want that" or "a rogue employee with access to databases but not enough access to ship rogue code wants to read my messages".


The server might as well be closed source. We have no guarantees that Moxie is actually running this in production, and he refuses to federate with third-party servers.


It's funny you put it that way. You're right: the server might as well be closed source. That is the point of end-to-end encryption.


This is addressed in https://signal.org/blog/private-contact-discovery/ – using Intel SGX, it's possible for the clients to verify that the server is running the code it should be running. I'm not sure whether this is already deployed, but it refutes any claim that Signal isn't serious about your concern.

I don't see how federation is related to this at all. We know you're bummed about it; you don't need to inject it into every subthread.


> This is addressed in https://signal.org/blog/private-contact-discovery/ – using Intel SGX, it's possible for the clients to verify that the server is running the code it should be running

Just the code inside the enclave, and that's a very small amount of code, definitely not the entire server.


SGX is not a magic bullet; it's only part of a secure system. It's also come under some fire; check out this paper:

https://www.blackhat.com/docs/us-17/thursday/us-17-Swami-SGX...

SGX alone cannot solve this problem. Even in the idealized case, you can sniff traffic on the router to find out which user IPs are talking to each other and when.


Did you read that work? I did: I was on the review board that made the decision to accept it for Black Hat. Could you map Yogesh's research to something Signal is actually proposing to do and explain in any detail what the actual threat you're talking about is? Thanks.


I could, but I'm probably ill informed. I want to hear your specific rebuttal to this:

>Even in the idealized case, you can sniff traffic on the router to find out which user IPs are talking to each other and when.


That's a string that appears nowhere in Yogesh's research. Are you at this point conceding that the link you cited has nothing to do with this thread?


I concede that this:

>I did: I was on the review board that made the decision to accept it for Black Hat.

Probably makes you more qualified to talk about SGX than me, so yes, I concede that the paper may not be relevant because your understanding of it is probably better than mine.

With that, I asked you a more fundamental question, given that you are knowledgeable about this and may be able to provide an answer.


What I'm asking is whether you had a coherent argument about weaknesses in SGX based on Yogesh's research that would keep it from functioning in Signal's contact discovery scheme, or whether you saw the letters "SGX" in a thread and posted the first link you could find about issues with SGX. Maybe you should find a better link? They're out there.


There is a German privacy guide I read that raises some real issues with Wire, most of which I agree with [0].

I will Google translate it for you (ironic):

> "10/06/2017: Wire.com operational Security

> Wire.com is referred to as a new star among crypto messengers. I briefly looked at the (experimental) Linux version of Wire.com and found some significant security flaws:

> Manning's Bug: Wire.com has good end-to-end encryption based on Axolotl. But the chats are all logged unencrypted (!) on the hard disk of the computer. The logging cannot be switched off.

> The unencrypted storage of encrypted communication is not a bug but an epic FAIL!

> Access data: the credentials (account name, password) for the Wire.com account are also stored unencrypted somewhere on the hard drive. On startup, the person in front of the screen does not have to authenticate but is automatically connected to all accounts.

> This is not a bug but a FAIL!

> Remote Code Execution: The Linux Wire client contains a lot of JavaScript code. Updates to the JavaScript code are downloaded and executed via HTTPS from the Amazon cloud. After a superficial examination, the authenticity of the executed code does not appear to be additionally cryptographically verified (this is a guess, not checked in the code!). Security therefore depends solely on the HTTPS encryption. The HTTPS encryption of the contacted Wire servers app.wire.com, prod-assets.wire.com and prod-nginz-https.wire.com does NOT meet the BSI's requirements for secure HTTPS encryption. These servers are Amazon cloud servers and CloudFront servers (not their own infrastructure).

> DANE/TLSA or HPKP are NOT used to validate the SSL certificates of HTTPS connections. In addition, no CAA record is defined in the DNS, which has actually been mandatory for HTTPS for a month now. The security of the transport encryption between client and server thus does not correspond to the achievable state of the art.

> Powerful attackers could use fake but valid SSL certificates to man-in-the-middle the communication to the Wire servers and, in combination with the remote code execution, possibly also attack the end-to-end encryption (assumption!), and there are enough powerful attackers who want to attack the encryption of crypto messengers. The domain wire.com is not DNSSEC-signed. Instead of the privacy-friendly OCSP stapling, OCSP GET is used, and several CAs are contacted to verify SSL certificates via OCSP. OCSP GET can easily be tricked, as M. Marlinspike demonstrated in 2009. The client also contacts third-party servers that are not under the operators' control (maps.googleapis.com, images.unsplash.com) to download content.

> Conclusion: after a small, superficial test, the operational security of Wire.com is not (yet) suitable for security-critical applications. In particular, whistleblowers should learn from Manning's example and not use it.

> Disclaimer: this is NOT an audit but a short test of the Linux version."

[0] https://www.privacy-handbuch.de/diskussion.htm


To be fair, they started pinning their certificates about 3 years after claiming their VoIP stuff was encrypted. So there was only A FRIGGING THREE YEAR WINDOW when anyone in your root cert db could MITM your VoIP connection. Not something friendly neighbour Hacker Joe could do, but definitely a nation state or malicious employer.


> We have at least one data point that says that Signal stores exactly two integers about you

For people wondering what they are:

> "The only information responsive to the subpoena held by OWS is the time of account creation and the date of the last connection to Signal servers for account [redacted]. Consistent with the Electronic Conununications Privacy Act ("ECPA"), 18 U.S.C. § 2703(c)(2), OWS is providing this information in response to the subpoena."

Their response to the subpoena then goes on to object to its overly broad scope, which asked for things that require a court order or a search warrant. They also object to the scope of the nondisclosure order included in the subpoena.


Is it even possible for the Signal servers to keep track of who you talk to, when, and how often? I was under the impression that those two data points they stored were the only thing they _could_ store, because the rest is sent to the servers encrypted.

Edit: Yes, apparently they have a method of doing private contact discovery and, IIUC, even a method for the client to verify that the server is running the source code they expect: https://signal.org/blog/private-contact-discovery/#trust-but...


Signal has to be able to route messages from user a to user b through their centralized server, so at a minimum they are capable of logging "This user sent a message that we relayed to this other user".
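To illustrate how little a relay needs to see plaintext in order to collect that, a sketch (illustrative, not Signal's code):

    import time

    log = []

    def relay(sender, recipient, ciphertext):
        # The server can't read the ciphertext, but routing it requires
        # knowing both endpoints -- which is exactly the metadata at issue.
        log.append((time.time(), sender, recipient, len(ciphertext)))
        return (recipient, ciphertext)  # hand off to delivery

    relay("+15550001", "+15550002", b"\x8a\x01 opaque bytes")
    print(log)  # who talked to whom, when, and roughly how much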


The signal server is responsible for doing the trust-on-first-use key exchange and letting you send offline messages to your counterparties. If your client could somehow be tricked into thinking that the other side was offline and you'd already sent them a large number of messages while they were offline, or that this was the first time you were talking to them, your client would leak a lot of metadata to the server. And given that the server is in charge of routing your messages to your counterparties, it seems like there's a lot of potential for a hostile server to trick the client in that way.

Theoretically it might be possible for a sufficiently paranoid client to cover all the bases. But it's certainly a huge attack surface.
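The standard mitigation for the key-swap half of this is that clients pin identity keys on first use and surface any later change ("safety number changed"). A minimal sketch of the idea, with an in-memory dict standing in for the client's local store:

    # Trust-on-first-use pinning sketch (illustrative, not Signal's code).
    pins = {}  # contact id -> identity key first seen for that contact

    def check_identity_key(contact: str, key_from_server: bytes) -> bool:
        if contact not in pins:
            pins[contact] = key_from_server  # first contact: trust and remember
            return True
        # A hostile server can hand out a new key, but not silently:
        # the client must warn the user before continuing.
        return pins[contact] == key_from_server

    assert check_identity_key("alice", b"key-A")      # first use: pinned
    assert check_identity_key("alice", b"key-A")      # same key: fine
    assert not check_identity_key("alice", b"key-B")  # swapped key: flagged

It doesn't help with the "pretend the recipient is offline and drain the prekeys" style of trickery, though, which is the harder problem.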


If Open Whisper Systems had received a national security letter requiring them to collect more information and keep it secret that they were doing so, how would you expect them to have responded to that subpoena?


This is a good question, because it illuminates design decisions that Signal has made that confuse nerds. For instance: Signal only recently got user profiles --- user profiles! --- because they took their time figuring out how to deliver them without sacrificing privacy. Take some time to look at how they implemented Giphy sharing to get an idea of what I'm talking about.

Any information the Signal client reveals to the server is, indeed, something the USG could make a legal claim on, probably even if Signal's server code doesn't currently collect it. The Signal team has been sounding that alarm for years: when you look at a chat program with fancy whiz-bang features, consider what those features expose (traffic-analytically, even). "Maybe have fewer features until we figure this out", Signal says.

The market does not agree, and that has been rough for Signal, and they deserve credit for the stand they are taking on it.


An NSL can't require you to collect new business records. It can only compel you to disclose business records that you already have.

This is beyond the legal authority of an NSL.


> This is beyond the legal authority of an NSL.

If the legal authority can't be challenged in public, how are you so sure this legal authority hasn't been skirted plenty? In an opaque system, what they can and can't do is only theoretical. Only sometimes are things disclosed well after the fact and often only in aggregate (such as numbers about how many NSLs are greenlit). This is what the author means by no trust required. They can't be asked to subvert it, even via extralegal means.


The Core Secrets leak said the FBI "compels" U.S. companies to "SIGINT-enable" their products if they don't take money. SIGINT-enable means installing backdoors. So, yeah, they can. They also do this with classification orders mandating secrecy, from organizations and people that are immune to prosecution. In the Lavabit case, they wanted a device attached to the network to do whatever they wanted, with the company ordered to lie to customers about their secrets still being safe via the encryption keys. That's always worth remembering for these discussions. Plus, most companies or individuals won't shut down their operation to stop a hypothetical threat.

So, you have to assume they'll always get more power and surveillance over time via secret orders if there are no consequences for them demanding it, while people on the other side can be massively fined or do time for refusing. Organizations focused on privacy protection simply shouldn't operate in police states like the U.S.


If for some reason those methods fail, they can use BULLRUN, which has a much larger budget[1] and is specifically tasked with "defeat[ing] the encryption used in specific network communication technologies"[2].

[1] "The funding allocated for Bullrun in top-secret budgets dwarfs the money set aside for programs like PRISM and XKeyscore. PRISM operates on about $20 million a year, according to Snowden, while Bullrun cost $254.9 million in 2013 alone. Since 2011, Bullrun has cost more than $800 million." ( https://www.ibtimes.com/edward-snowden-reveals-secret-decryp... )

[2] https://en.wikipedia.org/wiki/File:Classification_guide_for_...


That is a major accusation; can you provide source(s) to read more about this claim?

It is at odds with known cases, such as the fight with Apple over iPhone encryption.


It's not at odds if you know how they work. U.S. LEOs are multiple organizations with different focuses, legal authority, and so on. They also regularly lie to protect illegal methods and activities. Let's look at some data.

Now, the first indication this isn't true was Alexander and Clapper saying they didn't collect massive data on Americans. If they did, then by your logic of action vs. capability being contradictory, they could've solved a lot of cases, right? Yet the Snowden leaks showed they were collecting everything they could: not just metadata, not just on terrorism. And they were sharing it with various LEOs. So, they already lie at that point to hide massive collection, even if it means crooks walking.

Next, we have the umbrella program called Core Secrets. See Sentry Owl or "relationships with industry." It says Top Secret, Compartmented Programs are doing "SIGINT-enabling programs with U.S. companies." In same document, even those with TS clearance aren't allowed to know the ECI-classified fact that specific companies are weakening products to facilitate attacks.

https://theintercept.com/2014/10/10/core-secrets/

https://theintercept.com/document/2014/10/10/national-initia...

For the Lavabit trial, see Exhibits 15 and 16 for the defense against the pen register. Exhibit 17 makes clear that the device they attach records data live, and that they claim constitutional authority to order it. They claim only metadata, but they've lied about that before. Exhibit 18 upholds that the government is entitled to the information, that Lavabit has to install the backdoor, that the court trusts the FBI not to abuse it, and that they'll all lie to Lavabit customers that nobody has access to their messages (aka the secrecy order about the keys).

https://edwardsnowden.com/wp-content/uploads/2013/10/redacte...

That the judge asked for a specific alternative was hopeful, though. I came up with a high-assurance, lawful-intercept concept as a backup option for the event where there was no avoiding an intercept but you wanted provable limitation of what they were doing.

https://www.schneier.com/blog/archives/2014/09/fake_cell_pho...

They regularly hide what techniques they have via parallel construction or dropping cases.

https://www.eff.org/deeplinks/2013/08/dea-and-nsa-team-intel...

https://arstechnica.com/tech-policy/2015/04/fbi-would-rather...

So, you now have that backdrop where they're collecting everything, can fine companies out of existence, can jail their executives for contempt, are willing to let defendants walk to protect their secret methods, and constantly push for more power in overt methods. In the iPhone case, even Richard Clarke said he and everyone he knows believed the NSA could've cracked it. Even he, previously an ardent defender of the intelligence community, says the FBI was trying to establish a precedent to let them bypass the crypto with legal means in regular courts.

https://www.newsweek.com/former-white-house-offiical-nsa-cou...

So, the questions would be:

(a) can they already do that legally or technically using methods like attaching hardware and software to vendors' networks/apps like in Lavabit trial?

(b) can the NSA or third parties bypass the security on iPhones publicly or in secret? Or did Apple truly make bulletproof security?

(c) did all this change just because the FBI said in a press release that they were an honest, powerless agency hampered by unbreakable security?

I didn't think anything changed. I predicted they'd crack that iPhone the second they were blocked in court. They did. They knew they could the whole time. They lied the whole time. They wanted a precedent to expand their power like they did in the past. That simple.


I’m pretty sure that isn’t true. They can be used to compel you to build interception capabilities.


source?


It's complicated, but that's kinda what happened to the Lavabit "secure" webmail service. When the owner wouldn't install a backdoor, the FBI sought the private encryption keys so they could MITM the whole site.

https://www.newyorker.com/tech/elements/how-lavabit-melted-d...


That isn't building an interception mechanism, it's revealing some private information Lavabit held.


The big Apple vs FBI high-profile lawsuit, for one. Granted, that was a telegraphed precedent-seeking exercise: "accidentally" losing access to the work phone after the terrorists had destroyed their personal phones before the attack.


That was not an NSL.


True - they are less publicly tested, but it is suggestive in its own way. If they could just NSL their way to access, why bother with precedents? It is wild speculation, but that is what lack of transparency has wrought.


I really can’t say. Take for what it’s worth.

Edit: I may be thinking of a FISA order as opposed to an NSL. Doesn’t matter though, obviously the concern is that they would be served with whichever does allow that.


Does the NSL have any legal authority? Is it actually bound by laws? From what I understand, an NSL is "do what we tell you, because we're the NSA".


precisely the way they did, which was to challenge the gag order, and win. That’s where the documents above come from, in fact.


NSLs don’t allow that.


If I read that document correctly, it's more accurate to say that Signal stores at least two integers, rather than exactly two. Signal did not provide information that falls outside certain information categories, and they say that a court order would be needed to force them to disclose any of that information (if they possess any, which they deny).


But they did deny possessing any of that information in the subpoena reply, so if you trust that they are replying truthfully, it is exactly two.


So, nobody has ever lied in a subpoena reply before?


Uhm... this is what would happen if it was a govt job


> P.S. If you’re looking for good alternatives to Signal, I can recommend Matrix.

Yes, if you're looking for alternatives to Signal, you should totally use a solution that hasn't rolled out end-to-end encryption by default[0]. /s

...and that only two clients have implemented so far, out of 50ish that they list on their website.

[0] https://matrix.org/docs/guides/faq.html#what-is-the-status-o...


That ticks me off too. I'd rather suggest Tox.

For all the hate it gets, it only has one mode of communication: end-to-end encrypted, to your contact (people's addresses are pubkeys), and with forward secrecy.

Most "secure" IM systems fail this basic test. When proper end-to-end encryption is optional, guess what happens.


Well. I'd rather not have anyone suggest tox. The whole "we use nacl so we are safe" attitude from a couple of years ago seems to still be around. Good on them for using nacl. A shame they don't seem to realize that you can write bad crypto with it.

The whole forward secrecy issue still seems to be unresolved. They have session keys, but other than that there is no rekeying.

And then we have the whole issue of it relying on supernodes for much of its functionality (offline messages, mobile phone clients), which leads to it having a subset of the issues many people have with Signal.


>The whole "we use nacl so we are safe" attitude from a couple of years ago seems to still be around.

Citation needed.

>Well. I'd rather not have anyone suggest tox.

I'm repeating myself, but for all the hate it gets, I'm unable to come up with a better suggestion than Tox. There's always some kind of flaw: Centralized, no forward secrecy, end to end encryption optional, no way to verify contacts and so on.


Regarding the citation, it is just a matter of reading how the core devs reply to potential security issues. Sure, it has gotten better, but it is not too far from the good old https://github.com/irungentoo/toxcore/issues/121

The fact is we mostly know what kinds of attacks are possible on Signal. We know metadata is a potential problem. We know what kind of tradeoffs we get with a centralised architecture. We know how that works and how to mitigate some things. Open Whisper Systems have been clear about what Signal provides and what it does not provide (even the wording around disappearing messages is well chosen not to confuse people).

With Tox there is a lot we don't know. Are the supernodes an attack vector? Why aren't the devs clear about using a less distributed architecture for mobile clients? Why don't they provide proper forward secrecy when they claim to do so? There might be lots of strange properties to doing crypto in a distributed fashion.

I would not recommend it for secure communication until the amount of unknowns is smaller. It is really that simple.


>Sure, it has gotten better, but it is not too far from the good old

So your citation for sticking to their old ways is pointing to some old example? That's not what I was looking for.

>Why aren't the devs clear about using a less distributed architecture for mobile clients?

Because it's not a priority to them; or to me, for that matter. (I don't IM on the phone.)

>I would not recommend it for secure communication until the amount of unknowns is smaller.

I don't see how the likes of Signal or the popular Telegram are any better in that regard. Also, adding more features (your phone suggestion) wouldn't help.


Signal is better because people who know their shit have vetted the design, and quite a bit of research has been done on the protocol. We are quite certain of the security properties of the Signal protocol.

Tox has not had this amount of attention and it is written by people who seemed to sincerely believe that using nacl/libsodium made tox safe. If that is not a huge red flag, then I don't know what is.

Telegram is not encrypted by default, and when it is, it uses a weird protocol, which people warned about from the beginning. The devs were cocky even after a probably unintentional backdoor was found that would have let the server MITM every encrypted communication.

The difference between these three for secure communication is huge.


>Signal is better because people who knows their shit has vetted the design

So basically, argument from authority. No Good.


Then I ask you: as a non-cryptographer who knows enough to understand RSA, how should I verify cryptographic claims? People I find trustworthy claim Signal is good, which is further confirmed by studies and third-party audits. I know this is an appeal to authority, but nobody can personally verify every claim made to them. I believe I have done due diligence.


It helps that they used NaCl. When cryptographers looked at it, they found an issue: People can impersonate your friends if they get their hands on your private key.

This is pretty bad, but so is having had your private key compromised in the first place. It should be fixed next time they do a flag day.

Other than that, and the rekeying issue (keys are only renewed when the client is closed; they should be renewed periodically to make forward secrecy really effective), nothing else bad was found.

Notice that, for a couple of years now, effort has gone into polishing (TokTok) toxcore and documenting things, rather than anything else (e.g. adding fancy features). That's a good thing.

Tox might be understaffed and progressing slowly, but there's nothing fundamentally bad about it. They got a bunch of important things right (really distributed, DHT, public keys as addresses, ephemeral keys for forward secrecy, always end-to-end encrypted). I'm not aware of any other project that got this much right, unfortunately.

If only Tox got more attention, it'd gain developers, donations, and the possibility of getting a proper audit done.


Fwiw, Matrix E2E actually exists in separate codebases in: Riot/Web, Riot/iOS, Riot/Android, nheko, matrix-python-sdk, libpurple (in PR), and shortly in Fractal (thanks to https://gitlab.gnome.org/jhaye/olm-rs etc). So yup, it sucks that it's not turned on by default in private rooms, but we're working away as fast as we can.


I love your project and am really looking forward to replacing a lot of stuff with Matrix-powered chat.

However, I strongly disagree with the author that Matrix should be recommended as a secure communication platform until E2E is stable (and, from what I understand about your project, you'll enable it by default as soon as you consider it stable enough).


For the record, E2E is currently in libpurple master (only decryption not encryption) and in PR on matrix-python-sdk. Just a slight flip there.


my bad; i thought all the python ones had landed now. in practice the work is done.

Also, since writing this list yesterday, another client has got E2E running: Seaglass (a native Cocoa macOS client): https://neilalexander.eu/seaglass/


Last time I tried Matrix (this spring) with a group of peers, my E2E rooms were full of random "failed to decrypt" errors and lots of out-of-band communication: "hey, are my messages working for you today?"

Yes, we had one Synapse server running on a resource-constrained machine that sometimes "fell behind" the rest of the network. I believe that is what had caused such issues. Still, the fact things easily break with server load or network issues means there is something faulty about the protocol. Resilience and reliability are no less important than security.


That mirrors my experience with the matrix.org homeserver. It has not improved much over all the months.


We've been working away fixing causes of 'unable to decrypt' bugs, to the extent that (as a poweruser) I almost never see them. If you see them, please report them via bug report so we can dig into them.

Meanwhile, the performance of the matrix.org server has definitely improved massively over the last few weeks. We hit a performance ceiling from May through mid July, but since July 19th or so we've finally got CPU headroom back again thanks to stuff like https://twitter.com/matrixdotorg/status/1019957885026144257 and https://twitter.com/matrixdotorg/status/1022095383978233856.


Author here, this is a fair criticism. Other alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring (not an endorsement of any of these). I'm an old curmudgeon who just uses IRC+OTR and GPG, though, so I have to depend on others for recommendations.

Also, Matrix enables end-to-end encryption by default on clients that support it.


According to friends in the cryptography space and from a cursory reading of wikipedia, Telegram is a shitshow compared to Signal.

It's one thing not to trust Signal. It's another to recommend alternatives that are far worse.


(Totally unrelated, sorry, but about a month ago you and I had a brief exchange about the placebo effect.[1] I got distracted and missed your last reply and never responded. Sorry about that. It's too late to reply on the original thread, and I don't want to hijack this one, but while looking for an old comment I saw your reply and felt compelled to mention it to you. It was a good exchange IMO and I didn't mean to "ghost" on it.)

[1] https://news.ycombinator.com/item?id=17457085


I appreciate the apology, but I don't feel ghosting on an online conversation should require an apology.

People have other lives, most often online discussions go way past their due date (I actually like the fact that HN doesn't give you a notification when somebody replies to your comment).

It was a good discussion though, the placebo effect is fascinating.


Yeah, I almost let it go but then I figured, what's the harm? It was a good discussion. Well met.


You suggest Telegram over Signal? Now we know you're spreading FUD.


I don't actually endorse any of the alternatives, just listing them. I haven't had time to research them in depth. I have worked with Tox before and I know some of the guys behind it, but I think it's dead in the water.


[flagged]


We've banned this account.


Why would you rely on others for recommending those, but not for Signal? Just don't recommend anything then, or don't criticise Signal. What's the point of convincing people to move to worse alternatives?


I personally reviewed Matrix before recommending it on my blog. The other alternatives I listed here are not endorsements.


Then why would you recommend a solution that hasn't even enabled end-to-end encryption by default?


When did I do this?


You recommended Matrix in your blog which, according to the first comment in this thread, is not end-to-end encrypted by default (which you acknowledged in your first response). Why would you recommend that instead of Signal?


Oh, I should have caught that and addressed it in my earlier response. Matrix is end-to-end encrypted by default on clients that support it. The valid criticism is that some clients don't.


Because for some that's not a show-stopper.


In the context of this article, framed as an alternative to Signal, I think it should be.


Why?

Let's compare it to F-Droid: He isn't saying that Signal should be distributed via F-Droid by default, just that there should be the option. So the article doesn't seem to say that the defaults are what matter.


But in the article he's saying not to use Signal because he doesn't consider it secure enough, and to consider Matrix as an alternative. His criticism of Signal has nothing to do with defaults, and everything with security.


Let's also not forget XMPP+OMEMO.


I really like Wire, for a large number of reasons - privacy and user experience among them. I've talked about it in the past[1].

If you get the time, please give it a proper whirl. With the recent open-sourcing of their server and proper E2EE by default[2] (but abstracted away from the "regular user"), it's shaping up to be a really solid application. As far as I'm aware they use the same double-ratchet protocol as Signal, but you don't need your cellphone number to register (a big thing for some people - myself included).

1. https://news.ycombinator.com/item?id=16655743

2. https://wire-docs.wire.com/download/Wire+Security+Whitepaper...


I'd say Wire is a good substitute.


Nonsense. You can run your own Matrix server and set whatever defaults you want.


A Matrix server can force E2E on all messages passing it?


Yes, it can - for instance the Matrix servers that the French government runs do this by default already. We haven't exposed this as a config option and pushed it into mainstream Synapse yet because of https://github.com/vector-im/riot-web/issues/6779, but the tweak for doing it is:

https://github.com/matrix-org/synapse/pull/3426/files


Is this true? As far as I know, even in rooms with m.room.encryption, you can still send normal m.room.messages. Or does the config option you're describing automatically set up room PLs such that m.room.message can't be sent?


technically you can still send normal m.room.messages (or anything else), but any spec-compliant client will refuse to do so in a room where m.room.encryption has been enabled.



That looks like a no given the article has a "Create a new secure room" section where you have to explicitly enable encryption for that specific room.


e2e, by nature, is client-specific. So Matrix, as a connectivity glue protocol, has nothing to do with it.

Synapse, the reference server, can handle rooms and force encryption on rooms.

e2e is client-side. Riot, as the reference client, is the one that takes care of this, and, if I get everything right, it is on by default.


Regardless of the truth of your assertions, the article you provided as evidence does not support them. At no point does it demonstrate that a matrix server can force E2E on all communications, that synapse can be configured thus, or that riot can require that configuration.


Or conversations.im? Matrix + Riot leaves a heap of metadata about you on the federated server. If that server is compromised, so are you.


Surely Matrix doesn't leave any more metadata than XMPP with MAM enabled (which presumably most XMPPers use these days)?


> Off the bat, let me explain that I expect a tool which claims to be secure to actually be secure. I don’t view “but that makes it harder for the average person” as an acceptable excuse. If Edward Snowden and Bruce Schneier are going to spout the virtues of the app, I expect it to actually be secure when it matters - when vulnerable people using it to encrypt sensitive communications are targeted by smart and powerful adversaries.

I'm not so sure about this. I don't think Snowden and Schneier are praising it because it is the most secure application available that works for every threat model; I think they're doing it because it's the best attempt to up the security of the masses. In other words: there's a limit to its threat model. Signal makes it harder to do mass-scale surveillance, and allows e.g. whistle-blowers to contact journalists without standing out because they're using an encrypted messaging app.

Yes, it's important to highlight those trade-offs, and one can always do better, but as far as I can see Moxie has always justified the trade-off with arguments that were not based on being self-serving. You might not agree with his conclusions, but I think it's unfair to accuse him of being self-serving. (Unless you mean "thinking about the consequences for the success of Signal" by "self-serving". It's not really clear how it serves Moxie otherwise, and the author doesn't go into detail about that.)

In the end, I think it comes down to the author expecting different goals from Signal than the project itself has - as implied by his disdain for GIF search. Obviously Signal isn't only implementing features just to get more secure - it also wants to be widely adopted. It's just that the author apparently doesn't consider that as important.


I think Signal does a very good job at providing easy security for the masses. But for journalists and sources it can be dangerous since it is based on real phone numbers, and those phone numbers are sent to the server to be matched up. It is especially dangerous if the journalists and sources believe Signal is protecting them in that use case.


Signal is not for state-proof encrypted communication. Not large states like the USA or Russia. If you think it is, you've been misinformed. For state actor proof communications you need to evaluate every action you take and think:

"What are the assumptions that I'm making here?"

One assumption is that you're not currently on anyone's radar. Are you willing to bet the entire enterprise on this assumption? How certain are you? Are you 99.999% certain?

Another assumption is that the operating system you are running the app in is not compromised on either end of the communication. 99.99%?

Another assumption is that the screen isn't viewable by other devices. Another assumption is that the sound of your key taps isn't picked up by a mic and then turned into intelligible letters.

Another assumption is that the encryption algorithms you're utilizing haven't been subtly chosen to be intelligible to a single actor or that they'll stay secure once we have quantum computers.

Etc. Etc. Etc.

Signal is good because it raises the bar. Stock traders buying black information probably won't get your communications. They won't be scooped up in an email server leak. They won't be visible to your wife when she enters your phone's unlock code, because they auto-delete, and they don't get pushed to your iPad the way FB Messenger messages do[0].

But if you want to go up against James Bond, and you're already on his radar, you need to give up the illusion that anything computer-related is fully trustable. Just pre-arrange some code words or OTPs and meet in person in an area without electronics, or go even more old school and use dead drops with handwritten communication.

[0] I personally know 3 people that were caught cheating this way.


> Signal is not for state-proof encrypted communication. Not large states like the USA or Russia. If you think it is, you've been misinformed.

Ok, but in that case what does Signal offer that any random messenger with transport encryption doesn't? If your threat model doesn't include state actors then you can probably trust a) the HTTPS certificate infrastructure b) an international corporation like Facebook, so you can probably assume that no-one would tap your FB messenger messages in transit. "Not pushed to your iPad" sounds more like a bug than a feature - I want to be able to read my messages anywhere that I'm logged in as me (at least while I have my yubikey or what have you plugged into that device). Automatic deletion... eh, I would rather make a deliberate decision about when to delete things, personally.


1. The HTTPS infrastructure is downgradeable and relies on DNS and a multitude of certificates. And not all the ciphers are safe. Yes it can be done securely-ish, but unless you're layering another level of encryption over HTTPS it isn't fully secure. Layering is what the CIA does, according to the Snowden leaks.

2. As for the rest of it: Cool man, that sounds like you want a normal chat app that is more usable and less secure. I use Messenger too for things that don't matter.


> The HTTPS infrastructure is downgradeable and relies on DNS and a multitude of certificates. And not all the ciphers are safe. Yes it can be done securely-ish

There's no reliance on DNS. We know what the right way to do HTTPS is, and an app that doesn't have to maintain compatibility with ancient browsers can use a strictly secure profile (no old ciphers, no downgrades etc.). HTTPS is older and more complex than the Signal protocol, but it's also extremely widely deployed and gets a huge amount of attention from security researchers. I think actual attacks on the protocol are less likely with HTTPS than with Signal.
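For what a strict client profile looks like concretely, a sketch using Python's standard library (the host name is a placeholder):

    import socket, ssl

    # A client that controls both ends can refuse everything old outright.
    ctx = ssl.create_default_context()             # verifies cert chain + hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no SSLv3/TLS 1.0/1.1 downgrades
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20") # modern AEAD suites only

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.cipher())

Note that server_hostname is checked against the certificate, not against DNS: however the connection was routed, the handshake fails unless the other side holds a valid key for that name.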

> unless you're layering another level of encryption over HTTPS it isn't fully secure.

Nonsense. Two layers of valid encryption are no more secure than one, and two layers of flawed encryption will almost certainly still be flawed.

> 2. As for the rest of it: Cool man, that sounds like you want a normal chat app that is more usable and less secure. I use Messenger too for things that don't matter.

It's not that my chats don't matter. It's that I don't think autodeletion or one-device-only represent a meaningful security improvement.


> There's no reliance on DNS.

In practice there is for most situations. Are you going to get a static IP and go through the work of finding one of the rare cert authorities to get an HTTPS cert for it authorized?

> Nonsense. Two layers of valid encryption are no more secure than one, and two layers of flawed encryption will almost certainly still be flawed.

I hate arguing about this because I feel like there is a difference between how mathematicians think and how engineers think. I agree that one of the layers should be HTTPS if the context allows for it, because it has a lot of eyes on it, as you mention; but I fail to see how layering encryption is bad from a privacy standpoint.

Mathematically, this statement:

> Two layers of valid encryption are no more secure than one.

Is only true if there are no mistakes, and if breaking the first layer of encryption would already take more operations than are available in the universe.

But why should we, a priori, assume that there are no mistakes? We have hundreds of examples of thought-to-be-secure ciphers / one way hashes ending up in the trash heap. Look at things like Cloudbleed. In reality things break. In reality cert authorities get moled or hacked. If you've been using layered encryption you're safer. Also, HTTPS basically mandates that you use TLS, which for some contexts doesn't work because we'd prefer a one-way (i.e., connectionless) channel to communicate to stop inbound traffic at the physical layer.

> It's that I don't think autodeletion or one-device-only represent a meaningful security improvement.

It's helped plenty of people who have had their phone seized at the border or another device seized by the police. Sometimes you don't know that information is sensitive until later, and sometimes choosing to delete it at that point is illegal or impossible.


> In practice there is for most situations. Are you going to get a static IP and go through the work of finding one of the rare cert authorities to get an HTTPS cert for it authorized?

You don't need DNS to check whether the server purporting to be messenger.com has a valid certificate for messenger.com. An attacker who controls the network can of course cut you off entirely, but an attacker who controls DNS can't intercept your messages, because that doesn't get them any closer to having a certificate.

> I agree that one of the layers should be HTTPS if the context allows for it, because it has a lot of eyes on it, as you mention; but I fail to see how layering encryption is bad from a privacy standpoint.

Do you feel safer behind two locked doors than one? I guess it can't hurt, but the effort would surely be better spent on virtually any other aspect of the system. E.g. if you double the key length in a single layer of encryption you've made it 2^128 (or whatever your key length was) times harder to crack, whereas if you stack two layers then you've only made it twice as hard.
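The back-of-the-envelope arithmetic, for what it's worth (the "twice as hard" figure assumes a meet-in-the-middle attack on the cascade, as with 2DES):

    key_bits = 128
    single  = 2 ** key_bits        # brute-force work for one 128-bit layer
    doubled = 2 ** (2 * key_bits)  # one layer with a 256-bit key
    layered = 2 ** (key_bits + 1)  # two 128-bit layers, meet-in-the-middle

    print(f"double the key length: {doubled // single:.3e}x more work")  # ~3.4e38
    print(f"add a second layer:    {layered // single}x more work")      # 2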

Beyond that my argument would be: many security breaches happen because someone got confused about where the security boundary was. If you use one layer of encryption then everyone knows that the encrypted data is untrusted and the decrypted data is trusted. If you have two layers it's very easy to get lazy and introduce a small hole into one layer assuming the other will cover it, then you do the same for the other layer, and then an attacker figures out how to connect those two holes in a way you hadn't thought of and suddenly you're doomed.


Well if you’re ignoring state actor security and trusting a provider, what does signal offer over irc/tls?


> Truly secure systems don’t require trust.

This is a chat app so, by definition, security requires trusting at least one other person. Also, I think experience shows that secrets can often be least trusted to those who have some interest in/use for them, with the secret owner often being the least trustworthy of all. So I'd say that if you trust yourself you're already probably trusting one of the weakest links in whatever chain of trust you would have.

But seriously, pretty much every secure system requires trust, and the more it relies on technology, the more trust is required. You need to trust there are no backdoors or holes in a long chain of hardware and software that no one person can possibly verify, and if they hypothetically could, they could only hypothetically do so with the help of verification software that they could not themselves verify, at least not without dedicating a lifetime to that goal. Trustless security does not exist, and attempting to achieve it by adding more technological layers and more complexity reduces rather than enhances security. We should make it easy for us to choose whom to trust, not work on a futile attempt to take trust out of the system.


> Trustless security does not exist, and attempting to achieve it by adding more technological layers and more complexity reduces rather than enhances security.

How so? If you can minimize trust to the point where you only have to trust someone to properly design a federated or peer-to-peer open protocol, and trust that others will participate and oversee the process, that's one thing: there is no control or power to go around. Open and secure enough implementations from other parties can emerge, with more parties verifying them and a possibility to switch in case someone does something sneaky. But if you also have to trust the same organization with implementation, infrastructure, and distribution, there is not much security to talk about. There is no way to even verify claims that the thing they open sourced is the same thing they compile and distribute. And so much centralized power makes the organization a lucrative target for state actors, with no realistic possibility to defend.

The more centralized trust you have, the less secure the system can be. It's like an upper bound on security.


I understand your argument, but it cannot be shown to be more valid than the complete opposite: the less centralized a system is, the more complex it is in terms of protocols, and you need to trust many more people to design it correctly than you would need to trust to operate a centralized system. In fact, it could be argued that beyond some complexity level, an unbreakable design is virtually impossible, even in principle.

Your argument about an appealing target could also be used to show the exact opposite: decentralized systems are much harder to upgrade, and so they become attractive targets which you need to break much less frequently (especially considering that the internet backbone itself is pretty centralized), which makes even very expensive cracking more affordable. The argument about open source applies pretty much equally to the centralized and decentralized case.


> the less centralized a system is, the more complex it is in terms of protocols, and you need to trust many more people to design it correctly

I disagree with that. The more centralized a system is, the fewer trust boundaries it has and the more vulnerable and insecure it is, because penetrating one trust boundary gives access to everything. Security always requires additional complexity, and decentralization forces you to take that complexity seriously for once; centralized insecure designs neglect it rather than simplify it. Being forced to deal with trust explicitly and systematically leads to much more secure designs.

Other than that, decentralized systems are exactly the same as centralized ones, just with more players, more choices, and incentives not to break anyone's trust. The only problem is all that embrace-and-extend crap large corporations always attempt to pull off to recentralize everything.


> because penetrating one trust boundary gives access to everything

The same could be true for a decentralized system if the flaw is in the centralized backbone or the shared protocols/algorithms.


I like Linus' argument: if you don't work with a web of trust, then you're doing it wrong. In the context of mobile secure messaging, the web of trust includes: I'm trusting every hardware component on my phone, I'm trusting Apple, I'm trusting the iOS code, I'm trusting the TLS protocol, etc.


> I like Linus' argument, if you don't work with a web of trust then you're doing it wrong

I can't find the source for this; could you tell me where you took this from? (Not saying it's not true, just curious to read the full text.)


It was a video of him talking to students and being asked about security in the kernel, IIRC. I'm on my phone now, but if you find it please post the link :)


I also reacted to this line, because there is no security without trust, even if it's only trust in the security system itself.


My new secure chat app encrypts your message, then sends it to /dev/null.


I guess that's acceptable if you trust the NSA not to replace /dev/null with an email forwarder.

My new secure chat app, on the other hand, encrypts your message in memory, then zeros out the bytes.


In "memory". Sheeple!


Well, you seem like a potential user for my secure messenger. It is a one-person Faraday cage in a pitch-black room in my basement. Don't forget to bring a gun as a failsafe.


"The APK direct download doesn’t even accomplish the stated goal of “harm reduction”. The user has to manually verify the checksum, and figure out how to do it on a phone, no less. A checksum isn’t a signature, by the way - if your government- or workplace- or abusive-spouse-installed certificate authority gets in the way they can replace the APK and its checksum with whatever they want."

This is true for just about every single piece of software that one downloads, but nice job deflecting it onto Signal as something for them to solve. Installing an APK by hand is not difficult either: you transfer it to your phone and open it. I don't see how Signal is doing any better or worse of a job than similar apps. Also, Signal's checksum verification is SHA-256, which I'd say is "good enough", and it's being served from an HTTPS webpage. Is there something missing here?
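For anyone unclear on why a signature would still be stronger than a checksum here, the difference in miniature (a sketch; it needs the third-party `cryptography` package, and real APK verification uses Android's signing scheme, not this):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    apk = b"...apk bytes..."  # stand-in for the downloaded file

    # Checksum: whoever can rewrite the download page (e.g. via an injected
    # CA cert) can rewrite this too, so it mainly protects against corruption.
    published = hashlib.sha256(apk).hexdigest()
    assert hashlib.sha256(apk).hexdigest() == published

    # Signature: verification needs only the developer's *public* key, which
    # can ship out of band, so a MITM on the HTTPS session can't forge it.
    dev_key = Ed25519PrivateKey.generate()    # held offline by the developer
    sig = dev_key.sign(apk)
    dev_key.public_key().verify(sig, apk)     # raises InvalidSignature if forged

Of course, that only moves the problem to distributing the public key safely in the first place.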


A lot of the complaints here appear to actually be "if you download software, or run that software on Android, or on any third party hardware at all, it could be compromised", but with Signal singled out.

For a given user, I don't understand how signing is better than a checksum under this threat model. Going online to download an app is not part of an established connection with that app, so an attacker could simply serve you a page of their choosing with a signature they issued. Verifying identity for an initial connection is definitionally what signatures can't solve.

I think, maybe, the idea is that a signed app could reject communications from unsigned/mis-signed installations? So if someone has entirely hijacked your session and served you a fake app, you won't be able to communicate with the rest of the network. It sounds like a coherent model at least: you can protect single targets by verifying their software when they try to communicate.

(If so, I don't understand how that meshes with the call to federate Signal, which would require it to accept signatures from outside parties. Is the idea something like a CA web-of-trust where only people approved by accepted signers can federate their apps?)


And that's not even what Moxie meant by "harm reduction" - he meant that he wanted to stop people downloading APKs from random third parties, which has surely been pretty much accomplished.


Signal doesn't need to solve it, because F-Droid would.


Additionally with Android APKs, the APK has to be signed and additional updates will be verified to match the same vendor.


Which as far as I can tell is what Marlinspike meant by harm reduction.

It's not preventing anyone from hijacking your encrypted session and serving you a bad app, I'm not sure how it could. ("How do you secure your connection given that your security has already been silently compromised?" isn't a question I really understand.) But it helps ensure that people are at least requesting the genuine app, and if they get it then they'll get signature verification for future versions.


"If Edward Snowden and Bruce Schneier are going to spout the virtues of the app, I expect it to actually be secure when it matters - when vulnerable people using it to encrypt sensitive communications are targeted by smart and powerful adversaries."

Because if the adversary is, say, an abusive ex that happens to work for the telco, for example, then it doesn't matter. Unless you're actively hunted by a G7 country your problems are inconsequential.


How do you defend abused spouses in discourse by comparing their needs to people hunted by the most powerful political forces? Surely these two cases ought not be on the same table for comparison.


The original author outright dismissed an entire class of threats which, while they may seem "lesser", are also far more common. And it's not just abusive exes; not even every state actor has access to an NSA. Neither Signal nor any other existing application based on the same protocol (they exist and are _more_ popular than Signal itself; WhatsApp is one, for example) is sufficient, but on the other hand, they require far less knowledge to operate.

Signal comes from recognition that very few people can practically operate high-effort tools, and if the effort to operate a fancy tool distracts them too much from the things they actually aim to do (it's rare that someone's goal in life is "using Matrix"), they'll fall back on something that doesn't distract them.

For example, even when facing a state actor — if training your cadres to securely operate an encryption tool takes away too much from the core activism of your organization, the tool is no longer helpful. Same applies if the actual overhead of using it ("both have to be online" requirements, for example, or for somewhat lower threat profiles, "no emoji") makes it impractical.

For most people the threat is going to be someone they know, or high-volume-low-effort attempts. Whether it's an ex or a boss or even most states, it's unlikely they're facing NSA. They often have a lot of other things to do, and neither their ability nor want to enforce technical solutions on others are high. That's the range Signal-backed apps generally target.


> not even every state actor has access to an NSA

This is a massively undervalued point. As far as I can tell, most of the people relying on Signal/WhatsApp/Firechat/etc for life-or-death issues are neither hiding from exes nor fleeing the US government. The bulk of use seems to be journalists and protestors in places like Turkey or Bangladesh who have reason to fear state monitoring of communications, but are unlikely to face deep or targeted attacks like having been served a compromised APK from day one.

That Turkey example is seriously relevant. In 2013, the Turkish government created a fraudulent certificate allowing it to intercept traffic to all Google domains, potentially viewing things like gchat and gmail communications. This offered a whole bunch of lessons:

- That the CA web-of-trust model is seriously broken, as we've seen repeatedly.

- That even if companies secure their data and respect your privacy, non-E2E data transfer can still be unsafe.

- That any software or data obtained under a hostile network without a prior signed session is unsafe. Play Store Signal installs in that time could theoretically have been compromised, but "app store versus dedicated download site" and "checksum vs. signature" were irrelevant; all four paths could have been compromised via the same attack vector.

- That "secure download environment, insecure usage environment" is a major, meaningful category of use. That describes a journalist installing Signal in the US and flying to Turkey; an activist installing it prior to a regime change; or a protestor copying it from a trusted friend's installation.

So you're exactly right: Signal isn't a complete solution for an adversary with unlimited resources and direct access to every layer of a network, but it's still massively important even for dealing with most state actors.


I always give people the "Who Uses Tor?" page when they wonder about various threats we're addressing with privacy technology. It has a lot of examples that laypeople will understand. It also counters the "only used by criminals" myth that some believe. It also has examples of law enforcement depending on these technologies. That counters the follow-up claim that encryption or privacy tech is just a negative for law enforcement. So, I suggest just giving people the link telling them they'll see many of the benefits. It's worked so far for me.

https://www.torproject.org/about/torusers.html.en


I think the top-level comment is unfair, but I also think you're misreading it. It's not defending abusive exes, but saying they are, though a lesser threat than major governments, still a threat worth defending against.


I don't know anything about Moxie derailing threads or anything like that, but if we just listened to critics all the time then we just wouldn't have anything. Signal is better than a lot of what is out there and is being used at scale, and that counts for something. More secure is always better than not secure at all.


Read the end of the article as well. We have solutions like Matrix, and like XMPP with OMEMO.


Signal did what those things failed to do which is to actually gain some popularity outside of HN. I hope Matrix takes off! In the meantime if people are convincing their families and friends to get on Signal then that's a net positive to me.


@ynniv - because if you can coax a friend who isn't on anything but Facebook to use Signal over, say, Messenger, then it is a net win. Is it perfect? No. Is it a better situation than the current? Yes.


Exactly. Signal's most important feature is that it isn't Facebook.


What's the point of gaining traction if it doesn't deliver on its fundamental promise?


Because the theoretical possibility of being monitored on Signal is preferable to the near-certainty of being monitored on Facebook Messenger.


That's a strawman: you need to beat boring SMS and PSTN, not Facebook Messenger.


Until I started forcing them to when I quit Facebook, I had no acquaintances that used SMS, they all use FB Messenger. This is the problem.


For most products it's better to have something than nothing, but not for cryptography. Bad encryption can be worse than no encryption, since it leads to a false sense of security. So it's really important to get it right, and worth criticizing even if you don't have anything better.


The article isn't about bad encryption though. It's not about a flaw in the signal protocol or something like that. It's stuff like Moxie doesn't like F-Droid. Which is not an invalid criticism but I'm not gonna stop recommending Signal over Facebook Messenger because of that.

Regarding false sense of security...eh. I can't speak for endangered activists but everyday people write stupid things whether it's on SMS or Signal. But might as well be on Signal.


> The article isn't about bad encryption though. It's not about a flaw in the signal protocol or something like that. It's stuff like Moxie doesn't like F-Droid. Which is not an invalid criticism but I'm not gonna stop recommending Signal over Facebook Messenger because of that.

You have to look at the whole system, not just one algorithm that it uses - if any part of the system is insecure then the whole is insecure. Do you believe Signal-the-system is meaningfully more secure than Facebook Messenger (which uses industry-standard HTTPS, so at the low-level protocol/algorithm level it's certainly secure enough)? If so, why/how? If not, why recommend it?

(I agree that Signal is more secure than SMS, but so is Facebook Messenger or any number of other messaging apps).


Isn't the whole point of Signal that it's e2e encrypted, and therefore they can't really read and share your messages? Whereas Facebook's system is centralized? So yeah, you're safe from outside attacks but not internal ones?

I mean if you're asking me if I know for certain that Signal is better than Facebook, there's no way for me to know for sure.

But at some point there is a level of trust required and I trust a company like OWS more than I trust a company like FB. Call it blind faith, I dunno, but I also have to trust my operating system otherwise I wouldn't get anything done.

Edit: although I should add that while it may be labeled as blind faith it's also fueled by experience. OWS aren't the ones that periodically reset my privacy settings or tried to wage war against accounts not using real names etc etc.


> Isn't the whole point of Signal that it's e2e encrypted and therefore can't really read and share your messages?

Maybe. They have an awkward, compromised design, because fundamentally you can only do the key exchange stuff that's necessary for forward secrecy if you're both online at the same time, but of course they want to support offline messaging, so they have a protocol that's mostly-e2e but the server also participates in it in some cases. In theory maybe it's all fine, but it's complex and has a lot of surface area. Combine that with Signal keeping the server not-quite-open and being weirdly insistent on not having federation, and I'm suspicious.

> But at some point there is a level of trust required and I trust a company like OWS more than I trust a company like FB. Call it blind faith, I dunno, but I also have to trust my operating system otherwise I wouldn't get anything done.

Agreed, which is really what the article is about. I have very little trust in OWS and Marlinspike in particular because of his attacks on the most important/effective working cryptosystems we have (OpenPGP), and his willingness to compromise security properties that seem very important to me (open-source auditability, federation, stronger anonymity than a phone number has) for the sake of features that I think are less significant (forward secrecy), and his refusal to even acknowledge that a tradeoff is being made.


> They have an awkward, compromised design, because fundamentally you can only do the key exchange stuff that's necessary for forward secrecy if you're both online at the same time.

The initial key exchange is done through the server using "pre-keys" (which, unless verified, is trust on first use). Any new key data is sent with the messages (and as such, there is not much extra done by the server).

I don't see how Signal could offer any more auditability. Since they switched to WebRTC-based VoIP, the whole server is open source. They have made a lot more progress in letting the client verify what the server is running compared to any other messenger out there, unless you are able to run your own.

I would say that the goal of signal was more about making an encrypted secure messenger for my mom than making crypto nerds safe from targeted attacks by nation states.


> The initial key exchange is done through the server using "pre-keys" (which, unless verified, is trust on first use). Any new key data is sent with the messages.

How confident are you that the server can't trick the client into downgrading to a new trust-on-first-use exchange? I'd also ask what happens when one party sends multiple messages while the other is offline - eventually you must exhaust your preshared keys, at which point you have no good options - presharing more keys compromises forward secrecy, encrypting without more exchanges compromises forward secrecy, and it's very difficult to make it clear to the user what the tradeoffs are. And again, whatever approach you choose opens the door to downgrade attacks (particularly if we're assuming that the OWS servers are hostile - Signal fans always claim that you don't have to trust the server at all but then don't really commit to that when talking about these edge cases. If we really aren't trusting the server then we should assume the servers are under attacker control when analysing these edge cases)

> I would say that the goal of signal was more about making an encrypted secure messenger for my mom than making crypto nerds safe from targeted attacks by nation states.

Slurs against those who disagree with you do not improve your case.

Are there any messengers that don't use TLS left? (Even IRC servers tend to use it these days). Your mom is adequately served by transport encryption. The Venn diagram of people who need more security than transport encryption and people who can safely use phone numbers as identifiers looks like: OO


Prekeys are used to start a session with someone; a prekey is basically a public key. You generate a new public/private keypair, do a DHE to establish the session secrets, and send your new public key along with the encrypted message. If you send more messages to the same person, they use the same session.
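
For concreteness, here is a minimal sketch of a prekey-style session start using X25519, assuming Python's "cryptography" package. The single DH and the names are simplifications for illustration (the real X3DH handshake mixes several DH outputs), not Signal's actual API:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def kdf(secret):
        # Derive a 32-byte session key from the raw DH output.
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"session").derive(secret)

    # Bob uploads a prekey (the public half) to the server while online.
    bob_prekey = X25519PrivateKey.generate()
    stored_prekey_pub = bob_prekey.public_key()  # held by the server

    # Later, while Bob is offline, Alice fetches the prekey, makes a
    # fresh ephemeral keypair, and derives the session secret.
    alice_eph = X25519PrivateKey.generate()
    alice_session = kdf(alice_eph.exchange(stored_prekey_pub))

    # Alice sends her ephemeral public key along with the first
    # ciphertext; Bob runs the same exchange and gets the same key.
    bob_session = kdf(bob_prekey.exchange(alice_eph.public_key()))
    assert alice_session == bob_session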

TLS is fine enough for messages in flight, but a lot of messengers store message archives on their servers, and there may be tens of thousands of employees who have potential access to that; not sure if that's really what anybody's mom needs.


> Prekeys are used to start a session with someone; a prekey is basically a public key. You generate a new public/private keypair, do a DHE to establish the session secrets, and send your new public key along with the encrypted message.

At which point you have essentially decayed to conventional PKI and don't get any of the security properties that you were supposed to get from the fancy Signal protocol (i.e. PFS).


Well, you always have to verify keys. No security is guaranteed without it. That is the case everywhere. Those pre-keys are just there to start a session, which in my case (chatting with my brother) has lasted for as long as I've had my phone (about 3 years). That session creates new key material with every message, providing forward secrecy.

I think it is a pretty elegant solution to key distribution, even though I wouldn't plan any bomb attempts without first validating the fingerprints.


The fancy protocol gives you PFS by generating new keys for each messaging round trip between parties. Anything in flight is subject to decryption if you can grab the keys from an endpoint before they're cycled.

The upside is that users can communicate with each other without needing both endpoints online simultaneously (which is fairly hard to guarantee, given all the battery saving stuff in mobile OSes and lack of 100% network coverage).

The downside is the key cycle time is much longer than it would be if all communications were done with both parties online, but it's still much shorter than a conventional PKI (e.g. PGP).


So how fast does cycling happen if the server is under an attacker's control and trying to ensure that cycling is delayed as long as possible? It's messy and complicated, and certainly doesn't leave you with the simple "messages can never be read in the future" security property that Signal's advocates claim.

Regular key rotation is good practice when using PGP or similar - have a master key that you use only for signing subkeys, generate a new subkey every month (say) and destroy the ones older than 2 months each time - though admittedly UI/tool support for doing this is limited.


> So how fast does cycling happen if the server is under an attacker's control and trying to ensure that cycling is delayed as long as possible?

If you're getting bi-directional communication with your partner, you're getting key cycling. The server can only delay key cycling in a session by delaying one direction of communication.

The server can't hand out the same pre-key to multiple users, because the client will delete pre-key pairs on first use, with the exception of a "last resort" pre-key. The server could exclusively hand out the last resort pre-key to all users attempting to contact you, and refuse to accept new pre-keys. Then the first flight of messages from users establishing a new session would not be PFS, but any messages sent once you respond would have PFS.
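
A toy model of that handout logic (hypothetical names, with a dict standing in for the server - nothing like the real implementation):

    # One-time prekeys are consumed on first use; once exhausted, the
    # server falls back to the reusable "last resort" prekey, and the
    # first flight of messages in those sessions loses forward secrecy.
    one_time_prekeys = {"prekey-1": "pub-1", "prekey-2": "pub-2"}
    LAST_RESORT = "pub-last-resort"

    def fetch_prekey():
        if one_time_prekeys:
            _key_id, pub = one_time_prekeys.popitem()  # deleted on handout
            return pub
        return LAST_RESORT

    print([fetch_prekey() for _ in range(4)])
    # ['pub-2', 'pub-1', 'pub-last-resort', 'pub-last-resort']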

Additionally, of course, the server could hand out incorrect keys and man-in-the-middle all the steps, so long as users don't verify the keys. And a malicious client and server could conspire to include the correct keys for verification and the MITM keys for transport. This would be visible in the shipped code, but if the backdoored client is only distributed to a limited set of users, it wouldn't be subject to random reverse engineering like the normal client is. But I assume everybody downloads multiple copies of APKs from different networks and compares to ensure they're byte identical ;)


> If you're getting bi-directional communication with your partner, you're getting key cycling. The server can only delay key cycling in a session by delaying one direction of communication.

That implies the client has to have a code path for sending multiple messages without cycling. And it forces a tough choice between losing messages that are received out of order and not destroying keys quite as quickly as nominally expected.

Maybe it's fine, but for me it pushed the complexity threshold over the point where I could feel any confidence in it. I'm comfortable with traditional PKI. I'm comfortable with the online-only OTR/axolotl ratchet. But I'm very dubious of having this many edge cases.

> The server could exclusively hand out the last resort pre-key to all users attempting to contact you, and refuse to accept new pre-keys. Then the first flight of messages from users establishing a new session would not be PFS, but any messages sent once you respond would have PFS.

Assuming there's no way for the client to end up on the initial-message codepath.

Again, yeah, maybe it's fine. But it all just feels so hacky and fiddly. These edge cases aren't where anyone studying the protocol is going to spend any time, but it's security-critical that they be implemented right.


I just re-read your message and thought I should clarify: the client has two ratchets going. One is an opportunistic DH ratchet, and the other is a hash-based one that provides forward secrecy as long as the contents of the last DH ratchet were not intercepted and decrypted - which, if you have verified the keys and no device key change has happened, they haven't been.

If there is a successful compromise of one message, a missed message is all it takes for the ratchet to self-heal, and the attacker loses the ability to decrypt future messages. It is PFS+ in a sense.
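
For illustration, the symmetric half of that design is easy to sketch - a hash ratchet in which each chain key is destroyed as soon as it is advanced, so compromising today's key reveals nothing about earlier messages (plain HMAC here, not the actual Signal KDF chain):

    import hashlib
    import hmac

    def advance(chain_key):
        # Derive a one-off message key, then irreversibly step the chain.
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return next_chain, message_key

    ck = hashlib.sha256(b"shared session secret").digest()
    for i in range(3):
        ck, mk = advance(ck)  # the old chain key is discarded here
        print("message", i, "key:", mk.hex()[:16] + "...")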


> Combine that with Signal keeping the server not-quite-open and being weirdly insistent on not having federation, and I'm suspicious.

I'm not exactly sure what not-quite-open means (I thought the code was available, do you mean the server won't accept modified clients?), but Signal was federated once, with Cyanogen, and they opted not to continue the federated model. I don't think it's weird to not want to coordinate upgrades across servers in multiple organizations. Looking at existing federation models that mostly work, we have things like email, where there's no coordination and some servers never get upgraded, leading to a very low lowest common denominator; there's also IRC, where servers in a network really need to run the same software and are upgraded in lockstep -- that's fine enough if everyone is on board, but it doesn't look that much different than one organization running the servers.


Evidently Facebook themselves don't agree with you, since their "Secret Conversations" feature uses Signal's protocol (many other systems also have equivalent features built out of the Signal Protocol - Skype, Google Chat, XMPP... it's a sort of trend)

In terms of how Signal compares to something like Facebook Messenger using HTTPS that's an actual technical question that's worth talking about (whereas "Oh no, Moxie Marlinspike is a mean person who hates F-Droid" is basically just noise)

HTTPS gives us two ingredients, both relevant here, TLS and HTTP. TLS is intended to deliver privacy and data integrity between two parties, it does a pretty good job at that, but it's very common to have configurations which do not result in Forward Secrecy. This means messages that couldn't be read by any potential adversary today could become readable tomorrow - long after both parties don't need them any more - if the adversary obtains a long term key. Your web browser might, possibly, tell you about this risk if you ask, but it definitely won't flag it in the main UI. Next, Facebook is authenticated to you by its proof of possession of a Private Key corresponding to the Public Key in a certificate from a Trusted Third Party CA. An adversary could corrupt this CA, but hopefully that's difficult.

Now, why does HTTP even matter? Well, although TLS _can_ authenticate both parties, on the Web today we rarely do that. Instead the web server is authenticated using TLS but the client (a Facebook user) has some crummy HTTP layer authentication, maybe a password like "1LvUrDog" filled into an HTML form field.

So, between those two cracks, if the Secret Police either steal your brilliant canine-related Facebook password or get Facebook to give them a long term key, they can read all your Facebook Messenger chat, eavesdropping on it indefinitely.

Signal doesn't use passwords. Your device has randomly picked a Private Key, but unlike Facebook you don't have a certificate from a CA; instead you can compare the associated Public Key on your device with that shown for another participant on their phone. If they don't match, there's a Man in the Middle. So immediately that's an improvement: no password guessing.
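
As a rough illustration of the comparison step (not Signal's actual safety-number algorithm, just the shape of the idea): both sides hash the pair of public keys into a short digit string and compare it out of band, and a MITM substituting its own key changes the number:

    import hashlib

    def safety_number(pub_a, pub_b):
        digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).digest()
        num = int.from_bytes(digest[:10], "big") % 10**20
        s = "%020d" % num
        return " ".join(s[i:i+5] for i in range(0, 20, 5))

    print(safety_number(b"alice-pub", b"bob-pub"))
    print(safety_number(b"alice-pub", b"mitm-pub"))  # visibly different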

Signal also has Forward Secrecy. In fact each message sent and received changes the keys used for future messages. As a result an adversary can only eavesdrop by actively impersonating one of the participants. In a two person conversation that's often likely to become obvious pretty quickly whereas passive eavesdropping is undetectable.


> although TLS _can_ authenticate both parties, on the Web today we rarely do that. Instead the web server is authenticated using TLS but the client (a Facebook user) has some crummy HTTP layer authentication, maybe a password like "1LvUrDog" filled into an HTML form field.

I would love to see more use of client certificates, but assuming good password practice, is there a real security difference? Either way, both parties authenticate themselves to the other.

> Next, Facebook is authenticated to you by its proof of possession of a Private Key corresponding to the Public Key in a certificate from a Trusted Third Party CA. An adversary could corrupt this CA, but hopefully that's difficult.

And hopefully Certificate Transparency would catch them if they did.

> Signal doesn't use passwords. Your device has randomly picked a Private Key, but unlike Facebook you don't have a certificate from a CA; instead you can compare the associated Public Key on your device with that shown for another participant on their phone. If they don't match, there's a Man in the Middle. So immediately that's an improvement: no password guessing.

Well, depends what you're trying to verify. Verifying that someone is always using the same device is one choice with its own set of tradeoffs (e.g. many people change devices quite often). Verifying that someone always knows a given password is another. I think tied-to-device keys lose you more than you gain, though I appreciate there's room for disagreement here.

> Signal also has Forward Secrecy. In fact each message sent and received changes the keys used for future messages. As a result an adversary can only eavesdrop by actively impersonating one of the participants. In a two person conversation that's often likely to become obvious pretty quickly whereas passive eavesdropping is undetectable.

You don't have to actively participate as such - you can just forward messages between the two. And Signal's servers are already sitting in the right place to do that.

In theory PFS is a valuable benefit. But the level of compromise needed to incorporate it into a practical system where people want to be able to send offline messages and messages to new contacts who they haven't exchanged keys with beforehand... IMO the resulting level of protocol complexity compromises your security more than the fairly weak guarantees you get out of it in practice are worth. Certainly not when the cost is no federation and identity-tied-to-phone-number.


OK, so yes, if you steal Alice's credentials AND have control over the co-ordinating server you can trick Alice and Bob into continuing to communicate with you in the middle, and so long as you keep this up it's relatively undetectable.

I think I can see how to repair this (Alice doesn't know Bob's private key, but she does know a long term public key for him, as a result she could periodically and automatically re-verify that she's still talking to Bob and not just someone who has her short term keys and is actively conducting a MITM) but Signal doesn't attempt such a repair and maybe I'm wrong.


Edited: Hmm. I wrote a long claim here and now I'm not sure depending on exactly what you steal, from whom, and when. I will re-think and re-post this.


AFAIK, Signal has an open source client, and an open source server. If you want federation, you can go ahead and build it, and find users, and you can start from a reasonably well working base. Moxie isn't going to build it, because he doesn't think federation works; to convince him, you'll need to show him it works, not just tell him. Is there an example of a federated chat service which has end to end encryption that just works?

Peer to peer chat is interesting, but it means that IPs of communicating users are more widely exposed -- now anybody in the network path between two users can see they're communicating with each other, not just that they're both communicating with Signal. I may not want to share my IP with some (or most) people I communicate with. Additionally, there's a lot of hard work around actually getting a peer to peer connection on today's internet, for a large fraction of connections, you're going to have to proxy packets for them anyway.


> Is there an example of a federated chat service which has end to end encryption that just works?

Yes. SilenceIM is a fork of Signal that maintains only the SMS implementation of the Axolotl ratchet. It works perfectly.

For that matter, every other chat network does too, if you use Pidgin and the pidgin-otr plugin.

End to end encryption is a property of the clients, not the network, practically by definition.


SilenceIM and pidgin-otr add e2e over existing networks. That means I can attempt to send messages to people who won't be able to receive them. That is the opposite of 'just works'.

With Signal, or other services, where e2e support is a mandatory part of the client and is the only way to send messages, if someone is available on the platform, I know that I'll have a e2e message stream. (subject to MITM of key exchange, of course)


The article actually proposes an alternative: Matrix, and Matrix is, in fact, a good piece of software, with federation options.

I tend to agree with most parts of the article, especially the lack of federation options.

My real pain point with Signal is that there is no real desktop application for it - no, a connected web interface is not a desktop application. For example, XMPP with OMEMO can be used simultaneously from Android Conversations AND Pidgin - same account, same messages (yes, it needs XMPP Carbons on the server), e2e.


What do you mean by a "connected" web interface? And what would being a desktop application bring it?

Signal Desktop is somewhat buggy, not that full-featured, and doesn't integrate that well with the rest of my OS, but otherwise it's working fine, and I can use it simultaneously with my phone. (But I can also use it with my phone turned off, which I love.)


> but otherwise it's working fine

It doesn't work at all for me, because it requires a mobile phone number, which I don't have (a phone + any monthly subscription fee doesn't fit in a tiny fixed income budget).


I don't think that has anything to do with the desktop app specifically?


It does. Desktop app is desktop, end of story - no GSM connection and/or phone number should be required.


It sounds like you'd also be against requiring phone numbers if they didn't have a desktop app...


A phone app requiring a phone number is reasonable; it doesn't matter if I like it or not.

On a desktop app, it's not reasonable to ask for a phone number. Ask for something else; email, for example.


I understand you may not have a phone number at all, but Signal doesn't actually require a mobile number. It can be a landline or VOIP line that can receive a call, and you get voice verification.

Of course many don't want to tie their communications to some other identifier and that's fine but that does mean that Signal isn't for those people.


Google Voice numbers are free.

Is that perfect? No, but if you want another unambiguous identifier, it's going to cost. Moxie does not have a lever that can move the world; if you think you do, you are probably well invited to haul on it.


> if you want another unambiguous identifier, it's going to cost

Nonsense - my XMPP addresses are free, globally unambiguous, federated, and support 3 types of end-to-end encryption. A phone number is patently unnecessary. The only reason to require a phone number is to tie the encrypted packets to a known identity.


An identity that everyone else already knows, yes. I figured that was so obvious as to not require elaboration.


> My real pain point with Signal is that there is no real desktop application for it

Signal is open source, you can write your own -- just kidding. Moxie Marlinspike wants to lock you into his client as Facebook, Google and their ilk lock you into theirs.[1]

[1]https://github.com/LibreSignal/LibreSignal/issues/37#issueco...


>Google Play

use yalp store

> Packages on F-Droid are reviewed by a human being and are cryptographically signed

>The app has to update itself, using a similarly insecure mechanism. F-Droid handles updates and actually signs their packages

So are all Android APKs. Granted, it's trust on first use: the system accepts any signature for the first install, and only enforces the signature if you try to install an update.
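
In pseudocode-ish Python, that trust-on-first-use rule amounts to something like this (a toy model for illustration, not Android's actual package manager logic):

    installed = {}  # package name -> signer key pinned at first install

    def install(package, signer):
        pinned = installed.get(package)
        if pinned is not None and pinned != signer:
            raise ValueError("update rejected: signature mismatch")
        installed[package] = signer

    install("org.thoughtcrime.securesms", "signal-key")  # first install: any key
    install("org.thoughtcrime.securesms", "signal-key")  # update, same key: ok
    try:
        install("org.thoughtcrime.securesms", "evil-key")  # different key
    except ValueError as e:
        print(e)  # update rejected: signature mismatch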

>A checksum isn’t a signature, by the way - if your government- or workplace- or abusive-spouse-installed certificate authority gets in the way they can replace the APK and its checksum with whatever they want

This is probably the only legitimate concern: using F-Droid gives you a permanent anchor of trust (F-Droid, rather than whatever CAs you have installed) for the first install. Even this isn't that big of an issue when you can install using Yalp Store. Google might be a rootkit or whatever, but at least you can be reasonably sure that the APKs are the originals.


This doesn't even touch on the fact that Signal depends on Play Services. It has a websocket option, but the setting is actually not in the GUI.


Hmm, how do you configure it? IIRC it just automatically used it because I don't have Play Services.


AFAIK it's not user configurable, and the only way to enable it is to have a device without play services installed.


So? Only an incredibly tiny population of android users aren't running play services. If your goal is to get as many people as possible to use e2e messaging, why spend time designing for the 0.001% of people who aren't running play services?


> There’s an alternative to the Play Store for Android. F-Droid is an open source app “store” (repository would be a better term here) which only includes open source apps (which Signal thankfully is). By no means does Signal have to only be distributed through F-Droid - it’s certainly a compelling alternative. This has been proposed, and Moxie has definitively shut the discussion down.

Adjunct to the rest of this discussion: just read through that GH issue and came away with markedly different conclusions than the author of the blog post.

It reads like someone who is trying hard to justify and prioritize dev time/resourcing in the face of what is a demanding and vitriolic minority. No evidence of disingenuous intent or desire to push a particular agenda. I see nothing that would have prevented the old OSS adage: "if you want to see it, do it".

Drew, I don't know you, or the background for the argument you're making, but it seems like you have something stuck in your craw here. Maybe take a little time and try to view the situation with fresh eyes? You're obviously passionate about this subject -- and the unique perspective is appreciated -- but it devalues the rest of the info presented, and I don't buy the precept you're proposing.


People should just know by now, if you need to communicate something in private, you should just never use any electronic device that uses public networks. All of these "secure" tools that are being used must be understood in that context. They are "secure" against honest people.

What I mean by that is that it's a lot like your home or apartment. Sure, you should lock your door and turn on your alarm system when you leave. At the same time, if you know there are three letter agencies surveilling you, it's probably wise to go ahead and assume they broke into your home and placed bugs in it despite your security precautions.

Because they have.


People still need to communicate with their peers over insecure networks. So you need to compare the nitty-gritty details and choose the most secure one for your needs. If you need content protection to keep dick pics out of NSA office circulation, Signal is probably the best. For metadata-free chat, Ricochet and Briar are currently the top duo.


This article is entirely about the Play store and F-droid.

As a user, when an app claims to be 'secure', I expect the app itself to have made reasonable security tradeoffs. I don't however expect them to change my OS, my package manager, or anything else. The security of those other components isn't their concern.


> Truly secure systems don’t require trust.

Security is something which only makes sense in relation to an attacker model. Only after you've specified that can we discuss whether something is secure or not.

Signal is not secure if the NSA is after you. Signal is secure if your Chinese competitor is after your business data. Signal is secure if you are a journalist in Turkey.


Remember that OTR, Cryptocat and PGP were secure enough when Snowden was arranging to hand data to Greenwald and Poitras. So while Signal isn't secure if you're an NSA target, it might be secure enough to protect you from passive threat scanning.


The author is a delusional crank. He is very deliberately ignoring the very cogent arguments for the Signal architecture in favour of some specious moaning about how the Play Store is subverted by the NSA.

If you want a federated / onion-routed message transport, start coding. You can use the Signal ratchet mechanism if you want, you just can't call the resulting shibboleth Signal. Distribute only by obscure methods, easily subverted by users installing malware versions with higher search rankings. Then stand back and watch as hardly anyone uses your app.


> This is a strong accusation, I know. The thing which convinced me of its truth is Signal’s centralized design and hostile attitude towards forks.

The thing that convinced you that Moxie feels a certain way is that Signal has a 'centralized design'.

Please, if you're going to accuse someone of acting in bad faith with no evidence the least you can do is be honest about it. You have nothing but your feelings for proof of anything.


I agree that Tox is better, but at the same time I know people who truly need to stay hidden, and they use Signal on a burner phone with a cash-bought SIM card. That way it doesn't matter which medium the messages are transmitted over, because it still can't be traced back to them.

And as far as I know the encryption is solid.

Unlike some other alternatives like Wickr, Signal actually open sources its app and its communication protocol.


Until Tox defaults its communication through Tor, it doesn't offer any notable differences. Sure, there is no central server, but intelligence agencies can see who you talk to without compromising any server, just by looking at the destination IP addresses of packets. Tox suffers from the same MITM problems if the ToxID is changed, e.g. on your contact's Twitter page, the same way the author of the article claims the "checksum" of Signal's APK can be changed by the NSA, your employer or an angry spouse.


If the consequences of sending messages are torture and death, I wouldn't trust any form of electronic communication. That's what face-to-face meetings are for and have always been for. I'm not saying Signal is insecure, but either party could be compromised in other ways, like a key logger or other local software that intercepts messages on the device they are composed on. I certainly wouldn't trust any mobile-OS-based app, although desktop ones might not necessarily be better, even if they run on a fully open source Linux OS. Most people are not up against such threats, so in most cases it doesn't matter. The people who are, are brave to use such software. I would never place my life in the hands of it; I simply wouldn't trust any such software with my life. By comparison, the software in my car or on a plane is a different matter, but it's also engineered to different standards and has proven itself in a verifiable manner - I haven't died after much driving and many flights.


I would trust Signal on iOS, depending on who I was messaging. I'd turn on timed messages though, and the Signal number wouldn't be my main phone number. You're far less likely to be keylogged on iOS. If you don't browse websites on the device, that helps.

You have to consider that face-to-face meetings are often observed by third parties, you can be tracked easily, and they are generally extremely incriminating. The other person can talk, and provide evidence of your location. In comparison, sending a Signal message is relatively covert.


The other person can show your Signal messages to the wrong people just as well as they can talk about the conversation. At least with a conversation you have plausible deniability, although that may not count for much. You also have to be careful about being recorded. Still, the point is, I would not trust a software platform. iOS or Android doesn't make a difference: they are both easily exploitable and no doubt have tons of security bugs, many of which the biggest state actors are likely hoarding as 0-days for just such an occasion. There is no perfect solution.


I don't think many experts would call iOS "easily exploitable", especially if you don't install random apps and browse the web. 0-days aren't easily available for places like Iran, Syria, Egypt, etc, or even larger countries like Brazil, and wouldn't be burnt on any ordinary suspect. They still usually rely on the user clicking a link. A locked down iPhone (no BT, WPA2-EAP, VPN, Signal, hard passphrase) is a hard target.

If your contact is the type who will record the fading Signal messages with another phone you're already fucked.


To be completely honest, Android should be considered as "insecure" for the same reasons. It's binary blobs that are hacked around by distributors with limited support after a year or so (when phones stop being manufactured and widely sold).

Can we just get a proper Linux OS running on mobile devices already, one that's properly open source and easily re-flashable? It's clear that ARM is here to stay, and if Linux is to stay relevant, it needs to move towards supporting one of the most popular computing devices on the planet. Desktops made their way into every home, and mobile devices have made their way into every pocket.

That way, running something like Signal would be more trustworthy, coming from a package manager, especially with something like Debian's reproducible builds.


I don't like messaging apps that require phone numbers; they always feel less trustworthy to me because that one aspect of privacy isn't there.


« those are all really convenient excuses for an argument which allows him to design systems which serve his own interests. »

I wish the author would actually lay out what they think Moxie's interests are.


As a Signal user, I just wish I could make my own personal fork of the desktop app and still talk to everyone, without having to use the beta servers or fear having access cut off, because the visual design and UX of the desktop app is absolutely atrocious. And the latest update that was pushed a few days ago was a massive step back; the bloated UI now looks like some iOS app from 2007. It's just embarrassing. And don't even get me started on the lack of a search function -- something the mobile client has.


Hmm, if you can improve the UI by yourself, could you not submit a pull request to do that? That would probably still allow you to use the improved UI without fear of having access cut off.

(I wasn't aware of a redesign - just updated, and I don't really like it either, but ah well.)


In theory yes, but I don't think the team would appreciate any old random schmuck changing their product's look-and-feel :)


I have recently switched to Riot (built atop Matrix, which the author endorses at the end) for some family communications, and yeah, I think I do prefer it to Signal.


I personally don't distrust Signal.

I just refuse to use it. This comes up on HN a lot, and every time I have to admit that I am kinda unfair here: Signal is heralded as the nice and secure solution - but it seems incomplete to me. I don't doubt all the more clever persons that tell me that Signal is the best choice for encryption right now. But as long as it doesn't support federation (I miss XMPP) and as long as it does require a phone number (None of anyone's business, not required for my contacts, a baaaad way to handle identification) it is utterly broken for me.

I'll continue to use Telegram for family, friends and casual business stuff. The applications are awesome across platforms, I can initiate conversations with people without using a phone number. Worse encryption? Probably. Likely. Just as centralized? Yes - hate it there as well.

But I hoped that Signal would be the solution. I'm unfair. Signal gets judged for NOT being open (federation, phone number). Telegram is just a random service that I use instead then - works better anyway.


> But as long as it doesn't support federation (I miss XMPP) and as long as it does require a phone number (None of anyone's business, not required for my contacts, a baaaad way to handle identification) it is utterly broken for me.

Why not simply use XMPP then? https://conversations.im/omemo/


Federation is not some sort of magic dust that would fix Signal; you'd just be exchanging one problem (centralization) with another (spam).

Plus, in all likelihood, even if they did federate, it would be just like email with Gmail: Open Whisper Systems would be the dominant player, so most conversations would have at least one party running on Moxie's hardware.


> one problem (centralization) with another (spam)

I'd take the risk of possibly receiving more spam over the risk of depending on yet another walled garden.


The blog post states the following:

> [Moxie] makes arguments which don’t hold up, derails threads, leans on logical fallacies, and loops back around to long-debunked positions when he runs out of ideas.

Can anyone provide examples of threads where Moxie is acting like this? The blog post didn't give any.


You don't need to have absolute trust in Signal, you just need to trust it more than WhatsApp.


The line about F-Droid doing no automated scanning is particularly troubling. Since he can't possibly be implying that a Signal compromise would be detected this way, Moxie is making a political argument against the way people are using F-Droid to install other applications. He denies - on principle, no less - users' right to control their hardware and have full control over the software they install, and thinks the walled garden approach should be forced on every Signal user.

Sorry, there is no excuse for Signal not to be available on F-Droid. I understand the automatic updates argument if it was valid at the time, but Signal has no right to impose what other applications I run and how I get them.


Does F-Droid support reproducible builds now? Or does it offer any other kind of assurance that the software downloaded actually comes from the purported origin?


Yes, they only publish the signed binaries produced from public sources according to a recipe anyone should be able to follow.

https://f-droid.org/en/docs/Reproducible_Builds/
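
The check itself is conceptually just hash comparison - rebuild from the published recipe and compare digests. A sketch with hypothetical file paths (real tooling also strips the APK signing block before comparing, since the signature itself can't be reproduced):

    import hashlib

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Build the APK yourself from the published source, then compare it
    # against the binary F-Droid (or upstream) distributes.
    local = sha256_of("build/outputs/Signal-unsigned.apk")
    published = sha256_of("downloaded/Signal-unsigned.apk")
    print("reproducible" if local == published else "mismatch - investigate")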


That's what Moxie does too, and F-Droid won't trust that. So what's different? Why are F-Droid's builds trustworthy?


If you prefer obscure alternatives try:

https://vsee.com/messenger/

https://zangi.com/


Where's the "This Post is Bullshit" button?


I trust it more than unencrypted SMS or Facebook Messenger.

I trust it less than p2p chat over an encrypted network I control with layered defense in depth.

Security is not a boolean.


Security is not a boolean, but when your whole selling point is security you'd better be good at it.

I trust Signal the same amount as Facebook Messenger or any other centralised messaging system that uses transport encryption (e.g. Skype, IRC+SSL, WhatsApp...). But what's the USP that means I should use Signal rather than any alternative?


Their USP is "security made simple", in other words: use Signal because it's as easy as the mainstream alternatives, but far more secure (but not perfect).


That only works if they're actually more secure than the mainstream alternatives. I don't think they are; Signal has transport encryption but the system probably isn't secure against Open Whisper Systems themselves or someone who controls the central server, which is the same security situation you'd be in with any of the alternatives I mentioned.


Everything in Signal is end-to-end encrypted.


Not giving all your metadata to Facebook (Messenger, WhatsApp) or Microsoft (Skype), for a start. The messages may be e2e encrypted, but you can be damn sure these companies are logging as much as possible about who's chatting with whom, when and where each of them are, how much data is sent each time, etc.

Some people trust Moxie / OWS more than the others, and with good reason.


> Security is not a boolean.

Very true.


Seriously, why do they use smartphones in the first place? The smartphone ecosystem, be it Android or iPhone, is not secure. It cannot be trusted.

Even if we avoid Apple's and Google's software distribution platforms, your smartphone still has binary-blob kernel modules and a baseband processor, and the OS runs on top of that.

People who claim security and trust on top of a smartphone are liars, idiots, or both.

Don't use the smartphone.


Unless it's Librem-5 (https://puri.sm/shop/librem-5/), although we've yet to see what comes out of it


Still not secure. I describe the risks here:

https://news.ycombinator.com/item?id=10906999


Maybe not perfectly secure, but it's a big step up from the current devices. Being able to physically separate devices (e.g. baseband/modem) and toggle them off would allow you to obtain a much more secure environment.


If your goal is to make secure communication possible for as many people as possible, you need to create an Android and iPhone app.


"Don't use smartphones" is not an effective method of getting people to communicate securely. We've had GPG forever. How many people used it?

Also, for the large majority of people their smartphone is going to be more secure than their laptop.


+1 I agree wholeheartedly with the concerns and complaints in this post. Even if you were to have the most trustworthy person leading a system like this, who is to say that this person's mind won't change? Or worse, a different successor could redefine the goals - this is created under a company, after all. What's the solution? Trust in design.


If Signal were somehow federated (without a central server) and open source (which it is), then there wouldn't be much not to trust.

When they figure out a way to make signal serverless, then the only thing you would have to worry about is the OS of the phone and its underlying architecture...

I have no doubt we will reach that point, but I wish we'd get there sooner rather than later.


What's wrong with using Google Play Services?

Correct me if I'm wrong, but I assume it has to do with message notifications. So, by using GCM, Signal would be leaking some metadata about when and who, etc. I assume. But wouldn't someone be able to get that same information from your ISP (with a little more work)?

You're losing the benefits of longer battery life for basically nothing.

Security isn't absolute. I don't know why this blogger has the attitude that there is such a thing.


It allows Google to easily circumvent any end-to-end encryption since it's a rootkit.


How? What mechanism?

Does it access the plain text from the keyboard before the app encrypts it?


It could do that.


Google certainly doesn't need the Play services to do that. If you assume Google to be malicious in that way, no app on your Android phone can be secure.


Sure, but the Play services make it A LOT easier for them.


How? (I work for Google, I don't work on Android)


The Play services run as root IIRC and listen to commands from Google's servers (e.g. to push updates to the devices). Even if they don't have enough privileges to intercept the keyboard, Google has all the signing keys to replace system apps that do.


The full set of Android processes running as root on my Pixel 2 is https://pastebin.com/sTtQVz3m - which of these is from Play Services?

I think that in most cases the system services that are in a position to interfere with the keyboard in any way are provided by the phone manufacturer rather than Google - obviously if you're running gBoard then Google has control over that, but I'm not aware of any way that Google fundamentally has control over the other frameworks that the keyboard interacts with (but, again, I don't work on Android - I'd love a more definitive answer on this)


Slightly off topic: How do you convince your friends & family to switch to Signal from WhatsApp?


For those looking for an open-source, private and secure messenger take a look at Adamant: https://adamant.im/


Is the .apk reproducible from the source?



Although Signal has cash, they need a lot more support. It's a good time to remind people they are hiring.


Signal is at least as good as all the other cloud messaging apps... (privacy wise)


Sure but I think that is a given. The fact that it's "at least as [secure]" as something that stores chats in plaintext on their servers (Telegram) is not exactly news...


> The fact that it's "at least as [secure]" as something that stores chats in plaintext on their servers (Telegram) is not exactly news...

probably news to most people... because most people appear to be trusting it....


Secure messaging on android seems like an oxymoron.


What's "secure" without stating your threat model?


Why?


Probably because of Google Play Services, which is installed on nearly every Android phone.


How does that make Android insecure?


"Google Play Services lets Google do silent background updates on apps on your phone and give them any permission they want. Having Google Play Services on your phone means your phone is not secure."

Yes, Google can install a backdoored version of Signal. This is bad. But if you can't take that risk, you can install e.g. LineageOS without Google Apps, download the source code, reproducibly compile the APK, and install it on your Android device. If you have a better idea, maybe it can be implemented.

"A checksum isn’t a signature, by the way - if your government- or workplace- or abusive-spouse-installed certificate authority gets in the way they can replace the APK and its checksum with whatever they want."

If they can add a certificate on your smartphone/PC, why can't they replace Signal with a malicious one? Why can't they replace F-Droid? There is no 100% method to solve this issue, unless perhaps you can meet with the F-Droid developers and obtain the authentic public key from them to verify the F-Droid client's signature. Calling a SHA256 cryptographic hash a checksum shows slight dishonesty on your side. The differences in connotations between the words are significant.

F-Droid doesn't magically solve this problem. The root of trust comes from another SHA256 hash -- 61:DB:51:32:39:47:61:C4:D4:3F:8A:9B:AE:72:B0:2E:B0:8D:F3:B5:ED:F2:92:1C:7B:14:7E:2F:29:30:83:03 -- that authenticates the certificate of f-droid.org.

Or it comes from the hash F3:33:D2:E7:FA:A3:68:7F:B2:99:3E:6D:F6:9D:EE:1D:DA:77:36:11:DD:CA:B3:3A:B6:79:87:AA:40:56:94:22 that authenticates MIT's PGP key server, which has the signature verification key for F-Droid clients: https://pgp.mit.edu/pks/lookup?search=f-droid&op=index All your suggestion does is add a layer or two where we hope the NSA doesn't compromise them, in case you'd want to use that chain to install and validate Signal. And even if you personally verify the authenticity of the public key, you haven't solved the issue of private key exfiltration via hacking. You need expensive HW like HSMs to even start combatting exfiltration. And Google can afford those.
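
To make the root-of-trust point concrete, pinning the f-droid.org certificate hash looks roughly like this (standard library only; the pinned value is the one quoted above, and it stops matching whenever the site rotates its certificate):

    import hashlib
    import socket
    import ssl

    EXPECTED = ("61DB5132394761C4D43F8A9BAE72B02E"
                "B08DF3B5EDF2921C7B147E2F29308303")

    def cert_fingerprint(host):
        # Fetch the server's DER-encoded certificate and hash it.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest().upper()

    print(cert_fingerprint("f-droid.org") == EXPECTED)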

"...centralized servers and trademarks."

Of course you can't give a fork the same or a similar name as the original. You don't want malicious entities to create projects with names like "Signal Official Client" etc. Having a distinct name helps both the fork and the original.

A centralized server fixes a crucial issue: shitty designs that linger forever. It also fixes the issue of having to deal with backwards compatibility indefinitely. Moxie can actually see what versions are still deployed, and push updates to most users. The idea here being, you don't have to support older protocols (e.g. the group chat had a big issue that was or is currently being worked on), or implement backwards compatibility that risks downgrade attacks, etc.

Let me give you an example. Riot decided to go with stupid, stupid base64 public key fingerprints. What happens here is that the only way to jump to the smarter choice of base10 is if all clients switch at the same time. If one client shows the fingerprint in a different base, it's not compatible. Sure, you can add a feature that lets the clients negotiate which fingerprint encoding to use, but then you need to get that deployed to every client. This happens really slowly, and it must usually follow the waterfall model, with these things first being decided in future revisions of the Matrix protocol. And if you want to know how that will turn out, take a good look at the OpenPGP research group: since the SHAppening, they haven't even been able to agree on a new hash function for fingerprints. And once decided, that hash function will wait for years before the next revision of the protocol is ready. Then you wait for it to be implemented in upcoming reference libraries and forks of those. And then you wait for them to be deployed in clients. Moxie changed all users' fingerprints from Base16 to Base10 -- my guess -- within a week by pushing the update. The advantage of agility is obvious.
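
The encoding point itself is trivial - same digest, two renderings - but every client has to agree on one of them before users can compare fingerprints at all (illustrative only):

    import hashlib

    digest = hashlib.sha256(b"some public key").digest()
    base16 = digest.hex().upper()[:32]
    base10 = str(int.from_bytes(digest[:10], "big")).zfill(25)
    print("base16:", " ".join(base16[i:i+8] for i in range(0, 32, 8)))
    print("base10:", " ".join(base10[i:i+5] for i in range(0, 25, 5)))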

"But we have to trust that Moxie is running the server software he says he is."

For content encryption, we absolutely don't have to trust him. For metadata, yes, we must trust that the server runs the version that only collects the registration date and some other minor detail, I forget. If you want to remove metadata, use Ricochet or Briar. Because Signal isn't lying about being anonymous by design, the only thing I think we can agree on is that it should be stated clearly on their front page: "End-to-end encrypted, but not anonymous; we know your phone number and IP address, and can see who you talk to, when, and how much".

"We can stop Signal from knowing when we’re talking to each other by using peer-to-peer chats."

Yes, but that doesn't prevent global passive adversaries from seeing who we connect to directly. In some authoritarian country, the government could see Alice and Bob talking to each other. With a centralized design, they only see a connection to the service providing domain fronting, or a connection to the Signal server at most. If you really wanted to solve this, you would run Ricochet or Briar.

Federation is a horrible idea. I trust that they are not interested in my metadata personally. I won't trust the metadata of all my chats to a friend of mine who runs a personal instance of Signal Server. He watches porn on that same computer. He downloads Russian game cracks to that computer. He has friends who are my enemies and vice versa. He has repressed personal grudges, reasons to fuck me over; he doesn't have $50M in foundation money (and he'd prefer $5k over our weekend hang-outs, which admittedly are getting boring) or a strong cypherpunk ideology to prevent corruption. He's a Chinese refugee who has relatives he loves in political prisons, waiting to hand out their organs to rich members of the political party, and he's being extorted for my metadata on his computer. His computer isn't patching itself automatically, so there was an RCE vulnerability that got him compromised by our common adversary. He clicked on the wrong link, once. The number of threats is endless.

A federated system doesn't distribute risk across hundreds of operators; it increases the attack surface tremendously while merely reducing the number of targets whose metadata is compromised at any given moment. But I don't care about others, I care about the fact that my friend doesn't have as good security as Google and the Signal devs. Government agencies are really, really, really, really good at hacking, and the trend is towards mass hacking. Having shitty servers makes that free, because you can use exploits that should already be useless due to system updates.

"Federation would also open the possibility for bridging the gap with several other open source secure chat platforms to all talk on the same federated network -"

Yeah, let's talk about that. Currently many Matrix channels lack end-to-end encryption because there is a backdoor: an IRC-bridge bot that leaks all conversations to a non-end-to-end-encrypted environment. Like you said, "Tradeoffs are necessary - but self-serving tradeoffs are not." The possibility of having bots is extremely dangerous. The fact that Matrix isn't end-to-end encrypted by default is horrible. The E2EE is in beta, and the fingerprint verification in clients sucks. For the past three years I've been complaining about this, and every time there is a developer assuring me this will be fixed. This bug should never have existed in the first place. Now users have become accustomed to having bridges to insecure systems.

"but those are all really convenient excuses for an argument which allows him to design systems which serve his own interests."

You should not make such generalized defamatory claims if you want to be taken seriously. I took this seriously at the start, but your arguments really lost their traction. It was another badly-thought-out post that didn't show understanding of the design choices, and it hurt more than it helped: people might now switch to the less secure Matrix protocol. Or they might even go with unaudited Tox, designed by non-experts.


I stopped reading after:

All your suggestion does is, it adds a layer or two where we hope the NSA doesn't compromise them in case you'd want to use that chain to install and validate Signal.

You can't possibly think that compromising the full infrastructure of MIT or F-Droid, a noisy criminal act with serious repercussions against the perpetrators, is in any way comparable to a MITM against a suspect.

That's like saying "Ok, North Korea has some small nukes, but if they really want to get serious about a nuclear attack, they can always penetrate the White House and steal the nuclear football from under Trump's ass".


TL;DR he doesn’t trust Signal because he doesn’t trust the Android operating system, and something about federation.

> No doubt these are non-trivial problems to solve. But I have personally been involved in open source projects which have collectively solved similarly difficult problems a thousand times over with a combined budget on the order of tens of thousands of dollars.

Shut up and code then. I’ll personally review your fully decentralized and secure chat app which nobody uses because it’s not available on any app store. Let me know when it’s done.


The author expressly endorses Matrix.


And which operating system does the author expect people to use Matrix on? The one he personally wrote and reviews every commit to? Does he expect everyone to only chat on "trusted" open-source desktop operating systems and not their phones?

The F-Droid argument is a really empty one. Packages are cryptographically signed? Are you verifying those signatures? In an article about "trust", can you explain how exactly you trust F-Droid packages and not Google Play ones?

What about iOS?

The whole article is extremely vapid and lacks any compelling argument. Signal has introduced state-of-the-art encryption to millions of people in an accessible way.

The author goes on to poke fun at the animated GIF feature of Signal as if it is a waste of time compared to working on an F-Droid distribution, but neglects to address five of the seven points written by Moxie (which he links to) about why they chose not to do that.


> In an article about "trust", can you explain how exactly you trust F-Droid packages and not Google Play ones?

It's easier for Google to manipulate a package on Google Play than on F-Droid.

> Signal has introduced state-of-the-art encryption to millions of people in an accessible way.

WhatsApp has done that to even more people, so what's the point of Signal?


> It's easier for Google to manipulate a package on Google Play than on F-Droid.

That's not how Android app signing works. It's a "trust on first use" model, so once you install an app, any update must be signed by the same key or the system will refuse to install it. That key is held by Signal, not Google, so Google cannot sign updates to apps.

> WhatsApp has done that to even more people, so what's the point of Signal?

You know who implemented the end-to-end encryption in WhatsApp? The people behind Signal. But compared to Signal, WhatsApp provides a lot more metadata to the server, and it's owned by Facebook, a company not commonly associated with guarding your privacy. Both have their pros and cons, and both have legitimate reasons to exist.


> That's not how Android app signing works.

Google has root, so it can change how app signing works at any time.

> Both have their pros and cons, and both have legitimate reasons to exist.

It just looks to me like whenever someone mentions a pro of Signal, it's something where WhatsApp shines even more (e.g. "brings end-to-end encryption to the masses", "it's available on the app stores", "better iOS support than something like Tox", ...).


> Google has root, so it can change how app signing works at any time.

This argument boils down to "what if the government compels Google to infect your phone with spyware", and it's already been established elsewhere in this thread that using a smartphone for sensitive communication might not be the best idea if the NSA is after you.

Regarding advantages of Signal over WhatsApp, I just gave you one in the message you replied to. WhatsApp provides a lot of metadata to their servers (this is necessary for some of their features, like group invite links). And while end-to-end encryption protects the contents of your messages, Facebook can still observe when and how much you chat with whom.

Some further advantages off the top of my head: Signal has reproducible builds (on Android), and the Desktop client works when your phone is off. They're also working on private contact discovery (I don't know how far this has progressed): https://signal.org/blog/private-contact-discovery/


Also, the whole “play services are running as root” thing, while repeated often, appears to be just not true: https://news.ycombinator.com/item?id=17727705


> That's not how Android app signing works. It's a "trust on first use" model, so once you install an app, any update must be signed by the same key or the system will refuse to install it. That key is held by Signal, not Google, so Google cannot sign updates to apps.

My understanding (and I'd love to be corrected if I'm wrong) is that Google Play Services run with access to root privileges, and thus can circumvent this protection without the user becoming aware.


"It's easy! Just just generate a 2048 bit PGP key using this command (make sure you don't use the default insecure options) and then mail it through the post to anyone you need to communicate with."



Is there a reason to prefer Telegram over Signal, or vice versa?


Telegram doesn't have end-to-end encryption by default (only in secret chats).

That's all you need to understand to know that it's an inferior product in comparison to Signal.


Telegram's encryption algorithm is also homebrewed.

https://gizmodo.com/why-you-should-stop-using-telegram-right...


It's also the case you can't use Telegram's end-to-end encryption on desktop clients at all.


Telegram doesn't use end-to-end encryption by default and likely never will. Pavel Durov has been quite hostile towards that feature in the past. So yes, choose Signal over Telegram.

I find that instead of trying to convince friends/family to use Signal I just tell them "use Signal as your default SMS app" or install it myself for them. This tactic worked well in the Internet Explorer/Firefox and then Chrome transitions, so I don't see why it can't work again.

Then whenever I'm chatting with them, it won't matter if they try to send me an SMS - as long as I use Signal too, their messages will be encrypted, since they will be sent as data messages. I wouldn't trust the default SMS app/Android Messages not to steal your SMS texts anyway, especially on Chinese phones.

Also, I can't wait until the Matrix-based Riot gets its redesign:

https://medium.com/@RiotChat/a-sneak-peek-at-a-whole-new-rio...


The problem with that is it assumes they will always have internet connectivity on when you want to contact them, which I've never found true even for the few people I regularly communicate with over Signal. Most people turn off data and only turn it on a few times a day to check stuff. Older people (like family) barely know what the internet is, or what the button for it does, and just expect communication to work like it always has: SMS and phone calls. But if people can make the above approach work, good for them. I can barely convince anyone, even tech people from work or friends, to install yet another chat client/service. Most are just fed up and tired of the whole thing.


Which country are you speaking of where people turn off data (to save money presumably)?

Signal used to be TextSecure, and there is a fork which still supports encrypted SMS.


To save money, save battery, and potentially to stop weird apps from abusing internet access in the background. I've heard and seen so many justifications for it. I barely know anyone who has internet on 100% of the time. Somewhere in Europe ;)


This is good advice.

I would add, for those following it, that this advice is likely most useful if your family members are on Android devices.

iOS does not allow you to change the default SMS application and if your family member is on iOS, the iMessage platform lock in is nearly impossible to break out of.


I've convinced quite a few acquaintances using iOS to switch, or to use both.


Apart from not being encrypted by default, Telegram uses its own homegrown crypto instead of a tried and tested one for its secret chat feature. That itself is a red flag.


Technically, Signal also uses homegrown crypto.

The difference here was that it was endorsed by Moxie's acquaintances from crypto circles, followed by a very loud and aggressive disparaging campaign against Telegram led by some of these people. I've been on the metzdowd list for a very long time, and while cryptographers aren't the chummiest people in the slightest, there's always an underlying mutual respect. The Telegram bashing was the first time I've ever seen genuinely vicious behavior and hate displayed towards a project that didn't even remotely deserve it. Almost as if the goal was just to bury the competition.


You conveniently left out the fact that Telegram has a history of actual backdoors: http://habrahabr.ru/post/206900/


This is completely unrelated to what I said.


>The difference here was it was endorsed by Moxie's acquaintances from the crypto circles

You suggest that this is the major difference between Signal and Telegram, but that's dishonest at best.


Upvoting; I have the same feelings about Telegram vs. Signal. Both are man-made things, and maybe Telegram hasn't had its crypto audited yet, but whenever I see something related to IM + crypto, people praise Signal and write hateful things about Telegram, as if people were just waiting for Telegram to be mentioned on HN so they could pile on.


Telegram is (with default settings) a cloud service where your messages are stored without time limits. Telegram can't really be compared to Signal/WhatsApp because they work in totally different ways (transient e2e-encrypted delivery vs. encrypted cloud storage with keys in various countries).


Telegram has a history of obvious backdoors; everyone should avoid it. http://habrahabr.ru/post/206900/

Some people may try to blame this on incompetence, but they still won't be able to explain how exactly someone might arrive at this implementation without malicious intent.


Would you stop posting this in every Telegram thread?

I personally explained to you why this is complete nonsense as far as conspiracy theories go [1], as did several others. Give it a rest already.

[1] https://news.ycombinator.com/item?id=17195956


You call this complete nonsense, but you never explain the existence of this backdoor.

If you actually stop for a minute to consider what's happening here, it's pretty difficult to dismiss Telegram's choices as mere incompetence.

Do you feel personally insulted by comments criticizing Telegram?


> you never explain the existence of this backdoor.

I certainly do.


Telegram doesn't federate either, but at least they allow forks to connect to their main servers. And it's also available on F-Droid: https://f-droid.org/en/packages/org.telegram.messenger/


I think not, because I've seen many voices saying they don't trust Telegram either. So maybe Matrix is the compromise.


I use Telegram, mainly because the UI is far superior. It also allows bots and has a desktop client.

Tried to use Signal, but it was buggy and slow in comparison. I also trust Telegram much more than Facebook/Whatsapp.


Google, APK ... if you're concerned about security, you would use Apple iOS only.


Apple has root access to every device just like Google has to Android phones running Google Play Services.


Security-wise, Apple iOS is superior to Android in every possible aspect. Forensics people never complain about how hard it is to do Android, never :)


Great. Nobody cares.

There are way more Android users than iPhone users. If your goal is to improve security for the most people possible, you make an Android app.


> Security wise, Apple iOS is superior in any possible aspect to Android.

One aspect where Android is superior is that more of it is open-source.


"Google Play Services is a proprietary background service and API package for Android devices from Google" [0] - I don't know a phone running pure AOSP without any proprietary code.

[0] en.wikipedia.org/wiki/Google_Play_Services


> more of it

not all of it


If you're concerned about security, you don't use a smartphone and don't use your phone as a computing device. I know it's a lot to ask of people, and so most will simply ignore the massive, unfixable (at least yet) security and privacy issues (i.e., five years of your location, 24 hours a day, tracked, stored, and sold to whoever wants to pay).

But really, smartphones are bad. Even dumb cell phones are bad. And if you care about security you'll use a real computer without a closed, bug-ridden (and maybe even backdoor-ridden) baseband modem that can read/write your entire user OS RAM.

All this bikeshedding about the minor security issues on top is a waste of time. That Signal requires a phone number to use is all you need to know. It's not secure.


Totally agree, and I will use a real computer once I find pants with a wide enough pocket for it :)


My everyday solution is pretty simple. I don't carry a computing device with me. If I do carry a cell phone it's turned off.

If I'm in my car I use an old ThinkPad laptop and my own 900 MHz wireless network, built with Ubiquiti transceivers and a 30 ft pole on my house at home, with a custom antenna sticking up out of my vehicle's sunroof. The car's kit re-serves this IP link to my home network over wifi to any devices in the vicinity of my car, e.g. when I park it in front of my friend's house.


Hmm, have you thought about the Intel Firmware Support Package (FSP) and Intel ME, which are like the baseband on mobiles and can likewise read/write the RAM of the entire OS installed on your ThinkPad?

https://libreboot.org/faq.html#intel

Point is, there are no pure platforms today, except perhaps some Marvell ARM boards with fully open-source firmware.


I have, quite a bit. Luckily a lot of the Intel chipsets can be mitigated with me_cleaner. Additionally, they don't have raw network access, since they have to go through my network (and firewalls, transparent and edge) before getting access to anything that's not mine.

The problem with baseband modem CPUs and their insecurity is that they are the connection, and you literally can't watch or mitigate them.


Here you go: https://pyra-handheld.com/boards/pages/pyra/

Real computer with 4G voice and data, open source, isolated baseband, runs Debian, qwerty keyboard, size of an original Gameboy.

Only catch is some drivers require blobs. And it's not out yet. Any month now...


[flagged]


I don't think that is a fair summary of this critique at all.

The author has laid out specific, actionable items, such as putting Signal on F-Droid and allowing federation. Maybe he doesn't like Moxie, but it's not so simple as attacking his character. Instead, this is a reasonable summary of steps Signal can take to win his trust (or, to give him a true impression that he needn't trust it).


I think it's fair.

He accuses Moxie of being deceptive and insincere:

"It can be hard to distinguish these from genuine positions held by the person you’re talking to, but when it conveniently allows them to make self-serving plays, it’s a big red flag."

He then admits it's "a strong accusation". Later he dismisses Moxie's reasonable stance re: federation with "Moxie can write as many blog posts which appeal to wispy ideals and “moving ecosystems” as he wants".

That's not a reasonable critique, that's pure vitriol.

I would love to see federation in Signal, but I can understand why OWS decided otherwise. The author is not able to distinguish between "someone has weighed the issue differently than I would have" and "he's doing that for his own personal gain".


[deleted]




That's a joke, right?


Does Signal work in foreign countries, like South America and the Middle East, for example?


Signal immediately asks for your phone number. Dead giveaway that they are not about privacy. So I assumed it's a honey trap.


Most of these services aren't allowed to grow (i.e. they aren't heavily invested in by the people with actual money) if they don't have some form of data mining or things like 'oops, we facilitated key generation and kept all the keys', etc.

If you want to communicate securely, either be smart about it outside of the app you choose (encrypt or encode with your own keys/tools, so the app is just a medium of transfer), or create your own secure channels (not too difficult these days with good, vetted open-source implementations of crypto on multiple platforms...).
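
As a sketch of the first approach, here's the pattern with PyNaCl (the libsodium bindings); a hypothetical illustration, assuming you exchange public keys out of band, so the messenger in the middle only ever carries ciphertext:

    # "Bring your own crypto" sketch using PyNaCl (libsodium bindings):
    # exchange public keys out of band, then treat any chat app as a dumb pipe.
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts to Bob (authenticated, so Bob knows it came from Alice).
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # Paste the ciphertext into any messenger; only Bob can decrypt it.
    assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"

(Note this toy lacks the forward secrecy and deniability of a protocol like Signal's; it only illustrates the "app as a medium of transfer" idea.)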

I would say anyone who fully trusts any of these apps, and is worried about their privacy, is contradicting their worries with their behaviour.

Just google 'signal vulnerabilities', or the same for any of these other apps... even if they have some good form of architecture, it's riddled with bugs. People can access your data. Live with it, avoid it, or make the actual data incomprehensible to any 'Eve' yourself instead of trusting someone else to do it for you.


Want to lose all faith in Signal? Try filing a bug fix as a pull request.

- It will probably be ignored forever or shouted down.
- Want to notify the mailing list? Guess what: you have to join Riseup. I.e., if you want to file a patch to Signal, I hope you're OK with joining an "anarchist" mailing list.
- Then, once you're approved to make noise on the mailing list, it still gets ignored, with no explanation.

One year later, I removed the obvious bugs in Signal's base64 implementation.


OWS's staunch refusal to permit anything other than phone numbers as identifiers should tell you everything you need to know about Signal.

It is an authenticated, nonrepudiable communications platform using identifiers that are very difficult (possible, yes, but most people will get it wrong) to comprehensively anonymize.

The ability to present nonrepudiable communications to a judge is precisely the wet dream of law enforcement officers, ambitious prosecutors, and despotic regimes everywhere. All they need to do is flip the people you're communicating with, and you're done.


Seriously, if Signal became decentralized and didn't require a phone number to use, I'd switch to it without hesitation. Call me a lunatic or whatever; all the court-related news and security analyses only make me feel that Signal is just another honey trap, or will become one eventually, because none of these positive reviews solves the trust issues that have existed for a long time. There are better models out there; they just don't want to apply them, and I can't stop asking why. You'd think they'd reconsider the options after so many users expressed their concerns, or at least provide multiple choices, but no, it's been years and nothing has changed.

I'll keep using Riot until then; even if it's less secure and less user-friendly, it's good enough for me.



