Instead, DeVault would prefer that you use Matrix, a system for which end-to-end encryption is (according to its own website) "in late beta", offered on a select subset of clients, and "not enabled by default"†.
This argument is clownish and we should be embarrassed it's on the front page.
There are people in the world who want to sysadmin their phones. It's a life choice they are free to make, and I don't hold it against them. But the vast, overwhelming majority of users do not want to make the app market on their phone work more like Debian and less like the Play Store. Signal, to put it bluntly, does not care about the desires of the phone sysadmins. Even if they caved to the sysadmins, the application would, for virtually all its users, be no more secure. This bothers DeVault a lot, enough that he's constructed an entire psychoanalysis of Moxie Marlinspike to explain to himself how it could possibly happen that someone else on the Internet doesn't agree with him.
Also, just as a note to DeVault: the point of end-to-end encryption is that you don't have to trust Signal's server. All it does is arrange for the delivery of messages, which are secured client-to-client. Compare Signal's server to Wire's, which --- last I checked --- retains a record of every pair of users who have communicated in the past.
† When this was pointed out downthread, DeVault responded: "[o]ther alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring". Telegram is a particularly funny reference to make, because not only is E2E not the default there, but --- last I checked --- it can't even do E2E group chat. Telegram's owners are adamant that TLS is adequate for group secure chat.
>Drew DeVault doesn't trust Signal because its Android incarnation uses the Google Play Store --- the app market virtually all of its real users use --- and not F-Droid
It should use both.
>the point of end-to-end encryption is that you don't have to trust Signal's server. All it does is arrange for the delivery of messages, which are secured client-to-client. Compare Signal's server to Wire's, which --- last I checked --- retains a record of every pair of users who have communicated in the past.
My point is that Signal could just as easily keep a record of every pair of users who have communicated. We can't be sure, because we can't run our own servers. I spoke about this in detail in the article.
>† When this was pointed out downthread, DeVault responded: "[o]ther alternatives (which I have not reviewed in depth) include Tox, Telegram, Wire, and Ring". Telegram is a particularly funny reference to make, because not only is E2E not the default there, but --- last I checked --- it can't even do E2E group chat. Telegram's owners are adamant that TLS is adequate for group secure chat.
Thanks for omitting all of the context which clarified that I hadn't researched them in depth and wasn't explicitly endorsing any of them, and the comment where I clarified that E2E encryption is enabled by default on Matrix.
I've read all of your comments in this thread to date and, as you can see, replied to some of them.
I feel like I have fairly summarized your arguments.
"It should use both", you say. Signal disagrees. That makes Signal evil, according to your argument. "That's not how the world works" is my rebuttal.
Signal could easily keep a record of every pair of users. So can every other mainstream chat application --- and several of them do. Signal doesn't. My reply on the subthread about this issue explains what Signal does differently here, and it's not "publish the source code of the server".
People can simply read your comment on the thread --- I made clear where the quote came from --- to see exactly what you said about Wire and Telegram and Tox and Ring. I'm satisfied that I've represented your argument well.
You're oversimplifying this. For the full rebuttal, refer to the article.
You cannot know this. We don't need to have this conversation in two places; I'll just link it for others who want to follow along:
>I'm satisfied that I've represented your argument well.
I don't think so.
>People can simply read your comment on the thread
Fair enough: https://news.ycombinator.com/item?id=17724300
Full disclosure: I added the text in the parenthesis and the second paragraph of this comment about an hour after it was initially posted.
You don't get to demand from strangers a debate on terms of your choosing.
I find such viewpoints rather disappointing because, as a sysadmin myself, I don't hold them. My threat model is "someone steals my phone" and "someone with less than $1M in funding tries to hack me". I don't particularly care that I don't know whether the sand that was used to make the silicon for my phone was properly sourced and audited for backdoors or plastic shovels.
I want me and my family to be reasonably secure against the background noise of the internet.
And of course it shouldn't suck the battery dry like some thirsty vampire who was offered a bag of O-negative.
For this task, Signal is fully sufficient (until another messenger does it better or Matrix fixes the long list of problems I have with them).
The F-Droid devs put a lot of work into reproducible builds. Not all software complies, but for anyone with an interest in information security there's no excuse not to.
That's the use case of F-Droid, and comparing it to self-publishing APKs without even so much as a GPG signature is so beside the point it borders on deceptive.
Signal has decided --- sensibly, I think! --- to focus on the needs of the "normie" users. DeVault disagrees with that decision. He is welcome to do so, but it was Signal's decision to make, not his.
It's far from something that warrants character assassination. Specifically, it's not something "clownish" that we should be "embarrassed" to have on the front page. We get the community we deserve.
Him saying that Moxie Marlinspike is untrustworthy because of a disagreement, and then urging people to use Matrix --- that's a clownish argument. And it is the bulk of his argument, paragraph by paragraph: all the reasons why the only way anyone could disagree with Drew DeVault is if they are sneakily trying to screw people over.
I think I'll stick with services that offer E2E and/or sensible feature-sets.
Once Matrix fixes all their problems I'll gladly install a homeserver and run it open for other users to sign up for, until then I'm on Signal.
Personally, I'd like to see Signal replace WhatsApp. That's why I support the path Signal took, and why I also have a distaste for the author's snarky dismissals of features like GIF search.
So the base argument holds, in my opinion: Moxie's main focus is keeping Moxie in control, not making Signal the best and most secure it can be.
So I also use Signal, but as soon as Matrix gets stable, I am gone.
Having multiple branded builds to choose from would be a terrible thing and would easily allow fake apps to gain traction.
> ... and if you rebrand he forbids you from using the official Open Whisper servers.
This seems pretty fair to me. Not only could third parties abuse their server resources, it would greatly hinder their ability to make changes and respond to protocol-level security threats. They aren't in the API business; controlling their ecosystem allows them to make forward progress without concern for third parties they have no control over.
The main point is, Moxie could take the wind out of the sails of literally every argument on this page by publishing Signal on F-Droid, but he just won't.
This alone is enough for me to lose trust in Signal.
For it to be on F-Droid. I think that much was clear.
Or are we just going by the author's ignorant or disingenuous (depending on how you interpret his words) statements?
If you are so concerned about state-level actors that the Play Store is untenable to you, Signal and Android on commodity hardware are probably not the solutions you want anyway.
Are there any identified, non-state-level-actor threats here, or is this just an ideological rant against proprietary software? If state-level actors are your concern, using Android means you have already lost.
It was posted elsewhere but here's Moxie's take: https://github.com/signalapp/Signal-Android/issues/127#issue...
Wtf. I have been using F-Droid for many years, and this has not been the case. As far as I know, this has never been the case, as Android has always had functions for third-party app stores. In fact, even today, F-Droid recommends not using root for installs, since then you don't get the screen showing permissions.
> allow third party code
that's called running apps.
tl;dr nice FUD.
This is what I do on LineageOS. I don't regularly install new apps.
Side rant: This marketer-driven "install an app for everything" is a threat to the open internet and privacy. Usually the only reason is to extract more personal info.
Already, young people barely use a web browser. That appears to be the future. Now get off my lawn or I'll start talking about the war.
Android could undoubtedly be stronger in this regard, and in permission control, firewall, ad blocking etc, but it's not going to happen.
Apps wouldn't be so bad if they were actually sandboxed properly, but yeah, they suck.
I was interested in Copperhead OS as an alternative, but it seems to have fallen into a greed induced mess.
I am arguing Play store is fine, and side loading is bad policy.
I argued that for every person who will take the time to micromanage permissions, thousands wouldn't.
So what are you talking about?
And you accuse me of making things up? "Allow third party code" is not called "running apps".
Most likely one of those, yes. Though on Android 8+ you can grant the allow-install-from-unknown-sources permission to F-Droid alone.
Also, both Copperhead and Fairphone Open ship with the F-Droid privileged extension by default, allowing you to keep that setting entirely disabled.
... are using a platform "you" don't trust.
Really? That's not really odd.
At least, it's not odd if that usage and what it entails is the defining part of the persona in question.
Here's a thought. If you are so concerned about the NSA that you think Google's cloud is a problem, why are you running the OS developed by Google?
I'm not, and I find that position naive. For the overwhelming majority of people who are not a cross between Bruce Schneier and Linus Torvalds, a threat model that tries to protect against the NSA and GRU and MSS pretty much requires avoiding anything with a network connection. If you have a smartphone, you should probably just use its default application store.
I can trust people that I think made incorrect technical decisions, because I can see that they made a decision for technical reasons and have different priorities and reasoned soundly.
I donated some money to them a while back. How hard could it be to push the binaries out to a second app store?
I tried to publish my open-source game on F-Droid, but the build process involves building native components with a specific third-party version of the NDK toolchain, as well as shell scripts to move files around, so it never made it to the store.
There are, I'm sure, apps that are better, and that's never been Moxie's goal. He's said over and over that he'd rather have encryption for the masses than the perfect messaging app. It seems disingenuous to assume that he's acting in bad faith when he's clearly doing exactly what he said he wanted to do.
If you want to make the perfect, self-hosted chat ecosystem, fire up that Matrix server and invite your non-tech friends to join. I'm sure that will work out incredibly well.
In the meantime, Moxie seems to realize that to accomplish his goal and make communication incrementally more secure for average users, he needs to go where the users are.
It's crazy to me that people still think that secure communication is a technical problem. We've had GPG for the competent for a long time. The hard problems in secure communication are about using the ecosystems that are available to large groups of average users and still being secure.
Am I missing something?
In this particular case, not likely. People who are into more secure communication do not randomly click on anything. They know what they are doing, or get it installed by people they trust. And if they don't, that's their fault, not Signal's.
And Signal can continue to work and introduce breaking changes whenever they want. They simply support only the official build of Signal. Anyone using anything different cannot complain if things stop working. (They will anyway, sure.)
And the resource abuse. Can this really be a thing? I don't know in detail how the protocol works, but what can I do with the servers that I can't do with Signal anyway? Send (encrypted) data from A to B.
I can already abuse that today, if I want.
I think you overestimate people. I told my wife to install Signal because she needed a password for something and it was way too complicated for her to remember. I know what the Signal app is and could likely avoid fakes; she would not. I think it is often the case that only one party to the conversation is security-minded, while the others just trust that person.
> And the resource abuse. Can this really be a thing?
You know NTP, the protocol for sharing what time it is? It gets abused badly. If you have an open service, there are ways it can be abused. This would without a doubt lead to DDoS-like resource abuse, where lazy clients don't cache things properly and just hog server resources. There are ways to limit things like that, but they aren't always simple. Also, like I said before, Signal isn't in the API business.
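To make "ways to limit things like that" concrete, here's a toy sketch of one common mitigation, a per-client token bucket. The class, rates, and numbers are illustrative assumptions for this comment, not anything Signal's servers actually run:

```python
import time

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full: a fresh client may burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # request passes
        return False                    # request dropped / throttled

# One misbehaving client hammering the server: 5 req/s sustained, bursts of 10.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# The initial burst of 10 passes; the rest are dropped until tokens refill.
```

A real deployment would key buckets by account or IP and tune the numbers, but the principle is the same: an open endpoint without something like this is an invitation to the NTP-style abuse described above.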
Refusing to limit users' freedom on the grounds that "they know what they are doing", at the expense of everyone else, maximizes the protection and freedom of a small group, not of society as a whole.
Even if that wasn't wrong, it would be a fatal limitation for a social app which relies on network effects. Even if you were actually super-humanly capable of not making mistakes you'd end up using the apps that everyone else you know is actually on.
That's not the sole market of people who would try Signal, though.
Are you paying him to do that? No? Well, there you go. It's more work, for what appears to be very little benefit.
> The Signal Foundation has 50 million dollars.
Sure, it doesn't allow them the flexibility they'd like to have to move forward, but in a way it won't be their fault if federated servers aren't keeping themselves up to date when there's a major protocol change and they get temporarily split from the pool.
As long as “email” is a thing, as in “just send me an email”, and it’s a federated set of randomly updated servers, “email” will never have end-to-end encryption, because the first version of SMTP didn’t have it, and the user will still expect to send messages to a server running that version.
Similarly, if “Signal” is going to be a thing, as in “contact me on Signal”, the entire network effectively has to operate at the level of the least up-to-date server — otherwise it’s not one network, and the product is therefore unreliable. But there’s no way to enforce that all the federated servers update themselves in any amount of time.
Signal is successful in large part because it provides complex functionality (secure messaging) in a package that "just works". Federation complicates that significantly.
Without emoji and animated gifs, I suspect 70% of my Signal contacts wouldn't use it at all. It's hard enough to convince some of my friends to use it at all, "Can't I just Facebook message you?"
For me, amongst my group of friends, it seems Moxie is making all the right security/usability tradeoffs.
If you don't trust PlayStore, it seems not much of a jump to say you also shouldn't trust Android.
If you're _rightly_ that concerned (and I'll note that Snowden recommends Signal, so I wonder what it is you're up to that makes you more of a nation-state target than him), I don't have a clue what your options are - I suspect they start with "don't use the internet at all"...
I don't know what it is you're talking about. The entire point of my comment was that I don't like animated gifs. They're distracting, bandwidth intensive, and could be easily replaced by a dozen better image formats.
And while I agree with you about animated gifs, I understand WhisperSystems' reluctance to become the force that pushes non-geek, non-privacy-activist users (which they and I are hoping will widely adopt Signal) toward WebP or flif or whatever, and away from Giphy or wherever else they're finding their reaction gifs and topical memes and funny cat-riding-a-roomba animations. That's how a _huge_ percentage of users want to communicate with each other. Moxie is trying to give them a secure way to communicate how they want to, not attempting to force them into new ways of communicating where boring justifications like "bandwidth saving" or "animation format technical merit" are the only reasons they can't have vast libraries of funny animations to send their friends... If they can't quickly reply with RuPaul doing fingersnaps in Signal, they'll go do it on Facebook instead.
For that reason, I'm of the opinion that _not_ supporting animated gifs is significantly more counterproductive, if you're trying to become "the secure messaging mechanism the whole world will use" or if (like me) you'd like more and more personal communication to be exclusively between the participants, and not include advertising networks and data miners and sentiment analysers (and, yeah, law enforcement and government bureaucracy)...
And let's assume something of this sort does happen in the future, and one day we realise it was indeed something nefarious. Then we will rue that we didn't act back when people were saying something was amiss.
There is one line in the article that says it well:
> Truly secure systems don’t require trust.
I have supported Matrix and Firefox among others (both in code as an Android dev and with modest donations - stopped using Firefox after Pocket). But no, not Signal. I'd wait for federation (if at all).
These all seem like reasonable permissions for the features available.
edit: Apparently, Signal does this for some things? See comment-replies.
Older Android versions only had the idea of the app declaring "I need to be able to use your Camera, read your Contacts, and make $$$ phone calls" and then you pick "No" and don't get the app or you pick "OK". This more or less railroads users into pressing "OK", except for the most security conscious, who go without the app.
A few releases back Google had an unofficial feature that let you switch off features an app had, and it would get some dummy replacement, e.g. if it had Contacts access but you switched that off, it would see no Contacts at all. If it had Camera access, but that was switched off, it would always be told your Camera was busy in another app. Once word about this hidden feature got out, Google disabled it.
Recent releases (Certainly on my Nexus 5X for example which is a while back) enable an app to ask at runtime. If you said "No" the app gets a second chance to explain itself, and then if you keep saying "No" the feature is just disabled and Android stops prompting you. The app might not work after that of course. Like the disabled older feature, the Settings pages for apps let you undo previous authorizations, again this may make certain apps malfunction - a map app with no GPS is merely crippled, but a "barcode scanner" with no Camera access is junk.
However, apps built for an older phone of course don't prompt (the older Android can't handle it), so for them you still have to make the decision at install time.
@mrguyorama - this means you can force the old landgrab user-hostile permissions to people running old Android versions, but you cannot force them onto users running Android 8.
Traditionally, in Signal that process has looked like:
The client calculates the truncated SHA256 hash of each phone number in the device’s address book.
The client transmits those truncated hashes to the service.
The service does a lookup from a set of hashed registered users.
The service returns the intersection of registered users.
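The four steps above can be sketched as follows. This is a toy illustration with made-up phone numbers and a made-up truncation length, not Signal's actual code:

```python
import hashlib

def truncated_hash(phone_number: str, hex_chars: int = 10) -> str:
    """First `hex_chars` hex digits of the SHA-256 of a phone number."""
    return hashlib.sha256(phone_number.encode()).hexdigest()[:hex_chars]

# Steps 1-2: the client hashes every number in the address book and
# transmits the truncated hashes to the service.
address_book = ["+15551234567", "+15557654321", "+15550000000"]
client_hashes = {truncated_hash(n) for n in address_book}

# Step 3: the service looks them up in its set of hashed registered users.
registered_users = ["+15557654321", "+15559999999"]
registered_hashes = {truncated_hash(n) for n in registered_users}

# Step 4: the service returns the intersection -- the contacts who
# are registered -- without the client ever sending raw numbers.
matches = client_hashes & registered_hashes
```

The obvious weakness is that the space of valid phone numbers is small enough for a server to brute-force the preimages of those truncated hashes, which is part of why Signal later built the SGX-based private contact discovery mentioned elsewhere in this thread.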
Then he gripes that the posted APK has to be manually checksummed to use it. If you are truly paranoid, trusting a checksum you get from the same page you get the binary from is as secure as ignoring the checksum altogether. But why would you trust a hidden signature process you can't see any more? How do you know your F-Droid binary was secure?
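For what it's worth, the mechanics of checking a checksum are trivial; the hard part is exactly the point above, namely getting the expected digest over a channel the attacker can't also control. A minimal sketch (the temp file and its digest are stand-ins, not a real Signal APK):

```python
import hashlib
import tempfile

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 so a large APK never sits in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# `expected` must come from somewhere the attacker can't edit along with the
# binary -- e.g. a signing key you already trust, not the download page itself.
# (This value is the SHA-256 of the ASCII bytes "hello", used as a stand-in.)
expected = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

# Stand-in "download": a temp file whose contents hash to `expected`.
with tempfile.NamedTemporaryFile(delete=False, suffix=".apk") as f:
    f.write(b"hello")
    apk_path = f.name

assert sha256_file(apk_path) == expected, "checksum mismatch: reject the APK"
```

If both the binary and `expected` came from the same compromised page, this check passes anyway, which is the whole objection.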
But worst of all is this pointless assertion: "Truly secure systems don’t require trust."
There are no truly secure systems. Malicious actors could replace your Matrix app with a lookalike clone. Your phone could have a hidden keylogger built into the OS. Or the hardware. The person's phone on the other end of your communication could have been compromised. You could be being monitored by all sorts of undetectable means.
Perfect security is an unattainable goal, but good security requires acknowledging and enabling trust to play a role in the protocols and systems we develop.
We have at least one data point that says that Signal stores exactly two integers about you, or did when the subpoena was issued: https://www.aclu.org/open-whisper-systems-subpoena-documents
things can always change, but that’s evidence submitted in court under the penalty of perjury, which is a fairly strong claim.
The interesting thing is that I Ctrl+F'd this page for Wire and found nothing, even though this comment is about the thing that made me switch from Signal to Wire: to date, it's the only instant messenger that has open-sourced both the server and the clients. (OK, the article says the same about Matrix.)
I admire Wire for a number of reasons, but certainly open-sourcing all their code is one of the main ones. (The other is... Haskell! And also Rust.)
And just to point out: Wire not only uses the Signal protocol, they bug-fixed the library implementation of it. And their web interface is very good!
Oh, yes... And they are not based in the USA.
EDIT: I am not affiliated with Wire, but just a happy customer. :)
Signal's server code is open source as well: https://github.com/signalapp/Signal-Server
And apparently the client can verify that the server is running that code: https://signal.org/blog/private-contact-discovery/#trust-but...
Besides which, there are well established ways to get a Signal number online, like Google Voice or VoIP telephony companies.
I mean the unfortunate reality of chat programs is that there are so many that when I'm having weird problems I'm not gonna spend time opening issues on GitHub and sending logs; I'll just go back to what works. That's even more true for my non-technical friends.
In my case, not Wire but Signal. My Signal contact list is exactly two people long. Trying to get people to move from WhatsApp to anything else is hard, especially if some of them already installed Threema a few years back when Facebook bought WhatsApp.
But at least two people. With one of them I was regularly conversing on Signal, so much that she even preferred it to WhatsApp.
One day she messaged me on WhatsApp again. She said Signal had failed to notify her of my messages a few times now, and she was fed up.
I never had that problem. Maybe she misunderstood or misconfigured something. But it doesn't matter. Signal is dead to her.
It's really easy to lose perspective as a developer how much this usability stuff matters. Sure, we all pay lip service to it, tell each other how bad the UI on some tool is and fake humility about our missing sense for anything design (including UI design).
But we don't really get it.
You are spreading a lot of incorrect or misleading information about Signal in this thread. That makes it difficult to assume that you're arguing in good faith here.
I don't know what I could say to convince you I'm just an ordinary person concerned about my privacy, but ultimately it doesn't matter: you should definitely consider the possibility that I'm a bad actor and take nothing on faith. Equally, you shouldn't trust that Marlinspike hasn't been compromised either.
A little thought experiment: Put yourself in the NSA's position in 2013. GPG has been out there for years and, despite your best efforts, you can't break it directly when users follow proper security practices. (You have to compromise those users' computers instead, and that's vastly more expensive; every time you use one of your rootkits or exploits you run the risk of burning it, so they're reserved for high-value targets). The world is suddenly a lot more interested in privacy, and while popular culture doesn't grasp the intricacies of key exchange or forward secrecy, there are enough cryptography experts around that any obvious downgrade from GPG will be noticed and picked up on (this is just after the conclusive failure of your Dual_EC_DRBG efforts). What do you do? How do you get the public to accept something easier to compromise?
My answer is: you find a different front to attack GPG from. You talk up different kinds of attackers. You dangle a new, desirable security property that GPG doesn't have, and a theoretically clean construction - and then you compromise the metadata subtly, down in the weeds of usability features, letting you identify the higher-value targets. You get people used to using a closed-source build that auto-updates, and have a canned exploit ready (a compromised PRNG or similar) to use on those targets. And you get people to enter their phone numbers so that you can always track their location and what hardware they're running if you do have to attack their device more directly.
Maybe I'm being paranoid, but it seems distinctly odd that we see such a push behind an app that compromises so many features that were previously thought essential to security, just as the move for encryption is finally gaining momentum.
Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.
But what's the threat model where Signal makes sense? For a less-than-nation-state attacker, basic TLS as virtually all messengers support is surely adequate. For a nation-state attacker, phone-number-as-ID is a bigger vulnerability than anything Signal helps with, and central servers means that Signal can simply be blocked outright in any case. If we're talking about, say, Turkey cracking down on protesters, they would probably rather those protesters were using Signal (where arresting one means you get the phone numbers - and therefore locations - of all their friends) than the likes of Facebook or Discord or what-have-you.
> Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.
I'd be fine with that if Marlinspike didn't also trash-talk those more secure tools.
Signal is a vast improvement over SMS, plaintext email or any commercial messaging application, but it's no more difficult to use. It's relatively foolproof, in that user error can't fatally undermine the security model in most cases. It's not perfect, but it's easily the most secure chat app that I could confidently persuade non-techies to actually use. A highly secure app that you don't know how to use offers you no security at all.
Indeed (particularly as telecoms are often state-owned in those regimes), which is what makes phone-number-as-ID such a bad idea.
> use dodgy certificates to undermine TLS
Difficult in these days of certificate transparency and HPKP.
> bribe or coerce corporate actors
If that's your worry surely you want to rely on a big corporation rather than Signal. Look at e.g. Brazil having to block WhatsApp entirely because Facebook wouldn't play ball with them. Facebook has deep pockets that mean they can afford to do that kind of thing.
> Signal is a vast improvement over SMS, plaintext email
> or any commercial messaging application,
Not convinced that there's a significant improvement here. Plenty of commercial messaging applications have encryption. If the server is under an attacker's control then you're vulnerable, but I'm not convinced that isn't the case with Signal too.
I don't see how federation is related to this at all. We know you're bummed about it, you don't need to inject it into every subthread.
Just the code inside the enclave and that's a very small amount of code, definitely not the entire server.
SGX alone cannot solve this problem. Even in the idealized case, you can sniff traffic on the router to find out which user IPs are talking to each other and when.
>Even in the idealized case, you can sniff traffic on the router to find out which user IPs are talking to each other and when.
>I did: I was on the review board that made the decision to accept it for Black Hat.
Probably makes you more qualified to talk about SGX than me, so yes, I concede that the paper may not be relevant because your understanding of it is probably better than mine.
With that, I asked you a more fundamental question, given that you are knowledgeable about this and may be able to provide an answer.
I will Google-translate it for you (ironically):
> "10/06/2017: Wire.com operational Security
> Wire.com is referred to as a new star among crypto messengers. I briefly looked at the (experimental) Linux version of Wire.com and found some significant security flaws:
> Manning's Bug: Wire.com has good end-to-end encryption based on Axolotl. But the chats are all logged unencrypted (!) to the computer's hard disk. The logging cannot be switched off.
> The unencrypted storage of encrypted communication is not a bug but an epic FAIL!
> Access data (account name, password) for the Wire.com account are also stored unencrypted somewhere on the hard drive. On startup the user does not have to authenticate at the screen, but is automatically connected to all accounts.
> This is not a bug but a FAIL!
> The HTTPS encryption of the contacted Wire servers app.wire.com, prod-assets.wire.com and prod-nginz-https.wire.com does NOT meet the BSI's requirements for secure HTTPS encryption. These servers are Amazon cloud servers and CloudFront servers (not their own infrastructure).
> DANE/TLSA and HPKP are NOT used to validate the SSL certificates of HTTPS connections. In addition, no CAA record is defined in the DNS, which has actually been mandatory for HTTPS for a month. The security of the transport encryption between client and server thus does not correspond to the achievable state of the art.
> Potent attackers could use fake but valid SSL certificates to man-in-the-middle the communication to the Wire servers and, in combination with remote code execution, possibly also attack the end-to-end encryption (assumption!). And there are enough potent attackers who want to attack the encryption of crypto messengers.
> The domain wire.com is not DNSSEC-signed.
> Instead of the privacy-friendly OCSP stapling, OCSP GET is used, and several CAs are contacted to verify SSL certificates via OCSP. OCSP GET can be easily tricked, as M. Marlinspike demonstrated in 2009.
> The client contacts third-party servers that are not under the operators' control (maps.googleapis.com, images.unsplash.com) to download things.
> Conclusion: After a small, superficial test, the operational security of Wire.com is not (yet) suitable for security-critical applications. In particular, whistleblowers should learn from Manning's example and not use it.
> Disclaimer: this is NOT an audit but a short test of the Linux version."
For people wondering what they are:
> "The only information responsive to the subpoena held by OWS is the time of account creation and the date of the last connection to Signal servers for account [redacted]. Consistent with the Electronic Communications Privacy Act ("ECPA"), 18 U.S.C. § 2703(c)(2), OWS is providing this information in response to the subpoena."
Their response to the subpoena then goes on to object to its overly broad scope, which asked for things that require a court order or a search warrant. They also object to the scope of the nondisclosure order included in the subpoena.
Edit: Yes, apparently they have a method of doing private contact discovery and, IIUC, even a method for the client to verify that the server is running the source code they expect: https://signal.org/blog/private-contact-discovery/#trust-but...
Theoretically it might be possible for a sufficiently paranoid client to cover all the bases. But it's certainly a huge attack surface.
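To make concrete why naive contact discovery leaks (and why Signal went to the trouble of enclave-based attestation in the linked post), here is a toy Python sketch. This is not Signal's actual protocol, just the obvious "upload hashes of your contacts" scheme and the reason it fails:

```python
import hashlib

# Toy sketch (NOT Signal's actual protocol): a client uploads hashes of
# contacts' phone numbers so the server can match them without seeing
# the numbers in the clear.
def hash_contact(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

uploaded = hash_contact("+15550001234")

# The problem: the space of plausible phone numbers is tiny by
# cryptographic standards (~10^10 for US numbers), so the server can
# precompute hashes for every number and invert the "anonymized" upload.
# Here we only brute-force the last digit for brevity:
for last_digit in range(10):
    candidate = f"+1555000123{last_digit}"
    if hash_contact(candidate) == uploaded:
        print("server recovered:", candidate)
```

Because hashing alone doesn't hide a low-entropy identifier, the hard part of private contact discovery is constraining what the *server* can do, which is exactly the attack surface being discussed.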
Any information the Signal client reveals to the server is, indeed, something the USG could make a legal claim on, probably even if Signal's server code doesn't currently collect it. The Signal team has been sounding that alarm for years: when you look at a chat program with fancy whiz-bang features, consider what those features expose (even traffic-analytically). "Maybe have fewer features until we figure this out", Signal says.
The market does not agree, and that has been rough for Signal, and they deserve credit for the stand they are taking on it.
This is beyond the legal authority of an NSL.
If the legal authority can't be challenged in public, how are you so sure this legal authority hasn't been skirted plenty? In an opaque system, what they can and can't do is only theoretical. Only sometimes are things disclosed well after the fact and often only in aggregate (such as numbers about how many NSLs are greenlit). This is what the author means by no trust required. They can't be asked to subvert it, even via extralegal means.
So, you have to assume they'll always get more power and surveillance over time via secret orders if there are no consequences for demanding it, while people on the other side can be massively fined or do time for refusing. Organizations about privacy protection simply shouldn't operate in police states like the U.S.
 "The funding allocated for Bullrun in top-secret budgets dwarfs the money set aside for programs like PRISM and XKeyscore. PRISM operates on about $20 million a year, according to Snowden, while Bullrun cost $254.9 million in 2013 alone. Since 2011, Bullrun has cost more than $800 million." ( https://www.ibtimes.com/edward-snowden-reveals-secret-decryp... )
It is at odds with known cases, such as the fight with Apple over iPhone encryption.
Now, the first indication this isn't true was Alexander and Clapper saying they didn't collect massive data on Americans. If they did, they could've solved a lot of cases by your logic of action vs. capability being contradictory, right? Yet the Snowden leaks showed they were collecting everything they could: not just metadata, not just on terrorism, and they were sharing it with various LEOs. So they were already lying at that point to hide massive collection, even if it meant crooks walking.
Next, we have the umbrella program called Core Secrets. See Sentry Owl or "relationships with industry." It says Top Secret, Compartmented Programs are doing "SIGINT-enabling programs with U.S. companies." In same document, even those with TS clearance aren't allowed to know the ECI-classified fact that specific companies are weakening products to facilitate attacks.
For Lavabit trial, see Exhibit 15 and 16 for the defense against pen register. Exhibit 17 makes clear the device they attach records data live and claims constitutional authority to order that. They claim only metadata but they lied about that before. Exhibit 18 upholds that the government is entitled to the information, Lavabit has to install the backdoor, the court trusts FBI not to abuse it, and they'll all lie to Lavabit customers that nobody has access to their messages (aka secrecy order about keys).
That the judge asked for a specific alternative was hopeful, though. I came up with a high-assurance, lawful-intercept concept as a backup option for the event where there was no avoiding an intercept but you wanted provable limitation of what they were doing.
They regularly hide what techniques they have via parallel construction or dropping cases.
So, you now have that backdrop where they're collecting everything, can fine companies out of existence, can jail their executives for contempt, are willing to let defendants walk to protect their secret methods, and constantly push for more power in overt methods. In the iPhone case, even Richard Clarke said he and everyone he knows believed the NSA could've cracked it. Even he, previously an ardent defender of the intelligence community, says the FBI was trying to establish a precedent to let them bypass the crypto with legal means in regular courts.
So, the questions would be:
(a) can they already do that legally or technically using methods like attaching hardware and software to vendors' networks/apps like in Lavabit trial?
(b) can the NSA or third parties bypass the security on iPhones publicly or in secret? Or did Apple truly make bulletproof security?
(c) did all this change just because the FBI said, in a press release, that they were an honest, powerless agency hampered by unbreakable security?
I didn't think anything changed. I predicted they'd crack that iPhone the second they were blocked in court. They did. They knew they could the whole time. They lied the whole time. They wanted a precedent to expand their power like they did in the past. That simple.
Edit: I may be thinking of a FISA order as opposed to an NSL. Doesn’t matter though, obviously the concern is that they would be served with whichever does allow that.
Yes, if you're looking for alternatives to Signal, you should totally use a solution that hasn't rolled out end-to-end encryption by default. /s
...and that only two clients have implemented so far, out of 50ish that they list on their website.
For all the hate it gets, it only has one mode of communication: end-to-end encrypted, addressed to your contact (people's addresses are public keys), and with forward secrecy.
Most "secure" IM systems fail this basic test. When proper end-to-end encryption is optional, guess what happens.
The whole forward-secrecy question seems to be unresolved still. They have session keys, but other than that there is no rekeying.
And then we have the whole issue of it relying on supernodes for much of its functionality (offline messages, mobile phone clients), which leaves it with a subset of the issues many have with Signal.
>Well. I'd rather not have anyone suggest tox.
I'm repeating myself, but for all the hate it gets, I'm unable to come up with a better suggestion than Tox. There's always some kind of flaw: Centralized, no forward secrecy, end to end encryption optional, no way to verify contacts and so on.
The fact is, we mostly know what kinds of attacks are possible on Signal. We know metadata is a potential problem. We know what kinds of tradeoffs we get with a centralised architecture. We know how that works and how to mitigate some things. Open Whisper Systems has been clear about what Signal provides and what it does not (even the wording around disappearing messages is well chosen so as not to confuse people).
With Tox there is a lot we don't know. Are the supernodes an attack vector? Why aren't the devs clear about using a less distributed architecture for mobile clients? Why do they claim proper forward secrecy when they don't provide it? There might be lots of strange properties of doing crypto in a distributed fashion.
I would not recommend it for secure communication until the amount of unknowns is smaller. It is really that simple.
So your citation for sticking to their old ways is pointing to some old example? That's not what I was looking for.
>Why aren't the devs clear about using a less distributed architecture for mobile clients?
Because it's not a priority to them; Or to me, for that matter. (I don't IM on the phone)
>I would not recommend it for secure communication until the amount of unknowns is smaller.
I don't see how the likes of Signal or the popular Telegram are any better in that regard. Also, adding more features (your phone suggestion) wouldn't help.
Tox has not had this amount of attention and it is written by people who seemed to sincerely believe that using nacl/libsodium made tox safe. If that is not a huge red flag, then I don't know what is.
Telegram is not encrypted by default, and when it is it uses a weird protocol, which people warned about from the beginning. The devs were cocky even after a probably unintentional backdoor was found that would have let the server MITM every encrypted communication.
The difference between these three for secure communication is huge.
So basically, argument from authority. No Good.
This is pretty bad, but so is having had your private key compromised in the first place. It should be fixed next time they do a flag day.
Other than that, and the rekeying issue (keys are only renewed when the client is closed; they should be renewed periodically, to make forward secrecy really effective), nothing else bad was found.
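For what it's worth, the kind of periodic rekeying being suggested is cheap. Here is a toy hash-ratchet sketch in Python (illustrative only, not Tox's actual key schedule): each step derives a fresh message key and replaces the chain key, so discarding old keys gives forward secrecy regardless of whether the client ever restarts.

```python
import hmac, hashlib, os

class HashRatchet:
    """Toy KDF chain (illustrative, not Tox's actual design): each step
    derives a one-time message key and advances the chain key, so old
    keys can be erased for forward secrecy."""

    def __init__(self, shared_secret: bytes):
        self.chain_key = shared_secret

    def step(self) -> bytes:
        # Derive a message key and the next chain key from the current one.
        message_key = hmac.new(self.chain_key, b"msg", hashlib.sha256).digest()
        self.chain_key = hmac.new(self.chain_key, b"chain", hashlib.sha256).digest()
        return message_key

secret = os.urandom(32)
alice, bob = HashRatchet(secret), HashRatchet(secret)

k1, k2 = alice.step(), alice.step()
assert bob.step() == k1 and bob.step() == k2   # both sides stay in sync
assert k1 != k2                                 # every message gets a fresh key
```

Rekeying on a timer rather than only at shutdown is then just a matter of calling the step on a schedule and wiping the superseded keys.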
Notice that, for a couple of years now, effort has gone into polishing (toktok) toxcore and documenting things, rather than anything else (e.g., adding fancy features). That's a good thing.
Tox might be understaffed and progressing slowly, but there's nothing fundamentally bad about it. They got a bunch of important things (really distributed, DHT, public keys as addresses, temporal keys for forward secrecy, always end to end encrypted) right. I'm not aware of any other project that got this much right, unfortunately.
If only Tox got more attention, it'd gain developers, donations, and the possibility of getting a proper audit done.
However, I strongly disagree with the author that Matrix should be recommended as a secure communication platform until E2E is stable (and, from what I understand about your project, you'll enable it by default as soon as you consider it stable enough).
Also, since writing this list yesterday, another client has got E2E running: Seaglass (a native Cocoa macOS client): https://neilalexander.eu/seaglass/
Yes, we had one Synapse server running on a resource-constrained machine that sometimes "fell behind" the rest of the network. I believe that is what had caused such issues. Still, the fact things easily break with server load or network issues means there is something faulty about the protocol. Resilience and reliability are no less important than security.
Meanwhile, the performance of the matrix.org server has definitely improved massively over the last few weeks. We hit a performance ceiling from May through mid July, but since July 19th or so we've finally got CPU headroom back again thanks to stuff like https://twitter.com/matrixdotorg/status/1019957885026144257 and https://twitter.com/matrixdotorg/status/1022095383978233856.
Also, Matrix enables end-to-end encryption by default on clients that support it.
It's one thing not to trust Signal. It's another to recommend alternatives that are far worse.
People have other lives, most often online discussions go way past their due date (I actually like the fact that HN doesn't give you a notification when somebody replies to your comment).
It was a good discussion though, the placebo effect is fascinating.
Let's compare it to F-Droid: He isn't saying that Signal should be distributed via F-Droid by default, just that there should be the option. So the article doesn't seem to say that the defaults are what matter.
If you get the time, please give it a proper whirl. With the recent open-sourcing of their server and proper E2EE by default (but abstracted away from the "regular user"), it's shaping up to be a really solid application. As far as I'm aware they use the same double-ratchet protocol as Signal, but you don't need your cellphone number to register (a big thing for some people - myself included).
Synapse, the reference server, can handle rooms, and force encryption on the rooms.
E2E is client-side. Riot, as the reference client, is the one that takes care of this, and, if I understand correctly, it is on by default.
I'm not so sure about this. I don't think Snowden and Schneier are praising it because it is the most secure application available that works for every threat model; I think they're doing it because it's the best attempt to up the security of the masses. In other words: there's a limit to its threat model. Signal makes it harder to do mass-scale surveillance, and allows e.g. whistle-blowers to contact journalists without standing out because they're using an encrypted messaging app.
Yes, it's important to highlight those trade-offs, and one can always do better, but as far as I can see Moxie has always justified the trade-off with arguments that were not based on being self-serving. You might not agree with his conclusions, but I think it's unfair to accuse him of being self-serving. (Unless you mean "thinking about the consequences for the success of Signal" by "self-serving". It's not really clear how it serves Moxie otherwise, and the author doesn't go into detail about that.)
In the end, I think it comes down to the author expecting different goals from Signal than the project itself has - as implied by his disdain for GIF search. Obviously Signal isn't only implementing features just to get more secure - it also wants to be widely adopted. It's just that the author apparently doesn't consider that as important.
"What are the assumptions that I'm making here?"
One assumption is that you're not currently on anyone's radar. Are you willing to bet the entire enterprise on this assumption? How certain are you? Are you 99.999% certain?
Another assumption is that the operating system you are running the app in is not compromised on either end of the communication. 99.99%?
Another assumption is that the screen isn't viewable by other devices. Another assumption is that the frequencies of your key taps aren't picked up by a mic and then turned into intelligible letters.
Another assumption is that the encryption algorithms you're utilizing haven't been subtly chosen to be intelligible to a single actor or that they'll stay secure once we have quantum computers.
Etc. Etc. Etc.
Signal is good because it raises the bar. Stock traders buying black information probably won't get your communications. They won't be scooped up in an email server leak. They won't be visible to your wife when she enters your phone's unlock code, because they auto-delete, and they don't get pushed to your iPad like FB Messenger messages do.
But if you want to go up against James Bond, and you're already on his radar, you need to give up the illusion that anything computer related is fully trustable. Just pre-arrange some code words or OTPs and meet in person in an area without electronics or go even more old school and use dead drops with hand written communication.
 I personally know 3 people that were caught cheating this way.
Ok, but in that case what does Signal offer that any random messenger with transport encryption doesn't? If your threat model doesn't include state actors then you can probably trust a) the HTTPS certificate infrastructure b) an international corporation like Facebook, so you can probably assume that no-one would tap your FB messenger messages in transit. "Not pushed to your iPad" sounds more like a bug than a feature - I want to be able to read my messages anywhere that I'm logged in as me (at least while I have my yubikey or what have you plugged into that device). Automatic deletion... eh, I would rather make a deliberate decision about when to delete things, personally.
2. As for the rest of it: Cool man, that sounds like you want a normal chat app that is more usable and less secure. I use Messenger too for things that don't matter.
There's no reliance on DNS. We know what the right way to do HTTPS is, and an app that doesn't have to maintain compatibility with ancient browsers can use a strictly secure profile (no old ciphers, no downgrades etc.). HTTPS is older and more complex than the Signal protocol, but it's also extremely widely deployed and gets a huge amount of attention from security researchers. I think actual attacks on the protocol are less likely with HTTPS than with Signal.
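To illustrate what a "strictly secure profile" can look like in practice, here is a sketch using Python's stdlib `ssl` module (the exact knobs depend on the app's TLS stack; a real deployment of this kind would typically also pin the expected certificate or public key):

```python
import ssl

# A client app that talks only to its own servers doesn't need
# browser-era compatibility; it can demand modern TLS outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse downgrades to older protocol versions
ctx.check_hostname = True                       # always verify the server's identity
ctx.verify_mode = ssl.CERT_REQUIRED             # never accept an unauthenticated peer

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```

With TLS 1.3 as the floor, the old export ciphers, RC4, CBC-padding tricks, and renegotiation downgrades are simply off the table, which is most of what "strict profile" means.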
> unless you're layering another level of encryption over HTTPS it isn't fully secure.
Nonsense. Two layers of valid encryption are no more secure than one, and two layers of flawed encryption will almost certainly still be flawed.
> 2. As for the rest of it: Cool man, that sounds like you want a normal chat app that is more usable and less secure. I use Messenger too for things that don't matter.
It's not that my chats don't matter. It's that I don't think autodeletion or one-device-only represent a meaningful security improvement.
In practice there is, for most situations. Are you going to get a static IP and go through the work of finding one of the rare cert authorities that will issue an HTTPS cert for a bare IP address?
> Nonsense. Two layers of valid encryption are no more secure than one, and two layers of flawed encryption will almost certainly still be flawed.
I hate arguing about this because I feel like there is a difference between how mathematicians think and how engineers think. I agree that one of the layers should be HTTPS if the context allows for it, because it has a lot of eyes on it, as you mention; but I fail to see how layering encryption is bad from a privacy standpoint.
Mathematically, this statement:
> Two layers of valid encryption are no more secure than one.
is only true if there are no mistakes and if breaking the first layer alone would already take more operations than the universe can perform.
But why should we, a priori, assume that there are no mistakes? We have hundreds of examples of thought-to-be-secure ciphers / one way hashes ending up in the trash heap. Look at things like Cloudbleed. In reality things break. In reality cert authorities get moled or hacked. If you've been using layered encryption you're safer. Also, HTTPS basically mandates that you use TLS, which for some contexts doesn't work because we'd prefer a one-way (i.e., connectionless) channel to communicate to stop inbound traffic at the physical layer.
> It's that I don't think autodeletion or one-device-only represent a meaningful security improvement.
It's helped plenty of people who have had their phone seized at the border or another device seized by the police. Sometimes you don't know that information is sensitive until later, and sometimes choosing to delete it at that point is illegal or impossible.
You don't need DNS to check whether the server purporting to be messenger.com has a valid certificate for messenger.com. An attacker who controls the network can of course cut you off entirely, but an attacker who controls DNS can't intercept your messages, because that doesn't get them any closer to having a certificate.
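A grossly simplified illustration of why (real clients follow the full RFC 6125 rules, not this toy matcher): the client checks the name *it intended to reach* against the certificate itself, so DNS never enters into the trust decision.

```python
def cert_matches(cert_sans: list[str], intended_host: str) -> bool:
    """Toy hostname check: the name the client intended to reach must
    appear in the certificate's subjectAltName list. Simplified; real
    clients implement RFC 6125 matching."""
    for san in cert_sans:
        if san == intended_host:
            return True
        # one-label wildcard, e.g. *.messenger.com matches www.messenger.com
        if san.startswith("*.") and intended_host.split(".", 1)[1:] == [san[2:]]:
            return True
    return False

# Even if an attacker's DNS points messenger.com at their own server,
# their certificate won't carry that name, so validation fails:
assert cert_matches(["*.messenger.com", "messenger.com"], "messenger.com")
assert not cert_matches(["attacker.example"], "messenger.com")
```

The attacker would need a CA to mis-issue a certificate for the victim's name, which is a different (and much harder) attack than poisoning DNS.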
> I agree that one of the layers should be HTTPS if the context allows for it, because it has a lot of eyes on it, as you mention; but I fail to see how layering encryption is bad from a privacy standpoint.
Do you feel safer behind two locked doors than one? I guess it can't hurt, but the effort would surely be better spent on virtually any other aspect of the system. E.g. if you double the key length in a single layer of encryption you've made it 2^128 (or whatever your key length was) times harder to crack, whereas if you stack two layers then you've only made it twice as hard.
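Rough brute-force arithmetic behind that comparison (idealized; in fact double encryption is also bounded by meet-in-the-middle at roughly 2^(k+1) operations given enough memory, which is why double-DES was never considered worthwhile):

```python
# Idealized brute-force costs, ignoring all structural attacks:
single_128   = 2**128        # one layer with a 128-bit key
single_256   = 2**256        # the same layer with the key length doubled
stacked_best = 2 * 2**128    # two independent 128-bit layers, if the
                             # attacker can break each layer separately

# Doubling the key length multiplies the work by 2**128;
# stacking a second layer, at best, only doubles it:
assert single_256 // single_128 == 2**128
assert stacked_best // single_128 == 2
```

So as a pure work-factor investment, a longer key in one well-analyzed layer buys astronomically more than a second layer does.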
Beyond that my argument would be: many security breaches happen because someone got confused about where the security boundary was. If you use one layer of encryption then everyone knows that the encrypted data is untrusted and the decrypted data is trusted. If you have two layers it's very easy to get lazy and introduce a small hole into one layer assuming the other will cover it, then you do the same for the other layer, and then an attacker figures out how to connect those two holes in a way you hadn't thought of and suddenly you're doomed.
This is a chat app so, by definition, security requires trusting at least one other person. Also, I think experience shows that secrets can often be least trusted to those who have some interest in/use for them, with the secret owner often being the least trustworthy of all. So I'd say that if you trust yourself you're already probably trusting one of the weakest links in whatever chain of trust you would have.
But seriously, pretty much every secure system requires trust, and the more it relies on technology, the more trust is required. You need to trust that there are no backdoors or holes in a long chain of hardware and software that no one person can possibly verify, and if they hypothetically could, they could only do so with the help of verification software that they could not themselves verify, at least not without dedicating a lifetime to that goal. Trustless security does not exist, and attempting to achieve it by adding more technological layers and more complexity reduces rather than enhances security. We should make it easy to choose whom to trust, not work on a futile attempt to take trust out of the system.
How so? If you can minimize trust to the point where you only have to trust someone to properly design a federated or peer-to-peer open protocol, and trust that others will participate and oversee the process, that's one thing: there is no control or power to go around. Open and secure-enough implementations from other parties can emerge, with more parties verifying them and the possibility of switching in case someone does something sneaky. But if you also have to trust the same organization with the implementation, the infrastructure, and the distribution, there is not much security to talk about. There is no way to even verify that the thing they open-sourced is the same thing they compile and distribute. And so much centralized power makes the organization a lucrative target for state actors, with no realistic possibility of defending against them.
The more centralized trust you have, the less secure the system can be. It's like an upper bound on security.
Your argument about an appealing target could also be used to show the exact opposite: decentralized systems are much harder to upgrade, and so they become attractive targets which you need to break much less frequently (especially considering that the internet backbone itself is pretty centralized), and so it makes even very expensive cracking more affordable. The argument about open-source applies pretty much equally to the centralized and decentralized case.
I disagree with that. The more centralized a system is, the fewer trust boundaries it has, and the more vulnerable and insecure it is, because penetrating one trust boundary gives access to everything. Security always requires additional complexity, and decentralization forces you to take that complexity seriously for once, where centralized, insecure designs neglect it rather than simplify it. Forcing you to deal with trust explicitly and systematically leads to much more secure designs.
Other than that, decentralized systems are exactly like centralized ones, just with more players, more choices, and incentives not to break anyone's trust. The only problem is all that embrace-and-extend crap large corporations always attempt in order to recentralize everything.
The same could be true for a decentralized system if the flaw is in the centralized backbone or the shared protocols/algorithms.
I can't find the source for this, could you tell where did you take this from? (not saying it's not true, just curious to read the full text)
My new secure chat app, on the other hand, encrypts your message in memory, then zeros out the bytes.