I wish the definitions were spelled out. It says Signal isn't "anonymous", which I assume means "uses a phone number to find peers". And it has the usual feature matrix problem: sure XMPP "does E2E". But what does that mean? It supports S/MIME. Do you want S/MIME? (You don't.) It supports OTR, TS and SCIMP too: but you need to be an expert in messaging schemes to understand how those are different. None of them implement double ratchets. None of them implement even close to the privacy features Signal has implemented. But on this diagram it is clearly better because there is more green and less red.
Another example: "open server" and "on-premise" says nothing about whether or not you really want to run one of those instances. It just says that hypothetically one could.
In terms of errors: the linked "E2E audit" for Telegram did not audit E2E at all, and in fact only cites sources saying that it's probably fucked. Wire has a real audit that isn't listed. WhatsApp uses the Signal protocol, just with fewer of the surrounding privacy features.
Use WhatsApp to talk to normal people. Use Signal for nerds, and... probably Matrix for group collab? Or maybe stop caring about secure messages for group collab so much :-)
> Another example: "open server" and "on-premise" says nothing about whether or not you really want to run one of those instances. It just says that hypothetically one could.
I know a number of people that run Matrix servers for personal use and for companies. The entire French government runs on riot.im/matrix.org.
When security and really privacy matters, you don't want a third party being able to push updates to your clients/servers at any time without warning.
> None of them implement even close to the privacy features Signal has implemented. But on this diagram it is clearly better because there is more green and less red.
What features specifically? Happy to add more columns if Signal really has anything unique to offer here.
The things Signal gets red marks on are pretty fair though imo, and things others do better.
> Use WhatsApp to talk to normal people.
I think you will find many options above WhatsApp on the list in terms of security and privacy that have clients that are every bit as simple to use.
Other than their (very) effective marketing advantage, -why- would you encourage people towards these respective walled gardens instead of more open alternatives listed?
In a backchannel, as a consequence of this HN article, someone (names withheld to protect the guilty, they can identify themselves if they'd like) started looking at Dust and figured out the key store password is a hardcoded, short, ASCII string and the messages are encrypted with unauthenticated AES-CBC. They did not need the source code to do that.
JP Aumasson phrased it more eloquently than I could: https://research.kudelskisecurity.com/2018/10/02/open-source...
I will never use Signal given their current direction and don't recommend anyone use it, but they get credit where it is due: they actually offer source code for their walled garden, and their basic crypto can -generally- be verified, except in extreme cases like my other comment.
Allo, WhatsApp and other closed systems give you no reason to trust them other than faith in the people advocating for them and in their engineers getting it 100% right. Their claims can't be verified, so they can only be marked as just that: claims.
Sure, plenty of obvious flaws can be spotted without source code access, but many are hard to find even if you -do- have source code to the point they would probably have never been found were they not open to allow the right set of eyes to eventually read the right section of code (Heartbleed etc).
I found random number generation flaws in Terraform I would have -never- found without source code access. It is for this reason I trust Terraform and Hashicorp quite a bit. They normally get it right, but they are not afraid to have other people audit and point out flaws, because they don't have this arrogant idea that their engineers will get everything right 100% of the time.
Security is -hard- and anyone that thinks they can get it right with a SPOF closed source approach is more interested in marketshare than security.
Trust, but verify. Likewise if you are not even allowed to verify, you should instantly distrust.
I have repeatedly refuted that point and you have repeatedly ignored it.
> Sure, plenty of obvious flaws can be spotted without source code access, but many are hard to find even if you -do- have source code to the point they would probably have never been found were they not open to allow the right set of eyes to eventually read the right section of code (Heartbleed etc).
You are implying that no source code prevents you from finding obvious flaws, and I have given you two counterexamples in a messaging service that came up in this thread that I had never heard of before. That was casual peeking, not even a serious audit.
> I found random number generation flaws in Terraform I would have -never- found without source code access.
Do you professionally audit software? The fact that you can't find a bug without source does not mean that no-one can. Project Zero finds bugs in Windows and Edge every other day that are a lot more complicated than figuring out what RNG Terraform uses and how Terraform uses it.
> Security is -hard- and anyone that thinks they can get it right with a SPOF closed source approach is more interested in marketshare than security.
Since you've impugned my motives in two separate places in that post (referring to me as a "shill"), I am no longer interested in discussing this with you. I'm sure people can make up their minds from the thread.
> Do you professionally audit software?
I do, as a matter of fact. I'll be honest: it is normally much easier in closed products, as I know what to look for. It is generally much harder to find flaws in popular open source systems, as someone else has usually long beaten me to the low hanging fruit in critical codepaths.
I have been burned and seen others burned so many times by closed software and teams that ship security regressions that I now personally use only open tools I can audit at some level, or can audit the auditing and reproducible build process. I have seen "security" companies cut corners too many times in order to feature farm to trust anything that won't let me see the source code.
I have even professionally audited multiple systems on the spreadsheet myself, some of which I am aware of vulnerabilities for currently under embargo.
So far you have cited examples which I not only didn't ignore but responded to in the form of counterexamples.
This is however turning into an open source vs closed source debate with subjective evidence, which in the end always boils down to whether or not you have blind faith in a small group of people.
In the spreadsheet I tried to only include things that could be fairly objective, but when we get into concepts of trust of binaries and their authors, it gets muddy to be sure, and we will end up choosing paths based on our own threat profiles and experience.
This is why I tried to include -everything- in the spreadsheet I am aware of, so people with threat profiles different than mine can make informed choices.
I feel qualified only to certify that a tool is obviously insecure, but I don't think -anyone- will ever be qualified to solo certify something as totally secure. (but sadly that is how clean audits are generally read). I have in the past hired 3-4 audit firms and each would find flaws the others did not, and that mine did not, but fail to spot flaws that mine did. No one has the full picture.
The only relevant one not under embargo atm was my recent casual 2-3 hour audit of Lifesize, in which I found right away a number of alarming issues the company would not address. Some of these may have been found to be non-issues under further inspection, but there was more than enough to assert security was not a major focus of the platform.
This cursory look was all it took for me to feel confident in ending any consideration of that product to protect the interests of the entity considering it.
I reported all of these to the company, got no response for over 90 days, and made my findings public after due warning.
They are available here: https://gist.github.com/lrvick/6e600d8484cfb415d1e2b06e8b345...
(reminder: this was only 2-3 hours of work and by all means grain of salt)
1. I think LVH is right to point out that "audit" doesn't have much meaning in the document you've produced; it gloms together assessments of wildly differing depth and quality and condenses them all to a single pass/fail.
2. Secure messaging security is hard, much harder than secure transports (which you alluded to in the other subthread when you mentioned TLS). There are in fact not that many people in the world who can do a proper cryptographic assessment of a messenger at this point (not because it's prohibitively difficult; it should be well within the reach of everyone with a graduate degree in cryptography who enjoys coding --- rather, just because it's a specialized skill set that not many people have an opportunity to get good at). Like I said: I wouldn't say I'm qualified to perform such an assessment (LVH, different story). And all that "just" gets you the cryptography! If your messenger gets popular, the framework it's built on becomes one of the most important targets on the Internet!
So I get itchy when people say things that amount to "I've audited things, I have an authoritative opinion about this stuff". Probably not? Maybe, but, like, if we were going to place a bet...?
You can turn that logic around on me. But I'm not the one making an extraordinary claim. You are. So: what really bugs me is the idea that this is a problem domain you can reduce to a Wikipedia-style comparison chart. That kind of chart will get people hurt.
You did ask me if I audited things before and I answered honestly. I frequently report vulnerabilities in a range of open and closed systems and read a lot about those others find. I also don't consider myself an expert and generally distrust people that claim they are in this space because it is, as you say, really hard.
I initially started a spreadsheet to document the very high level objective traits of messengers I find useful and others found it interesting/useful and helped extend it.
It is far too much cognitive overhead to do deep dive evaluations of 75+ messengers, but what this list allows one to do is quickly eliminate services from consideration that lack features vital to their use cases, platform targets, or threat profile: for instance, the ability to self-host, or end to end encryption.
From there once you have your short list, you can then make more effective use of time reading code, doing audits, reviewing audits of others.
If a list like this simply makes someone aware of new up and coming projects in the space, or old ones that have been quietly evolving in capability... then I feel justified in having shared it instead of having kept it as a private document of my personal notes.
I for one learned about a number of new projects when putting this sheet together, and interesting new approaches on solving these problems.
The thing that is pretty subjective about the list is how I sorted items based on my own research and threat profile.
I won't sit here and say Matrix or Briar or any of the other items near the top are perfect or free of security flaws. I personally place my bets on Matrix at this point based on my deep dive evaluation of it and similarly featured alternatives, but that is subject to change! Others are free to share your view that a well funded but closed source messenger like WhatsApp is the general best bet.
To sober up on this topic a bit: Matrix has had glaring security flaws in the past, but also so have the options you personally recommend like Signal and Whatsapp.
At the end of the day all one can do is collect data, look over all the options, deep dive into the relative merits and claims of each to the extent one can, look at the research provided by others, and make a judgement call.
This list should be considered a starting point for research and a way to discover lesser known projects as it was for me.
It should -not- be seen as an end all "use this it is most secure" recommendation by any means. Anyone that looks at any one high level list of binary data points and makes an exclusive choice on that alone is doomed to shoot themselves in the foot with or without my help.
Hopefully that addresses your concerns about my intent here.
You published a spreadsheet listing messaging apps 'ordered by security' in which Signal, Whatsapp and iMessage are shown as way less secure than IRC. It also still says, and you've repeatedly argued here, that things like whether, say, WhatsApp uses E2E encryption is essentially unknowable.
Notably, if there was a blackbox audit of WhatsApp or iMessage 6 months ago, you have no path to easily check if a blatant backdoor was introduced in the build you installed last night. You also can't know if there were obvious flaws in the code the whole time that would be very hard to spot in blackbox testing if you didn't know what to look for. Maybe the app build from last night leaks its key intentionally via very subtle steganography in metadata?
Compare to a binary I installed via F-Droid that I can confirm was built reproducibly from a given git head I can go see the code review for.
To use a simple analogy: I can see the exact ingredients of what went into -my- meal instead of what went into the meal of the food inspector.
This allows much deeper release accountability and -is- a major security feature iMessage and Whatsapp lack worth flagging.
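To make that concrete: once you have a reproducible build, the release accountability check reduces to comparing digests of the published artifact against your own build from the same tagged source. A minimal sketch (dummy byte slices stand in for the published APK and a local build; the helper name is mine, not F-Droid's):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// sameArtifact reports whether two build artifacts are byte-identical,
// by comparing their SHA-256 digests. With a reproducible build, the
// published binary and your own build from the same git tag must match.
func sameArtifact(published, local []byte) bool {
	return sha256.Sum256(published) == sha256.Sum256(local)
}

func main() {
	// In a real check these would be the APK fetched from the store
	// and the artifact you compiled yourself.
	published := []byte("identical build output")
	local := []byte("identical build output")

	if sameArtifact(published, local) {
		fmt.Println("digests match: build is reproducible")
	} else {
		fmt.Println("digest mismatch: do not trust this binary")
	}
}
```

Any single-bit difference in the binary flips the result, which is exactly why a reproducible pipeline lets you catch a swapped release.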
Verifying security with source code is hard enough. Without source code it is substantially harder and I for one have no interest in using or recommending security tools that fail to be 100% transparent about how they work.
Without source code all we have are claims that can never be thoroughly verified.
I made specific arguments and use cases to justify my position and you have simply told me I am wrong without directly addressing them.
Once again, I find the term "expert" overrated. I for one admit I am not an expert on security, a field that is already hard enough on auditors like myself without withheld source code.
I have also worked with a half dozen or so security auditing firms all of which stated source code access would make their job much easier.
It didn't take hours of blackbox testing to find CVE-2018-9057. It took me 20 minutes of reading code on Github, half asleep at 4am, because I was curious about an unrelated bug.
I remain convinced blackbox testing would very probably never have found that vuln, and even if it did, not in as short a time. I trust Terraform over closed alternatives because it was patched within a couple hours of me mentioning it on IRC by a peer who submitted the bug report and patch in one shot. I could verify the source code fix easily and compile the new binary myself to take advantage of the patch before Hashicorp even merged it.
I can also easily verify there are no regressions in future releases.
Tell me how you go about solving for this or other subtle cases like stego exfiltration more easily -without- source code. Also, how could you or your team have patched the issue yourself without source code?
If I really solved this the hard way then I will by all means move my security engineering career to focus more on blackbox testing as you seem to be advocating for.
For your particular example: go download a copy of radare, pull up any Go build artifact, and see for yourself how hard it is to understand what's going on.
I don't know about your security engineering career, but if you intend to get serious about vulnerability research, yes, you should learn more about how researchers test shrink-wrap software. I spent years at a single stodgy midwest financial client doing IDA assessments to find vulnerabilities in everything from Windows management tools to storage appliances. It wasn't my IDA license; I was augmenting their in-house security team, which had 4 other licenses. This was in 2005.
If you can understand a PRNG algorithm and how it was seeded without source code, using nothing but radare, faster than I could read the code... then you really do have some superhuman skill, and most of my arguments fall flat.
Subtle cryptography flaws like this could also be introduced intentionally by a bad actor, or under pressure from a state actor. They are very hard to see without source code, in my experience.
You just kind of made my point for me: understanding what is going on from radare output -is- often much harder than just looking at the source code.
Don't get me wrong, I have a deep respect for people that are very good at finding bugs this way. When you -don't- have source code, finding bugs via methods like this is the only thing you have on the table, and it is impressive.
What I am taking issue with is you trying to in effect claim that some people like yourself are so good at blackbox testing that you could find all potential bugs faster with those tactics than you could reading the relevant source code.
Consider though that not all researchers work this way. Many bugs have been found by myself and other researchers I know by simply reading source code, so your argument that a vendor releasing source code gives it no security advantage is just not true.
While I am no fan of Signal, the fact they make their source code public makes it much easier to audit and trust its e2e cryptography implementation than say Whatsapp. Even the two tools you favor are wildly unequal in transparency and auditability.
Perhaps the majority of my background working with FOSS software has made me undervalue blackbox testing and you have made a good argument for it. It would make me more well rounded and I intend to pursue it.
I think if there is anything you can take from my side of this discussion, it is the value of providing source code to the right eyeballs that know how to quickly spot certain classes of issues.
Source code in the hands of the right person is a faster way to find some classes of bugs than any binary reverse engineering environment.
(The Terraform function you found this problem in is literally called "generatePassword", in case you planned on writing 6 paragraphs on how hard it is to find the right symbol to start analysis at.)
This is such a silly, marginal bug, it's bizarre to see you kibbitzing on the "right" way to fix it (the bug is that they're using math/rand to generate a password instead of crypto/rand, not that they're seeding math/rand from time). But whatever! Either way: it's clear as day from literally just the call graph of the program what is happening.
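To spell out how shallow the bug class is: the vulnerable and fixed versions differ mainly in which package supplies the bytes. This is a sketch of the general pattern, not Terraform's actual code (function names are mine):

```go
package main

import (
	"crypto/rand"
	"fmt"
	insecure "math/rand"
	"time"
)

const alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// insecurePassword shows the flaw under discussion: math/rand seeded
// from the clock is fully predictable to anyone who can guess the
// rough time the password was generated.
func insecurePassword(n int) string {
	r := insecure.New(insecure.NewSource(time.Now().UnixNano()))
	b := make([]byte, n)
	for i := range b {
		b[i] = alphabet[r.Intn(len(alphabet))]
	}
	return string(b)
}

// securePassword draws bytes from the platform CSPRNG via crypto/rand.
// (The modulo introduces slight bias; a production version would use
// rejection sampling.)
func securePassword(n int) (string, error) {
	raw := make([]byte, n)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	b := make([]byte, n)
	for i, v := range raw {
		b[i] = alphabet[int(v)%len(alphabet)]
	}
	return string(b), nil
}

func main() {
	fmt.Println("insecure:", insecurePassword(16))
	if p, err := securePassword(16); err == nil {
		fmt.Println("secure:  ", p)
	}
}
```

Which of the two a binary uses shows up immediately in its call graph: one imports the system CSPRNG, the other calls a seeded userspace PRNG.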
Your example of a bug that is hard to spot from disassembly is a terrible, silly example, that I've trivially refuted in just a couple of Unix shell commands.
I don't think you understand the argument you're trying to have. I get it: you have a hard time looking for bugs in software. That's fine: auditing messengers is supposed to be hard. You don't have to be up to the task; others are.
The fact that you can replace OTR with OTP in this sort of statement and it becomes even truer should tell you what a lousy argument for the practical security of anything it is.
"It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."
This is mostly for discovery of options to consider diving into.
Sure, maybe Signal has built some useful -legal- protections, for now, for US citizens. But what happens when a state actor threatens to kill the family of a Signal employee if they don't ship a very subtle compromise in how their binaries source random numbers, or if they don't sell the metadata of who is talking to whom?
Signal is not anonymous so that metadata alone could have real value. It is at the end of the day using phone numbers as identifiers. Sure they do SGX remote attestation but that has been demonstrated broken multiple times and won't stand up to a motivated physical attacker. Even if it -is- solid now, I would not underestimate how far a state actor will go. (As demonstrated by NSA wiretaps on google datacenters). Can they compel the right Intel employee to CA certify a manipulated enclave? Can they just get handed the key?
Also why would people outside the US trust the legal protections afforded to a US company to protect US citizens?
Their refusal to federate their network just creates a Lavabit-sized target... and I have yet to hear any technical reasons for doing so, particularly when, again, other projects have demonstrated end to end encryption and decentralization are not as mutually exclusive as Moxie claims.
The idea that only Signal can do this right, and only if they keep it centralized on their servers, with them being the only people that can sign the binaries... is pure hubris imo.
There are a lot of alternatives we all should be carefully considering for the next -standard- for ubiquitous secure messaging.
If your threat model includes Mossad, you're going to get Mossaded upon. You say "thinking only Signal can do this is hubris", but in the same breath suggest we're just a carefully considered standard away from being safe against Mossad threatening someone's kids. That's hubris.
I did try to respond to the subpoena comment, in that it is only of limited value. A blackhat will probably have an easier time getting to servers than a lawyer will with Signal's setup, and I do credit them with providing substantially better assurances than, say, WhatsApp... but still not good enough for my particular threat profile.
Personally I don't like using permanent non-anonymous identifiers like phone numbers, so I would not use the feature as they have implemented it, but that doesn't mean it does not have some value for some use cases worth exploring.
That is at least something that can be looked at objectively in the scope of the spreadsheet.
No messenger system protects you against that. You seem to be going through the full sequence of well-known poor ways to evaluate the security of something like an instant messenger, starting with the feature matrix, going through 'it can't be secure if it's not open source/self-hosted/federated' and reaching the Mossad. Which is a worthwhile and educational exercise but it's still not a good way to evaluate the security of something like an instant messenger.
If the system doesn't have a central server then it becomes more difficult. You cannot subpoena a Tox network.
Any messenger that requires and stores a phone number (read your real-world identity and physical location) is neither anonymous nor private.
Also, a centralized messenger with a single server means that all traffic between all the users around the world goes through one data-center in one country that can do what? Observe the traffic and detect who is using this messenger, at least their IP address and country.
Of course, for most users this doesn't change anything because they don't do anything illegal. But if they don't do anything illegal then they don't really need protection and can use VK or Telegram.
I mean, yes, a single server would have that property. But what system are we discussing that has a single server in a single data center?
Furthermore: if you actually care about metadata hiding it's a lot more complicated than "we have more than one person operating the servers".
> Of course, for most users this doesn't change anything because they don't do anything illegal. But if they don't do anything illegal then they don't really need protection and can use VK or Telegram.
Privacy is not just for people who do illegal things.
When five people call the known mob boss you follow all of them. Maybe one is just a friend from high school. Another is the mob accountant, another an enforcer, you're getting leads. But what if it's five thousand people - now you can't follow them all, it's overwhelming.
Knowing five people in your country use Signal puts them all on the watchlist. If it's five million that's pointless. Without "Don't stand out" the encryption used just makes you a target.
Barry, who uses Barry's very own private self-hosted server for a popular federated system, Stands Out. Message from Japan to Barry's system? That's for Barry. How do we know? Well Barry's the only one on that server, easy. No cryptography can fix this.
Don't Stand Out
The threat that was brought up was an actor with state-level resources, coercive capability and lack of scruples - the specific example being the threat of murdering someone's family, not subpoenas.
Journalists covering sensitive topics in sensitive areas -must- care about these questions.
If you are using something anonymous, fully end to end encrypted with open source reproducible verified builds on decentralized servers, you can greatly limit the risk of having a central third party that can be compelled to act against your interests.
Maybe in the US we don't think we need those sorts of protections, but we should consider the worst cases when designing security systems, and ensure no one single compromised person has the ability to backdoor thousands or millions of people.
Not everyone cares about this sort of thing though, and there are 70+ other options listed with various tradeoffs.
In fact: a huge portion of commercial vulnerability research and assessment work is done on binaries of closed-source products, and it has never been easier to do that kind of work (we're coming up on a decade into the era of IR lifters). Meanwhile, the types of vulnerabilities we're discussing here --- cryptographic exfiltration --- are difficult to identify in source, and source inspection has a terrible track record of finding them.
There's no expertise evident in the strident arguments you've made all over this thread (network security work is not the same as shrink-wrap software security work, which is in turn not the same as cryptographic security work --- these are all well-defined specialties) and it concerns me that people might be taking these arguments seriously. No. This isn't how things actually work; it's just how advocates on message boards want them to.
You can also look into a binary application and study jumps, where the private key gets loaded into memory etc etc.
I am with you on this, there is tons of low hanging fruit you can do on a binary application, and I do a lot of this sort of thing myself.
Making sure an application at a high level seems to do what it says on the tin is a great phase one audit of something as a baseline, and should be done even on open source projects.
Trouble with closed source tools is you either have to do this over and over again on every new binary released, -or- you can look at the diffs from one release to next and see if any of those critical codepaths have changed.
You can also -much- more easily evaluate bad entropy in key generation if you have the source code. People think they are clever all the time, using silly approaches along the lines of random.seed(time.now()+"somestring"), and it is much less time consuming to spot those types of flaws if you have the code.
This is why I argue you need both whitebox and blackbox testing when it comes to mission critical security tools, and a clear public auditable log of changes made to a codebase so review can continually happen, instead of only at specific checkpoints.
Again, some people may not care about that level of rigor and just want to share memes with their friends, as opposed to journalists communicating in the field. I tried to be pretty comprehensive with the list, and welcome recommendations for new objective criteria to include.
It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use.
People who do a lot of binary reversing vuln research were doing a version of this long before we had modern tooling; you don't look at each load and store, but rather the control flow graph. People were auditing C programs, by manual inspection, in IDA Pro before there even was a market for secure messaging applications.
And that's just manual static analysis! Professional software assessors do a lot more than that; for instance, Trail of Bits has been talking about and even open-sourcing symbolic and concolic evaluation tools for years, and, obviously, people have been iterating and refining fuzzers and fault injectors for at least a decade before that.
This isn't "low hanging fruit" analysis where we find out, like, the messenger application is logging secret keys to a globally accessible log file, or storing task-switcher screenshots with message content. This is verification of program logic --- and, I'd argue even more importantly, of cryptographic protocol security.
"Bad entropy in key generation" is a funny example. Straightforward binary analysis will tell you what the "entropy source" is. If it's not the platform secure random number generator, you already have a vulnerability.
The problem with your chart is that there is no real rigor to it. It's just a list of bullets. It's obviously catnip to nerds. We love things like this: break a bunch of things into a grid, with columns for all sorts of make-me-look-smart check-marks! But, for instance, there's no way to tell in this chart whether a messenger "audit" was done by JP Aumasson for Kudelski, or some random Python programmer who tcpdumps the application and looks for local-XSS.
You can't even neatly break these audits down by "did they do crypto or not". The Electron messengers got bit a few months ago because they'd been audited by non-Electron specialists --- that's right, your app needed to be audited by someone who actually understood Electron internals, or it almost might as well not have been audited.
Given that: what's the point of this spreadsheet? It makes superficial details look important and super-important details look like they don't exist. You didn't even capture forward secrecy.
People who pay a lot of attention to cryptographic software security took a hard bite on this problem 5 years ago, with the EFF Secure Messaging Scorecard. EFF almost did a second version of it (I saw a draft). They gave up, because their experts convinced them it was a bad idea. I see LVH here trying to tell you the same thing, in detail, and I see you shrugging him off and at one point even accusing him of commenting in bad faith. Why should people trust your analysis?
I also fully agree you need to get people that specialize in the gotchas of a given framework to be able to spot the vuln. In fact that kind of bolsters my point, that you need the -right- eyeballs on something to spot the vuln, and this is much easier if something is open source vs only shown to a few select people without the right background knowledge to see a flaw.
The point I was trying to make is that you need both continual whitebox and checkpointed blackbox testing and if a tool is not open source then half of that story is gone unless you are Google sized and really can pay dedicated red teams to continually do both for every release... and even they miss major things sometimes in chromium/android that third parties have to point out!
Being open source client and server does -not- imply something has a sane implementation (and to your point things that rank well on paper like Tox have notable crypto design flaws).
This is more so you can quickly identify the few tools that meet the highest level criteria you care about for your threat profile; then you can drill down into those options with less objective research into the team, funding, published research, personal code auditing, etc., to make a more informed choice.
For me as an AOSP user, something without F-Droid reproducible build support is a non-starter. Anything closed source, where I can't personally audit the critical codepaths and know that a lot of the security researchers I communicate with can do the same... is also a non-starter. Lastly I want strong knowledge that I can control the metadata end to end when it counts, and being able to self-host or being decentralized entirely is also an important consideration.
Many people have reviewed this list and told me they learned about a lot of options they didn't even know existed, or didn't know were closed or open source. That is the point. Discoverability.
It would be unreasonable and even irresponsible for someone to fully choose a tool based -only- on this list imo. If that was not clear elsewhere then hopefully it is clear now.
It's clear to me from all your comments that you are somebody who wants to sysadmin their phone. That's fine; phone sysadmin is a legitimate lifestyle choice. But it has never been clear to me what any of the phone sysadmins are talking about when they demand that the rest of us, including billions of users who will never interact with a shell prompt, should care about their proclivities. The argument always seems to devolve to "well, if we can't sysadmin this application onto our phones, you can't be sure they're secure". But that's, respectfully, total horseshit, and only a few of the many reasons that it's horseshit appear among the many informed criticisms of your spreadsheet on this thread.
You might want to put your warning about how unreasonable and irresponsible this tool is on the actual spreadsheet, instead of just advocating that it be enshrined in Wikipedia.
This is precisely pvg's point. The problem with your methodology is a systemic one that emerges in every crowdsourced threat modeling exercise. You've enumerated every possible security attribute and security feature of every software in a specific category, then tossed them all into a matrix of boolean values. But that does not result in a threat model users can competently assess, for several reasons:
1. You're treating all features as equal - if not in intention, then at least in presentation. Even if you don't intend it, the sea of green at the top is a loud proclamation of safety; likewise the sea of red at the bottom is a siren of insecurity.
2. You're not allowing any nuance in assessing features or attributes. Boolean flags cannot capture all the nuance inherent in cryptographic security. Which specific party was responsible for an assessment? What are their credentials? What did they find?
3. You're including features which most users don't and shouldn't care about just because some minority might. Moreover you're not being opinionated enough, which is something that comes with expertise - for many of the "features" you listed, the minority that cares probably shouldn't if they only care because of a vague notion of security.
4. You're leaving out important features which should absolutely be considered for security. Where is forward secrecy? Where is authenticated encryption? Where is consideration of specific algorithms or primitives? Where is nonce misuse resistance?
5. Most importantly: you do not have any explicitly called out methodology that allows someone to audit what you've done. If you begin by pre-supposing that a given feature is worthy of inclusion because it's an important security metric, your conclusion is just going to end up magnifying that bias. Therefore it's paramount that you call out methodologies explicitly and early.
We see this time and time again in security. People try to first principles the security of an entire category of software by being exhaustive about every type of threat and security feature they can think of. But they invariably leave out important threats/features, underestimate the importance of some and overestimate the importance of others. Exercises which attempt to give users the world almost always end up "empowering" them to boil the ocean. People looking at this spreadsheet are approximately all unqualified to make an informed decision based on a critical assessment of all those features, which means they're likely to just go with the most green option (or worse, proselytize the most green option as the most secure one to their friends and coworkers).
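A toy sketch of the "most green wins" failure mode described above. Everything here (the tool names, the feature set, and the weights) is invented for illustration; the only point is that a naive count of boolean greens and an expert-weighted assessment can disagree completely.

```python
# Hypothetical feature weights an expert might assign; the spreadsheet's
# implicit model weights everything equally at 1.
features = {
    "stickers": 1, "self_hosting": 2, "open_source": 3,
    "e2e_by_default": 10, "forward_secrecy": 10, "audited": 8,
}

# Two made-up tools: one collects cheap green cells, one nails the
# few properties that actually matter.
tools = {
    "GreenWall": {"stickers", "self_hosting", "open_source"},
    "Focused":   {"e2e_by_default", "forward_secrecy", "audited"},
}

def green_count(checked):
    return len(checked)  # what "more green than red" effectively measures

def weighted_score(checked):
    return sum(features[f] for f in checked)

# The naive green count can't tell these apart (3 vs 3), while any
# expert weighting puts Focused far ahead (28 vs 6).
assert green_count(tools["GreenWall"]) == green_count(tools["Focused"])
assert weighted_score(tools["Focused"]) > weighted_score(tools["GreenWall"])
```

The weights themselves are debatable, which is exactly the point: a boolean matrix hides the weighting question instead of answering it.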
Oh, hello there. The short answer is, of course we don't.
It's a very obvious attack surface that Signal and Whatsapp could avoid if they wanted to. For Whatsapp, being tied to Facebook, I kind of get why they don't (it's mainly that I don't expect them to be better).
But for Signal there aren't any good reasons. I've read various threads (on github and HN?) with Moxie being asked questions about this and I've not heard reasons that satisfied me. On occasion he was even evasive. Now, when the reasons given aren't good enough to explain why to take such a major security affecting decision when there is an obvious better alternative, and Signal seems to be very meticulous about doing the right thing in almost every other area of the protocol and systems around it, then there MUST be another motivation behind the decision that is not stated openly. Maybe it's just something benign left unsaid, who knows?
But even then, the only reasons I can imagine for choosing to become this huge a target are reasons that are good only for Signal/Whisper Systems but pose pointless risk to its users.
These are valid challenges, but moving to proprietary and centralized solutions instead is throwing the baby out with the bathwater. Was your WhatsApp conversation encrypted? You honestly can't know, and even if it is right now, Facebook could disable WhatsApp's e2e encryption at any time without you even noticing.
FWIW, OMEMO has been the (only) de facto encryption mechanism for modern XMPP clients in the last couple of years, and most clients that support it clearly distinguish encrypted and non-encrypted messages.
Another option is trusting audits or the developers. Last but not least you can inspect the source code of open source apps. So I don't know how deep you want to go with this, but for XMPP there are plenty of options to make sure the client does what it advertises.
Btw. I do not think that OMEMO is fundamentally better than what WhatsApp does, as they are implementing the same protocol (Double Ratchet). The main differences are that one is an experimental public standard while the other is a proprietary protocol extension.
Other than that, sure, you have no guarantees, yet it's still desirable for such critical security components to be free, or at least "open source".
EDIT: I previously said "turns off E2E", which I didn't say in my original referred-to post, and that's more misleading than "hamstrings", which is how the actual attack works.
If the client is open source, you can verify exactly what it does. Compile the app yourself or download it from F-Droid and you can be sure that the binary you get matches those sources.
Sure, you can argue this all the way down to "Trusting Trust", but that doesn't really make sense when comparing two apps/ecosystems that operate under the same real-world constraints.
The random update bit is real! But also real for Conversations or whatever, and more real for small developers less likely to have their opsec in check. For the vast vast majority of people in this fashion WhatsApp is identical to Conversations and Signal.
WhatsApp is a proprietary app and as such it's only available on the Play Store. Conversations is open source so you can download it from the Play Store, or from F-Droid, or compile it from source. So if you care, you can be significantly more sure that your version of Conversations "does what it says" than you can be of WhatsApp.
No such luck for Whatsapp and Signal. And although they may be fair now I think putting all eggs in one company's basket is a bad idea in general (see e.g.: Google).
Why the scare quotes?
What about software's source code being available makes it more desirable for your non-technical users who will never modify their software?
Also, this is a fallacy. Just because "typical user" won't care or won't be able to do something doesn't mean we shouldn't strive to build and popularize platforms that do right things. Of course it also has to be good at what users actually do care for to have any chance of taking off, but that's not the point.
What is a fallacy? I asked a question.
Non-technical users will never modify their software. The act of doing so would recategorize them as "technical". I don't see any fallacy here?
The most likely interpretation of your comment is, you're assuming an argument that I am not making.
> Just because "typical user" won't care or won't be able to do something doesn't mean we shouldn't strive to build and popularize platforms that do right things.
That's a disagreement with an argument I did not make.
> Of course it also has to be good at what users actually do care for to have any chance of taking off, but that's not the point.
What is the point, even?
I just asked how something being "open source" (to borrow the scare quotes) is more desirable to users who do not read/write software code? What is their incentive supposed to be to prioritize open source over proprietary software?
That's not an argument. It's certainly not a fallacious one.
> Even though I'm not a "non-technical" user, I don't review crypto of my XMPP client.
Little bit of background: I'm the sort of person who would review the crypto of an XMPP client.
> And that's fine, I know a few people that did and I trust them enough. This way I don't have to trust an entity that may benefit from being able to access my messages.
Cool, let's ask a trustworthy expert then. I can think of no finer expert to chime in on this discussion than JP Aumasson, one of the co-authors of SipHash and BLAKE2, who wrote the book Serious Cryptography and conducted many cryptography audits in his career.
DNSSEC doesn't encrypt, it only signs.
That sentence reveals that you only know WhatsApp.
Imagine this argument:
> I think it would be fair to say that everyone uses Google Chrome and I know what they get, [the other people] maybe they get some kind of security but who knows which one and what properties that has.
That is, assuming you meant: https://wiki.xmpp.org/web/XMPP_E2E_Security
Imagine, one of your contacts is captured; attackers get his contact list, which includes you; then they get your phone number from Signal; then they get your location and put you on all kinds of blacklists, extremist lists, no-fly lists, watch lists and so on.
Signal is no better than Telegram. They should be in the same position.
Sure, you go ahead and buy an illegal burner number, then download Signal/Whatsapp from the Play Store and reverse engineer that binary to see if it "does what it says". Other people might find it useful to look at this comparison to discover alternatives that better fit the characteristics they find important.
Second, this is inconvenient: if you change your IP address or log out, the server will require you to provide an SMS code.
"AOSP" support in this context implies f-droid and implies reproducible builds, but maybe I should break that out more clearly.
Even if you have reproducible builds, you still need to audit what the thing actually does.
Mozilla has done some research to close these issues, but until this is enforced at a system level, reproducible builds won't solve the underlying problem.
It's tough to reason about on a message board. Signal does some important privacy things better than anyone else does; for instance, Signal cares more about metadata than I think any mainstream messenger does.
On the other hand, Signal made a conscious, deliberate choice to ensure that it works for ordinary users. It is not a goal of Signal's to mollify the tiny fraction of people in the world who have strong opinions about Tor; it is much more important to them that, say, any immigration lawyer in the country could readily pick it up and start using it.
The phone number ID thing creates an instantly bootstrapped social network. It solves a problem Signal actually cares about, while making a problem nerds care about a little more annoying. That is a reasonable reason not to use Signal (if you're one of those nerds), but, like DeVault's inane F-Droid conspiracy theory, not a reasonable reason to warn people off of Signal.
Signal, WhatsApp, Wire: any one of those is going to be radically better than the alternatives nonspecialists are likely to use (Fb Messenger, email). But my confidence drops rapidly when you go to anything else on this list.
I don’t even need to warn people about it, they already don’t want to share their phone number with strangers. That’s in spite of them not even knowing a fraction of what a dangerous person can do with your phone number.
Journalists and nerds know how to get burner numbers. They have less to worry about. I know people who have been sim-jacked to bypass their 2FA and steal their savings, which they did not get back. I know people who have been stalked using cell tower geolocation. I know (of) people who have hacked LexisNexus accounts and can get your whole life with a phone number. Hell, I know which forums I could spend $20 on to get that info.
Aside from the really dangerous stuff, many people don’t want harassing texts or calls from creepy nerds, so they have reservations about sharing their phone number (and rightly so). The set of people I want to casually message is far larger than the set of people who I want to have my number, and I’m not even particularly worried about these things.
I'm not sure what the solution is, besides much more interactive and thorough presentation of features in a way that allows classification of how advanced they are or likely you are to need them, but that's a lot of work. Until then, a comparison like this will always suffer from rarely matching exactly what the reader is looking for. They do work well as quick references though.
Could you provide your source? I've never seen S/MIME used in XMPP. Client certificates for authentication sure but not for E2E security.
> It supports OTR, TS and SCIMP too: but you need to be an expert in messaging schemes to understand how those are different.
OTR is being rolled back from clients in favor of OMEMO for good reasons: https://conversations.im/omemo/
I am aware that OTR is being rolled back. My point is that extensibility is at odds with practical security and privacy for the vast majority of users. The people who need it the most do not understand the difference between OMEMO and OTR, but I can tell them to go install WhatsApp or Signal, give them a rough idea of why you want one or the other, and everything will be copacetic.
And you don't have to explain the difference between OTR and OMEMO to anyone, just tell them to "Go use Conversations, it's on the Google Play store" and they'll be using OMEMO to speak with you.
> The people who need it the most do not understand the difference between OMEMO and OTR,
For most of them there won't be any difference. Modern clients do not implement OTR so people will not even encounter the term and OMEMO is enabled by default so users don't need to bother with it.
Federated protocols can be made secure, if there are popular, secure by default players on the market. Look at what happened with HTTP2 and TLS 1.3: browsers basically used their powerful position to upgrade the security for the entire federated ecosystem. There are also other minor factors, such as tooling (SSL Labs) that incentivizes people to maintain their servers. And most certainly users of HTTPS don't need to understand how e.g. Certificate Transparency works, that's handled internally by their clients (browsers).
Of course there will always be a niche of low security clients, but who cares that TLS 1.2 doesn't work on some old Java?
If calling HTTP/2 federated bothers you I can rephrase my argument:
> Federated protocols can be made secure, if there are popular, secure by default players on the market. Look at what happened with the other IETF-standard protocol: HTTP2: browsers basically used their powerful position to upgrade the security for the entire ecosystem. There are also other minor factors, such as tooling (SSL Labs) that incentivizes people to maintain their servers. And most certainly users of HTTPS don't need to understand how e.g. Certificate Transparency works, that's handled internally by their clients (browsers).
This has been my go-to advice for a while now too! The key driving point is that amazing crypto is 100% useless if the person you're talking to doesn't use it, or uses it incorrectly.
The only sticking point with the above advice is the nerds who think they understand crypto but don't and insist on you using some crazy app :/
Maybe try listening to those nerds and try out some open alternatives with security that is possible to verify.
You might be surprised to find both tools are pretty low on the list in respect to security and privacy compared to tools with smaller marketing budgets.
The issue of long-term identifiers for offline delivery is well-understood (e.g. Rottermanner05) but also not actually a Signal problem. In that light: what do you propose we do instead? (You can probably see the response coming already: let's just say metadata protection is, ahem, complicated.)
Frankly, the fact that people like you believe so strongly that Signal doesn't do this should be damning for Signal, as it goes to show just how deceptive they are being about this issue: the reality is that Signal is quite careless with metadata :/.
(As for what you do instead, there are tons of trivial ways of making a secure messaging system that are better about metadata than Signal, and even ways that allow you to implement various forms of rate limiting. Signal is just being lazy here.)
I do realize that Signal is open source, yes (and I'm guessing from your phrasing you know I know that), but I don't feel a moral imperative to source dive every time someone says something weird. Putting that burden on the claimant seems pretty reasonable. This set of threads alone was exhausting enough without having links to GitHub with every message :-)
In particular: I interpreted your "stored (at least temporarily)" claim as in like, a logfile that's rotated daily or something. I think we can both agree that would be much worse than a 60 second leaky bucket and hence I interpreted that as more outlandish than what you intended.
But regardless, you're right: that's not how it works today, I read that code an extremely long time ago (when it was really just sender ID limited), and having both sender and receiver in plaintext is clearly worse than having just one, and having a leaky bucket with a timer is clearly a much higher resolution timestamp than the day previously claimed. I'm guessing the distinction is between what's stored and what's not? But I'm definitely uncomfortable and will make a note to follow up. In particular, now I would like to know under what circumstances that Redis cluster will attempt to persist. I think the answer is "never--it only has caches and the directory", but you've definitely shaken my confidence in that answer :)
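For readers following along: a leaky bucket is a generic rate-limiting technique where a counter drains at a fixed rate and new events are rejected when it's full. The sketch below is that generic technique with made-up parameters, not Signal's actual code; the relevant privacy property is that the per-key state is just a level and a timestamp, not a log of individual messages, which is why it differs from a rotated logfile.

```python
class LeakyBucket:
    """Generic leaky-bucket rate limiter (illustrative, not Signal's code)."""
    def __init__(self, capacity, leak_per_sec, now=0.0):
        self.capacity = capacity          # events held before throttling kicks in
        self.leak_per_sec = leak_per_sec  # drain rate in events per second
        self.level = 0.0
        self.last = now

    def allow(self, now):
        # Drain the bucket for the elapsed time, then try to add one event.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_per_sec)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False

bucket = LeakyBucket(capacity=3, leak_per_sec=1 / 60)  # drains one event per minute
assert all(bucket.allow(0.0) for _ in range(3))  # a burst of 3 passes
assert not bucket.allow(0.0)                     # the 4th is throttled
assert bucket.allow(120.0)                       # capacity returns as it drains
```

Note that after the bucket empties, nothing remains that says *when* the earlier events happened, only that the level has returned to zero.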
I am a nerd - I went through a phase of trying to get people to use PGP and then XMPP. Both are technical masterpieces but a complete disaster for actual users.
Here's the crux: I'd rather have all my conversations encrypted than just the ones I have with other nerds. In this respect WhatsApp has been the best thing that's happened to messaging since the invention of the internet.
- Telegram: E2E Private: TRUE
- WhatsApp: E2E Private: CLAIMED
These are either both "true", or both "claimed". Pick one.
In particular, what's the definition of the "Open Spec" column? Signal's GPL spec gets a FALSE here, so I'm presuming the definition is something along the lines of "spec produced by one of a group of arbitrarily approved bodies of which Open Whisper is not a member".
> Not possible to verify as application is closed source. Maintainer could compromise security at any time without detection.
I think it's useful to have this differentiation, even though technically you could say E2E is TRUE for both of these.
For each release, independent people would have to compile the same source code and inspect whether there are any differences from the published APK.
One way to make this easier would be to fix the build system so each build with the same source would provide a bit-exact APK file (without timestamps). Then the independent parties just have to check if all of the bits are the same (with a hashing function).
Ideally Google would also publish the hash output of each APK so that it can be checked against another distribution channel on installation.
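Concretely, once builds are bit-exact the verification step reduces to comparing digests. The filenames below are placeholders; in practice one APK would come from your own build and the other from the distribution channel (e.g. F-Droid or the Play Store).

```shell
# Placeholder files standing in for a locally built APK and a downloaded one.
printf 'same-apk-bytes' > my-build.apk
printf 'same-apk-bytes' > downloaded.apk

my_hash=$(sha256sum my-build.apk | cut -d' ' -f1)
dl_hash=$(sha256sum downloaded.apk | cut -d' ' -f1)

# With a reproducible build system, matching hashes mean the published
# binary corresponds to the source you compiled.
if [ "$my_hash" = "$dl_hash" ]; then
    echo "MATCH"
else
    echo "MISMATCH: do not trust this binary"
fi
```

This is only meaningful if the build really is deterministic; a single embedded timestamp breaks the comparison, which is why the build-system fix has to come first.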
Technically one could still argue the same for Telegram unless you're self-building, given their source-release delays, but that might be a nitpick too far.
Authentication can be tied into 3rd party apps (LDAP, phpBB, etc) but I have not tested this yet. 
If you try it, use their latest snapshot for server and client. Incredible sound quality. Nice UI/UX experience. Decent support for game overlays. Very low CPU usage.
 - https://wiki.mumble.info/wiki/Main_Page
 - https://wiki.mumble.info/wiki/3rd_Party_Applications#Authent...
So yes, XMPP supports audio and video calls but finding two different clients which work on the first try together can be a challenge. Sometimes I wish there would be some compatibility XEP which defines a common set of supported XEPs including a test suite to run it against.
What is still missing is everything above the wire protocol level. The XSF, being the XMPP Standards Foundation, is guarding the protocols, and things like UX and client interoperability are considered out of scope. However, there are people interested in these topics as well, looking for fresh collaborators.
I'm getting so tired of this. Real world user experience is that the constantly changing interfaces are driving everyone mad.
Stick to a thing and let people learn it.
(Yeah Slack, I'm looking at you.)
Granted XMPP is not a messenger it's a protocol and a bunch of standards but still it's hard not to laugh.
We came to the conclusion that XMPP was built for desktop computers connected to strong internet via LAN connections not for mobile networks. We went on to invent our own protocol which was byte efficient and minimized roundtrips.
Now take all of this with a grain of salt. It is now 2018 and things have changed considerably.
Oh it definitely was. However, extending it to work well on mobile devices without throwing the whole protocol away turned out to be feasible.
Sadly, those weren't really enough at the time, and the new protocol Babel happened.
Completely understandable. And well, looking back it was the right decision. You definitely have solid proof.
Yeah as much as XMPP looks good on paper I don't think it will ever take over the messaging world. Even if the devices and the network capacity could handle the larger messages I feel developers don't really want to deal with XML standards. There were a lot of cool things in the 90's but XML wasn't one of them.
Fortunately, in XMPP country, we're always just a XEP away! </s>
See also: https://xmpp.org/extensions/xep-0286.html
Part of the problem is the inability to assign nicknames to contacts, so you have to remember everyone's Matrix ID.
IMO: Text chat with a few emojis and images here and there should not ever be among the things that slows your computer to a crawl.
EDIT: I'm speaking of the UI, not the network connection; the latter is sometimes slow too, but that's understandable
We're still aiming for a full redesign of Riot.im to be out before the end of the year; there should be things to look at at the end of the month, all crafted for non-techy friends, so watch this space!
And yes, the backend is not helping either, although we've also made good progress on perf improvement there in the last months and are still rolling out new ones (e.g. Py3 being deployed as we speak, reducing server RAM by a factor of 3). Switching away from matrix.org can help, and agreed that a directory of public servers à la Mastodon could be interesting (although we would need to find a non-scary way to do so; lots of non-tech people would run away from it: they just want one-click onboarding without having to understand what's happening behind the scenes).
We've also soft launched a paid hosted offering, for 50-100 people teams who could do with their own DNS and faster servers at http://modular.im
Native IRC clients for example have no such problem, and consume ~0% CPU at all times.
Aha, I was thinking of the mobile client, which is fine perf-wise imo. But yes, the Desktop/web client is very Slack-esque. Not in a good way.
Also Signal isn't federated and doesn't use the Matrix protocol.
Android (no audio/video): https://conversations.im/
Web with extras: https://movim.eu/
Desktop (Mac): https://adium.im/
Desktop (Win): https://gajim.org/ (not sure about this)
Desktop (Linux): http://pidgin.im/ (needs extras: https://petermolnar.net/instant-messenger-hell/#extra-plugin... )
Hence the link for extras, but it is a valid issue.
Gajim is XMPP only, where Pidgin is multi-protocol, which is the main reason why I'm still using it.
The EFF doesn't do recommendations for all users - whether a messenger works for someone depends on their threat model. https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-r...
I did a gradual migration of our group chat from $LEGACY_CHAT_SYSTEM to Telegram because at the time, all the options with better crypto stories fell down; WhatsApp is very anti-bot, Wire's API was unusable on free accounts, Signal's group chats work differently / would've needed O(n) phone numbers, Rocketchat's IRC integration was incompatible, Matrix couldn't be simplified to hide the federation for standalone, etc etc.
We literally had better privacy when we had analog phone lines that anyone could tap into. That's just terrible.
FWIW I believe Riot/Matrix are planning e2e by default as soon as their implementation stabilises. Theirs is more complex/powerful than WhatsApp's though since they have multidevice support (which WhatsApp lacks). They've avoided making it default so far due to bad UX and the possibility of losing access to conversations across devices, but it's improving rapidly.
* who's chatting with whom,
* via what means (text, audio, video),
* conversation time and duration,
* location of participants,
* how much data transferred etc.
There's a lot they can gather and infer from all that once WA users' phone numbers are known to FB, so they can graph connections via others who run the FB app or previously shared their contact info, and via those who are perhaps logged in on, or simply visiting, other sites they track via like/share buttons etc. Whilst what you say is encrypted, the circumstances around that conversation can be used, at least to advertise to you.
There is no way that an open IM platform will be able to guarantee E2E by default on all clients, simply because someone/somewhere will produce a client that doesn't support it or doesn't do it properly. It is probably better to start with the E2E encryption system (in my example OMEMO) and then see how far you can get with it.
Anonymous account creation? Open source? Audited?
Because LINE claims to support E2E by default ("Letter Sealing"), but only one of those two listings says "claimed" (the other says false).
Stealthy.im isn't mentioned (I develop that presently).
Stealthy makes use of decentralized identity in the blockchain and a decentralized storage system called GAIA. Regular chat is E2E encrypted using ECIES. Currently released for Android and iOS.
Regarding Wire, isn't the contact list stored on Wire's servers? Then it is even less private than Jabber.
Would love if the decentralised/federated were two distinct columns. :D
Obviously your threat model informs this.
Many irc servers listen on an ssl port, but this then only encrypts data between client and server. The data is decrypted on the server and sent to other clients. (Maybe encrypted)
Some ircd's support "encrypted only channels" where if you're not connected via ssl, you can't /join
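To make the first-hop point above concrete, here is a minimal sketch of an IRC client connection over TLS (the hostname is hypothetical; 6697 is the conventional TLS port). The encryption ends at the server: the ircd decrypts and relays your messages, so this is transport security, not end-to-end encryption.

```python
import socket
import ssl

def open_irc_tls(host, port=6697):
    """Connect to an IRC server over TLS. Only the client-to-server hop is
    encrypted; the ircd still sees and relays plaintext messages."""
    ctx = ssl.create_default_context()  # verifies the server certificate
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# Usage (not run here): sock = open_irc_tls("irc.example.net")
```

Even on an "encrypted only" channel, every participating server in the network sees the plaintext, which is why channel modes like this are a weaker guarantee than client-side encryption.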
Also this is lacking a column for "stickers". Seriously they are one of the things that has my and my crowd using Telegram, especially since it's really easy to make your own custom sets instead of relying on whatever they pay someone to draw for you.
Blockchain, as usual, purports to solve a problem nobody had. Dust doesn't try to address the simplest, best-understood problems we have for reputation systems on chained blocks.