A Formal Security Analysis of the Signal Messaging Protocol (iacr.org)
253 points by galadran 169 days ago | hide | past | web | 220 comments | favorite

I don't understand all this recent Signal bashing here on HN.

I have been using Signal for some time now on a daily basis and I haven't had real usability issues. We cannot hold OWS responsible for the insecurity of our operating systems, the nature of today's cloud or hardware infrastructure, the choices we make for comfort reasons, and what not. What they do is provide us with, and I think most of us would actually agree, a secure messenger that is both free of charge and best of breed, or close to. And as with every open source project, it's their project, but you're free to fork it and provide us with something better if you don't agree with their choices.

So if you need a secure messenger now, because you need or want privacy, Signal is an excellent option, free of charge, open source. What are we actually complaining about?

I don't know moxie but it seems that he's actually open to suggestions and offers if you're willing to provide some manpower as well. Then he has his convictions but still offers to discuss them constructively. Again, what is there to complain about?

As for myself, I contact support if I have a question or issue, and they have been very helpful. I donated to the project, also because they are supported by http://freedom.press, and I value a free press. And even though I am absolutely not interested in the GIPHY thing (I'm on iOS, so I haven't really seen it yet), I'll open an issue on GitHub if I want Signal to change. And I invite everybody to support the project in this way, and make sure that the projects that actually support our interests don't get abandoned in favour of comfortable-to-use data hogs like Facebook, WhatsApp or Telegram.

This may not be true of everyone, but my own criticisms of Signal are meant as a constructive critique, and not as "bashing." Signal occupies a valuable spot in a new market, and we all win if it continues to improve.

Think about what is at stake. Many people working in politics, law, human rights, and other areas absolutely need a way to communicate securely, especially when their causes don't align with the interests of those in power. If you're wondering why people are so passionate about having a more secure platform, that's why. In this case, it's important to be forgiving if people seem hostile or overly critical.

I'm one to claim that Signal isn't for me on a regular basis. Can't find the link right now, but I confessed a good while ago that I somehow hold Signal to another standard.

Currently I'm backing off, trying to reduce my critical comments. It's a great project, and it probably should be more widely adopted.

For me? I feel this is a weird uncanny valley effect for IM solutions. It's not ugly as shit, but it's also not quite there yet, and that leads me to lash out at Signal every now and then (keywords for me: lacking federation, phone number as a crappy _mandatory_ identity).

Now, that's really not fair. I don't complain about WhatsApp as much as I complain about Signal. It's a double standard and I understand that - but can't help myself here.

Personally speaking, I would much rather have a feature in Signal that allows me to send documents or attachments from my various cloud providers than a GIF search.

It's great that the underlying Signal Protocol is secure; my major worry is how we get more people to use it, and how we make it more reliable.

As it stands right now, Signal is used exclusively by the moderately technically inclined, with a little over 1 million users. In a perfect world, it would be bigger than WhatsApp, which uses libsignal but has many metadata-related issues, and also misses many older demographics in the US. Additionally, server uptime has been something like 710 hours out of 720 in a typical month, with an outage just this last Saturday from 19:51 to 22:08 PST for everything but ZRTP calls.

I know Moxie will likely never allow interoperability with his servers after the cluster that was CyanogenMod WhisperPush interop, but we need something to allow for self-hosting or alternative hosting. Signal's servers are not bulletproof, and a local server in remote areas can be invaluable; essentially, XMPP with Conversations is all we have in this arena right now.

Hmm, I'm not sure where you got these numbers, but none of them are correct. Ten hours of downtime a month? We measure this pretty obsessively, and haven't had that much downtime in a three year period.

I appreciate the sentiment for what you're saying, but our direct user growth has surprised even us (again, not sure where you're getting your numbers), and Signal Protocol is now on over two billion devices.

If your major concerns are reliability and user growth, I think federated protocols are likely to exacerbate rather than improve those conditions -- as we've seen with XMPP historically. However, I would love it if you proved me wrong. Signal can be deployed in a federated environment today, let me know if you need any help setting it up.

This is what I've observed on T-Mobile, CenturyLink and Comcast over the past month across two phones. It does at least allow me to backdate messages and send them later, so not a total loss.

In regards to federation for Signal, what are you looking for in terms of technical competence and skills? I'm already intimately familiar with running servers, VoIP, SMS delivery, etc., so I've got a good grip on things, and I'd like to tie our existing PBXes in with Signal so that ZRTP calls go straight to endpoints. Perhaps we can force the issue with Grandstream and get them to add on-device ZRTP support for their GXP2140 & GXP2170 phones.

How many direct users does Signal-the-app have?

If you can further narrow that to active users per $TIMEPERIOD (for example, MAU), that would be great.

I find it's really easy to get people to use it. I tell them to install it and then use it as my primary communication channel with them. As I do this with more and more of my contacts some of them will have each others numbers and will automatically be able to talk to each other via Signal.

I have no problem getting people to use it; the problem is getting people to keep using it.

For example some time ago when Signal was still called TextSecure, we converted all our group chats for a friend group of technically minded people to TextSecure. We all agreed that encryption is important and worth some minor discomforts. We switched back to Facebook (which we all dislike) within a month because TextSecure didn't have consistent message ordering in group chats, making some conversations impossible to understand for those unfortunate enough to receive messages in a bad order.
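For context on why out-of-order delivery is so confusing: consistent ordering is usually achieved by sorting on sender-supplied metadata (a logical clock) rather than on local arrival time, so every device renders the same sequence. A minimal sketch of that idea follows; this is my own illustration, not TextSecure/Signal's actual mechanism:

```java
import java.util.*;

// Hypothetical sketch: sort group messages on a (logicalClock, senderId)
// pair instead of arrival time, giving every device the same total order
// regardless of the order in which messages were delivered.
public class MessageOrdering {
    static final class Message {
        final long clock;    // sender's Lamport-style logical clock
        final String sender; // tie-breaker, so the order is total
        final String body;
        Message(long clock, String sender, String body) {
            this.clock = clock; this.sender = sender; this.body = body;
        }
    }

    // Deterministic display order: clock first, then sender id.
    static List<Message> inDisplayOrder(List<Message> received) {
        List<Message> sorted = new ArrayList<>(received);
        sorted.sort(Comparator.comparingLong((Message m) -> m.clock)
                              .thenComparing(m -> m.sender));
        return sorted;
    }

    public static void main(String[] args) {
        // Messages arrived scrambled on this device...
        List<Message> received = List.of(
            new Message(2, "bob",   "No, Saturday works better"),
            new Message(1, "alice", "Friday for the meetup?"),
            new Message(3, "alice", "Saturday it is"));
        // ...but they still render in a sensible conversation order.
        for (Message m : inDisplayOrder(received))
            System.out.println(m.sender + ": " + m.body);
    }
}
```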

This is mostly fixed as of a few months ago, the desktop client also makes it as convenient as Facebook.

The problem is that they chose to make it a Chrome App, and Chrome Apps will eventually be dropped from all platforms but ChromeOS.

You can't really complain about the choice of the best option at the time just because its owner subsequently killed it. I think it was madness and internal Google politics (because it competed with Android Dalvik JVM apps) that killed it, not technical problems.

True, I assume they'll just ride that till the bitter end and then unhappily move to using nwjs instead of Chrome, and distribute it from their website instead (even though they seem highly opposed to that).

I wouldn't mind if they decided to make an Electron app instead.

I would, I really dislike Electron. Even Chrome Apps are a pain. I would be much happier with a native implementation.

Or a fucking website

Yeah, same here. Except then messages arrive out of order, duplicate, for another session, "bad encrypted message", or simply take minutes to deliver. I don't understand why duplicates are even possible.

So while I'm able to get people to try it out or use it for "sensitive" stuff, these issues really hurt usage, and I'll just end up reverting to SMS at least once a day.

Edit: I don't want to sound like I'm complaining too much. I love Signal and try to convert people. It's just hard when the basics mess up. And it's also frustrating because, on the surface, some of this stuff doesn't make sense.

I experienced this with TextSecure over a year ago, but since then it has been smooth sailing outside of minor outages (10hrs a month), and those are less than the regional SMS outages we've been experiencing on AT&T and T-Mobile as of late (20 to 30hrs a month). Calling has also improved quite a bit; I've been using it exclusively for Signal-enabled contacts for the past 9 months, and I can drive for an hour without call quality issues and maintain a stable call reliably.

Is the codec choice any better? The last time I tried to use it the audio quality was awful.

Audio quality has been good enough for me, but there's definitely room for improvement as they're still using Speex (if I'm not mistaken).

If they'd use Opus, I'd really appreciate it. Hell, if I could even get interop; I have hacked together an app to pass the ZRTP codeword to my deskphone when FreeSWITCH terminates it and re-encodes it as SRTP/TLS, since no deskphone vendor supports ZRTP.

It would make it a lot more convenient to have extended convos with those in my life who insist on ZRTP.

I use it for calling when abroad, and so far the audio quality has been quite good. I don't know if it was worse in the past, but maybe just give it another try?

My major problem is when a contact stops using Signal for some reason (new phone etc.).

My messages go into the ether without any indication that they are not being received.

You should just get a single check, meaning the server received it but the other user's device hasn't.

That's the kind of stuff I was getting on TextSecure and RedPhone. Many people refused to use it after it happened enough. Just the tech types that are hardcore about privacy.

In other words, the Chrome evangelist strategy (or Firefox evangelist strategy prior to that): "Let me install this app for you - trust me, it's much better!"

Way easier than others... Wickr, etc.

One hurdle for me (at least) is the lack of export/import/backup. I got a new phone recently and discovered that there was no way to restore message conversations onto the new phone.

This might not be a deal-breaker for some, and maybe even a very nice feature for others, but for me it made me go back to other services.

Github issue: https://github.com/WhisperSystems/Signal-iOS/issues/967

At one point Signal had working backup & restore on Android, whereby you could keep your private keys and seamlessly move between phones, but they did away with that since a few users encountered bugs.

I really wish they hadn't axed it; it was a lot easier to use than doing an adb backup of the app & data.

Looking at that GitHub issue, Moxie appears to have popped in for half a second out of annoyance, and then the main repo maintainer, michaelkirk, locked the issue and limited it to just contributors.

The problem with backup & restore in Android was that it didn't work reliably. I know first-hand. If the feature isn't there, people won't rely on it, as they may do when it's there but doesn't work.

The thing with locking issues is that Signal issues get so much noise that the content will drown in it. I dislike not being able to add on-topic comments, too, but I understand how they got where they are.

I'm ignorant on how backups work on Android, but I have never met an iOS app where data has not been properly backed up and restored. Supposedly this works fine: https://developer.apple.com/library/content/documentation/iP...

It was a feature in the app to export an encrypted backup to external storage (encrypted with a passphrase of your choice), independent of OS-provided functions. It basically serialised and unserialised its own database.
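For anyone curious, the general shape of such a passphrase-protected export/restore can be sketched with nothing but stock JDK crypto (PBKDF2 key derivation plus AES-GCM). This is only an illustration of the idea; it is not TextSecure's actual backup format, and the constants are my own choices:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical sketch of a passphrase-encrypted database export.
public class EncryptedBackup {
    static byte[] deriveKey(char[] passphrase, byte[] salt) throws Exception {
        // Deliberately slow key derivation so brute-forcing is expensive.
        KeySpec spec = new PBEKeySpec(passphrase, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }

    static byte[] export(byte[] serializedDb, char[] passphrase) throws Exception {
        SecureRandom rng = new SecureRandom();
        byte[] salt = new byte[16], iv = new byte[12];
        rng.nextBytes(salt); rng.nextBytes(iv);

        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(deriveKey(passphrase, salt), "AES"),
               new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(serializedDb);

        // File layout: salt || iv || ciphertext+tag
        return ByteBuffer.allocate(salt.length + iv.length + ct.length)
                         .put(salt).put(iv).put(ct).array();
    }

    static byte[] restore(byte[] backup, char[] passphrase) throws Exception {
        ByteBuffer buf = ByteBuffer.wrap(backup);
        byte[] salt = new byte[16], iv = new byte[12];
        buf.get(salt); buf.get(iv);
        byte[] ct = new byte[buf.remaining()];
        buf.get(ct);

        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(deriveKey(passphrase, salt), "AES"),
               new GCMParameterSpec(128, iv));
        return c.doFinal(ct); // throws if passphrase or data is wrong
    }

    public static void main(String[] args) throws Exception {
        byte[] db = "serialized message table".getBytes(StandardCharsets.UTF_8);
        byte[] backup = export(db, "hunter2".toCharArray());
        byte[] restored = restore(backup, "hunter2".toCharArray());
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```

The salt and IV travel alongside the ciphertext, so only the passphrase needs to be remembered, and GCM's authentication tag makes a wrong passphrase fail loudly instead of restoring garbage.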

Just to clarify, this is iOS only.

On Android, the client can export plaintext messages to storage, or you can use something like Titanium Backup, which wraps up the app installer AND data directories in a tarball and can restore it across many different versions of Android.

The lack of some good history mechanism is what has always kept me away from this kind of software. But I don't know how we could possibly get the kind of messenger I would dream of (secure/encrypted, history backup to cloud, and web app for messaging).

> essentially XMPP with Conversations is all we have in this arena right now

There's also Matrix (matrix.org). Took me about half an hour to set up a homeserver; it has federation and TLS for now, but full end-to-end encryption is already in beta (since I'm using my own homeserver, I'm fine with just TLS), and Riot, the biggest client, is easy to set up and use. My wife is rather opposed to new things and isn't amazing with technology, but she's happy enough with it.

Conversations, on the other hand, was a pain to set up self-hosted (more Prosody/eJabberd's fault, though) and had silent message send/receive failures.

> how do we get more people to use it?

For me the missing link is a good, native Linux client. Signal would have "got" me the last time I was shopping around for a communication app if it had that. Then again, I probably represent maybe 0.01% of the audience you are trying to reach...

Note that, while they do not retain or analyze it, Signal offers no metadata protection either.

Exactly why I like XMPP: it allows me to protect my metadata from leakage, if only I could get normies to use it for more than a few weeks.

> Exactly why I like XMPP, it allows me to protect my metadata from leakage

I read this frequently, but I'm still not sure how this is supposed to work - would you care to explain?

Say I run the server under my control, and I talk to other people on and off my server via XMPP. What is occurring is that my laptop, phone & tablet connect back to my XMPP server, which then connects to those clients, thus not leaking my IP address, OS, etc. Additionally, I can tunnel this all over CJDNS or OpenVPN, and no one will be the wiser that I am even using XMPP.

Comparatively, as it stands now Google is getting a bunch of metadata on Signal users, such as when messages are sent and received and from which device, IP addresses, OS info, etc.

> Say I run the server under my control, and I talk to other people on and off my server via XMPP. What is occurring is that my laptop, phone & tablet connect back to my XMPP server, which then connects to those clients, thus not leaking my IP address, OS, etc.

Here it seems that you're defining "metadata" as your IP address (and OS?). That's kind of a non-standard definition of "metadata" in this space -- most people approach the topic more concerned about who is communicating with whom.

Email is federated, and I run my own mail server, but almost every single email I send or receive has GMail at the other end of it -- so running my own server does not provide me with any meaningful metadata protection, even though it is a federated protocol. The idea that everyone in the world is going to run their own mail server (or messaging server, or whatever) has not borne out in practice, even in environments that natively support federation.

I think serious metadata protection is going to require new protocols and new techniques, so we're much more likely to see major progress in centralized rather than distributed environments (in the same way that Signal Protocol is now on over two billion devices, but we're unlikely to ever see even basic large scale email end to end encryption).

If all you want to do is hide your IP address, it sounds like you should just use Tor or a VPN.

> Comparatively, as it stands now Google is getting a bunch of metadata on Signal users, such as when messages are sent and received and from which device, IP addresses, OS info, etc.

This is not true. You're referring to GCM? The only thing GCM does is wake up a device to connect to the Signal server when the app is running in the background, nothing is actually transmitted over GCM.

> The only thing GCM does is wake up a device to connect to the Signal server when the app is running in the background, nothing is actually transmitted over GCM.

I consider myself an educated Signal user and I had no idea about that. Preach it, shout it, this is great for everyone who thought "GCM == messages"!

I can't see that allowing for self-hosting / federation will do anything to drive adoption.

If that were the case, then Conversations would be bigger than Signal, but it just isn't.

Improving server reliability is a separate thing.

It could be like email, except with libsignal as its base. Federation definitely drove email to be as big as it is today, and it has made server reliability a non-issue (your server holds the message until the other server is online).

XMPP itself is huge; GCM is essentially just a frontend for it. Conversations is smaller than Signal in part due to Signal being around since 2011 and Conversations being created in 2014, and also due to Signal gaining many high-profile endorsements, from Snowden to Hillary. Still, Signal is microscopic compared to WhatsApp.

> essentially XMPP with Conversations is all we have in this arena right now.

Maybe this isn't too bad. How does modern XMPP compare with Signal's protocol?

Since Signal refuses to learn from XMPP (especially the federated part), maybe the solution is to make XMPP learn from Signal. Then, Signal's failure to become federated would have been an important step to make XMPP finally useful for secure messaging.

It's not too bad; XMPP is very easy with a good client like Conversations. That being said, it is not for normies: over the course of a year, Signal will be used and appreciated by normies, while normies using Conversations will go permanently offline after a few weeks, from what I've seen.

Conversations is essentially Signal but for XMPP; the problem is it doesn't blend in normal SMS and the additional value-adds that sweeten the pot for normies and keep them using it.

The protocol may be in good shape, but Signal's permission model is flawed for an application that handles sensitive data.

A few examples of excessive permissions:

* Disable your screen lock

* Location permissions

* Set wallpaper ("kitchen sink" feature here)

* External storage (why not use internal "app only" storage?)

* System log data

Android's sandbox will do its best to protect the user from compromised applications, but it can't do anything to protect you if the application already has full permissions. Based on recent events, I would assume that Signal users are at risk of being targeted as a block: their desire for privacy makes them interesting targets from an intelligence and LEO perspective.

Many successful applications follow a plugin model, where intrusive permissions are split off into separate, optional applications. Signal should do the same.

> A few examples of excessive permissions:
> * Disable your screen lock

I believe this is for the "call screen" (though it's never really worked for me and my phone gets or stays locked nonetheless when I call someone or someone calls me).

> * External storage (why not use internal "app only" storage?)

Because otherwise you can't extract photos you received (let alone view videos) which, from a security standpoint, is good but it's something you'll have a hard time selling to the average user. The same goes for sharing your location.

I agree that there are legitimate reasons for these permissions, I just don't think they match up with everyone's use case for the product. This is why they should be split into plugins.

Example of how the UX for this works:

1. User chooses "attach photo."

2. If user has not installed the plugin, Signal gives them an informational prompt and a button that opens the app store link.

3. User clicks the button to go to the app store.

4. User clicks "install." Application is installed (should be quick, small app)

5. User can now attach photos from external storage.

Steps 2-4 are short and occur only one time. You would not want this kind of extra friction in a true mass market app, but I'd argue that Signal is not and never will be mass-market. (We can hope, but it's not likely.) Signal's target users, on the other hand, would be likely to appreciate this extra focus on security and user control.

Here's how the UX actually works:

1. User chooses "attach photo."

2. The user hasn't installed the plugin. Who does that? So Signal doesn't work.

3. The user sends their contact a Facebook message saying "Signal isn't working" and attaches the photo.

4. Both users uninstall Signal and tell everyone who mentions it that it can't even send picture messages.

That would be poor design. Are you saying that the Signal team would not be capable of implementing a more effective path? Other applications have followed this approach before; it's not rocket science.

I believe you misread tedks.

I read

> 2. The user hasn't installed the plugin. Who does that? So signal doesn't work.

as "when the user sees that it doesn't work out of the box, and sends you to an app store instead, they consider it broken."

If that was their intent, then yes, that would be an issue. The key is good UX; don't send the user there without explanation. A well-designed "read this!" screen is key, and even then you will lose some users. It's a trade-off.

Also, I did acknowledge that this approach will turn away "mass market" users, but again, I don't think that those users will ever be Signal's primary user base. Most people are going to use stock apps or whatever is most heavily marketed (read: whoever spends the most dollars on acquiring users). Signal frankly can't afford to buy its way into the mass market. It's a niche app, and it should focus on catering to that niche.

I think in more recent Android versions an app can defer obtaining a permission and then fail to perform the action if it is not granted. I've had a few programs do this (specifically, ask permission to access external storage only when I attempted it) and was pretty happy with the experience.
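The pattern is to gate the feature, not the app: request the capability only when the user triggers the action, and fail just that action gracefully if it's denied. A hypothetical sketch in plain Java (PermissionGate stands in for Android's runtime-permission request flow; none of these names are real Android APIs):

```java
// Hypothetical sketch of the "ask at point of use" pattern.
public class LazyPermission {
    // Stand-in for the platform's permission prompt; NOT a real Android API.
    interface PermissionGate { boolean request(String permission); }

    static String attachPhoto(PermissionGate gate) {
        // Ask only when the feature is actually used.
        if (!gate.request("READ_EXTERNAL_STORAGE")) {
            // Feature-level failure, not app-level: messaging still works.
            return "Can't attach photos without storage access.";
        }
        return "Photo attached.";
    }

    public static void main(String[] args) {
        PermissionGate granted = p -> true;  // user tapped "Allow"
        PermissionGate denied  = p -> false; // user tapped "Deny"
        System.out.println(attachPhoto(granted));
        System.out.println(attachPhoto(denied));
    }
}
```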

> Many successful applications follow a plugin model

Uhm, do you have examples to support your claim? None of the truly successful applications I know do this, as it's rather inconvenient from the user's perspective.

All in all, I think your issue is with the Android permission model, not Signal, though you're suggesting workarounds for how Signal could improve the situation a little.

Automate (by Llamalab) is the best example that comes to mind. Tasker also uses plugins, but their "base" app has too many permissions.

A few other applications that use plugins:

* ES File Explorer

* Trigger

* FB Reader

* Threema

And there are others. Most of these don't do this for the explicit purpose of permissions management (Automate may be the only one), but there's no reason that security can't be the primary motivation for a plugin system.

There's another bonus to this approach: a plugin architecture allows you to add controversial features without forcing them on your user base. Don't like a new feature? Don't install it.

Yes, a plugin architecture adds complexity. However, Android's intent system is built to simplify this kind of design, so it's not like you have to build it all from scratch.
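To make the intent-based dispatch concrete, here's a hypothetical sketch where a simple registry stands in for Android's PackageManager/Intent resolution; none of these names are real Android APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of intent-style plugin resolution: the main app fires
// a named action, and either an installed plugin handles it or the user is
// pointed at the store listing for that plugin.
public class PluginResolver {
    private final Map<String, String> installedHandlers = new HashMap<>();

    // Stand-in for the OS registering a plugin's intent filter on install.
    void installPlugin(String action, String pluginName) {
        installedHandlers.put(action, pluginName);
    }

    String dispatch(String action) {
        String plugin = installedHandlers.get(action);
        if (plugin == null) {
            // No handler: explain why, then link the store page.
            return "No handler for '" + action + "': prompting store install";
        }
        return "Handled by " + plugin;
    }

    public static void main(String[] args) {
        PluginResolver r = new PluginResolver();
        System.out.println(r.dispatch("ATTACH_PHOTO"));
        r.installPlugin("ATTACH_PHOTO", "signal-storage-plugin");
        System.out.println(r.dispatch("ATTACH_PHOTO"));
    }
}
```

The one-time friction lives entirely in the "no handler" branch; once the plugin is installed, dispatch is invisible to the user.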

I could be wrong, but it seems like none of those apps are really mass-marketed to consumers (versus power users). The goal for Signal is to make it something that could gain mass adoption, and requiring users to install plugins in order to do simple things like attach photos would be a serious hindrance there. It's hard enough to convince non-techie friends to install another messenger as it is.

Threema is a direct competitor to Signal.

While I see it as a competitor, I can't see their target markets being the same. Signal, I feel, is targeting average users. Threema's sign-up process and verification make me feel it's for the power user.

I used Threema with my extended family for a while (so not power users). Sign up and (optional!) verification was not a problem. Group management was and ultimately made everybody switch to WhatsApp. Things like "group creator left group, now we must create a new group..." and "identity-only backup means you lose group admin rights".

On Marshmallow or newer, can't these be blocked?

I was alarmed at the Location permissions at first too, but I believe it's only used so that you can send someone your GPS location. If you simply deny the request, that feature will be disabled, but I don't think it prevents the app from working.

Only some can be blocked. Unfortunately Android still groups permissions together, and all related permissions must be enabled or disabled as a block. (For instance, storage read and write permissions are technically separate, but they are placed together in a single "storage" group.)

Most (all?) permissions in the catch-all "other" group cannot be disabled at all.

One of the interesting irrevocable permissions in that "Other" group is full network access, so any app can portscan your network or try to hack your IoT devices. Bluetooth pairing is also in Other. The Android permission model is terrible.

What ever happened to Intents? Shouldn't photo sharing be a simple Intent rather than requiring a new UI and storage permissions for every single app?

Can I suggest a nice program called NetGuard?

But yes, this is a deliberate Google policy, just like they don't want ad-blocking programs. You are an input to their revenue-generating system, not a client.

It's open source.

You can see exactly what it's doing.

The issue with permissions is closed source apps that do shady things with extra permissions...

The problem isn't whether you can understand the code or not. The problem is whether or not there are undiscovered exploits in the code that would allow attackers to take advantage of Signal's relatively open permissions.

It's very hard to write bug-proof code. Restricted permissions would be a sensible countermeasure since Signal is a likely target.

I was downvoted heavily for suggesting that! I don't really think that breaking out of its sandbox is the problem, though; the problem is it touching and manipulating data from the network. All the data I worry about losing is inside the Signal app.

It would be better if features could be disabled. Sadly the only way to achieve that is to fork the code base and run your own dev build.

> * External storage (why not use internal "app only" storage?)

I'm low on space on my 16 GB Android phone. I'm moving everything I can to external storage. My greatest problem is the apps that don't allow it, like WhatsApp.

Android does not have fine-grained permissions (to be fair, neither does iOS), nor the ability to internally segregate components or define inter-app trust. So while you are correct, the only option is to disable features. Features are necessary to grow usage beyond the PGP-using ghetto to the ordinary cell phone user.

Rather pointless to have a "trusted" application on an operating system you cannot trust – and not even the possibility to run the application on an even remotely trusted and private system, in particular without giving unaccountable root access to Google.

By that same logic, we shouldn't even bother trying to make safer languages (like Rust is attempting) to run on existing OSes and hardware. What's the point? We can't trust the underlying OS.

The point is, it's a step towards a future where a much greater percentage of our systems is vetted, verified, and shown to be secure/stable (modulo external components beyond their control), while minimizing those external components.

> and shown to be secure/stable

For various definitions of "secure/stable".

For me, anything which Google can reach and amass information from, and thus the NSA, is not secure. Rust on Linux, which I am playing with, is fine.

And it's not only the "conspiracy style" "why would Google put backdoors in its 'Play Services'"; no, it's more like:

"oh, Google receives and sends notifications for every Signal message sent and received, among other information, such as device ID, phone number, Android version" - in short, who is using Signal and when.

Signal is amassing huge amounts of information for the benefit of Google. Look at their GitHub page, where they even say they want to amass more data and to "annoy the hell out of users" to make them update, shoving updates down their throats Microsoft-style.

> Look at their GitHub page, where they even say they want to amass more data and to "annoy the hell out of users" to make them update, shoving updates down their throats Microsoft-style.

I imagine this is by design. What do I do if there is a vulnerability that a patch fixed, but the person I'm talking to refuses to update?

I am not saying I like the idea of bricking an app if there's no update for three months, but if you agree that the fight is against dragnet surveillance, not targeted surveillance, then this is a reasonable compromise.

Nope, it isn't; it's up to me as the user to decide when to update. What Signal (or any other app) can do in such a case is provide a simple in-app message: "Hey, there was a vulnerability, read more about it here." Then the user can decide.

And not "hey user, take this update, which contains backdoors since the main developers got gagged/blackmailed; trust us, this time for real".

Again, the idea that Signal's developers were "gagged" or "blackmailed" is a lie, and you should be embarrassed to repeat it.

In a security sense, this is actually a very real concern with which even a warrant canary cannot help. Could you provide hard evidence that Signal's developers have NOT been "gagged" or "blackmailed"? I think such a proof would be infeasible at best.


Again: criticism of Signal is in-bounds, but outright allegations that Signal is a shill for Google is not. Moxie is a member of the HN community, and you cannot make these kinds of allegations about him here in this fashion.


How does it make it more difficult for the USA to get your data if the encryption is not as good? This is pure nonsense, sorry. If you think the NSA is dependent on Google to suck up your data, oh boy... Sir, would you be interested in purchasing a fine bridge?

I upvoted this comment of yours because it seemed kind of unfair to me for it to be downvoted, and I don't like people being told what they can or can't do, but the gist of what you're trying to argue is complete nonsense. GCM dependence is a complete non-issue in context, and to say that because Telegram doesn't have that dependency it is safer from US intelligence is over-the-top ridiculous. Signal-the-app has its problems, and a fair share of questionable decisions, but GCM and the Play Store are not among them.

That Moxie acts with contempt toward people in GitHub issues is true. It is also true that most of those people (in the exchanges I have read) are complete ignorants who wouldn't know security if it hit them on the head, yet they insist and yell with noble indignation that they're correct. The kind of people who were taught that "more FOSS = more security" is an axiom and go from there. Since he gets tons of that, I kind of understand why he might act like that. I understand, but do not condone it; it's extremely off-putting. In the end, one of Signal-the-app's biggest problems is that it's mostly criticized by unreasonable people.

Thank you.

There is 3 issues which Im trying to clear up here. With GCM and Signal:

1, Google and in turn USA based data-centers receive events/more-data to analyze. That this data is not the encrypted message, not the decrypted message doesnt matter. This data can be analyzed and used effectively. Its not impossible to think of "push-message for android phone available at X time - which looking through our other database shows it has Signal but doesnt have WhatsApp, hm, our other databse shows that android device has Signal app usage of X%". Oh boy.

2. Google has root on your phone, so even if the transport security is very good, it doesn't matter: the phone is effectively owned by Google, and thus messages/keys can be stolen at will.

3. The choice should be with the users, not with Moxie. Moxie has shown contempt and disregard for users' wishes; see the quote "annoy the fucking hell out of our users".

That commenter's point is relevant here because major groups in the threat model have 0-days in the platforms Signal runs on, and many black hats hack Android too. So anyone with that threat model can't trust Signal at all.

Outside that threat model, it's a useful privacy tool in that it at least reduces risk from some vectors. You still need OS-level security, like putting it and a trusted path on OKL4 with secure firmware. Even then, the subversion risk is so great that it still can't be used against nation-states. Better to put a usable front end on a very cross-platform, easy-to-isolate tool like GPG. Or communicate in person, or by mailing encrypted files/messages.

This is more or less a way of saying it's "rather pointless to have secure messengers on iOS". I understand why open source advocates say that, because they've been saying it for 20 years now, but I'm not sure we need to litigate the point or pretend it's some great insight.

Let me put it like this – if OpenWhisperSystems had an explicit toggle in their protocol which, after flipping it, would allow them to access all future communications and where the user was unable to tell whether it had or had not been flipped, nobody would call the protocol "secure" or write a "Trust It" headline about it.

However, if OWS only supports systems on which such a toggle exists via a third-party provider, that somehow makes them secure?

I find this hard to understand. Yes, of course an app which encrypts data against some adversaries is nice, but it should definitely be called "secure-against-some-people", not "secure", and people shouldn’t write "Trust It" but rather "Trust It if you also trust X and Y and Z".

The same can be said about a "trusted" OS like say Qubes OS, with untrusted hardware, like Intel's. Actually, that's what the developers of Qubes OS and other "free" operating systems have said as well.



If anything, I'm more frustrated with the Signal team that the app doesn't have as good call quality/performance as WhatsApp, nor does it have video call support, and that the Chrome desktop "app" doesn't seem to import my phone contacts for some reason - all of which is making me continue to mostly use less secure and less trusted alternatives.

My point is we should aim for getting things "more secure" constantly, and I think we have in the past few years. So rather than just say "what's the point?", we should say "let's put more pressure on X company to open source/prove their system is secure" and hope that in time enough pressure is built that those companies actually agree to do those things.

And since I was talking about putting pressure on companies, let me start:

Where the hell is Google's End-to-End tool? It hasn't had any commits in over half a year, and we already know NSA's bestie, Yahoo, has given up on it. Should we start drawing some conclusions about the Google/NSA relationship, too? Did Google abandon the project?


There - who's next?

> The same can be said about a "trusted" OS like say Qubes OS, with untrusted hardware, like Intel's. Actually, that's what the developers of Qubes OS and other "free" operating systems have said as well.

If you're really paranoid, go for open hardware supported by libreboot [0] or the Talos Workstation [1] and run a hardened "free" OS.

However, I don't think Intel ME (or similar firmware in AMD and ARM) has ever been used to compromise user security and privacy. The threat probably exists and is real but has it ever been exploited? On the other hand, I suspect that there is no lack of zero-days and other vulnerabilities for iOS and Android.

[0] https://libreboot.org/

[1] https://www.crowdsupply.com/raptor-computing-systems/talos-s...

> the Chrome desktop "app" doesn't seem to import my phone contacts for some reason

Have you tried re-importing them manually via the "Import now" button in the Desktop app's settings? Maybe that helps.

Again: this is a point that can be made to sound interesting with lots of extra words, but all you're saying is that people run applications on operating systems you don't like. They're not going to switch.

> all you're saying is that people run applications on operating systems you don't like

No, he's saying that people run applications on fundamentally insecure operating systems.

> They're not going to switch.

That doesn't make them right, nor him wrong.

> They're not going to switch.

Only because there's nothing to switch to. There's just no solid FOSS phone OS at the moment, and, IMO, fixing that is more important than securing messaging systems.

Dissidents in places like Iran have already been attacked through weaknesses in secure messaging systems. No, I think you're on the wrong side of this argument.

And dissidents have been attacked through holes in iOS too.

I generally accept Moxie's / OWS's argument that upstream, patched Android, Google services and spyware/backdoor and all, is in general more secure than running a hodgepodge of FOSS software on a rooted phone - especially for less technically minded users (i.e. almost everyone, if your target market is everyone).

I don't think it follows that a transparent platform running fully open and user-controlled software, perhaps backed by some form of web-of-trust cacert-like CA system can't ever work - and might not be a good idea to have available as a fallback if it turns out that the anti-democratic paramilitary organization you have to fight is one backed by the NSA.

I'm a little surprised how polarized these discussions tend to get - as if two ideas have to be mutually exclusive.

I think I understand OWS reasoning with locking down their network and forcing phone number IDs - I don't really agree - but I understand the reasoning behind it.

It's really on all of us that care about open federated protocols to set up an alternative network, and OWS have even graciously provided source code and a protocol as a starting point - but it's a shame that rather than some email-like model where all systems could federate in a predictable way, we are forced to have three different networks (a hypothetical open-signal, signal and whatsapp).

I guess there's a lot of people that are still sore about Facebook and Google discarding XMPP, and breaking the unification trend that we saw a glimmer of a few years back. Even without federation, I could have one sane XMPP client, with OTR support, and chat both to my non-technical friends on gtalk and facebook - and have encrypted chats over those same servers, or through the federated XMPP network.

Now I have some people in Facebook's silo, some in Google's Hangouts silo, still quite a few on SMS/regular phone service, and a handful on Signal. That's not really the fault of OWS - I actually have a few non-technical contacts I can reach via Signal thanks to their focus on a simple SMS-replacing app. I just still wish I could cut back on the number of clients and have some sane federation.

Don't get me wrong: we absolutely need both.

But fixing the client when the host is still insecure/unknown is just going to move the target. If messages are secure, governments are just going to move to the OS-layer.

So you have two attack vectors, the OS host and the client application; why is it bad to secure the client? It doesn't ADD any attack vectors. What is the point in saying "Let's not secure the client software until the OS is secured"? It isn't like these are the same people working on the problem; Moxie isn't going to suddenly start working on securing iOS if he isn't working on OWS.

I don't disagree with you. This is a multi-pronged problem and we need multi-pronged solutions. I just think the OS is a higher priority than a texting client.

And the counterpoint is? ‘People should give Google/Apple root access on their devices to run this new secure messenger’?

> And the counterpoint is? ‘People should give Google/Apple root access on their devices to run this new secure messenger’?

I feel like you might've misstated your intended point, but in any case:

- Most threat models exclude the situation which you're discussing here because risks are generally low and, in the event of such a threat becoming material, the entity is probably screwed regardless of whether that threat is considered due to the costs of mitigation. (Seriously -- how would a company or person mitigate this short of independently auditing the code for the OS? Or building their own? And what happens after you look at the code? Do you then look at the hardware too? How low would you go? How low would your attackers go, for that matter?)

- If you're the target of attackers who would actually try to gain access to your device through compromising the device maker, you've got bigger problems.

The philosophical argument doesn't really work here because there's no practical solution that anyone can (or would, really) adequately fund.

P.s. just to clarify, I'm not tptacek.

Sorry, I don't understand. What do you mean by "give Apple/Google root access"? I've always assumed they already have that.. if no, how does OWS give them root access?

You can compile Signal yourself and install it on a rooted phone, running presumably a Linux kernel and some Android/AOSP sub-system. In that case, excepting baseband backdoors and a few other details, Google won't have access to your phone at all (assuming no Google services etc. here).

OWS doesn't then allow you to use their servers for routing/discovery etc - so you need to run your own servers, and set up a different network that cannot federate with the one users of the Google Play Appstore version of Signal use.

If you do that, and install e.g. the F-Droid store, you've now given another actor (the F-Droid store) access to your phone. OWS argues that, in general, you're less likely to manage to run a safe, patched system this way.

> You can compile Signal yourself, and install it on a rooted phone [...] OWS doesn't then allow you to use their servers for routing/discovery etc

? That's a misunderstanding. You can of course use the official servers with your self-compiled version. (side note: I also don't think your phone needs to be rooted for this)

Ok, that makes sense. It's only compiled binaries distributed through third party app stores that cannot (should not) use the official servers?

Yeah, they prefer if you don't distribute your builds (i.e. something named Signal and / or using their servers) to other people (because they don't actually know what's inside the builds, they've got no update channel, etc.)

Ad-personam attacks are not useful

Clarification: "...operating systems you don't like" implies that claudius is biased and that his point about OS security is made invalid by that.

That's true, but this isn't one. This is what respectfully disagreeing looks like. Ad hominem would be "No, you're an idiot and people don't care what you think just because you disagree with choices they made". That would have been inappropriate.

The implication was there - that a point was made 'only' because of an OS they didn't like. It's ad hominem. It was pretty far from 'respectfully disagreeing'.

That isn't ad hominem, though. An attack that implies personal qualities isn't the same as an attack on the person.

I think the point is that someone who is choosing an OS that is controlled by a particular company has chosen to trust that company.

It's "ad-hominem", but I agree.

"Ad-personem" and "ad-hominem" are equally accurate in this case, because it wasn't either.

I done goofed. I wasn't even aware what an "ad-personam" was. I found a related question on Quora and that explained it quite well I think.

Quora question: https://www.quora.com/What-are-concrete-examples-of-ad-homin...

It may not be a great insight but it is an important fact to be aware of, especially in the context of something like Signal. It does need to be drilled into our heads again and again that the weakest link breaks the chain. Lest we forget... and we will. Or at least, I know I will.

> This is more or less a way of saying it's "rather pointless to have secure messengers on iOS".

No! That is not at all what is being said. There is no way to use Signal that doesn't give Google or Apple remote code execution privileges in the process.

This means that for people who aren't already exposing themselves to these companies, using Signal is a step down in security.

I think that you are displaying your arrogance. These open source advocates are fighting people's ignorance. People are known to be ignorant about deeper consequences out of convenience, and I do not believe that a very sizeable number of them are given a chance to be better informed.

I'll give you a different example. These are two positive reaction examples for the cashless society:

1) I pay with the card all the time anyway.

2) I do not like coins.

These are naive reactions, considering only personal convenience. If these people are guided to think longer and harder about the issue, then they are able to make a more informed decision.

So put your money where your mouth is and build something that can win in the market. Hectoring people and complaining that open source or free software is judged too harshly accomplishes nothing and benefits no one.

It is pointless to build something when there is no market for it. You can see this, as you call it, hectoring as market generation. If the market is ready, a product will emerge for it.

This process is also made more difficult by arrogant people like you who out of their ignorance or self interest actively work against it.

Let me explain: saying "put your money where your mouth is and build something that can win in the market" is a considerably arrogant position, as it states that an argument is simply wrong just because the current market will probably not sustain it. But it will not sustain it because the market is not informed enough, and it is very difficult to campaign against actors with huge resources on a sea of ignorance.

Besides, I am a simple observer, not one of those I was describing. But I am coming to believe more and more that the basic infrastructure we are using must be open to reclaim the lost trust within society.

> You can see this, as you call it, hectoring as a market generation.

I've been seeing it for well over two decades now. That's more than enough time for a market to emerge, were it ever likely to produce one.

A false binary dilemma, in that the insight is that the statement is true. Both legal cases and excessive patting on the back are interesting to contemplate but irrelevant to the truth or falsehood of the statement.

Obvious arrogance aside, security is not a scalar value.

A centralized service is a convenient stop for the three letter agencies to do their work. Multiple independent implementations of the protocol and interoperability is a much stronger ecosystem. Even if the security of one individual user might not be better.

If you applied the argument to the web instead, it might be tempting to say the security of a single user would improve if Google just ran the whole web, instead of all of these small shops with shoddy security, but very few people would argue that it would improve the reliability and security of the system as a whole.

"Just centralize it" is not some great insight either.

No, I'm not going to let you pretend that we are on opposite sides of a "centralize" versus "decentralize" argument. Find someone else to take the "Google should control the web" side, and debate with them. What you're saying here has nothing to do with what I'm saying.

You replied to a fictitious argument. What I said is that security is not that simple, it matters on your threat model, and things like resilience and platform diversity matter too. Crypto is not the weak link for Signal (nor is it likely to be for comparable products).

What claudius said, in essence, was that a trusted application should not depend on giving remote root to Google, likely referring to not being able to compile and distribute the software in a useful way. That is worth a more meaningful answer. Distribution and the runtime environment are central to any realistic threat model, and reducing that to open source zealotry kind of misses the point.

Crypto has already been the weak link in other "secure" messaging applications.

You can use Signal with MicroG[0] and "checkin" to GCM in order for notifications for queue wakeup to reach your device. GCM is only used for notifying the device there is a message on Signal servers.

You can also disable permissions on the Google Service Framework and use something like XPrivacy for MUCH more explicit permission control (revocation, spoofing, etc...) if you still want GApps on your device.

[0] https://microg.org/

The LibreSignal fork is available on sailfishOS.

Do you personally review every bit of code that runs on your device? No? Then you're trusting someone else who claims it's secure. No different than trusting Apple/Google.

> No different than trusting Apple/Google.

It is different. I expect Apple and Google to insert backdoors deliberately into their operating systems for three letter agencies (it's easy to do it when you've got either a proprietary OS like iOS or a "technically open but practically closed" OS like Android). They've probably done it before and are part of the PRISM program either way.

However, I don't expect the FSF or Linus Torvalds to do it. They haven't done it yet and they probably won't do it.

It's a subjective opinion to trust Linus/the FSF more. On top of that, Linus doesn't review every piece of code that you run. Some random people vet plenty of the code that a distro contains. In addition, I would say it's easier for the NSA to make subtle changes to open source software, sneaking in heartbleed-style vulnerabilities under the guise of new unrelated features/bugfixes.

Serious question: if both endpoints are completely pwned by corporations and governments, what do I gain by having the traffic on the wire be secure? Who is the only OpFor I'm defending against, Sprint? They can barely provide working service and correct billing.

It's a given that secret, juicy electronic stuff always ends up on WikiLeaks, so anything important I discuss live, in person, and never electronically. So if, in a massive delusion of self-importance, everything juicy I do can't show up on WikiLeaks because it's not electronic, regardless of any app I use or don't use, and the only thing I use electronics for is the security equivalent of "don't forget to buy a quart of milk at the store on the way home", then how does encrypting my quart-of-milk purchase help me? Is there any reason not to take it as a given that any juicy electronic stuff ends up on WikiLeaks regardless of this app?

Realize that if I wanted to keep my visit to the supermarket a secret using this app, I can't. Facebook and Google sell my GPS data. Tomorrow Google Rewards will send me a survey asking what I thought of my visit to the store. The store sniffs wifi MACs, bluetooth data and camera data to track my every move; that free internet for customers isn't entirely free. Not to mention I'm probably on 50 camera recordings. And the phone company knows where I am, every step of the way, for supposed 911 purposes. And my credit card is matched against my receipt purchase data to data-mine the hell out of my milk purchase. But I'm supposed to feel perfectly private and secure because Sprint can't read the contents of my wife's shopping list. Uh huh.

If you keep things super nebulous and don't think too hard, it seems I'd be protecting myself against someone, and protecting is always good, and there's always a someone to fear, so obviously it must be awesome. But analysis shows there's not a problem: I'm not defended against any important forces, only against a single weak and unimportant one, and I'm wide open to absolutely everyone else.

You may be abnormal, but most people text sexually explicit comments they'd prefer not be in either corporate or government databases. (And a ton of other highly private, perfectly innocuous material.)

There's no reason people should be confined to only discussing those topics with people in physical proximity, and encrypted IM apps perfectly fit that use case.

There's also a legal and technical distinction between the NSA (or phone company) reading plaintext on the wire and actively compromising a device. Your comment facetiously ignores that.

Ed: I think of it like locking my front door. My deadbolt won't stop the government getting in, but it establishes (for legal reasons) that I had taken steps to ensure privacy and it raises the chance they leave signs of entry, rather than being covert.

It's not that I'm trying to hide things from the government -- they could just ask me anything they wanted to know. I just want them to have to ask, not just covertly take whatever they want.

I admit your front door deadbolt analogy is a very persuasive argument, however this is being marketed as a technically perfect nuclear material / army weapons locker grade bank vault door that solves all security problems when installed and used. Which might be correct.

However, I will extend your admittedly excellent analogy: unfortunately, this probably high-quality piece of security hardware is installed in a garden shed that has easily breakable windows with no curtains, at least a couple of unlocked back doors, and an unknown number of (old-fashioned electronic) bugs installed. Meanwhile the news is full, every day, of stories of garden sheds being broken into and people's secrets ending up on the front page, or at least the front page of WikiLeaks. Yet the marketing spiel is something like "once you install this really nice door, that's all you need to be completely secure, and you can feel comfy doing anything that needs to be private or is illegal".

"There's no reason people should be confined to only discussing those topics with people in physical proximity"

Pragmatically, sure there is: it's that they don't want it made public. Extremely optimistically, all you need to do is install this really top-quality bank vault door on your garden shed, then ...

I don't disagree that security talk could be better about threat models, total evaluation, etc.

But in this case, we also have to imagine that the bank-grade vault door to the shed costs about the same as a regular door.

While I agree the marketing is nonsense (you need lots of other secure features too!), there's absolutely no reason people shouldn't a) start locking their doors, since most current "robberies" are people walking in the front door unchallenged, and b) use the high-security door, because the cost is the same as a regular one while the benefits are strictly greater.

I agree that Signal needs to tone down the complete-security language, but I think too many security professionals scare people out of making improvements by talking about how there are still compromises. There are lots of middle-ground social goals, like reaching a level of security that makes bulk collection untenable but leaves targeted attacks open. It's the digital equivalent of closing your blinds in a locked house -- the government can still get in if they have reason, but they can't see in when just wandering by on the street. And they can't pretend their intent wasn't to violate your privacy by entering, since the low technical barriers still require active bypass.

The vault door may not keep your shed from being robbed, after all there's a ton of easy-to-kick-in windows, but forcing it to be B&E instead of a walk-in is meaningful. The law might be ambiguous about walk-ins, but is clear about B&E. (I'd argue the other "unlocked" doors are really just getting keys from the landlord, which is a separate problem.)

"we also have to imagine that the bank-grade vault door to the shed costs about the same as a regular door."

I admit defeat. Two extremely strong back to back arguments, both very persuasive and well written.

I see we have common ground on the toning down the security language. That specific aspect of the issue triggered me a bit into a general, eventually proven somewhat wrong, rant.

Have a pleasant day!

There are a lot of people that will be pretty happy if they can reduce their list of (communication) adversaries down to Open Whisper Systems, Apple and the US Government.

You take it as a given that everything will end up on Wikileaks, but I wonder if putting some of their day to day procedural discussion on a secure, ephemeral messaging system would have kept some egg off of faces at the DNC.

When I first looked at this, it boggled my mind that they require a phone number to sign up. Maybe that's no longer the case? But assuming it is, it just struck me as the epitome of breaking away from the concept of secure anonymity.

You are absolutely right! Plus, getting a copy of all your contacts is an invasion of privacy for an app that is advocating privacy and security. You can't even use an online VoIP phone number for this app. It's just such a turn-off, and I'm extremely disappointed with endorsements from people like Snowden ignoring such fundamental flaws.

> You can't even use an online VoIP phone number for this app.

That's incorrect. Signal works perfectly fine with any mobile, landline or VoIP number.

In fact, what you just claimed is wrong. The app waits for the phone to receive the text, and there is no way to enter a verification code you receive in another VoIP app. This is on top of not being able to use the app on multiple devices.

Let the SMS verification expire and do a phone call verification.

Concerning multiple devices: I use Signal on my phone and on two desktops, works perfectly fine.

I have tried that and it didn't work, but I'll give it one more try. Regarding multiple devices, I mean multiple mobile devices (iOS and Android apps), not through a browser extension.

> Plus, getting a copy of all your contacts is an invasion of privacy for an app that is advocating privacy and security.

You really need to do research into this before making this claim. The phone numbers are hashed (or something like it) before being sent to the Signal servers.

Unfortunately, hashing provides no meaningful protection here. The preimage space (i.e. the set of all possible phone numbers) is just too small. See https://whispersystems.org/blog/contact-discovery/
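To see why the small preimage space defeats hashing here, consider this sketch (the phone number, prefix, and helper names are illustrative, not Signal's actual scheme, which the linked contact-discovery post describes): there are only about 10^10 ten-digit numbers, so an attacker can simply enumerate candidates until one hashes to the stored value.

```python
import hashlib


def hash_number(number: str) -> str:
    """Hash a phone number the naive way a 'private' contact upload might."""
    return hashlib.sha256(number.encode()).hexdigest()


# Suppose a server stores this "protected" contact entry.
leaked_hash = hash_number("+15550003042")


def brute_force(target_hash: str, prefix: str = "+1555000", digits: int = 4):
    """Enumerate the (tiny, for demo purposes) space of plausible numbers.

    A real attacker would sweep all valid number ranges; even the full
    10-digit space is feasible on commodity hardware.
    """
    for i in range(10 ** digits):
        candidate = f"{prefix}{i:0{digits}d}"
        if hash_number(candidate) == target_hash:
            return candidate
    return None


recovered = brute_force(leaked_hash)
print(recovered)
```

The lookup succeeds almost instantly, which is why hashing phone numbers is closer to obfuscation than to protection.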

Hm, Signal was never about anonymity; it's just an end-to-end encrypted chat and marketed as such.

I thought the system was supposed to provide encryption to known communication, not anonymity?

If that's the case, choosing to use an existing identity management system most people are part of is an excellent bootstrapping decision. Identity management is hard, and it's a fair problem to punt on if it's not your focus.

Encryption != anonymity. (In fact, it usually does the opposite, because encrypted messages stand out.)

Currently I'm primarily on Telegram. Loving it: using the bots, the built-in gifs, the stickers and the cloud storage. I don't love the encryption.

But my next switch will be to a self-hosted/federated solution such as https://riot.im/, at least for me and my wife. I will not be able to make friends use and trust my server (nor should they, for their private messages), and they won't run their own.

Signal looks nice but man the Telegram desktop client on Linux is also very very convenient, plus the fact that you don't need your phone to be on.

Signal-Desktop doesn't need your phone to be on, either. It's not the nicest desktop client (and some people dislike that it's implemented as a Chrome/chromium App), but unlike WhatsApp Web it's independent of the phone.

Leaving the crypto protocol aside, it's also important to know that only "Secret Chats" use end-to-end encryption.[0]

[0] https://telegram.org/faq#q-so-how-do-you-encrypt-data

That article is complete bullshit. I sent them a big email pointing out mistake after mistake and of course never got any response nor did they publish anything to correct their "journalism".

There are legitimate concerns with Telegram that I share with all other technically inclined people, but this article is like saying "use a firewalled Windows 95 instead" because something is wrong with Vista. You really should disregard everything they say because the true parts are too interwoven with fabrications.

Sure, I didn't look for an amazing article, just remember hearing about their homebrew "crypto" years ago and going eww.

This might be slightly better:


You should provide some specific counterpoints and sources to back said points up, because rolling their own encryption is by itself reason enough not to use them. Defaulting encryption to off is just icing on the cake.

Let's see, what does the article say?

- Telegram isn't end-to-end encrypted per default: true

- Telegram uses homebrew encryption (MTproto): true

- Telegram presence information, at one point, could be read by someone who isn't your contact: true

It is as insecure and secure as Signal is; that is, against some actors, i.e. some governments and corporations, it is secure, while against others it is not.

Eh, Telegram looks more insecure [1] AFAICT. Sure, Signal will not stand up to high threat models due to hardware failings (backdoored baseband RTOS, missing dual backup RTC batteries for secure boot) and software failings (insecure Android), about the only thing designed to be attacked like that is a terminal like a Pax S80 or a Verifone VX520 and they do a bunch of fun stuff with magnetic tamper sensors, conductive wires that tear on opening, heat sensors, etc.

Then again, that isn't the threat model Signal is designed for, and if you are looking to protect against that type of threat model you could totally design hardware to support it for not too much (see Blackphone) and throw Replicant on there (with some development of course). Then, if it were Signal vs Telegram, the latter would likely be your security weakness.

[1] - http://security.stackexchange.com/questions/130559/is-telegr...

Not only is Telegram far less secure --- even when its opt-in(!) end-to-end messaging is enabled --- than Signal, but Reuters has reported that Iran has exploited some of its faults to hijack accounts and track activists. Don't use Telegram.


You should be embarrassed to be repeating these falsehoods on this thread.

Your falsehood claims are no better, Joe. Telegram puts its money where its claims are. And you cite Reuters as a reputable source. Just wow.

Yeah, using GCM to wake up the device is totally worse than sending 99% of messages without end-to-end encryption.

What's wrong with Reuters? Would you prefer RT?

So Iran can claim the 300k award for breaking Telegram's crypto, can it?

That reward is bullshit. The requirements are so strict that no one could win it. If they ran that same challenge in Iran, Iran would win the challenge and the reward.

How does it relate to the 300k contest I am talking about?

Signal is providing usage data to Google; it is not anonymous, and the messages sent are available for Google to decrypt at will.

Telegram is open to the Russian version of the NSA, and to the company that made it.

For me and you both, it doesn't matter, unless you work for the NSA.

I didn't mean that the messages are available to decrypt, that messages would hit GCM, or that the encryption methods are flawed, or anything like that.

More like, Google has root on your phone so it can steal your private keys any time.

Your friends can just trust the default server when using riot.im. That's what they do on Signal, Telegram, WhatsApp, or any other chat app.

I am trying to switch to Riot (Matrix) as well, especially now that they have e2e crypto.

I don't understand how a company based in the US, and one that requires a phone number, can make any kind of claims about security or privacy without being looked on as a honeypot until it redeems itself with evidence to the contrary.

Why would a privacy-centric protocol choose to use a phone number, which directly connects a user to their identity? How can this make sense?

There is enough evidence that most US-based companies are in bed with the NSA, compromised, or can be easily compromised.

Companies or open source projects can be bullied and threatened by government officials, legally forced to give up their users, gagged and forced to betray users, co-opted, infiltrated, or compromised. Lavabit has already happened.

Why do we need encryption, security, or privacy? If it is exclusively against state actors, then we know it's a serious challenge against extremely powerful, well-resourced, and legally empowered adversaries, and illusions of privacy, hand-flailing 'something is better than nothing', and half-baked measures won't do.

It's reasonable then to expect any solution claiming security or privacy in this context to explicitly spell out how they address or plan to address these threat models. The alternative is acting in bad faith and making users vulnerable.

Since I'm very happy with the usability of the WIRE messenger I would appreciate if someone would do a formal security analysis on their (modified) axolotl protocol.

Maybe I'm too grumpy today, but to me the gist of the review's summary is not 'trust it' but rather 'the protocol is new and overly complex, and the security goals have not been stated clearly', with the addition that no major error was found.

Yeah, pretty much. It's good enough that WhatsApp, Allo, and many others paid chump change to license it ($1 million each, I heard), since it's cheaper and better vetted than anything they'd develop internally.

I think that this is relevant https://news.ycombinator.com/item?id=12880520

I would recommend reading the article first and then following the protectionism in the comments afterwards.

The main claim of the article is that we need federation, as we have with email (but IMO we are losing it).

In addition, Signal shares a problem with email - information about your communication circle is not secure.

This is a classic example of making the perfect the enemy of the good.

I disagree. These are valid points and dismissing them based on ignorance would be wrong.

Instead we should discuss why these are not implemented and how we could proceed to implement them.

That's not what the article you cite is saying. It's saying "don't use Signal because Signal is less than perfect". Calling the counterarguments in that thread "protectionism" suggests you feel likewise.

If you want to discuss how a follow-on from Signal could and maybe should address some or all of those points, great! That's a conversation worth having. But it doesn't sound from your toplevel comment as though it is the conversation you're trying to start.

> It's saying "don't use Signal because Signal is less than perfect".

This is a mischaracterisation. The article says "I won't recommend the use of Signal" and gives reasons and desired improvements.

I call it protectionism because it tries to dismiss the issues instead of discussing them.

I think that you are misinterpreting the article's intention. It is more complex than a single statement. I believe that the author wants Signal to change in a certain direction. That is hard to do when the userbase is growing regardless and people are not aware of the issues.

That link has in fact nothing to do with this paper.

Yes, but it is relevant (also these were my first words btw).

Not only is it not relevant, but your summary of the thread and of Signal's security model is inaccurate.

I do not know; perhaps your reply is addressed to some other comment, as I did not mention Signal's security model and did not give a summary of the linked thread?

This is relevant to practically using Signal, but not actually to its security.

Yes, but what counts is practical security. I think we should keep this in mind.

For example, someone may get the false impression that because a messenger uses the Signal protocol, it is secure (and point to this paper), when in fact its implementation makes it considerably less secure.

Installed Signal, wanted to use it. First step is to connect with my phone number and there is no other way to create an account. This is an unfortunate privacy-blind choice for what otherwise could be a great platform.

It's probably somebody from the "Usernames are bad UX, and if we have bad UX nobody will use the messenger, and then we don't actually get a secure messenger because we don't get a messenger that anybody uses at all" camp.

You can still register with some landline or VoIP number if you want.

Ugh. The paper isn't from "the International Association for Cryptologic Research"; the IACR ePrint archive is simply a site that hosts academic crypto papers. The paper is fine, but probably disregard the article.

And what article would that be? The link goes directly to the PDF...

The submission has been updated. It originally pointed to an article on The Register.

Ah I was not aware of that, thank you!

I can't help but feel that the word "normies" is somewhat insulting. Although I'd never read it before, it sounds very condescending.

We detached this subthread from https://news.ycombinator.com/item?id=12899926 and marked it off-topic.

That's a bit harsh. I didn't mean we should go full PC on it. I just wanted to underline that the tone was unexpected.

It has origin in 4chan, like a lot of memes, and is at least as insulting to its user as to its object - being in essence an insult directed at functional adults by caricatures of NEETs, the humor coming from the fact that the latter really aren't in a position to look down on the former.

Of course, as with any hot meme, it gets a semantic makeover in the course of widening adoption, but if you're looking for an answer to the question of whether someone who uses "normie" unironically, especially in a context outside Reddit or a *chan, merits taking seriously...well, at the very least, I think it's reasonable to interpret that at least as a strong clue that the user isn't in the habit of thoroughly considering his utterances before emitting them.

Ah, ok. In the context of 4chan or memes, it's not a problem; you usually like them because of the political incorrectness. It just seems out of place on HN, where people tend to be overly analytic and rarely call people names.

In a conversation like this I read it as shorthand for "average non-technical user" that's less directly insulting than "lusers". I'd probably just say "normal user" or "average user" myself, but I don't find it name-calling in context.

It is out of place on HN.

It is moderately condescending, but it is very handy to describe average technical competence and what to expect from said humans.

Things need to be straightforward & familiar for normies (e.g. Pokémon Go vs. Ingress), and you can't expect to hold a high-level conversation with a good chunk of them, whether that be about mathematics, policy (so much circular logic), reality (citations & sources are not a thing many normies are willing to use), etc.

That being said, it's not all bad; just set expectations accordingly, just like you would going on HN. I do not expect the average HN reader to understand much about traffic dynamics and the minimal efficiency gains that may come with self-driving cars, or the sheer volume of people a moderately sized light rail network can move in a timely manner, so I set my expectations very low.

It's like talking to a Microsoftie about rail or self-driving cars: there is a lack of knowledge (the fact that Amtrak runs trains from Vancouver to Seattle to Portland and is paying BNSF to make the route more reliable) and a conceptual barrier that I do not expect them to rapidly grasp (bullet trains need dead-straight rights of way, no exceptions).

Edit: Apparently I can't reply to you. Nevertheless, I picked up "normie" as a term in meatspace, and while it might not meet your sensibilities, I do not see a more accurate term, and I'm not here to intimately know & defend your sensibilities. Same goes for asking me or telling me your pronoun: great for you, I give not a single shit, use what you want and cut to the chase.

I agree with the throwaway. You should drop it in favor of "average person" or something else that's neutral. People seeing us insult them will only hurt adoption. Plus, many of these people who don't know much about computers are smart in other fields or have other talents. We aren't all supposed to have the same strengths. So it's doubly insulting when it's an intelligent, but non-technical, user we're talking about.

I don't know about the rest of you, but I'm here for the insights. Trying to court everyone just waters down the comments until HN is indistinguishable from reddit. I prefer apparent condescension and a thick skin over PC half-conversations and watered-down intelligence.

You can have the insights without the insults. They add literally nothing, except to degrade the conversation until HN is indistinguishable from 4chan.

I agree. I didn't mean to start a PC war, just noticed that the term seemed out of place.

The expectation I set when going on HN is that my fellow commenters will have better sense, both socially and semiotically, than to use /r9k/ memes like "normie". I doubt I am alone in this.


> automatic software updates without user consent

Do not include that in your list of complaints. That is a security feature, not a privacy deficit, but only if done right.


TL;DR you want automatic updates

There are other update mechanisms available that don't let bad actors ship out malicious updates to compromise specific users.

Did you even read the article I linked?

Yes, and can you understand the concept of a silent malicious update being sent to specific individuals signed by the proper key?

That was addressed by the third section, under "targeted attacks":


And yet Signal doesn't have the capability to do what the article describes in that section.

That sounds like a really good technical discussion to have with the OWS team.



> So you admit that Signal is insecure by design.

No, I would not say it like that.

Security isn't TRUE/FALSE. Signal is more secure than other products like Telegram. There are a lot of things it could add to increase its security, but it's pretty damn good, and the fact that it doesn't do everything that would make it better doesn't change that.

> All it seems like you're trying to do is distract from these glaring issues.

I take issue with listing "automatic software updates without user consent" as a list item in criticisms about Signal because "automatic software updates without user interaction" are a damn good idea for the threat models that take most computer crime into account. Even the paranoid (I don't use this term lightly) models can be mitigated by a well implemented secure update infrastructure.

What would you rather have?

  - Activists being pwned by 1day vulnerabilities
  - The patch being applied automatically as soon as it's available

This is a criticism of words you said, not some attempt to distract from "these glaring issues".

WordPress, which powers 26% of websites on the Internet, doesn't even cryptographically sign its updates. If you pwn their update server, you've got a backdoor into millions of websites. The Mirai botnet? Child's play in comparison.

That's a glaring issue.
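The missing check is easy to illustrate. Below is a minimal, hypothetical sketch (not WordPress's actual update code) of verifying a downloaded update against a digest published out-of-band; real update signing would use a public-key scheme such as Ed25519, so that a compromised mirror can't forge both the file and its digest:

```python
import hashlib
import hmac

def verify_update(update_bytes: bytes, expected_sha256_hex: str) -> bool:
    """Accept an update only if it matches the pinned digest."""
    digest = hashlib.sha256(update_bytes).hexdigest()
    # Constant-time comparison, so an attacker can't probe the digest byte by byte.
    return hmac.compare_digest(digest, expected_sha256_hex)

# Hypothetical update blob and the digest the publisher announced for it.
update = b"plugin-code-v1.2.3"
pinned = hashlib.sha256(update).hexdigest()

assert verify_update(update, pinned)              # genuine update passes
assert not verify_update(b"backdoored!", pinned)  # tampered update is rejected
```

Without even this much, whoever controls the update server (or the network path to it) ships whatever code they like.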

> If OWS was legitimately interested in security and anonymity, they wouldn't be including proprietary components and shutting down other open source projects that take matters into their own hands.

So says your ideology.

If OWS wasn't legitimately interested in security and anonymity, why would they publish their protocols as open specifications that anyone can use to develop their own protocols and apps?

Go on, take their papers and build an app that doesn't do all the things you disagree with.

Fork their project (It's GPL; you can fork it!), remove all Signal trademarks and branding, and release your own GPL app that doesn't rely on proprietary components. Make your app/protocol federated. If you do these things, there's literally nothing OWS can do to "shut down" your project.

Don't bother worrying about integrating with Signal users or using Signal servers. Do a better job and convince people to use your fork instead.

What's stopping you?

This comment is a mixture of falsehoods ("sending messages over Google servers") and out-of-bounds attacks on the integrity of HN users ('moxie did not build a "honey pot").

Thankfully: you cannot comment like this on HN. If you want to criticize Signal, you can do that, but you can't do it this way.

No, you have factually incorrect information. OWS shut down this fork because it removed Google Cloud Messaging:


In what way is it out of bounds to point out deeply suspicious behavior?

To say "OWS shut down this fork" is a rather drastic distortion of the truth. The project was simply not entitled to running on OWS servers, or of re-using the Signal trademark, both of which are perfectly understandable.

And if you look at the repo that you linked, this is called out right at the top of the README:

In the same thread, Marlinspike said that he would be willing to consider "a clean, well written, and well tested" pull request that would add WebSocket support to the Android version of Signal. This would effectively eliminate Signal's dependency on GCM and thus allow the official Signal app to function on custom Android ROMs that do not include Google Play Services. There is now a bounty on making this pull request, and the person/team making the pull request would also receive whatever the BitHub payout would be at the time.


To be clear, Signal relying on GCM is only an issue for people who use a custom Android ROM without Google Play Services. For the vast majority of people who do have Google Play on their phone, this issue is completely irrelevant. Signal is designed so that GCM is only used for a wakeup event and never sees any of the encrypted Signal messages.
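To make the distinction concrete, here is a hypothetical sketch (illustrative only, not Signal's actual payload format) of the kind of content-free wakeup push described above: the GCM message carries no sender, recipient, or ciphertext, only the instruction to wake up and fetch from the Signal server.

```python
import json

def make_wakeup_push(registration_token: str) -> str:
    """Build a content-free push: it wakes the app, nothing more."""
    payload = {
        "to": registration_token,  # device address Google already knows
        "priority": "high",
        "data": {},                # empty: no message content rides on GCM
    }
    return json.dumps(payload)

push = json.loads(make_wakeup_push("device-token-123"))
assert push["data"] == {}  # nothing about the Signal message is included
```

On this sketch, all Google could observe is that some device was woken at some time, which is exactly the metadata question being argued over in this thread.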

Moxie has explained this over and over again, and comments like this continue to pretend that he hasn't, while re-raising the falsehood that Google sees Signal messages. This is tendentious trolling, not good-faith conversation and debate. Which is why your last comment was flagged off the site.

You're mischaracterizing my claims. GCM allows Google to extract metadata about who you are messaging and when you are messaging them. They then have the capability to reconstruct your social network graph and select you for further targeting via silent updates if needed.

Assuming you are in fact the other newly created account with the random-looking username above, then you explicitly stated that Signal "forces you to send messages over Google servers".

This is straightforwardly false: the user does not send anything over Google servers in the course of using Signal. Signal sends an empty push notification over GCM to users who have received a message. While it is technically true that these notifications pass through Google servers, calling empty push notifications "messages over Google servers" in a discussion of a messaging platform invites the false reading that the "messages" going over Google servers contain any information about the messages being sent over the messaging platform.

Additionally, the use of GCM only allows Google to extract metadata about when a user receives messages; GCM messages aren't necessary to notify you when you've sent a message, so I'm not sure how you propose that Google can infer when a user sends a message and who that user is messaging. Could you elaborate on this?

No, it does not. GCM is used to wake the app, and for no other purpose. No message metadata is sent over it.

And why do you suppose that they're unable to record the wake events? They don't need metadata from Signal: they can make their own, which is precisely what I've been trying to say this entire time. That's enough for them to reconstruct your social graph.

Signal is seemingly built around providing the image of anonymity and security, but has glaring design flaws which negate these.

I'm sorry, I seem to have given the impression that I doubt your ability to play six-degrees-of-Google-breaking-Signal. I do not doubt that. You've ably demonstrated your ability.


Please stop commenting using a collection of throwaway accounts like this (see https://news.ycombinator.com/item?id=12529147).

As if working with Facebook wasn't enough of a red flag.
