I have been using Signal on a daily basis for some time now and I haven't had real usability issues. We cannot hold OWS responsible for the insecurity of our operating systems, the nature of today's cloud or hardware infrastructure, the choices we make for comfort reasons, and whatnot. What they do is provide us with, and I think most of us would actually agree, a secure messenger that is both free of charge and best of breed, or close to it. And as with every open source project, it's their project; you're free to fork it and provide us with something better if you don't agree with their choices.
So if you need a secure messenger now, because you need or want privacy, Signal is an excellent option, free of charge, open source. What are we actually complaining about?
I don't know Moxie, but it seems that he's actually open to suggestions and offers if you're willing to provide some manpower as well. He has his convictions, but he still offers to discuss them constructively. Again, what is there to complain about?
As for myself, I contact support if I have a question or issue, and they have been very helpful. I donated to the project, also because they are supported by http://freedom.press, and I value a free press. And even though I am absolutely not interested in the giphy thing (I'm on iOS so I haven't really seen it yet), I'll open an issue on GitHub if I want Signal to change. And I invite everybody to support the project in this way, and make sure that the projects that are actually supporting our interests don't get abandoned in favour of comfortable-to-use data-hogs like Facebook, WhatsApp or Telegram.
Think about what is at stake. Many people working in politics, law, human rights, and other areas absolutely need a way to communicate securely, especially when their causes don't align with the interests of those in power. If you're wondering why people are so passionate about having a more secure platform, that's why. In this case, it's important to be forgiving if people seem hostile or overly critical.
Currently I'm backing off, trying to reduce my critical comments. It's a great project, and it probably should be more widely adopted.
For me? I feel this is a weird uncanny valley effect for IM solutions. It's not ugly as shit, but it's also not quite there yet, and that leads me to lash out at Signal every now and then (keywords for me: lacking federation, phone number is a crappy _mandatory_ identity).
Now, that's really not fair. I don't complain about WhatsApp as much as I complain about Signal. It's a double standard and I understand that - but can't help myself here.
As it stands right now, Signal is used exclusively by the moderately technically inclined, with a little over 1 million users. In a perfect world, it would be bigger than WhatsApp, which uses libsignal but has many metadata-related issues, and also misses many older demographics in the US. Additionally, server reliability has been something like 710 out of 720 hours in a typical month, with an outage just this last Saturday from 19:51 to 22:08 PST for everything but ZRTP calls.
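For what it's worth, that uptime figure is easy to put in perspective. A quick back-of-the-envelope calculation (the 710-of-720-hours number is my own rough observation, not any official OWS figure):

```python
# Availability math for the rough ~710/720 hours-per-month figure quoted above.
# Inputs are informal observations, not official Signal/OWS numbers.
hours_in_month = 720
hours_up = 710

availability = hours_up / hours_in_month          # fraction of the month up
downtime_hours = hours_in_month - hours_up        # hours of outage per month

print(f"availability: {availability:.2%}")        # ~98.61%, i.e. barely "two nines"
print(f"downtime: {downtime_hours} hours/month")  # 10 hours/month

# For comparison, "three nines" (99.9%) allows only ~0.72 hours/month of downtime:
three_nines_budget = hours_in_month * (1 - 0.999)
print(f"99.9% budget: {three_nines_budget:.2f} hours/month")
```

So even taking my anecdotal numbers at face value, that's roughly 98.6% availability, an order of magnitude more downtime than the 99.9%+ people expect from a mainstream messaging service.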
I know Moxie will likely never allow interoperability with his servers after the cluster that was CyanogenMod WhisperPush interop, but we need something to allow for self-hosting or alternative hosting. Signal's servers are not bulletproof, and a local server in remote areas can be invaluable. Essentially, XMPP with Conversations is all we have in this arena right now.
I appreciate the sentiment for what you're saying, but our direct user growth has surprised even us (again, not sure where you're getting your numbers), and Signal Protocol is now on over two billion devices.
If your major concerns are reliability and user growth, I think federated protocols are likely to exacerbate rather than improve those conditions -- as we've seen with XMPP historically. However, I would love it if you proved me wrong. Signal can be deployed in a federated environment today, let me know if you need any help setting it up.
Regarding federation for Signal, what are you looking for in terms of technical competence and skills? I'm already intimately familiar with running servers, VOIP, SMS delivery, etc., so I've got a good grip on things, and I'd like to tie our existing PBXes in with Signal so that ZRTP calls go straight to endpoints. Perhaps we can force the issue with Grandstream and get them to add ZRTP support on-device for their GXP2140 & GXP2170 phones.
If you can further narrow that to active users per $TIMEPERIOD (for example, MAU), that would be great.
For example some time ago when Signal was still called TextSecure, we converted all our group chats for a friend group of technically minded people to TextSecure. We all agreed that encryption is important and worth some minor discomforts. We switched back to Facebook (which we all dislike) within a month because TextSecure didn't have consistent message ordering in group chats, making some conversations impossible to understand for those unfortunate enough to receive messages in a bad order.
So while I'm able to get people to try it out or use it for "sensitive" stuff, these issues really hurt usage and I'll just end up reverting to SMS once a day at least.
Edit: I don't want to sound like I'm complaining too much. I love Signal and try to convert people. It's just hard when the basics mess up. And it's also frustrating because on the surface some of this stuff doesn't make sense.
It would make extended convos with those in my life who insist on ZRTP a lot more convenient.
My messages go into the ether without any indication that they are not being received.
This might not be a deal-breaker for some, maybe even a very nice feature for some, but for me it made me go back to other services.
Github issue: https://github.com/WhisperSystems/Signal-iOS/issues/967
I really wish they hadn't axed it; it was a lot easier to use than doing an adb backup of the app & data.
Looking at that GitHub issue, Moxie appears to have popped in for half a second out of annoyance, and then the main repo maintainer michaelkirk locked the issue and limited it to contributors only.
The thing with locking issues is that Signal issues get so much noise that the content will drown in it. I dislike not being able to add on-topic comments, too, but I understand how they got where they are.
On Android, the client can export plaintext messages to storage, or you can use something like Titanium Backup, which wraps up the app installer AND data directories in a tarball and can restore it across many different versions of Android.
There's also Matrix (matrix.org). It took me about half an hour to set up a homeserver; it has federation and TLS for now, and full end-to-end encryption is already in beta (since I'm using my own homeserver, I'm fine with just TLS). Riot, the biggest client, is easy to set up and use. My wife is rather opposed to new things and isn't amazing with technology, but she's happy enough with it.
Conversations, on the other hand, was a pain to set up self-hosted (more Prosody/eJabberd's fault, though) and had silent message send/receive failures.
For me the missing link is a good, native Linux client. Signal would have "got" me the last time I was shopping around for a communication app if it had that.
Then again, I probably represent maybe 0.01% of the audience you are trying to reach...
I read this frequently, but I'm still not sure how this is supposed to work - would you care to explain?
Comparatively, as it stands now Google is getting a bunch of metadata on Signal users, such as when messages are sent and received and from which device, IP addresses, OS info, etc.
Here it seems that you're defining "metadata" as your IP address (and OS?). That's kind of a non-standard definition of "metadata" in this space -- most people approach the topic more concerned about who is communicating with who.
Email is federated, and I run my own mail server, but almost every single email I send or receive has GMail at the other end of it -- so running my own server does not provide me with any meaningful metadata protection, even though it is a federated protocol. The idea that everyone in the world is going to run their own mail server (or messaging server, or whatever) has not borne out in practice, even in environments that natively support federation.
I think serious metadata protection is going to require new protocols and new techniques, so we're much more likely to see major progress in centralized rather than distributed environments (in the same way that Signal Protocol is now on over two billion devices, but we're unlikely to ever see even basic large scale email end to end encryption).
If all you want to do is hide your IP address, it sounds like you should just use Tor or a VPN.
> Comparatively, as it stands now Google is getting a bunch of metadata on Signal users, such as when messages are sent and received and from which device, IP addresses, OS info, etc.
This is not true. You're referring to GCM? The only thing GCM does is wake up a device to connect to the Signal server when the app is running in the background, nothing is actually transmitted over GCM.
I consider myself an educated Signal user and I had no idea about that. Preach it, shout it, this is great for everyone who thought "GCM == messages"!
If that were the case, then Conversations would be bigger than Signal, but it just isn't.
Improving server reliability is a separate thing.
XMPP itself is huge; GCM is essentially just a frontend for it. Conversations is smaller than Signal in part because Signal has been around since 2011 while Conversations was created in 2014, and also because Signal gained many high-profile endorsements, from Snowden to Hillary. Still, Signal is microscopic compared to WhatsApp.
Maybe this isn't too bad. How does modern XMPP compare with Signal's protocol?
Since Signal refuses to learn from XMPP (especially the federated part), maybe the solution is to make XMPP learn from Signal. Then, Signal's failure to become federated would have been an important step to make XMPP finally useful for secure messaging.
Conversations is essentially Signal but for XMPP; the problem is that it doesn't blend in normal SMS and additional value-adds to sweeten the pot for normies and keep them using it.
A few examples of excessive permissions:
* Disable your screen lock
* Location permissions
* Set wallpaper ("kitchen sink" feature here)
* External storage (why not use internal "app only" storage?)
* System log data
Android's sandbox will do its best to protect the user from compromised applications, but it can't do anything to protect you if the application already has full permissions. Based on recent events, I would assume that Signal users are at risk of targeting as a block -- their desire for privacy makes them interesting targets from an intelligence and LEO perspective.
Many successful applications follow a plugin model, where intrusive permissions are split off into separate, optional applications. Signal should do the same.
I believe this is for the "call screen" (though it's never really worked for me; my phone locks or stays locked anyway when I call someone or someone calls me).
> * External storage (why not use internal "app only" storage?)
Because otherwise you can't extract photos you received (let alone view videos), which from a security standpoint is good, but it's something you'll have a hard time selling to the average user. The same goes for sharing your location.
Example of how the UX for this works:
1. User chooses "attach photo."
2. If user has not installed the plugin, Signal gives them an informational prompt and a button that opens the app store link.
3. User clicks the button to go to the app store.
4. User clicks "install." Application is installed (should be quick, small app)
5. User can now attach photos from external storage.
Steps 2-4 are short and occur only one time. You would not want this kind of extra friction in a true mass market app, but I'd argue that Signal is not and never will be mass-market. (We can hope, but it's not likely.) Signal's target users, on the other hand, would be likely to appreciate this extra focus on security and user control.
2. The user hasn't installed the plugin. Who does that? So signal doesn't work.
3. The user sends their contact a facebook message saying "signal isn't working" and attaches the photo.
4. Both users uninstall signal and tell everyone who mentions it that it can't even send picture messages.
> 2. The user hasn't installed the plugin. Who does that? So signal doesn't work.
"when the user sees that it doesn't work out of the box, and sends you to an app store instead, they consider it broken."
Also, I did acknowledge that this approach will turn away "mass market" users, but again, I don't think that those users will ever be Signal's primary user base. Most people are going to use stock apps or whatever is most heavily marketed (read: whoever spends the most dollars on acquiring users). Signal frankly can't afford to buy its way into the mass market. It's a niche app, and it should focus on catering to that niche.
Uhm, do you have examples to support your claim? None of the truly successful applications I know do this, as it's rather inconvenient from the user's perspective.
All in all, I think your issue is with the Android permission model, not Signal, though you do suggest workarounds by which Signal could improve the situation a little.
A few other applications that use plugins:
* ES File Explorer
* FB Reader
And there are others. Most of these don't do this for the explicit purpose of permissions management (Automate may be the only one), but there's no reason that security can't be the primary motivation for a plugin system.
There's another bonus to this approach: a plugin architecture allows you to add controversial features without forcing them on your user base. Don't like a new feature? Don't install it.
Yes, a plugin architecture adds complexity. However, Android's intent system is built to simplify this kind of design, so it's not like you have to build it all from scratch.
I was alarmed at the Location permissions at first too, but I believe it's only used so that you can send someone your GPS location. If you simply deny the request, that feature will be disabled, but I don't think it prevents the app from working.
Most (all?) permissions in the catch-all "other" group cannot be disabled at all.
What ever happened to Intents? Shouldn't photo sharing be a simple Intent rather than requiring a new UI and storage permissions for every single app?
But yes, this is a deliberate Google policy, much like how they don't want ad-blocking programs. You are an input to their revenue-generating system, not a client.
You can see exactly what it's doing.
The issue with permissions is closed source apps that do shady things with extra permissions...
It's very hard to write bug-proof code. Restricted permissions would be a sensible countermeasure since Signal is a likely target.
It would be better if features could be disabled. Sadly the only way to achieve that is to fork the code base and run your own dev build.
I'm low on space on my 16 GB Android phone. I'm moving everything I can to external storage. My greatest problem is the apps that don't allow it, like WhatsApp.
The point is, it's a step towards a future where a much greater percentage of our systems is vetted, verified, and shown to be secure/stable (modulo external components beyond their control) and minimizing those external components.
For various definitions of "secure/stable".
For me, anything which Google, and thus the NSA, can reach and amass information from is not secure. Rust on Linux, which I am playing with, is fine.
And it's not only the "conspiracy style" question of "why would Google put backdoors in its 'Play Services'"; no, it's more like
"oh, Google receives and sends notifications for every Signal message sent and received, among other information, such as device ID, phone number, Android version" - in short, who is using Signal and when.
Signal is amassing huge amounts of information for the benefit of Google. Look at their GitHub page, where they even say they want to amass more data and to "annoy the hell out of users" to make them update - shoving updates down their throats, Microsoft-style.
I imagine this is by design. What do I do if there is a vulnerability that a patch fixed but the person I'm talking to refuses to update?
I am not saying I like the idea of bricking an app if there's no update for three months but if you agree that the fight is against dragnet not targeted surveillance then this is a reasonable compromise.
And not "hey user, take this update which contains backdoors since the main developers got gagged/blackmailed, trust us this time for real".
I upvoted this comment of yours because it seemed kind of unfair to me for it to be downvoted, and I don't like people being told what they can or can't do, but the gist of what you're trying to argue is complete nonsense. GCM dependence is a complete non-issue in context, and to say that because Telegram doesn't have that dependency it is safer from US intelligence is so over-the-top ridiculous. Signal-the-app has its problems, and a fair share of questionable decisions, but GCM and the Play Store are not among them.
That Moxie acts with contempt toward people on GitHub issues is true. It is also true that most of those people (in exchanges I have read) are completely ignorant, people who wouldn't know security if it hit them on the head, yet who insist and yell with noble indignation that they're correct. The kind of people who were taught that more FOSS = more security is an axiom and go from there. Since he gets tons of that, I kind of understand why he might act like that. I understand, but do not condone it; it's extremely off-putting. In the end, one of Signal-the-app's biggest problems is that it's mostly criticized by unreasonable people.
There are three issues I'm trying to clear up here with GCM and Signal:
1. Google, and in turn USA-based data centers, receive events/more data to analyze. That this data is neither the encrypted message nor the decrypted message doesn't matter. This data can be analyzed and used effectively. It's not impossible to imagine "push message for Android phone available at time X - which, looking through our other database, shows it has Signal but doesn't have WhatsApp; hm, our other database shows that Android device has Signal app usage of X%". Oh boy.
2. Google has root on your phone, so even if the transport security is very good, it doesn't matter, as the phone is effectively owned by Google and thus messages/keys can be stolen at will.
3. The choice should be with the users, not Moxie. Moxie has shown contempt and disregard for users' wishes; see the quote "annoy the fucking hell out of our users".
Outside that threat model, it's a useful privacy tool in that it at least reduces risk from some vectors. You still need OS-level security, like putting a trusted path on OKL4 with secure firmware. Even then, the subversion risk is so great that you still can't use it against nation-states. Better to put a usable front end on a very cross-platform, easy-to-isolate tool like GPG. Or communicate in person, or by mailing encrypted files/messages.
However, if OWS only supports systems on which such a toggle exists via a third-party provider, that somehow makes them secure?
I find this hard to understand. Yes, of course an app which encrypts data against some adversaries is nice, but it should definitely be called "secure-against-some-people", not "secure", and people shouldn’t write "Trust It" but rather "Trust It if you also trust X and Y and Z".
If anything, I'm more frustrated with the Signal team that the app doesn't have as good call quality/performance as WhatsApp, nor does it have video call support, and that the Chrome desktop "app" doesn't seem to import my phone contacts for some reason - all of which is making me continue to mostly use less secure and less trusted alternatives.
My point is we should aim for getting things "more secure" constantly, and I think we have in the past few years. So rather than just say "what's the point?", we should say "let's put more pressure on X company to open source/prove their system is secure" and hope that in time enough pressure is built that those companies actually agree to do those things.
And since I was talking about putting pressure on companies, let me start:
Where the hell is Google's End-to-End tool? It hasn't had any commits in over half a year, and we already know NSA's bestie, Yahoo, has given up on it. Should we start drawing some conclusions about the Google/NSA relationship, too? Did Google abandon the project?
There - who's next?
If you're really paranoid, go for open hardware supported by libreboot or the Talos Workstation and run a hardened "free" OS.
However, I don't think Intel ME (or similar firmware in AMD and ARM) has ever been used to compromise user security and privacy. The threat probably exists and is real but has it ever been exploited? On the other hand, I suspect that there is no lack of zero-days and other vulnerabilities for iOS and Android.
Have you tried re-importing them manually via the "Import now" button in the Desktop app's settings? Maybe that helps.
No, he's saying that people run applications on fundamentally insecure operating systems.
> They're not going to switch.
That doesn't make them right, nor him wrong.
Only because there's nothing to switch to. There's just no solid FOSS phone OS at the moment, and, IMO, fixing that is more important than securing messaging systems.
I generally accept Moxie's / OWS's argument that upstream, patched Android with Google services and spyware/backdoor and all is in general more secure than running a hodgepodge of FOSS software on a rooted phone - especially for less technically minded users (i.e., almost everyone, if your target market is everyone).
I don't think it follows that a transparent platform running fully open and user-controlled software, perhaps backed by some form of web-of-trust, CAcert-like CA system, can't ever work - or that it wouldn't be a good idea to have one available as a fallback, in case it turns out that the anti-democratic paramilitary organization you have to fight is one backed by the NSA.
I'm a little surprised how polarized these discussions tend to get - as if two ideas have to be mutually exclusive.
I think I understand OWS reasoning with locking down their network and forcing phone number IDs - I don't really agree - but I understand the reasoning behind it.
It's really on all of us that care about open federated protocols to set up an alternative network, and OWS have even graciously provided source code and a protocol as a starting point - but it's a shame that rather than some email-like model where all systems could federate in a predictable way, we are forced to have three different networks (a hypothetical open-signal, signal and whatsapp).
I guess there's a lot of people that are still sore about Facebook and Google discarding XMPP, and breaking the unification trend that we saw a glimmer of a few years back. Even without federation, I could have one sane XMPP client, with OTR support, and chat both to my non-technical friends on gtalk and facebook - and have encrypted chats over those same servers, or through the federated XMPP network.
Now I have some people in Facebook's silo, some in Google's Hangouts silo, still quite a few on SMS/regular phone service, and a handful on Signal. That's not really the fault of OWS - I actually have a few non-technical contacts I can reach via Signal thanks to their focus on a simple SMS-replacing app. I just still wish I could cut back on the number of clients and have some sane federation.
But fixing the client when the host is still insecure/unknown is just going to move the target. If messages are secure, governments are just going to move to the OS-layer.
I feel like you might've misstated your intended point, but in any case:
- Most threat models exclude the situation which you're discussing here because risks are generally low and, in the event of such a threat becoming material, the entity is probably screwed regardless of whether that threat is considered due to the costs of mitigation. (Seriously -- how would a company or person mitigate this short of independently auditing the code for the OS? Or building their own? And what happens after you look at the code? Do you then look at the hardware too? How low would you go? How low would your attackers go, for that matter?)
- If you're the target of attackers who would actually try to gain access to your device through compromising the device maker, you've got bigger problems.
The philosophical argument doesn't really work here because there's no practical solution that anyone can (or would, really) adequately fund.
P.s. just to clarify, I'm not tptacek.
OWS doesn't then allow you to use their servers for routing/discovery etc - so you need to run your own servers, and set up a different network that cannot federate with the one users of the Google Play Appstore version of Signal use.
If you do that, and install eg. the F-Droid store, you've now given another actor (the F-Droid store) access to your phone. OWS argues that in general you're less likely to manage to run a safe, patched system this way.
? That's a misunderstanding. You can of course use the official servers with your self-compiled version. (side note: I also don't think your phone needs to be rooted for this)
Clarification: "...operating systems you don't like" implies that claudius is biased and that his point about OS security is made invalid by that.
I think the point is that someone who is choosing an OS that is controlled by a particular company has chosen to trust that company.
Quora question: https://www.quora.com/What-are-concrete-examples-of-ad-homin...
No! That is not at all what is being said. There is no way to use Signal that doesn't give Google or Apple remote code execution privileges in the process.
This means that for people who aren't already exposing themselves to these companies, use of Signal is a step down in security.
I'll give you a different example. These are two positive reaction examples for the cashless society:
1) I pay with the card all the time anyway.
2) I do not like coins.
These are naive reactions that consider only personal convenience. If these people are guided to think longer and more carefully about the issue, then they are able to make a more informed decision.
This process is also made more difficult by arrogant people like you who out of their ignorance or self interest actively work against it.
Let me explain: saying "put your money where your mouth is and build something that can win in the market" is a considerably arrogant position, as it states that an argument is simply wrong just because the current market will probably not sustain it. But it will not sustain it because the market is not informed enough, and it is very difficult to campaign against actors with huge resources on a sea of ignorance.
Besides, I am a simple observer, not the one I was describing. But I am coming to believe more and more that the basic infrastructure we are using must be open to reclaim the lost trust within society.
I've been seeing it for well over two decades now. That's more than enough time for a market to emerge, were it ever likely to produce one.
A centralized service is a convenient stop for the three letter agencies to do their work. Multiple independent implementations of the protocol and interoperability is a much stronger ecosystem. Even if the security of one individual user might not be better.
If you applied the argument to the web instead, it might be tempting to say the security of a single user would improve if Google just ran the whole web, instead of all of these small shops with shoddy security, but very few people would argue that it would improve the reliability and security of the system as a whole.
"Just centralize it" is not some great insight either.
What claudius said, in essence, was that a trusted application should not depend on giving remote root to Google, likely referring to not being able to compile and distribute the software in a useful way. That is worth a more meaningful answer. Distribution and the runtime environment are central to any realistic threat model, and reducing that to open source zealotry kind of misses the point.
You can also disable permissions on the Google Service Framework and use something like XPrivacy for MUCH more explicit permission control (revocation, spoofing, etc...) if you still want GApps on your device.
It is different. I expect Apple and Google to insert backdoors deliberately into their operating systems for three letter agencies (it's easy to do it when you've got either a proprietary OS like iOS or a "technically open but practically closed" OS like Android). They've probably done it before and are part of the PRISM program either way.
However, I don't expect the FSF or Linus Torvalds to do it. They haven't done it yet and they probably won't do it.
It's a given that secret, juicy electronic stuff always ends up on WikiLeaks, so anything important I discuss live, in person, and never electronically. So if, in a massive delusion of self-importance, nothing juicy I do can show up on WikiLeaks because it's not electronic, regardless of any app I use or don't use, and the only thing I use electronics for is the security equivalent of "don't forget to buy a quart of milk at the store on the way home", then how does encrypting my quart-of-milk purchase help me? Is there any reason not to take it as a given that any juicy electronic stuff ends up on WikiLeaks regardless of this app?
Realize that if I wanted to keep my visit to the supermarket a secret using this app, I can't. Facebook and Google sell my GPS data. Tomorrow Google Rewards will send me a survey asking what I thought of my visit to the store. The store sniffs Wi-Fi MACs, Bluetooth data, and camera data to track my every move; that free internet for customers isn't entirely free. Not to mention I'm probably on 50 camera recordings. And the phone company knows where I am, every step of the way, for supposed 911 purposes. And my credit card is rubbed up against my receipt purchase data to data-mine the hell out of my milk purchase. But I'm supposed to feel perfectly private and secure because Sprint can't read the contents of my wife's shopping list. Uh huh.
If you keep things super nebulous and don't think too hard, it seems I'd be protecting myself against someone, and protecting is always good and there's always a someone to fear, so obviously it must be awesome. But analysis shows there's no real problem being solved: I'm not defended against any of the important forces, only against a single weak and unimportant one, and wide open to absolutely everyone else.
There's no reason people should be confined to only discussing those topics with people in physical proximity, and encrypted IM apps perfectly fit that use case.
There's also a legal and technical distinction between the NSA (or phone company) reading plaintext on the wire and actively compromising a device. Your comment facetiously ignores that.
Ed: I think of it like locking my front door. My deadbolt won't stop the government getting in, but it establishes (for legal reasons) that I had taken steps to ensure privacy and it raises the chance they leave signs of entry, rather than being covert.
It's not that I'm trying to hide things from the government - they could just ask me anything they wanted to know. I just want them to have to ask, not covertly take whatever they want.
However, I will extend your admittedly excellent analogy: unfortunately, this probably-high-quality piece of security hardware is installed in a garden shed that has easily breakable windows with no curtains, at least a couple of unlocked back doors, and an unknown number of (old-fashioned electronic) bugs installed in it. And the news is full, every day, of stories of garden sheds being broken into and people's secrets landing on the front page, or at least the front page of WikiLeaks. Yet the marketing spiel is something like, "once you install this really nice door, that's all you need to be completely secure, and you can feel comfy doing anything that needs to be private or is illegal".
"There's no reason people should be confined to only discussing those topics with people in physical proximity"
Pragmatically, sure there is: it's because they don't want it made public. Extremely optimistically, all you need to do is install this really top quality bank vault door on your garden shed, then ...
But in this case, we also have to imagine that the bank-grade vault door to the shed costs about the same as a regular door.
While I agree the marketing is nonsense (you need lots of other secure features too!), there's absolutely no reason people shouldn't a) start locking their doors, since most current "robberies" are walking in the front door without challenge and b) use the high security door, because the cost is the same as a regular one while the benefits are strictly greater.
I agree that Signal needs to tone down the complete-security language, but I think too many security professionals scare people out of making improvements by talking about how there are still compromises. There are lots of middle-ground social goals, like reaching a level of security that makes bulk collection untenable but leaves targeted attacks open. It's the digital equivalent of closing your blinds in a locked house -- the government can still get in if they have reason, but they can't see in when just wandering by on the street. And they can't pretend their intent wasn't to violate your privacy by entering, since even low technical barriers still require active bypass.
The vault door may not keep your shed from being robbed, after all there's a ton of easy-to-kick-in windows, but forcing it to be B&E instead of a walk-in is meaningful. The law might be ambiguous about walk-ins, but is clear about B&E. (I'd argue the other "unlocked" doors are really just getting keys from the landlord, which is a separate problem.)
I admit defeat. Two extremely strong back to back arguments, both very persuasive and well written.
I see we have common ground on the toning down the security language. That specific aspect of the issue triggered me a bit into a general, eventually proven somewhat wrong, rant.
Have a pleasant day!
You take it as a given that everything will end up on Wikileaks, but I wonder if putting some of their day to day procedural discussion on a secure, ephemeral messaging system would have kept some egg off of faces at the DNC.
That's incorrect. Signal works perfectly fine with any mobile, landline or VoIP number.
Concerning multiple devices: I use Signal on my phone and on two desktops, works perfectly fine.
You really need to do research into this before making this claim. The phone numbers are hashed (or something like it) before being sent to the Signal servers.
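To make the idea concrete, here is a minimal sketch of hash-based contact discovery. The normalization, digest choice, and truncation length are illustrative assumptions, not Signal's actual wire format:

```python
import hashlib

def discovery_token(phone_number: str) -> str:
    """Return a truncated digest of a normalized phone number.

    Illustrates the idea of hashing contacts before upload; the real
    client's normalization, digest, and truncation may differ.
    """
    # Keep only digits and the leading '+', so formatting doesn't matter.
    normalized = "".join(ch for ch in phone_number if ch.isdigit() or ch == "+")
    digest = hashlib.sha1(normalized.encode("utf-8")).hexdigest()
    # The server sees only a short token, not the raw number.
    return digest[:10]

token = discovery_token("+1 555 123 4567")
```

The catch, as the replies below note, is that the space of valid phone numbers is small, so such hashes can be reversed by brute force; hashing alone is a weak privacy measure here.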
If that's the case, choosing to use an existing identity management system most people are part of is an excellent bootstrapping decision. Identity management is hard, and it's a fair problem to punt on if it's not your focus.
Encryption != anonymity. (In fact, it usually does the opposite, because encrypted messages stand out.)
But my next switch will be to a self hosted/federated solution such as https://riot.im/, at least for me and my wife. I will not be able to make friends use and trust my server (they shouldn't for their private messages) and they won't run their own.
Signal looks nice but man the Telegram desktop client on Linux is also very very convenient, plus the fact that you don't need your phone to be on.
There are legitimate concerns with Telegram that I share with all other technically inclined people, but this article is like saying "use a firewalled Windows 95 instead" because something is wrong with Vista. You really should disregard everything they say because the true parts are too interwoven with fabrications.
This might be slightly better:
- Telegram isn't end-to-end encrypted per default: true
- Telegram uses homebrew encryption (MTproto): true
- Telegram presence information, at one point, could be read by someone who isn't your contact: true
Then again, that isn't the threat model Signal is designed for, and if you are looking to protect against that type of threat model you could totally design hardware to support it for not too much (see Blackphone) and throw Replicant on there (with some development of course). Then, if it were Signal vs Telegram, the latter would likely be your security weakness.
 - http://security.stackexchange.com/questions/130559/is-telegr...
What's wrong with Reuters? Would you prefer RT?
Telegram is open to Russian version of NSA, and the company that made it.
For me and you both, it doesn't matter, unless you work for the NSA.
More like, Google has root on your phone so it can steal your private keys any time.
I am trying to switch to Riot (Matrix) as well, especially now that they have e2e crypto.
Why would a privacy-centric protocol choose to use a phone number, which directly connects a user to their identity? How can this make sense?
There is enough evidence that most US-based companies are in bed with the NSA, compromised, or can be easily compromised.
Companies or open source projects can be bullied and threatened by government officials, legally forced to give up their users, gagged and forced to betray users, co-opted, infiltrated or compromised. Lavabit has already happened.
Why do we need encryption, security or privacy? If it is exclusively against state actors, then we know it's a serious challenge against extremely powerful, well-resourced, and legally empowered actors, and illusions of privacy, hand-flailing 'something is better than nothing', and half-baked measures won't do.
It's reasonable then to expect any solution claiming security or privacy in this context to explicitly spell out how they address or plan to address these threat models. The alternative is acting in bad faith and making users vulnerable.
I would recommend to read the article first and then follow the protectionism in the comments later.
The main claim of the article is that we need federation, as we have with email (though imo we are losing it).
In addition, Signal shares a problem with email - information about your communication circle is not secure.
Instead we should discuss why these are not implemented and how we could proceed to implement them.
If you want to discuss how a follow-on from Signal could and maybe should address some or all of those points, great! That's a conversation worth having. But it doesn't sound from your toplevel comment as though it is the conversation you're trying to start.
This is mis-categorisation. The article says "I won't recommend the use of Signal" and gives reasons and desired improvements.
I think that you are misinterpreting the article's intention. It is more complex than a single statement. I believe that the author wants Signal to change in a certain direction. That is hard to do when the userbase is growing regardless and people are not aware of the issues.
For example, someone may form the false impression that because a messenger uses the Signal protocol, it is secure (and point to this paper), when in fact its implementation makes it considerably less secure.
Of course, as with any hot meme, it gets a semantic makeover in the course of widening adoption, but if you're looking for an answer to the question of whether someone who uses "normie" unironically, especially in a context outside Reddit or a *chan, merits taking seriously...well, at the very least, I think it's reasonable to interpret that at least as a strong clue that the user isn't in the habit of thoroughly considering his utterances before emitting them.
Things need to be straightforward & familiar for normies (eg. Pokemon Go vs Ingress) and you can't expect to hold a high level conversation with a good chunk of them, whether that be about mathematics, policy (so much circular logic), reality (citations & sources are not a thing many normies are willing to use), etc.
That being said, it's not all bad, just set expectations accordingly, just like you would going on HN. I do not expect the average HN reader to understand much about traffic dynamics and the minimal efficiency gains that may come with self driving cars, or the sheer volume of people a moderate sized light rail network can move in a timely manner, so I set my expectations very low.
Its like talking to a Microsoftie about rail or self driving cars, there is a lack of knowledge (the fact that Amtrak runs trains from Vancouver to Seattle to Portland and is paying BNSF to make the route more reliable) and a conceptual barrier that I do not expect them to rapidly grasp (bullet trains need dead straight rights of way, no exceptions).
Edit: Apparently I can't reply to you, nevertheless I picked up normie as a term in meatspace, and while it might not meet your sensibilities, I do not see a more accurate term, and I'm not here to intimately know & defend your sensibilities. Same goes for asking me or telling me your pronoun, great for you, I give not a single shit, use what you want and cut to the chase.
Do not include that in your list of complaints. That is a security feature, not a privacy deficit, but only if done right.
TL;DR you want automatic updates
No, I would not say it like that.
Security isn't TRUE/FALSE. Signal is more secure than other products like Telegram. There are a lot of things it could add to increase its security. But it's pretty damn good, and the fact that it doesn't do everything that could make it better doesn't change that it's damn good.
> All it seems like you're trying to do is distract from these glaring issues.
I take issue with listing "automatic software updates without user consent" as a list item in criticisms about Signal because "automatic software updates without user interaction" are a damn good idea for the threat models that take most computer crime into account. Even the paranoid (I don't use this term lightly) models can be mitigated by a well implemented secure update infrastructure.
What would you rather have?
- Activists being pwned by 1day vulnerabilities
- The patch being applied automatically as soon as it's available
WordPress, which powers 26% of websites on the Internet, doesn't even cryptographically sign its updates. If you pwn their update server, you've got a backdoor into millions of websites. The Mirai botnet? Child's play in comparison.
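The fix being pointed at is conceptually simple: sign every release and have clients refuse unsigned or tampered packages. A minimal sketch, using an HMAC as a stand-in for a real signature scheme (a real update system would use asymmetric signatures such as Ed25519, so clients never hold the signing secret; the key and package names here are made up):

```python
import hashlib
import hmac

# Hypothetical shared secret; a real system would use an asymmetric
# key pair so only the vendor can sign but anyone can verify.
SIGNING_KEY = b"release-signing-key"

def sign_release(package: bytes) -> str:
    """Produce the signature the update server ships alongside the package."""
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).hexdigest()

def verify_release(package: bytes, signature: str) -> bool:
    """Client-side check: refuse to install anything with a bad signature."""
    expected = sign_release(package)
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)
```

With verification in place, compromising the update server alone is no longer enough to push a backdoor; an attacker would also need the signing key.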
That's a glaring issue.
> If OWS was legitimately interested in security and anonymity, they wouldn't be including proprietary components and shutting down other open source projects that take matters into their own hands.
So says your ideology.
If OWS wasn't legitimately interested in security and anonymity, why would they publish their protocols as open specifications that anyone can use to develop their own protocols and apps?
Go on, take their papers and build an app that doesn't do all the things you disagree with.
Fork their project (It's GPL; you can fork it!), remove all Signal trademarks and branding, and release your own GPL app that doesn't rely on proprietary components. Make your app/protocol federated. If you do these things, there's literally nothing OWS can do to "shut down" your project.
Don't bother worrying about integrating with Signal users or using Signal servers. Do a better job and convince people to use your fork instead.
What's stopping you?
Thankfully: you cannot comment like this on HN. If you want to criticize Signal, you can do that, but you can't do it this way.
In what way is it out of bounds to point out deeply suspicious behavior?
And if you look at the repo that you linked, this is called out right at the top of the README:
In the same thread, Marlinspike said that he would be willing to consider "a clean, well written, and well tested" pull request that would add WebSocket support to the Android version of Signal. This would effectively eliminate Signal's dependency on GCM and thus allow the official Signal app to function on custom Android ROMs that do not include Google Play Services. There is now a bounty on making this pull request, and the person/team making the pull request would also receive whatever the BitHub payout would be at the time.
To be clear, Signal relying on GCM is only an issue for people who use a custom Android ROM without Google Play Services. For the vast majority of people who do have Google Play on their phone, this issue is completely irrelevant. Signal is designed so that GCM is only used for a wakeup event and never sees any of the encrypted Signal messages.
This is straightforwardly false: the user does not send anything over Google servers in the course of using Signal. Signal sends an empty push notification over GCM to users who have received a message. So while the claim is technically true, describing empty push notifications as sending "messages over Google servers", in a discussion of a messaging platform, invites the false reading that the "messages" going over Google servers contain any information about the messages being sent over the platform.
Additionally, the use of GCM only allows Google to extract metadata about when a user receives messages; GCM messages aren't necessary to notify you when you've sent a message, so I'm not sure how you propose that Google can infer when a user sends a message and who that user is messaging. Could you elaborate on this?
Signal is seemingly built around providing the image of anonymity and security, but has glaring design flaws which negate these.