Cue the inevitable HN complaints that it's not perfect security, that you still have to trust your CPU manufacturer, that the app permissions are too invasive, that it uses phone numbers as identifiers, and that it runs on Google play services.
Stop staring at the trees and look at the forest: humanity desperately needs privacy herd immunity, and you are playing the part of a digital anti-vaxxer.
Signal is one of the best options available for privacy: thoughtfully architected by a leader in the field, reliable, with a good UI and reasonable adoption, and genuinely open source. People NEED an alternative to Facebook messenger, Facebook WhatsApp, closed source Viber and Telegram, and carrier SMS. By refusing to recommend this extremely good, strong contender, you are actively hurting the causes of privacy and open source. Stop being a digital anti-vaxxer.
If you have complaints, submit a PR or make a compatible fork. But stop discouraging people from protecting themselves.
Or alternatively: if you're going to be a digital anti-vaxxer, hit up Facebook, Google, or the NSA to see if they'll send you a paycheck for it. Because doing their jobs for free is just dumb.
I'm all for privacy and security. That said, I like the idea of Signal, but don't use it at all. The reasons are straightforward. It's not as good as Telegram in UX/UI/features. It is not multi-platform (I don't like installing Chrome/Chromium just to add Signal). Worst of all, it doesn't have multi-device message sync (like Wire does, even with end-to-end encryption).
Ok, fine, I could learn to live with those limitations and quirks. But far worse than all of the above is the fact that if you change devices (at least on iOS), you start with a blank slate, because Signal's developers refuse to allow data backups and restores (encrypted ones through iTunes on a computer or plain backups on iCloud). The application does not allow backups of its data because the developers have actively restricted it! Not everyone is capable of creating a PR or forking it. The people who did take the time to file an issue or a request have been told it won't be done, or their issues have been left unanswered or locked against further comments. Anyone interested in what's been happening on this particular issue can visit https://github.com/WhisperSystems/Signal-iOS/issues , search for "backup", and look at the open and closed issues from the past few years.
I feel bad saying this, but without a minimally decent UX, which Signal does not provide IMO, I can't recommend it to anyone. People don't expect to lose all their previous chats and conversations when they buy a new phone. Quoting a line from the blog post, albeit out of context: that's not what "We want to enable online social interactions that are rich and expressive in all the ways that people desire" means, at least not to me personally.
Adding to that, nobody in the top 10 root comments is actually "discouraging people from protecting themselves".
In a broad sense, they actually don't; they're pretty much happy with all these messengers.
And in terms of the server, Signal's is "open source", but you can't run one yourself, so how can you really be sure? :\
>Anything against signal is propaganda and anti-privacy because of the big picture
>anything against telegram is deserved because I've been told it's bad that they rolled their own crypto
Fragmentation/competition in this space is good. If you think WhatsApp is less secure than Signal despite the author of this software having worked on it, then maybe you need to evaluate for yourself whether he is indeed "a leader in the field".
1) Intel SGX isn't really capable of resisting moderately serious physical side channel attacks by someone with physical proximity or access to the server. It's decent for low-value or widely deployed apps where compromise of a single instance only hurts one user, but for a central, singular service, less so. It's inferior to, say, a conventional HSM with the code run inside it. (SGX would be great for client-side stuff, or for, say, a Tor or multi-server VPN server array, which is something I worked on using TXT before.)
2) Intel's still a gatekeeper on what SGX applications can be meaningfully distributed; to run in release-mode, you need a key signed by them, and to get this requires a commercial agreement. If you run in debug or pre-release, anyone with a debugger can pull keys.
If Intel were to fix the latter (even if it cost, say, $100 to get the signing rights, or some non-discriminatory "you must do X, Y and not A, B" were published), the former wouldn't be a deal-killer for some things. If the server operator were a mostly-trusted third party (e.g. a cloud provider), and the people writing and signing the code were never able to get physical access to the hardware, it might work a lot better in most server threat models.
In Signal's particular case it's probably true that most Signal users trust Moxie/OWS, and this is really about putting a much higher bar in place for government compulsion to tamper with or disclose the contact info, so this is a net win.
For the uninitiated, stickers are essentially a UI layer on top of image sending. Each user has a collection of stickers which they can add images to. This collection can be scrolled through quickly and any sticker can be sent to a chat with a tap, like a sort of custom emoji. Other users who see a sticker can add it to their own local collection by tapping on the sticker message. I see no technical reason why this couldn't be built on top of the Signal Protocol.
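To make that concrete, here's a toy sketch of the idea that a sticker is just an attachment plus UI metadata. All field names here are my own invention, not Signal's actual wire format:

```python
# Toy sketch (hypothetical field names): a sticker message is an ordinary
# attachment plus a little metadata the UI uses to render it inline.
def make_sticker_message(pack_id: str, index: int, image_bytes: bytes) -> dict:
    return {
        "type": "sticker",
        "pack_id": pack_id,         # lets the recipient save the whole pack
        "sticker_index": index,
        "attachment": image_bytes,  # would be encrypted and sent like any image
    }

msg = make_sticker_message("summer-cats", 7, b"<webp image data>")
print(msg["type"])  # → sticker
```

Since the payload travels as an ordinary encrypted attachment, the sticker layer is purely client-side rendering, which is why it shouldn't conflict with the Signal Protocol.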
However, I believe that this feature was added last year:
Various communities absolutely rely on this style of communication, so it becomes a dealbreaking feature.
I can understand them being a nice UI/UX feature that makes the app more fun, but I cannot possibly imagine that people actually rely on stickers to communicate.
Maybe you mean something different and I'm misunderstanding you?
It's the new emoticon. :-)
Some examples http://www.line-stickers.com/
Pictures, maybe with a max 0.5 sec animation are okay, and no sound. (At least that's my hunch.)
EDIT: That scheme can be used to probe whether or not someone already has a specific sticker stored, so it might have to be separated per-contact to avoid leakage.
Also, stickers display with proper transparency, giving them the appearance of arbitrary shapes, whereas images on Signal are displayed in a rectangular frame and transparent areas get a black background.
It may seem trivial, but this matters to people in the real world.
This is the same problem with PGP/GPG -- it may be incredibly secure, but if no one uses it, it doesn't matter.
Obtusely refusing to use a good system which has broad appeal is working contrary to your own goals.
You don't need any of that. Certainly none of that is present in Telegram. I don't think you read my original comment in full.
> I want this to be a Snowden'ish tool for adults, not giffy.
Ironically Signal already has Giphy integration, but I suppose you didn't notice because you don't use it—as would be the case with stickers.
Not sure how you could be more condescending here.
Look, I get it. I don't use stickers on messaging apps that support them, and I don't have a "GIF keyboard" or whatever. But plenty of actual adults use them and like them, and don't consider a messaging experience complete without them.
If Signal's goal is to be a niche messaging platform that only hyper-privacy/security-conscious people use, then by all means, leave out the user-fun features that the masses want. But if they actually want to raise the bar for mass-market security and privacy, those features are essential. Maybe that's "lame" or "childish" to you, but that's just how it is.
If the addition of stickers makes you leave the platform, then that's certainly a loss for you and those like you, but it's a huge win for the much-larger group of people who will be attracted to the service.
Where will they go? To some less secure alternative simply because of stickers?
I'm afraid you really can't rule that out, no.
My reaction is the opposite. I love the signal iOS app because it doesn't try to do SMS. It lets you enter the code which is great if I want to use a Google Voice number instead of a carrier number (and I don't forward Google Voice texts to the carriers). I am not a fan of SMS fallback and I am glad I get the "Enable Signal for SMS" on Android as opposed to just sending a text.
My only question is the following: what protection is there against the server sending a correct remote attestation for the code being executed to the client, and then, right after this attestation is validated, the server rerouting the network pipe so that the contact list the client sends goes to a different server running different, non-SGX code?
I ask that as a non-expert in SGX, so this might be something that has an obvious answer.
Edited to add: Also, isn't it the case that verifying SGX remote attestations requires phoning home to Intel? If that's the case here (and I'm not sure that it is,) is Intel consequently able to build an IP address graph of Signal users?
Edit 2: Matthew Green has provided a credible answer to my initial question: https://twitter.com/matthew_d_green/status/91274558241391820...
The same thing that prevents contacting an HTTPS server and having your connection re-routed after verifying the server certificate: the code elsewhere (e.g. outside the enclave) doesn't have the keys.
In the current contact discovery implementation you need to fully trust the server, namely the open source component that is the contact discovery service.
In this proposed new implementation, you still have to trust the server; you need to trust closed source processor hardware offering the Software Guard Extensions.
Those extensions necessarily need to be kept away from user control, because otherwise you could pretend to attest to a particular program while actually running a different one. So this encourages a centralization of trust into a private key managed by processor manufacturers, which for server hardware is primarily Intel.
(note that you fully need to trust the processor hardware in both cases; I'm only arguing that this scheme possibly isn't an improvement)
Could anyone shed some light onto why this is an improvement, assuming you don't trust the CPU manufacturer? This is kind of the premise of the article, which notes that "more fundamentally for us, we simply don’t want people to have to trust us", "that’s not what privacy is about".
What you're referring to as "the server" is really a technology stack that starts with the CPU hardware and extends to the service operator. Without SGX, the trust stack probably looks something like: Intel, the OS, the VM, the cloud provider (Amazon, Google, Microsoft), the server software, the server operator.
Any of those points on the trust stack can be compromised by their principals or by an attacker (for example, Amazon could choose to interfere with Signal, or someone could hack Amazon and do the same). This is true for the OS, the software, the service provider, and everything else in the stack. Users have to trust all of those points, and additionally trust them not to be compromised.
SGX shortens the trust stack all the way down to the CPU. It takes the OS, the VM, the cloud provider, the service operator, and everything else out of the equation. Clients no longer have to trust any of them, either to be honest or to avoid compromise.
If trusting the CPU isn't acceptable to you, then that might be a bigger problem than you realize, since all computing requires it. If you'd rather put your trust in AES, the problem is that you still have to trust the CPU to perform AES the way you would like it to.
Maybe the main point should have been that this scheme "encourages a centralization of trust into a private key managed by processor manufacturers". You might say that by integrating SGX mechanisms into your security model, you create a set of 'feudal lords' which can wield their power over you.
A manufacturer may legitimately establish an enclave in your most trusted hardware which you may not audit or even measure. And if that security model becomes commonplace, for example if Widevine DRM is only allowed inside SGX, you eventually cannot use your self-chosen hardware, but will have to pick a feudal lord.
Yes, this is vulnerable to an effort by Intel, but that's still a significant increase in the complexity and capital required to pull off an attack. Really, "still vulnerable to malicious insertion by the hardware vendor" is probably one of the most positive things you can say about a security system, as generally, especially when it comes to privacy, there are many attack vectors that are _so much easier_. For instance, OWS itself.
The improvement here is reducing the attack surface. Yes, that doesn't look like an improvement if you choose to look only at the particular part of the attack surface that this change doesn't address. But in real terms it is a substantial improvement.
The API allows you to POST an attestation report to Intel's verification web service and get back a response saying "OK" or "not OK". You can use TLS to verify that the verification was done by Intel rather than a MITM, but there seems to be no cryptographic way to prevent Intel from simply lying to you. They don't offer any way for third parties to verify attestation reports themselves.
Is there something I'm misunderstanding?
> "The Attestation Service verifies the validity of the platform. It is the responsibility of the Service Provider to validate the ISV enclave identity."
This API gives back considerably more than just an "OK" or "not OK". It gives back an attestation verification report (see section 4.2.1 of https://software.intel.com/sites/default/files/managed/7e/3b...)
But beyond that, as noted in the document, this is Intel certifying that the attestation was created by genuine Intel hardware. It also carries a data-at-rest signature in addition to TLS:
> "The Attestation Verification Report is cryptographically signed by Report Signing Key (owned by the Attestation Service) using the RSA-SHA256 algorithm. The signature is calculated over the entire body of the HTTP response."
What Intel is providing here is not too far off from OCSP: the main things that can go wrong with "isvEnclaveQuoteStatus" beyond messages being malformed or signed by untrusted keys are the keys being revoked.
An OCSP server could just as easily lie to you about the revocation status of an X.509 certificate. C'est la vie.
> If so, perhaps you missed this: "The Attestation Service verifies the validity of the platform. ..."
No, I didn't miss it. The Attestation Service claims to "verify" the validity of the platform, but it provides no proof of that verification. And if the platform is not valid (i.e. not verified to be using an authentic Intel key), then it doesn't matter what other checks you do; the security of the entire system falls apart, because the supposedly "secure" enclave could actually be emulated.
> It also carries a data-at-rest signature in addition to TLS:
That's a signature generated by the Attestation Service, not by the original processor. It's generated using a "Report Signing Key" which has no cryptographic relationship with the processor's keys, or with the signature of the attestation; it just tells you that Intel claims to have checked that attestation and found it to be valid.
> An OCSP server could just as easily lie to you about the revocation status of an X.509 certificate. C'est la vie.
Right, this system seems as weak as OCSP, in the sense that it can be easily compromised by (for instance) a court order to modify the behavior of Intel's verification API. That's much weaker than the original claim, which was that it could be compromised only by tampering with the processors during manufacture. (Not to mention, the TLS certificate infrastructure would be vastly less secure if browsers had to rely entirely on OCSP and couldn't do any certificate validation of their own. OCSP can lie about revocation, but it can't lie about the signature itself.)
I'll add that v1 of the reporting API seems to have been even weaker than OCSP. In that version, the signed response didn't even include any identifying information about the request, so if Intel faked a response, you wouldn't even be able to prove it to a third party! Thankfully, v2 seems to have fixed this obvious oversight.
> the entire system falls apart, because the supposedly "secure" enclave could actually be being emulated.
I'm not sure what else you're expecting? Even if the signatures were cryptographically verifiable end-to-end (which I believe is still on Intel's roadmap, but in the meantime they have centralized revocation) Intel could still issue an attestation certificate for a malicious enclave which is in fact a white box software emulation of an enclave, and you'd be none the wiser.
There is no magical way for Intel to provide a mathematical statement that a device is a genuine Intel CPU. You are trusting the hardware and the key management to do what Intel says on the box.
Before this, you had to trust Open Whisper Systems was not coerced and that Intel had not been coerced into backdooring their hardware.
Now, you need only trust that Intel hasn't backdoored their hardware.
It's not perfect, but it's better. Especially since it's entirely plausible, from a legal perspective, that Open Whisper Systems might be compelled to record any contact data they get. It is comparatively less plausible that a court might compel Intel to backdoor their code in order to enable an order against Open Whisper Systems. And that is what would have to happen here.
The only way around this would be to have many implementations - but AFAIK they don't allow third party clients to connect.
Granted, how secure those enclaves are against state-level attackers also remains to be seen (that recent attack on ARM TrustZone was pretty cool and effective), but it surely is an improvement if you want some defense in depth.
For single-server PIR that is almost certainly an impossibility, since no matter what sort of techniques you use you must scan the entire database on every request (there may be some batching that can be done if several requests are from the same user, but if you have millions of users that is not terribly helpful). Multi-server PIR is already practical, but deployment is a bit difficult: who will be the other server? How will you ensure the other server's database is synchronized with yours?
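The "multi-server PIR is already practical" point can be illustrated with the classic two-server XOR scheme. This is a toy sketch, not a production protocol; it assumes both servers hold identical databases and do not collude:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, indices):
    # Each server XORs together the records it was asked for
    acc = bytes(len(db[0]))
    for j in indices:
        acc = xor_bytes(acc, db[j])
    return acc

def client_query(n, i):
    # Random subset for server A; server B gets the same set with
    # membership of i flipped (symmetric difference)
    s = {j for j in range(n) if secrets.randbits(1)}
    return s, s ^ {i}

db = [bytes([j]) * 4 for j in range(8)]   # 8 four-byte records
q_a, q_b = client_query(len(db), 5)
answer = xor_bytes(server_answer(db, q_a), server_answer(db, q_b))
print(answer == db[5])  # → True
```

Each server sees only a uniformly random subset of indices, so neither learns which record was fetched, yet the XOR of the two answers is exactly the target record. Note it also shows why the comment's point holds: each server still touches a linear fraction of the database per query.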
Almost daily you read about yet another new "innocent" tech being exploited and used for some corporate gain, taking advantage of the end-users. It's refreshing to see it go the other way!
That may be extreme (if it's not a strawman), but a major problem with TPM is that there is no way to give control to the end-user, with the possible exception of customers large enough to demand hardware customization.
Beyond that I haven't used it so can't vouch for it. Anyone else got an opinion on this Swiss app?
Re: Wire, I tried it back when it had video chat (one of the few options between iPhone and Android) and Signal didn't. But I found Wire to be too unstable to be really usable. That is to say, my family members quickly got annoyed and demanded I buy an iPhone. Signal video chat solved that issue :-)
I tried to use Signal and even gave it permission to use my contacts when I installed it. Once it found the handful of people I know using it I disabled those permissions. To my surprise, the app refused to let me use it. That's when I uninstalled it and stopped recommending it to people.
How they operate makes me think they're trying to build the illusion of security above all else (possibly with nefarious purposes?) or they're more concerned about driving up their user #s than they are about providing security.
Despite what security consultants like to tell people, end to end encryption is not rocket science. If you trust the publicly available algorithms (if you don't then this is moot) then it's relatively straightforward to assemble a system that should be secure over the wire.
Of course, that requires you to also trust that the app you're installing is using the same source as the one you vetted (or wrote) and that your device / computer hasn't been compromised somehow. Ditto the person you're talking to. You can trust the whole chain if you want but currently verifying it is impossible.
I like to do a thought experiment about what an actually secure messaging system would look like. The only truly secure system is an air-gapped one. (Yes, there are ways to bridge the air gap if you're in the vicinity, but that's not the point.)
How could you air gap a mobile phone? Well, you can't. What you could do is use a second phone with the radios physically disabled. You could then use this to encrypt your messages and then type those encrypted messages into email or SMS or whatever.
This is a bit laborious so you could send the encrypted info to the second phone and to a second app that brokers these messages. You could use the analog ports to modem these messages back and forth. Assuming your ADC is just an ADC, the standard analog audio port should not be hackable in any way.
This is a silly example, but it's meant to illustrate a point. If you really have something to hide, an app isn't going to get you there. If you just want a little bit of privacy, you're better off with iMessage or whoever else is offering end-to-end encryption. Signal is not a particularly good chat app, and no, most of your friends aren't using it anyway.
(So they used ORAM)
That's all very neat, but before a 128MB limit scares anyone off from playing with SGX, it's no longer a limit.
As of the Linux SGX SDK v1.9, enclaves can be up to 64GB (31.75GB heap space and 31.625GB buffers). The Baidu Rust SGX SDK supports this, for example:
Kinda depressed that we seem to have given up on the scalable Private Set Intersection problem. It's a hard problem but an important one for privacy-preserving social apps.
Either way, it's far from trustless, but it does provide a benefit in that if the initial setup was done correctly a compromise of OWS servers can no longer result in leaked contact hashes.
Running remote software and attestation that it indeed is what you think without trusting anyone can be... hard.
Why don't the clients simply use Tor to retrieve information about the numbers they're interested in? To prevent the server from using the timing of the requests to correlate them, you could have the clients sleep a random amount of time between requests, and sometimes request information they already have. Maybe you could even ask volunteers to spam the server with nonsense requests, so that the genuine requests are drowned in the noise.
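A toy sketch of that decoy idea, interleaving genuine lookups with re-lookups of already-known numbers at randomized delays (all names and parameters here are my own; a real client would need actual traffic-analysis review):

```python
import random

def schedule_lookups(new_numbers, known_numbers, decoys_per_real=1,
                     max_delay=30.0, rng=random):
    """Build a shuffled lookup schedule: every genuine query, plus decoy
    re-queries of numbers we already know, each after a random delay."""
    decoys = [rng.choice(sorted(known_numbers))
              for _ in range(decoys_per_real * len(new_numbers))]
    schedule = [(rng.uniform(0.0, max_delay), number)
                for number in list(new_numbers) + decoys]
    rng.shuffle(schedule)
    return schedule

plan = schedule_lookups(["+15551234", "+15555678"],
                        {"+15550000", "+15559999"})
print(len(plan))  # → 4
```

The server then can't tell from timing or volume which lookups correspond to a fresh address-book sync, though it still sees every number queried, which is why this alone is weaker than the SGX or PSI approaches.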
Also, it seems to have its own flaws, just like SGX:
Ok, so that is a bit tongue in cheek, but just how much of Signal is OSS, and how much can the community contribute? (It seems that adding GIF transparency would be one of those items, but really I don't "get" the requirements.)
Actually, it is, since Signal encrypts your messages and only decrypts them when the app is "open". See http://esl.cs.brown.edu/blog/signal/ and
Also, don't they necessarily have a mapping of telephone number -> IP address? They could just show all contacts, and attempt to send if you write to somebody. If they want to only show mutuals, use the hashed pairs to get the IP address (or ID, or status, or whatever they need).
I don't want to be too negative, but this seems to me like an unnecessarily complex solution to a problem I don't have. I'd rather have the option to register anonymously, or to use alternative clients and servers.
Also, as a layperson, they could be making all of this up and I would never be able to tell. Somebody could have bought or coerced a couple of security experts that I know and trust, and I would never be able to find out. So, in a sense, a dumber and less secure solution might actually be better...
A related problem I think about frequently is this: how do users know/trust the client software they are running works as they expect?
I wonder if there is some way to use SGX to enhance trust in the client side too.
But to avoid minimizing it, there is a scenario I've heard where it matters: crazy ex-boyfriend discovers you're using a new messaging service, just because he still has your phone number in his contacts.
Ideally, nobody should know you're using a new messaging service unless you've given explicit consent to share that info with them.
What does "inverted" mean here? And why is the keyspace too small? And how do these problems relate to not trusting the server? The problem definition and proposed solution don't seem to match to me, even though what they have solved is cool.
The key space -- the total number of all possible PINs -- is really small; there are only 10,000 of them. One could simply "pre-compute" the SHA256 hashes of all 10,000 possible PINs and store them in a table.
When you receive a hashed value, it's really simple -- and, more importantly, very fast -- to just look up the hash in the table and get back the original PIN (a.k.a. "inverting" it).
It's kinda the same thing as "rainbow tables", if you recall those.
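The precomputation described above fits in a few lines of stdlib Python (the 4-digit PIN format is the assumption here):

```python
import hashlib

# Precompute SHA-256 of every 4-digit PIN: the whole keyspace is only
# 10,000 values, so this takes milliseconds and a few megabytes.
table = {hashlib.sha256(f"{pin:04d}".encode()).hexdigest(): f"{pin:04d}"
         for pin in range(10_000)}

# "Inverting" a leaked hash is now a single dictionary lookup.
leaked = hashlib.sha256(b"4821").hexdigest()
print(table[leaked])  # → 4821
```

This is exactly why hashing alone doesn't protect small keyspaces, and the same reasoning applies to phone numbers: there are more of them than PINs, but still few enough to enumerate.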
SGX as an additive layer for things you can't make strongly private is interesting... though I wonder if it wouldn't have made sense to first implement private set intersection, so that Signal users wouldn't have to send numbers that are guaranteed to have no hits.
By the way for those unconvinced that this is a serious issue just lookup this one from 5 days ago: https://news.ycombinator.com/item?id=15298833
Because, regardless of how good that solution is technically, I cannot understand what's so difficult about simply making contact discovery optional. At the user's discretion. Like you'd exchange PGP certificates.
Why, and how did user control and consent become so hard?
7:16:26 Daniel Gruss / Michael Schwarz - Cache Attacks on SGX
They do? 2/3rds of this blog post are about side channel attacks. The majority of the technical aspects of the article are about memory side channels in SGX and how they overcome those challenges. I haven't looked at the code yet, but it sounds as if they wrote it to be branchless so that attackers can't monitor control flow and so that memory access patterns don't leak anything.
Protecting against side channel attacks and reverse engineering is the responsibility of developers who use SGX, according to Intel's website and user manual:
> Intel designs the Intel Software Guard Extensions to protect against known hardware attacks [...] Intel Software Guard Extensions is not designed to handle side channel attacks or reverse engineering. It is up to the Intel SGX developers to build enclaves that are protected against these types of attack.
Intel's assumptions appear to correctly account for the fact that side channels can occur. They have assessed whether their product is designed to protect against that class of attack, and assigned responsibility for defending against it. What do you think they should do differently?
The lecture I pointed to shows an example of this attack.
This should be the fucking de facto standard!
We really need Privacy-as-a-service and security-as-a-service
We can provide food-as-a-service, but we can't provide digestion-as-a-service. Which is to say, we can provide many useful services but privacy and security are things that can't be outsourced, just like you can't hire someone to digest your food for you. They require deep integration and planning, and they're difficult to pull off in the best of cases. Trying to service-ify them is just asking for pain.
He just takes as a given that private set intersection doesn't work, and in the original 2014 document writes, 'There are cryptographic protocols for performing privacy-preserving set intersections, but they generally involve transmitting something which represents the full set of both parties, which is a non-starter when one of the sets contains at least 10 million records.' This is just flat-out wrong though: the sets to be intersected are the contacts of two users, not Signal's registered users and the contacts of one user.
> "Very few people want to install a communication app, open the compose screen for the first time, and be met by an empty list of who they can communicate with."
Signal needs to bootstrap itself automatically in order to solve this problem. Can you explain how two users doing a private set intersection of their contacts solves this bootstrapping problem?
The idea is that users bootstrap via the social graph of folks they physically know (or, of course, they could fall back to manually entering keys, for experts). Users place their trust in one another, rather than in the server.
Call me crazy, but I think that'd be a bad user experience of the sort that held back encrypted messaging for decades.
I want to protect people's contact lists.
What's more, Signal's current solution doesn't actually address the problem: the vast majority of people who install Signal will open up the app and see … an empty contact list. Why? Because the vast majority of the human race doesn't use Signal. So what do folks who use Signal have to do? Ask their friends to install it, and register it. Which isn't appreciably different from asking a friend to install Signal, and text/email/NFC one his contact information.
Yes, once a cluster of people have installed it, the contact-sharing functionality becomes useful. So, too, would friend-of-a-friend contact sharing.
> Call me crazy, but I think that'd be a bad user experience of the sort that held back encrypted messaging for decades.
Since that's still the user experience of Signal, and Signal is pretty popular, I don't really think it's a problem.
I use Signal. I like Signal, a lot. I respect Moxie Marlinspike's crypto chops. But I wish he had more respect for privacy, and didn't require us all to trust in his good intentions.
In person you can also exchange "safety numbers" via QR code which provides key verification.
Signal doesn't care; users care about mutual friends.
Here's an example:
- Alice installs Signal². She has many contacts, and doesn't know which contacts also use Signal². Notably, she doesn't want to give the Signal² servers all of her contacts.
- Alice asks Bob to install Signal². He does, and they trade key information (e.g. via SMS, email, NFC — whatever) and their phones use private set intersection to discover that they have Charlie, Diana & Ed in common. None of those guys has Signal² installed yet, so far as Alice or Bob know.
- Alice asks Charlie to install Signal². He does, they trade key information, and they see that they have Bob, Frank & Gene in common. Charlie gets (what Alice says is) Bob's public key, without contacting Bob directly.
- Charlie's phone contacts Bob's phone, and discovers that they have Alice in common; Charlie's phone validates that Bob's claim of Alice's key matches what he verified himself.
- Bob asks Diana to install Signal². She already has it, so all they need to do is exchange keys and discover mutual contacts. They both know Alice & Ed — and Diana is able to give Bob Ed's public key (it turns out that, unbeknownst to Alice or Bob, he's been using Signal² for months). Bob's phone can then share Ed's public key with Alice in another round of set intersection.
- Diana's phone can also share Ed's public key with Alice, now that she knows Alice's contact information. Alice's phone now has two different people attesting to Ed's public key; if they agree, that's good and if they disagree then her phone can give her a warning. She can contact Ed out-of-band if she chooses. This is an improvement on the current Signal protocol, since Alice has a chance to detect a malicious attestation without having to manually compare keys with Ed.
Note the user experience: as each user starts using Signal², his social network is used to share the contact information of his circle of friends. Users are incentivised to be truthful, since lying is easily detected. The Signal² servers never see users' contacts; they don't even need to know users' real-world identities.
Just as in the current Signal, each Signal² user is introduced to the program by a friend. Just as in the current Signal, users discover contacts who use Signal². It's not identical, though: users won't see contacts who use Signal² but have no friends in common with them. This is indeed a cost, but it comes with the benefit of not needing to trust OWS.
Your solution is certainly better than nothing, but it relies on people having actually used Signal to talk to each other in the past. It also appears to involve a lot of P2P coordination. And while it may not share contacts with the Signal servers, it does leak your address book to all of your friends, with potentially serious consequences ("hey, why does Alice have my therapist in her address book?", "Bob told me he erased Carol from his life! Why does he have Carol's key?", etc).
Well, it is trying to incentivise Signal² usage :-)
> It also appears to involve a lot of P2P coordination.
Not necessarily a huge amount, I think — folks could SMS/email stuff as well as trade contacts face to face, depending on their risk preferences.
> it does leak your address book to all of your friends
Only where there are overlaps: since it's private set intersection, Alice & Bob only see their mutual acquaintances, not other folks. But yes, the mutual-therapist issue exists. Presumably an expert mode could let folks whitelist/blacklist particular contacts when they're added.
Certainly, there are ways in which this approach is worse & more complex than the current Signal approach, but it is better in another way: it preserves privacy from Signal (and its hosting provider, and Intel, and any government which can coërce Signal, its hosting provider and/or Intel into subverting security).
Even if we assume the NSA has no backdoors in the secure enclave, numerous attacks against the secure enclaves in today's real-world processors have been demonstrated at conferences.
Additionally, the same problem applies here that was used to break Google's SafetyNet (which does the same thing, just the other way around: it tries to ensure the client is unmodified, and it can't prevent tracking): you can simply run two systems, answering the attestation challenge from the genuine device while executing the actual code on an emulated one, where you can extract all the data. Alternatively, you can emulate the whole system and circumvent it via that path, too.
The only way to truly verify SGX is with a signature from the processor manufacturer, but in that case you still get no protection from the NSA or from the processor manufacturer itself.
Yes, there are huge challenges and lots of different attack vectors. No, that doesn't invalidate the progress that's made when someone invests a lot of time and effort to solve 80% of an extremely difficult problem.
Sometimes it feels like you could solve world hunger and world peace, but unless you also gave everyone a puppy, half of the security community would still complain.
(And then there's always someone who wanted a kitten instead of a puppy...)
As I mentioned, advertising with Snowden's endorsement makes a promise that Signal cannot fulfill. Not even with this.
But, you know, there is already a solution for all the issues here: Don’t use phone numbers, use usernames! As it turns out, that is far more private and secure.
That said, this particular development is a lot more than a puppy. It's at least a full grown, happy, friendly Golden Retriever who's already house trained and knows how to fetch your slippers.
They've removed one of the biggest remaining weaknesses of their system, where they still required the users to "just trust" them. That's pretty cool, especially for the vast majority of us whose primary adversaries are smaller or profit-driven.
It is. But it's not convenient. Everyone has a contact list full of phone numbers, and people want to talk to their friends immediately, not call them up to ask for their ID on service X.
I don't have phone numbers for many of my friends, or they've switched numbers so often that the ones I have are long out of date. Half the numbers in my contact list no longer exist.
I certainly prefer usernames.
EDIT: Before I get complaints from the Matrix devs again that I’m bad-mouthing their app: ObjectOutputStream is NOT a suitable implementation for a "database": https://github.com/matrix-org/matrix-android-sdk/blob/736643...
Once again, why not invest your time in doing something more productive than spreading FUD about Matrix?
 https://github.com/matrix-org/matrix-ios-sdk/tree/master/Mat... and https://github.com/matrix-org/matrix-ios-sdk/tree/master/Mat...
Because I’m already investing my time in improving IRC, and there’s little time left after that.
> spreading FUD about Matrix?
You've admitted every one of my criticisms to be true, yet you still call them FUD. Even here you admit that there are significant amounts of bad code in Matrix; you just argue that other projects have as much. That's no excuse.
> The equivalent work just hasn't yet happened on Android; PRs welcome.
So why did this happen in the first place? This isn't even just bad code: someone spent significant time and effort on it (far more than it would've taken to use an existing solution such as Realm). And there are many pieces of the Matrix code just like it.
So, please back off. We wish IRCv3 and your projects the best; please stop spewing negativity about us.
The problem isn't in flat files; it's in "serializing" to the flat files by dumping objects directly from memory. Specifically, Java's native serializer is used, which encodes concrete types and class metadata in a non-portable way. Fucktons of people get this wrong; every second RCE in JavaEE was due to this, and it's long been discouraged. Yet here it's used as the only storage backend.
Files written that way can't be reliably read on different android versions, or different devices.
Reading such files back in basically allows anyone with write access to the files to execute whatever they want in your process, because deserialization runs readObject hooks (and gadget chains) from any class on the classpath.
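To illustrate the point, here is a minimal, self-contained example (the class here is made up for demonstration) showing that code inside a readObject hook runs during deserialization, before the caller ever sees the resulting object:

```java
import java.io.*;

// A class whose readObject hook runs arbitrary code during deserialization.
// This is why feeding untrusted bytes to ObjectInputStream is dangerous:
// the hook executes before the caller gets the object back.
class Sketchy implements Serializable {
    private static final long serialVersionUID = 1L;
    static boolean hookRan = false;

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        hookRan = true; // stand-in for attacker-controlled behaviour
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(new Sketchy());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            in.readObject(); // Sketchy.readObject runs here
        }
        System.out.println(Sketchy.hookRan); // true
    }
}
```

In a real attack the bytes come from disk, not from your own writeObject call, and the hook belongs to whatever gadget class happens to be on the classpath.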
The problem isn't flat files, but that whoever wrote that code has never had any experience with security critical code.
I can easily provide hundreds more examples of this. Race conditions that lead to private messages being sent unencrypted. Bugs that cause outright crashes. Horrible messaging performance with tons of low-hanging fruit.
All this would be fine were this a random flashlight app.
But if an app with a focus on secure messaging fucks up the secure part and the messaging part, then it's definitely not something I'd want to use. I've got apps of my own that I've worked on daily for 3 years that have similar code quality — but guess what, I haven't released them, because of this code quality.
With secure messaging, you can't release first and iterate later, you actually need to build a secure solution from the first version on.
The fact that you apply Silicon Valley mentality to secure messaging makes it already problematic. If your company needs to iterate or pivot, you can. But once your data's been sent unencrypted, it's gone.
And the fact that you also don't even notice the problem with this code even when I'm pointing you directly at it makes it even worse, because it makes your project even less trustworthy.
In terms of whether "release and iterate" is unacceptable for secure messaging: we make it abundantly clear that the privacy protecting features are still in development and are beta quality and shouldn't be used for anything where privacy is critical. Meanwhile the idea that secure messaging products shouldn't be publicly beta'd prior to a formal GA (especially for stuff as complex as decentralised e2e crypto) feels weird at best.
If you've found fatal crashes or races in the E2E UX, please do the responsible thing and chuck an email to email@example.com rather than bragging about non-specifics here. Finally: Matrix's projects span a little bit more than just the Android SDK, which is admittedly mid-development still. So yes, there's loads of low-hanging optimisation work to be done and some areas of crappy code, but that's not remotely the whole picture.
Something I forgot to add then: why doesn't Signal have a public roadmap? Almost all open source projects that I've seen have one. Why? Because they are typically community-driven and developed projects and because they rely on the community to buy into the project's future, and give a hand developing it or funding it.
But we don't see that with Signal. Why does the roadmap have to be secret? I'm not even talking about deadlines and schedules. Just decide on a bunch of features and present them on the site. You may get more volunteers willing to help if they see an interesting new feature popping up on the horizon.
Also, please do something about the call routing speed; it takes too long for a call to connect. If it's a funding issue, then say so. In fact, it wouldn't hurt to have crowdfunding campaigns for the various features on the roadmap: show progress bars and whatnot, and start developing each feature once it's fully funded, etc.
The whole project needs to become more open towards the community and less "we'll just be here in our basement for the next 2 years developing an awesome secret feature."