This is idiotic. Signal didn't take money from the NSA, they took money from the Broadcasting Board of Governors, which funded virtually all Internet privacy projects during the time period we're talking about, both American and otherwise; some of those dollars went to development, and still more went to contracting commercial pentests from firms like Matasano, the one I co-ran, and iSEC Partners.
This article says it's not rehashing DeVault's arguments, and it isn't; it's making an even dumber set of arguments.
The NSA thing is the only thing I didn't check out, and it appears you are right, so I am dumb. -- I wrote this before Marlinspike stepped down and I never thought to check that either, so second point to you: I am idiotic.
Regardless, that particular point stands even if he's not the messiah anymore: he was heralded as a saint who bestowed on humanity the right to privacy, and we were told we should trust in him.
From what I can find, the government entity that funded Signal seems to have a lot to do with the CIA and anti-censorship products designed to disrupt other countries... which actually fits the narrative of censorship-resistant messaging, at least, so there's no reason to think it betrays the stated mission of Signal.
The foundation that funded them used to be called Radio Free Asia (which on inspection seems to be considered propaganda, though it markets itself as free media) and is now called the OTF. If you look at the list of other software they sponsor, it's very much in the same category: https://en.wikipedia.org/wiki/Open_Technology_Fund
So, I recant those statements about NSA, I only know that a number of people in NSA are not using Signal, and I had heard about NSA funding from somewhere, which obviously is not true.
Correct the article, and note what you originally said when you do so, so your readers can make their own decisions about how seriously to take your arguments.
The details you're providing about BBG, RFA, and OTF aren't relevant, and just add detail to what I said. In case you were relating them to educate me: there's no need, I have firsthand knowledge of the programs you're slandering (whether you mean to or not).
> Correct the article, and note what you originally said when you do so, so your readers can make their own decisions about how seriously to take your arguments.
I mean, the points at the bottom of the article are exactly the same. I would consider your temperament and not indulge flights of fancy that I am attacking you or your institutions.
I am speaking as a citizen, from an outside perspective, on what concerns me; because ultimately I see Signal pushed very hard and only lip service paid to any issues.
> The details you're providing about BBG, RFA, and OTF aren't relevant, and just add detail to what I said. In case you were relating them to educate me: there's no need, I have firsthand knowledge of the programs you're slandering (whether you mean to or not).
I thought it might provide some context, given that I am agreeing that the stated mission of OTF aligns somewhat and does not directly contradict the stated goals of Signal.
You ran an article titled "I don't trust Signal" with a subhed that read "Signal took NSA money". That was false, as you acknowledged. You can correct it properly or not. People can draw conclusions from your actions either way.
I don't pretend to understand what you're arguing about right now. Personally, I think you should correct the article, but you could opt not to. It's your call either way.
OK, let me be as clear as I can be because I thought I stated this: I will, always, absolutely correct the article, and I will keep my original statements as strike-throughs.
Given that it makes no material difference to the point being made, I don't know why this has become so emotionally driven;
I would correct the article even if it completely invalidated my point.
I would correct the article even if you hadn't been emotional about it.
It's just good to make sure that if you make a mistake you own up to it and ensure that misinformation does not spread.
I think (I hope) you agree with that.
Which is why I'm confused as to why you keep pressing the issue as I had already corrected the article after seeing your comment for the first time (before I even replied, in fact).
The reason you can't understand what I'm arguing about is because I'm... not arguing.
The lack of Signal’s use within the NSA is not positive evidence of its compromise. The NSA has operational constraints that don’t even remotely map onto a phone-number-identified messaging scheme.
Reports from 2007/2008 had already indicated significant interference and government spying between agencies and with private corporations. Also, at that time it was widely believed, but not yet confirmed, that Dual_EC_DRBG was backdoored via NIST/NSA collaboration.
Many folks equate any US government money with NSA money, and did so especially around this time, which is likely why you made this mistake. Taking any US government money, even at the time Signal took it, was and should be suspect.
IIRC Signal continued to take money even after Snowden. So it is a fair point and not at all idiotic, and the over-reaction after you agreed to correct the article is suspect.
I do think Signal needed that money to survive, and it probably was put to good use, but I would have acted differently and more transparently about how my use of that money was communicated to the public.
That's not what they said. They didn't write "Signal, like every other privacy app at the time, took US-backed Broadcasting Board of Governors money". They wrote "Signal took NSA money", which is inflammatory and, of course, obviously false. You can't rescue that claim with jazz hands.
I don't think you're doing this person any favors by keeping this debate alive.
I can't see any evidence that the Broadcasting Board of Governors has funded anything like Telegram, Briar, any of the third-party clients for XMPP or IRC, or any of the major privacy mail services like Tutanota.
So I think it’s at least a mild mischaracterisation that they funded everything.
In fact, the only other major popular software they seem to fund are Tor (also US-government funded) and NoScript.[0]
“How could you possibly get the three-letter agency wrong? Your argument is entirely invalid” is a bit of a strawman, dude. I admitted the mistake and corrected it immediately, but you make it sound as if it matters at all which three-letter agency it was.
No, it wasn't the one that is charged with spying on everyone, it was the one linked to disrupting everyone. Much better, but not materially different. I am not finding evidence to suggest what you claim either.
You are very poorly informed. You were a Google search away from a sprawling list of projects OTF funds, but instead found a 2017 PDF of "current projects" that listed a tiny percentage of them, and concluded you'd found dispositive evidence for your argument.
It's fine not to be super well informed about this stuff. Why would you be? Most subjects that come up on HN, I'm very poorly informed about, too. But it's not fine to be so noisily poorly informed that you spread misinformation, which is literally what you've done here. I've done that, too! But I believe I apologized when that happened, rather than doubling down. I hope I did! Dealing with this thread has made me super self conscious about that, which is I guess a good thing. Your blog post was so bad I experienced personal growth.
You could have asked people before you posted this story; you could have done any kind of research at all, and improved it. But you didn't: instead, you ran a piece that claimed the NSA funded Signal, that Signal relies on server security, and that advocates of Signal are part of a disingenuous conspiracy.
(You and I are the ~only people reading this flagged, buried thread now, so we can leave it here if you like).
I don't think casting aspersions on people and calling their reputations into question is a small thing at all. The argument on this thread isn't about semantics: it's about demonstrably false inflammatory things you said in your post.
Doesn’t necessarily mean it’s cryptographically insecure. I can imagine that for any NSA employee, installing strong-crypto software like Signal, PGP, or Tor on a personal device is a massive red flag worth investigating. If I were in that position I would not want to garner that kind of attention even if the crypto itself is fine.
I have to imagine using Signal with a high level clearance might itself arouse suspicion, for one thing. People inside the intelligence community really can’t expect much privacy for themselves.
> The NSA thing is the only thing I didn't check out
You might want to check again, because your post is full of inaccuracies. From Signal being on F-Droid to their backend being relevant to security, you got almost everything wrong.
I’m personally not a fan of requiring phone numbers or disallowing third party clients. I’m not really sure how I could characterize those concerns as being “dumb” even if they didn’t particularly bother me.
I don't agree with many points from that blog post, but I agree with your comment about phone numbers.
xx Messenger doesn't require the user's phone number (it's optional, if the user wants to make it easier to be found by phone number).
Messaging clients on some other networks also don't require phone numbers, but that makes it harder to find people.
xx Messenger allows the creation and use of network nicknames that can be ported to other apps.
The "I don't trust Signal" articles were one of the things that inspired me to push for making xxm first over other apps. Since then a few other options have also sprung up (Session, cwtch, simplex to name a few).
Also, it wasn't mentioned in the articles, but another issue solved by us and many of these other messengers is the use of a private keyboard instead of the vendor/ISP ones that can spy on users.
Unfortunately, I learned the hard way that better privacy and security will not beat convenience and network effects when you are competing with "free", so we are changing our approach.
There are valid reasons to prefer other apps, but this is not really one of them; it is the Signal idiosyncrasy that has most clearly been vindicated, by what happened to Matrix, which has the opposite model and has suffered for it and will continue to.
In the unlikely event that a set of vulnerabilities as devastating as those from the Nebuchadnezzar paper were found in Signal, Signal controls the whole platform end-to-end, and can simply publish software updates to fix them. Matrix has to do a coordinated multivendor update of their entire protocol.
Right, but what prevents them from doing a similar whole-platform "update" to totally screw up privacy/security? The point of software freedom is the freedom to run variations on the software that's reasonably easy to do, because you have the source. If it's supposedly "you can connect to any Signal server you want, as long as it's our Signal server" and "you can use any Signal client you want as long as it's our signal client", there's not much freedom left there.
Ask them! They're in the middle of one right now. The answer is simple: they don't control all the software, but rather have to convince other vendors to make changes.
(I like Matrix and will always sound like I'm dunking on them because of the implications of Nebuchadnezzar, and, before that, of opt-in E2EE; they're doing mostly the best they can with a tough hand to play.)
- If you use our client you can use our servers
- If you don't use our client, you can't use our servers, but you can use any other server
It's like, technically it's sometimes[1] OSS, but they don't care about actually being FOSS in practice. If I can't fork the software, add or remove a feature and keep using the software's other features, it hasn't hit the bare minimum to be called FOSS, IMO.
1 - Most old versions of Signal are OSS, but updates are frequently shared only after a long delay; in some cases over a year out of date, if my memory serves me.
Early on, this article makes a major factual error, claiming that Signal is on F-Droid. It is not, and the Signal team’s refusal to authorize the app on F-Droid is a well-known and longstanding decision. The author then claims that the team’s development of new functionality behind closed doors (payments) represents some kind of betrayal of Signal being open source. Yet all clients running on people’s phones in the meantime were based on source that was published, with verifiable builds. Finally, Moxie Marlinspike is no longer leading Signal.
There are nevertheless legitimate reasons to not trust Signal and to worry about compromise. I am pleased to see that the author mentions one major but usually overlooked one: the (optional, but encouraged) sharing of people’s contact lists with the server through Intel SGX functionality, which has been repeatedly found to be insecure.
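To make that trust model concrete, here is a minimal conceptual sketch (hypothetical names and values, not Signal's actual protocol or APIs) of what enclave-based contact discovery asks the client to do: check an attestation quote's code measurement against a pinned value before uploading anything. You stop having to trust the operator's goodwill, but you start trusting the pinned measurement, the client doing the check, and Intel.

```python
# Conceptual sketch of SGX-style attestation, with hypothetical names;
# this is NOT Signal's actual protocol or API, just the shape of the trust model.
import hmac
from dataclasses import dataclass

# Hypothetical: the enclave build measurement the client ships with ("pins").
EXPECTED_MRENCLAVE = bytes.fromhex("ab" * 32)

@dataclass
class AttestationQuote:
    mrenclave: bytes       # measurement of the code running inside the enclave
    enclave_pubkey: bytes  # key the client would encrypt its contact list to

def enclave_is_expected(quote: AttestationQuote) -> bool:
    # Constant-time comparison of the claimed measurement against the pin.
    # A real client must also verify Intel's signature chain over the quote,
    # which is exactly where the "trust Intel" dependency comes in.
    return hmac.compare_digest(quote.mrenclave, EXPECTED_MRENCLAVE)

quote = AttestationQuote(bytes.fromhex("ab" * 32), b"\x01" * 32)
assert enclave_is_expected(quote)  # only then would contacts be encrypted and uploaded
```

The repeated SGX breakages are attacks on the remaining assumption: that the measured code can keep its keys secret from the host it runs on.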
> Yet all clients running on people’s phones in the meantime were based on source that was published, with verifiable builds.
"Based on" meaning they're built from that source but have changes to it? Because given that the published source had known bugs and was not updated for months, either they're not fixing known bugs for months, or they're publishing builds based on unpublished source. (And the article does address the verifiable builds point, pointing out that it doesn't and can't really work).
Thank you. I’ve always highlighted most of these points here on HN and I get heavily downvoted and attacked as being “paranoid”. How is it paranoid if you don’t even have a way to confirm the physical servers are secure from memory-injection attacks, boot attacks, etc.? How would you verify that nothing was changed after the audit? Phone numbers have inherently broken security by protocol design (I personally have done a lot of work on attacking the GSM protocol, and I know how easy it is for three-letter agencies with enough funding and access to exploit it), so why don’t I have an option to choose? Why was it shilled so hard back when WhatsApp went dark? There are a LOT of sketchy things; there’s no way I would trust it, and I never will.
The point of an E2E cryptographic design is that you don’t ever need to trust the server — the server can be as malicious as it pleases, so long as you can affirmatively demonstrate that the client never communicates anything that isn’t encrypted with a key the server has no access to.
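Mechanically, the claim looks something like this toy sketch using PyNaCl; it is obviously not Signal's actual protocol (no ratcheting, no prekeys), just the core property that the relay only ever handles ciphertext.

```python
# Toy model of E2EE: the "server" is a dumb pipe that never sees a key.
# Requires PyNaCl (pip install pynacl); not Signal's protocol, just the idea.
from nacl.public import PrivateKey, Box

alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key before anything touches the network.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# All a malicious server ever handles is this opaque blob.
relayed = bytes(ciphertext)

# Bob decrypts on his own device with his own private key.
assert Box(bob_sk, alice_sk.public_key).decrypt(relayed) == b"meet at noon"
```

Of course, this only demonstrates anything if the client you actually run matches the published source, which is where the client/network objection below bites.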
That was the original argument. But Signal has now rolled out functionality (sharing of contact lists and other details with the server through Intel SGX) that does force one to trust the server.
> E2EE is meaningless if the client and the network are the same
Using modern, networked computers does involve a lot of trust.
But as long as it's not one company delivering the whole stack, some attacks require a gradually more unlikely scenario where a lot of parties across the world would have to cooperate, and the cost of an exploit that traverses the software stack becomes so expensive that targeting you is out of scope.
> I do not believe that end-to-end encryption means anything at all when the network and the client are the same entity.
That's textbook black-and-white thinking, and it's bullshit. I agree that network and client being controlled by the same entity raises questions, but that doesn't imply that E2EE "doesn't mean anything".
A very basic argument that shows why you are wrong: It's much easier for the government to compel a company to hand over data from their servers than it is to compel the company to write and publish a backdoored client. The two scenarios are not equivalent in practice, and this is what matters. Threat models that ignore how the real world operates are useless.
Please read up on the concept of "defense in depth", central to modern information security, which is built around the insight that security mechanisms can be valuable even if they don't work perfectly in all circumstances.
> It's much easier for the government to compel a company to hand over data from their servers than it is to compel the company to write and publish a backdoored client. The two scenarios are not equivalent in practice, and this is what matters. Threat models that ignore how the real world operates are useless.
There is a very real threat when
- Signal servers operate in the US and clients on app stores run by large US companies
- Signal can be compelled not to disclose government-imposed backdoors
- Signal stops releasing their open-source version, but patches it arbitrarily in production
I've read an interview saying there is anti-spam code you can't share if you're a large network, because it's an arms race. But Signal does not make their operations transparent beyond what absolutely must stay secret for anti-spam purposes, and this creates distrust: it leaves a sense that they care more about uptime than trust, because they got big. So it's not the messenger of choice for political dissidents, whose threat model does involve the government to some degree (passive or active).
My primary issue with Signal is that they’re a US-based org, and AFAIK an NSL would allow the US government to collect the same metadata they could with SMS or iMessage
They might not have the contents of your messages, but they know who you’re talking to, and when
Remember that if you're not USA based, NSA doesn't need an NSL to collect information from you; they can just own you up, and that is literally their chartered purpose.
That's not to say you should use US providers! Just that NSLs aren't a good reason to pick a provider. Pick a service that doesn't have information to share about you in the first place as your high order bit.
> Remember that if you're not USA based, NSA doesn't need an NSL to collect information from you; they can just own you up, and that is literally their chartered purpose.
"Owning you up" is harder (not impossible, but harder) when they can't simply send a letter and bring the force of the law to bear. NSLs are a very good reason to avoid any system that requires you to use a provider that has a presence in the US (and there are analogous concerns about e.g. AU, and obviously any country where legal and practical protections are weak enough that a strongman can just send a team of thugs round is a nonstarter). But really any specific country is beside the point; it should be table stakes for a serious cryptosystem that one can avoid depending on any single point (and make choices based on one's own trust base vs available resources) whether that's for relay servers, app maintenance, or anything else.
> Pick a service that doesn't have information to share about you in the first place as your high order bit.
True enough; obviously trusting your security to a system that requires you to use a phone number identity is laughable in the first place.
But I think it's quite valid to suggest users should carefully think about the tradeoffs between being subject to legal disclosure and being subject to compromise. Basically, do you trust the FISA courts, or do you trust the code?
The answer isn't really obvious! When it's a random anonymous startup based in, like, Panama that claims to have reinvented JavaScript-based encryption or whatever, yeah, I sorta trust the FISA courts more!
Meh. All secure systems are alike, each insecure system is insecure in its own way. Like, yes, there are absolutely systems whose code is weak enough that I trust them less than the US authorities (I wouldn't say the FISA courts - isn't that the court that's declined like 3 warrants ever?). But that doesn't make having a presence in the US any less dangerous!
No, I wasn't talking about Telegram. I was making a general statement about crappy e2ee startups with bad designs and accountability. There was a recent one using Sealed.io whose name I forget.
Telegram's cryptography is objectively inferior to every other messaging app. It is anything but "tried and battle-tested". The idea of selecting Telegram over Signal to protect metadata is risible.
For sure! I’m assuming TAO or whomever can come after me at any point for manufactured reasons, if nothing else. Nothing I can do there…
I do think this matters in a general sense, because state actors targeting individual users is a completely different threat from state actors collecting the communication graph of a major hub.
> Of course signal is open source, it’s on f-droid
Even with their criticism, the author is giving Signal too much credit.
Signal is not on F-Droid. Signal sends their lawyers after open-source app repos for including their app.
I think the only claim they have is their trademark name "Signal". I wonder what's a good name for packagers to use for apps like this. Reminds me of Firefox and IceCat, or Rust Lang and Crab Lang.
It would merely be annoying if it was just about a trademarked name, but it's worse than that. They actually forbid using their infrastructure from non-official clients.
Trademark doesn't mean you can't say “Signal” at all. You can officially call it Whatever but then always say “Whatever (like Signal)” or “Whatever (Signal-compatible)”.
This is an excellent post, but I want to point out that what really matters here is your threat model.
If you are trying to hide from the NSA or other nation states, you have a LOT of work cut out for you. There are basically two sub threat models: are you trying to hide from the dragnet (in which case, just using any obscure and relatively obfuscated communications mechanism will work) or the scenario in which you’re being actively targeted (in which case you need rock solid security from end to end). Keep in mind that the Security version of https://en.m.wikipedia.org/wiki/Analog_hole means the security of your networked device is just as important as your messaging protocol, and… good luck with that on mobile.
If you are just a small fish trying to avoid something with a court-admissible record (and don’t care about parallel construction) you’re probably fine with Signal, provided you understand that your counterparty can just give you up.
I hate to bring out the “nothing to hide” argument because I disagree with its premise from a moral standpoint, but from a practical standpoint, I recommend avoiding having “directly targeted by the NSA and needing to avoid it” as your threat model to begin with.
Unless you are also building your entire hardware from scratch, including the CPU, and writing or auditing all firmware and device drivers, "zero trust" is a fantasy.
Zero trust means verifying everything. Not only has no living person verified the entire technology stack they are using, it is literally impossible to do so for any modern consumer device, since they all contain closed hardware and software that affects the trust model yet cannot be verified in any meaningful sense.
Maybe that commenter didn’t know what zero-trust means. Zero-trust in practice just means continuing to authenticate from within your “perimeter”, i.e., assuming an employee or machine is already compromised.
If you need unbreakable encryption and security that even the NSA (or the various vendors it works with to find zero day exploits) can’t hack you need to get off the fucking internet
I personally think all the major VPNs are honeypots, that’s my conspiracy. Subpoenaing my ISP is far more work legally than just having data being fed to them for free.
For those who think that’s too far, Crypto AG. A company that actually wasn’t founded by the CIA, but was slowly bought out by intelligence agencies with shell companies. Also they were Swiss by every appearance! Good thing there isn’t a modern Swiss company many people here use and trust because they are Swiss and not US…
Also, paying for VPNs with cash is, in my opinion, overrated when they know your actual IP address. Sure, visit the coffee shop, but if the coffee shop has cameras…
On the macOS client (and iOS client too, I think), Signal keeps periodically prompting me to import my contacts, and the only options offered are 'Yes' and 'Not Now'.
I have an intense dislike for any software that offers a 'not now' option without also offering a 'not ever' option, particularly when it's about exporting PII. It feels like it's just saying "ok, but one day we know you're going to slip up and accidentally click the 'Yes' button, and then... bam!"
Same. As trustworthy as Signal might be, the most trustworthy chat service is one I don't have to trust at all. Being required to give them access to my contacts was a dealbreaker for me.
> This throws into question what (Signal) consider “open source”, they are clearly deploying non-public software. [1]
Even if Signal did deploy the publicly available code, there's no way they nor a user can prove it.
We should never assume an open source version of anything is what's actually running on a server - edits could have been made to the code; in fact it could be entirely different software designed to appear similar.
A good solution is paid registration. Telegram ran an experiment with this, but they asked too much (about $10-15 for not requiring a phone number) and you needed to buy their weird cryptocurrency.
If Telegram asked for about $4 in BTC for anonymity, I would gladly pay it.
It's interesting that this experiment shows how valuable knowing your phone number and identity is.
I don't know if there is a good solution. Mostly it's a trade-off between these problematic options:
1. your contacts being stored server-side instead of device-side, where they might end up in the hands of an adversary
2. your contacts being stored in some other, less-ubiquitous-and-less-likely-to-be-complete format on your device.
Maybe Signal could have write access to your contacts so that it could store its own identifiers in the address book, but I could see a lot of wtf moments coming up when privacy-minded folk see that permission request.
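For what it's worth, the obvious middle ground of uploading hashed contacts doesn't really help, which is the usual motivation for the enclave approach mentioned elsewhere in this thread. A quick toy illustration (mine, not anything Signal ships): the phone-number keyspace is small enough that whoever holds the hashes can simply enumerate it.

```python
# Why hashing phone numbers is not anonymization: the keyspace is tiny.
import hashlib

def h(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# What a "privacy-preserving" contact upload might contain:
uploaded = h("+15551234567")

# What the server (or anyone who obtains the hashes) can do: walk the keyspace.
# This toy loop covers 10,000 numbers; all ~10^10 US numbers is trivial for a GPU.
for n in range(5551230000, 5551240000):
    candidate = f"+1{n}"
    if h(candidate) == uploaded:
        print("recovered:", candidate)
        break
```

That brute-forceability is the entire pitch for doing the contact intersection inside SGX rather than on a plain server.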
On the user (and actionable) end, getting a cheap SIM from a country where they are sold OTC. Then, registering with that. You only need to confirm your number once, I think.
If you don't have access to the original phone number you used for registering, and accidentally lose or break the phone, how do you recover the account?
This article is a grab bag of complaints, conspiracy theories and one or two valid points.
Yes, it needs your phone number. Yes, it shares contacts, protected in a (somewhat) secure enclave. Yes, Signal could theoretically distribute a compromised client that would undermine the end to end encryption (which if ever discovered would end their organisation).
If you are afraid of spy agencies or hostile governments, don't use it. If you just want reasonable protection for your chats with friends and family, it's perfectly fine. I trust it more than WhatsApp.
They also don't have any publicly defined approach to combating criminal activity on their network. They can't give out the IP address or personal information, but they can still delete the groups and related accounts[1]: just create an account with one of those free temporary phone numbers from the internet and you will be able to see the past groups accessed (and not exited) by the account. For contrast, Matrix has a clear approach to moderation[2].
* They have a forum[3] that is not even indexed by search engines[4], which is not community-friendly at all.
They keep bringing up the point that for an E2E encrypted app to be secure the client and network must be fully independent from each other, but I am not able to understand why. If I trust Signal the client, why does it matter what backend it is talking to? Being able to communicate over an untrustworthy network is in fact a core selling point of such apps. So why does it matter that Signal runs both?
Because functionally it's no different from just trusting the network.
If Telegram says "trust me, we can't read anything" and Signal says the same, the only way you know it to be true is if they have never given you any software that has access to the unencrypted content.
OR
you only use another network to communicate (and, obviously, you ensure that your clients aren't sending any messages out to anywhere that's not that network).
The open-source aspect is interesting with hosted services. Even if Signal open-sourced their entire stack (front-end & back-end), you still couldn't be sure that the code they are hosting for you is exactly the same as the published open-source code.
Ironically I tell people all the time not to trust me.
I run an IRC network (and have for 15 years), and I often tell people to use OTR in DMs, primarily because I shouldn't be trusted. I definitely do not tell people to install my client and forsake all other networks.
> -that you can bribe the New Yorker magazine to write a profile about you
Oh, that's definitely true, though I wasn't saying bribe. Did you know that for those "30 under 30" lists you have to apply to be on them? It helps if you're rich, but you don't technically have to pay. It's like this with a lot of media.
I have been in the room when discussing paying for PR pieces on our senior staff, especially the CEO in top magazines. Truthfully you hire a PR agency and "things happen".
> -that Facebook Messenger is as secure as Signal
Why are they not?
> -that you cannot be open source if you do not instantly share every change
You are not, since nobody can bloody run the software themselves anyway, and the entire point is transparency.
> Oh, that's definitely true, though I wasn't saying bribe. Did you know that for those "30 under 30" lists you have to apply to be on them? It helps if you're rich, but you don't technically have to pay. It's like this with a lot of media.
I mean, you wrote, "From everything I personally know about the media, articles like that are usually paid for, though almost never directly by the person being profiled."
This is, frankly, a stupid claim, though I think from your profile you are probably German or Austrian and thus might not really know which news outlets are reliable. As a perhaps helpful analogy, the New Yorker is roughly like Der Spiegel or Die Zeit, only more respected. (That's not to say they don't have misses; Ken Auletta's profile of Elizabeth Holmes comes to mind.)
The prospect that you can pay for coverage--and as an aside, I don't think The New Yorker has ever run "30 under 30" lists--is rather laughable.
> You are not, since nobody can bloody run the software themselves anyway, and the entire point is transparency.
I'm British but reside in Sweden and work for a German company; it's possible I don't know all things about American culture, as I've only worked for one American company. What I was referring to was a French company, and they were talking about US papers.
The 30-under-30 thing is Forbes; I have always considered them reputable, but perhaps I am mistaken.
The reproducible builds page is, as previously mentioned, a really nice-looking page which basically says it won't be reproducible even if you follow these steps.
I literally said that it looks good in a Google search, but there is nothing of substance, since they say in the article itself that it will not be fully reproducible ("please don't send us messages about how it's not reproducible"); they say it's because of the shared libraries, and that some parts are actually reproducible.
The New Yorker is not a shithole, and they don't take money for articles.
Maybe you want to, like, correct your blog post? Honestly, the claim makes you sound dumb.
> The reproducible builds page is, as previously mentioned, a really nice-looking page which basically says it won't be reproducible even if you follow these steps.
I don't know what you mean. Are you referring to the NDK portions? And if so, does the critical stuff (key verification, encryption) happen in Java or in native code?
> "please don't send us messages about how it's not reproducible"
Even if I take it as absolute fact that you cannot possibly hire a PR firm or spend enough on marketing to get a puff piece about you in a magazine, you are glossing over the entire point I am making.
Someone is telling me to like this guy. Someone is telling me that he's a saint, a hero, a steward of the future. Someone truly wants me to think that.
I'll say it again because you didn't catch it: none of these issues is a nail in the coffin on its own, but placed together they paint a particular picture. I can't unsee that picture, and arguing over semantics doesn't change the core argument at all.
I absolutely will not redact that statement, because it is true that you can buy your way into magazines, either directly or indirectly. Whether the New Yorker is entirely immune to being convinced to write profiles like this, even when no money directly changes hands, is irrelevant.
-----
> the Signal Android codebase includes some native shared libraries that we employ for voice calls (WebRTC, etc). At the time this native code was added, there was no Gradle NDK support yet, so the shared libraries aren’t compiled with the project build.
> Getting the Gradle NDK support set up and making its output reproducible will likely be more difficult.
> Please don’t freak out
> Just to head off the inevitable deluge of GPG-encrypted emails with dramatic subject lines, we are not doing this in response to any kind of legal threat or pressure. This is just a weekend hack; please don’t make us regret it.
tl;dr: it's not reproducible with the shared libraries or Gradle, and it was just a weekend hack; please don't make them regret doing it.
Oh, and did you actually run through the steps? Or is it merely enough that having them listed makes you trust them?
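"Running through the steps" boils down to building the APK yourself and diffing it against the one your phone actually received, entry by entry, ignoring the signing metadata that can never match. Roughly this, with placeholder filenames (my simplification, not Signal's actual comparison tooling):

```python
# Compare two APKs (zip archives) entry-by-entry, skipping signature files.
import hashlib
import zipfile

def entry_hashes(apk_path: str) -> dict[str, str]:
    with zipfile.ZipFile(apk_path) as z:
        return {
            name: hashlib.sha256(z.read(name)).hexdigest()
            for name in z.namelist()
            if not name.startswith("META-INF/")  # signatures never match
        }

local, store = entry_hashes("app-local.apk"), entry_hashes("app-store.apk")
for name in sorted(set(local) | set(store)):
    if local.get(name) != store.get(name):
        print("MISMATCH:", name)
```

And per the caveats quoted above, the native .so libraries are exactly the entries you should expect this diff to flag.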
OK, fair enough. I guess my point was a little bit broader: anyone who knows anything about The New Yorker will think you're ignorant for having written what you did. Your comment here sort of confirms that.
By all means, you're not the first person to proudly wave their ignorance on the Internet, nor will you be the last, and yours isn't especially egregious by Internet standards. I am baffled that you would want to write a blog post about how ignorant you are and then submit it to the front page of HN. If I were you, I'd, like, not want people to read that?
But, up to you. Good thing you don't use your real name I guess. ;)
> > Just to head off the inevitable deluge of GPG-encrypted emails with dramatic subject lines, we are not doing this in response to any kind of legal threat or pressure. This is just a weekend hack; please don’t make us regret it.
This doesn't say what you said it says.
> tl;dr: it's not reproducible with the shared libraries or Gradle, and it was just a weekend hack; please don't make them regret doing it.
As I said, in which codebase does the key verification and encryption live--is it in the native code, or in the Java code? I suspect you don't know, or you don't know why that matters.
> Oh, and did you actually run through the steps? Or is it merely enough that having them listed makes you trust them?
> anyone who knows anything about The New Yorker will think you're ignorant for having written what you did.
Well, I am ignorant; I don't know the internal workings of every single magazine. But I am aware of how the media engine works, since I also work in the entertainment industry.
Surely you are not genuinely claiming that it's impossible for them to be influenced by PR firms or external marketing spend? That you sincerely believe they are never influenced by external promotion of people? How do you suppose they find people to write profiles on? I personally know that it isn't chance.
> Good thing you don't use your real name I guess. ;)
I do use my real name, for whatever it's worth. I have absolutely no problem wearing these statements on my sleeve; if you read the article you would see my real name plastered at the top of the page, and a link to my video game credits buried near the bottom of the page in the "Commonly Asked Questions" section.
I'm really happy you took the time to respond to me, though, since you are precisely the type of person I'm writing this for: people who are emotionally pro-Signal when in reality there's no reason to be.
Do you even read how aggressive your tone is? If you actually had anything meaningful to say I would feel terrible.
> Surely you are not genuinely claiming that it's impossible for them to be influenced by PR firms or external marketing spend? That you sincerely believe they are never influenced by external promotion of people? How do you suppose they find people to write profiles on? I personally know that it isn't chance.
"PR people pitch profiles to writers" is very different from...oh, what did you write? "From everything I personally know about the media, articles like that are usually paid for..."
The former is normal journalism. The latter is an ethical breach. So, yes, let's talk about tone for a moment: in your post, you accused multiple people of ethical or legal breaches, on the basis of, oh...your feelings?
Very respectful of you! Yes, my "tone" is direct. But I didn't accuse you of anything unethical, did I?
> I do use my real name, for whatever it's worth. I have absolutely no problem wearing these statements on my sleeve
Hahah, OK then.
> I'm really happy you took the time to respond to me, though
Oh, but I did. As far as I can tell, you made one, and only one, technical claim: that Signal's reproducible builds aren't useful because they don't cover the NDK code. I asked you, twice now,
"As I said, in which codebase does the key verification and encryption live--is it in the native code, or in the Java code? I suspect you don't know, or you don't know why that matters."
You have twice now failed to reply to that. Care to try for a third time?
You think it matters where the key lives? Why do you think that?
Do you think it irrelevant that your phone auto-updates without consent (citation in TFA), do you think it irrelevant that the reproducible builds are -- not, you wouldn't ever need to touch the key to bypass signal. You want me to go into semantics but anyone with half a technical brain knows that RCE, logging, remote screen capture, or anything that can read memory will easily break security and doesn't have to be colocated with the code that actually does the encryption/decryption.
Also I made multiple technical claims:
* "third party clients are a no go"
* "third party networks are a no go"
* "Automatic updates are enabled and forced" (due to "move fast" ;))
* "The code on the server has been provably non-public"
* "Forces the use of a globally unique number that can often be tied to personal identity and will always be tied to physical location"
Care to answer these? ;)
EDIT: it seems Signal has a spotty relationship with reproducible builds, but they are trying to keep them working and even have an automated job for it, though people are often unable to reproduce: https://community.signalusers.org/t/beta-feedback-for-the-up...
> Oh, that's definitely true, though I wasn't saying bribe. Did you know that for those "30 under 30" lists you have to apply to be on them?
The New Yorker is a storied magazine that has never had anything like that. Forbes runs those lists, I believe.
I am not particularly a fan of the New Yorker any longer, and some media ignorance is forgivable. But to make a libelous, sloppy claim on this basis is the height of hypocrisy and recklessness.
Do you notice how people have to waste time repeatedly rebutting your pig-ignorant claims? I am a free speech advocate, but your behavior is the reason libel laws exist.
Don't trust Signal for what? For communicating about illegal dealings that intelligence agencies are interested in cracking down on? OK, sure, but then don't "trust" anything digital over the wire.
What about having E2EE traffic for your day-to-day communications? Why not? It seems reasonably protected, and no government agency or third party would reasonably put in the effort to look into that.
"trust" is one of those words which has contextual meaning. I actually do "trust" signal, for low risk things. I don't routinely do high risk things and so I can make a cost/benefit decision facing who might exploit what I am doing and the consequences for me.
Other people live in the margins of what either their own, or some aggressor, society wants, and information from or about them is key. I can understand somebody having a heightened sense of what should be the foundation of "trust" in these circumstances. But that said: simply using Signal probably puts a giant red flag over you in some circumstances. Or PGP. Any self-installed cryptographic technology you are associated with puts a marker against you for some problems.
I don't "trust" closed source. I prefer that people have review, and that objective groups of people I trust in a personal sense (because I know them, no matter how vaguely, from online lists, and IETF wg and the like) tell me from their position of contextual knowledge, and I admit some authority, where trust can be well placed. Some of them are American. Some of them work for agencies related to the state, like the FCC and in some cases the NSA. I again have to re-frame "trust" into the sense of what it means when somebody who works for an agency like those tell you something. (others of them are not american, do not work for these agencies, or sister agencies in five eyes or other economies. But I observe, if you are competent to speak about cryptography you almost certainly DO engage with people who work for state enterprises like NIST and the NSA, even if in other economies. Its just the way the world IS: academics who work on cryptography have a circuit of relationships which includes the mathematicians inside these agencies)
I don't "trust" Moxie quite as much as I did, because some of the actions around Signal confused and worried me. And, I saw little to no desire to engage. Well, he doesn't owe me anything, financial or moral. I have no direct relationship, and in the end if he decided trying to assuage a million edge users like me (edge user not edge lord) was a loosing game, I can't fault his logic but the other side of that logic is I don't "trust" him as much as I used to.
I do trust Meredith. I have met her (once or twice) and she seems to me to behave in an open, direct manner, and when she says things I see them reflected in what she does. And, I trust the kinds of statements she has made about signal and the kinds of things I see emerging around governance of something like signal.
In due course I do hope she and the board can say something about source code and future development, and about how we can rebuild, or build from scratch, some reasoned trust in the code, its cryptography, and the systems behind it.
Indeed. But it's foundational to why people have reserved some judgement about the software and the service. I didn't know Phil Zimmermann, but I "trust" PGP because of the number of eyes on the code. That was the substantive point here about Moxie.
Unless and until somebody audits and reviews it, I would assume code he wrote remains in the application suite and head-end services. Should I "trust" that implicitly?
> It is absolutely not the case that cryptography engineers generally interact with NIST employees, and certainly not NSA employees.
Day to day? No. Communications are confined to matters of substance between them. But I see NIST people at IETF standards meetings and on mailing lists. I see engineers who submit algorithms to the NIST beauty contests talking online about who they discussed things with. I knew researchers in Australia who were engaged in NSA-related study, and it was simply understood that no technology entered the building with them when they went for face-to-face meetings, but the fact that they met was not secret; and there are lots of people in bodies like the IETF who meet these people on an as-needs basis.
Around the time the IETF was held in Beijing, a number of people wound up being asked or told not to take their normal laptop with them into that economy; a lot more people than I would have expected simply for commercial-in-confidence reasons of risk.
"generally" might be too sweeping. It's not forbidden. It might be unusual.
Phil Zimmermann also has very little to do with the design of modern PGP, and the influence he did have was malign (if not deliberately so). Signal is so well studied it won the Levchin Prize at RWC.
I don't understand why you believe cryptography engineers generally participate in standards bodies. A comparable percentage of cryptography engineers do that as do software engineers with software standards bodies.
I think you're reading too much into this. I only mean that the cohort of cryptography engineers I meet are the ones in IETF. I have no intent to imply they are a majority, or even significant against the number of mathematicians who work at NSA, GCHQ, DSD &c.
If the ratios are similar then sure, there are hundreds (thousands) who don't. The point of substance is that these people can be asked what they think about software and systems. I haven't asked for a while.
"Don't take your laptop or phone into China" is a completely standard and unremarkable piece of security advice for anyone in our field. I'm not sure how it ties into your point.
I've been to China 7 times in the last 20 years and have never been placed under any such constraint. No employer policy or travel advisory has ever been imposed on me or recommended to me. I was never issued a burner phone or laptop.
One meeting I went to in Nepal, the attendee from a US company had a personal minder (with gun) and car. That's about as close as I've come to a heightened sense of need facing attendance at standards and policy meetings internationally.
So maybe to YOU this is unremarkable. My anecdata is that it's more said than done. However, some people who work in these fields are placed under specific requirements. Perhaps it goes to your seniority and importance; I am glad I've never been held to that burden in 42 years.
Just for more anecdotes: every tech company I've worked for that has reason to travel to China has had a blanket policy (all employees, independent of role or seniority) requiring burner laptops and phones when traveling there.
That doesn’t relieve one entirely of fears of betrayal. American 501c3 law allows nonprofits to act like ordinary for-profit corporations in many ways. Such organizations cannot generate a profit (which has a specific legal meaning) but they can amass money to pay as much as they want in salaries to their leadership under employment contracts, or use that money for things that none of the original supporters saw as part of the mission. The community’s insight into finances is limited to peeking into periodic tax filings instead of having insights into (and any oversight over) day-to-day operation.
I have been active in two organizations that are American 501c3s where the staff, who enjoyed a high-paying position and continually pushed the board for pay raises, explored putting advertising on the organization’s website and selling user data as a way to raise even more funds. So, users can be the product even with 501c3s.