Hacker News new | comments | ask | show | jobs | submit login
OpenPGPjs has passed an independent security audit (protonmail.com)
245 points by testloop 6 months ago | hide | past | web | favorite | 122 comments



Not that it matters. They could silently replace it with a backdoored script and your browser would never tell you it happened.

And to preempt the ProtonMail rep who is probably going to respond to this comment, I know that you can run the web app on localhost. But that doesn't mean that users who don't are any more secure.


These concepts are interesting:

https://codegolf.stackexchange.com/questions/18217/javascrip...

https://bitbucket.org/geraintluff/caution.js/

The idea is that a long data-uri containing hashes and a small loader function are bookmarked. The loader won't load the corresponding javascript unless the hashes match. The user only needs to verify the javascript once, then they can rely on their bookmark containing the hashes. If the server were to swap out the javascript, the bookmark would fail to load it.


Isn't that just SRI in a bookmarklet? With SRI you give the browser a hash and if the resource does not match it does not load.


Doesn't SRI require the hashes to be in the __loaded__ HTML? I believe the parent is referring to a page which is the same, but has been compromised on the server side, meaning you can't trust the HTML, even if the server is who it says it is.


Right, but what I'm saying is that you don't need the loader. Just have a bookmarklet with HTML that contains script tags with SRI. The loader is just another step you need to trust.


That does indeed work. Pretty neat. For anyone trying, here is basically all you need:

  data:text/html,<script crossorigin="anonymous" integrity="..." src="..."></script>

In total that should be a lot smaller than the linked codegolf answer.


Oooh, that's very cool. Thanks for the links.


You might also like substack’s work on hyperboot.


If you send the data URI by email (webmail) and use it from there, someone with physical access to the computer won't be able to change it easily.


This (rightfully) comes up every time some browser-based encryption tool is posted. It seems like the desire for such tools isn’t going to go away. Is anyone working on solutions for making distribution of JavaScript applications more secure?

There’s a range of assurances you could try to provide, e.g. signatures from the author (or even third parties), prompting for updates, etc. It would likely require support from browsers.

At one point I investigated using service workers to intercept subsequent app updates to check signatures, but there was no way to prevent the service worker itself from being replaced (probably because it would be easy for a site to permanently “brick” itself in users' browsers).


> At one point I investigated using service workers to intercept subsequent app updates to check signatures, but there was no way to prevent the service worker itself from being replaced (probably because it would be easy for a site to permanently “brick” itself in users' browsers).

Cyph nailed this back in the appcache days, later migrating to service workers (and ultimately got a patent for the solution), but the implementation of the solution itself got an entire protocol (HPKP) canned by one browser.

Source: did the talk on it.



That’s a start, but if the main resource (the HTML page) can be modified it doesn’t help.

If you could require the root page to be cryptographically signed (but by whom?) and optionally prompt for updates, then we’re talking.


This was "solved" by a online crypto tool (can't remember which) that basically did HPKP suicide every 30 minutes and had a service worker that loaded cached assets on failure. So the browser would pin to a key that was deleted within minutes and then all subsequent requests would only go through the service worker until the pin expired.


cyph.com


Thanks!


It's worth noting that Cyph came up in an earlier discussion about HPKP being removed from Chrome:

https://news.ycombinator.com/item?id=15572888

As far as I'm aware that means their approach no longer works (across browsers), and, though it was a fantastic idea, it did require you to trust that they really were throwing their keys away (a trust model slightly weaker than TOFU).


We (Cyph) have been pretty disappointed in the Chrome team's decision to kill HPKP.

Paraphrasing, but IIRC the reasoning pretty much boiled down to "it's a pain to maintain and Expect-CT is kind of similar anyway" — which I think is a really weak justification for harming end user security and breaking established APIs that people depend on in production. Fingers crossed that Firefox keeps it alive!

That said, it doesn't entirely break WebSign in Chrome, just weakens it a bit further below strict TOFU. https://www.cyph.com/websign goes into detail, but WebSign has some client-side logic to validate its own hash against a signed whitelist. The major downsides to relying on this are:

1. It depends on a caching layer, not a security feature. This means that any guarantees are potentially out the window if a browser vendor decides to do something crazy for performance reasons or whatever.

2. It opens up an attack vector where it can be forcibly unpinned by filling up the user's disk and making the browser evict the cached WebSign instance.

All in all I think it's still basically fine, but shipping an optional browser extension for hardening WebSign is now a higher priority because of this.


There was a site posted on HN a while back that had an interesting take on a solution to this: they had a service-worker that checked github.com for the latest version of the app code and itself (along with the standard subresource integrity of course). That description doesn't do the system justice, as to my memory it seemed like a pretty sound system as long as your public repo remains uncompromised.

Unfortunately can't remember the name of the website nor exactly what it did...



Nope, it wasn't a library, this was a SaaS website running an open source codebase that was browsable on GitHub.


Well, it might have been Airborn.io, the Google Docs competitor that this library grew out of.


I was definitely asleep at the keyboard as it definitely was airborn.io. Thanks.


Don't shoot the messenger here, but isn't this a case for blockchain technology? You sign and make the JS public and it can't be changed?


I believe this sort of thing will be possible via the [Web Package Standard][1]

[1]: https://github.com/WICG/webpackage


A browser add-on that is manually installed (which I believe would stop any potential insecure/unintended automatic update) could check a digital signature embedded in a formatted comment inside the JS file. That is relatively easy to implement, but you will also want some sort of PKI for key distribution and revocation.


Let the uri be the signature?


The idea came to my mind that people could create websites on domains of the form [some SHA hash].example.com and reference, from the root HTML page, a file named [the same SHA hash].js. This could trigger a special mode in the browser where it checked that the JavaScript file hashed to the given value, and then refused to load any other scripts.

The bootstrap JavaScript file could contain the code needed to download more files, and to download a digitally signed list of file hashes, which it could check against a hardcoded public key. Also the browser would have to remember a flag for that domain to require this same bootstrapping process every time, to stop downgrade attacks.


The same can be said for any software that auto-updates (e.g. Chrome), no?


Attack vectors for replacing Chrome:

  - Hack Google

Attack vectors for replacing OpenPGPjs:

  - Hack the servers hosting OpenPGPjs
  - Hack the browser to inject or replace content across domains, sandboxes, other security barriers

It's a subtle difference, but delivering applications dynamically via web browsing is much more precarious than natively hosted applications.

Another way to think of it: imagine your entire Linux OS were actually just web apps with GUIs. Every time you ran 'bash', it would actually be downloaded from a remote server. And every time you used bash and it loaded some plug-in hosted on some other site, that plug-in could be compromised and could be trying to attack your OS, which, if successful, would compromise your entire host.

That doesn't happen right now because all the apps sit on your host: they aren't constantly re-acquired, and aren't constantly subject to potential third-party attacks over a wide surface area. This does sort of happen with programming language package managers like npm, pip and so on, but you can pin those versions and hashes if you're paranoid, which I don't think you can do with a browser.


You don't have to hack google to replace chrome. You just have to hack a trusted cert provider and DNS and/or BGP. Those two things are not unheard of.


They're using key pinning, so you would have to use one of Google's keys and one of Google's authorized CAs. But second, I imagine the auto-update process uses a separate (non-TLS) certificate to verify the signed binaries. Either way you have to hack Google.

It would actually be much easier to just find a vuln in Chrome that can break out of sandbox and get root.


Or hack the team who signs.


So Chrome doesn't use certificate pinning for distributing updates?


> Attack vectors for replacing Chrome:

> - Hack Google

  - NSL Google

(If you're a high-enough value target and your adversary is the US Government. But in _that_ case you've probably already lost - you might just not know it yet. I wonder if Snowden uses Chrome or lets it autoupdate?)


That's a good question. I don't think the government cares enough about Snowden to do that. Snowden's damage has been done; it can't be reversed. IMO, they'd be much more interested in preemptively shutting down all future would-be Snowdens.


That depends.

1. Most platforms these days require a signature with a key issued by the platform. In the browser you have HTTPS but that doesn’t help if the server is compromised.

2. It’s easier to target individuals (thus evading detection) if you’re serving the code to users directly (which I think is also the case with Chrome, but not Mac App Store, Linux package managers, etc)

3. Some platforms even do some amount of auditing before including software in their repositories.


Yes.


No. The risk is much lower, especially if different entities (upstream, linux distributions, ...) review the same code.


Remember when Hushmail backdoored their client-side applet to capture unencrypted emails and passwords?


It appears they were actually just hosting a web app and users were sending their decryption password to Hushmail. I don't see a reference to backdooring the Java client, though obviously since they delivered it, they could do that too. https://www.wired.com/2007/11/encrypted-e-mai/


I remember when Lavabit didn't...


Back in the days when I was 15-16, I added a simple system() call in PGP to send the plaintext message out via sendmail before encrypting it, as an experiment on my local system, and was surprised that a one-line change would completely compromise the system. I remember telling Phil Zimmermann this at a conference; I think he said something like "well, that is a real asshole move".

There's nothing to prevent all of these "secure password checker" tools from sending out your plaintext password in JSON requests as well. Furthermore, you can send fake "invalid password" messages and trick a user into typing all of his old passwords and permutations while he/she tries to "remember" the right password. One of my first root exploits for OpenBSD was taking over the console, killing login, and presenting a fake login prompt that piped its output to a real one.



No, because the provider would be in a position to change the SRI hashes. Subresource Integrity protects you against malicious CDNs and so on, but needs a non-compromised HTML page to provide correct hashes.

You could, however, probably provide a signed entry point via a WebExtension or similar, and an audit trail via a trusted distribution platform like addons.mozilla.org. Are there apps which use a mechanism like this?


> No, because the provider would be in a position to change the SRI hashes. Subresource Integrity protects you against malicious CDNs and so on, but needs a non-compromised HTML page to provide correct hashes.

Can you not trust the originating site to serve non-compromised HTML if using HSTS and a trusted local certificate store (eliminating MITM as an attack vector)?


Not if the originating site is the potential attacker, which was the initial scenario: ProtonMail sending you a page that leaks your decrypted mail to them.


There was a way to make a permanent site installation on most browsers using HTML5 appcache, which even the web host couldn't update, but that API is deprecated in favor of the service worker's Cache API.

It's not clear to me that the Cache API offers the same level of security guarantee.


> It's not clear to me that the Cache API offers the same level of security guarantee.

Last I checked it seemed there wasn’t any way to prevent the service worker itself from being updated.


If you can change out the JS payload, you can probably also swap out the HTML which supposedly "secures" it.

If the users had the option of "locking" a JS version with the Subresource Integrity attribute that they are currently using, it might help.


Wouldn't you also need to lock down all the UI code and everything that interacts with the library?


This solves a different problem. Say actor A makes a website and uses CDNs provided by actor B. This protects users from actor B changing their scripts, but not from a malicious actor A. In this case I'm presuming ProtonMail is actor A.


I don't know if ProtonMail does this or not, but presumably you could supply the JavaScript part of your web application as a browser plugin, which would render it immune to attacks on the server.


Hopefully web packages⁽¹⁾ can eventually solve this issue.

1. https://github.com/WICG/webpackage


Do either IPFS or DAT/BeakerBrowser solve this problem?

(They are both protocols for distributed web apps based on immutable content.)


A little, although it involves running your own ipfs node, something precisely 0% (rounded) of web users are going to do.


How would you implement a secure webmail such that it satisfies your complaint?


...not to mention the huge attack surface of browsers in general.

And also how the same application you are using to encrypt your mail is also running javascript from random sources when you browse websites.


...and why all the downvotes? Interestingly, if I write from other accounts I don't get all this downvoting.


The readme indicates that it can be installed via npm, so I'm not sure what your concern is.

https://github.com/openpgpjs/openpgpjs/blob/master/README.md


That doesn't have anything to do with what I said.


To be fair, what you said is only tangentially related to the posted article anyway. It is about a security audit of the OpenPGPjs library (what bastawhiz commented about), not how ProtonMail implements it (what you commented about).


I have zero trust in Proton<anything> after learning that the free ProtonVPN service is provided by a data mining company from Eastern Europe[1].

[1] https://news.ycombinator.com/item?id=17258203 (please turn on "showdead" in settings, to see the entire thread)


Please, that's not a verified claim[0], and you shouldn't trust any VPN service that isn't operated by you in the first place.

[0]: Plus, it was raised by a competitor, Private Internet Access, so it makes it even more difficult to get the facts straight.


> Please, that's not a verified claim[0], and you shouldn't trust any VPN service that isn't operated by you in the first place.

The co-founders of ProtonMail were caught providing multiple inaccurate statements about their business practices in that thread, and couldn't deny any of the facts stated by the co-founder of PIA[1].

[1] https://news.ycombinator.com/item?id=17262566


Which part of the world should any service be provided from to be trustworthy? Let me rephrase: which services are known to have never cooperated with any agency, nor to have ever been hacked by one?


... Facebook


> it was raised by a competitor, Private Internet Access

Focus on the facts. Not the messenger. Secondly, I admire your love of this discussion [1].

[1] https://news.ycombinator.com/item?id=17497912


     1. You are the user in question that is a co-founder of PIA.
     2. You are a direct competitor to Proton*.
     3. "Messengers", especially in the position of founder of a competitor, have significant bias.
     4. You have a financial interest in their failure.
     5. User protonmail has noted significant harassment regarding this issue.

I'm inclined to distrust both of you. I find that your arguments might have merit. But I also see you as a digital aggressor. I don't particularly like either.


> I'm inclined to distrust both of you.

It's neither religion nor politics. It's tech. You don't need to believe in anything you cannot verify yourself. I have checked most of the statements provided by the co-founder of PIA and found none of them to be false, even if he sometimes crossed the line of civil discussion.


Nothing you said here addressed the facts that were provided. Good luck!


I'm sure the poster was addressing "Focus on the facts. Not the messenger."

As was listed, there are plenty of good reasons to learn about the messenger, just like how looking up my comment history will show my propensity for calling this argument out. You know my ulterior motives and where I'm coming from.


Well, one thing's certain: I'll never use Private Internet Access after this ridiculous comment from you[0] not even three months ago.

[0] https://news.ycombinator.com/item?id=17261149 -- need to have "showdead" enabled in profile


I stand by that statement.


Those types of comments (yours and the general back-and-forth with ProtonMail) make the VPN industry look like it's full of sharks. It's hurting all of you. It makes you look unprofessional. Before seeing this I had a favourable impression of PIA, but not anymore.

EDIT: I'm sure the competition is intense and I'm not sure I would be able to rise above it myself but I think you need to be aware of what it looks like.


It may be full of sharks. Specifically, though, my issues are with the two companies who pretend they are privacy companies and aren’t.

They bring shame to our industry, and further, shame to our cause.

I will stand up against them every day regardless of what kind of repercussions come to me. That’s what it means to protect people’s privacy.

Cheers!


And I stand by mine that I'll never use or recommend Private Internet Access.


I agree that people should read the linked comments. After reading through it, I did not find the evidence to be as clear-cut or damning as OP seems to think. Most claims are also put forward by a co-founder of Private Internet Access, a direct competitor.


These are exactly my thoughts as well. rasengan is the co-founder and thus is very biased against ProtonMail, since they're a competitor.


ProtonVPN is no competitor to Private Internet Access in terms of the size and the number of users. If you were a co-founder of PIA, would you risk your reputation by publicly providing false accusations against a company 1/100 of the size of yours?


No, but there's an incentive to provide many small half-truths out of context to shape the narrative into one beneficial for yourself.


That's the problem. I haven't found any of his statements that were only half-true. I even discovered a conference held in Lithuania in 2017, where one of the speakers was presented as the head of B2B sales at Tesonet, working on Oxylabs[1]. It is very unlikely that ProtonMail was not aware of who it was partnering with on a free VPN service.

[1] https://web.archive.org/web/20170909183904/https://salesclub...


Furthermore, after it was pointed out by the co-founder of PIA that the CEO of Tesonet is the director of ProtonVPN UAB, the company was renamed multiple times in two months[1], with its director now hidden from public view.

[1] https://web.archive.org/web/20180818102535/https://rekvizita...


Finally, the IP blocks, which belonged to Tesonet and were used by ProtonVPN just a few months ago – despite the co-founders of ProtonMail publicly denying any technical partnership between the two[1] – now belong to ProtonVPN[2].

[1] https://news.ycombinator.com/item?id=17260847

[2] https://web.archive.org/web/20180818104256/https://bgpview.i...



> These stories were first fabricated by Private Internet Access, a competitor who has been feeling pressure from ProtonVPN lately.

This is a lie. Private Internet Access is probably the largest paid VPN provider in the world, and ProtonVPN (by Tesonet?) belongs to a short list of free VPN providers, such as Onavo VPN by Facebook[1] and Hola VPN by Luminati[2], most of which are subsidized by data mining companies. These are two completely different markets.

> We used the same legal address and nominee directors as our local partners because we still did not have our own office yet. For contractual reasons, these moves took some time. For example, ProtonLabs Skopje, our newest entity, only moved in November 2017.

ProtonVPN UAB was founded in July 2016 and was still operated from Tesonet HQ in June 2018, when this fact was made public by the co-founder of PIA. The current ProtonVPN legal address in Vilnius, Lithuania can be used by any company which agrees to pay for one workplace without any long-term obligations[3]. This means that ProtonVPN might as well still be operating from Tesonet HQ.

> ProtonVPN/ProtonMail does not, and has never used any IPs or servers from Tesonet (this can be publicly verified)

This is a lie. ProtonMail admitted to using Tesonet IPs, when presented with Whois results in June 2018[4]. Those IP blocks were later assigned to ProtonVPN.

> Proton does not share any employees (or company directors) with Tesonet. This is also a verifiable fact.

This is a lie. It is no longer possible to verify who the director of ProtonVPN is, because the company made the public record unavailable after changing its name multiple times in the last two months[5]. The last public record listed the CEO of Tesonet as the director of ProtonVPN[6], which was still true in early June 2018, when the co-founder of PIA made the fact public.

> There is little actual evidence that Tesonet does data-mining (in any case we have never used infrastructure from them).

This is a lie. There is plenty of actual evidence that Tesonet is running a data mining company, called Oxylabs[7][8], which sells access to "10+ Million Mobile IPs in Every Country and Every City in the World".

[1] https://fossbytes.com/facebook-onavo-protect-feature-vpn-tra...

[2] https://www.techtimes.com/articles/56706/20150530/if-you-are...

[3] http://bc2000.lt/en/#ofisas

[4] https://news.ycombinator.com/item?id=17261243

[5] https://web.archive.org/web/20180818102535/https://rekvizita...

[6] https://web.archive.org/web/20171017093924/http://rekvizitai...

[7] https://web.archive.org/web/20180426151621/http://oxylabs.io...

[8] https://web.archive.org/web/20170909183904/https://salesclub...


They explained it quite clearly:

> We used Tesonet as a local partner before we had an official Lithuanian subsidiary, and rented office space from them. We don't share employees, infrastructure, etc. We have had a similar temporary arrangements with local companies when we opened offices in other jurisdictions where we didn't have an official presence yet.

This type of arrangement is common in the startup world.


The section from the "About" page of Tesonet (26 Apr 2018)[1], which got removed soon after that HN thread:

"For the latest project, Tesonet is working together with an international brand from Switzerland to create a security product that helps users protect their network traffic. As part of this technical partnership, we are collaborating on datacenter and network infrastructure that can easily supply 10 Gbps worth of bandwidth to users around the world. The product is developed using the latest authentication encryption methods and the best practices in the security world."

[1] https://web.archive.org/web/20180426161609/https://tesonet.c...


I strongly resent the implied concept that being "East European" by itself could be used as a valid argument to doubt the quality or integrity of a service.


Tesonet denies to its customers[1] that it is running both a VPN service (NordVPN) and a data mining service (Oxylabs) from its HQ in Vilnius, Lithuania, even though both of these facts can be easily verified by anyone with an internet connection[2][3].

[1] https://www.reddit.com/r/VPNTorrents/comments/90jyoy/heres_w...

[2] https://web.archive.org/web/20180607165926/https://ui.awin.c...

[3] https://web.archive.org/web/20180426151621/http://oxylabs.io...


Cloud<everything> should not really be trusted for anything that requires privacy (unless you encrypt the data locally first using GPG or something similar).


This sort of racism is not acceptable.


Eastern Europe, from which ProtonVPN is operated as a legal entity without the knowledge of its users, is an entirely different jurisdiction from Switzerland in terms of privacy and data retention laws.


>Eastern Europe, from which ProtonVPN is operated as a legal entity without the knowledge of its users, is an entirely different jurisdiction from Switzerland in terms of privacy and data retention laws.

Eastern Europe is not a jurisdiction.


As an Eastern European myself, I chose not to single out Lithuania, because most of the region is now in the EU.


There is nothing wrong with data mining itself; it's a completely neutral technology. You are just throwing shade with link flooding (those who read the links find out that they don't credibly confirm what you say).

Tesonet provides all kinds of services, like hosting, software development and cybersecurity, for its customers.


> There is nothing wrong with data mining itself. It's completely neutral technology.

Tesonet's Oxylabs offers "10+ Million Mobile IPs in Every Country and Every City in the World"[1], which might explain why ProtonVPN, whose Android app is signed by Tesonet[2], is a free service. This is how Luminati, Tesonet's largest competitor in Residential Proxies, operates: it provides a free VPN service, Hola VPN, and then connects its users into a botnet[3], which is used for data mining operations.

[1] https://oxylabs.io/proxies/residential-proxies

[2] http://apkforandroid.org/com.protonvpn.android/34007825-prot...

[3] http://fortune.com/2015/05/29/hola-luminati-vpn/


It turns out that Luminati Networks Ltd sued UAB Tesonet over patent infringement in "Large-scale web data extraction products and services with residential proxy network (oxylabs.io)"[1] in July 2018.

[1] http://litigation.maxval-ip.com/Litigation/DetailView?CaseID...


> The only limitations come from the platform itself (JavaScript/web), which do not allow for side channel resistance or reliable constant time operations. Overall however this is an exceptional library for JavaScript cryptography.

How would this compare to something like WebCrypto, which I assume would be implemented in a way that allows for side channel resistance, etc.? It does seem surprising that we don't have something like a browser API version of libsodium in widespread use already.


You are confusing crypto primitives with a high-level spec like OpenPGP. OpenPGPjs uses WebCrypto and Node crypto libraries, when available, for primitives. You still need a library for the OpenPGP stuff.


Whoops, I see my mistake, thanks.


I think you're right to pick up on this "side channel resistance or reliable constant time operations" wording, actually. If the OpenPGPjs library is using WebCrypto for the primitives, then what are the non-constant time operations and JavaScript-specific side channels that have security implications? Such a claim should really be accompanied by a specific threat model.

Is the supposed threat actor a MitM that can use the timing of the packets your browser sends to work out when you stopped typing your email and when the email was sent to the server, allowing them to calculate the time taken by the encryption operation and thus infer something about the plaintext of the email?

Alternatively, is the threat actor someone running JavaScript code in another tab of the same browser, who can infer how much CPU the browser is using at any given time, with enough accuracy to reveal bits of the private key?

Perhaps they are imagining an attacker who could do both, and it would be very interesting to see a practical attack along these lines, but I still think that a decent WebCrypto implementation should make it close to impossible for an attacker to extract any useful information unless the user is sending billions of emails through the ProtonMail web client.


I also think exploiting it would be extremely difficult. IIRC, it was the NIST ECC curves, which are hard to make constant-time and do not have WebCrypto primitives. We are still going to see what we can do to address this.


Have any current protonmail users experienced denial of service from online providers solely because of their email address?


I recommend that any user of hosted mail registers their own domain.

1) No negativity towards your provider reflects on you

2) If your mail provider locks you out you can move to another

Just do a little mental simulation right now of what would happen if gmail or ms/hotmail/etc. locked your account.


Yupp. My account at a particular website was terminated. They pointed to their TOS, where "anonymous" addresses are not allowed. I wasn't even given the chance to keep the account and change the email to an "acceptable" one.


What service? Name and shame.


Not gonna reveal as it's got to do with my private life :p


Just a guess - private trackers often do this. Back in the day I got banned at couple of places because of a riseup email.


Why do they say it's anonymous? I use Proton; it's encrypted but not anonymous (unless you used the hash option someone else posted; I didn't know that was a feature).


Not yet, at around a year's worth of usage. I use a custom .space address, which I thought would bring its own issues, but it hasn't. I have SPF, DKIM, and DMARC set up with it as well.


It seems an easier approach would be to compile Go's openpgp library, or something higher-level like https://github.com/lastochkanetwork/easypgp, into wasm.


Why not use WebCrypto instead? No library needed.


You are confusing crypto primitives with a high-level spec like OpenPGP. OpenPGPjs uses WebCrypto and Node crypto libraries, when available, for primitives. You still need a library for the OpenPGP stuff.


Does OpenPGPjs use WebCrypto to create keys which are not extractable? That's the big win here if you can make it impossible for a compromised client to leak keys which were used before/after the compromise.


This also means you can't use another computer, and that your key is lost if you clear browser data. Unless you do backups, but I doubt that is standard procedure for ProtonMail users.


That's true assuming that the browser doesn't offer any way to manage that using e.g. Chrome/Firefox Sync.

What PGP really needs is a modern security model, so you'd have many device keys registered to an identity rather than requiring the risk of spreading copies around. I have, IIRC, 8 GPG subkeys currently (6 of them being Yubikeys) and every aspect of that toolchain is unacceptable in the modern era.


I've got the same setup with subkeys per Yubikey (though I had to rotate due to Infineon).

What do you mean by "device keys"? Something like forward secrecy keys for initial session setup, as used by e.g. Signal? This could be done with some effort... actually, the developers of Sequoia, a Rust OpenPGP library, are already working on making this use case easier.

Another set of patches circulating on the ML adds support for TPM-bound keys, which are non-extractable.


Thanks to the great folks at PARAGONIE, our open source platform (i.e. you can actually tell the code is always the same, and you can host it yourself) also just passed an independent security audit:

https://paragonie.com/audit/L7TtZbFoJBxR91Xg

I didn’t think it was worth it to post to HN as news, though. Perhaps I should start posting our achievements a bit more.

Like for example our Group Rides feature:

https://youtu.be/PHuYV7q7NeM


> Perhaps I should start posting our achievements a bit more.

Maybe, but don't do it in someone else's thread trying to steal the spotlight from them... Really bad taste.


How am I stealing the spotlight from them? They are on the front page, whereas mine is just a comment that's relevant to it. They still have the spotlight, the link is still there and my comment only adds to the number of comments on the story.

If anything, the comments saying that they shouldn't be trusted, etc. harm them more than my comment.

Actually, my comment should have been: I don't think being audited by a third-party firm is newsworthy; this is us being audited, and we didn't post it.


PR is hard. Replies like the one you got are a hint that you need to learn a lot more than just "This right here that I thought was not really newsworthy is totally newsworthy."

;)

(Chin up and all that. This is not intended to be in any way hostile.)



