Mailpile. Despite what anyone tells you, end to end encrypted email is not possible in webmail a world [sic]. The first precondition for developing a usable and forward secure email protocol is a usable mail client, and I currently believe that Mailpile is our best shot at that.
From: mailpile
A modern, fast web-mail client
I am honestly confused. It sounds like Moxie is saying a webmail client is not the answer, but then he recommends a webmail client? I'm not trying to be snarky; I'm genuinely curious.
I use "webmail" to refer to a remote hosted web interface. GMail, Yahoo Mail, Hotmail, riseup.net, etc. This is the dominant way that people access email, and it's not possible to secure well because of the "webapp crypto problem."
Mailpile, on the other hand, is a locally hosted MUA that happens to use your web browser as the UI. I think it's a great idea, leveraging the UI properties of a web browser, but with everything running locally.
All development of a new secure email protocol has been stymied for the past 13 years by webmail. It is not possible to provide end-to-end encryption if you don't perform that encryption on the client side, and in the webmail world there is no "client."
I'm excited about Mailpile because it could be what gives us a usable local MUA, which is the precondition to deploying a nice, modern, usable, end-to-end encryption protocol.
I figure that if this is explained to me then surely the solution should present itself at the same time :)
Even if Sergey is a stand up guy, this setup means that the government can force him to break his promise at any time, whereas if he had put out a discrete set of software versions, particularly if they were open source with a public source control, he could plausibly tell the government that what they were asking was impossible.
And if a web site suddenly switches to a new public key, the browser should do the same kind of thing as it does for expired SSL.
It should be relatively easy to create a browser extension that does this in the meantime.
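The core of such an extension is a trust-on-first-use pin check. A minimal sketch (function and store names are illustrative, not from any real extension API):

```python
import hashlib

def check_pin(pin_store, host, der_cert):
    """Trust-on-first-use: remember the SHA-256 fingerprint of a host's
    certificate the first time we see it, and flag any later change."""
    fp = hashlib.sha256(der_cert).hexdigest()
    pinned = pin_store.get(host)
    if pinned is None:
        pin_store[host] = fp  # first visit: pin this fingerprint
        return "pinned"
    # a mismatch would trigger the same interstitial as an expired cert
    return "ok" if pinned == fp else "mismatch"
```

A real extension would persist `pin_store` to disk and surface "mismatch" as a full-page warning, just like the expired-certificate flow.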
Then what I'd like to see is a mail service that sends its source unminified, and then publishes the same code (with signature) on its server. That way you could easily verify that you were getting the canonical version of the code (and not a special compromised version that an attacker inserted for users on a special list), and anyone could look and see if it was doing something fishy (or broken).
In that case, why do we trust e-commerce? Are we stupid to trust e-commerce?
I mean, we all get our software from somewhere. Why should I trust a security update from Apple, Microsoft, or Canonical for instance ...
You generally don't trust code updates, which is one reason you do them infrequently; every time you update code there's an opportunity for someone who has corrupted the update process to take over your machine.
Is this true anymore? So much stuff auto-updates I barely know what goes on these days, and it seems pretty frequent. Between Firefox auto-updates, OS X updates, MS Word critical updates, etc., I would be surprised if a week goes by without something important being updated.
Say I don't trust code updates, which is why I choose to run Ubuntu, because I like its central package management system. Is it entirely infeasible to leverage that update mechanism to enable end-to-end crypto communication in the browser, or are these entirely separate issues? Is it your contention that the browser is not the correct platform for end-to-end crypto communication?
edit: it's ok - you needn't reply, I've read some of your other posts and I get that you'd tell me that there are DOM considerations as well.
The difference is that it is very hard to specifically target someone via an OS update. It is very easy to specifically target a web app user:
Now, if you were forced to log in or to otherwise uniquely identify yourself before you received OS updates, this would be different.
Because it's your operating system, and you can't realistically read and compile the patches every time (even if you have the sources). If your operating system is against you, you've utterly lost, so your best bet is to trust them while relying on 100000 eyes to find bogus patches (an open-source OS).
well, many don't trust it, with good reason, and use temporary credit cards (sorry, can't remember the correct name for that but I hope it's clear enough)
Locally hosted web apps are on the rise (Mailpile, Camlistore, etc.) and remembering which app runs on which port is neither user-friendly nor scalable. More so if you start to consider multiple users on the same machine.
Maybe there's a need for a usable reverse proxy just for local web apps?
It would also be neat if browsers could speak HTTP over some IPC that isn't TCP on some random port. Maybe UNIX sockets in ~/.run? This would delegate read/write permissions to the OS.
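Python's standard library can already serve HTTP over a UNIX socket today with a small shim, so the OS's file permissions decide who may connect. A sketch (the socket path and class names are illustrative):

```python
import http.server
import socketserver

class UnixHTTPServer(socketserver.UnixStreamServer):
    """HTTPServer variant bound to a filesystem socket instead of a
    TCP port on localhost."""
    def get_request(self):
        request, _ = super().get_request()
        # BaseHTTPRequestHandler expects a (host, port) client address;
        # a UNIX socket peer has none, so supply a placeholder.
        return request, ("local", 0)

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a local web app\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# e.g.: UnixHTTPServer("/home/alice/.run/mailpile.sock", Hello).serve_forever()
```

The missing piece is exactly the one the comment points at: browsers don't know how to dial such a socket, so today you'd still need a TCP reverse proxy in front of it.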
From above: "it's not possible to secure well because of the 'webapp crypto problem.'"
I REALLY hate these sort of platitudes because they sound authoritative with no real basis. "Not possible" is a very strong statement. One, as a matter of fact, that I am working on a solution to.
The so-called "webapp crypto problem" you refer to is the fact that you cannot trust the provider not to change the source on you at will to initiate an attack. This can be dealt with by using hashes to identify the piece of code that has been received. This hash is then looked up by multiple verifying nodes, which will confirm the signature. These nodes can confirm the signature by looking at the source and matching it with the hash. This way you move the authority from the single issuer to the set of verifiers. Now, if the code is open source, any individual can verify the verifiers.
This is a general overview of the system that can solve the "webapp crypto problem." Yes, there are details missing, but this should be enough to show you that it is indeed possible.
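The client-side check under that scheme might look like this (a sketch of my reading of the proposal; the quorum rule and names are my assumptions, not a spec):

```python
import hashlib

def code_is_verified(served_code, verifier_reports, quorum=3):
    """Hash the code actually received, then count how many independent
    verifier nodes vouch for exactly that hash.  Authority moves from
    the single issuer to the set of verifiers."""
    local_hash = hashlib.sha256(served_code).hexdigest()
    vouches = sum(1 for reported in verifier_reports if reported == local_hash)
    return vouches >= quorum
```

The point of the quorum: a targeted attack now has to corrupt several independent verifiers, not just the one server you happen to be talking to.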
I'm not convinced the "webapp crypto problem" can be solved without changes to browsers.
Imagine this scenario. You get a plugin from your distro's repository; you have encrypted, sig-checking, hash-checking mechanism in apt or rpm or whatever. It is open source/Libre, maintained and audited by competent crypto people, uses well-vetted mechanisms in the code, etc..
And what this does is run native code to encrypt your message, after prompting for a passphrase to unlock your private key. It provides an editing window so plaintext won't go into the browser. Then after editing, you encrypt, and the plugin pastes the encrypted text, in, say, ASCII form, into the text field in the webmail application.
The correspondent of course has the same plugin and uses it for decryption. You exchange public keys with your correspondents by a side channel.
(Edit: Obviously, you can do this today, minus the GUI; it's easy enough to run a GPG command, use a text editor, paste manually)
This would be a non-starter on vendor-captive smartphones and tablets, of course, and proprietary OS, as such systems are fundamentally unsecurable. But it might be viable for laptops, desktops and anywhere you can have root with Linux or BSD.
The metadata problem is much harder.
 like pine
Mailpile is an MUA, not an MTA.
We can make it happen; if this new thing is attractive enough, people will do it, just as they have put up with Windows for several decades now.
It could be a simple Raspberry Pi box that has all the stuff ready-made: just enter wifi credentials and whatever, and run. The box has to be in white and silver, because then who wouldn't want one? To show it off like a status symbol. A sleek little box in a corner; "my own email," people could say.
Mailpile is free software, a web-mail program that
you run on your own computer, so your data stays
under your control.
Does that sound crazy to anyone else? It makes me think the authors have a hammer (i.e. web development skills) and therefore think everything is a nail (i.e. a webpage).
If I have to install software anyway, I'd much rather it be a full fledged native client. One that looks, feels, installs, uninstalls, and is configured just like all the other native programs on my machine. I had such a program in the late 90s/early 2000s. It even had support for SMIME and PGP.
Having tabs, back buttons, the ability to open things in new browser windows, the ability to bookmark and copy-and-paste links to different views of an app are all things that I like about web apps that aren't universal in native apps.
There's nothing wrong with making a web mail client this way. Many people already access mail via their browser, so this is not drastically different. It also means they don't have to develop and test three different codebases.
I imagine they will get to the point of having the client wrapped for the platform so the user doesn't have to fudge around with it and can just click the icon and have it open in their browser (or the client's browser.)
What about Silent Circle's involvement in Dark Mail? They came under criticism in the past year for not being open source.
Sure, they have Phil Zimmermann, but I'm curious whether he is already too focused on his own business to fully contribute to Dark Mail, compared to, say, some eager new hackers willing to focus on this full-time. Do Ladar and Phil have the focus/ability to create an entirely new OSS email protocol?
He's quite right that there are several steps where the server must "avert its eyes" (this is a good way to explain it) to keep the plaintext password, decrypted private key, and resulting plaintext email safe.
But still, if the server averts its eyes at those points, then once the user has logged out of webmail the email is again safely stored, and (as claimed) even an NSA-compromised Lavabit can't access it until the user signs in again (at which point a server modified to capture the password or private key could).
Well... except for the loophole where the emails were transmitted via SSL without perfect forward secrecy. In that case, anyone who managed to capture that SSL-encrypted traffic can decrypt it after the fact if they can get the SSL private key from Lavabit's servers.
And honestly, that was the weakest point. Once Levison shut down Lavabit (preventing Snowden from sending in his password again), Snowden's emails were safely locked away, except for that last loophole.
What this all suggests is that an open source version of Lavabit could actually be more valuable than the original service, as long as SSL is configured for perfect forward secrecy.
I.e., set up & secure your own email server (or let a trusted person do it for you), with code that verifiably averts its eyes at the critical moments, and leaves your email history safely encrypted when you're not accessing it. If you ever suspect your server may be at risk or has been compromised, you simply don't sign in again.
A private Lavabit seems like a pretty solid solution to me, and certainly far better than throwing up your hands and going with gmail and friends.
[minor edits for clarity]
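Configuring SSL for forward secrecy, as suggested above, is mostly a matter of cipher selection. For example, in nginx (a sketch; real deployments should take protocol and cipher lists from a current best-practice guide):

```nginx
# Prefer ephemeral (EC)DHE key exchange, so captured traffic cannot be
# decrypted later even if the server's long-term private key is seized.
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES128-GCM-SHA256;
```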
"Just trust the server to avert its eyes" is untenable. It's untenable because it's unnecessarily risky. We as a community can do better than to use someone else's design full of holes merely because it seemed to work.
Crypto systems seem to work until they don't. And when they stop working, it's likely you'll never realize. But your adversary will.
My main point above was that while the original Lavabit did require you to trust Lavabit (who could be legally compelled to start logging passwords...), a roll-your-own version would shift that trust burden from a US company to you or whoever sets up your server.
I'm not claiming it's the final solution -- just that it would be significantly better than nothing.
There's value in continuing to push for full solutions; but that doesn't mean there's no value in options like this, just as yes, you're a fool if you rely on security by obscurity, but that's not the same as saying that it can't add to your security in a real world situation.
Actually, your proposal has negative value, because the danger is you might actually go on to implement the broken design and trick people (along with yourself) into believing it's trustworthy.
a roll-your-own version would shift that trust burden from a US company to you or whoever sets up your server.
You've shifted the burden, but shifting it away from commercial pressure is almost always a bad idea. Now instead of having a team of people thinking about security issues 24/7, paying strict attention to their server configurations and minimizing their attack surface, you have only yourself. You may be capable, but most people aren't. And even the very best of us make mistakes.
Once your server is breached, the security offered by this design drops to nil. Compromising the server compromises the security. That's a fatal flaw. It's no accident that all modern cryptosystems are based around the idea of "Here's your secret key. Don't let it get stolen." It's the strongest guarantee we have. It's incredible that it's even possible to get such a strong security guarantee: "As long as you don't let your private key get stolen or get MITM'd, it's impossible for anyone to eavesdrop on you." That's incredible! Governments for thousands of years have been wishing for such a thing, and now our generation finally has it, because we live in the future... and you're going out of your way to give it up.
Your design is literally "transmit your secret key to the server while hoping it's still under control of friendlies." This kind of thinking is dangerous precisely because it tries to frame blind faith / hope / "probably won't happen to me" as a security pillar. But it's not a pillar. You can't trust hope. Your trust is the very first thing any adversary will subvert. In fact, if the cryptosystem is designed properly, adversaries won't have any realistic route of attack short of physically compromising the boxes you're receiving secret messages on. By fooling yourself into believing in the myth of "better than nothing", you've opened up an attack vector for the adversary. If you were to use a proper cryptosystem, then the adversary wouldn't be able to attack you. And since you're opening doors for the adversary, it would not be unfair to characterize that as "you're doing the adversary's job for them."
I apologize for the negativity. Usually when people pick apart an idea, they're expected to present a better alternative. In this case I don't know what the better solution is, because it hasn't been invented yet. But you're talking about a cryptosystem. Cryptosystems fail silently, because adversaries break them without informing their victim. So all it takes is one misstep to completely lose: the adversary will be able to intercept everything, and you'll be none the wiser. By transmuting the trust guarantee from "don't lose your secret key" to "trust this central server," it exposes dozens of attack vectors. Every vector that leads to a server breach is now a vector that can subvert you.
Please enlighten me.
The server "averts its eyes", hashes the password and compares it to a stored hash to check it. If you're lucky. If you're less lucky, your password is just in cleartext in the database. Note: if a website can send you a "password reminder", as many of them can, this is the case.
Furthermore, your data won't normally even be encrypted (or with some, e.g. Dropbox, they will be encrypted with keys that are available to the server even without your password).
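The "lucky" case looks roughly like this, using PBKDF2 from Python's standard library (a minimal sketch; the iteration count is illustrative):

```python
import hashlib
import hmac
import os

def store_password(password: str):
    """Server keeps only a random salt and a slow hash,
    never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # constant-time comparison avoids leaking how many bytes matched
    return hmac.compare_digest(candidate, digest)
```

Note the unavoidable catch the parent comment points at: even in this lucky case, the server still sees the plaintext password on every login, however briefly.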
Hosted Lavabit was a flawed system -- they were vulnerable to the NSA forcing them to start logging passwords/private keys, and they were vulnerable to the NSA capturing all of their SSL traffic then demanding their SSL private key. They were also vulnerable to a malicious employee or other person with legitimate access to the server code who snuck in a bit of logging code.
But they were still far more secure than just about any other web application you'll encounter. If they had configured their SSL for perfect forward secrecy, the NSA could have even confiscated their servers but would have been unable to get any user's emails. They could have installed any code they wanted on the servers, but still would only have been able to break into accounts where the users actively signed in beyond that point.
If someone managed to steal a data backup from Lavabit, it would not have revealed any data. That's not true of almost any other site.
That's part of why I find it frustrating when I try to point out the value of a private, fixed up Lavabit and am scorned for advocating an imperfect solution. Well, yeah! But it'd be miles ahead of where your email is now....
The only other authentication mechanism I can think of is one where your password is somehow used to generate a key pair. The server encrypts a session password with your public key and sends it to you. You decrypt it with your private key and enter it. Hmm... not a bad idea.
> a strong security guarantee: "As long as you don't let your private key get stolen or get MITM'd, it's impossible for anyone to eavesdrop on you." That's incredible! Governments for thousands of years have been wishing for such a thing, and now our generation finally has it, because we live in the future... and you're going out of your way to give it up.
I'm keen on getting a system that makes that promise as well; but HOW can I get that today for my email? That is the practical problem I'm addressing.
Your other comments about blind faith, etc. are somewhat out of context. There is a secret key, and it needs to be kept safe or your stored email can be compromised. Whether the private key is on your laptop or on your server, you cannot guarantee it is perfectly safe; if either computer is airgapped it's pretty useless for reading your email.
Certainly, you don't trust to blind faith that your server is secure; but you don't trust to blind faith that your laptop is, either.
If you set up a private Lavabit, in your favor, you have a private key that's encrypted with a key that's not stored on your server, so someone who gains read access to your server still cannot actually compromise you, and someone with full access still cannot compromise you until you sign in again. Also in your favor, nowadays it's fairly well known how to lock down/harden a single-purpose server so that it would be very difficult to compromise; you'd basically need only 3 ports open, ever, you can disable root SSH and enable private-key auth only, etc. It's simpler than securing a laptop that you use for a billion different purposes.
Against you are the facts that you don't keep the server with you (probably -- unless it lives at your house), and it's a more visible target -- it must be findable because that's where your emails are delivered.
Your same arguments about your expertise in securing your server also apply to securing your own laptop. Using your own arguments, if you read email on a laptop that you also use for web browsing, you're "opening doors for the adversary", aren't you?
> *In this case I don't know what the better solution is, because it hasn't been invented yet.*
This is the real problem. I'm no Edward Snowden; I'm not an NSA target and don't expect to be. I want to secure my email because I think everyone should. Are you saying I should wait another X years before using email?
A private Lavabit is the best solution I see right now -- I totally agree that anyone should deploy any solution with their eyes open (even if you keep the private key locally only, that does not guarantee security either...). But that said, I can imagine a server image and simple set of instructions that would enable someone to set up a private Lavabit that would be a better solution than anything I am able to set up myself, certainly better than the original Lavabit, and far better than what most people use to store email.
It's amazing (and amazingly satisfying) how much of this debate one can simply ignore when one uses ssh to log into an account and run pine (or elm).
A lot of this just becomes irrelevant.
Did you know that not one intercompany email at rsync.net has ever traversed any network ? It's just a local copy operation ... and no browser has ever touched them.
I get that Lavabit was fundamentally flawed, but I don't know about this part. Lavabit saying they can't read your email seems analogous to any website that requires a password saying they can't read your password, because it is hashed. That's an important and reasonable claim, right? It means that, at the very least, all the passwords/emails can't be downloaded in bulk and read immediately.
You don't know for sure what hashing methods are being used on any given site, but to say it doesn't matter at all.... is kind of like saying the operators should just leave all of your passwords in plaintext in the database because they could intercept them at log in anyway.
Recent (in the past year) court rulings have decided that passwords in memory are accessible, even if your software normally throws them away -- so you could be legally compelled to implement interception of those. (IANAL, and this assumes that I understood the things others wrote about these...) Sure, it's likely only in some circuits, but I'd be surprised if other judges did not rule similarly.
A safer system would be where YOU create your own key pair, and only send your public key to the secure mail provider. You know that your e-mail, your text, etc is never in cleartext on the remote system, which means that even if that system is completely compromised, all an attacker is getting is encrypted copies of your communications. (Well, and cleartext metadata, since you need that for sending mail.)
In such a system, you know that encryption is happening, because you are doing it on your computer before sending bits to the server. (You'd also need a way to exchange keys in a way which doesn't require trusting the secure server not to be MITM-ing you.)
Even that's likely not fully safe, but it's very different from having the server avert its eyes and pretend you never sent it plaintext keys/credentials.
If your server receives email for you over SMTP, you are trusting the server not to log a copy before encrypting, trusting that there is no intruder on the server, trusting that someone (like the NSA) is not logging traffic between servers, and trusting the sender's machines to the same.
Similarly when you send email in a way that can be read by your recipient's provider. You have to encrypt for an individual, as with PGP, for there to be meaningful security, at which point your provider's "secure" practices are only covering a bit of metadata, some of which will be leaked when communicating the message.
The problem with PGP is that it has a complicated trust model, poor client integration, and no forward secrecy. The first two may be fixable via better user interfaces (which includes breaking from traditional webmail), but forward secrecy would need protocol support that is in conflict with the asynchrony email currently enjoys.
Think of a site that says your password is secure. When you call them on it, they say "well, we do store your passwords in plaintext, but we use SSL to transfer them."
SSL is enough (I think?) to prevent a random attacker from snooping your traffic to get your _credentials_. However, now imagine a judge gives them a subpoena saying, "We need Dylan's password. You receive it in plaintext over a secure tunnel, and you must give it to us." There's no wiggle room for that kind of request, and then a court-backed attacker would be able to do things like use your password as evidence, and probably even use it to try to gain access to other things of yours.
A way to prevent a court-ordered harvesting of your credentials would be for the service to have your public key, and require you to cryptographically sign something as part of the login process: Your secret stays secret on your end.
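The flow described above, as a sketch. Python's stdlib has no public-key signing, so HMAC over a shared secret stands in for the asymmetric primitive here; in a real system the client would sign the nonce with its private key and the server would verify with the stored public key, so nothing secret would be shared at all:

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Server sends a fresh random nonce each login attempt,
    so responses can't be replayed."""
    return os.urandom(32)

def client_sign(client_secret: bytes, nonce: bytes) -> bytes:
    # Stand-in for "sign the nonce with your private key";
    # the secret itself never goes over the wire.
    return hmac.new(client_secret, nonce, hashlib.sha256).digest()

def server_verify(verifier: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(verifier, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Even in this degraded shared-secret form, a court order to the server yields only the verifier and old nonce/response pairs, never the credential typed at login time.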
Back to what you are trying to protect. There are many things that can be secure, and from different types of attackers. We would ideally like to be able to keep our credentials (keys, passwords) secure from attackers both black-hat and police-hat, and we would like to also be able to keep the contents of our communications secret from those same entities. Most services safeguard your credentials against everyone except the court, and try to only protect your data similarly.
Yes, because passwords are insecure even if you follow "best practices." Here is a straightforward attack:
Actually, as with encrypted email, the cryptography research community already knows how to solve that problem:
You type in your username, it downloads a payload, then you type in your password and it is decrypted locally. Your password never leaves your machine.
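A toy illustration of that shape. The PBKDF2 derivation is real stdlib; the XOR "cipher" is purely illustrative (a real system would use an authenticated cipher):

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # Runs on the client; the password itself is never transmitted.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy stream 'cipher' for illustration only: XOR against a
    hash-derived keystream.  Encrypt and decrypt are the same op."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Server stores: salt + xor_keystream(derive_key(pw, salt), mailbox_bytes).
# Client downloads the blob and decrypts locally with the same derivation,
# so the server only ever holds ciphertext.
```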
It's been attempted, but the issue of storage remains the most bothersome. A mailbox can be pretty big, and having it distributed over the network is difficult. Not to mention spamming problems.
Maybe some day we'll find the right formula. But I think the who-owns-the-private-key problem is a bigger priority.
Not for nothing, but Usenet is a distributed email system. Yes, most people use it as a forum or a file transfer system, but once upon a time it was a way to send email. One downside was that people had to locally find a path for routing their mail through the network, though I suspect that with modern techniques that would be irrelevant. Storage is not an issue if people can download their mail. Privacy is achieved with public key encryption, authentication with digital signing.
The real issue is not spam (which is already manageable with modern spam filters), but the fact that you need to download your mail and store it yourself. That does not really mesh with how people are using email these days. This is, in my view, the big stumbling block to strong encryption -- people are frustrated by systems that prevent them from reading their mail on their friends' computers (or kiosks, etc.).
I'll summarize what you might see as HN antagonism in this piece as "refinement of the current digital trends will only make worse appear better". If the digital utopia (another ill-defined term) refers to current-trend network panopticon, then I surely and emphatically agree.
But computers are faithful servants, nothing more. They are currently recapitulating existing hierarchies -- this is how We The Hackers have commandeered them. Who wants to write a distributed system when so much in our tool-belts makes client/server architectures a comparative breeze. It's no surprise that on the first try we've made our servants into centralization machines, into pyramid builders.
The network effects -- for or against hierarchy -- of most (maybe all) previous tech is hard-wired. The steam engine's effects, etched in steel, support hierarchy only to the point thermodynamics and Mr. Carnot will allow. Radio and television are inherently hierarchical, supporting one-way broadcast on account of the physical limits of electromagnetic transmission. There are a myriad of other technologies to be evaluated by these criteria, and I think Lewis Mumford has done a pretty thorough job of it.
As for our digital servants -- they aren't hard-wired. Decentralization may be non-trivial today, but when it works, it persists as long as the medium. Bittorrent isn't going away anytime soon, and DHTs are here to stay.
So by all means, leave the digital utopia you've been sold so far. Most popular fiction utopias were strictly controlled hierarchies anyway.
Let's re-wire our servants to decentralize. We can fight the panopticon with the same silicon we used to build it. For in the end, the universe allows encryption.
So in the end, isn't that more of a will problem than a technical one? DarkMail would obviously face the same adoption problem, unless it's somehow much easier to set-up for both the e-mail providers and the user.
Besides that, I think they proposed an extra security layer to encrypt the metadata, too - wouldn't that be possible for a PGP-based system, too?
Dark Mail is intended to be a new protocol, not just a new Lavabit (which would mean, theoretically, that it could be point-to-point secure).
Ladar NEVER made the claim that he re-invented email; the people here who say he misled people are out of their minds.
Isn't this because if they did, they could be detected by any user? And the CA could lose their CA status in browsers for improperly issuing a certificate?
also, I always found it slightly dishonest that the free tier they used to provide featured no special encryption, given that their stated reason for existence is to provide secure communication
This thread led me down the rabbit hole to your quest as a maniac sailor, in the epic Hold Fast. I must say, as a fellow romantic - this was a great piece of work. I was left inspired to seek out the "impossible." I recommend it to you all!
I think you did much justice to the art of sailing, the beautiful world of the ocean and the spirit of the human heart. Thank you so much!
> There is no way to ever prove or disprove whether any encryption was ever happening at all, and whether it was or not makes little difference.
That is the whole point in open sourcing the code!
Unfortunately, that would be DRM, which evokes knee-jerk cries of "Evil!" The point here is that DRM is not fundamentally evil. The particular way that lots of companies want to use it and slip it into everyone's machine under the radar is most certainly bad. However, there are situations where it would actually be useful and help protect individual rights. (In particular, when it is used by individuals as a tool to protect their own interests.)
(Yes, I know I'm preaching to the choir, but this is really for 3rd party readers.)
No, it wouldn't. DRM means someone other than the hardware owner restricts what the hardware can do. If you're the owner and control all the relevant keys, the setup enhances rather than removes security - the opposite of DRM.
Also I don't think the concept would work. Suppose you have something like a TPM chip and the so-called "trusted computing" scheme - except that the hardware owner has the ability to replace the "attestation key" at will. This would remove the "evil" quality of the TC scheme, which relies on a vendor or corporation acting similarly to a CA, keeping something mathematically related to the Attestation key, and concealing it from the hardware owner.
Now as the server owner, you can remotely verify it's still running the software you specified. But without that third party role, you can't prove this to anyone else! And to the extent you could, you would have to point would-be users to the third party, which could "sell out" or use its power to foist treacherous software, or refuse to sign yours, etc. - IOW, right back to the evils of the TC plan.
Why doesn't it include someone voluntarily giving up what the hardware can do?
> Now as the server owner, you can remotely verify it's still running the software you specified. But without that third party role, you can't prove this to anyone else!
Why couldn't the license holder of the software take this role?