A Critique of Lavabit (thoughtcrime.org)
245 points by tptacek on Nov 5, 2013 | 118 comments

I have two alternate recommendations:

Mailpile. Despite what anyone tells you, end to end encrypted email is not possible in a webmail world. The first precondition for developing a usable and forward secure email protocol is a usable mail client, and I currently believe that Mailpile is our best shot at that.

From [0]: "Mailpile: A modern, fast web-mail client"

I am honestly confused. It sounds like Moxie is saying a webmail client is not the answer, but then he recommends a webmail client? I'm not trying to be snarky; I'm genuinely curious.

0: http://www.mailpile.is/

I don't think we've quite worked out the language here yet.

I use "webmail" to refer to a remote hosted web interface. GMail, Yahoo Mail, Hotmail, riseup.net, etc. This is the dominant way that people access email, and it's not possible to secure well because of the "webapp crypto problem."

Mailpile, on the other hand, is a locally hosted MUA that happens to use your web browser as the UI. I think it's a great idea, leveraging the UI properties of a web browser, but with everything running locally.

All development of a new secure email protocol has been stymied for the past 13 years by webmail. It is not possible to provide end-to-end encryption if you don't perform that encryption on the client side, and in the webmail world there is no "client."

I'm excited about Mailpile because it could be what gives us a usable local MUA, which is the precondition to deploying a nice, modern, usable, end-to-end encryption protocol.

Could someone please please please explain what this "webapp crypto problem" thing is. I think of a browser as a client. Isn't the javascript done client-side? Why is everybody saying that the browser (javascript) is a broken platform for crypto? Have I even characterised what the issue is correctly? I don't even know.

I figure that if this is explained to me then surely the solution should present itself at the same time :)

Let's say Bob wants to communicate with Alice, and he doesn't want Sergey to be able to read his message, but Sergey writes the software that they want to use to communicate.

In a peer to peer setup, Bob and Alice only need to trust a discrete piece of software they download from Sergey at a given point in time. Maybe that software is open source so they can audit it and thereafter have confidence in it. But if Sergey is instead releasing his software as a javascript program to be run in a web browser, then they need to trust Sergey each and every time they run the program, because they are downloading the program anew each and every time they want to use it.

Even if Sergey is a stand up guy, this setup means that the government can force him to break his promise at any time, whereas if he had put out a discrete set of software versions, particularly if they were open source with a public source control, he could plausibly tell the government that what they were asking was impossible.

So the question becomes one of modifying the browser Javascript delivery mechanism to allow a more secure & discrete versions channel that would require user intervention. Or could there be another possible way?
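One shape such a channel could take is trust-on-first-use pinning: the browser (or an extension) records a hash of the code the first time it's downloaded and demands user intervention whenever it changes. A minimal sketch in Python, where the URLs and source bytes are hypothetical stand-ins for what a real extension would fetch and persist:

```python
# Sketch: trust-on-first-use pinning of downloaded code, in the style
# of SSH host keys. A real implementation would live in the browser or
# an extension and persist the pins across sessions.
import hashlib

pins = {}  # url -> hex digest recorded on first download

def code_is_trusted(url, source):
    """Pin the code's hash on first sight; flag any later change."""
    digest = hashlib.sha256(source).hexdigest()
    if url not in pins:
        pins[url] = digest  # first use: trust and remember
        return True
    return pins[url] == digest  # any change needs user intervention
```

A mismatch wouldn't necessarily mean an attack (it could be a legitimate update), which is exactly why such a channel would need explicit user intervention rather than silent re-trust.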

There's already a mechanism in place for that in the form of javascript plugins/extensions/scripts, which have to be explicitly installed. There are still problems, because downloaded javascript can run in the same context and subvert the installed javascript. I believe tptacek has written a blog post elaborating on the problems with that.

In a larger sense, I would push back against the web browser as a general purpose OS. We already have a few battle tested OSes whose designers have put great thought and effort into these problems. If you insist on javascript, there are even node.js programs, albeit generally installed outside of the usual OS specific mechanisms. If you insist on HTML/CSS for layout and javascript for programming, I believe there are toolkits based on WebKit, Chromium and IE that let you create a standalone program embedding the respective browser engines. You can even do what Mailpile is doing and embed a web server in your client application, using your web browser as a client to the client acting as a server (though this last seems a little Rube Goldberg-esque to me). But in any event, the result is downloaded and installed like a full fledged program, rather than treating the exercise as though you were going to a special website.

Web mail is one of those things that really, really makes sense though. I don't use it on my computer, but I'm glad it's there when I need to check my mail from any other device.

And if I understand it right, with the Mailpile kind of "webmail" you can't actually do that; it's not webmail at all. It's just running on your local computer, and the mail is stored on your local computer.

I'm just gonna pick it up...

It seems like what we need is code signing for web pages and their assets. If you sign the app, you still have to trust the people making it, but you don't have to trust the server it comes from. This puts you in no worse a position than an update to your non-web mail client.

And if a web site suddenly switches to a new public key, the browser should do the same kind of thing as it does for expired SSL.

It should be relatively easy to create a browser extension that does this in the meantime.

Then what I'd like to see is a mail service that sends its source unminified, and then publishes the same code (with signature) on its server. That way you could easily verify that you were getting the canonical version of the code (and not a special compromised version that an attacker inserted for users on a special list), and anyone could look and see if it was doing something fishy (or broken).

If your browser gets JavaScript crypto from webmail.example.com every time you visit webmail.example.com then there's nothing stopping webmail.example.com from serving malicious JavaScript crypto that steals your keys or unencrypted data. Even though the JavaScript runs locally, the code is supplied by webmail.example.com. There's a discussion of this and a few other issues here: http://www.matasano.com/articles/javascript-cryptography/

JavaScript in web browsers also has a few other issues, such as side-channel timing attacks and the lack of control of memory.

Ahhh. I see. But of course, how dim of me.

In that case, why do we trust e-commerce? Are we stupid to trust e-commerce?

Am I right in saying, though, that if the javascript has been signed, the browser could trust it, assuming the browser can trust webmail.example.com?

I mean, we all get our software from somewhere. Why should I trust a security update from Apple, Microsoft, or Canonical for instance ...

E-commerce doesn't rely on Javascript cryptography.

You generally don't trust code updates, which is one reason you do them infrequently; every time you update code there's an opportunity for someone who has corrupted the update process to take over your machine.

A Javascript application might need to update itself several times per second across a single execution of itself.

> You generally don't trust code updates, _which is one reason you do them infrequently_ [emphasis added]

Is this true anymore? So much stuff auto-updates I barely know what goes on these days, and it seems pretty frequent. Between Firefox auto-updates, OS X updates, MS Word critical updates, etc., I would be surprised if a week goes by without something important being updated.

Would there be a way of hooking important Javascript blobs into the OS update/store/packaging mechanism or am I being completely dense?

Say I don't trust code updates, which is why I choose to run Ubuntu, because I like its central package management system. Is it entirely infeasible to leverage that update mechanism to enable end-to-end crypto communication in the browser, or are these entirely separate issues? Is it your contention that the browser is not the correct platform for end-to-end crypto communication?

edit: it's ok - you needn't reply, I've read some of your other posts and I get that you'd tell me that there are DOM considerations as well.

Are you noticing how hard it is to reason through the security model of Javascript crypto code? How many different interactions there are you'd need to account for? That's a big part of the problem, and it's a problem that simply doesn't exist in the same way for native code.

Dang, fell asleep there mid-conversation :/

I am noticing that it is unexpectedly difficult to reason through the security model of Javascript crypto code. And you sure are patient, and I thank you for bringing about that realisation. It is beginning to dawn on me that it is amazing how _happily_ we allow any random site to go ahead and use our CPUs to do _God knows what_ as soon as we visit their site. That's rather trusting of us when you think about it.

But we gotta. Because why? Because dynamic content supposedly; it was easier to have Turing-complete Javascript than figure out how to make HTML/CSS dynamic. Never mind that a generic VM approach should have been taken if that's what you're gonna do, and let random site-designer Jo(sephin)e choose the language they like hacking with rather than create yet another language that we're all going to bitch and moan about. And you can tell that the assembler for the Web / VM approach should have been taken because that's what Javascript is becoming. Exhibit A: ASM.js

And at the time we should have figured out that in addition to sandboxing we also needed a security model that would cater for end-to-end secure (anonymous?) communication. Pity we couldn't see 20 years down the road. Now we're stuck with Javascript (which I actually like, don't get me wrong) and GMail (which I'm regretting that I use, nowadays). Sigh.

"It is beginning to dawn on me that it is amazing how _happily_ we allow any random site to go ahead and use our CPUs to do _God knows what_ as soon as we visit their site"

That's a very different issue from JavaScript cryptography though. Allowing random sites to use your CPU is the whole purpose of the world wide web - it takes CPU cycles to render static HTML, after all. The issue here is trusting that the browser sandbox is good enough to prevent that code doing anything malicious outside of the context of the browser. Browsers are pretty good at that these days.

"I mean, we all get our software from somewhere. Why should I trust a security update from Apple, Microsoft, or Canonical for instance"

The difference is that it is very hard to specifically target someone via an OS update. It is very easy to specifically target a web app user, because the server knows exactly who is asking on every request.


Now, if you were forced to log in or to otherwise uniquely identify yourself before you received OS updates, this would be different.
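The targeting asymmetry is easy to see in a sketch: a coerced web app can key its response on the authenticated user, so only the target ever receives the malicious build, while everyone auditing the site still sees clean code. All names here are hypothetical:

```python
# Sketch of the attack described above: a coerced webmail server
# serves a backdoored script to one targeted account only, so
# audits by everyone else still observe the clean version.
CLEAN_JS = "/* normal crypto code */"
BACKDOOR_JS = "/* exfiltrates the private key */"
TARGETS = {"snowden@example.com"}

def serve_app_js(logged_in_user):
    # The server knows exactly who is asking, on every request.
    if logged_in_user in TARGETS:
        return BACKDOOR_JS
    return CLEAN_JS
```

An OS vendor pushing the same signed update artifact to everyone has no equivalent per-user fork in the delivery path, which is the point the comment above is making.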

> Why should I trust a security update from Apple, Microsoft, or Canonical for instance

Because it's your operating system and you can't realistically read and recompile the patches each time (even if you have the sources). If your operating system is against you, you've utterly lost, so your best bet is to trust the vendor and rely on 100,000 eyes to find bogus patches (an open source OS).

> why do we trust e-commerce? Are we stupid to trust e-commerce?

well, many don't trust it, with good reason, and use temporary credit cards (sorry, can't remember the correct name for that but I hope it's clear enough)

And here's the thread taking that post apart:


(A bit off-topic)

Locally hosted web apps are on the rise (Mailpile, Camlistore, etc.) and remembering which app runs on which port is neither user-friendly nor scalable. More so if you start to consider multiple users on the same machine.

Maybe there's a need for a usable reverse proxy just for local web apps?

It would also be neat if browsers could speak HTTP over some IPC that isn't TCP on some random port. Maybe UNIX sockets in ~/.run? This would delegate read/write permissions to the OS.
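As a sketch of that permissions point: binding a local service to a UNIX socket instead of a TCP port means ordinary file modes decide who may connect. The temporary path below is a stand-in for something like the suggested ~/.run:

```python
# Sketch: a local web app bound to a UNIX socket rather than
# "TCP on some random port", so the OS's file permissions gate
# access. The path is a throwaway stand-in for ~/.run.
import os
import socket
import stat
import tempfile

sock_path = os.path.join(tempfile.mkdtemp(), "mailpile.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
os.chmod(sock_path, 0o600)   # only the owning user can connect
server.listen(1)

mode = stat.S_IMODE(os.stat(sock_path).st_mode)
server.close()
```

A browser that spoke HTTP over such a socket would get multi-user isolation for free, since another user on the machine simply can't open the socket file.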

One time I hacked chromium to skip the socket and send the HTTP directly to the embedded python wsgi app. In the end we couldn't use it because we needed some 64 bit only code running behind the wsgi and chromium only builds 32 bit on windows. Not that it would really be ideal either. Your app is too easily confused for a legit chromium window.

From the article: "Despite what anyone tells you, end to end encrypted email is not possible in a webmail world."

From above: "it's not possible to secure well because of the 'webapp crypto problem.'"

I REALLY hate these sorts of platitudes, because they sound authoritative with no real basis. "Not possible" is a very strong statement, and one that, as a matter of fact, I am working on a solution to.

The so called "webapp crypto problem" that you refer to is the fact that you cannot trust the provider not to change the source on you at will to initiate an attack. This can be dealt with by having hashes to identify the piece of code that has been received. This hash is then looked up by multiple verifying nodes, which will confirm the signature. These nodes can confirm the signature by looking at the source and matching it with the hash. This way you move the authority from the single issuer to the set of verifiers. Now, if the code is open source, any individual can verify the verifiers.

This is a general overview of the system that can solve the "webapp crypto problem." Yes, there are details missing, but this should be enough to show you that it is indeed possible.
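Under the stated assumptions (independent verifiers that have audited the open source and vouch for its hash), the scheme sketches out roughly like this; the in-process verifier functions are hypothetical stand-ins for network nodes:

```python
# Sketch of the quorum scheme described above: hash the received
# code, ask several independent verifiers whether they vouch for
# that hash, and accept only if enough of them agree.
import hashlib

KNOWN_GOOD = hashlib.sha256(b"audited source v1.0").hexdigest()

def make_verifier(good_digest):
    # Stand-in for a remote node that has audited the source.
    return lambda digest: digest == good_digest

def code_is_trusted(source, verifiers, quorum):
    digest = hashlib.sha256(source).hexdigest()
    votes = sum(1 for v in verifiers if v(digest))
    return votes >= quorum

verifiers = [make_verifier(KNOWN_GOOD) for _ in range(3)]
```

The hard parts the overview leaves out are exactly the ones the replies raise: how the browser learns the hash of what it just ran, and how you know the verifiers saw the same bytes you did.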

Surely the hashing solution you propose can only be implemented as an enhancement to browsers? If you have decentralised "verifiers" how can you be sure that the version they most recently verified is the same code as your browser just downloaded?

I'm not convinced the "webapp crypto problem" can be solved without changes to browsers.

Why not a plugin?

Imagine this scenario. You get a plugin from your distro's repository; you have encrypted, sig-checking, hash-checking mechanism in apt or rpm or whatever. It is open source/Libre, maintained and audited by competent crypto people, uses well-vetted mechanisms in the code, etc..

And what this does is run native code to encrypt your message, after prompting for a passphrase to unlock your private key. It provides an editing window so plaintext won't go into the browser. Then after editing, you encrypt, and the plugin pastes the encrypted text, in, say, ASCII form, into the text field in the webmail application.

The correspondent of course has the same plugin and uses it for decryption. You exchange public keys with your correspondents by a side channel.

(Edit: Obviously, you can do this today, minus the GUI; it's easy enough to run a GPG command, use a text editor, paste manually)

This would be a non-starter on vendor-captive smartphones and tablets, of course, and proprietary OS, as such systems are fundamentally unsecurable. But it might be viable for laptops, desktops and anywhere you can have root with Linux or BSD.

The metadata problem is much harder.

web hosted MUA < curses client over SSH [1]

[1] like pine

I agree for geeks, but not for "normal" people.

As I understand it, Ladar's plan for Lavabit is to open source it (and implement the Dark Mail protocol) so you can run your own instance of it... exactly like Mailpile.

There's a huge difference between running your own mail server and running a mail client. Most people are not well positioned to run their own mail server.

Mailpile is an MUA, not an MTA.

People weren't well positioned to run their own web browser either.

We can make it happen. If this new thing is attractive enough, people will do it, just like they've put up with Windows for several decades now.

It could be a simple Raspberry Pi box that has all the stuff ready-made; just enter wifi credentials and whatever and run. The box has to be in white and silver, because then who wouldn't want one? To show it off like a status symbol. A sleek little box in a corner. "My own email," people could say.

Thanks for clearing that up for me.

From their IndieGogo[1]:

  Mailpile is free software, a web-mail program that
  you run on your own computer, so your data stays
  under your control.
[1] http://www.indiegogo.com/projects/mailpile-taking-e-mail-bac...

So the POP client is being built on top of an HTTP server? And I access it via a web browser pointed at localhost?

Does that sound crazy to anyone else? It makes me think the authors have a hammer (i.e. web development skills) and therefore think everything is a nail (i.e. a webpage).

If I have to install software anyway, I'd much rather it be a full fledged native client. One that looks, feels, installs, uninstalls, and is configured just like all the other native programs on my machine. I had such a program in the late 90s/early 2000s. It even had support for SMIME and PGP.

Honestly, on a desktop I'd almost rather use an HTML/JavaScript UI than a native UI. On the machine I'm using right now, I have the option of using iCal, Mail.app and Twitter's native desktop client. I find myself using Google Calendar, Gmail and Twitter's website instead.

Having tabs, back buttons, the ability to open things in new browser windows, the ability to bookmark and copy-and-paste links to different views of an app are all things that I like about web apps that aren't universal in native apps.

My biggest fear with the "mobile" trend is losing all these navigation abilities.

If you're the type of person who can actually get through the install process,[0] you're the type of person who doesn't care about having to access it via localhost.

There's nothing wrong with making a web mail client this way. Many people already access mail via their browser, so this is not drastically different. It also means they don't have to develop and test three different codebases.

I imagine they will get to the point of having the client wrapped for the platform so the user doesn't have to fudge around with it and can just click the icon and have it open in their browser (or the client's browser.)

[0]: https://github.com/pagekite/Mailpile#setting-up-the-basic-co...

You're also one XSS vulnerability away from losing all of your private communication to an attacker.

The biggest advantage is cross compatibility. Setting up an HTTP server in any mainstream language is very easy compared to the task of creating native UIs for each OS. Look at how much hate Swing, GTK and even Mozilla's XUL got before they went to great efforts to use the native UIs.

Qt? I know it is not 100% native UI but "Beautiful UI" is not the main selling point of this I think.

I believe he is referring to a web-mail service hosted by a third-party having inherent insecurity. With mailpile, you run the service on your own system.

I am not familiar with mailpile, but I imagine it uses client-side implementation of PGP with automatic public key exchange.

Great analysis. This is mainly why I didn't donate to Dark Mail, even though I fully supported Mailpile and Lavabit's legal defense.

What about Silent Circle's involvement in Dark Mail? They came under criticism in the past year for not being open source.

Sure, they have Phil Zimmermann, but I'm curious whether he is already too focused on his own business to be able to fully contribute to Dark Mail, compared to, say, some new eager hackers willing to focus on this full-time. Do both Ladar and Phil have the focus/ability to create an entirely new OSS email protocol?

Isn't he glossing over some actual value in a private Lavabit-like setup?

He's quite right that there are several steps where the server must "avert its eyes" (this is a good way to explain it) to keep the plaintext password, decrypted private key, and resulting plaintext email safe.

But still, if the server averts its eyes at those points, once the user has logged out of webmail, the email is again safely stored, and (as claimed) even an NSA-compromised Lavabit can't access it until the user signs in again (at which point a modified server could capture the password or private key).

Well... except for the loophole where the emails were transmitted over plain SSL, NOT using perfect forward secrecy. In that case, anyone who managed to capture that SSL-encrypted traffic can decrypt it after the fact if they can get the SSL private key from Lavabit's servers.

And honestly, that was the weakest point. Once Levison shut down Lavabit (preventing Snowden from sending in his password again), Snowden's emails were safely locked away, except for that last loophole.

What this all suggests is that an open source version of Lavabit could actually be more valuable than the original service, as long as SSL is configured for perfect forward secrecy.
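The "configured for perfect forward secrecy" part comes down to allowing only ephemeral-key-exchange cipher suites, so a later seizure of the server's long-term private key can't decrypt previously captured traffic. A sketch of that policy using Python's ssl module (an illustration of the cipher policy, not Lavabit's actual configuration):

```python
# Sketch: restrict a TLS server context to forward-secret
# (ephemeral ECDH) cipher suites. With these, session keys are
# discarded after use, so stealing the server's private key later
# does not decrypt recorded traffic.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM")   # ephemeral key exchange only
names = [c["name"] for c in ctx.get_ciphers()]
```

(TLS 1.3 suites, which may also appear in the list, are forward secret by construction.)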

I.e., set up & secure your own email server (or let a trusted person do it for you), with code that verifiably averts its eyes at the critical moments, and leaves your email history safely encrypted when you're not accessing it. If you ever suspect your server may be at risk or has been compromised, you simply don't sign in again.

Marlinspike is right that the best solutions are in sticking with email clients (not webmail... unless you want to dig into the problems of real encryption/decryption in JavaScript, and verifying the JS you've just downloaded). But -- those solutions don't exist yet, and may not exist for years to come.

A private Lavabit seems like a pretty solid solution to me, and certainly far better than throwing up your hands and going with gmail and friends.

[minor edits for clarity]

Lest anyone seriously consider this, keep in mind that the success of cryptography is that you don't need any "blind faith". Properly designed crypto systems are those which don't force you to lean on any pillar that might give way unexpectedly.

"Just trust the server to avert its eyes" is untenable. It's untenable because it's unnecessarily risky. We as a community can do better than to use someone else's design full of holes merely because it seemed to work.

Crypto systems seem to work until they don't. And when they stop working, it's likely you'll never realize. But your adversary will.

"Unnecessarily risky" depends on what options you have available to you. The problem here (as I mentioned) is that a more solid solution is probably still years away -- and even then, email is a very difficult thing to secure end-to-end, so the possibility of user error exposing an email you imagined private will not go away for a very long time.

My main point above was that while the original Lavabit did require you to trust Lavabit (who could be legally compelled to start logging passwords...), a roll-your-own version would shift that trust burden from a US company to you or whoever sets up your server.

I'm not claiming it's the final solution -- just that it would be significantly better than nothing.

There's value in continuing to push for full solutions; but that doesn't mean there's no value in options like this, just as yes, you're a fool if you rely on security by obscurity, but that's not the same as saying that it can't add to your security in a real world situation.

that doesn't mean there's no value in options like this

Actually, your proposal has negative value, because the danger is you might actually go on to implement the broken design and trick people (along with yourself) into believing it's trustworthy.

a roll-your-own version would shift that trust burden from a US company to you or whoever sets up your server.

You've shifted the burden, but shifting it away from commercial pressure is almost always a bad idea. Now instead of having a team of people thinking about security issues 24/7, paying strict attention to their server configurations and minimizing their attack surface, you have only yourself. You may be capable, but most people aren't. And even the very best of us make mistakes.

Once your server is breached, the security offered by this design drops to nil. Compromising the server compromises the security. That's a fatal flaw. It's no accident that all modern cryptosystems are based around the idea of "Here's your secret key. Don't let it get stolen." It's the strongest guarantee we have. It's incredible that it's even possible to get such a strong security guarantee: "As long as you don't let your private key get stolen or get MITM'd, it's impossible for anyone to eavesdrop on you." That's incredible! Governments for thousands of years have been wishing for such a thing, and now our generation finally has it, because we live in the future... and you're going out of your way to give it up.

Your design is literally "transmit your secret key to the server while hoping it's still under control of friendlies." This kind of thinking is dangerous precisely because it tries to frame blind faith / hope / "probably won't happen to me" as a security pillar. But it's not a pillar. You can't trust hope. Your trust is the very first thing any adversary will subvert. In fact, if the cryptosystem is designed properly, adversaries won't have any realistic route of attack short of physically compromising the boxes you're receiving secret messages on. By fooling yourself into believing in the myth of "better than nothing", you've opened up an attack vector for the adversary. If you were to use a proper cryptosystem, then the adversary wouldn't be able to attack you. And since you're opening doors for the adversary, it would not be unfair to characterize that as "you're doing the adversary's job for them."

I apologize for the negativity. Usually when people pick apart an idea, they're expected to present a better alternative. In this case I don't know what the better solution is, because it hasn't been invented yet. But you're talking about a cryptosystem. Cryptosystems fail silently, because adversaries break them without informing their victim. So all it takes is one misstep to completely lose: the adversary will be able to intercept everything, and you'll be none the wiser. By transmuting the trust guarantee from "don't lose your secret key" to "trust this central server," it exposes dozens of attack vectors. Every vector that leads to a server breach is now a vector that can subvert you.

I agree with your views, but I am a little confused by something (probably due to my lack of experience in web dev). When I log in to any site, don't I always transmit my password over SSL? I know good security systems match the hash of my password. But that means they store a hash, right? Or does that mean I also transmit a hash? I think we transmit the actual password, because this way, if someone breaks into their servers and steals the passwords, they'd only have the hashes and not the passwords. And that point would be moot if the server took the hash as input.

Please enlighten me.

Yes, with probably no exceptions, all of the sites you currently sign into will at some point have your username and cleartext password in memory on the server.

The server "averts its eyes", hashes the password and compares it to a stored hash to check it. If you're lucky. If you're less lucky, your password is just in cleartext in the database. Note: if a website can send you a "password reminder", as many of them can, this is the case.
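The lucky case described above, where the server "averts its eyes", hashes, and compares, can be sketched like this (the salt size and iteration count are illustrative, not a recommended policy):

```python
# Sketch: the server receives the cleartext password, immediately
# salts and key-stretches it, and stores/compares only the digest.
# The cleartext exists in memory just long enough to be hashed.
import hashlib
import hmac
import os

def store_password(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest          # this is all the database keeps

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

A breach of the database then yields only salted digests, not passwords, which is exactly the difference between the lucky and unlucky sites in the comment above.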

Furthermore, your data won't normally even be encrypted (or with some, e.g. Dropbox, they will be encrypted with keys that are available to the server even without your password).

Hosted Lavabit was a flawed system -- they were vulnerable to the NSA forcing them to start logging passwords/private keys, and they were vulnerable to the NSA capturing all of their SSL traffic then demanding their SSL private key. They were also vulnerable to a malicious employee or other person with legitimate access to the server code who snuck in a bit of logging code.

But they were still far more secure than just about any other web application you'll encounter. If they had configured their SSL for perfect forward secrecy, the NSA could have even confiscated their servers but would have been unable to get any user's emails. They could have installed any code they wanted on the servers, but still would only have been able to break into accounts where the users actively signed in beyond that point.

If someone managed to steal a data backup from Lavabit, it would not have revealed any data. That's not true of almost any other site.

That's part of why I find it frustrating when I try to point out the value of a private, fixed up Lavabit and am scorned for advocating an imperfect solution. Well, yeah! But it'd be miles ahead of where your email is now....

Yes, so why in the world is my parent calling it a major design flaw? It's the norm! Sure, it's not perfect. Sure, a better option is to encrypt it yourself and send it to the server, but as discussed endlessly in the comments here, that can NOT be done in webmail currently (not securely, at least). So the common authentication mechanism is that you tell the server your password. Well, you have to tell it to SOMEONE to show that you know it. So how is "transmitting your password" a flaw?

The only other authentication mechanism I can think of is that your password is somehow used to generate a key pair. The server encrypts a session password with your public key and sends it to you. You decrypt with your private key and enter it. Hmm... not a bad idea.
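A symmetric cousin of that idea is a challenge-response login: both sides hold a key derived from the password, the server sends a random challenge, and the client answers with an HMAC, so the password itself never crosses the wire. Real protocols (SRP, for instance) are far more careful; this is only a sketch:

```python
# Sketch of password-derived challenge-response authentication.
# The server stores only the derived key, issues a fresh random
# challenge per login, and never sees the password itself.
import hashlib
import hmac

def derive_key(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def client_response(password, salt, challenge):
    return hmac.new(derive_key(password, salt), challenge, "sha256").digest()

def server_check(stored_key, challenge, response):
    expected = hmac.new(stored_key, challenge, "sha256").digest()
    return hmac.compare_digest(expected, response)
```

Note this only protects the password in transit; a server that holds the derived key can still read any mail that key unlocks, which is why it doesn't resolve the thread's larger argument.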

I do understand your viewpoint, and agree with it at a high-level, but I'm working from a practical viewpoint.

> a strong security guarantee: "As long as you don't let your private key get stolen or get MITM'd, it's impossible for anyone to eavesdrop on you." That's incredible! Governments for thousands of years have been wishing for such a thing, and now our generation finally has it, because we live in the future... and you're going out of your way to give it up.

I'm keen on getting a system that makes that promise as well; but HOW can I get that today for my email? That is the practical problem I'm addressing.

Your other comments about blind faith, etc. are somewhat out of context. There is a secret key, and it needs to be kept safe or your stored email can be compromised. Whether the private key is on your laptop or on your server, you cannot guarantee it is perfectly safe; if either computer is airgapped it's pretty useless for reading your email.

Certainly, you don't trust to blind faith that your server is secure; but you don't trust to blind faith that your laptop is, either.

If you set up a private Lavabit, in your favor, you have a private key that's encrypted with a key that's not stored on your server, so someone who gains read access to your server still cannot actually compromise you, and someone with full access still cannot compromise you until you sign in again. Also in your favor, nowadays it's fairly well known how to lock down/harden a single-purpose server so that it would be very difficult to compromise; you'd basically need only 3 ports open, ever, you can disable root SSH and enable private key auth only, etc. It's simpler than securing a laptop that you use for a billion different purposes.

Against you are the facts that you don't keep the server with you (probably -- unless it lives at your house), and it's a more visible target -- it must be findable because that's where your emails are delivered.

Your same arguments about your expertise in securing your server also apply to securing your own laptop. Using your own arguments, if you read email on a laptop that you also use for web browsing, you're "opening doors for the adversary", aren't you?

> *In this case I don't know what the better solution is, because it hasn't been invented yet.*

This is the real problem. I'm no Edward Snowden; I'm not an NSA target and don't expect to be. I want to secure my email because I think everyone should. Are you saying I should wait another X years before using email?

A private Lavabit is the best solution I see right now -- I totally agree that anyone should deploy any solution with their eyes open (even if you keep the private key locally only, that does not guarantee security either...). But that said, I can imagine a server image and simple set of instructions that would enable someone to set up a private Lavabit that would be a better solution than anything I am able to set up myself, certainly better than the original Lavabit, and far better than what most people use to store email.

For a scenario like the one Lavabit was trying to address, trusting the service provider to "avert its eyes" isn't good enough. If I'm Edward Snowden, I need to assume that my service providers are all actively hostile to my interests. Crypto that doesn't protect me in that environment can't be trusted to protect me in an "non-hostile" environment, either.

Right; I was talking about the private version of Lavabit, where you aren't obliged to trust a service provider.

Even so, how many people would implement their "private Lavabit" on AWS or Linode, utterly defeating the purpose?

I typically don't go on about this, and I suspect it's dismissed by most, but ...

It's amazing (and amazingly satisfying) how much of this debate one can simply ignore when one uses ssh to log into an account and run pine (or elm).

A lot of this just becomes irrelevant.

Did you know that not one intercompany email at rsync.net has ever traversed any network ? It's just a local copy operation ... and no browser has ever touched them.

I don't understand. When you run Pine from an SSH session, the mail you send is still being sent in plaintext over SMTP.

I assume you mean intracompany email. And you haven't changed to mutt? Mutt is like elm with fifteen years of clever development by people who actually like mail.

I just switched to mutt and it's freaking awesome!

I like nmh...

So does this mean that all intracompany email is generated and read on one local host that everyone SSHs into?

>>There is no way to ever prove or disprove whether any encryption was ever happening at all, and whether it was or not makes little difference.

I get that Lavabit was fundamentally flawed, but I don't know about this part. Lavabit saying they can't read your email seems analogous to any website that requires a password saying they can't read your password, because it is hashed. That's an important and reasonable claim, right? It means at the very least all the passwords/emails can't be downloaded in bulk and read immediately.

You don't know for sure what hashing methods are being used on any given site, but to say it doesn't matter at all... is kind of like saying the operators should just leave all of your passwords in plaintext in the database because they could intercept them at log in anyway.

No. It matters because courts have ruled that your system cannot claim "averting its eyes" as a defense against providing intercepted data.

Recent (in the past year) court rulings have decided that passwords in memory are accessible, even if your software normally throws them away -- so you could be legally compelled to implement interception of those. (IANAL, and this assumes that I understood the things others wrote about these...) Sure, it's likely only in some circuits, but I'd be surprised if other judges did not rule similarly.

A safer system would be where YOU create your own key pair, and only send your public key to the secure mail provider. You know that your e-mail, your text, etc is never in cleartext on the remote system, which means that even if that system is completely compromised, all an attacker is getting is encrypted copies of your communications. (Well, and cleartext metadata, since you need that for sending mail.)

In such a system, you know that encryption is happening, because you are doing it on your computer before sending bits to the server. (You'd also need a way to exchange keys in a way which doesn't require trusting the secure server not to be MITM-ing you.)

Even that's likely not fully safe, but it's very different from having the server avert its eyes and pretend you never sent it plaintext keys/credentials.
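A minimal sketch of the "encrypt on your own machine, upload only ciphertext" idea, using nothing but the Python stdlib. This is a toy: the key is derived from a passphrase that never leaves the client, and the keystream construction (HMAC-SHA256 in counter mode) stands in for a real vetted cipher; a real system would use PGP or an equivalent library, and would also authenticate the ciphertext.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: HMAC-SHA256 run in counter mode as a PRF."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_locally(passphrase: str, plaintext: bytes) -> dict:
    """Runs on the client; only the returned dict is ever uploaded."""
    salt, nonce = os.urandom(16), os.urandom(16)
    key = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1)
    ks = keystream(key, nonce, len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, ks))
    return {"salt": salt, "nonce": nonce, "ciphertext": ct}

def decrypt_locally(passphrase: str, blob: dict) -> bytes:
    """Also runs on the client; the server never sees the passphrase."""
    key = hashlib.scrypt(passphrase.encode(), salt=blob["salt"],
                         n=2**14, r=8, p=1)
    ks = keystream(key, blob["nonce"], len(blob["ciphertext"]))
    return bytes(a ^ b for a, b in zip(blob["ciphertext"], ks))
```

The point is structural: a server holding only `blob` has nothing to hand over, no matter how it is compelled.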

> you know that encryption is happening, because you are doing it on your computer before sending bits to the server

If your server receives email for you over SMTP, you are trusting the server not to log a copy before encrypting, trusting that there is no intruder on the server, trusting that someone (like the NSA) is not logging traffic between servers, and trusting the sender's machines to the same.

Similarly when you send email in a way that can be read by your recipient's provider. You have to encrypt for an individual, as with PGP, for there to be meaningful security, at which point your provider's "secure" practices are only covering a bit of metadata, some of which will be leaked when communicating the message.

The problem with PGP is that it has a complicated trust model, poor client integration, and does not provide forward secrecy. The first two may be fixable via better user interfaces (which includes breaking from traditional webmail), but forward secrecy would need protocol support that is in conflict with the asynchrony email currently enjoys.

I think the point is that while it does matter in some ways, it doesn't matter for security issues Lavabit claimed it addressed.

Think of a site that says your password is secure. When you call them on it, they say "well, we do store your passwords in plaintext, but we use SSL to transfer them."

I don't understand your analogy. 100%* of sites with passwords use the same method, transmitting the plain text to the server where it is hashed and discarded. Do you want 0%* of sites to claim that passwords are secure, even if they use a well-configured bcrypt and follow best practices?

*rounded to the nearest percent

It's a difference in who you care about it being secure from, and what it is you're trying to protect.

SSL is enough (I think?) to prevent a random attacker from snooping your traffic to get your _credentials_. However, now imagine a judge gives them a subpoena saying, "We need Dylan's password. You receive it in plaintext over a secure tunnel, and you must give it to us." There's no wiggle room for that kind of request, and then a court-backed attacker would be able to do things like use your password as evidence, and probably even use it to try to gain access to other things of yours.

A way to prevent a court-ordered harvesting of your credentials would be for the service to have your public key, and require you to cryptographically sign something as part of the login process: Your secret stays secret on your end.
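A sketch of that sign-to-log-in flow using the third-party `cryptography` package and Ed25519 (variable names here are illustrative, and a real protocol would bind the challenge to a session and a timestamp):

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Client side: generate a key pair; only the public half is ever
# registered with the service.
client_key = Ed25519PrivateKey.generate()
server_stored_pubkey = client_key.public_key()

# Server side: issue a fresh random challenge for each login attempt.
challenge = os.urandom(32)

# Client side: prove possession of the secret by signing the challenge.
# The private key never leaves the client, so a subpoena to the server
# yields nothing usable for impersonation.
signature = client_key.sign(challenge)

# Server side: verify against the stored public key
# (verify() raises InvalidSignature on failure).
try:
    server_stored_pubkey.verify(signature, challenge)
    print("login ok")
except InvalidSignature:
    print("login rejected")
```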

Back to what you are trying to protect. There are many things that can be secure, and from different types of attackers. We would ideally like to be able to keep our credentials (keys, passwords) secure from attackers both black-hat and police-hat, and we would like to also be able to keep the contents of our communications secret from those same entities. Most services safeguard your credentials against everyone except the court, and try to only protect your data similarly.

Well I would deal with that problem in an entirely different way. Courts should not have the power to compel either party in a private communication to keep records of what was said/transmitted.

"Do you want 0%* of sites to claim that passwords are secure, even if they use a well-configured bcrypt and follow best practices?"

Yes, because passwords are insecure even if you follow "best practices." Here is a straightforward attack:
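(The original example didn't survive; judging by the replies, it was along these lines.) A hypothetical login handler that follows hashing best practice, yet can trivially capture the plaintext it receives -- PBKDF2 from the stdlib stands in for bcrypt here:

```python
import hashlib
import hmac

captured = []  # what a compelled or malicious operator could quietly log

def hash_password(password: str, salt: bytes) -> bytes:
    # Server-side hashing done "right" (PBKDF2 standing in for bcrypt).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def handle_login(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # The server necessarily holds the plaintext at this moment;
    # nothing technical stops it from recording the password first.
    captured.append(password)
    return hmac.compare_digest(hash_password(password, salt), stored_hash)
```

The database only ever stores hashes, yet every password still passes through the server in the clear at login time.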


Uhh, I don't think best practices include logging the passwords people try to login with.

You are missing the point. It makes no difference if your website is following best practices if your users enter their passwords on a website that fails to do so. Maybe your users are using the same passwords in many places. Maybe they accidentally entered the password for one website when trying to log in to another because their username is the same.

I meant SSL. Edited for clarity.

"is kind of like saying the operators should just leave all of your passwords in plaintext in the database because they could intercept them at log in anyway."

Actually, as with encrypted email, the cryptography research community already knows how to solve that problem:


What about something similar to Dashlane?

You type in your username, it downloads a payload, then you type in your password and it is decrypted locally. Your password never leaves your machine.

Wouldn't Lavabit be better if all decryption were done client-side, either with JavaScript or a client-side add-on/extension? That way the only thing ever on the server is the public key. The remaining risk would be a man-in-the-middle attack, which is always an issue on the internet unless every hop is encrypted (hard to do), though internal traffic could potentially be safe since it never leaves the server, and outgoing email would also be encrypted client-side by the JavaScript/add-on/extension (with the keys also generated client-side). Yes, this would inevitably be a large client-side program, but for security it would be worth it.

This is a tempting approach, but man-in-the-middle attacks (or their equivalent: compromised or legally strong-armed servers) are the whole problem here. Any client-side logic served by a server can only be trusted as far as that server (and your communication channel to it), which in this case makes it almost useless.

There don't seem to be any serious alternatives to thick, open-source, locally installed clients. As a web aficionado and JavaScript nerd, this pains me too, but we'll have to get used to it.

Then I think it's time to look at a mail system that doesn't need servers: something built on top of the BitTorrent grid or a similar system that the government can watch all they want but won't get any information back from, and have it be completely open source. This would take out the man-in-the-middle and central-server-compromise issues, and there would be no one to legally strong-arm.

Decentralized email, eh?

It's been attempted, but the issue of storage remains the most bothersome. A mailbox can be pretty big, and having it distributed over the network is difficult. Not to mention spamming problems.

Maybe some day we'll find the right formula. But I think the who-owns-the-private-key problem is a bigger priority.

"It's been attempted, but the issue of storage remains the most bothersome. A mailbox can be pretty big, and having it distributed over the network is difficult. Not to mention spamming problems."

Not for nothing, but Usenet is a distributed email system. Yes, most people use it as a forum or a file transfer system, but once upon a time it was a way to send email. One downside was that people had to locally find a path for routing their mail through the network, though I suspect that with modern techniques that would be irrelevant. Storage is not an issue if people can download their mail. Privacy is achieved with public key encryption, authentication with digital signing.

The real issue is not spam (which is already manageable with modern spam filters), but the fact that you need to download your mail and store it yourself. That does not really mesh with how people are using email these days. This is, in my view, the big stumbling block to strong encryption -- people are frustrated by systems that prevent them from reading their mail on their friends' computers (or kiosks, etc.).

Well, this could be fixed by using something like BitTorrent Sync to let you keep your "inbox" wherever you want. All you need is the code, some storage space, and at least one of your own computers that already has the inbox to be online at the same time. It also uses a separate DHT to sync, and as long as your inbox is only megabytes in size it wouldn't be hard to read your email from a friend's computer or any other machine. But I do agree we'd want to limit the ability to spam the network, since that would load down a lot of the peers with excess mail they don't actually need. Maybe limit how many messages each node can send out: since this system would work like torrents, but requiring a private key to open, you could send mail to multiple people who each download the one message and decrypt it. You wouldn't really need multiple messages sent, so if a node is sending many, the rest of the network could identify and ignore that node.

And good point I forget that the web is basically insecure from government intrusion :[

"Deserting the Digital Utopia: Computers against Computing"[1] might be of interest to HN readers as well. The whole piece is quite antagonistic to the HN worldview, and it ends on a sort of challenge to hackers. I submitted it as a link last week and the submission fared poorly then. I mention it here because I would like to see it read and discussed by this crowd.

[1] http://crimethinc.com/texts/ex/digital-utopia.html

I suppose this article fared poorly because of its length and a touch of the vague. I think the premise of an "ideal capitalist product" is either self-contradictory or ill-defined. The analysis of the digital panopticon, and its effect on interpersonal relationships, is spot on.

I'll summarize what you might see as HN antagonism in this piece as "refinement of the current digital trends will only make worse appear better". If the digital utopia (another ill-defined term) refers to current-trend network panopticon, then I surely and emphatically agree.

But computers are faithful servants, nothing more. They are currently recapitulating existing hierarchies -- this is how We The Hackers have commandeered them. Who wants to write a distributed system when so much in our tool-belts makes client/server architectures a comparative breeze? It's no surprise that on the first try we've made our servants into centralization machines, into pyramid builders.

The network effects -- for or against hierarchy -- of most (maybe all) previous tech are hard-wired. The steam engine's effects, etched in steel, support hierarchy only to the point thermodynamics and Mr. Carnot will allow. Radio and television are inherently hierarchical, supporting one-way broadcast on account of the physical limits of electromagnetic transmission. There are myriad other technologies to be evaluated by these criteria, and I think Lewis Mumford has done a pretty thorough job of it [1].

As for our digital servants -- they aren't hard-wired. Decentralization may be non-trivial today, but when it works, it persists as long as the medium. Bittorrent isn't going away anytime soon, and DHTs are here to stay.

So by all means, leave the digital utopia you've been sold so far. Most popular fiction utopias were strictly controlled hierarchies anyway. Let's re-wire our servants to decentralize. We can fight the panopticon with the same silicon we used to build it. For in the end, the universe allows encryption.

[1] https://en.wikipedia.org/wiki/The_Myth_of_the_Machine

Disregarding the metadata problem for a moment, wouldn't it be possible for all major e-mail providers to integrate PGP in a user-friendly way, with public keys tied to their accounts (so you wouldn't need to know someone else's public key, just their e-mail address), and then do something like Ladar is proposing with the green light/red light thing for PGP to PGP email providers and for PGP to non-PGP e-mail providers?

So in the end, isn't that more of a will problem than a technical one? DarkMail would obviously face the same adoption problem, unless it's somehow much easier to set up for both the e-mail providers and the user.

Besides that, I think they proposed an extra security layer to encrypt the metadata, too. Wouldn't that be possible for a PGP-based system as well?

The providers are the people we can't trust.

The real lesson is: don't depend on someone else's computer to perform the encryption process for you. If you do, it is susceptible.

I'm unconvinced that supporting other mail applications is a better bet than supporting Ladar. Regardless of his product claims, he's in a position to fight an important political battle for the rights of all of us. That's why I support him, because politics are the battleground here, not technology.

I support his defense funds, but I don't support his products for the reason the article stated. Lavabit wrote a check they knew they couldn't cash. It wasn't the government's fault that he had the capability to compromise the promised security in the first place.

So support him by contributing to his political/legal efforts, but not contributing to or supporting his technological efforts, unless you understand the technology and really do support it.

I'd like to see a critique of the actual Dark Mail protocol. From the Kickstarter video it seems clear they're aware of the tradeoffs of a Lavabit style system, and are starting with a true end-to-end encryption protocol, with the option to "dial down" the security when necessary.

Aren't these problems with Lavabit related to the fact that they still had to support plain old SMTP coming and going?

Dark Mail is intended to be a new protocol.. not just a new Lavabit (which would mean, theoretically, that it could be point-to-point secure)

That's exactly the issue here, it's not possible to provide perfectly secure email without completely re-inventing email from the ground up, breaking compatibility with existing systems.

Ladar NEVER made the claim that he re-invented email; the people here who say he misled people are out of their minds.

Is there something to read about how Dark Mail will work? I'd be interested to contribute, help build an implementation, or add support in products I work on.

Assuming you meant plain old POP/IMAP, then yes, I think that's the root problem.

I meant SMTP as that is the protocol used between servers and that is where some encryption should really start.

> One big question is why they didn’t just get a CA to make them their own.

Isn't this because if they did, they could be detected by any user? And the CA could lose their CA status in browsers for improperly issuing a certificate?

There's that, but there's also the laziness angle. Why go with a complicated, detectable technical solution when you can send out an NSL and have someone else do the heavy lifting for you? I strongly suspect Lavabit's reaction wasn't in any way expected.

It would only be detected if the Lavabit certificate was pinned by the browser. Otherwise the browser trusts the CA.

You're right, but if even one person uses certificate pinning, they could make a post somewhere saying "hey, Lavabit's SSL certificate just changed, any thoughts?" and others may suspect that something fishy is going on. Especially if it were to occur after the NSA leaks.

If you're referring to an MITM attack, then the attacker could intercept the connection (establishing SSL under its own certificate) only when attacking the specific target. The target himself would need to notice that the certificate fingerprint changed.
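A minimal sketch of what that client-side check looks like: pin the SHA-256 fingerprint of the certificate's DER bytes and compare it on every connection. (Function names and the sample bytes are illustrative; in practice `der_cert` would come from the live TLS handshake, e.g. `ssl.SSLSocket.getpeercert(binary_form=True)`.)

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(der_cert: bytes, pinned: str) -> bool:
    """True if the presented certificate matches the pinned fingerprint.

    A False result is exactly the "hey, Lavabit's SSL certificate just
    changed" moment: the cert the MITM presents hashes differently.
    """
    return fingerprint(der_cert) == pinned
```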

The other aspect is that despite the whole reason for Lavabit's popularity in recent years being NSA stuff etc., their site freely admitted that they couldn't refuse to fulfil legal government data requests, but that users shouldn't be worried because that would only apply in the case of criminal behavior. Which, bizarrely, is the same justification many NSA-overreach supporters use: "if you're not doing anything wrong, you've got nothing to worry about."...

also, I always found it slightly dishonest that the free tier they used to provide featured no special encryption, given that their stated reason for existence is to provide secure communication

Hopefully the new window.crypto stuff could be used to create a hosted webmail service where the private key is generated in the browser and never leaves the browser.

Probably not. Maddeningly, the W3C Web Crypto project decided to define a crypto interface in terms of primitives knitted together with Javascript, so, while you can probably assume WebCrypto AES is real native AES (assuming you're not dealing with polyfills, which is a real problem for any crypto extension), you can't assume the glue code in the cryptosystem is secure --- that's left up to content-controlled Javascript to define.

moxie, great read, ty.

This thread led me down the rabbit hole to your quest as a maniac sailor, in the epic Hold Fast. I must say, as a fellow romantic - this was a great piece of work. I was left inspired to seek out the "impossible." I recommend it to you all!

I think you did much justice to the art of sailing, the beautiful world of the ocean and the spirit of the human heart. Thank you so much!

>Despite what anyone tells you, end to end encrypted email is not possible in a webmail world.

Sure it is. You just have to do crypto in JavaScript.

This is a terrible writeup.

>>There is no way to ever prove or disprove whether any encryption was ever happening at all, and whether it was or not makes little difference.

That is the whole point in open sourcing the code!

That is a dumb comment. Open sourcing the code doesn't mean you know anything at all about what's actually running on a server purporting to use the open source code.

Now, if such Open Source systems could be compiled with a mechanism that could ensure that only "blessed" executables could run, and if there was also a process where 3rd parties could compile their own executable and verify what is executing on the server, then there would be a solution to this dilemma.

Unfortunately, that would be DRM, which evokes knee-jerk cries of "Evil!" The point here is that DRM is not fundamentally evil. The particular way that lots of companies want to use it and slip it into everyone's machine under the radar is most certainly bad. However, there are situations where it would actually be useful and help protect individual rights. (In particular, when it is used by individuals as a tool to protect their own interests.)

(Yes, I know I'm preaching to the choir, but this is really for 3rd party readers.)

So you think that the FSF have the wrong angle on this one? You're saying that DRM is fundamentally ethically neutral; it's just that the use cases put forward have been broadly user-hostile. To continue, there's nothing to stop "good guys" from using DRM in a benevolent way to secure their rights. So if your reading of this is correct, then the FSF should not be waging war on DRM per se but on the many and varied freedom- and user-hostile implementations of DRM. This is a harder sell, I guess, but the distinction is an important one.

"Unfortunately, that would be DRM"

No, it wouldn't. DRM means someone other than the hardware owner restricts what the hardware can do. If you're the owner and control all the relevant keys, the setup enhances rather than removes security - the opposite of DRM.

Also I don't think the concept would work. Suppose you have something like a TPM chip and the so-called "trusted computing" scheme - except that the hardware owner has the ability to replace the "attestation key" at will. This would remove the "evil" quality of the TC scheme, which relies on a vendor or corporation acting similarly to a CA, keeping something mathematically related to the Attestation key, and concealing it from the hardware owner.

Now as the server owner, you can remotely verify it's still running the software you specified. But without that third party role, you can't prove this to anyone else! And to the extent you could, you would have to point would-be users to the third party, which could "sell out" or use its power to foist treacherous software, or refuse to sign yours, etc. - IOW, right back to the evils of the TC plan.

> No, it wouldn't. DRM means someone other than the hardware owner restricts what the hardware can do.

Why doesn't it include someone voluntarily giving up what the hardware can do?

> Now as the server owner, you can remotely verify it's still running the software you specified. But without that third party role, you can't prove this to anyone else!

Why couldn't the license holder of the software take this role?

I'm pretty confident there's no real DRM possible when the NSA (potentially) controls the hardware.

Then you'd need a camera on the system and a remote kill trigger, or a system that commits suicide when tampered with. All this would need redundancy. But this is the very, very outer edge of paranoia and extreme compromise, surely? Wouldn't a trusted-execution-path DRM-style approach be more than enough?

The court order basically told lavabit to modify the code to provide the access they were seeking. Even if the code was open sourced, you can bet that the requested modifications wouldn't have been.

If someone else held the copyright to the code and it was licensed under the AGPL, then it would be illegal to not open source such contributions. Of course, the government could work around this by providing their own mail server software or by just disregarding it.

Are you saying that releasing a similar system under an open source license will prove claims about the way their code worked in the past or just that we'd know if encryption is happening in the open source system?
