
A Critique of Lavabit - tptacek
http://www.thoughtcrime.org/blog/lavabit-critique/
======
tnorthcutt
_I have two alternate recommendations:

Mailpile. Despite what anyone tells you, end to end encrypted email is not
possible in webmail a world [sic]. The first precondition for developing a
usable and forward secure email protocol is a usable mail client, and I
currently believe that Mailpile is our best shot at that._

From [0]: _mailpile A modern, fast web-mail client_

I am honestly confused. It sounds like Moxie is saying a webmail client is not
the answer, but then he recommends a webmail client? I'm not trying to be
snarky; I'm genuinely curious.

0: [http://www.mailpile.is/](http://www.mailpile.is/)

~~~
moxie
I don't think we've quite worked out the language here yet.

I use "webmail" to refer to a remote hosted web interface. GMail, Yahoo Mail,
Hotmail, riseup.net, etc. This is the dominant way that people access email,
and it's not possible to secure well because of the "webapp crypto problem."

Mailpile, on the other hand, is a locally hosted MUA that happens to use your
web browser as the UI. I think it's a great idea, leveraging the UI properties
of a web browser, but with everything running locally.

All development of a new secure email protocol has been stymied for the past
13 years by webmail. It is not possible to provide end-to-end encryption if
you don't perform that encryption on the client side, and in the webmail world
there is no "client."

I'm excited about Mailpile because it could be what gives us a usable local
MUA, which is the precondition to deploying a nice, modern, usable, end-to-end
encryption protocol.

~~~
igravious
Could someone please please please explain what this "webapp crypto problem"
thing is. I think of a browser as a client. Isn't the javascript done client-
side? Why is everybody saying that the browser (javascript) is a broken
platform for crypto? Have we even characterised what the issue is correctly? I
don't even know.

I figure that if this is explained to me then surely the solution should
present itself at the same time :)

~~~
DrewHintz
If your browser gets JavaScript crypto from webmail.example.com every time you
visit webmail.example.com then there's nothing stopping webmail.example.com
from serving malicious JavaScript crypto that steals your keys or unencrypted
data. Even though the JavaScript runs locally, the code is supplied by
webmail.example.com. There's a discussion of this and a few other issues here:
[http://www.matasano.com/articles/javascript-cryptography/](http://www.matasano.com/articles/javascript-cryptography/)

JavaScript in web browsers also has a few other issues, such as side-channel
timing attacks and the lack of control of memory.
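The delivery problem can be made concrete with a toy sketch (the code strings and pinned hash here are illustrative, not any real browser mechanism): an integrity check only helps if the client already holds a trusted hash of the code, and a browser re-fetching webmail JavaScript on every visit holds no such hash.

```python
import hashlib

# A client that re-fetches its crypto code on every visit can only detect
# tampering if it already holds a trusted hash of that code -- which a
# browser visiting webmail.example.com does not.
def code_is_trusted(fetched_code: bytes, pinned_sha256: str) -> bool:
    return hashlib.sha256(fetched_code).hexdigest() == pinned_sha256

good = b"function encrypt(msg, key) { /* honest code */ }"
evil = b"function encrypt(msg, key) { exfiltrate(key); }"

pin = hashlib.sha256(good).hexdigest()  # must come from somewhere trustworthy
assert code_is_trusted(good, pin)
assert not code_is_trusted(evil, pin)
```

The catch is the last line's comment: with webmail, the only party who could supply the pin is the same server supplying the code.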

~~~
igravious
Ahhh. I see. But of course, how dim of me.

In that case, why do we trust e-commerce? Are we stupid to trust e-commerce?

Am I right in saying, though, that if the javascript has been signed, the
browser could trust it, assuming the browser could trust webmail.example.com?

I mean, we all get our software from somewhere. Why should I trust a security
update from Apple, Microsoft, or Canonical for instance ...

~~~
tptacek
E-commerce doesn't rely on Javascript cryptography.

You generally don't trust code updates, which is one reason you do them
infrequently; every time you update code there's an opportunity for someone
who has corrupted the update process to take over your machine.

A Javascript application might need to update itself several times per second
across a single execution of itself.

~~~
igravious
Would there be a way of hooking important Javascript blobs into the OS
update/store/packaging mechanism or am I being completely dense?

Say I don't trust code updates, which is why I choose to run Ubuntu because I
like its central package management system. Is it entirely infeasible to
leverage that update mechanism to enable end-to-end crypto communication in
the browser or are these entirely separate issues? Is it your contention that
the browser is not the correct platform for end-to-end crypto communication?

edit: it's ok - you needn't reply, I've read some of your other posts and I
get that you'd tell me that there are DOM considerations as well.

~~~
tptacek
Are you noticing how hard it is to reason through the security model of
Javascript crypto code? How many different interactions there are you'd need
to account for? That's a big part of the problem, and it's a problem that
simply doesn't exist in the same way for native code.

~~~
igravious
Dang, fell asleep there mid-conversation :/

I am noticing that it is unexpectedly difficult to reason through the security
model of Javascript crypto code. And you sure are patient, and I thank you for
bringing about that realisation. It is beginning to dawn on me that it is
amazing how _happily_ we allow any random site to go ahead and use our CPUs to
do _God knows what_ as soon as we visit their site. That's rather trusting of
us when you think about it.

But we gotta. Because why? Because dynamic content supposedly; it was easier
to have Turing-complete Javascript than figure out how to make HTML/CSS
dynamic. Never mind that a generic VM approach should have been taken if
that's what you're gonna do, and let random site-designer Jo(sephin)e choose
the language they like hacking with rather than create yet another language
that we're all going to bitch and moan about. And you can tell that the
assembler for the Web / VM approach should have been taken because that's what
Javascript is becoming. Exhibit A: ASM.js

And at the time we should have figured out that in addition to sandboxing we
also needed a security model that would cater for end-to-end secure
(anonymous?) communication. Pity we couldn't see 20 years down the road. Now
we're stuck with Javascript (which I actually like, don't get me wrong) and
GMail (which I'm regretting that I use nowadays). _sigh_

~~~
simonw
"It is beginning to dawn on me that it is amazing how _happily_ we allow any
random site to go ahead and use our CPUs to do _God knows what_ as soon as we
visit their site"

That's a very different issue from JavaScript cryptography though. Allowing
random sites to use your CPU is the whole purpose of the world wide web - it
takes CPU cycles to render static HTML, after all. The issue here is trusting
that the browser sandbox is good enough to prevent that code doing anything
malicious outside of the context of the browser. Browsers are pretty good at
that these days.

------
dmix
Great analysis. This is mainly why I didn't donate to Dark Mail even though I
fully supported Mailpile and Lavabit's legal defense.

What about Silent Circle's involvement in Dark Mail? They came under criticism
for not being open-source in the past year.

Sure, they have Phil Zimmermann, but I'm curious if he is already too focused
on his own business to fully contribute to Dark Mail, compared to, say, some
new eager hackers willing to focus on this full-time. Do both Ladar/Phil have
the focus/ability to create an entirely new OSS email protocol?

------
jtheory
Isn't he glossing over some actual value in a private Lavabit-like setup?

He's quite right that there are several steps where the server must "avert its
eyes" (this is a good way to explain it) to keep the plaintext password,
decrypted private key, and resulting plaintext email safe.

But still, _if_ the server averts its eyes at those points, then once the user
has logged out of webmail, the email is again safely stored, and (as claimed)
even an NSA-compromised Lavabit can't access it until the user signs in again
(at which point a server modified to capture the password or private key wins).

Well... except for the loophole where the emails were transmitted in plaintext
over SSL, NOT using perfect forward secrecy. In that case, anyone who managed
to capture that SSL-encrypted traffic can decrypt it after the fact if they
can get the SSL private key from Lavabit's servers.

And honestly, that was the weakest point. Once Levison shut down Lavabit
(preventing Snowden from sending in his password again), Snowden's emails were
safely locked away, except for that last loophole.

What this all suggests is that an open source version of Lavabit could
actually be more valuable than the original service, as long as SSL is
configured for perfect forward secrecy.
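For concreteness, "configured for perfect forward secrecy" means restricting TLS to ephemeral (ECDHE/DHE) key exchange, so that the server's long-term private key cannot retroactively decrypt captured traffic. A minimal Python sketch of such a server-side policy (illustrative only, not Lavabit's actual configuration):

```python
import ssl

# Build a server-side TLS context restricted to ephemeral (ECDHE) key
# exchange, so a later theft of the server's long-term private key
# cannot decrypt previously captured traffic.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM")  # forward-secret suites only (TLS <= 1.2)

# Every remaining TLS 1.2 suite now uses ephemeral key agreement;
# TLS 1.3 suites (named TLS_*) are forward-secret by design.
for c in ctx.get_ciphers():
    assert "ECDHE" in c["name"] or c["name"].startswith("TLS_")
```

Had Lavabit's front end enforced something like this, a confiscated SSL private key would have been useless against previously recorded traffic.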

I.e., set up & secure your own email server (or let a trusted person do it for
you), with code that verifiably averts its eyes at the critical moments, and
leaves your email history safely encrypted when you're not accessing it. If
you ever suspect your server may be at risk or has been compromised, you
simply don't sign in again.

Marlinspike is right that the best solutions lie in sticking with email
clients (not webmail... unless you want to dig into the problems of real
encryption/decryption in JavaScript, and of verifying the JS you've just
downloaded). But -- those solutions don't exist yet, and may not exist for
years to come.

A private Lavabit seems like a pretty solid solution to me, and certainly far
better than throwing up your hands and going with gmail and friends.

[minor edits for clarity]

~~~
sillysaurus2
Lest anyone seriously consider this, keep in mind that the success of
cryptography is that you don't need any "blind faith". Properly designed
crypto systems are those which don't force you to lean on any pillar that
might give way unexpectedly.

"Just trust the server to avert its eyes" is untenable. It's untenable because
it's unnecessarily risky. We as a community can do better than to use someone
else's design full of holes merely because it seemed to work.

Crypto systems seem to work until they don't. And when they stop working, it's
likely you'll never realize. But your adversary will.

~~~
jtheory
"Unnecessarily risky" depends on what options you have available to you. The
problem here (as I mentioned) is that a more solid solution is probably still
years away -- and even then, email is a very difficult thing to secure end-to-
end, so the possibility of user error exposing an email you imagined private
will not go away for a very long time.

My main point above was that while the original Lavabit _did_ require you to
trust Lavabit (who could be legally compelled to start logging passwords...),
a roll-your-own version would shift that trust burden from a US company to you
or whoever sets up your server.

I'm not claiming it's the final solution -- just that it would be
significantly better than nothing.

There's value in continuing to push for full solutions; but that doesn't mean
there's no value in options like this. Just as, yes, you're a fool if you
_rely_ on security by obscurity, but that's not the same as saying it can't
add to your security in a real-world situation.

~~~
sillysaurus2
_that doesn't mean there's no value in options like this_

Actually, your proposal has negative value, because the danger is you might
actually go on to implement the broken design and trick people (along with
yourself) into believing it's trustworthy.

 _a roll-your-own version would shift that trust burden from a US company to
you or whoever sets up your server._

You've shifted the burden, but shifting it _away_ from commercial pressure is
almost always a bad idea. Now instead of having a team of people thinking about
security issues 24/7 and paying strict attention to their server
configurations and minimizing their attack surface, you have only yourself.
You may be capable, but most people aren't. And even the very best of us make
mistakes.

Once your server is breached, the security offered by this design drops to
nil. Compromising the server compromises the security. That's a fatal flaw.
It's no accident that all modern cryptosystems are based around the idea of
"Here's your secret key. Don't let it get stolen." It's the strongest
guarantee we have. It's incredible that it's even possible to get such a
strong security guarantee: "As long as you don't let your private key get
stolen or get MITM'd, it's impossible for anyone to eavesdrop on you." That's
incredible! Governments for thousands of years have been wishing for such a
thing, and now our generation finally has it, because we _live in the
future_... and you're going out of your way to give it up.

Your design is literally "transmit your secret key to the server while hoping
it's still under control of friendlies." This kind of thinking is dangerous
precisely because it tries to frame blind faith / hope / "probably won't
happen to me" as a security pillar. But it's not a pillar. You can't trust
hope. Your trust is the very first thing any adversary will subvert. In fact,
if the cryptosystem is designed properly, adversaries won't have any realistic
route of attack short of physically compromising the boxes you're receiving
secret messages on. By fooling yourself into believing in the myth of "better
than nothing", you've opened up an attack vector for the adversary. If you
were to use a proper cryptosystem, then the adversary wouldn't be able to
attack you. And since you're opening doors for the adversary, it would not be
unfair to characterize that as "you're doing the adversary's job for them."

I apologize for the negativity. Usually when people pick apart an idea,
they're expected to present a better alternative. In this case I don't know
what the better solution is, because it hasn't been invented yet. But you're
talking about a cryptosystem. Cryptosystems fail silently, because adversaries
break them without informing their victim. So all it takes is one misstep to
completely lose: the adversary will be able to intercept everything, and
you'll be none the wiser. Transmuting the trust guarantee from "don't lose
your secret key" to "trust this central server" exposes dozens of attack
vectors. Every vector that leads to a server breach is now a vector that can
subvert you.

~~~
xerophtye
I agree with your views, but I am a little confused by something (probably due
to my lack of experience in web dev). When I log in to any site, don't I
always transmit my password over SSL? I know good security systems match the
_hash_ of my password. But that means they store a hash, right? Or does that
mean I also transmit a hash? I think we transmit the actual password because
this way, if someone breaks into their servers and steals the passwords,
they'd only have the hashes and not the passwords. And that point is moot if
the server takes the hash as input.

Please enlighten me.

~~~
jtheory
Yes, with probably no exceptions, all of the sites you currently sign into
will at some point have your username and cleartext password in memory on the
server.

The server "averts its eyes", hashes the password and compares it to a stored
hash to check it. If you're lucky. If you're less lucky, your password is just
in cleartext in the database. Note: if a website can send you a "password
reminder", as many of them can, this is the case.

Furthermore, your data won't normally even be encrypted (or with some, e.g.
Dropbox, they will be encrypted with keys that are available to the server
even without your password).

Hosted Lavabit was a flawed system -- they were vulnerable to the NSA forcing
them to start logging passwords/private keys, and they were vulnerable to the
NSA capturing all of their SSL traffic then demanding their SSL private key.
They were also vulnerable to a malicious employee or other person with
legitimate access to the server code who snuck in a bit of logging code.

But they were still far more secure than just about any other web application
you'll encounter. If they had configured their SSL for perfect forward
secrecy, the NSA could have even confiscated their servers but would have been
unable to get any user's emails. They could have installed any code they
wanted on the servers, but still would only have been able to break into
accounts where the users actively signed in beyond that point.

If someone managed to steal a data backup from Lavabit, it would not have
revealed any data. That's not true of almost any other site.

That's part of why I find it frustrating when I try to point out the value of
a private, fixed up Lavabit and am scorned for advocating an imperfect
solution. Well, yeah! But it'd be miles ahead of where your email is now....

~~~
xerophtye
Yes, so why in the world is my parent calling it a major design flaw? It's the
norm! Sure, it's not perfect. Sure, a better option is to encrypt it yourself
and send it to the server. But as discussed endlessly in the comments here, it
can NOT be done on web-mail currently (not securely at least). So the common
authentication mechanism is that you tell the server your password. Well, you
have to tell it to SOMEONE to show that you know it. So how is "transmitting
your password" a flaw?

The only other authentication mechanism I can think of is that your password
is somehow used to generate a key-pair. The server encrypts a session password
with your public key and sends it to you. You decrypt with your private key and
enter it. Hmm.... not a bad idea
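Schemes along these lines do exist. Here is a sketch of the simpler shared-secret variant (in the spirit of CRAM/SCRAM rather than the public-key version proposed above; all names and values are illustrative): the client proves knowledge of the password by MACing a fresh server nonce, so the password itself never travels at login time.

```python
import hashlib, hmac, os

# Enrollment: both sides derive a key from the password; the server
# stores only this derived key, never the password.
def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
server_key = derive_key("correct horse", salt)  # persisted server-side

# Login: server issues a fresh nonce; client proves knowledge of the
# password by MACing the nonce -- the password is never transmitted.
nonce = os.urandom(16)
client_proof = hmac.new(derive_key("correct horse", salt),
                        nonce, hashlib.sha256).digest()

server_expected = hmac.new(server_key, nonce, hashlib.sha256).digest()
assert hmac.compare_digest(server_expected, client_proof)
```

Caveat: the stored verifier still lets anyone who breaches the server impersonate the user or crack the password offline, which is why this only softens, rather than removes, the trust-the-server problem discussed above.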

------
rsync
I typically don't go on about this, and I suspect it's dismissed by most, but
...

It's amazing (and amazingly satisfying) how much of this debate one can simply
ignore when one uses ssh to log into an account and run pine (or elm).

A lot of this just becomes irrelevant.

Did you know that _not one_ intercompany email at rsync.net has ever traversed
_any_ network ? It's just a local copy operation ... and no browser has ever
touched them.

~~~
dsr_
I assume you mean intracompany email. And you haven't changed to mutt? Mutt is
like elm with fifteen years of clever development by people who actually like
mail.

~~~
cipher0
I just switched to mutt and it's freaking awesome!

------
jere
>>There is no way to ever prove or disprove whether any encryption was ever
happening at all, and whether it was or not makes little difference.

I get that Lavabit was fundamentally flawed, but I don't know about this part.
Lavabit saying they can't read your email seems analogous to any website that
requires a password saying they can't read your password, because it is
hashed. That's an important and reasonable claim, right? It means at the very
least all the passwords/emails can't be downloaded in bulk and read immediately.

You don't know for sure what hashing methods are being used on any given site,
but to say it _doesn't matter_ at all.... is kind of like saying the
operators should just leave all of your passwords in plaintext in the database
because they could intercept them at log in anyway.

~~~
gknoy
No. It matters because courts have ruled that your system cannot claim
"averting its eyes" as a defense against providing intercepted data.

Recent (in the past year) court rulings have decided that passwords in memory
are accessible, even if your softwre normally throws them away -- so you could
be legally compelled to implement interception of those. (IANAL, and this
assumes that I understood the things others wrote about these...) Sure, it's
likely only in some circuits, but I'd be surprised if other judges did not
rule similarly.

A safer system would be where YOU create your own key pair, and only send your
public key to the secure mail provider. You know that your e-mail, your text,
etc is never in cleartext on the remote system, which means that even if that
system is completely compromised, all an attacker is getting is encrypted
copies of your communications. (Well, and cleartext metadata, since you need
that for sending mail.)

In such a system, you know that encryption is happening, because you are doing
it on your computer before sending bits to the server. (You'd also need a way
to exchange keys in a way which doesn't require trusting the secure server not
to be MITM-ing you.)

Even that's likely not fully safe, but it's very different from having the
server avert its eyes and pretend you never sent it plaintext
keys/credentials.

~~~
jedbrown
> you know that encryption is happening, because you are doing it on your
> computer before sending bits to the server

If your server receives email for you over SMTP, you are trusting the server
not to log a copy before encrypting, trusting that there is no intruder on the
server, trusting that someone (like the NSA) is not logging traffic between
servers, and trusting the sender's machines in the same way.

Similarly when you send email in a way that can be read by your recipient's
provider. You have to encrypt for an individual, as with PGP, for there to be
meaningful security, at which point your provider's "secure" practices are
only covering a bit of metadata, some of which will be leaked when
communicating the message.

The problem with PGP is that it has a complicated trust model, poor client
integration, and does not provide forward secrecy. The first two may be
fixable via better user interfaces (which includes breaking from traditional
webmail), but forward secrecy would need protocol support that is in conflict
with the asynchrony email currently enjoys.

------
djjaxe
Wouldn't Lavabit be better if all decryption was done on the client side,
either with javascript or a client-side add-on/extension? That way the only
thing ever on the server is the public key. The only remaining risk would be a
man-in-the-middle attack... which is always an issue on the internet unless
every part is encrypted, which is hard to do. Though internally it could
potentially be safe, as nothing would ever be sent out of the machine, and
outgoing emails would also be encrypted client-side using the
javascript/add-on/extension (with the keys also generated on the client side).
Yes, this would inevitably be a large client-side program, but for security it
would be worth it.

~~~
marijn
This is a tempting approach, but man-in-the-middle attacks, or their
equivalent -- compromised or legally strong-armed servers -- are the whole
problem here. Any client-side logic that is served by a server can only be
trusted as far as that server (and your communication channel to it), which
means that in this case it's almost useless.

There don't seem to be any serious alternatives to thick, open-source,
locally installed clients. As a web aficionado and JavaScript nerd, this
pains me too, but we'll have to get used to it.

~~~
djjaxe
Then I think it's time to look at a mail system that doesn't need servers:
something built on top of the BitTorrent grid or a similar system that the
government can watch all they want but won't get any information back from,
and have it completely open source. This takes out the man-in-the-middle and
central-server-compromise issues, and there will be no one to legally strong-
arm.

~~~
devcpp
Decentralized email, eh?

It's been attempted, but the issue of storage remains the most bothersome. A
mailbox can be pretty big, and having it distributed over the network is
difficult. Not to mention spamming problems.

Maybe some day we'll find the right formula. But I think the who-owns-the-
private-key problem is a bigger priority.

~~~
betterunix
"It's been attempted, but the issue of storage remains the most bothering. A
mailbox can be pretty big and having it distributed over the network is
difficult. Not to mention spamming problems."

Not for nothing, but Usenet _is_ a distributed email system. Yes, most people
use it as a forum or a file transfer system, but once upon a time it was a way
to send email. One downside was that people had to locally find a path for
routing their mail through the network, though I suspect that with modern
techniques that would be irrelevant. Storage is not an issue if people can
download their mail. Privacy is achieved with public key encryption,
authentication with digital signing.

The real issue is not spam (which is already manageable with modern spam
filters), but the fact that you need to _download your mail_ and store it
yourself. That does not really mesh with how people are using email these
days. This is, in my view, the big stumbling block to strong encryption --
people are frustrated by systems that prevent them from reading their mail on
their friends' computers (or kiosks, etc.).

~~~
djjaxe
Well, this could be fixed by using something like BitTorrent Sync to allow you
to keep your "inbox" wherever you want. All you need is the code... and
storage space... and at least one of your own computers that already has the
inbox to be online at the same time. This also uses a separate DHT table to
sync, and as long as your inbox is only in the megabytes it wouldn't be that
hard to read your email from your friend's computer or any other computer.
But I do agree I would want to limit the ability to spam the network, as this
would load down a lot of the peers with excess mail that they don't actually
need... maybe somehow limit how many messages each node can send out. Since
this system would be like torrents, but you would need a private key to open a
message, you could send mail to multiple people and they each download the one
message and decrypt it; you wouldn't really need multiple messages sent, so if
a node is sending many, the rest of the network could identify that and ignore
that node.

------
PilateDeGuerre
"Deserting the Digital Utopia: Computers against Computing"[1] might be of
interest to HN readers as well. The whole piece is quite antagonistic to the
HN worldview, and it ends on a sort of challenge to hackers. I submitted it as
a link last week and the submission fared poorly then. I mention it here
because I would like to see it read and discussed by this crowd.

[1] [http://crimethinc.com/texts/ex/digital-utopia.html](http://crimethinc.com/texts/ex/digital-utopia.html)

~~~
gorklin
I suppose this article fared poorly because of its length and a touch of
vagueness. I think the premise of an "ideal capitalist product" is either self-
contradictory or ill-defined. The analysis of the digital panopticon, and its
effect on interpersonal relationships, is spot on.

I'll summarize what you might see as HN antagonism in this piece as
"refinement of the current digital trends will only make worse appear better".
If the digital utopia (another ill-defined term) refers to current-trend
network panopticon, then I surely and emphatically agree.

But computers are faithful servants, nothing more. They are currently
recapitulating existing hierarchies -- this is how We The Hackers have
commandeered them. Who wants to write a distributed system when so much in our
tool-belts makes client/server architectures a comparative breeze? It's no
surprise that on the first try we've made our servants into centralization
machines, into pyramid builders.

The network effects -- for or against hierarchy -- of most (maybe all)
previous tech are hard-wired. The steam engine's effects, etched in steel,
support hierarchy only to the point thermodynamics and Mr. Carnot will allow.
Radio and television are inherently hierarchical, supporting one-way broadcast
on account of the physical limits of electromagnetic transmission. There are a
myriad other technologies, to be evaluated by these criteria, and I think
Lewis Mumford has done a pretty thorough job of it [1].

As for our digital servants -- they aren't hard-wired. Decentralization may be
non-trivial today, but when it works, it persists as long as the medium.
Bittorrent isn't going away anytime soon, and DHTs are here to stay.

So by all means, leave the digital utopia you've been sold so far. Most
popular fiction utopias were strictly controlled hierarchies anyway. Let's re-
wire our servants to decentralize. We can fight the panopticon with the same
silicon we used to build it. For in the end, the universe allows encryption.

[1]
[https://en.wikipedia.org/wiki/The_Myth_of_the_Machine](https://en.wikipedia.org/wiki/The_Myth_of_the_Machine)

------
mtgx
Disregarding the metadata problem for a moment, wouldn't it be possible for
all major e-mail providers to integrate PGP in a user-friendly way, with
public keys tied to their accounts (so you wouldn't need to know someone
else's public key, just their e-mail address), and then do something like
Ladar is proposing with the green light/red light thing for PGP to PGP email
providers and for PGP to non-PGP e-mail providers?

So in the end, isn't that more of a _will_ problem than a technical one?
DarkMail would obviously face the same adoption problem, unless it's somehow
much easier to set-up for both the e-mail providers and the user.

Besides that, I think they proposed an extra security layer to encrypt the
metadata as well - wouldn't that be possible for a PGP-based system, too?

~~~
StavrosK
The providers are the people we can't trust.

------
RexRollman
The real lesson is: don't depend on someone else's computer to perform the
encryption process for you. If you do, it is susceptible.

------
vinceguidry
I'm unconvinced that supporting other mail applications is a better bet than
supporting Ladar. Regardless of his product claims, he's in a position to
fight an important political battle for the rights of all of us. That's why I
support him, because politics are the battleground here, not technology.

~~~
HelloMcFly
I support his defense funds, but I don't support his products for the reason
the article stated. Lavabit wrote a check they knew they couldn't cash. It
wasn't the government's fault that he had the capability to compromise the
promised security in the first place.

------
tlrobinson
I'd like to see a critique of the actual Dark Mail protocol. From the
Kickstarter video it seems clear they're aware of the tradeoffs of a Lavabit
style system, and are starting with a true end-to-end encryption protocol,
with the option to "dial down" the security when necessary.

------
mattkrea
Aren't these problems with Lavabit related to the fact that they still had to
support plain old SMTP coming and going?

Dark Mail is intended to be a new protocol, not just a new Lavabit (which
would mean, theoretically, that it could be point-to-point secure).

~~~
eridius
Assuming you meant plain old POP/IMAP, then yes, I think that's the root
problem.

~~~
mattkrea
I meant SMTP as that is the protocol used between servers and that is where
some encryption should really start.

------
MichaelGG
> One big question is why they didn’t just get a CA to make them their own.

Isn't this because if they did, the substituted certificate could be detected
by any user? And the CA could lose its CA status in browsers for improperly
issuing a certificate?
~~~
js2
It would only be detected if the Lavabit certificate was pinned by the
browser. Otherwise the browser trusts the CA.

~~~
meowface
You're right, but if even one person uses certificate pinning, they could make
a post somewhere saying "hey, Lavabit's SSL certificate just changed, any
thoughts?" and others may suspect that something fishy is going on. Especially
if it were to occur after the NSA leaks.

~~~
jcrites
If you're referring to an MITM attack, then the attacker could intercept the
connection (establishing SSL under its own certificate) only when attacking
the specific target. The target himself would need to notice that the
certificate fingerprint changed.

------
batemanesque
The other aspect is that despite the whole reason for Lavabit's popularity in
recent years being NSA stuff etc., their site freely admitted that they
couldn't refuse to fulfil legal government data requests, but that users
shouldn't be worried because that would only apply in the case of criminal
behavior. Which, bizarrely, is the same justification many NSA-overreach
supporters use: "if you're not doing anything wrong, you've got nothing to
worry about."...

Also, I always found it slightly dishonest that the free tier they used to
provide featured no special encryption, given that their stated reason for
existence is to provide secure communication.

------
randallu
Hopefully the new window.crypto stuff could be used to create a hosted
webmail service where the private key is generated in the browser and never
leaves the browser.

~~~
tptacek
Probably not. Maddeningly, the W3C Web Crypto project decided to define a
crypto interface in terms of primitives knitted together with Javascript, so,
while you can probably assume WebCrypto AES is real native AES (assuming
you're not dealing with polyfills, which is a real problem for any crypto
extension), you can't assume the glue code in the cryptosystem is secure ---
that's left up to content-controlled Javascript to define.

------
relampago
moxie, great read, ty.

This thread led me down the rabbit hole to your quest as a maniac sailor, in
the epic Hold Fast. I must say, as a fellow romantic - this was a great piece
of work. I was left inspired to seek out the "impossible." I recommend it to
you all!

I think you did much justice to the art of sailing, the beautiful world of the
ocean and the spirit of the human heart. Thank you so much!

------
mistercow
>Despite what anyone tells you, end to end encrypted email is not possible in
a webmail world.

Sure it is. You just have to do crypto in JavaScript.

------
MagicWishMonkey
This is a terrible writeup.

>>There is no way to ever prove or disprove whether any encryption was ever
happening at all, and whether it was or not makes little difference.

That is the whole point in open sourcing the code!

~~~
tptacek
That is a dumb comment. Open sourcing the code doesn't mean you know anything
at all about what's actually running on a server purporting to use the open
source code.

~~~
stcredzero
Now, if such Open Source systems could be compiled with a mechanism that could
ensure that only "blessed" executables could run, and if there was also a
process where 3rd parties could compile their own executable and verify what
is executing on the server, then there would be a solution to this dilemma.

Unfortunately, that would be DRM, which evokes knee-jerk cries of "Evil!" The
point here is that DRM is not fundamentally evil. The particular way that lots
of companies want to use it and slip it into everyone's machine under the
radar is most certainly bad. However, there are situations where it would
actually be useful and help protect individual rights. (In particular, when it
is used by individuals as a tool to protect their own interests.)

(Yes, I know I'm preaching to the choir, but this is really for 3rd party
readers.)

~~~
ds9
"Unfortunately, that would be DRM"

No, it wouldn't. DRM means someone other than the hardware owner restricts
what the hardware can do. If you're the owner and control all the relevant
keys, the setup enhances rather than removes security - the opposite of DRM.

Also I don't think the concept would work. Suppose you have something like a
TPM chip and the so-called "trusted computing" scheme - except that the
hardware owner has the ability to replace the "attestation key" at will. This
would remove the "evil" quality of the TC scheme, which relies on a vendor or
corporation acting similarly to a CA, keeping something mathematically related
to the Attestation key, and concealing it from the hardware owner.

Now as the server owner, you can remotely verify it's still running the
software you specified. But without that third party role, you can't prove
this to anyone else! And to the extent you could, you would have to point
would-be users to the third party, which could "sell out" or use its power to
foist treacherous software, or refuse to sign yours, etc. - IOW, right back to
the evils of the TC plan.

~~~
stcredzero
_> No, it wouldn't. DRM means someone other than the hardware owner restricts
what the hardware can do._

Why doesn't it include someone voluntarily giving up what the hardware can do?

 _> Now as the server owner, you can remotely verify it's still running the
software you specified. But without that third party role, you can't prove
this to anyone else!_

Why couldn't the license holder of the software take this role?

