
So You Want to Build an End-to-End Encrypted Web App - madrafi
https://www.zfnd.org/blog/so-you-want-an-e2e-encrypted-webapp/
======
oneng
Speaking of E2E Encryption, the Jitsi Team is looking for feedback on their
own implementation of E2E over WebRTC.

[https://jitsi.org/blog/e2ee/](https://jitsi.org/blog/e2ee/)

------
rictic
The equivalent to the app signing cert for a web app is the TLS cert. If
security is important to you, don't let third parties control your TLS cert!

It's so common now to let CDNs (primarily cloudflare) run your TLS frontend
that this article apparently doesn't even consider the idea of hosting an app
entirely from servers the app author controls.

That said, it's true that a TLS cert is necessarily more exposed than an app
signing cert can be. If you're serious about security, your app signing cert
will be on an airgapped machine. The TLS cert however has to be available on a
networked machine in order to sign messages.

~~~
tialaramex
The technology you want is Delegated Credentials:

[https://tools.ietf.org/html/draft-ietf-tls-subcerts-07](https://tools.ietf.org/html/draft-ietf-tls-subcerts-07)

The _certificate_ is public, it's fine for copies of that to be in all edge
devices, the problem today is that the associated _private key_ has to be on
those edge devices too, and that's what Delegated Credentials solves.

~~~
rictic
That definitely helps, at least for short term compromise of TLS servers.

------
kodablah
> you can trust in TLS when you’re downloading signed software too; but for
> the web, you only trust in the connection, there’s nothing else to save you
> if you can’t trust that connection.

While Signed HTTP Exchanges were originally developed for a more nefarious
purpose (to allow the URL to be changed by a trusted proxy), I think the idea
or one like it can apply to serving trusted web content. Think of it as
instead of your current TLS cert verifying your host, it would also verify the
full URL and content including headers. It's a bit untenable for regular use,
but some apps could leverage it for extra trust.

> When designing E2EE protocols for persistent vs ephemeral applications, we
> need to figure out where we need long-term identity in terms of
> cryptographic keys, and where we don’t.

I would hope that web apps always lean towards ephemeral key use whenever
possible (i.e. generating the key pair in the browser and posting the public
key upon authentication, with the private key held only in local JS memory for
just that page). If this means the webapp has to be built to work with 20
different keys for a user because they opened 20 tabs, so be it. I know people
are afraid of doing anything like key generation in the browser, but we can't
write off the possibility of e2ee web apps altogether. I fear the browser
allowing access to the OS's key management or the system's TPM for key
storage, because it may lead to overuse of/over-reliance on long-term keys,
but I'm sure it'll happen if it hasn't already.
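As a sketch of what "one ephemeral key per tab" could look like on the server
side, here is a hypothetical registry in Python. All names here are made up
for illustration; a real server would also authenticate registrations and
expire stale keys:

```python
# Hypothetical server-side registry: each browser tab registers its own
# ephemeral public key, so one user may have many live keys at once.
class EphemeralKeyRegistry:
    def __init__(self):
        # user_id -> {session_id: public_key_bytes}
        self._keys = {}

    def register(self, user_id, session_id, public_key):
        # Called when a tab generates its key pair and posts the public half.
        self._keys.setdefault(user_id, {})[session_id] = public_key

    def revoke(self, user_id, session_id):
        # Called when the tab closes or the session expires.
        self._keys.get(user_id, {}).pop(session_id, None)

    def keys_for(self, user_id):
        # A sender encrypts a copy of the message to every live key,
        # so each open tab can decrypt independently.
        return list(self._keys.get(user_id, {}).values())

reg = EphemeralKeyRegistry()
for tab in range(20):
    reg.register("alice", f"tab-{tab}", f"pk-{tab}".encode())
print(len(reg.keys_for("alice")))  # → 20
```

The cost of this design is exactly the one described above: a message to a
user fans out to one ciphertext per live session key.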

~~~
dane-pgp
I'm hopeful that Signed HTTP Exchanges lead to what you describe, but another
Chrome-originating technology that could be extended/abused to achieve a
similar goal is the <portal> tag.

There is already a little trick[0] that can be done with bookmarklets (or
locally saved files) which allows you to bootstrap a page with a known set of
JavaScript code running on it, but it has the disadvantage that the URL bar
doesn't contain a familiar domain. If the <portal> spec[1] ends up supporting
SRI[2] integrity hashes in a sensible way, this little bootstrapping technique
could actually be practical.

[0]
[https://news.ycombinator.com/item?id=17776456](https://news.ycombinator.com/item?id=17776456)

[1] [https://wicg.github.io/portals/](https://wicg.github.io/portals/)

[2] [https://www.w3.org/TR/SRI/](https://www.w3.org/TR/SRI/)

~~~
the8472
Combine SRI with CSP and Cache-Control: immutable and you could already
commit a page to never change. All that's missing for TOFU is fingerprinting
this combination, watching for changes, and surfacing the information to the
user.
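For concreteness, an SRI integrity value is just a cryptographic hash of the
resource, base64-encoded and prefixed with the algorithm name, suitable for a
script tag's integrity attribute. A minimal Python sketch to compute one (the
script content is a made-up example):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value, e.g. "sha384-<base64 digest>"."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Example: hash the bytes you would serve as /app.js.
script = b"console.log('hello');"
print(sri_hash(script))
```

The browser refuses to run the resource if the served bytes no longer match
the hash, which is what makes the "commit a page to never change" combination
above enforceable.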

~~~
lisper
Unfortunately that by itself does not guarantee security. The code that is
verified by the bookmarklet could download additional code when it runs, and
that code would not be verified.

~~~
the8472
No, preventing that would be the CSP's job.

~~~
lisper
My point is that verifying that the content doesn't change is by itself not
enough. You also have to verify that it was secure to begin with, and that is
much harder, especially for your typical end-user.

~~~
the8472
That's a separate problem to solve. But for audits to even make sense you
first need to solve the problem of sites changing under your feet, i.e.
enabling TOFU.

------
Boulth
I haven't seen any mentions of this extension that allows verifying pages
before they are rendered: [https://github.com/tasn/webext-signed-pages](https://github.com/tasn/webext-signed-pages)

Another thing is non-exportable WebCrypto keys, which can limit the damage
even if the page is compromised.

------
h3cate
I started building an end-to-end encryption API once that includes
server/client setups. I promise I'll finish it one day (there's still
client-to-client to do; client-to-server is all done).
[https://gitlab.com/DrRoach/NetworkAPI](https://gitlab.com/DrRoach/NetworkAPI)

------
camhart
While this article points out there are still opportunities for a malicious
actor to gain access to the private keys stored locally in the browser,
wouldn't that still be an improvement over only using https and server side
encryption?

I'm not a crypto expert--so forgive my ignorance.

------
michelpp
If you want to do this with something like libsodium there is a Key Exchange
API

[https://doc.libsodium.org/key_exchange](https://doc.libsodium.org/key_exchange)

Knowing only each others public keys, two parties can exchange session keys
for bidirectional encryption.
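libsodium's key exchange is built on X25519; as a self-contained illustration
of the underlying idea - two parties deriving the same session key from
nothing but each other's public values - here is a toy Diffie-Hellman sketch
in Python. This is not the crypto_kx API itself, and the parameters are
deliberately toy-sized and insecure:

```python
import hashlib
import secrets

# Toy, insecure parameters for illustration only (2**127 - 1 is prime).
P = 2**127 - 1
G = 5

def keypair():
    sk = secrets.randbelow(P - 2) + 2   # private exponent
    pk = pow(G, sk, P)                  # public value, safe to publish
    return sk, pk

def session_key(my_sk, their_pk):
    shared = pow(their_pk, my_sk, P)    # same value on both sides
    # Hash the shared group element down to a 32-byte symmetric key.
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

a_sk, a_pk = keypair()
b_sk, b_pk = keypair()
assert session_key(a_sk, b_pk) == session_key(b_sk, a_pk)
```

One difference worth noting: crypto_kx derives two separate keys (rx and tx)
so that each direction of the channel is keyed independently.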

~~~
e12e
> Knowing only each others public keys

Do you even need a "protocol" if the clients trust each other?

Client A generates a random key, maybe a nonce - and a session ID - then
encrypts that with B's public key, signs with A's private key - and sends that
to B. Only B can decrypt the message, and A and B now share a key.

Or maybe that _is_ the protocol.

Anyway, if you know someone's public key _and_ they know yours - you're
already bootstrapped for a secure channel?

Edit: now seeing the page, I see this is more a link to the API for libsodium,
and that obviously makes sense - to have a standard implementation (and I
guess this does some tricks for generating public/private _session_ keys from
long-lasting public keys?).

------
laughinghan
> this basically boils down to TOFU (trust on first use), but the trust does
> not perpetuate across uses, so it’s more like, TOAU (trust on any use). The
> trust is ephemeral, the meeting is ephemeral, the ID is ephemeral. For a lot
> of meetings, this is perfectly acceptable.

I think I would call this TFSU (Trust For Single Use). Trust On Any Use sounds
like complete and total trust.

~~~
OJFord
Or Each.

------
emilfihlman
It used to be possible to actually do TOFU without PKI in the browser with
caching, trusted SHA signatures, and user certificates, but it was deprecated.

------
supermatt
> This is not just a theoretical either: Google Duo supports E2EE group calls
> on Android, iOS… and web!

Google Duo does NOT support E2EE group calls on web... They actually don't
support ANY group calls in the web app.

Lack of good support for e2ee multiparty calls is probably why - the hope is
that adoption of insertable streams will change that.

~~~
philips
Coming soon? [https://9to5google.com/2020/05/08/google-duo-web-group-calls...](https://9to5google.com/2020/05/08/google-duo-web-group-calls/)

------
rarrrrrr
I appreciate when technical writers use humor (those headlines!)

------
beders
E2E is an illusion on anything other than a free Linux running on a free BIOS
with no security enclave.

You can't have E2E on mobile devices, you can't have E2E on any other OS. (And
you'll probably have a hard time finding the right combination of hardware and
Linux distro to have it on Linux)

~~~
akerl_
This seems to pick an arbitrary expansion of what “end to end” means, where
“end” is “the OS layer on the source/destination computers”.

What if the monitor is backdoored and sends copies of the display buffer to
The Secret World Government? What if the keyboard has a hardware keylogger?
What if we’re all living in an elaborate computer simulation of a global
pandemic?

As an alternate comparison: it’s still end-to-end encrypted communication if I
take the securely received message, print out a copy, and tape it to a
bulletin board at the town square.

The “end-to-end” refers to the transmission path. It’s a defense against MITM,
and can be accomplished by plenty of systems that aren’t Linux.

~~~
beders
Yes, I understand perfectly what it _is_.

But people attribute security properties to it that it doesn't have!

What good is protection against MITM if I can just read it off your device
while you type it?

You have no security with mobile devices. It is foolish to think so.

~~~
akerl_
I feel like you meant the question to be rhetorical, but for the sake of
clarifying: there is tremendous value in protecting against MITM, even if
there remain other attack vectors.

Encrypting traffic end-to-end over the network protects against entire
categories of attack. For some attackers (for example: ISPs), end-to-end
encryption essentially removes their ability to compromise traffic contents.
For other attackers, it forces them to ignore those categories of attack and
instead narrows them to things like compromising the device. Notably, Linux is
not magically immune to device compromise, even if you’re running a magical
open-source BIOS. And unlike Windows/OSX, Linux doesn’t have Apple/Microsoft
paying large, motivated security teams whose work is pushed to all their
devices. At best, Linux has commercial distro providers like RedHat paying for
security work. At worst, it relies on the good will and skill sets of open
source maintainers. In trade, Apple/Microsoft offer lower
customizability/visibility into the OS. But since the average user is not
interested in (or qualified to do) security hardening of devices, Linux isn’t
likely to buy them anything meaningful in the field of device security.

All of this is to say “life is hard. We shouldn’t make it harder by protesting
the concept of E2E encryption due to the obvious fact that it does not cure
all ailments.”

