
Delegated Credentials in TLS - sudoyear123
https://engineering.fb.com/security/delegated-credentials/
======
tialaramex
This is/will be part of the dividend from TLS 1.3 encrypting more of the
handshake.

Historically deploying a feature like delegated credentials was impractical
because some large and unpredictable fraction of connections would go like
this:

Popular Web Browser: "Hi, I want to talk TLS 1.2 to somesite.example, I know
how to handle DelegatedCredentials if you've got any"

Some Site: "Hi, here's our delegated credentials for somesite.example and now
let's..."

"Security" MITM Box: "Oh no you don't scumbag! I don't recognise that
certificate - FIN. FIN. This connection is an attack!"

With TLS 1.3 that MITM box's operator has to make a decision up front, since
it can't understand the part where the remote server actually identifies
itself, because that's now encrypted. It can either just give up, let the
Popular Web Browser do TLS 1.3 unmolested, and not break random features, OR
it can proxy everything, in which case, since it doesn't claim to understand
DelegatedCredentials, the remote server won't bother trying to use them.

This way if you have middle boxes everything stays no worse than before, but
if you don't you get the benefit of new features.

Unfortunately although TLS 1.3 makes it possible to begin deploying this, the
end game which reaps the benefits seems a bit fraught. But hey, can't get
there if you don't start.

------
saltcured
The part I find a bit weak is the discussion of RFC3820 proxy certs, which I
remember from the grid-computing days. They were originally designed in the
different era of the late 1990s, but I would be interested to read a more in-
depth comparison of the approaches.

Those proxy credentials were primarily for delegating "limited" client/user
credentials, which in turn might be used as either the client or server role
for subsequent TLS connections during the lifecycle of a distributed,
scientific computing job. As I recall, the practical "limited" characteristic
in the Globus GSI system was a single bit flag that was interpreted in the
ecosystem to prevent recursive delegation, i.e. out of an abundance of caution
about things like batch jobs turning into worms.

~~~
sudoyear123
This is a very good question. We actually did consider proxy certs and
name-constrained certs first and had a long discussion at the IETF about these
different options. In the end the consensus was that it would be much better
to have a very minimal structure which could only do 1 thing and nothing else.
DCs also have the advantage that they are cryptographically bound to a
particular end-entity certificate, vs. a particular public key only, and hence
can only be used with that certificate's properties, so it really is the
minimal possible thing you need and nothing more.

------
est31
While the proposal is nice, it's a bit sad that the RFC invents its own
serialization scheme instead of just using ASN.1, like everything else that
touches X.509:

> The signature of the DelegatedCredential is computed over the concatenation
> of:

> 1. A string that consists of octet 32 (0x20) repeated 64 times.

> 2. The context string "TLS, server delegated credentials" for servers and
> "TLS, client delegated credentials" for clients.

> 3. A single 0 byte, which serves as the separator.

> 4. The DER-encoded X.509 end-entity certificate used to sign the
> DelegatedCredential.

> 5. DelegatedCredential.cred.

[https://tools.ietf.org/html/draft-ietf-tls-subcerts-04#section-3](https://tools.ietf.org/html/draft-ietf-tls-subcerts-04#section-3)
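For illustration, the five-step concatenation reads naturally as a few lines of Python (a sketch; `dc_signature_input` is a name invented here, and the two byte-string arguments are placeholders for the real DER certificate and wire-encoded credential):

```python
# Sketch of the DC signature input per draft-ietf-tls-subcerts-04, section 3.
# Hypothetical helper: the arguments stand in for real DER/wire bytes.

def dc_signature_input(ee_cert_der: bytes, credential_bytes: bytes,
                       server: bool = True) -> bytes:
    context = (b"TLS, server delegated credentials" if server
               else b"TLS, client delegated credentials")
    return (b"\x20" * 64         # 1. octet 32 (0x20) repeated 64 times
            + context            # 2. context string
            + b"\x00"            # 3. single zero byte as separator
            + ee_cert_der        # 4. DER-encoded end-entity certificate
            + credential_bytes)  # 5. DelegatedCredential.cred
```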

~~~
sudoyear123
DER encoded ASN.1 is used for the X.509 end-entity certificate, however
generally this follows how CertificateVerify works in TLS 1.3
[https://tools.ietf.org/html/rfc8446#section-4.4.3](https://tools.ietf.org/html/rfc8446#section-4.4.3)

~~~
est31
Hmm fair enough. Case rested then.

~~~
sudoyear123
Generalized ASN.1 parsing can be super tricky, and the industry is generally
avoiding ASN.1 for new protocols. There has been some really interesting work
from Microsoft on verified parsers for ASN.1:
[https://www.usenix.org/system/files/sec19-ramananandro_0.pdf](https://www.usenix.org/system/files/sec19-ramananandro_0.pdf)

~~~
wahern
You're supposed to use parser generators for ASN.1. The reason ASN.1 is so
difficult in TLS and other crypto standards is precisely that ASN.1
messaging is intermixed with ad hoc messaging (as in this case) and implicit
state, which means you couldn't even use a parser generator for everything
even if you wanted to.

The excellent open source ASN.1 compiler, asn1c
([http://lionet.info/asn1c/compiler.html](http://lionet.info/asn1c/compiler.html)),
can generate C data structures and a parser and composer for X.509 DER
certificates from the formal ASN.1 description. But it's not widely used
because, among other reasons, you end up having to write too much ad hoc
parsing anyhow, which makes the investment in the parser generator seem not
worthwhile. (AFAIU, asn1c is far more popular in the telecom industry, likely
because telecom uses fully ASN.1-based messaging.)

Of course, if you're not going to use ASN.1 as intended, then the binary
encoding (e.g. DER) can be quite tricky to parse using an ad hoc parser,
including most parser combinators, mostly because TLV encodings aren't
context-free. But I managed to write a full X.509 parser using LPeg. LPeg has
an extension for match-time captures, which provides a way to invoke a
subexpression parameterized on the value of a previous match (e.g. the decoded
length context) and can return match success or failure along with a
resumption point to the parent expression. See
[http://lua-users.org/lists/lua-l/2019-04/msg00226.html](http://lua-users.org/lists/lua-l/2019-04/msg00226.html)
and [http://www.inf.puc-rio.br/~roberto/lpeg/#matchtime](http://www.inf.puc-rio.br/~roberto/lpeg/#matchtime)
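The "decoded length context" dependence is easy to see in a minimal sketch (Python here rather than LPeg; `parse_tlv` is a name invented for this example, not a library API): how many value bytes to consume is only known after decoding the length octets, which is what makes DER non-context-free.

```python
def parse_tlv(data: bytes, offset: int = 0):
    """Parse one DER TLV at `offset`; return (tag, value, next_offset)."""
    tag = data[offset]
    length = data[offset + 1]
    offset += 2
    if length & 0x80:                     # long form: low 7 bits give the
        n = length & 0x7F                 # number of following length octets
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    value = data[offset:offset + length]  # the decoded length controls how
    return tag, value, offset + length    # far the value extends
```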

I feel like there's simply no good answer here. The fundamental problem is the
tension among 1) strictly specified, formalized protocols (which ASN.1 DER
absolutely provides), 2) efficiency in time and space (ASN.1 DER does well,
PER takes it to an extreme), 3) the need for forward compatibility so
protocols can incrementally evolve (partly technical, partly a social
management issue), and of course 4) ease of implementation. Context-free
encodings help with #3 and #4, but fail at #2 (e.g. field names aren't
necessary, and variable length values require a more complex encoding), and in
a security context cause problems with #1 (better to have a failed parse than
to successfully parse unknown elements that you ignore).

------
chasil
What is preventing this from turning into another "Turktrust fiasco?" Is the
delegated signing privilege only valid for explicit DNS names?

[https://nakedsecurity.sophos.com/2013/01/08/the-turktrust-ssl-certificate-fiasco-what-happened-and-what-happens-next/](https://nakedsecurity.sophos.com/2013/01/08/the-turktrust-ssl-certificate-fiasco-what-happened-and-what-happens-next/)

~~~
breser
All this is doing is allowing the server (or client, since the protocol allows
client certificates to do the same thing) to use a different key pair with a
shorter validity period than the CA-signed certificate.

The delegated credential is not another certificate and doesn't have a DNS name
in it at all. The original certificate whose private key was used to create
the delegated credential still has that information, and the client still gets
it. The delegated credential only consists of four pieces of information:
validity interval, public key, signature algorithm and signature. I.e. just
enough information to provide the public key and verify it is signed by the
CA-signed certificate.

RFC is here if you want to read the details:
[https://tools.ietf.org/html/draft-ietf-tls-subcerts-04](https://tools.ietf.org/html/draft-ietf-tls-subcerts-04)
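For concreteness, those four pieces of information can be sketched as a small parser over the draft's wire layout (a sketch, not a library API; field names follow the draft's Credential/DelegatedCredential structs, and offsets assume the uint32/uint16/24-bit-length encodings the draft specifies):

```python
import struct

def parse_delegated_credential(data: bytes) -> dict:
    """Hypothetical parser for the DelegatedCredential wire structure
    from draft-ietf-tls-subcerts-04."""
    # Credential: valid_time (uint32), expected_cert_verify_algorithm (uint16)
    valid_time, cert_verify_alg = struct.unpack_from(">IH", data, 0)
    spki_len = int.from_bytes(data[6:9], "big")   # opaque<1..2^24-1> prefix
    spki = data[9:9 + spki_len]                   # DER SubjectPublicKeyInfo
    off = 9 + spki_len
    # DelegatedCredential: algorithm (uint16), signature<0..2^16-1>
    algorithm, sig_len = struct.unpack_from(">HH", data, off)
    signature = data[off + 4:off + 4 + sig_len]
    return {"valid_time": valid_time,
            "expected_cert_verify_algorithm": cert_verify_alg,
            "public_key": spki,
            "algorithm": algorithm,
            "signature": signature}
```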

------
baybal2
Why wouldn't they run their own CA for short-term certs?

A few CDNs have this exact problem, and they sign their short-term certs with
their master cert.

~~~
sudoyear123
As a CA you can issue certificates for other domains as well, which might be
undesirable. There are existing mechanisms, such as name-constrained CAs and
proxy certificates, to reduce this scope. While they were originally
considered, there are issues with them: there is no widespread support for
either, and there is no way to know whether both sides support them. DCs allow
for an extremely minimal subset of what you might need to issue credentials
with your own lifetime, and it only affects you. DCs are cryptographically
bound to the leaf certificate as well. A bunch of this is documented in the
draft.

