
Cryptographic Signatures, Surprising Pitfalls, and LetsEncrypt - baby
https://www.cryptologie.net/article/495/cryptographic-signatures-surprising-pitfalls-and-letsencrypt/
======
agwa
This is a good example of what can go wrong when you try to use too much
cryptography. When designing a protocol/application, you should first try to
do it without crypto, then with hashes/symmetric crypto, and only as a last
resort public key crypto (e.g. signatures, RSA), since the more crypto you use
the more things can go wrong.

Here, ACME was using signatures for its challenges when it could have gotten
by with no crypto at all (just putting the account ID and CA name in the
challenge would have been secure, and easy to analyze) and got tripped up by a
counter-intuitive property of signatures (signatures do not uniquely identify
a person).
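
As a sketch of that signature-free design (everything here is illustrative: the function names, token format, and the exact way the account ID and CA name are bound are assumptions, not the real ACME wire format):

```python
import secrets
import urllib.request

# Illustrative sketch of a challenge with no public-key crypto at all
# (names and formats are made up, not the real ACME protocol).
# The CA binds a fresh random token to the account ID and CA name, and
# validation is a plain byte comparison, so there is no signature scheme
# whose counter-intuitive properties need analyzing.

def issue_challenge(account_id: str, ca_name: str) -> str:
    """CA side: generate the value the domain owner must publish."""
    token = secrets.token_urlsafe(32)
    # Embedding the account ID and CA name ties the challenge to exactly
    # one account at one CA, which is the binding described above.
    return f"{token}.{account_id}.{ca_name}"

def validate_challenge(domain: str, expected: str) -> bool:
    """CA side: fetch the well-known path and compare exact bytes."""
    token = expected.split(".")[0]
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        if resp.status != 200:
            return False
        return resp.read().decode().strip() == expected
```

Modern http-01 ended up close to this in spirit: the published value is the token plus a hash of the account key, not a signature.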

~~~
snarf21
These things are hard to get perfect and attackers generally have more
resources. I have a question though: wouldn't encrypting the data in .well-
known/acme-challenge/some_file with a LE public key (this can be loaded out of
band to prevent MitM) prevent this signature attack?

~~~
agwa
Maybe. But why would you try that when you can just not use signatures, which
were the wrong tool in the first place? My point is that you usually need
_less crypto_, not more. Trying to fix fundamental problems by tacking on
more crypto might make a protocol more secure, but it definitely will make the
protocol harder to analyze, which makes it harder to find problems.

------
vsenko
The article neglects the details of crafting the private key which are
provided in the original blog post:
[https://www.agwa.name/blog/2015/12](https://www.agwa.name/blog/2015/12)

IMO crafting the public key alone does not provide the attacker significant
benefits.

------
chmike
I'm implementing the ACME protocol to automatically renew certificates and the
claims in the article don't make sense to me.

In the attack by Eve, it is claimed that Eve recovers the account. Unless she
knows the private key of the account, this is not possible because there is no
function in the ACME protocol to recover an account.

The attack also supposes that the special file stored in
`/.well-known/acme-challenge/` is downloaded by Eve. But this file is usually automatically
deleted after the certificate renewal is completed. It's not even clear from
the explanation what Eve could do with this file.

The whole security of the ACME protocol relies on the assumption that nobody
except the owner of the domain name can return specially crafted data when
`http://<domain>/.well-known/acme-challenge/<token>` is requested with the GET
method. Cryptographic signatures aren't really needed here.

Any key pair can be used to renew any certificate with any associated private
key. I thus don't understand the point the author of this article is trying to
make.

EDIT: if MITM is assumed for a GET request to
`http://<domain>/.well-known/acme-challenge/<token>`, then the attacker can
indeed generate its own certificates for that domain.

~~~
agwa
Keep in mind the vulnerability was in a 5 year old version of the protocol and
has since been fixed, so the description of the protocol won't match what
you're familiar with.

> In the attack by Eve, it is claimed that Eve recovers the account

The article says Eve "recover[s] the example.com domain". It should say that
Eve requests a certificate for example.com. I've mentioned this to the author.

> But this file is usually automatically deleted after the certificate renewal
> is completed.

Usually, but not always, and an attacker could always try to race the
legitimate certificate request.

> It's not even clear from the explanation what Eve could do with this file.

Eve takes the signature and constructs her own ACME account key that produces
the same signature as Alice's account key. Note though that the HTTP challenge
wasn't practically exploitable because Eve would get a different token which
wouldn't exist on Alice's server. The attack is better explained in terms of
the DNS challenge, which was practically exploitable. You can find such
explanations in my blog post
([https://www.agwa.name/blog/post/duplicate_signature_key_selection_attack_in_lets_encrypt](https://www.agwa.name/blog/post/duplicate_signature_key_selection_attack_in_lets_encrypt))
or IETF post
([https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJCV4dqWf-4Yc/](https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJCV4dqWf-4Yc/)).
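
The "same signature, different key" property can be demonstrated with textbook (unpadded) RSA. This is a deliberately toy sketch, not the actual attack (the old protocol used real, padded JWS signatures; see the linked posts for those details): knowing only the message m and Alice's signature s, Eve picks e' = 1 and n' = s - m, and the pair (m, s) then verifies under a key Alice never created.

```python
# Toy demonstration of duplicate-signature key selection on *textbook*
# RSA, where verification is simply pow(s, e, n) == m with no padding.
# The numbers are tiny and the construction is the simplest possible
# (e' = 1); real signatures use hashing and padding, so the practical
# attack described in the linked posts is more involved.

def verify(m: int, s: int, e: int, n: int) -> bool:
    return pow(s, e, n) == m

# Alice's honest key and signature.
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent
m = 65                             # the "message"
s = pow(m, d, n)                   # Alice signs m

assert verify(m, s, e, n)          # honest verification passes

# Eve knows only (m, s). She picks e' = 1 and n' = s - m, so that
# pow(s, 1, n') = s - (s - m) = m, provided m < n'. The same message
# and signature now verify under Eve's freshly crafted key.
e_evil, n_evil = 1, s - m
assert m < n_evil
assert verify(m, s, e_evil, n_evil)
```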

> The whole security of the ACME protocol relies on the assumption that nobody
> except the owner of the domain name can return specially crafted data when
> `http://<domain>/.well-known/acme-challenge/<token>` is requested with the
> GET method. Cryptographic signatures aren't really needed here.

Exactly right! It's the point I made here:
[https://news.ycombinator.com/item?id=22524805](https://news.ycombinator.com/item?id=22524805)
Fortunately, ACME stopped using signatures in challenges after I found this
vulnerability.

~~~
tialaramex
Cryptographically it seems as though all we need in HTTP is that we get back a
special token when we ask for it. But it's essential in designing real-world
security systems to understand real world practice. Historically (prior to the
Ten Blessed Methods explicitly forbidding this) it was not uncommon for HTTP-
based DV to go like this:

Customer: I want a cert for www.example.com

Average CA: OK, put this file (containing the text 12345678) on
[http://www.example.com/average-12345678](http://www.example.com/average-12345678)

Customer: OK done.

Average CA does GET
[http://www.example.com/average-12345678](http://www.example.com/average-12345678)
verifies that the token 12345678 is present and issues certificate

Cryptographically it's fine, who else but the owner could make this test pass?
But in the real world lots of web servers when you ask them for
average-12345678 will go "Sorry, average-12345678 isn't available. Maybe you'd
like to visit our home page?" and that text matches the token and passes the
test.

That really happened, to real commercial CAs which are still trusted today,
because they hadn't thought about real world problems (in one case also
because they goofed an HTTP response code check: if the server returned that
reply as a 404, their code never noticed the status before passing the test)
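
The failure mode is easy to reproduce. This is a hypothetical sketch, not any real CA's code: a check that only searches the response body for the token, and never looks at the status code, is satisfied by a stock "not available" error page that echoes the requested path.

```python
# Hypothetical sketch of the sloppy vs. strict validation check from the
# story above; not taken from any real CA's code.

GENERIC_404_BODY = (
    "Sorry, average-12345678 isn't available. "
    "Maybe you'd like to visit our home page?"
)

def sloppy_check(status: int, body: str, token: str) -> bool:
    # Bug 1: substring search, so any error page that echoes the
    # requested path back at the visitor contains the token.
    # Bug 2: `status` is never consulted, so even a 404 passes.
    return token in body

def strict_check(status: int, body: str, token: str) -> bool:
    # Require a successful response whose body is exactly the token.
    return status == 200 and body.strip() == token

token = "12345678"  # the token from the hypothetical exchange above
assert sloppy_check(404, GENERIC_404_BODY, token)      # error page passes!
assert not strict_check(404, GENERIC_404_BODY, token)  # strict check rejects
```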

Let's Encrypt was designed to prevent this mistake (even though embarrassingly
it was _still happening_ at other CAs years later until the Ten Blessed
Methods were de facto imposed by Mozilla policy) but managed to make a very
similar mistake in tls-sni-01 which was dangerous because Apache httpd (and
maybe nginx?) has crazy default behaviour for virtual hosts on HTTPS. Again,
in principle tls-sni-01 looks safe, who else but the real owner could answer
TLS setups with bogus SNI information? But the real world gives us an answer:
Anybody sharing a cheap bulk hosting site with you if the host used one of the
world's most popular web servers.

You can see analogues in the real world too. We had a sub-thread on HN
recently about RFID entry badges. Most use a very passive design which is
easily cloned. But you _can_ buy hard-to-clone secure systems for this role.
Having done so you might assume only employees and legitimate visitors can get
into your facility. And then you see that in the real world your employees are
still letting people tailgate and leaving fire doors open to take a smoke
break and you realise that cryptographic security of the RFID entry cards was
not in fact your big problem in controlling which people are in the building.

------
tialaramex
> The attack was found merely 6 weeks before major browsers were supposed to
> ship with Let's Encrypt's public keys in their trust store.

This is both wrong in a small way and misleading in a larger way.

The first big browser to add keys from "Let's Encrypt" (actually from ISRG,
the Internet Security Research Group, a 501(c)(3) entity which exists to run
Let's Encrypt) to their trust store in a shipping browser was Firefox, in
November, rather more than six weeks later.

But that's misleading because even in, say, Internet Explorer on Windows,
which uses Microsoft's trust store (of course) and didn't trust ISRG until
several _years_ later, a certificate from Let's Encrypt worked fine on the
first day of production.

What makes your leaf certificate trusted is that it's signed by an
Intermediate CA which is trustworthy, and while ISRG's Intermediates (at that
time Let's Encrypt Authority X1 and Let's Encrypt Authority X2) were signed by
ISRG they also had copies of more or less the same certificates (same public
keys) signed by an existing trusted CA - DST Root CA X3, which was in most
public trust stores for years and currently belongs to IdenTrust. Those copies
are used (by default, you can swap in the ISRG versions if you want) today to
give you a trusted path back to even quite old web browsers.

~~~
agwa
This is a pretty pedantic point and the inaccuracy doesn't detract from the
rest of the post, though I will mention this to the author as accuracy is
important.

The accurate thing to say is that the attack was found 6 weeks before Let's
Encrypt's keys were scheduled to be trusted by browsers (by way of IdenTrust's
cross certificate). (Per
[https://letsencrypt.org/2015/06/16/lets-encrypt-launch-schedule.html](https://letsencrypt.org/2015/06/16/lets-encrypt-launch-schedule.html))

------
NieDzejkob
Unsurprisingly, this flaw has since shown up in CTF challenges:
[https://hack.cert.pl/challenge/failcrypt2](https://hack.cert.pl/challenge/failcrypt2)

