
I exploited TLS-SNI-01 issuing Let's Encrypt SSL-certs for any domain (2018) - yread
https://labs.detectify.com/2018/01/12/how-i-exploited-acme-tls-sni-01-issuing-lets-encrypt-ssl-certs-for-any-domain-using-shared-hosting/
======
schoen
I work on Let's Encrypt and Certbot and I'll offer the following summary for
people who aren't familiar with the history of this.

TLS-SNI-01 is one of several validation methods for getting a Let's Encrypt
certificate. It was originally the only method that worked on port 443 (other
methods use port 80 or ask you to create DNS records).

In January 2018, Frans Rosén (the author of the article linked here)
discovered that, on some shared hosting environments, one shared hosting
customer could pass TLS-SNI-01 challenges for a different customer's domain.
This problem only affects users of particular shared hosting services, but
there was no apparent way that Let's Encrypt could get every shared hosting
provider to fix this. (Let's Encrypt already knew about a related problem in a
proposed-but-never-implemented validation method called HTTPS-01, but had
missed the issue that Rosén discovered.)
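
For context, the TLS-SNI-01 challenge asked the server to present a self-signed certificate for a name derived from the challenge's key authorization. A minimal sketch of that derivation, per the acme-01 draft (the helper name is mine):

```python
import hashlib

def tls_sni_01_san(key_authorization: str) -> str:
    """SAN for the self-signed challenge cert, per the acme-01 draft:
    Z = SHA-256 hex of the key authorization, split 32/32."""
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    return f"{z[:32]}.{z[32:]}.acme.invalid"

# A shared host that let any tenant claim any *.acme.invalid name let that
# tenant answer this challenge for every domain pointed at the host.
san = tls_sni_01_san("token.accountKeyThumbprint")
```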

Let's Encrypt then temporarily disabled TLS-SNI-01 entirely, and later allowed
it for certificate renewals but not for initial issuance, as well as for
specific hosting providers that had specifically confirmed that they were not
affected by the vulnerability.

At the same time, the eventual deprecation of TLS-SNI-01 was announced.

In preparation for the deprecation, the Certbot client improved its support
for HTTP-01 (the challenge that uses port 80), and, in recent releases,
switched to preferring HTTP-01 over TLS-SNI-01 even if the certificate
authority offers both. (Certbot originally preferred TLS-SNI-01 over HTTP-01.)
This change means that recent renewals for people who have moderately up-to-
date versions of Certbot will default to HTTP-01 and simulate what will happen
after TLS-SNI-01 isn't available at all.
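
The preference switch amounts to reordering a priority list. A hypothetical sketch of that ordering logic (not Certbot's actual code):

```python
# Hypothetical client-side challenge preference, most-preferred first.
PREFERENCE = ["http-01", "dns-01", "tls-alpn-01", "tls-sni-01"]

def pick_challenge(offered):
    """Pick the most-preferred challenge among those the CA offers."""
    for c in PREFERENCE:
        if c in offered:
            return c
    raise ValueError("no supported challenge offered")

# TLS-SNI-01 is now chosen only when nothing better is available:
assert pick_challenge(["tls-sni-01", "http-01"]) == "http-01"
```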

Recently, Let's Encrypt has also been e-mailing people who performed recent
renewals using TLS-SNI-01 and warning them that this option is going to be
eliminated permanently in March.

There is a newer validation method that works on port 443 called TLS-ALPN-01;
this is supported by Let's Encrypt but not in Certbot. Some other clients do
support it, although they may require that you shut down your web server
temporarily.
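
The key design difference is that TLS-ALPN-01 runs over a dedicated ALPN protocol id, "acme-tls/1", so an ordinary HTTPS stack can never answer the challenge by accident. A fragment showing the negotiation side, assuming an OpenSSL build with ALPN support:

```python
import ssl

# A responder offers only the dedicated "acme-tls/1" ALPN protocol.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_alpn_protocols(["acme-tls/1"])
# (A real responder would also load a self-signed certificate carrying the
# challenge response in a certificate extension; omitted here.)
```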

[https://community.letsencrypt.org/t/which-client-support-tls-alpn-challenge/75859/2](https://community.letsencrypt.org/t/which-client-support-tls-alpn-challenge/75859/2)

Some people have been frustrated by this change because they have blocked port
80 entirely. Let's Encrypt has published a document noting that this isn't a
good idea; in particular, it doesn't protect against active SSL stripping
attacks.

[https://letsencrypt.org/docs/allow-port-80/](https://letsencrypt.org/docs/allow-port-80/)

~~~
jbigelow76
_I work on Let's Encrypt and Certbot..._

Nothing related to the article or your response, but I wanted to take a moment
to say that, as someone who is just getting started learning Linux
administration, Certbot and Let's Encrypt made it stunningly easy to get SSL
enabled for my site. Your work and dedication are immensely appreciated!

~~~
schoen
Thanks, I've passed that along to the whole Certbot team!

~~~
thaumaturgy
I want to second this. With only a little effort initially, I've been able to
have fully automated SSL-by-default for all of my "backyard hosting"
customers, at no extra cost for them or for me. You folks are wonderful.

------
numbsafari
While it's not a mitigation, something you should consider doing is using a
certificate transparency[1] monitor[2] for your domains.

I use the crt.sh service to monitor my domains. It provides RSS feeds (e.g.
[3]) that you can hook up to your own monitoring infrastructure. I have this
piped into a slack channel, so that any time a certificate is issued, I get a
warning. Since I am using LE for certs, I will often cross-reference that
issuance with the logs from our LE refresh bot. A future step would be to do
that automatically, but this is sufficient for now.

[1] [https://www.certificate-transparency.org/](https://www.certificate-transparency.org/)

[2] [https://crt.sh](https://crt.sh)

[3] [https://crt.sh/?q=%25.ycombinator.com](https://crt.sh/?q=%25.ycombinator.com)
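
A minimal polling sketch of this setup (the field names assume crt.sh's JSON output; the helper names and the URL shape are mine):

```python
import json

def crtsh_url(domain: str) -> str:
    # crt.sh query covering the domain and its subdomains, JSON output
    return f"https://crt.sh/?q=%25.{domain}&output=json"

def new_entries(feed_json: str, seen_ids: set) -> list:
    """Return entries not seen before; feed_json is crt.sh's JSON body."""
    fresh = [e for e in json.loads(feed_json) if e["id"] not in seen_ids]
    seen_ids.update(e["id"] for e in fresh)
    return fresh

# Each fresh entry would be forwarded to e.g. a Slack webhook here.
sample = '[{"id": 1, "name_value": "a.example.com"}, {"id": 2, "name_value": "b.example.com"}]'
seen = {1}
assert [e["name_value"] for e in new_entries(sample, seen)] == ["b.example.com"]
```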

~~~
joombaga
How do they know when a cert is issued for your domain?

Edit: I never knew about this!
[https://tools.ietf.org/html/rfc6962](https://tools.ietf.org/html/rfc6962)

~~~
schoen
The certificate authorities proactively tell public logs, which these services
monitor.

[https://www.certificate-transparency.org/](https://www.certificate-transparency.org/)

[https://crt.sh/monitored-logs](https://crt.sh/monitored-logs)

------
Ajedi32
FWIW, this has long since been patched and TLS-SNI is no longer considered a
valid method for ACME. It's being replaced with TLS-ALPN, which doesn't have
this problem: [https://community.letsencrypt.org/t/tls-alpn-validation-method/63814](https://community.letsencrypt.org/t/tls-alpn-validation-method/63814)

~~~
djrogers
Long since? TLS-SNI-01 and 02 were disabled only a few weeks ago, they haven't
been officially removed from the ACME spec, and the replacement - TLS-SNI-03 -
hasn't been finalized yet.

I think you're conflating Let's Encrypt with ACME here. LE turned off the two
methods and added a different TLS validation method last year, but neither of
those is directly related to fixing the ACME spec.

~~~
mholt
TLS-ALPN-01 is the successor to TLS-SNI, and it has been in production at
Let's Encrypt for some time now. [https://community.letsencrypt.org/t/tls-alpn-validation-method/63814](https://community.letsencrypt.org/t/tls-alpn-validation-method/63814)

TLS-SNI has been disabled for almost a year for new issuances, and the final
nail in the coffin is turning it off for renewals, which is happening soon.
[https://community.letsencrypt.org/t/important-what-you-need-to-know-about-tls-sni-validation-issues/50811](https://community.letsencrypt.org/t/important-what-you-need-to-know-about-tls-sni-validation-issues/50811)

------
CaliforniaKarl
Let's Encrypt did end up disabling TLS-SNI-01 on January 12, 2018:
[https://community.letsencrypt.org/t/tls-sni-challenges-disabled-for-most-new-issuance/50316](https://community.letsencrypt.org/t/tls-sni-challenges-disabled-for-most-new-issuance/50316)

And as per [https://community.letsencrypt.org/t/important-what-you-need-to-know-about-tls-sni-validation-issues/50811](https://community.letsencrypt.org/t/important-what-you-need-to-know-about-tls-sni-validation-issues/50811),
it looks like TLS-SNI-01 and TLS-SNI-02 will be dropped in favor of
TLS-SNI-03, which addresses the issue.

So, as long as your ACME client is up-to-date, you should be fine! And if your
ACME client isn't up-to-date, then you're either using a different method or
you're probably not getting certs.

~~~
thro_away_n
You get an email like this:

Hello,

Action may be required to prevent your Let's Encrypt certificate renewals from
breaking.

If you already received a similar e-mail, this one contains updated
information.

Your Let's Encrypt client used ACME TLS-SNI-01 domain validation to issue a
certificate in the past 60 days. Below is a list of names and IP addresses
validated (max of one per account):

example.com (x.x.x.x) on 2018-12-05

TLS-SNI-01 validation is reaching end-of-life. It will stop working
temporarily on February 13th, 2019, and permanently on March 13th, 2019. Any
certificates issued before then will continue to work for 90 days after their
issuance date.

You need to update your ACME client to use an alternative validation method
(HTTP-01, DNS-01 or TLS-ALPN-01) before this date or your certificate renewals
will break and existing certificates will start to expire.

Our staging environment already has TLS-SNI-01 disabled, so if you'd like to
test whether your system will work after February 13, you can run against
staging: [https://letsencrypt.org/docs/staging-environment/](https://letsencrypt.org/docs/staging-environment/)

If you're a Certbot user, you can find more information here:
[https://community.letsencrypt.org/t/how-to-stop-using-tls-sni-01-with-certbot/83210](https://community.letsencrypt.org/t/how-to-stop-using-tls-sni-01-with-certbot/83210)

Our forum has many threads on this topic. Please search to see if your
question has been answered, then open a new thread if it has not:
[https://community.letsencrypt.org/](https://community.letsencrypt.org/)

For more information about the TLS-SNI-01 end-of-life please see our API
announcement: [https://community.letsencrypt.org/t/february-13-2019-end-of-life-for-all-tls-sni-01-validation-support/74209](https://community.letsencrypt.org/t/february-13-2019-end-of-life-for-all-tls-sni-01-validation-support/74209)

Thank you, Let's Encrypt Staff

~~~
opless
Hopefully Ubuntu has updated certbot to support this.

~~~
GuyPostington
If not, installing from pip is always an option.

`pip install --upgrade --user certbot`

~~~
schoen
We would really suggest using certbot-auto instead because it will create a
venv for you so that you don't get version conflicts elsewhere. (It does use
pip behind the scenes, but in a venv.)

~~~
GuyPostington
That's a good point, thank you.

------
hannob
The upcoming deprecation of TLS-SNI-01 (Feb 13th) will catch a few people by
surprise, because some Linux distros have been slow in reacting to this.

Debian has only recently added an updated certbot to the stretch-updates repo
and the normal stable repo still has an old version that defaults to TLS-
SNI-01 for apache+nginx.

Ubuntu 16.04, which is an LTS version, has an even older version of the
letsencrypt software (before it was called certbot) and Ubuntu doesn't seem
to care.

~~~
rlpb
> Ubuntu 16.04, which is an LTS version, has an even older version of the
> letsencrypt software (before it was called certbot) and Ubuntu doesn't seem
> to care.

Ubuntu Server developer involved with Certbot here. We do care.

Updating letsencrypt/certbot in 16.04 is monumentally difficult because, from
a Certbot perspective, February 2016 is prehistoric, and users of an Ubuntu
stable release expect not to be regressed for all the different use cases they
may have, not all of which we even know about. Updating a stable release is
difficult for these reasons normally. Now add in the complication of five
different interacting source packages, an upstream project rename, and the
need to not regress the behaviour of a key library which must be updated but
users may be depending on directly, and hopefully you can see the difficulty
of this task. We are working on it though, and you can follow progress here:
[https://launchpad.net/bugs/1640978](https://launchpad.net/bugs/1640978). I'm
still hopeful that this will land in time.

In the meantime, if you can't wait, are using 16.04 and must have it now, then
you have a number of options:

1) Use certbot-auto as recommended by upstream.

2) Use 18.04, which isn't affected as it shipped with a new-enough certbot
package.

3) Try the (experimental) snap: [https://forum.snapcraft.io/t/call-for-testing-certbot-lets-encrypt/7990](https://forum.snapcraft.io/t/call-for-testing-certbot-lets-encrypt/7990)
(though I don't recommend this for production).
Relative to the deb packaging, the snap has been trivial to develop and
maintain and the edge channel keeps up with upstream master automatically as
long as CI passes (and keeping it passing has been a relative breeze).

------
kpcyrd
The title on hackernews is highly misleading, there's quite a difference
between "for any domain" and "for any domain using shared hosting".

------
ohnoesjmr
Not sure I understand how this attack works. You ask for a cert for foo.com;
LE tells you to provide a self-signed cert with subjectAlt=foo.bar.acme.invalid.
LE resolves foo.com to some Heroku endpoint because the domain lives on
Heroku, connects via TLS without verification, and sends foo.bar.acme.invalid
as the SNI, which Heroku routes to a hijacker who asked Heroku to route
foo.bar.acme.invalid requests?

Why would LE send the SNI in the first place? I thought the purpose was to
prove you own the domain, not cohabit an environment where the domain is
hosted?

I guess it only allows hijacking certs of domains running on heroku et al
envs?

Also, what do host headers have to do with this? Presumably this is just a tls
handshake test?

~~~
xianb
> Why would LE send the SNI in the first place? I thought the purpose was to
> prove you own the domain, not cohabit an environment where the domain is
> hosted?

The assumption was that you controlled the domain if you could return the
self-signed cert with subjectAlt=foo.bar.acme.invalid when the SNI request for
foo.bar.acme.invalid was made to the server you are requesting a cert for.
Unfortunately the assumption didn't hold up, because hosting providers shared
the same routing server across domains and subdomains, and those routing
servers did not have controls around the subjectAlt domains used for
TLS-SNI-01.

> Also, what do host headers have to do with this? Presumably this is just a
> tls handshake test?

They don't have anything to do with the weakness. It's mentioned to make the
distinction that SNI is used for the cert retrieval to establish the
connection, and the Host header is used separately to route to the proper
backend.
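
The distinction can be sketched as a toy shared-hosting front end (all names hypothetical):

```python
# Layer 1 (TLS handshake) keys on SNI; layer 2 (HTTP routing) keys on Host.
certs_by_sni = {
    "example.com": "example-cert",
    # The flaw: a tenant could bind a cert to an arbitrary SNI name.
    "aaa.bbb.acme.invalid": "attacker-cert",
}
backends_by_host = {"example.com": "example-backend"}

def handle(sni: str, host_header: str):
    cert = certs_by_sni.get(sni, "default-cert")        # layer 1
    backend = backends_by_host.get(host_header, "404")  # layer 2
    return cert, backend

# TLS-SNI-01 only ever exercised layer 1, so the attacker's binding wins:
assert handle("aaa.bbb.acme.invalid", "victim.example")[0] == "attacker-cert"
```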

~~~
morpheuskafka
It seems that this arose from a fundamental issue with a lot of internet
specifications. The people developing hosts' SNI implementations did not
consider SNI to be a form of unique identification, but rather a way to
establish that the server was authorized to serve the name: as long as the
server makes sure to serve the right files, which is a security issue for them
to deal with, the certificate can be returned to anyone who requests it, even
just to get a 404 page. It's like a janitor randomly trying keys from a ring:
it doesn't matter whose key is whose as long as one gets him in and he does
what he is supposed to.

The TLS-SNI-01 developers assumed, presumably based on the implementations
they knew/wrote, that SNI was an identifier. When used as originally intended,
of course, SNI provides the name of a real certificate desired, not a
validation string, and the returned certificate authenticates a pairing of
server/IP->domain not validationstring->returned certificate. I'm not sure
what the TLS standard documenting SNI actually says is the right
interpretation, but unless it clearly says that this might be done in the
future, it seems to me like a "hackish" solution that could reasonably be
expected to cause a lot of issues. This seems to be a failure of setting clear
standards for critical security protocols more than anything else.

By contrast, HTTP-01 and DNS-01 operate using a known authoritative measure:
the ability to control the actual content of the website. If a bad actor has
access to that, it's game over; only EV certs are intended to protect against
it. Likewise, TLS-ALPN-01 created a new protocol that could not possibly be
implemented by accident or by anyone not intending to authorize issuance. It
seems like nearly every major security issue (short of hard-coding/crypto
flaws) involves an assumption or edge case relied upon without consulting the
relevant standards.

------
walrus01
It is worth noting that the version of 'certbot' currently packaged for
debian-stable (stretch) is very old, and attempts to use tls-sni-01 as the
default method for verification.

It is strongly recommended to enable stretch-backports and update certbot to
v0.28.

------
peterwwillis
For a while I found it funny that the web's security is based on the strength
of (1) new standards from Let's Encrypt, and (2) the quality of their
codebase. It's been shown multiple times that they have failed in both of
these, yet nobody has taken them to task as failing to uphold the
responsibility that CAs have to keep the web secure.
([https://community.letsencrypt.org/c/incidents](https://community.letsencrypt.org/c/incidents))

At first I thought that their biggest issue was properly verifying that a
domain owner is requesting a certificate (they don't do that, but anyway..).
They actually have multiple categories of security holes relating to web pki,
because it's a pretty big sandbox.

The holes affecting LE include everything from OCSP support, to actually not
issuing any certificates during Google outages (yes, if Google is down, LE is
down, as well as other CAs, and it's happened multiple times), to exposing the
personal information of LE users, to bugs in their protocol allowing any cert
to be issued, to mis-handling blocklists, leaking API keys, and so on.
Basically, a litany of different security holes could at any time compromise
the security of the whole damn web.

The thing is, this isn't specific just to LE. All these things and more
probably affect every CA. LE is just more experimental and new, using less
robust infrastructure and technology, and their "growing pains" provide a much
more visible and much _wider_ attack surface for abusing PKI.

I don't think the industry organizations connected to CAs care enough to put
in stringent security practices in place. But I do think it's about time to
start producing these standards. It's probably going to be up to hobbyists and
privacy aficionados to do this work, because they're the ones who care enough.
I'm also surprised that governments aren't more active in enforcing more
stringent security practices, seeing as they're affected just as much.

Interestingly, following the CT Policy mailing list is a good way to be told
about holes in the current CA system:
[https://groups.google.com/a/chromium.org/d/msg/ct-policy/_csiMYrwsxc/2S6bMnSCDwAJ](https://groups.google.com/a/chromium.org/d/msg/ct-policy/_csiMYrwsxc/2S6bMnSCDwAJ)

~~~
profmonocle
> At first I thought that their biggest issue was properly verifying that a
> domain owner is requesting a certificate (they don't do that, but anyway..).

Why would they? That's not what web PKI standards require. It's not necessary
to demonstrate ownership of the domain, only _control_ of the domain.

If only the owner could request a certificate, then many valid use cases would
be impossible. E.g. if I use a blogging service to host blog.company.com, they
can use Let's Encrypt (or another CA) to manage the cert for me. If domain
owners had to manually renew these certs themselves and upload them, HTTPS
adoption would be much lower on these types of services.

Requiring proof of ownership also wouldn't take into account that DNS
hierarchy sometimes extends below what WHOIS can reveal. For instance, a
company may delegate department.company.com to some internal sub-group. That
sub-group wouldn't be able to manage their own certs if they had to own the
registered domain, as shown in whois.

> I don't think the industry organizations connected to CAs care enough to put
> in stringent security practices in place.

This simply isn't true. There's an extremely thorough set of requirements
called the "baseline requirements" that every CA must follow. These are
written by the CAB forum, which is an industry group comprised of browser/OS
vendors and CAs.

~~~
peterwwillis
tl;dr it's not impossible and we can delegate requests.

> If only the owner could request a certificate, then many valid use cases
> would be impossible. i.e. if I use a blogging service to host
> blog.company.com, they can use Let's Encrypt (or another CA) to manage the
> cert for me. If domain owners had to manually renew these certs themselves
> and upload them, HTTPS adoption would be much lower on these types of
> services.

It's not impossible, and wouldn't even be that difficult, just a few extra
steps for LE and the registrar, and one extra step for the domain owner. It
would work like this:

Step 1) Domain Owner signs up for a blog on SomeBlogSite.com with their own
domain "mycoolrecipes.com" and sets up the DNS record
("mycoolrecipes.com. IN A <SBO IP address here>")

Step 2) SBO wants to gen a cert for mycoolrecipes.com, so they request one
from LE.

Step 3) LE goes to the registrar for mycoolrecipes.com and says "Hey, some
random blog is trying to get a cert, is this cool?"

Step 4) Registrar sends an e-mail to DO saying, "Hey, is this cool? I know
your DNS is pointed at SBO, but I wanted to make sure you actually authorize
SBO to handle your security, and not just random network services. Click this
link and log in to verify they get the cert."

Step 5) DO logs in to registrar, verifies SBO can have the cert.

Step 6) Registrar notifies LE they can have the cert.

Step 7) LE releases cert to SBO.
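
The seven steps above could be modeled as a single approval gate (everything here is hypothetical, sketching the proposal rather than any real API):

```python
def issue_cert(domain: str, requester: str, registrar_approves) -> str:
    """Steps 2-7 of the proposal: the CA issues only after the registrar
    relays the domain owner's explicit approval (Steps 3-6)."""
    if not registrar_approves(domain, requester):
        raise PermissionError(f"{requester} is not authorized for {domain}")
    return f"cert:{domain}:{requester}"

# Step 5: the owner's allow-list, held at the registrar.
authorized = {("mycoolrecipes.com", "SomeBlogSite.com")}

def approve(domain, requester):
    return (domain, requester) in authorized

assert issue_cert("mycoolrecipes.com", "SomeBlogSite.com", approve) == \
    "cert:mycoolrecipes.com:SomeBlogSite.com"
```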

This has the effect of 1) removing almost all invisible attacks on cert
issuance, because the [delegated] owner has to see and authorize the request,
2) preventing the current need for Certificate Transparency; Registrars could
still feed the logs, but CAs wouldn't have to, because the registrar (and
domain owner) is the final arbiter, 3) preventing the need for CAA (which
doesn't necessarily work anyway and is opt-in), and 4) preventing attacks on
DNS from compromising cert issuance.

You can also do this with keys and not e-mails, so that it happens
automatically, but only keys authorized by the DO can gen certs. Meaning we
have cryptographic proof that only SBO can gen certs for DO, and only with
that one CA. Which is way more secure than the current regime. (Again: nobody
should be depending on DNS; DNS sucks, DNS records don't prove domain
ownership, and they don't prove that you want whatever DNS happens to point to
at any moment controlling your security.)

> Requiring proof of ownership also wouldn't take into account that DNS
> hierarchy sometimes extends below what WHOIS can reveal. For instance, a
> company may delegate department.company.com to some internal sub-group. That
> sub-group wouldn't be able to manage their own certs if they had to own the
> registered domain, as shown in whois.

For companies with DNS delegation, registrars can provide more fine-grained
delegation features, or companies can integrate a DNS management API to
delegate authorization of certs. There are at least two open-source DNS
management APIs that work independently of who controls the authoritative DNS
server and can control pre-created zones based on users, groups, and access
policies. Registrars can make the request for the FQDN to the main domain, and
the domain can pass it on as needed. So all of that already exists; it would
just need one new API function to handle authorizing certificate issuance.

DNS _is_ a hierarchy, so delegating control like this should be expected.
Otherwise, use different top level domains.

And schoen is correct, I don't think the BRs are nearly enough. There are
security best practices that are being ignored and that would catch a lot of
the mistakes perpetrated by integration and operations teams.

~~~
tialaramex
This punts all the hard stuff to the registrars. Maybe that makes you feel
better, but I think in terms of making the Internet secure that's a step
sideways at best.

~~~
schoen
I'm curious about why this is so, since I've had other discussions about
trying to increase registrars' role in certificate verification and issuance,
and other people also expressed the idea that registrars weren't really up to
the task. But doesn't all DV always essentially treat the registrars' view of
domain control as axiomatic? How would it make things worse to involve them
more proactively?

~~~
tialaramex
When we change the role from passive to active we also significantly change
the effect of incompetence and laziness, which are ordinary human traits we
should expect to find everywhere and most especially in organisations with no
public oversight like the registrars.

Peter's scheme involves a tremendous number of these delegated requests going
out every day. Doubtless the vast majority will be legitimate. For all those
the lazy (but incompetent) solution is to short circuit between Step 3 and
Step 6. Everything appears to work exactly as you'd hope, indeed it's better
and more reliable than you might expect. Right up until bad guys realise
there's a short circuit.

We know Certificate Authorities, which actually do have oversight and are
required to keep proper records and so on, have repeatedly got this sort of
thing wrong, short circuiting essential validation steps and not realising
because the happy path worked. We should expect it to be _at least as bad_ and
probably much worse with registrars.

Hence, as I said, at best a side step.

It's also a huge pile of work. To do this you need to get all the registrars
on board, or at least so very many that you can declare the others
"unsupported" and cease issuance for their domains without a significant
backlash. I would be _astonished_ if anyone can put together a working system,
deploy it to all/most registrars and so on in under a decade. I might be on
board with a programme of work taking a decade if it was a huge improvement,
but as I wrote above it's just a sideways step.

Method 3.2.2.4.1 is dead, right? So it seems as though other actors in this
space also recognise that "just ask the registrar" is not a workable solution
unless you happen to be the registrar.

------
whizzkid
I would 100% use this article as my CV if I were him. Good work my friend :)

------
mitchtbaum
Where's the CVE?

e.g. [https://www.cvedetails.com/](https://www.cvedetails.com/) ...

~~~
Foxboron
CVEs are traditionally limited to software packages. You wouldn't issue a CVE
for the specification or for the cloud vendors.

~~~
mitchtbaum
"You can't improve what you don't measure.”

~~~
mitchtbaum
Rather: If you want to improve something, it helps to measure it.

------
johnchristopher
Is traefik auto-issuance and renewal still working?

------
joering2
So through my network I see 278 servers I need to patch. This will take
forever, or a couple of days of work. Next time I will simply spend a few
bucks on an annual certificate instead.

~~~
Dylan16807
A few bucks, or a few bucks multiplied by 278?

If you'd be replacing everything with a wildcard because it's easier, you can
get a free wildcard from Let's Encrypt too.

------
B-Con
Rewording (and hopefully simplifying) for my own benefit because I think the
OP is a little loose with the details.

There are two parts:

Part 1: Subdomain takeover

1) DNS for domain X points to service Y (Y=Heroku in this case).

2) Service Y allows anyone to claim an unused domain.

3) Domain X is then claimed by an attacker.

4) Domain X is then served by an attacker.

5) Bonus points if the owner of X has a wildcarded HTTPS cert, so the hijacked
subdomain gets HTTPS automatically.

So now attacker serves content on a domain they don't own (possibly over
HTTPS). This is already a known cloud hosting mis-configuration problem.

In this case, investor.example.com was not claimed on Heroku for HTTPS, but
its DNS pointed to Heroku, so it was vulnerable to subdomain takeover.
However, the author only had HTTPS access, and example.com did NOT have a
wildcard cert, so the author was in an edge case: control of a subdomain, but
only over HTTPS, with no cert.

This prompted them to look for...

Part 2: Abusing TLS-SNI-01 validation

TLS-SNI-01 works by having LetsEncrypt request a cert with a secret from the
specified domain.

1) LE does a DNS lookup of the desired domain.

2) LE connects to that domain using TLS-SNI. SNI specifies a hostname in the
pre-encryption request.

3) ACME TLS-SNI-01 has LE send an SNI hostname of the form
foo.bar.acme.invalid.

4) Author claimed the necessary foo.bar.acme.invalid hostname on Heroku (and
later AWS).

5) Author initiated a LetsEncrypt cert request for a _different_ domain on the
same hosting provider.

6) When LE made the SNI request to that domain it did a DNS lookup and got
pointed to the shared hosting provider, but the SNI hostname was of the form
foo.bar.acme.invalid which was controlled by the attacker, so it was passed to
the attacker's app.

6.a (Side note: Because there was no cert configured at the hosting provider
for that name, I'm guessing the front-ends passed the SNI request through to
the backend app; if a cert had been configured for the Heroku app, I think the
front-ends would just serve it directly without consulting the backend app.)

7) The attacker then finished the LE cert negotiation and got a cert on behalf
of the other domain.
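
Part 2 can be condensed into a toy simulation (deliberately simplified; names made up):

```python
claims = {}  # SNI name -> tenant who claimed it on the shared front end

def claim(tenant: str, sni_name: str) -> None:
    # Step 4: the provider accepts *.acme.invalid names with no ownership check.
    claims[sni_name] = tenant

def tls_sni_01_probe(challenge_sni: str) -> str:
    # Step 6: "LE" connects to the shared IP; routing is by SNI alone,
    # regardless of which customer's domain triggered the validation.
    return claims.get(challenge_sni, "no-responder")

claim("attacker", "aaa.bbb.acme.invalid")
# Step 7: the attacker, not the victim domain's owner, answers the challenge.
assert tls_sni_01_probe("aaa.bbb.acme.invalid") == "attacker"
```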

I'm a little fuzzy on exactly how choosing the domain in (3) and (6) works
since the spec[0] doesn't seem to match what the blog post shows, but I'm sure
it works somehow.

Corrections/clarifications welcome.

Related specs:

[0] [https://tools.ietf.org/html/draft-ietf-acme-acme-01#page-40](https://tools.ietf.org/html/draft-ietf-acme-acme-01#page-40)

[1] [https://tools.ietf.org/html/rfc6066#page-6](https://tools.ietf.org/html/rfc6066#page-6)

