Let's Encrypt now supports ACME-CAA: closing the DV loophole (devever.net)
309 points by pabs3 on Dec 18, 2022 | hide | past | favorite | 150 comments



To summarize: a CAA record might point to Let’s Encrypt, but anybody could sign up at Let’s Encrypt, so this does not protect anyone. But if the CAA record points at a specific account name at Let’s Encrypt (which it can now do), this closes that hole.
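As a sketch of the difference (the account URI below is a made-up placeholder; a real one comes from your ACME account registration):

```
; Anyone with any Let's Encrypt account can pass validation:
example.com.  IN CAA 0 issue "letsencrypt.org"

; Only the holder of this specific ACME account can
; (the accounturi value here is hypothetical):
example.com.  IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345678"
```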

This is similar to the lame delegation problem, i.e. where some old forgotten subdomain has a CNAME record pointing to some big hosting provider. Although the site does not work anymore, the CNAME record is still there, and an attacker could get an account at that same hosting provider and claim the same subdomain name, since the CNAME record is still valid. This way, they would get to provide connectivity (web content, incoming e-mail) for a domain name they should not control.


>> Except, when this Domain Validation is performed, you don't have a certificate yet. That's why you're going through the process in the first place: to get a certificate. Which means that when the CA verifies that your domain is correctly hosting the challenge, it does so via ordinary, unencrypted HTTP... which can be trivially subject to man-in-the-middle attacks.

This is not completely true.

For the HTTP challenge, yes, it's true. But for the DNS challenge it's not.

The DNS challenge requires the provisioning of a special DNS TXT record. Assuming you are using an API with your DNS provider, this can happen over HTTPS. (Indeed I can confidently say it would have to be over HTTPS, or similar.)
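For reference, the value placed in that TXT record is derived from the challenge token and the ACME account key's JWK thumbprint (RFC 8555 §8.4). A minimal sketch in Python, with illustrative (not real) token and thumbprint values:

```python
import base64
import hashlib

def dns01_txt_value(token: str, jwk_thumbprint: str) -> str:
    """Compute the TXT record value for an ACME dns-01 challenge.

    The key authorization is token "." thumbprint (RFC 8555 section 8.1);
    dns-01 publishes base64url(SHA-256(key authorization)), unpadded.
    """
    key_auth = f"{token}.{jwk_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_auth).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Illustrative values, not a real token or account:
print(dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"))
```

The resulting value goes in a TXT record at `_acme-challenge.<your-domain>`. Note that only the provisioning step is over HTTPS; the CA's subsequent query for the record is plain DNS unless DNSSEC is in play.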


Author here.

A DNS challenge is equally vulnerable to MITM unless DNSSEC is used.

The whole point here is that an entity which wants to MitM connections between a browser and website can obtain a certificate if they can MitM connections between a CA's validator and that website. If that validation isn't cryptographically secured, it's moot whether it uses HTTP or DNS from a security perspective.


> For HTTP challenge, yes its true. But for DNS Challenge its not.

The query from LetsEncrypt to your authoritative DNS server is unencrypted. The method you use to make an update is irrelevant, since the response can be spoofed by a third party with backbone surveillance capabilities.


In that case, CAA doesn't help either because that record could also be faked. The answer of course is that you need DNSSEC in order for this to work, in which case the DNS challenge is already secure against spoofing without the need for this particular CAA extension.

It doesn't hurt to have this feature, but I suspect it would primarily protect against misconfigured DNS (where there's a CNAME someone forgot to remove that points to a hosting provider an attacker can get an account on). And of course, for making the HTTP challenge more secure.


> but you could restrict the challenge type before IIRC

I don't think so, that's what the other change Let's Encrypt announced does:

The change mentioned actually lands two CAA attributes. All the below assumes DNS integrity, either via DNSSEC or because the attacker hasn't been able to spoof your DNS records for whatever reason.

The accounturi attribute lets you bind to a specific ACME account, which means an attacker has to use your Let's Encrypt account to get certificates or it won't work. That means, e.g., if you have a locked-down ACME issuing server that has the account credentials and is supposed to handle all your issuing, but a bug means one of your test servers got rooted by bad guys, they don't get to issue certificates, because they don't have the credentials from the locked-down server.

The validationmethods attribute lets you specify that only some ACME methods ("challenge types") are permissible for this CA. So you can rule out methods you don't use, and thus forbid them to an adversary who might have found a way to use them.

If you have DNS integrity, set CAA to letsencrypt.org, and set validationmethods to require DNS verification, then you only need to protect DNS records; none of your application-protocol stuff can cause issuance. So even though your new guy's "really cool" Python web service was immediately turned into a reverse shell by a Russian teenager and is now trying to send fake Viagra spam via your company email server, it can't get itself any certificates it wasn't supposed to have, which is a small blessing.
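Putting both attributes together, the record would look roughly like this (the account URI is a hypothetical placeholder for your real ACME account URL):

```
; Only this ACME account, and only via the dns-01 challenge:
example.com.  IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345678; validationmethods=dns-01"
```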


Sorry, I edited my comment to remove that aside before you posted your reply. Ultimately this does make the HTTP challenge more secure and I mixed up which CAA bits were already supported.

My point was more about how this doesn't make DNS challenges any more secure because the security of CAA ultimately depends on DNSSEC.


Previously you couldn't tell Let's Encrypt "Just don't use the other challenges" and you couldn't tell them "Just don't let people use their own ACME account" and so even if you only ever do DNS challenges, and you have DNS integrity, bad guys who control one of your systems can run Certbot and get themselves a certificate via the http-01 challenge.

So yeah, it doesn't make the DNS challenges more secure, but in some cases it makes it possible to rule out lots of other risks outside DNS, narrowing your attack surface very considerably if you are able to do that.

This is about raising the low bar, not the high bar.


Agreed, but GP was explicitly saying that this makes DNS challenges more secure. I was simply responding to that.


In addition to the other points in sibling comments, even if you use DNS for your domain validation, and use DNSSEC, there is nothing to stop a MiTM attacker from requesting a certificate from Let's Encrypt (or some other CA) using the http method.


> For HTTP challenge, yes its true.

I know very little about ACME, but surely this (not having a cert yet) is only true the very first time you get a cert, or if you let the existing cert expire?


Alas no. The HTTP challenge method is HTTP, regardless of whether you have an existing cert or not.

However, the protocol consists of a number of steps, around 8 to 10. For all but one of the steps you are the client making the connection, so these happen over HTTPS. So it's not like the whole conversation is in the clear.

The only step where content is retrieved from the server is LE fetching a file. So that one fetch is not over TLS, but the file's content is derived from the account key.

So I'm not really sure how an MITM attack works here.


As I understand it, the MITM attack is relying on the lack of authentication rather than lack of confidentiality. The attacker can go to LE and get a challenge file (AIUI), which they host on a fake version of the website. They then use DNS spoofing/cache poisoning/ARP spoofing/whatever to get the CA to hit their spoof website rather than the real one. This “proves” the attacker owns that domain and so they can then carry out the rest of the steps to get a cert.

IMO it's much harder to carry out this MITM against a CA compared to typical MITM attacks against end users. CAs generally speaking aren't connecting to random wifi hotspots or using random ISPs etc. So you'd need to be in a pretty privileged network position to carry this out. And the multi-endpoint resolution approach seems like it would make it very hard indeed to pull off.

That said, it seems a bit of a shame not to use the existing cert where one exists (which is presumably the case for most requests, which I’d expect are renewals).
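Concretely, the file the CA fetches in http-01 (RFC 8555 §8.3) is just the key authorization, which anyone can compute for their own account. It proves control of an ACME account plus the ability to answer the CA's HTTP request for the domain, not prior control of the domain. A sketch, with illustrative values:

```python
def http01_response_body(token: str, jwk_thumbprint: str) -> str:
    """Body served at /.well-known/acme-challenge/<token> (RFC 8555 section 8.3).

    The body is neither encrypted nor secret; its only binding is to
    the requesting ACME account's key, via the JWK thumbprint.
    """
    return f"{token}.{jwk_thumbprint}"

# Illustrative values, not a real token or account:
print(http01_response_body("LoqXcYV8q5ONbJQxbmR7SCTNo3tiAXDfowyjxAjEuX0",
                           "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"))
```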


And thinking further, I'm not sure how MITM attacks would be useful, or expose anything. They would see the challenge request itself go by, but that's all.

Setting up the challenge, and fetching the certificate is done over https (the server acting as a client to LE.)

I suppose the MITM machine could recognize the challenge, and attempt to then fetch the valid certificate (first?). I don't recall offhand if the client has a nonce for that request; I'd have to go and review the spec.


The MiTM allows the attacker to itself request and receive a certificate. You're right that it doesn't allow the attacker to tamper with (other than to block) the legitimate site's own attempt to receive certificates, but the legitimate site also can't block the attacker from completing the validation process.


It shouldn't need HTTPS -- DNS itself supports cryptographically signed updates.


Usually you still have CA -> DNS server being subject to MitM, unless you enforce DNSSEC.


The situation with certificate authorities is just getting ridiculous.

Before this, a lot of extensions were introduced to somehow (poorly) cover other security risks (CRL, OCSP, CT logs, ...). Now we admit that secure certificate issuance requires DNSSEC to be in place in order to make CAA work securely.

Then why did we ditch DANE[1] in the first place? Why not infer public key trust directly from the domain's DNSSEC keys? With this dependence on DNSSEC, certificate authorities are becoming a fifth wheel.

[1]: https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


DANE wasn't deployable. It turns out that a lot of clients out there use some crappy DNS server, maybe they're required to, maybe that's just default configuration but either way that's not going to get fixed.

Crappy DNS servers basically only know how to "make the web work" in the sense it worked in about 1995. They expect questions like A? some.web.server.example and they reply with an IP address, it might be the right IP address, but not always. These servers of course have no idea DNSSEC exists.

When we send them anything else except 1995 style web browser queries they freak out, and either try to reply to it as though it was an A? query or they ignore the query. So you ask FANCY_MODERN_RECORD? web.site.example and it replies with an IP address even though that record's answer is a 64 byte compressed blob, or it silently ignores you...

Here in 2022 things have changed... but only slightly. We mostly got DNS servers to answer AAAA? by doing something at least vaguely sane, in some cases even with actual IPv6 answers like the specification says. We think some other records work, although not always. And clients learned "Happy Eyeballs", a strategy to just try everything and use whatever worked. But many of the servers out there are still crap.

The biggest change in 2022 is that lots of clients (especially web browsers) talk DoH. If you speak DoH (or DoT or any of these protocols but in practice it's DoH) you can actually get good answers from somebody whose DNS server works, a tremendous improvement. The browsers have by various means forced outfits who run such servers to actually do a competent job. Which means they implement DNSSEC, and, since the transport is TLS encrypted, you can choose to trust their validation answer.

Which means today DANE is arguably somewhat deployable. But more practically, meanwhile we built a bunch of stuff which means instead of clients needing to do DANE, we can get away with leaving DNSSEC to specialists. Many real-world web users have crappy DNS, but Let's Encrypt can get traction or change to a different provider or build their own servers or whatever they want, because it's part of their core mission.


> Crappy DNS servers basically only know how to "make the web work" in the sense it worked in about 1995. They expect questions like A? some.web.server.example and they reply with an IP address, it might be the right IP address, but not always. These servers of course have no idea DNSSEC exists.

That's not a good reason not to deploy DNSSEC.

If DNSSEC/DANE is deployed, and some clients cannot make use of it because they are shitty, why hold back the non-shitty clients that can deal with DNSSEC/DANE? Just because 100% coverage for all clients isn't possible doesn't mean we shouldn't try to move the ball forward.

The perfect is the enemy of the good (/improvement).


No, it's subtly harder than that. It's not that some clients will get a better experience than others, it's that in an environment of widespread reliability issues, browsers and other TLS clients will need to develop a downgrade protocol to handle the cases where DNSSEC lookups don't work. That downgrade protocol will inevitably break, like every other cryptographic downgrade ever tried.

By way of example: until a year or two ago, the last great hope of DANE was on stapled DANE records as a TLS handshake extension. No DNS lookups would even be required. But then, years into the project, somebody realized that attackers would just be able to strip those extensions off of handshakes. The only thing that prevented the attack was the Web PKI roots of trust --- which is what DANE was trying to augment in the first place.


What would prevent what you're calling "stripping" is that a client which wants to see DANE records won't accept the "stripped" connection, because it doesn't have any DANE records and none are stapled. That's true for any imaginable mechanism: if you "strip" the intermediates with conventional Web PKI certificates you're in the same place. If the client has the information already (e.g. a modern Firefox) the connection works and security is maintained; if not, the connection fails.

The stapling RFC was deliberately sabotaged by the browser vendors, and they more or less openly admitted that. Not their finest moment. The TLS WG (dominated by people working for the browsers) adopted the stapling work saying we'll run with this, and then, after stalling for several years, they basically said this doesn't do what browsers want (it's for DANE, and the browsers have decided they don't want to do DANE) so we aren't going to publish.

This is a great way to be an asshole, and if that was actually the goal (which it appears was the case to at least some extent) then I guess bravo.


My understanding of what happened and what you're talking about is that the antidote to this problem was yet another continuity mechanism, like pinning and HSTS; it doesn't protect first connections, and is a contraption.

To a first approximation, DANE is essentially a browser protocol. Obviously, things besides browsers speak TLS, but browsers are the overwhelming primary audience. If the browsers don't want to do DANE, that's a very strong signal.

Sorry, I have more to say about this.

I feel like there's a general attitude among some IETF people and DANE bystanders (especially people from the European DNS community) who feel that browsers are arbitrary, capricious gatekeepers of how TLS works; that we could have working DANE everywhere but for lazy browser people who don't want to work through the deployment drama, maybe?

But that overlooks the fact that the Web PKI is a partnership between the browsers (the root programs in particular) and the PKI providers. Neither side is working entirely on its own, both sides are sort of intensely engaged with each other. We have the Web PKI we have now, with free automated issuance, transparency, and toothy CA revocation, because of a real (if fraught) partnership.

Nothing like that appears to exist in the DNS community? Tell me I'm wrong. Why should I believe any of this would work?


> If DNSSEC/DANE is deployed, and some clients cannot make use of it because they are shitty, why hold back the non-shitty clients that can deal with DNSSEC/DANE? Just because 100% coverage for all clients isn't possible doesn't mean we shouldn't try to move the ball forward.

Because that's only the first step. After that you have to solve all the problems WebPKI has already solved for itself.

Two primary things: the ability to remove trust anchors, and visibility into issuance. Neither of which is by any means doable or solved with DNSSEC at this point in time.


The Web PKI has solved none of those, as evidenced by this article. Not without DNSSEC.


You just rebutted a comment nobody made, and ignored the comment that was made. Like it, and the article itself, says: the DNS PKI hasn't addressed revocation and visibility. The Web PKI has: CAs are required to submit to CT logs, and the browser root programs have killed some of the largest commercial CAs for noncompliance. Neither of those is possible, or will be possible, in the DNS PKI.


> Neither of those is possible, or will be possible, in the DNS PKI.

Transparency logs for domain issuance is completely possible, it just requires some engineering and deployment. Remember that HTTPS was in use for decades before CT logging of certificates became mandatory.

More importantly, though, we need to be clear about what "noncompliance" means. In the web PKI, it means a CA issuing a certificate for a domain that the requester doesn't control, but the equivalent for that in the DNS PKI would be TLD .foo publishing data for a domain example.bar which is not an attack at all, because no one would care what .foo thinks about a .bar domain.

So, rather than relying on browsers being brave enough to kill a large provider (and we need to be honest with ourselves about how much leeway browsers would give to Let's Encrypt if they ever suffered a catastrophic security breach), the DNS PKI simply isn't vulnerable to this problem, because TLDs can't issue certs for domains that aren't registered under them.


No, they’re not. They’re only possible in the Web PKI because of the coercive power the browser root programs have over CAs. No such influence exists in the DNS.

Mozilla will dis-trust your CA if you try to evade CT. It can’t revoke .io.


As evidenced by this article there are other things that can be improved upon, but those two are certainly solved to a large extent. Without DNSSEC.


> We mostly got DNS servers to answer AAAA?

That's the sense I get. Back in early 2011, there was still an issue where crappy DNS servers would sometimes respond to AAAA record requests with a "Server fail" error. Because of this, I had to tweak my recursive DNS server to treat a server fail to an AAAA request as if the AAAA record did not exist. This made IPv6 resolution more fragile, but it had to be done because of the state of the internet in early 2011.

I finally removed that "feature" here in 2022, when someone asked why I handled server fail responses to AAAA requests in this unusual manner.


My ideal certificate issuance process should look as follows:

I create private key.

I publish corresponding public key at some standard DNS record.

Then I can curl http://letsencrypt.org/cert/mywebsite.com and letsencrypt responds with the certificate (probably cached, unless it decides to issue a new one).

So I can spend some time preparing this DNS configuration and then write a simple cronjob to fetch the certificate every day and restart Apache.

That would be vastly superior to the current certbot horror, and just as secure.
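The proposed flow would reduce to something like the following (the endpoint, record name, and paths are all invented for illustration; no such Let's Encrypt API exists):

```
; published once (hypothetical record holding the public key):
_certkey.mywebsite.com.  IN TXT "<base64 of the public key>"

# crontab: fetch the (possibly cached) certificate daily and reload Apache
0 4 * * * curl -fsS http://letsencrypt.org/cert/mywebsite.com -o /etc/apache2/mywebsite.pem && apachectl -k graceful
```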


> So I can spend some time preparing this DNS configuration and then writing simple cronjob to fetch certificate every day and restart apache.

> That would be vastly superior to current certbot horror and as secure.

Have you looked at Apache's mod_md, which allows you to integrate with ACME providers without certbot?

Here's the documentation, it's available since Apache 2.4.30: https://httpd.apache.org/docs/2.4/mod/mod_md.html

Configuration example from the docs:

  MDomain example.org
  
  <VirtualHost *:443>
      ServerName example.org
      DocumentRoot htdocs/a
  
      SSLEngine on
      # no certificates specification
  </VirtualHost>
(you do need restarts/reloads to actually apply the provisioned certificates though, that part is up to you; I do it approx. daily since the startup is fast enough to not cause lots of downtime)

I actually wrote a blog post about using Apache for that and other things, and moved my personal workloads over to it (still using Nginx and other servers at work): https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...

In short, in addition to having lots of useful modules, Apache has recently gotten the aforementioned ACME functionality, which makes it a bit easier to use, like how web servers like Caddy also have "automatic HTTPS" functionality: https://caddyserver.com/

I've yet to find a good self-hosted WAF solution, since mod_security doesn't seem popular or documented enough, even though it is better than nothing.


What you describe sounds a lot like acme.sh. It integrates with a few DNS providers and is run as a simple cronjob

I tried certbot a few times, but never liked it.

Usually I use acme.sh on VMs and cert-manager on Kubernetes.


You might want to look at another client and dns-01 verification instead of the annoying mess of http-01.

I'm replacing acme.sh with Lego currently. https://go-acme.github.io/lego/

Once you have Cloudflare (or one of many other options) set up it works as easily as you describe. And no port 80 open or special snowflake reverse proxy rules.


I'm aware of it and I'm using it. But I feel uneasy when some software can wreak havoc in my DNS, even if only theoretically. For example, Cloudflare does not have some kind of very limited "Let's Encrypt token".


That's a different proof of ownership model, though. If I understand it correctly, you set the DNS record once and then forget about it. Hence, you will only prove that you had control over the zone at some point in time. With the DNS-01 challenge you prove that you have control over the zone at the time you request the certificate, because it requires a new TXT record every time.


But what could an attacker actually do in vbezhenar’s proposed protocol? An attacker could… get a new certificate for a private key they don’t have? How is that bad?


Well, you would only need to compromise a DNS zone once and sneak in your DNS record. If the actual owner of the domain/zone does not carefully watch their records, and if they use a different verification method for their certificates, then you'll have a continuously operating backdoor. Maybe this is somewhat fabricated, but if you require a fresh challenge every time, then the attacker has to maintain a compromised channel to the DNS zone, which one could argue raises the bar significantly.


A bit far-fetched, but sure. You could require that the certificate put into the DNS record also had, say, today’s date on it (UTC). This way, you’d fix the problem you describe but still have a vastly simpler process than ACME.


You could also just skip the CA part, and use DANE. Just publish the key in your DNS, and the corresponding self-signed certificate in your server.

Are there any DANE compatible clients today?


No. DANE is a dead letter standard.


As much of a footgun as HTTP Public Key Pinning was, HPKP would have solved a lot of the issues/caveats mentioned in the article.

That said, I'm going to add this for my domains (all of which use Letsencrypt anyway).


After all of the blockchain hype nonsense all these years, this might be the first truly perfect use case for append-only, publicly auditable ledgers that makes all the sense in the world.


Whilst blockchain-based domains like Namecoin are great for security they don't go far enough in my opinion. Certificate authorities and DNS infrastructure are heavily centralised at the root level, so something like the Handshake Protocol[1] is a necessary decentralised alternative.

[1] https://handshake.org/


... which was still implemented without blockchain.


To be fair, the definition of blockchain is so vague that there is a reasonable argument that Certificate Transparency is a blockchain. But yes, I agree, CT is awesome and avoids the bitcoin BS.


Git is an append-only ledger of this kind.

(Well, append-only plus regular garbage collection.)


> example.com. IN CAA 0 issue "letsencrypt.org; validationmethods=dns-01"

I have a domain with configured DNSSEC. Would publishing this make me as secure as publishing an accounturi? Would LetsEncrypt strictly disregard any non-DNSSEC queries to a DNSSEC domain?


The article states that Let’s Encrypt uses a validating resolver, so yes.


I misread the text. The account binding is a new feature that's opt-in.


Sounds like you didn't add "accounturi=https://some/lets-encrypt/account-id", which is what your quoted excerpt is probably referring to.


Yes, I had a reading comprehension fail. I thought it was implying that the browser would be using the CAA record to validate the cert's issuing account, but it's the CA that uses it to validate new issuance requests.


We're just gonna keep patching holes in the PKI design instead of admitting it doesn't really work, I guess.


Does "the PKI design" here mean specifically the Web PKI?

Or do you mean Public Key Infrastructure in general?

For PKI in general, you don't have any actual choice. This is actually why the Web PKI exists. SSL was originally conceived with no PKI, but upon analysis there's a trust problem. Why is this the right key? If I have no reason to believe it's the right key, Mallory can MITM me, the protocol achieved very little of value. So that's why there's the web PKI.

If you don't like how the Web PKI works, it seems to me that improvements (of which there have been a great many in the 20+ years I've cared about it) are exactly "patching holes in the design" and so, yeah, duh, of course that's what we are going to do.


I note that this requires DNSSEC, which isn't very popular here on HN.


It doesn't seem to actually require DNSSEC. Let's Encrypt will happily use these new fields to restrict issuance, the same way they were checking CAA records previously, for domains with and without DNSSEC.


Without DNSSEC this requires an attacker to 'global MITM' letsencrypt's DNS queries. That is a higher bar than before this change.

Before, an attacker only needed to 'global MITM' http requests. Your ISP could trivially do that. But your ISP couldn't just global MitM DNS queries for your domain unless you happen to use them as your registrar.

That is much harder than having an attacker use HTTP verification and global MITM the end user.


It requires DNSSEC to prevent MITM between the CA and the domain's nameservers.


In this context, I think "requires DNSSEC" is an opinion at best. "Requires" is probably the wrong word.

You are welcome to use CAA accounturi without DNSSEC and it will be effective.

Your zone may be vulnerable to an active man-in-the-middle DNS attack (which is hard to pull off), but it will still be protected against somebody figuring out how to upload a /.well-known/acme-challenge/ file on your domain and issuing an unauthorized certificate from a foreign ACME account. That attack is much easier - I did it against a popular mail provider a few years ago.


> This attack is much easier - I did it against a popular mail provider a few years ago.

I guess this is Fastmail :)


I can't even understand why they don't give us features like IPv6, DANE, DNSSEC and so on... https://fastmail.blog/historical/fastmail-dns-hosting/

In 2014 Rob Mueller wrote: "Our future DNS plans include DNSSEC support (which then means we can also do DANE properly, which allows server-to-server email sending to be more secure), DMARC record support, and ideally one day Anycast support to make DNS lookups faster."


I still don't understand how the industry got conned into thinking sending all your DNS requests to essentially one actor (DoH) is a good idea.


Too many ISPs hijacking requests to user configured DNS servers and too many ISPs providing default servers that 'helpfully' redirect to a placeholder domain with ads rather than just an NX record.


DNSSEC is not the same thing as DoH or DoT or DNSCrypt


https://dnscrypt.info/public-servers

Lots of servers out there support DoH


I got the impression that DoH/DoT are mainly for request privacy rather than security.


It's about security. It keeps intermediate hops from hijacking your query and providing their own response.

Unfortunately, it offers no privacy. DNS is usually the precursor to a tls connection, and the domain name is sent in cleartext during the tls handshake. So the same people who would hijack your DNS queries are still privy to them if you use DoH (routers, ISPs, governments, etc).


DoT/DoH only stops one hop of the DNS resolution process from hijacking requests, so it isn't as good as DNSSEC for security.

There is encrypted TLS handshaking for hiding the TLS domain name indicator.

https://datatracker.ietf.org/doc/draft-ietf-tls-esni/


> ... so it isn't as good as DNSSEC for security.

And DNSSEC doesn't provide any privacy to the end user. Both mechanisms provide something of value, and ideally they'd both be used together.


There's a lot of different DoH providers. And don't most people who aren't using DoH send all of their DNS requests to the same actor (usually their ISP) anyway?


Centralized servers run by trusted [0] third parties is the foundation that most of the industry is built on. Surveillance dollars flow in, and mindshare follows.

DNS seems eminently easy to secure against eavesdropping, given it is just a distributed database, and a slowly consistent one at that. Someone just needs to step up to the plate and make a p2p resolver that communicates over a secure protocol, rather than the naive one spelled out in RFC 1035. And with DNSSEC you wouldn't even have to worry about trusting/verifying other peers.

[0] The formal definition of someone who can break you, not the common usage that implies "trustworthy".


I like the GNU Name System for replacing DNS:

https://www.gnunet.org/gns


Every time people bring this project up, I have to say this again, so here goes:

The purpose of a naming system is to give users a common way to consistently reference specific things. Anyone should be able to mention a name for a thing, and know that any other user will be able to look up that name and resolve it to the same thing.

GNS missed this lesson. Under GNS, every user has their own, slightly different view of naming; a name which one user is able to resolve may not be resolvable for another user, or (even worse) other users might see it resolve to something completely different.

There are many ways of defining how a "decentralized" naming system should work. This is one of them, and I'm sure it sounds cool, but it is not a good idea. Try again.


It's to prevent DNS-based ad blocking (e.g. Pi-hole).


This isn't true. There are DoH servers that do ad blocking.


Parent means from the IoT device side. DoH lets devices (say your TV) bypass a pi-hole and still reach ad servers.

That's why I blackhole DoH on my network. Don't connect to my wireless if you want DoH privacy.


Bypassing your network's DNS doesn't require DoH, it just requires the IoT device to use its own DNS servers.


I redirect all outbound TCP/UDP 53 traffic to my DNS servers. I can't do that with DoH, so into the blackhole it goes


Remember, if you can censor traffic at the network level, so can Comcast and China. Hopefully, there will eventually be a day when Microsoft, Google, Amazon, and CloudFlare all serve DoH from the same IPs they serve the rest of the websites they host from, so that it won't be feasible to block them anymore.


But they can run DoH on any server; they don't have to use Cloudflare or Google or whatever. So any port 443 connection is suspect, and the same goes for DoT on port 853. Or any port whatsoever if they run their DNS (on any transport) on a "non-standard" port, which is not unheard of for such devices. DNS works on port 5353 just as well as it does on port 53. Redirecting outbound port 53 to your own servers has never been an effective way to stop devices from using their own DNS. DoH and DoT do make it harder to block (since they're encrypted and authenticated), but even classic DNS can evade simple port-based redirection.


I only ever see one person argue persistently against DNSSEC here on HN.


Why should anybody? DNSSEC is almost irrelevant for the majority of domains. I've worked at a place that was pioneering it and found it overcomplicated and fragile, so I haven't touched it since.

If you implement it correctly you are slightly more secure but there are better ways to spend your time.


I’ll just say that the same thing could be said (and was said, frequently and loudly) about HTTPS.


No, it wasn't. I've been working in security since 1995. People have at times said that HTTPS was unnecessary for things like static content sites and blogs, sure. There has been pushback against the idea of doing HTTPS everywhere. But no serious person has ever pushed back against the need for HTTPS in some places. HTTPS and DNSSEC are not comparable. We've spent 25 years trying to have DNSSEC and not having it, and for a lot of us, the verdict is in: DNSSEC isn't necessary for a secure Internet.


This is a big problem. I feel like the security world is trusting DNSSEC to close the unencrypted / unsigned DNS record shaped security hole. If it's not something people actually want to implement because it's a mess, what do we do? Do we need something better than DNSSEC? Is DoH good enough?


On the contrary; the work is only for DNS server operators (registrars). As a domain admin, you simply get DNSSEC for free if your registrar supports it.

If you are instead running your own DNS server, you're probably big enough that you can afford spending some time on configuring DNSSEC.


> As a domain admin, you simply get DNSSEC for free if your registrar supports it.

How is a registrar holding your keys in any aspect better than WebPKI right now?

> If you are instead running your own DNS server, you're probably big enough that you can afford spending some time on configuring DNSSEC.

And as the parent comment said, it's complicated and brittle, while offering little benefit. Plus you are still forced to trust your registrar, because it's doubtful you will go into the effort of interfacing with the TLD registry directly.


> How is a registrar holding your keys in any aspect better than WebPKI right now?

In this context, the question is more "how is a registrar holding your keys in any aspect worse?" There are a couple of mostly availability-related risk models where having DNSSEC at all is probably net negative, but your registrar generally has de facto control over your domain under the no-DNSSEC PKI anyway. There is a wide range of viable use-cases where telling your registrar to turn on DNSSEC is nearly free and gets you ... well, the extremely minor benefit of being able to use DNS as a verifiable public kvmap.


> In this context, the question is more "how is a registrar holding your keys in any aspect worse?"

I find it fairly obvious that such an extra operator in the middle of the trust chain between your webserver and your end-user is worse.

> but your registrar generally has de facto control over your domain under the no-DNSSEC PKI anyway.

Yes-ish. The issue is that DANE+DNSSEC would give the registrars control over the keys, and nobody could determine otherwise. A registrar trying to do the same right now with WebPKI would most likely be pathetically caught in the act, either by trying to redirect traffic without a valid certificate or by trying to issue a certificate and having it get logged.


> And as the parent comment said, it's complicated and brittle, while offering little benefit.

Back in the day, lots of things were complicated and brittle regarding having an internet presence.

If that were the standard for IT in general and networking in particular, we wouldn’t deploy a great many things.

That being said, it’s pretty easy these days to deploy DNSSEC; so is cryptographically verifying the chain of trust between your domain and the root.


The pattern has recurred for over a decade: someone says DNSSEC is, at last, easy to deploy, and then some huge company tries to turn it on and falls off the entire Internet for a day. Last time, I think, it was Slack. I wonder who it'll be this time.


> someone says DNSSEC is, at last, easy to deploy

Hardly a week goes by on HN without a headline about some well-known company or government organization that did something (or didn't do something) to either get themselves knocked off the internet or get compromised.

While screwing up DNSSEC has knocked some companies off the net, it pales in comparison to the other reasons companies get knocked off the net.

But whatever.

A more nuanced take on the ease of deploying DNSSEC: it's easy to do for the weekend hobbyist working on a side project or a homelab. Or a small organization whose registrar and internet host both support DNSSEC.

Deploying DNSSEC is definitely not easy if you're running at Facebook, Twitter, Apple, etc. scale. There are other reasons too, but there's no need to rehash once again.


> How is a registrar holding your keys in any aspect better than WebPKI right now?

You can use both DNSSEC and HTTPS. And actually, if your registrar and hosting provider are the same (e.g. Cloudflare, AWS), they might hold your keys anyway.


> And actually, if your registrar and hosting provider are the same (e.g. Cloudflare, AWS), they might hold your keys anyway.

Sure, but you have the ability to choose if they're the same or not. Although an untrustworthy registrar right now would be quite bad, it wouldn't trivially compromise the security of your WebPKI TLS connections. One would not be able to say the same if we would have had deployed and built everything on top of DNSSEC+DANE.



…by that person.


>(This isn't a particularly idle concern. Amazingly Microsoft once got a court to let it take operational control of the domain no-ip.org — that is, to actually hijack the domain — a dynamic DNS service used by countless people — simply because one user was apparently using it for malware-related purposes.)

What a dishonest take. Microsoft wasn't granted this court order because there was one bad no-ip user; Microsoft was granted the court order because there was a bad no-ip user that no-ip wouldn't take action against.

Oh, and it wasn't one bad user. It was 22,000 different hostnames.


Author here.

If the sought action of the court case, and the outcome were, "the domain were taken down" that would be one thing. Domains get suspended by court cases all the time, that's not the issue.

What makes the no-ip.org case extraordinary is that Microsoft a) persuaded the court that the domain was being used for malware, and then b) persuaded the court that because of this, rather than doing something normal like compelling its operator to take down the afflicted subdomains, or failing that compelling a third party to suspend the domain, that they should be allowed to take over DNS service for the domain.

Microsoft is not the law and they have no special legal status. If a domain is being used for cybercrime it's one thing, it doesn't mean any random party should get to walk into court, complain about it, and then offer to "solve" the issue by randomly appointing itself DNS provider. Microsoft essentially hijacked and MitM'd the domain via court order, again demonstrating that the registries/registrars will always be a risk here.

The result I might add was a massive outage for a massive number of innocent no-ip.org users.


I think the fundamental issue here is that the court actually granted Microsoft's ridiculous request. The only valid ruling here was for the court to order the suspension of the domain.

Seeing that Microsoft are an unrelated third-party, what was the judge's reasoning for granting them specifically ownership of the defendant's property? Wouldn't it have made more sense to assign ownership to a government organization instead?

Did Microsoft reimburse the domain owner the value of the domain or did they just steal it without payment?


It all got reversed eventually after massive negative press coverage. I don't think Microsoft took "ownership" of the domain, but simply got the court to make them the nameservers, though I may be wrong.

I do feel like the only way this request was granted was due to total ignorance on the part of the court of anything about how the internet works.


> I do feel like the only way this request was granted was due to total ignorance on the part of the court of anything about how the internet works.

It sounds like the court, unlike you, has the power to make the internet work the way it thinks it does, and is thereby right about how it works.



It's a completely reasonable request that has been granted countless times now.

>I do feel like the only way this request was granted was due to total ignorance on the part of the court of anything about how the internet works.

This is absurd. The court ideologically disagrees with you about how the internet should work, not about how the internet works. This does not suggest that the court is ignorant of anything.


>What makes the no-ip.org case extraordinary is that Microsoft a) persuaded the court that the domain was being used for malware, and then b) persuaded the court that because of this, rather than doing something normal like compelling its operator to take down the afflicted subdomains, or failing that compelling a third party to suspend the domain, that they should be allowed to take over DNS service for the domain.

This is a completely normal measure; simply taking down a domain is not nearly as effective an anti-malware measure as sinkholing it. A sinkhole can in some cases uninstall the malware from affected computers, or at least identify their IP addresses for notification purposes.

>Microsoft is not the law and they have no special legal status.

Exactly.

>If a domain is being used for cybercrime it's one thing, it doesn't mean any random party should get to walk into court, complain about it, and then offer to "solve" the issue by randomly appointing itself DNS provider

Microsoft is not a random party, it's a party whose business is directly affected by these illegal malware campaigns and has been repeatedly held to have standing in these cases.

>The result I might add was a massive outage for a massive number of innocent no-ip.org users.

Turns out that possibly most no-ip users were malicious https://umbrella.cisco.com/blog/on-the-trail-of-malicious-dy...


Regardless of whether you think it's dishonest or not, his point still stands: TLS MITM is not and cannot be mitigated via DNS.


Nor with DNSSEC: the same government that gave Microsoft control over this zone has de jure control over DNSSEC key management for that zone.


I wish there were wide support for public-key-addressable servers (like Tor addresses). It won't solve the issue of memorable names, but it could solve this bootstrapping problem.

Perhaps LE should look into incorporating Tor into its domain verification process.


“[…] you cannot have a namespace which has all three of: distributed (in the sense that there is no central authority which can control the namespace, which is the same as saying that the namespace spans trust boundaries), secure (in the sense that name lookups cannot be forced to return incorrect values by an attacker, where the definition of "incorrect" is determined by some universal policy of name ownership), and having human-usable keys.”

— Zooko Wilcox-O'Hearn: https://en.wikipedia.org/wiki/Zooko%27s_triangle


Zooko's conjecture predates the invention of Bitcoin, and the article goes on to explain that blockchain-based systems can in fact have all three properties.


I don't think we would need to deal with Zooko's triangle in the case of automated systems like Let's Encrypt. Human legibility need not apply.


It's not that anything in the verification protocol needs to be human-readable, it's that domain names themselves need to be human-readable and therefore can't just be derived from public keys. Which means you have to have some kind of system for deciding who controls which names, that doesn't just come down to who possesses a particular key. Zooko conjectured that this couldn't be done in a way that was both decentralized and cryptographically secure. He turned out to be wrong about that, although the DNS that everyone actually uses remains centralized.


Actually, you can.


I like the downvotes here for stating a fact.

The current CA system is horrendous in its centralization. It is completely possible to make a new mechanism using hashed-addresses and using traffic + user choice as the allocation mechanism for namespaces.

Instead of namespaces being fought for financially, users assign namespaces to site addresses (hashes), each of which represents the pub key of a keypair and the identity of a server. A namespace, say “search”, is then assigned to the address hash with the most users by default. If a user likes a different one, they link the “search” namespace to a different hash, and that counts as a vote for that location being the default.

This can be done using just traffic as an indicator for the defaults, in the event unique humanness cannot be established properly for an identity.

One summary of a frictionless scheme without central control that circumvents just about every shortcoming of the current system, and has all three properties.

There are other schemes, btw.

Also, in the event it isn’t clear: tls comes natively to this scheme because the addresses are pub keys. There can’t be a mitm for this scheme unless they have the priv key, or find a way to direct traffic through them and acquire a majority stake for a namespace and phish the original site. Whoever has the priv key controls the properties of the address hash, which is where all the records go.

This would make the internet significantly more democratic and less prone to bad actors. It would eliminate domain name squatting completely, and would enable new technologies which more closely match a namespace than old ones to have a chance, promoting innovation and meaningful competition.
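A toy sketch of the default-resolution rule above (all names and data are hypothetical, and this ignores the hard problems of counting votes honestly in a federated system): each user's explicit binding is both a personal override and a vote, and the global default is simply the address hash with the most votes.

```python
from collections import Counter

class NameRegistry:
    """Toy model: users bind a name to an address hash; the default
    for a name is whichever hash has the most user bindings (votes)."""
    def __init__(self):
        self.votes = {}  # name -> Counter of address hashes

    def bind(self, user_overrides: dict, name: str, addr_hash: str):
        # An explicit binding sets the user's personal override and
        # counts as one vote toward the global default.
        user_overrides[name] = addr_hash
        self.votes.setdefault(name, Counter())[addr_hash] += 1

    def resolve(self, user_overrides: dict, name: str):
        if name in user_overrides:           # personal choice always wins
            return user_overrides[name]
        tally = self.votes.get(name)
        return tally.most_common(1)[0][0] if tally else None

reg = NameRegistry()
alice, bob, carol = {}, {}, {}
reg.bind(alice, "search", "hash_A")
reg.bind(bob, "search", "hash_A")
reg.bind(carol, "search", "hash_B")
# Carol sees her own override; a user with no override sees the majority default:
assert reg.resolve(carol, "search") == "hash_B"
assert reg.resolve({}, "search") == "hash_A"
```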


So one day, the "search" default moves to the most popular address and everything breaks? Based on the amount of traffic generated for the other "search"?

Do you have more detailed write ups of that or the alternate schemes, at first take that sounds horribly flawed.


“Everything” wouldn’t break; the most popular address is the one that gets the name. It means businesses and admins would need to put in the work to have a good product instead of getting lucky / having a ton of money to grab a name. Most likely, once a popular name is defaulted it will never change since this system has a “snowball” effect, but if a ground-breaking innovation occurs, then it would have a chance of taking the name.

Anyone that manually sets a name to an address is unaffected by the default setting. Only people that haven’t overridden the default are impacted. Most people would likely not even participate in this mechanism of “voting”, so it would be a smaller group that I assume is more involved that directs defaults.

Nothing is perfect but I think this would have significantly better results for humanity as a whole once it is matured than the current system.

Additional note: For anything programmatic / apis / etc, the address hash can just be utilized to connect systems. The address hash is not an IP address. It is a record set that can only be modified using signed messages, where the latest signed message determines what is in the record — this is where a record for, say, another IP can exist. Or a record to another address hash, etc. This record set could operate basically the same as current records for domains.


The default is kind of like using top result of a search as the owner then? But I guess you want to count the number of real people who "favourite" a name > hash mapping.

You would need a consistent "easy" name as well at some point though, like a bank for example, can't use a name that could one day change for people who haven't bothered to default it.

Another issue might be names for the smaller, but very long tail of the internet, which would be open for abuse. For example a name could come and go with a social media post that gains traction, which would far outweigh the regular traffic for a name.


How exactly do you make the addresses meaningful to humans if they're public keys?


I explained that in the post. Namespaces / domain names / whatever you want to call them are set by individual users. The act of setting a namespace, ie binding “search” to whatever google’s hash is for example, contributes a “vote” to make that the default address for “search”.

Traffic can also contribute towards the count, either method would eventually settle on accurately capturing the will of people, but I would have to think about the mechanism for measuring traffic in a statistically accurate / honest way with a federated system.


The thing being described here isn't really an address system. The point of addresses is that they're supposed to be stable; I want to know that I can go to google.com and know that the thing on the other end is controlled by Google and not some other entity. This is a lot more important than being able to look up "search" and know that the thing on the other end was chosen democratically rather than auctioned off. If the thing I want is to connect to one particular entity, then under this system the only way I can do that with confidence is by getting their public key out of band, which is deeply inconvenient and the whole problem that domain names were invented to solve.

Registry operators can also hijack domain names, of course, but they have an economic incentive not to do that (except in cases like malware C&C domains that don't affect legitimate users), because their job is to ensure that the whole system of stable addresses keeps working, and failing to do that would undermine confidence in the whole thing. A public vote doesn't have that incentive alignment; anyone who bothers to explicitly configure their system in this way, is fairly likely to be someone who'd join a campaign to hijack a name for the lulz or to make a political statement, at the expense of usability for regular users.

It's true that if you have human-meaningful domain names, then some of them will be more desirable than others, and anyone who can get a good one, or who can distribute good ones to those who want them, is thereby in a position to collect a certain amount of economic rent. Which isn't ideal. But this is all a second-order consideration at best; it's a side effect of the goal of stable addresses, which is the important part.


It is highly unlikely that an entity like Google would not have control of the Google namespace with the scheme I am talking about, as it is clear what google is referred to as and this mechanism would eventually “settle” on the most correct names for each entity.

But if you don’t care about the entity and are talking generic names, like “search” or “market” it allows for a novel way of applying the namespace to the “best” one in a moment, without relying on a central party like an app store to tell us.

It also introduces a self-governance, eliminates stale squatting, gives better tech a chance, and eliminates the ability for authoritarian and bureaucratic entities from controlling namespaces. Who is ICANN really accountable to? If someone makes a site that is disruptive to the “national security” of powerful governments, by being more democratic and representative but stripping away their / corporate power, do you think the current system would just allow it to live?

We need new technologies that can handle fighting against the tyranny of small, unelected boards who subtly influence all of us in seemingly innocuous ways. The way we fight against it is by architecting implicitly democratic systems, bypassing these parasitic middlemen and replacing all of them with mathematically sound code.

There are some tradeoffs. We could go back and forth through this concept and discover a new weakness in the convenience, mainly for business. One might say “well, what about addressability for emails or federated identities” and, one by one, with some thought, these things could be resolved. But the core of the solution eliminates entire classes of putrid rot in the existing mechanism.

The rot I speak of is mostly unseen by people. It stifles innovation with stagnation, where squatters and “I got here first” eliminate the possibilities. This makes those possibilities completely hidden and stifled. Entrenched forces have no reason to innovate or progress. They are rewarded merely for existing, without any forces capable of opposing them without also being entrenched, or begging another entrenched force to aid them.

I can go on and on about the topic, but coming back to “globally stable addresses,” I think that this mechanism can be likened to an iterative / numerical method which, when given time, settles on the correct answer. Once a domain has settled, it would experience stability. And perhaps, when taken in conjunction with the existing system I’d want to see this mechanism replace, we already have “stable” names that come at cost. It isn’t like that would immediately go away. Every technology I talk about is voluntary, at a fundamental level no one should be coerced, whether by force or by implicit means, to use something.


You're punting the problem. You can't securely and objectively measure users and traffic.


You can measure users if users also have an identity bound to a key pair, with a mechanism to have attestations to their identity. In other words, the role of a CA shifts to making attestations that a pub key belongs to a unique individual. With that modification, it becomes possible to use their signature towards voting on which namespace operates as a default binding for an address hash.

This mechanism is very feasible when connected to a larger system involving federated identities, and a trust matrix where users decide which authorities they accept for identity validation (or any other attestation). Binding a physical identity to a digital one has a significant number of additional benefits, and it can be done such that anonymity is preserved via sub identities with verified claims.


Now you're farming out to another "larger system" to ensure that the keys are real people.

How does that system ensure things, and why can't that system do domains directly?

> a trust matrix where users decide which authorities they accept for identity validation (or any other attestation)

So if I tell someone my "domain name" I won't know what site they'll actually get because it's calculated per person?


No one should have ownership of a word. That is an individual choice that should move fluidly with the populace.

With this scheme, there are many ways to enable a stable endpoint that can be shared. But at the base, the addition of a hashed keypair address is introduced which is connected to a recordset controlled by a signed message.

With that, there are a lot of possibilities. Just sharing one of them. While I could outline every little detail, that would be better served in a different format and in the future.

There are going to be a lot of mental shifts required in many different ways. Maybe it will take a generation before those shifts are appropriately executed, I don’t know.


> No one should have ownership of a word. That is an individual choice that should move fluidly with the populace.

The ability to reallocate at some point is fine, but if I'm speaking an address to someone I need to be sure it only goes one place right now and in the near future.


Then I would say someone needs to make the best “thing” that entrenches their “thing” to a name. For the most desirable names, that would be the only way to maintain stability; constant innovation making something synonymous with the name.

This concept can be extended to support more stable namespaces. It just requires a little thinking. Could be as simple as a numeric queue for a name, like say you want the “search” name. You are the first to associate with it. You might have the permanent address “search.one”. Someone else wants to associate with it. They get “search.two”. This goes on and a million people want it. The millionth gets “search.million”.

These sorts of details have meaning but are irrelevant to the core problems what I’m talking about solves, and the core problems that need fixing: the CA system is inefficient, archaic, and tyrannical. They can be, technologically speaking, easily replaced with far more secure, purposeful, and democratic technological, autonomous systems.


If there is nothing between "search result that can be different for everyone except for the most popular brands" and "permanent number suffix that's probably eight digits long" then that's not a very good system.

And I do think that system fails to defeat zooko's triangle.


How do you handle key rotation?


When you connect to the service, the client tells the server which public key (key A) it's expecting the server to prove ownership of.

If the key A is still valid, the server can use the corresponding private key to sign a challenge.

If the key has been rotated out, the server instead presents the new key and a signature. E.g., the server responds by naming key B, and presents a certificate of key B signed by key A (the presented key). Instead of just a single key rotation, the server could present a chain of certificates from A to B to C (the key the server wants to use). And optionally, a message saying "from now on, please make further requests using key B, as key A has expired".
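That chain-of-rotation check could be sketched like this. Note the HMAC is only a stand-in so the sketch is self-contained; a real system would use asymmetric signatures (e.g. Ed25519), where verifying does not require the private key.

```python
import hmac, hashlib

def sign(priv: bytes, msg: bytes) -> bytes:
    # Stand-in for a real asymmetric signature, purely to show chain logic.
    return hmac.new(priv, msg, hashlib.sha256).digest()

def verify_rotation_chain(trusted_key: bytes, chain: list) -> bytes:
    """chain is a list of (new_key, signature) pairs; each signature must
    have been made by the key the client currently trusts. Returns the
    final key the client should pin going forward."""
    current = trusted_key
    for new_key, sig in chain:
        if not hmac.compare_digest(sig, sign(current, new_key)):
            raise ValueError("broken rotation chain")
        current = new_key
    return current

key_a, key_b, key_c = b"A" * 32, b"B" * 32, b"C" * 32
chain = [(key_b, sign(key_a, key_b)),   # A vouches for B
         (key_c, sign(key_b, key_c))]   # B vouches for C
assert verify_rotation_chain(key_a, chain) == key_c
```

A client that still pins key A can walk the whole chain to C; one that has already cached key B just verifies the last link.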


This falls apart if keys are ever compromised.


If the key is compromised, there are two ways the key can be rotated. Either the key is updated upstream (in the DNS record, through an app update, or whatever), or the next request uses the compromised key (and could be MITMed): the server responds with the new signed key, and requests after that will be safe.

It's not perfect - it has some properties of TOFU systems. And it expects the client to cache key material (it's not stateless like TLS). But I think it would be a pretty workable system.


Publish merkle roots on global ledgers like blockchains.
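For the curious, computing a Merkle root over a set of records is simple; here is a minimal sketch (SHA-256, duplicating the last node on odd-sized levels, which is one common convention):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes pairwise until one root remains. Publishing that
    single root (e.g. on a ledger) commits to every leaf at once."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

records = [b"key1=value1", b"key2=value2", b"key3=value3"]
root = merkle_root(records)
# Any change to any record changes the root:
assert merkle_root([b"key1=CHANGED", b"key2=value2", b"key3=value3"]) != root
```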


Handshake (namebase.io) comes to mind.


DNSSEC doesn't protect you against the American government if you have a .org domain, but I doubt an American court could give Microsoft control over a domain registered under a ccTLD like .de or .ru or .za for example.

I suspect Microsoft would also have trouble taking control of a domain registered under a gTLD run by a company based outside the US, but it would be interesting to see how the agreements between the gTLDs and ICANN would work out in practice.


Technically they could force root nameservers (based in the US) to intercept/proxy the whole gtld.

So all except n (netnod (EU)) and i (WIDE (JP))


>So all except n (netnod (EU)) and i (WIDE (JP))

The US could just drop the records for those.


No, the US could not do that, and there are multiple reasons for it. The root zone is rather special in that operating systems semi-hard-code the root servers. The operating systems also have full control here, and the number of name servers at the root zone changes very slowly. Operating systems developed by people not bound by US courts could just ignore it.

The other reason is political. If they were to cut the EU or Asia out of the list, the risk of a split would increase massively. It would be suicide. If they did that, people might even split the internet further by splitting IANA (the Internet Assigned Numbers Authority), in which case a computer in the EU would be unable to communicate with a computer in the US, and the concept of a global internet would no longer exist. A split is an exceedingly dangerous concept.


I think the hardcoded IPs are typically only used as hints to initially resolve the root-servers.net domains.


Hints are used by the BIND resolver software. It hard-codes the A through M root servers and uses those to initialize a cache. Naturally, the BIND developers could change this behavior; in the case that none of the hints work, the current behavior is to fall back to a static compiled-in list that the software also includes.


Not just BIND; Unbound also. Unbound uses the hardcoded list of IPs to resolve a-m once and build its cache; the hardcoded IPs are never used again.


>DNSSEC doesn't protect you against the American government if you have a .org domain, but I doubt an American court could give Microsoft control over a domain registered under a ccTLD like .de or .ru or .za for example.

What? Obviously they could. ICANN is subject to US law.


This control is indistinguishable from a domain transfer, so this is trivially true.

Zones not under their control, however, are not vulnerable to this. So compared to the current system it would be an improvement.


So fucking what? It's the equivalent of a corporation invading and seizing control of an entire country because some people living there are doing it harm.


That's like your landlord handing the keys to your condo to the bully upstairs because you have a cockroach problem.


More like your landlord handing the keys to your condo to the bully upstairs because somebody else on your floor has a cockroach problem.


Or to be more precise, the keys to every condo in the building.


It's like a judge ordering you to hand over your keys to the person living underneath because you have a water leak you refuse to fix.

Perhaps the water leak was caused by someone else, but it's still in your apartment.


1. That would still be ridiculous.

2. The water leak isn't actually in the apartment if we're keeping this accurate to domains. Maybe the only phone the plumber will listen to is in the apartment.

3. As someone else already said, the judge is handing over the keys to the entire building.


>1. That would still be ridiculous.

How would it be ridiculous? A water leak in your flat is causing damage to the flat below yours, it's your duty to address this. If you do not address this, someone will in fact go to court and take control of your flat.

This is something that happens all the time in cases where compliance with specific performance orders seems unlikely.


I have a domain on no-ip.org

I remember when this happened and I was trying to debug why I couldn't reach my home server.



