The position simply has not been filled, like hundreds of other positions in government that had traditionally been filled over the past four decades.
Thanks for checking this!
Unless you read only one news source, there is no debate as to why positions are not filled. It comes down to one of two reasons: 1. at this point nobody wants to work with you, or 2. you don't even know who to ask.
I would love to see evidence for your claim.
From my sources, the White House hasn't put forth anyone for these positions.
Do you guys just blame the Democrats for everything?
It seems dishonest.
There has never in US history been more stonewalling than there was during 6 years of the Obama presidency.
So it’s relevant to mention DNSSEC because it’s spoken of within the government as a method of securing DNS, but in reality it makes things worse. It doesn’t protect against attacks like these, and the end result is that you’re forced to use a crappy system that makes you even more vulnerable to these sorts of attacks. It’s the worst of all worlds.
(Also, I was on the call they held for CIOs and tech leaders of all agencies, and it was astonishing how many of the questions in their chat made it obvious that many of the entrenched players didn’t even understand how this attack worked.)
I wrote a framework at work that maintains authoritative DNS configuration truth (static, and advanced GSLB variations) as text files going through git and regular PRs, auditing, etc.
The truth is then pushed out via APIs to multiple DNS providers (active-active), Akamai being one of them. Nobody's APIs are perfect, but I don't recall it being too difficult to push the state to their system.
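Roughly, the push step looks like this (a minimal sketch; the provider endpoints and per-zone JSON layout here are made up for illustration, not the actual framework):

    import json
    import pathlib
    import requests

    # Hypothetical layout: desired state lives in git as one JSON file per
    # zone, e.g. zones/example.gov.json -> [{"name": "www", "type": "A", ...}]
    PROVIDERS = [
        "https://dns-api.provider-a.example/v1",  # hypothetical endpoints
        "https://dns-api.provider-b.example/v1",
    ]

    def push_zone(zone_file: pathlib.Path) -> None:
        zone = zone_file.stem
        records = json.loads(zone_file.read_text())
        for base in PROVIDERS:
            # Replace the zone's records wholesale so git stays the single
            # source of truth across both providers (active-active).
            resp = requests.put(f"{base}/zones/{zone}/records",
                                json=records, timeout=30)
            resp.raise_for_status()

    for zf in sorted(pathlib.Path("zones").glob("*.json")):
        push_zone(zf)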
Additionally, for a government agency you could have dozens of contractor teams, and you need a method of securing access for all of them. It’d be much easier to just use Route53 and delegated subdomains, since most teams already deploy and manage their infra there (with MFA).
But it’s not allowed, because DNSSEC.
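For what it's worth, the delegation pattern I mean is only a few API calls; a sketch with boto3 (the zone names and parent zone ID are placeholders):

    import uuid
    import boto3

    r53 = boto3.client("route53")

    # 1. Create a hosted zone for the contractor team's subdomain.
    sub = r53.create_hosted_zone(Name="teamA.agency.gov",
                                 CallerReference=str(uuid.uuid4()))
    ns_records = sub["DelegationSet"]["NameServers"]

    # 2. Delegate to it from the parent zone with NS records; the team
    #    then manages its own zone under its own (MFA-protected) account.
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # placeholder id for the agency.gov zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "teamA.agency.gov",
                "Type": "NS",
                "TTL": 300,
                "ResourceRecords": [{"Value": ns} for ns in ns_records],
            },
        }]},
    )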
How f difficult is it to sign a zone?
DNSSEC isn't perfect (indeed it only provides assurances of record integrity and doesn't secure the channel), but it's certainly better than no signature at all.
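And for the record, the signing step itself really is small; a sketch of the classic BIND workflow wrapped in Python (the zone name and file are placeholders; the real operational cost is key rollover and getting the DS record into the parent, not these commands):

    import subprocess

    ZONE = "example.gov"          # placeholder
    ZONE_FILE = "db.example.gov"  # plain-text zone file for the zone above

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Generate a zone-signing key and a key-signing key with BIND's tools.
    run("dnssec-keygen", "-a", "ECDSAP256SHA256", ZONE)
    run("dnssec-keygen", "-a", "ECDSAP256SHA256", "-f", "KSK", ZONE)

    # Sign the zone; -S ("smart signing") picks up the keys generated
    # above from the current directory and writes db.example.gov.signed.
    run("dnssec-signzone", "-S", "-o", ZONE, ZONE_FILE)

    # What's left: publish the DS record at the parent, serve the .signed
    # file, and re-sign before the RRSIGs expire.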
If route53 can't, won't, or doesn't have to implement DNSSEC because they don't want to, then route53 is not suitable for .gov and .mil domains.
It's really that simple.
I get that you want to use terraform; I don't see why you think route53 is the only DNS that terraform works with.
For high-priority domain names I would go so far as to recommend against having DNS control panel accounts on things like Route53. It has access control management, but unless the department has the experience and skills, it will likely end up with a single account that has full access and little oversight when something goes wrong. Those departments are likely better served going through a domain management company that handles all the security and manages who can request changes and how. If they do have the experience and skills, then managing it fully themselves can be a better option, but then what they really need is a registrar which allows for DNSSEC keys to be forwarded (with, for example, a CDS record).
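(CDS, from RFC 7344, is an ordinary record type the child zone publishes so a cooperating registrar/registry can pick up key changes; checking for one is a few lines with dnspython, placeholder domain below:)

    import dns.resolver  # dnspython

    # A CDS record (RFC 7344) is how a child zone signals its desired DS
    # record to the parent, automating the DNSSEC key handoff.
    try:
        for rr in dns.resolver.resolve("example.gov", "CDS"):  # placeholder
            print(rr)
    except dns.resolver.NoAnswer:
        print("zone publishes no CDS record")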
Your contracting team may not have a DNS account, but the contractor that deals with DNS changes certainly does.
The point is that all infrastructure really should be managed with code. If you managed DNS from Terraform, you’d notice that your production DNS system was in conflict with what your code said it should be the next time a terraform plan/apply was run (which happens all the time from dev or CI). Without it you could go months or years without noticing a changed entry on a little-used but critical system. From what I remember, this attack was ongoing for a couple of years before it was noticed.
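The same drift check is easy to approximate even outside Terraform; a minimal dnspython sketch (the expected-records table stands in for whatever your code says the zone should contain):

    import dns.resolver  # dnspython

    # Stand-in for the declared state your IaC repo holds.
    EXPECTED = {
        ("mail.example.gov", "A"): {"192.0.2.10"},        # placeholder values
        ("example.gov", "MX"): {"10 mail.example.gov."},
    }

    for (name, rtype), want in EXPECTED.items():
        got = {rr.to_text() for rr in dns.resolver.resolve(name, rtype)}
        if got != want:
            # This is the alarm the victims here went years without.
            print(f"DRIFT {name} {rtype}: expected {want}, got {got}")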
> The point is that all infrastructure really should be managed with code
That I agree with 100%. If the agency itself has an IT department with experience and knowledge, then the best choice is for them to do it themselves, with either something like Terraform or their own physical hardware running authoritative DNS. Agencies involved with critical infrastructure, personal information, or state secrets should always have a security risk analysis done which describes what would happen if someone broke into DNS and where the vulnerabilities are. Sadly, very few agencies do this.
Other agencies are likely better served by updating their procurement with respect to DNS management and making sure the requirements list liability and high security. Rather than some cheap bulk registrar with a do-it-yourself control panel, where the registrar has zero liability if the credentials are leaked, it is better to pay a more costly management company that manages the infrastructure with code.
I don't think it's difficult to imagine that, somewhere within the massive federal bureaucracy, there is a team that works on DNS record tickets filed by other parts of the bureaucracy, and logs into a web console with a DNS hosting provider in order to fulfill those tickets.
This makes all sorts of other things worse, like generating TLS certs with automation: typically, validation for cert issuance is done via a DNS entry to prove ownership, but if creating that entry requires a support ticket, no automation will work here.
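Concretely, the DNS half of ACME's dns-01 challenge (RFC 8555 §8.4) is a single TXT record the client is supposed to publish automatically; a sketch of what it computes (the token and account-key thumbprint are placeholder values):

    import base64
    import hashlib

    # Placeholders: the CA supplies the token, and the thumbprint is
    # derived from your ACME account key (RFC 8555 §8.1, RFC 7638).
    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
    thumbprint = "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"

    key_authorization = f"{token}.{thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    txt_value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

    # Publish, then ask the CA to validate:
    #   _acme-challenge.example.gov. 300 IN TXT "<txt_value>"
    print(txt_value)

If publishing that record means filing a ticket and waiting, the whole renew-every-90-days flow falls apart.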
Even without widespread client support for DNSSEC (and the AD bit doesn’t cut it), it would be quite nice if CA/B rules required that CAs enforce DNSSEC (if enabled for the domain) when checking CAA records. This would impose no burden at all on web browsers and only a minimal burden on CAs. Unfortunately, since Route53 and the like mostly don’t support DNSSEC, getting the benefit for domain owners is a pain in the neck.
RFC 6844 seems to recommend but not require this validation. I don’t know whether newer rules require it.
edit: Unsurprisingly, people are working on this: https://tools.ietf.org/html/draft-ietf-acme-caa-06, section 5.6, for example.
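(For illustration, here's roughly what a CAA lookup with DNSSEC in the picture looks like from a stub's point of view, using dnspython and a placeholder domain; a real CA would validate the chain itself rather than trust a resolver's AD bit, which is exactly the "AD bit doesn't cut it" problem:)

    import dns.flags
    import dns.message
    import dns.query

    # Ask a validating resolver for CAA with the DO bit set, then check
    # the AD bit, i.e. whether the resolver validated the answer.
    q = dns.message.make_query("example.gov", "CAA", want_dnssec=True)
    resp = dns.query.udp(q, "8.8.8.8", timeout=5)

    for rrset in resp.answer:
        print(rrset)
    print("validated by resolver:", bool(resp.flags & dns.flags.AD))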
And now TXT records used for domain validation are becoming quite large as well. You add a ~hundred byte TXT record for each service you want to validate your domain against, but they all want the entry to go in the same place (at the root), so you do the DNS TXT query for the root and you get a big response. Adding DNSSEC on top of that might even blow the practical limit for UDP EDNS0 responses sometimes, never mind the DDoS potential.
There may be a solution for that one -- specify a location for each TXT record instead of everyone using the root, so Facebook uses _facebook-domain-verification.[example.com], Google uses _google-site-verification.[example.com], etc. Then you wouldn't get a dozen large TXT records in response to a single query because they're each at a different name. But that's not currently what companies are doing, and a purist wouldn't like it because in principle controlling _google-site-verification.example.com doesn't mean you control all of example.com.
For example, the large TXT records can prevent mail delivery from some versions of qmail, including the one currently packaged with Debian stable, because it makes an ANY query for the domain (to avoid separate queries for MX, A, and CNAME) but only supports UDP and only supports responses up to the 512 octets specified in RFC 1035. Then it gets a truncated response when there are >512 octets of TXT records and considers it a name resolution failure.
Making the ANY query is an unusual quirk in qmail, but it could have just as easily been an actual TXT query looking for the SPF record etc.
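You can see how close a zone is to that cliff directly; a quick dnspython probe (placeholder domain) that mimics a UDP-only, no-EDNS0 client:

    import dns.flags
    import dns.message
    import dns.query

    # Mimic an RFC 1035-era client asking for the apex TXT records.
    # make_query defaults to no EDNS0, i.e. the classic 512-byte UDP limit.
    q = dns.message.make_query("example.gov", "TXT")  # placeholder name

    resp = dns.query.udp(q, "8.8.8.8", timeout=5, ignore_trunc=True)
    print("response size:", len(resp.to_wire()), "bytes")
    print("truncated (TC bit):", bool(resp.flags & dns.flags.TC))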
CA/B Baseline Requirements section 3.2.2.8 says to do RFC 6844, and that CAs need to implement at least enough DNSSEC to determine whether "the domain's zone does not have a DNSSEC validation chain to the ICANN root", because if there is no DNSSEC validation chain you can treat DNS failure as permission.
In practice most larger CAs just do DNSSEC entirely here; Let's Encrypt uses it everywhere. Deployability to ordinary end-user clients remains poor due to middleboxes and local DNS implementations that are garbage, but if you actually control your network (as a competent CA should) then it isn't a problem.
The sort of CAs and CA-customer relationships where DNSSEC was argued to be problematic aren't going to be doing ACME CAA, or probably ACME at all.
[Edited to delete me confusing one ID for another]
Several other major CAs, including DigiCert, do DNSSEC validation, but I haven't used them and so can't tell you from my own experience how well it works; presumably some of their customers would have noticed if it didn't.
Now, are they all doing a "good job"? If you actually were paying any attention at all in this space you'd presumably have quoted my already published opinions about that. I think they should retain the DNS responses the same way they would keep the actual raw data from an HTTP validation. So third parties doing incident investigation can do an effective post mortem - I also think they use too many "short cuts" that are going to result in someone finding a nasty bug one of these days and for which there's no real evidence they're necessary.
But since it seems for you a "good job" is just using something operationally, then yeah, in that limited Thomas Ptacek sense of "good job" they're already doing a good job I guess.
(Obviously, just to point something out for the thread that you already know, the vast, overwhelming majority of LetsEncrypt issuances are for zones without DNSSEC signatures).
> I haven't used them and so can't even tell you from my own experience how well they work
... I'm not sure what my "confirmation" would tell you, beyond that I know how to read the paperwork from the CAs. But sure, both Sectigo and DigiCert say their systems should deny issuance on DNSSEC failure.
Not an endorsement of DNSSEC, just that it’s not impossible to support it.
It's not my claim that DNSSEC is what enabled this attack in the first place. But there's no way I'm going to let the irony pass: USG agencies laboriously configuring their security-theater DNSSEC protocol while not being able to defend their registrar accounts or get basic MFA deployed.
They are both invalidated through the same mechanisms. It would be nice if there were reasonable limits on TTLs, but I'm not getting invited to those meetings.
There's a reason that 'tptacek called it "security theater". Its implementors could better spend that time on actual security measures against things that are much more likely to happen... like defense in depth against credential leaks.
Or do you perhaps mean DNS over (TLS|HTTPS)? I never saw that as a complete replacement for DNSSEC; it provides transport security, yes, but how do we know we aren't talking to a malicious resolver? Maybe that's not as much of a threat if people aren't using DNS servers from their ISP -- which sometimes inject ads or otherwise tamper with traffic.
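(For comparison, DoH is just DNS over an HTTPS transport; e.g. the JSON flavor below, which secures the path to the resolver but says nothing about whether the resolver itself is honest:)

    import requests

    # Query Google's public DoH JSON endpoint. TLS stops on-path tampering,
    # but you are fully trusting dns.google to return honest answers.
    resp = requests.get("https://dns.google/resolve",
                        params={"name": "example.gov", "type": "A"},
                        timeout=5)
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["type"], answer["data"])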
This whole notice really boils down to "seriously, people, if you lose control of your domain registrar account, all bets are off"
In fact, at this point, DNSSEC has very little to do with securing the DNS (as it is used by the rest of the Internet) at all; rather, it's a self-justifying self-rationalizing mechanism for building random new things nobody uses on top of the DNS. So it won't protect your DNS records, but it will try its hardest to make sure the security of your SSH sessions somehow depend on the security of those records.
Beyond that, it's all a clerical matter of being better or worse at verifying identity at the level of the registrar and the NIC.
Maybe a third party could start implementing something like this (maybe just a directory of domains validated in the way we do that for TLS), and it could be grafted to a NIC's infrastructure (maybe I'll talk to CIRA about it), and then maybe ICANN would pay attention after that.
I'd be up for it, if a handful of other people would try as well.
Link chain that I followed:
1) op — https://cyber.dhs.gov/blog/#why-cisa-issued-our-first-emerge...
2) first link in op, "emergency directive" — https://cyber.dhs.gov/ed/19-01
3) footnote #1 in the emergency directive — https://www.us-cert.gov/ncas/current-activity/2019/01/10/DNS...
4) FireEye research referenced on the US CERT page — https://www.fireeye.com/blog/threat-research/2019/01/global-...
>Using techniques that aren’t especially innovative, we know they can intercept and manipulate legitimate traffic, make services unavailable or cause delay, harvest information like credentials or emails, or cause a range of other malicious activities.
>We know that this type of attack isn’t something many organizations monitor for or have tight controls around.
Interesting that they've specifically identified a case of this ongoing in a coordinated manner. Makes sense why it'd be an "emergency directive" in this case.
That said, I was unaware of the Talos publication last week, so I will read that, though I assume it's the same surface-level fluff.
Domain registrars can and SHOULD implement multi-factor authentication.
Are there domain registrars that support FIDO/U2F or the new W3C WebAuthn spec?
Credentials and blockchains (and biometrics): https://gist.github.com/westurner/4345987bb29fca700f52163c33...
ACME / LetsEncrypt certs expire after 3 months (*) and require various proofs of domain ownership:
Certs on the Blockchain:
"Can we merge Certificate Transparency with blockchain?" https://news.ycombinator.com/item?id=18961724
Namecoin (decentralized blockchain DNS): https://en.wikipedia.org/wiki/Namecoin
DNS over HTTPS: https://en.wikipedia.org/wiki/DNS_over_HTTPS
DNS over TLS: https://en.wikipedia.org/wiki/DNS_over_TLS