Why CISA Issued Our First Emergency Directive (dhs.gov)
211 points by ca98am79 on Feb 16, 2019 | 72 comments

Considering the top two cybersecurity roles in the US government have been vacant since mid-2018, it's fortunate that there are still some people focused on the area.

The "Cyber Czar" position was cancelled in 2018 when Rob Joyce went back to NSA.

The position was not cancelled. Tom Bossert had the position and resigned, and Rob Joyce served in an acting role from April 10, 2018 to May 31, 2018 (after which he returned to the NSA).

The position simply has not been filled properly, like hundreds of other positions in government that have traditionally been filled in the past 4 decades.

The position was cancelled by executive order. Politics are one thing, but the facts in this case are materially different from what you state here.

My understanding is that Trump is having so much trouble getting confirmations through Congress that he's basically stopped trying except for essentials, and the government has never treated tech-related positions as essential.

The previous administration took cyber sec & tech of all types quite seriously. Obama visited Silicon Valley many times and fully stocked the cyber oversight positions, as well as continuing to exploit cyber weapons. So "the gov" is not quite right. Depends on which gov.

He’s had few difficulties. Executive positions simply aren’t being filled, for some reason. He’s proposed many judges. AFAIK every name proposed for any position has been approved by the senate.

I am not from the US, but decided to google quickly about this... and you are wrong, and the downvoted guy is seemingly right:


Hmm, interesting. The high profile ones seem to go through without any more than a ceremonial objection so those less prominent ones are surprising.

Thanks for checking this!

I would imagine it is difficult to fill positions when you're a Washington outsider and when you reject the opinions of insiders. That of course assumes you have time in your schedule between executive time blocks.

Unless you read only one news source, there is no debate as to why positions are not filled. It is because of one of two reasons: 1. at this point nobody wants to work with you, or 2. you don't even know who to ask.

What do you base this understanding on?

Oh, you mean the Republican-controlled Senate that confirms presidential appointments?

I would love to see evidence for your claim.

From my sources, the White House hasn't put forth anyone for these positions.

Even though it was controlled by Republicans, there are ways for the party not in power to "slow walk" things.

Not accurate at all. The Senate has all the party-line votes it needs. There's no 'slow walking' around that. The reason the appointees aren't there is because the Trump admin hasn't appointed them.

Do you guys just blame the Democrats for everything? It seems dishonest.

As a different sibling commenter pointed out, it's very easy to mess with Senate proceedings even as a minority party. Historically the minority party hasn't done this often; even the Republicans under Obama in 2008-2010 didn't play the games the current Dems are playing with Trump.


Sorry, but from the earliest point in Obama's presidency, the Republican side of Congress stated that it was their goal to make him a one-term president.

There has never in US history been more stonewalling than there was during 6 years of the Obama presidency.

Can't agree with this more. The Repubs compromised on nothing with Obama in office.

Fun detail: their first-ever "emergency directive", owing to a rash of DNS hijacking attacks, and not one mention of DNSSEC, despite the fact that the US Government requires agencies to use it. And they do! Pull a list of random .GOV or .MIL domains and check if they have DNSSEC --- most will! And it did fuck-all to stop this attack.

This attack vector is not something DNSSEC was ever intended to defend against, so why would it be mentioned? Once the registrar account associated with a domain is pwned the attacker can do pretty much anything they want with it, including obtain fraudulent certificates.

As someone who has recently dealt with a major government agency, the requirement of DNSSEC means that agencies can't use Route53, as it doesn't support DNSSEC. This makes it much less likely that an agency will use something like Terraform for managing DNS, which would help in making sure all records are validated and easily auditable. Instead, many DNS records are managed by hand or with bespoke systems. The best case is that it's managed by maybe Akamai, but records are still updated via support tickets and implemented by hand (as Akamai's Terraform support is pretty bad).

So it's relevant to mention DNSSEC because it's spoken of within the government as a method of securing DNS, but in reality it makes things worse. It doesn't protect against attacks like these, and the end result is you are forced to use a crappy system that makes you even more vulnerable to these sorts of attacks. It's the worst of all worlds.

(Also, I was on the call they held for CIOs and tech leaders of all agencies, and it was astonishing how many of the questions in their chat made it obvious that many of the entrenched players didn’t even understand how this attack worked.)

Tangential, but since you mentioned Akamai.

I wrote a framework at work that maintains authoritative DNS configuration truth (static, and advanced GSLB variations) as text files going through git and regular PRs, auditing, etc.

The truth is then flushed out via APIs to multiple DNS providers (active-active), Akamai being one of them. Nobody's APIs are perfect, but I don't recall it being too difficult to push the state to their system.
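A minimal sketch of that diff-and-push model, assuming a simple "name type value" text format for the reviewed truth (the format, names, and provider side are all hypothetical; real provider APIs are elided):

```python
# Sketch of "git as source of truth" DNS management: parse the reviewed
# text file, diff it against what a provider currently serves, and plan
# the changes to push. The record format here is a made-up illustration.

def desired_records(zone_file_text):
    """Parse simple 'name type value' lines from a reviewed text file."""
    records = set()
    for line in zone_file_text.splitlines():
        line = line.split("#")[0].strip()  # allow trailing comments
        if line:
            name, rtype, value = line.split(None, 2)
            records.add((name, rtype, value))
    return records

def plan(desired, actual):
    """Compute the changes needed to make 'actual' match 'desired'."""
    return {"add": desired - actual, "remove": actual - desired}

# Example: the reviewed file says www should point at 192.0.2.10,
# but the provider still serves an old address.
desired = desired_records("www A 192.0.2.10\nmail MX 10 mx.example.com")
actual = {("www", "A", "192.0.2.99"), ("mail", "MX", "10 mx.example.com")}
changes = plan(desired, actual)
```

The same `plan` output would then be fed to each provider's API in turn, which is what makes the active-active multi-provider setup tractable.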

For sure, you can totally make something like this happen, but again that means writing and securing a bespoke system. If you’re a team managing your AWS infrastructure with Terraform, it sucks to have to write a totally different system (and all the secrets management that goes along with it).

Additionally, for a government agency you could have dozens of contractor teams, and you need a method of securing access for all of them. It'd be much easier to just use Route53 and delegated subdomains, as all teams are largely deploying and managing infra (with MFA) there.

But it’s not allowed, because DNSSEC.

Is there anything that would make it especially challenging for Route53 to support DNSSEC, or have they just not chosen to implement it?

I don’t work for AWS, but I’ve talked to people that work on Route53. Amazon just isn’t willing to support it, none of their large customers care about it enough it seems like. Plus DNSSEC has all sorts of issues, some of which people in the thread have mentioned. Actually, tptacek has a blog post that summarizes a lot of what’s wrong with DNSSEC iirc.

This unwillingness may indicate a lack of competency, complacency as a market leader, or complicity with censoring regimes.

How f difficult is it to sign a zone?

DNSSEC isn't perfect (indeed it only provides assurances of record integrity and doesn't secure the channel), but it's certainly better than no signature at all.

If route53 can't or won't or doesn't have to because they don't want to implement DNSSEC, route53 is not suitable for .gov and .mil domains.

It's really that simple.

I get that you want to use terraform; I don't see why you think route53 is the only DNS that terraform works with.

The article/directive specifically called out DNS account passwords on DNS hosting providers. If they were using support tickets and manually implementing changes by hand, then there would not be an account. With no account for a control panel, you don't have any risk that the email and password get dumped on Pastebin. The directive even speaks about password managers, which again implies that the issue they are talking about is DNS control panels like the one at Route53, likely with reused passwords, and not bespoke systems.

For high-priority domain names I would go so far as to recommend against having DNS control panel accounts on things like Route53. It has access control management, but unless the department has the experience and skills, it will likely end up with a single account that has full access and little oversight when something goes wrong. Those departments are likely better served going through a domain management company that handles all the security and manages who can request changes and how. If they have the experience and skills, then managing it fully themselves can be a better option, but then what they really need is a registrar which allows for DNSSEC keys to be forwarded (with, for example, a CDS record).

“If they were using support tickets and manually implemented changes by hand then there would not be an account.”

Your contracting team may not have a DNS account, but the contractor that deals with DNS changes certainly does.

The point is that all infrastructure really should be managed with code. If you managed DNS from Terraform, you'd notice that your production DNS system was in conflict with what your code said it should be the next time a terraform plan/apply was run (which is happening all the time from dev or maybe CI). Without it you could go months or years without noticing a changed entry on a little-used but critical system. From what I remember, this attack was ongoing for a couple of years before it was noticed.

If they are using contractors that have DNS accounts with email and password as authentication, and who reuse that on other online websites which later get leaked, then that is a major failure in procurement when the bid was initially created.

> The point is that all infrastructure really should be managed with code

That I agree with 100%. If the agency itself has an IT department with experience and knowledge, then the most optimal choice is that they do it themselves, with either something like Terraform or their own physical hardware running authoritative DNS. Agencies that are involved with critical infrastructure, personal information, or state secrets should always have a security risk analysis done which describes what would happen if someone broke into DNS and where the vulnerabilities are. Sadly, very few agencies do this.

Other agencies are likely better served by updating their procurement with respect to DNS management and making sure the requirements list liability and high security. Rather than some cheap bulk registrar with a do-it-yourself control panel, where the registrar has zero liability if the credentials are leaked, it is better to pay a more costly management company that manages the infrastructure with code.

> The article/directive specifically called out DNS account passwords on DNS hosting providers. If they were using support tickets and manually implemented changes by hand then there would not be an account.

I don't think it's difficult to imagine that, somewhere within the massive federal bureaucracy, there is a team that works on DNS record tickets filed by other parts of the bureaucracy, and logs into a web console with a DNS hosting provider in order to fulfill those tickets.

Yes, this is pretty close to how this agency does things. Your contracting team files a support request for a record change and a manager from within the agency approves. The contractor that owns DNS changes then goes and does it.

This makes all sorts of other things bad, like generating TLS certs with automation, as typically a validation for cert generation will be done via a DNS entry to prove ownership. But that requires a support ticket, so no automation will work here.

I wonder why Route53, etc. don’t support DNSSEC.

Even without widespread client support for DNSSEC (and the AD bit doesn’t cut it), it would be quite nice if CA/B rules required that CAs enforce DNSSEC (if enabled for the domain) when checking CAA records. This would impose no burden at all on web browsers and only a minimal burden on CAs. Unfortunately, since Route53 and the like mostly don’t support DNSSEC, getting the benefit for domain owners is a pain in the neck.

RFC 6844 seems to recommend but not require this validation. I don’t know whether newer rules require it.

edit: Unsurprisingly, people are working on this: https://tools.ietf.org/html/draft-ietf-acme-caa-06, section 5.6, for example.
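The CA-side CAA check being discussed is simple to sketch. This is a toy simplification of the RFC 6844 `issue` property semantics, not production validation logic (record tuples and the rule are illustrative):

```python
# Toy CAA evaluation: given a zone's CAA record set, decide whether a CA
# identified by 'ca_domain' may issue for that zone. A simplified sketch
# of RFC 6844 semantics; real CAs also handle tree-climbing, 'issuewild',
# flags, and (ideally) DNSSEC validation of the lookup itself.

def may_issue(caa_records, ca_domain):
    """caa_records: list of (flags, tag, value) tuples."""
    issue_values = [v for (_, tag, v) in caa_records if tag == "issue"]
    if not issue_values:
        return True  # no 'issue' property present: any CA may issue
    # 'issue ";"' denies everyone, since ";" never equals a CA domain.
    return ca_domain in issue_values

records = [(0, "issue", "letsencrypt.org"),
           (0, "iodef", "mailto:security@example.gov")]
may_issue(records, "letsencrypt.org")   # permitted
may_issue(records, "example-ca.com")    # refused
```

The point upthread is that this check is only as trustworthy as the DNS answer it is based on, which is why requiring DNSSEC validation (when the zone is signed) during the CAA lookup adds real value at near-zero cost to browsers.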

One of the problems with DNSSEC is that it causes the size of DNS responses to increase substantially, which becomes a DDoS vector. The response to this is nominally to use response rate limiting, but the more people who use DNSSEC the less effective that is because it means more DNS servers the attacker can use to reflect their attack through without reaching the rate limit for any one. (RRL is also not ideal because a large DNS cache can hit the rate limit by just making a large number of legitimate queries.) And the more servers that use DNSSEC the harder to block a DDoS attack because it comes from a larger number of unique addresses.

And now TXT records used for domain validation are becoming quite large as well. You add a ~hundred byte TXT record for each service you want to validate your domain against, but they all want the entry to go in the same place (at the root), so you do the DNS TXT query for the root and you get a big response. Adding DNSSEC on top of that might even blow the practical limit for UDP EDNS0 responses sometimes, never mind the DDoS potential.

There may be a solution for that one -- specify a location for each TXT record instead of everyone using the root, so Facebook uses _facebook-domain-verification.[example.com], Google uses _google-site-verification.[example.com], etc. Then you wouldn't get a dozen large TXT records in response to a single query because they're each at a different name. But that's not currently what companies are doing, and a purist wouldn't like it because in principle controlling _google-site-verification.example.com doesn't mean you control all of example.com.
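The size arithmetic behind both problems is easy to illustrate (the byte counts below are rough assumptions, not measurements of any real zone):

```python
# Illustrative arithmetic for TXT-record bloat at the zone apex.
# Sizes are assumptions for the sketch, not measured values.

TXT_RECORD_BYTES = 100       # ~one verification token per service
RFC1035_UDP_LIMIT = 512      # classic UDP payload limit (RFC 1035)
EDNS0_TYPICAL_LIMIT = 1232   # a commonly recommended EDNS0 buffer size

def apex_response_size(num_services):
    """All tokens at the apex: one TXT query returns all of them."""
    return num_services * TXT_RECORD_BYTES

def per_service_response_size():
    """One token per service-specific name: each query returns one record."""
    return TXT_RECORD_BYTES

# A dozen services at the apex already blows past the RFC 1035 limit
# (and DNSSEC signatures would only add to this)...
apex = apex_response_size(12)
# ...while scoping each token under its own name keeps responses small.
scoped = per_service_response_size()
```

This is why the per-service-name convention helps both the truncation problem and the amplification problem: response size stops growing with the number of services you validate against.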

For at least some of this, TCP could be a decent solution. For CAA queries, for example, no one really cares how fast the query is, so a server could plausibly refuse to answer over UDP.

That works for the thing expecting the large response, but requires an unusual custom configuration on the server (refuse UDP queries for specific record types but not others), and doesn't really help other clients that may make queries that yield large responses unexpectedly.

For example, the large TXT records can prevent mail delivery from some versions of qmail, including the one currently packaged with Debian stable, because it makes an ANY query for the domain (to avoid separate queries for MX, A and CNAME) but only supports UDP and only supports responses up to the 512 octets specified in RFC1035. Then it gets a truncated response when there are >512 octets of TXT records and considers it a name resolution failure.

Making the ANY query is an unusual quirk in qmail, but it could have just as easily been an actual TXT query looking for the SPF record etc.

Route53 was very slow to even enable CAA itself. They are not exactly at the bleeding edge of DNS features.

CA/B Baseline Requirement says to do RFC 6844, and that CAs need to implement at least enough DNSSEC so that they can determine whether "the domain's zone does not have a DNSSEC validation chain to the ICANN root" because if there is no DNSSEC validation chain you can treat DNS failure as permission.

In practice most larger CAs just do DNSSEC entirely here, Let's Encrypt uses it everywhere. Deployability to ordinary end user clients remains poor due to middle boxes and local DNS implementations that are garbage, but if you actually control your network (as a competent CA should) then it isn't a problem.

The sort of CAs and CA-customer relationships where DNSSEC was argued to be problematic aren't going to be doing ACME CAA, or probably ACME at all.

[Edited to delete me confusing one ID for another]

I don't believe that this description of DNSSEC functionality inside CAs is accurate. Can you be specific about which CAs you believe do a good job of validating DNSSEC --- not just somehow "running" it, but making use of it operationally?

The largest CA by volume, Let's Encrypt, uses DNSSEC for all its validations, and hard fails issuance if it can't get either a DNSSEC validation or the signed denial saying this part of the hierarchy isn't signed, it has done this for years now. Works well.

Several other major CAs including DigiCert do DNSSEC validation but I haven't used them and so can't even tell you from my own experience how well they work, though it seems likely some of their other customers might have noticed on that end.

Now, are they all doing a "good job"? If you actually were paying any attention at all in this space you'd presumably have quoted my already published opinions about that. I think they should retain the DNS responses the same way they would keep the actual raw data from an HTTP validation. So third parties doing incident investigation can do an effective post mortem - I also think they use too many "short cuts" that are going to result in someone finding a nasty bug one of these days and for which there's no real evidence they're necessary.

But since it seems for you a "good job" is just using something operationally, then yeah, in that limited Thomas Ptacek sense of "good job" they're already doing a good job I guess.

Can you confirm another CA other than LetsEncrypt that will reliably deny issuance on a DNSSEC failure?

(Obviously, just to point something out for the thread that you already know, the vast, overwhelming majority of LetsEncrypt issuances are for zones without DNSSEC signatures).

Given that I already wrote that...

> I haven't used them and so can't even tell you from my own experience how well they work

... I'm not sure what my "confirmation" would tell you, beyond that I know how to read the paperwork from the CAs. But sure, both Sectigo and DigiCert say their systems should deny issuance on DNSSEC failure.

I'm asking because I want to know, not to make a point.

Just stopped by to mention that Google’s Cloud DNS supports DNSSEC. You should be able to configure it via Terraform.

Not an endorsement of DNSSEC, just that it’s not impossible to support it.

As Colm MacCárthaigh noted on Twitter, not only does DNSSEC not defend against the most likely and damaging DNS attacks (hijacking between clients and servers, and hijacking at the registrar), it actually makes things worse: an attacker who has achieved the vantage point these attackers have can use DNSSEC to semipermanently poison records by using it to sign high-TTL records.

It's not my claim that DNSSEC is what enabled this attack in the first place. But there's no way I'm going to pass up the irony of USG agencies laboriously configuring their security-theater DNSSEC protocol while not being able to defend their registrar accounts or get basic MFA deployed.

Hijacked high-TTL records for signed zones aren't more dangerous than hijacked high-TTL records for unsigned zones. Why do you claim that they are?

They are both invalidated through the same mechanisms. It would be nice if there were reasonable limits on TTLs, but I'm not getting invited to those meetings.

Effort spent on DNSSEC might have been better spent on e.g. CT.

CT: extensively remarked on in this advisory.

Yeah I read it; I was highlighting this for parent.

Sorry, was emphasizing, not correcting.

Always a pleasure reading your rants on DNSSEC. How the hell was DNSSEC supposed to stop an attack against leaked credentials?

It wasn't; that's not the point. The point is that it's expensive security theater that fails comprehensively in the face of real-world DNS attacks.

It failed in this case, so it must equally fail in every case? I don't buy that.

No. The assertion is that it does fail to provide a benefit in practical, real-world cases, but defends against a fairly niche attack.

There's a reason that 'tptacek called it "security theater". Its implementors could better spend that time on actual security measures for things that are much more likely to happen...like defense in depth for credential leaking.

It protects against targeted DNS poisoning, but that is already covered by TLS.

How is that protected by TLS? Wouldn't you need a good A/AAAA record to make a connection to a TLS server?

Or do you perhaps mean DNS over (TLS|HTTPS)? I never saw that as a complete replacement for DNSSEC; it provides transport security, yes, but how do we know we aren't talking to a malicious resolver? Maybe that's not as much of a threat if people aren't using DNS servers from their ISP -- which sometimes inject ads or otherwise tamper with traffic.

DNSSEC is not going to do anything if you have the username/password for your domain registrar account hijacked, and somebody logs in and changes the authoritative nameservers for your domain.

This whole notice really boils down to "seriously, people, if you lose control of your domain registrar account, all bets are off"

To be fair, most people by now are aware that DNSSEC is not just a PITA to set up, but also basically useless; not to mention it was not designed to do anything about attacks of this type.

That's the problem: it was designed to deal with the IETF namedroppers favorite attack (server-server cache poisoning, a la Eugene Kashpureff) and not any part of the real-world threat model for DNS.

In fact, at this point, DNSSEC has very little to do with securing the DNS (as it is used by the rest of the Internet) at all; rather, it's a self-justifying self-rationalizing mechanism for building random new things nobody uses on top of the DNS. So it won't protect your DNS records, but it will try its hardest to make sure the security of your SSH sessions somehow depend on the security of those records.

An actual good solution wouldn't really be that complex to implement. You basically just need to sign a record or set of complementary records (for example, all of your NS records), make it reasonably simple to select and return those signatures from a recursive resolver, and then build a certificate hierarchy the exact shape of the DNS itself. You install the public key for the super seekret ICANN cert, and the client either trusts the recursive resolver or caches the TLD certs and recursively resolves those, or some combination of the two.

Beyond that, it's all a clerical matter of being better or worse at verifying identity at the level of the registrar and the NIC.

Maybe a third party could start implementing something like this (maybe just a directory of domains validated in the way we do that for TLS), and it could be grafted to a NIC's infrastructure (maybe I'll talk to CIRA about it), and then maybe ICANN would pay attention after that.

I'd be up for it, if a handful of other people would try as well.
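The "sign a record set" core of that design can be sketched in a few lines. HMAC stands in here for a real asymmetric signature (a real scheme would let resolvers verify with only a public key); the canonical form and key handling are assumptions for the sketch:

```python
import hashlib
import hmac

# Toy illustration of signing a record set. A real design would use an
# asymmetric signature so that resolvers only need the zone's public
# key; HMAC is a stdlib stand-in to keep the sketch self-contained.

ZONE_KEY = b"example-zone-signing-key"  # hypothetical key material

def canonical(records):
    """Deterministic byte form of a record set, so signatures are stable
    regardless of the order records were returned in."""
    return "\n".join(sorted(records)).encode()

def sign(records, key=ZONE_KEY):
    return hmac.new(key, canonical(records), hashlib.sha256).hexdigest()

def verify(records, signature, key=ZONE_KEY):
    return hmac.compare_digest(sign(records, key), signature)

ns = ["example.com. NS ns1.example.net.", "example.com. NS ns2.example.net."]
sig = sign(ns)
verify(ns, sig)                                        # intact set checks out
verify(ns + ["example.com. NS evil.attacker."], sig)   # tampered set fails
```

As the parent says, the hard part isn't this mechanism; it's the clerical work of distributing and trusting the keys down the registrar/NIC hierarchy.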

There are an awful lot of people who are unaware that DNSSEC is basically useless, unfortunately.

tl;dr: Because Iranian hackers were hijacking DNS records for government sites.

Where is Iran mentioned?

It's buried in links, but the FireEye research referenced by US CERT discusses it.





Link chain that I followed:

1) op — https://cyber.dhs.gov/blog/#why-cisa-issued-our-first-emerge...

2) first link in op, "emergency directive" — https://cyber.dhs.gov/ed/19-01

3) footnote #1 in the emergency directive — https://www.us-cert.gov/ncas/current-activity/2019/01/10/DNS...

4) FireEye research referenced on the US CERT page — https://www.fireeye.com/blog/threat-research/2019/01/global-...

>We know an active attacker is targeting government organizations.

>Using techniques that aren’t especially innovative, we know they can intercept and manipulate legitimate traffic, make services unavailable or cause delay, harvest information like credentials or emails, or cause a range of other malicious activities.

>We know that this type of attack isn’t something many organizations monitor for or have tight controls around.

Interesting that they've specifically identified a case of this ongoing in a coordinated manner. Makes sense why it'd be an "emergency directive" in this case.

Seems reasonable enough, steps that should have already been taken (and may have been already for certain sectors of the government). Pretty level headed “directive,” I definitely welcome it.

Nine out of ten times, it's bad passwords.

Or shared credentials, etc.

or Putin

Not phishing?

There isn't enough detail here. Which agencies gave an unknown (still unknown even now!) actor or actors access to all information that the public passed to them over the internet? What harms has the public suffered as a result?

This impacted work for people I know, and we have all asked around informally; no one in the Beltway has info, so those in the know seem to be tight-lipped about it.

That said, I was unaware of the Talos publication last week, so I will read that, but I assume it's the same surface-level fluff.

Thanks for that. I'm now more worried not less. b^)

There are a number of efforts to secure DNS (and SSL/TLS which generally depends upon DNS; and upon which DNS-over-HTTPS depends) and the identity proof systems which are used for record-change authentication and authorization.

Domain registrars can and SHOULD implement multi-factor authentication. https://en.wikipedia.org/wiki/Multi-factor_authentication

Are there domain registrars that support FIDO/U2F or the new W3C WebAuthn spec? https://en.wikipedia.org/wiki/WebAuthn

Credentials and blockchains (and biometrics): https://gist.github.com/westurner/4345987bb29fca700f52163c33...

DNSSEC: https://en.wikipedia.org/wiki/Domain_Name_System_Security_Ex...

ACME / LetsEncrypt certs expire after 3 months (*) and require various proofs of domain ownership: https://en.wikipedia.org/wiki/Automated_Certificate_Manageme...

Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency

Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?" https://news.ycombinator.com/item?id=18961724

Namecoin (decentralized blockchain DNS): https://en.wikipedia.org/wiki/Namecoin

DNSCrypt: https://en.wikipedia.org/wiki/DNSCrypt

DNS over HTTPS: https://en.wikipedia.org/wiki/DNS_over_HTTPS

DNS over TLS: https://en.wikipedia.org/wiki/DNS_over_TLS

DNS: https://en.wikipedia.org/wiki/Domain_Name_System

I am probably just cynical but the title reads like a medium tech blogger is about to tell me why he started using React in the cloud.
