I Read the Entire Cybersecurity Executive Order; Here's What You Need to Know (lastweekasavciso.substack.com)
136 points by mooreds on June 16, 2021 | 79 comments



"One of the most significant sections of the executive order is the requirement for all federal agencies to adopt multi-factor authentication within six months (180 days) of the order being signed."

I will eat my hat if this happens in that timeframe.


> I will eat my hat if this happens in that timeframe.

Login.gov (a product of USDS and 18F @ GSA) is positioned to meet these identity provider needs.

https://www.login.gov/help/get-started/authentication-option... (2FA support for all users + PIV/CAC support for Fed Gov workers)

https://developers.login.gov/ (developer resources)

Skepticism is warranted, as 6 USC 1523: Federal cybersecurity requirements [1] has required much of what the EO is calling for for almost half a decade, but the tooling exists, and 82 agency applications/systems (as of October 2020) are already leveraging this identity provider. Appropriations ($$$) for the identity integration work seem to be the challenge, as the implementation itself is straightforward.

Sidenote: You can now login to your Social Security Administration account with Login.gov [2] (under "Other Sign In Options"). If you've applied for TSA Precheck or Global Entry, you've been using Login.gov for some time.

[1] https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim...

[2] https://secure.ssa.gov/RIL/SiView.action

(disclosure: no affiliation with any federal government agency or office, not a fed gov employee or contractor currently)


In particular Login.gov can do WebAuthn, so a vaguely recent Yubikey or other Security Key product, a modern iPhone, or the nicer Android phones with fingerprint readers, all work and deliver unphishable¹, zero correlation² authentication.

1) The Domain Name "login.gov" is inherently associated with these credentials, your Security Key hasn't the faintest idea how to use them on another site even if you yourself are completely fooled and believe you're on Login.gov

2) Even if the US government, your key manufacturer and Facebook conspire together to try to figure it out, there's no way to take their authentication data and correlate that Facebook user mikey-the-shoe is US Login.gov user Michael Shoemaker of New York based on the Security Key used on both sites. It seems like a good guess of course, but WebAuthn deliberately doesn't confirm this.
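
For the curious, here is a minimal registration sketch (browser TypeScript; all parameter values are illustrative, not Login.gov's actual configuration) showing where that binding comes from. The rp.id passed at registration is the only domain the resulting credential will ever be offered to, and the authenticator mints a fresh, unrelated key pair for every other site, which is what prevents cross-site correlation:

    // Minimal WebAuthn registration sketch; parameter values are illustrative only.
    // The rp.id is baked into the credential: the browser will not offer it to any
    // other domain, and the authenticator creates an unrelated key pair per site,
    // so the resulting public keys cannot be correlated across services.
    async function registerSecurityKey(
      challengeFromServer: ArrayBuffer,
      userId: ArrayBuffer,
    ): Promise<PublicKeyCredential> {
      const credential = await navigator.credentials.create({
        publicKey: {
          challenge: challengeFromServer,             // random bytes issued by the server
          rp: { id: "login.gov", name: "Login.gov" }, // credential is scoped to this domain
          user: { id: userId, name: "mikey-the-shoe", displayName: "Michael Shoemaker" },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
          authenticatorSelection: { userVerification: "preferred" },
          timeout: 60_000,
        },
      });
      return credential as PublicKeyCredential;
    }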


We really need to allocate more money to USDS. Seems like they've been saving our ass for a while and we're paying them on government employee scale?


The American Rescue Plan Act did boost CISA, GSA, and USDS funding [1]. Whether they can further push salaries beyond what they're already doing with tours of duty (short tenures allow for a higher GS pay scale during assignments), I cannot speak to.

> The Cybersecurity and Infrastructure Security Agency will get $650 million for cybersecurity risk mitigation. The agency has been leading the federal government’s investigation into the SolarWinds hack that breached several federal agencies.

> Also included in the relief funding is $200 million for the U.S. Digital Service, a technology team based within the Executive Office of the President. The General Services Administration will receive two appropriations through the COVID Relief Act: $1 billion for the Technology Modernization Fund and $150 million for its Federal Citizen Services Fund. The citizen services fund enables public access and engagement with government programs through a variety of operational programs and public-facing products, and supports the implementation of emerging technologies in agency-facing programs.

[1] https://www.nextgov.com/cio-briefing/2021/03/covid-relief-bi...


How is it affiliated with TSA Precheck? I'm quite sure I've never directly used login.gov for anything related to my TSA Precheck, and I don't have a password associated with it either.


Interesting! It was my understanding that TSA Precheck was lumped in with Global Entry, NEXUS, and SENTRI, all of which make up DHS'/CBP's Trusted Traveler Programs [1] [2]. I have Global Entry, so have not used the TSA Precheck enrollment flow (I steer those who ask to Global Entry, as it's only $15 more over 5 years vs Precheck and gives you everything TSA Precheck does but also CBP expedited clearance benefits at the US border). I'll have to do more research, thanks for pointing this out.

Interestingly enough, the TSA Jobs portal does use Login.gov. [3]

[1] https://login.gov/help/specific-agencies/trusted-traveler-pr...

[2] https://ttp.dhs.gov/

[3] https://candidates.tsa.dhs.gov/s/


I applied for Global Entry in 2017ish, and at the time it didn't use login.gov. I renewed my passport and had to update my passport number in the system this year, and that was my first exposure to login.gov. So sometime between 2017 and 2021, they started using login.gov.

The process for establishing identity for login.gov was frighteningly easy. It required something like my Global Entry ID and my full name only, and then it automatically connected with other details about me.


This is good to know.

The takeaway for me is to go claim my login.gov ID (if possible) before someone else does.


Please report back; I'm always interested in feedback on how the product is performing from a user experience perspective. I also recommend trying to log in to your Social Security account with it, as it'll take you through an identity proofing flow where a government ID (you'll provide photos of the front and back) is used for proofing.


This is the oldest compliance trick in the book. To adopt is easy. To implement is difficult. This is why some regulations specify “adopt and _implement_”.

We adopted multi factor and now we’re in the process of rolling it out. At the moment Ted is our only user. But we aren’t not compliant.


Exactly. Policy without enforcement/oversight is just wish casting :p


NASA has had 2FA for over a decade for internal users, RSA dongles initially and PIV cards now (there was something about JPL and the fingerprinting requirement, though). They have taken steps recently to get rid of passwords entirely, although legacy systems are a problem (like OpenShift).

The downside is that accessing a server requires a patched PuTTY from Centrify and about 27 configuration steps that are documented nowhere.


OpenShift isn't a legacy system, it's a commercially supported distribution of Kubernetes. That's about as close to "new hotness" as you're going to find in government.

Unless you mean they're stuck on a version from 5+ years ago, which is believable, but hardly OpenShift's fault.


I mean that password authentication is all our installation supports.


In 2001 I worked on a project for some of the federal agencies that used remote biometrics for authentication in addition to password use. I'm not sure if it was safe from replay attacks, but still. It wasn't that the feds didn't know how to make things secure, it's just that the workers in the civilian side of the government didn't really understand the threats and didn't prioritize security.

Edit: Not to say everywhere in the non-civilian side of the government is secure. It isn't. Just that there are pockets of it that are better hardened that we could just wholesale copy.


Will this make things more vulnerable if the government chooses to adopt/implement MFA using SMS? There are other options, but the EO only says MFA...


I think criminals gaining access to my bank account is far more lucrative for them than gaining access to my social security account so I don't think it makes the problem particularly worse than it already is.

But besides that, the federal government already has an SSO provider that supports smart cards, U2F, and TOTP. I suspect many agencies will choose to adopt that rather than trying to roll their own SMS auth.


And hopefully a second factor won't be SMS.


dear god, please no. So tired of the entities I have to deal with who only offer SMS for 2FA and not email. I tried to send some of them a stash of links about how insecure SMS is for this purpose, to no avail.


That does seem far-fetched considering how slow they move, but I wonder how many agencies have something like that already. Back when I worked as a government contractor I had to get a CAC and use it with a card reader to sign into a computer when I had to visit a military base. That was about two decades ago (man, that makes me feel old).


It's hit and miss. Many civilian-side agencies have PIV cards (the civilian version of CAC) for login to laptops/desktops, but not mobile phones/tablets. Few agency apps are PIV/CAC enabled or have any MFA; it's usually a single user/pass login. So you see situations where smartcards are used to log in to a device, then a username/password to connect to webapps.


All of DoD uses CAC, which is probably 30%+ of federal employees. And probably 80%+ of all apps DoD employees use are CAC/PIV login.


Is it that hard to roll out MFA post-hoc? Feels like it shouldn't be, if it's a huge priority.


The hard part is "all Federal agencies".


To expand upon that, the hard part is that every agency has a unique IT infrastructure (sometimes multiple infrastructures in a single organization) and budgets that vary wildly. It's not a one size fits all approach across the whole government.

The important part of the EO is section 3.d.i, which says that every 60 days, until MFA is fully adopted, the agencies will report to DHS their plan and progress for implementing MFA. So basically, they can take longer than 180 days, but they need to be transparent about their plan and the roadblocks they are hitting along the way.


These are the nightmares I'm seeing that normally come with govt stuff in this space:

1) Long passwords - govt loves these - think 12 characters or longer. Think "EhKx~9RDmPMCW7"

2) Failures to specify allowable characters or blocked characters - very annoying.

3) Hard lockouts (rather than more nuanced rate limiting etc) requiring password resets.

4) Password reset options are weak and not protected.

The backdoor is usually the password reset option.

I.e., an agency will require a crazy long password + SMS code to log in, but you can then reset your password with just SMS. So the long passwords are often just for show.

They often also don't time-delay anything, so they'll do things like immediate resets. For high-value apps you should have a 4-24 hour delay on reset to allow the legit user to react.

They'll also often not have good ways to report bad password reset attempts, other than a phone line that goes nowhere.

They also often actively block password managers and copying and pasting passwords. So annoying.

The IRS is currently requiring 90-day password rotations as well, which are a nightmare. Coupled with hard account lockouts, you have another nightmare.

The list goes on. For the millions to billions they spend you really wonder who is giving them advice.
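
To make (3) and the reset-delay point concrete, here is a rough sketch (TypeScript, in-memory state; every threshold is a made-up assumption, not a recommendation from any standard) of exponential backoff instead of hard lockouts, plus a delayed, cancellable password reset:

    // Illustrative only: exponential backoff on failed logins instead of a hard
    // lockout, plus a delay window on password resets so the legitimate user has
    // time to notice and object. Every threshold here is a made-up assumption.
    const failures = new Map<string, { count: number; nextAllowedAt: number }>();

    function loginAttemptAllowed(account: string, now = Date.now()): boolean {
      const f = failures.get(account);
      return !f || now >= f.nextAllowedAt;
    }

    function recordFailure(account: string, now = Date.now()): void {
      const f = failures.get(account) ?? { count: 0, nextAllowedAt: 0 };
      f.count += 1;
      // 1s, 2s, 4s, ... capped at 15 minutes: slows guessing to a crawl without
      // ever forcing a help-desk password reset.
      f.nextAllowedAt = now + Math.min(1000 * 2 ** (f.count - 1), 15 * 60 * 1000);
      failures.set(account, f);
    }

    function recordSuccess(account: string): void {
      failures.delete(account);
    }

    // Resets are queued, the user is notified out of band, and the change only
    // takes effect after a 12-hour delay (inside the 4-24 hour window suggested
    // above) unless the real owner cancels it first.
    const RESET_DELAY_MS = 12 * 60 * 60 * 1000;

    interface PendingReset {
      account: string;
      newPasswordHash: string;
      applyAt: number;
      cancelled: boolean;
    }

    function scheduleReset(account: string, newPasswordHash: string, now = Date.now()): PendingReset {
      return { account, newPasswordHash, applyAt: now + RESET_DELAY_MS, cancelled: false };
    }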


The requirement to frequently change a perfectly good password is nonsense.

https://pages.nist.gov/800-63-FAQ/#q-b05

It says: SP 800-63B Section 5.1.1.2 paragraph 9 states: “Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.” Users tend to choose weaker memorized secrets when they know that they will have to change them in the near future. When those changes do occur, they often select a secret that is similar to their old memorized secret by applying a set of common transformations such as increasing a number in the password. This practice provides a false sense of security if any of the previous secrets has been compromised since attackers can apply these same common transformations. But if there is evidence that the memorized secret has been compromised, such as by a breach of the verifier’s hashed password database or observed fraudulent activity, subscribers should be required to change their memorized secrets. However, this event-based change should occur rarely, so that they are less motivated to choose a weak secret with the knowledge that it will only be used for a limited period of time.
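
One concrete way to act on the "evidence of compromise" clause (my illustration; the NIST text doesn't prescribe a mechanism) is to screen passwords against a breach corpus. A rough TypeScript sketch using the public Pwned Passwords k-anonymity range API, where only the first five hex characters of the SHA-1 ever leave the machine:

    // Sketch of evidence-based checking (as opposed to calendar-based rotation)
    // using the Pwned Passwords k-anonymity range API. Only the first five hex
    // characters of the SHA-1 are sent; the full hash never leaves the machine.
    async function timesSeenInBreaches(password: string): Promise<number> {
      const digest = await crypto.subtle.digest("SHA-1", new TextEncoder().encode(password));
      const hex = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("")
        .toUpperCase();
      const prefix = hex.slice(0, 5);
      const suffix = hex.slice(5);
      const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
      for (const line of (await res.text()).split("\n")) {
        const [candidateSuffix, count] = line.trim().split(":");
        if (candidateSuffix === suffix) return parseInt(count, 10);
      }
      return 0; // no evidence of compromise, so no forced change
    }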


"is nonsense"

What a total lie.

Do people who make these statements actually work in/with govt? I just looked up the current IRS checklist we've been forced to comply with, which has driven lots of downline changes to all systems touching this system.

"Control access to sensitive information by requiring employees to use “strong” passwords that *must be changed on a regular basis.". This is not an option, and regular has been defined as 90 days. A previous job (yes, I'm totally aware of NIST guidance) they forced a move to 30 days. 30 days with 12 character passwords is a joke and they blocked copy and paste. EVERY password was on sticky notes by computers after that. They are $100M system implementations.

My point remains: the implementation of these things in the govt space is often the stuff of nightmares, and I have no idea who they are listening to for the money they spend.


Perhaps I should have been more clear. It is nonsense to require frequent password changes. NIST explains why in the above citation. It sucks that many USG organizations not only still enforce this, but require their contractors to do so as well.


I thought you were clear. Only after you got called a liar did I even notice the other meaning. Some people go online with a box of matches looking for things to set fire to.


Ahh, fair point, but unfortunately many many agency security folks are still fixated on making it a requirement.

Just doing it annually would be a big relief in some cases. 30 day rotations with no copy / paste is a nightmare.


Can confirm, I have a friend who works as a federal agency employee (a different one than the IRS), and she constantly has to change her passwords - her agency DGAF about SP 800-63B Section 5.1.1.2 paragraph 9 (or, in fact, many of the other NIST password recommendations). GP comment is absolutely false.


What I don't get is that we are getting NEW password rotation requirements on existing systems as part of the cyber protection pushes.

We are way down the stack of course but just a basic approach like

1) Allow for cut and paste passwords

2) Reduce rotation requirements (even annually would be better).

3) 2FA if logging in from a new device (with no SMS)

Would, I'm sure, get tons of protection without the insanity we have now. Some folks are asking for 16-character passwords but allow reset with a text message or email? Or a phone call to an overwhelmed help desk that barely verifies anything.
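
Item 3 is cheap to sketch. A rough TypeScript illustration (all names and thresholds invented for the example): challenge for a second factor only when the device is unrecognized, and verify an RFC 6238 TOTP code instead of sending an SMS:

    import { createHmac, timingSafeEqual } from "node:crypto";

    // RFC 6238 TOTP: HMAC-SHA-1 over the 30-second time counter, dynamically
    // truncated to 6 digits. A workable non-SMS second factor.
    function totp(secret: Buffer, now = Date.now(), step = 30, digits = 6): string {
      const counter = Math.floor(now / 1000 / step);
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));
      const hmac = createHmac("sha1", secret).update(msg).digest();
      const offset = hmac[hmac.length - 1] & 0x0f;
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return code.toString().padStart(digits, "0");
    }

    // Only challenge for the TOTP when the request doesn't carry a device token
    // we've already seen for this account (token issuance/signing not shown).
    function needsSecondFactor(knownDeviceTokens: Set<string>, deviceToken?: string): boolean {
      return !deviceToken || !knownDeviceTokens.has(deviceToken);
    }

    function verifyTotp(secret: Buffer, submitted: string): boolean {
      // Accept the current and previous time step to tolerate a little clock skew.
      for (const skewMs of [0, -30_000]) {
        const expected = Buffer.from(totp(secret, Date.now() + skewMs));
        const given = Buffer.from(submitted);
        if (expected.length === given.length && timingSafeEqual(expected, given)) return true;
      }
      return false;
    }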

That said, I've seen worse. I once tried to go as high as I could up the chain to get something even worse fixed (we were required to hand out a form for a program that had been out of existence for 5 years, so the form did nothing). They would not budge; I even got legal in on it.

Because someone somewhere had ordered the form be distributed, and the order had not been revoked, we still had to hand it out even though everyone agreed the form and the program no longer existed. After I kept escalating, they threatened prosecution or contract violation if the form was not distributed and I kept pursuing the issue. So thousands of people filled out a form that just went nowhere.


Can't speak to other agencies, but the one I work with enforces password changes every 180 days, with annual training and all related communications emphasizing the use of correct horse battery staple-style passphrases. It's been like this for a while now, several years at least.


You are both agreeing with each other.


They didn’t say it’s not required.


True. They say "SHOULD NOT" vs. "shall not", but since these are "guidelines" and not "requirements" I think they used the strongest language possible. (Note the capitalization in their FAQ. It's all caps!) Later in the same section, they use "SHALL" for requiring a password change after a known compromise.


>3) Hard lockouts (rather than more nuanced rate limiting etc) requiring password resets.

>4) Password reset options are weak and not protected.

These two are pretty much impossible to satisfy simultaneously. At the end of the day you need to think of computer accounts as ephemeral, or have some out-of-band access to the administrative staff.


"The backdoor is usually the password reset option."

County Password Inspector!


The challenge with this EO and all aspirational security pronouncements is their focus on outcomes while avoiding implementation details, trade-offs, and resources.

It’s as if nobody asked WHY zero trust and MFA are not already pervasive in the Federal Government. Legacy systems are going to be incredibly difficult and expensive to rearchitect for ZTA. Despite HSPD-12 (CAC and PIV authentication and access) being over a decade old, some parts of government refuse to use a smart card plus password for MFA. I wonder why? It is not simply because “government doesn’t understand computers.” The core issue is leadership. There is no benefit for executives to point out the constraints, like usability, cost or talent, that ensure that good ideas in principle will be adopted incorrectly and incompletely.

That said, there is some stuff worth cheering. The CSRB is much overdue and the elevation in status of cybersecurity as a critical function is directionally correct.

Much of whether these aspirations will be possible hinges on legislative budget decisions and ultimately sweeping reform of the government hiring system.


The order covers exactly what it should: these are the outcomes we want, make them happen. It's silly to assert that an executive order from the president would lay out how all the different agencies in the government will adopt Zero Trust, MFA, or other things. This is the kick in the pants, now the agencies are on the hook for doing them. I appreciate your point that it will be hard to accomplish, but that doesn't really let them off the hook I don't think.


> The order covers exactly what it should: these are the outcomes we want, make them happen.

No, it doesn't. "Zero-trust architecture" is not an outcome - it's a means to the actual outcome, which is "lack of breached systems/successful cyberattacks".


It's not that long, just read it yourself.

https://www.whitehouse.gov/briefing-room/presidential-action...


"Software Bill of Materials" could be important. Especially where Docker containers are involved. What's in your box?


120 thousand node_modules (3.4 gigabytes' worth), mostly consisting of 34 different versions of lodash.
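
If you want to see how bad your own box is, here is a rough sketch (Node/TypeScript; assumes a lockfile v2/v3 package-lock.json is present) that counts how many distinct versions of each package are vendored under node_modules:

    import { readFileSync } from "node:fs";

    // Rough dependency census from package-lock.json (lockfile v2/v3 layout,
    // where "packages" maps node_modules paths to their resolved versions).
    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    const versions = new Map<string, Set<string>>();

    for (const [path, info] of Object.entries<any>(lock.packages ?? {})) {
      const name = path.split("node_modules/").pop(); // last path segment is the package name
      if (!name || !info?.version) continue;          // skips the root project entry
      const seen = versions.get(name) ?? new Set<string>();
      seen.add(info.version);
      versions.set(name, seen);
    }

    // Print the worst offenders, e.g. "lodash: 34 distinct versions".
    [...versions.entries()]
      .sort((a, b) => b[1].size - a[1].size)
      .slice(0, 10)
      .forEach(([name, v]) => console.log(`${name}: ${v.size} distinct versions`));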


Why don’t I hear about npm package attack vectors more often? I’d expect it to be super common


My guess is that it's a two-tier process. For one, NPM packages are in many ways a kind of winner-takes-all setup where a narrow circle of packages accounts for most of the installs. This leads to more eyes on them, but I would say this is secondary to the following point. Most of the time, when a package is either added or updated, someone technical, usually a developer, is staring at their terminal, browser dev tools, and/or dozens of other things. If something fishy were afoot, the chances of it being noticed are considerably higher than in a regular user-computer interaction. Add the automated scanning employed by GitHub, NPM, and others to the mix, and I'd say that's the answer. Clearly it can happen, since it has, but every time it does happen, the space for shenanigans grows narrower.


Docker does not allow specifying a docker container by the SHA-256 digest of its contents. I hope the US Federal Government will get them to finally add that crucial security feature.


SPDX (https://spdx.dev) is an open standard for describing software bill of materials information. A number of open source projects, like the Linux kernel, have already adopted its conventions (e.g. for marking the license in the files), so that information can be automatically processed.
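
The in-file convention is just a one-line, machine-readable tag at the top of each source file that tooling can collect without parsing prose headers; in a TypeScript file it would look like this (the kernel does the same thing in its C sources):

    // SPDX-License-Identifier: MIT
    //
    // A single machine-readable license declaration per file. SBOM and license
    // compliance tooling can grep for this tag instead of parsing prose headers.
    export const example = "SPDX tags travel with the source";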


ISO 19770-2 SWID (Software ID Tags) provides a framework for creating an SBOM. NIST has been given permission to essentially republish the 19770-2 standard here: https://csrc.nist.gov/publications/detail/nistir/8060/final

On Page 8 look at the Corpus tag.


Within the cloud native and container space, there is work being done on how to implement this (SBoM, Signing etc). It will obviously be tricky to get right, but there are groups looking at it :)

https://github.com/cncf/tag-security/ is a good place to go for more info.


I wonder what governments will be using for zero trust architecture? There are a number of companies competing in this space; one is strongDM [1], which I currently work for. I am curious if they are planning to work with existing companies in this space, or pay billions of dollars to build custom software to do this.

[1] https://strongdm.com


What exactly are we paying the NSA for if they can't lock down our government services?


We're paying them to hoard zero-days.

(And sometimes share them with our Israeli friends for hunting down Saudi journalists.)


The NSA doesn't have the authority. They can advise other federal agencies, but those agencies can (and do) waive them off.


Just a reminder that EOs like this matter mostly to those companies who sell to fussier agencies in the federal government.


Or to anyone who sells to anyone who sells to "fussier agencies in the federal government".


Zero Trust is becoming a panacea IME: You may feel overwhelmed by the complete lack of systems you can trust, but Zero Trust will magically provide security regardless and take away your stress.

We need more discussion of its costs, what it specifically provides, and its limitations. Can anyone with some expertise and experience comment?


It's just a buzzword for the IT security sector similar to how one would now use "machine learning" or "artificial intelligence" in place of "open sesame" to get a budget increased, decision made or project started. One of the authors of the NIST standard when attempting to describe "zero trust" just loosely states it as a continuation of what is already happening today in any large organisation[1].

If "zero trust" was truer to its name, everyone would be applying the type of engineering techniques[2][3] used to design "critical systems"[4]. However this probably wouldn't be desirable for the IT security sector as it'd show that many IT security products and measures are in fact detrimental to the security of the systems involved. Detriments including:

1. Increasing complexity of the system design and the impact this has on the ability to configure and maintain everything securely (think 5 years later, with a new team of people looking after it who don't fully understand how it all works, changing configuration).

2. Introduction of new centralised points of failure in the form of security systems having elevated access to an entire network. The recent example of this detriment is SolarWinds Orion.

3. Increasing prevalence of shadow IT as end users get fed up with the difficulty of obtaining access to information or resources they need to do their job.

4. Chilling effect on IT system development within the organisation as the security bureaucracy becomes too difficult to overcome when delivering IT system upgrades. Legacy systems in an insecure state remain connected to the network forever because it costs too much to make even small incremental improvements as the security bureaucracy will demand a complete overhaul or slow down upgrade projects.

[1] https://www.youtube.com/watch?v=NWQgh42O8WU

[2] https://en.wikipedia.org/wiki/Failure_mode_and_effects_analy...

[3] https://en.wikipedia.org/wiki/Fault_tree_analysis

[4] https://en.wikipedia.org/wiki/Safety-critical_system


@dang HN readership is pretty international now, should this type of headline maybe specify it's about the US?


Not sure too many people are going to be confused about whether this is about the US or the Philippines:

https://en.wikipedia.org/wiki/Executive_order_(disambiguatio...


It would be interesting to see a breakdown of users by geography, although anecdotally many Americans WFH in tech have moved to places like Mexico, Thailand, Bali, etc.


> In essence, a Zero Trust Architecture allows users full access but only to the bare minimum they need to perform their jobs.

This seems like it will translate into "significant productivity losses due to security taking away access that workers use to accomplish their jobs more effectively", and possibly taking capabilities away that workers really do need to get their jobs done and requiring a gate-keeper to do them instead.

You used to get root on your development box? Tough luck - now you're developing in a thin client connecting to a VM on a remote machine that you don't get administrative privileges on, and if you want to install a new package, you have to submit a JIRA ticket. But it's ok, because you have the "bare minimum you need to perform your job".

Source: have seen this happen at least once before.


What often happens is that a shadow development environment is set up on a machine (often personal) that isn't bound by the security restrictions, which makes the situation even worse.


Wasn't Trump elected based on his promise to end that behavior at the State Department?


I'm glad that the government is finally pivoting to a zero-trust model here. It sounds like companies like Microsoft and Apple are going to have increasingly difficult questions to answer about their data infrastructures when this order rolls out.


There are no consequences from incidents caused by gross negligence.

I think gross negligence should be treated similarly to insider trading, HIPAA violations, etc. There should be fines and jail time.


As our friend Han Solo says: "yeah, well that's the real trick, isn't it?"


Don't forget the section that requires companies doing business with the government to share data about any possible bad actors... hence anyone using encryption.


How fortunate that an executive order was waiting in the wings, which requires federal contractors to purchase an authentication service and guarantee they use the underspecified ZTA (which will probably mean pay for AWS or Azure services to do anything at all), days after multiple highly-publicized million-dollar cybercrime heists were, quite astonishingly, thwarted by the FBI resulting in no money lost.


Or, apparently, we can just use https://login.gov

(link from https://news.ycombinator.com/item?id=27530920)


That appears to be for consumers of federal government web services, which I would not expect to prevent large breaches. The service itself would be a massive single point of failure, and I'm not sure how it would prove a contractor has ZTA, or whatever an administrator thinks that means, all the way down.


It alone is not a silver bullet for ZTA, but authn and authz are critical components of ZTA. Most are not good at auth*, should not roll their own, and absolutely should delegate to a service that provides this functionality (commercially, you see this with Okta and Auth0, for example).
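
For context, "delegating to a service" usually means a standard OIDC authorization-code round trip. A rough sketch with placeholder issuer, client, and endpoint values (these are not Login.gov's actual integration details, which also involve things like signed client assertions):

    // Generic OIDC authorization-code sketch. All URLs, client IDs, and secrets
    // below are placeholders; a real integration follows the provider's docs
    // (discovery document, registered redirect URIs, client authentication, etc.).
    const ISSUER = "https://idp.example.gov";          // placeholder issuer
    const CLIENT_ID = "my-agency-app";                 // placeholder client id
    const REDIRECT_URI = "https://app.example.gov/callback";

    function buildAuthorizeUrl(state: string, nonce: string): string {
      const params = new URLSearchParams({
        response_type: "code",
        client_id: CLIENT_ID,
        redirect_uri: REDIRECT_URI,
        scope: "openid email",
        state, // CSRF protection, verified on the way back
        nonce, // bound into the ID token to prevent replay
      });
      return `${ISSUER}/authorize?${params}`;
    }

    async function exchangeCode(code: string, clientSecret: string) {
      // Exchange the short-lived code for tokens; the user's password and second
      // factor never pass through the application itself.
      const res = await fetch(`${ISSUER}/token`, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "authorization_code",
          code,
          redirect_uri: REDIRECT_URI,
          client_id: CLIENT_ID,
          client_secret: clientSecret, // placeholder; some providers require key-based client auth instead
        }),
      });
      return res.json(); // { access_token, id_token, ... } - validate the id_token before trusting it
    }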


While primarily targeted at the public, login.gov is used for some internal gov sites as well.


What exactly is this pile of innuendo meant to allege?


Since you asked, I'll make it explicit. My pile of innuendo alleges that the FBI is directly responsible for, or at least complicit in, the Colonial Pipeline and JBS hacks, and that the executive order was likely written by lobbyists who sell or intend to sell authentication services or "secure" platforms to federal contractors.

Do you remember back when there were no weapons of mass destruction and the "Intelligence Community", the media, and other agents of reputable institutions all went along with the lie so that unrelated geopolitical objectives could be satisfied and defense contractors would make insane amounts of money? It's like that.


"Executive support of information security is essential"

so what happens to Executives who do not support this?

Nothing, in my experience.


I think in this context, the author is referring to the executive branch of government. Not necessarily business executives.



