
What about "access controls" for the AuthZ side, instead of Permissions?

Wondering what HN's collective wisdom is on this: at work we've been using "Access Controls" on our homepage for a while (https://www.conductorone.com/). To people outside the IAM-geek space, does that framing make more sense?


I like the idea of exposing Roles for selection, and having those roles generally apply to permissions or workflows internally. This tends to work pretty well for many, or even most applications. I can't stand fine-grained access controls in general.


For this general pattern implemented in Golang, check out redis_rate: https://github.com/ductone/redis_rate

This fork also implements Redis client pipelining, to check multiple limits at the same time, as well as concurrency limits.
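
If you haven't used it, the basic API (this is the upstream go-redis/redis_rate API the fork is based on; the fork's pipelined multi-limit and concurrency-limit calls add their own methods that I'm not showing here) looks roughly like this:

    package main

    import (
        "context"
        "fmt"

        "github.com/go-redis/redis_rate/v10"
        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()

        // Local Redis address is just an assumption for the example.
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        limiter := redis_rate.NewLimiter(rdb)

        // GCRA-based limit: at most 10 requests per second for this key.
        res, err := limiter.Allow(ctx, "user:123", redis_rate.PerSecond(10))
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", res.Allowed, "remaining:", res.Remaining)
    }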


Let's Encrypt's API for doing this requires possession of the private key, which pwnedkeys doesn't always have. Sometimes they just have an "attestation" of compromise:

https://pwnedkeys.com/submit.html

If you had a standardized representation of that attestation, maybe CAs could consume it instead.

But the author of pwnedkeys thought of this and started a draft RFC for exactly that purpose:

https://github.com/pwnedkeys/key-compromise-attestation-rfc/...

But it seems dead right now.
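
The core idea is simple, though: whoever holds the private key signs a well-known "this key is compromised" statement, and a verifier (a CA, or pwnedkeys itself) checks the signature against the public key. Purely as an illustration of the concept -- not the draft's actual format -- a sketch in Go:

    package main

    import (
        "crypto/ed25519"
        "fmt"
    )

    func main() {
        // Stand-in key pair; in reality the private key is the one that leaked.
        pub, priv, _ := ed25519.GenerateKey(nil)

        // A fixed statement; the real draft defines its own structure and
        // encoding, this string is only illustrative.
        statement := []byte("This private key has been compromised.")

        // Whoever holds the private key produces the attestation...
        attestation := ed25519.Sign(priv, statement)

        // ...and anyone can verify it with just the public key.
        fmt.Println("attestation valid:", ed25519.Verify(pub, statement, attestation))
    }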


You can also just log the spans to stderr/stdout as they are created -- I've done this on a previous project that took this "spans first" approach.

It kept things debuggable via the output when needed, but the primary consumption became span-oriented.
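
If you're on OpenTelemetry (an assumption -- that project wasn't necessarily using it), the stdout trace exporter gets you this "spans to stdout" behavior with very little code:

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func main() {
        ctx := context.Background()

        // Exporter that pretty-prints finished spans to stdout.
        exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
        if err != nil {
            log.Fatal(err)
        }

        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
        defer tp.Shutdown(ctx)
        otel.SetTracerProvider(tp)

        // Spans created through this tracer are written to stdout when the
        // batcher flushes (at the latest, on shutdown).
        tracer := otel.Tracer("example")
        _, span := tracer.Start(ctx, "do-work")
        span.End()
    }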


Good idea yeah, but do the same notions of log level apply?


Would it of been possible for Github to use Host-key rotation instead of hard breaking it?

https://lwn.net/Articles/637156/

I'm honestly not familiar with anyone actually using host-key rotation?
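
For reference, the client side of this is the UpdateHostKeys option (the hostkeys@openssh.com extension); something like this in ~/.ssh/config opts you in:

    Host github.com
        UpdateHostKeys yes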


If an attacker has access to the private key, they could use the Host-key rotation feature to migrate you to an attacker-controlled key instead, as the old key is trusted. So, GitHub needs everyone to forcibly untrust the old (exposed) key.


Yeah, but... shouldn't Github of rotated their keys over the last decade?

I mean, it seems like it's clearly a key that wasn't in an HSM... and over its lifetime, hundreds? Thousands of Github employees could of accessed it?


The problem with rotating this particular private key is that it's incredibly disruptive. Everyone who uses GH will see a big scary message from ssh saying the host key changed and something malicious might be going on. A majority of those people probably won't have seen a blog post announcing the change beforehand.

Anyone who's baked the host key into a known_hosts file that gets shipped to their CI systems would start to see jobs fail, and have to manually fix it up with the new host key.
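
That manual fix-up is usually just a couple of commands -- though the new fingerprints should be checked against GitHub's published values rather than trusted blindly:

    ssh-keygen -R github.com
    ssh-keyscan -t ed25519,ecdsa github.com >> ~/.ssh/known_hosts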

These things are just annoying enough that I think it's perfectly understandable that GH doesn't want to regularly rotate this private key.


The point of host-key rotation is that you can avoid the disruption of the former.


Congrats, you just used "would of", "should of", and "could of" in a single thread.


Host-key rotation would enable the attacker to continue, but the attacker could be detected simply by diligent people monitoring the github key they use.

The current rotation allows anyone to try to phish the lazy users (like me, probably) who will just trust on first use. That's probably a bigger risk than the key compromise, since they have logs.

It could be a better idea to use host-key rotation, despite it making a key thief's life a bit easier, simply because it exposes people less to opportunistic impersonators.


1. IIRC UpdateHostKeys does not remove the old key, so it would still be there, lurking (I haven't checked the code).

2. It was only added in OpenSSH 6.8, so it missed the Ubuntu 14.04 release and only really turned up in 16.04 LTS; there are plenty of old systems it wouldn't work on.

As other posters noted, a bad actor could rotate the key to their chosen keys just as easily as GitHub could cause the rotation.


I just tested it and looked at the code briefly; the client fortunately does seem to remove all keys not provided by the server: https://github.com/openssh/openssh-portable/blob/36c6c3eff5e...

It seems like at least a `known_hosts` compromise would be "self-healing" after connecting to the legitimate github.com server once.


congrats on the launch!

three questions / thoughts:

1) Your post mentions "Ranking", and while doing the most impactful work first is great, the method I have most often used when dealing with vuln overload is to "Reclassify". That is, the Common Vulnerability Scoring System (CVSS) (super flawed as it is) lets reporters check the box for "remotely exploitable", and therefore it's an 8.0 HIGH vulnerability -- but I think your product could let me reclassify the vuln to medium/low; maybe a built-in CVSS score editor?

2) One other thing: there should also be a built-in concept of "accepting the risk" -- and ideally a concrete report of what was previously "accepted", plus a flag if that package later gets used in new ways (a rough sketch of what I mean follows this list).

3) I'm curious what you think about market segmentation in this space. Specifically, the sub-200(?)-person companies seem to be using a lot of the "all in one" compliance platforms (e.g. Vanta, Drata, etc). Vanta, for example, does have vuln management + an SLA tracking dashboard + ticketing tools.
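
To make point 2 concrete, here's a hypothetical shape for a "risk acceptance" record -- the names are made up, not anything from your product:

    package vulnmgmt

    import "time"

    // RiskAcceptance is a hypothetical record of an accepted finding.
    type RiskAcceptance struct {
        CVE        string    // identifier of the accepted finding
        Package    string    // dependency the finding applies to
        Reason     string    // why the risk was accepted
        AcceptedBy string    // who signed off
        AcceptedAt time.Time
        ExpiresAt  time.Time // forces periodic re-review instead of accepting forever
        // Usage observed at acceptance time; if the package later shows up in
        // new call paths or services, the acceptance should be re-evaluated.
        KnownUsage []string
    }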


Great feedback.

1) We want to be cautious about changing a score that someone else assigned (CVSS), but we'd like to add our own insight about its actual impact.

2) Absolutely and we'd like to bundle it with active blocking. After reviewing the CVE, we'll let the user either accept (e.g. mute) it or block that specific package from being used (e.g. for dormant ones).

3) We think our service is most useful to slightly larger orgs with dedicated security functions and bigger supply chains. We want to help slow down the fire-hose of vulnerability reports coming from security to devs.


> We want to help slow down the fire-hose of vulnerability reports coming from security to devs.

Would be interested to hear more strategy here -- in my experience, the only way to actually lift this dev burden is to make upgrading dependencies something that's expected, routine, and near-automatic.


100% agree. The reality is that updating a dependency always carries some risk and sometimes requires code changes. Reducing the number of upgrades that have to be done under a stringent SLA makes life easier. In the larger orgs we've talked to, the eng:apps ratio can be 1:2 or worse, so ownership is harder. In addition, for a fair number of vulnerabilities, a fix is not available. These situations require a more involved risk assessment and remediation plan (e.g. moving to an alternate dependency). We aim to reduce the toil in such cases as well.


It's cool to see the automation the Kubernetes team's tooling does against Github -- but has it been expanded to other resources, e.g. AWS or other SaaS in use?

Another thought I had: is there any concept of expiration of permissions?

Something I ran into when I used to do more Apache Software Foundation work was that we had thousands of committers with shell access -- but 94% never used it. Are any of the things protected by this privileged -- e.g., a release private key?


i've also been working on a similar tool -- working towards open sourcing it too. would you be interested in taking a look? paul.quenra at conductorone com


I believe you might have a typo in your mail? Just making sure you're not missing out on something useful :)


thank you -- can't edit it anymore, but paul.querna (spelled my own name wrong)


This is how Okta's Advanced Server Access works: https://www.okta.com/products/advanced-server-access/


hello.

i added systemd socket activation support to httpd in 2013: https://svn.apache.org/viewvc?view=revision&revision=1511033

httpd can then start as non-root, assuming other configured paths, like the access/error logs, are writable by the non-root user.
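
The general pattern -- sketched here in Go with go-systemd, not httpd's actual C implementation -- is that systemd binds the privileged port and hands the listening socket to a process that never needs root:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/coreos/go-systemd/v22/activation"
    )

    func main() {
        // Listeners() returns the sockets systemd passed in via LISTEN_FDS.
        listeners, err := activation.Listeners()
        if err != nil || len(listeners) == 0 {
            log.Fatal("expected a listening socket from systemd socket activation")
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "served from a systemd-activated socket")
        })

        // Serve on the inherited socket; this process never needed root to bind :80.
        log.Fatal(http.Serve(listeners[0], nil))
    }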

