
Deprecating password authentication in GitHub API - nlolks
https://developer.github.com/changes/2020-02-14-deprecating-password-auth/
======
miken123
Wondering when they will start implementing proper access controls on their
APIs. At the moment, a personal access token does have scopes, but they are
quite limited:

- Repo access is all or nothing. A read-only token is not possible, an
'issues-only' token is not possible.

- Personal access tokens are not scoped to repositories or organizations, so
your personal toy-project token also allows access to your super-sensitive
employer's repository. On top of that, your employer is unable to prevent
this unless you start using GitHub Enterprise _and_ an SSH CA, which is far
from trivial.

It's nice that they're dropping username/password access, but as long as
personal access tokens have such broad permissions, it does not really add
much value (you should have been using 2FA anyway).

~~~
Latty
What is really weird is that this isn't quite correct: they do have scoped
personal access tokens, you just can't create them yourself. If you use
GitHub Actions, it generates a scoped PAT that only works for that repository
for your actions to use.

I use deploy keys rather than a PAT as they can be scoped (single repo, and
can be read-only), but they are more work and are limited to git operations
rather than the whole GitHub API.

The fact they clearly have the internal capability for this makes it
incredibly odd they aren't exposing it for users to use, and I agree it'd be a
really valuable thing to have.

~~~
ptoomey3
GitHub Actions tokens are actually based off our newer “GitHub apps” system
and not “OAuth apps”. GitHub app tokens support much more granular controls
(both in terms of abilities and resources). OAuth doesn’t lend itself to super
granular controls since they are scope based (e.g., defining a scope per
repository doesn’t really scale). This whole area is something we want to
address with personal access tokens in the future.
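As a rough illustration of the difference: a GitHub App exchanges a short-lived signed JWT for a per-installation token that can be scoped to specific repositories and permissions, rather than account-wide scopes. A hedged sketch follows; the claim names and endpoint path follow the public GitHub Apps docs, but the IDs and permissions here are made up, the JWT signing step is omitted, and no request is actually sent.

```python
import json
import time

APP_ID = 12345           # hypothetical App ID
INSTALLATION_ID = 67890  # hypothetical installation ID

def jwt_claims(app_id, now=None):
    """Claims for the short-lived JWT a GitHub App signs with its private key."""
    now = int(time.time() if now is None else now)
    return {"iat": now - 60, "exp": now + 600, "iss": app_id}

def installation_token_request(installation_id, repositories, permissions):
    """URL and JSON body for exchanging the JWT for a scoped installation token."""
    url = (f"https://api.github.com/app/installations/"
           f"{installation_id}/access_tokens")
    body = json.dumps({"repositories": repositories, "permissions": permissions})
    return url, body

url, body = installation_token_request(
    INSTALLATION_ID, ["my-repo"], {"contents": "read", "issues": "write"})
```

The resulting token is what Actions hands to a workflow: valid only for that installation's repositories, with only the permissions requested.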

------
cap10morgan
> During a brownout, password authentication will temporarily fail to alert
> users who haven't migrated their authentication calls.

This took me a second to correctly parse. Would have been better written as:
“During a brownout, password authentication will temporarily fail. This is to
alert users who haven't migrated their authentication calls.”

~~~
jonahx
This confused me too. It also confused me that the brownouts only occur for
3-hour periods on 2 specific days. You wouldn't be "alerted" unless you
happened to attempt login during those windows.

Can anyone provide more context for this deprecation strategy?

~~~
tialaramex
Best to compare Brownout to other strategies that get to the same end result.
The goal is that this feature (password authentication) goes away. A common
default is a flag day. There's an announcement (maybe now) and on a set date
that feature just goes away.

For users who pay attention (say, us) and prioritise accordingly, these
strategies are the same: they know the feature is going away and can plan for
that.

But for users who weren't paying attention or who didn't correctly prioritise,
adding a Brownout offers some final warning that helps push more people to
start preparing before the final flag day happens.

It doesn't need to get everyone, if 80% of users whose business processes
still depend upon password authenticated GitHub notice something went wrong,
and during diagnosis discover that what went wrong is they're relying on a
deprecated feature, that's a big improvement over 100% of those processes
dropping dead on flag day.

Brownout is a desirable choice where you're sure that some large population
will not heed the advance notice. I bet that a lot of corporate GitHub setups
have all contact mail from GitHub either going to /dev/null or to some
business person who hasn't the first clue what "password authentication on the
GitHub API" is. Maybe they'll email the right dev person, maybe they forward
to an employee who left six months ago, either way it's far from certain
anybody who needs to take action will learn from an email.

With UX feature deprecation you can tell the live users of the service. But in
APIs even if you notionally have a way to feed stuff back (like a "warnings"
field in the results) it's probably blackholed by lazy programmers or lands in
a debug log nobody reads. So "It stopped working" is the best way to get
attention, but without a Brownout that's permanent: the user learns what's
wrong too late to do much about it, which sucks.

Brownout is something ISRG's Let's Encrypt has used, because of course Let's
Encrypt is largely an API too. They publish feature changes, but a huge
fraction of all their subscribers aren't paying attention, so the Brownout is
the first they'll know anything is happening that matters to them.

~~~
dragonwriter
> Best to compare Brownout to other strategies that get to the same end
> result.

Sure, the isolated-period blackout (“brownout” is a bad metaphor) of the
deprecated function has some obvious communicative utility compared to a flag
day. But once you accept shut-off as communication, it immediately suggests
methods with a stronger guarantee of reaching the audience, like progressively
more frequent blackouts (or probabilistic denials) over a period of time
leading up to total shutoff.

------
u801e
What is not secure about using HTTP basic auth over a properly secured HTTPS
connection?

~~~
nnutter
Haven’t read the article yet, but I will be surprised if it claims that. I
would imagine it’s more about the fact that using your primary means of authn
(your password) has worse consequences upon compromise than using a
secondary, revocable means (a token).

~~~
u801e
The blog post[1] that the article links to does state:

>> We are announcing deprecations that will improve the security of GitHub
apps and APIs

[1] [https://developer.github.com/changes/2019-11-05-deprecated-p...](https://developer.github.com/changes/2019-11-05-deprecated-passwords-and-authorizations-api/)

~~~
ptoomey3
It’s not at all about the security of delivering credentials over https, but
more about the the complexity of trying to defend against weak
passwords/credential stuffing with an api. For example, it’s more or less
impossible to add a defense in depth flow like
[https://github.blog/changelog/2019-07-01-verified-
devices](https://github.blog/changelog/2019-07-01-verified-devices) to an api.

~~~
u801e
> it’s more or less impossible to add a defense in depth flow like
> [https://github.blog/changelog/2019-07-01-verified-devices](https://github.blog/changelog/2019-07-01-verified-devices) to an api.

Except that email, as described in the blog you linked to, is not a secure
means of communication. What would be secure is to use a client side TLS
certificate as part of the authentication process. That is, your
browser/device sends it as part of the TLS connection negotiation process and
then you authenticate via the username and password (via HTTP basic auth).

They're already doing something like that whenever one pushes or fetches from
a git repository hosted on GitHub through ssh key authentication. It wouldn't
be much of a stretch for GitHub to allow an account holder to upload a CSR,
which GitHub then signs to make a certificate that the account holder can add
to the browser's or OS's certificate store.

~~~
ptoomey3
The verified device flow isn’t meant to be as strong as 2FA, but is a very
strong mitigation against mass credential stuffing attacks for all users.

In terms of client certs, see my response in
[https://news.ycombinator.com/item?id=22849985](https://news.ycombinator.com/item?id=22849985).
I agree client certs would be great. However, it can be tricky to couple your
app logic with transport-based security. A good example of this:
Chrome/Google introduced a crazy cool concept called “channel bound cookies” -
[http://www.browserauth.net/channel-bound-cookies](http://www.browserauth.net/channel-bound-cookies), but it never
gained any traction because of the complexity noted.

------
noncoml
Feature request: allow association of the same ssh key with multiple accounts.

In essence, stop using the git username globally and start supporting user names.

~~~
oefrha
I was once told explicitly by GitHub Support (just as a reminder) that one
person having multiple accounts is against their TOS, so there’s that. (This
was years ago, not sure if TOS has changed in this aspect.)

~~~
chrisweekly
wat

Never heard such a restriction even mentioned, let alone enforced.

~~~
oefrha
Well you just heard. At least they say “we do not recommend creating more than
one account” at the moment. [https://help.github.com/en/github/getting-started-with-githu...](https://help.github.com/en/github/getting-started-with-github/types-of-github-accounts#personal-user-accounts)

Enforced — of course not (guess why I was told), but it stands to reason that
they probably won’t add a feature to facilitate TOS violation.

~~~
noncoml
GitHub is not what it used to be when this "restriction" was made. These days,
when a lot of companies decide to go with GitHub, it's not uncommon for
people to have one personal account and one work account.

This becomes messy to manage, as it's not easy (as far as I know) to use the
same PC for both personal and work accounts.

------
kens
Is anyone else bothered by the use of "deprecating" to mean "removing"?
Historically, deprecated features are ones that still exist, but their use is
discouraged.

~~~
larzang
And it does still exist, until November. Deprecation is a transition period
instead of just removal without warning, but without a scheduled future
removal, deprecation would serve no purpose.

------
wildpeaks
What I'm most worried about is the deprecation of Personal Access Tokens,
because I have yet to find the equivalent for doing something like "Create a
Deployment with a single cURL request to the GitHub API" that will work past
September.

I wish they had replaced it with tokens tied to specific repos and with
scopes, instead of that new Webapp Flow thing that looks a lot more
complicated to implement than a curl request (I had planned to look into it
this month, but obviously other worldwide events took priority).
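For comparison, the header-token version of such a single request can be sketched like this; a hedged illustration assuming header-based tokens remain supported, where all owner/repo/token names are placeholders and the request is only constructed, never sent.

```python
import json
import urllib.request

def deployment_request(owner, repo, token, ref):
    """Build (but don't send) a Create Deployment request with a token header."""
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/deployments",
        data=json.dumps({"ref": ref}).encode(),
        headers={
            "Authorization": f"token {token}",           # PAT goes in a header,
            "Accept": "application/vnd.github.v3+json",  # not in basic auth
        },
        method="POST",
    )

req = deployment_request("some-owner", "some-repo", "ghp_placeholder", "main")
```

The curl equivalent is the same shape: one POST with an `Authorization: token ...` header instead of `-u user:password`.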

------
bullen
I think the auth method is slightly less important than the offered feature
set. F.ex. you cannot ask if a specific user has sponsored you and for how
much. This is probably the ONLY important feature to have an API for!

The only way to get this data now is to paginate through all sponsors with
their GraphQL interface!

------
derefr
> All password authentication will return a status code of 401...

I’ve always found it strange that an HTTP 401 is used to indicate two very
different server-side states:

• “this resource requires authentication, and you didn’t attempt
authentication”

• “as an early step in the request flow, you tried to authenticate
yourself—but your authentication failed (due to e.g. having an unrecognized
username; or using the wrong authentication method; or, of course, having the
wrong password)”

I mean, I get it; in the end, after trying and failing to authenticate, the
request-processing continues (with the “auth” field in the server’s model of
the request state being nil, just like it would be if you hadn’t attempted
auth); and the request you’re making at that point is still an attempt to
request an authentication-required resource. You’re “not authenticated” at
that point, so 401 it is.

But it really seems like auth processing should have its own “off ramp” in the
HTTP request-processing lifecycle—i.e. if you ask for auth, and auth fails,
you get a code and the request isn’t processed any further, so it doesn’t
matter that the request you made is auth-required.

(After all, we mentally model HTTP resources, fundamentally, as URLs; and URLs
put the auth stuff in the _origin_ part, not in the _request sent to the
origin_ part. I would naively expect that an auth failure would resolve at the
same stage as an unrecognized Host name!)

When you’re coding an HTTP client, it’s very hard to debug a corrupt
Authorization header, because all you know, when you have anything even
slightly wrong, is that the server is pretending it didn’t see it!

-----

And yeah, I know, in practice, why this isn’t the way things are.
Historically, one of the first uses of the Authorization header was through
Apache’s mod_auth_basic, which refers to an .htpasswd file in a directory to
determine the set of valid auth credentials for all resources descending from
that directory. In this model, auth is resource-specific, rather than server-
specific; so it makes sense that, if you’re sending auth, and auth is
unrecognized, but the specific resource you’re authing _against_ turns out to
not _require_ auth, then the server can proceed just fine without sending you
a 401.

However, I don’t think there’s any modern use-case where auth is resource-
specific like that. HTTP auth “realms” are almost always per-origin these
days. It makes a lot more sense to think of auth as something happening at the
level of an implicit _HTTP gateway_ between the client and the server
ultimately sending the resource, where the gateway can know what server
(realm) the client wants to route to—and so can auth the client on that basis,
and deny the request if that auth fails—but the gateway _can’t_ know anything
about what the auth requirement policies are for individual resources on the
backend it’s sitting in front of.

~~~
jimktrains2
Separating those cases exposes the existence of an object, even if you can't
access it.

Whether that matters is up to you, but I get the impression most people just
default to not exposing existence. There is 403, which helps split these.

~~~
nnutter
How does 403 help? Isn’t that for when authentication worked but authorization
didn’t?

~~~
derefr
Correct. You’re supposed to send 403 only when 1. you’ve successfully “logged
in” with a set of credentials, but 2. the user those credentials map to
doesn’t have rights on the resource. If you haven’t authed at all, and there’s
a resource there _requiring_ auth, you’re supposed to send 401.

~~~
eyelidlessness
This usage of 403 needs care. It's often (probably usually) the case that you
still don't want to expose the existence of a resource even to an
authenticated user who is not authorized for that resource. It's generally
better to return 404 in that case.

~~~
mmerickel
The distinction is whether the resource is owned by another tenant or not.
Often a user can view a resource but isn't allowed to edit it, at which point
403 is correct. However, if it's something owned by another tenant entirely,
and is not public, then a 404 is correct.
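The decision tree in this subthread can be sketched as a small function; a hedged illustration, with a boolean request model invented purely for demonstration.

```python
def auth_status(authenticated: bool, authorized: bool, same_tenant: bool) -> int:
    """Map the auth outcome of a request to an HTTP status code."""
    if not authenticated:
        return 401  # no (or failed) credentials on an auth-required resource
    if authorized:
        return 200  # authenticated and allowed
    if same_tenant:
        return 403  # caller may know it exists, just can't do this action
    return 404      # don't reveal another tenant's private resource
```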

------
ccmcarey
Might want to update the title to indicate that this is for the API.

~~~
deno
Agree. The omission makes the title sound more sensational than it is.
Token-only API authentication is not uncommon.

------
nlolks
Thanks for the comments!

------
jimmaswell
Everything is getting more and more complicated for hobbyists over time:
mandatory HTTPS certs coming eventually, APIs forcing more convoluted
authentication methods, GDPR compliance (probably going to scare a lot of
people away from doing anything like writing a Twitter-clone hobby project,
or even static sites where they might have to spend extra time configuring
access logging) and other burdensome laws like the one from California,
getting whined at if anyone catches a whiff that you use passwords to log
into SSH. So many barriers that fewer people are going to end up
participating.

~~~
Carpetsmoker
> APIs forcing more convoluted authentication methods

It's not really that convoluted: it's just setting a header with a fixed
token, which is actually simpler than HTTP auth.

------
fergie
Slightly alarming that GitHub has allowed username:password on API calls up
until now; they have definitely kept quiet about it.

------
teknopaul
Breaking apis is breaking apis, even for security reasons.

Removing something that works is not, in itself, "improving" security. It is
breakage in the holy name of security.

If you have broken your system in the name of security, it is not a more
secure system. It's just a broken system that does not do its job. It might
be "more secure" once users fix their api usage; in the meantime it's useless
and is going to cost everyone time and money to fix.

It might make more sense in the long run to move to something with a stable
api.

