Today we're in a 24-hour brownout period to help folks find places they might have forgotten which rely on the unencrypted git:// protocol. The date when this will go away permanently is still March 15, 2022.
(I'm the product manager for Git Systems at GitHub.)
It seems like the brownout should be publicized on the front page of github.com - or at least githubstatus.com ?
(Although I'm not an active git/github user, so maybe there's an even more obvious place a naive user having issues would go to figure out what their problem is. Hopefully there's a helpful commandline error explaining the situation.)
Our CI system broke as a result - I really like this "brownout" idea to help us find it before it turns off for good, but a 24-hour period for us to be broken or scramble to fix is kind of a PITA.
I imagine it would be much more technical effort, but a way for us to opt certain repos out of a brownout would be really nice, so that once it happens, we could easily disable the brownout for our repo & schedule working on a fix, while letting us continue working with the existing infra.
One way to opt out of the brownout would have been to switch to new Auth back in September of last year. Or November of last year after the first brownout.
Because the next brownout is permanent - and that will be an even bigger PITA.
I think it's extraordinarily powerful to have these brownouts for organizations that don't make the change when they should have (which was last year).
> One way to opt out of the brownout would have been to switch to new Auth back in September of last year. Or November of last year after the first brown out.
The point of the brown out is to help people find cases where they’ve missed this. So “opt out by doing it beforehand” isn’t a viable solution.
The last brownout was a quarter ago. A lot of new things can get introduced in that time that still do it wrong.
I think the idea is that once you know what is going on you can opt out, so that you can put fixing it on your next sprint instead of having to drop everything and fix it right now so you don't lose 24 hours of productivity.
I 100% understand that - the point I'm trying to make is: the next "brownout" is permanent, so instead of 24 hours, next time you will have a permanent loss of productivity. I.e., why are there devices being deployed that are still broken? Because in a few months, they are never coming back until they are fixed.
The idea is to make brownouts increasingly painful - just letting them be a short period of time, or letting people "opt out", doesn't serve the purpose here, which is to make it absolutely clear that the service is going away.
An interesting additional benefit here is that rather than a brownout, you could just have a "soft cutover", where people can re-enable the old protocols for two months. There are good reasons for this:
- People who aren't using the old methods can turn them off now and leave them off, benefiting from the new security change sooner.
- People who need to fix something can temporarily repair their workflow at the time of their choosing.
- People who need more than 24 hours to fix their workflow can re-disable the old methods to test that they are now good, at their leisure, between today and March.
Well, feature removals do. Deprecations are just declaring that a feature should not be used and either will or may be removed in the future, which causes no operational problems (and in fact is done specifically to help avoid the operational problems of feature removals.)
I'm not impacted, but I wonder if a brownout run on odd hours for a week and then a brownout run on even hours for a week would catch more situations and allow people to fix systems without as much fanfare.
Degrading a service/feature or having intermittent failures rather than just immediately knocking a service/feature offline. In this case, intentionally induced and used as a strong nudge to downstream users to stop using this particular feature.
Comes from terminology used for the electric grid, where a brownout is a milder form of a blackout. Instead of electricity being entirely shut off and everything goes "black," the voltage drops and lights dim, i.e. go "brown."
Usually 'brownout' refers to a situation where an electrical grid is failing to provide sufficient power but is still providing some. It's not quite a blackout, but it's also not functioning properly.
In this context it's referring to a transitory period where a feature is in a state of flux and may only partially work.
And in consequence, although many people will experience it no longer working, it might still "work" for you (or it might never have "worked"), since anybody who is between you and GitHub can decide what happens instead.
If you've insisted on building something that insists on using git:// this way, you can proxy it to the safer supported system seamlessly - since you had no way to know the data was correct before, you'll have no way to know whether the proxy behaved either. Somebody might already have made a tool to do that; if not, you could roll your own.
Git itself has a config flag to tell it to always use https:// instead of git://. It's often been recommended for Windows users of git as sometimes the only way out of some peculiar submodule holes (on a platform where SSH was a second class citizen for many decades).
There are versions of that config line more specific to individual hosts if you didn't want to blanket-cover every git host, but it's probably a good idea at this point to instead ask those other hosts if they would consider adding https:// support.
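For reference, the blanket and host-scoped forms of that config are both just `insteadOf` URL rewrites in `~/.gitconfig` (a sketch; pick one):

```
# blanket: rewrite every git:// URL to https://
[url "https://"]
    insteadOf = git://

# host-scoped alternative, limited to GitHub
[url "https://github.com/"]
    insteadOf = git://github.com/
```

The same thing can be set from the command line with e.g. `git config --global url."https://github.com/".insteadOf git://github.com/`.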
I've read that for fetching and pushing, the http protocol is less efficient compared to the git protocol. Could a remote helper be used to establish a TLS connection to the server and use the git protocol over that connection?
> Finally, we have the Git protocol. This is a special daemon that comes packaged with Git; it listens on a dedicated port (9418) that provides a service similar to the SSH protocol, but with absolutely no authentication. In order for a repository to be served over the Git protocol, you must create a git-daemon-export-ok file — the daemon won’t serve a repository without that file in it — but, other than that, there is no security. Either the Git repository is available for everyone to clone, or it isn’t. This means that there is generally no pushing over this protocol. You can enable push access but, given the lack of authentication, anyone on the internet who finds your project’s URL could push to that project. Suffice it to say that this is rare.
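To make the opt-in nature concrete, here is a sketch of how a repository gets exposed via that daemon (paths are illustrative; the daemon and clone commands are shown but not run):

```shell
# Create a bare repo in a throwaway location and opt it into git-daemon.
repo="$(mktemp -d)/demo.git"
git init --bare "$repo"

# Without this marker file, git-daemon refuses to serve the repository:
touch "$repo/git-daemon-export-ok"

# Serving would then look like:
#   git daemon --base-path="$(dirname "$repo")" --listen=127.0.0.1
# and an (unauthenticated, unencrypted) clone like:
#   git clone git://127.0.0.1/demo.git
```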
I was also under the incorrect impression that git:// was just an alias for ssh. It's amazing how long you can use something and still not understand how it works.
Not really a practical answer, as your SSH client doesn't (and shouldn't) offer this, but the rest of the SSH protocol just relies on a negotiated, arbitrary encryption layer for moving data between client and server, which is transparent to it; you can drop in different algorithms (on most PCs today AES will be the best option because it is hardware accelerated, while on cheaper or lower-power hardware ChaCha20 may be much better). So it is technically possible to have a NO-OP encryption layer - it's just a bad idea.
SSHv2 negotiates this stuff up front, before anybody authenticates anywhere. Key Agreement protocols like Diffie-Hellman allow two parties to agree over the network on keys and encrypt all their data even though they don't yet know who the other party is, and in SSH the encryption protocols just have string names like chacha20-poly1305@openssh.com so you could invent useless-empty@example.com and if anybody wants to agree to use that the consequences are on them.
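For illustration, the client's preference among those string-named algorithms is just an ordered list in ssh_config (host name and cipher choice here are assumptions for the example, not a recommendation from the comment above):

```
# ~/.ssh/config
Host lowpower.example.com
    # prefer ChaCha20 first on hardware without AES acceleration
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
```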
"The NONE cipher switch disables data encryption AFTER you have been authenticated or logged into the remote host. This can significantly reduce the load on the CPUs of both machines and may improve performance even more. It's important to remember that the initial authentication process is still fully encrypted."
If we're sending data over a private point-to-point link we always consider the NONE cipher ... especially if the underlying data was created by borg or restic anyway ...
Like most crimes, the biggest and most common dangers come from the people/software you already know. It's going to be a company's profit incentive or a lone dev's hissy fit expressed through software you implicitly trust that gets you. Unencrypted internet transfers have worked and will continue to work for a very long time with very few problems for the vast majority of people.
Requiring encryption might be okay, but requiring CA based TLS is not okay. It is another strong force of centralization and shortly thereafter, control.
http is fine. git:// is fine. TLS CA based git and https are great. But CA TLS only to "fix" the problem introduces more security problems than it fixes.
> POSSE is an abbreviation for Publish (on your) Own Site, Syndicate Elsewhere, the practice of posting content on your own site first, then publishing copies or sharing links to third parties (like social media silos) with original post links to provide viewers a path to directly interacting with your content.
> Requiring encryption might be okay, but requiring CA based TLS is not okay. It is another strong force of centralization and shortly thereafter, control.
CA-based TLS is a requirement that a client can enforce on the servers, not vice versa (except for client certificates, which are obviously not the case here).
When a server (Github in this case) chooses to acquire a CA-based TLS, the client isn't forced to depend on the CA in any way - it can even choose not to verify the certificate authenticity.
If the official git implementation started requiring TLS, thus forcing free private git servers (including your personal selfhosted gitea or whatever) to centralize under a recognized root CA, then your comment would make sense.
Aha.. I'm an idiot. Took a minute to figure out nothing needs to be changed for remotes starting with git@github.com .. this is about remotes starting with git://
Was there some notification of this change sent directly to all github users? I checked email and didn't find any references to the change from github.
Is there some other automated system to become aware of changes like this, other than stumbling upon a HN post?
Edit: Also looking here https://github.blog/changelog/ I don't see a related notification. I'm probably missing it though because there seems to be a lot of noise.
> Was there some notification of this change sent directly to all github users? I checked email and didn't find any references to the change from github.
Same here. No notification email as far as I can find. (Though this might depend on your notification settings? I have mine set to the minimal.)
>Is there some other automated system to become aware of changes like this, other than stumbling upon a HN post?
GitHub offers the web changelog, their RSS changelog, and the Twitter account @GHchangelog. These three all relay the same information, and ironically all three missed announcing this important change. Instead they posted a blog post[0] which was not linked at all from the changelogs.
Just got bit by that this morning. For anyone updating contributor forks submitted in Pull Requests, use `git@github.com:<contributor-username>/<fork-repo-name>.git`. I don't know what black magic github had going on prior, but I was always able to just use `git://`
This breaks any python package versions that install dependencies via ‘pip install’ from a pinned git+git URL. (For example: google-cloud-cpp 1.23, which was released about a year ago).
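Pinned URLs like that can usually be migrated mechanically; a sketch of the rewrite (the file contents and package name below are illustrative, not taken from any real requirements file):

```shell
# A requirements file pinning a dependency via the now-dead git:// transport.
cat > requirements.txt <<'EOF'
example-pkg @ git+git://github.com/example/example-pkg.git@v1.2.3
EOF

# Swap the transport; pip understands git+https:// URLs with the same ref pins.
sed -i.bak 's|git+git://|git+https://|g' requirements.txt
cat requirements.txt
```

This only fixes requirements you control; pins buried inside already-released packages (the situation described above) need a fork or an upgrade.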
Incidentally I'm onboarding someone new to git and Github. I must say, Github is now exceedingly difficult to set up. Vocalizing every step of the process, hopefully not in a hand-wavy way, it strikes me how hard this must be for unguided noobs.
To some degree, this must be crushing the value of the service.
If ever there was a crying need for setup wizards, configuration audit helpers, and clear error messages, this would be it.
On one hand I can clearly see how git-like history and version control could help in so many areas beyond tech. On the other hand, I can see that's never gonna happen with the barriers to entry we see today.
The Github team should talk to the Rust team about error messages and helpful diagnostics.
This was not what a lot of people were doing beforehand. We were logging in with our passwords. At least for me when this change was announced, I had no idea what a key pair was as I was always using passwords for everything.
I would bet that a large portion of first time GitHub users have never used ssh, myself included at that time. Anyone teaching themselves to code will find a lot of information right away about git and GitHub, but probably little about ssh until they've gone pretty far past the stage where they create a Hello World script in Python and save it to GitHub.
Until GitHub had made that change, nope. Logged into servers with a password (my entire company did actually). Used Git nearly exclusively through some Jetbrains tool.
We've been using SSH keys by default for all of our new users for years, removes a ton of confusion and starts them with one of the better authentication methods out of the gate.
(We also set most up with code signing because why not? They think it's cool because we have green checkboxes everywhere, and they get to learn a bit of cryptography too)
IIRC the github client (by default) uses HTTP auth, not SSH auth. It works for using the client, but not if you use other tooling that needs repo access (like pushing from your IDE).
At least that was the case with github for windows last I used it a few years ago.
It's really not that hard. Install the GitHub CLI and it gives you a simple setup wizard on first run. The GitHub page for the CLI also has clear and simple instructions on how to install it for any OS.
The Github team has done a great job on the docs for this process. It's one of the few places I can tell a jr dev to RTFM and send them a link to the Github docs, suddenly they're an expert in SSH and up and running in 10 minutes.
A couple years ago it was harder, but I've done this from scratch on my machines and others' several times over the last few months and it's always easy.
Hey, I keep bouncing between mac and windows, where/how do you store credentials/tokens? I feel like I've got passwords now that I can't memorize and have to save as text somewhere. I'm putting sensitive stuff in a secrets folder with a corresponding gitignore entry, but I feel like there has got to be some well understood way to handle this?
Yubikey is probably the sanest cross-platform solution - assuming you're using an OpenSSH build newer than Microsoft's default version, which you'll need to get from https://github.com/PowerShell/Win32-OpenSSH/releases to have support for USB keys, of course. Hopefully MS will update their included version at some point soon.
From there, it's as simple as telling the .ssh/config file to use the key from your Yubikey and you can use the same config file on any machine you have OpenSSH.
Have you done the setup on windows lately? Because AFAIK, (Fido) yubikey support is still missing. Using either the PKCS#11 support or the gpg applet requires some extra piece of software. Also it required telling git to use that specific ssh version; last time I tried a few months ago, the git installer defaulted to something bundled IIRC. Then, you also want to fiddle with autocrlf and other settings. Git on windows is a pain, but that's not GitHub's fault.
Yeah. I'm using it right now. After doing a single-time setup and making sure that I keep a backup of the .gitconfig, etc, I haven't had any problems. I made sure to point Git specifically to the OpenSSH I provided (which I keep in c:\utils\openssh) with the following bit in the .gitconfig file:
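The snippet itself didn't survive in this thread; on Git for Windows the usual way to pin a specific ssh binary is `core.sshCommand`, so the fragment presumably looked something like this (the path is the one mentioned above; the exact form is an assumption):

```
# .gitconfig fragment
[core]
    sshCommand = C:/utils/openssh/ssh.exe
```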
For GPG, the only things I've done is to use gpg-agent and set up a passthrough for gpg-agent to WSL2 for both OpenSSH and GPG via https://github.com/BlackReloaded/wsl2-ssh-pageant/ since I do development both natively on Windows and via WSL2.
So it’s still as it was when I last set it up: everything is there, but it requires fiddling. I’m using a similar setup right now. One place where Linux has the better experience.
Credentials/tokens go in environment variables. For development, the convention is to make a `.env` file that's in `.gitignore` and load it into your environment variables. In deployed contexts, you often have a system to do this for you.
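A minimal sketch of that convention in plain shell (variable name and value are illustrative; real tooling like direnv or dotenv libraries automates this):

```shell
# A .env file is just KEY=value lines, kept out of version control via .gitignore.
cat > .env <<'EOF'
API_TOKEN=example-token-not-a-real-secret
EOF

set -a        # auto-export every variable assigned while sourcing
. ./.env
set +a

echo "$API_TOKEN"
```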
What sort of tools read .env files/what do I need to know to comfortably set and read environment variables in PowerShell or Bash? Do you just keep the token there in plain text in the .env file?
If you have multiple work stations, do you have a method to keep your secrets synched?
Sorry about all the questions! I appreciate any insight you might have on this though.
One way I see `.env` files used is with NodeJS webservers which will read in the .env file.
But more generally, in shell usage: On Unix machines: direnv https://direnv.net/ (It doesn't support powershell; but I see there are scripts for powershell inspired by this). -- Just be very certain these won't get committed into the repo if you're going to put secrets in them.
I think it's preferable to have different secrets for different machines. (e.g. different SSH keys, or different AWS IAM users - which can each assume a shared role if that's easier to manage).
If you want to sync secrets, one easy way is to use a password database, and then use Dropbox or whatever equivalent solution. Another way would be to use e.g. PGP keys. (The public key of a PGP key can be shared between machines, and can be used to encrypt contents for that machine).
There are projects, like for Node there's a dotenv package, that can load these files.
If I have multiple machines, I manually sync them since that's usually the easiest way to manage it as secrets in my experience don't change enough to warrant syncing them.
It is hard, but everyone gets through it (or doesn’t). I kind of like it for a filter of whether people can navigate Google/stackoverflow/friends enough to get set up.
As bad as it is, it’s still better than every other source control I’ve used (vss, clearcase, perforce, cvs, subversion, tfs, mercurial) and their services.
Any attempts to make it easier will likely make it as crappy as stuff like azure devops or other Microsoft source products.
That being said, good error messages would be great. I feel like an idiot whenever someone onboards and gets stuck and I try to remember how I set up my environment however long ago.
To add, the best way to make it not hard for someone is to get them to use https://desktop.github.com, and hopefully they'll eventually expand their curiosity to the cli.
Well, I mean, if you're learning from Rust, note that cargo just automatically makes new projects a git repository if that's possible, in the same way it defaults to your new project's code being a trivial implementation of "Hello, world." so that it will build and run before you write any code, and it defaults to providing you a working HTML documentation build and unit test framework. Plus of course its access to the repository for Rust libraries. But today it does not (out of the box) provide you with automated fuzzing, good performance/benchmarking tools, a bug tracker and so on. Maybe the hot new language of 2035 will have all those things.
However, GitHub doesn't get the clean slate cargo gets. If I make a new repo, GitHub suggests I populate it with many good things (license info, a README document, and so on), but of course I can't, because it's actually just somewhere to push my existing local git repo, and if the new repo isn't empty that would cause a conflict.
>It's more of a hassle now than it was 5 years ago
I've been authenticating with Github with SSH for 10 years - as in the creation date for my SSH key on Github is 10 years old. The process is exactly the same.
This seems wrong. Your public keys are.. public, whereas the password is secret. So you can keep those public keys on a PostIt note, on your home page, a text file pinned to your desktop, send them by email, whatever.
Now I understand if your password everywhere is just pass1234 that's easier but, trouble is that means you don't have any actual security. People whose password isn't obvious are clearly better off with keys.
> So you can keep those public keys on a PostIt note, on your home page, a text file pinned to your desktop, send them by email, whatever.
Everywhere I have worked that has been done with passwords so that probably illustrates many of the differences too. One job had required password changes every 90 days so we kept them on the Wiki. For many, passwords are just a required annoyance.
Had a fun time today updating requirements.txt, git submodule refs and forking old dependencies of dependencies that haven't updated their repos yet =| Inception!
It seems like mostly the effects of all these auth changes are twofold
* ~~Force~~ Encourage people to have github accounts
* Make using plain git more difficult (auth tokens, deprecating passwords, etc) while encouraging people to use the github cli tool
Given microsoft's history of "embrace, extend, extinguish" I have a hard time seeing these changes as anything other than an attack on git as an open ecosystem.
I don't know anyone who would advocate using the raw git: protocol without ssh. Setting up an ssh key is not a conspiracy to make it more difficult. Using ssh is simply how it's done.
I'm not a fan of the github app or github cli, I prefer the standard tools. However, calling this "embrace, extend, extinguish" is weird.
> I don't know anyone who would advocate using the raw git: protocol without ssh. Setting up an ssh key is not a conspiracy to make it more difficult. Using ssh is simply how it's done.
But it does require having an account or public key on the server. https does not require having an account, but the http transport protocol is less efficient compared to the native git protocol. If git supported a protocol like gits where it could establish a TLS connection to the server and then use the git protocol for fetch or push, that would be ideal.
It’s great for anonymous clones of public repos where I don’t care about signing in.
I work on a lot of public repos. I want anonymous reads and clones to be super easy for my users.
I don’t like forcing people to log in just to read. I don’t care about encryption as the integrity of anonymous clones isn’t something I suspect will be abused.
I don’t consider that a reasonable risk. And even if it was, then I don’t care for repos I explicitly clone without using encryption.
There’s a large set of “I don’t care” style projects where I’m just looking to grab a copy. So even if my ISP (or some nefarious jerk on the same WiFi) could MITM, I don’t care. Plus, I’m not aware of any attacks like this; because of the whole git checksum thing, it’s harder than injecting into plain http traffic.
I'd be interested to know what kind of development environment you have that allows you to download arbitrary source and not get pwned. If you're on Qubes, that's cool, but I think most contemporary toolchains for all contemporary languages will expose some kind of arbitrary code execution through their build tools. Which is of course a massive shame.
This seems unlikely, but if you thought that the site that you believe is github.com might be responding to you with tampered data, and you wanted to allow an independent third party to view the bi-directional traffic flowing between you and that entity, then you might want to communicate without encryption.
(it could also reduce encryption and decryption load at both endpoints, and if you have other means to enforce content integrity then the channel itself does not have to be encrypted)
> * ~~Force~~ Encourage people to have github accounts
I switched from git:// to https:// and it worked fine. I didn't need to authenticate. So there's no forcing to have accounts going on here at all, as far as I can see.
> I didn't need to authenticate. So there's no forcing to have accounts going on here at all, as far as I can see.
Not yet. That's the whole point.
It's past time to move to distributed wikis and issue trackers (perhaps even a Git-based one, like Bugs Everywhere) so migration will be easier when Microsoft stops baiting and starts switching.
So you’re criticizing them for a sequence of events that exists only in your imagination?
And what do you believe needs to happen for MS to switch into “switch” mode? This change isn’t making people more dependent on GH, the number of accounts is probably growing slower than in the past because almost everyone has an account… What’s the piece missing for them to turn to the dark side?
The change in question does not. It only requires that you clone over SSH or HTTPS, not over git://. I don't see why anyone would want to use the latter today.
Given how trivial it is to self-host your own Git repositories on any random Unix box, I am not worried about GitHub attempting lock-in even if they tried.
I (submitter) hit this with pre-commit, for whatever reason `pre-commit` hooks all seem to specify `git://` addresses, which had been copy-pasted into our config (some by me).
I've never otherwise used `git://`, and I simply changed them all to `https://`, but I suspect it's mostly that sort of thing that'll bite people - something suddenly stopped working because something else made that decision, and maybe it's buried in a dependency used in CI, and a python library for running git operations, so you can't even find it by grepping, etc.
So while mine especially wasn't hard to fix, I just thought it might be a helpful PSA for people to keep in mind and maybe remember if something goes wrong.
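For the cases that are greppable (config files you own, as opposed to URLs buried in dependencies), a quick audit sketch - the file and contents here are fabricated for illustration:

```shell
# Set up a fake working tree containing a stale git:// reference.
mkdir -p demo-repo
printf 'repos:\n  - repo: git://github.com/example/hooks\n' > demo-repo/.pre-commit-config.yaml

# Recursively list every remaining git:// URL with file and line number.
grep -rn 'git://' demo-repo
```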
> I just thought it might be a helpful PSA for people to keep in mind and maybe remember if something goes wrong
Oh, it was - I'm sure there's a lot of people who are being bitten by this, and nothing I said above should be seen as saying that your post was not useful. :)
I’m curious, where did you get those pre-commit configs from? I’ve seen many projects use pre-commit, and all of them use https. From what I know, pre-commit’s documentation also only provides examples with https repos, not git:, from pretty much Day One. It’s probably a good idea to tell whoever created the config to fix their materials, for more than this particular reason.
Well I didn't look into it thoroughly, I just wondered how/why we ended up with `git://` (as I say, that's never what I use personally - I knew I wouldn't have written it - but I was to `blame` for some of them) and at least a couple that I checked linked from the list on the pre-commit site to GitHub had a copy-pastable 'add this to your config' with `git://` in the readme.
I wouldn't be that scattershot, and would instead include "github.com" in the key and value. This is because you may actually want to do SSH clones of non-GH repos, such as if you also deal with Gerrit.
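One host-scoped reading of that advice (assuming the blanket rule being discussed rewrote all `git://` URLs) pins both the key and the value to github.com, leaving other hosts' remotes untouched:

```
# ~/.gitconfig: rewrite only GitHub's git:// URLs, leaving e.g. Gerrit remotes alone
[url "https://github.com/"]
    insteadOf = git://github.com/
```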
Authentication seems the essential keyword here. Even their npm registry, when configured for a public package, needs an authenticated user before you can pull. This was promised to not be needed over a year ago. We eventually gave up on this as it complicated onboarding new contributors and CI.
Of course now every pull is authenticated and they can track exactly what users are pulling from what public repositories. Before it used to be that I only really authenticated when I did a git push.
So now when using plain git the choices are to have every action tracked, or to go through a very annoying process involving randomly generated keys.
Not that they couldn't guess that information based on IP address (presuming you're not behind a university NAT or something), but removing password auth does seem like a pretty clear attempt at gathering more user data.
> So now when using plain git the choices are to have every action tracked, or to go through a very annoying process involving randomly generated keys.
EDIT: this alias has a bug that means it doesn't work as intended, see below, don't use it
If you're that worried, I threw this shitty Bash function together in five minutes just now; stick this in your bashrc and then `anongit ...` instead of `git`:
Shit, you're probably right. It only worked because I didn't tell SSH to only use the specified identity files (I threw it together in five minutes without really testing it, you can forgive me). Changing one of the lines to this (which it should have been all along):
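The function and the corrected line aren't preserved in this thread; a plausible reconstruction, assuming the wrapper routed git over SSH with a dedicated throwaway key (the key path and name are made up), is:

```shell
# Hypothetical reconstruction: wrap git so SSH offers ONLY a dedicated key,
# never your main identity. ~/.ssh/anon_ed25519 is an assumed throwaway key
# registered to a separate, anonymous account.
anongit() {
    GIT_SSH_COMMAND='ssh -i ~/.ssh/anon_ed25519 -o IdentitiesOnly=yes' git "$@"
}
```

The `IdentitiesOnly=yes` option is the likely fix: without it, ssh also offers every key loaded in your agent, so the server could still see (and log) your main identity.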
Taken alone it's pretty reasonable, but it does seem like part of a larger trend. Removing the ability to authenticate with a password using standard git seems to me like a pretty obvious attempt to force people to use the `gh` cli tool.
The main reason to de-emphasize passwords is that they can easily get leaked and can cause long-lasting problems.
Forcing automation to use per-service authentication tokens, in the form of distinct SSH keys or access tokens, allows you to:
* limit the scope of a token when it gets leaked (a password gives full control over a GitHub account; an SSH key or token gives much less)
* verify which credentials are still actively used
* deactivate & replace tokens on a more fine-grained level if one should get leaked