Git security vulnerability announced (github.blog)
549 points by delsarto on April 13, 2022 | 273 comments



All: the originally submitted URL was https://github.com/git/git/commit/8959555cee7ec045958f9b6dd6.... Readers are divided about which link is better, which probably means you should read both to understand the thread.


> Run the uninstaller under an administrator account rather than as the SYSTEM user

How do I run something as SYSTEM? I thought I always ran as "me" or Administrator. Is this only likely to happen for deployment automation tools?

> Avoid running the uninstaller until after upgrading

Don't leave us with this cliff-hanger... Does the upgrade installer run the uninstaller first? (The original report doesn't have this bullet point.)


On Windows, it's amazing how confusing permissions are.

Seemingly everything needs admin permission to run. If the program can't get admin permission from you, it can probably just ignore that and do stuff like install to %appdata%. Chrome does this, for example, apparently to fix a bug where workplace administrators didn't want their users installing programs with administrative permission. Chrome or anyone else can just fix that bug by using a windows-level exploit feature.

You can never really be sure what's happening when you install a mystery .exe file until you just install it. Sometimes you can exploit things with hack tools to take a peek inside I guess, but that doesn't seem to work very well most of the time in my opinion.

This horrible permission management system is why a lot of games only work on Windows. If they want to make sure players aren't cheating, the way they do that is by installing rootkit-level malware. We call that anticheat. It's amazing how that's a standard procedure that can just work on a computer. It also has a history of entitled devs screwing it up big time (Sony) or even doing it very poorly, which causes huge performance drops like we see in many recent games.

Modern Warfare 2019 crashed and corrupted my OS, which was on an encrypted drive. This led to 100% data loss for me. I'm 90% sure the root cause was the rootkit anti-cheat bugging out, on top of the awful coding / performance and general bugginess we see everywhere in that game.

I don't understand how Windows still works off of you receiving packages at your door and the only way to verify its safety is to guess that the timing of your package syncs up with what you ordered and the packaging looks like what you got, and to have hope that the place that packed it wasn't malicious. While nearly all programs do include bombs (unwanted tracking / malware / bad design), usually they're just fairly harmless firecrackers. The only way to open your package up is to put it on the flammable gas pipe in the middle of the house (a leftover design from earlier versions of the house) which has a very heavy flaming chainsaw attached to a 1 inch chain.


I don't think Windows permissions are that complicated, and they certainly cover the problem space of actual companies better than unix. The problem is that the original design was somewhat opinionated and didn't match what home users were doing.

To oversimplify:

- there's the kernel, which can do anything

- there's SYSTEM, which is kind of like "root" in unix, but only used by services (you can't log in as SYSTEM)

- there are user accounts with Administrator rights, which can do "anything" (can be slightly limited by the above two)

- there are user accounts without Admin rights, kind of like normal user accounts in unix.

There are also more specialized Admin privileges, rights for everything are actually controlled by Access Control Lists and tokens, and your accounts and their rights might exist locally or domain-wide (centrally managed on some server), but unless you're a company you don't care about that.

The problems you describe are mostly due to three things:

- Home users got into the habit of only using one user account with admin credentials. It's convenient, but effectively running everything as root. That's why we now have UAC (that prompt when you try to do admin stuff with an Admin account), and the transition to UAC was pretty rough.

- Some software actually wants some protection from the user (antivirus has to protect itself from processes started by the user, anti-cheat has to protect itself from the user, etc.) Because home users use Admin accounts for everything, the only escape upwards in the hierarchy is into the SYSTEM account or into the kernel, with the latter being much more secure, but much worse if you do it wrong

- In large corporations obviously only IT has Admin accounts. That's how the system is supposed to work. People still want to install software without calling IT, so some software installs in the user folder (in %APPDATA%). That's no different from installing in ~/bin in unix.

The last two points are equally bad in linux; you just don't come across them as often because nobody does anti-cheat and runtime anti-virus on linux, and most linux systems are ultimately used and administrated by the same person.


> - there are user accounts with Administrator rights, which can do "anything" (can be slightly limited by the above two)

Not technically. Windows permissions follow a "capability" model more than a "user" model. In the Linux user model a user account is always "just one thing". In a "capability" model the user may request different capabilities at different times (based on different needs).

Even in the bad pre-UAC days users didn't have Administrator rights "at all times" in the sense of an Admin account (or group) in the Unix/Posix model, they'd request the capabilities as they needed them and the system would grant them as it saw fit, which was usually just automatic and invisible. The tokens for admin and non-admin stuff were "always" different in Windows. UAC just finally changed it from an "auto-grant" to a "user consents to the grant". UAC wasn't bolted on to the security model of Windows NT, it intercepted token flows that already existed and removed the "automatic" nature of them. (That's why the UAC transition hurt so much at first, especially in Vista, not that it was "bolted" on, but that a lot of software had been built around presumptions about these "automatic" token flows and assumed they were cheap/easy so over-requested them rather than requesting them as rarely or as specifically as truly necessary.)

> Home users got into the habit of only using one user account with admin credentials. It's convenient, but effectively running everything as root.

Not in the Linux sense, no. A home user account isn't "effectively running everything as root". UAC is like sudo in that it acquires a separate user token for subsequent actions. Details about the account may be similar (because of the capability model, the "account" is the same, but the capabilities differ), but they are "distinct" accounts in the Posix reasoning of name+capability.

Large companies that disable UAC and require separate "Admin" accounts are over-reacting/over-correcting, often because they are expecting the Linux/Posix model or because their tools were. UAC is a sudo-like tool: the user tokens are very different on the other side of the UAC fence. Requiring a "physically" separate account is security theater and not very different from just changing UAC from Yes/No flows to "Require Password" flows (unless your auditing tools are bad at their jobs and coalesce tokens with different capabilities based on things like username due to presumptions from Posix systems). It's silly to manage twice as many accounts when you can just make UAC stricter and require passwords. (And also Microsoft's several decades of research show that "Require Password" flows are themselves security theater, people don't actually think longer about UAC prompts if they have to type their password in more often, it just trains them to type their password in more often, which makes it easier to phish their passwords.)


> I don't understand how Windows still works off of you receiving packages at your door and the only way to verify its safety is to guess that the timing of your package syncs up with what you ordered and the packaging looks like what you got, and to have hope that the place that packed it wasn't malicious. While nearly all programs do include bombs (unwanted tracking / malware / bad design), usually they're just fairly harmless firecrackers. The only way to open your package up is to put it on the flammable gas pipe in the middle of the house (a leftover design from earlier versions of the house) which has a very heavy flaming chainsaw attached to a 1 inch chain.

This is exactly what the "Signed by Windows" MSI feature is for. Of course it is also a cash-in with a prohibitively lengthy and expensive process for publishers though (IMHO)


This is likely aimed at remote management tools that can perform (amongst many other tasks) remote installs of apps across fleets of machines. On Windows you'll likely find these run as SYSTEM.


As an aside, those same low-level remote management tools are used by cyber actors (criminals and governments) to compromise entire organizations with ransomware and other malware. That's the real reason ransomware is such an issue today.

If corporate systems were stand-alone/isolated, we probably would not have this problem to the extent that we do.


In a company with 10000 computers you don't want to have an IT person walk to each of them to roll out new software or install an update. Sure, attackers would have a harder time, but IT departments would also have to be orders of magnitude larger


Humor me for a second: why not have them install the software themselves?


Probably because most of the users were hired to do accounting, legal work, advertising, etc., not to provide IT support.


You must not work in IT with end users.


> If corporate systems were stand-alone/isolated, we probably would not have this problem to the extent that we do.

Well.. Yeah, but also... This is what we used to have and have been moving away from. We used to have on premise and then moved to SaaS. I'm pretty sure we all realized that had some security consequences, right?


Compliance and audit driven organizations are more likely to do these things. They want consistency and control across the org. What they fail to realize is how that same consistency and low level control can be used against them. And, more importantly, the scale of the abuse will be as efficient as the scale of management.

It's sort of like building an encryption backdoor (only for law enforcement) and then to be shocked and surprised when criminals use it against you. Security technologists who know better are not consulted and/or their advice to isolate and diversify is not taken.


You can run it using something like the Sysinternals PsExec tool or Process Hacker. But it’s not something someone would likely do by accident and it’s a bit orchestrated/non-obvious.
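
For example, a rough sketch with PsExec (flags from memory; run it from an already elevated prompt):

  psexec -i -s cmd.exe

Running `whoami` in the resulting window should then report `nt authority\system`.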


If the installer is deployed with SCCM or Intune it's very likely being executed in system context.


Yeah, that's something that should be clarified. I'm almost certain `winget upgrade git` will run the uninstaller first...


It shouldn't. So much of the (ugly, hideous) complexity of the MSI engine is specifically there because of presumptions that installers don't uninstall previous versions but generally upgrade in place. winget should just defer to MSI norms here for Git for Windows as Git for Windows is a mostly normal MSI-based installer still.


IIRC startup scripts and such are run under SYSTEM. Some orgs have incredibly awful startup scripts that take minutes, it's not impossible that one of them has a script of "git-uninstall.exe && git-install.exe" to "ensure" git is installed.


af


What does that mean?


One is left to assume that he feels the cliffhanger alluded to by the parent comment is "as fuck".


The Windows-specific 'vulnerability' is weird. For one, it's part of the uninstaller, which isn't a common scenario, and secondly... C:\Windows\Temp isn't even writable by unprivileged users by default, it's not even readable by unprivileged users by default (on my relatively fresh Windows 11 system, at least).


The thing about c:\windows\temp is you can’t modify another user’s files but you can create your own.

It’s actually a _really_ common vector to exploit poorly written installers by dropping your own file (like a malicious dll or exe) into that directory as a low rights user in the hope that the high rights installer process will then load that code. That’s presumably what’s happening in this case.


At least on Windows 10 and multi-user installations everyone can access C:\Windows\Temp

How do you define unprivileged?


On my Windows 10 machine, I can't access C:\Windows\Temp as unprivileged user. It makes me press Continue, which will invoke admin rights to set privileges for that folder.


That's because you don't have the permission to list the contents of the folder, but you should have permission to create files in it.


True.

  Get-Acl C:\Windows\TEMP | select -ExpandProperty AccessToString
  CREATOR OWNER Allow  268435456
  NT AUTHORITY\SYSTEM Allow  ReadData, Synchronize
  NT AUTHORITY\SYSTEM Allow  268435456
  NT AUTHORITY\SYSTEM Allow  FullControl
  BUILTIN\Administrators Allow  268435456
  BUILTIN\Administrators Allow  FullControl
  BUILTIN\Users Allow  CreateFiles, AppendData, ExecuteFile, Synchronize
  BUILTIN\IIS_IUSRS Allow  ReadData, Synchronize


Both vulnerabilities are reported as git for windows vulnerabilities, aren't they?

I do not quite understand what the 'correct' behaviour is considering the parent directory thing. The concrete problem seems rather to be that there is no --safe switch to use in prompts, etc.


> Merely navigating to such a space with a Git-enabled `PS1` when there is a maliciously-crafted `/scratch/.git/` can lead to a compromised account.

I'm curious about this -- what's the attack vector here?


The key is the "Git-enabled `PS1`". PS1 is an environment variable recognised by common shell programs (such as bash) that configures the shell prompt. Git often installs its own glue into the prompt that ends up running a Git executable to discover such things as the current branch name and how many changed files. The vulnerability is that it's possible to add malicious things to .git/config that the git executable will pick up and call/run, even on simple operations like displaying the prompt.
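
As a rough illustration (assuming bash, with the stock `__git_ps1` helper from git's git-prompt.sh sourced), such a prompt typically looks something like:

  PS1='\u@\h:\w$(__git_ps1 " (%s)")\$ '

The command substitution runs git every time the prompt is drawn, which is exactly the code path the malicious config piggybacks on.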


What do you mean by “git often installs”? Git does not install anything. Developers configure their PS1 or install something that does.


On Windows the default git package (i.e. Git Bash) installs a bash terminal that has the git PS1 enabled.


> Git does not install anything.

It's literally a Git installer.


The default shell on Mac and a lot of Linux distros do this installation, as well as git for windows.


Can you explain? A shell isn't supposed to be installing things on its own... (Assuming you're talking about zsh or bash when you say shell.)


It's not doing it on its own. I believe what they're referring to is a pre-configured distribution of a shell packaged with some distributions of Git, for some OSes.


There's a lot of different ways to install Git on a lot of different OSes, and some of them put helper shell scripts down by default.


That isn't Git installing things though, that is third-party Git distributions bundling things along with Git.


I'm aware, which is why I phrased it like I did. A lot of folks don't know or understand the distinction when they're installing Git (or installing something else that happens to bring Git with it).

The overarching point is that the shell itself is not installing anything at shell runtime. E.g. it's the Git Bash installer, not opening `bash`.


Yeah, even zsh doesn't do this by default


Can you say what Linux distro does this? Seems like very poor taste for a command shell to be assuming one particular SCM.


On Ubuntu, and I presume therefore many debian derivatives and debian itself, `apt install git` will install a file called `/usr/lib/git-core/git-sh-prompt` (dpkg -S /usr/lib/git-core/git-sh-prompt).

This script allows you to see repository status in your prompt. It comes with 5 utility functions that AFAIKS are usable in all common shells:

    __git_ps1_show_upstream ()
    __git_ps1_colorize_gitstring ()
    __git_eread ()
    __git_sequencer_status ()
    __git_ps1 () 
On Ubuntu those aren't installed into bash by default, but you need to add it yourself to your ~/.bashrc (or zsh or whatever, but those aren't the default in Ubuntu). The mechanisms aren't in place for a package to inject itself into your prompt, but it isn't unthinkable (though maybe unwanted) to have that. Similar to how tab-complete for bash is handled:

The `git` package installs a file `/usr/share/bash-completion/completions/git` which is picked up by bash, on Ubuntu, as a pluggable way to extend autocomplete on bash. Many packages do this, on my machine there's 960 files in there (ls -1 /usr/share/bash-completion/completions/ | wc -l).¹ It would be trivial to have a similar mechanism for `bash_rc.d` which allows for a pluggable system to extend and modify your bash. Again, I presume this to be unwanted. I would oppose it.

¹ This is why I was surprised to learn that people like zsh for the reason that it offers tab-completion: I always presumed bash did that out of the box for any installed package already. Turns out it is Ubuntu (or better: Debian) doing this for me.


> It would be trivial to have a similar mechanism for `bash_rc.d` which allows for a pluggable system to extend and modify your bash.

There is /etc/profile.d/ which serves much the same purpose for login shells, though PS1 is normally a non-exported shell variable so it wouldn't be inherited. You can configure most other aspects of the shell this way, however. And there is also /etc/bash.bashrc which is read before ~/.bashrc, though it doesn't provide a convenient directory for drop-in scripts by default the way /etc/profile does. It would be trivial to add that if desired. (All based on Debian; YMMV.)


This feels like tab completion is a more serious attack vector then?


Anything installed through `apt` could be an attack vector, but that is silly, really.

If we start treating "apt will install software that can change my system" as a security issue, we should stop using all electronic devices right now.

And yea: I know, we should have sandboxing, isolation, chroot and whatnot. And we are heading there. Yet in 2022, the vast majority of computers, servers and such are installed using package managers which install packages that have access to the whole system. If you count mobile devices amongst "computers" then I guess a majority (Android) does have sandboxing in packages, which solves this particular issue.


You need to be root to add a file to the bash completions directory, so not really.


There is no single bash completion directory; there is a default one which happens to be system-wide and only root-writable (as it should be in such circumstances).

Fish allows for custom ones in ~/.config/fish, and there is zero reason you cannot install custom ones via ~/.bashrc or in the user-writable (on macOS) /usr/local.


Command shells assume a lot of things, such as your preferred text editor, locale, input and output devices, etc. Most of these things can be configured, but the default install makes assumptions based on common configurations.


The shell doesn't assume a particular one, but if it's in a directory where a supported one is present, then it enables those extras.


Yeah, I'm familiar with PS1, but I was a bit surprised to learn that simple things that a PS1 script might do (git status, perhaps) are attack vectors. It seems that one big concern is the core.fsmonitor option (which I just learned about now). From the git-config man page:

> If set, the value of this variable is used as a command which will identify all files that may have changed since the requested date/time.
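
So, if I understand it correctly, a planted config along these lines (the payload path here is made up) would be enough for even a bare `git status` to run attacker-chosen code:

  # hypothetical contents of a planted /scratch/.git/config
  [core]
      fsmonitor = /scratch/.git/payload.sh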


> …PS1 script…

That’s where you should have been concerned. Just typing a bare return will run some arbitrary code, as you, wherever you might be in the filesystem. If all of that isn’t under your control, someone can do anything to you.


Damn. So the script I use to put the branch name in my prompt is bad? Could I solve this by making .git/config readonly for non sudo?


The concern here is more on `sudo vi /.git/config` than `sudo vi ~/.git/config`. Someone adding a "root" git config that you don't expect/intend to exist.

If you think things are locked down strongly enough with sudo and never install anything that might add root files you don't expect you are most likely safe.

This release also adds an environment variable you can set that makes certain that git has a "ceiling" that it never crosses when checking for .git/config files. The idea being that you'd never want git to look above `/user/*/` for instance, as you'd never expect to have a "machine-wide" git config.


in other news, access to a user's account gives them access to a user's account


In earlier news, it's unexpected that "cd directory" will give the directory owner access to your account.


Tbh I thought this was pretty obvious.

Git hooks have always been sketchy as hell.

Can't stand the Mac specific shit my co-workers keep dumping in there.


Wait, I thought git hooks aren't pulled from remote.

Wouldn't untrusted git hooks mean that git verify-* are useless since you're already running untrusted code?


Just to react to:

>Wait, I thought git hooks aren't pulled from remote.

You are correct. They are not. Other tools may auto-install them (I hate it), but git does not ever.


On my current app we use husky which installs hooks when you do a yarn install
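
If I remember right, the wiring is roughly this: `yarn install` triggers the package's prepare script, which in effect runs something like

  git config core.hooksPath .husky

so git picks its hooks up from the repo's .husky/ directory.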


You are in luck, it seems: https://blog.typicode.com/husky-git-hooks-autoinstall/

(Though not completely sure.)


This is really dismissive. Unexpected execution is not a user’s fault, and can happen for a variety of reasons (are you telling me you’ve never unzipped a full git path someone sent you?)


Hi!

I can cheerfully confirm that I've absolutely never:

- received a git repo as an archive from someone, and then

- changed to root with "su" before unpacking it somewhere, such that

- the chosen location was above the home directory layer of a multi-user system.

- in such a way that one of the directories of the /path/to/home path has a .git/ subdirectory as a direct child, and not as an unpacked-tarball/.git grandchild which would not be accidentally found by git. I.e. that one of these directories exists, which might be found by someone running "git" in their home:

  /.git
  /path/.git
  /path/to/.git
  /path/to/home/.git
rather than the more likely:

  /foo-project-123/.git
  /path/foo-project-123/.git
  /path/to/foo-project-123/.git
  /path/to/home/foo-project-123/.git
which will not be found by someone running "git" in their home directory.

If I did such a thing, I'd care more about what happens when I happen to step on one of the malicious hooks in that repo as root, and less about what happens if users step on it.


Good for you. You are not everyone.


There are far more likely accidents that the superuser can perpetrate, that we do not compensate for with silly logic in applications.

Superuser could download some malware and put it into the system PATH. OK, so let's not execute anything in the PATH, unless it is owned by us.

/bin/ls? Not owned by me, don't trust it.


Okay.


Because linux has failed almost entirely to meet user-level threats. Unlike android, which has per-app file permissions, linux is not there yet.

I know a lot of people are interested in better encapsulation for specific programs, and I know there's a lot of work being done in the area, but it's nowhere near as effective, in my opinion, as android and other systems.

linux sort of follows the unix philosophy on this. OK, you're a user, with some shell script, maybe git, maybe bash and its PS1, I don't care, all I see as a kernel is that you have permissions to edit this, upload this, send a packet, whatever, have fun!

From that perspective, nothing is wrong. That was my point. You could download a script that does 'rm / -Rf' and there's no security issue. Users are given access to do as they please with files.

The issue is users can no longer reasonably trust the software on their system, from marketing tracking that phones home, to having all sorts of security issues in their virtual machines and sandboxes, to running random code from websites constantly. We need a better way to encapsulate per-file, per-folder, per-camera, per-whatever permissions, implemented at a system level.


You can download a directory containing a script, cd into that repo and less the file. ‘cd foo’ should never trigger arbitrary command execution. Ever.

Containerisation wouldn’t solve this; bash or similar would almost always be run with near limitless boundaries.


I beg to disagree. I like my https://direnv.net/

As long as it's strictly opt-in, it's fine. But it needs to be opt-in to be secure.


No? Why would someone zip me a git repo? You can clone/push/pull directly between machines.


The source machine may not be set up as a server, or you may not have an account there. Sending a .zip file could be simpler than pushing the repo to a third system the receiver does have access to.


Sure, mailing binaries can also be the right thing sometimes, but just as with git repos, you definitely shouldn't be taking them from untrusted sources.

(and I bet that many people don't even realize how easy using `git push` to mirror a repo actually is, since git is very commonly used in a centralized fashion)


You’ve never unzipped a repo? That’s something I regularly do - clients can’t get me access to their scm, and it’s not worth the effort.


I don't think I ever did. I usually have an opposite problem and sometimes accidentally zip/tar a repo when it shouldn't be included (at least there's `--exclude-vcs`).

In any case, this doesn't sound like a good idea at all, as git hooks exist.


You’re right but also I’m mad that you’re right. I need to check what hook functions there are, but I knew about hooks and still overlooked them, which to me suggests that they’re non-obvious. Maybe git should prompt you on first run, unless the repo is whitelisted on your host or signed by your git keys or similar.


This article on a CVE for git published today has details on the vulnerability: https://github.blog/2022-04-12-git-security-vulnerability-an...



You basically substituted both the link and the title with completely different ones after people had spent a long time discussing that specific link and title. Now half of the comments don't really make sense at first glance.


Yeah, that happens sometimes, but if the second link is more suitable then the discussion eventually adapts for the better. So the only real question is which link is more suitable.

One principle I like to use is to assume readers are smart—e.g. in this case, that readers are smart enough to figure out that there are two relevant links to the comments. Of course randomness is also a factor, but it mostly all works out.


The original link barely made any sense and many of those comments were comments without the useful context. The root cause here is the iffy submission, not the outdated comments or the change to a more meaningful link.


The original title was "Git 2.30.3 will not operate in non-owned directories" which makes perfect sense to me, and the link provides a good explanation of the security problem. What barely makes any sense about it?


It's a made-up title linking to some random commit. The new link tells you it's a fix for a vulnerability, the details, its CVE, affected platforms and use cases, etc, etc. The other thing doesn't.


The title was "made up", I'll give you that, but it's a pretty good paraphrase of the commit title to add context.

The old link also tells you it's a fix for a vulnerability, and also explains how it affects all platforms, and also talks about the use cases etc etc.

The only thing it doesn't have is a CVE number, which I don't think is all that important.


The official announcement tells you that there's a vuln, that it's considered important enough to break things, and that it's out right now. The other thing tells you someone committed something a few weeks ago. The missing context also helps drive a lot of under-informed grumpy threads, rather than better-informed grumpy comments/threads. There'd have probably been fewer grumpy threads with the better link.


They both say right at the top that it's a vulnerability, and the old title put the breakage front and center. So I don't know what you mean by missing context.


The context of one is 'someone committed a thing a few weeks ago and it does a thing, according to someone posting to HN'. The context of the other is 'one of the biggest git users on the planet tells you there's git vuln, fix out right now'.


Git itself removing an ability should tell you that it's a big deal even more than "one of the biggest git users on the planet".

And again, first line says it's a vulnerability. "it does a thing, according to someone posting to HN" is a big fat strawman.


I don't know, it really doesn't sound like a real CVE - maybe add some setting I guess for those worried? Others bring up good points, if your attacker can write to C:\ you probably have other issues.

On top of that, it breaks completely valid functionality - someone's 'bug' is someone else's feature.


It's really more of a Stay-Puft Marshmallow Man.


I feel like the title doesn't really focus on the specific behaviour change (not operating in non-owned directories) that will be affecting a lot of CI/CD, which is what I was interested in seeing discussion on.


Ah ok this was the real link. The top level link to github.blog doesn't seem to have anything that this link here has. Please change it back.


Sorry for the confusion! I've added https://news.ycombinator.com/item?id=31016938 at the top of the thread so people will see both links.


Repos can have pre-commit hooks, which are just executables (usually executable shell scripts, but anything will do) that will run (as your user) on commit, checkout, etc.

I feel like this change is a far bigger one than thought, and it's gonna break some workflows, such as mine where I have a git repo that's shared between multiple "users" that I run applications as. I'm glad I've not gotten too far into this project. The next step is just to keep doing a pull/push cycle on every commit, but it's a bit more of a pain to make that happen.


> I feel like this change is a far bigger one than thought, and it's gonna break some workflows

Breaks the CI system for perl, for example.



.git/config sets fsmonitor to malware.exe and boom.


Can you do this to a GitHub hosted repo?


No, the .git directory is not cloned. But if the repo is already on disk it can be game over.


Though you could have a repository on Github that contains a subdirectory that is a malicious bare Git repo. So doing:

  git clone github.com/foo/bar
  cd bar/subdir/

is unsafe with a Git PS1. See https://offensi.com/2019/12/16/4-google-cloud-shell-bugs-exp...


Or create a mercurial repo that contains a .git directory, and rely on finger memory making them run git immediately after cloning...


Finger memory, or just a shell configured to run `git status` before every command, as some people have.

And besides a Mercurial repo, it could also be a tarball or zip file…

Quite a dangerous situation.


I’ve done that a few times. I’ve typed “git foo” while mentally thinking about “svn foo”, and after a few hours working on a project that still uses svn I will start making the opposite substitution too.


We use the grunt build tool. More than a handful of times, I've tried to get grunt to do a merge, or git to build the app.


Looks like git complains of invalid paths when you try that.


Just because the cli won’t add doesn’t mean it may not be possible.


I was able to manually construct a commit with a .git subdirectory using `git mktree` and `git commit-tree`, but Git still refused to create the .git subdirectory in the index or working copy:

  [testrepo]$ git checkout --orphan test-branch
  
  [testrepo]$ git update-ref HEAD f4da9cde406a7b80d99694b5f8d369a8dd6e5a7d
  
  [testrepo]$ git ls-tree -r HEAD
  100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391    a/.git/config

  [testrepo]$ git show
  commit f4da9cde406a7b80d99694b5f8d369a8dd6e5a7d (HEAD -> test-branch)
  Author: <<redacted>>
  Date:   <<redacted>>

      WIP2

  diff --git a/a/.git/config b/a/.git/config
  new file mode 100644
  index 0000000..e69de29

  [testrepo]$ git status
  On branch test-branch
  Changes to be committed:
    (use "git restore --staged <file>..." to unstage)
          deleted:    a/.git/config

  [testrepo]$ git restore --staged a/.git/config
  error: invalid path 'a/.git/config'
  error: pathspec 'a/.git/config' did not match any file(s) known to git

  [testrepo]$ git reset --hard HEAD
  error: invalid path 'a/.git/config'
  fatal: Could not reset index file to revision 'HEAD'.
So it looks like even if you do try to check out a tree with an unexpected .git subdirectory it won't actually be created in the filesystem.


What do you mean?


I'm guessing the goal is to lock down git's security for any potential future vulnerability. If some security issue was discovered with git that could be exploited with a malicious .git directory structure, requiring git directories to be owned by the logged-in user will reduce the impact.


Or, an exploit was discovered, and quietly patched out recently



Replying to myself again, apparently the thread title changed from "hey git won't let you do this thing anymore" to "here's the CVE", and changed the link too. sorry for the confusion


Couldn't that be mitigated by git adding a --readonly option that prevents any write operations? Then just use that option for any PS1 executables?


core.fsmonitor is supposed to be read-only, but since it’s an external process, git doesn’t know what really happens in there, so it can’t propagate any --readonly enforcement.


That's where some sandboxing could help, maybe.


This is bullshit. I mean, ok, you are concerned about somebody using git-enabled PS1. Guess what, not everyone is using git-enabled PS1. Unbelievable, right? I would even mock the fact that you are trying to protect users from the behavior they pretty much explicitly allowed, but this is pointless. Truth is, developers are doing something that can fuck up their system daily. Let's forget about wget | bash and copying completely untrusted git repositories (and it's pretty much guaranteed that everybody using git-enabled PS1 won't shy away from that). Just using composer or npm is enough to compromise your system. "Fixing" this is like introducing DRM: you cannot do arbitrary unsafe stuff without doing arbitrary unsafe stuff. And there simply are people out there, who want to do arbitrary unsafe stuff.

But ok, let's not take it as an excuse. How about fixing git, then? I mean, actually fixing: making it possible to disable hooks & core.fsmonitor & whatever else they fucked up? No, right, let's just disable git instead.

And if I'm reading this correctly, I'm not even allowed to say "I don't care" — I must explicitly mark every shared directory as trusted (I mean, safe.directory = '/' won't work unless / is actually a git directory, right?).

I guess I just shouldn't update git until this "fix" is fixed. Or until git is forked.


>Guess what, not everyone is using git-enabled PS1

not everyone is running on a multi-user system either (realistically speaking, most personal computers are single user). That doesn't mean microsoft/apple/linux doesn't care about escalation of privilege exploits.

>Truth is, developers are doing something that can fuck up their system daily. Let's forget about wget | bash and copying completely untrusted git repositories (and it's pretty much guaranteed that everybody using git-enabled PS1 won't shy away from that).

So because devs are doing dumb shit on a daily basis, they shouldn't fix security vulnerabilities? What if I'm not doing dumb shit? should I get hacked because I entered a malicious directory on a multi-user system?

>I mean, actually fixing: making it possible to disable hooks & core.fsmonitor & whatever else they fucked up? No, right, let's just disable git instead.

but then what if you need hooks? then you'll have to somehow manually enable it on a repo-by-repo basis, which also doesn't seem very convenient. At least with the ownership check it's transparent to most users. For people that use shared directories and/or network drive mounts, they can always whitelist the path.


I am tempted to say that no matter who you are, I am pretty much positive you are doing dumb shit daily, and pretending you are not is laughable, but that would be off the point: I pre-emptively answered your first 2 points in the original comment, so I'm ignoring them. The only part that requires an answer is this:

> but then what if you need hooks?

Now that's just genius! So, making it possible to disable the functionality that specifically allows for the execution of arbitrary code (which is questionable on its own to say the least — it's pretty much the definition of aforementioned "dumb stuff") is bad, because having to enable it back is "inconvenient", and disabling the whole multi-purpose tool that git is (which has hundreds of user scenarios that don't require allowing execution of arbitrary commands) is good? This is a rhetorical question of course, just think about what you are saying. Worth noting that making enabling it back inconvenient is a strawman of yours: this is precisely my point, that even what they did would be ok if I were allowed to simply disable their "fix". And that's exactly the problem: there's no convenient option to do so.


> but then what if you need hooks? then you'll have to somehow manually enable it on a repo-by-repo basis, which also doesn't seem very convenient.

What's wrong with that? Git hooks are inherently dangerous (i.e. running arbitrary code) and should be something you opt into manually.


> That doesn't mean microsoft/apple/linux doesn't care about escalation of privilege exploits.

Because they're authoritarian control-freaks who want to take away even the concept of ownership eventually, having it all to themselves. They want to be able to force users into doing whatever they want.

If you're wondering "Linux too?" --- I'm not saying Linus himself is an enemy, nor a lot of the neutral developers who have contributed good things to it, but all the corporate interests (like Android --- via Google) have shoved plenty of "trusted" computing shit into the kernel, and "secure" boot for Linux distros is still ultimately controlled by a Microsoft key.

We are starting to wake up to this "security" bullshit.


is_path_owned_by_current_uid(const char *path) isn't symlink safe given a multi-component path.

Symlinks, the poisonous gift that keeps on giving.


Are you saying this was the bug that was fixed, or that this is a new bug that's not fixed yet?


I'm not sure it's a new bug, just that if you look up that call you'll find that if it's used with any path containing a "/", then it can be raced to check somewhere other than the place the original author intended.

That's why symlinks MUST die.
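
A hand-wavy sketch of the kind of race I mean (paths and timing purely illustrative):

  # attacker prepares two directories and a symlink
  mkdir /tmp/benign /tmp/evil
  ln -s /tmp/benign /tmp/dir
  # a privileged tool checks ownership of /tmp/dir/repo ...
  ln -sfn /tmp/evil /tmp/dir    # ... attacker swaps the link mid-flight ...
  # ... and the tool then operates on /tmp/evil/repo instead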


I do not understand why symbolic links are "poisonous"? Can I get some context?


I think it's just that they're tricky when it comes to ownership. People who write code that depends on some type of file or directory ownership for security often don't think about the ways symlinks can be used to bypass their security model.

You can sort of think of a symlink as having 2 owners: the user that owns the symlink itself, and the user who owns the file pointed to by the symlink. One of those owners might be an attacker, so every time you interact with a file, you have to think "this file might be half-owned by an attacker, and half-owned by a victim".


Daemons that care about security setuid temporarily before opening a file and then setuid back


That doesn't always fix it. An attacker can race you to make you write something in a place you didn't intend or expect unless the application is incredibly carefully written.

And by "incredibly" I mean beyond the scope of human endeavour :-).


Thank you!


I'm going to be giving a talk at SambaXP this year (it's virtual, so you only need to register to attend) explaining why IMHO symlinks have utterly broken the POSIX filesystem API, making it impossible for application developers to write secure applications.

https://sambaxp.org/

It's not just a whine, I'm also going to make some suggestions for fixing it :-).


I’d love an OS that didn’t even support those.


I think that was one of Plan9's selling points.


DOS? Non-NT Windows?


Well, depending on exactly how much this blocks, this could get pretty awkward -- typing 'git log' in a repo owned by someone else can be awfully handy, even if file system permissions block changing it at all, and putting together a list of all places you might want to do this in advance could get pretty awkward. (Not running hooks, or allowing operations that would trigger them, from non-owned directories would preserve some of this usage, and still at least mitigate the dangerous cases somewhat.)

It's also not entirely clear to me what this does to site-wide shared remotes, though I suppose if they can be listed in system config, it's at least not a per-user hassle.


I do this quite often, actually. I have my NixOS system config officially stored in /etc/nixos/ and owned by root. I have a clone that lives in my home directory for WIP changes, but builds always run out of the official copy. Sometimes it’s convenient to quickly run some read-only commands directly in the official copy


Ultimately you own that repository so just set it as safe in your config. Similarly if you are looking at a coworker’s repository then you can probably trust them. It’s only when you start sharing a computer with people that you don’t really know that you have a problem.


Yes I’ve run “git status”, “git log” and “git diff” on other people’s repo’s plenty of times to help debug things, so it’d be sad to see this stop working. It seems some basic readonly operations should still be supported.


It doesn't even have to be a repo. I do git diff --no-index all the time on arbitrary files, because it's simply configured the way I want, unlike some default diff or whatever command, that I don't even remember how to properly use.


Maybe the pragmatic way from now on is to clone the repo, even if it's on the local machine.


Cloning will miss any uncommitted local edits (which you might want to have run "git status" to detect)...


Yeah, it does seem to be very limiting. The previous behavior of allowing it can be configured by safe.directory[0].

In lack of CI I still tend to do builds in a container from a separate local user with read-only permissions. I wish they'd add an option to allow it when the user is in the owner group, which could be a decent compromise.

[0]: https://git-scm.com/docs/git-config/2.35.2#Documentation/git...
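
For anyone who needs it, opting a specific path back in looks roughly like this (the path is just an example):

  git config --global --add safe.directory /srv/shared/repo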


I kind of agree with you, but if you go to a university lab then you will likely find that the students are all in the same groups. You don’t really want to blindly trust the group that way.


Oh yeah, in such an environment that would not make sense. I'm saying it should be a (local/global) config option, not enabled by default.

EDIT: Or maybe even more general, trustedOwnerUsers / trustedOwnerGroups


That’s a good idea; you should implement it and send them a patch.


Question for Mac users. Apple installs git with its command line tools and is currently at version 2.32. Is it wise to install git via Homebrew so that you can upgrade faster? Or are there some benefits from apple-git?


I've always used homebrew for git.


No benefits that I’m aware of. I prefer having the latest greatest, so I’ve always used the homebrew tap.


I use the macports version. As far as I am aware, there are no advantages to using the Apple version.


Out of interest, any reason to select MacPorts over brew? Since I started using a Mac, brew seemed like the done thing.


I’m not aware of any benefit to using the system git, as a user.


Did the link get changed? I can't find anything of what anyone is talking about in this github.blog post.



Thank you, I was confused. I'm very curious if the people complaining about this change as being too paternalistic still feel that way after reading the full disclosure link.


Even after reading the full disclosure link, I'm pretty surprised to learn that a security boundary was intended here. I thought it was common knowledge that git did an uncontrolled search up the filesystem for a .git file, and it would never have occurred to me to run git on a machine where people I don't trust have write access.


I was vaguely aware that git would search for .git directories. I had no idea that "git status" would run commands from such a directory.


If you can create a .git directory above a victim's home directory, then you're root.

Or else, if you're not root, you're in a messed-up system. Whoever is root should go read some 40-year-old book on Unix about how it's supposed to be laid out.

This is not a genuine security vulnerability; though of course, it's good to fix it.

Here is how I would fix it. Forget about permissions and ownership entirely. There is a weaker, more powerful condition we can check. Ready?

Git should terminate if it is executed from a subdirectory of a git repo that contains no tracked files according to the first .git/ directory that it finds while ascending the file system.

If you're in a directory that contains no files that are tracked by the closest .git/ that can be found by walking up the stairs, then that directory has no relationship to that repo. Git should diagnose that and bail out. (It could allow files in that directory to be added to the index, but only with -f option to force it.)

If git finds a .git/ dir, and that repo's index shows that at least one item in, or below, your working directory is in that repo's index, it should go ahead and work with it, regardless of ownership.


The issue isn't specific to home directories. /tmp, for example.

Your suggestion may protect against accidents, but doesn't seem to me to do anything for deliberately malicious behavior.


Right, so someone could create a malicious /tmp/.git. You then go to /tmp/experiment to do something and run some git commands.

Easy fix: on boot, have the "rc" script create a root-owned /tmp/.git dummy file with r-------- permissions.
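
Roughly, and untested:

  # boot-time sketch: squat on the name so nobody else can create it
  touch /tmp/.git
  chown root:root /tmp/.git
  chmod 400 /tmp/.git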

Someone can also create a /tmp/foo/.git; but to be susceptible to that, you have to be under /tmp/foo. That's another user's directory. What are you doing in there? Serves you right.

If /tmp/foo is your own, and someone planted a .git into it, that's your problem too: you're creating material in /tmp that is accessible to others, which is a security no-no.

Probably, this should be fixed in the kernel: the kernel should not allow a regular user to create a hidden directory (i.e. name starting with ".") in /tmp. Or probably any hidden object.

Such a fix is more general: it fixes the issue for any git-like program that walks up the tree looking for a special dot directory or file, including all such programs not yet written.

The rule could be general, such that creating a hidden object in a directory is only allowed to the directory's owner, not just to anyone who has write permissions to the directory.

In other words, if multiple users have write access to a directory, such as /tmp, but any other kind of directory, then they are not allowed to perpetrate hidden objects on each other (both because those things don't show up under "ls" without "-a" and because programs find those and react to them).

In fact, I would go one step further and enforce the kernel rule that writing to an existing dot file is denied to anyone other than the owner of that file, regardless of its write permissions.


I shouldn’t ask too much of an open source project, etc. etc., but this sounds like something Git should fix themselves rather than just outright disabling. “I want to go into a directory and run git log” is kind of a simple thing to want to do and to not be able to do that sucks. It’s easy to pontificate on this forum but having a “safe” git that doesn’t automatically run hooks or whatever seems like the way forward here, and useful outside of even just a “I want my PS1 to work”.


Their solution sounds reasonable. I don't particularly get why this is even a new vulnerability? If someone manages to create a .git at the root, that means the whole filesystem is under version control and appropriate hooks should execute. Why would this be surprising behavior now? In any "multi user system" as the article says, this would need root access and such a user can do many other bad things if their intent is malicious. I feel like I'm missing something obvious?


I can run git init in /tmp.


Then don't run commands in PS1 that blindly execute arbitrary code in that directory.

Honestly, people haven't learned a single thing from Windows' autorun days.


> Then don't run commands in PS1 that blindly execute arbitrary code in that directory.

That's like telling people "be secure". Nobody expected that git commands would do that, and also they shouldn't do that.

Not that I think this is a good fix...


This is kind of why I’d want a git command that doesn’t blindly execute arbitrary code from the directory, as I mentioned above.


The intention of someone running `git log` or `git status` isn't to execute arbitrary code, it's to see a log or the changed files. Telling people "just don't run arbitrary untrusted code" is useless advice when the whole problem is that git runs arbitrary untrusted code in situations where most people wouldn't expect it to execute arbitrary code.

At least spend a couple seconds to think about what you're saying and how it relates to the issue at hand. You're being insufferable.


I mean, I don't have git status in PS1 anyway, but I woke up today not knowing that git status will run arbitrary commands. The documentation for git-status does not mention this possibility, like git-commit and git-pull do.


Hmmm. git status seems like a simple read-only command, but it can run a hook, which will change the result. So a default of no hooks might mean subtly or silently different results, depending on who runs the command, which can be a really annoying kind of breakage to track down. "Why didn't the build start? Why wasn't the change noticed? I can see it." I can imagine wanting to avoid that scenario. Painted themselves into a bit of a corner, I think. There's no such thing as a side-effect-free read-only git repo.

(A better fix would be to allow the command if there's no hooks, which does seem feasible, and only failing if it's actually asked to do something dangerous.)


Right, I get the perspective, I’m just saying that there’s a lot of usecases where I know I don’t want this (PS1, git directory in /tmp, SSH into a shared machine) where I just want simple git commands to work, and a “git --safe” or something would be really helpful. I’m not even sure what the alternative would be here for a lot of these cases, besides writing my own “dumb” porcelain…


Not really any sort of realistic alternative by this point but a distinction between 'programmatic git' where everything is explicit and 'interactive git' that has all the implicit config conveniences would have probably made some difference. It doesn't prevent the specific problem but provides space for less draconian fixes.


On the one hand I agree with you; on the other hand that would entail enumerating all possible unsafe configurations. In general when designing a security measure you never want to try to enumerate everything that could be unsafe, because there is always an attacker who is more clever than you are who will think of something you left off the list.


Yes, but I feel like the other thing you need to keep in mind here is that this is going to be a massive pain for a lot of people, and they might end up doing things that are substantially worse for security, like refusing to update their git.


It’s possible, but I doubt it. 99% of people use a personal computer with just a single user account on it (or they use a phone with no git client, so let’s just think about git users for now). With only one real user account on the machine they are not very likely to encounter this security measure.


Just about anybody that uses docker and mounts a volume is going to have multiple users accessing their files, even if there's only one real user.


Not exactly. Inside the docker container, the files you have bind-mounted will be owned by the same user who runs the git binary. You’ll never even notice the security check.


Today I'm sitting here staring at a 'fatal: unsafe repository warning' in my local workspace in docker and can tell you definitively that I have noticed.

This change broke a lot of workflows.


Git is a very conservative project, that they did it like this suggests that they had very good reason to. Nobody is perfect, but the git team has earned a lot of trust over the years. I plan to read up on it more and wait for the CVE to be clearly explained before trying to backseat it, myself.

No criticism intended.


I respect the git authors as well, and I feel like I understand their perspective. I’m just curious why they chose this balance instead.


What is a scenario where you’d be running git in the subdirectory of one owned by a malicious user? Unless a machine is badly configured and administrated, when would one user ever have authority of ownership over /home or /opt or /? And if they have sudo privileges well then they have the authority to do whatever they want. Is this only an issue because of some Windows idiom? I’m somewhat dubious.


I can think of shell prompt plus exploring /tmp, but the fix for this “vuln” doesn't address that issue and seems to be more of a problem with a prompt that automatically runs git in every directory.


Here is a quick fix to prevent system-wide exploits, salt to taste:

  $ grep GIT_CEILING_DIRECTORIES ~/.bashrc 
  export GIT_CEILING_DIRECTORIES=$HOME:/var/www
But malicious Git repos could still affect your user profile. You can harden that by putting all git repos in a sandbox, e.g.:

  export GIT_CEILING_DIRECTORIES=$HOME/sandbox


This "fix" breaks deployments where files are checked out as the root user and then chowned to an app-specific user. Any subsequent action as the root user will fail.

It seems they forgot to provide an exception for the root user or a way to disable this "feature" on a global level, instead of per-directory.


>This vulnerability affects users working on multi-user machines where a malicious actor could create a .git directory in a shared location above a victim’s current working directory

If a malicious actor has access to the filesystem, isn't it a bigger problem? I remember Raymond Chen recounted in his blog that Microsoft usually dismisses vulnerability reports that start with "to use the exploit, you must have access to the machine". As he likes to say, "the gates are already open". If you already have access to the machine and can create files outside of your home directory, what stops you from causing even greater havoc?


In the case of a multi-user machine, e.g. in a library, you expect there to be low privilege users with filesystem access. This bug introduces a way for them to do privilege escalation and potentially run code as root, which you did not intend.

Generally, you still want these additional protections even if you don't expect others to have access to a machine. Can't say if one or the other is a bigger problem. I think they are all components of having a secure system.


I mean, we have multiple user accounts for a reason. Maybe less so on windows, but on unix with its mainframe ancestry, local priv escalation definitely feels like a real bug (then again, on linux it would be super weird for / to be writeable by someone not root).


> If you already have access to the machine and can create files outside of your home directory, what stops you from causing even greater havoc?

These systems don't let you put files in other people's directories. You can only create things in a specific spot, and if that thing is a directory then you and only you can put files inside it. Sometimes the only thing you can make in that spot is a directory.

(Other users can access those files if you explicitly add them to the permissions, of course.)
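
That's the sticky bit on world-writable directories like /tmp doing its job, e.g. (output will vary):

  $ ls -ld /tmp
  drwxrwxrwt 20 root root 4096 Apr 13 09:00 /tmp

The trailing 't' means anyone can create entries in /tmp, but only an entry's owner (or root) can delete or rename it, and a directory you create there is owned by you.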


Network shares on corporate networks come to mind; they don’t need to be git repositories either (and presumably chowning all files to 1000:1000 would hit the large majority of Linux users even with this fix).


Will Ubuntu update to v2.35.2? My current install is using the elder v2.25.1:

  ubuntu@vpn1:$ git --version
  git version 2.25.1

  ubuntu@vpn1:$ cat /etc/os-release
  NAME="Ubuntu"
  VERSION="20.04.4 LTS (Focal Fossa)"
  ID=ubuntu
  ID_LIKE=debian
  PRETTY_NAME="Ubuntu 20.04.4 LTS"
  VERSION_ID="20.04"
  HOME_URL="https://www.ubuntu.com/"
  SUPPORT_URL="https://help.ubuntu.com/"
  BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
  PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
  VERSION_CODENAME=focal
  UBUNTU_CODENAME=focal


Same for Debian 11 bullseye, which is today on git 2.30.2. I'm not too worried; if someone besides me manages to create a /home/.git, then I'm already owned. But it's a bit surprising.


I assume the patch will be backported and there will be a point release .30.? including it, but bullseye will probably stick with 2.30.

However, you can use bullseye-backports to get 2.34.1 if you want: See https://packages.debian.org/git

Edit: None of the debian versions have the patch yet: https://security-tracker.debian.org/tracker/CVE-2022-24765


A fix has already been released: https://ubuntu.com/security/CVE-2022-24765


So does that mean git v2.25.1 is patched? That’s what my Ubuntu 20.04 is running now.
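
One way to check, since Ubuntu backports security fixes without bumping the upstream version string (commands assume a stock Ubuntu/Debian install):

  # changelog of the installed package
  $ zgrep -i 24765 /usr/share/doc/git/changelog.Debian.gz

  # changelog of the candidate package in the archive
  $ apt changelog git | grep -i 24765

If the CVE shows up in the installed changelog, your 2.25.1 build already carries the fix.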


Since the security bug is windows-specific, you could play it cool.


It's not windows-specific. It's just that the git-for-windows people are more receptive to the idea of this being a security bug.

The behavior is cross-platform.


CVE-2022-24765 is not Windows-specific.


Was this change discussed publicly prior to merge?

I think this is a big mistake. Build environments use separate users for security purposes. It's insane to decrease security for everyone by requiring a single user to do everything because some of your users want to have fancy terminal prompts.

At the very least, let users configure this at a per-user level.


Fixing RCE vulnerabilities isn't something that should be debated publicly.


Nitpick: This isn't an RCE. An attacker would need 1) write access to a /local/ directory that the target will navigate to in his shell, and 2) convince the target to execute arbitrary git hooks in every directory (or parent directory) he visits by adding git to his shell's PS prompt.

Besides, now that this security issue is patched, git devs should seek a proper solution that doesn't break git or decrease security for everyone else.


This isn't an RCE; you need to have control over the parent directory first, which usually implies some sort of admin privileges already.


This is just one way to fix the vulnerability. There are others, with different tradeoffs.


Shouldn't `safe_directory_cb` be checking the key parameter? It's ignoring it completely. So any unrelated config that has a directory in its value will also mark it as safe. Unless I'm misunderstanding something?


Maybe it's intended? If you specify a directory for something in your git config it sounds reasonable to assume you trust it.

That said, if it is intended, I'm surprised there isn't a comment mentioning that because it certainly looks like a bug.


I considered that too... but not sure. There's also the fact that it'll reset is_safe to 0 on each config line... which is likely not intended. Seems like a rushed patch. Unless I'm seriously misunderstanding how that read_very_early_config function works (it calls the cb for each key-value pair in the config, I'm assuming).


It does. In fact every time that function is called it completely reparses all the config files. That seems like a really weird choice to me, since there are dozens of functions that do this to check individual settings, but I guess in practice it’s not really that slow.


It's using it here on line 1042?

     git_config_pathname(&interpolated, key, value)


Yes, but that's a general-use function; it won't check for safe.directory inside of it.


Right, all that does is turn paths like ~/foo into /home/<user>/foo. I’ve no idea why it even takes the key as an argument.


It's so it can print an error referring to the key if there was a problem parsing.


Oh, that makes sense.


That does seem like a mistake, upon a cursory examination.


I submitted a PR on github https://github.com/git/git/pull/1235. Supposedly there's a bot who will send an email, but I don't have permissions to use it... mhm...


Fun :)

Going to send an email the old-fashioned way?


Gonna beg in the irc channel for git so they give me access to that bot. God forbid I have to format a patch the way they want me to


You probably know this, but for anyone else following this thread: the bot is https://gitgitgadget.github.io/.


lol :)


The more I think about it, the more I think this is the right call. The only alternative would be something like falling back to running with no hooks and printing a warning to stderr indicating that there are disabled hooks. Actions that modify that repository should also be disabled in that case. Then there should be a command like 'git hooks trust' that adds the directory to the user's list of trusted folders.
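
To make that concrete, the flow could look something like this ('git hooks trust' is purely hypothetical; no such command exists today):

  $ cd /srv/shared/repo
  $ git status
  warning: hooks and embedded config ignored: directory not in your trust list
  $ git hooks trust .      # hypothetical: record this directory as trusted
  $ git status             # behaves normally again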


And .... there go probably tens of thousands of person-hours of human effort due to fixing this across huge numbers of systems.

It's fascinating to me that we have people out there just casually making these kinds of decisions, with enormous cost implications, and barely any thought to the downstream effects. Then meanwhile, we need approval in our org to claim a $30 taxi voucher as an expense.


Some nature documentary explained why ants easily fall prey to anteaters. Why don't they evolve more defenses? It turns out the risk from anteaters is so low that additional defenses are not worth the cost of e.g. slower foraging. Better to count on the law of large numbers, spread the risk over all ants, and get on with your life until you get unlucky.

I think the same thing is going on with non-security reviewed software. The additional cost of thoroughly reviewing everything is so high and software is generally so good, that we can rely on word of mouth and past project behavior to make guesses that make economical sense.

So git has proven a good enough steward in the past, and society swallows the cost of this update. Organizations trusting humans with $30 at scale has proven not to work out, so red tape grows like mushrooms in that region. And log4j2 might be an example of all of us stopping for a moment and evaluating if the cost is still worth it.


It boggles the mind that someone thinks security decisions like these are made casually.

Staying sane as an open source maintainer means ignoring such thanklessness as best you can.


Perhaps "casual" is the wrong word, since it does have a pejorative implication. Put more neutrally, what I find fascinating is the asymmetry between the weight of process applied and the impact. I am sure the individuals concerned thought very hard about it (I could not find the discussion on the mailing list, but from what I can see it may have been kept off the public list due to the security aspect).


Isn’t git Hamano’s day job?


Are you talking about stuff getting broken by this fix, or the patching effort required?

This is relatively low risk so I would expect the mitigation to consist of "let your existing automation update it".


No, much more thinking of broken CI systems and other deployment scenarios where a shared user setup is presumed.


Related: https://blog.sonarsource.com/securing-developer-tools-git-in... ("Securing Developer Tools: Git Integrations")


Ah, so this is why my simple github release action randomly stopped working today... awesome


Surprise!


There's something super jarring about the format of this blog post. I think my brain has been trained to glaze over whenever it runs into corporate abstract art at the top of a page.


Interestingly, if you're on Windows, Chocolatey is the better package manager to use.

Microsoft's own package manager Winget only has v2.34.1 right now.

Chocolatey https://community.chocolatey.org/packages/git#versionhistory

Winget https://winget.run/pkg/Git/Git


winget.run isn't up to date; I do see 2.35.2 by running the winget CLI (note that `winget upgrade git` will run the uninstaller first).

    $ winget show git.git

    Found Git [Git.Git]
    Version: 2.35.2
    Publisher: The Git Development Community
    Publisher Url: https://gitforwindows.org
    Publisher Support Url: https://github.com/git-for-windows/git/issues
    Author: Johannes Schindelin
    Moniker: git
    Description: Git for Windows focuses on offering a lightweight, native set of tools that bring the full feature set of the Git SCM to Windows while providing appropriate user interfaces for experienced Git users and novices alike.
    Homepage: https://gitforwindows.org
    License: GNU General Public License version 2
    License Url: https://raw.githubusercontent.com/git-for-windows/git/main/COPYING
    Copyright: Copyright (C) 1989, 1991 Free Software Foundation, Inc.
    Copyright Url: https://raw.githubusercontent.com/git-for-windows/git/main/COPYING
    Installer:
    Type: inno
    Download Url: https://github.com/git-for-windows/git/releases/download/v2.35.2.windows.1/Git-2.35.2-64-bit.exe
    SHA256: 8d33512f097e79adf7910d917653e630b3a4446b25fe258f6c3a21bdbde410ca


Yes, you're quite right.

For future reference the GitHub manifest page seems to be the better choice:

https://github.com/microsoft/winget-pkgs/tree/master/manifes...


Yes, that's always good to check, but given the size of the repository (the number of directories and files is just massive!) it can be really annoying to navigate. It's often faster to just run the winget CLI somewhere.

winget.run should add a link to the manifest directory, that would be useful.


When people ask me why I don't have a "git-aware" PS1, I shall point them to this CVE in the future.


Would be pretty incredible if the git branch command had a vulnerability.

On the other hand, having a git aware PS1 would also immediately alert you to the fact that a user had created a top level .git folder, thereby allowing you to prevent the first cve here.


> having a git aware PS1 would also immediately alert you to the fact that a user had created a top level .git folder,

And to the fact that someone other than me had write access to my disk, in which case git is probably the least of my worries.


Doesn’t Homebrew typically get set up as a different user? How’s that going to work?


Homebrew typically changes the ownership of the directory to the current account.
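
Easy to check on your own machine (the prefix differs between Intel and Apple Silicon installs; the output below is illustrative):

  $ ls -ld "$(brew --prefix)"
  drwxr-xr-x 30 youruser admin 960 Apr 13 09:00 /opt/homebrew

If the owner shown is the account you run git and brew as, the new ownership check never triggers for repos under that prefix.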


Homebrew's mechanisms are inherently "fail-unsafe". It really has no excuse.


On install or execution? What happens if there is more than 1 user on the box?


This feels like a thing that should be introduced default-off, letting users opt in to it first, and only later flipping the default, rather than suddenly breaking things on upgrade for anyone sharing a git config between systems that don't upgrade simultaneously.


I feel like this doesn't have much to do with Git specifically. Seems to me like PS1 needs to avoid accessing files that aren't owned by the current user. Easier said than done though...


That’s partly true, but it is more relevant to Git than to other things because there are malicious ways to configure a git repository that will end up running programs written by someone else under your user id.


Why are you setting your ps1 to run arbitrary code in any directory? Don’t do that!


It’s one step more indirect than that. If I want my prompt to tell me what branch is checked out, I can have it include the output of running `git branch`, for example. Unbeknownst to me, running `git branch` can cause git to run programs specified in the git repository’s config file. It’s not normally a problem of course, because I am using my own computer with all of my own git repositories. But it can be a problem if the computer is shared with others.
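
For context, a git-aware prompt usually boils down to something like this (a generic example, not any particular dotfile):

  # re-run on every prompt redraw, in whatever directory you're in
  parse_git_branch() {
    git branch --show-current 2>/dev/null
  }
  PS1='\u@\h \w $(parse_git_branch)\$ '

Every prompt redraw re-runs git in whatever directory you happen to be in, which is how a planted .git above you gets picked up.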


> `git branch` can cause git to run programs specified in the git repository’s config file

This is the real vulnerability. Why is git branch running random external programs?


I think `git branch` here wasn't intended to be taken literally. If anything, you'd use a plumbing command to get the branch, not a porcelain command. I think they just meant that some commands that might be run might in turn run programs specified in the config file (either now or in the future).


So what's a good example of a git command that you might reasonably run as part of your $PS1 that runs an external program? It's not like people have git push or git commit in their $PS1.


`git status`. This essentially has a "prompt" mode with `--porcelain`, which can even print branch and stash state, so it covers all of the information a prompt needs. Prompts were mentioned when the v2 porcelain format was added[0].

It will call the fsmonitor hook configured in core.fsmonitor - this is supposed to speed up figuring out which files to check.

The official git-prompt.sh calls `git diff`, which will do the same[1].

[0]: https://github.com/git/git/commit/00d27937bf0348e7da615f04b6... [1]: https://github.com/git/git/blob/11cfe552610386954886543f5de8...
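
Concretely, the kind of planted config being worried about looks roughly like this (illustrative paths; core.fsmonitor is one commonly cited vector):

  # attacker, as an unprivileged user on a shared box:
  git init /tmp                                      # creates /tmp/.git
  git -C /tmp config core.fsmonitor /tmp/.payload.sh

  # victim: any git command run from a directory under /tmp (including the
  # `git status`/`git diff` a fancy prompt runs) would, before 2.35.2,
  # discover /tmp/.git and run the configured command as the victim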


Awesome, thanks for that!


Don’t make tools use processes for “plug-in behavior”. Do one thing and do it well doesn’t really appeal to me to begin with but “let the first thing do the next thing on its own” is definitely a bastardization of that idea as well. Git has that Unix disease where the go-to method of getting anything user-configurable done with one program is launching another program. I’d much rather use tools that use huge convoluted script languages or good plug-in apis than tools that duct tape together with exit codes.


> I’d much rather use tools that use huge convoluted script languages or good plug-in apis than tools that duct tape together with exit codes.

The Unix "plug-in API" is pipes and exec and "everything is a file (descriptor)". A "good plug-in API" that doesn't support anything written outside the "huge convoluted script language" is not a plug-in API, it's an internal API of the "convoluted script language".

"Do one thing and do it well doesn’t really appeal to me to begin with" means that you don't like the Unix model in general.


> means that you don't like the Unix model in general.

Absolutely correct. While it does have benefits in some situations, writing cross platform command line tools isn’t a place where it shines.


> "Do one thing and do it well doesn’t really appeal to me to begin with" means that you don't like the Unix model in general.

Yes? Some people don’t like it.


Modular design is bad for profit, amirite


Deep inside some large enterprise company:

Jr Engineer: "Hey, I know we've always managed our little dotnet application via email and shared-network-drive, but I've been reading about a thing called "git" that we should probably use."

Sr Engineer: "Change is scary and bad, also we are not a software company. We're not going to learn some newfangled whatsit. Just email me the .vba files when you want me to review the changes with the one copy of visual studio 2008 that our team has access to."

Jr Engineer: "C'mon, give it a chance! We can leave everything the way it's always been, but have better tracking of changes. Remember that time Bruno hard-coded the tool to point to the C: drive? Git would let us just undo that, instead of having to search our emails for the last-most-recent version."

Sr Engineer: "Ok fine, I've got 10 minutes, show me."

Jr Engineer: "Ahh! Well I just got it installed, so let me go to the network drive... and then I think I have to git init our project folder... huh? Let me just... Maybe if I..."

Sr Engineer: "Time's up! Looks like this "git" thing isn't compatible with our setup after all. Those modern dev types never make anything that works in a real enterprise environment."


The premise of this story is one I lived. I was a web dev intern for a local government office and they actually emailed each other zips of dotnet apps.

The only difference is that my git pitch went really well and they promised they would start using it. They never started using it.


Local government software dev is making half of what they could make doing barely anything at a private sector operation. Not surprising there’s an IQ problem.


Hey, don't chalk it up to IQ. I've met plenty of people in government software dev who work hard trying to make a dent in their career because they came from backgrounds which private sector operations ignore.

The person you're talking about totally exists, but they run the department.


We joke, but actually the legacy team I took over a few years ago used to do this (email each other stuff, and versioning/branching was basically copying folders around). I had to drag them kicking and screaming into git (and self-hosted GitLab - thanks GitLab; no, seriously, I do really appreciate it), and now they wonder how they ever survived.


Time to preemptively post this to StackOverflow and self-solve the question for some of that sweet sweet StackOverflow rep. jk


I can relate to both of the engineers in this puppet theater.


It mildly bugs me that things like this are reported as "Git Security Vuln".

CVE-12345: insecure use of consumer grade operating system in multi-user role when expecting any form of real isolation

CVE-12346: faulty system administration techniques, including running anything as SYSTEM, can cause things to run with elevated privileges

CVE-12347: failure to secure root (C:) and important system directories can allow malicious actors to access them. This can be exploited to trick other parts of the system into doing ... things.

I don't mind patching git for windows to workaround these things, but sheesh, the root cause of both of these is people using Windows incorrectly/insecurely.


> people using Windows incorrectly/insecurely.

Let me fix that for you:

> people using Windows.


This is silly. Fix PS1; I can’t trust all repos I clone. I also want cross-user access to git log/blame, etc.


Can't you? What kind of foreign code can be executed that way?

Cloning will not copy .git/hooks/ nor .git/config, which is the main danger here, I guess. But I'd sure want to hear about other risks.

Maybe an env variable to disable hooks execution and .git/config parsing would be nice to have for safer use of git repositories you didn't clone yourself as part of shell prompt customizations.


Git clone doesn’t mean I’m blindly executing the code inside it.


You don't have to trust repos you clone, if I understand correctly. You just need to trust ones you're given in other ways. The difference is, clone won't let you set up arbitrary config (or malformed internal data or etc.)


Can you store a .git/config filepath in a git repository, either via the cli or manually hacking the repo data files?


I hope not!

But, as it was pointed out in <https://news.ycombinator.com/item?id=31010522>, you can have nested malicious bare git repos.


No.

I think long ago there may have been some bug that allowed it with a hacked repo, so it's not a ridiculous thing to consider, but no, git won't let you, and any way you could would be a major CVE.


Also, looks like you didn't read the linked page. The first thing there is a git config option to disable this check on select directories.


Obviously adding every single repo you will ever work with into the config is not workable.


This feels really pointless. If I can create /.git, I have root already. Any other parent-directory-escalation that I can think of would be so obscure as to be not worth caring about, and would also probably require having access to an already-higher-privilege account.

And of course the unspoken: almost nobody uses git on multi-user systems, and when they do, most of the time every single user already has sudo.


This certainly came as a surprise to my team today.

We operate some number of repositories and the majority of them use https://github.com/actions-ecosystem/action-get-latest-tag - or more specifically, a fork of that repo which more or less works the same way.

Midday today our CI/CD started failing. We must have hit this so soon because the `apk add git` in that Dockerfile grabbed the new git version. Evidently the SID that ultimately executed the git command inside the included actions' dockerfile was not the same as the one that owned `/github/workspace` on the runner.

We were able to patch around using the new `safe.directory` option, but I'm curious to see if there's more fallout since CI/CD environments in particular create this sort of shared repository.
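
For reference, the `safe.directory` workaround in a CI job boils down to one extra command before anything else touches the repo (the exact path depends on the runner; both forms below are illustrative):

  # run once in the job before any other git command
  git config --global --add safe.directory "$GITHUB_WORKSPACE"
  # or, if the action runs in its own container, the mounted path directly:
  git config --global --add safe.directory /github/workspace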


This is why I pin all dependencies in CI/CD.


Fuck. Now the security guys are going to break all my shit.


Just run your git checkouts in a container and then link the volume to other containers! /s



