To be clear on what this is, it allows you to mount a git repo and use writes to the filesystem to autocommit and push to the remote, effectively using git and a server to synchronize directories between remote hosts.
That is not the same thing I would think of as a version-controlled filesystem. That would be something more like the ClearCase MultiVersion Filesystem, which lets you define versioned views of an entire filesystem. That gives you the holy grail: versioning, snapshotting, backup, restore, and synchronization of entire systems, with equal treatment of text and binary files, all totally transparent to any higher-level tooling that reads and writes those files. Developers have been trying to reinvent it for 30 years, because ClearCase is proprietary and very expensive.
I've done something that sounds similar with bash scripts and systemd path units. Every time I modify a configuration file from any of a number of paths I'm interested in versioning, I get a prompt to supply an optional commit message, and the change is automatically committed and pushed to remote. It's not the whole file system, but the commit messages on configuration file changes have come in handy at least once.
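A minimal sketch of that setup, assuming the watched directory is itself a git repo; the unit names and paths here are made up, and the commit-message prompt is left out:

    # /etc/systemd/system/confwatch.path (hypothetical name)
    [Path]
    PathChanged=/etc/nginx
    Unit=confwatch.service

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/confwatch.service (hypothetical name)
    [Service]
    Type=oneshot
    WorkingDirectory=/etc/nginx
    ExecStart=/usr/bin/git add -A
    # leading '-' tells systemd to ignore a nonzero exit
    # (e.g. when there is nothing to commit)
    ExecStart=-/usr/bin/git commit -m "auto: config change"
    ExecStart=/usr/bin/git push origin master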
I used ClearCase for many years. It was a good version control system. It had a certain amount of complexity associated with it, but I thought it was fine and worked well.
I assume it’s still sold and supported, but I haven’t checked on it in ten years.
I used it at a global telecoms company about 15 years ago. We had all the bells and whistles (dynamic snapshots, FS integration, etc.) and I found the experience very underwhelming: clunky and brittle, and it never felt fully transparent. For all that, we still had to do a whole heap of training to learn to use it. Subversion, when I came to it, was a step up, but still felt a little brittle, and they kept changing things between releases. For all its warts, git is simply the best tool in common use: fast, reliable, and probably about as hard to learn as any of the other options if you're just sticking to the basic use cases.
I think this is a great idea, but I would want to see it supported by the main Git maintainers, i.e. integrated into the Git mainline. That probably wouldn't happen: it's written in Python, and Git is C, so there are a lot of structural issues.
I really like the idea of adding the work of non-tech team members (Graphic Design, Localization, Marketing, etc.) directly into Git (as opposed to requiring the devs to act as “gatekeepers”).
My main concern would be that this could end up “ballooning” a Git repo, and I’m not sure if this addresses the existing LFS issues, which would reduce its utility for some use cases (like media assets).
There's a reason that some game studios still use Perforce (DISCLAIMER: I was a teenage Perforcer; I don't miss it).
Git ships (usually; technically it depends on the package, I suppose) with 'contrib' stuff in Perl at least (e.g. diff-highlight).
I think a bigger barrier, if they even considered it, would be the FUSE dependency. I mean, of course it uses FUSE, but that makes it Linux and (with a bit of pain) macOS only.
git is a clear, simple file format (with a horrible user space) which makes sense as the back end of far more things than it is used for. I feel like integrating this into mainline is a little bit like integrating a visual database design tool into a database server. They're complementary, but different.
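To the point about the format being simple: the plumbing commands let you poke at it by hand. For example, writing and then reading back a blob:

    $ echo 'hello' | git hash-object -w --stdin
    ce013625030ba8dba906f756967f9e9ca394464a
    $ git cat-file -p ce01362
    hello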
Is the intent to turn Git into something like JetBrains's Local History, an unbranched log of all changes on disk without commit messages? I don't think it can replace human-curated Git histories, but it could be useful as a safety net to avoid losing uncommitted code. You probably wouldn't want to put a build directory in gitfs without a .gitignore rule, though.
I don't know why most people are assuming this is mostly for enabling non-technical users to write to Git repositories.
A popular setup (so popular it is, or was, the recommended way to set up Saltstack's configuration) is to use it to basically mount an autosynced folder from a central location, with the benefit of revision history at that location.
IMHO it's a much better use case, since you can't create conflicts and it won't result in poor (autocommitted) git history.
I'd be curious how well this performs with files that are changed often, and if the disk space needed to store snapshots of all the changes would grow a lot over time.
IIRC git stores whole copies at first (loose objects), so interactive performance is great. It later delta-compresses sets of objects into a pack, so long-term space performance is great too; you can watch this with the commands below.
It's like the perfect partner - at least, from afar.
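Nothing exotic is needed to see it in any repo:

    git count-objects -v   # 'count' = loose objects, 'in-pack' = packed ones
    git gc                 # repacks loose objects into a delta-compressed pack
    git count-objects -v   # loose count drops; 'in-pack' grows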
I like the idea - does it support .gitignore though? I wasn't able to find anything in the documentation, and the GitHub issues seem to be confusing in that regard. If it doesn't, it might be hard to use with tools that create temporary or output files in the directory, such as compilers, LaTeX, some editors, etc.
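Assuming it does honor .gitignore the way plain git does, something like this would keep LaTeX byproducts out of the auto-commits:

    # hypothetical .gitignore for a LaTeX project
    *.aux
    *.log
    *.toc
    *.synctex.gz
    build/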
"however git would" is by asking the user what to do. This automatically pushes, even if the push isn't clean it sounds like. It's unclear to me exactly what that means, `push -f` ? A pull with some options first if upstream can't be fast-forwarded? Doesn't sound very...safe.
This looks great! But since a filesystem doesn't need most of git's features (decentralization, merging, etc.), how does this compare to ZFS or btrfs, which allow filesystem "snapshots" to be stored and restored quickly, similar to how git works?
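For reference, both do this with copy-on-write at the filesystem level (the pool, dataset, and path names here are made up):

    # ZFS: snapshot a dataset, then roll it back
    zfs snapshot tank/home@before-upgrade
    zfs rollback tank/home@before-upgrade

    # btrfs: read-only snapshot of a subvolume
    btrfs subvolume snapshot -r /home /snapshots/home-before-upgrade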
It's not meant for version controlling whole existing file systems. It's for mounting git repos in the file system so that people can modify files in the repo and have them automatically committed, at least from what I understand of the readme.
I'm curious what behavior you'd expect from a git file system then? For devs and commits, understood, but for time-machine-like history of directories and the like, it seems useful for many folks.
> I'm curious what behavior you'd expect from a git file system then?
I never thought about or considered a "git file system" before today. While a neat thing conceptually and even technically, I can't imagine a situation where I would want a git file system. I never want to auto-commit to the remote repo. I want to make sure my local changes aren't complete shit before committing to the remote. For me this is a solution to a problem that I don't have.
When I clicked on "arguments" I was half-expecting it to be arguments like "--verbose" and "--noconfirm" and half-expecting it to be arguments like "why WOULDN'T I do this?" and "no this is actually a GREAT idea because...".
That said, this is pretty neat and I can see a couple use cases for it. If this ran on Windows, there are more than a few programs' worth of config directories that I'd love to use it to handle.
I've always thought modern OSes should have an on-machine document management system. If vendors implemented this in their office applications, we would nearly be there.
Apple has had a similar feature for years in its OS. Its office suite makes use of it. https://www.makeuseof.com/tag/recover-word-pages-mac-documen...: “Every time you save changes to a document, iWork archives a copy that you can recover at a later date.”
Time Machine and Dropbox (only in the paid version, I think) also keep copies, but (important for documents that consist of many files on disk) can't really know which files 'belong together' in a series of file writes.
Both, of course, also potentially are data leaks waiting to happen. Not only can’t you be sure that a Save overwrites an old document, you can’t even be sure that the program you’re using is giving the system that opportunity. On the plus side, these make it impossible to accidentally send out a file with (partial) content from an old version.
NixOS can also move between system generations seamlessly. The approach is entirely different, though: the system is fully defined by its config files and is recreated from scratch each time they change (except /home and /var, mostly).
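Moving between generations is a one-liner (the profile path below is the standard system profile):

    # list system generations
    nix-env --list-generations --profile /nix/var/nix/profiles/system
    # switch back to the previous generation
    nixos-rebuild switch --rollback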
VFS for Git solves the issue of gigantic bloated monorepos used by thousands of devs, making sure users efficiently download only what they need.
This is basically a Git checkout with an autocommit feature, making sure your grandma can check the grammar in your thesis without you teaching her how to Git.
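For what it's worth, mainline Git has since grown partial clone and sparse checkout, which attack the same monorepo problem without a filesystem driver (the repo URL and subdirectory here are hypothetical):

    # fetch commits and trees, but defer blobs until they're needed
    git clone --filter=blob:none --no-checkout https://example.com/monorepo.git
    cd monorepo
    git sparse-checkout init --cone
    git sparse-checkout set services/frontend
    git checkout main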
Clicking the link, I was kinda hoping it was some version of VFS for Git, but for operating systems other than Windows. Kinda bummed that it wasn't; this seems kinda pointless. Microsoft is (allegedly) working on macOS and Linux drivers for that thing, but it's been a while now. I hope it's going to become an actual thing one day, it's such a cool idea.
I understand that people mostly read titles. It was a bit funny, though, that someone apparently managed to read neither the posted site nor the link in the comment they replied to.
It's different from git-annex in that it's using git itself (git-annex just uses git to track metadata/hashes to facilitate large files), but there's a similarity to git-annex's 'Git-annex Assistant' in that "any subsequent changes made to the files will be automatically committed to the remote".
From brief experience with git-annex assistant, I've found having things automatically 'synced' to be confusing and prone to issues when doing things manually somewhere else. I think its real power may be in evolving its user interface to be pervasive within file browsers; projects like https://github.com/andrewringler/git-annex-turtle are an example.
Sounds like it's intended for non-git-users to work on an existing git repo - editing docs, graphics, whatever and pretending it's just a bunch of files with no git - rather than particularly for a general purpose filesystem with git as automated backup / version history.