
Is Git Irreplaceable? (2019) - cnst
https://fossil-scm.org/forum/forumpost/b251b6e48e
======
comex
Git's biggest flaw is that it doesn't scale. If a new system can fix that
without sacrificing any of Git's benefits, I think it can topple Git.

It's ironic that Git was popularized in the same era as monorepos, yet Git is
a poor fit for monorepos. There have been some attempts to work around this.
Google's `repo` command is a wrapper around Git that treats a set of smaller
repos like one big one, but it's a (very) leaky abstraction. Microsoft's GVFS
is a promising attempt to truly scale Git to giant repos, but it's developed
as an addon rather than a core part of Git, and so far it only works on
Windows (with macOS support in development). GVFS arguably has the potential
to become a ubiquitous part of the Git experience someday... but it probably
won't.

Git also has trouble with large files. The situation is better these days, as
most people have seemingly standardized on git-lfs (over its older competitor
git-annex), and it works pretty well. Nevertheless, it feels like a hack that
"large" files have to be managed using a completely different system from
normal files, one which (again) is not a core part of Git.
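For anyone who hasn't wired it up: the basic git-lfs setup is only a few commands, which also illustrates the bolted-on feeling. (This is a sketch; it assumes git-lfs is installed, and the file patterns are just examples.)

```shell
# Install the LFS hooks into the current repository (one-time).
git lfs install

# Track binary asset types. This writes rules like
#   *.psd filter=lfs diff=lfs merge=lfs -text
# into .gitattributes; Git then versions a small pointer file while
# the real bytes go to a separate LFS object store.
git lfs track "*.psd" "*.wav"

# The tracking rules are committed like any other file.
git add .gitattributes
git commit -m "Track large binary assets with git-lfs"
```

Note that the "large" files never actually live in Git's object database — exactly the parallel-system complaint above.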

There exist version control systems that do scale well to large repos and
large files, but all the ones I've heard of have other disadvantages compared
to Git. For example, they're not decentralized, or they're not as lightning-
fast as Git is in smaller repos, or they're harder to use. That's why I think
there's room for a future competitor!

(Fossil is not that competitor. From what I've heard, it neither scales well
nor matches Git in performance for small repos, unfortunately.)

~~~
kemitche
I disagree that Git's biggest flaw is its lack of scalability. Cases where git
needs to scale tend to be isolated to companies that have the manpower to
build a finely-tuned replacement (see: MS, Google).

Git's flaws are primarily in usability/UX. But I think for its purpose,
functionality is far more important than a perfect UX. I'm perfectly happy
knowing I might have to Google how to do something in Git as long as I can
feel confident that Git will have the power to do whatever it is I'm trying to
do. A competitor would need to do what git does as well as git does it, with a
UX that is not just marginally better but categorically better, to unseat git.
(Marginally better isn't strong enough to overcome incumbent use cases)

And for the record: I think git-lfs's issues are primarily usability issues,
plus some needed tech improvements. The tech improvements will come if there's
enough demand, and as I mentioned, the usability problems are more annoyances
than actual problems.

~~~
Ace17
I work for a 40-people game studio.

A major limitation of git is how it deals with many "big" (~10 MB) binary
files (3D models, textures, sounds, etc.).

We ended up developing our own layer over git, and we're very happy with it;
even git-lfs can't provide similar benefits. This technique seems to be
commonplace among game studios (e.g. Naughty Dog, Bungie), so git certainly
has room for improvement here.

~~~
taftster
Do you have tools that let you work with diffs of your binary file changes? Or
does a change simply replace all the bytes?

I'd argue that if it's the latter, git was never the right choice to begin
with. You don't really want to record a full 10MB of data every time you
change one pixel in your texture or one blip in your sound, right?

So I don't know if this is a "major limitation" of git per se. Not saying
there's a better solution off-the-shelf (you're obviously happy with your home
grown). But this was probably never a realistic use for git in the first
place.

~~~
MaulingMonkey
While I can't speak for the person you're replying to, the technology at least
exists. Binary diffs are sometimes used to distribute game updates, where
you're saving on bandwidth for thousands if not millions of players - which
costs enough $$$ to actually be worth optimizing for. On the other hand,
between simpler designs and content _encryption_ sometimes being at odds with
content _compression_, just sending the full 10MB is also common. For a VCS,
I'd probably be happy enough to just have storage compression, using any of
the standard tools on the combination.

> You don't really want to record a full 10MB of data every time you change
> one pixel in your texture or one blip in your sound, right?

Actual changes to content in a gamedev studio are very unlikely to be as small
as a single pixel. Changes to source code are unlikely to be as small as a
single character either. And we definitely _want_ a record of that 10MB.

We're willing to sacrifice some of our CI build history. Maybe only keeping
~weekly archives, or milestone/QAed builds after a while, of dozens or
hundreds of GB - and eventually getting rid of some of the really old ones.
Having an _exact binary copy_ of a build a bug was reported against can be
incredibly useful.

~~~
chrisweekly
"Having an exact binary copy of a build a bug was reported against can be
incredibly useful."

Sure, immutable build artifacts can be invaluable -- but aren't they also an
orthogonal concern?

~~~
MaulingMonkey
> Sure, immutable build artifacts can be invaluable -- but aren't they also an
> orthogonal concern?

One person's immutable build artifact is another person's vendored build
input.

It's common to vendor third party libraries by uploading their immutable build
artifacts (.dll, .so, .a, .lib, etc.) into your VCS, handling distribution,
and keeping track of which versions were used for any given build. It makes a
lot of sense if those third party libraries are slow to build, rarely
modified, and/or closed source - no sense wasting dev time forcing them to
rebuild it all from scratch.

The next logical step is to have a build server auto-upload said immutable
build artifacts into your VCS, for those third party libraries that you _do_
have source code for, when your VCS copy of said source is modified. Much more
secure and reproducible than having random devs do it.

And hey, if your build servers are already uploading build artifacts to VCS
for third party libraries, why not do so for your own _first party_ build
artifacts too? Tools devs spending most of their time in C# probably don't
need to spend hours rebuilding the accompanying C++ engine it interoperates
with from scratch, for example, so why not "vendor" the engine to improve
their iteration times?

This can lead to dozens of gigs of mostly identical immutable build artifacts
reuploaded into your VCS several times per day, with QA testing and then
integrating those build artifacts into other branches on top of that. The
occasional 10MB png is no longer noticeable by comparison.

~~~
Nullabillity
I can sympathize with the game assets argument, but this problem is just the
result of trying to stuff a square peg into a round hole.

Build artifact caching is a different problem from source control, with very
different requirements:

1. As you mentioned, the artifacts tend to get huge.

2. The cache needs to be easy to bypass. From your example, it needs to be
easy for the C++ engine devs to do builds like "the game but with the new
engine" to test out their changes.

3. The cache needs to be precise, so you don't end up with mystery errors
once it finally does trigger, or people wondering why their changes don't seem
to apply.

4. The builds need to be exactly reproducible, so you don't end up with some
critical package that only Steve Who Left 5 Years Ago could build (or Jenkins
Node 3 That Just Suffered A Critical HDD Failure).

Git either doesn't care about or fails spectacularly for each of those points.
In particular, #3 will be very confusing since there will be a delay between
the code push and the related build push.

Nix[0] solves #2 and #3 by caching build artifacts (both locally and
remotely[1][2][3]) based on code hashes and a dependency DAG (for each
subproject or build artifact, so changing subproject X won't trigger a rebuild
of unrelated subproject Y, but will rebuild Z that depends on X). It helps
with #4 by performing all builds in an isolated sandbox.

#1 is solved by evicting old artifacts, which is safe as long as you trust #4.
If the old artifact is needed again then it will be rebuilt for you
transparently. Currently this is done by evicting the oldest artifacts first,
but it could be an interesting project to add a cost/benefit bias here (how
long did it take to build this artifact, vs the amount of space it consumes?).

[0]: [https://builtwithnix.org/](https://builtwithnix.org/)

[1]: [https://nixos.wiki/wiki/Binary_Cache](https://nixos.wiki/wiki/Binary_Cache)

[2]: [https://nixos.org/nix/manual/#sec-sharing-packages](https://nixos.org/nix/manual/#sec-sharing-packages)

[3]: [https://cachix.org/](https://cachix.org/)

~~~
MaulingMonkey
Assets and code have mostly the same needs out of a version control system -
diffs, history, control over versions, etc. - and there are version control
systems which handle both adequately. That said, I'll grant git is quite
focused on code version control specifically - and I would not dream of trying
to scale assets into it directly.

> 1. As you mentioned, the artifacts tend to get huge.

This, admittedly, is more common with build artifacts. That said, I've hit
quota limits with autogenerated binding code on crates.io - several hundred
megs of code that still compresses down to double-digit megabytes, with cargo
compressing it better than I can manage with 7-zip.

And that's a small single person hobby project, not a google monorepository.

> 2. The cache needs to be easy to bypass

I need to bypass locally vendored source code frequently as well, to test
upstream patches etc.

> 3. The cache needs to be precise, so you don't end up with mystery errors
> once it finally does trigger, or people wondering why their changes don't
> seem to apply.

Also entirely true of source code.

> 4. The builds need to be exactly reproducible, so you don't end up with
> some critical package that only Steve Who Left 5 Years Ago could build (or
> Jenkins Node 3 That Just Suffered A Critical HDD Failure).

Enshrining built libs in VCS is an alternative way of tackling the problem.
You might not be able to reproduce that exact build bit-for-bit, thanks to who
knows what minor compiler updates have been forced upon you, but at least
you'll have the immutable original to reproduce bugs against.

> In particular, #3 will be very confusing since there will be a delay between
> the code push and the related build push.

It's already extremely common - in the name of build stability, including with
git - to protect a branch from direct push, and have CI generate and delay
committing a merge until it's verified the build goes green. By wonderful
coincidence, this is also well after CI has finished building those artifacts
- in fact, it's been running tests against those artifacts - so it can
atomically commit the source merge + binaries of said source merge all at
once. No delay between the two.

There are some caveats - gathering the binaries can be a pain for some CI
systems, or perhaps your build farm is underfunded and can only reasonably
build a subset of your build matrix before merging. Or perhaps the person
setting it up didn't think it through and has set things up such that code
reaches a branch that uses VCS libs before the built libs reach the same spot
in VCS - I'll admit I've experienced that, and it's horrible.

Nix, Incredibuild, etc. are wonderful alternatives to tackle the problem from
a different angle though.

------
tasuki
> I worry that Git might be the last mass-market DVCS within my lifetime.

The possibility of git being the last mass-market DVCS within my lifetime
leaves me with warm fuzzy feelings. Git is simple and elegant, though its
interface might not be.

~~~
alkonaut
I think it's simple and elegant as a data structure, when what people _need_
and _want_ is something that is (at least also) simple and elegant in its UX
and most importantly VERY simple and elegant for the 80/20 use cases.

For example, a typical question on Stack Overflow, "How do I find which branch
this branch was created from?", always has 10 smug answers saying "You can't,
because git doesn't really track that; branches are just references to
commits, and what about a) a detached head? b) what if you based it off an
intermediate branch and that branch was deleted? c) what if...

5 more answers go on to say "just use this alias!" [answer continues with a
200-character zsh alias that anyone on Windows, the most common desktop OS,
has no idea what to do with].

I don't want to write aliases. I usually don't want to consider the edge
cases. If I have two long-lived branches, version-1.0 and master, I want to
know whether my feature branch is based on master or version-1.0 - and it's an
absolute shitshow. Yes it's possible, but is it simple? Is it elegant? No.
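For what it's worth, here is one such fragile incantation for the two-branch case. The branch names are examples; it assumes the simple topology and falls over in exactly the edge cases listed above:

```shell
# Fork point of the feature branch relative to each long-lived branch.
base_master=$(git merge-base feature master)
base_v1=$(git merge-base feature version-1.0)

# If the fork point with master strictly descends from the fork point
# with version-1.0, the feature branch was (probably) cut from master.
if [ "$base_master" != "$base_v1" ] &&
   git merge-base --is-ancestor "$base_v1" "$base_master"; then
  echo "feature is based on master"
else
  echo "feature is based on version-1.0 (or the history is ambiguous)"
fi
```

Which rather proves the point: possible, but neither simple nor elegant.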

The 80/20 (or 99/1) use case is

- centralized workflow.

- "blessed" branches like master and long-lived release branches that should
ALWAYS show up as more important in history graphs.

- short-lived branches like feature branches that should always show up as
side tracks in history graphs.

Try to explain to an svn user why the git history for master looks like a
zigzag spiderweb just because you merged a few times between master and a few
feature branches. Not a single tool I know of draws a nice straight (svn-style
swimlane) history graph, because none of them consider branch importance -
even though it should be pretty simple to implement, simply by letting you
configure which set of branches is "important".

~~~
Ntrails
As a very basic git user, about once a month my local git repository will get
into a state I cannot fix. I cannot revert, cannot reset, cannot make it just
fucking be the same as origin/master. Usually I accidentally committed to
local master and then did a couple other things and it's just easier to blat
and re-clone than work out how to resolve.

Git is hard for idiots imo, and there are a lot of us

~~~
vvillena
> Usually I accidentally committed to local master and then did a couple other
> things

Create a new branch at your current commit (git checkout -b my-branch), delete
the diverged master branch (git branch -D master), and recreate it from the
remote (git fetch origin, then git checkout master). You'll end up with a
local branch holding your stray commits that you can merge, rebase or
cherry-pick, depending on what you want.
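A sketch of that kind of recovery, spelled out (branch names are examples; this variant recreates master via fetch + checkout):

```shell
# Park the stray commits on a new branch at the current HEAD.
git checkout -b my-branch

# Throw away the local master that diverged from the remote...
git branch -D master

# ...and recreate it from origin/master. With a single remote,
# "git checkout master" recreates the local branch from the
# remote-tracking branch and sets up tracking.
git fetch origin
git checkout master
```

An equivalent shortcut, if you'd rather not delete anything: git branch my-branch && git reset --hard origin/master, run while master is checked out.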

If you want to learn more about git in a practical way, there's an awesome
book called Git Recipes.

~~~
brigandish
That is not easier than "blat and re-clone" so I think you're proving their
point.

~~~
munmaek
If you’re just going to blat and reclone, then you may as well use a folder
system instead of git.

I see no reason git needs to be changed in order to cater to people who refuse
to read basic documentation or learn from their mistakes.

~~~
brigandish
> I see no reason git needs to be changed in order to cater to people who
> refuse to read basic documentation or learn from their mistakes.

In my opinion, solving problems and making improvements involves reducing
complexity, not defending it. Many people, including myself, have read the Git
docs and learnt about the underlying data structures etc etc and _still_ we
can make the claim that it could be better, in numerous ways.

Calling everyone feckless won't invalidate that.

~~~
munmaek
> we can make the claim that it could be better

I’m not disputing this. Of course git isn’t perfect.

What I’m against is changing git to cater to people who can’t read the manual
and make basic mistakes.

~~~
antris
> What I’m against is changing git to cater to people who can’t read the
> manual and make basic mistakes.

Why? Isn't software that doesn't require reading a manual and doesn't let the
user make irreversible mistakes considered good design?

~~~
munmaek
Not if it means reducing capabilities of the program in order to add bumper
guards.

I can’t think of any software that handles a complex problem and doesn’t have
a manual, documentation that serves as a manual, or a learning curve. Git is a
tool for developers, not casual users who want typical apps.

Again, you wouldn’t make an argument like this for a tool used by a plumber or
a mechanic. If a tool succinctly handles a problem, good! But using tools is
part of the profession; they have learning curves.

Most issues with git are PEBKAC issues because people refuse to spend 10
minutes of their life reading about a tool they may use for hundreds or
thousands of hours. I wouldn’t want to cater to those kinds of people.

~~~
antris
Software can cater to multiple types of uses at the same time. You can have a
learn-as-you-go experience while keeping your powerful tools that enable more
fine-tuned or complex tasks. Easy-to-use vs. powerful is a false dichotomy.

About the plumbing/mechanic analogy, I totally would make the same case!
Hammers and wrenches don't require a manual _and_ can be used for very complex
tasks, and that's exactly what makes them so well designed and popular. Few
people want their hammer to have more features, and if they do, they still
want to keep the good old hammer ready, because it's so easy and simple to
use.

Especially calling out PEBKAC (Problem Exists Between Keyboard And Chair) -
while even most expert git users, including the author himself, say the
interface could at least be made much better - makes me really suspicious that
you simply like feeling superior to other people because you know something
they don't, and you don't want to lose your "edge" if suddenly everyone can
use version control without resorting to manuals.

~~~
munmaek
> Easy-to-use vs. powerful is a false dichotomy.

iMovie vs Premiere/Final Cut. Final Cut X vs 7. GarageBand vs Pro Tools. Word
vs LaTeX. And so on. It's very difficult to design interfaces that are easy
enough for average users without impeding pros/power users.

> hammer

A hammer isn't a good comparison. Something like a multimeter is what I was
thinking of, etc. Git solves a significantly more complex problem than either
of these, though.

> including the author himself say the interface could at least be made much
> better

I don't disagree! Git's interface -could- be better. That has nothing to do
with my points above with regards to people refusing to read basic literature
about the tools they use, expecting them to just magically do everything for
them out of the box, "intuitively".

> feeling superior ... you don't want to lose your "edge"

This could not be further from the truth. I simply have no sympathy for people
who refuse to read the manual or an intro to using a tool, and then complain
about the tool being hard to use. Yeah.. it's hard because you didn't do any
reading! Git is actually really easy if you read about the model that it uses.
Most people don't need to venture out beyond ~5-6 subcommands, and even then
it's easy to learn new subcommands like cherrypick, rebase, etc.

Adobe Photoshop, as another example, has a learning curve, but that tool is
indispensable for professionally working on / editing images. (GIMP is also
good, but that's not in the scope of this discussion). A lot of beginner
issues are basically PEBKAC because they didn't read the manual. Same with Pro
Tools, or probably any other software used by industry professionals. They're
harder to use but what you can do with them (since they treat you like an
adult, instead of holding your hand and limiting you) is incomparable to the
output of apps designed for casual users.

------
ken
Every version control system that's become dominant in my lifetime became
popular because it fixed a major obvious flaw in the previous dominant system
(RCS, CVS, SVN).

From where I sit, Git has a couple obvious flaws, and I expect its successor
will be the one that fixes one of them. The most obvious (and probably
easiest) is the monorepo/polyrepo dichotomy.

~~~
hn_acc_1
I don't see any obvious flaws with Git.

The monorepo/polyrepo discussion exists apart from your choice of version
control system and has little to do with Git, as far as I can tell.

~~~
rhabarba
> I don't see any obvious flaws with Git.

Merge conflicts.

~~~
hn_acc_1
Merge conflicts are not so scary, and are an elegant way to handle distributed
changes with simultaneous edits to a single file.

If the conflict is huge, rebasing can help you by "playing" the commits from
one branch one at a time so the conflicts are smaller / easier to fix.
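A rough sketch of the two approaches (branch names are examples):

```shell
# Option 1: one big merge - every conflict surfaces in a single step.
git checkout feature
git merge master

# Option 2: replay the feature commits onto master one at a time, so
# each stop presents only that commit's (smaller) conflicts.
git checkout feature
git rebase master
# ...fix the conflicting files at each stop, then:
git add -A
git rebase --continue    # repeat until every commit is replayed
```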

At a previous job, a team was forced to use checkout-style VCS due to their
manager's unfounded fear of merge conflicts; I couldn't go in that office
without hearing one developer shout to another: "Hey, can you finish up and
check in that file so I can get started on my changes?"

~~~
hinkley
I’ve spent too much time helping others fix bad merges, and I still catch
myself making mistakes. There’s a lot of work that could be done for clarity
and error avoidance.

------
romwell
When people talk about killer features missing in Git, there is more beyond
the UX and mono/poly repo.

One thing is code review. There is no code review in Git.

What I expect in 2020 is that I should be able to specify reviewers for the
commit (which I pick out of a list of people who _can_ approve it). These
people should be able to leave comments on the commit. I should be able to
both respond to comments _and_ modify the code _before_ the commit gets
checked in. The history of the comments and changes should be maintained.

There is nothing in Git that supports this flow in a natural way.

A replacement for (or evolution of) Git can be a tool that would support this
code review flow from the get-go.

And yes, there are external tools for code review. But that all should be a
part of version control.

~~~
tick_tock_tick
I think it doesn't exist because there is no demand for that to be part of the
code's history. Nothing you've described sounds very useful after a month or
so.
~~~
pbh101
All this talk of Git-sympathetic code review tools and nobody has mentioned
Gerrit, which seems at least somewhat close to what is being described. Each
patchset of each review is its own Git ref and is often referenced in the
final commit.

Separately, I know many including myself who would love for code review
comments to more seamlessly be integrated into the code browsing experience.

~~~
hinkley
People also seem to be conflating review comments associated with lines of
code with an opinionated code review tool.

You can keep the banter with the code and the go/no-go decision separate, even
external. But post mortems have worked better when someone realizes that one
member of the team repeatedly calls out a class of errors that bite us later,
and that they're being ignored. You have the ability to prevent this error.
Wise up, or that person will rightfully decide that we are a bunch of clowns
and leave.

------
l0b0
It is necessary but not sufficient for any new contender to do _at least_ the
following to have any chance of taking over:

- _Interoperate with the major player(s),_ currently Git and in many places
unfortunately still Subversion. svn2git probably did more for Git adoption
than any other feature or tool, because it allowed a fairly painless
transition without losing information.

- _Solve at least one big problem with the current contenders._ Git made it
possible to run VCS without a separate server program and sped up VCS
operations massively. Both of those were huge. Looking at the Fossil home
page[1] it does have some features I personally have wanted in VCSes, such as
integrated "bug tracking, wiki, forum, and technotes," but the devil is very
much in the details of how that actually works (How easy is it to write your
own custom frontend or add business-critical bug tracking fields, for
example?), and it's not like we don't already have good bug trackers, wikis
etc.

Just as a tangent, some issues (but not necessarily major or fundamental,
depending on who you ask) with Git as it works right now:

- Does not use cryptographically secure hashes, and has no clear migration
path to a different hashing mechanism.

- Git Annex is not yet built in.

- The command line is complex, including many niche subcommands and tons of
rarely used options.

- The command line is inconsistent, such as `git rm` vs `git branch --delete`
vs `git remote remove`.

- It is based on a less than ideal theoretical model of patches; IMO the
improved patch theory behind Pijul[2] is the most exciting development in
VCSes since Git.

[1] [https://fossil-scm.org/](https://fossil-scm.org/)

[2] [https://pijul.org/model/](https://pijul.org/model/)

~~~
rhabarba
> unfortunately still Subversion

Which is a better choice than Git for most projects, to be honest.

~~~
kstrauser
Everyone has their own tastes and preferences, of course, and I respect that
yours are different from mine. That said, I used and loved CVS and then SVN
for years and didn't get why all the kids were fussing around with this new
Git thing. I finally made myself try it for about a week. At the end of that
experiment, I ported all my repos from SVN to Git and quickly set to purging
all Subversion-related knowledge from my brain. There's literally nothing
about SVN that I prefer to Git, other than that its UI was a little more
pleasant.

~~~
codazoda
I actually LIKED having a central repository, which many of us still seem to
prefer (i.e. GitHub, GitLab, Bitbucket). I switched to git mainly because my
colleagues were all using it. I found it difficult to use, at first, because
of my expectation of a central repo. Many years in, however, I see extreme
value in having all your history locally. Specifically, never having to worry
about a server crashing or your "host" going out of business.

~~~
duskwuff
Having history available locally also means you can perform interesting
operations on history -- like "git blame" -- without making the server do all
the heavy lifting.

~~~
TwoHeadedBeast
The problem is that git will happily destroy history. After a history rewrite,
git blame stops being useful because it no longer tells you who actually
authored a line of code.

~~~
anoncake
> The problem is, is that git will happily destroy history.

No, _people_ will happily destroy history. Git is just a tool.

~~~
wyoung2
Fossil's opinion on this is that the commit graph is an immutable record of
project history. It may be messy and unfortunate at times, but it is what
happened, and it shouldn't be altered in place any more than you'd do that
with an accounts ledger.

In extremis, Fossil offers the "shun" command to remove improperly-committed
artifacts, but even then it's subject to a lot of restrictions.
([https://fossil-scm.org/fossil/doc/trunk/www/shunning.wiki](https://fossil-scm.org/fossil/doc/trunk/www/shunning.wiki))

~~~
anoncake
> Fossil's opinion

Tools don't have opinions, people do. Fossil is just inflexible.

You don't want to alter history? Don't do it then. Git supports not altering
history just fine.

~~~
TwoHeadedBeast
I can't control what other users of the repo are doing, so it's not that
simple.

~~~
anoncake
If you are their superior, other users disregarding your orders is a social
problem, not a technical one. If you aren't, it's a good thing that they are
able to not do what you want.

Tools being more flexible is strictly a good thing. If they are misused, the
person that misused them is responsible. It is that simple.

------
emodendroket
I am happy to use whatever everyone else starts using, provided it works at
least as well. But I also do not have many problems that git won't solve, so I
don't feel a burning need to switch. I think git is good enough that source
control is no longer a very interesting problem.

~~~
ken
But did you think that RCS, CVS, or SVN were also good enough? Or is this new?

~~~
sys_64738
These fell by the wayside because there was never a consensus that they were
better than SCCS. Now git has the weight of the Linux kernel behind it, which
pretty much EOL'd all other source control mechanisms.

~~~
theamk
I don’t think Linux kernel matters that much - only a tiny fraction of the
total number of developers use it.

From what I saw, it was git vs hg - which had different philosophies. Hg gave
you nice, polished workflow for supported tasks. Git gave you building parts
that you can make your own system from. It turned out that enough programmers
wanted to build from parts.

------
tzury
Bazaar and Mercurial are dying... Bazaar is dead, and Mercurial is near death.

Linus wrote the Linux kernel and it became the "de-facto standard" for web
application stack servers (and phones, and watches, and Chromecasts, and all
sorts of `Internet things`).

Linus wrote git and it became the "de-facto standard" for version control.

If the world had 23 more Linuses, we would have full control over our SaaS and
"Clouds" and not be locked in as we are at the moment.

But we have only one like him.

~~~
alexhutcheson
Facebook[1] and Google[2] have both publicly stated that they are using
Mercurial internally, so "near death" seems like an exaggeration. Mercurial
has some significant advantages over Git if you want to implement a scalable
backend for really large repos.

[1] [https://engineering.fb.com/core-data/scaling-mercurial-at-facebook/](https://engineering.fb.com/core-data/scaling-mercurial-at-facebook/)

[2] [https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext](https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext)

~~~
BjorksEgo
There's been plenty of work done to make git scale to enterprise size in the
last year:

[https://vfsforgit.org/](https://vfsforgit.org/)

~~~
alexhutcheson
That's a promising project, but it's currently Windows-only. My main point was
that Mercurial is very much "not dead yet!"

------
rhabarba
Fossil's new(ish) semi-automatic bidirectional interaction with Git mirrors
has finally made Mercurial replaceable for me. However, it is hard to ignore
that the IT world has mostly decided to settle for whatever is the most
commonly used right now, no matter whether it is actually the best solution -
so at least when it comes to DVCS, the war seems to be over.

~~~
theamk
Does not Fossil’s “no rebases” philosophy bother you?

IMHO, the ability to “commit early, commit often” and to squash/rebase
original work later into clean and understandable commits is one of the best
features of Git.

~~~
SQLite
[https://fossil-scm.org/fossil/doc/trunk/www/rebaseharm.md](https://fossil-scm.org/fossil/doc/trunk/www/rebaseharm.md)

~~~
theamk
Right - and I think that points 6 and 7 are exactly why I use rebase daily,
and why I will never use Fossil.

The version control history is made for reading by other people, so it is a
story. After all, we only write the commit once, but people will read it many
more times. (This is especially true if there are code reviews involved)

I am an imperfect programmer. My work-in-progress is often broken, and even
fails to compile. Sometimes I will refactor only the interface, and will want
to checkpoint my work before I go and refactor the implementation as well.
Sometimes I will choose a totally wrong approach and revert it later.
Sometimes I will disable or break a large part of the system on purpose, to
make testing easier. And I often make stupid data-destroying mistakes, so I
want the ability to save all the past versions, even if they are completely
broken.

My "raw" commits may look like: "start on feature X", "refactor interface Y",
"more work on feature X", "wip commit", "fix tests", "fix performance", "fix
more tests". Does any future reader care that it took me 3 commits to get the
tests right? Do they care that I discovered the need to refactor only when I
was halfway into the feature X implementation? Do they want to see a repo that
won't even compile? Do they want to have to cherry-pick dozens of commits to
get the tests to pass? I don't think so.

The final version will only have two commits, “refactor” and “feature X”. It
would be obvious to everyone which lines of code are associated with which
change. Each revision will be buildable, and will pass all tests - so bisect
will actually work.
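The squash described above can be sketched with plain git. A minimal sketch
(repo, branch, and messages hypothetical) that collapses three raw commits
into one clean commit using `git reset --soft`, which is effectively what
`git rebase -i` with `squash` does:

```shell
set -e
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name Dev
echo base > x && git add x && git commit -qm "base"
git branch -M main                     # normalize the branch name
git switch -qc feature-x
echo v1 > x && git commit -qam "start on feature X"
echo v2 > x && git commit -qam "wip commit"
echo v3 > x && git commit -qam "fix tests"
# Collapse the three raw commits into one, keeping the final tree:
git reset -q --soft main
git commit -qm "feature X"
git log --oneline                      # now: "feature X" on top of "base"
```

The working tree never changes during the squash; only the recorded story does.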

(If rebase support is missing, it is possible to “fake” it by having multiple
checkouts and manually copying files around. But this is much more error prone
and dangerous. I have spent plenty of time with SVN/CVS, manually copying
files and applying patches - and I can tell that having this integrated with
version control is much more pleasant)

------
zestyping
It feels that way, and it's heartbreaking that our entire species ended up
locked into a tool with such a horrible interface.

In a way, the git monopoly is worse than Windows or x86 or IPv4, because it's
not just a piece of technical infrastructure. Its arcane commands and its
branching model have infected all of our brains. You can choose a different
editor, you can choose a different operating system, but for as long as we all
live, we will never escape the fact that "git reset" does half a dozen
confusingly different operations, or that renames cannot be tracked, or that
most users don't fully understand most of the commands they regularly use.

~~~
Skunkleton
Check out new versions of git. The overloaded checkout and reset commands are
being phased out to some extent.
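For example (Git 2.23 and later; file and branch names hypothetical), the
newer `git switch` and `git restore` commands each take over one of the jobs
that `checkout` and `reset` used to overload:

```shell
set -e
git init -q demo2 && cd demo2
git config user.email dev@example.com && git config user.name Dev
echo original > a.txt && git add a.txt && git commit -qm init
git switch -qc topic           # replaces: git checkout -b topic
echo edited > a.txt
git add a.txt
git restore --staged a.txt     # replaces: git reset HEAD a.txt
git restore a.txt              # replaces: git checkout -- a.txt
cat a.txt                      # prints "original"
```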

------
DecoPerson
> 1. Metcalfe's original Ethernet has been replaced a bunch of times...

These replacements were seamless to users. New Ethernet adapters were
compatible with at least the previous spec. The Git import/export of Fossil is
not seamless at all. It actually adds quite a bit of complexity if you want to
introduce it into your regular workflow.

> 2. Microsoft's long-term stalwarts Windows and Office are dying...

Citation needed.

> 3. Adobe's having a hard time hanging onto its old market...

Citation needed.

> 4. IPv4 still won't go away...

There is a lot of hardware out there that only works with IPv4. The costs and
risks of switching your org's internals, product or services from IPv4 to IPv6
are phenomenally higher than switching your org from Git to Fossil or adding
Fossil support to your product or service.

If Fossil is truly superior to Git and people are not switching to it, then
there's no hope they'll switch to IPv6. Not until the cost of not switching is
greater.

~~~
fulafel
Wifi mostly replaced ethernet for user-facing applications.

There was partial protocol compatibility between ethernet and wifi, but the
user experience was very different. Some features were lost or degraded (e.g.
speed, reliability, security, configuration complexity), but a pain point
(cables) was fixed.

------
mikece
The problem with Git is that the software and the API aren't separated. As
with all modern software, there should be an API or interface to which all
distributed source control engines comply, making the specific DVCS engine an
implementation detail. Want the old-school one written in C by Linus? Fine.
Want a revamped version of Mercurial that uses the same commands and creates
the same repository format? Cool! Want to Show HN your ability to implement
the same thing in Rust with lower memory use and better safety? Awesome!

But with Git this isn't possible because there's no concept of the API or
interface for distributed version control being separate from the
implementation -- and the discussion of "could the API be better than what it
is now?" has never been had in a meaningful way.

~~~
orf
There is, it's the .git directory. Both the protocol and the directory are
well specified. See libgit2 for an example non-reference implementation.
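As an illustration of how small that spec is: a git blob object is just the
bytes `blob <size>\0<content>` hashed with SHA-1, so any implementation that
reproduces this format interoperates with git. The object ID can even be
recomputed with nothing but `sha1sum`:

```shell
set -e
git init -q demo3 && cd demo3
# git's own answer:
echo 'hello' | git hash-object --stdin
# the same ID computed by hand from the documented object format
# ("hello\n" is 6 bytes, so the header is "blob 6" plus a NUL byte):
printf 'blob 6\0hello\n' | sha1sum
# both print ce013625030ba8dba906f756967f9e9ca394464a
```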

I think someone even built an implementation in bash, but I can't seem to find
it now.

~~~
Iwan-Zotow
> I think someone even built an implementation in bash, but I can't seem to
> find it now.

someone named Linus, I guess?

~~~
orf
No, it was someone exploring git internals by writing a simple git
implementation in bash.

------
interfixus
For what it's worth, the linked page is itself running on Fossil, which of
course has built-in forum facilities and several other suchlike goodies -
being a complete solution, packed into a single smallish executable with no
significant dependencies apart from SQLite. The Fossil site runs on very
moderate hardware, and gets no hiccups from hitting the HN frontpage.

I like this kind of untroubled minimalism, and so far have never encountered a
reason not to run every personal project on Fossil. The real world will
occasionally force me onto Git territory, but I can't really say I have ever
enjoyed the experience.

Git is here to stay for any foreseeable future, of course. And while I do
understand points often made about the benefit of one de facto standard to
rule them all, monolithic dominance always tends to unsettle me. My SE friends
in general simply use 'Git' as a given synonym for 'version control'. And it
does annoy and somewhat worry me that they've never even _heard_ of Fossil
until I roll out my sermon.

~~~
petre
We use Fossil as well @work and we love it, although I don't think it's the
best choice for huge codebases such as the FreeBSD ports tree, which I tried
to import once to see how it scales; I gave up after >2 GB and an hour of
crunching. Maybe importing into nested repositories would work better?

------
pbiggar
I'm working on a git replacement, in a way. The thing that makes git powerful
is that it's just text. Git is probably the most powerful thing for code as
text. When code is no longer just text (and by "text" I mean bytes on disk,
not that we're switching to coding with emoji or VR) you get to do more
powerful stuff.

Our plan in Dark ([https://darklang.com](https://darklang.com)) is to combine
all the different ways that people "branch" (deployment, feature flags, git
branches, staging/dev/prod environments) into a single concept. And then we
also plan to combine all the ways to "comment" (PRs, slack messages, commit
message, code comments) into a single concept.

Not sure if you'd call that a git replacement, but it's a displacement of
sorts - the function of git is replaced by non-git.

~~~
zestyping
Sounds intriguing! Can you say more about what that single concept is and how
it works?

~~~
pbiggar
To be clear, it's two concepts, one for branches and one for comments.

The first concept we've been calling deployless, and discussed it here:
[https://medium.com/darklang/how-dark-deploys-code-
in-50ms-77...](https://medium.com/darklang/how-dark-deploys-code-
in-50ms-771c6dd60671)

The second one is just an idea right now; suggestions welcome. The observation
is that comments on a particular line of code are spread across as many as a
dozen
places (a google doc, slack, trello, the code itself, an old version of the
same code, PRs on github, comments on commits on github, commit messages,
another place in the codebase referencing this one, the docs folder in your
repo, another repo that uses this API, your 3rdparty docs on README.io, etc).
This is weird and bad, and it must be possible to do better.

~~~
shalabhc
For comments, fully agree that we don't need several isolated systems (source
files, docs, commit messages, ...)

How about _one_ system, a hypermedia system, that can hold many different
kinds of objects? Then annotation objects can reference 'code objects' such as
types, fields, functions, or even blocks directly. Not sure if Dark gives each
of these an identity (it should), which would make it possible to refer to
them via hard links rather than text snippets. Once you have all code and
comment objects in one hypermedia system, creating views becomes a matter of
multiple projections of the interconnected objects.

Wouldn’t it be great if I could use a query to refer to 'all functions that
reference this type' inside some docs? Or list all annotations that reference
a function? These could be embedded inside annotations as well. Gtoolkit does
something similar.

Perhaps even a 'branch' can be thought of as a subset of the hypermedia graph.
E.g. using a code block X2 instead of a code block X1 within the same
function.

------
mikl
It’s not irreplaceable. But it’s very well put together, and a great ecosystem
of tooling exists around it. So any potential replacement would have to be _a
lot_ better for people to move.

And for those of us that remember the dark days of CVS and the pain of
migration (I helped migrate drupal.org from CVS to Git), that is a good thing.

I don’t miss the days when you had to make difficult SCM choices when starting
a new project, wondering whether bzr, Mercurial, or Git was the right choice.

Git has become the lingua franca of SCM, and that is great. So many great
developer tools exist that integrate seamlessly because it is safe to assume
that 99% of the audience for it will be using Git.

------
rurp
I really doubt Git is going anywhere for the foreseeable future, but I could
imagine a more approachable VCS catching on. Git is extremely opaque to most
new developers, and even experienced devs hit this when looking up an
unfamiliar command. Sometimes I look up how to perform an unfamiliar task with
Git and find 4-5 competing answers on Stack Overflow with no clear explanation
of why one is better than another.

If an easier VCS caught on enough to be used in schools and boot camps, many
younger devs would start with that one and just keep using it as they progress
in their career. The new VCS wouldn't have to be better or more powerful, just
easier to use for the basic stuff that makes up 95% of Git usage, in order to
gain traction. That said, Git works perfectly fine and is so entrenched that I
don't see it going anywhere.

~~~
roca
Git's UI is unbelievably bad. However, it's practically impossible to avoid
learning how to use it these days, so "much better UI than git" will never be
a compelling selling point for an alternative VCS: almost its entire target
market has already paid the cost of learning git.

That problem could be gotten around if there was some enormous pool of
potential VCS users who aren't using VCS currently but would if there was a
better one. I don't think there is.

~~~
james_s_tayler
It doesn't have a UI. It's just a program that takes instructions and does
what you tell it. If you want a nice fancy GUI for Git there are plenty of
reasonable options; Fork is the one my coworkers seem to be enamoured with at
this point in time. I myself don't see the need to use any sort of GUI for Git
the vast majority of the time.

~~~
mrunkel
The manner in which it takes instructions and returns results is the User
Interface.

That's what the person you are responding to is talking about.

UI = User Interface which is the interface between the User and the Program
GUI = Graphical User Interface. Same thing, but pretty.

~~~
roca
That's right. The Git CLI is the default Git UI.

A better Git UI is possible, like Gitless. A GUI that just has the
functionality of the Git CLI doesn't solve the problems I care about, like the
staging area being unnecessary and complex.

------
throwGuardian
> Microsoft's long-term stalwarts Windows and Office are dying

Judged on what metric? Office's paying monthly active users and revenues have
consistently grown, measured both quarter on quarter, and year on year

~~~
majewsky
I would probably think the same if I were stuck in my Linux bubble, but $WORK
allows me to observe the average computer worker. They're incredibly deep in
the Microsoft ecosystem and not at all interested in switching.

------
consultutah
SVN seemed irreplaceable not so long ago and CVS before that and RCS before
that... When someone comes up with something demonstrably better and gets the
right people to start using it, people will switch over.

~~~
colechristensen
SVN also had serious difficulties and performance limitations which git does
not.

Git is somewhat confusing to use, but not enough so that anyone really cares
all that much (besides a few people who _really_ care) and that is not a
recipe for easy replacement.

There were/are slightly less confusing version control systems (mercurial) but
they didn't catch on for whatever reason.

~~~
emodendroket
I mean so is Unix and it's still around all these years. Heck there's nothing
especially obvious about a for loop, yet every language has them. Once
everyone's accustomed to the weird interface they are just going to demand
everything else have the same weird interface they're used to. This is one of
the lamentations of the Unix-Haters Handbook.

~~~
wwright
If you’re talking about a “for (initialize; condition; step) { body }” for
loop, most newer languages don’t have them AFAICT.

I would say the other kind of for loop, a “for x in y { body }” loop, is very
intuitive and obvious to most English speakers. We use idioms like “for each
A, do B” all the time.

~~~
emodendroket
I meant the C-style one and they do appear in most commonly-used languages,
though I guess most commonly-used languages are not all that new.

------
thenewnewguy
I find it ironic that in the same post the author complains about a git
monopoly ("Git effectively has a global monopoly on DVCSes") while wishing the
same thing for fossil ("Fossil's world domination will not come from...").

~~~
lidHanteyk
"world domination", specifically, refers to a long-running joke about the
corporate-sponsored capitalist competitors to FLOSS. See [0] for an example of
the joke being used to title a talk and frame business decisions. [1] contains
a history of the phrase in the Linux world. Note that Linux and git share the
same original author/designer.

[0] [http://www.catb.org/~esr/writings/world-domination/world-
dom...](http://www.catb.org/~esr/writings/world-domination/world-
domination-201.html)

[1]
[http://xent.com/pipermail/fork/2002-January/008429.html](http://xent.com/pipermail/fork/2002-January/008429.html)

------
oxfordmale
Git could easily be replaced in my company if there were a business need for
it. For now it does its job, but there is no reason we wouldn't migrate if
there were a better alternative. Right now there isn't one.

~~~
rhabarba
Which features would make the difference for you?

~~~
enjo
Perforce is still in use in content heavy industries because Git still
struggles with large binaries. Something with git branch semantics that could
handle those files would be huge.

~~~
emodendroket
Isn't that what Git LFS is about?

~~~
Aeolun
Yeah, but it’s not really out of the box.

------
quadrifoliate
I find it amusing that the author mentions Git and Excel in the same breath.
One is a GPLv2 piece of software that you can submit patches to, fork, or
build your own copy and sell it (so long as you release the source); while the
other is a closed-source piece of software that provides largely static
utility to many people, yet keeps increasing in price (latest is ~$100/year
for an "Office 365" subscription if I remember correctly).

Imagine if you had to pay $100/year to use Git. I would probably hand it over,
begrudgingly. But the fact that I don't is probably one of the more important
triumphs of Free Software over proprietary in recent years.

------
ChuckMcM
Does Fossil do sub-modules correctly? Specifically does it let you compose a
project of several repositories and do configuration management on that
repository? That is something git kinda sorta does but it gets out of control
easily.

~~~
hinkley
I like the idea of storing commit history in a database. Why write all of the
bits from scratch?

But I don’t like how difficult databases make it to represent graphs of data.
To pull a subtree cheaply, you need a graph, not unlike SVN’s data structure.

~~~
swish_bob
Use a graph database?

------
734129837261
I wish there was a version control system that didn't need user input beyond a
"push". Only branches and pull requests, nothing more. No commit messages, you
have one master branch and however many sub-branches you need.

Whatever code is in your branch is what it is. And then there's a layer of
magic on top of it all, maybe a UI or command line tool or both, just to be
able to easily rewind time. Either per file, per line of code, or folder, or
just a folder but not recursive, or the entire branch.

Think of Apple's Time Machine, it should be that simple.

Honestly, I rarely–if ever–read commit messages to begin with. The way I
navigate older code is never done by searching for commit messages. That's not
reliable.

Instead I simply go to a point in time where I think the thing I'm looking for
might be. I'll look at the code, recognise its state, and continue the search
up or down the timeline.

And that would cover the needs of most projects I'd say. And it would save us
a shitload of time.

Hell, I've been working with git for almost a decade now. I never needed to
rebase or merge things, until I recently did need to do it. It's too arcane to
make intuitive sense, to me anyway.

Git is good, I'd just welcome a breath of fresh air...

~~~
tenken
Do you work on a team? Or in a job where you need to have an audit trail for
your work? Commit messages and many other features of Git, like branching and
tags, are indispensable for sharing and collaborating on code.

~~~
eesmith
Flip your questions around. Might someone who isn't on a team, and doesn't
need an audit trail, want something simpler?

Versioned files systems, including Apple's Time Machine and Dropbox's version
history, have no extra UI to save versions. A small, short-lived project with
at most a few collaborators (eg, working on a small scientific paper) might
find those more useful than git or other VCS.

~~~
at_a_remove
I would have to agree with this. I have been programming for decades and I
haven't had to work on a team for a project. "Branches" have not happened, nor
have I ever felt the lack of them.

I would not mind something simple. I find the arguments, flags, and whatnot in
git rather opaque. That, coupled with my lack of need for its features, has
not thrilled me.

It seems to be pretty effective if you have tons of programmers working on a
single project, but at the bottom end of the scale, I find it baffling.

------
kjgkjhfkjf
Eventually git will be replaced by something better. It might not be replaced
everywhere, but eventually it will be replaced in most places. This is true of
almost all things in tech.

I expect that git's eventual replacement will initially boast compatibility
with existing git repos.

------
harry8
Taking the question literally, obviously not. If git became a problem due to
some unforeseen licensing issue or whatever, Mercurial does the job just fine
right now and has for years.

As a heads up, if you're stuck on CVS, SVN, or some other stupid VCS due to
"old-codgers" in your office, Mercurial has a shallower learning curve and an
easier UI for getting the same job done, which may make it easier to switch.

Either git or mercurial, who cares? Either of them until there's something
better. Never deal with SVN & CVS branches and merges again. Feel the
immediate team productivity boost which will pay for the initial learning
curve costs by day 2. Seriously.

~~~
choeger
> day 2

Unfortunately, you have to settle on some workflow for that. And this settling
will take some experience and discussion. So "day 2" sounds very overambitious
to me.

~~~
harry8
Use the existing SVN/CVS workflow with the trivial changes for git/mercurial:
central repo, push to master when you're done. Branches are all but unusable
on CVS/SVN anyway - if you do use them, feel the win and laugh with glee. When
everyone is on top of that basic workflow, then you do something incrementally
better. Really, it's day 2 - really. I've seen this more than once now. The
fight to get to the point where the changeover happens is getting easier, but
I'm sure it still sucks.

------
hnrodey
Unsure if this is just my hot take but...

GitHub is what mainstreamed Git to what it is today. Who knows what our
industry would look like (with regards to source control) if GitHub never
existed.

Probably still SVN and Team Foundation Server (shudder).

------
taspeotis
If you take a look at Subversion vs. Git's interest on Google [1] and you
agree it has some correspondence to the technology adoption lifecycle [2] ...
Git's got a long way to go.

[1]
[https://trends.google.com/trends/explore?date=all&q=%2Fm%2F0...](https://trends.google.com/trends/explore?date=all&q=%2Fm%2F012ct9,%2Fm%2F05vqwg)

[2]
[https://en.wikipedia.org/wiki/Technology_adoption_life_cycle](https://en.wikipedia.org/wiki/Technology_adoption_life_cycle)

------
sys_64738
Git is a tool to get the jobs done, not a religion.

------
JohnFen
Of course Git is replaceable. When a new tool comes around that provides a
substantial benefit over using Git, Git will be replaced.

If that never happens, then it's a pretty strong indication that Git is
working perfectly fine -- so why worry about whether or not you can replace
it?

------
pixelbath
Absolutely not. Give me something that can more elegantly handle sub-projects
and sub-repositories but still works as well as Git, and I'll stop using Git
_today_.

------
micimize
fossil is a really interesting project, but the switching cost from git is
pretty massive. The two killer issues for me:

* community/history: any issue I have with git, someone else has probably
had. Being a pioneer is very expensive, time-wise.

* ecosystem/tooling: aside from git-host-provided tooling, there are
integrations for vscode, slack, CI, etc.

------
lasereyes136
[https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...](https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines)

- Any headline that ends in a question mark can be answered by the word no.

~~~
omarhaneef
Do you mind if I take all your money?

------
jasoneckert
Am I the only one who read this thinking "Oh come on....git with the program
dude...."?!? Seriously, I love git.

------
kseo3l
git has a big problem with binary files; hopefully a new technology can fix that

------
vfclists
When they say Git, isn't it GitHub and GitLab they actually mean?

------
HALtheWise
One major fundamental issue with git that nobody has brought up is that its
data model is close to incompatible with some current legal and moral
requirements around data privacy. For example, as far as I can tell,
GDPR allows any European citizen who has ever committed to the Linux kernel to
request that their name be permanently expunged from their contributions, and
everyone with a git clone of the kernel is legally obligated to perform a
rebase to do so. It's possible to make a DVCS where that is an easy operation,
but git very much isn't it. That means it can't be used to store personal
data, and even standard corporate policies of "we delete old historical
records so they don't bite us in court" aren't supported.

There are definitely times you want to store the full history forever, but it
would be nice to have a DVCS that gave other options.
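For scale: the rewrite itself is possible today with stock git; what git
cannot do is propagate it to every clone. A minimal sketch (repo and
identities hypothetical) using the built-in, if deprecated, `git
filter-branch`:

```shell
set -e
git init -q repo && cd repo
git config user.email jane@example.com && git config user.name "Jane Doe"
echo code > f.txt && git add f.txt && git commit -qm work
# Rewriting the author changes every affected commit hash, so every
# existing clone diverges; that is exactly the propagation problem above.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --env-filter '
  if [ "$GIT_AUTHOR_EMAIL" = "jane@example.com" ]; then
    export GIT_AUTHOR_NAME=Anonymous GIT_AUTHOR_EMAIL=anon@example.com
    export GIT_COMMITTER_NAME=Anonymous GIT_COMMITTER_EMAIL=anon@example.com
  fi' -- --all
git log -1 --format=%an    # prints "Anonymous"
```

The third-party git-filter-repo tool does the same job faster, but the
distribution problem remains either way.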

~~~
GauntletWizard
The legal requirements in the EU are at odds with the moral requirement.
Changing or erasing history in Git is an anti-feature. Checked in secrets
should be assumed to be compromised and changed immediately. Removing
contributions should be treated as any other form of censorship.

Those who forget (or rebase away) history are doomed to repeat it.

------
dustingetz
The cloud is going to change a lot of stuff, file-system based code might be
gone, code-in-database is already halfway here ... opportunity

~~~
krasin
I agree with the first half of the statement: we indeed do not have a cloud-
first version control system. It could bring some real benefits, like tight
integration with continuous builds and other things considered essential
nowadays.

I don't see files going anywhere. I would like to hear examples of
"code-in-database is already halfway here", if you happen to have them.

~~~
dustingetz
My startup is one [http://www.hyperfiddle.net/](http://www.hyperfiddle.net/)
Others are [https://darklang.com/](https://darklang.com/) and
[https://www.unisonweb.org/](https://www.unisonweb.org/) and of course
scripting google sheets with javascript

Low-code is very suddenly going to be a big thing in the next couple years and
when it happens there are now 750M more "coders" and those types of people are
not interested in git pull --rebase --fucked --whatdidido

~~~
krasin
Thank you, very interesting pointers!

------
tomesch1982
The git internals are good for some kinds of projects, but for the actual
majority of projects git is a very bad fit. Most people don't get that because
they are blind to much better tools like mercurial and fossil.

The git UI/UX, on the other hand, is the worst piece of crapware known to man.
This piece of shit has probably destroyed more data than any other tool ever
written. People who think that git is any good are obviously wrong and can
easily be proven wrong. It's sad that git has become the default.

~~~
rimliu
How funny, it was actually mercurial that destroyed my repo. Never looked at
it again. Also, it is very hard to "destroy data" in git.

