At Sun, everything was in Teamware, except when it wasn't, because every group at Sun could do whatever it wanted (we used SVN in the x86 ILOM team). Teamware was good but suffered from being a wrapper on top of SCCS.
SVN was a revelation after CVS. I resisted switching to git from SVN for a long time because the mental model for SVN is so much simpler: everything is in one central place, every change is marked with a monotonically increasing version number, remember a number and you can always reproduce the state of the project. Eventually I saw the huge benefit of the "git workflow" (local branches, pull requests) for collaboration. Branches in SVN are error-prone and always risked painful conflict resolution, so we rarely made them.
Perforce, which is commercial, was like stepping into another dimension (you mean all this stuff just works? Out of the box?). There's another historical timeline where Perforce has an early open-source model (like Red Hat) and is the dominant VCS.
I enjoyed that remembrance of 20 years working on free-software based projects, not a bad way to start Sunday. I hope it was worth your time!
I do agree that source code control needs to move beyond lines to deal directly with nodes in a tree structure. That could be XML, JSON, a C++ parsed abstract syntax tree, etc.
But this has a lot of problems. Namely - most of the time, when two people change the same line, the changes really do conflict, and someone really does need to manually merge them. In fact, sometimes even when your VCS merges things happily, it still gets them wrong because two changes conflict semantically but don't touch the same lines.
So I guess I would say, I think merging is an impossible problem to 'solve', and going to a more granular merge strategy is actually a move in the wrong direction. You could maybe fix my particular case by 'whitelisting' those typename changes, and saying 'these are allowed to merge with anything else', but at that point... the effort required to specify that unambiguously and make it work properly is probably higher than just merging the changes.
I've not seen someone actually use it for source control, but the core idea is simple enough for the taking.
For example, I have used .gitattributes to make git use MS Word to merge docx files, and LabView to merge .vi files. TortoiseGit (for Windows) includes diff/merge driver scripts for use with MS Word files.
It doesn't always work quite right, but it's good enough that I use it quite often.
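For anyone curious how that wiring works, here is a rough sketch of registering a custom merge driver and pointing .gitattributes at it. The command name `docx-merge-tool` is hypothetical; in practice you'd substitute a script that launches Word, LabView, or whatever tool handles the format.

```shell
# Register a custom merge driver (the "docx-merge-tool" command is made up;
# git hands it the ancestor %O, ours %A, and theirs %B, and expects the
# merged result to be written back to %A).
git init -q merge-demo
git -C merge-demo config merge.docx.name "merge .docx via external tool"
git -C merge-demo config merge.docx.driver "docx-merge-tool %O %A %B"
# Tell git which files should use that driver:
printf '*.docx merge=docx\n' > merge-demo/.gitattributes
```

With that in place, any merge touching a `.docx` file hands the three versions to the external tool instead of attempting a line-based merge.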
We used to do code + art in the same 1TB perforce repo back when I did gamedev. Art folder was ~600GB. Having a workspace to eliminate that as a developer was awesome.
Git still fails to even approach the usefulness that P4 brought to those types of shops. Between the auto-cache proxies, dealing with 1TB+ repos and workspaces there's a reason P4 still does pretty well.
Much simpler conceptually, in my opinion.
Imagine a world where 'svn up' or 'cvs up' takes 20 minutes. Not only did the client mappings limit the scope of the operations, but local indexing brought the time for those operations down to 0-4 seconds.
The inability to go back to old states of your project unless you happened to tag them makes finding the cause of bugs extremely hard. I don't know how it is in CVS, but in Clearcase you can tag only a subset of the files and what you check out from the server is determined by a complex configuration file ("the configuration specification"). This is an additional hurdle to reconstructing old states of your software because now you need to know the config-spec to do so. Even if you have the old config spec, it might contain fallback rules ("just take the latest version of the file") that effectively make it impossible to reproduce the old state. Unless you're extremely disciplined in your usage of the tool it's a real challenge to fix bugs for old releases. And don't get me started on trying to backport bugfixes into several old releases when branches are managed for each file separately.
This lack of a proper project wide history also makes it extremely challenging to migrate to a different tool without losing a lot of information.
I agree with you; the code I had inherited was built with clearmake and no one had any idea how to move it to a newer build system. It was multi-platform code and you had to compile all the C++ code on an HP-UX machine first. The build would fail the first time and then succeed the second time. Once it was built on HP-UX, it could be built on any other platform.
There was a dedicated "build engineer" who was the only one who knew how to fix build issues. Thankfully the project was finished by the time he had left the company :)
Or maybe I should just send the PTSD bill to them
MS didn't write Skype but it sure got worse after they bought it.
If you disregard their bullying sales and "promotion" tactics and how much the "Rational" suite of products cost
So no, I'm happy that github took over and I'm happy to boycott it as much as I can
Don't remind me. I worked for a group in a top tech company that used cvs until 2014 and were very resistant to switching. They finally switched due to a department mandate to stop using cvs (they were not the only team in the department using cvs).
But I do remember when CVS was considered a step up, so I suppose I'm getting old.
But it did also let you check out files as of a particular timestamp, which could be more reliable.
Also, the same workplace where I used ClearCase was the one that used Lotus Notes.
The way I see it the evolution of these version control systems was driven by the precipitously falling price of disk space.
CVS only stored a tiny bit of metadata locally so almost every operation required contacting the server. That made using CVS very slow and branching expensive.
Subversion stored a whole separate copy of every file locally. This made the critical "show me just my local changes" operation an order of magnitude faster and made branching cheap, at the expense of doubling the disk usage.
Git stores the entire history of everything locally (compressed). This makes most operations an order of magnitude faster still, so much faster that some things that were completely impractical with the earlier systems are now routine, and branches are free.
What's coming next?
Definitely a very interesting project. Will have to play around with it a bit.
I know this is HN and we need to know 100 git commands. We should do interactive rebases, we should cherry-pick changes, and all that jazz, but I really hope the next source control system will be a whitelisted subset of git commands: push, pull, commit, amend, and rebase, or something similar.
Deprecate the rest and make our lives easier.
The ones that I normally use are fetch, merge, push, reset, rebase, clone, diff, status, branch, log and checkout. When I used svn, I had to use co, commit, add, delete, diff, status, log, copy, and update. Eleven versus nine commands, so the difference isn't really that much.
I was an early adopter and promoter of Subversion. I loved how much faster it was than CVS, although it took me a few years to fully understand some of the more complex things like the details of merge tracking.
I was very resistant to Git at first - I fundamentally just didn't get it. The whole concept of a distributed VCS seemed like anarchic nonsense. I basically had to be dragged there kicking and screaming, but of course now I'm very comfortable and would never go back to Subversion.
Everyone rightfully complains about Git being hard to use but it's totally worth it for the power. It seems unlikely to me that the next generation will be "works like Git but simpler to use". I think whatever comes after Git will have to be more significantly different than that.
The emergence of git and GitHub have transformed Open Source development, being able to just open a pull request or an issue and know you’ll get notified when things happen is great - I’ve submitted patches for many things which I just wouldn’t have bothered signing up to a mailing list to keep track of in the past.
Github user 362 (from back before you could just sign up)
As recently as the first dot-com boom, Git didn't even exist. Even Subversion was brand new, and it was mind-blowing how much easier it was to work with than CVS.
One aspect of history that this article glosses over is that Git is not the only or even the first third-generation version control tool created. The earliest buzz I remember around DVCS was for darcs and bazaar, neither of which I've heard mentioned since about 2009. Mercurial and Git were released around the same time as one another, and were in a vim-emacs sort of grudge match for a few years before Git became the clear winner.
Mercurial seems to still be in use in some odd corners of both the corporate and open source worlds - probably a legacy of people choosing it for projects during that period before Git "won". When I first tried it it felt a lot like Subversion made distributed. Nowadays it feels incredibly clumsy next to Git.
Mercurial has exactly the same features as git - and maybe a few more - with a UX which is at least more consistent.
The main problem I see with mercurial is that its team stays too quiet: everything is so smooth that probably nobody feels the need to make much fuss about it.
The HN community has a high technical level, but during my career, out there in the world, I have seen a lot of different folks: people that constitute the bulk of the workforce often do not have a strong mastery of the tools they are required to use. Sometimes they just endure them. For these people, using an easier tool (and mercurial is a good candidate in my experience) could probably help them to really improve their skills and become more productive team members.
This is one moment in my past that was formative in helping me make good tech predictions. Namely, if I see a technology that I think is better, and I think it will win out over an inferior technology, I simply reverse my prediction and enjoy being correct.
Coming from Subversion, Mercurial made so much more sense than Git. I still think the CLI is more consistent. I keep hoping for a new shakeup in revision control systems, though I suspect such a change will be a long time coming.
In my view, the legitimate choices for FOSS version control these days are git (good enough, and popular) and fossil (featureful and obscure.) Why you would ever pick mercurial instead (marginally better than git, but almost as obscure as fossil) is completely beyond me. It occupies an uncomfortable middle ground of mediocrity.
(The one caveat here is that Fossil may not be appropriate for very large scale decentralized projects, but frankly that's a problem git and mercurial are solving that most companies don't have.)
(Incidentally, editor extensions for git have almost eliminated my CLI interaction with git. The UX of vim-fugitive is great, and similar extensions exist for just about any modern text editor. I think CLI UX is becoming less and less relevant when it comes to version control.)
Which no one has ever seriously claimed for git.
Mercurial + Mercurial evolution outcompetes git rebase workflows in a really nice slick reliable package.
Merges are so hassle free for me these days.
But when it comes to Fossil, nothing I've seen comes close. It's built on sqlite, which I think turns off a lot of people who have prejudices against SQL, but sqlite is the furthest thing from the big 'enterprisey' RDBMSes that give most users the shivers. It's a really tight piece of software, and in fact Fossil is created by the creators of sqlite, and the sqlite project is managed with Fossil. So Fossil's vanguard project (sqlite) has more deployments than even git's vanguard project (linux)! Mercurial has Firefox and Python, nothing to sneeze at; clearly it's a capable VCS. But I guess my point here is that Fossil doesn't get the attention it deserves whenever people talk about a FOSS alternative to git.
It might have been the best you had access to but commercial version control systems of various stripes were common. The first version control system I used for work was distributed and that was a decade before git. Version control systems with global locks, version control systems pretending to be a filesystem, version control systems fueled by the souls of the damned - it was like a Rule 34 of VCS - if you could think it, someone was selling it as a VCS.
Signed, Github user 3527
I started without version control. I very quickly realised that it's very easy to break a project but forget how to undo your latest breaking changes. I discovered subversion and it was amazing. It was 2006 and I was the only person on my course to my knowledge who was using version control.
At around that time git came out and some people were trying it, but many people said it was completely unnecessary for most projects. I then tried to use svn for a project with more than just myself as a developer and it was a disaster. We had giant commits once a day that caused conflicts every time. It was horrible. Git was truly amazing. I agree the cli isn't great (I use magit) but you have to have lived without it to understand why it's so important.
We've recently transitioned to git at work and a couple of weeks later I'm already stuck in a week-long repository cleanup project on one of the central repositories because people just created a phenomenally huge mess in it. They were experienced and happy SVN users before, but somehow the boss forced the git hype train on us and has to pay the price now.
The other (and this is a major one) downside of adopting SVN for your org is the dearth of decent tools for code review and collaboration. At a previous company we used Fisheye/Crucible which is seriously not fit for purpose. At another SVN shop I worked at, we emailed patches to each other (seriously). And the lack of quality tooling is down to SVN's declining popularity - there's no market.
Linus didn't set out to create git because of some missing features in svn, but because he wanted a fundamentally different tool. If you find yourself with merges that are clean in one version control system but create conflicts in another, which is entirely possible, you are likely doing something very special that isn't a great fit for either.
There might be more straightforward process to follow that doesn't end up with such difficult merges. Maybe it's just merging more often, maybe it's something else. But it's very easy to blame the tools when the processes are broken.
Whoa whoa whoa, hold up there. Linus isn't some VCS visionary, he didn't magic the idea of distributed VCS out of thin air. There were plenty of other DVCS out there at the time, he just built another one.
I’ve seen this three times now.
The best and simultaneously the worst feature of git is the offline commit ability.
Not just commits - log, diff, status, almost everything I can remember needed to go off to the remote repository for information. Not only was this annoying when you didn't have connectivity, it was slow when you did.
I do occasionally miss the ability to version files individually though.
The network server stuff was hacked in on top.
The first time I ran “git commit” and it finished almost immediately blew my mind.
And yet, it's a great tool that I suspect that it will never get traction for precisely that reason. The abundance of options enables people to be shallow enough that a silly-sounding name knocks it out of consideration.
I wonder why the same didn't happen with git. It's really rather rude in English.
Git is like 10x worse and still 'won'.
Git famously was not built for monorepos.
I would like to see sub-tree checkouts, sub-tree history, storing giant asset files in the repo (without the git-lfs hack), more consistent commands, some sort of API where compilers and build systems can integrate with revision control, etc.
OpenBSD's involvement here is conveniently missing; without it, arguably GitHub might never have existed.
Also of note: despite the paper being presented in 1999, AnonCVS was operating as early as 1995. Other projects were still putting tarballs on FTP, with no read access to source history.
When I wanted everyone to switch to Git or Mercurial (~2007) the main questions were about branch merges (MUCH easier in git than CVS) and the reliability of the version storage.
Many have now moved back to a centralised model of control (github), even if they have many partial copies. The incredibly RDBMS-like (non-git) method of managing the metadata of github systems is very disappointing, but not surprising. If github is 4th gen, then I'm hoping for a 5th gen where all SE metadata is also available as a replicated database, which you can spin up with a local httpd.
Also, fun fact: SourceSafe was the first cross platform VCS I used. There were Mac, Unix, and dos/windows versions. Then Microsoft bought it and axed everything except dos/windows. :-(
- your latest changes
- version of the code before you made changes
- latest version of the code on the server
So yes, we would have 3 full checkouts of the project locally to accomplish this. I guess it boils down to a "patch" workflow, except you get to both create and apply the patch yourself. We used a real-world commit token (a rubber duck, IIRC) to make sure only one person was doing merges at a time...
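That manual three-checkout merge can be sketched with plain diff and patch; the directory and file names here are made up for illustration.

```shell
# Three local copies, as described above:
mkdir -p baseline mywork server-latest
printf 'line1\n' > baseline/file.txt                 # code before your changes
printf 'line1\nmy change\n' > mywork/file.txt        # your latest changes
printf 'line1\n' > server-latest/file.txt            # latest code on the server
# Create the patch yourself (diff exits 1 when files differ, hence || true):
diff -ruN baseline mywork > my-changes.patch || true
# Apply it yourself onto the latest server copy:
patch -s -d server-latest -p1 < my-changes.patch
```

After the `patch` step, `server-latest` holds the merged result, ready to be copied back to the server - exactly the hand-rolled merge a real VCS automates.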
(Tortoise)SVN was an easy sell when we discovered it.
Why people liked it or even tolerated it still puzzles me.
Edit: While I'm sure this was what I experienced it might be because of configuration by the organization I worked for, but I doubt it as I remember reading everything I could find about Perforce since I disliked it so much and wanted to find out why everyone seemed to like it.
FWIW you could configure P4 to only lock certain file extensions. I also found it useful to find out who was "working" on a file if I had to touch it.
Why? Two reasons. I'd played with git but hadn't really understood the power of trivial branching (though I was one of those CVS power users who could branch, but tended to use my IDE to manage it). I remember thinking to myself, oh this is like CVS, because that is how I used it when I played with it.
The bigger reason is that I was managing a team of 2-4 developers that rarely worked on the same thing. We all worked in the same room. The codebase was relatively small (35k loc). I could see no good reason to make the change when CVS was "good enough". It was the same reason we used the same old crufty bug tracker--too many features to write to spend time upgrading infrastructure, unless it promised a 2x efficacy improvement. We did add automated testing and scripting around deploys because the benefits were obvious.
Now I love git and the power to branch and stage commits but I am still not sure it's needed for colocated teams of that size.
The main downside is how condescending some Git users can be to SVN users. When I mention I use SVN, I often hear nonsense like "Oh, you must not understand tree structures." Actually, I do, and I see that they have no benefit for certain things I'm working on.
The one thing I'd like is the ability to commit without an internet connection, which distributed systems can easily do. But this hasn't been enough of an issue to motivate a switch.
- Data scientists and researchers who want to check in the original products of data collection, cleanup, etc.
- Software with large artistic assets like audio, textures, and other visual art.
I think about the reproducible research movement and think to myself that SVN is strictly better for many such projects.
But use of compression and binary deltas does mean that (for regular text-based code) a git checkout including all history can be smaller than a subversion checkout with just the latest version.
With git you don't need a server at all, your full repository is locally stored and you can just back it up like you do everything else.
This was also during SVN's heyday. I'm sure it's much more stable now. ;-)
There are major projects out there still using subversion. GCC, for example. They have lots of branches. As for performance... well, svn is way better than CVS.
The worst I've had with SVN was a broken working copy, which is usually easily fixed. With Git, problems like that occur much more frequently. In the past 6 months, on one project alone, I've likely wiped my local Git repository more times than I've ever had to fix a broken SVN working copy. My experience is closer to the famous XKCD comic:
Now, perhaps you could argue that I don't understand Git well enough, and I wouldn't necessarily argue against that. I think Git has a terrible and confusing UI compared against other distributed systems. (And the simplicity of SVN makes its UI good, I think.)
As for performance, again, I use SVN with small projects, so performance hasn't been an issue. To be honest, I probably save time compared against Git from not having to type as much with SVN!
And I don't branch in SVN because I don't want to, not because it's difficult or dangerous. Branching would provide no benefit in my case. If I wanted to branch, I'd switch to a distributed system. My experience talking to some Git people is that they often branch as a habit without considering what could be gained from branching.
There's just never a reason to not branch. It keeps ideas, efforts, tasks separated really nicely and has essentially no cost to doing so. It lets me have a completely different environment to try things out, wreck, and abandon things without ever touching the branches that are important. When I'm done, a simple merge brings it all in at once.
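The try-wreck-abandon-merge cycle described above looks roughly like this in git; the repo, branch, and file names are all made up for illustration.

```shell
# Set up a toy repo with a "main" branch (identity flags just keep the
# example self-contained):
git init -q -b main branch-demo
(
  cd branch-demo
  git -c user.email=demo@example.com -c user.name=demo commit -qm init --allow-empty
  git switch -qc experiment          # a scratch environment to try things out
  echo 'wild idea' > idea.txt
  git add idea.txt
  git -c user.email=demo@example.com -c user.name=demo commit -qm 'try an idea'
  git switch -q main                 # the important branch is untouched so far
  git merge -q experiment            # when it works out, one merge brings it all in
)
```

Had the idea not worked out, `git branch -D experiment` would have thrown the whole thing away without `main` ever noticing - which is the "essentially no cost" point.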
These are all completely valid reasons to branch. They also don't apply to most of my projects. And for the ones they do apply for, I use Git.
My small projects tend to be relatively simple, and often contain a lot that's not code. (Some contain almost no code at all.) For example, I've had people recommend branching to keep track of different versions of the same paper they're writing. But this immediately struck me as a waste of time. I'll be submitting only one version of the paper. Why keep multiple internal versions?
I am not convinced by the argument that I can have a branch for each person I ask to read the paper. Merging in the handwritten changes they provide me is not hard. Branching would just be extra work in this case.
It can be nice to try different organizational structures sometimes, but I've found it easier to simply have a different TeX file in that case. (Or better yet, multiple TeX files for each part, and then a set of master TeX files that organize the paper differently.)
If someone has a good argument for distributed version control in this use case, I'd be happy to hear it.
Same here, I have used the SVN + Trac combination for my personal projects for almost 12 years now, it isn’t broken for me so why should I ever think of fixing it?
I agree that the mental model with SVN is much simpler than Git's, and I used SVN on Windows with AnkhSVN and TortoiseSVN, so maybe it works better on Linux
To draw an admittedly flawed comparison: I work at a contract engineering and manufacturing firm. There are some products that we produce by the tens or hundreds of thousands that benefit greatly from a lot of automation in assembly, testing, packaging, etc. We also do low-count production runs that quite simply don't get much automation because the per-unit cost would end up being astronomical. There's no reason to tool up for a 100,000-piece run if you're making 10 pieces.
In our field, though, the barrier to entry seems to be zero. So while git was designed to meet the needs of the linux kernel, people also use it for their own personal 1kloc side projects. It doesn't stop there, of course: introductions to making a simple web app are often filled with tooling, frameworks, etc. that need to be included, configured, and used. Undoubtedly these make sense for large projects, but they are used for personal sites as well.
Note that I realize that you didn't argue for no version control.
* There was no separation between commit and push. How weird.
* "svn log" or "svn blame" would take ages, because it had to talk to the server.
* Well-run larger projects had branching guides, because the built-in commands didn't track enough metadata to do merges safely later on.
* SVN made it trivial to check out only a subdirectory of a bigger repository (which I still sometimes miss in git), so people often tracked different projects in separate directories of one repository.
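For what it's worth, git's closest (and clunkier) counterpart to SVN's subdirectory checkout is sparse-checkout. A local sketch, with a made-up repo layout:

```shell
# Build a toy repo with two subdirectories:
git init -q -b main source-repo
mkdir -p source-repo/docs source-repo/src
echo 'manual' > source-repo/docs/README
echo 'int main(void){return 0;}' > source-repo/src/main.c
git -C source-repo add .
git -C source-repo -c user.email=demo@example.com -c user.name=demo commit -qm init

# Clone it, then restrict the working tree to docs/ only:
git clone -q source-repo partial
git -C partial sparse-checkout init --cone
git -C partial sparse-checkout set docs   # src/ disappears from the working tree
```

The full history still comes along (unlike SVN, which only fetches the subtree), which is why this is an approximation rather than an equivalent.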
The only thing I remember about CVS was that to clone something from CVS, you had to know some root directory (this presumably was the webroot), and sourceforge.net didn't show available webroots -- so there were tons of technically "open source" repositories that you still couldn't clone, because the webroot wasn't documented.
CVS at the time felt like an amazing upgrade to RCS, just like Git feels like an amazing upgrade to CVS.
I wonder though, have we reached the end? Is there anything beyond Git? When I used RCS, I would always lament, "it would be nice if two of us could work on a file at the same time". When I was using CVS, I'd lament, "It would be nice if two of us could work on a group of files at the same time and merge our changes".
But using Git, my only lament is, "I wish this were easier for new developers" and "it would be great if there were a standard workflow". Problem one has been somewhat solved by GitHub/GitLab, and problem two has been solved by some pretty standard git-flow tools. Neither one really demands a new paradigm in VCS though.
The ability to split and merge repositories as easily as we can split and merge branches might open up some new use patterns.
The particular context I'm thinking of is scientific repositories. These tend to grow in size and scope in an unplanned manner. Pieces inevitably need to be split off for a collaboration, to be made public, or because someone is changing institutions and needs to take part of the project with them.
Author: jkf <jkf>
Date: Mon Oct 21 07:53:43 1985 +0000
Git was actually started to provide the core functionality and let someone else build a user-friendly front-end for the VCS. But somewhere along the line people just decided they didn't need a friendly front-end, and now the core is what people use every day. 13 years later it's still difficult to use. Unless someone comes up with a really slick universal front-end for it, it's probably time for a new VCS.
Combined with the Evolution extension... unbeatable.
I have used SCCS, RCS, CVS, SVN, bzr, git, but hg is by far the least pain and most powerful I have seen.
I'm one of the few people who deliberately learned and used CVS (for a while) in recent times. I did not have any public repositories at the time and needed VCS for my configuration and some documents (Org mode mostly), and the model where I could have a central repository on a local directory which I could easily back up was compelling. Then I figured out a filesystem layout where I could back up all my work easily and this became useless, thus I switched to Mercurial. Nowadays I'm considering going just git, because it's what everybody uses, and Magit is a compelling piece of software.
I use RCS regularly along with Mercurial and Git nowadays. RCS is good for, e.g., when I have a tree where most of the content is pdf files (papers), images, and other binary data that does not really need to be version controlled, together with an Org mode file for notes. I also have a pool of Elisp files which contain the personal bits of my Emacs configuration, and I use RCS on them because their histories are not related to one another. It's no good for projects anymore because it is essentially a tool from the era when people developed software on a single computer to which they connected with terminals, so they were all users of the same machine and the code was always in a known location.
One thing people tend to confuse with CVS or SVN is that they think it's a client/server model whereas it's actually a repo/checkout model. The repo is central and can totally reside in a local tree, and checkins from different checkouts go directly to that repo. This is akin to sharing one .git tree between all your checkouts of a single repository.
When did it start to be like this? Making code better is a dick move now? Who rewrites stuff passive-aggressively? What does that even mean?
> It illustrates well why understanding the history of software development can be so beneficial—picking up and re-examining obsolete tools will teach you volumes about the why behind the tools we use today.
As the article re-examines this obsolete version management tool, it becomes clear it's pretty easy and straightforward and can do a lot of the things git can, to a certain degree. On top of that, it's dead easy to set up and use; in fact, its simplicity might be an indication that it's not all that obsolete and might be exactly the right fit for new small personal projects.
I played with Bazaar, Monotone, Mercurial and Darcs but not enough to really appreciate them.
As an aside, I met Larry McVoy at a Linux convention in 1999 and heard him speak about BitKeeper. Those were interesting times.
One of the engineers had made a SVN repository for all our design specs and had cooked up a simple intranet page where the latest version of a design could always be shared by a permanent URL but also a history of all earlier versions.
That was my first experience with version control and I remember thinking it was magic. I never found out who made that, so if you’re reading thanks for going the extra mile :)
Trac was great though, especially for the time: subversion server, source and changeset browser, tickets, wiki, roadmap. Aside from my own personal stuff, I switched several open source projects to it, and got a couple of companies on to it. It quickly became an essential part of the dev stack for me, and was a great way to get the full dev stack* up and running relatively quickly.
* Other than continuous integration, but for me that came later. I worked on a lot of php stuff that could be deployed from source and never really saw the need then. Now I think it's essential and don't work without it.
But using only `cvs commit` would be the equivalent of using a single script with git that adds every single thing in the relevant dirs and then blindly commits it.
In other words, the people coming from cvs and svn and complaining that git added a step for them were either doing an impeccable job of keeping their source dirs clean at all times, or they were implicitly admitting that they weren't keeping track of what they were adding to their own repos.
I would guess there are old projects that fit the former description. But I know from experience there are old projects that clearly fit the latter.
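The "single script that adds everything and blindly commits" boils down to `git add -A && git commit` - and a quick demo shows why it's risky. The repo and file names here are made up.

```shell
git init -q blind-demo
(
  cd blind-demo
  echo 'real code' > main.c
  echo 'oops, a scratch file' > notes.tmp   # gets swept in with everything else
  git add -A                                # the "blind" step: stage everything
  git -c user.email=demo@example.com -c user.name=demo commit -qm 'checkpoint'
)
```

The scratch file is now permanently in history - the situation the staging step exists to prevent.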
SVN also allowed you to commit only specific files, so if your working directory wasn't clean, you could mostly still commit just the parts you wanted to commit.
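Git has an equivalent to that, too: passing pathspecs to `git commit` commits only the listed files, dirty working directory or not. A small sketch with made-up file names:

```shell
git init -q partial-commit-demo
(
  cd partial-commit-demo
  echo 'a' > a.txt
  echo 'b' > b.txt
  git add a.txt b.txt
  git -c user.email=demo@example.com -c user.name=demo commit -qm 'both files'
  echo 'a2' >> a.txt
  echo 'b2' >> b.txt
  # Commit only a.txt, even though b.txt is also modified:
  git -c user.email=demo@example.com -c user.name=demo commit -qm 'just a' -- a.txt
)
```

After this, `b.txt` still shows as modified in `git status` - only the named path made it into the "just a" commit.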
OpenBSD still uses it, and it's the main reason I've only rarely contributed patches. CVS is just that crappy.
When I say it's awful I should admit that like FORTRAN it's not bad for its time. But 1986 was 32 years ago. It's not bad because it's old, but it's not 32 years of good either.
The Commodore 64 was great for its time, but I'm not going to load my version control from a cassette player in 2018.
Version control system timeline for me has been
IBM ClearCase -> CVS -> Perforce -> SVN -> Git
The non linear path is from switching jobs/working on legacy projects, but yeah.
Git has its problems, especially with usability, but it's much better than all the others in that list!
I know of companies who only very recently moved from CVS, and I'd bet there are many that still use it too..
Assertions like this are always dangerous, because inevitably, someone somewhere is still using that tech you think is long dead ;)
I never learned CVS. My peers at work hate me because I think branches are something you prune off a rose bush in winter.
(I use git now)
I was appalled the first time I had to pick apart a mess made by SVN on a branch.
I’ve made peace with git now, but struggled with the many different ways of getting things done in git.
Pure git without GitHub may be a little easier than CVS.
I do wonder if it's pointless though. It's not clear that time spent on stuff like that is really worth it, versus how often you need to go back and check the history.
Maybe it's all just an exercise in OCD.
Personally I feel the best method is using lots of small repos, one for each service or library, that get stitched together by the build system. I know some large tech companies have created such systems and I have experience with one of them working very well (they migrated from perforce). But this is a big change from the monolithic repository model and institutional inertia is very real.
(Perforce will eventually start to hit a wall when you get to the point where money won't buy hardware big enough for Perforce to serve your repo fast, but that's a very long way off for most organizations and I believe there are some mitigations for it.)
But then your teams have to manage dependencies - or your release team has to do it for them. It's very easy to run into diamond-dependency problems or runtime classpath issues.
Woof, I bet this guy's fun to have at standup.
That’s the normal behavior of most version control systems. Only a couple of new distributed systems have separated staging and pushing.
Just forget about commits, there are only pushes.