Another comment suggests that there's "fierce competition" in the version control space. In fact, I think that there's fierce collaboration in the version control space! Durham Goode from Facebook's team gave an interesting talk at Git Merge 2017 about how they're using Hg and some of the interesting user-experience tools that they've built on top of it: https://m.youtube.com/watch?list=PL0lo9MOBetEGRAJzoTCdco_fOK...
I don't personally prefer Mercurial (I like Git, as you might have guessed), but I think that there are some opportunities to learn from their user-experience work and perhaps add or improve some high-level Git commands.
The experience of using Mercurial within FB is quite different from the stock open source one. We have many local extensions, as well as integrations with the rest of our developer infrastructure. We're definitely optimizing for the linear-history monorepo model.
We don't see things as a competition between hg and git, but between good developer experiences and bad. Thus far, we've found it easier to improve the developer experience with hg than with git, so that has been our main focus.
But we feel we're reaching the limits of what we can do with hg, which is why we're investing in Mononoke. We feel that version control in general has been pretty stagnant, and we're hoping to do some neat stuff over the next few years (based on Mononoke and other projects).
Do you think GVFS will also be able to support "thousands of commits an hour"? (It prominently claims to support "millions of files").
I find the "thousands of commits an hour" measurement to be conflating multiple things. I'll split it out based on our experience with the Windows repo.
The Windows repository is hosted in VSTS and handles over 4,000 developers doing their daily work. This includes an average of 3,000+ pushes per day via the Git protocol, and each push may include multiple commits. At least 2,300+ pull requests are completed per day, each of which creates a new merge commit. Also, the build machines run daily and kick off a push as their first action to update a version number. This creates 250+ parallel pushes within a 10-second window, so VSTS handles spikes of concurrent pushes.
(Plus, a nice ad for VSTS – well done).
/git evangelism strike force
We're not enemies. We're not competing. We're trying to make developers' lives better, and there are a number of techniques that can work, and a number of tools that can help them.
Edit: sarcasm aside, you're right in saying that there is competition here. And competition is healthy; it's what drives us to solve problems in different ways and _allows_ us to collaborate. But to suggest that we shouldn't use each other's technologies feels short-sighted and like the wrong kind of competition.
You have to be careful about what you use other people's technology for. If you don't show that you're relying on your own product when you clearly have a chance to do so, you immediately lose trust. First impressions matter a ton there.
SCM hosting is very different from SCM software; the latter is usually... not lucrative.
E.g., I'm sure MSFT hopes to make money from their GVFS effort, but not by selling it – simply by using it as a marketing tool to establish credibility (e.g., for Team Foundation Server and other developer products).
Even if nobody _used_ GVFS, it could still be a profitable project for MSFT if it made everyone think MSFT is an expert in source control. (To be clear, I love this incentive structure, and it's working on me – my respect for Microsoft is growing, thanks in part to GVFS.)
Monotone -> the first cryptographic Merkle-tree distributed source control system, which was a large influence on both Git and Mercurial.
Monotone was written by Graydon Hoare, who later went on to design Rust (I think he was actually designing proto-Rust at the time), so the reference to Monotone nods to both Mononoke's function and its implementation (Mononoke is written in Rust).
"Our engineers were comfortable with Git and we preferred to stay with a familiar tool, so we took a long, hard look at improving it to work at scale. After much deliberation, we concluded that Git's internals would be difficult to work with for an ambitious scaling project."
This leads me to believe that to handle large repos and files within a tool like Git, its internals would need to change so that fewer file accesses (fstat, read, write) are required, and so that certain operations can be batched together to better hide the latency involved in global communication.
So if your remote file server is relatively close it doesn't matter too much and the lag is not noticeable, but if it's across the country or across the world...
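To make the latency point concrete, here's a toy sketch of the round-trip arithmetic. This is not Git's or Mononoke's actual design; RemoteStore, fetch, and fetch_many are invented names for illustration. It just compares per-file round trips with a single batched request to a distant server:

    # Rough illustration of why batching matters when repository data lives on a
    # remote server. RemoteStore, fetch(), and fetch_many() are hypothetical names
    # for this sketch, not a real Git or Mononoke API; the point is the round-trip
    # count, not the interface.

    RTT_SECONDS = 0.08  # assume ~80 ms round trip to a far-away server

    class RemoteStore:
        def __init__(self, objects):
            self.objects = objects

        def fetch(self, oid):
            # one network round trip per object
            return RTT_SECONDS, self.objects[oid]

        def fetch_many(self, oids):
            # one round trip for the whole batch
            return RTT_SECONDS, [self.objects[oid] for oid in oids]

    def per_file_cost(store, oids):
        """Per-file requests: latency grows linearly with file count."""
        return sum(store.fetch(oid)[0] for oid in oids)

    def batched_cost(store, oids):
        """One batched request: one round trip regardless of file count."""
        return store.fetch_many(oids)[0]

    store = RemoteStore({i: b"blob" for i in range(10_000)})
    oids = list(store.objects)

    print(f"per-file: {per_file_cost(store, oids):.2f} s of pure latency")  # 800.00 s
    print(f"batched:  {batched_cost(store, oids):.2f} s of pure latency")   # 0.08 s

With a nearby server the per-file cost is barely noticeable, but at intercontinental latencies the difference between one round trip and thousands dominates everything else.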
The Microsoft PM for GVFS commented there.
It can be easy to forget that the cognitive overhead of version control is pretty extreme. Looking at git through beginners’ eyes is telling. And our curriculum basically gets people to a baseline proficiency with change, add, and commit plus pushing, pulling, and a little merging. In other words, scratching the surface.
Scientists then start asking all the reasonable questions about restoring work under various scenarios, and whether they should store large amounts of data with their code. It’s in those moments where I start asking myself hard questions about git, its usability, and its architecture. Subversion at least had binary diffs and reasonable support for big files.
Git may have won the day, but I cannot help but think that it’s a pretty major compromise (even a retrograde step) compared to what could have / should have been.