Reading that, it sounds like they reached the same conclusion: they didn't want to mess with git's internals, so they wrote a filesystem to run underneath git.
From what I've heard from friends who work there, the RTT to the underlying filesystem adds quite a lot of time to daily operations, especially if you're not working on the west coast of the USA. I was told a pull can take 45 minutes.
This leads me to believe that for a tool like git to handle large repos and files, its internals would need to change a bit so that fewer file accesses (fstat, read, write) are required, and so that certain operations can be batched together to better hide the latency of global communication.
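To make that concrete, here's a rough sketch of the difference (the fs client and its stat_many call are hypothetical, just to show where the round-trips go; this isn't anything git actually does):

    def status_naive(fs, paths):
        # one network round-trip per file: total cost ~ len(paths) * RTT
        return {p: fs.stat(p) for p in paths}

    def status_batched(fs, paths, chunk=1000):
        # one round-trip per chunk of paths: total cost ~ (len(paths) / chunk) * RTT
        out = {}
        for i in range(0, len(paths), chunk):
            out.update(fs.stat_many(paths[i:i + chunk]))
        return out

With a 100 ms RTT and 100,000 files, the naive loop spends hours just waiting on the network, while the batched version spends seconds, which is the whole point of hiding latency behind bigger requests.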
If you're working with a Git repo on a remote filesystem, you're doing it wrong. Git is not designed for remote filesystems; it relies on certain file operations being extremely fast, so it works best with local filesystems. With Git, you want to clone the entire repository locally and work with it locally. That's the idea behind distributed version control: every committer has a complete copy of the repository. With a remote filesystem you're effectively centralizing your repository.
Yeah, the git metadata is on a remote file server; that's the entire point of the Microsoft file system extensions for running a huge git monorepo. The code and assets you're working on are local, but the history and other metadata are stored remotely and lazily loaded.
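Roughly the shape of the lazy-loading idea (hypothetical names, a sketch rather than the actual VFS for Git code):

    import os

    class LazyObjectStore:
        # Serve an object from the local cache if we already have it;
        # otherwise pay one round-trip to the remote server and cache it.
        def __init__(self, cache_dir, remote):
            self.cache_dir = cache_dir  # local on-disk cache
            self.remote = remote        # hypothetical client with fetch_object(oid)

        def get(self, oid):
            path = os.path.join(self.cache_dir, oid)
            if os.path.exists(path):                 # cache hit: no network
                with open(path, "rb") as f:
                    return f.read()
            data = self.remote.fetch_object(oid)     # cache miss: one RTT, on demand
            with open(path, "wb") as f:              # keep it local for next time
                f.write(data)
            return data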
So if your remote file server is relatively close, it doesn't matter too much and the lag isn't noticeable, but if it's across the country or across the world...