The other case seems valid in a sort of "if your build intentionally makes use of mtime, you'll need to look at mtime" way. It seems like an odd thing to do in the first place, though - Makefiles used for deployment rather than for building, I guess?
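For reference, the mtime-based staleness check make does boils down to a timestamp comparison. A minimal sketch in shell (file names here are illustrative, not from any real project):

```shell
# Sketch of an mtime-based staleness check, the way make decides to rebuild.
# Uses a throwaway directory; file names are illustrative.
dir=$(mktemp -d)
touch "$dir/out.o"
sleep 1                     # ensure a strictly newer mtime on the source
touch "$dir/src.c"

# [ a -nt b ] is true when a's mtime is newer than b's
if [ "$dir/src.c" -nt "$dir/out.o" ]; then
  result=rebuild
else
  result=up-to-date
fi
echo "$result"
rm -rf "$dir"
```

This is exactly the check that breaks when mtimes are unreliable (clock skew over NFS, extraction tools preserving old timestamps), which is what pushes people toward content hashing instead.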
A 470K LoC project I have here with >1000 files takes 0.04 seconds to do a full sha256sum traversal on my box from the cache. That's single-threaded.
If I drop caches, it takes approximately 1 second (from spinning rust, not SSD).
You can probably argue that this is a consequence of bad tooling rather than any strength of NFS builds, but it is an example of a non-trivial number of developers frequently building over NFS.
It's the huge projects with millions of files and tens of gigabytes of source and assets that need these optimizations the most, and that's also where checksumming is the most painful.
It's not as unrealistic or monstrous as it sounds. It happens in monorepos when you include all of a project's thousand dependencies (down to things like openssl and libpng).
It seems like solving a problem that could be fixed more easily by just rsyncing or cloning the codebase. Storage is cheap.