> Mocking out the time functions means you don't get any race conditions.
This is a common misapprehension. Actually, even if you fully mock out time, you can still get race conditions, because goroutines can remain active regardless of the state of the clock, and there's no general way to wait until all goroutines are quiescent waiting on the clock. This is not just a theoretical concern - this kind of problem is not uncommon in practice.
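To make the race concrete, here's a minimal sketch, using github.com/benbjohnson/clock as a stand-in for whatever mock clock you prefer (they all have the same problem):

```go
package example

import (
	"testing"
	"time"

	"github.com/benbjohnson/clock"
)

func TestAdvanceRace(t *testing.T) {
	mock := clock.NewMock()
	done := make(chan struct{})
	go func() {
		// If Add below runs before this After registers its timer, the
		// timer's deadline lands beyond the mock's new time and never fires.
		<-mock.After(time.Second)
		close(done)
	}()
	mock.Add(time.Second) // races with the After above
	<-done                // blocks forever whenever Add won the race
}
```

Whether this test passes depends entirely on which goroutine reaches the clock first.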
I think clock-mocking can be very useful for testing hard-to-reach places in leaf packages. But at a higher level, I think it can end up producing extremely fragile tests that depend intimately on implementation details of packages that the tests should not be concerned about at all. In these cases, I've come to prefer configuring short time intervals and polling for desired state as being the lesser of two evils.
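A sketch of what I mean by the polling approach; waitFor here is a hypothetical helper (testify's assert.Eventually or the gotest.tools poll package fill the same role):

```go
package example

import (
	"testing"
	"time"
)

// waitFor polls cond until it returns true or the timeout expires,
// failing the test in the latter case.
func waitFor(t *testing.T, timeout time.Duration, cond func() bool) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return
		}
		time.Sleep(time.Millisecond)
	}
	t.Fatal("condition not reached before timeout")
}
```

Configure the code under test with, say, a millisecond-scale expiry interval, then poll with something like waitFor(t, time.Second, func() bool { return store.Len() == 0 }), where store is whatever the package under test actually exposes.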
OK, correction acknowledged (no sarcasm): mocking out the time functions means you can write test code that doesn't have any race conditions.
"and there's no general way to wait until all goroutines are quiescent waiting on the clock."
Hence my semi-frequent use of "sync" channels, which I described in the previous post.
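The essence of the pattern, as a minimal sketch (again with github.com/benbjohnson/clock; worker and its one-second interval are illustrative): the goroutine registers its timer before signalling, so the signal genuinely means it's safe to advance the clock.

```go
package example

import (
	"testing"
	"time"

	"github.com/benbjohnson/clock"
)

// worker registers its timer *before* signalling on synced, so receiving
// from synced means the clock can be advanced without racing.
func worker(clk clock.Clock, synced chan<- struct{}) {
	for {
		timer := clk.Timer(time.Second)
		synced <- struct{}{} // timer registered; test may now advance
		<-timer.C
		// ... periodic work here ...
	}
}

func TestWorkerTick(t *testing.T) {
	mock := clock.NewMock()
	synced := make(chan struct{})
	go worker(mock, synced)
	<-synced              // worker's timer now exists,
	mock.Add(time.Second) // so this advance cannot miss it
	<-synced              // worker ticked and came around again
}
```

Signalling after registering the timer is the whole trick; signal first and you've just moved the race.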
"But at a higher level, I think it can end up producing extremely fragile tests that depend intimately on implementation details of packages that the tests should not be concerned about at all."
I'd rather have a test that reasonably verifies a package is correct (or at least "passes the race detector consistently"), even if it reaches into some of the private details, than fail to test the package at all. I've found too many bugs that way to give it up.
It may also help to understand my opinion when I point out that I tend to break my packages down significantly more granularly than a lot of the rest of the Go community, which in my opinion is a little too comfortable having the "main app" directory contain many dozens of .go files. My packages end up much smaller, which also mitigates the problem of excessively coupled tests. I have a (not publicly published) web framework, for instance, that is broadly speaking less featureful than some of the Big Names that live all in one directory (though it has some unique features all its own), but is already broken up into 16 modules.
> I'd rather have a test that reasonably verifies a package is correct (or at least "passes the race detector consistently"), even if it reaches into some of the private details, than fail to test the package at all. I've found too many bugs that way to give it up.
I agree with this, with the caveat that if you can test a package with regard to its public API only, it is desirable to do so because it gives much greater peace of mind when doing significant refactoring.
The difficulty comes in larger software where the package you're testing uses other packages as part of its implementation which also have their own time-based logic. Do we export all those synchronisation points so that importers can use them to help their tests too? If we do, then suddenly our API surface is significantly larger and more fragile - what would have been an internal fix can become a breaking change for many importers.
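Even the smallest version of such an export ends up looking something like this (all names here hypothetical):

```go
package scheduler

import "time"

// Clock abstracts the time functions so tests can substitute a mock.
type Clock interface {
	After(d time.Duration) <-chan time.Time
}

// Scheduler runs jobs on a periodic internal loop.
type Scheduler struct {
	Clock Clock

	// Synced, if non-nil, receives a value each time the internal loop
	// is about to block on the clock. It exists purely so that importers'
	// tests can synchronise with the loop - but it is public API now, and
	// changing when it fires (or removing it) breaks those tests.
	Synced chan<- struct{}
}
```

Every importer's test that receives from Synced now depends on exactly when the internal loop signals, so restructuring that loop stops being an internal fix.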