Google Cloud Spanner uses atomic clocks to synchronize timestamps across distributed databases. CockroachDB does not require atomic clocks, but I believe there is an "atomic clock mode" available. This approach sounds like it doesn't use atomic clocks, but instead machine learning algorithms to detect offsets.
Would like to understand if the founders consider their approach to be a viable alternative to "atomic clock mode", but without actual atomic clocks.
I thought even Spanner only relied on sync in the 100us-1ms range. Maybe I'm out of date, or it's more about having clocks advance at a very reliable rate? Usually "atomic clocks" means "a GPS appliance gets time from atomic clocks in GPS satellites", but maybe not in this case.
So Spanner uses TrueTime as an integral part of its concurrency control algorithm. To preserve external consistency, Spanner sometimes has to wait out an uncertainty interval at transaction commit. The tighter the bounds on timestamp intervals from TrueTime, the less this waiting impacts performance. I've heard through the grapevine that TrueTime currently operates much better than the numbers in the original paper, but can't confirm whether that's true.
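For anyone curious what "waiting out the uncertainty interval" looks like, here's a minimal Python sketch. The TrueTimeStub class, its now() method returning an (earliest, latest) pair, and the 7ms uncertainty figure are all made up for illustration; this is not Spanner's actual API, just the shape of the commit-wait idea.

```python
import time

class TrueTimeStub:
    """Hypothetical TrueTime-like source: now() returns an (earliest, latest)
    interval guaranteed to contain the true absolute time."""
    def __init__(self, uncertainty_ms=7.0):
        self.uncertainty_s = uncertainty_ms / 1000.0

    def now(self):
        t = time.time()
        return (t - self.uncertainty_s, t + self.uncertainty_s)

def commit_wait(tt):
    """Pick a commit timestamp, then block until the earliest possible
    current time has passed it, so no node anywhere can later observe a
    clock reading below the commit timestamp once the commit is visible."""
    _, commit_ts = tt.now()           # choose the latest bound as the timestamp
    while tt.now()[0] < commit_ts:    # wait until earliest possible time > ts
        time.sleep(0.001)
    return commit_ts

tt = TrueTimeStub(uncertainty_ms=7.0)
ts = commit_wait(tt)                  # blocks roughly 2x the uncertainty
print("committed at", ts)
```

The point of the sketch is that the blocking time scales directly with the clock uncertainty, which is why tighter TrueTime bounds translate straight into lower commit latency.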
They use GPS for global sync and a local atomic clock as a reference. From conversations over the last few years, this seems to be the common setup at big DCs, as the cost of a couple of atomic clocks is minuscule compared to the scale of everything else.