The display was technically challenging, but in the meaty way that developers often relish. The clock skew, however, was not.
For liability purposes it was sometimes necessary to know whether Event A happened before Event B, which meant normalizing all of the events across time zones and then correcting for clock drift on top of that.
That experience and a bunch of others (including statistics classes and studying the Java Memory Model, which other languages have since borrowed or stolen) have left me with a lingering doubt about how we record distributed activities.
I really kind of feel like we [all] need a new data model [like the one they mention here] where dependent events are recorded along with their dependencies. I don't know exactly what that would look like, but I think it would help a great deal in consensus situations where you have to resolve a conflict, or even just for displaying a sequence of events in the proper order.
It feels like we keep trying to get the exact nanosecond when something happens, but the only thing I ever see humans use that information for is to reverse engineer a sequence of events that resulted in a peculiar state in the system.
[edit: tie-in to article]
From the abstract:
> The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.
You know, it seems like most of my quality of life improvements over the past 20 years have been due to my peers and me finally acting on much older information. The future is here, it's just unevenly distributed.
It is only infrequently that I encounter something that still feels properly new under any kind of scrutiny, instead of revealing itself to be a refinement of something that was already known. Off the top of my head, I can think of escape analysis, the Burrows-Wheeler transform, and the object ownership semantics in Rust. I'll throw Raft on there too, since the joke is that only 12 people understood Paxos.
Lamport clocks do exist and can provide a partial or total order of events.
Not sure if that would've helped with your problem, but such algorithms do exist. It just feels like they are rarely used (a subjective impression).
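For anyone who hasn't seen one, here's a minimal sketch of a Lamport clock in Python (my own illustration, not from any particular library): each process keeps a counter, ticks it on local events and sends, and takes the max on receive. Breaking ties by process ID turns the partial order into a total one, as in the paper.

```python
class LamportClock:
    """Minimal Lamport clock sketch: a counter plus a merge rule."""

    def __init__(self, process_id: str):
        self.process_id = process_id
        self.counter = 0

    def local_event(self) -> int:
        """Tick the clock for an event that happens on this process."""
        self.counter += 1
        return self.counter

    def send(self) -> int:
        """Tick and return the timestamp to attach to an outgoing message."""
        self.counter += 1
        return self.counter

    def receive(self, message_timestamp: int) -> int:
        """Merge the sender's timestamp: take the max, then tick."""
        self.counter = max(self.counter, message_timestamp) + 1
        return self.counter

    def total_order_key(self) -> tuple:
        """(counter, process_id) gives an arbitrary but consistent total order."""
        return (self.counter, self.process_id)
```

The counter alone only gives the partial order (if a happened-before b then C(a) < C(b), but not the converse); the (counter, process_id) tiebreak is what lets you totally order events when you need to.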
Agreed, though you do see them a lot in CRDTs: https://en.wikipedia.org/wiki/Conflict-free_replicated_data_.... Just about every non-trivial CRDT has something akin to a logical clock embedded inside it.
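As a toy illustration of that point (my own sketch, not any specific CRDT library), here's a last-writer-wins register whose merge rule is driven entirely by an embedded logical timestamp plus a node-ID tiebreak:

```python
from dataclasses import dataclass

@dataclass
class LWWRegister:
    """Toy last-writer-wins register; the (timestamp, node_id) pair is the
    embedded logical clock that makes merges deterministic."""
    value: object = None
    timestamp: int = 0
    node_id: str = ""

    def set(self, value, clock: int, node_id: str) -> None:
        # Writes carry a logical timestamp supplied by the caller's clock.
        if (clock, node_id) > (self.timestamp, self.node_id):
            self.value, self.timestamp, self.node_id = value, clock, node_id

    def merge(self, other: "LWWRegister") -> None:
        # Merging two replicas keeps whichever write is "later" in the
        # logical order; concurrent writes are resolved by node_id.
        if (other.timestamp, other.node_id) > (self.timestamp, self.node_id):
            self.value, self.timestamp, self.node_id = (
                other.value, other.timestamp, other.node_id)
```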
Einstein said you can't absolutely order events -- and with widely enough distributed systems and small enough time quanta, sooner or later you're going to run into relativistic implications.
Probably not a problem for most applications we currently have to deal with, but one day -- soon enough that we're already giving new protocols names like "Interplanetary File System" -- our databases will spread out among the stars, and how will we handle time and event ordering then?
Even between stars the difference isn't that big, but offsets don't matter at that scale anyway. When it takes a decade to send an email from one system to another, it doesn't matter if their timelines are offset by a week.
Since the 1970s, TAI construction has had to compensate for the difference in gravitational potential between the physical locations of the atomic clocks in laboratories around the world and an ideal surface of equal gravitational potential.
Nowadays, BIPM and other laboratories routinely talk about general relativistic corrections across the width of the measuring devices. To quote Appendix 2 of the SI Brochure:
> In 2013, the best of these primary standards produces the SI second with a relative standard uncertainty of some parts in 10^16. Note that at such a level of accuracy the effect of the non-uniformity of the gravitational field over the size of the device cannot be ignored. The standard should then be considered in the framework of general relativity in order to provide the proper time at a specified point, for instance a connector.
Being able to measure the drift at a specific point doesn't mean it's relevant to computers timestamping their calculations. If a computer is ten nanoseconds off, that's basically the same as it being one rack to the left, or having some slack in the cable. There's no real effect.
Just syncing your clock once a day is enough to let you completely ignore the effects of relativity.
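Rough numbers, as a back-of-envelope check (my own arithmetic, assuming a ~1 km altitude difference between two machines): the fractional rate difference from gravitational time dilation near the surface is about gΔh/c², which works out to nanoseconds per day.

```python
# Back-of-envelope: gravitational time dilation between two computers
# at different altitudes on Earth (assumed 1 km difference).
g = 9.81           # m/s^2, surface gravity
delta_h = 1000.0   # m, assumed altitude difference
c = 299_792_458.0  # m/s, speed of light

fractional_rate = g * delta_h / c**2      # ~1.1e-13
drift_per_day = fractional_rate * 86_400  # seconds per day

print(f"fractional rate: {fractional_rate:.2e}")
print(f"drift per day:   {drift_per_day * 1e9:.1f} ns")
# ~9.4 ns/day -- several orders of magnitude below the millisecond-level
# error typical of NTP over the public internet, so a daily sync dwarfs it.
```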
Edit: A strict-consistency Earth > Mars transaction would actually take longer, because 30 minutes is just an average round trip, and you'd need more than one.
So for interplanetary systems, yeah, one planet is going to be the central transaction handler and everyone else is going to have to deal with it. For things that require transactions, you'll likely need enough cash in the bank in your Mars account if you want quick transactions, and you can expect wiring money from Earth to take a while.
Coming back to the current day, if you need low-latency global transactions just here on Earth, then that's probably how you'd have to design it. You prime each region with a certain account "limit" of whatever it is you're transacting, to use in local transactions, and when that limit gets low you transact some more from your central data store back into your regional account, or something along those lines. It'd be a two-stage thing.
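A hand-wavy sketch of that two-stage idea (all names hypothetical, including the central store's `transfer_to_region` call): each region debits against a locally held allocation and only goes back to the central store when it runs low, so the common case never pays cross-region latency.

```python
class RegionalAccount:
    """Hypothetical sketch of the two-stage scheme described above:
    fast local debits against a pre-allocated regional limit, with an
    occasional slow top-up from the central store."""

    def __init__(self, central_store, region: str,
                 refill_threshold: float, refill_amount: float):
        self.central = central_store          # slow, strongly consistent
        self.region = region
        self.local_balance = 0.0              # fast, locally owned
        self.refill_threshold = refill_threshold
        self.refill_amount = refill_amount

    def debit(self, amount: float) -> bool:
        """Local transaction: no cross-region round trip in the common case."""
        if self.local_balance < amount:
            self._top_up()                    # rare slow path
        if self.local_balance < amount:
            return False                      # genuinely out of funds
        self.local_balance -= amount
        if self.local_balance < self.refill_threshold:
            self._top_up()                    # refill before we run dry
        return True

    def _top_up(self) -> None:
        """Slow path: one round trip to the central store to move funds
        into this region's allocation (hypothetical API)."""
        granted = self.central.transfer_to_region(self.region, self.refill_amount)
        self.local_balance += granted
```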
I remember being very impressed by Spanner when I first heard about it, but as you can see in the OP, it makes a lot of its consistency guarantees through sheer brute force. It simply throws power/resources at problems in ways that would generally be considered impractical when designing a distributed system normally. That approach does of course have its limits (not to say it isn't still impressive; it's a great system).
Libraries for handling time would need to include functions for converting times between frames.
(I work at Cockroach Labs and gave an internal talk on this paper some time ago).
An atomic clock is not that expensive: the cost of a rubidium master is of the same order as that of a GPS master.
Pretty similar form factor.
Of course, it turns out there aren't that many home applications for an atomic clock, other than collecting precision metrology equipment. And if you're running a data centre and want higher precision than NTP, chances are you'll choose a PTP grandmaster clock at the high end, or a GPS receiver with a 1PPS output at the low end, rather than buying second-hand parts from eBay.
Of all the people who have a rubidium standard at home, 70% are using it for an electronics lab, amateur radio, or NTP at home. But the remaining 30% are another large user base too - audiophiles. Many audiophiles claim that jitter and phase noise in the clock signal significantly affect audio quality, and the most extreme audiophiles use a lab-grade frequency standard, rubidium or better, and feed it to all their Hi-Fi gear.
P.S.: Given all the factors that affect audio quality, is the phase noise from a PLL synthesizer of reasonable quality really a major factor? And even if it is, does a rubidium standard really have any benefit beyond diminishing returns compared to a crystal oven? Well, of course, these are the questions that are never answered by audiophiles.
> You can buy a used rubidium frequency standard on ebay
There is a continuous supply of used rubidium standards coming from retired telecommunication and lab equipment, and they are the cheapest atomic standards available. The catch is that the rubidium inside the discharge lamp eventually gets depleted during operation, usually within 10 years; once its life ends, it's useless and needs to be rebuilt completely by the manufacturer. So check the manufacturing date if you are going to power it 24x7.
The really expensive ones are cesium standards, such as the HP 5061A. Recently, hydrogen standards have also seen some use.
Cesium oscillators on the other hand do deplete their cesium (by design), as mentioned by CuriousMarc in the video. The tube in his unit was probably replaced at least twice, at a cost of >80k. The high performance tubes fare even worse because they "burn" through their cesium at a 3x rate.
The 5071 successor by HP-Agilent-Symmetricom-Microsemi is the most frequently used COTS clock that contributes to UTC.
Thanks for the tip, that's interesting. I never knew that they could be renewed in this way.
> Cesium oscillators on the other hand do deplete their cesium (by design), as mentioned by CuriousMarc in the video. The tube in his unit was probably replaced at least twice, at a cost of >80k.
Sad story. Recently I was browsing a web store that sells decommissioned U.S. military equipment. I was there to search for some cheap RF power meters, and I was surprised to see an HP 5071 in the listing. The vendor noted that he knew nothing about the equipment and that information was welcome. For a moment, I thought about sending him a service manual from the HP archive, before I realized it was simply impossible to get it up and running again without replacing the tube :(
> The 5071 successor by HP-Agilent-Symmetricom-Microsemi
Aha, the old Hewlett-Packard was truly a unique company. Its electronic technology has at least three separate chains of succession.
Test Equipment: HP-Agilent-Keysight
Semiconductor: HP-Agilent-Avago-Broadcom (Broadcom ended production of most HP parts, so now it's mostly dead, RIP...)
Frequency Standard: HP-Agilent-Symmetricom-Microsemi-Microchip
Ideally, the clock frequency should be as stable as possible. Unfortunately, all clock oscillators have inherent short-term instabilities, long-term instabilities, and a non-zero temperature coefficient. Short-term instability, known as phase noise (in the frequency domain) or jitter (in the time domain), is a particular concern. As long as the audio system is properly optimized and characterized, jitter is not an issue: a good DAC will normally have jitter around -90 dBc or lower, which is probably negligible. NwAvGuy (an audio engineer known for his criticism of baseless audiophile practices) has a good explanation of clock jitter: https://nwavguy.blogspot.com/2011/02/jitter-does-it-matter.h...
But of course, some audiophiles want to drive their Hi-Fi gear with the best oscillator available, so that it has the lowest jitter and the minimum absolute frequency error and temperature coefficient, even if the benefit is dubious to other people. And they realized that a rubidium frequency standard is the best clock oscillator they can find. First, it's possible to DIY. All you need to do is understand how the digital part of the Hi-Fi system works and find the crystal responsible for generating the system/ADC/DAC clock. You simply remove the crystal and inject an external clock signal from the rubidium standard into the chip. 10 MHz is a common output frequency for a standard oscillator; if the audio circuit also uses a 10 MHz clock, the rubidium standard's output can be used directly. If not, and the frequency ratio of the two is an integer, a frequency multiplier or divider can generate the needed clock frequency from the rubidium standard's output. If the ratio isn't an integer, a PLL frequency synthesizer can be used.
Second, in an electronics lab, many instruments have a "10 MHz Reference Input" port so their internal timing can be derived from an external reference. That way all the signal generators, frequency counters, and spectrum analyzers are locked to one master clock oscillator (which can be a crystal oven, WWVB radio, GPS, or atomic standard) for consistency: the equipment's frequencies don't drift in different directions, everything gets a good clock with consistent performance, and there's no disagreement between instruments. Some professional audio equipment for music production also allows an external reference to be used for the same purpose. One can simply plug a 10 MHz standard oscillator, including a rubidium standard, into the BNC connector of the audio gear.
I think one answer is in this blog post:
> A simple statement of the contrast between Spanner and CockroachDB would be: Spanner always waits on writes for a short interval, whereas CockroachDB sometimes waits on reads for a longer interval. How long is that interval? Well it depends on how clocks on CockroachDB nodes are being synchronized. Using NTP, it’s likely to be up to 250ms. Not great, but the kind of transaction that would restart for the full interval would have to read constantly updated values across many nodes. In practice, these kinds of use cases exist but are the exception.
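To make the contrast concrete, here's a rough sketch (my own simplification, not actual Spanner or CockroachDB code) of the two strategies, both parameterized by the maximum clock offset between nodes:

```python
import time

# Assumed bound on clock skew between nodes: ~7 ms with TrueTime-style
# hardware, up to ~250 ms when relying on NTP (per the quoted blog post).
MAX_CLOCK_OFFSET = 0.007  # seconds

def spanner_style_commit(commit_timestamp: float) -> None:
    """Commit wait: always sleep out the uncertainty before acknowledging
    a write, so the chosen timestamp is definitely in the past everywhere."""
    time.sleep(max(0.0, commit_timestamp + MAX_CLOCK_OFFSET - time.time()))
    # ...only now acknowledge the commit to the client...

def cockroach_style_read(read_timestamp: float, value_timestamp: float) -> float:
    """Read restart: if we see a value whose timestamp falls inside our
    uncertainty window, we can't tell whether it committed before or after
    our read, so bump the read timestamp and retry at the higher time."""
    if read_timestamp < value_timestamp <= read_timestamp + MAX_CLOCK_OFFSET:
        return value_timestamp  # new, higher read timestamp to retry at
    return read_timestamp       # no restart needed
```

The asymmetry matches the quote: the first function pays a small, fixed wait on every write, while the second pays nothing in the common case but can restart reads when it races with recent writes inside the uncertainty window.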
Most recent spec is IEEE 1588-2019.
I don't know if you need that for distributed transactions or not. The logic for deciding whether or not the time is "good" is probably a large part of the complexity of this scheme. Better hardware makes the software simpler and vulnerable to fewer failure modes.