
The problem is that a pure software solution can't distinguish a clock offset from asymmetry between the forward and backward path delays. Consequently, no such solution can guarantee an error better than RTT/2. If your RTT is 2 microseconds, it's impossible to guarantee synchronization to within hundreds of nanoseconds without incorporating additional information, regardless of jitter.
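
To make that concrete, here's a toy sketch of the standard four-timestamp exchange (made-up numbers and a hypothetical estimate() helper, nothing to do with any particular implementation): the symmetric-delay offset estimate silently absorbs half of whatever path asymmetry exists, and the timestamps alone can't tell you which case you're in.

    # NTP-style exchange: client sends at t1, server receives at t2 and
    # replies at t3, client receives the reply at t4 (all numbers made up).
    def estimate(t1, t2, t3, t4):
        rtt = (t4 - t1) - (t3 - t2)           # measured round-trip time
        offset = ((t2 - t1) + (t3 - t4)) / 2  # assumes symmetric one-way delays
        return rtt, offset

    true_offset = 0.0   # the two clocks actually agree exactly
    rtt = 2e-6          # 2 microsecond round trip

    # Symmetric paths (1 us each way) vs. fully asymmetric (2 us out, 0 back):
    # the second case gives an estimate that is off by RTT/2 = 1 us, and
    # nothing in the four timestamps distinguishes it from the first.
    for fwd in (rtt / 2, rtt):
        back = rtt - fwd
        t1 = 0.0
        t2 = t1 + fwd + true_offset
        t3 = t2                         # assume an instant server reply
        t4 = t3 - true_offset + back
        _, est = estimate(t1, t2, t3, t4)
        print(f"fwd={fwd * 1e6:.1f}us  offset_error={abs(est - true_offset) * 1e9:.0f}ns")

Jitter you can average away with more samples; this asymmetry term you can't, because it biases every sample the same way.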

Hence my initial question: are they adding extra information to actually achieve those stated goals, or are their algorithms just "probably" better, and in the latter case, what are the use cases? Distributed transactions and whatnot are fundamentally broken if your "better" synchronization might still be wrong.




You would enjoy reading the paper. They make a few simplifying assumptions that turn out to hold up pretty well in a datacenter environment. They also use graph cycles to set clocks, which is a very different approach. My guess is that the precision of their approach comes at an accuracy cost, and the clocks are not particularly accurate.
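
If I understand the cycle idea correctly (a toy illustration of the general principle, not their actual algorithm): pairwise offset estimates around any closed loop of machines must sum to zero if they're correct, so the nonzero residual you actually measure is pure estimation error and can be used to detect and smooth out bad estimates. Something like:

    # Toy cycle-consistency check on a triangle of machines A, B, C.
    # Each value is the estimated offset of the second clock relative to the
    # first; the true offsets around a closed cycle must sum to zero.
    measured = {
        ("A", "B"): 130e-9,   # = true offset + measurement noise (made up)
        ("B", "C"): -45e-9,
        ("C", "A"): -70e-9,
    }

    residual = sum(measured.values())       # nonzero only because of noise
    correction = residual / len(measured)   # spread the error evenly across edges

    adjusted = {pair: est - correction for pair, est in measured.items()}

    print(f"cycle residual before: {residual * 1e9:.1f} ns")
    print(f"cycle residual after:  {sum(adjusted.values()) * 1e9:.1f} ns")

Spreading the residual evenly is the simplest possible correction; with many overlapping loops you could instead solve a least-squares problem over all of them.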


You're right, I did enjoy reading the paper.

The graph cycles are neat, but they even admit in their own paper [0] that the approach is limited to half the max path asymmetry (RTT/2 if the asymmetry is totally unknown). Pure software clocks have hard lower bounds on accuracy that can't be overcome without additional information; digging elsewhere on their site, it looks like they do actually integrate such sources, like GPS antennas.

The rest of it is actually pretty interesting; in a datacenter context you might very well have low asymmetry, and everything else seems well done and likely to be much better than NTP for common scenarios.
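
To put rough numbers on that bound (back-of-the-envelope arithmetic, not figures from the paper, and worst_case_error() is just a name I made up): the guaranteeable worst-case error is half of whatever forward/backward delay difference you can't rule out, so it only drops below RTT/2 if you can actually bound the asymmetry.

    # Worst-case error of a symmetric-delay offset estimate, given how much
    # forward/backward path asymmetry you can rule out (rough illustration).
    def worst_case_error(rtt, max_asymmetry=None):
        asym = rtt if max_asymmetry is None else min(max_asymmetry, rtt)
        return asym / 2

    rtt = 2e-6
    print(worst_case_error(rtt))           # 1e-06: 1 us if asymmetry is unknown
    print(worst_case_error(rtt, 200e-9))   # 1e-07: 100 ns if asymmetry <= 200 ns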

[0] https://www.usenix.org/system/files/conference/nsdi18/nsdi18...



