When One-Way Latency Doesn't Matter 33 points by Strilanc on Feb 12, 2013 | 18 comments

 There has been a long debate in special relativity over whether the one-way speed of light is even measurable in principle, with some claiming ingenious experimental setups can gauge it, while others criticise those setups for making assumptions about clock synchronisation that presume the result. Then: if one accepts that the one-way speed of light is indeed not measurable, and that only the two-way speed need be homogeneous, it follows that any one-way speed is a result of convention in synchronising clocks, and that thinking in terms of one-way speed is unphysical.
 I wish the article would at least point out the cardinal rule of time duration measurement: always use the same clock to measure the "start" and the "end". There's a lot of verbiage, but the key point is lost.

As an example of a discussion where someone made much ado about what turned out to be a clock issue:

Original article: http://news.ycombinator.com/item?id=5146508

My comment (which reflected the essence of the article): http://news.ycombinator.com/item?id=5146805
 That's right, although keep in mind that 'start' may not be 'when the packet was sent'. For example, when you measure the difference between two paths (e.g. A->D vs A->B->C->D), the 'start' is one arrival time and the 'end' is the other arrival time.

There are actually a few simple consequences I didn't mention in the post. Maybe in the future I'll try to include more. For example, I mention that it's a good thing we can model games as 'all players seeing the same time'. This is probably how most players actually model the game in their heads! If it didn't work, online games would be a lot harder to understand, would differ more from reality, and would generally be less fun!
 Correction: what's important is that the skews from each clock cancel; doing both measurements on one clock is just a special case. For example, if you measure the skewed time for A->D as (arrival time on D) - (send time on A), and then later measure the skewed time for A->B->C->D in the same way, you can get the non-skewed path difference by subtracting the two skewed times.
 That's actually incorrect because of drift: if D's clock runs faster than A's clock and you don't correct for it, the two measurements can't be reliably compared. You'd have to correct for drift on every step (A->B, B->C, C->D) before you start.
 Right, I was assuming negligible clock drift between when each measurement was performed.
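The subtraction trick in this exchange can be sketched in a few lines. The latencies and skew below are invented for illustration, and drift is assumed negligible, as just noted:

```python
# Sketch of skew cancellation (hypothetical numbers; assumes zero drift).
# Clock D runs SKEW seconds ahead of clock A throughout.
SKEW = 5.0       # unknown to the measurer
LAT_AD = 2.0     # true one-way latency A->D
LAT_ABCD = 7.0   # true one-way latency A->B->C->D

def skewed_time(true_latency):
    # (arrival time on D) - (send time on A), polluted by the skew.
    send_on_a = 0.0
    arrive_on_d = send_on_a + true_latency + SKEW
    return arrive_on_d - send_on_a

diff = skewed_time(LAT_ABCD) - skewed_time(LAT_AD)
print(diff)  # 5.0: the skew cancels, leaving the true path difference
```

The unknown skew appears in both skewed measurements with the same sign, so it vanishes in the difference; only drift between the two measurements would break this.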
 In networked realtime multiplayer games you will never see the exact state the game world is currently in, but you will see a very close approximation of it (how close depends on your latency). In reality, between the network packets you receive, you use a lot of tricks like client-side prediction and interpolation/extrapolation to make a very good guess at the current state of the game world. If your latency and that of the other players is reasonable, you don't feel the very minor corrections taking place all the time. Getting this right from an engineering point of view is easier said than done, though :)
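The interpolation half of this can be illustrated with a toy sketch (timestamps and positions are invented; real engines blend whole world snapshots and fall back to extrapolation when packets arrive late):

```python
# Toy client-side interpolation between two received snapshots.
# Timestamps are in milliseconds; positions are 1-D for simplicity.

def interpolate(snap_old, snap_new, render_time):
    """Linearly blend two (timestamp_ms, position) snapshots."""
    t0, p0 = snap_old
    t1, p1 = snap_new
    alpha = (render_time - t0) / (t1 - t0)
    alpha = max(0.0, min(1.0, alpha))  # clamp: never overshoot a snapshot
    return p0 + alpha * (p1 - p0)

# Snapshots arrive every 50 ms; the client renders slightly in the past
# so it always has two snapshots to blend between.
old = (100, 10.0)
new = (150, 14.0)
print(interpolate(old, new, 125))  # 12.0: halfway between the two
```

Rendering a little behind the newest snapshot is the usual design choice: it trades a fixed, small amount of extra visual delay for smooth motion between packets.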
 I ran into this in the domain of networking.

Trying to deduce one-way path latency from the round-trip time ("ping") is impossible without further assumptions. You simply cannot tell whether you are on a wired connection or whether your uplink is via satellite by looking at the ping output alone. You can try to bounce pings off multiple servers, relay pings, and gather RTT properties over any graph path that includes your node and the test servers, and yet you will always come one equation short of solving the linear system you end up with.

-- However --

If you make an assumption about some property of some path, then - boom - you can find the one-way latency for any edge on the graph. For example, if you have two test servers that sit on a backbone, you can assume the path latency between them is about the same in either direction. This gives you an extra linear equation for the system, which in turn lets you calculate the one-way latency between you and either of those servers.

It's a pretty neat problem. Makes you think, laterally.
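The "one equation short" point can be made concrete with a toy sketch (server names and numbers are invented): two different one-way latency assignments that produce identical RTTs on every probe path, so no set of pings alone can separate them:

```python
# Two one-way latency assignments over the edges you<->S1, you<->S2,
# S1<->S2 (hypothetical servers, made-up millisecond values). Both
# produce identical RTTs on every probe path available to 'you'.

def rtts(up1, down1, up2, down2, b12, b21):
    return {
        'you<->S1':         up1 + down1,
        'you<->S2':         up2 + down2,
        'you->S1->S2->you': up1 + b12 + down2,
        'you->S2->S1->you': up2 + b21 + down1,
        'S1<->S2':          b12 + b21,
    }

symmetric  = rtts(up1=10, down1=10, up2=15, down2=15, b12=5, b21=5)
# Shift 5 ms from every inbound edge to every outbound edge:
asymmetric = rtts(up1=15, down1=5,  up2=20, down2=10, b12=5, b21=5)
print(symmetric == asymmetric)  # True: the RTTs cannot distinguish them
```

The shift that maps one assignment to the other is exactly the clock-skew degree of freedom: an assumption about some path's symmetry is what pins it down.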
 The puzzle overstates the importance of figuring out the protocol in practical applications. Why does it matter whether you are in a 2:2 case versus a 3:1 case in something like games (their example)?

The problem of networked games isn't strictly timing. The actual problem is the illusion of real-time play. We want networked players to feel like everything is happening in real time, but their changes to the shared world might, due to latency, be mutually incompatible.

The puzzle is not making a system that is accurate; that's trivial (it involves a lot of waiting). The puzzle is making a system that appears to be in real time.
 If there were a way to distinguish 2:2 from 3:1, it would affect how networked games played out. It would affect how ordering violations were resolved, which is pretty important in games ("Did I block before that punch landed?"). It so happens that the cases can't be distinguished, so it "doesn't matter".

As an analogy: what would the world be like if the speed of light weren't invariant with respect to velocity? It happens to "not matter" because the speed of light is invariant in that way, but it's still an interesting question.
 It certainly matters for adjudication. But perhaps the question players care about is not quite "Did I block before that punch landed?" (a very important question), but rather "I just blocked, but he hit me anyway for some reason."

That is to say, players get upset (1) if something seems to violate their own worldview, due to adjudication. Naively, we could solve that case by running the timesteps of the simulation slower than the greatest latency (or, more accurately, only once heartbeats from all players have been received for a given timestep). That would introduce us to another reason players get upset: (2) if the game is really laggy.

I'm glad you bring up relativity as an example. The best method games have for improving the real-time illusion is locality.
 Slightly off-topic, but what was used to make those animated diagrams?
 I'm more comfortable animating with code than with a tool like Flash or After Effects, so I generate the animations myself with custom code. I use the RenderTargetBitmap and GifBitmapEncoder classes (plus GIMP afterwards to fix the frame delays and repeat count, and to cut down the file size a bit) to translate what's showing in a WPF app into a gif.
 OK, it's 4am, so presumably part of my brain is asleep, but I'm missing something here. If the latencies were unconstrained, I would agree that it is impossible to solve the initial riddle. However, since it states that there are only two possibilities, 2s:2s or 3s:1s, it sure seems solvable to me. You have two equations and three variables, but you don't need to solve for all three. You can eliminate the skew and end up with a single equation in two variables - the two latencies. That allows you to solve for the difference between the latencies, and since the possibilities are constrained, that is sufficient to solve the problem.

Of course, in a real-world scenario the possible latencies would not be discrete, so indeed the problem would not be solvable. But that's not how it was presented. So either the puzzle was poorly constructed as an analogy for the real-world scenario, or (more likely) my sleepy brain is missing something fundamental. Somebody help me out.
 Pick some simple protocol. Draw a sequence diagram of what it does in the 2s:2s case, using a clock skew of 0s. Now draw a sequence diagram of what happens in the 3s:1s case, using a clock skew of 1s or -1s (depending on which direction you put the asymmetry in).

Both diagrams contain the exact same interactions. The protocol will return the same thing in both cases.
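The drawing exercise can also be checked numerically. Here is a minimal ping exchange (the skew sign assumes the server's clock runs 1s behind in the 3s:1s case):

```python
# Sequence-diagram check: a ping exchange in the 2s:2s case with zero
# skew vs the 3s:1s case with -1s skew produces identical timestamps.

def exchange(up, down, skew):
    """Client pings at local t=0; server's clock runs `skew` ahead."""
    sent = 0.0                           # client clock
    server_received = sent + up + skew   # server clock
    client_received = sent + up + down   # client clock (round trip)
    return (sent, server_received, client_received)

print(exchange(up=2, down=2, skew=0))   # (0.0, 2.0, 4.0)
print(exchange(up=3, down=1, skew=-1))  # (0.0, 2.0, 4.0)
```

Every timestamp either party can observe is identical in the two cases, which is why no protocol built from such exchanges can tell them apart.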
 Ah yes, and looking at it in the morning, my resulting single equation is degenerate. I had a sign wrong last night, which suggested I'd solved for the difference between the two latencies. Actually (of course) I'd solved for their sum, which, as the article mentioned, is all you can really determine. facepalm
 The article misses that unequal lag gives less-lagged players the edge.
 There is no advantage to having different one-way latencies, as long as the round-trip time is held constant. A player with all the delay on sending TO the server will have to wait for their actions to be received, whereas a player with a large delay on receiving FROM the server will be reacting to 'old' state. These two situations happen to be indistinguishable.
