Then: if one accepts that the one-way speed of light is indeed not measurable, and that only the two-way speed of light need be homogeneous, it follows that any one-way speed is a result of convention in synchronizing clocks, and that thinking in terms of anything but the two-way speed is unphysical.
As an example of a discussion where someone made much ado about what turns out to be a clock issue:
Original article: http://news.ycombinator.com/item?id=5146508
My comment (which reflected the essence of the article): http://news.ycombinator.com/item?id=5146805
For example, when you measure the difference between two paths (e.g. A->D vs A->B->C->D), the 'start' is one arrival time and the 'end' is the other arrival time.
There are actually a few simple consequences I didn't mention in the post. Maybe in the future I'll try to include more.
For example, I mention that it's a good thing we can model games as 'all players seeing the same time'. This is probably how most players actually model the game in their head! If it didn't work, online games would be a lot harder to understand, differ more from reality, and generally be less fun!
For example, if you measure the skewed time for A->D as (arrival time on D) - (send time on A) and then later measure the skewed time for A->B->C->D in the same way, you can get the non-skewed path difference by subtracting the two skewed times.
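A minimal sketch of that cancellation, with made-up numbers (the offset and latencies are hypothetical): each one-way measurement is skewed by the unknown disagreement between A's and D's clocks, but since both paths start on A's clock and end on D's clock, subtracting the two skewed times removes the offset entirely.

```python
# Clocks on A and D disagree by an unknown offset, so every one-way
# measurement between them is skewed, but the *difference* of two paths
# measured against the same pair of clocks is offset-free.
# All numbers are made up for illustration.

offset_D = 37.0      # ms: D's clock runs ahead of A's (unknown in practice)

true_AD   = 50.0     # ms, direct path A -> D
true_ABCD = 90.0     # ms, relayed path A -> B -> C -> D

# What we actually measure: (arrival time on D's clock) - (send time on A's clock)
skewed_AD   = true_AD   + offset_D
skewed_ABCD = true_ABCD + offset_D

path_difference = skewed_ABCD - skewed_AD   # the offset cancels
print(path_difference)                      # 40.0 = true_ABCD - true_AD
```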
If your latency and that of the other players is reasonable, you don't feel the very minor corrections taking place all the time. Getting this right from an engineering point of view is easier said than done, though :)
Trying to deduce one-way path latency from the round-trip time ("ping") is impossible without further assumptions. You basically cannot tell whether you are on a wired connection or whether your uplink is via a satellite by looking at just the ping output. You can try to bounce pings off multiple servers, relay pings, and gather RTT properties over any graph path that includes your node and the test servers, and yet you will always come up one equation short of solving the linear system you'd end up with.
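A toy demonstration of why (all latencies made up): skewing every edge by a per-node offset, exactly as if each node's clock were reset, changes the one-way latencies but leaves every round-trip time intact, so no amount of pinging can tell the two assignments apart.

```python
# Two different one-way latency assignments over nodes U (you), S1, S2
# that produce identical round-trip times.

def rtt(lat, i, j):
    return lat[(i, j)] + lat[(j, i)]

true_lat = {("U", "S1"): 50, ("S1", "U"): 50,
            ("U", "S2"): 30, ("S2", "U"): 30,
            ("S1", "S2"): 20, ("S2", "S1"): 20}

# Skew each edge by per-node offsets (like resetting each node's clock):
# lat'(i -> j) = lat(i -> j) + offset[i] - offset[j] leaves every RTT intact.
offset = {"U": 10, "S1": 0, "S2": 0}
skewed = {(i, j): l + offset[i] - offset[j] for (i, j), l in true_lat.items()}

for i, j in [("U", "S1"), ("U", "S2"), ("S1", "S2")]:
    assert rtt(true_lat, i, j) == rtt(skewed, i, j)

print(true_lat[("U", "S1")], skewed[("U", "S1")])  # 50 vs 60: same pings, different one-way
```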
-- However --
If you make an assumption about some property of some path, then - boom - you can find the one-way latency for any edge on the graph. For example, if you have two test servers that sit on a backbone, you can assume the path latency between them is about the same either way. This gives you an extra linear equation for the system, and this in turn lets you calculate the one-way latency between you and either of these servers.
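A sketch of how the symmetry assumption turns into a usable equation, with made-up numbers and a hypothetical relay setup (S1 forwards a packet to S2 and back as part of your ping): without the assumption, the backbone round trip could split 5/35 or 20/20 between the two directions; assuming symmetry picks the split and yields a one-way number.

```python
# Hypothetical measurements, all in ms:
rtt_u_s1  = 100.0   # U -> S1 -> U
rtt_relay = 140.0   # U -> S1 -> S2 -> S1 -> U (S1 relays through S2)

# The relay round trip is the plain round trip plus one backbone round
# trip: rtt_relay = rtt_u_s1 + lat(S1->S2) + lat(S2->S1).
backbone_rtt = rtt_relay - rtt_u_s1

# The assumption lat(S1->S2) == lat(S2->S1) is the extra linear equation;
# with it, the backbone's one-way latency falls out directly.
backbone_one_way = backbone_rtt / 2
print(backbone_one_way)   # 20.0 ms each way, under the symmetry assumption
```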
It's a pretty neat problem. Makes you think, laterally.
The problem of networked games isn't strictly timing. The actual problem is the illusion of real time play. We want networked players to feel like everything is happening in real time. But their changes to the shared world might, due to latency, be mutually incompatible.
The puzzle is not making a system that is accurate—that's trivial (it involves a lot of waiting). The puzzle is making a system that appears to be in real time.
As an analogy, what would the world be like if the speed of light wasn't invariant with respect to velocity? It happens to "not matter" because the speed of light is invariant in that way, but it's still an interesting question.
That is to say, players get upset (1) if something seems to violate their own worldview, due to adjudication. Naively, we could solve that case by advancing the simulation's timesteps no faster than the greatest latency (or, more accurately, only once heartbeats from all players have been received for a given timestep). That would introduce us to another reason players get upset: (2) if the game is really laggy.
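That naive scheme is essentially lockstep. A minimal sketch (all names and the toy state are hypothetical): the simulation only advances to the next tick once inputs from every player for the current tick have arrived, so everyone sees the same world, at the cost of stalling on the slowest connection.

```python
def apply_input(state, move):
    # Toy state: each player's position on a line.
    player, delta = move
    state = dict(state)
    state[player] = state.get(player, 0) + delta
    return state

def lockstep_advance(state, tick, players, inbox):
    """Advance one tick only if every player's input for it has arrived."""
    arrived = inbox.get(tick, {})
    if not all(p in arrived for p in players):
        return state, tick           # stall until the laggard's input lands
    for p in players:
        state = apply_input(state, (p, arrived[p]))
    return state, tick + 1

players = ["alice", "bob"]
inbox = {0: {"alice": 1, "bob": -2}, 1: {"alice": 3}}  # bob's tick-1 input is late

state, tick = {}, 0
state, tick = lockstep_advance(state, tick, players, inbox)  # advances: tick -> 1
state, tick = lockstep_advance(state, tick, players, inbox)  # stalls: tick stays 1
print(tick, state)  # 1 {'alice': 1, 'bob': -2}
```

Every stall here is felt by every player, which is exactly why games abandon pure lockstep for prediction and correction.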
I'm glad you bring up relativity as an example. The best method games have for improving the real-time illusion is locality.
I use the RenderTargetBitmap and GifBitmapEncoder classes (plus GIMP afterwards to fix the frame delays and repeat count, and cut down the file size a bit) to translate what's showing in a WPF app into a gif.
Of course, in a real world scenario the possible latencies would not be discrete, so indeed the problem would not be solvable. But that's not how it was presented. So either the puzzle was poorly constructed as an analogy for the real-world scenario, or (more likely) my sleepy brain is missing something fundamental. Somebody help me out.
Both diagrams contain the exact same interactions. The protocol will return the same thing in both cases.
A player with all delay on sending TO the server will have to wait for their actions to be received, whereas the player with a large delay on receiving FROM the server will be reacting to 'old' states. These two situations happen to be indistinguishable.
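A concrete way to see the indistinguishability (toy numbers, hypothetical setup): split the same round-trip time as 95/5 or 5/95 between the send and receive legs. The client only observes when echoes come back, and the server only observes the spacing between arrivals (its clock origin being unknown), and both observations come out identical.

```python
RTT = 100  # ms, made up; every split below must sum to this

def observations(up, down, send_times):
    # Client's view: echo replies arrive at send + up + down.
    client = [t + up + down for t in send_times]
    # Server's view: requests arrive at send + up, but its clock origin is
    # unknown, so only the *spacing* between arrivals is observable.
    arrivals = [t + up for t in send_times]
    server = [a - arrivals[0] for a in arrivals]
    return client, server

sends = [0, 40, 90]
assert observations(95, 5, sends) == observations(5, 95, sends)
print("indistinguishable")
```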