
Deterministic Lockstep for Networked Games - GuiA
http://gafferongames.com/networked-physics/deterministic-lockstep/
======
Kenji
I have always been looking for an article that explains exactly this. Most
explanations of networking are extremely vague and lack concrete examples, but
not this one! I have thought about deterministic lockstep for a long time and
this article finally sheds new light on some questions that have been
lingering in my head for years. Thank you for that.

------
asuffield
The point of this article seems to be that high latency and packet loss
combined with extremely high available bandwidth is not a scenario that TCP is
optimised to handle.

This would be interesting if I could think of a common scenario on the
internet where you get that combination of features.

~~~
ANTSANTS
Games and streaming audio/video aren't common scenarios on the internet?

Don't be distracted by latency/packet loss/bandwidth configurations, the main
problem here is that TCP was never intended to serve the needs of soft-
realtime communication. It's not relevant to deterministic lockstep, but for
most realtime problems (I'm thinking of multiplayer FPS, but you can think of
Skype if that's not "serious" enough for you), if you miss a packet of data,
it might as well have never existed. Better to hear blips and glitches on a
phone call over a shitty connection than to start adding seconds of latency.

You missed the point about deterministic lockstep as well, which is intended
for scenarios where trying to transmit, e.g., the entire state of a virtual army
per tick is prohibitive. Designing the program to behave deterministically so
that you can send only the player inputs per tick is the exact opposite of an
"extremely high available bandwidth" solution.
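
To make the "send only the player inputs" point concrete, here is a minimal sketch (my own illustration, not code from the article) of what a per-tick lockstep command might look like. The field layout and opcode are hypothetical:

```python
import struct

# Hypothetical per-tick command for a lockstep RTS: instead of the full
# game state, only the player's input is sent each tick.
# Layout (illustrative): 1-byte opcode, 1-byte player, 2-byte unit id,
# 2-byte tile x, 2-byte tile y = 8 bytes total.
def encode_move_command(player_id: int, unit_id: int,
                        tile_x: int, tile_y: int) -> bytes:
    return struct.pack("<BBHHH", 0x01, player_id, unit_id, tile_x, tile_y)

# "player 1 told unit 6 to move to this tile" fits in a handful of bytes,
# versus the megabytes a full army state snapshot could occupy.
cmd = encode_move_command(player_id=1, unit_id=6, tile_x=40, tile_y=12)
assert len(cmd) == 8
```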

~~~
asuffield
Audio/video streams are a completely different problem - lost data is just
lost. This article is discussing a very specific problem where no data loss
can be tolerated.

The solution described only works when very high bandwidth is available,
because it aggressively sends duplicate copies of data until acked, using
2-15x more bandwidth on average and 120x more in the worst case (numbers
taken directly from the article).
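
For reference, the redundant scheme being criticised works roughly like this (my reading of the article, not its actual code): every outgoing packet carries all inputs the receiver has not yet acknowledged, so a lost packet never needs its own retransmission, because the next packet already contains that data:

```python
# Sketch of send-until-acked redundancy: each packet repeats every input
# after the most recently acknowledged tick. Function names are mine.
def build_packet(input_history, most_recent_acked_tick):
    # input_history is a list of (tick, payload) pairs.
    return [(tick, data) for tick, data in input_history
            if tick > most_recent_acked_tick]

history = [(0, b"A"), (1, b"B"), (2, b"C"), (3, b"D")]
# The receiver has acked through tick 1, so ticks 2 and 3 both ride in
# this packet -- this duplication is where the 2-15x overhead comes from.
packet = build_packet(history, most_recent_acked_tick=1)
assert [tick for tick, _ in packet] == [2, 3]
```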

If you happened to have that kind of spare bandwidth available then yes, this
is an excellent solution. When do you have high latency, packet loss, and
bandwidth overprovisioned by a factor of 15?

~~~
ANTSANTS
Deterministic lockstep is primarily used in RTS games, where the _only_ data
that is transmitted per tick are _tiny_ commands like "player 1 is
constructing a war factory at this location" or "player 1 told unit 6 to move
to this tile." If we're ridiculously generous and say that a kilobyte of
commands is sent per tick, then 120x bandwidth would be 120 KB/tick. 10
ticks/s is a common RTS tickrate, so that's about 1.2 megabytes per second. In
practice, _far, far less_ bandwidth would be used than this, given that this
architecture scales down to 90's _dialup_ connections
(https://www.codeofhonor.com/blog/the-making-of-warcraft-part-3), but
even a megabyte/s is not a _completely_ ridiculous amount of upload bandwidth
for much of the first world (but sadly not the USA).
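
The arithmetic in that worst-case estimate checks out (assuming, as above, 1 KB of commands per tick, the article's 120x factor, and a 10 Hz tickrate):

```python
# Back-of-the-envelope check of the worst-case bandwidth quoted above.
commands_per_tick = 1024   # bytes; the deliberately generous 1 KB assumption
redundancy_factor = 120    # worst-case multiplier taken from the article
ticks_per_second = 10      # common RTS tickrate

worst_case_bytes_per_second = (
    commands_per_tick * redundancy_factor * ticks_per_second
)
# ~1.2 MB/s of upload, i.e. "about a megabyte per second".
assert worst_case_bytes_per_second == 1_228_800
```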

> Audio/video streams are a completely different problem - lost data is just
> lost. This article is discussing a very specific problem where no data loss
> can be tolerated.

I know that, which is why I separated deterministic lockstep from other
solutions in my comment. You're missing my point, which is that soft-realtime
problems aren't suited for TCP, and cannot be addressed by _any_ single
protocol because constraints differ.

~~~
asuffield
You appear to be missing my point, which is that this proposed protocol
responds to network congestion by increasing the amount of bandwidth that it
uses by a factor of more than 10.

Consider what happens when many clients are using this protocol over a shared
transit link and it approaches 80% utilisation, so starts to drop packets at a
low rate: all the clients will start batching up larger amounts of history and
sending them, which increases the utilisation, which increases the packet
loss, which increases the amount of history each client sends... it's a
destructive feedback loop that launches a DoS attack against the congested
link. The absolute size of each individual client stream is unimportant, as we
are merely assuming that the number of clients sharing a link is high enough
to congest it. The last-mile user connection is not the worst problem here.
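
The feedback loop being described can be sketched as a toy model (my own simplification, not from the article or this thread): when acknowledgements are lost, each client's unacked-input window grows, so every packet carries more history and the offered load rises, which pushes loss higher still:

```python
# Toy model of the congestion feedback loop. All names and the window
# formula are illustrative assumptions, not measurements.
def offered_load(base_bytes_per_tick: float,
                 ack_loss_rate: float,
                 rtt_ticks: float) -> float:
    # Crude model: the number of ticks of unacked history carried per
    # packet grows as acks are lost -- roughly rtt / (1 - loss) on average.
    expected_window = rtt_ticks / (1.0 - ack_loss_rate)
    return base_bytes_per_tick * expected_window

low = offered_load(100, ack_loss_rate=0.01, rtt_ticks=5)   # light loss
high = offered_load(100, ack_loss_rate=0.50, rtt_ticks=5)  # heavy loss
# More loss -> more history per packet -> more load on the congested link.
assert high > low
```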

~~~
jtown_
This isn't a theoretical, unproven suggestion he's recklessly advocating --
this technique is implemented in several games played by millions without your
DoS scenario playing out. Games are typically low bandwidth applications --
they use that extra wiggle room to increase reliability and responsiveness
with reliable real world success.

He's also written about flow control to properly handle congestion when it
does happen:
http://gafferongames.com/networking-for-game-programmers/reliability-and-flow-control/

~~~
gafferongames
To be clear, not all games are networked using the deterministic lockstep
technique but _most_ realtime games (FPS and so on) do use UDP to send time
critical data, and implement some aspects of reliability, ordering and
congestion avoidance within their own custom protocol.

~~~
jtown_
Yeah sorry -- I should have been more clear. I didn't mean to imply most games
use deterministic systems.

