Latency Compensating Methods in Client/Server In-Game Protocol Design (2001) (valvesoftware.com)
87 points by HaseebQ on May 17, 2019 | 14 comments



Here's my humble contribution: a clearer explanation of client-side prediction, entity interpolation, and server reconciliation, with a simple live demo and source code: https://gabrielgambetta.com/client-server-game-architecture....

Over time it has become a relatively popular alternative to Valve's (excellent) documents, mostly because the concepts are explained and demoed one by one, making them accessible to a non-expert audience. The live demo has embedded JS code: https://gabrielgambetta.com/client-side-prediction-live-demo..., and it's usually useful to have a standalone implementation.
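For anyone who wants the gist without clicking through, here's a minimal sketch of prediction plus reconciliation as described in the articles - the names (Input, State, applyInput, PredictedClient) are purely illustrative, not the demo's actual code:

    // Minimal client-side prediction + server reconciliation sketch (TypeScript).
    // Names here are illustrative, not taken from the linked demo.

    interface Input { seq: number; dx: number; dt: number; }
    interface State { x: number; }

    // Deterministic simulation step shared by client and server.
    function applyInput(state: State, input: Input): State {
      return { x: state.x + input.dx * input.dt };
    }

    class PredictedClient {
      state: State = { x: 0 };
      private pending: Input[] = [];   // sent but not yet acknowledged
      private nextSeq = 0;

      // Each frame: predict locally, queue the input, send it to the server.
      applyLocalInput(dx: number, dt: number): Input {
        const input = { seq: this.nextSeq++, dx, dt };
        this.state = applyInput(this.state, input);   // prediction
        this.pending.push(input);
        return input;
      }

      // On an authoritative snapshot: adopt the server state, then replay
      // every input the server hasn't processed yet (reconciliation).
      onServerState(serverState: State, lastProcessedSeq: number): void {
        this.state = { ...serverState };
        this.pending = this.pending.filter(i => i.seq > lastProcessedSeq);
        for (const input of this.pending) {
          this.state = applyInput(this.state, input);
        }
      }
    }

Entity interpolation for other players is the remaining piece: render them slightly in the past, blending between the last two snapshots received from the server.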


Humble indeed. Your articles are a staple of game network programming and the ones I most often refer back to. Thank you for them.


> Getting a "feel" for your latency is difficult. Quake3 attempted to mitigate this by playing a brief tone whenever you received confirmation of your hits. That way, you could figure out how far to lead by firing your weapons in rapid succession and adjusting your leading amount until you started to hear a steady stream of tones.

Ah, so that's what that sound was! I always wondered why Q3 had that weird sound when I hit someone, and never knew what it was supposed to sound like. It ends up being one of those things you get used to, and the game wouldn't be the same without it.


When I was around 18 I thought it would be fun to implement a simple networked version of Pong. While the implementation wasn't the best in the first place, even on LAN it was pretty much unplayable due to jitter and lag. I found this article back then and was crushed by the complexity. That's when I decided I never wanted to work on timing-sensitive networked code again. It's still cool to read about all the smarts that go into games for this.


Highly recommend checking out Glenn Fiedler's stuff for a modern, standalone take on game networking/physics code.

https://github.com/gafferongames


The Overwatch developers made a video explaining their implementation of these concepts at a slightly higher level: https://www.youtube.com/watch?v=vTH2ZPgYujQ


A while back Unity released their FPS Sample, which contains implementations of similar rollback algorithms. It's nice to see it in action in a modern context.

https://github.com/Unity-Technologies/FPSSample


An equally important and related paper: https://www.gamedevs.org/uploads/tribes-networking-model.pdf

It's the basis for most modern FPS networking models.


Fond(?) memories of the Tribes 2 engine discussed in that paper, which became the GarageGames engine and was so poorly architected that every function of interest ran inside the network serialization function. Why did all the code run inside the network serializer? Because the engine was such a mess that no one could figure out why their values were getting stomped on, until eventually someone realized: "Hey! The last bit of code that runs each frame is the network serializer! If I put my player bone animation code in the network serializer, nothing can stomp on my animations." And a new arms race was born as every developer rushed to put their code into the network serializer.


Disclaimer: I spent five years working at GarageGames doing core Torque development (the Tribes 2 engine derivative we sold).

The core code was pretty clean, but there was a LOT of cruft on top of it which tended to obscure the really good bits. IMHO the library that handled the animation was genius - it was incredibly light, it supported a broad feature set, and it could load any old asset from v1 up to v30. It even did a bunch of crazy data layout stuff to allow extremely fast endian conversion for PPC vs Intel (back when that mattered).

Good, efficient networking in that era meant being miserly with your resources. Tribes 2/Torque was very much built around those requirements, and your example is actually a good illustration of those strengths.

The engine had three update cycles, all in service of the networking.

First, it would process fixed-timestep logic - ticks guaranteed at 32 per second (which also aligned with the packet send rate). Client and server both ran this. This covered physics, user input, health management, etc.

Second, it would run "time"-based logic. This covered things like particle systems, which don't care much whether you advance them 100ms at a time or 1ms at a time, and don't need to match the server precisely for gameplay anyway. Only the client ran this.

Third, it would interpolate tick state. This smoothly interpolated between the last and current game state based on how far you were between the two ticks. It introduced a small amount of lag, but since it did not predict, it never caused visual glitches. This gave a smooth appearance to everything that happened in the first step. Only the client ran this.

The result of all this machinery is that you paid exactly what you needed to for each type of thing in your simulation and no more.
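A rough sketch of that loop structure, for the curious - names are illustrative, not the actual Torque API:

    const TICK_RATE = 32;                 // fixed ticks per second (matches packet rate)
    const TICK_MS = 1000 / TICK_RATE;

    interface GameState { /* positions, velocities, health, ... */ }

    declare function simulateTick(state: GameState, dtMs: number): GameState;  // phase 1 logic
    declare function updateEffects(dtMs: number): void;                        // phase 2 logic
    declare function interpolate(a: GameState, b: GameState, alpha: number): GameState;
    declare function render(state: GameState): void;

    let lastTickState: GameState = {};
    let currentTickState: GameState = {};
    let accumulatorMs = 0;

    function clientFrame(frameDtMs: number): void {
      // 1) Fixed-timestep logic: physics, input, health - run in whole ticks
      //    so client and server stay in lockstep.
      accumulatorMs += frameDtMs;
      while (accumulatorMs >= TICK_MS) {
        lastTickState = currentTickState;
        currentTickState = simulateTick(currentTickState, TICK_MS);
        accumulatorMs -= TICK_MS;
      }

      // 2) Time-based logic: particles and other cosmetic effects, advanced by
      //    however much real time actually passed.
      updateEffects(frameDtMs);

      // 3) Interpolation: render a blend of the last two tick states so tick
      //    quantization never shows up as stutter.
      const alpha = accumulatorMs / TICK_MS;   // 0..1 progress between ticks
      render(interpolate(lastTickState, currentTickState, alpha));
    }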

Later versions of the engine added lag compensation. This meant that the client would snapshot game state and re-run the fixed-timestep logic for compensated objects based on latency. You could configure it to only consider objects that might have interacted with the player (and thus were mispredicted) to save on CPU.
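In sketch form, that rollback might look something like this (again, illustrative names, not the real code):

    const TICK_MS = 1000 / 32;

    interface Entity { id: number; x: number; vx: number; }

    declare function nearPlayer(e: Entity): boolean;               // which objects to compensate
    declare function stepEntity(e: Entity, dtMs: number): Entity;  // same fixed-tick logic the server runs

    // When an authoritative update arrives, adopt it, then re-run the fixed-tick
    // logic for nearby objects to cover the latency; far-away objects are skipped
    // to save CPU.
    function onServerUpdate(serverEntities: Entity[], latencyMs: number): Entity[] {
      const ticksToReplay = Math.round(latencyMs / TICK_MS);
      return serverEntities.map(e => {
        if (!nearPlayer(e)) return e;
        let replayed = e;
        for (let t = 0; t < ticksToReplay; t++) {
          replayed = stepEntity(replayed, TICK_MS);
        }
        return replayed;
      });
    }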

What happened in the case of authoritative skeletal animation?

1. Gameplay-relevant parts of the skeleton were simulated in the fixed ticks. For instance, the current orientation of a player's weapon, which might need to take their animation pose into account. So you might see the spine and one arm updated here, while the legs wouldn't be touched.

2. In the time-based logic, you would run the full skeletal animation update so that the player could see smooth animation.

3. In the interpolation phase, you would interpolate the position of the player between the two states to give a smooth appearance (in conjunction with the animation work in phase 2).
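Roughly, in code (joint names and math invented purely for illustration):

    interface Pose { spine: number; weaponArm: number; legs: number; }   // joint angles, radians

    // Phase 1 (fixed tick): only the gameplay-relevant joints, so client and
    // server agree on where the weapon actually points.
    function tickAimPose(pose: Pose, aimPitch: number): Pose {
      return { ...pose, spine: aimPitch * 0.5, weaponArm: aimPitch };
    }

    // Phase 2 (time-based): the full, purely cosmetic skeleton, driven by real time.
    function advanceFullAnimation(pose: Pose, animTimeMs: number, runCycleHz: number): Pose {
      return { ...pose, legs: Math.sin(2 * Math.PI * runCycleHz * (animTimeMs / 1000)) };
    }

    // Phase 3 (interpolation): blend the last and current tick poses for rendering.
    function blendPose(a: Pose, b: Pose, alpha: number): Pose {
      return {
        spine: a.spine + (b.spine - a.spine) * alpha,
        weaponArm: a.weaponArm + (b.weaponArm - a.weaponArm) * alpha,
        legs: a.legs + (b.legs - a.legs) * alpha,
      };
    }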

I would submit the above is actually a pretty elegant solution to the problem. Unfortunately the surface-level code was all cut-down versions of Tribes code, which was written with shipping, not long-term reuse, in mind.

Most of the community developers never really got their heads around this architectural stuff. A big fault of our core engine product was that it was oriented towards AAA projects, and we never really made it both user-friendly AND powerful. We went with powerful, but that didn't serve our indie customer base well.

It took Unity around a decade and $500M to add "really powerful" to "easy to use" so I don't feel too bad about this.


Valve's wikis are surprisingly detailed yet very well explained; although they're a few years old, the concepts haven't changed.

I highly recommend their wiki as a resource for those interested in game engines and game networking!


I wonder: how low could we get latency if network providers put as much effort into reducing lag as they do into increasing bandwidth?


In the US, for those of us fortunate enough to have proper fiber, latency is regularly close to the speed-of-light limit, at least in my experience. (Light in fiber covers roughly 200 km per millisecond, so a 1,000 km path adds about 5 ms each way before any routing or queuing overhead.)

AFAIK (someone correct me if I'm wrong), the latency overhead with non-fiber connections (e.g. cable) mostly comes from analog signal processing.


I'm surprised the article doesn't mention the name of the process:

Dead Reckoning.

What's fun is that if you write any frictionless game (Asteroids, hockey, etc.), you will have automatically written the compensation!
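In other words, the same equation drives both local physics and remote extrapolation. A minimal sketch (illustrative, not from the repo below):

    interface ShipUpdate { x: number; y: number; vx: number; vy: number; sentAtMs: number; }

    // Dead reckoning: extrapolate a remote ship from its last reported
    // position and velocity - in a frictionless game this is exact until
    // the next input arrives.
    function deadReckon(last: ShipUpdate, nowMs: number): { x: number; y: number } {
      const dt = (nowMs - last.sentAtMs) / 1000;   // seconds since the update
      return { x: last.x + last.vx * dt, y: last.y + last.vy * dt };
    }

    // Example: an update that is 150 ms old, ship moving at 40 units/s along x.
    const shown = deadReckon({ x: 10, y: 0, vx: 40, vy: 0, sentAtMs: 0 }, 150);
    // shown -> { x: 16, y: 0 }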

Here is a little multi-player spaceship game in 200 lines of HTML!

https://github.com/amark/gun/blob/master/examples/game/space...

It even works with WebRTC! We've tested it with 3+ people on different continents.

Hopefully it can be a useful starting point for someone. :)



