
> With UDP the client application can receive all packets when they arrive even if there are some missing.

Isn't that a good thing?




Depends on the game.

I still play a game that's almost 20 years old (the original Descent), which has a small but active multiplayer community [0]. I regularly play against players in my own house as well as players from as far away as Brazil, Qatar, and Australia -- so I have to deal with various connection-consistency issues. Given the game physics, I would much rather have out-of-order packets (an opposing ship briefly flashing in the wrong location and firing a shot from there) than a delay on in-order packets (an opposing ship freezing completely, then replaying several seconds of movement and shots in a fraction of a second).

There are other games that behave differently, where TCP would be preferable.

[0] descentrangers.com for anarchy, cooperative, and team games; descentchampions.org for competitive 1v1s.


Yes. It gives you the flexibility to make your own trade-off between increased latency, bandwidth overhead, and loss of data. With TCP, you always get the increased latency.

If your protocol is stateless, you might choose to lose the data, because you know the loss stops mattering as soon as the next update arrives. This can apply to games where the state you need to synchronize (say, position) is quite small.
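
A minimal sketch of that approach, in Python, with a made-up wire format and an arbitrary port: every datagram carries a sequence number, and the receiver simply drops anything older than the newest packet it has seen. Loss and reordering both degrade into "skipped an update", which the next packet repairs.

    import socket
    import struct

    # Hypothetical wire format: uint32 sequence number, then x/y position.
    PACKET = struct.Struct("!Iff")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 27960))  # arbitrary port for this sketch

    newest_seq = -1
    while True:
        data, addr = sock.recvfrom(PACKET.size)
        seq, x, y = PACKET.unpack(data)
        # Stale or duplicate packet: the state it carries is already obsolete.
        # (A real protocol would also handle sequence-number wraparound.)
        if seq <= newest_seq:
            continue
        newest_seq = seq
        print(f"opponent at ({x:.1f}, {y:.1f}), seq {seq}")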

Even if you need your protocol to be stateful, you might design it so that loss causes a temporary degradation of the client's view of the state, which converges back to the correct state as new data arrives. This is a very common pattern; the Quake net protocol referenced in the OP is an example, as are most streaming A/V codecs.
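
A toy sketch of that converge-back pattern (not the actual Quake wire format; the message shapes here are invented): the server sends each entity's full state periodically and cheap deltas in between, so a lost delta only leaves the client's view stale until the next full snapshot.

    FULL, DELTA = "full", "delta"

    world = {}  # entity id -> state dict: the client's (possibly stale) view

    def apply_message(msg):
        """Apply one decoded network message to the client's world view."""
        if msg["kind"] == FULL:
            # Full snapshot: authoritative, replaces everything we believed.
            world.clear()
            world.update(msg["entities"])
        elif msg["kind"] == DELTA:
            # Delta against an earlier state: patch entities we know about.
            # If the baseline was lost, skip it; the view stays stale until
            # the next full snapshot converges it back.
            for ent_id, changes in msg["entities"].items():
                if ent_id in world:
                    world[ent_id].update(changes)

    apply_message({"kind": FULL, "entities": {"e1": {"x": 0, "y": 0, "hp": 100}}})
    apply_message({"kind": DELTA, "entities": {"e1": {"x": 3}}})
    print(world)  # {'e1': {'x': 3, 'y': 0, 'hp': 100}}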

If you absolutely cannot tolerate losing any data, and are also latency-sensitive, you can instead add redundant data (forward error correction) so that lost packets can be recovered without retransmitting.
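
The simplest version of that is an XOR parity packet over a group of equal-length payloads: one extra packet per group lets the receiver rebuild any single lost packet in that group without a round trip. A sketch (real protocols use stronger codes, e.g. Reed-Solomon):

    def xor_parity(packets):
        """XOR equal-length payloads together; this is the redundant packet."""
        parity = bytearray(len(packets[0]))
        for p in packets:
            for i, b in enumerate(p):
                parity[i] ^= b
        return bytes(parity)

    def recover(received, parity):
        """Rebuild at most one missing payload (None) from the parity."""
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) > 1:
            raise ValueError("more than one loss in group: fall back to retransmit")
        if missing:
            present = [p for p in received if p is not None]
            received[missing[0]] = xor_parity(present + [parity])
        return received

    group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
    parity = xor_parity(group)
    print(recover([b"pkt0", None, b"pkt2", b"pkt3"], parity))  # b'pkt1' rebuilt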

And you can make intermediate trade-offs: a little extra overhead for error correction, and in the now-rare cases where the error rate overwhelms your correction ability, you accept a degradation or a retransmit round trip. Or you accept a certain amount of degradation before retransmission is needed to recover. Etc.
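
One concrete intermediate trade-off, sketched with invented message shapes: piggyback the last few updates in every datagram, so short loss bursts are healed by whichever packet arrives next, and only a burst longer than the history window forces the degradation/retransmit path.

    from collections import deque

    HISTORY = 3  # how many past updates each packet repeats (tunable overhead)

    class Sender:
        def __init__(self):
            self.seq = 0
            self.recent = deque(maxlen=HISTORY)

        def build_packet(self, update):
            """Return (first_seq, updates): the new update plus recent history."""
            self.recent.append(update)
            self.seq += 1
            return self.seq - len(self.recent) + 1, list(self.recent)

    class Receiver:
        def __init__(self):
            self.next_seq = 1

        def on_packet(self, first_seq, updates):
            for seq, upd in enumerate(updates, start=first_seq):
                if seq == self.next_seq:  # next in order, or filling a gap
                    print("applied", seq, upd)
                    self.next_seq += 1
            if first_seq > self.next_seq:
                print("burst exceeded history; need retransmit from", self.next_seq)

    tx, rx = Sender(), Receiver()
    p1 = tx.build_packet(b"u1")
    p2 = tx.build_packet(b"u2")  # lost in transit
    p3 = tx.build_packet(b"u3")  # repeats u1..u3, so u2 is recovered here
    rx.on_packet(*p1)
    rx.on_packet(*p3)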



