
GameNetworkingSockets – Reliable and unreliable messages over UDP - ivanfon
https://github.com/ValveSoftware/GameNetworkingSockets
======
Animats
Second Life does something very similar. Reliable and unreliable messages,
binary format, multiple messages in a single datagram.

Unreliable messages are for data that is superseded by later messages.
There's no point in retransmitting; you always want the latest object
positions.

The big problem is message priority. Games need a low-latency, low-traffic
channel for updates and a high-latency, high-traffic channel for assets. It's
tough making this work on the open Internet. End to end QoS just isn't widely
available. So using much less than the full available bandwidth is needed to
keep intermediate FIFO buffers from filling up. This is related to the
"bufferbloat" problem - network devices now all have lots of RAM, and if they
have FIFO queuing and less output bandwidth than input bandwidth, they will
build up huge queues that generate huge latency.

For QoS to work in the wild, there has to be some throttling or incentive to
discourage too much high priority traffic. You'd like to have under 5% of your
traffic at high priority. It's hard to make this work in the public Internet.
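One common way to stay well under the available bandwidth (so intermediate FIFO queues stay drained) is a token bucket on the sender. A minimal sketch in Python; the rate and burst values are made-up illustration numbers, not anything from the project:

```python
import time

class TokenBucket:
    """Pace outgoing datagrams to a target byte rate, leaving headroom
    so intermediate FIFO queues along the path don't fill up."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, size, now=None):
        """Return True if a datagram of `size` bytes may be sent now."""
        now = time.monotonic() if now is None else now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# Target ~80% of a 512 kbit/s link: 51,200 bytes/s with a 4 KB burst.
bucket = TokenBucket(rate_bytes_per_sec=51_200, burst_bytes=4_096)
```

A real sender would queue or drop the datagram when `try_send` returns False rather than block.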

~~~
woah
I believe this type of QoS would easily violate “net neutrality”, which is why
it hasn’t happened yet.

~~~
matthewmacleod
That is a misconception.

It’s totally legitimate to provide QoS based on open protocols, or even for
particular classes of traffic. Net neutrality comes into play where providers
start prioritising traffic based on the remote service provider.

------
calebh
I've used ENet for my games which is pretty similar to this project.
Unfortunately ENet does not support IPv6, and the pull requests on the ENet
repository appear to be ignored. The ENet author refuses to accept any changes
that break the ABI, which is highly unfortunate.

[http://enet.bespin.org/](http://enet.bespin.org/)

------
johnhenry
> GameNetworkingSockets is a basic transport layer for games.

I understand that this is primarily derived from a gaming platform, but is
there anything that makes this useful specifically "for games" rather than
for networked applications in general?

~~~
blackflame7000
Let's start with why TCP won't work: every time a packet is lost or reordered,
your on-screen avatar will put Michael Jackson's moonwalk to shame. UDP won't
work by itself either, because what happens if the packet that said you got
the winning kill never made it to the server? There are all sorts of time-
and order-critical messages needed to keep score correctly, for example. So
obviously we need two channels: one for order-sensitive data (like score) and
one for latest-update-wins data (like movement).

~~~
cma
If you round robin on a bunch of tcp connections you can avoid many of the
packet loss latency issues (doesn't give an advantage over udp really, but
allows you to work in tcp only environments).
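The round-robin idea can be sketched as a dispatcher that cycles across several connections, so a stall on one (retransmit, head-of-line blocking) only delays 1/N of the traffic. This is a hypothetical illustration, not code from any project; `conns` stands in for N connected TCP sockets to the same server:

```python
class RoundRobinSender:
    """Spread messages across several connections so one stalled TCP
    stream only holds up a fraction of the traffic. `conns` is any
    list of objects with a .send(payload) method (e.g. sockets)."""

    def __init__(self, conns):
        self.conns = conns
        self.next_idx = 0

    def send(self, payload):
        conn = self.conns[self.next_idx]
        self.next_idx = (self.next_idx + 1) % len(self.conns)
        conn.send(payload)
```

Note the receiver still needs per-message sequence numbers to reassemble a global order, since the streams drain independently.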

~~~
blackflame7000
Yeah, if you're gaming on a budget I suppose that would work, but I'd still
be worried about TCP's exponential backoff algorithm, because packets aren't
guaranteed to take an unblocked pathway on their subsequent attempts. It's
possible to lose a whole bunch of packets before being successfully diverted.
And furthermore, what happens if your opponent is scoring points while you're
frozen waiting for TCP to recover? You're going to need UDP for just about
anything that involves players interacting in a 3-dimensional world.

------
xir78
It would be great for TCP to better address the reliable transmission of
messages for games. These “reliable UDP” code bases in game engines don’t
address all of the other issues, such as bandwidth-sharing fairness and
avoiding saturation of network links, which ultimately just makes networks
slower and less reliable for everyone.

~~~
kabdib
If you changed TCP sufficiently to make it a real-time protocol suitable for
gaming, it wouldn't be TCP any more. Reliable streaming and real-time packet
delivery are two completely different animals.

I would argue that bandwidth-sharing fairness with stuff going over UDP is
easy: Just start dropping packets when pipes get full.
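"Just start dropping packets" can be as simple as a bounded send queue that refuses new unreliable packets once it's full, instead of queuing them (which would only add latency). A sketch under that assumption, hypothetical names throughout:

```python
from collections import deque

class DropTailQueue:
    """Bounded outgoing queue for unreliable packets: when full, new
    packets are dropped rather than buffered, keeping latency flat."""

    def __init__(self, max_packets):
        self.q = deque()
        self.max = max_packets
        self.dropped = 0  # counter, useful for telemetry

    def push(self, packet):
        if len(self.q) >= self.max:
            self.dropped += 1
            return False
        self.q.append(packet)
        return True

    def pop(self):
        return self.q.popleft() if self.q else None
```

Since unreliable updates are superseded anyway, a dropped packet just means the next update carries the fresher state.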

Most updates are going to be pretty small. It's not like game developers want
the user experience of their titles to be bad, after all.

~~~
xir78
It’s not easy to share the bandwidth fairly, it’s taken decades of research
and it continues to be improved in TCP.

You’re not considering a server: on the backend the bandwidth is very high
and does saturate links. You run many game servers per physical or virtual
machine due to cost, so you can have thousands of players connected over a
single network path.

~~~
kabdib
... which is why you provision servers and design your software and network
architecture to take the demand (latency, bandwidth, etc.) into account. Data
rates for online games are pretty predictable. A 10 Gbit fiber connection to a
racked server doesn't cost that much.

At the datacenter level you're making sure that the bandwidth you bought from
providers is sufficient (and ideally, redundant), and that you can shift load
from one area to another if necessary. You can buy this capability from AWS or
Azure, or build it yourself in many different ways.

------
bluejekyll
Anyone know how this differs from RTP?

I haven’t worked with either but did review the RTP spec, and many of these
features appear similar.

RTP also has some multicasting features built in for broadcast delivery, which
can be useful in certain contexts.

Edit: the reason I ask is that RTP is getting wide scale deployment and
testing as it’s being used for WebRTC.

~~~
gfodor
Also worth comparing to WebRTC data channels, which are SCTP.

------
popee
Stupid question. How do you solve (de)fragmentation and out-of-order delivery
with an unreliable protocol? What are the cases, and is it possible to keep
it simple?

~~~
camgunz
You generally only use this for data where only the latest update matters. So
if you get "Packet 18" "Packet 30" "Packet 19", you take 18, then 30, then
ignore 19. The canonical example is an object's position; usually if you know
what something's position is at 30 then information about where it was at 19
is so out of date it's useless.
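The "take 18, then 30, ignore 19" rule is just a latest-wins filter on sequence numbers. A sketch of one, assuming 16-bit sequence numbers with wraparound handling (a common convention, not something from this thread):

```python
SEQ_BITS = 16
MASK = (1 << SEQ_BITS) - 1
HALF = 1 << (SEQ_BITS - 1)

def is_newer(seq, latest):
    """True if `seq` is ahead of `latest` on the 16-bit sequence circle,
    so wraparound (65535 -> 0) still compares as newer."""
    diff = (seq - latest) & MASK
    return 0 < diff < HALF

class LatestOnly:
    """Keep only the most recent update; stale packets are ignored."""

    def __init__(self):
        self.latest_seq = None
        self.state = None

    def receive(self, seq, payload):
        if self.latest_seq is None or is_newer(seq, self.latest_seq):
            self.latest_seq = seq
            self.state = payload
            return True
        return False  # out of date, discard
```

With this, the packet 19 arriving after packet 30 is simply dropped on receive.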

The gain here is that you typically get the latest information as fast as
possible. The downsides, of course, are that there's a pretty steep decline on
connections that are even a little high latency or lossy. UDP drops packets
even on wired LAN, for example. Practically all games use this method, and
they all have lots of smoothing and prediction tech to make things seem like
you're getting constant position updates -- which you almost certainly aren't.
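The simplest form of that smoothing is to render entities slightly in the past and linearly interpolate between the two most recent snapshots. A sketch, with the "render ~100 ms behind" figure being a common rule of thumb rather than anything from this thread:

```python
def interpolate(p0, t0, p1, t1, render_t):
    """Blend an entity position between the two most recent snapshots
    (p0 received at time t0, p1 at t1). Rendering a little in the past
    means there are usually two snapshots available to blend between."""
    if t1 == t0:
        return p1
    a = (render_t - t0) / (t1 - t0)
    a = max(0.0, min(1.0, a))  # clamp: this sketch never extrapolates
    return tuple(x0 + (x1 - x0) * a for x0, x1 in zip(p0, p1))
```

Prediction (extrapolating past the newest snapshot using velocity) is the companion technique for when the next update is late.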

You can also use this for data that doesn't matter -- although if it doesn't
matter you should question why you're sending it in the first place.

------
limaoscarjuliet
The moment you say "reliable messages over UDP" you're really saying "we're
implementing TCP on our own", which quickly turns into the question "why?".

By its own admission, the project says: "The reliability layer is a pretty
naive sliding window implementation". A bit scary if you really want to have
guarantee message is delivered.

Also scary part: "Our use of OpenSSL is extremely limited; basically just AES
encryption", "we do not support x509 certificates". OpenSSL is difficult,
coming up with your own key exchange is likely problematic.

So I'm still searching for the answer as to why we need to do this? What's the
market for this?

P.S. Yes, Valve being the author certainly brings some "redeeming value" to
this!

~~~
thezilch
You're assuming the usage of such a library is sending, let's say, a network-
crushing amount of data. But this is really intended for games that try to
fit under 512 Kb/s (or even less). We're not trying to jam down 20 MB/s of
JS, 5 MB/s of CSS, and 100 MB/s of PNGs. Reliable packets are sending you, at
worst, "the state of the world" in a few KB.

~~~
xir78
You’re just thinking of a single game client, if you have a server running 100
matches then it’s real bandwidth and will saturate network links.

~~~
kabdib
Last time I bought some 10Gbit fiber (some AOC, the stuff with SFPs on both
ends) it was about $70 for a 3 meter cable. Amortized cost of switch ports are
maybe another $200. You paid $700 for a 10Gbit networking card with (say) four
ports ... but a LOT more for the server to run all of this.

Back-of-envelope calculation: You can get about 50,000 client connections at
50 updates/sec of 500 bytes each on a 10Gbit link. Four ports, double
redundancy gets you 100K clients on a server. Yikes -- that's _wayyy_ more
clients than you want on a single server (you almost certainly run out of
server-side CPU for game simulation and so forth before you run into bandwidth
issues). A dual 40Gbit networking card is pretty cheap, but you'll run into
CPU load issues trying to feed that card enough traffic -- it's frankly plenty
hard to do that even when you're not doing game computation.
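The back-of-envelope numbers check out (ignoring UDP/IP header overhead, which is roughly 28+ bytes per packet, so real payload capacity is a bit lower):

```python
# kabdib's estimate: 50,000 clients at 50 updates/sec of 500 bytes each.
clients = 50_000
updates_per_sec = 50
bytes_per_update = 500

bits_per_sec = clients * updates_per_sec * bytes_per_update * 8
print(bits_per_sec / 1e9)  # -> 10.0, i.e. exactly fills a 10 Gbit link
```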

You can probably run all of your servers on 1Gbit copper for under $100 /
port. There are better ways to wire things up, but I've done this in the past
and it's worked fine.

Capital outlay for sufficient server bandwidth just isn't a big deal.

[edit: back-of-envelope calculation low by a factor of 10 :-) ]

~~~
arca_vorago
70 dollars for 9 ft of fiber with connectors? I almost always make my own
cables (fiber included), but for some special jobs where time was a
constraint I'd use pre-made, and even then that seems very steep. Where are
you sourcing cables?

I also suppose not everyone knows how to terminate fiber and I've done it so
much it's become second nature.

~~~
kabdib
It might have been closer to $35, I'd have to dig a little. Remember, this is
with the SFPs, not just the raw fiber (which is significantly cheaper on its
own, even if you buy it terminated).

