Game servers: UDP vs TCP (1024monkeys.wordpress.com)
247 points by lerno on Apr 1, 2014 | 126 comments

Impressive that he managed to say so much without mentioning head-of-line blocking, which is the main TCP problem with realtime apps.

This is not due to congestion control, but because of the ordered byte stream semantics of TCP. You can't deliver a packet until the retransmission dance is done for any previous lost packets.
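The effect is easy to see in a toy model (purely illustrative, not real kernel code): an in-order reorder buffer can't release anything to the application until the missing head-of-line segment shows up, no matter how many later segments have arrived.

```python
# Sketch of head-of-line blocking in an ordered byte/segment stream.
def deliverable(buffer, next_seq):
    """Hand over segments in order; stop at the first gap."""
    out = []
    while next_seq in buffer:
        out.append(buffer.pop(next_seq))
        next_seq += 1
    return out, next_seq

buffer = {}
next_seq = 0

# Segments 1-3 arrive, but segment 0 was lost in transit.
for seq in (1, 2, 3):
    buffer[seq] = f"segment-{seq}"

delivered, next_seq = deliverable(buffer, next_seq)
assert delivered == []  # nothing deliverable: the head of line is missing

# The retransmission of segment 0 finally arrives; everything unblocks at once.
buffer[0] = "segment-0"
delivered, next_seq = deliverable(buffer, next_seq)
assert delivered == ["segment-0", "segment-1", "segment-2", "segment-3"]
```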

As an aside, these days it's worth noting that a 3G, WLAN or 4G link layer will try very hard not to drop any packets. So when you're doing UDP, watch out for those 40000-millisecond-old packets that will burst into your lap once the rain cloud moves away from the cell tower or your co-worker's cocoa dings in the microwave.
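One common defense against those stale bursts is to timestamp every datagram at the sender and drop anything too old on arrival. A minimal sketch (the 250 ms budget and the wall-clock `!d` prefix are assumptions for illustration, not anything from the thread):

```python
import struct
import time

MAX_AGE_S = 0.25  # game-specific freshness budget (assumed value)

def pack(payload: bytes) -> bytes:
    # Prefix each datagram with the sender's clock (assumes roughly
    # synchronized clocks or a previously estimated clock offset).
    return struct.pack("!d", time.time()) + payload

def unpack_if_fresh(datagram: bytes):
    sent_at, = struct.unpack_from("!d", datagram)
    if time.time() - sent_at > MAX_AGE_S:
        return None  # a 40-second-old position update is worse than none
    return datagram[8:]

assert unpack_if_fresh(pack(b"pos")) == b"pos"
stale = struct.pack("!d", time.time() - 40.0) + b"pos"
assert unpack_if_fresh(stale) is None
```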

SCTP is sometimes promoted as an alternative to solve that problem. You can have multiple streams per connection, so if one blocks, the others can keep going.

Another alternative is minion[0]. The format is TCP on the wire which avoids the NAT problems that SCTP has, but it is able to deliver some really interesting features, including multiple message streams like SCTP.

[0] https://datatracker.ietf.org/doc/search/?name=minion&actived...

Or if SCTP firewall issues are the only thing holding you back you can use RFC 6951; SCTP over UDP.


My thoughts exactly. Also, you can send some data reliably (e.g., a new character model) and other data unreliably on the same SCTP association.

SCTP has always looked intriguing to me, but would most backbones and ISPs pass it natively?

It sounds like it would be a waste to tunnel it in UDP, which is the alternative...

WebRTC Data Channels are, in fact, SCTP tunnelled over UDP (or rather, over DTLS over UDP.) They work pretty well.

(I'm not sure how much of the redundancy is shaved off--does anyone know whether WebRTC tries to encapsulate "normative" SCTP, or whether it uses SCTP without checksum/port information?)

Yeah not sure either, good question.

There is also the QUIC protocol, also by Google, which is yet another crack at solving the problem. I wonder if those two groups sat down and discussed merging or reusing code.

The poor prospects for getting widespread SCTP support were one (though not the only) justification the SPDY people cited for doing some similar things at the application-protocol level: http://www.chromium.org/spdy/spdy-whitepaper

Yes, they would. The real problem is that your home router/gateway/modem will not - and you're normally behind a NAT, which in all likelihood does not know anything about SCTP (SCTP also isn't very NAT-friendly).

SCTP only breaks if the ISP or user has turned on NAT. A router wouldn't break it. NAT does bad things to IP (breaks the fundamental guarantees).

> SCTP only breaks if the ISP or user has turned on NAT.

This describes 95%+ of home users. NAT may do bad things to IP, but if you are an app developer whose product depends on no NAT, your product is going to have a very limited market of people who can use it.


NAT isn't going away with IPv6.

IPv4 NAT is not, but you can sidestep it by using IPv6. Look at eg how XBox One is doing it.

Or are you predicting the rise of IPv6 NAT?

What is Xbox One doing?

They are tunneling IPv6 over Teredo if native IPv6 is unavailable. They provide their own relay servers for people whose connectivity is incapable of NAT traversal.

Teredo is a Microsoft-developed (but IETF-standardized) NAT-traversing IPv6-over-IPv4 tunneling method.



An ISP employee here. We did nothing in particular to support SCTP, so I got curious whether it works. Just ran a pair of socats (sctp-listen:12345 and sctp:example.org:12345) and it worked perfectly.

In our setup, NAT is done on GNU/Linux machines in the simplest possible way, almost like `iptables -t nat -A POSTROUTING -s -o eth1 -j MASQUERADE` (we offer dynamic globally routable IPv4 addresses first, but when our pools are drained, we have no choice but to do NAT for v4). So, it seems, almost every ISP who uses software routers (they're stable, performant and cheap) should be SCTP-friendly.

No idea about Ciscos and the like, though. None of ours does NAT, and given that I'm not a Cisco guy, I'm not going to reconfigure one just to test things.

Most cheap home routers are Linux-based, so the adoption plan seems trivial.

Linus should just make SCTP NAT support the default with a comment "router manufacturers - please don't disable", so the next time router vendors update their kernels, it gets built in.

Then just wait for 3-4 years.

Look how long IPv6 support is taking in consumer NAT boxes - and that's something that's hyped, that all vendors are officially drumming for, and that has been finished spec-wise since the 90s.

SCTP has had 14+ years - SCTP made standards track in 2000 (rfc2960) and was in use before that. Chicken and egg.

This is why NAT is bad. It has introduced a systemic freeze in the Internet, breaking the inert-router/smart-host model that used to enable deployment of new protocols.

IPv6 is a complicated matter. Even if the kernel has the IPv6 module/support built in, that's still only a tiny first step toward the router supporting IPv6.

On the other hand, SCTP NAT support is just one kernel config option away and no further configuration is needed. A complete no-brainer - just persuade those building routers' firmware not to tick it off.

Backbones and most ISPs are gonna route IP packets regardless of content. Until you hit a NAT or firewall and everything that's not UDP or TCP gets dropped.

Or just open multiple TCP streams (which of course means multiple TCP connections) and waste resources on additional TCP state tracking, additional receive/send buffer allocations, etc.

I was under the impression that packets behind the blocked TCP packet are still sent through the network, relying on the client side or last hop to maintain the sequential-stream illusion, thus reducing the delay of the subsequent frames to that of the blocked/resent frame. Could someone with deeper knowledge confirm or deny this?

Yes, that is true, but maintaining the illusion means that the application can't read the data after the lost packet until that packet is retransmitted and received.

So yes, selective acks preserve network bandwidth but don't solve the real time problems. With UDP the client application can receive all packets when they arrive even if there are some missing.
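The UDP side of that statement can be sketched in a few lines (the 4-byte big-endian sequence-number prefix is an assumed wire format, not anything specified in the thread): each datagram is delivered to the application the moment it arrives, and a gap is merely noted rather than waited on.

```python
import struct

highest_seen = -1

def on_datagram(datagram: bytes):
    """Deliver immediately; just report how many datagrams the gap skipped."""
    global highest_seen
    seq, = struct.unpack_from("!I", datagram)
    lost = max(0, seq - highest_seen - 1)  # 0 for in-order or late arrivals
    highest_seen = max(highest_seen, seq)
    return seq, datagram[4:], lost

# Datagram 2 is lost in transit; datagram 3 is still delivered at once.
results = [on_datagram(struct.pack("!I", s) + b"update") for s in (0, 1, 3)]
assert [r[2] for r in results] == [0, 0, 1]  # one gap noticed, nothing stalls
```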

> With UDP the client application can receive all packets when they arrive even if there are some missing.

Isn't that a good thing?

Depends on the game.

I still play a game that's almost 20 years old (the original Descent), which has a small but active multiplayer community [0]. I regularly play against players in my own house as well as players from as far away as Brazil, Qatar, and Australia -- so I have to deal with various connection consistency issues. Given the game physics, I would much rather have out-of-order packets (resulting in an opposing ship briefly flashing in a wrong location and firing a shot from there) than having a delay on in-order packets (resulting in an opposing ship completely freezing, and then completing several seconds of movement and several seconds of shots in a fraction of a second).

There are other games that behave differently, where TCP would be preferable.

[0] descentrangers.com for anarchy, cooperative, and team games; descentchampions.org for competitive 1v1s.

Yes. It gives you the flexibility to make your own decision in the trade off between: increased latency vs. bandwidth overhead vs. loss of data. With TCP, you get increased latency.

If your protocol is stateless, you might choose to lose the data, because you know that the loss doesn't matter as soon as you get a new update. This might be applicable in a games, if the state you need to synchronize (say, position) is quite small.

Even if you need your protocol to be stateful: you might choose to design your protocol such that loss results in a temporary degradation in the client view of state, but which will converge back on the correct state as it receives new data. This is a very common pattern; the quake net protocol referenced in OP is an example, as are most streaming A/V codecs.

If you absolutely cannot tolerate the loss of any data, and are also latency sensitive, you can also choose to add redundant data so you can recover the lost data without retransmitting.

And you can make intermediate trade-offs: A little bit of extra overhead for error-correction, and in the now-rare cases where error rate overwhelms your correction ability, you accept a degradation / retransmit round-trip. Or you accept a certain amount of degradation before you need retransmission to recover. Etc.
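The "loss doesn't matter once a newer update arrives" pattern from above fits in a few lines. This is a generic sketch (the sequence-number-plus-full-position format is an assumption): each datagram carries the complete state, so the client keeps the highest-sequence update and discards anything older, whether lost, late, or duplicated.

```python
# Newest-state-wins synchronization: full state in every update.
latest = {"seq": -1, "pos": None}

def on_update(seq, pos):
    if seq > latest["seq"]:      # older or duplicate updates are discarded
        latest["seq"], latest["pos"] = seq, pos

on_update(1, (0, 0))
on_update(3, (2, 1))   # update 2 was lost in transit -- harmless
on_update(2, (1, 0))   # arrives late -- ignored, state is already newer
assert latest == {"seq": 3, "pos": (2, 1)}
```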

That's true, but the delay from waiting for the lost packet to be retransmitted is the problem, not a delay for subsequent data after retransmitting the lost packet. Retransmission can take a substantial amount of time.

He didn't mention the term head-of-line blocking, but the issue was apparent to me as I read it. Perhaps because I have a bit of previous experience with tcp vs udp, but still.

We use TCP for some things, UDP for some things, and our own variant on Reliable UDP for others.

We use TCP for sending real-time vessel track data to the central correlation servers, because they need to know if the connection has dropped and change their behavior accordingly. Traffic volume is low, so congestion throttling and re-tries are not a problem. The system is able to cope with reports being delayed and then coming in much later (and catching up) because reports are time-stamped and the correlator works in fuzzy 6D phase-space.

We use UDP for sending live radar data because, frankly, if you've missed one sector of data you really, really don't want to delay subsequent sectors while waiting for the data that will be irrelevant in 2 seconds anyway. The compression and encryption schemes are specially crafted to allow for lost packets while still using solid implementations of off-the-shelf algorithms where appropriate, and lovingly hand-crafted algorithms for the non-security critical aspects.

And I can't tell you what we use our locally defined reliable UDP for.

Usually that's for state messages. The kind of message that defines the meaning of subsequent messages. Like codec parameters, or framing data or something.

I bet it's for firing the missiles.

Avoid losing WW3 due to congestion control...

The article emphasizes that TCP congestion control can make TCP underperform with respect to the actual available bandwidth. So True and So Frustrating!

But it doesn't seem to talk about the virtues of congestion control - the primary one being it prevents the collapse of the Internet. Reacting to packet loss by just trying harder results in nothing but a network filled with doomed packets. This is not theoretical - before Van Jacobson that was the Internet.

I'm not defending the status quo, nor TCP - TCP has deep problems figuring out what is really loss, and it speeds up too slowly yet still doesn't manage to slow down when it should, resulting in induced delay. But the mere fact that it worries about congestion (not just unreliability - one result of congestion) and about sharing the channel isn't its problem.

Have a look at QUIC, LEDBAT, Minion and the recent IETF TAPS BoF for work going on to improve transport options in these areas. It's just the wrong takeaway to read that article and say "UDP is better because it doesn't have congestion control" - you need to consider CC in any reliable UDP-based transport you roll, too.

RFC 5405 provides some advice (http://tools.ietf.org/html/rfc5405) - it's a little dated but still useful.

Excellent point, awesome RFC! It's worth reproducing this point from the abstract, which makes the same point you do:

> Because congestion control is critical to the stable operation of the Internet, applications and upper-layer protocols that choose to use UDP as an Internet transport must employ mechanisms to prevent congestion collapse and to establish some degree of fairness with concurrent traffic.

It's a shame that the internet requires this kind of good behaviour from individual hosts, because it leaves it vulnerable to buggy or malicious nodes. But as long as it does, it's essential that users of UDP know about this.

On the other hand it's very fortunate that the internet architecture is what it is, because the "dumb network, end-to-end protocols" design principle has enabled deployment of new and improved protocols and applications without touching the network routers, which would have been very difficult to coordinate. Might be difficult to believe now but this was a radical idea at the time.

(Or at least enabled it for ~25 years until NAT put a damper on it - let's all keep our thumbs up for IPv6).

> TCP has deep problems figuring out what is really loss. and it speeds up too slowly but still doesn't manage to slow down when it should resulting in induced delay

TCP is really pretty reasonable. Fast retransmit/fast recovery lets TCP eat up small packet losses without entering congestion control. Plus everybody recently upped their intial congestion window to 10.

Also, there was an attempt to deploy ECN (explicit congestion notification) for TCP, but net equipment vendors and providers resisted deploying it. It was specced as an official standards track TCP feature in 2001.

The one major problem TCP currently has is the assumption that packet loss is always caused by congestion, which is a simplification that proves most damaging over radio links.

There are smaller problems (but still worth fixing), like TLS over TCP doing a few more round trips than necessary. QUIC tackles this in particular.

How would you change TCP to improve its behaviour here?

Current radio link layers (3G, WiFi) are designed to play nicely with TCP. Their below-IP layers do their own retransmissions to make up for radio link issues. They have to take radio channel congestion into account - it doesn't help if everyone on the channel just starts shouting louder and repeating themselves.

If you absolutely positively need the lowest lag time between points, then UDP is the way to go. However: 1) UDP will drop the occasional packet. 2) UDP will NOT work behind many corporate firewalls. 3) UDP connections between users on separate networks will fail due to random configuration issues (I am sure there is a reason, I just don't know what it is after years of looking). 4) UDP connections between users on the same DHCP server require different addresses than the global address to make connections. 5) UDP connections can time out if you don't send anything for a few minutes. Easy to fix, hard to find.

In the virtual world platform my team wrote, we started using UDP exclusively, then moved to mixed TCP and UDP, then moved to using TCP for everything except P2P voice connections. With UDP, I had always dreaded going in to see a potential corporate client, not knowing if my networking was going to make it through the firewall.

This is a good discussion of the subject. http://stackoverflow.com/questions/992069/ace-vs-boost-vs-po...

There is no such thing as a "UDP connection". UDP is datagram-based: you send a packet to an address, and either it reaches its destination or it doesn't (regardless of order). A packet can be dropped in case of network congestion, a TTL that reaches 0, etc.

That is: one can easily rewrite a TCP-like protocol on top of UDP.

Almost every real use of UDP is "pseudo connection based", and lots of NAT systems operate on that basis: a packet going (host,port) -> (host2,port2) implies both that there might be packets in the reverse direction and that there may be further packets between that pair of ports.

This isn't baked into the protocol, but in practice everyone relies on it.

While a pair of UDP endpoints is not a "connection" in the TCP sense, you often send packets via UDP from a port on one machine to a different port on the same or another machine. If one of those ports changes, it effectively "breaks" the connection, since you are no longer getting data on the port you expect to receive it from. So while a pair of UDP ports is not a "connection", it is often a useful abstraction to treat them as one.
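The BSD socket API even encodes this abstraction: you can `connect()` a UDP socket. No handshake takes place - the kernel just records the remote (host, port) pair, filters inbound datagrams to that pair, and lets you use plain `send()`/`recv()`. A self-contained sketch (the second socket stands in for a remote host):

```python
import socket

# Stand-in for the remote peer: a bound UDP socket on an ephemeral port.
peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer.bind(("127.0.0.1", 0))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(peer.getsockname())   # records the pair; no packets are sent
sock.send(b"ping")                 # plain send(), no address argument needed

data, addr = peer.recvfrom(64)
assert data == b"ping"
assert addr == sock.getsockname()  # datagrams come from the "connected" port
```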

A firewall treats continuous messages exchange between two endpoints as a connection. This is also what NAT punching takes advantage of.

NAT mappings will time out.

You are objecting to semantics, not substance.

All true, if you read 'connection' to mean 'path'. Yes, firewalls are an issue. Often HTTP tunneling is tried because 'HTTP will always work', but in practice HTTP has more pitfalls than UDP. There are 100 firewall settings for HTTP; you have to get them all right to stream HTTP through a firewall (and through the proxies and ISP servers in the same path). Whereas UDP usually has exactly one setting: enable UDP.

We've been poking UDP holes in firewalls for 5 years; it's the best (most frequently successful) option for enterprise by far.

Failures to communicate on the same subnet with UDP are the well-known 'hairpin' issue, where some routers will not recognize their own outside IP address and short-circuit a packet back to another subnet inside their domain. There's not a lot you can do about it, except use TURN or another UDP proxy.

Years ago, I read about some application protocol (for a game, I think) that sat on top of UDP. It packed small messages into datagrams for transmission, and handled acking and redelivery.

One of the cute things it did was improve reliability for critical messages by preemptively sending them twice, in successive datagrams. That allowed the protocol to avoid an ack-miss-retransmit cycle for those packets, as long as it was only a single datagram that was dropped.
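That trick is easy to sketch (the message-ID framing here is an assumption for illustration): each datagram piggybacks the previous critical message, so a single dropped datagram costs nothing, and message IDs let the receiver discard the duplicate copy.

```python
# Sketch of "send critical messages twice in successive datagrams".
def build_datagrams(messages):
    datagrams, prev = [], None
    for msg_id, payload in messages:
        dgram = [(msg_id, payload)]
        if prev is not None:
            dgram.append(prev)        # redundant copy of the last message
        datagrams.append(dgram)
        prev = (msg_id, payload)
    return datagrams

seen, received = set(), []
# Datagram 0 is dropped, yet message 1 still arrives inside datagram 1.
for dgram in build_datagrams([(1, "fire"), (2, "jump"), (3, "die")])[1:]:
    for msg_id, payload in dgram:
        if msg_id not in seen:        # de-duplicate on message ID
            seen.add(msg_id)
            received.append(payload)

assert received == ["jump", "fire", "die"]  # nothing lost, no retransmit cycle
```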

It's the Quake 3 network protocol. Carmack invented it, and that protocol was the first one to solve the UDP ack problem in a way that didn't impact gameplay. It was a huge deal and was probably one of the reasons the Quake 3 engine generated over a billion dollars in revenue.

Not Q3. Q3 has a more elaborate scheme, keeping more updates in the resend buffer.

You rock! Thanks for correcting me and for pointing out this wonderful read in your other comment: http://www.gamasutra.com/view/feature/131781/the_internet_su...

A neat write-up of the algorithm used can be found here: http://fabiensanglard.net/quake3/The%20Quake3%20Networking%2...

That's a variation of FEC, and it's good stuff: http://en.wikipedia.org/wiki/Forward_error_correction

We're seeing it come back into vogue in TCP and modern protocols. QUIC has an FEC component, and TCP Tail Loss Probe (http://www.ietf.org/proceedings/84/slides/slides-84-tcpm-14....) is a subtle variation on the theme too.

The million-dollar question around FEC is how correlated different loss events are, which has a lot to do with how well it can work.

I guess it's a kind of FEC. To me, FEC usually means that (a) the error-correction information is smaller than the message it covers and (b) the error-correction information is included in the message it covers. The Header Error Control field in an ATM cell is the canonical example for me.

But, yes, sending duplicates in another datagram does allow errors to be corrected without being reported to the sender, so it corrects errors in a forward direction, which I suppose makes it FEC.

That Google tail loss stuff is nice, thanks for the link. It's cool that thirty years after RFC793, we're still making significant improvements to TCP.

That's just FEC with a code rate of 1

X-Wing vs Tie Fighter, a classic read on networking over the internet: http://www.gamasutra.com/view/feature/131781/the_internet_su...

I don't think that was it. They sent the whole of every datagram twice:

> Our solution was simple and surprisingly effective. Every packet would send copy of the last packet. This way if a packet were dropped, a copy of it would arrive with the next packet, and we could continue on our merry way. This would require nearly twice as much bandwidth, but fortunately our system required so little bandwidth that this was acceptable. This would only fail if two consecutive packets were dropped, and this seemed unlikely. If it did happen, then we would fall back on the re-sending code.

The protocol I remember specifically sent some parts of datagrams, the most critical messages, twice.

gsfgf says:

"Hi, I'd like to hear a TCP joke."

"Hello, would you like to hear a TCP joke?"

"Yes, I'd like to hear a TCP joke."

"OK, I'll tell you a TCP joke."

"Ok, I will hear a TCP joke."

"Are you ready to hear a TCP joke?"

"Yes, I am ready to hear a TCP joke."

"Ok, I am about to send the TCP joke. It will last 10 seconds, it has two characters, it does not have a setting, it ends with a punchline."

"Ok, I am ready to get your TCP joke that will last 10 seconds, has two characters, does not have an explicit setting, and ends with a punchline."

"I'm sorry, your connection has timed out. Hello, would you like to hear a TCP joke?"

I'd reply with a UDP joke, but you might not get it

I used to use UDP. It's nice for some things. Things I liked: when you read a UDP message you got the whole message or nothing (it's not a stream). Fast. Multicast!

Things that challenged us: the 64K message size limit (it varied slightly across Linux/Solaris/HP-UX); you can't tell if the other side is listening or got the message; and multicast requires certain IP ranges, which people seem to forget frequently.

I had the UNIX Network Programming tome by W. Richard Stevens, borrowed from my boss. Good for networking.

Yeah, you have to recapitulate some TCP semantics to use UDP effectively: packet numbering (so you can catch out-of-order packets), dice-and-reassemble, congestion control.

But the good news is, you CAN do those things yourself. And avoid the TCP lockups, slowdowns and endless retransmissions.
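The sender side of such a do-it-yourself reliability layer can be sketched like this (class and method names are made up for illustration): unacked messages sit in a retransmit queue keyed by sequence number, acks clear them, and a timer loop (not shown) resends whatever remains overdue.

```python
import time

class ReliableSender:
    """Toy sender side of a reliable-UDP layer."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}                 # seq -> (payload, last_sent_at)

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = (payload, time.monotonic())
        return seq                        # seq + payload would go in a datagram

    def on_ack(self, seq):
        self.pending.pop(seq, None)       # duplicate acks are harmless

    def due_for_resend(self, timeout=0.2, now=None):
        now = time.monotonic() if now is None else now
        return sorted(s for s, (_, t) in self.pending.items()
                      if now - t >= timeout)

s = ReliableSender()
for p in (b"a", b"b", b"c"):
    s.send(p)
s.on_ack(1)                               # message 1 confirmed by the peer
assert sorted(s.pending) == [0, 2]        # 0 and 2 still await acknowledgment
assert s.due_for_resend(now=time.monotonic() + 1.0) == [0, 2]
```

This is exactly the piece that needs congestion control bolted on before it's safe to use on a shared network, as other comments in this thread point out.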

The W. Richard Stevens book wears very well. He was a good man.

As an aside, it's generally a mistake to use an IP size > PMTU (commonly ~1500). The result is IP fragmentation, and IP stacks commonly have very limited resources devoted to IP reassembly (otherwise it's a DoS attack) - so it's very easy for your fragmented UDP message to get "lost" even when the network transport does not have an error.

When it seems to work OK, it's basically because your app is the only one doing it. If it were common practice, everyone would fight for the same reassembly buffer space and the OS would have to start dropping things. Your application also becomes trivially DoS-able at low bandwidths by an attacker intentionally using up the reassembly buffer.
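The usual fix is to do the dicing at the application layer and keep every datagram comfortably under the path MTU. A sketch (the 1200-byte budget and the `(msg_id, index, total)` header are conservative assumptions, not a standard):

```python
import struct

CHUNK = 1200  # well under a ~1500-byte PMTU, leaving room for IP/UDP headers

def chunk_message(msg_id: int, data: bytes):
    """Split one application message into MTU-safe datagram payloads."""
    total = max(1, (len(data) + CHUNK - 1) // CHUNK)
    for i in range(total):
        header = struct.pack("!IHH", msg_id, i, total)  # id, index, count
        yield header + data[i * CHUNK:(i + 1) * CHUNK]

datagrams = list(chunk_message(7, b"x" * 3000))
assert len(datagrams) == 3                         # 1200 + 1200 + 600 bytes
assert all(len(d) <= CHUNK + 8 for d in datagrams)  # never triggers IP fragmentation
```

The receiver reassembles using the `(msg_id, index, total)` header, which keeps loss handling in the application instead of the kernel's shared reassembly buffers.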

Biggest problem with UDP: no support at all in modern browsers without a plugin. So when doing web games, you can't really get around using WebSockets, which are TCP-based.

Or, the biggest problem with web games: no UDP.

This will no longer be the case once you can use WebRTC data channels. It's just a matter of making sure your server can support the protocol.

The data channels run on UDP by default?

Yes, SCTP over DTLS over UDP. TCP doesn't really work if you want to do NAT traversal, which WebRTC needs to do because it's aimed at P2P.

And NAT traversal only works if you focus on broken NAT implementations. If you have a "secure" NAT like a firewall may implement, then there is no such thing as workable NAT traversal.

You have that backwards! NATs are tasked with facilitating communications (same as routers).

Yes, firewalls can be configured to restrict communications. And you can have a combined NAT+firewall that is so configured.

Just like routers vs. firewalls: routers forward packets as well as they know how, firewalls selectively drop them, and router-firewall combos can be configured either way.

Reliance on restrictive centralized firewalls is a pretty 1990s mindset, and doesn't lead to good security outcomes in the current world where users constantly shuffle devices between networks, VPN into your network, etc. You just end up making your internal network unusable for production work.

That's an interesting viewpoint and would go against practically everyone's (except the IETF's) expectation for the NAT to provide a default firewall-esque policy.

I suppose if you view NAT to "facilitate communications" then mapping a port for all inbound IPs instead of symmetric makes sense.

In my experience, people behind NAT very much like Skype, games, WebRTC, file sharing, etc. to work. They're usually not very knowledgeable about firewalls, but most often use host-based firewalls - which is good, since those are generally much easier to configure according to your app's needs.

The reality is that with broadband connections these days there is generally little or no packet loss unless your ISP is having problems. In the case where you are getting packet loss it usually tends to be around 50%, so UDP is going to have trouble as well.

Also, these days with fast retransmit TCP isn't as bad as it used to be when there are network problems.

I've been using TCP for over 20 years for multi-player games, VOIP and web conferencing, and it works incredibly well. Even if you do all the legwork to get UDP working reliably (which TCP gives you for free), you still have the problem of firewalls.

Packet loss is usually due to upstream router congestion - you can send packets to your router much faster than it can forward them over a much slower link to your ISP. So loss depends on your burst size/interval and your router's buffer size.

Wireless still has interesting(?) packet-loss behaviors. It can lose single packets, bursts of packets or even too-large packets (MTU exceeded). It depends on the AP and your local radio noise characteristics.

A very good recap of how networking in Quake 3 works without reliable packet delivery:


Simple, yet robust.

I've been hoping someone makes a decent server-side WebRTC peer, meaning that we can use UDP in the browser and really push HTML5 gaming forward. It seems to go completely under the radar with all this talk of audio/video P2P streaming.

from the_skys_kid:

I'll defend UDP.

UDP is the honey badger of the internet protocol suite.

UDP is all about the transaction. UDP is standing on a cliff yelling, "Come at me, bro", whether you're there or not.

UDP is a man's protocol doing real shit like bootstrapping your ass and slapping an IP on you. Get up, motha fucka!

UDP will talk shit to one of you or all of you. UDP ain't scared. UDP brought the fear.

UDP understands that you may be slow sometimes. So UDP will wait for your sorry ass. UDP grew up without a father, too.

UDP sends a message and couldn't give a fuck if you got it or not.

UDP got a message from you saying that you got his messages and guess what? UDP didn't even open it! Not one fuck given.

Don't try to shake UDP's hand! You crazy?

And, when UDP dies because you weren't available, UDP doesn't shed a tear. UDP is hardcore. He's going out even if he knows you ain't there. UDP is a goddam one-man slaughter house. Why?

Because UDP doesn't give a fuck.

In re the sections on "hiding the lag" - Bungie (the guys behind the original Halo games) have a pretty cool presentation on how they hid lag in Halo Reach.


Just use RakNet http://www.jenkinssoftware.com

I mean if you want to ship a product. If your goal is to learn networking, then by all means, roll your own.

One side note - what the protocol is on the wire and how you're using it are not quite the same thing. With regard to head-of-line, in principle you can look at later TCP packets before a dropped packet comes in. Of course, correcting framing may be expensive or impossible, and (more importantly, for most applications) if the OS is handling TCP for you you'd have to find some way to cajole it into passing those packets to you.

I have used UDP for two major projects: a game engine while working at Angel Studios (networking was tightly coupled with camera and vehicle AI) and for the NMRD project in the late 1980s (NMRD could detect nuclear tests based on seismic waves).

I am surprised that UDP is not used more often when the business cost of lost data is small.

Some RTC implementations (VoIP and video) are using both in order to get through firewalls - try UDP first, resort to TCP (including even HTTP over TCP) if UDP can't connect.

The TCP fallback only consistently works well (in terms of QoE) in cases in which the TCP legs are short.

Though to some extent it sometimes feels like an RFC 3093[1] solution, as if one is running TCP on HTTP.

[1] http://www.ietf.org/rfc/rfc3093.txt (funny, an April 1 RFC applicable to a serious post on April 1)

One thing I've always wondered - has anyone written something that works over UDP but offers fairly basic TCP-like reliability functionality? So you'd have the speed advantage of UDP, however you could test the validity of your data, for instance.

We have one, called STRAW. Our founder asked me to open-source it; but in the crush of work it never happened. Now he's gone; don't know if it would still be allowed by new mgmt.

Advantage of STRAW over other reliable-udp protocols: multiple connections with a single port/NAT pinhole. Multiple channels per path.

Someone should write a series of articles about this issue from the POV of Javascript/browser clients. There, UDP isn't available at all, and we also have to deal with quite a bit of pain in terms of differences between platform implementations.

Well, you can use UDP in data channels:


I am very enthusiastic about the future of WebRTC, but the last time I looked, it was a bit too new. I see now that it's supported in Chrome, Firefox, and Opera stable. However, that only takes care of the browser side. I don't know of any Clojure library/server I can use with WebRTC.

Safari might support WebRTC in the coming months. Apple appointed 4 members to the WebRTC Working Group: http://www.w3.org/2000/09/dbwg/details?group=47318&public=1

Since the signaling is not part of the standard, on purpose, you can choose whatever you like (XMPP, SIP...) or create your own solution. On the server side, TURN is implemented to get through the NATs and firewalls; that helps.

Web browsers were created to browse text files using hyperlinks. If you want to do advanced networking with a browser, use a plugin like Java.


Yup, it's April Fools on the Internet, and everyone's a comic genius!

How is that a joke? It's unpopular, but entirely factual.

It works technically, but the main reason for using Javascript is that distribution and discovery is fantastic and requires no install. Java plugins have a bad reputation in terms of security and require an additional install. Using a Java plugin would take away any distribution and discovery advantage of the browser platform.

Yeah, I completely agree, but it's the only option for UDP. :-)

I wouldn't say the fact is unpopular. What's unpopular -- or using two better words, wrong and stupid -- is the fallacy that because something was originally created for a certain purpose, it must be used for that purpose and only that purpose for the rest of eternity.

Lately, you can just use things like http://www.pubnub.com/ and do both and not have to worry.

Socket servers are so 80's.

TCP can get you quite far, even in a fast-paced multiplayer game, as long as the game is suited to it.

For example, the multiplayer version of asteroids in this article uses TCP sockets:


Cheers, Paul.

The problem with using UDP is that many ISPs block UDP packets because UDP doesn't have features like congestion control. So using it for an internet game is out of the question, because there's a chance that a bunch of your users won't be able to play online.

I've heard before that TCP is a really awful fit for mobile. So shouldn't there be a general replacement for TCP on mobile? Everybody inventing their own custom fix does not sound like the most efficient way to solve this problem.

> So shouldn't there be a general replacement for TCP on mobile?

But what would this magical protocol do differently that would make it work better?

In my opinion, most of TCP's semantics arise not out of the network, but rather the data itself. I can't have packets getting lost in the middle of an SSH session: it just doesn't make sense. My keystrokes are a stream of data that must be in order, and must be delivered: thus TCP.

I commute daily, and use 4G on my commute. Another poster has problems very similar to mine[1]:

> As an aside, these days it's worth noting that a 3G, WLAN or 4G link layer will try very hard not to drop any packets. So when you're doing UDP, watch out for those 40000 millisecond old packets that will burst to your lap once the rain cloud moves away from the cell tower

You can see this just by pinging in the background. The loss of good signal will result in many packets getting dropped, but then when signal resumes, ping will receive the packets that it had presumed lost, and you'll get things like:

  64 bytes from icmp_seq=65 ttl=46 time=37324.3 ms
  64 bytes from icmp_seq=66 ttl=46 time=36324.3 ms
  64 bytes from icmp_seq=67 ttl=46 time=35324.3 ms
  64 bytes from icmp_seq=68 ttl=46 time=34324.3 ms
  ... and so on ...
From which you can tell that they were buffered somewhere, and then when the signal cleared up, basically transmitted quickly and cleanly. But it makes you wonder if TCP connections then receive a flood of duplicate packets (the original + the resends); I've not watched wireshark closely enough to see.
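You can observe the same effect at the application level by tagging each datagram with a sequence number and classifying arrivals. A minimal sketch (a hypothetical helper, not something from the thread) that distinguishes fresh packets from link-layer resends and long-buffered stragglers:

```python
class ArrivalTracker:
    """Classify incoming datagrams by sequence number, to spot the
    duplicate/late bursts a lossy radio link can produce."""

    def __init__(self):
        self.seen = set()
        self.highest = -1

    def classify(self, seq):
        if seq in self.seen:
            return "duplicate"  # original plus a link-layer resend
        self.seen.add(seq)
        if seq < self.highest:
            return "late"       # buffered during the outage, delivered after
        self.highest = seq
        return "new"


tracker = ArrivalTracker()
for seq in (0, 5, 3, 5):
    print(seq, tracker.classify(seq))
```

Seeing packet 3 arrive after packet 5 ("late"), or packet 5 twice ("duplicate"), is exactly the post-outage burst described above.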

For all the ads that make you think speed is the determining factor in who has the better 4G, I'd say simple packet loss or connectivity loss makes much more of a difference to me day-to-day. "Coverage", you might call it, except typically according to the phone there is a signal, it is 4G, but the strength is just so poor as to be unusable. I use an app to get this info (the bars in the corner just aren't fine-grained enough), and it reports things like "Net. type: HSDPA * 7.2 Mbps", and that'd be okay, except: "Net strength: -99 dBm * 7 ASU", which is too low for connectivity; in my experience, I require >-80 dBm for actual data to transmit successfully (ping packets to round trip, etc.).

  [1]: https://news.ycombinator.com/item?id=7507969

> In my opinion, most of TCP's semantics arise not out of the network, but rather the data itself. I can't have packets getting lost in the middle of an SSH session: it just doesn't make sense. My keystrokes are a stream of data that must be in order, and must be delivered: thus TCP.

Mosh[1] is basically (a better) SSH over UDP. It fares a lot better than SSH on mobile connections. It does away with hanging connections and such nonsense.

1: http://mosh.mit.edu/

A big problem for TCP with mobile is that when packets get lost, TCP assumes there's a lack of bandwidth, and throttles the throughput. On mobile, it's more likely it just got lost, and needs to be resent as soon as possible. TCP doesn't do that. It waits. TCP connections deteriorate on mobile.
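One contributing mechanism (a sketch of the retransmission timer, not the full congestion-window machinery): on each consecutive timeout TCP doubles its retransmission timeout (RFC 6298 exponential backoff), so after a radio dropout the connection can sit idle for a long time before it even tries again.

```python
def rto_after_losses(initial_rto, losses, max_rto=60.0):
    """RFC 6298-style backoff: each consecutive retransmission timeout
    doubles the RTO (capped), so a burst of radio loss can leave a TCP
    connection waiting tens of seconds before its next retry."""
    rto = initial_rto
    for _ in range(losses):
        rto = min(rto * 2, max_rto)
    return rto


# A 1-second initial RTO after 5 consecutive losses has grown to 32 s.
print(rto_after_losses(1.0, 5))
```

The exact initial RTO and cap vary by implementation; the point is the doubling, which is why a brief outage can stall a TCP stream far longer than the outage itself.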

Back when I did this kind of stuff I used both of them: UDP for nonessential stuff like player position.

Where a lost packet did not matter, there would still be a new player position packet half a second later.

Then for chat, score, and stuff like that, TCP was used.

For a realtime game, player position would be kind of essential, no? "move here unreliably" "shoot at x reliably"

Mixing reliable and unreliable updates for game logic would seem to result in a lot of complexity as things can be out of sync in a variety of ways now.

The Quake 3 writeup cites exactly this problem as the reason Carmack abandoned the reliable+unreliable combination:

"His next iteration involved using UDP with both reliable and unreliable data, pretty much what many would consider a standard networking architecture. However standard mixed reliable/unreliable implementations tend to generate very hard to find bugs, e.g. sequencing errors where guaranteed messages referenced entities altered through unreliable messages."

(from http://fabiensanglard.net/quake3/The%20Quake3%20Networking%2... )

> For a realtime game, player position would be kind of essential, no?

Yes, but UDP is choosing unreliable transport, not throwing away data entirely. No matter the transport, once you start dropping a large enough number of packets it will degrade (or break) the experience. The use case being discussed is when getting the next packet fast is more important than the overhead of TCP. You can build retries or various other forms of reliability into your UDP protocol when outdated information is still useful.
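The "outdated information is still useful" pattern usually boils down to sequence-numbered updates where anything older than the newest applied state is dropped. A minimal sketch (hypothetical packet format: a `(seq, x, y)` tuple):

```python
def apply_update(state, packet):
    """Apply a position update only if it is newer than what we have;
    stale packets are discarded rather than applied out of order."""
    seq, x, y = packet
    if seq <= state.get("seq", -1):
        return False  # obsolete position data: worse than no data, drop it
    state.update(seq=seq, x=x, y=y)
    return True


state = {}
apply_update(state, (2, 1.0, 2.0))      # applied
apply_update(state, (1, 9.0, 9.0))      # arrived late, dropped
print(state)
```

The sender keeps transmitting fresh snapshots regardless of loss, and the receiver's only job is to never move backwards; retries are only worth adding for data that stays relevant.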

> Mixing reliable and unreliable updates for game logic would seem to result in a lot of complexity

Yes and no. It's complex because game state is complex to begin with, but less daunting than you may think because you can compartmentalize to different components of the game. You're not likely to using multiple transports for the same information. See the parent's chat vs player position example.

Obsolete player position data is worse than no player position data.

ENet is a networking library that uses UDP but adds optional reliability, sequencing, and other features on top, so it has the advantages of UDP with some of the advantages of TCP.

I don't get the bottom line part: how can you send something via "HTTP"? What method of transport layer is being used?

The HTTP service sends by responding to a client request. For pseudo-streaming, it's usually just client polling.

not an expert, but as far as I know, HTTP is a layer over TCP

Yup, that's what I thought as well. After looking into it some more, I think HTTP/S implies you are using one of the transport layers (TCP); conversely, you can access the transport layer directly without going through one of the traditional application-layer protocols. But that doesn't completely mesh with the parts about modifying TCP or UDP.
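You can see that HTTP is just bytes over a TCP connection by speaking it on a raw socket. A self-contained sketch (the throwaway local server and handler are invented for the demo):

```python
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler


class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


# Throwaway HTTP server on an OS-assigned port, running in the background.
srv = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# A hand-written HTTP request over a plain TCP socket -- no HTTP library.
s = socket.create_connection(srv.server_address)
s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
reply = b""
while chunk := s.recv(4096):
    reply += chunk
print(reply.split(b"\r\n")[0])  # status line, e.g. b'HTTP/1.0 200 OK'
```

There is no "HTTP packet" on the wire, just a TCP stream whose bytes happen to follow the HTTP grammar; that is what "HTTP is a layer over TCP" means.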

You might as well paint "DDoS the hell out of me" on your forehead if you rely on UDP. Use TCP and block all UDP at your edge. Backhaul your own UDP traffic through GRE tunnels to offsite VPS's.

If you have cool transit providers, you can have them place upstream filters to block UDP. That might be a pipedream though, I don't know of any large providers who will do that for you these days.

Why the downvote? Is there anything technically wrong in my answer?

Kids these days.

Could anyone point me to books and resources for beginners on the topic of multiplayer game development?

I've written a series of articles plus a live demo, with very little previous knowledge required, here: http://gabrielgambetta.com/fpm1.html

Thanks a lot. It looks very interesting. I will go through it when I have some time. Do you know any good book that covers this subject?

Not really... besides these articles, the other two frequently cited sources are Valve (https://developer.valvesoftware.com/wiki/Source_Multiplayer_...) and Gaffer (http://gafferongames.com/game-physics/networked-physics/).

We all say essentially the same, at different levels of complexity. I have a live demo :) But you should read all three to get a complete picture.

funny, I just checked your profile and noticed you are also in Switzerland :-)

Small world :)

Be weary of buffer overruns when using a language like C :).

When you're weary of buffer overruns and can't be wary of buffer overruns, you stop using C.

This article is boss. The TCP vs. UDP holy war is not going to end any time soon; it's nice that some people can actually explain the differences in a simple manner.
