
I was excited when I saw the title -- UDP is the workhorse for data transfers on many projects I work on. The info, though, was very basic.

TL;DR version: packets get dropped when some buffer, either on your local computer or at a router between here and there, gets full.
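That buffer-fullness failure mode is easy to reproduce on loopback. The sketch below (my own illustration, not from the article; exact counts are OS-dependent, and the kernel may round the requested buffer size up) shrinks a UDP socket's receive buffer, bursts datagrams at it without reading, then counts the survivors:

```python
import socket
import time

# Sketch: fill a UDP receive buffer faster than it is drained and count
# how many datagrams survive. The buffer size actually granted, and how
# many datagrams fit in it, vary by OS.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # ask for a tiny buffer
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 1000
for _ in range(sent):
    send.sendto(b"x" * 512, addr)  # burst; nothing reads the socket yet

time.sleep(0.2)  # let in-flight datagrams land in (or bounce off) the buffer
recv.setblocking(False)
received = 0
while True:
    try:
        recv.recvfrom(2048)
        received += 1
    except BlockingIOError:
        break

print(f"sent={sent} received={received} dropped={sent - received}")
```

Nothing on the wire was lossy here; every drop happens at the full socket buffer, which is the article's point.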

I do not want to sound too critical -- the info is good for someone who has never heard of UDP.

But I was hoping for more: substantiation of why full buffers are the main source of UDP drops (e.g., can smart throttling take some or most of the blame? Given the need to drop a packet, dropping UDP is usually less painful than dropping TCP, etc.), quantitative numbers from sample networks or hardware, and so on.

Dropping TCP is normally preferable, I'd have thought, as it'll cause the TCP socket to back off. Dropping UDP is less likely to lead to such behaviour.

TCP is rather sensitive to drops: a very small packet-loss rate cripples a TCP connection's throughput. It is a well-known research topic, and probably the holy grail of the field.
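The classic back-of-the-envelope estimate for this sensitivity is the Mathis model, where steady-state throughput scales as (MSS / RTT) / sqrt(p) for loss rate p. A quick sketch of the arithmetic (the constant 1.22 and the sample MSS/RTT values are standard illustrative choices, not measurements):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput in bytes/s from the
    Mathis model: rate ~ (MSS / RTT) * C / sqrt(p), with C ~ 1.22."""
    return (mss_bytes / rtt_s) * 1.22 / math.sqrt(loss_rate)

# 1460-byte MSS, 50 ms RTT: even 1% loss caps a flow at a few Mbit/s.
for p in (0.0001, 0.001, 0.01):
    mbps = mathis_throughput(1460, 0.05, p) * 8 / 1e6
    print(f"loss={p:.4f} -> ~{mbps:.1f} Mbit/s")
```

A 100x increase in loss rate costs a factor of 10 in throughput, which is why "very small" loss fractions matter so much to TCP.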

It's more that UDP is designed as an unreliable protocol: packet losses are expected to happen, and applications will have to deal with them. Meanwhile, dropping a TCP packet may cause the congestion control algorithm to back off, but you're guaranteed to waste more traffic while those TCP sessions figure out WTF happened and retransmit.

UDP isn't designed to be unreliable. It inherits the reliability of whatever it's built and run on, and doesn't compensate for it.

It's designed to be unreliable in the same way that a car with crumple zones is designed to be driven into a wall.

Of course dropping traffic isn't the intended purpose of the protocol, but it's designed to allow unreliable communication when you're stuck with that situation.

UDP should be considered in the context of IP networking and TCP. In that canonical and typical context, it is designed to be the unreliable alternative to TCP, as an IP transport-layer protocol.

I recall reading that UDP was left as such a thin wrapper over packets so that when another protocol like TCP was found to be the wrong solution for a problem, you could easily build your own reliability and congestion protocols over the top of UDP.

Of course, I think that reference was in a book, so I can't find it now.
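The "build your own reliability on top" idea boils down to adding sequence numbers, acks, and retransmission yourself. A hypothetical stop-and-wait sketch (the channel here is simulated with random drops rather than a real socket, and the ack path is assumed lossless for brevity):

```python
import random

random.seed(1)

def unreliable_send(channel, pkt, loss=0.3):
    """Simulated lossy datagram hop: the packet survives with
    probability (1 - loss), mimicking a UDP drop."""
    if random.random() > loss:
        channel.append(pkt)

def send_reliably(items, loss=0.3, max_tries=50):
    """Stop-and-wait: tag each item with a sequence number and
    retransmit until the (simulated) receiver sees it."""
    delivered = []
    for seq, item in enumerate(items):
        for _ in range(max_tries):
            channel = []
            unreliable_send(channel, (seq, item), loss)
            if channel:  # receiver got it; treat the ack as reliable here
                delivered.append(channel[0][1])
                break
    return delivered

print(send_reliably(["a", "b", "c"]))  # retransmits until each item lands
```

Real designs on this pattern (QUIC being the prominent modern example) add windowing and congestion control, but the core loop is the same.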

Semantics. There was a conscious decision involved.

"Smart throttling"? The point of UDP is that it doesn't impose a flow control discipline.

If a link in the path cannot carry all currently offered packets (in whatever protocol), then the node feeding that link needs to decide what to send and what to delay or drop. It can do this randomly, by arrival order (newest/oldest), or try to be smart about what causes the least impact.

A few systems I saw would, in this case, heavily favor dropping UDP on the assumption that the applications using it can take dropped packets in stride; plus, dropping TCP packets only causes more traffic in the short term due to retransmissions.

In that case the drops can have nothing to do with buffer sizes -- a UDP packet can still be dropped even if it is the only UDP packet in a large buffer.
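That policy is easy to picture as a bounded queue that, when full, evicts a queued UDP packet before dropping an arriving TCP one. A toy sketch (my own illustration, not any real router's algorithm):

```python
from collections import deque

class ProtocolAwareQueue:
    """Bounded FIFO that prefers sacrificing UDP: on overflow, evict a
    queued UDP packet first; only drop the arrival if none exists."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()

    def enqueue(self, proto, payload):
        if len(self.q) >= self.capacity:
            for i, (p, _) in enumerate(self.q):
                if p == "udp":
                    del self.q[i]  # evict the UDP packet, even if it's the only one
                    break
            else:
                return False  # nothing evictable: drop the arriving packet
        self.q.append((proto, payload))
        return True

q = ProtocolAwareQueue(capacity=3)
q.enqueue("udp", 1)
q.enqueue("tcp", 2)
q.enqueue("tcp", 3)
q.enqueue("tcp", 4)  # queue is full: the lone UDP packet gets evicted
print(list(q.q))     # only TCP packets remain
```

Note the UDP packet here was dropped despite the buffer never being undersized for it, which is exactly the parent comment's point.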
