Dropping TCP is normally preferable, I'd have thought, as it'll cause the TCP socket to back off. Dropping UDP is less likely to lead to such behaviour.



TCP is rather sensitive to drops. A very small fraction of packet loss can cripple a TCP connection's throughput. Loss tolerance is a well-known research topic in TCP, and probably the holy grail of the field.
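For a rough sense of how sharply loss caps throughput, you can plug numbers into the Mathis et al. (1997) approximation, throughput <= (MSS / RTT) * (C / sqrt(p)). A few lines of Python; the MSS and RTT values here are just assumptions for illustration:

    # Back-of-the-envelope illustration of how loss limits TCP throughput,
    # using the Mathis et al. (1997) approximation:
    #   throughput <= (MSS / RTT) * (C / sqrt(p)),  C ~= sqrt(3/2)
    # MSS and RTT below are assumed values, not measurements.
    import math

    MSS = 1460          # bytes, typical Ethernet-sized segment
    RTT = 0.05          # seconds (50 ms), assumed round-trip time
    C = math.sqrt(1.5)  # constant from the Mathis model

    for loss in (1e-4, 1e-3, 1e-2, 5e-2):
        bw = (MSS / RTT) * (C / math.sqrt(loss)) * 8 / 1e6  # Mbit/s
        print(f"loss {loss:.4%}: ~{bw:7.1f} Mbit/s ceiling")

With a 50 ms RTT, even 1% loss already caps the connection at a few Mbit/s regardless of how fat the pipe is.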


It's more that UDP is designed to be an unreliable protocol: packet losses are expected to happen and applications will have to deal with it. Meanwhile, dropping a TCP packet may cause the congestion control algorithm to back off, but you're guaranteed to waste more traffic while those TCP sessions figure out WTF happened and retransmit.


UDP isn't designed to be unreliable. It inherits the reliability of what it's built and run on, and doesn't compensate for it.


It's designed to be unreliable in the same way that a car with crumple zones is designed to be driven into a wall.

Of course dropping traffic isn't the intended purpose of the protocol, but it's designed to allow unreliable communication when you're stuck with that situation.


UDP should be considered in the context of the IP network and TCP. In that canonical and typical context, it is designed to be the unreliable alternative to TCP, as an IP transport-layer protocol.


I recall reading that UDP was left as such a thin wrapper over IP packets so that when another protocol like TCP was found to be the wrong solution for a problem, you could easily build your own reliability and congestion protocols on top of UDP (along the lines of the sketch below).

Of course, I think that reference was in a book, so I can't find it now.
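That layering is straightforward to do yourself. Here's a minimal stop-and-wait sketch of reliability on top of UDP: number each datagram, wait for an ACK, retransmit on timeout. The address, port, timeout, and framing are assumptions for illustration, not any particular protocol:

    # Minimal stop-and-wait reliability layered on UDP.
    # Peer address, timeout, and retry count are assumed for the sketch.
    import socket

    PEER = ("127.0.0.1", 9999)   # hypothetical receiver
    TIMEOUT = 0.5                # seconds before retransmitting
    MAX_RETRIES = 5

    def send_reliable(sock, seq, payload):
        """Send one datagram and block until the matching ACK arrives."""
        packet = seq.to_bytes(4, "big") + payload
        for _ in range(MAX_RETRIES):
            sock.sendto(packet, PEER)
            try:
                ack, _ = sock.recvfrom(4)
                if int.from_bytes(ack, "big") == seq:
                    return True        # peer confirmed receipt
            except socket.timeout:
                pass                   # lost data or lost ACK: resend
        return False

    if __name__ == "__main__":
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(TIMEOUT)
        ok = send_reliable(s, 1, b"hello over unreliable UDP")
        print("delivered" if ok else "gave up after retries")

Real protocols built this way (sliding windows, congestion control, etc.) are exactly what things like QUIC do on top of UDP.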


Semantics. There was a conscious decision involved.



