TL;DR version: packets get dropped when some buffer, either on your local machine or at a router between here and there, gets full.
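You can see the local-buffer case for yourself with a quick sketch (Python, loopback only; the buffer size, datagram size, and send count here are arbitrary, and the exact number that survives depends on your kernel's buffer accounting): shrink a UDP socket's receive buffer, flood it without reading, then count what actually made it through.

```python
import socket

# Receiver with a deliberately tiny receive buffer.
# The kernel may round this up to its own minimum.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
recv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
SENT = 1000
for _ in range(SENT):
    send.sendto(b"x" * 512, addr)  # fire-and-forget: no backpressure, no error

# Now drain whatever actually fit in the buffer.
recv.settimeout(0.2)
received = 0
try:
    while True:
        recv.recvfrom(2048)
        received += 1
except socket.timeout:
    pass

print(f"sent {SENT}, received {received}, dropped {SENT - received}")
send.close()
recv.close()
```

Note that every `sendto` succeeds; the drops happen silently on the receive side, which is exactly why UDP loss is easy to miss.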
I do not want to sound too critical -- the info is good for someone who has never heard of UDP.
But I was hoping for more information. More substantiation on why full buffers are the main source of UDP drops (e.g., can smart throttling take some/most of the blame -- given the need to drop a packet, dropping a UDP packet is usually less painful than dropping a TCP one, etc.)? Any quantitative numbers on a sample network / hardware? Etc.
Of course dropping traffic isn't the intended purpose of the protocol, but it's designed to allow unreliable communication when you're stuck with that situation.
Of course, I think that reference was in a book, so I can't find it now.
A few systems I saw would, in this case, heavily favor dropping UDP, on the assumption that applications using it can take dropped packets in stride; besides, dropping TCP packets only generates more traffic in the short term due to retransmissions.
In that case the drops may have nothing to do with buffer sizes -- a UDP packet can still be dropped even if it is the only UDP packet in a large, mostly empty buffer.