
That's not an unreasonable approach. When I came up with the original approach to congestion control in TCP 30 years ago (see Internet RFCs 896 and 970), TCP behavior under load was so awful that connections would stall out and fail. The ad-hoc solutions we put in back then at least made TCP well behaved under load.

In the early days of TCP/IP, it was clear that there was no known solution to congestion in the middle of a big datagram network. There still isn't. Out near the edges, we can use fair queuing, which I invented and named, and which is offered in most Cisco routers for links below Ethernet speeds. We still don't know what to do in the middle of the network. "Random Early Drop" is an ugly hack, but it's what we have in the big routers. What saved us for years is that fiber backbones had so much more bandwidth than the last mile consumer connections that the congestion stayed near the ends. Then came streaming HD video.
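To make the fair-queuing idea concrete, here's a toy per-flow round-robin scheduler in Python. The names and structure are mine for illustration only; real routers do this in hardware or with much cleverer data structures.

    from collections import OrderedDict, deque

    class FairQueue:
        # Toy fair queuing: one queue per flow, served round-robin, so a
        # single heavy flow can't starve everyone else the way one big
        # FIFO under load can.
        def __init__(self):
            self.flows = OrderedDict()   # flow_id -> deque of packets

        def enqueue(self, flow_id, packet):
            self.flows.setdefault(flow_id, deque()).append(packet)

        def dequeue(self):
            if not self.flows:
                return None
            flow_id, queue = next(iter(self.flows.items()))
            packet = queue.popleft()
            del self.flows[flow_id]      # rotate this flow to the back
            if queue:
                self.flows[flow_id] = queue
            return packet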

The treatment of congestion went off in a different direction than I'd expected. Originally, ICMP Source Quench messages were used for congestion control. Routers were expected to tell endpoints when to slow down. But they weren't implemented in Berkeley's networking stack, and gradually were phased out. (They would have been a denial of service attack vector, anyway.) Instead, congestion was inferred from packet loss. Even worse, congestion is inferred from not having a packet acknowledged, so you don't know which direction is congested. That's not a particularly good approach, but it's what we've got.

I originally took the position that, as an endpoint, you were entitled to one packet in flight. That is, the network should be provisioned so that every connection can have one packet in transit without packet loss. If you have that much capacity, and you have fair queuing in routers, you only lose packets if you have more than one in flight. You're thus competing with yourself for bandwidth, and an "increase slowly, back off big on packet loss" strategy works fine.
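In other words, the classic additive-increase / multiplicative-decrease rule. A minimal Python illustration (the constants are the textbook one-segment increase and halve-on-loss, not tuned values from any particular stack):

    def update_cwnd(cwnd, loss_detected):
        # cwnd is in segments. Halve on loss ("back off big"), otherwise grow
        # by about one segment per round trip ("increase slowly").
        if loss_detected:
            return max(1.0, cwnd / 2)
        return cwnd + 1.0 / cwnd   # called once per ACK in congestion avoidance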

In a world with Random Early Drop, not enough fair queuing, and big FIFO buffers (someone called this "bufferbloat"), you get both packet loss and excessive transit time. It's hard for the endpoints to adapt to that no matter what they do.

Anyway, there's nothing wrong with using a smarter algorithm to calculate optimal window sizes and retransmission times. As long as it evaluates how well it's doing and feeds that back into the control algorithm in a rational way, that's fine. Today, you could probably afford to run machine learning algorithms in routers. So go for it!
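As a concrete example of measurement feeding back into the control loop, here's the standard smoothed-RTT estimator (in the spirit of RFC 6298) that drives the retransmission timer, sketched in Python; a smarter or learned algorithm would simply replace the fixed-gain update rule with something better:

    ALPHA, BETA = 1/8, 1/4          # classic RFC 6298 gains

    class RtoEstimator:
        def __init__(self):
            self.srtt = None        # smoothed round-trip time
            self.rttvar = None      # smoothed RTT deviation

        def on_rtt_sample(self, rtt):
            # Each new measurement is fed back into the timeout estimate.
            if self.srtt is None:
                self.srtt, self.rttvar = rtt, rtt / 2
            else:
                self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt)
                self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt
            return self.rto()

        def rto(self, min_rto=1.0):
            # Retransmission timeout: smoothed RTT plus a margin for variance.
            return max(min_rto, self.srtt + 4 * self.rttvar)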

If you're going to do that, it's a good time to fix delayed ACK as well. As is well known, the interaction between delayed ACKs and the Nagle algorithm is awful. This is a historical accident; I implemented one, the Berkeley guys implemented the other, and by the time I found out I was out of networking and involved with a small startup called Autodesk. So that never got fixed. Here's how to think about that. A delayed ACK is a bet. You're betting that there will be a reply, upon which an ACK can be piggybacked, before the fixed timer runs out. If the fixed timer runs out, and you have to send an ACK as a separate message, you lost the bet. Current TCP implementations will happily lose that bet forever without turning off the ACK delay. That's just wrong.

The right answer is to track wins and losses on delayed and non-delayed ACKs. Don't turn on ACK delay unless you're sending a lot of non-delayed ACKs closely followed by packets on which the ACK could have been piggybacked. Turn it off when a delayed ACK has to be sent.
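Sketched in Python, that bookkeeping might look like this. The window size, threshold, and the per-ACK callback are my illustrative assumptions, not anything in an existing stack:

    class DelayedAckPolicy:
        def __init__(self, window=16, enable_threshold=0.9):
            self.delay_acks = False   # is ACK delay currently enabled?
            self.recent_wins = []     # True = ACK rode (or could have ridden) on data
            self.window = window
            self.enable_threshold = enable_threshold

        def on_ack_outcome(self, piggyback_possible):
            # piggyback_possible: reply data followed soon enough that the ACK
            # was, or could have been, carried on it -- i.e. the bet was winnable.
            if self.delay_acks and not piggyback_possible:
                # Lost the bet: the delay timer fired and a bare ACK went out.
                self.delay_acks = False
                self.recent_wins.clear()
                return
            self.recent_wins.append(piggyback_possible)
            self.recent_wins = self.recent_wins[-self.window:]
            # Only turn delay on once nearly every recent ACK would have been
            # piggybacked on a closely following data packet.
            if len(self.recent_wins) == self.window:
                win_rate = sum(self.recent_wins) / self.window
                self.delay_acks = win_rate >= self.enable_threshold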

I should have pushed for this in the 1980s.

John Nagle




John, if you care to get back in touch with Jim Gettys and me, we can update you on where things stand on the whole bufferbloat effort.

FQ on modern Cisco routers is generally quite crippled, and even that much is not available elsewhere.

However, on the whole, the FQ + AQM situation along the edge of the internet is looking up: nearly every third-party home router firmware now incorporates fq_codel (notably OpenWrt and its derivatives), and it is integral to StreamBoost and other QoS systems like those in the Netgear X4.

Some examples of the kind of results we get now from these new algos:

http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results....

http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-o...

I really liked the PCM paper (though I wanted code) for clearly identifying the parameter space modern TCPs need to operate in. FQ everywhere, or nearly everywhere, would make TCP's job far easier.

There is an AQM/packet-scheduling working group in the IETF that is in the process of standardizing this work.



