Flow control and QoS were critical 20 years ago. I helped build a global VoIP network, and some of our patents covered dynamic routing across multiple ASes... they were critical.

Now, we (a different company) have similar real-time algorithms, and the algorithms see far fewer problems (mainly across the backbones of AWS, Azure, Oracle, IBM, and Alibaba).

I suspect this is due to more bandwidth, more routes, and better network optimization from the ISPs (we still see last-mile issues, but those are often the result of problems that better flow control algorithms usually can't completely solve).

Curious whether ISP engineers can give a more expert view on the current need for, and impact of, better flow control in middle-mile and/or last-mile situations?




Honestly, QoS at the internet level was never really a big thing outside of special cases like VoIP. Network gear vendors tried like hell from around 1998 onward to convince everyone to deploy diffserv, QoS, DPI, and so on, because they thought they could charge more for complex features and "be more than a dumb pipe."

The situation now is that bandwidth is plentiful. A lot has changed in 20 years.

100G and 400G are now quite cheap - most of the COGS for a given router is the optics, not the chassis, control plane or NPU, and optics has been, on and off, a pretty competitive space.

Plus, almost all of the traffic growth has been in cache-friendly content: growth in everything other than video, audio, and software images has been modest and vastly outpaced by those. Not just cache-friendly, but layered-cache-friendly. Of the modern traffic types, only real-time audio/video like Zoom is both high-volume and sensitive to latency and congestion. That's a small component, and it's often (but not always) either hosted or has a dedicated network of PoPs that do early handoff, so your typical backbone is now mostly carrying CDN cache misses.


Not a networking guy. I'm curious whether packets have priorities, and if so, does everyone get greedy and claim to be high priority? They talk about delay reduction in the article, but a lot of internet bandwidth today seems to be video, which doesn't need low latency once it has a bit in the receive buffer. It seems like gamers' packets should be prioritized for delay while streaming should be prioritized for bandwidth, possibly with priority changing depending on how far ahead of the viewer the buffer is. Not sure where regular web traffic would fit in this - probably low delay?


Packet priorities are not respected on the public internet, but organizations do make use of them internally.

Public clouds that operate global networks can typically carry video streams at low/mid priority all the way to the "last mile" ISP by peering with many networks and running massive WANs internally. So they get most of the benefits of prioritization even though the internet itself doesn't support it.


"The internet" is "best effort". What this really means is all of the networks that make up the internet don't pay attention to the DSCP marks[0] in the IP header.

In reality, almost all large networks (handling both internal and internet traffic) use MPLS[1] or some variant to tunnel/encapsulate different types of traffic and handle priority that way, while ignoring whatever DSCP markings users arbitrarily set. MPLS is (in most cases) invisible to the end user, so the carrier can do its own QoS without letting customer configuration impact it.

If "the internet" cared about DSCP you would definitely see the situation you're describing where everyone would just mark their traffic highest priority. Note you can still mark it, it's just that no one cares or respects it.

On your own network and queues you can definitely use DSCP and 802.1p[2] (layer 2 - most commonly Ethernet) to prioritize traffic. The catch is that you need equipment end to end (every router, switch, etc.) capable of parsing these headers and adjusting queueing accordingly.
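
To make the marking side concrete, here's a minimal sketch of an application setting a DSCP value on its own traffic, assuming Linux and Python; the address, port, and choice of EF are illustrative, and whether any hop honors the mark is entirely up to the networks along the path.

    import socket

    # DSCP EF (Expedited Forwarding, decimal 46) occupies the upper six
    # bits of the IP TOS byte, so shift left two to leave the ECN bits alone.
    DSCP_EF = 46

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

    # Every packet this socket sends now carries the mark; nothing forces
    # any network along the path to respect it.
    sock.sendto(b"hello", ("192.0.2.1", 5060))  # hypothetical VoIP peer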

As if this isn't complicated enough, in the case of the typical edge connection (a circuit from an ISP) you don't have direct control of inbound traffic - when it gets to you is just when it gets to you.

Unless you use something like ifb[3], in which case you can kind of fake ingress queuing by wrapping traffic through another interface so that it effectively looks like egress traffic. All you can really do here is introduce delay and/or drop packets, which for TCP traffic will most commonly trigger TCP congestion control, causing the transmitting side to back off because it thinks it's sending data too fast for your link.
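
For the curious, here's roughly what that ifb trick looks like, sketched as a Python wrapper around the usual iproute2/tc commands. It assumes Linux with root; the interface name (eth0) and the 50mbit rate are placeholders, not recommendations.

    import subprocess

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    # Mirror ingress traffic from eth0 onto ifb0 so an egress-style qdisc
    # can shape it.
    sh("ip link add ifb0 type ifb")
    sh("ip link set dev ifb0 up")
    sh("tc qdisc add dev eth0 handle ffff: ingress")
    sh("tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 "
       "action mirred egress redirect dev ifb0")
    # Queue/shape the redirected traffic as if it were egress on ifb0;
    # the delay/drops applied here are what nudge TCP senders to back off.
    sh("tc qdisc add dev ifb0 root tbf rate 50mbit burst 64kb latency 50ms")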

UDP doesn't have congestion control, but in practice that just means it's implemented higher in the stack. Protocols like QUIC have their own congestion control that in many cases effectively behaves like TCP's. The difference is that the behavior is dictated by the implementation rather than being at the mercy of the kernel/libc/wherever else TCP is implemented.
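
As a toy illustration (this is not QUIC, just the idea), here's application-level congestion control over UDP: an AIMD loop that treats a missing acknowledgement as a loss signal, the same basic reaction TCP's kernel implementation has. The peer address and payload size are made up, and a receiver that echoes back the sequence number is assumed.

    import socket
    import time

    PEER = ("198.51.100.10", 9000)  # hypothetical receiver that echoes the seq
    PAYLOAD = b"x" * 1200           # stay under a typical path MTU

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.2)            # no ack within 200 ms counts as a loss

    rate_pps = 50.0                 # current send rate, packets per second
    seq = 0
    while True:
        sock.sendto(seq.to_bytes(8, "big") + PAYLOAD, PEER)
        try:
            ack, _ = sock.recvfrom(64)
            if int.from_bytes(ack[:8], "big") == seq:
                rate_pps += 1.0                   # additive increase
        except socket.timeout:
            rate_pps = max(1.0, rate_pps / 2.0)   # multiplicative decrease
        seq += 1
        time.sleep(1.0 / rate_pps)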

Clear as mud, right?

The good news is that many modern end-user routers just kind of handle this for you with things like FQ-CoDel.

[0] - https://en.wikipedia.org/wiki/Differentiated_services

[1] - https://en.wikipedia.org/wiki/Multiprotocol_Label_Switching

[2] - https://en.wikipedia.org/wiki/IEEE_P802.1p

[3] - https://wiki.linuxfoundation.org/networking/ifb


Thank you! I learned a ton.


No problem. I'm happy I've had experience with all of this and learned it. I'm happier I don't have to deal with it on a daily basis anymore!


And 20 years of properly tuning congestion control algorithms. Don't underestimate how much BBR and the fight against bufferbloat in the early 2010s did to improve the quality of TCP stacks.
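
As a small aside, on Linux the congestion control algorithm can even be chosen per socket; a minimal sketch, assuming a kernel with the tcp_bbr module loaded and Python 3.6+:

    import socket

    # Ask the kernel to use BBR for this one TCP connection.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    sock.connect(("example.com", 80))  # hypothetical peer

    # Confirm which algorithm the socket ended up with.
    print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))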



