Edit: Most Comcast/Xfinity modems and converged gateways are Intel based and have this and other issues; they're pure garbage devices.
You'd have to scroll down a bit on the page (I'm copying the factual data in case the remote link goes down at a later date).
The bad news is that a lookup table is literally required to know what chipset is in use inside of the device - just like with all of the WiFi adapters :(
(PS, this table looks like garbage, but many users on mobile will cry if I prefix with spaces to make it look OK... I'd really rather every newline were a <br> element.)
Motorola SB 6121 4x4 (Intel Puma 5)
Motorola 6180 8x4 (Intel Puma 5)
Arris SB 6183 16x4 and Motorola MB7420 16x4 (Both Broadcom)
NetGear CM1100 32x8 (Broadcom)
NetGear CM1000 32x8 (Broadcom)
NetGear Orbi CBK40 32x8 (Intel Puma 7)
Note: I tested this model and was told that the modem built into the Orbi does not have the same issues as other Puma 6/7 modems. I haven't seen any issues with it since I started using it. The Orbi modem is based on Netgear's CM700.
TP-Link - TC-7610 8x4 (Broadcom)
Routers that work with zero issues with the above cable modems in my current collection:
Asus - RT-AC66U and GT-AC5300 (OEM and Merlin FW)
D-Link - Many model routers tested including COVR models.
Linksys - WRT1900AC v1 and WRT32X v1
NetDuma - R1 Current firmware version (1.03.6i)
Netgear - Orbi CBK40, R7800, XR450 and XR500
Forum User Modem and Router Experiences
Arris - SB 6141 8x4 (Intel Puma 5) and D-Link DIR-890L and ASUS RT-AC5300
Arris - SB 6141 8x4 (Intel Puma 5) and Asus RT-AC66U
Arris - SB 6183 16x4 (Broadcom) and Linksys WRT1900ACM and WRT32x and NetGear XR500
Arris - SB 6183 16x4 (Broadcom) and NetGear XR500
Cisco - DPQ3212 (Broadcom) and Asus RT-AC66r, D-Link DGL-4500, NetDumaR1 and NetGear R7000
Motorola - MB 7220 (Broadcom) and Asus RT-AC66r, D-Link DGL-4500, NetDumaR1 and NetGear R7000
TP-Link - TC-7610 8x4 (Broadcom) and NetDuma R1
That's what the Internet Archive is for! The URL above has now been archived.
EDIT: And the resulting problem isn't just end-user latency. TCP's congestion control mechanisms (i.e. the ones that let the endpoints push as much traffic as the network can bear and no more) rely on quick feedback from the network when they push too much traffic. The traditional, quickest, and most widely implemented feedback signal is a packet drop: when drops are replaced with wildly varying latency, it's hard to set a clear time limit for "this packet was dropped", and long-fat-network detection gets a lot harder.
Whereas most UDP applications are constant rate, with some kind of control channel.
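To make the "no clear time limit" point concrete, here's a rough Python sketch (my own toy numbers, not from any real trace) of the standard RFC 6298-style RTO smoothing. Once the bottleneck buffer starts inflating RTT samples, the retransmit timeout stretches out right along with the queue, so the sender takes ever longer to decide a packet is actually gone:

```python
# Simplified RFC 6298-style SRTT/RTTVAR/RTO update (minimum RTO and clock
# granularity omitted), fed with made-up RTT samples that climb as a
# bloated buffer fills.
def rto_updates(rtt_samples_ms):
    srtt = rttvar = None
    for r in rtt_samples_ms:
        if srtt is None:
            srtt, rttvar = r, r / 2          # first measurement
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)
            srtt = 0.875 * srtt + 0.125 * r
        yield srtt + 4 * rttvar              # the retransmission timeout

samples = [20, 22, 21, 250, 800, 1500, 2500, 3500]   # ms; base path is ~20 ms
for rtt, rto in zip(samples, rto_updates(samples)):
    print(f"RTT {rtt:5d} ms -> RTO {rto:7.0f} ms")
```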
Bufferbloat should not matter for your home connection. (Unless it is constantly in use by more than one client.)
However, when congestion occurs, the data you sent that sits in these buffers is often already stale and irrelevant, and there's no way to invalidate that "cache" on the middleboxes. This leads to worse performance because full pipes stay clogged with stale data, which prevents them from unclogging quickly. The result is a jerk in TCP: it scales back more than it should have, after an unnecessary wait for the network to transmit the stale data.
That is wrong. A single client can easily saturate the connection (e.g. while downloading a software update or uploading a photo you just took to the cloud). Once the buffers are full, all other simultaneous connections suffer from multi-second delays.
The result is that the internet becomes unusably slow as soon as you start uploading a file.
Testing from my smartphone induces and measures >700 ms of latency on my cable modem connection. That's worse than old-fashioned high-orbit satellite internet!
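If you want to reproduce that kind of measurement without a dedicated app, here's a minimal Python sketch of the idea (the host names are placeholders you'd swap for something you control, e.g. an iperf3 server): sample TCP connect times to a nearby host while a background thread saturates the uplink. On a bloated link the probe times jump from tens of milliseconds to hundreds or thousands:

```python
import socket, threading, time

PROBE_HOST = ("example.com", 443)           # any reachable host works as a latency probe
UPLOAD_HOST = ("upload.example.net", 5201)  # placeholder; e.g. an iperf3 -s box you run

def saturate_uplink(stop):
    s = socket.create_connection(UPLOAD_HOST)
    chunk = b"\x00" * 65536
    while not stop.is_set():
        s.sendall(chunk)                     # keep the send path (and the modem's buffer) full
    s.close()

def probe_latency(n):
    for _ in range(n):
        t0 = time.monotonic()
        socket.create_connection(PROBE_HOST, timeout=5).close()
        print(f"{(time.monotonic() - t0) * 1000:.0f} ms")
        time.sleep(0.5)

stop = threading.Event()
uploader = threading.Thread(target=saturate_uplink, args=(stop,), daemon=True)
probe_latency(5)       # baseline on an idle link
uploader.start()
probe_latency(20)      # under load; watch these climb on a bloated link
stop.set()
```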
Traditional TCP congestion control in an environment where buffers are oversized will keep expanding the congestion window until it covers the whole buffer or the advertised receive window, even if the buffer holds several seconds' worth of packets. There may be some delay-based retransmission, but traditional stacks will also adapt and assume the network changed and that the peer really is 8 seconds away.
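That "8 seconds away" figure is just queueing delay doing the work. A back-of-the-envelope with assumed numbers (mine, not the poster's):

```python
# Queueing delay = bytes sitting in the bottleneck buffer / drain rate.
buffer_bytes = 1_000_000              # assume ~1 MB of packets queued in the modem
drain_rate = 1_000_000 / 8            # assume a 1 Mbit/s uplink, in bytes per second
print(buffer_bytes / drain_rate)      # -> 8.0 seconds of extra round-trip delay
```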
Is this bufferbloat? I guess what happens is that a bunch of packets get queued up and I have to wait until all of them are delivered?
Ericsson, at least, published a paper showing they recognized the problem: https://www.ericsson.com/en/ericsson-technology-review/archi...
and I do hope that shows up in something; however, the chipsets in the handsets themselves also need rational buffer management.
To exclude other causes you'd need to watch the network traffic with something like Wireshark and look at retransmissions. If retransmissions suddenly shoot up and then packets start to trickle in later, but very slowly, that could be bufferbloat.
But the 1-minute delay seems too long.
Reading more about it, you are correct that 1 minute is too long, so it's probably not (just) bufferbloat.
I hope to have a document comparing it to DOCSIS 3.1 PIE at some point in the next few months; in the meantime, I hope more people (especially ISPs in their default gear) give cake a try! It's open source, like everything else we do at bufferbloat.net and teklibre.
My Netgear R7000 can't handle my 400 Mbps connection with QoS throttling enabled. I'll probably need at least a mid-range Ubiquiti router to handle it.
Avoid modems with Intel, specifically the various "Puma" chipsets. Best to double-check the spec sheet on whatever you buy.
The main alternative seems to be Broadcom-based modems: TP-Link TC7650 DOCSIS 3.0 modem and Technicolor TC4400 DOCSIS 3.1 modem (of which there are a few revisions now).
The $69 MikroTik hAP ac2 will easily push 1 Gbps+ with QoS rules - https://mikrotik.com/product/hap_ac2#fndtn-testresults (it's a bit trickier to set up and you need to make sure you don't expose the interface to the internet)
_Some communications equipment manufacturers designed unnecessarily large buffers into some of their network products. In such equipment, bufferbloat occurs when a network link becomes congested, causing packets to become queued for long periods in these oversized buffers. In a first-in first-out queuing system, overly large buffers result in longer queues and higher latency, and do not improve network throughput._
I hope I get this right, please correct if needed:
So basically Intel's chipsets were creating what looked like a fat network pipe that accepted packets from the host OS really fast, but was in fact just a big buffer with a garden hose connecting it to the network. The result is that your applications can write these fast bursts and misjudge transmission timing, causing timing problems in media streams like an IP call, leading to choppy audio and delay. The packets flow in fast and quickly back up, and the IP stack along with your application now have to wait (edit: I believe the proper thing to say is that the packets should be dropped, but the big buffer just holds them, keeping them "alive in the mind" of the IP stack. The proper thing to do is to reject them rather than hoard them?). The buffer empties erratically as the network bandwidth varies and might not ask for more packets until n packets have been transmitted. Then the process repeats as the IP stack rams another load into the buffer and, again, log jam.
A small buffer fills fast and lets the software "track" the sporadic bandwidth availability of crowded wireless networks. At that point the transmission rate becomes more even and predictable, leading to accurate timing. That's important for judging the bitrate needed for that particular connection so packets arrive at the destination fast enough.
Bottom line: don't fool upstream connections into thinking that you're able to transmit data faster than you actually can.
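That's basically it. A toy drop-tail comparison (numbers are mine, purely illustrative) of a small versus an oversized buffer in front of the same "garden hose" uplink shows the trade-off: the small buffer overflows almost immediately, so TCP gets its loss signal and backs off while worst-case delay stays low, whereas the bloated buffer swallows the whole burst and just sits on it:

```python
UPLINK_BPS = 10_000_000 / 8      # assume a 10 Mbit/s upstream, in bytes per second
BURST = 1_500_000                # one application burst, in bytes

for name, buf in [("small 64 KB buffer", 64_000), ("bloated 2 MB buffer", 2_000_000)]:
    queued = min(BURST, buf)     # drop-tail: anything beyond the buffer is dropped
    dropped = BURST - queued
    print(f"{name}: worst-case delay {queued / UPLINK_BPS * 1000:5.0f} ms, "
          f"{dropped} bytes dropped (loss feedback: {'yes' if dropped else 'no'})")
```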
Buffering is layer 4's job. Do it on layer 2[a] and the whole stack gets wonky.
[a] Except on very small scales in certain pathological(ly unreliable) cases like Wi-Fi and cellular.
And in the case of telco equipment, that's a tradition-minded and misguided conscious policy decision.
Smaller buffers are in general better. However advanced AQM algorithms and fair queueing make for an even better network experience. Being one of the authors of fq_codel (RFC8290), it has generally been my hope to see that widely deployed. It is, actually - it's essentially the default qdisc in Linux now, and it is widely used in quite a few QoS (SQM) systems today. The hard part nowadays (since it's now in almost every home router) is convincing people (and ISPs) to do the right measurement and turn it on.
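If you're curious whether your own Linux box is already using it, here's a quick read-only check of the sysctl, sketched in Python:

```python
# Linux only; this just reads the default qdisc sysctl and changes nothing.
with open("/proc/sys/net/core/default_qdisc") as f:
    print("default qdisc:", f.read().strip())   # typically "fq_codel" on recent kernels
```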
More generally, it refers to any hardware or system with large buffers: needed to handle high throughput, but they can lead to poor latency due to head-of-line blocking.