
Unconfirmed reports have Australia, Canada, New Zealand and the United Kingdom considering dropping the United States from the Five Eyes, and inviting the EU in its place.

The putative reason is that they are concerned about the shared information being used by Mr Trump in a manner harmful to the other countries.

In addition, Mr Trump has proposed that the Five Eyes expel Canada because of their current economic disagreements. That doesn't sit well with anyone (:-))


That's just a comparison to Star Trek, which is set in an imaginary universe where things _do_ exceed the speed of light.


Yup, it was Rogers, which is what my building has, and which I'm stuck with for TV. Other, luckier, people can have Bell (sorta-maybe better) or TekSavvy (definitely better)

The terrible performance is spotty: that was a particularly glaring example I detected when everything was failing (:-))


I'm with Bell for my home internet. I had no choice. It was either them or a cellular connection. Install went ok, but the kid got a panicked look in his eyes when he realized I knew what "1.5gb" actually meant. He had me plugged in using some shoddy cables that couldn't handle more than 100 Mbps. I fixed it once he left.


One of my _evil hidden agendas_ is making end-users aware that their ISP (Rogers, anyone?) is doing a terrible job, and that someone like TekSavvy can solve their problems for them (;-))


Hi, Dave here.

I recommend using a small box running LibreQoS adjacent to the big router. Large-scale routers based on application-specific ICs (ASICs) do a wonderful job, but are hard to change. Having a transparent fix in an inexpensive device now is way better than waiting and hoping that the router vendor can update their ASICs (:-))

I emphasised your problem in a video about the article, at https://vimeo.com/1017926413


I have another article in the "You Don't Know Jack" series from the ACM, now in the November Communications of the ACM (the print magazine), and a 5-minute video about it at https://vimeo.com/1017926413

This is a follow-on from Dave Taht's "bufferbloat" work, now a project called LibreQoS, where QoS stands for Quality of Service.

If you are having trouble with unintelligible con-calls or gaming lag, have a peek.


Oh goodness no, always measure latency under full load, in the middle of the bandwidth test. A convenient example is https://www.waveform.com/tools/bufferbloat which can easily tell crappy ISPs from good ones
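
If you'd rather roll your own, the same idea fits in a few lines of Python: ping while a bulk transfer fills the pipe, then compare idle vs. loaded round-trip times. A rough sketch only; the download URL and ping target below are placeholders, so substitute anything you're allowed to hammer.

    # Rough sketch: measure latency while a bulk download saturates the link.
    # URL and HOST are placeholders; use any large file / reachable host you like.
    import subprocess, threading, urllib.request

    URL = "https://speed.cloudflare.com/__down?bytes=100000000"   # assumed test endpoint
    HOST = "1.1.1.1"

    def ping_ms(host):
        out = subprocess.run(["ping", "-c", "1", host],
                             capture_output=True, text=True).stdout
        for tok in out.split():                   # look for a "time=12.3" token
            if tok.startswith("time="):
                return float(tok[5:])
        return None                               # lost probe

    def saturate():
        urllib.request.urlopen(URL).read()        # bulk transfer fills the queue

    idle = [ping_ms(HOST) for _ in range(5)]
    t = threading.Thread(target=saturate)
    t.start()
    loaded = [ping_ms(HOST) for _ in range(5)]
    t.join()
    print("idle   RTTs (ms):", idle)
    print("loaded RTTs (ms):", loaded)            # a big jump here is bufferbloat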


Measuring latency under full load is like measuring how fast you can drive into a crossing without making the turn. It is meaningless. See the fantastic explanation by Gil Tene in the link above.

With best-case latency you can determine whether the service can be suitable for real-time use or not. There will always be buffer effects, and they will vary with other user activity and the complexity of the packet processing. These effects will largely be unknowable in advance, and the part that is knowable is extremely difficult to communicate to an average user. They don't really get HDR histograms https://github.com/HdrHistogram/HdrHistogram.NET/blob/master...
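
(For anyone who does want to poke at one, here's a minimal sketch using the Python port; it assumes the hdrhistogram package from PyPI, and the latency samples are made up:)

    # Minimal HDR-histogram sketch (assumes `pip install hdrhistogram`).
    # Record round-trip times in microseconds, then read the tail percentiles
    # that an average hides.
    from hdrh.histogram import HdrHistogram

    hist = HdrHistogram(1, 60_000_000, 3)         # 1 us .. 60 s range, 3 significant digits
    for rtt_us in (900, 1_100, 1_150, 1_200, 250_000):   # one 250 ms outlier
        hist.record_value(rtt_us)

    print("median :", hist.get_value_at_percentile(50.0), "us")
    print("p99.9  :", hist.get_value_at_percentile(99.9), "us")
    print("max    :", hist.get_max_value(), "us")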


I have tried to point out MANY times that the DEFAULT behavior of the TCP slow start algorithm is to saturate the link - however briefly - get a drop, and then back off. I try to do it with some humor using jugglers here - and get so far as slow start, I think, about 10 minutes in: https://www.youtube.com/watch?v=TWViGcBlnm0&t=510s
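
The overshoot is easy to see even in a crude toy model: the window doubles every RTT until the bottleneck queue (and its delay) spikes, a drop occurs, and the sender backs off. All of the numbers below are made up, just to make the shape visible:

    # Toy slow-start model: cwnd doubles each RTT until the bottleneck buffer
    # overflows, then the sender halves its window.  Purely illustrative numbers.
    bottleneck = 100        # packets the link can drain per RTT
    buffer_pkts = 200       # bottleneck buffer size (tail drop)
    base_rtt_ms = 20.0

    cwnd, queue = 10, 0
    for rtt in range(1, 13):
        queue = queue + cwnd - bottleneck               # what the link couldn't drain
        dropped = queue > buffer_pkts                   # buffer overflowed
        queue = max(0, min(queue, buffer_pkts))
        delay = base_rtt_ms * (1 + queue / bottleneck)  # queueing delay added to the RTT
        print(f"rtt {rtt:2d}  cwnd {cwnd:4d}  queue {queue:3d}  delay ~{delay:5.1f} ms"
              + ("  DROP -> back off" if dropped else ""))
        cwnd = max(2, cwnd // 2) if dropped else cwnd * 2   # back off vs. keep doubling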

ALL NETWORKS have perceptible jitter due to this, unless you rigorously apply FQ and AQM techniques to each slower hop on the path

Fatter tests like waveform show the common bufferbloat scenario in ways humans can see better. But slow start overshoot is always there, on any connection that lasts long enough, which only takes a couple of RTTs. You can clearly see Netflix doing you in here, for example...

https://www.youtube.com/@trendaltoews7143

(except that in this case, the link has libreqos and FQ on it, so all that buffering is just local to the netflix flow, invisible to everything else)


Nice videos Dave. I guess I have personally given up on effective buffer management. Perhaps if IPv6 and InfiniBand become the underlying infrastructure? There are just so many layers of abstraction hiding no-longer-useful decisions in the stack that I have decided to leave infra and networking behind for a while, to see if one can make a difference elsewhere.


Don't give up. Ask for RFC7567 everywhere.


Sorry, I think you are thinking of something else. Maybe a railroad crossing (:-))

Joking aside, the https://www.waveform.com/tools/bufferbloat test looks to see if the networking software is working correctly by putting a large load on the network, and then seeing if other streams are affected by the overload.

The example on the https://libreqos.io/ home page is of:

* good software delivering 9 and 23 milliseconds down/up latency at full load

* bad software delivering 106 and 517 milliseconds latency under load.

It is, in effect, a test for software failure under load


Yeah that sounds like looking for fairness and effectiveness in buffer management in a shared environment. I was thinking of measuring latency of x.

The M1 cards I mentioned were notorious, with a 1 GB (?) buffer and a massively oversubscribed packet processor, causing extreme jitter.


> Latency to where? Greyface also asks "Latency is an end-to-end metric which will almost always involve path components beyond the control of your ISP - can and should they be held responsible for those paths?"

The latency you see is almost always from last-mile software, under the control of the ISP, and can be fixed locally. Before any fix, my local ISP gave me ping times to the internet interconnect point in downtown Toronto that were typical of a link to Istanbul, Turkey (:-))

They weren't trying, and aren't to this day. Arguably they need a nice unfriendly regulator to require at least a good-faith attempt.


I often get spikes of delay showing up in https://test.vsee.com/network/index.html while on conference calls. Those correspond to periods in which speakers "sound like aliens", "have fallen down a well" or simply freeze up.

Having a standard that ignores that is less than useful. It's the standards body showing disrespect for the people it's supposedly creating the standard _for_.


The bottleneck device can change with every different destination, so CAKE and fq_codel find it the same way older TCP code did: increase the load until they get a timeout or a "slow down, you idiot" flag (:-))

The initial bandwidth setting is really to get it started at a good value for your usual bottleneck device, such as your local cable modem.

Always starting off as if you had fast ethernet to the ISP when you actually have a super-el-cheapo link is a waste of time and effort, even if you have a good algorithm like CAKE.
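
(If you want to see what the setting looks like in practice, here's a rough sketch for a Linux box with the cake qdisc; the interface name and measured rate are placeholders, and it only handles the egress side:)

    # Sketch: shape to roughly 90% of the measured line rate, so the queue
    # builds here where CAKE can manage it, instead of in the modem.
    # "eth0" and the measured rate are placeholders; egress only.
    import subprocess

    measured_up_mbps = 110                  # e.g. from an off-peak speed test
    shaped = int(measured_up_mbps * 0.90)
    subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root",
                    "cake", "bandwidth", f"{shaped}mbit"], check=True)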

--dave


When I used CoDel on OpenWrt years ago, setting a 1 Mbps limit prevented the link from ever going faster. If it's now smart enough to discover the correct limit, then that's an interesting/useful development.


Followup: I just tested OpenWrt 23.05.2 + luci-app-sqm + cake with https://speed.cloudflare.com/

With the limit set to 1 Mbps (1000/1000), my upload latency dropped from 80ms to 25ms, but speed was hard-limited to 1000/1000. With the limit raised to 1G/1G, cake stopped working and my upload latency returned to 80ms.

So I stand by my original comment. You still have to configure the speed limits manually.


You're right: CoDel and derivatives like fq_codel and cake don't auto-tune anything on timescales much longer than the interval parameter which defaults to 100ms. And fq_codel doesn't even do traffic shaping.

But I think davecb may have been confusing traffic shaper limits with TCP congestion control behaviors, and maybe the impact of a large TCP initial window releasing a sudden burst of packets that may be large enough to build up a bit of a queue on particularly slow links. (It was a serious problem in the ADSL era; now, only wireless gets that slow, and large bursts of packets are as likely to help as not when frame aggregation enters the picture.)
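
For the curious, the part that does adapt quickly is CoDel's drop scheduler; a rough sketch of the control law, simplified from RFC 8289:

    # Simplified CoDel control law (see RFC 8289): dropping starts only after
    # queue delay has stayed above TARGET for a full INTERVAL, and subsequent
    # drops are spaced at INTERVAL / sqrt(count) while the delay stays high.
    from math import sqrt

    TARGET_MS, INTERVAL_MS = 5.0, 100.0     # the usual defaults

    def next_drop_gap_ms(consecutive_drops):
        return INTERVAL_MS / sqrt(consecutive_drops)

    for n in range(1, 6):
        print(f"drop {n}: next drop in ~{next_drop_gap_ms(n):5.1f} ms"
              f" if delay stays over {TARGET_MS} ms")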


I mean, setting a static traffic control limit generally set a high-water mark, and even if you had excess bandwidth you didn't want to exceed it.

Now, to get around that, some devices perform regular speed tests and dynamically adjust the high-water mark. That said, there are limits to how often you should run these tests: too often, and you may affect the actual applications you're trying to run.
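
The usual compromise is not a full speed test but a latency-driven feedback loop: creep the shaper up while latency stays near its baseline, and cut it back hard when latency spikes. A rough sketch of that idea, assuming a Linux box with cake; the thresholds, step sizes, interface name and ping target are all made-up placeholders:

    # Sketch of latency-driven shaper adjustment (the autorate idea).
    # All thresholds, rates and names are placeholders, not tuned values.
    import subprocess, time

    BASELINE_MS, SPIKE_MS = 15.0, 40.0
    rate_mbit, floor_mbit, ceil_mbit = 100, 20, 500

    def ping_ms(host="1.1.1.1"):
        out = subprocess.run(["ping", "-c", "1", host],
                             capture_output=True, text=True).stdout
        for tok in out.split():
            if tok.startswith("time="):
                return float(tok[5:])
        return SPIKE_MS                     # treat a lost probe as a spike

    def set_shaper(mbit, dev="eth0"):
        subprocess.run(["tc", "qdisc", "replace", "dev", dev, "root",
                        "cake", "bandwidth", f"{mbit}mbit"], check=True)

    while True:
        rtt = ping_ms()
        if rtt > SPIKE_MS:
            rate_mbit = max(floor_mbit, int(rate_mbit * 0.8))   # back off hard
        elif rtt < BASELINE_MS * 1.5:
            rate_mbit = min(ceil_mbit, rate_mbit + 5)           # creep back up
        set_shaper(rate_mbit)
        time.sleep(5)    # probe gently; constant testing would itself hurt the apps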

