yes, this is userland, but it's for fuchsia, so "kernel functionality" as far as its users are concerned.
Either way, it looks like the congestion control is just New Reno.
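For reference, New Reno's core window logic really is just slow start plus AIMD; a minimal toy sketch (the type and field names here are my own, and this ignores fast recovery, SACK, etc.):

```go
package main

import "fmt"

// reno is a toy New Reno congestion window, measured in MSS units.
type reno struct {
	cwnd     float64
	ssthresh float64
}

func (r *reno) onAck() {
	if r.cwnd < r.ssthresh {
		r.cwnd++ // slow start: grows exponentially per RTT
	} else {
		r.cwnd += 1 / r.cwnd // congestion avoidance: ~+1 MSS per RTT
	}
}

func (r *reno) onLoss() {
	r.ssthresh = r.cwnd / 2 // multiplicative decrease on loss
	r.cwnd = r.ssthresh
}

func main() {
	r := &reno{cwnd: 1, ssthresh: 64}
	for i := 0; i < 6; i++ {
		r.onAck()
	}
	fmt.Println(r.cwnd) // 7 after six ACKs in slow start
	r.onLoss()
	fmt.Println(r.cwnd) // halved to 3.5
}
```

The key property people complain about below: this probes for bandwidth by filling the bottleneck buffer until packets drop, which is exactly what creates standing queues.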
Also, this isn't an official Google product, per the disclaimer; just some code that happens to be owned by Google. Where are you seeing the reference to Fuchsia?
edit: oh, I see. Fuchsia codebase has references to this netstack.
Combined with fq_codel, BBR finally offers a way to load-balance traffic on your NAT without letting any one host monopolize the bandwidth.
Over the last 10 years network buffers have become gigantic, and sending packets as fast as you can causes standing queues that can be a dozen seconds long. That is where the lag comes from when someone on your LAN is uploading or downloading.
The lag actually gets worse with higher connection speeds, because faster devices tend to ship with even bigger buffers.
BBR and, to some extent, Vegas go about things more intelligently: they try to maintain constant latency rather than maximum bandwidth. In exchange for a few percent of connection speed, the buffers stay empty and the delay stays minimal.
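To put illustrative numbers on both halves of that (the buffer sizes and rates below are made up for the example, not measurements of any real device):

```go
package main

import "fmt"

func main() {
	// Bufferbloat: queueing delay = standing queue / drain rate.
	// Illustrative: a 3 MB modem buffer draining at 2 Mbit/s.
	bufBits := 3.0 * 8e6 // 3 MB expressed in bits
	rateBps := 2e6       // 2 Mbit/s uplink
	fmt.Printf("queue delay: %.0f s\n", bufBits/rateBps) // 12 s of added lag

	// BBR instead paces near its bottleneck-bandwidth estimate and caps
	// in-flight data around the bandwidth-delay product (BDP), so the
	// queue -- and therefore the delay -- stays near zero.
	btlBwBytes := 2e6 / 8 // estimated bottleneck bandwidth, bytes/s
	rtPropMs := 40.0      // estimated minimum RTT, ms
	bdpBytes := btlBwBytes * rtPropMs / 1000
	fmt.Printf("BDP: %.0f bytes\n", bdpBytes) // ~10 KB in flight, not 3 MB
}
```

Same link, three orders of magnitude less data sitting in the queue.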
Implementing it in the kernel also makes programs like “trickle” much easier.
Huh? Congestion control is enforced on the sender side while ACKs are generated by the receiver.
Apple has a patent on throttling lower-priority TCP flows by advertising a reduced receive window based on inter-arrival jitter. That feels like a cleaner approach than delaying ACKs.
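A receiver-side throttle along those lines fits in a few lines; the shrink policy, thresholds, and names below are invented for illustration and are not from the patent:

```go
package main

import "fmt"

// advertisedWindow clamps the receive window offered to a low-priority
// flow. Illustrative policy: as inter-arrival jitter grows (a hint that
// the bottleneck queue is filling), shrink the window toward a floor.
func advertisedWindow(maxWin uint32, jitterMs, targetMs float64) uint32 {
	if jitterMs <= targetMs {
		return maxWin // no contention detected: offer the full window
	}
	scaled := float64(maxWin) * targetMs / jitterMs
	const floor = 4 * 1460 // keep at least a few MSS in flight
	if scaled < floor {
		return floor
	}
	return uint32(scaled)
}

func main() {
	fmt.Println(advertisedWindow(65535, 2, 5))  // 65535: jitter under target
	fmt.Println(advertisedWindow(65535, 20, 5)) // 16383: window cut to ~1/4
}
```

Because the window is the receiver's to advertise, this throttles any compliant sender with zero server-side cooperation, which is what makes it attractive.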
Anyways, my point was that LEDBAT wouldn't help.
So you get a lot of complexity for not a lot of benefit. However, Fuchsia is destined for mobile devices, which may not sit behind a LAN-WAN bottleneck, so there it could still pay off.
The server has to give you some indication of round-trip time for that to work, although I suspect there's a clever way to get it using some kind of URG or OOB data or keep-alives, even if the server doesn't natively support TCP LEDBAT.
Why so many netstacks in Go? I would imagine even the tiny GC pauses Go has would be undesirable for a netstack.
MirageOS has the problem that it still relies on Xen, so citing it sometimes hands ammunition to people who treat that dependency as proof that OCaml lacks support for this kind of programming, rather than as a matter of convenience.
Go's weird threaded runtime is nothing new. That is how Active Oberon tasks (aka Active Objects) are implemented.
Not particularly important; I'm just curious how much the GC is really responsible for the slowness, as opposed to just correlating with languages that don't allow controlling heap vs. stack allocation.
Things I'd have liked to see:
- a box/whisker plot in the latency comparison graph — esp. if we're to talk about Go's GC...
- some discussion/arguments why the particular C implementation was chosen ("tapip") — I'm not an expert in this area so I don't have the slightest idea how notable it is;
- how did they detect/measure the claimed memory leaks in C? also, some statistics about the claimed crashes?
- it isn't clear to me whether they used some "well known" load-testing tools or a homemade framework (e.g. "siege" is a tool I've read about more than once?);
- more details on how exactly the "correctness was determined by testing against Linux kernel [implementation]".
That said, my initial loose conclusions from this seem to be:
- it appears it may be easier to write a correct implementation in Go (the implementation seems to be written just ad hoc by the article authors?) than in C;
- it appears it may be easier in Go than in C to write an implementation scaling well w.r.t. average latency & throughput, assuming the need is for a multi-threaded & user-space implementation.
We intend to merge them into one repository soon. I am just a little disorganized.
Seeing that this is for fuchsia makes that seem relevant.
Libuinet is the FreeBSD TCP stack in user space (including over netmap).
There is also work in FD.io on userspace TCP & UDP.
- unikernel Go
- making better parallel abstractions over network streams without having to deal with the mess of a kernel interface
- as a platform to implement more integrated network policy (like congestion control, or state management for ddos protection)
- ultimately might be really helpful for portability by requiring only a raw packet interface from the host OS
- a much easier environment for tying in with SDN and QoS (which Google seems to be pretty gung-ho on)