I think it would be much better to expose the auto-detected TCP window to the application level as the "send-buf" size (with perhaps 10% of deliberate buffer bloat to fill the gaps when the window grows or ACKs return early).
It would also help in high-bandwidth, high-latency situations, where the default Linux 0.5MB send-buf is not enough: let the send-buf grow whenever the TCP window needs to grow beyond 0.5MB.
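For contrast, what an application can do today is pick a send-buffer size by hand with SO_SNDBUF; a minimal sketch (the 2 MB figure is an arbitrary assumption for a high-BDP path, and on Linux the request is silently clamped to net.core.wmem_max):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Default send buffer (on Linux, from net.core.wmem_default / tcp_wmem).
default_sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# Manually request a larger buffer for a high-bandwidth, high-latency path.
# Note: setting SO_SNDBUF explicitly disables the kernel's send-buffer
# auto-tuning for this socket, which is part of the problem described above.
# The value is clamped to net.core.wmem_max unless the admin raises it.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 2 * 1024 * 1024)

# Linux reports back double the requested (clamped) value, to account
# for kernel bookkeeping overhead.
actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
```

This illustrates the "manually select a proper buffer size" option mentioned below: the application has to guess, with no visibility into the congestion window the kernel has already measured.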
Even in a high-latency, high-bandwidth situation, having huge buffers is self-defeating. Anybody who has used BitTorrent and assumed their ISP was doing some sort of throttling, where you get weird latency spikes and reduced performance... in the majority of cases that is a direct result of buffer bloat.
Large buffers only help in the case of a single TCP connection using up as much of the bandwidth as possible, which helps network routers and such look good in benchmarks.
What you are talking about is some sort of Quality of Service mechanism. It is probably most usefully applied at the edge of networks for prioritizing traffic, and at the ISP level so they can route traffic through different internet links based on requirements.
ISPs have to deal with choosing different links to other networks and what each costs. They can choose to do things like use main backbones versus secondary links, with different trade-offs between cost, latency and the like. So ideally there should be some sort of flag you can set in the TCP packet to indicate how latency-sensitive the traffic is. Or internet routing equipment could be made application-protocol-aware.
The way it works now, the kernel basically forces applications to either accept buffer bloat (fill a 0.5MB socket buffer), auto-detect the RTT themselves, or manually select a proper buffer size. All are bad options.
Also, in high-latency, high-bandwidth situations, the default 0.5MB buffer will simply fail to make use of the available bandwidth, so increasing the buffer size does not defeat the purpose. Latency spikes are a different situation; there are cases of constant high latency (e.g., intercontinental 1Gbps links).
Such a field exists in the IP header; it is called TOS (Type of Service).
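For what it's worth, applications can already set that field per socket. A minimal sketch (0x10 is the historical IPTOS_LOWDELAY bit; modern networks reinterpret this byte as DSCP/ECN, and whether anything along the path honors it is another question entirely):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Set the historical "low delay" TOS bit on outgoing packets.
# Routers that honor TOS/DSCP may prefer lower-latency paths for them.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)

# Read it back to confirm.
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
```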
Assuming a 1Mbit/s link, it takes about 4 seconds to drain the default SO_SNDBUF.
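The arithmetic behind that figure, using the 0.5 MB default and 1 Mbit/s link from above:

```python
# Time for a full 0.5 MB send buffer to drain over a 1 Mbit/s link.
# Any byte written last sits behind this much queued data.
sndbuf_bytes = 512 * 1024      # default 0.5 MB send buffer
link_bps = 1_000_000           # 1 Mbit/s link

drain_seconds = sndbuf_bytes * 8 / link_bps
print(drain_seconds)           # → 4.194304
```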
I agree that applications that care about this don't use TCP, but one of the primary reasons for this is exactly this problem: that you don't get to send at the edge of the TCP window. There are other reasons, of course, each of which is fixable (and should be fixed!)
Also, can anyone explain the difference between vanilla CODEL and FQ_CODEL? Is one an obvious choice over the other?
Initial CODEL commit entry from May:
Search for CODEL in linus's tree:
And there has not been a lot of CODEL activity in the net-next tree since May:
Am I missing something obvious?
git describe --contains 76e3cc126bb223013a6b9a0e2a51238d1ef2e409
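For reference, both are exposed as ordinary qdiscs, so they are easy to try side by side: fq_codel runs the same CoDel algorithm per-flow on top of stochastic fair queuing, so one bulk flow cannot bloat the queue for everything else, which is why it is usually the recommended choice. A sketch, assuming an iproute2 build with CoDel support, root privileges, and that eth0 is your interface:

```shell
# Plain CoDel: delay-based dropping on a single shared queue.
tc qdisc replace dev eth0 root codel

# fq_codel: CoDel applied per-flow on top of stochastic fair queuing.
tc qdisc replace dev eth0 root fq_codel

# Inspect the installed qdisc and its statistics.
tc -s qdisc show dev eth0
```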
Anybody know how the central kernel (the parts that are on all hardware) has evolved speed/size-wise?
Also, are the drivers in Linux of higher quality than, for example, the ones in Windows? I have gotten the impression that they are.
Purchasing a random laptop off the internet that runs another OS because it is cool, and then trying to install Linux on it, is not going to yield good results unless you are willing to spend a lot of time researching to make sure you bought something that will work well, plus some time fiddling with drivers and filing a couple of bug reports. This is because you're using hardware that no Linux developer has direct access to. Essentially, you are the primary means of support for that particular OS/hardware combination.
Or you can purchase a laptop from a company that provides support for your particular Linux/hardware configuration (i.e., they sell Linux pre-installed), and then you will likely get close to an 'Apple-like' experience with it 'just working' out of the box.
Also, did you buy direct or get from a company that does the Linux install? I did the latter and was very disappointed with Emperor Linux; I've gotten basically zero support.
As for the support - I have to support the other users ^^
Emperor Linux is one of the companies that offers pre-installed Linux laptops and guarantees compatibility and offers support. The box was indeed compatible (even the touch-screen), but their support was poor.
Someone pointed out zareason in another thread; I may give that a shot next time.
Just check a site like http://www.linux-on-laptops.com before buying; I believe you can actually get better bang for the buck by buying a cheap laptop with full Linux support (even if the manufacturer does not officially support it).
After working extensively with linux USB drivers by large corps like Philips, I cannot but assume they put their worst teams on the Linux drivers.
And as a proud owner of a Lenovo laptop (widely considered the most supported Linux laptop), I can say that many things just don't work. E.g., suspend.
Suspend worked out of the box for me on my X220t. I am running Arch Linux.
It really depends on the driver, but, at least, you can look into the code to see if it looks high quality. Or pay someone to do it.
I wonder how you arrived at that conclusion.
The thing that got me on this track was thinking about all the systems that still run 2.6.x kernels (and maybe even 2.4.x): how many improvements, if any, would they get from upgrading to 3.5?
The features that are in the Linux kernel are used by all sorts of folks. And there is constant review, with developers refactoring and debating how to merge features. And they never seem to merge a feature unless it is done right from a technical viewpoint!
It depends on hardware and workload. A newer kernel will be able to use hardware an older one wouldn't be able to. It'll also do some newer tricks (I gather the work on TCP is yielding big performance improvements). I also like BtrFS, but won't deploy it to production just yet.