Linux 3.5 Kernel Released (kernelnewbies.org)
192 points by moonboots on July 22, 2012 | 36 comments



It is worth noting that Android is now closer to mainline than ever. With this release, the kernel incorporates a feature called autosleep, which is similar to Android's wakelocks, the main point of controversy between kernel developers and Google. The Android team will now probably be able to switch to the new Linux infrastructure, if they wish to do so.


Here is a nice discussion on LWN about wakelock / autosleep: http://lwn.net/Articles/479841/


A very nice networking feature has landed in 3.5: the CODEL AQM packet scheduler, which fights bufferbloat. It is mainly relevant for routers.

http://queue.acm.org/detail.cfm?id=2209336


Some of the buffer bloat problem is due to SO_SNDBUF being statically set to some value, which also hides the actual bytes in flight from the application.

I think it would be much better to allow the auto-detected TCP window to be exposed to the application level as the "send-buf" size (with perhaps 10% of extra buffering to fill in the gaps when the window grows or ACKs return early).

Also, it would help in high-bandwidth, high-latency situations, where the default Linux 0.5 MB send buffer is not enough: allow the send buffer to grow if the TCP window needs to grow beyond 0.5 MB.
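
To make this concrete, here is a minimal sketch (assuming "fd" is an already-open, connected TCP socket; error handling omitted) of what the kernel already exposes today: the configured send buffer via SO_SNDBUF, and the live congestion window, MSS, unacked segments and RTT via TCP_INFO. The catch is that this is only visible on request; send() itself still blocks on the fixed buffer, not on the window.

  /* Sketch: inspect what the kernel tracks for a TCP socket.
     "fd" is assumed to be an open, connected TCP socket. */
  #include <stdio.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>   /* TCP_INFO, struct tcp_info */
  #include <sys/socket.h>

  static void dump_tcp_state(int fd)
  {
      int sndbuf = 0;
      socklen_t optlen = sizeof(sndbuf);
      getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &optlen);

      struct tcp_info ti;
      optlen = sizeof(ti);
      getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &optlen);

      printf("SO_SNDBUF: %d bytes\n", sndbuf);
      printf("cwnd: %u segs, mss: %u, unacked: %u, rtt: %u us\n",
             ti.tcpi_snd_cwnd, ti.tcpi_snd_mss, ti.tcpi_unacked,
             ti.tcpi_rtt);
  }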


Well, exposing it per application kind of defeats part of the purpose. The point is that you get rid of the buffer so that you get packet loss when the link to the other peer(s) is saturated, which allows the TCP congestion algorithms to work properly.

Even in a high-latency, high-bandwidth situation, having huge buffers is self-defeating. Anybody who has used BitTorrent and assumed their ISP was doing some sort of throttling, where you get weird latency spikes and reduced performance... in the majority of cases this is a direct result of buffer bloat.

Large buffers only help in the case of a single TCP connection using up as much of the bandwidth as possible, which helps network routers and similar devices look good in benchmarks.

What you are talking about is some sort of Quality of Service mechanism, probably most usefully applied at the edge of networks for prioritizing traffic, and at the ISP level so they can route traffic over different internet links based on requirements.

ISPs have to deal with choosing different links to other networks and what each costs. They can choose to use main backbones versus secondary links, and you get different trade-offs based on cost versus latency and things of that nature. So ideally there should be some sort of flag you can set in the TCP packet that would indicate latency importance or some such thing. Or internet routing equipment could be made application-protocol-aware.


By "expose it to the application", I mean in the flow-control sense. That send() (or select/epoll/etc) will block until there is room in the TCP window, rather than in the pre-determined buffer which will always be too small or too big.

The way it works now, the kernel basically forces applications to either accept buffer bloat (fill a 0.5 MB socket buffer), auto-detect the RTT, or manually select a proper buffer size. All are bad options.

Also, in high-latency, high-bandwidth situations, the default 0.5 MB buffer will simply fail to make use of the available bandwidth, so increasing the buffer size does not defeat the purpose. Latency spikes are a different situation; there are cases of constant high latency (e.g. intercontinental 1 Gbps links).
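
For a rough sense of the numbers, a back-of-the-envelope sketch (the link speed and RTT below are assumed example figures, not measurements): on a 1 Gbit/s path with a 100 ms RTT, roughly 12 MB would have to be in flight to fill the link, while a fixed 0.5 MB send buffer caps throughput at around 40 Mbit/s.

  /* Bandwidth-delay product sketch; assumed example numbers. */
  #include <stdio.h>

  int main(void)
  {
      const double link_bps = 1e9;                    /* assumed: 1 Gbit/s link */
      const double rtt_s    = 0.100;                  /* assumed: 100 ms RTT    */
      const double sndbuf_bytes = 0.5 * 1024 * 1024;  /* default 0.5 MB buffer  */

      double bdp_bytes = link_bps / 8 * rtt_s;        /* data needed in flight  */
      double cap_bps   = sndbuf_bytes * 8 / rtt_s;    /* throughput ceiling     */

      printf("need ~%.1f MB in flight to fill the link\n",
             bdp_bytes / (1024 * 1024));
      printf("a 0.5 MB buffer caps throughput at ~%.0f Mbit/s\n",
             cap_bps / 1e6);
      return 0;
  }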


> So ideally there should be some sort of flag you can set in the TCP packet that would indicate latency importance or some such thing.

Such a field exists in the IP header; it is called TOS (Type of Service).
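
An application can set it per socket. A minimal sketch, assuming "fd" is an already-open IPv4 socket and with error handling omitted; note that modern equipment mostly interprets this byte as DSCP, and many networks ignore or rewrite it:

  /* Mark this socket's outgoing packets as latency-sensitive via the
     IP TOS byte. "fd" is assumed to be an open IPv4 socket. */
  #include <netinet/in.h>
  #include <netinet/ip.h>    /* IPTOS_LOWDELAY */
  #include <sys/socket.h>

  static int mark_low_delay(int fd)
  {
      int tos = IPTOS_LOWDELAY;
      return setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
  }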


Regarding BitTorrent, I'm wondering if that's not just as much the fault of crappy NAT modems. Around here everyone gets a Zyxel, and BitTorrent periodically chokes it completely. The hash table used for NAT grows, the CPU can't handle it, to the point where you can't even log in to the modem, and new TCP sessions get dropped until the NAT table clears out the old connections. (With a proper network device this shouldn't be an issue, but it's one I see all the time with the cheap modems we get from the ISPs.)


The socket send buffer in applications is not very relevant to buffer bloat: it does not cause retransmissions, nor does it contribute much visible latency.


0.5MB of buffer contributes a lot to latency when your link is slow.

Assuming a 1 Mbit/s link, it takes 4 seconds of latency to send the default SO_SNDBUF.


True. But if you have 0.5 MB of data to send right now, it will take that long regardless of whether it is queued in the socket buffer or in your application. Applications that need to care about this are typically not sitting on top of TCP, and would usually need to control the send buffer size anyway.


If you have 0.5 MB of data to send right now, you could queue it all in your application layer; then you could still decide to cancel it or schedule your sends based on your own priorities.

I agree that applications that care about this don't use TCP, but one of the primary reasons for that is exactly this problem: you don't get to send at the edge of the TCP window. There are other reasons, of course, each of which is fixable (and should be fixed!)


I am looking forward to CODEL support in Shorewall's simple or advanced traffic shaping scripts. I have tried to cobble together a custom tcstart file for CODEL, and it has not been very successful. Does anyone have Shorewall working with CODEL?

Also, can anyone explain the difference between vanilla CODEL and FQ_CODEL? Is one an obvious choice over the other?


I have seen a number of people write that CODEL will be mainlined in 3.5, but from what I have seen it has been there for some time now:

Initial CODEL commit entry from May:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git...

Search for CODEL in Linus's tree:

http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Fl...

And there has not been a lot of CODEL activity in the net-next tree since May:

http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fdavem%2Fnet-...

Am I missing something obvious?


Stable releases branch from Linus's tree. 3.4 branched on May 20th.

  git describe --contains 76e3cc126bb223013a6b9a0e2a51238d1ef2e409
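  # (names the commit relative to a tag that already contains it; for the
  #  CODEL commit above that should be a v3.5-rc tag, since 3.4 branched
  #  before it was merged)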


Thank you for responding. For some reason I have always thought, or just assumed, that Linus's tree was the tree people referred to when it came to versioning. HN clues me in yet again...


Surely the Linux kernel must by now be the world's most advanced and most bloated kernel? Then again, the kernel is not the thousands of high-quality modules (which I guess could be called drivers) that give you access to the hardware.

Does anybody know how the central kernel (the parts that run on all hardware) has evolved speed- and size-wise?

Also, are the drivers in Linux of higher quality than, for example, the ones for Windows? I have gotten the impression that they are.


As far as quality goes, it varies largely depending on the hardware. Some stuff is easy to figure out (or has specs published) and results in good-quality drivers being available. Other things don't tend to fare so well, like the graphics drivers and, I believe, many Broadcom wifi adapters. However, if you have something headless with a wired network, you're generally never going to be unhappy.


You get the best bang for your buck by purchasing hardware made by companies that support Linux with open source drivers.

Example:

Purchasing a random laptop off the internet that runs another OS because it is cool, and then trying to install Linux on it, is not going to yield good results unless you are willing to spend a lot of time researching and making sure you bought something that will work well, as well as spend some time fiddling around with drivers and filing a couple of bug reports. This is because you are using hardware that no Linux developer has direct access to. Essentially you are the primary means of support for that particular OS/hardware combination.

Or you can purchase a laptop from a company that provides support for your particular Linux/hardware configuration (i.e. they sell Linux pre-installed), and then you will likely get close to an 'Apple-like' experience, with everything 'just working' out of the box.


Yes. ThinkPads especially seem to be very well supported (typing this on an X220), with everything working out of the box, amazing battery life (up to 12 hours on mine) and a very stable system in general.


I'm using the X220 as well and love it. There are minor quirks on Ubuntu 12.04; are you on 12.10?

Also, did you buy direct or get it from a company that does the Linux install? I did the latter and was very disappointed with Emperor Linux; I've gotten basically zero support.


I'm on a Google-internal Ubuntu 10.04 version, which works very well.

As for the support - I have to support the other users ^^


If you are a power Linux user, why pay someone to install Linux? What is Emperor Linux?


At the time I needed a new laptop, but didn't want to spend the many hours previously necessary to get all the devices working properly, and I didn't want to take the risk of device incompatibility. So my theory was that I was paying them to do all the fucking around. Or, alternately, to have the same sort of out-of-the-box experience I would with, say, a Mac or the same laptop running Windows. Except with my OS of choice.

Emperor Linux is one of the companies that offer pre-installed Linux laptops, guarantee compatibility and provide support. The box was indeed compatible (even the touch screen), but their support was poor.


I used to think this, but my W520 has had more than its fair share of graphics problems (especially when I naïvely started with the Canonical-recommended 32-bit install; I've since gone 64-bit). It still won't link to an external display (at least not without rebooting, which I haven't tried but is a complete non-starter for me). Other than that, it's pretty solid.

Someone pointed out zareason [1] in another thread; I may give that a shot next time.

[1]: http://zareason.com/shop/Verix-2.5.html


Yes, but thanks to all these people who fiddle around with the bug reports, I would say that after a while (a few months, half a year?) your laptop will work fine with Linux.

Just check a site like http://www.linux-on-laptops.com before buying, and I believe you can actually get better bang for the buck by buying a cheap laptop with full Linux support (even if the manufacturer does not officially support it).


Sadly this is untrue.

After working extensively with Linux USB drivers from large corporations like Philips, I cannot but assume they put their worst teams on the Linux drivers.

And as a proud owner of a Lenovo laptop (what is widely considered the most supported Linux laptop), I can say that many things just don't work, e.g. suspend.


> And as a proud owner of a Lenovo laptop (what is widely considered the most supported Linux laptop), I can say that many things just don't work, e.g. suspend.

Suspend worked out of the box for me on my X220t. I am running Arch Linux.


Same, in fact it bugs me out a little when Suspend works flawlessly with KVM VMs running!


> Also, are the drivers in Linux of higher quality than, for example, the ones for Windows?

It really depends on the driver, but at least you can look at the code to see whether it is high quality. Or pay someone to do it.


> most bloated kernel?

I wonder how you arrived at that conclusion.


I was just thinking about how many new releases there have been and how little of the new stuff has actually affected me or the servers I administer in any way. Of course, most of the added code is not part of the core kernel and I will probably never run it, but still.

The thing that got me on this track was thinking about all the systems that still run 2.6.x kernels (and maybe even 2.4.x): how much improvement, if any, would they get from upgrading to 3.5?


I wouldn't say it's "bloated", more featureful! In fact, I believe the kernel developers routinely cull old and crufty code.

The features in the Linux kernel are used by all sorts of folks, and there is constant review, with developers refactoring and debating how to merge things. They never seem to merge a feature unless it is done right from a technical viewpoint!


For example, econet and Token Ring were purged in 3.5, so in some ways this is the least bloated Linux kernel in some time.


> how much improvement, if any, would they get from upgrading to 3.5?

It depends on hardware and workload. A newer kernel will be able to use hardware an older one wouldn't be able to. It'll also do some newer tricks (I gather the work on TCP is yielding big performance improvements). I also like BtrFS, but won't deploy it to production just yet.


A new stable kernel is released on a regular schedule, about every 2-3 months. Unlike many projects, these releases aren't based on major changes, but rather on incremental improvements. In an ideal world, you should not notice any change other than that module X is now supported (or occasionally you might see a performance boost).



