The 3.9 kernel is out (lwn.net)
99 points by edwintorok 1516 days ago | 36 comments



Particularly interesting is the possibility of using an SSD as a cache for an HDD, much like the Fusion Drive in recent iMacs (https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....)

Reviews of Apple's implementation were quite positive. Has anybody tried the Linux version?


I've got flashcache running on a 20GB SSD partition fronting a 120GB spinning disk partition, on a desktop at home. It feels (anecdotal, subjective) as fast as a pure SSD for all normal desktop things. I also ran a small MySQL db through this setup for a few weeks, and that improved noticeably over a single spinning disk, especially when there was IO load from other processes.

The install procedure is a little rough around the edges, so I'm looking forward to having that more polished.


I'm quite interested in the user namespaces feature, but can't come up with any use cases for myself other than, say, (sort of) sandboxing applications. Can anyone explain it a bit better? The LWN article is a bit heavy going.


What about this other LWN article, "A new approach to user namespaces"? https://lwn.net/Articles/491310/

> It allows for applications with no capabilities to use multiple uids and to implement privilege separation. I certainly see user namespaces like this as having the potential to make linux systems more secure.
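
For the curious, this is roughly what that flow looks like from userspace. A minimal sketch, assuming a kernel with user namespaces enabled; error handling is abbreviated (see clone(2) and the uid_map interface):

    /* An unprivileged process creates a user namespace and maps uid 0
     * inside it to its own normal uid outside. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int pipefd[2];                 /* parent signals "uid_map written" */
    static char child_stack[1024 * 1024];

    static int child_fn(void *arg)
    {
        char c;
        read(pipefd[0], &c, 1);           /* wait until the mapping exists */
        printf("inside the namespace: uid=%d\n", (int)getuid()); /* prints 0 */
        return 0;
    }

    int main(void)
    {
        if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

        /* Since 3.8/3.9, no capabilities are needed for CLONE_NEWUSER. */
        pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
                          CLONE_NEWUSER | SIGCHLD, NULL);
        if (pid == -1) { perror("clone"); return 1; }

        /* Map uid 0 in the new namespace to our unprivileged uid;
         * this single self-mapping is permitted without privilege. */
        char path[64], map[64];
        snprintf(path, sizeof(path), "/proc/%d/uid_map", (int)pid);
        int len = snprintf(map, sizeof(map), "0 %d 1", (int)getuid());
        int fd = open(path, O_WRONLY);
        if (fd == -1 || write(fd, map, len) != len) perror("uid_map");
        if (fd >= 0) close(fd);

        write(pipefd[1], "x", 1);         /* release the child */
        waitpid(pid, NULL, 0);
        return 0;
    }

The child is uid 0 for operations inside its own namespace, but anything it touches outside is still checked against the one unprivileged uid it maps to, which is exactly the privilege separation described above.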


It was primarily designed for sandboxing, so it's not strange that you cannot come up with other use cases.


That... would make sense then. Now I feel stupid ;)


I don't know enough about the feature to say if it is applicable directly, but Android uses uid/gid to create sandboxes on a per-publisher basis, so that installed apps do not have access to other publishers' apps' files.

This could be used in two ways: application security in general would benefit from the practice, and if you are creating a Linux that works both as a traditional "desktop Linux" and as a host for the Android environment, this is a useful feature for keeping Android's app sandboxing intact, which it wouldn't be if you did the uid separation merely by convention.


Here's one use case: http://docker.io


It looks like the kernelnewbies.org page is getting stampeded and is unreachable. Too bad, they usually have a nice high-level overview of the new kernel features and changes.


The H also has an overview, but I usually prefer KernelNewbies too: http://www.h-online.com/open/features/What-s-new-in-Linux-3-...


If you can read French, http://linuxfr.org/news/sortie-du-noyau-linux-3-9 is a good overview.



Am I the only one who feels that kernel releases have become more frequent recently? Or have they changed the version numbering, like Firefox?

About a year or so ago, it was something like 2.6.x. Then it jumped to 3.0, and now it is at 3.9.


Kernel releases have been getting a bit more frequent. A couple of years ago they were about 80 days apart; now it's closer to 70. A change, but not a huge one.


Linux jumped from 2.6.39 to 3.0 because the numbers were getting too big. Linus also decided that, as the middle digit wasn't being incremented, he'd drop it.


I really wish he would've gone with date-based version numbers. 3.X is eventually going to get "too big" and it will be the same (non-)problem.


I remember seeing 2.6.42 on my Ubuntu 10.04. Maybe it was kind of an alias for some 3.x version.


Yeah, some software assumed that there must be three components in version numbers, so 3.X kernels were sometimes called 2.6.(X+40), which means 2.6.42 is actually 3.2.


It's mainly because there's no unstable 2.7; the change to 3.0 happened basically to signal that.

EDIT: For those who do not remember the 2.x scheme: when x was an odd number the version was unstable; when it was an even number the version was stable and was basically what distributions used.


Agreed. It seems that the trajectory from 2.0 to 3.0 was much, much slower than the one from 3.0 to 4.0 (at least judging by data points so far).

Anyone know of any reason for this?


What kind of data points do you have for a trajectory to 4.0? There's no plan for a 4.0 kernel.


I think Linus said he's gonna switch to 4.* once the minor version number gets "too big" (probably 30-40 like he did on 2.6.*).


Sorry, I just meant there's no schedule for 4.* from which you can gauge a "trajectory", just a plan to bump it after a few years.

Since 2004, the kernel team has aimed to release every 2-3 months, and current releases are still within that timeline.


From the first paragraph of the announcement:[1]

“makes me suspect [...] people were gaming the system and had timed some of their pull requests for just before the release”.

Does anyone know what Linus means by that?

[1] http://lwn.net/Articles/548799/


Pull requests this late in the -rc cycle should contain only important bugfixes; other changes should normally wait for the next kernel release. For example, Linus ignored some pull requests sent for 3.6-rc2 that didn't really belong there: https://lkml.org/lkml/2012/8/16/577


Been waiting for raid5 with btrfs!


How stable is btrfs at this point?


I use it full time on my desktop and my laptop. It glitches fairly often, but nothing unrecoverable. You'll mostly run into issues if you run out of space. There are also some weird side effects with improper shutdowns that you should note. Btrfs tends to leave some stuff in my logs too, but nothing that actually lost data; normally I see the error after a brief lockup in the system.


On that note about running out of space, an interesting thing you have to keep in mind with btrfs is that, as a snapshotting filesystem, you can't simply unlink files to free up space. They're still present and taking up space in a previous snapshot.

There are ways to free up space for real, but from what I understand you have to use some tools manually to do it.
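
For what it's worth, the usual tool is "btrfs subvolume delete" run against the old snapshots, and under the hood that is a single ioctl. A rough sketch, assuming the uapi header linux/btrfs.h and hypothetical paths; note this traditionally needs root (or a filesystem mounted with user_subvol_rm_allowed):

    /* Delete a btrfs snapshot subvolume, which is what actually
     * releases the space it still references. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    int main(void)
    {
        const char *parent = "/home/.snapshots";   /* hypothetical layout */
        const char *snap   = "daily-2013-04-01";   /* hypothetical name   */

        int dirfd = open(parent, O_RDONLY | O_DIRECTORY);
        if (dirfd < 0) { perror("open"); return 1; }

        struct btrfs_ioctl_vol_args args;
        memset(&args, 0, sizeof(args));
        strncpy(args.name, snap, sizeof(args.name) - 1);

        if (ioctl(dirfd, BTRFS_IOC_SNAP_DESTROY, &args) < 0)
            perror("BTRFS_IOC_SNAP_DESTROY");

        close(dirfd);
        return 0;
    }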


> glitches

Panic? OOPS?


Mainly oopses left in my logs, and one or two panics on shutdown. Once, a panic when I was doing something silly with my uni research project.

Pity I can't really report these: tainted (AMD) kernel.


I'm using it on two machines (both running Arch Linux, so they're typically only a week or so behind kernel.org releases), and I have not noticed data loss or corruption, even though I crash these machines from time to time (with btrfs-unrelated things).

The ability to make daily snapshots effortlessly is a godsend on my tinkering computer (so I can roll back whatever stupid idea I came up with), and on a slow laptop with an even slower 2.5" PATA hard disk, I at least get the illusion of a performance improvement from LZO compression causing fewer disk accesses.


They say we must wait a little longer...


RAID5 is not really a good idea. See BAARF[1] (the Battle Against RAID Five/Four) for some reasoning.

[1]: http://www.baarf.com/


I've read a little bit of the material, but I don't get the feeling that they looked beyond block-level RAID5/6 (which btrfs/raid5, as far as I understand, is not).

[1] mentions that RAID5 never checks parity on read, which is not an issue with ZFS/zpool (and, I think, the proposed btrfs RAID5) because they verify the checksum of every block read. The creeping multiplication of bit errors on parity rebuild is also a non-issue because of this. [2] only mentions ZFS, but credits copy-on-write as the remedy against distribution of bad data over the RAID volume.

[1]: http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

[2]: http://www.miracleas.com/BAARF/Why_RAID5_is_bad_news.pdf

And, yes, I see this silent corruption problem, and the inability to identify the "wrong" drive in case of a parity mismatch, as a big deficit of block-level RAIDs, and I share their view on the abysmally bad performance of some degraded/rebuilding arrays. But that is mainly what the filesystem-integrated redundancy mechanisms try to address with their "blatant layering violation".
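
To make that concrete, here's a toy program (everything in it is hypothetical: FNV-1a stands in for the real CRC32C, and the "array" is four in-memory buffers). The point is that a per-block checksum identifies exactly which block is bad, and the XOR parity can then rebuild it on the spot; a classic block-level RAID5 can do neither on a normal read:

    /* Toy model: 3 data blocks + 1 XOR parity block, each with a
     * stored checksum that is verified on every read. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NBLK  4                        /* 3 data + 1 parity */
    #define BLKSZ 16

    static uint32_t csum(const uint8_t *b) /* FNV-1a, a CRC stand-in */
    {
        uint32_t h = 2166136261u;
        for (int i = 0; i < BLKSZ; i++) h = (h ^ b[i]) * 16777619u;
        return h;
    }

    /* Rebuild block `bad` as the XOR of all the others; with fresh
     * data and bad == parity index, this also computes the parity. */
    static void xor_rebuild(uint8_t blk[NBLK][BLKSZ], int bad)
    {
        memset(blk[bad], 0, BLKSZ);
        for (int d = 0; d < NBLK; d++)
            if (d != bad)
                for (int i = 0; i < BLKSZ; i++)
                    blk[bad][i] ^= blk[d][i];
    }

    int main(void)
    {
        uint8_t blk[NBLK][BLKSZ];
        uint32_t stored[NBLK];

        for (int d = 0; d < NBLK - 1; d++)   /* write the data blocks */
            memset(blk[d], 'a' + d, BLKSZ);
        xor_rebuild(blk, NBLK - 1);          /* compute the parity    */
        for (int d = 0; d < NBLK; d++)
            stored[d] = csum(blk[d]);

        blk[1][3] ^= 0x40;                   /* silent corruption     */

        for (int d = 0; d < NBLK; d++) {     /* read path: verify all */
            if (csum(blk[d]) != stored[d]) {
                printf("block %d failed its checksum; rebuilding\n", d);
                xor_rebuild(blk, d);
                printf("rebuilt ok: %s\n",
                       csum(blk[d]) == stored[d] ? "yes" : "no");
            }
        }
        return 0;
    }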


Yeah, I don't think they cover raidz or raidz2. raidz is not exactly RAID5 in the classic sense because, as you mention, ZFS does checksum verification.

I wasn't sure what btrfs (or the parent) meant by 'raid5'. I think ZFS was wise to call it something else.



