The ~200 Line Linux Kernel Patch That Does Wonders (phoronix.com)
420 points by pietrofmaggi on Nov 16, 2010 | 96 comments



I don't think I've managed to piece all this together, perhaps someone can fill in the blanks.

• The patch automatically creates a task group for each TTY.

• The patch automatically assigns each new process to the task group for its controlling TTY.

• In the case where there are large (more than the number of cores) numbers of CPU-bound jobs, the latency of interactive jobs is vastly improved.

I think the piece I'm missing is the behavior of the scheduler. Does it now make its decisions based on task group cpu consumption instead of process? I saw options to that effect back around 2.6.25.

Why is this an improvement over just nicing the "make -j64" into the basement and letting the interactive jobs have their way as needed? (Likely possibilities are that it is automatic, or maybe there is something about disk IO scheduling happening from the task groups as well.)


Good questions. As far as the "why is this an improvement over just nicing" I found this link elsewhere in this thread:

http://marc.info/?l=linux-kernel&m=128991621119292&w...

It includes this discussion:

No, it won't. The target audience is those folks who don't _do_ the configuration they _could_ do, folks who don't use SCHED_IDLE or nice, or the power available through userspace cgroup tools.. folks who expect their box to "just work", out of the box.
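
For reference, the kind of manual configuration being described there looks roughly like this (a sketch; both are standard tools, and the -j value is just the example used in this thread):

  # Drop the whole build to the lowest nice level:
  nice -n 19 make -j64

  # Or, stronger: run it under SCHED_IDLE so it only gets otherwise-idle CPU time:
  chrt --idle 0 make -j64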


> I think the piece I'm missing is the behavior of the scheduler. Does it now make its decisions based on task group cpu consumption instead of process? I saw options to that effect back around 2.6.25.

Yes, and yes. Previously you could set things like this up explicitly, this makes it (optionally) automatic.

> Why is this an improvement over just nicing the "make -j64" into the basement

That gives you... I think 10% less CPU weight per level, so you can get down to 13% of base. So your 64 processes will still weigh about 8x one base process.

This lets you consider everything spawned from your terminal as one group, and everything from your X session as another, so your compile processes collectively (no matter how many you have) weigh as much as your GUI processes collectively weigh.
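
For the curious, the manual cgroup setup this automates looks roughly like the following (a sketch assuming a kernel with the cpu cgroup controller enabled; the mount point and group names are arbitrary):

  # Mount the cpu controller somewhere convenient:
  sudo mkdir -p /dev/cgroup
  sudo mount -t cgroup -o cpu none /dev/cgroup

  # One group for the compile, one for the desktop session:
  sudo mkdir /dev/cgroup/build /dev/cgroup/desktop

  # Move the current shell into the build group; its children follow automatically:
  echo $$ | sudo tee /dev/cgroup/build/tasks

  # Each group carries the same weight (cpu.shares defaults to 1024),
  # no matter how many tasks it contains:
  cat /dev/cgroup/build/cpu.shares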


So does "tty" or "terminal" in this context also refer to pseudoterminals, so that each xterm/konsole/gnome-terminal instance gets its own scheduling group as well?


I would be very surprised if it didn't, this should be the same as what shows under the "TTY" column from "ps -f".


I don't follow kernel development extremely closely, but it fascinates me that people are still actively working on the kernel's scheduler and achieving a "huge improvement" like this.


Pretty much all of the performance enhancement in the kernel has been toward server-type high-throughput applications. This was mainly because subjective things like jitter and interactivity lag are really hard to measure objectively (using existing performance benchmarks), and it's really hard to optimize for something when you have no benchmarks and no regression tests. Desktop-style interactivity improvements have only recently started advancing. The cgroups feature has actually been available in the kernel for some time, but distros weren't using it. This new patch essentially auto-configures cgroups per TTY.


I remember reading something about an anesthesiologist who got into hacking the scheduler, targeting desktop use cases. He said that the scheduler generally gets a lot more attention from people who care about server workloads. He had his own custom kernel patches that people used to get directly from him and that weren't in the mainline kernel -- this was before the era of multi-core, but people said his scheduler had better responsiveness than the default one.


Con Kolivas. He's also mentioned in the article.

http://en.wikipedia.org/wiki/Con_Kolivas


It makes me wonder whether it is a sign that desktop responsiveness has been neglected by the kernel devs, who possibly prioritize server issues. I read a Google engineer suggesting Canonical should hire decent kernel developers: "P.S. Next thing for Ubuntu to learn --- how to pay their engineers well enough, and how to give them enough time to work on upstream issues, that once they gain that experience on Ubuntu's dime and become well known in the open source community, they don't end up jumping ship to companies like Red Hat or Google. :-)"

http://news.ycombinator.com/item?id=1321029


Canonical has some decent kernel developers, just... not enough, especially for their install base and the amount of work they do.


At least they're aware of it; just yesterday they posted this: http://webapps.ubuntu.com/employment/canonical_KD%20PG3/


Here's the actual patch (http://marc.info/?l=linux-kernel&m=128978361700898&w...), with a bit of a summary of what it does.

I don't have enough context to fully follow it, but it sounds like it sets up a better heuristic for grouping related processes into task groups in the scheduler.


So what's the downside? You almost never get optimizations like this for free. The post hints that this is also good for server workloads, but what suffers? Realtime would, but realtime usually involves a different scheduler anyway.


There isn't one. It's an existing option in the kernel, you can configure cgroups that way already, but most people don't do this so the feature is wasted. All this patch does is roughly approximate a decent-looking cgroup configuration by splitting processes by tty automatically.
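
(With the autogroup patch applied -- at least in the form that later went mainline -- you can check which group a process ended up in; the group id below is just an example:)

  cat /proc/$$/autogroup
  # /autogroup-123 nice 0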


Of course there will be regressions if you change the scheduler policy. ck tried something similar to this and mplayer performance suffered with it (though I don't remember the details). It also broke gnome-startup, because it assumed some specific scheduling order, though this patch is more limited so it might not.


When your application breaks because of scheduler ordering, You're Doing It Wrong.


Yeah, well, kernels need to support existing programs…

By the way, your post is the single most obvious statement I've read this year. You got upvoted just because you capitalized some words?


I got upvoted because I'm right. Kernels don't need to support horribly designed programs just because they exist, just like they don't need to support horribly designed programs that don't exist yet. Kernels support an interface and that's it. If you write code that abuses the interface, get ready to become a regression, and that'll be your own fault.

(TBH I have no idea why I got upvoted, it wasn't that insightful, but I stick by what I said)

(EDIT: I'm talking about gnome-startup. That's a stupid regression that never should've happened. The mplayer performance bug is totally understandable if you're mucking with the scheduler. What we really need is for someone (distros?) to pick up cgroups and provide a nice UI for it, some sane but nondestructive defaults, etc. Until then, this is a nice patch that keeps badly behaving programs from dragging down the entire system. At the very least, we mostly get user separation in multi-user environments.)


Since this only groups processes according to TTY/PTY, it should only affect jobs kicked off by an interactive login session. Background daemons, cron jobs, and the like all run detached from a controlling terminal, so their priority should be unaffected.

As long as the fixed overhead of the patch is small (which the linked thread seems to indicate) this should be a sizable win for desktop Linux boxes without much downside for server loads.
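
You can see the split yourself: anything with a controlling terminal shows a TTY/PTY in ps, while daemons show "?" (the output below is illustrative):

  ps -eo pid,tty,comm
  #   PID TT       COMMAND
  #     1 ?        init
  #   842 ?        cron
  #  1907 pts/0    bash
  #  1954 pts/0    make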


I think gxti is asking for a quantification of 'without much'.


The downside is that a bunch of processes started from one TTY don't get as much CPU as before. It basically shifts the scheduling granularity a level higher, from processes to (interactive) sessions. That is really the question: at which level do we want scheduling to be fair? For a desktop user, processes have little meaning. Sessions are much more useful, because they correspond better to the user's different tasks, across which they expect CPU power to be distributed fairly.


It's not down for me... but here's the text:

In recent weeks and months there has been quite a bit of work towards improving the responsiveness of the Linux desktop with some very significant milestones building up recently and new patches continuing to come. This work is greatly improving the experience of the Linux desktop when the computer is withstanding a great deal of CPU load and memory strain. Fortunately, the exciting improvements are far from over. There is a new patch that has not yet been merged but has undergone a few revisions over the past several weeks and it is quite small -- just over 200 lines of code -- but it does wonders for the Linux desktop.

The patch being talked about is designed to automatically create task groups per TTY in an effort to improve the desktop interactivity under system strain. Mike Galbraith wrote the patch, which is currently in its third version in recent weeks, after Linus Torvalds inspired this idea. In its third form (patch), this patch only adds 224 lines of code to the kernel's scheduler while stripping away nine lines of code, thus only 233 lines of code are in play.

Tests done by Mike show the maximum latency dropping by over ten times and the average latency of the desktop by about 60 times. Linus Torvalds has already heavily praised (in an email) this miracle patch.

Yeah. And I have to say that I'm (very happily) surprised by just how small that patch really ends up being, and how it's not intrusive or ugly either.

I'm also very happy with just what it does to interactive performance. Admittedly, my "testcase" is really trivial (reading email in a web-browser, scrolling around a bit, while doing a "make -j64" on the kernel at the same time), but it's a test-case that is very relevant for me. And it is a _huge_ improvement.

It's an improvement for things like smooth scrolling around, but what I found more interesting was how it seems to really make web pages load a lot faster. Maybe it shouldn't have been surprising, but I always associated that with network performance. But there's clearly enough of a CPU load when loading a new web page that if you have a load average of 50+ at the same time, you _will_ be starved for CPU in the loading process, and probably won't get all the http requests out quickly enough.

So I think this is firmly one of those "real improvement" patches. Good job. Group scheduling goes from "useful for some specific server loads" to "that's a killer feature".

Linus

Initially a Phoronix reader tipped us off this morning of this latest patch. "Please check this out, my desktop will never be the same again, it makes a lot of difference for desktop usage (all things smooth, scrolling etc.)...It feels as good as Con Kolivas's patches."

Not only is this patch producing great results for Linus, Andre Goddard (the Phoronix reader reporting the latest version), and other early testers, but we are finding this patch to be a miracle too. While in the midst of some major OpenBenchmarking.org "Iveland" development work, I took a few minutes to record two videos that demonstrate the benefits solely of the "sched: automated per tty task groups" patch. The results are very dramatic. UPDATE: There's also now a lot more positive feedback pouring in on this patch within our forums with more users now trying it out.

This patch has been working out extremely well on all of the test systems I have tried it on so far, from quad-core AMD Phenom CPU systems to Intel Atom netbooks. The two videos were recorded off a system running Ubuntu 10.10 (x86_64) with an Intel Core i7 970 "Gulftown" processor that boasts six physical cores plus Hyper-Threading to provide the Linux operating system with twelve total threads.

The Linux kernel was built from source using the Linus 2.6 Git tree as of 15 November, which is nearing a Linux 2.6.37-rc2 state. The only change made from the latest Linux kernel Git code was applying Mike Galbraith's scheduler patch. The automated per-TTY task grouping can be toggled at runtime by writing either 0 or 1 to /proc/sys/kernel/sched_autogroup_enabled, or disabled by passing "noautogroup" as a parameter when booting the kernel. Changing the sched_autogroup_enabled value was the only system difference between the two video recordings.

Both videos show the Core i7 970 system running the GNOME desktop while playing back the Ogg 1080p version of the open Big Buck Bunny movie, glxgears, two Mozilla Firefox browser windows open to Phoronix and the Phoronix Test Suite web-sites, two terminal windows open, the GNOME System Monitor, and the Nautilus file manager. These videos just show how these different applications respond under the load exhibited by compiling the latest Linux kernel using make -j64 so that there are 64 parallel make jobs that are completely utilizing the Intel processor.
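
(For what it's worth, the runtime toggle described above boils down to the following, assuming you have booted into the patched kernel:)

  # Check whether autogrouping is currently enabled (1) or not (0):
  cat /proc/sys/kernel/sched_autogroup_enabled

  # Flip it without rebooting:
  echo 0 | sudo tee /proc/sys/kernel/sched_autogroup_enabled
  echo 1 | sudo tee /proc/sys/kernel/sched_autogroup_enabled

  # Or disable it for the whole boot by adding "noautogroup" to the kernel command line.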



Some good stuff in this thread. I found this post by Mike Galbraith (patch author) explaining why it's needed especially interesting:

http://marc.info/?l=linux-kernel&m=128991621119292&w...



OT: is "make -j64" overkill unless you have dozens of cores or am I missing something?


You're right - but that was the point. The patch was trying to fix problems with the process scheduler, and "-j64" is going to make lots of processes that want to do work and need scheduling.


Thanks, but then the "that is my typical workload" thingy does not hold, as you rarely have >60 CPU-bound processes running at the same time. Well, Flash Player in Chrome notwithstanding ;)


It probably approximates Linus's typical workload, which I imagine involves constant compiling and testing while compiling. He's probably still CPU bound.


make?


If you're the head of the world's largest computer OS project, the root of the maintainer tree as it were, I would make no assumption about what his typical CPU workload is like. :)


The number of jobs to run for optimal compile time can be quite confusing. If none of the files you are going to compile are cached, it is fine to run a lot more jobs than usual, as many of them will be waiting on disk I/O. After that, twice as many jobs as you have cores is usually appropriate.


According to Con Kolivas's benchmarks, with the BFS scheduler you just do make -j [numprocs] for best results. I can't recall if he was accounting for disk cache, though.


How many simultaneous threads will your next computer be able to run?

Chances are it already runs at least two, most probably four. It's not unreasonable to see 4 and 8-threads as the norm. Also keep in mind we are only considering x86s. SPARCs, IIRC, can do up to 64 on a single socket. ARM-based servers should follow a similar path.

BTW, a fully-configured Mac Pro does 12. A single-socket i7 machine can do 12. I have never seen a dual-socket i7, but I have no reason to believe it's impossible.

Considering that, -j64 seems quite reasonable.


Dual-socket i7 is called Xeon.

There are dual-socket and even quad-socket 8-core hyperthreaded xeons (the Xeon L75xx series). A 1U Intel with 64 threads will set you back about $20k.

AMD has 12-core chips, so you can get 48 cores in 4 sockets there. (But I think they only have one thread per core)


So, -j64 seems quite reasonable, if you have $20K around... ;-)

Personally, I would spend a part of the money on 2048x2048 square LCD screens. They look really cool.


Gentoo recommends -jN+1 where N is the number of physical and virtual cores.


Say I have a quadcore with hyperthreading, does this mean 4 + 8 + 1? Or is it either the physical or the logical cores (whichever is higher)?


How do you get 4 + 8? But anyway, it's logical cores, not physical ones.

The kernel can multi-task processes, but each process still gets exclusive use of the CPU when it runs. So if it doesn't need an adder, that adder sits idle.

With hyperthreading you can run two processes at once and the CPU merges them at the instruction level making maximum use of the components on the CPU.


make -j$(2N + 1) is roughly where minimal compile times are.


Where N is the number of physical cores? I do not use hyper-threading (it tends to be bad for the floating point and bandwidth limited operations that I do), but usually find minimal compile times at N+1 jobs (but with little penalty for several more).


It depends on many factors what the optimal number of concurrent builds is, but the bottom line is that you want to maximize your CPU utilization and minimize context switching.

If you think that one extra concurrent job is enough to fill CPU utilization in the time that other jobs are blocking on iowait, then you are fine.

So, bottom line, factors to think about (a rough shell sketch follows the list):

- your I/O throughput for writing the generated object files;

- the complexity of the code being compiled (template-rich C++ code has a much higher ratio of CPU usage to I/O);

- the number of cores in your system.
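
As a rough illustration of the rules of thumb in this subthread (assuming GNU coreutils' nproc is available; it counts logical CPUs, i.e. hyperthreads included):

  # N+1 jobs, the common recommendation for warm caches:
  make -j"$(( $(nproc) + 1 ))"

  # 2N+1 jobs, for cold-cache builds where many jobs block on disk I/O:
  make -j"$(( $(nproc) * 2 + 1 ))"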


Out of curiosity, what types of applications are you running where HT hurts performance?


Sparse matrix kernels and finite element/volume integration. For bandwidth-limited operations, it is sometimes possible to get better performance by using fewer threads than physical cores because the bus is already saturated (for examples, see the STREAM benchmarks). For dense kernels, I'm usually shooting for around 70 percent of peak flop/s, and any performance shortcomings are from required horizontal vector operations, data dependence, and multiply-add imbalance. These are not things that HT helps with.

Additionally, HT affects benchmark reproducibility which is already bad enough on multicore x86 with NUMA, virtual memory, and funky networks. (Compare to Blue Gene which is also multicore, but uses no TLB (virtual addresses are offset-mapped to physical addresses), has almost independent memory bandwidth per core, and a better network.)


I have 4 cores with hyperthreading enabled (so 8 "threads"), and find that -j10 is the fastest.


> Where N is the number of physical cores?

Yes. Dunno about HT, never used a box with it.


I am a newbie when it comes to compiling the kernel. Is it a pain to do with stock Ubuntu 10.10?

Sometimes I run something heavy on my laptop and the desktop freezes annoy me. If this patch will let me get around that, I would be glad to try it out.

Does anyone have the URL of a nice tutorial on compiling a new kernel for Ubuntu 10.10?


It isn't hard to do, but a painless way to learn is to install an Ubuntu 10.10 instance in VirtualBox and try it all in there. If you screw up, who cares? After you have been through the process once, it won't be intimidating to do it for your real OS.


Yes, it's indeed quite easy to compile your own kernel, and VirtualBox is a good idea. (For compiling you won't actually need the VirtualBox, but for booting from it without worrying that you broke something, VirtualBox sure comes in handy.)

I use "sudo make menuconfig", which gives you a text-based menu; there may also be a graphical version. I would not recommend "sudo make config", as that only gives you a long list of questions to answer.

Anyway, the trick is to read all the documentation and stick with safe choices if you do not know what you are doing.


  # sudo make xconfig
should give you a point-and-click interface to the same menu. I haven't used it in years, but it was pretty clunky back then. It's just nicer to poke around in than the text-mode menu.


You don't need to be root to compile a Tk application and run it! You also don't need to be root to compile the kernel.

You only need root to copy vmlinux to /boot and copy the modules to /lib.


You should probably start with the .config from your distribution's kernel, rather than the stock vanilla kernel .config. It's usually in /boot, so in make menuconfig/make xconfig you choose to load an existing saved config, then select /boot/config-[kernel version] (assuming your distribution installs the kernel config there). Some kernels may also provide the .config contents in /proc/kconfig (or something similar -- can't remember the exact filename).


One thing to consider in terms of building a new kernel is that repos tend to customize their release kernels quite a bit, by adding non-mainline patches, extra drivers, etc.

Recompiling a vanilla kernel (from the linux-next git) is a pretty easy process, but you may find that when running it you've lost some nice features of the release you run or odd things have stopped working.

One might get better results from using a vendor supplied release kernel source tree (installing the kernel sources package for the repo) and then applying a patch to add the new scheduling groups. Making a patch like that is probably too hard for a newbie, but I'd be surprised if someone on the ubuntu forums doesn't end up providing one sometime soon.


In Ubuntu, the config used to compile the kernel is available at /boot/config-`uname -r`.

Copy it to the source dir and name it .config.
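
Putting the pieces from this thread together, the generic (non-packaged) route looks roughly like this -- a sketch only, and the patch filename here is made up:

  cd linux-2.6                          # your kernel source tree
  patch -p1 < sched-autogroup-v3.patch  # hypothetical filename for Mike's patch
  cp /boot/config-"$(uname -r)" .config
  make oldconfig                        # answer prompts for any new options
  make -j"$(( $(nproc) + 1 ))"
  sudo make modules_install install     # may still need an initramfs and grub update on Ubuntu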


yup, but that gets you a clone of the configuration options - which is important - but doesn't include the source patches, reverts, backports etc. that ubuntu made to their release kernel. I haven't looked at 10.10 specifically, but there are likely a fair number of them - many vendors customize extensively and few (if any?) ship 100% vanilla.

It's often not a problem to replace a release kernel with a vanilla kernel, but it can definitely change some behaviors or bite you if you're a special case or are using drivers not in the kernel tree.


Should I be pulling in the source from apt and applying the patch that way? I am forced to use nvidia's drivers, and I fear the vanilla kernel may not work.


Don't be too afraid. Nvidia's drivers come as a module. There's lots of documentation out there on how to make them work.


How do you stay up to date with Ubuntu's changes?


Ubuntu's changes to the kernel usually just backport new kernel features/fixes to an older kernel version. Compiling a fresh mainline kernel will get you most or all of those fixes. The source code shim that loads Nvidia's blob can be automatically recompiled by dkms when you boot into the new kernel. Installing the nvidia-kernel-dkms (or similarly-named) package should make that happen.


For all those looking for a compact guide to Linux kernel compilation:

http://www.kroah.com/lkn/


The hardest parts will be A> getting the patch applied correctly and B> actually getting the OS image you compile into the correct area for the bootloader (and recovering if it doesn't work). The actual kernel compile these days is pretty easy, aside from picking the correct options for your system (which, while easier than it used to be, still isn't a cakewalk).


Anything from Phoronix should be taken with a grain of salt. This looks legit since it has a message from Linus praising the patch, but there have been several similar stories out of Phoronix that turn out to be hoaxes or misunderstandings.

That said, such a patch would be pretty rad.


I wish something similar could be ported to BSD/Darwin, OS X. I have an MBP 6,2 (i5) with 4GB mem and a 5400 rpm disk, and it's quite easy to bog it down, to almost unbearable levels sometimes.


What sort of tasks?

FWIW, I had a similar configuration to yours (just an older MBP) and installing an SSD helped immensely. I can hit 200% CPU load and not even realize it until the fans kick in…


I know the subject is pretty much Apple's and oranges, but I'm currently running:

- Terminal with 2 tabs

- Firefox with 2 tabs

- Chrome with about 9

- Gaim and Skype

- iTunes streaming soma.fm

- NetBeans with an open project

- jEdit

- Colloquy

- a Postgres instance

and as soon as I booted Windows XP in VMware, well, it took me a while to be able to reply to this post (after the VM settled).

I also know that you might be saying "d'oh", but I've had Gentoo running on this same metal with a "similar" environment, AND compiling stuff with -j4 doesn't freeze my UI.

My user experience with "OSX" is that it's way more prone to unresponsiveness due to load, but hey, who cares :P clicks Time Machine


Your typical workload sounds almost exactly the same as mine - right down to VMware temporarily killing OS X performance.

Not that Anonymous Guy On HN is worth much, but get an SSD - it'll be the best upgrade you've ever purchased.

(Or maybe I just needed to sidegrade to Linux. :)


Do you have a suggestion on a good and large SSD? And what did you install it into?


The Intel SSDs are good; they top out at 160GB and they're not as fast as some of the newer drives - but their track record is nearly flawless. It's what I have, but I ordered one the minute it was posted on Newegg last year. Some of the newer Sandforce-based drives are supposed to be good, though you'll want to pick one from a reliable manufacturer and with a stable firmware. If you have a Mac, OWC[1] is a good choice, as I believe they have firmware that garbage-collects HFS+, which helps to keep the SSD as fast as possible. I also think OCZ is good, but check reviews to make sure people aren't having too many problems.

You can also get a bracket that replaces your optical drive and allows you to fit a 2.5" HDD. I have one in my 17" non-unibody MBP, and it's really the best of both worlds - I keep OS X, my working files and apps on the SSD, along with my main Windows XP web testing VM. My iTunes library, media, and games stay on the HDD, along with a Boot Camped copy of Windows 7 (though it's a pain to get the installer to run without an internal optical drive). I keep a cheap Samsung bus-powered DVD burner in my bag, but in reality I rarely need it. I think OWC sells a bracket for Macs, but if you can figure out exactly which bracket you need, a site called newmodeus.com sells them for almost every laptop ever made for considerably less.

I really do believe my SSD is the best upgrade I've ever spent money on; no computer I use from now on will be without one. It's not so much that the computer is faster; it's more the feeling that the computer does not grind to a halt or slow down, no matter what's going on. (I may have compared my computer to the Terminator amongst friends once or twice… it just doesn't slow down.)

1: macsales.com


Chrome doesn't behave well with limited CPU/RAM. It likes to "burst". I generally close it when doing much of anything like gaming or running VMs.


What kind of application blocks on disk IO but nothing else for extended amounts of time? I'm having a hard time seeing how installing an SSD and maxing out your cores are terribly related otherwise.

SSDs do a lot to reduce loadtimes, and thus make your computer seem much faster, but they do little for making your programs run full-speed-ahead constantly. Most every application out there blocks on network connections, user input, or just plain old throttles itself.

For that matter, I can max out my cores just using a couple dozen instances of mplayer, playing several movies at once off of a usb removable harddrive...


"What kind of application blocks on disk IO but nothing else for extended amounts of time? I'm having a hard time seeing how installing an SSD and maxing out your cores are terribly related otherwise"

Virtual memory paging to/from disk. This is probably why the new MacBook Airs feel faster than the CPU+RAM specs suggest.


That'll improve your loadtimes, but I'm having a really hard time seeing that as the reason why most people aren't maxing out their CPUs all the time. Unless something has gone terribly wrong, you should never be hitting your disk that much.

During standard home computer operation, both the CPU and the disk are generally quite idle.


5400 rpm disk

Even with a perfect scheduler you're going to have to wait on I/O. Disk speeds are the limiting factor on most machines, and this goes double for laptops. I highly recommend getting an SSD.


But not anywhere near what we see: BeOS on my 1999-era system handily trumped Linux, Windows or Mac OS on 2010 hardware (non-SSD) when it came to interactive performance, solely because it had a better I/O scheduler. Back then, I could surf the web without being constantly reminded that I had Mozilla compiling & DV streaming off of a camera; today I'm regularly reminded that work is happening in the background.

This isn't to say that there aren't real limits or that BeOS was perfect (far from it) but simply that there's considerable room for improvement before we start hitting theoretical limits.


Yes, I'd like that very much, but the current prices are too high for me.


You can afford a MacBook Pro but you can't afford an SSD that costs 25% of that?

(A really fast expensive SSD is around $400.)


Erhm, yeah, basically :) I recently said "no more", quit my job, bought a MBP, and am struggling to get bootstrapping to work. One wave short of a shipwreck? Yes. Free to be creative, free from LAMPish crappy apps, free to hack away on Dojo/Django/Postgres apps, free from bosses who don't code? Fuck yeah


Bingo: if I used a MBP as my main computer, I'd take out the optical drive and put in something like this: http://eshop.macsales.com/shop/internal_storage/Mercury_Extr... for $99. Or the 60GB version for $149.


Spending a thousand or two on a computer is a hell of a lot easier to justify than spending several hundred on a harddrive. Particularly when you can find less fancy harddrives for a fraction of that. SSDs are far more of a luxury item than laptops.


Agreed. My lizard brain tells me that too. When we buy a faster processor, we are valuing our time against the cost of the processor. I just have to convince the lizard inside to do the same with disk wait times.


For most desktop workloads the disk wait times exceed the processor wait times by orders of magnitude.


9 upvotes and it's already down. Does anybody have a mirror? This seems pretty useful.


Anybody know of an Ubuntu PPA for this?


Do they put kernel patches in PPAs?


No, but you can make a deb of a patched kernel and compile that for PPA distribution.

It wouldn't be very difficult to make; I would expect to see one in the next 24 hours or so.

Be wary of getting your kernel from a PPA though - consider it experimental.


Thanks - I've only used PPAs for apps and such; I didn't realize people put kernels up too. Though, like you say, I'd probably only use that for a virtual machine instance and will wait till the next kernel patch for my base workstation.


Just for the brave out there: to build a custom kernel for Ubuntu, based on the actual Ubuntu kernel image, you have to follow these instructions:

https://help.ubuntu.com/community/Kernel/Compile

Apply the patch before compiling and there you go.


Makes me wonder: don't they have kernel APIs for process schedulers and I/O schedulers by now? The scheduler tweaks have been going on for ages.

Instead of compiling a single new kernel module (or downloading it prebuilt from an apt repo or a PPA) and kicking it in with modprobe, we now need to obtain the sources for the whole kernel, apply the patches, configure, build, and deploy. Sure, Debian/Ubuntu have that partially automated, but it's still a pain.

At least I'll wait for stock 2.6.38 on Ubuntu and cross my fingers they put this patch in.


Pluggable schedulers have been proposed, implemented, and shot down by Linus several times in the past. IANAKH, but Linus's argument seems to basically boil down to this: for a monolithic kernel, delegating something as central as task scheduling to pluggable modules is a pretty big hit in terms of latency and complexity vs. just putting the best, most tightly-tuned scheduler you can smack dab in the heart of the beast.

(IANAKH = I Am Not A Kernel Hacker)


Linus intentionally will not allow the scheduler to be modularized to force people to develop the One True Scheduler rather than a bunch of workload-specific ones.


All this patch does is add a smarter default for just such an API: cgroups.

Obviously, any change to the default behavior is going to require building a new kernel.

There may be good reasons for completely pluggable schedulers, but this is not one of them.


This is funny and cool at the same time: back when I still used Linux regularly, it was because it was way smoother than Windows under load. I don't know if it regressed since then and was fixed, or just got better, but either way is awesome!


So, is this really going to help, if I don't have tons of busy processes (a'la "make -j64") running?


It can help prevent background processes (like disk indexing or manpage index updating) from interfering with interactive processes.


I am impressed; I never expected such a dramatic improvement on my desktop.



