
QNX Lecture [pdf] - whereistimbo
http://web.archive.org/web/20151123064622/www.se.rit.edu/~swen-563/slides/C9_-_L1_-_QNX_Lecture.pdf
======
whereistimbo
From:
[https://slashdot.org/comments.pl?sid=185194&cid=15285467](https://slashdot.org/comments.pl?sid=185194&cid=15285467)

Within the size of L1 cache, your speed is determined by how quickly your
cache will fill. Within L2, it's how efficient your algorithm is (do you
invalidate too many cache lines?) -- smaller sections of kernel code are a win
here, as much as good algorithms are a win here. Outside of L2 (anything over
512k on my Athlon64), throughput of common operations is limited by how fast
the RAM is -- not IPC throughput. Most microkernel overhead is a constant
value -- if your Linux kernel is O(n) or O(1), then it's possible to tune the
microkernel to be O(n+k) or O(1+k) for the equivalent operations. The faster
your hardware, the smaller this value of k since it's a constant value.
L4Linux was 4-5% slower than "pure" Linux in 1997 (See L4Linux site for the
PDF of the paper [http://l4linux.org/](http://l4linux.org/)).
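The constant-overhead argument above can be sketched with a toy cost model. This is purely illustrative: the functions, the per-unit cost c, and the IPC overhead k are made-up numbers, not measurements of any real kernel.

```python
# Hypothetical cost model: a monolithic kernel operation costing c*n time
# units versus a microkernel version that adds a constant IPC overhead k,
# i.e. O(n) versus O(n + k). All constants are invented for illustration.

def monolithic_cost(n, c=1.0):
    """Cost of an O(n) operation in a monolithic kernel."""
    return c * n

def microkernel_cost(n, c=1.0, k=50.0):
    """Same operation plus a fixed message-passing overhead k: O(n + k)."""
    return c * n + k

# The absolute overhead (k) stays fixed, so the relative overhead
# shrinks as the amount of real work n grows.
for n in (10, 1000, 100000):
    mono = monolithic_cost(n)
    micro = microkernel_cost(n)
    print(f"n={n:6d}: relative overhead = {100 * (micro - mono) / mono:.2f}%")

# Faster hardware scales both c and k down together, so k's cost in wall-clock
# time shrinks with every hardware generation while the asymptotic class of
# the operation stays the same.
```

This is the whole point of "k is a constant value": it never changes the complexity class, and its real-time cost falls as the hardware speeds up.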

