
Exploiting Recursion in the Linux Kernel - ingve
https://googleprojectzero.blogspot.com/2016/06/exploiting-recursion-in-linux-kernel_20.html
======
rwmj
Good job. This is where microkernels and exokernels work much better. Putting
complex subsystems such as encryption and filesystems into the kernel is a
terrible idea and this is just one of the reasons why.

~~~
2trill2spill
While what you said may be theoretically true, does anyone run microkernels or
exokernels on servers in production? I know microkernels like QNX are
extremely popular and effective in the embedded market, and as far as I can
tell exokernels are more research oriented. But if I want to use a server with
dozens of cores, hundreds of gigs of memory, dozens of drives, and several
NICs, what microkernel could I use? Currently I would use FreeBSD for such a
role. What microkernel or exokernel provides all the features of FreeBSD on
the server, like a COW file system, visibility (dtrace, vmstat, iostat, etc.),
OS-level virtualization (jails), a hypervisor (bhyve), a performant network
stack, and Linux binary emulation, and is open source? Unless people write a
microkernel or exokernel that can replace Linux or FreeBSD on the server,
people will continue to put monolithic kernels into production.

~~~
rwmj
L4 runs on every phone in the world (to a first approximation).

Xen (derived from the original exokernel work) runs AWS on a bazillion
servers.

Linux, FreeBSD, and C are examples of how a heroic amount of work can make up
for poor design, but it's clear that with a better architecture and the same
number of developers they would be much more featureful.

It's also significant, I think, that all the important distributed filesystems
(e.g. Gluster) are implemented as userspace servers, not as kernel drivers.
Once you go beyond a certain level of complexity, implementing a kernel
filesystem isn't really feasible even with heroic effort.

~~~
2trill2spill
> L4 runs on every phone in the world (to a first approximation).

I was talking about servers, not phones.

> Xen (derived from the original exokernel work) runs AWS on a bazillion
> servers.

You forgot to mention Xen is running on Linux in AWS.

~~~
rwmj
Linux runs on Xen in AWS.

~~~
2trill2spill
> Linux runs on Xen in AWS.

Yeah, you're right about that. But my point still stands: almost all of AWS is
being used to host monolithic kernels, because the monolithic kernels (Linux,
FreeBSD) have the features people want for running their
business/organization.

~~~
kbenson
> But my point still stands: almost all of AWS is being used to host
> monolithic kernels, because the monolithic kernels (Linux, FreeBSD) have the
> features people want

The implication there is that they have those features _because_ they are
monolithic kernels, which is presuming the very thing being discussed. The
only thing you can say without a lot more proof is that people use Linux or
FreeBSD because they have features people want. Whether being monolithic is
the cause of those features, or merely correlated with them, has not been
established (in this discussion) yet.

~~~
2trill2spill
> The implication there is that they have those features because they are
> monolithic kernels, which is presuming the very thing being discussed. The
> only thing you can say without a lot more proof is that people use Linux or
> FreeBSD because they have features people want. Whether being monolithic is
> the cause of those features, or merely correlated with them, has not been
> established (in this discussion) yet.

I'm not saying FreeBSD and Linux have those features because they are
monolithic; I'm saying people use monolithic kernels because they are more
feature complete at this point in time. More specifically, if I want to run a
big web app in production, there are no suitable microkernels or exokernels
for doing so right now.

~~~
kbenson
> More specifically, if I want to run a big web app in production, there are
> no suitable microkernels or exokernels for doing so right now.

Windows NT is a hybrid kernel[1], so there's _somewhat_ conflicting evidence.

Really, I think I could make an argument that the reason a microkernel hasn't
gained popularity is that there's been _far_ too much focus on performance, to
the detriment of security and stability, in much of the last few decades of
program development, much less OS development. People aren't good at correctly
estimating risk and reward for future events that are more than a few years
out, and are worse when the risk is abstract in nature.

Would everything be running slower if we used microkernels for the majority of
systems? Probably (but who knows by how much? If there were a lot more work
focused on that problem, we might have ways to alleviate much of it by now).
Would our systems be more secure and less prone to software errors? I think
so. Do I think it's possible we as a culture (species?) could choose stability
and security over performance? Not without enforcing negative consequences
when developers supply buggy and/or exploitable code, and consequences for
those companies that choose to run that code even after they are carefully
advised of the possible consequences. So, not likely, and that's a huge
discussion in itself.

1:
[https://en.wikipedia.org/wiki/Hybrid_kernel#NT_kernel](https://en.wikipedia.org/wiki/Hybrid_kernel#NT_kernel)

------
caf
ecryptfs shouldn't be running kernel_read() from the page fault handler anyway
- it should pass the read off to a work queue and put the process to sleep
until it completes.

------
dietrichepp
This is a common pattern for bugs and exploits, it seems. X is safe, Y is
safe, X+Y is exploitable.

~~~
Kristine1975
Possibly because the developers of X, Y and Z test their own software for bugs
and exploits, but nobody tests the combinations X+Y, X+Z, Y+Z, X+Y+Z.

