L4Linux – Linux running on the L4 microkernel (l4linux.org)
140 points by phoe-krk on Jan 18, 2019 | hide | past | favorite | 41 comments



This seems like an important part that's buried

> Compared to monolithic Linux, there is a small performance tradeoff because of the µ-kernel architecture. However, the initial L4Linux has been somewhat optimized, and on L4/x86 it has a very acceptable slowdown of less than 4 % for any relevant load.

If it provided any additional security guarantees in trade for that 4%, I imagine it'd become a popular way of running Linux


It doesn't really provide extra security for processes managed by the Linux kernel, but it allows you to co-locate Linux and some other code. I could see a model where you have an untrusted Linux land running next to some unikernel-like, RTOS-esque components that are protected from the Linux side. The seL4 guys had a CTF where they ran hard-realtime helicopter software next to a Linux kernel on the same system, and gave you a root shell to the Linux side to start off with.

That being said, last time I checked even sel4 wasn't really hardened against Spectre attacks.


By exposing kernel memory Spectre diminishes the effectiveness of mitigations like kernel ASLR as well as makes it easier more generally to exploit kernel vulnerabilities. That's significant in the case of Linux as there's no shortage of Linux kernel bugs, not now nor for the foreseeable future. Even well written, bug free user space applications can't protect themselves from the kernel, AMD's and Intel's best efforts with SEV and SGX, respectively, notwithstanding.

By contrast seL4 is effectively bug free. (Literally bug free for the most part, but AFAIU some gritty details like TLB management remain outside the scope of the formal verification.) That means you can build a reliable security model for your own application. The guarantees that model can make about confidentiality may be weaker because of Spectre, but you're still in an infinitely better position than on Linux, Windows, or most anywhere else.

seL4 is arguably even better than ARM's TrustZone. With TrustZone you're still stuck having to ensure you correctly manage transitions into and out of protected mode. Bugs in such code constitute a good chunk of kernel vulnerabilities more generally, so it's clearly not trivial. Doing this correctly is basically what seL4 is all about. Then consider the fact that people are trying to push so many tasks into the TrustZone (or secure enclaves in general) that it's common to run an OS in the TrustZone. People would do much better to run seL4 inside the enclaves than bug ridden TEE environments. And unless the enclave is a discrete processor rather than a protected mode as with TrustZone, you may as well just run seL4 as the only OS (or at least the host OS).

Interestingly, Apple's T1 and T2 chips run an L4-based microkernel. AFAIU it's something of a cousin to seL4 but predates it. If seL4 had been as mature then as it is now, perhaps Apple would have started with it.


seL4 is bug free in theory, not in practice. It has been proved to follow the spec, but the spec can have errors. Missing workarounds for hardware bugs can bite too.

"Beware of bugs in the above code; I have only proved it correct, not tried it." -- Donald Knuth on the van Emde Boas construction of priority deques (1977)


After proving full functional correctness (that the machine code implements the spec), they did additional security proofs against the spec, including confidentiality and integrity. If you accept their TCB assumptions (no hardware bugs, etc.), the entire proof is incredibly strong.


> By exposing kernel memory Spectre diminishes the effectiveness of mitigations like kernel ASLR as well as makes it easier more generally to exploit kernel vulnerabilities. That's significant in the case of Linux as there's no shortage of Linux kernel bugs, not now nor for the foreseeable future. Even well written, bug free user space applications can't protect themselves from the kernel, AMD's and Intel's best efforts with SEV and SGX, respectively, notwithstanding.

I mean, yeah, that's what I'm saying. There's some benefit to running stuff alongside the kernel, but you gain nothing by running your Linux processes on L4Linux.

> By contrast seL4 is effectively bug free. (Literally bug free for the most part, but AFAIU some gritty details like TLB management remain outside the scope of the formal verification.) That means you can build a reliable security model for your own application. The guarantees that model can make about confidentiality may be weaker because of Spectre, but you're still in an infinitely better position than on Linux, Windows, or most anywhere else.

The seL4 guys are super careful not to make such strong statements. And it's weird that you'd minimize the consequences of reading out arbitrary memory.

> seL4 is arguably even better than ARM's TrustZone. With TrustZone you're still stuck having to ensure you correctly manage transitions into and out of protected mode. Bugs in such code constitute a good chunk of kernel vulnerabilities more generally, so it's clearly not trivial. Doing this correctly is basically what seL4 is all about. Then consider the fact that people are trying to push so many tasks into the TrustZone (or secure enclaves in general) that it's common to run an OS in the TrustZone. People would do much better to run seL4 inside the enclaves than bug ridden TEE environments. And unless the enclave is a discrete processor rather than a protected mode as with TrustZone, you may as well just run seL4 as the only OS (or at least the host OS).

TrustZone and a microkernel like seL4 are totally orthogonal concepts, so contrasting the two doesn't really make sense. And you can absolutely run seL4 in Secure EL1; you just wouldn't want to run it in EL3, because it's a little heavyweight in its current incarnation for that (hence the commenter below talking about how it cut their perf in half).

> Interestingly, Apple's T1 and T2 chips run an L4-based microkernel. AFAIU it's something of a cousin to seL4 but predates it. If seL4 was as mature then as it is now perhaps Apple would have started with it.

It's an extremely modified L4Ka::Pistachio that has, AFAIK, nothing to do with seL4.


The Australian seL4 is not yet fixed for Spectre, but the Dresden L4Re is.


That sounds like a very neat CTF. I'll search, but do you know if anyone wrote about it or documented it?


4% is a meaningless measurement for an OS kernel.

I worked with L4, and we had to throw it out because the microkernel overhead caused a 2x performance hit on frames per second for animations on a 400MHz ARM9.

Are you going to see a 2x hit for disk IOps, or network packets per second, or IO latency?


> I worked with L4, and we had to throw it out because the microkernel overhead caused a 2x performance hit on frames per second for animations on a 400MHz ARM9.

Were you sending individual graphics commands and rasters over IPC or something, or was that just the cost of the flips, and maybe a second flip from a compositor (or blitter)?


Userspace would render its local window and send the invalidated regions to the kernel.

The problem was twofold: L4 IPC was just slower than an SWI, and the L4 driver model would lock the page table for every region instead of checking all the regions at once.
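To make the second point concrete, here is an illustrative model of the locking difference: taking the page-table lock once per region versus once for the whole batch. All names (`PageTable`, `check_per_region`, etc.) are invented for this sketch; this is not actual L4 driver code.

```python
# Invented sketch of the per-region vs. batched locking patterns.
class PageTable:
    def __init__(self):
        self.lock_count = 0  # how many times the lock was taken

    def lock(self):
        self.lock_count += 1

    def unlock(self):
        pass


def check_per_region(pt, regions):
    """What the driver model reportedly did: lock/unlock for every region."""
    for region in regions:
        pt.lock()
        # ... validate `region` against the page table ...
        pt.unlock()


def check_batched(pt, regions):
    """The cheaper alternative: one lock held across the whole batch."""
    pt.lock()
    for region in regions:
        pass  # ... validate `region` ...
    pt.unlock()


pt_a, pt_b = PageTable(), PageTable()
regions = list(range(100))
check_per_region(pt_a, regions)
check_batched(pt_b, regions)
assert pt_a.lock_count == 100 and pt_b.lock_count == 1
```

With 100 dirty regions per frame, the per-region scheme pays the lock (and any associated kernel entry) 100 times, which is exactly the kind of overhead that adds up on a 400MHz part.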


Which L4 was this, by the way? Did you replace it with an RTOS, a POSIX system?


We had an internal bare-metal implementation already.

L4 was just pushed as the “hotness”, and ironically as a way to get around the Linux GPL issues at the time.

I don’t want to sound too critical. L4 is actually easier to build drivers for than Linux, and you are much less likely to shoot yourself in the foot.

It’s a great microkernel. It’s just that the 4% may all come from the most human-sensitive part of the application.


I still can't tell what L4 you're talking about. I've heard that there are some pretty big differences in performance between other L4s and seL4, for example.


The 4% comes from an application-level benchmark that was popular back then. This is about ARM9/ARMv5 too, which is not suited for any multi-user OS, as address-space switching is very expensive compared to today's architectures (ARMv6+) due to the cache flushing needed on every address-space switch. This has nothing to do with L4 variants; it's an architectural issue.


ARM9's cache architecture is not suited for frequent address-space switches due to the requirement to flush caches on every switch. All multi-user OSes suffer there; it's just more accentuated on microkernels.


Yep! If I recall correctly we had to blow away the entire cache on every system call because the line invalidate was so slow.


> If it provided any additional security guarantees in trade for that 4%, I imagine it'd become a popular way of running Linux

It did. This was the precursor to hypervisors and the first example of virtualization with low overhead, which is now everywhere.


Can you use this setup to use Linux's device drivers for seL4? Given the paranoia level involved, I imagine you'd run a stripped-down Linux kernel per driver, with all device commands proxied through L4. Your Linux or whatever other userland would see virtual devices through L4 as well. Possible? Hopelessly inefficient? Or does L4's capability pipelining save the day?



The idea seems to be to use paravirtualized Linux as a “time-sharing component” of a system that otherwise provides real-time capabilities...


This might be interesting for my long-abandoned drone project...

Undoubtedly still want propeller controls + stabilization on dedicated hardware, but everything else.


I’ll ask a naive question. Could L4 in this context be used as a means to migrate Linux to a micro-kernel architecture?

What I mean by this is, run the Linux kernel on L4, then in Linux remove different components by calling out to a library (like virtio does) which actually puts a message onto the L4 bus and has that service handle the request. For example, running an FS as an L4 service.

Not sure if this is possible, but it seems like it should be.
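A rough sketch of the split being imagined, with the filesystem pulled out into its own service and only a thin stub left inside Linux. All names here (`FsService`, `Message`, the in-process "bus") are invented for illustration; a real system would use L4 IPC between separate address spaces rather than function calls.

```python
# Invented sketch: a Linux-side stub marshals each FS operation into a
# message and hands it to a filesystem server running as its own task.
from dataclasses import dataclass


@dataclass
class Message:
    op: str        # e.g. "read" or "write"
    path: str
    data: bytes = b""


class FsService:
    """Stands in for a filesystem server running as a separate L4 task."""
    def __init__(self):
        self._files = {}

    def handle(self, msg: Message) -> bytes:
        if msg.op == "write":
            self._files[msg.path] = msg.data
            return b"ok"
        if msg.op == "read":
            return self._files.get(msg.path, b"")
        return b"unknown op"


class LinuxFsStub:
    """What remains inside Linux: marshal the call and put it on the 'bus'."""
    def __init__(self, bus):
        self.bus = bus

    def write(self, path, data):
        return self.bus.handle(Message("write", path, data))

    def read(self, path):
        return self.bus.handle(Message("read", path))


stub = LinuxFsStub(FsService())
stub.write("/tmp/x", b"hi")
assert stub.read("/tmp/x") == b"hi"
```

The virtio comparison fits this shape: the stub knows only the message format, not the implementation behind it.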


Unfortunately, no.

L4, unlike QNX, is so minimal that you have to put another OS on top of it to do much. L4-Linux has been around almost since L4. There's not much point in it unless you have something else running on the same hardware which needs to be isolated. If all you have is one Linux instance, you haven't gained anything.

Putting some container system on top of L4 might be useful.


Might be worth noting that this link itself points to the university project. Most of the main developers moved on and founded a company that's providing microkernel technology commercially: https://www.kernkonzept.com

Disclaimer: not working for them, but used to work on the university project.


Anyone used this before? If it supports ARM, it could be a handy system.


Can I ask what you'll find it useful for? The technology is neat, I'm just struggling to think of something that would benefit from it.


Primarily for soft (or hard) real-time operations. Even kernel modules running in an RT-Linux have a difficult time responding consistently to an IRQ at sub-10 µs latency. Of course, the larger SoCs capable of running Linux can have hardware latencies due to shared data buses, etc., but still, having an RTOS layer “underneath” Linux would be a good way to ensure more consistent responses.


It does support ARM; it's one of the primary ways to use seL4.


Does that mean you could run L4Linux with seL4? Didn’t realize the L4 API is consistent/broad enough to do that. That’s cool! The linked “DROPS” RTOS seemed to be dead a long time back.


Not quite. It was rebranded and even is commercially supported these days: https://www.kernkonzept.com/l4re.html


There is no documentation that I can find regarding Linux running on seL4, outside of hardware virtualisation.


Okay, can someone explain why this is a thing? It is a genuine question.

I am not very familiar with OS designs and skimming through the landing page didn't help.

Is it essentially a Linux kernel that runs within an existing instance of Linux? Why is it important/useful?

Thanks.


L4 is a family of microkernels with a namesake kernel interface. μ-kernels (MINIX 3, L4/Fiasco, et al.) are extremely lean by design and, unlike monolithic kernels (Linux, Solaris, Darwin, et al.), leave device drivers, filesystems, and a lot of non-hardware-related stuff to userspace. This project is basically a port of Linux to an L4 hypervisor that can paravirtualize Linux.

In short, with this project you can run a userspace Linux along with your other userspace programs on an L4 kernel.


The idea was to have something you would call microservice architecture today, just for OS functions. An ethernet driver server application would talk to an IP server, which would talk to a TCP server that distributes messages to the applications. The kernel would just isolate all those applications and enable communication between them. If the ethernet driver daemon crashed you would just restart it.

The communication overhead remained a problem for microkernels, despite L4's achievements in that area. The advantages were somewhat overstated. The complexity merely moved from the kernel into the applications. A security bug in the IP driver daemon can still compromise the whole system, just like a driver in a monolithic kernel.
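The layering described above can be sketched like this, with plain function calls standing in for L4 IPC messages; the server names and the fake header sizes are invented for the example.

```python
# Invented sketch of OS functions as separate servers: an ethernet driver
# server hands payloads to an IP server, which hands them to a TCP server.


def tcp_server(segment: bytes) -> bytes:
    """Deliver the payload to the application."""
    return segment


def ip_server(packet: bytes) -> bytes:
    """Strip the (fake) 1-byte IP header, pass the rest up."""
    return tcp_server(packet[1:])


def ethernet_server(frame: bytes) -> bytes:
    """Strip the (fake) 2-byte ethernet header, pass the rest up."""
    return ip_server(frame[2:])


def supervise(server, request, retries=1):
    """If a server 'crashes', restart it and retry, as a supervisor task
    would, instead of bringing down the whole system."""
    for _ in range(retries + 1):
        try:
            return server(request)
        except Exception:
            continue  # restart the component and try again
    raise RuntimeError("server kept crashing")


# A frame with 2 bytes of "ethernet" header and 1 byte of "IP" header:
assert ethernet_server(b"EEIhello") == b"hello"
assert supervise(ethernet_server, b"EEIhello") == b"hello"
```

Every hop between servers in the real version is an IPC message plus address-space switch, which is exactly where the communication overhead mentioned above comes from.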


Microkernels and hypervisors are very closely related, in the view of some. Paravirtualization, as with L4Linux, is when the guest is aware of being hosted and, at one time (maybe still), could outperform guest OSes on fully virtualized hardware. I'm not sure if that's all still true since Intel and AMD implemented hardware virtualization extensions (VT-x, AMD-V).

If you want to see a neat demo of this stuff, check out the DROPS demo or TUD:OS: http://demo.tudos.org/.

Screenshots of a whole bunch of linuxes (linii?) running at once: http://demo.tudos.org/l4lx_screenshot.png

I think this is from 2003...


> linii?

Linuces.


Does anyone here have experience running this on desktop/laptop for general use? Does GPU acceleration in opengl/vulkan work? Intel integrated HD audio?

Could you speak to how well everything works?


This is at least a decade old... If it was going to become a popular way of running Linux, as some have suggested, what's stopped it so far?


Virtual machines running Linux have actually become quite popular in the past decade. ;)

It's not the research project's fault that a different mechanism is used in the public cloud.

As for microkernels, they are actually used, but not in mainstream server systems.


Might be a nice alternative to Docker.



