The Linux CVE database was recently analyzed, and the conclusion was that 96% of the vulnerabilities would be mitigated to at least non-critical status, if not eliminated outright, if Linux had a microkernel architecture. Now throw in capability-oriented security, and most of the remaining non-critical exploits become non-exploits. Throw in formal verification of the kernel, and you've basically eliminated everything except hardware exploits.
Assuming mass adoption, what seL4 effectively does is turn million-dollar black-market zero-days into billion-dollar zero-days... possibly more. A million-dollar exploit probably isn't even a line item in the FBI budget. A billion-dollar exploit would be a "needs congressional approval" line item in the US national intelligence budget. Unless your job revolves around protecting the nuclear arsenal of a major nation-state, you won't ever have to worry about OS vulnerabilities, and you can rely on the security characteristics of the OS to drastically narrow your attack surface.
To me, that is a pretty big deal. It is slowly becoming a moral imperative that we start adopting this sort of security as a core standard.
RISC-V fits the embedded space, and IoT usually doesn't need much more software on top than a kernel and a dumb control service. If an open-source control service (effectively just driving some I/O pins according to a config) could be verified, a huge chunk of the IoT threat surface would be wiped out.
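For a sense of scale, here's a minimal sketch in C of what such a config-driven control service might look like. The gpio_read/gpio_write helpers, the pin_rule config format, and the stub bodies are all hypothetical stand-ins, not any real board's API; the point is just how small the verification target could be:

    /* Sketch of a "dumb" config-driven I/O control service.
     * Everything hardware-facing here is a hypothetical stub. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct pin_rule {
        uint8_t in_pin;   /* pin to sample     */
        uint8_t out_pin;  /* pin to drive      */
        bool    invert;   /* drive the inverse */
    };

    /* Stubs standing in for memory-mapped GPIO helpers. */
    static bool gpio_read(uint8_t pin)              { (void)pin; return false; }
    static void gpio_write(uint8_t pin, bool level) { (void)pin; (void)level; }

    /* The "config": which inputs drive which outputs. */
    static const struct pin_rule config[] = {
        { .in_pin = 2, .out_pin = 7, .invert = false },
        { .in_pin = 3, .out_pin = 8, .invert = true  },
    };

    int main(void)
    {
        for (;;) {  /* the entire service is this loop */
            for (size_t i = 0; i < sizeof config / sizeof config[0]; i++) {
                bool level = gpio_read(config[i].in_pin);
                gpio_write(config[i].out_pin,
                           config[i].invert ? !level : level);
            }
        }
    }

A few dozen lines with no heap, no recursion, and one loop is about as friendly a target as formal verification ever gets.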
So much for monolith performance, when one needs to pile layers and layers of security on top.
If the CEO's secretary gets convinced to wire $1M to an overseas account because somebody made them believe it's the boss requesting it, how would a verified microkernel have helped?
> The Linux CVE database was recently analyzed and it was determined that 96% of the vulnerabilities would be mitigated to at least non-critical status, if not completely eliminated, if linux were a microkernel architecture.
In a fair comparison, you'd also need to count the CVEs a microkernel would suffer that a monolithic kernel avoids.
Oh, but it's verified, you say? Then the fair comparison is a verified monolithic kernel vs. a verified microkernel.
Apples and oranges.
You can say that a comparison is unfair, but what matters is the end result of "which system ends up with the fewest vulnerabilities if/when it becomes widely used", and a formally verified microkernel seems like a strong bet on that front.
If Linux were a microkernel architecture, or if Linux were based on seL4?
If Linux were based on the OSF MK microkernel (I think that was the name; it was Mach-based), every system operation would be roughly an order of magnitude slower.
As for formal methods, I'm singing in the choir.
I've got the performance paper from the early '90s somewhere; it's hilarious. One benchmark was Dhrystone, which makes no system calls at all: that was the one case where they didn't manage to physically slow down the processor. :-)
Cache locality is one hell of a drug. With seL4, the entire kernel consumes ~160KiB, which easily fits in most CPU caches.
In terms of creating a Linux replacement, drivers are the biggest hurdle. There are some attempts to programmatically port the NetBSD suite of drivers, which seems to be the most promising route. Apart from that, drivers will likely need to be ported manually.
The GNU userland is also unlikely to be ported en masse due to the capability-oriented security model, although some projects would definitely be easier than others. Something like systemd would be a monstrous task.
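To make the mismatch concrete, here's a rough sketch contrasting the two models. The capability-side type and call are hypothetical illustrations, not seL4's actual API:

    #include <stdio.h>

    /* POSIX style: authority is ambient. Any process can name any
     * path, and the kernel decides at open() time whether to allow it.
     * Most GNU userland code is written against this assumption. */
    void posix_style(void)
    {
        FILE *f = fopen("/etc/passwd", "r");  /* names a global resource */
        if (f)
            fclose(f);
    }

    /* Capability style: a program can only use handles it was
     * explicitly granted; there is no global namespace to ask. */
    typedef struct { unsigned slot; } cap_t;  /* hypothetical handle */

    static long cap_read(cap_t file, void *buf, unsigned long len)
    {
        (void)file; (void)buf; (void)len;     /* hypothetical stub */
        return -1;
    }

    long cap_style(cap_t granted_file, char *buf, unsigned long len)
    {
        /* No path lookup is possible: if we weren't handed the
         * capability, the file effectively doesn't exist for us. */
        return cap_read(granted_file, buf, len);
    }

Porting means finding every place a program assumes it can reach out and name a global resource, then rethreading an explicitly granted handle to that spot instead; that's why something like systemd, which touches everything, would be so painful.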
I personally think that, apart from drivers, which can use traditional microkernel approaches, it's probably time to start implementing some of the better OS ideas that have come out in the last 30 years, as opposed to just trying to replicate a POSIX environment. For example, Nushell has a ton of promise but isn't particularly POSIX-y.
IIRC there was once an L4-based project that integrated with Linux (L4Linux, I believe).
The problem with that is that you don't actually get many of the benefits of the underlying microkernel. Linux remains the same old monolith, just better isolated from the resources not exposed to that particular (para)virtualized kernel instance. So you end up isolating more-or-less complete systems from each other, rather than getting the usual benefits of a capability-based microkernel architecture inside a single system.
Now, that's not nothing, of course, but it's not usually what people are after when they're hoping for a usable, modern microkernel system.
Those two examples are the only ones I could come up with, but I suspect that's because I don't know very much about hardware design. Are people working on this or am I just off-base here?
Also, task prioritization can be a problem: in an interrupt-less, cooperatively scheduled system, the required rescheduling rate is set by the task with the shortest maximum response time. You must guarantee that the scheduler can switch to that task in time to meet its deadline. This can increase the scheduling frequency for everything, requiring all components of the system to call the scheduler more often. So you'd have pretty tight coupling between everything.
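Rough arithmetic behind that coupling, with made-up numbers:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative figures only. */
        double shortest_deadline_us = 100.0; /* tightest deadline in the system */
        double switch_cost_us       = 5.0;   /* cooperative context-switch cost */

        /* Every component, however unrelated to that task, must reach
         * a yield point at least this often or the deadline can slip: */
        double max_run_between_yields_us =
            shortest_deadline_us - switch_cost_us;

        printf("every task must yield at least every %.0f us\n",
               max_run_between_yields_us);
        return 0;
    }

One 100 µs deadline anywhere in the system forces yield points into every hot loop everywhere else.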
And my hope was that simplifying the hardware would let us fit more cores on the same chip. So your first point means per-core performance is worse, but maybe for some workloads the processor as a whole performs better. But yeah, not sure if that's realistic.
Surely something like branch-on-random could be made to work with such code. It's effectively an interrupt point, with the added benefit that you can place it exactly where you want it.
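Presumably it would look something like this compiler-inserted check (names hypothetical, sketched in C rather than as emitted instructions):

    #include <stdbool.h>

    /* Set elsewhere, e.g. by a timer or another core. */
    static volatile bool reschedule_requested;

    static void yield_to_scheduler(void)
    {
        /* Hand control to the scheduler (stub for illustration). */
    }

    static inline void yield_point(void)
    {
        /* Almost always falls through, so the branch predictor makes
         * the fast path close to free. */
        if (reschedule_requested)
            yield_to_scheduler();
    }

    void long_running_loop(int *data, int n)
    {
        for (int i = 0; i < n; i++) {
            data[i] *= 2;
            yield_point();  /* placed exactly where preemption is safe */
        }
    }

Unlike an asynchronous interrupt, the check only fires at points the compiler chose, so a task is never suspended midway through a critical sequence.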
That's what Microsoft was attempting with Singularity. Because binaries were IL, they could be reasonably verified for isolation guarantees during JIT compilation.
You'd need some sort of proof-carrying code to allow verification of these properties in binaries. Also, features like MMUs and interrupts are needed anyway to implement virtual memory, or user tasks with adjustable priorities.
I don't think so; I think moonchild's account of it is accurate. At least, it was accurate in 2012. From a 2012 blog post:
> In CompCert, this algorithm is not proven correct. Instead, after generating the graph, the compiler calls an OCaml implementation of IRC and then checks that the coloring is valid. If the checker confirms that the coloring is valid, the compiler uses it and proceeds along. If it does not confirm that it's valid, compilation is aborted. This checker is proven correct, and because it is simpler, its proof of correctness is easier.
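The pattern in miniature (a sketch, not CompCert's actual code): checking that a candidate coloring is valid takes a few lines, while producing one takes a whole algorithm, so you verify the cheap checker and leave the solver untrusted:

    #include <stdbool.h>

    /* An interference edge: variables a and b are live at the same
     * time and must not share a register. */
    struct edge { int a, b; };

    /* Returns true iff no two interfering variables were assigned the
     * same register. reg_of_var is the untrusted allocator's (or
     * solver's) candidate answer. */
    bool coloring_is_valid(const struct edge *edges, int n_edges,
                           const int *reg_of_var)
    {
        for (int i = 0; i < n_edges; i++)
            if (reg_of_var[edges[i].a] == reg_of_var[edges[i].b])
                return false;  /* reject: abort compilation */
        return true;
    }

Proving this loop correct is vastly easier than proving the allocator itself correct, which is the whole trick.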
Still correct ;)
What I believe GP (fuklief) meant was: the SMT solver might still be incorrect, like the register allocator itself in CompCert. But in CompCert, the result is checked by a formally verified checker. I don't know whether the same is true for the result of the SMT solver.
Glancing at the abstract, it seems like Dam, Guanciale and Nemati proved correctness theorems about a tiny hypervisor in ARMv7 assembly, using SMT solvers to automate some of the proof. The C compiler is not involved.
Whereas seL4 is written in C, and the correctness theorems are proved using the C code and the specification. There is a compiler that turns the C code into RISC-V machine code. Then the new tool chain proves the equivalence of the C program and the RISC-V binary emitted by the compiler, using SMT solvers to automate some of that proof.
I think there is another step not covered in this paper that connects back to the C code. Maybe they saved that for another publication?
Was that really a "risk"? I think it's a given.
"Beware of bugs in the above code; I have only proved it correct, not tried it." - Knuth
In light of that, I'm not convinced by the current increment either.
The gcc git repository receives hundreds of contributions every week. Despite the heroic efforts of the open-source communities, it's still very easy to submit buggy or malicious patches (just think of the University of Minnesota scandal). One of those could easily be used to target seL4, especially since it's hard to target seL4 any other way, given all its other security guarantees. A tool that double-checks the compiler will buy the stakeholders significant peace of mind.