SEL4: Mathematically Verified Software Kernels (2014) [pdf] (sel4.systems)
96 points by setra on Dec 27, 2016 | 34 comments

If you're reading this, there's a decent chance you'll be interested in Robigalia, which is an OS that couples SEL4 with a Rust userland:


Aside from the security approach of the OS design, I believe the plan is to formally verify the Rust code once tools for verifying Rust are ready for production use.


Good timing, I'm putting out the one-year retrospective on Robigalia in just a few minutes :)

Very promising-looking project. I was looking into it last week and read through some of the code, but it seems somewhat fragmented. They're undertaking a lot but don't appear to have a large active community.

Still, there are a number of projects, and they look to have a full GUI running. Impressive; perhaps it speaks to the productivity of Rust.

All in all, hopefully the project continues forward. It'd be great to try!

"Create a highly reliable persistent capability OS, continuing the heritage of EROS and Coyotos".

That's nice, but the long, long history of EROS and its successors makes GNU Hurd look like a fast-moving project. Get Rust onto SEL4 for embedded systems and you'll have something. Get a router on that and you'll have a business.

The website's front page doesn't tell me too much. Is it *nix compatible? Something entirely different?

Entirely different, but with a *nix compatibility environment (not unlike Windows' lxss or Solaris' brands/zones).

I would bet pretty good money that automated theorem proving is better than the best humans within 10 years.

There's still not a lot of good research on combining machine learning and theorem proving, yet intuitively it seems feasible. It's clearly somewhere between beating Go and general AI, and probably closer to Go since it's basically just a tree search with heuristics.

If this happens, it would radically change the software industry. High level application development might not change that much since the problems are usually poorly specified. But it might dramatically reduce the cost of writing system software. We'd see innovation in that space we haven't seen since the 70s. Unix and its abstractions that dominate the whole market might look as antiquated as punch cards.

Additionally, professional mathematicians might become more like philosophers, guiding the procession of mathematics and deciding what is interesting to prove.

I'm not so certain. Fully automated theorem proving in powerful logics like the one Coq uses is extraordinarily hard and expensive (between "P=NP" and "halting problem" levels of hard and expensive). "Tree search with heuristics" is a very naive way to put it. Tools like Lean will help close that gap, with closer integration with ATPs. But Isabelle has already had great integration with ATPs for a long time now, and while it certainly helps, I don't think we're anywhere near the automated mathematician yet. Creating specific systems which are "easy" to reason about with custom tactics is the state of the art, and will probably remain that way for some time.

As an aside, http://www.neural-symbolic.org/ is something I watch very closely, and it sounds like you'd be interested in it too!

The reason is that this is an activity centered on humans understanding a problem and creatively modeling it in new ways until a solution can be expressed in their logics. The latter part is where much of the automation is helping us. As far as Isabelle and automated provers go, I discovered another one recently on that:


It's probably going to stay like that, with humans doing the part that requires real understanding, machines doing the brainless exploration they're good at, and libraries piling up for all the common stuff, like we're already seeing a little with Coq and Isabelle/HOL.

Btw, your project just came up on Lobsters. I can send you an invite if you want to tell them about it. Right now nobody is looking at it, though, which could just be due to the time of day. The description sounds like you have broader plans than the Nizza-like scheme we discussed previously.


I think 10 years probably won't cut it for a computer proving big new theorems in math.

A likely step-by-step evolution may be:

1) Verification of non-algorithmic programs: everything is specified reasonably well.

2) Verification of mathematical proofs. This shouldn't be hard in principle, but it does require very good natural language understanding and the ability to prove partial results.

3) General proving as good as humans. We have yet to see any non-trivial results in this direction at all.

For verified software, we probably don't need many big new theorems. We probably just need to be able to apply a large library of well-known strategies to a mind-numbingly long list of routine proof obligations. It's more about exhaustive case analysis, which computers are good at, than about creative leaps of intuition.
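
To make "routine proof obligations" concrete, here is a minimal Lean 4 sketch (not from the thread, and assuming a recent toolchain where the omega tactic is available). Side conditions of this shape show up by the thousands in systems verification and fall to a decision procedure with no creativity required:

    -- Typical index/bounds obligations, discharged automatically by `omega`,
    -- Lean's decision procedure for linear arithmetic over Nat and Int.
    example (i n : Nat) (h : i < n) : i + 1 ≤ n := by omega

    example (i len cap : Nat) (h1 : i < len) (h2 : len ≤ cap) : i < cap := by omega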

The big problem, IMO, is search control: keeping the computer from wandering off into parts of the search space that don't actually matter.

Why is 2) relevant? Natural language understanding is way harder and a totally unrelated problem.

2) is relevant because there already exists a vast amount of mathematical proofs written in natural language. It's about supporting legacy code.

The halting problem is "just a search tree with heuristics", but we don't need to go that far. We know that in general, non-exponential proofs for arbitrary tautologies exist iff coNP=NP. Now, it is true that we're not interested in the general case but in particular problems, but we already have some evidence that, at least when it comes to software verification, real-world software is bumping against the worst case, and no amount of super-intelligence in the world is going to help reduce those exponentials. Also, there's the little problem that our statistical learning algorithms, which are what passes for "AI" this time around, while able to handle lots of data, are still far from the capabilities of the average insect. Winning at Go is, theoretically, such an easy problem that I wouldn't extrapolate anything from it; the algorithm doesn't need to be really good at Go against arbitrary opponents, it just needs to beat humans, which is a classical task for machine learning. Playing "against" math is not nearly so easy.

As it stands, machine learning and automated theorem proving are both 1) still based on algorithms developed in the '60s, and 2) not understood much better in theory today than they were in the '60s. What has changed is years of brute-force search over heuristics, some of which proved helpful, but mostly the fact that our hardware is just better, and the pace isn't keeping up.

Now, it's not all doom and gloom, and there's a lot of progress that can and will be made, but it's important to understand how hard the problems are, and how primitive our current capabilities are. Of all the subdisciplines in computer science, two have experienced research winters: formal methods and so-called AI. Why? Because both seemed to show a lot of promise, a lot of progress, but ultimately ended up failing to deliver the results people had imagined they could. So both have come back with more modest goals than before, and this time their benefits seem a bit more tangible, but if people keep turning them into science fiction, we are headed into another research winter.

> Winning at Go is, theoretically, such an easy problem that I wouldn't extrapolate anything from it; the algorithm doesn't need to be really good at Go against arbitrary opponents; it just needs to beat other humans, which is a classical task for machine learning. Playing "against" math is not nearly so easy.

I agree. Proving "arbitrary" mathematical conjectures is surely bound to fail but all I'm saying is it might be better than humans. This would free up a lot of mental resources for the humans to do more interesting things.

> all I'm saying is it might be better than humans.

Well, there's certainly no theoretical limitation on doing it as well as humans (and maybe better, as a computer is better at remembering and recalling things), and there's a lot of interesting work on automated theorem proving. But the biggest breakthrough is still based on a SAT-solving algorithm from the '60s, and we still don't understand why it works well in practice. Like in "AI", there are algorithms that work surprisingly well for some tasks, but as we don't have a theory to understand them, we don't know how to make the next breakthrough (which is essential for getting significantly ahead), so it's very hard to put a time frame on these things.

Much of the perceived progress in both automated proofs and "AI" is due to new hardware allowing us to run very old algorithms on usefully-sized data, so what we're really experiencing is just finally observing the utility of very old discoveries. It's a little like finally being able to build sensors that allow us to observe physical phenomena that were theoretically assumed for a long time. But unfortunately, significant conceptual progress has hardly been made in decades (in both fields), and progress of the kind I described does not predict an imminent next step.

There are two problems in interactive theorem proving.

The first is proving theorems, which is essentially tree search and should actually be amenable to Monte-Carlo tree search. Based on my own experiences, it seems reasonable that this step could be automated within 10 years.

The second is theory building. Which procession of lemmas will bring you closer to solving your problem? Which background theories and structures are useful for modeling your problem? This is entirely open-ended, and if you can solve this, you could equally well automate all of programming... All results in the literature for tackling this problem boil down to random search, which probably isn't what humans are doing.

The main problem, and the reason why automated theorem proving has so far failed at formalizing interesting mathematics, is that these two problems are not separate. There are reasoning principles in higher-order logic which require you to invent new lemmas on the fly. For instance, in order to perform induction you typically need to generalize the goal statement to obtain a more useful inductive hypothesis. This can be arbitrarily complicated.
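
As a concrete illustration of that "generalize before induction" step, here is a minimal Lean 4 sketch (not from the thread; revAux and the lemma names are made up). Proving that a tail-recursive reverse matches List.reverse gets stuck if you induct on the original statement, because the accumulator changes at every recursive call; generalizing over the accumulator first yields an inductive hypothesis that is strong enough to apply:

    -- Tail-recursive reverse with an accumulator.
    def revAux {α : Type} : List α → List α → List α
      | [], acc => acc
      | x :: xs, acc => revAux xs (x :: acc)

    -- Generalized lemma: quantifying over the accumulator makes the inductive
    -- hypothesis usable at the changed accumulator (x :: acc).
    theorem revAux_eq_append {α : Type} (xs acc : List α) :
        revAux xs acc = xs.reverse ++ acc := by
      induction xs generalizing acc with
      | nil => simp [revAux]
      | cons x xs ih => simp [revAux, ih]

    -- The original, un-generalized goal is just the special case acc = [].
    theorem revAux_nil {α : Type} (xs : List α) : revAux xs [] = xs.reverse := by
      simpa using revAux_eq_append xs []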

I'm sure that this is another problem which will eventually be solved (same as AGI?), but if you can solve it at even a fraction of the efficiency of a normal human then you have just made programming obsolete. It is a simple matter of encoding some refinement calculus in Coq and then synthesising programs by giving examples and constraints...

One last remark, before somebody accuses me of being excessively negative: There are some subproblems for which we have solutions that work amazingly well. For instance, if you are looking at purely equational reasoning, then a computer with a good implementation of paramodulation or unfailing Knuth-Bendix completion is better than any human. Boolean propositional logic is another example where computers are much better. Finally, there are classes of sums/integrals/recurrences/differential equations where again, computers are better.
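
For the propositional case, a small Lean 4 sketch of why the machine wins (not from the thread): a Boolean tautology can be settled by brute-force evaluation over every assignment, which is exactly the kind of exhaustive, mechanical work described above.

    -- De Morgan's law as a Bool-valued tautology, checked over all four
    -- assignments of p and q; `decide` simply evaluates the closed expression.
    example :
        ([true, false].all fun p =>
          [true, false].all fun q =>
            (!(p && q)) == (!p || !q)) = true := by
      decide

Once quantifiers over infinite domains or genuinely new lemmas enter the picture, this brute-force story ends, which is the gap the rest of this comment is describing.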

There are more problems like this, but I'll stop here. The point is that we only know how to solve specific problems. Nobody has ever made any progress in developing a program which automatically explores a theory and finds solutions to new problems. Saying that we will solve this problem within 10 years is the same as saying that we will solve teleportation in 10 years. There is zero real progress in the problem, so what data are you extrapolating from?

Do you have any good resources for someone wanting to understand that a bit more? I'm super interested, but this all seems a bit hard to get my head around.

If you want to have a look at the state of the art in axiomatic systems that can be (and have been) used for computer-checked proofs, it surely is not a bad idea to read the Homotopy Type Theory (HoTT) book, which is freely available at

> https://homotopytypetheory.org/book/

If you prefer a more "established" way:

Common systems for computer-checked proofs are Isabelle/HOL, Coq and Agda (the latter is commonly "only" called a programming language, but it has also been used for this purpose).

For Isabelle/HOL (https://isabelle.in.tum.de/) I only know the official documentation as a resource: https://isabelle.in.tum.de/documentation.html

For Coq (https://coq.inria.fr/) a good book is "Coq'Art: The Calculus of Inductive Constructions" by Yves Bertot and Pierre Castéran. Unfortunately, it is not (legally) freely available.

Finally for Agda (http://wiki.portal.chalmers.se/agda/pmwiki.php) I recommend reading Ulf Norell's PhD thesis "Towards a practical programming language based on dependent type theory" and the more introductory paper "Dependently Typed Programming in Agda". Both texts are available at http://www.cse.chalmers.se/~ulfn/

The best resource for getting started with Isabelle/HOL is http://www.concrete-semantics.org/. Freely available.

A good resource to learn Coq is "Software Foundations": https://www.cis.upenn.edu/~bcpierce/sf/current/index.html

It teaches the foundations of software with the simply typed lambda calculus, but also teaches how to use Coq and apply it to software verification. I found it more approachable than Coq'Art, which is also very good, but I'd suggest starting with SF first and then reading Coq'Art.

These two are the common resources recommended for this sort of thing:



Most work in the field is using Coq or HOL. The second one applies to seL4. I don't think it will help you understand it much given the complexity of that work. You might have better luck looking at Myreen's verifications of LISP 1.5 or CakeML in Isabelle/HOL. Or even their verification of HOL itself.

I don't know that much about machine learning and have only briefly surveyed the landscape of research on machine learning for theorem proving, but if you want to learn about the current state of the art in verified programming it's probably Adam Chlipala's book: http://adam.chlipala.net/cpdt/

> High level application development might not change that much since the problems are usually poorly specified. But it might dramatically reduce the cost of writing system software.

I don't see that specifying system software is any easier - getting a specification into machine format is surely more-or-less equivalent to writing the program, and the specification is likely not to encode every single desire correctly - i.e. the spec will have bugs?

Or, in other words - you might not be proving what you think you're proving. You're proving what the formal spec actually says, which can be an entirely different thing.
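
A minimal Lean 4 sketch of that point (not from the thread; isSorted and badSort are made-up names): if the spec only demands that the output be sorted, the function that throws its input away satisfies it, and the proof goes through even though the spec fails to encode the actual intent.

    -- A deliberately under-specified "sorted" predicate.
    def isSorted : List Nat → Prop
      | [] => True
      | [_] => True
      | x :: y :: rest => x ≤ y ∧ isSorted (y :: rest)

    -- The "sort" that discards its input.
    def badSort (_xs : List Nat) : List Nat := []

    -- The proof succeeds: badSort meets the buggy spec.
    theorem badSort_meets_spec (xs : List Nat) : isSorted (badSort xs) := by
      simp [badSort, isSorted]

The fix is to strengthen the spec (e.g. also require the output to be a permutation of the input), which badSort cannot satisfy; but nothing in the prover can tell you that the first spec wasn't what you meant.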

Yeah, programming doesn't get magically easier but it could well increase programmer productivity by 10-100x.

This is not that. These are kernels for the paranoid.

It's baffling that inadequacy in systems software is tolerated, but it is. That said, don't assume prices will come down.

> I would bet pretty good money that automated theorem proving is better than the best humans within 10 years.

This is almost identical to saying computer programming will be automated within 10 years. It's much, much closer to general AI than it is to Go.

I don't think so; writing the spec is still the "creative" part.

Proof writing definitely requires lots of creativity and design choices in much the same way programming does. It's very challenging to automate.

I am sure there is a rich mathematician out there who would match any bet size.

I think that happened a while ago (2005) with the Four Color Theorem proof.


The Four Color Theorem proof included calculations done on a computer, and later those were formalized more fully. However, the computations were highly problem-dependent, trivial though plentiful, and certainly not generalizable to proving anything else.

I didn't read through the linked PDF, but in relation to the work on seL4 I found [1], which builds on the work that has been done for seL4 and improves the tools further. As a first example they implement verified file systems; they managed to implement two: ext2fs and BilbyFs. These file systems can be deployed in Linux. Their aim is to reduce the cost of verified systems software.

[1]: https://ts.data61.csiro.au/projects/TS/cogent.pml

The $200-400 per LOC really puts it out of reach for any non-trivial project: at $300/LOC, even a modest 10,000-line component comes to about $3M in additional cost just for the proofs.

