Aside from the security approach of the OS design, I believe the plan is to formally verify the Rust code once tools for verifying Rust are ready for production use.
Still, there are a number of projects, and they look to have a full GUI running. Impressive; perhaps it speaks to the productivity of Rust.
All in all, hopefully the project continues forward. It'd be great to try!
That's nice, but the long, long history of EROS and its successors makes GNU Hurd look like a fast-moving project. Get Rust onto seL4 for embedded systems and you'll have something. Get a router on that and you'll have a business.
There's still not a lot of good research in combining machine learning and theorem proving, and intuitively it seems feasible. It's clearly somewhere between beating Go and general AI, and probably closer to Go since it's basically just a tree search with heuristics.
If this happens, it would radically change the software industry. High level application development might not change that much since the problems are usually poorly specified. But it might dramatically reduce the cost of writing system software. We'd see innovation in that space we haven't seen since the 70s. Unix and its abstractions that dominate the whole market might look as antiquated as punch cards.
Additionally, professional mathematicians might become more like philosophers, guiding the procession of mathematics and deciding what is interesting to prove.
As an aside, http://www.neural-symbolic.org/ is something I watch very closely, and it sounds like you'd be interested in it too!
It's probably going to stay like that, with humans doing the part requiring real understanding, machines doing the brainless exploration they're good at, and libraries piling up for all the common stuff, like what we're already seeing a little with Coq and Isabelle/HOL.
Btw, your project just came up on Lobsters. I can send you an invite if you want to tell them about it. Right now, nobody is looking at it though which could just be due to time of day. Description sounds like you have broader plans than the Nizza-like scheme we discussed previously.
A likely step-by-step evolution might be:
1) Verification of non-algorithmic programs: everything is specified reasonably well.
2) Verification of mathematical proofs. This shouldn't be hard in principle, but it does require very good natural language understanding and the ability to prove partial results.
3) General proving as good as humans. We have yet to see any non-trivial results in this direction at all.
The big problem, IMO, is search control: keeping the computer from wandering off into parts of the search space that don't actually matter.
As it stands, both machine learning and automated theorem proving are 1) still based on algorithms developed in the '60s, and 2) their theory is not understood much better today than it was in the '60s. What has changed is years of brute-force search over heuristics, some of which proved helpful, and above all the fact that our hardware is just better; but the pace isn't keeping up.
Now, it's not all doom and gloom, and there's a lot of progress that can and will be made, but it's important to understand how hard the problems are, and how primitive our current capabilities are. Of all the subdisciplines in computer science, two have experienced research winters: formal methods and so-called AI. Why? Because both seemed to show a lot of promise, a lot of progress, but ultimately ended up failing to deliver the results people had imagined they could. So both have come back with more modest goals than before, and this time their benefits seem a bit more tangible, but if people keep turning them into science fiction, we are headed into another research winter.
I agree. Proving "arbitrary" mathematical conjectures is surely bound to fail, but all I'm saying is that it might be better than humans. This would free up a lot of mental resources for the humans to do more interesting things.
Well, there's certainly no theoretical limitation of doing it as well as humans (and maybe better, as a computer is better at remembering and recalling things), and there's a lot of interesting work on automated theorem proving. But the biggest breakthrough is still based on a SAT-solving algorithm from the '60s, and we still don't understand why it works well in practice. Like in "AI", there are algorithms that work surprisingly well for some tasks, but as we don't have a theory to understand them, we don't know how to make the next breakthrough (which is essential for getting significantly ahead), so it's very hard to put a time frame on these things.
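For readers who haven't met it: the '60s algorithm in question is DPLL (Davis-Putnam-Logemann-Loveland, 1962), which still forms the backbone of modern CDCL SAT solvers. A minimal sketch in Python, with clauses represented as sets of integer literals (positive = variable, negative = its negation):

```python
# Minimal DPLL sketch: unit propagation plus branching.
# A formula is a list of clauses; each clause is a set of integer literals.

def dpll(clauses):
    # Unit propagation: a one-literal clause forces that literal to be true.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        # Drop satisfied clauses; strip the falsified literal from the rest.
        clauses = [c - {-lit} for c in clauses if lit not in c]
        if any(len(c) == 0 for c in clauses):
            return False  # derived the empty clause: conflict
    if not clauses:
        return True  # every clause satisfied
    # Branch on an arbitrary literal: try it true, then false.
    lit = next(iter(clauses[0]))
    return dpll(clauses + [{lit}]) or dpll(clauses + [{-lit}])
```

Modern solvers add conflict-driven clause learning, watched literals, and restarts on top of this skeleton, and that combination is exactly the part we can't yet explain theoretically.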
Much of the perceived progress in both automated proofs and "AI" is due to new hardware allowing us to run very old algorithms on usefully-sized data, so what we're really experiencing is just finally observing the utility of very old discoveries. It's a little like finally being able to build sensors that allow us to observe physical phenomena that were theoretically assumed for a long time. But unfortunately, significant conceptual progress has hardly been made in decades (in both fields), and progress of the kind I described does not predict an imminent next step.
The first is proving theorems, which is essentially tree search and should actually be amenable to Monte-Carlo tree search. Based on my own experiences, it seems reasonable that this step could be automated within 10 years.
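As a sketch of what "tree search with heuristics" means here, below is a bare-bones UCT/Monte-Carlo tree search skeleton in Python. The "proof state" is a deliberately silly stand-in (count an integer down to exactly zero by subtracting 1 or 2; hitting 0 is a closed proof, overshooting is a dead end); real tactic application would replace `moves` and `rollout`:

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.wins = [], 0, 0.0

def moves(state):
    # Toy stand-in for the applicable tactics at a proof state.
    return [state - 1, state - 2] if state > 0 else []

def rollout(state):
    # Random playout to a terminal state; reward 1 iff the "proof" closes.
    while state > 0:
        state = random.choice(moves(state))
    return 1.0 if state == 0 else 0.0

def uct_search(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend via the UCB1 rule until reaching a leaf.
        while node.children:
            node = max(node.children,
                       key=lambda c: c.wins / (c.visits + 1e-9)
                       + math.sqrt(2 * math.log(node.visits + 1) / (c.visits + 1e-9)))
        # Expansion: add all successor states as children.
        for s in moves(node.state):
            node.children.append(Node(s, node))
        leaf = random.choice(node.children) if node.children else node
        # Simulation and backpropagation.
        reward = rollout(leaf.state)
        while leaf:
            leaf.visits += 1
            leaf.wins += reward
            leaf = leaf.parent
    return max(root.children, key=lambda c: c.visits).state
```

From state 3, subtracting 1 (reaching 2) succeeds in 75% of random playouts while subtracting 2 (reaching 1) succeeds in only 50%, so the search concentrates its visits on the better move, which is exactly the behavior you'd want from a tactic selector.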
The second is theory building. Which sequence of lemmas will bring you closer to solving your problem? Which background theories and structures are useful for modeling your problem? This is entirely open-ended, and if you could solve this, you could equally well automate all of programming... All results in the literature for tackling this problem boil down to random search, which probably isn't what humans are doing.
The main problem, and the reason why automated theorem proving has so far failed at formalizing interesting mathematics, is that these two problems are not separate. There are reasoning principles in higher-order logic which require you to invent new lemmas on the fly. For instance, in order to perform induction you typically need to generalize the goal statement to obtain a more useful inductive hypothesis. This can be arbitrarily complicated.
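To make the generalization point concrete, here is the standard toy example, written in Lean 4 (the names `sum` and `sumAcc` are mine). The statement you actually want, `sumAcc l 0 = sum l`, is not directly provable by induction on `l`, because the inductive hypothesis fixes the accumulator at 0; you first have to invent the generalized lemma quantifying over an arbitrary accumulator:

```
def sum : List Nat → Nat
  | [] => 0
  | x :: xs => x + sum xs

def sumAcc : List Nat → Nat → Nat
  | [], a => a
  | x :: xs, a => sumAcc xs (x + a)

-- The invented, generalized lemma: quantify over the accumulator.
theorem sumAcc_eq (l : List Nat) : ∀ a, sumAcc l a = sum l + a := by
  induction l with
  | nil => intro a; simp only [sumAcc, sum]; omega
  | cons x xs ih => intro a; simp only [sumAcc, sum, ih]; omega

-- The statement you originally wanted follows as the special case a = 0.
theorem sumAcc_correct (l : List Nat) : sumAcc l 0 = sum l := by
  simpa using sumAcc_eq l 0
```

A human spots the right generalization instantly; an automated prover has to conjure it from an unbounded space of candidate lemmas, which is the "theory building" problem in miniature.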
I'm sure that this is another problem which will eventually be solved (same as AGI?), but if you can solve it at even a fraction of the efficiency of a normal human then you have just made programming obsolete. It is a simple matter of encoding some refinement calculus in Coq and then synthesising programs by giving examples and constraints...
One last remark, before somebody accuses me of being excessively negative: There are some subproblems for which we have solutions that work amazingly well. For instance, if you are looking at purely equational reasoning, then a computer with a good implementation of paramodulation or unfailing Knuth-Bendix completion is better than any human. Boolean propositional logic is another example where computers are much better. Finally, there are classes of sums/integrals/recurrences/differential equations where again, computers are better.
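To illustrate the propositional case: validity of a Boolean formula is decidable outright, so even the crudest complete procedure, a truth table, settles questions no human wants to check by hand. A tiny checker in Python (all names are mine):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b.
    return (not a) or b

def is_tautology(formula, variables):
    # Enumerate every assignment of the variables; the formula is a
    # tautology iff it holds under all of them.
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p, a classical tautology:
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
```

Real tools replace the exponential enumeration with DPLL-style search, but the point stands: on this fragment, the machine simply wins.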
There are more problems like this, but I'll stop here. The point is that we only know how to solve specific problems. Nobody has ever made any progress in developing a program which automatically explores a theory and finds solutions to new problems. Saying that we will solve this problem within 10 years is the same as saying that we will solve teleportation in 10 years. There is zero real progress in the problem, so what data are you extrapolating from?
If you prefer a more "established" way:
Common systems for computer-checked proofs are Isabelle/HOL, Coq and Agda (the latter is commonly called "only" a programming language, but it has also been used for this purpose).
For Isabelle/HOL (https://isabelle.in.tum.de/) I only know the official documentation as a resource: https://isabelle.in.tum.de/documentation.html
For Coq (https://coq.inria.fr/) a good book is "Coq'Art: The Calculus of Inductive Constructions" by Yves Bertot and Pierre Castéran. Unfortunately it is not (legally) freely available.
Finally for Agda (http://wiki.portal.chalmers.se/agda/pmwiki.php) I recommend reading Ulf Norell's PhD thesis "Towards a practical programming language based on dependent type theory" and the more introductory paper "Dependently Typed Programming in Agda".
Both texts are available at http://www.cse.chalmers.se/~ulfn/
Software Foundations (SF) teaches the foundations of software with the simply typed lambda calculus, but also teaches how to use Coq for software verification. I found it more approachable than Coq'Art, which is also very good, but I'd suggest starting with SF first and then reading Coq'Art.
Most work in the field uses Coq or HOL; the latter was used for seL4. I don't think it will help you understand it much, given the complexity of that work. You might have better luck looking at Myreen's verifications of LISP 1.5 or CakeML in Isabelle/HOL, or even their verification of HOL itself.
I don't see that specifying system software is any easier: getting a specification into machine-readable form is surely more or less equivalent to writing the program, and the specification is unlikely to encode every single desire correctly, i.e. the spec will have bugs?
Or, in other words - you might not be proving what you think you're proving. You're proving what the code says you're proving, which is an entirely different thing.
It's baffling that inadequacy in system software is tolerated. But it is. That being said, don't assume prices will come down.
This is almost identical to saying computer programming will be automated within 10 years. It's much, much closer to general AI than it is to Go.