
What isn't yet clear to me about these kinds of proofs: how can you verify that this proof is correct? How can you know there isn't a wrong assumption or wrong bit of 'code' in those 5500 lines? Would it have been impossible for Voevodsky to make his original mistake in such a formalization? Obviously the point of his entire homotopy type theory project is that he thinks he couldn't have. But why are mistakes (near) impossible here?



This is a great question that permeates all verification work. In the lingo, we call the set of assumptions a proof makes the "trusted computing base" (see https://en.wikipedia.org/wiki/Trusted_computing_base ).

I listed some of the assumptions we made in another comment https://news.ycombinator.com/item?id=10018985 .

The question you ask was long believed to be the death knell of formal verification. "Social Processes and Proofs of Theorems and Programs" https://www.cs.umd.edu/~gasarch/BLOGPAPERS/social.pdf persuasively argued that any proof of a complex system is necessarily at least as complex as the system itself, so there would be no reason to trust the proof any more than the original code.

The breakthrough in verification came when we started using machine-checked proofs. In this approach, you write a short, simple, trusted proof checker, and then verify complex systems using complex proofs that are checked by that simple checker. The only way a checked proof can be wrong is if the proof checker itself is wrong. Because the checker is simple and general (i.e., it can check all kinds of proofs, not just proofs about a particular program), it is far easier to trust.
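To make the "small trusted checker, arbitrarily complex proofs" idea concrete, here is a toy checker for implicational propositional logic (a hypothetical Python sketch; Coq's actual kernel is far richer, but the same shape). The checker knows only two axiom schemas and modus ponens, yet it can validate proofs of any length; finding the proof is hard, but checking it is mechanical:

```python
# Toy trusted checker for implicational propositional logic.
# Formulas: atoms are strings; implications are ('->', p, q).
# A proof is a list of steps, each one of:
#   ('K', f)     -- claim f instantiates axiom K: a -> (b -> a)
#   ('S', f)     -- claim f instantiates axiom S:
#                   (a -> (b -> c)) -> ((a -> b) -> (a -> c))
#   ('mp', i, j) -- modus ponens: step i proved (p -> q), step j proved p

def imp(p, q):
    return ('->', p, q)

def is_imp(f):
    return isinstance(f, tuple) and len(f) == 3 and f[0] == '->'

def is_K(f):
    return is_imp(f) and is_imp(f[2]) and f[1] == f[2][2]

def is_S(f):
    if not (is_imp(f) and is_imp(f[1]) and is_imp(f[1][2])):
        return False
    a, b, c = f[1][1], f[1][2][1], f[1][2][2]
    return f[2] == imp(imp(a, b), imp(a, c))

def check(proof):
    """Return the list of proved formulas; raise if any step is invalid."""
    proved = []
    for step in proof:
        if step[0] == 'K':
            assert is_K(step[1]), step
            proved.append(step[1])
        elif step[0] == 'S':
            assert is_S(step[1]), step
            proved.append(step[1])
        elif step[0] == 'mp':
            _, i, j = step
            major = proved[i]
            assert is_imp(major) and major[1] == proved[j], step
            proved.append(major[2])
        else:
            raise ValueError(step)
    return proved

# A five-step proof of a -> a, found by hand; the checker only verifies it.
a, aa = 'a', imp('a', 'a')
proof = [
    ('S', imp(imp(a, imp(aa, a)), imp(imp(a, aa), aa))),
    ('K', imp(a, imp(aa, a))),
    ('mp', 0, 1),
    ('K', imp(a, aa)),
    ('mp', 2, 3),
]
assert check(proof)[-1] == aa
```

The trusted computing base here is just `check` and the two pattern matchers; everything above it (proof search, tactics, automation) can be arbitrarily complicated and buggy without compromising soundness, because its output still has to pass the checker.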

Just to reiterate (including some content from my other comment): we can be wrong only if we have written the wrong definition of linearizability, misstated the correctness theorem, or if there is a bug in the proof checker or in Coq's logic itself.
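For intuition about what the definition of linearizability has to pin down, here is a toy brute-force checker for a single register (a hypothetical Python sketch for illustration only; Verdi's actual definition is stated in Coq and looks nothing like this). A history is linearizable if its concurrent operations admit some total order that respects real time and matches sequential semantics:

```python
from itertools import permutations

# Each operation: (invoked, returned, kind, value), where kind is
# 'w' (write value) or 'r' (read that returned value).
def linearizable(history):
    """Brute-force check: does some total order of the operations
    (1) respect real-time order -- if A returned before B was invoked,
        A must come first -- and
    (2) obey sequential register semantics (a read returns the most
        recently written value, initially None)?
    Exponential in len(history); fine for tiny examples only."""
    n = len(history)
    for order in permutations(range(n)):
        # (1) real-time order
        if any(history[order[j]][1] < history[order[i]][0]
               for i in range(n) for j in range(i + 1, n)):
            continue
        # (2) sequential semantics
        val, legal = None, True
        for idx in order:
            _, _, kind, v = history[idx]
            if kind == 'w':
                val = v
            elif val != v:
                legal = False
                break
        if legal:
            return True
    return False

# Overlapping writes: the read of 1 is explained by ordering the
# write of 2 before the write of 1, which real time permits.
assert linearizable([(0, 2, 'w', 1), (1, 3, 'w', 2), (4, 5, 'r', 1)])
# Non-overlapping writes: the write of 2 strictly follows the write
# of 1, so no legal order lets the read return 1.
assert not linearizable([(0, 1, 'w', 1), (2, 3, 'w', 2), (4, 5, 'r', 1)])
```

Getting a definition like this wrong (say, dropping the real-time constraint, which yields the weaker notion of sequential consistency) is exactly the kind of mistake the proof checker cannot catch, which is why the statement of the theorem remains in the trusted computing base.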


To add on to wilcoxjay's excellent response, this sort of proof development is exactly what Voevodsky would like to (eventually) see happen in mathematics. The linked proof of Raft uses Coq, a proof assistant whose small trusted checker makes sure that no mistakes were made in the proof. Simpler, "paper" proofs had already been done for Raft.


I included the paper proof for Raft in my PhD dissertation, but there's a good chance it contains errors. Here's a quote from the intro: "The proof shows that the specification preserves the State Machine Safety property. The main idea of the proof is summarized in Section 3.6.3, but the detailed proof is much more precise. We found the proof useful in understanding Raft’s safety at a deeper level, and others may find value in this as well. However, the proof is fairly long and difficult for humans to verify and maintain; we believe it to be basically correct, but it might include errors or omissions. At this scale, only a machine-checked proof could definitively be error-free." From Appendix B of my dissertation: https://github.com/ongardie/dissertation/#readme

This Coq proof is an enormous step forward. It's that machine-checked proof I was hoping someone would do when I wrote the above paragraph. 1) Assuming the TCB is correct, we know the Verdi implementation satisfies linearizability. 2) Assuming the TCB is correct and the Verdi implementation and the Raft paper/dissertation/spec are equivalent, then the Raft algorithm satisfies linearizability too.

I'm more willing to believe that the Verdi implementation implements Raft faithfully than that my hand proof is correct. If you're really worried about (2), you have the option of using the Verdi-generated implementation directly. If you're less worried about (2), you can now have more confidence in the safety of the Raft algorithm in general.



