I was thinking about basically taking the OCaml one and transcribing it, leaving proving to a short blurb at the end and the references. Still useful I think.
Very old versions of Agda had even more developed support for "literate" proofs: any item in the proof language could be given a natural-language form, which could in turn be annotated with features of natural-language grammar (singular/plural forms, grammatical class such as noun/adjective/verb/adverb, symbolic vs. written-out form), so that complex phrases could be pretty-printed automatically.
All of these features were left to bitrot, unfortunately. I don't think the version of Agda I mentioned can even be downloaded anymore. (Newer versions of Coq have some support for 'structured' proofs that is supposed to improve maintainability if proactively used, but it's a far cry from what those features could allow 'out of the box', with no need for special care or discipline from the user.)
I liked playing around with that, but it was quite restricted in what it could really do. And it lacked the automation that Isabelle/HOL has. In Isabelle you can really focus on the general structure of the proof and have it fill in the intermediate steps by automated theorem proving. In Coq's mathematical proof language I remember having to write a lot more hand-holding code.
Isabelle's Isar proof language is not quite as readable as textbook proofs, but it does make proofs somewhat "informally" readable. There are some nice examples in https://isabelle.in.tum.de/library/HOL/HOL-Isar_Examples/doc... (though the very first one is not the nicest, I think).
Edit: Forgot to mention, the above document is kind of what the OP asked for: you can generate this kind of LaTeX document from a fully formally checked Isabelle/HOL development.
This is the only way to understand a proof script, because proof scripts in and of themselves are totally opaque. It's like running a function with no source information step-by-step in a debugger. Declarative proofs are high-level (and can in fact be made arbitrarily-high-level by omitting proof steps and turning them into well-structured "sketches") and give you the equivalent information within the source code.
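To give a rough sense of what "declarative" means here, a minimal sketch in Lean 4 (a toy statement of my own, not from any of the developments discussed): each `have` states an intermediate fact that is readable on its own, and replacing a justification with `sorry` would turn the proof into a machine-checked sketch with a hole.

```lean
-- Toy declarative-style proof: the intermediate fact `hb` is
-- stated explicitly, so a reader can follow the structure
-- without replaying the proof step-by-step in an IDE.
example (a b : Nat) (h : a ≤ b) : a ≤ b + 1 := by
  have hb : b ≤ b + 1 := Nat.le_succ b
  exact Nat.le_trans h hb
```

A tactic-script version of the same proof would show you nothing but tactic names; the goal states in between would only be visible interactively.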
In addition to the sibling comment, this also breaks down when elaborate Ltac tactics are used to automate large parts of proofs. There you'll just have one big macro step that eliminates the goal, or transforms it arbitrarily, or fails, in some way you don't necessarily understand.
But this kind of automation is necessary for large projects on the scale of CompCert, otherwise you'd die of boredom trying to type out all the cases yourself.
Nobody should comment every line of a proof, but they should document the general ideas and structure, and proof languages other than Coq's tactic language do this better. I really urge you to take a look at some of the proofs in the PDF I linked above. You won't understand everything, and I'm not trying to convert you to Isabelle, but you will see a different way of structuring things.
Are there any cool projects I'm missing out on?
In the programming-language community many authors communicate algorithms through the use of inference rules. To get from rules to working code requires careful thought and effort. If the rules change or the author wants to use a different algorithm, the effort required to fix the code can be disproportionate to the size of the change in the rules. This thesis shows that it is possible to generate working code automatically from inference rules as they appear in publications.

The method of this generation is found in the combination of two domain-specific languages: Ruletex and MonStr. Ruletex formally describes inference rules; MonStr connects the rules to an algorithm. Ruletex descriptions are embedded in LaTeX, the language that researchers use to publish their work, so that the author commands complete control of the rules' appearance. Moreover the generated code enjoys several nice properties: existing code written in a general-purpose programming language can interoperate with Ruletex code, correctness of rules is decoupled from performance and termination of code, and implementations are conceptually simple, consisting only of λ-calculus with pattern matching.

The main technical contribution of this work is the design of MonStr, the execution-strategy language used to form an algorithm out of rules. MonStr specifications provide an important guarantee: a valid strategy cannot affect partial correctness, although it can affect termination, completeness, and efficiency.
I recommend this paper from Lamport on how to write a 21st century proof: https://lamport.azurewebsites.net/pubs/proof.pdf
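For a flavour of the format: Lamport numbers proof steps hierarchically, so each step can be expanded into sub-steps or collapsed, much like structured code. A toy rendition in that spirit (the statement and wording are mine, not taken from the paper):

```
THEOREM. For every natural number n: 0 + n = n
         (with + defined by recursion on its second argument).
⟨1⟩1. Base case: 0 + 0 = 0.
      PROOF: By the definition of +.
⟨1⟩2. Inductive step: if 0 + m = m, then 0 + (m + 1) = m + 1.
  ⟨2⟩1. 0 + (m + 1) = (0 + m) + 1.   PROOF: By the definition of +.
  ⟨2⟩2. (0 + m) + 1 = m + 1.         PROOF: By the induction hypothesis.
  ⟨2⟩3. Q.E.D.                       PROOF: By ⟨2⟩1 and ⟨2⟩2.
⟨1⟩3. Q.E.D.
      PROOF: By ⟨1⟩1, ⟨1⟩2, and induction.
```

The point of the numbering is that a reader can stop at any level of detail, and a reviewer can ask for any single step to be expanded further.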
It wouldn't be practical beyond some basic checks. Turning a paper proof, even one at the standard you'd find in a journal publication, into a formalised (e.g. Coq) proof is a research-grade task. The automation required to justify each step in a typical paper proof back to your axioms isn't close to being practical for casual use.
> Proof Checking of Natural Mathematical Documents, with optional support for Isabelle Prover IDE.
The goal is to be able to proof-check LaTeX directly. Most of the machinery is (or at least used to be) written in Haskell. It is definitely a cool project, and they could definitely use help.
This may have to do with my institution and my students (or perhaps with me) but I find that 30%-ish of them just do not have a turn for mathematics or proving, at all. Just as I do not have a turn for graphic design and no amount of effort moving things around in the GIMP seems to help, these folks do not get even the simplest exercises in Epp or Rosen. Even when the other 70%-ish are very ready to go on, there is a core that will not get those questions on the exam no matter how much time we devote to the topic.
(These folks, as far as I can tell and I've been doing this since 1990, go on to solid careers.)
This is a total cop out. Doubly so when discussing alternative ways to teach. Maybe they have the wrong background. Maybe they need something different in high school. Or a different set of prereqs. Who knows why, but saying it's something practically innate is a huge leap.