An old project I worked on, the Pascal-F verifier, from the early 1980s, worked in a similar way. We took the output from the first pass of the compiler, which was something like Java bytecode, and verified from that.
I found some old 9-track tapes of the sources recently, and I'm going to run them through a data recovery service in Morgan Hill next week and see if I can bring the system back to life. It's a historical curiosity, but should be fun to play with on today's machines. It was just too slow in 1982. Meanwhile, I brought the original Boyer-Moore theorem prover back to life and put it on Github.
Some of our lessons learned were:
1. Integrate the verification statements into the programming language. Don't try to do it
with comments. You want the language's regular syntax, type and variable checking to apply to the verification statements. We added the keywords ASSERT, ENTRY, EXIT, MEASURE, STATE, EFFECT, INVARIANT, DEPTH, PROOF, EXTRA, and RULE.
2. Make verification program-like, not math-like. For example, if we wanted to verify the structure of a tree, we would add fields for a back-pointer and a tree depth to the data structure of each node. We'd put in all the code to maintain them, and verify that the forward pointers and back pointers were consistent and that child objects always had a greater depth than their parents. This would all be done by writing ordinary-looking code
with ASSERT statements. But the new data fields would have the EXTRA attribute, and the code manipulating those fields would have PROOF in front of it. This meant it was there only for verification purposes. No executable code could depend on that data, so it could all be stripped out for execution.
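(Not Pascal-F code, just a minimal Rust sketch of the idea: the ghost fields and the checking code play the role of EXTRA and PROOF, here kept as ordinary fields and runtime asserts since Rust has no such attributes.)

```rust
// Sketch of point 2: add ghost fields (Pascal-F's EXTRA) plus proof-only
// maintenance code (Pascal-F's PROOF), then check consistency with asserts.
// Nodes live in an arena and refer to each other by index.
struct Node {
    children: Vec<usize>,  // forward pointers: part of the real program
    parent: Option<usize>, // ghost back-pointer (would be marked EXTRA)
    depth: u32,            // ghost tree depth (would be marked EXTRA)
}

fn check_tree(arena: &[Node]) {
    for (i, n) in arena.iter().enumerate() {
        for &c in &n.children {
            // ASSERT: forward and back pointers agree
            assert_eq!(arena[c].parent, Some(i));
            // ASSERT: a child is strictly deeper than its parent
            assert!(arena[c].depth > n.depth);
        }
    }
}

fn main() {
    let arena = vec![
        Node { children: vec![1, 2], parent: None, depth: 0 },
        Node { children: vec![], parent: Some(0), depth: 1 },
        Node { children: vec![], parent: Some(0), depth: 1 },
    ];
    check_tree(&arena); // passes: pointers consistent, depths increase
    println!("tree invariants hold");
}
```

In the Pascal-F scheme, everything marked as ghost here would be stripped out of the executable, since no real code is allowed to depend on it.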
3. Use two theorem provers. One prover was an Oppen-Nelson decision procedure. This is
fully automatic proving for integer +, -, multiplication by constants, inequalities, conditionals, logic operators, structures, and arrays. This subset of mathematics is decidable and there's a fast, complete decision procedure for it. That knocks off all the easy stuff automatically. Easy stuff usually includes subscript checks and overflow checks.
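To illustrate the kind of "easy stuff" that decidable fragment covers (an illustrative Rust example, not Pascal-F output): every proof obligation in the loop below is linear integer arithmetic over the loop condition.

```rust
// The sort of verification condition an Oppen-Nelson-style decision
// procedure discharges automatically: given the loop test i + 1 < a.len(),
// both subscripts a[i] and a[i + 1] are in bounds, and i + 1 cannot
// overflow. All of it is linear arithmetic with inequalities.
fn max_adjacent_sum(a: &[i32]) -> i32 {
    let mut best = i32::MIN;
    let mut i = 0;
    while i + 1 < a.len() {
        // From i >= 0 and i + 1 < a.len(): both accesses are safe.
        let s = a[i].saturating_add(a[i + 1]);
        if s > best {
            best = s;
        }
        i += 1;
    }
    best
}

fn main() {
    assert_eq!(max_adjacent_sum(&[1, 2, 3]), 5);
    assert_eq!(max_adjacent_sum(&[-4, 10]), 6);
    println!("ok");
}
```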
The second prover was the Boyer-Moore prover, which is semi-automatic; you have to propose lemmas to help it along. This is hard. By using ASSERT statements to narrow the area
of trouble, you could reach the point where you had two successive ASSERTs, one of which should imply the other, but the Oppen-Nelson prover couldn't prove it and there was no
previously proved rule on file to cover it. But now the problem had been narrowed to an abstract mathematical problem - prove the second ASSERT from the first. Someone could work on that in the Boyer-Moore prover and then export the rule for use in the main system.
This provided a separation of functions - you could have mathematically oriented people to struggle with the theorem prover, independent of the code. You could reuse
that rule elsewhere, and change other code without having to re-prove it. Today, we'd put files of useful theorems on Github. We never let the user add "axioms". That opens a huge hole.
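A toy Rust illustration of that narrowing (my example, not the commenter's): the step between the two asserted facts below is a pure math lemma, outside linear arithmetic, exactly the kind of thing one would prove separately and file as a RULE.

```rust
// "Successive ASSERTs" narrowing: the loop establishes the first fact
// (s is the sum 1..=n); bridging to the closed form needs the lemma
// sum(1..=n) = n*(n+1)/2, which an automatic linear-arithmetic prover
// can't do. That lemma is what gets proved offline and reused.
fn sum_to(n: u64) -> u64 {
    let mut s = 0;
    for k in 1..=n {
        s += k; // loop invariant: s == sum of 1..=k
    }
    // ASSERT 1 (from the invariant): s == sum of 1..=n
    // ASSERT 2 (needs the filed lemma): s == n * (n + 1) / 2
    assert_eq!(s, n * (n + 1) / 2);
    s
}

fn main() {
    assert_eq!(sum_to(10), 55);
    println!("ok");
}
```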
4. Expect to do a lot of compiler-type analysis and bookkeeping as part of the verification process. For example, the static analysis to determine that a function is pure ("pure" meaning that x = y implies f(x) = f(y), and that there are no side effects) is something to do as a routine compiler operation. Potential aliasing has to be discovered. Do this using compiler techniques; don't dump it into the theorems-to-be-proved mill, where it's much harder to give the programmer good error messages.
5. Don't fall in love with the formalism. Too much work in verification has been done by people who wanted to publish math papers, not kill program bugs. This is about creating bug-free code. I think people will get this now, but when I was doing this, everybody else involved was an academic.
I'm no expert, but according to this: https://blog.rust-lang.org/2016/04/19/MIR.html, MIR /is/ Rust's new intermediate representation.
Too much work in verification has been done by people who wanted to publish math papers, not kill program bugs. This is about creating bug-free code. I think people will get this now,
This might be right, but when I go reading I still find mostly maths-oriented stuff.
1. Modify the front-end of Frama-C or a similar tool to take a subset of Rust with specs, generate the VCs, and feed them to supported provers. He seems to be doing something similar, but I'm sure there are some good tools to build on.
2. For manual, high-assurance work, embed Rust into the Simpl/HOL framework that seL4 used, and build an AutoCorres-style tool on top of it. You then have the ability to do a similar effort with translation validation down to machine code. Next, extend COGENT to generate the Rust subset that Simpl supports. You could then, like their C example, write a whole filesystem functionally and output verified Rust or machine code. Optionally extend the COGENT tool with QuickCheck, QuickSpec, etc., especially where tests/specs can translate to Rust too. Quite a foundation for other Rust developments to build on. Including redoing the Rust compiler in COGENT, at least for a certified reference version. ;)
Which is a key element of the Eiffel language ( https://en.wikipedia.org/wiki/Eiffel_(programming_language) ).
SPARK Ada also includes this concept of pre/post conditions: https://en.wikipedia.org/wiki/SPARK_(programming_language)
I haven't programmed in the latter, but I'd like to. Since I'm often in the maintenance end of the software cycle I don't have much opportunity (professionally) to introduce new languages (new tools, yes, but not new languages).
I have a coworker who thinks we do this with asserts (in C and C++), but it's only half the battle. The asserts only tell us that our exercised version of the program (through our non-comprehensive testing) hasn't failed pre/post-conditions. It's helpful, but not sufficient.
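To make that "half the battle" concrete (my own Rust example, not the coworker's code): the postcondition assert below passes on every input a typical test suite tries, yet it is not true for all inputs, and only exhaustive reasoning or a prover sees the gap.

```rust
// Runtime asserts only check the executions you actually run.
// This postcondition holds for every "ordinary" input, but abs32(i32::MIN)
// would trip the debug_assert in a debug build, because negating i32::MIN
// wraps back to i32::MIN. A prover would flag it; spot testing rarely does.
fn abs32(x: i32) -> i32 {
    let r = if x < 0 { x.wrapping_neg() } else { x };
    debug_assert!(r >= 0, "postcondition: result is non-negative");
    r
}

fn main() {
    assert_eq!(abs32(-5), 5);
    assert_eq!(abs32(7), 7);
    // abs32(i32::MIN) is the untested counterexample.
    println!("ok");
}
```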
But I figured it was relevant to mention that there are both an RFC on adding DbC as a language feature and a library.
Design by contract brings the concept of using preconditions, postconditions, invariants, and other contractual objects into the design and implementation of the software. These are able (depending on implementation) to be analyzed statically by machine or by hand, or dynamically during testing (with some systems the tests can be automatically generated to exercise the contracts). This may not be the limit or ideal of formal verification, but it is certainly a part of it and a good measure forward compared to the typical approaches used in our industry.
Let's settle a pertinent question first: Is Hoare logic a tool/approach for formal verification? If it is, then design by contract (being largely based on the concept of Hoare logic) is also a tool/approach for formal verification.
The potential points of disagreement, as I see them, are that Hoare logic is not a tool for formal verification, or that design by contract is not based on Hoare logic. Is there something else that I've missed or are these the points of disagreement?
EDIT: Hoare logic is a tool for formal verification, indeed. DbC is not, though.
So design by contract, based on Hoare logic, applied in languages like SPARK Ada and Eiffel, produces statements that are unverifiable.
But Hoare logic, a thing without an executable implementation, produces statements which are verifiable.
Except many implementations of design by contract execute the postcondition check and raise an exception if it's not satisfied, so I don't think your statement is correct.
It doesn't have to be fully specified or proven to count as formal verification. Partial verification is a thriving field.
If you don't have a well-defined contract, you won't be able to verify that the contract is adhered to.
When verifying a caller, the verification system must verify at each call that the ENTRY conditions hold, and can then assume, after the call, that the EXIT conditions are true.
You can extend this to objects. Objects have invariants, which must be true whenever control is outside the object. So an object invariant is both an ENTRY and an EXIT condition for all exported functions.
(This is the source of the idea that objects should have "getters" and "setters", rather than exporting fields for external access. You need some place to check the object invariant.)
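A minimal Rust sketch of that rationale (illustrative names, not from any of the systems discussed): the fields are private, so the only writes go through a mutator that re-establishes the invariant before control leaves the object.

```rust
// The invariant "lo <= hi" must hold whenever control is outside the
// object. Keeping the fields private and funneling writes through a
// setter gives you the place to check it, which is the verification-era
// argument for getters/setters over exported fields.
struct Range {
    lo: i32,
    hi: i32,
}

impl Range {
    fn new(lo: i32, hi: i32) -> Range {
        let r = Range { lo, hi };
        r.check(); // ENTRY: invariant established at construction
        r
    }
    fn set_hi(&mut self, hi: i32) {
        self.hi = hi;
        self.check(); // EXIT: invariant restored before control leaves
    }
    fn hi(&self) -> i32 {
        self.hi // getters trivially preserve the invariant
    }
    fn check(&self) {
        assert!(self.lo <= self.hi, "invariant: lo <= hi");
    }
}

fn main() {
    let mut r = Range::new(1, 5);
    r.set_hi(9);
    assert_eq!(r.hi(), 9);
    println!("invariant maintained");
}
```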
(When is control inside an object? It's complicated. A language that's strict about this would have public and private functions, and you wouldn't be allowed to call public functions from private ones, because that's a re-entry. There's also the problem of calling "out the bottom" of an object and eventually re-entering the object. Classic source of bugs in GUIs, which are collections of objects that call each other up, down, and sideways. And if you block inside an object, is it unlocked? Spec# addressed all this, with difficulty.)
You state that design by contract has nothing to do with formal verification. You also later said that Hoare logic is a tool for formal verification. Design by contract is an implementation/application of Hoare logic. Ergo, it has something to do with formal verification.
I've never said it is formal verification, I've only said that it is part of it. It requires tooling and proper application for it to be useful/correct/effective. But that's true of everything in formal verification.
Reasonable for what? The design hypothesis behind a high-level specification/verification tool like TLA+ is that no language is currently capable of being reasoned about in such a way that large, complex applications can be verified affordably. So far, this hypothesis holds. Language-level verification is therefore valuable when you either need to verify very local properties (pre/post conditions of a single subroutine that is very important and/or tricky) or you need to do full end-to-end verification, in which case you will be able to verify only relatively small and simplified programs (like seL4 or CompCert) and at a very high cost (reasonable for high-assurance software only). So language-level verification and high-level specification and verification are currently useful for very different things.
edit: I'm particularly interested in formal specifications and their use in open source software.
The first isn't using formal methods AFAIK, the second is, and the third is as well, but I think they're targeting safe code only.
Disclaimer: I am affiliated with the project.
The contribution here is a formalization of borrowing and ownership semantics very close to Rust's, plus a sketch of how one would prove that these semantics give memory safety. The techniques can probably be adapted to an analysis of Rust itself.
Benitez, S. "Rusty Types for Solid Safety." ACM PLAS 2016.
(Full disclosure: Sergio and I are colleagues.)