
Poster here. I re-read this paper about once a year. I continue to think that it may be one of the most important papers I've read. As someone who works on high-reliability safety-critical real-time systems (automotive, avionics), being able to work in an environment where I could prove semantic properties of assembly code is pretty close to my dream -- the cost of demonstrating code correct is already so much higher than the cost of writing the code that the incremental cost of going to assembly seems minor, if there's even a small payback in time to demonstrate correctness. In practice, I think my dream language would be a relatively rich macro assembler plus a (guided, greedy) register allocator, designed entirely to simplify and allow semantic proofs. I don't really have a sense as to whether Coq (Rocq) is the right answer today, vs other options; or if there's a newer literature advancing this approach; but I'd truly love to see a deeper focus of making low level languages more useful in this space, rather than moving towards more constrained, higher level languages.




During the days I was studying/working with Coq, a visiting professor gave a presentation on defense software design. One example presented was the control logic for the F-16, which the professor had presumably worked on. A student asked how you prove the "correctness", i.e. operability, of a jet fighter and its control logic. I don't think the professor had a satisfying answer.

My question is the same, albeit more technically refined. How do you prove the correctness of a numerical algorithm (operating on a quantized continuum) using type-theoretic/category-theoretic tools such as theorem provers like Coq? There are documented tragedies where a numerical rounding error in the control logic of a missile cost lives. I have proved mathematical theorems before (Curry-Howard!) but they were about discrete mathematical objects (e.g. sets, groups), not continuous numbers.


You use floating point numbers instead of real numbers in your theorems and function definitions.

This sounds flippant, but I'm being entirely earnest. It's a significantly larger pain because floating point numbers have some messy behavior, but the essential steps remain the same. I've proved theorems about floating point numbers, not reals. Although, again, it's a huge pain, and when I can get away with it I'd prefer to prove things with real numbers and assume magically they transfer to floating point. But if the situation demands it and you have the time and energy, it's perfectly fine to use Coq/Rocq or any other theorem prover to prove things directly about floating point arithmetic.

The article itself is talking about an approach sufficiently low level that you would have to prove things about floating point numbers directly, since it's all assembly!

But even at a higher level you can have theorems about floating point numbers. E.g. https://flocq.gitlabpages.inria.fr/

There's nothing category theoretic or even type theoretic about the entities you are proving things about with the theorem prover. Type theory is merely the "implementation language" of the prover. (And even if there were, there's nothing tying type theory or category theory to the real numbers rather than to floats.)
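To make the "messy behavior" concrete, here's a tiny Python illustration (not a prover, just the kind of behavior a proof about floats has to account for): addition isn't associative, so any proof step that silently reassociates a sum is unsound over floats.

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0 -- the 1.0 is absorbed before the cancellation
    assert (a + b) + c != a + (b + c)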


> when I can get away with it I'd prefer to prove things with real numbers and assume magically they transfer to floating point.

True for some approaches, but numerical analysis does account for machine epsilon and truncation errors.

I am aware that Inria works with Coq as your link shows. However, the link itself does not answer my question. As a concrete example, how would you prove an implementation of a Kalman filter is correct?


There is nothing inherently difficult about practical implementations of continuous numbers for automated reasoning, compared to more discrete mathematical structures. They can be handled by standard FOL.

See ACL2's support for floating point arithmetic.

https://www.cs.utexas.edu/~moore/publications/double-float.p...

SMT solvers also support real number theories:

https://shemesh.larc.nasa.gov/fm/papers/nfm2019-draft.pdf

Z3 also supports real theories:

https://smt-lib.org/theories-Reals.shtml
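For a taste of what the Reals theory looks like in practice, here is a minimal sketch with the z3-solver Python bindings (my own toy example, not from the linked papers):

    # pip install z3-solver
    from z3 import Real, And, Implies, prove

    x, y = Real('x'), Real('y')
    # Valid over the reals: if both inputs lie in [0, 1], so does their product.
    prove(Implies(And(0 <= x, x <= 1, 0 <= y, y <= 1),
                  And(0 <= x * y, x * y <= 1)))   # prints "proved"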


I'm curious about how you'd do that, too. I haven't tried doing anything like that, but I'd think you'd start by trying to formalize the usual optimality proof for the Kalman filter, transfer it to actual program logic on the assumption that the program manipulates real numbers, try to get the proof to work on floating-point numbers, and finally extract the program from the proof to run it.

https://youtu.be/_LjN3UclYzU has a different attempt to formalize Kalman filters which I think we can all agree was not a successful formalization.
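For concreteness, here's the kind of invariant you'd be trying to carry through such a formalization, checked numerically in Python (a sanity check, emphatically not a proof): for a scalar Kalman update with measurement noise R > 0, the posterior variance stays positive and never exceeds the prior.

    import random

    def kalman_update(x, p, z, r):
        k = p / (p + r)                  # Kalman gain
        return x + k * (z - x), (1 - k) * p

    random.seed(0)
    x, p = 0.0, 1.0
    for _ in range(10_000):
        x, new_p = kalman_update(x, p, random.gauss(0.0, 1.0), r=0.5)
        assert 0.0 < new_p <= p          # the property you'd want to prove over floats
        p = new_p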


It is really not that difficult. Here is a paper that formalizes a version of feed forward networks to prove properties about them.

https://arxiv.org/pdf/2304.10558


I have been curious about this. Where can you find definitions for the basic operations to build up from?

IEEE 754 does a good job explaining the representation, but it doesn't define all the operations and possible error codes, as far as I can tell.

Is it just assumed "closest representable number to the real value" always?

What about all the various error codes?


The standardized operations, e.g. multiplication or square root, are precisely defined: the result is always the exact result of the corresponding operation on real numbers, followed by whatever rounding rule is in effect.
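In double precision you can check this "exact operation, then round" definition directly; a quick Python sketch using exact rationals for the reference value:

    from fractions import Fraction

    a, b = 0.1, 0.2
    exact = Fraction(a) + Fraction(b)   # the exact real sum of the two doubles
    assert a + b == float(exact)        # the hardware result is that sum, correctly rounded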

IEEE 754 also contains a list of operations that are recommended, but not defined by the standard, such as the exponential function and other functions whose results are difficult to round correctly.

For the standardized operations, all the possible errors are precisely defined, and they must either generate an appropriate exception or produce as the result a special value that encodes the kind of error, depending on how the programmer configures the processor.

The standard is perfectly fine. The support of the standard in the popular programming languages is frequently inconvenient or only partial or even absent. For instance it may be impossible to choose to handle the errors by separate exception handlers and it may be impossible to unmask some of the exceptions that are masked by default. Or you may lack the means to control the rounding mode or to choose when to use FMA operations and when to use separate multiplications.

If you enable all the possible exceptions, including that for inexact results, the value of an expression computed with IEEE 754 operations is the same as if it were computed with real numbers, so you do not need to prove anything extra about it.

However this is seldom helpful, because most operations with FP numbers produce inexact results. If you mask only the exception for inexact results, the active rounding rule will be applied after any operation that produces an inexact result.

Then the expression where you replace the real numbers with FP numbers is equivalent to a more complex expression with real numbers that contains rounding operations in addition to the explicit operations.

Then you have to prove whatever properties are of interest for you when using the more complex expression, which includes rounding operations.

The main advantage of the IEEE 754 standard, in comparison with the pathetic way FP operations were implemented before the standard, is that the rounding operations are defined exactly, so you can use them in a formal proof.

Before this standard, most computer makers rounded results in whatever way happened to be cheapest to implement, and there were no guarantees about what the result of an operation would be after rounding, so it was impossible to prove anything about FP expressions computed on such computers.

If you want to prove something about the computation of an expression when more exceptions are masked, not only the inexact result exception, that becomes more complex. When a CPU allows a non-standard handling of the masked exceptions, like flush-to-zero on underflow, that can break any proof.
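Python itself doesn't expose the mask bits, but NumPy's errstate gives a rough feel for the two behaviors described above (a sketch, not processor-level control):

    import numpy as np

    x = np.array([1e308])
    with np.errstate(over='ignore'):     # masked: the error is encoded in the result
        print(x * 10)                    # [inf]
    with np.errstate(over='raise'):      # unmasked: the operation traps instead
        try:
            x * 10
        except FloatingPointError as e:
            print("trapped:", e)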


You use reducing rationals everywhere you can, not floats.
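(A quick illustration of why, in Python: rational arithmetic is exact, so there is no rounding step to carry through a proof.)

    from fractions import Fraction

    tenth = Fraction(1, 10)
    assert sum(10 * [tenth]) == 1     # exact: no rounding to reason about
    assert sum(10 * [0.1]) != 1.0     # the float version has already drifted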

So the answer is that you are proving two things:

1. That the model/specification makes sense. i.e. that certain properties in the model hold and that it does what you expect.

2. That the SUV/SUT (system under verification/test) corresponds to the model. This encompasses a lot but really what you are doing here is establishing how your system interacts with the world, with what accuracy it does so, etc. And from there you are walking along the internal logic of your system and mapping your representations of the data and the algorithms you are using into some projection from the model with a specified error bound.

So you are inherently dealing with the discrete nature of the system the entire time but you can reason about that discrete value as some distribution of possible values that you carry through the system with each step either

- introducing some additional amount of error/variability or

- tightening the bound of error/variability but trapping outside values into predictable edge cases.

Then it's a matter of reasoning about those edge cases and whether they break the usefulness of the system compared against the idealised model.
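A toy version of that bookkeeping, sketched in Python (my own illustration, not any particular tool): carry a [lo, hi] bound instead of a single value, widen it wherever error enters, and tighten it where values are clamped into predictable edge cases.

    def add(a, b):
        # widen: interval sum of two bounded values
        return (a[0] + b[0], a[1] + b[1])

    def scale(a, k, rel_err=2**-23):
        # widen: multiply by a constant, then allow for relative rounding error
        lo, hi = sorted((a[0] * k, a[1] * k))
        return (lo - abs(lo) * rel_err, hi + abs(hi) * rel_err)

    def clamp(a, lo, hi):
        # tighten: trap out-of-range values into a predictable edge case
        return (max(a[0], lo), min(a[1], hi))

    sensor = (0.98, 1.02)              # a reading with +/-2% uncertainty
    x = clamp(scale(add(sensor, (0.0, 0.001)), 9.81), 0.0, 20.0)
    print(x)                           # the bound you compare against the ideal model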


> There are documented tragedies where a numerical rounding error in the control logic of a missile cost lives.

curious about this


This is likely a reference to the Patriot missile rounding issue that arguably led to 28 deaths.

https://www-users.cse.umn.edu/~arnold/disasters/Patriot-dhar...

https://www.gao.gov/assets/imtec-92-26.pdf


Just to clarify for others because it’s a tiny bit clickbaity:

The Patriot missile didn’t kill 28 people accidentally; it simply failed to intercept an enemy missile.

And it wasn’t launched on an incorrect trajectory either; the radar was looking at a slightly wrong distance window and lost track. Furthermore, the error only starts having an effect after 100 hours of operation, and it seems to have only been problematic with the faster missiles in Iraq that the system wasn’t designed for. They rushed the update and they did actually write a function to deal with this exact numerical issue, but during the refactor they missed one place where it should have been used.

28 lives are obviously significant, but just to note that there are many mitigating factors.
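The arithmetic from the linked GAO report and the Arnold page above is easy to reproduce (a sketch of the documented numbers, not the actual Patriot code): uptime was counted in tenths of a second and multiplied by a 24-bit truncation of 0.1, and the truncation error grows with hours of operation.

    from fractions import Fraction

    tenth_exact  = Fraction(1, 10)
    tenth_stored = Fraction(int(tenth_exact * 2**23), 2**23)   # 0.1 truncated to the register's precision (per the linked writeups)
    err_per_tick = float(tenth_exact - tenth_stored)           # ~9.5e-8 s per 0.1 s tick

    drift = err_per_tick * (100 * 60 * 60 * 10)                # ~0.34 s after 100 hours
    print(drift, drift * 1676)                                  # ~0.34 s, ~570 m at Scud speed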


Have you found Coq or other formal-methods tooling useful?

I have found Isabelle very useful, and Dafny even more so.

Amazon AWS uses Dafny to prove the correctness of some complex components.

Then, they extract verified Java code. There are other target languages.

Being based on Hoare logic, Dafny is really simple.


Have you tried using them as macro assemblers in the way described in the paper?

No, I haven't, but I want to. I haven't had the opportunity to go deep into formal methods for a work project yet, and on personal projects I've mostly played with Ada SPARK, not Coq. I've played with Coq a little (to the extent of re-implementing the linked paper for a subset of a different ISA), but there's definitely a learning curve. One of my personal benchmark projects is to implement a (fixed-maximum-size) MMM heap in a language, and convince myself it's both correct and of high performance; and I've yet to achieve that in any language, even SPARK, without leaning heavily on testing and hand-waving along with some amount of formal proof. I've written the code and comprehensive testing in various assembly languages (again, this is sort of my benchmark bring-up-a-new-platform-in-the-brain example), and I /think/ that, for the way I think, proving properties as invariants maintained across individual instructions maps pretty well to how I convince myself it's correct, but I haven't had the time, depth, or toolchain to really bottom any of this out.

What's a MMM heap? Is it a typo for min-max heap?

A min-max-median heap, I would assume.

Yep, that. Just a bit more fiddly than a min-max heap, and though I rarely actually need the median functionality, fiddly is a pro for learning a language at times.

I have found huge value in CBMC and KLEE for statically verifying C code for real time safety critical embedded applications.

ACL2 is also VERY powerful and capable.


I'd love some examples here.

The second your macro-assembler has a significantly complex syntax, you are done: feature creep brings 2042 features, then 2043, then 2044... and it mechanically reduces the number of alternative implementations and makes the effort required to develop a new one more and more unreasonable.

In the end, it is all about complexity and stability over time.



