This is the first time I have read about using cost functions to monitor a program's state and verify its integrity, and that's after a job over a decade ago where we ran up against software attestation problems on an authenticator and device-identity product (e.g. how can you know a device wasn't cloned?).
Reading this now, our naive mistake was assuming that runtime attestation had to be cryptographic and deterministic rather than probabilistic, i.e. a score aggregated from something like these cost functions. It's plausible to me now that software attestation simply isn't a closed-loop cryptographic problem. Even if I'm still wrong in some equally naive ways, this was really edifying to read.
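To make the contrast concrete, here's a minimal sketch of what score-based attestation might look like. Everything here is invented for illustration (the cost functions, weights, and threshold are all assumptions, not from any real attestation product): several cheap runtime measurements each produce a cost, and a verifier thresholds the weighted aggregate instead of demanding a single deterministic cryptographic pass/fail.

```python
# Hypothetical sketch: probabilistic attestation as an aggregated score.
# Each "cost function" measures deviation in one runtime property; the
# names and weights below are illustrative assumptions only.

def timing_cost(observed_ms, expected_ms):
    """Penalty grows with relative deviation from expected execution time."""
    return abs(observed_ms - expected_ms) / expected_ms

def memory_layout_cost(observed_hashes, expected_hashes):
    """Fraction of memory-page hashes that differ from the reference image."""
    mismatches = sum(1 for o, e in zip(observed_hashes, expected_hashes) if o != e)
    return mismatches / max(len(expected_hashes), 1)

def attestation_score(costs, weights):
    """Weighted aggregate: 1.0 means 'looks healthy', 0.0 means 'very suspect'."""
    total = sum(w * c for w, c in zip(weights, costs))
    return max(0.0, 1.0 - total)

# The verifier accepts if the aggregate evidence clears a threshold,
# rather than requiring a closed-loop cryptographic proof of integrity.
score = attestation_score(
    costs=[timing_cost(105, 100), memory_layout_cost(["a", "b"], ["a", "b"])],
    weights=[0.5, 0.5],
)
accept = score >= 0.9
```

The key design difference from the deterministic approach is that each individual measurement can be noisy or even spoofable on its own; the bet is that faking all of them simultaneously and consistently is expensive for an attacker.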
Probably one of the coolest uses of Haskell in verification is as the high-level model implementation of seL4, where a theorem-prover embedding is used to prove that the C implementation of the seL4 microkernel is essentially just a more detailed refinement of the Haskell description!