Since F# is a functional language, using a purely functional approach, a system of strong types, and pure functions, it allows to formally verify the correctness of a particular ISA. The emulator is nothing more than a side effect.
>it allows to formally verify the correctness of a particular ISA
That must be hypothetical. The "functionalness" of the language/code doesn't make everything written in it automatically subject to formal verification (mechanized or pen-and-paper). What kind of correctness properties does it actually allow one to formally verify? I would understand if it were the F* language, which is a full-blown dependently typed proof checker, but with F#, which is defined by its implementation and a 300-page English spec, I don't think you can verify anything interesting. As far as I know, F# itself doesn't have mechanized formal semantics, and its type system could be unsound.
Your executable specification could probably be called "formal" in the same sense as a PDF spec being called "formal", but claiming it is subject to verification by virtue of F# being functional, or being written in a functional style, might be pushing it too far.
I completely agree. And I specifically draw your attention to the fact that this is not formal verification, for which it would be reasonable to use Coq, Isabelle, Agda, F*, etc. It is, however, a formal specification, i.e. a representation of the specification in a formalized form. A Haskell example: https://github.com/rsnikhil/Forvis_RISCV-ISA-Spec
In this case the term "formal" refers to the formalization of the representation of the specification, and that usage seems to be already established.
Then what did you mean by "allows to formally verify"?
>Haskell example:
I think (I'm not gonna insist on it) they are misusing the word "formal" too. Formalization must involve logic/math, i.e. a formalism, either in code or in prose. Just having lambda calculus as a base somewhere deep doesn't cut it. Haskell's type system is known to be (logically) unsound/inconsistent for example. This is an executable specification imho.
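To make the unsoundness point concrete, here is a minimal sketch (the names `False` and `absurd` are mine, purely for illustration): in any language with unrestricted general recursion, every type is inhabited, so read as a logic under Curry-Howard the type system "proves" everything, including falsehood.

```haskell
-- An empty type: under Curry-Howard it plays the role of the
-- proposition False, which should have no proof.
data False

-- General recursion "proves" it anyway: this definition typechecks,
-- it just never terminates if forced. Every type is inhabited this way.
absurd :: False
absurd = absurd

main :: IO ()
main = putStrLn "absurd :: False typechecks"  -- absurd is never actually forced
```

A total, non-Turing-complete subset of the language closes this particular hole, which is presumably why such subsets get used when people want to treat Haskell definitions as formal objects.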
They chose Haskell for the reasons stated in section 2.1: human reasons, namely that it is a general-purpose language supposedly more readable by engineers, and the preexistence of translators they needed.

To me the "formal" aspect of their spec starts after the output of the hs-to-coq compiler, which isn't a verified compiler. I already have to trust Coq, in which about a paradox a year has been discovered (maybe fewer now), and which isn't small (about 20k lines of minimal core iirc). Trusting an additional compiler and a language subset (some total Haskell subset) is stretching it formality-wise.

Using a general-purpose language is also their innovation (academic papers are sometimes written to cover yet-unclaimed design space, not necessarily because the approach is the best or most practical). They even used the following language: "It may turn out that the ideal community-shared spec language is quite different from Haskell, but our interest here is showing that a general-purpose language can work well as the host for a multipurpose spec, leaving fine-tuning of the approach for future work." I see no reason why they couldn't have done it in Coq or Lean directly, other than the inconvenience (having to write to-Verilog and other translators) and "non-innovativeness" (publishability).

Good old C has by now been fully formally specified, and therefore could also technically be used to provide a "formal" specification for hardware. Would it be a good formal specification? The further a language is from the logical core, the less useful it seems to me as a formalization. Haskell's formal aspects are also scattered across multiple papers throughout the years and, as far as I know, across Haskell's versions. Coq, on the other hand, has https://github.com/MetaCoq/metacoq
> To me the "formal" aspect of their spec starts after the output of the hs-to-coq compiler, which isn't a verified compiler. I already have to trust Coq, in which about a paradox a year has been discovered (maybe fewer now), and which isn't small (about 20k lines of minimal core iirc). Trusting an additional compiler and a language subset (some total Haskell subset) is stretching it formality-wise.
I'm a bit confused here due to my lack of knowledge. If it's fed into Coq, why does the origin of the code matter? As it's a proof assistant, wouldn't it reject invalid proofs of validity (assuming you fully trust Coq)? Shouldn't it be fine even if the code is randomly generated by monkeys on typewriters?
The subject of discussion was whether executable specifications in F# or Haskell are also formal specifications. While Haskell is based on System FC, a variant of the lambda calculus, there are many additions and quirks, and as far as I know no mechanized metatheory, or even a complete language semantics stated in one document. There are also language features known to make Haskell inconsistent in the logical sense, not to mention that Turing completeness makes any language inconsistent as a logic.

In the paper linked above the authors call their multipurpose specification in Haskell formal; while I don't like it, they have their reasons and justifications (a total, non-Turing-complete subset of the language; Haskell's metatheory scattered across papers; System F, which is well established). But to me, when one sees a formal statement, the shorter the path to the trustable foundation the better, and in statements in Haskell I see all of the issues above, plus possibly bugs in the language itself. If there were proofs of correctness/soundness of the hs-to-coq compiler and of Haskell's metatheory, I would have no reason to complain. (But then it would have been better to invest that effort into verifying a hardware-specification-DSL-to-Coq translation or embedding.)

As a result, what one has to trust in this situation is the translation of this Haskell spec to Coq, not the spec itself, considering that in this setup you have to prove theorems in Coq anyway, and the guarantees of the proven theorems only hold for the Coq definitions. So the Haskell spec is, in my eyes, a superfluous entity that only complicates things and lowers trust. You are correct that a proven statement in Coq doesn't depend on the source from which the boilerplate was generated, but you would have to trust the statement in Coq, and not the "spec", and that's a fail for a formalization in my eyes.
I suspect (I didn't check) that the translation and the language subset they chose are trivially translatable to Coq and that they don't use anything particularly questionable, but the superfluousness doesn't go away.
F# has diverged quite a lot from OCaml and is very readable IMO.
In any case, I think this view stems from our Algol/C/Java-centric CS education system that casts ML and Lisp as weird and "unreadable", when really it's just a matter of perspective.
It's really not. In the first year at my engineering school we were taught both C and Lisp during the same semester, and it was obvious to almost everyone except the math nerds how much more easily C mapped onto the mental model of a normal human being (i.e. someone without prior programming experience).