I've written a somewhat lengthy discussion of this point on the TLA+ subreddit: https://www.reddit.com/r/tlaplus/comments/abi3oz/using_tla_t...
VARIABLE x ∈ Nat
As to "satisfying the type theory folks," I'm not sure what it means. Type theory studies the features of typed formalisms; it makes no claims as to when working in a typed formalism is preferable to working in untyped ones (although different people have different opinions). There are implementations of TLA in typed formalisms (in particular, Isabelle and Coq). Lamport explains why he designed TLA+ as an untyped language.
I simply meant that adding simple macros wouldn't make TLA+ typed, and so wouldn't make the people who think TLA+ should be typed any happier than they are today.
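For readers unfamiliar with the untyped style: the hypothetical typed declaration shown above (`VARIABLE x ∈ Nat`) would be written in actual TLA+ as a bare declaration plus a type-correctness invariant that the model checker verifies like any other property. A minimal sketch (module and operator names are illustrative):

```tla
---- MODULE TypedSketch ----
EXTENDS Naturals

\* TLA+ has no typed declarations; variables are declared bare.
VARIABLE x

\* The "type" of x is stated as an ordinary invariant, which TLC
\* checks like any other invariant rather than at "compile time".
TypeOK == x \in Nat

Init == x = 0
Next == x' = x + 1

Spec == Init /\ [][Next]_x
====
```

This is why a macro layer on top wouldn't change the underlying semantics: the declaration form could be sugared, but the constraint would still be an ordinary state predicate, not a static type.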
: https://www.isa-afp.org/entries/TLA.html, https://drive.google.com/file/d/1rAn3N5hViv3xNe2E55lMzpFFym1...
> The technologies provided by VMMs for communication between VMs have a critical impact on VM isolation properties, on the confidence components can have in the delivery of data, and in the integrity of the data that is received. The video below identifies aspects of inter-VM communication system architecture that support important properties that are valuable for building secure systems. Terminology is introduced to enable classification of the existing body of art and survey relevant communication technologies in modern hypervisor, OS and microkernel systems. An example is presented, Argo, an inter-VM communication mechanism developed for the Xen hypervisor, showing how it is distinguished from other communication channels on the Xen platform and elsewhere.
> One really nice feature of TLC is that (unlike a fuzz tester) it does a breadth-first search and therefore finds minimal counter-examples for invariants.
It's worth noting that many fuzzers go to great lengths to produce examples that are as small as possible, but it's true that this just comes naturally with TLC.
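To make the BFS point concrete, here is a toy spec (names invented for illustration) where many paths lead to the violating state; because TLC explores states breadth-first, the first violation it reports is at minimal depth:

```tla
---- MODULE BfsDemo ----
EXTENDS Naturals
VARIABLE n

Init == n = 0

\* Each step adds either 1 or 3, so many different paths
\* reach any given value of n.
Next == n' = n + 1 \/ n' = n + 3

\* The shortest violation takes three steps (two +1 steps and
\* one +3 step, in some order). TLC's breadth-first search
\* reports a three-step trace, never the five-step all-+1 path.
Inv == n # 5
====
```

A fuzzer would instead stumble on some violating path and then (if it supports shrinking) try to reduce it after the fact.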
Even moderately complex TLA+ models, or minor increases in a simple model's depth of representation, often explode TLC's search space and, with it, the time required for checking.
Both classes of tools are useful, but the comparison seemed odd since you wouldn't use TLA+ like a fuzzer, nor vice versa.
TLC checks the TLA+ specification, however detailed it is. In practice, it's true that it is usually much more useful to specify at a level well above the code, but it could be at the same abstraction level as the code (the specification could even be mechanically generated from the code), or even, in principle, much finer: you can refine the TLA+ specification to an arbitrary level, including down to the level of electronics in the CPU. However, very fine specifications are far less practical.
> Even moderately complex TLA+ models, or minor increases in a simple model's depth of representation, often explode TLC's search space and, with it, the time required for checking.
You can avoid the explosion by asking TLC to sample the behaviors in the same way a fuzzer does, but then you'll lose the exhaustiveness and the proof of absence of errors.
In either case the purpose and thus optimization of each kind of tool is fairly different, even if abstractly they're both "state space exploration", which made the comparison seem odd to me.
While this is true for invariants, unfortunately it's not true for liveness properties. The current implementation doesn't make any guarantees about the length of a minimal liveness counterexample, and in practice I find it's usually the _longest_ counterexample.
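For reference, a liveness counterexample is a lasso: a finite prefix followed by a cycle that repeats forever, which is why minimality works differently than for invariant violations. A toy illustration (names invented):

```tla
---- MODULE LivenessDemo ----
EXTENDS Naturals
VARIABLE n

Init == n = 0
Next == n' = n + 1

\* No fairness condition on Next, so behaviors may stutter
\* at any state forever.
Spec == Init /\ [][Next]_n

\* Liveness: n eventually reaches 3. A counterexample is a lasso,
\* e.g. any prefix ending in an infinite stutter before n = 3,
\* and the checker makes no promise about the prefix's length.
Live == <>(n = 3)
====
```

Here `Live` fails under `Spec` because stuttering is allowed, and the reported lasso need not be the shortest one that exists.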
Still super useful, I'm just being a little pedantic here :)