> That's not an assumption I want to have to make!

You don't need to make that assumption, though. TLA+ checks it for you. My point is that everything is pure and well-behaved. The very same issues of modality arise when you use monads. Only in TLA+ it's very simple to see when an operator is a constant (called level 0), a state function/predicate, i.e. a function of the current state (level 1), an action/step (level 2), or a full temporal formula (level 3). In a temporal formula you can say something like: whenever my program sees an event x, it will eventually output a response y. In a way, TLA+ has 4 syntactic types, and the relationship between them is checked instantly at parsing. E.g., you can't take an action or a temporal formula and prime it (TLA+ checks "inside" your formula and knows what level it is). Priming a constant keeps it a constant; priming a state function turns it into an action (a relationship between two states).

> This "simple isomorphism" simply doesn't exist

No, I meant something very simple. Any boolean predicate, e.g. "this number is even", is the same as the set of all even numbers. It's a characteristic function. It always exists.

> Sure, but a possibly nondeterministic one that you can't actually implement (at least not directly).

Right, but nat -> bool is the same as saying "all programs that take a natural and return a boolean", which is the same as the nondeterministic program that, for any natural, nondeterministically returns true or false. It's describing the same set or type in a different way, which makes simple logical implication the same as subtyping or type inhabitance. "B is a subtype of A" is, in TLA+, B => A. "a is an instance of A" is, in TLA+, a => A.

BTW, it's not true that TLA+ can't tell you that your implementation is correct. It can; it's just hard (but probably no harder than in Coq). What you do is write a high-level model and prove it.
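To make the levels concrete, here is an illustrative TLA+ fragment (the operator names C, P, A, T are made up for the example):

```tla
VARIABLE x
C == 42              \* level 0: constant; priming it leaves it a constant
P == x > 0           \* level 1: state predicate; P' is the action x' > 0
A == x' = x + 1      \* level 2: action, a relation between x and x'
T == [][A]_x /\ <>P  \* level 3: temporal formula; cannot be primed
```

The parser knows each definition's level just from its syntax, which is why priming T (or A) is rejected immediately rather than producing some ill-defined formula.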
Then you implement it in a programming language, and you write a compiler from that language to TLA+. Then you check that the TLA+ version of your implementation is a refinement of your high-level model. Academic teams have done this for C and Java. The problem is that implementations have many details, so it doesn't scale beyond small programs.

> When I get some time I will look into it further.

I'd say that the effort to learn TLA+ is comparable to learning a very simple programming language with a couple of deep concepts (Scheme, maybe). It takes two weeks to be able to write large, complex specifications (though probably longer for the theory to become intuitive). The entire documentation for all the "standard libraries" fits on a single page. The reference for the whole language, with full explanations, fits on 4 pages. This is all of TLA+, including the standard library, minus the proof language, including all syntax sugar: http://research.microsoft.com/en-us/um/people/lamport/tla/su... (e.g., all the special syntax for sequences and records is just sugar for functions, so r.f is sugar for r["f"]). Personally, I'm interested in the theory for intellectual reasons, but the single reason I picked TLA+ is that Amazon technical report, where they showed that 1. learning it is easy for engineers to do on their own, and 2. it scales well for very large systems.
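A hedged sketch of what that refinement check looks like in TLA+ (the module name HighLevel, the variables buffer and head, and the mapping are all hypothetical; ImplSpec stands for the implementation's spec, defined elsewhere):

```tla
---- MODULE ImplSketch ----
EXTENDS Naturals
VARIABLES buffer, head  \* the implementation's low-level state

\* Instantiate the high-level spec, substituting an expression over the
\* implementation's variables for its abstract one: a refinement mapping.
HL == INSTANCE HighLevel WITH value <- buffer[head]

\* "The implementation refines the model" is just implication between
\* their specs, checkable with TLC or provable in TLAPS.
THEOREM ImplSpec => HL!Spec
====
```

Refinement being plain implication is the same point as subtyping above: the low-level spec describes a subset of the behaviors the high-level spec allows.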
