There are at least a couple more major improvements I want to make: one is fixing all of the small errors, and another is adding C-syntax. They've been stalled due to Reasons (tm), but I'm hoping to dive back in soon.
From what I've seen, TLA is finding a niche for modeling complex concurrent/distributed algorithms.
I'd recommend design by contract as a lighter-weight form of verification. And if you don't like the TLA syntax, you could try a verification tool similar to TLA called SPIN, whose syntax looks a bit like C. The reference for SPIN is the book written by the tool's author.
Yet TLA+ can be beneficial for validating multi-step interactions between components, e.g. a quorum protocol on a network, or locking and messaging between threads. Essentially, anything that involves more than one agent and a state machine.
But when you have tighter coupling and a lot more business rules, this stuff becomes useful.
For example, imagine that you are writing a billing system. Beyond just the code, there's a design that expects the passage of time and an ordering of operations (a user signs up to the system, after a bit they will get invoiced, the amount invoiced will be right, perhaps there's pro-rating).
In models where there are a lot of moving parts and where you _don't_ really have locality of code, then this stuff becomes more useful.
If you've ever had a bug that exposes an issue not just with the code, but with the entire system and the set of assumptions it's built on, it's possible that something like TLA+ could help you a lot with that system.
Formal specifications themselves have a few benefits. The first is that they force you to make explicit everything that's usually a hidden assumption. From there, you can check the consistency of your system. From there, you can analyze the specs or generate code from them. Spotting hidden or ambiguous assumptions and detecting interface errors were the main benefits reported in industrial projects going back decades.
However, you can get a taste of the benefits by combining Design-by-Contract with contract/property/spec-based testing and runtime checks. Design-by-Contract is a lot like asserts, so it should be easy to learn, with more obvious value. I'll also note that tools such as Frama-C and SPARK Ada, used with code proving, have added (or are adding) the ability to turn all the specs/contracts into runtime checks. So that's already available in open-source or commercial form. And anything you can't spec, just write tests for. :)
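To make the "a lot like asserts" point concrete, here's a minimal Python sketch of contracts as runtime checks. The `contract` decorator and the `withdraw` example are made up for illustration; real DbC tools (Frama-C, SPARK) generate this kind of check from their own annotation languages:

```python
# Design-by-Contract as plain runtime asserts: a precondition is checked
# on entry, a postcondition on exit. Illustrative sketch only.
import functools

def contract(pre=None, post=None):
    """Wrap a function with precondition/postcondition asserts."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return deco

@contract(pre=lambda balance, amount: 0 < amount <= balance,
          post=lambda result, balance, amount: result == balance - amount)
def withdraw(balance, amount):
    return balance - amount

print(withdraw(100, 30))   # prints 70
# withdraw(100, 200) raises AssertionError: the precondition fails
```

The payoff is the same as with spec-generated checks: the contract documents the interface and is enforced on every call, which makes violations fail loudly at the boundary instead of corrupting state downstream.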
Sorry, it's in Russian, and the number of steps was 33. Check 28:30.
> But if we run the model checker, we find that’s not true: Alice could trade an item to herself and, before that process finishes running, resolve a parallel trade giving that same item to Bob. Then the first trade resolves and Alice gets the item back.
How does the model checker know that "parallel threads" are even a possibility here? It seems like it's checking what amounts to a leaky abstraction from the actual implementation of this algorithm.
In my mind a strictly logical validator would look at the proposed algorithm and say it checks out (since in math-land transactions are atomic). Is this just a feature of the model checker? Can it be turned off/tuned to the expected real-world implementations of the algorithm?
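For what it's worth, the checker "knows" about parallelism only because the spec declares multiple processes whose atomic steps may interleave; it then explores every possible interleaving. Here is a toy Python re-creation of that idea (not the article's actual spec: the validate/transfer split, the trade names, and the schedule are all made up, and it uses two outgoing trades rather than the self-trade from the quote, but the mechanism -- a validation that goes stale before the transfer -- is the same):

```python
# Enumerate every interleaving of two non-atomic trades and look for a
# state where an item count goes negative (i.e. the item was duplicated).
from itertools import permutations

def run(schedule, trades):
    """Execute one interleaving of trade steps against shared state."""
    owns = {"Alice": 1, "Bob": 0, "Carol": 0}  # copies of the item held
    validated = {}
    for tid, step in schedule:
        giver, recipient = trades[tid]
        if step == "validate":
            validated[tid] = owns[giver] >= 1   # check the giver owns it...
        elif step == "transfer" and validated[tid]:
            owns[giver] -= 1                    # ...but the check may be stale now
            owns[recipient] += 1
    return owns

def interleavings(trades):
    """Every ordering in which each trade's validate precedes its transfer."""
    steps = [(tid, s) for tid in trades for s in ("validate", "transfer")]
    for perm in permutations(steps):
        if all(perm.index((t, "validate")) < perm.index((t, "transfer"))
               for t in trades):
            yield perm

# Two trades both try to give away Alice's single item.
trades = {1: ("Alice", "Bob"), 2: ("Alice", "Carol")}
bad = [s for s in interleavings(trades)
       if min(run(s, trades).values()) < 0]
print(len(bad))  # prints 4: the schedules where both trades validated first
```

A model checker does essentially this (far more cleverly) over the steps you declared as atomic. So it isn't a leaky abstraction of the implementation; the interleaving granularity is part of the model, and if you declare the whole trade as one atomic step, the checker will indeed report it as correct.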
I don't know how, but I expect you can indeed model actual transactions.
 https://youtu.be/_9B__0S21y8?t=955 (15:55)
Much better to accidentally get a false positive than a false negative.
 https://github.com/endiangroup/compandauth  https://endian.io/articles/compandauth/
In comparison, proof assistants take a deductive approach, requiring you to show - step by step - how the requirements are satisfied by the definition of the system. Dependently typed languages move some of that burden to the type system.
1. You can choose the level of modeling, so you can quickly spec out the high level and find bugs in that before moving on to the low level (if you want)
Of course extraction has its own benefits. Coq is a pretty cool tool :)
The use of model checking means that TLA+ is basically a glorified fuzzer: it attempts to find a refutation of the model you define (such a model, in turn, is usually an abstraction of the system you actually care about, not something that gets verified "end-to-end" as in Agda or Coq). Any such refutation is a genuine proof that the model fails in some way; but a failure to find one does not amount to a proof of correctness!
TLA, the logic at the heart of TLA+, is a linear temporal logic, which is, indeed, an example of a modal logic, but also the most ubiquitous and successful type of logic in program verification. I think that in computer science in general and formal methods in particular, temporal logic is more well known than all other modal logics combined (and probably more than even the name or concept of modal logic).
> which could also be understood as (co-)monads
This is only relevant if you're trying to represent temporal logic in some functional language, which TLA+ isn't.
> The use of model checking
TLA+ is a specification (and proof) language that's independent of how it's verified (or how the proofs are checked). There is a model checker for TLA+, as well as a proof assistant.
> a glorified fuzzer
Model checkers and fuzzers are not at all alike. A fuzzer runs a program many times, while a model checker doesn't run it even once, but rather checks it. To see how different checking is from running, consider that a model checker like TLC (a model checker for TLA+) could completely check in a number of seconds an infinite (and even uncountable) number of executions, each with an infinite number of steps.
> a failure to find one does not amount to a proof of correctness!
A failure to find a counterexample most definitely amounts to a proof of correctness. It is not a deductive proof in the logic, but a proof nonetheless (one that employs the model theory of the logic as opposed to its proof theory). That's the meaning of the term "model checker": it automatically checks whether some program is a model for some formula (i.e. a structure that satisfies the formula).
> not something that gets verified "end-to-end" like in Agda or Coq
This is misleading. It is possible to do "end-to-end" in TLA+, pretty much as it is in Coq, which is to say that it's possible only on very small programs, usually in research settings. For example, see here.
But the main difference between TLA+ and Coq/Agda is that it's not a research tool but a tool intended for use in industry on large, real-world systems. Now, all formal methods -- whether Coq or TLA+ -- have severe scalability limitations. Very roughly, one could verify about 2000 lines of specification -- that specification can be actual code (i.e. a ~2000 line program) or a high-level spec that abstracts 200K or even 2M lines of code. Because TLA+ is intended for use in industry, it is designed to be used on high-level specs more than code, because most programs that require verification are not small enough. But if you have a 200K-2M LOC program, it's not as if end-to-end verification is a choice. Remember that the biggest programs ever to be fully verified end-to-end were about 1/5 the size of jQuery, and took world experts years of effort -- not a feasible option for most software.
Well, since the user was specifically asking for a comparison of TLA+ with systems like Coq, Agda and Idris, I should think that the question "how would I represent the logic TLA uses, given a language like Coq/Agda/Idris i.e. a functional (dependently-typed) language?" is quite relevant! (Perhaps even more relevant than the issue of what happens to be better known among academics in the formal-methods community.)
> A failure to find a counterexample most definitely amounts to a proof of correctness. It is not a deductive proof in the logic, but a proof nonetheless
This is of course true if the model checking is exhaustive. Given that TLA+ is most often tuned "for use in industry on large, real-world systems" however, most uses of model checking are probably not going to exhaustively check the relevant state space.
(Similar to the underlying reason why the vast majority of users in industry would probably focus on the model-checking component in TLA+, not the fact that it also happens to include a proof assistant.)
> ...Now, all formal methods -- whether Coq or TLA+ -- have severe scalability limitations. Very roughly, one could verify about 2000 lines of specification...
(This is of course a very real challenge, but type checkers are far more scalable than that, and there are reasons to expect that this may generalize to other "local", "shallow" properties like simple checking of pre- and post- conditions ("design by contract"), or the sort of resource-based and region-based checking that is now being used for systems like Rust. This is where end-to-end methods might have real potential.)
If it isn't exhaustive, it isn't model checking (i.e. it doesn't check that your program/spec is a model of a formula that describes some correctness property).
> most uses of model checking are probably not going to exhaustively check the relevant state space.
Ah, so what happens there is that you check a finite instance derived from your specification. I.e., instead of checking a specification for n ∈ Nat, you check a derived specification where n ∈ 0..4. But it's a proof of the correctness for the specification that is actually checked. Obviously, a model checker can't prove something it can't check (same goes for deductive proofs).
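The finite-instance idea can be sketched outside TLA+ too. Assuming a toy two-process counter system (everything below is illustrative; this is not TLC's actual algorithm, which adds symmetry reduction and other tricks), exhaustive checking is just a breadth-first search over every reachable state:

```python
# "Checking a finite instance": explore all reachable states of a small
# system and test an invariant in each one. Bounding N plays the role of
# replacing n in Nat with n in 0..4 in the derived spec.
from collections import deque

N = 4  # the bounded instance: counters range over 0..N instead of all of Nat

def initial_states():
    return [(0, 0)]

def next_states(state):
    """Two processes; each may increment its own counter modulo N+1."""
    x, y = state
    return [((x + 1) % (N + 1), y), (x, (y + 1) % (N + 1))]

def invariant(state):
    x, y = state
    return 0 <= x <= N and 0 <= y <= N  # a real spec would check real properties

def check():
    seen, queue, violations = set(), deque(initial_states()), []
    while queue:
        s = queue.popleft()
        if s in seen:
            continue
        seen.add(s)
        if not invariant(s):
            violations.append(s)
        queue.extend(next_states(s))
    return seen, violations

states, violations = check()
print(len(states), len(violations))  # prints "25 0": all states checked, none bad
```

Because the search visits every reachable state of the bounded instance, the result really is a proof about that instance, which is exactly the distinction being made above: a proof about `n in 0..4`, not about `n in Nat`.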
Definitely temporal logic, then :) FP is rare in formal methods, but is well known in programming language research. There is a small intersection of the two fields (proof assistants, dependent types), but I doubt it even makes up 10% of formal methods research (most of which is around model checking and static analysis).
> but type checkers are far more scalable than that, and there are reasons to expect that this may generalize to other "local", "shallow" properties like simple checking of pre- and post- conditions ("design by contract"), or the sort of resource-based and region-based checking that is now being used for systems like Rust. This is where end-to-end methods might have real potential.
We know what kinds of properties you can scale, and the method of checking them -- whether type checking or any other form of abstract interpretation -- is not relevant (none can do a better job than the others). Those are the inductive (or, if you like programming language theory, compositional) properties. Unfortunately, most deep properties are not inductive; this means that proving them with scalable inductive techniques (type preservation is an inductive property, as are the inferences in any deductive system) would always require a long proof that exists somewhere in a huge search space.
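A tiny Python sketch of that distinction (all names illustrative):

```python
# An inductive property needs only a one-step check: "if it holds before
# any step, it holds after". Cost scales with the state space, never with
# the length of executions.

def step(x):
    return (x + 2) % 1000   # toy transition relation

def is_inductive(prop, states):
    """One-step preservation check over all states. (A full proof would
    also check that the initial states satisfy prop.)"""
    return all(prop(step(x)) for x in states if prop(x))

even = lambda x: x % 2 == 0   # inductive: preserved by every single step
weak = lambda x: x != 3       # true of every state reachable from 0 (all even),
                              # yet NOT inductive: 1 satisfies it, step(1) = 3 doesn't

print(is_inductive(even, range(1000)))  # prints True
print(is_inductive(weak, range(1000)))  # prints False
```

Proving `weak` therefore requires either strengthening it into an inductive invariant (here `even`, which implies it on reachable states) or reasoning about long executions, and finding that strengthening or that long proof is exactly the search-space blowup described above.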
So, type checkers, model checkers and static analysis are all scalable for the same properties, and not scalable also for the same ones.
Also, there are no end-to-end methods, as far as I know. There are just projects small enough, and people devoted enough to do incredibly tedious work.
Man it'd be super fun to write an overview essay on all the kinds of modal logic used in CS that _aren't_ temporal logic.
For a good example, see Appendix B in Diego Ongaro's dissertation, the formal Raft specification. After a little of the Learn TLA+ book, I was able to muddle through the spec enough to implement it.
This website chooses to show the ASCII version that you actually type to get the "real" pretty-printed syntax.
I'm just wondering because of the number of fairly basic questions here. I went to a middling midwestern state school 20ish years ago and that was part of the curriculum. Perhaps we have lots of non-CS-graduates here?
This is not to denigrate the discussion. I'm just surprised that this is news.
Then you know how to write some webpages with the shiny new JS framework; congrats, you're a "qualified" software guy. Now go write that sh*tty code...
Math? What math?
CS background? Algorithms? If you're not going to interview at FAANG, why bother?
If you just want to code a few pages and write some SQL queries, there's no need for any of that. And 80% of enterprises only need CHEAP engineers.
Sorry for the rant; it's just another morning with lots of idiots bothering me.
You want me to say what the terrible thing is? The code that very FEW people actually use, that no one has EVER VERIFIED.
Would you dare to install these pieces of crap on the plane you're sitting in?
Neither formal methods class was particularly full, but neither was OS architecture, advanced algorithms, compiler design... machine learning and AI were both packed to the rafters though.
Kids entering university are picking based on current trends, and formal methods are not trending.
(Proof assistants are fun, though.)
OTOH my recall of a curriculum from 15-20 years ago should not be relied upon.
My alma mater had everyone take an intro programming sequence, discrete math, and algorithms. Beyond that it was up to you. It could be all proofs, all systems programming, or a mix of the two. (I went with systems programming.)
I think so. I'm an EE.
I'm a math major, by the way.
(You can write Java code and invoke it during model-checking if you really need to, but I haven't come across any situation where it's actually necessary.)
I had the impression it was about somehow writing a program in a more strict language and then exporting it to Java.
I don't do much algorithmic design.
I would recommend Lamport's video course and TLA+ hyperbook (Lamport says he's stopped working on it, but it's fairly complete). Note that both Learn TLA+ and Practical TLA+ don't teach much of TLA+, but rather a language called PlusCal that compiles to TLA+. Some programmers may find it easier to start with because, unlike TLA+, PlusCal resembles a programming language (and like all programming languages, it is much more complex than TLA+, but more familiar). Hillel's book also contains some good real-life examples.
You can also find examples and various other links posted on the TLA+ subreddit.
Most evidence indicates that using familiar concepts aids learning. That makes me hypothesize that the best recommendations are the more mathematical ones for folks strong in math and the programmer-centric ones for those strong in programming. That also implies that, for each of these formal topics, we need one version for each audience to draw more people in.
Off-topic: from your linked videos, this video at the 2min 49s mark had me absolutely creased lmao http://video.ch9.ms/ch9/bab3/13798ec2-8479-4062-8aaf-9421567...
After those two, I'd probably recommend the original _Specifying Systems_. It can be a tough read, but it's a comprehensive dive into how TLA+ works mathematically and everything that you can do with it. You can get it for free here: https://lamport.azurewebsites.net/tla/book.html
I'd also recommend practicing by applying it to problems you've already solved in your place of work. That way you can compare it against a real system you know inside and out. I talk more about that here: https://www.hillelwayne.com/post/using-formal-methods/
Does that answer your question?
Now, this topic is rather abstract and difficult to comprehend, and I thought that introducing two equivalent ways of doing it right from the beginning was too much cognitive load, considering the difficulty of the subject.
I hope this material from the article is simpler for learners.
TLA+ has some tools to do this, if you write your implementation in some TLA+-specific language like PlusCal. But this would be the sort of "end-to-end" verification that TLA+ proponents would regard as only being feasible for relatively simple software.
Relatedly, JML basically adds support for "design by contract" specifications to the Java programming language - expressing preconditions, postconditions and invariants. These can be quite useful, but the focus with TLA+ - in this thread and elsewhere - seems to be rather on "deeper" properties that encompass entire systems in a more global and cross-cutting way, and are thus inherently harder to manage with simpler tools.
Are they a different way of doing the same thing?
Or two completely different things?
That being said, TLA+ specifications have uncovered really hairy concurrency bugs which would be a nightmare to unit test - for instance, Amazon found bugs in DynamoDB which required 35 steps to reproduce.
It sounds a lot like writing your specs as tests that are then satisfied by the implementation code.