Hacker News
Modeling Redux with TLA+ (hillelwayne.com)
132 points by hellerve 11 days ago | 33 comments

His main site, which he mentions at the end, is learntla.com. That's a good tutorial for a subset of TLA+ that will get stuff done without having to read a bunch of books on the heavier material. He and I both also recommend Alloy for a taste of formal methods, or blueprints as fouc said, since it's designed for beginners and has good guides and tutorials. TLA+/PlusCal with its model checker is better at modeling order of execution (especially concurrency and distributed systems), whereas Alloy is focused on the structure of your program. Finally, Design-by-Contract combined with property-based testing, or AFL-style testing with properties/contracts as runtime checks, is probably the combo most applicable to most programming languages and situations. If you know conditionals, you can use DbC in lots of situations.
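To make the DbC + property-based-testing combo concrete, here's a minimal Python sketch (the `binary_search` example and all names are illustrative, not from the article): the contract assertions double as the test oracle, and random inputs play the role of generated properties.

```python
import random

def binary_search(xs, target):
    """Find an index of target in sorted list xs, or -1 if absent."""
    # Precondition (Design-by-Contract): input must be sorted.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "xs must be sorted"
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            result = mid
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    else:
        result = -1
    # Postcondition: the result actually locates the target, or the target is absent.
    assert (result == -1 and target not in xs) or xs[result] == target
    return result

# Property-based check: on random inputs, the contract assertions act as
# runtime checks -- any violation raises immediately, AFL-style.
for _ in range(1000):
    xs = sorted(random.randrange(100) for _ in range(random.randrange(20)))
    binary_search(xs, random.randrange(100))
```

A dedicated library (e.g. Hypothesis) would shrink failing inputs for you, but the core idea is just this loop.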




Sidenote: I remember my teacher Maarten van Steen, who taught distributed systems at my university, talking about Leslie Lamport, who created TLA+. If you don't know about any of this, I invite you to take a look.

Background info on Lamport [1]. Maarten van Steen offers his distributed systems book for free; see [2].

1. https://en.wikipedia.org/wiki/Leslie_Lamport

2. https://www.distributed-systems.net/index.php/books/distribu...

Yes Lamport started and maintains TLA+. I found his hyperbook to be a good introduction to TLA+ and PlusCal. http://lamport.azurewebsites.net/tla/hyperbook.html

I don't see the relevance of knowing about Leslie Lamport to learning TLA+ - what does one have to do with the other? Would it be valuable to learn that he dresses like a clown, or like Indiana Jones, at conferences? Would that affect my decision to learn TLA+?

"Would it be valuable to learn that he dresses like a clown, or like Indiana Jones, at conferences?" Of course not. That question is too simple, and you know it! ;)

Could you instead come up with a question that provides a counterexample? Could you think of a question where knowing something about Leslie Lamport does help with learning TLA+? Treat it as a fun ten-minute exercise, or don't; it might stretch your mind a bit, since you claim you can't see the relevance.

Here is why I like to know about the authors who founded a field:

Knowing the author may give some context for TLA+. While not strictly necessary, it may be interesting background information that people don't know about. I presume that Lamport has one of the most authentic reasons, if not the most authentic reason, for creating TLA+ in the first place. Reading that reason may motivate people to learn more about TLA+, or demotivate them - but for the right reasons!

Furthermore, discussions can be associative: gabuzome gave recommendations for books written by Lamport. I didn't know Lamport wrote any (I know very little about him), and now that they've been recommended, I'm happy to know that the founder of TLA+ also writes books worthy of recommendation. It personally gives me more confidence to pick one up and take a crack at it.

Plus his website has a bunch of good resources linked from it.


Affect my decision to learn TLA? Probably not. Entertaining? Interesting? Something I would want to know? Yes.

Hillel gave a pretty interesting talk on TLA+ at last year's StrangeLoop: https://youtu.be/_9B__0S21y8

If you are new to TLA+ and just want to get the basic idea and use cases, I recommend the talk.

Edit: I think he also brought homemade granola or something. So attending his talks in person has benefits.

It was 7 lb (3.18 kg) of sesame brittle.

Indeed, an entertaining and practical talk on TLA+.

_That said_, I've been making a lot of homemade granola recently. Maybe I should try bringing that next time!

Here's a recipe I really like: https://www.epicurious.com/expert-advice/how-to-make-granola...

> TLA+ is a formal specification language. It’s a tool to design systems and algorithms, then programmatically verify that those systems don’t have critical bugs. It’s the software equivalent of a blueprint.

Very cool.

Just remember: blueprints can also be wrong.

Yeah, it's extremely important to note that any correctness technique - tests, types, contracts, formal methods - only provides _more_ confidence, not _complete_ confidence. We have to think in terms of defense in depth.

Cool, and impossible to apply to real-world projects due to budgets or deadlines.

There is an [interesting paper](http://lamport.azurewebsites.net/tla/formal-methods-amazon.p...) about how they use TLA+ at AWS to find bugs in their algorithms, and eventually prove correctness. So if you use AWS and rely upon its correctness, you have indirectly used a real-world application of TLA+. Someone else mentioned Facebook here, I think they also use formal methods.

Sure, formal methods are irrelevant for your standard CRUD application. However, the AWS use case got me interested in applying TLA+ to a real-time collaboration feature in the application I develop. It involves multiple clients communicating over WebSockets, locking and unlocking resources, etc. Expressing the algorithm in TLA+ has:

1. vastly improved my understanding of what is going on and what the potential bad scenarios are. Simply stepping away from the code, modeling the high-level algorithm, and writing down the invariants that should always hold makes a big difference.

2. identified real bugs that can occur (and probably have).

So yeah, you can't apply formal methods for everything. But I was pleasantly surprised by how easy it is to express a distributed algorithm in TLA+ and start proving statements about it. Somehow TLA+ manages to bridge the gap between engineering and more theoretical computer science.
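The core move TLA+'s model checker (TLC) makes - enumerate every reachable state of the model and check the invariant in each one - can be sketched in plain Python. This is a hypothetical two-client locking model in the spirit of the parent's WebSocket feature, not their actual spec; all names are illustrative.

```python
from collections import deque

# Each of two clients is "idle", "waiting", or "holding";
# the lock is free iff no client is "holding".
INIT = ("idle", "idle")

def next_states(state):
    """All states reachable in one step (acquire only when the lock is free)."""
    lock_free = "holding" not in state
    for i, phase in enumerate(state):
        if phase == "idle":
            yield state[:i] + ("waiting",) + state[i + 1:]
        elif phase == "waiting" and lock_free:
            yield state[:i] + ("holding",) + state[i + 1:]
        elif phase == "holding":
            yield state[:i] + ("idle",) + state[i + 1:]

def check_invariant():
    """BFS over every reachable state, asserting mutual exclusion in each."""
    seen, queue = {INIT}, deque([INIT])
    while queue:
        state = queue.popleft()
        # The invariant: at most one client may hold the lock.
        assert state.count("holding") <= 1, f"invariant violated in {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

print(check_invariant())
```

Remove the `lock_free` guard and the assertion fires on the double-hold state - which is exactly the kind of "real bug that can occur" the parent describes TLA+ surfacing, except TLC does this over vastly larger state spaces with temporal properties too.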

Absolutely true. There is no way that a web tech company like Facebook could apply formal methods in such a fast moving world. Also, don't click this link, it doesn't exist. :) https://research.fb.com/publications/moving-fast-with-softwa...

People said the same things about unit and integration tests a decade ago.

And about optimizing compilers, neural networks, and static program analysis four decades ago.

Computer science is a fast-moving field, sure. But not all of today's research brings tangible results by the end of the week, and we should be comfortable with that.

I generally agree, but I can say with confidence that formal methods will get adopted in the future.

Here is why: testing is fast becoming adopted by everyone in our industry. Why? Because you cannot afford not to write tests if your system is large or important enough. However, our current testing methods cannot prove the absence of bugs, only their presence, as Dijkstra was fond of saying. So the best we can currently tell our clients on the bug-free/security question is: "We wrote it according to X standard and wrote tests to cover those cases." Formal methods allow us to prove a system is bug-free/secure. They are the evolution of testing. I do not know how far off it is, but because we have already adopted testing, I believe formal methods will get adopted as well.

Formal methods don't magically make a specification bug-free/secure, though, nor do they mean you won't make bugs in the formal verification code you're writing. So what are you actually proving, other than adding essentially another layer of testing? Can anyone tell a client that a system has been "proven" correct? When we use these words with people who don't completely understand the implications, we need to be more precise about what they mean. That's what Leslie Lamport is all about: being precise!

My company still says that today. All the while saying we should do them, but without impacting our velocity. Yeah, okay.

Not that unreasonable, depending on what your use case is. There are some good, very approachable guides out there like this one[0].

[0] https://pron.github.io/posts/tlaplus_part1

Is it possible to compile algorithms from PlusCal to a traditional programming language? That way you could have a proven core of algorithms in your actual software project and just autogenerate the code for them, instead of going manually from proven algorithm to hand-written implementation.

There is a PlusCal -> Go compiler[1] and, as I mentioned in a comment on a thread linked in another response to your question, there are tools to translate C and Java code to TLA+ for verification. But as a relatively experienced TLA+ user, I see almost no value in doing the first, and little practical value in the second.

The kind of properties you want to reason about in TLA+ are so global and fundamental, that the code you'd end up writing would both be too far removed from the high-level spec, and the process of translation would be negligible in the grand scheme of things, so that an automatic translation, if it is able to produce usable code at all, wouldn't really save you any time.

That's not to say that you shouldn't also reason about more local properties at the code level, and there are good code-level verification tools just for that (advanced ones include Frama-C for C, OpenJML/Krakatoa for Java, and SPARK for Ada).

It is possible to use TLA+ (and other tools, like Coq or Isabelle) for what's known as end-to-end verification, which means verifying the important global properties all the way down to the code level (and even machine-code level), but the process is so laborious that it is virtually never worth the effort, and, in fact, it has never been achieved for any but very small programs (and even then at great cost).

[1]: https://github.com/UBC-NSS/pgo

Thanks for your extensive answer!

> The kind of properties you want to reason about in TLA+ are so global and fundamental, that the code you'd end up writing would both be too far removed from the high-level spec, and the process of translation would be negligible in the grand scheme of things, so that an automatic translation, if it is able to produce usable code at all, wouldn't really save you any time.

Hm - the examples in the Go compiler project look relatively good, though - would it make sense to use something like this for small portions of a system? I understand that modeling and transpiling an entire system is close to impossible, but a more manageable use case might be a collection of business-logic algorithms that are subject to frequent change. In that case you would still want to validate each version after a change, and it would save a lot of time to just autogenerate the implementation code for that particular algorithm.

Sure, it's possible to generate particular subroutines by compiling PlusCal to code, but the gains are far too small compared to the total gains of using TLA+. The kind of subroutines you'd want to model at the code-level in TLA+ would normally be very complex concurrent routines, using low-level operations that the PlusCal-to-source compiler would likely not know about (say, memory barriers), and in any case, you'd spend 98% of the time figuring out the algorithm in TLA+, anyway. Compilation would, therefore, save you 2% of your effort. As to the correctness of the generated code, we're talking very small routines here, that a careful manual translation is unlikely to get wrong.

In short, the likelihood that a PlusCal compiler would know to use your chosen libraries so that you wouldn't need to modify the code anyway is low, and even if it did, the savings in both correctness and effort would be slim.

A more interesting possibility would be compiling TLA+ properties into code-level specifications (such as Java's JML or C's ACSL), i.e., extracting various local properties from global ones, as that's actually "interesting" work.

In general, though, you must be careful when learning about new techniques reported in research papers. In formal methods (and in many other research areas) there's a HUUUUUGE gap between what has been achieved "in the lab" and what can be made into a practical and affordable tool for industry use. TLA+ is one of a handful of formal method tools that's been put to good use in industry.

That makes total sense. Thanks!

Although TLA+ can't do this, code generation is used extensively in some other tools, like the B method, Circus for concurrency, Eshersoft's Perfect, and some provers (Isabelle/Coq). That's a good thing in my book for critical software, given that it knocks out errors people introduce in implementations. The only drawback is that it creates simplistic code that might not be what you want in performance, readability, or even security (e.g. side channels). Even with those drawbacks, it can become a reference implementation to equivalence-check/test your hand-coded implementation against. In security, one might also mix verified/hardened implementations for untrusted input with optimized ones for trusted inputs, if they come from separate, authenticated places.
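The reference-implementation workflow above can be sketched in a few lines of Python. Popcount is chosen purely as a stand-in: the simple, obviously-correct version plays the role of generated/verified code, and random equivalence testing checks the hand-optimized version against it.

```python
import random

def popcount_reference(n):
    """Simple, obviously-correct bit count (stands in for generated code)."""
    return bin(n).count("1")

def popcount_optimized(n):
    """Hand-optimized version using Kernighan's trick: n & (n - 1)
    clears the lowest set bit, so we loop once per set bit."""
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

# Equivalence testing: the reference implementation is the oracle
# for the hand-written one across random 32-bit inputs.
for _ in range(10_000):
    n = random.getrandbits(32)
    assert popcount_reference(n) == popcount_optimized(n)
```

For real verified code you'd also want exhaustive checks on small inputs and boundary cases, but the shape - oracle plus differential testing - is the same.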

Lots of tradeoffs one can make with spec-to-code generation. TLA+ just isn't in that game for most languages yet. Fortunately, it's straightforward to convert TLA+ specs to code by hand. Thinking about how to model or fix your algorithm is where the trouble will be. ;)

There is some discussion on it here about TLA+ itself: https://news.ycombinator.com/item?id=14373359

It's not about JavaScript, though, but C and Java.

Thanks, I'll look into that - I read that Coq can generate Haskell etc., so I was wondering if something similar exists.

Any takers for doing this in lisp? ;)

Not formal verification, but you can specify and validate the application state of SPAs written in ClojureScript using clojure.spec (https://clojure.org/about/spec).

Example -- (https://github.com/Day8/re-frame/blob/master/examples/todomv...)

You might wanna check out ACL2, which is a theorem prover rooted in common lisp: http://www.cs.utexas.edu/users/moore/acl2/
