
Learn TLA+ (2018) - Twirrim
https://learntla.com/introduction/
======
zdw
Whenever proof tools are brought up, I tend to wonder how one goes from the
proofs as written to actual code that can run in production in a way that
guarantees the proof holds - how does that happen?

The linked page even says "It’s the software equivalent of a blueprint" \- how
does one go from that blueprint to something that functions?

~~~
shriphani
Look at Idris: proofs are expressed using dependent types, which allow full
propositional logic, and implementations exploit these proofs in a Haskell-
like language that allows imperative as well as purely functional code:
[https://www.idris-lang.org/](https://www.idris-lang.org/)

~~~
wuschel
There are more: F* compiles to OCaml/F#/C/ASM/WASM [1] and its variant Low*,
and ATS [2], which compiles to C.

[1] https://www.fstar-lang.org/
[2] http://www.ats-lang.org/

------
senthil_rajasek
Here is a video of Leslie Lamport presenting TLA+ at Build 2014. It has a nice
intro to TLA+ from the creator of TLA+ himself!

[https://channel9.msdn.com/Events/Build/2014/3-642](https://channel9.msdn.com/Events/Build/2014/3-642)

~~~
twelvechairs
It's always a joy to watch Leslie Lamport. He's one of the few people in
computer science who are really worth looking up to. Everything he says and
does is incredibly eloquent and purposeful.

------
hwayne
Every time this frontpages I feel super shitty about not getting back to it.
I've learned a _lot_ about how to teach TLA+ in the past three years, and
there's a lot I know I can improve about this guide. One of these days, I
promise!

------
keyanp
Does anyone have personal/anecdotal experience with TLA+ being used in a
professional setting? I'm interested in hearing stories/use-cases of practical
application.

I've always been intrigued by formal methods but when I try to think of an
application of TLA+ my problems are either too trivial to need formal analysis
or too complicated for formal analysis to appear feasible.

~~~
Twirrim
Yes indeed. I've seen it in use at AWS across a number of critical components.

The team I was on at AWS used it for the implementation of a particular
component that needed to operate at large scale and had absolutely zero
tolerance for failure. Two engineers learned TLA+, started writing up the
logic, and discovered various ways in which the logic would work at small
scale but wouldn't scale up to parallel operation across multiple servers.
It's fair to say TLA+ saved our bacon there before we'd written a single line
of service code.

In the end we actually got the component out to production faster than
predicted. The TLA+ spec gave a solid shape to the actual production code, and
we were quickly able to show that the code was valid and that things were
working correctly.

In my current role in Oracle Cloud Infrastructure, there is actually a small
dedicated team of engineers that help services use TLA+ for components, headed
by Chris Newcombe, one of the authors of the TLA+ whitepaper that came out of
Amazon ([https://lamport.azurewebsites.net/tla/formal-methods-amazon.pdf](https://lamport.azurewebsites.net/tla/formal-methods-amazon.pdf))

My current team has just started exploring it for use for a component that
similarly has little tolerance for error. It's not in the same territory of
"if it fails it's catastrophic", thankfully, but if it fails it's a major
inconvenience for customers.

------
aychtang
Hillel Wayne (who wrote this guide) also has a great blog with many posts
related to this topic: [https://hillelwayne.com/](https://hillelwayne.com/)

Leslie Lamport (who created the language, has written many seminal papers on
distributed computing, and is a Turing Award winner) has also published an
excellent book and video series introducing the design and practical usage of
TLA+, you can find the video series at:
[http://lamport.azurewebsites.net/video/videos.html](http://lamport.azurewebsites.net/video/videos.html)

------
ex4
Any thoughts on Alloy?

I wonder what's preventing formal methods from becoming a more standardized
practice when designing complex interconnected systems or microservices.

I mean, I know they are not as "friendly" as writing tests using mocha or
pytest, but why can't someone build an npm version of it and use it in CI? Is
there a technical challenge to doing that? Perhaps I'm missing something.

~~~
cle
They're pretty hard to learn. Formal methods are relatively abstract, and the
feedback loop is a lot less obvious: in TLA+/PlusCal, you aren't writing
instructions; you're defining state spaces and the relationships between and
across them. Since it's not something you generally use often, it also gets
rusty quickly. In a lot of dev jobs, it's just not worth the cost.
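
To make that contrast concrete, here is a minimal, hypothetical PlusCal sketch
(the module name, processes, and invariant are invented for illustration):
rather than giving instructions to execute, you declare state and transitions,
and the TLC model checker explores every interleaving against an invariant:

```tla
---- MODULE Counter ----
EXTENDS Integers

(* --algorithm counter
variables x = 0;              \* initial state

\* Two processes race to increment x; TLC checks all interleavings.
process Incrementer \in 1..2
begin
  Inc:
    x := x + 1;               \* a state transition, not a line of code to "run"
end process;
end algorithm; *)

\* An invariant to check over every reachable state, e.g.:
\*   Bounded == x <= 2
====
```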

------
whoevercares
I know AWS uses it heavily in the distributed algorithm/system domain. I'm
wondering if it is also useful for verification in other domains like data
modeling and machine learning. So far I haven't seen much TLA+ used outside
the core systems/infra engineering/database groups.

------
bluejekyll
Preface: I don't know TLA+ or Coq, but I get the impression from the examples
that it is strongly reminiscent of test-driven development.

I can see the advantage, but I wonder if there’s potential for writing the
proofs in the target language, verifying the logic, then building the code,
and verifying the code meets the proofs.

Is this something anyone is looking into? I could imagine a macro in Rust for
example that is TLA+, that takes standard test input, verifies the logic, then
you feed in the real implementations.

~~~
tonyarkles
Potentially. I've dabbled in TLA+ a few times and have had mixed results: one
project turned out spectacularly well, one ran into a problem where I really
couldn't figure out how to model the problem, and one was a failure, but TLA+
proved that it was impossible to build a reliable system given the hard
requirements (interfacing with hardware from a 3rd-party vendor).

I agree with you in the sense that, after the spectacularly good project, it
would have been great to run code gen and get a skeleton to fill in.

I disagree with you in the sense that TLA+ as a language forces you to think
about the problem you’re modelling from a very different perspective than how
you’d actually write it in an OO/FP/whatever language, and in my experience so
far, that has been a very powerful thinking tool. TLA+ doesn’t constrain you
to thinking about “how would I write this in Java”, but more in a “how should
the communication patterns in this system work” way.

~~~
kilotaras
> failure but TLA+ proved that it was impossible to build a reliable system
> based on the hard requirements

That's probably the most useful result TLA+ can give in comparison to not
using it.

Understanding that requirements are impossible after 1-2 weeks of modelling is
way better than after 6+ months of coding.

~~~
hwayne
One of my favorite war stories was losing a client after we spent two days
trying to model their problem and discovered they didn't actually know what
they wanted; they decided to just scrap the project and do something else.

------
SlowRobotAhead
> we use = when declaring variables, while in the algorithm itself we use :=.
> Outside of variable definition = is the comparison operator. Not equals is
> written as /=

I feel like syntax that behaves one way only in certain sections isn't ideal.
Not a knock; it's just apparently a problem only <your favorite language> gets
right.
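
For context, the guide is describing PlusCal. A tiny hypothetical fragment
(names made up for illustration) showing all four forms side by side:

```tla
(* --algorithm operators
variables x = 0;        \* = initializes a variable in the declaration

begin
  Step:
    x := x + 1;         \* := is assignment inside the algorithm body
    if x = 1 then       \* = is comparison outside declarations
      skip;
    elsif x /= 1 then   \* /= is "not equals"
      skip;
    end if;
end algorithm; *)
```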

------
dang
[https://news.ycombinator.com/item?id=19661329](https://news.ycombinator.com/item?id=19661329)

------
raister
How does this thing appear on the front page from time to time? How is it
pushed? It's been discussed over and over...

------
Eratosthenes
I'm planning to use this to validate parts of a new cryptocurrency I'm working
on.

------
Jahak
Thank you!

