Here are some slightly dated references to slides on the topic:
And thanks for the links!
It was the first time he'd used TLA+ (or TLA) at all, and it took a bit of experimenting to get used to it, but he soon generated a solid logical model and exposed a number of fringe flaws in the original proposed solution.
From TLA+ model to completed Java application took very little time at all; the logic was all there already, it just needed to be fleshed out in the language. It was also easy to split the work amongst multiple programmers. He's argued that the total time, including learning TLA+, was less than writing the application from scratch in Java and discovering the bugs as they went along.
About the only thing he disliked about it was that almost all (if not all) of the tooling around it requires Eclipse, and he hates that IDE with an almost unholy passion :D
This has nothing to do with you or your statement, but this is the pre-eminent issue in our industry. Everything absolutely has to be logically correct, or we expose users to flaws in both security and stability that can, in extreme cases, be the difference between life and death and, in regular cases, be the difference between being cracked or not.
Best to keep it real about risks so your solutions match requirements. Most companies are happy to sell patches to broken software or offer software that passed many checklist items for compliance. They have lawyers for the rest.
Hence calling it an extreme case.
> Hacks happen by the millions with headaches being the main result along with lost time and money.
...which is a good reason to make software correct.
However, I'm not arguing about the ROI and efficiency of attaining absolute correctness; my point is that software should be correct. Ideally.
We're happy to settle for somewhere on the low side of correctness, but I don't think that is necessarily healthy or good for the industry.
And the satisfied customer effect can't be ignored. ;)
It does seem like online gaming companies are at the forefront of distributed systems implementation. See League of Legends' use of conflict-free replicated data types, for example: http://highscalability.com/blog/2014/10/13/how-league-of-leg...
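For readers unfamiliar with CRDTs: the core idea is that replicas can update independently and merge later, with the merge guaranteed to converge. A minimal sketch of one of the simplest CRDTs, a state-based grow-only counter (G-Counter), in Python (the class and method names are illustrative, not from the linked article):

```python
# Grow-only counter (G-Counter): a state-based CRDT.
# Each replica only increments its own slot; merge takes the
# element-wise max, which is commutative, associative, and
# idempotent -- so replicas converge regardless of merge order.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        # The counter's value is the sum over all replicas' slots.
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max of the two state vectors.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)
```

After two replicas exchange states and merge, both report the same total, no coordination required at update time.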
If anyone is interested, here's the original article (mentioned elsewhere) as a PDF:
Once you've spent a significant effort in writing and verifying a formal spec for a program, there should be an automated mapping between it and the implementation, or at least a template of such implementation.
Formal verification deals with proving that the actual code you write implements a formal specification. This is a much, much harder problem. Usually, it involves cajoling a theorem prover by adding lots of annotations to your code. There's been some impressive work with the Ironclad project from MSR which brought this down to ~5 lines of annotation per single line of implementation code - within striking distance of unit testing! Still, probably a decade or more away from widespread use.
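To give a flavour of what those annotations look like: in tools like the ones Ironclad built on, you state preconditions, postconditions, and loop invariants alongside the code, and the prover checks them statically. As a loose runtime analogy only (this is ordinary Python with asserts, not Dafny/Ironclad syntax, and the function is a made-up example):

```python
def int_sqrt(n):
    # Precondition: input must be a non-negative integer.
    assert n >= 0
    r = 0
    while (r + 1) * (r + 1) <= n:
        # Loop invariant: r*r never exceeds n.
        assert r * r <= n
        r += 1
    # Postcondition: r is the floor of the square root of n.
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```

In a real verifier these annotations are discharged at compile time by a theorem prover rather than checked at runtime, which is where the several-annotations-per-line overhead comes from.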
Formal verification isn't quite what you meant, though. You were talking about code generation. For general systems specified by TLA+, this isn't going to be useful unless you write in a very idiomatic way; in which case TLA+ is just a very high-level programming language. However, code generation is implemented in some specification languages that deal with very specific areas of system design. I'm thinking primarily of the P language, which is used to specify & check state machines and includes code generation capabilities for C++ and C#.
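To make the state-machine point concrete: what P-style tools check, and then generate code for, is essentially an explicit transition table driven by events. A rough Python sketch of that shape (illustrative names, not P syntax or P-generated output):

```python
# A hypothetical event-driven state machine of the kind P-style
# tools model-check and then emit as C++/C# code.
class DoorMachine:
    # (current_state, event) -> next_state
    TRANSITIONS = {
        ("closed", "open_cmd"): "open",
        ("open", "close_cmd"): "closed",
        ("closed", "lock_cmd"): "locked",
        ("locked", "unlock_cmd"): "closed",
    }

    def __init__(self):
        self.state = "closed"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            # Model checking is what rules out reaching this branch
            # for any event sequence the spec allows.
            raise ValueError(f"no transition for {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

Because the whole behaviour lives in one table, the checker can exhaustively explore reachable states, and codegen is mostly mechanical translation of that table.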
Hope you find some of the above enjoyable or helpful on your journey to end-to-end, verified software.