Hacker News

This is a really good idea! It's been pitched where I work in Azure, but not yet implemented anywhere (to my knowledge). The basic idea: you have a TLA+ spec of your system, and you run the model checker in "simulation" mode to generate random execution traces. Then, using a predefined correspondence between actions in your TLA+ spec and API calls in your code (a node becoming inaccessible or a message being sent, for example), you replay those generated traces against your real-world system.
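To make the idea concrete, here's a minimal sketch of the replay step. Everything here is hypothetical for illustration: the trace format, action names, and driver functions are made up, and real API calls are stubbed out as log entries.

```python
# Hypothetical sketch: replay a TLA+ simulation trace against a real system.
# A trace from TLC's simulation mode is treated here as a list of actions;
# the action names and handler functions are invented for illustration.

from typing import Callable, Dict, List

# Pretend this came out of the model checker's simulation run.
trace: List[dict] = [
    {"action": "SendMessage", "src": "n1", "dst": "n2"},
    {"action": "NodeFails",   "node": "n2"},
    {"action": "SendMessage", "src": "n1", "dst": "n3"},
]

# Stand-in for real API calls: we just record what we *would* invoke.
log: List[str] = []

def send_message(step: dict) -> None:
    log.append(f"POST /messages {step['src']}->{step['dst']}")

def kill_node(step: dict) -> None:
    log.append(f"DELETE /nodes/{step['node']}")

# The predefined correspondence: spec action name -> real-world driver.
handlers: Dict[str, Callable[[dict], None]] = {
    "SendMessage": send_message,
    "NodeFails":   kill_node,
}

# Replay each step of the trace against the (stubbed) system.
for step in trace:
    handlers[step["action"]](step)

print(log)
```

In a real implementation the handlers would hit your service's actual APIs (or a fault-injection layer), and after each step you'd check that the system's observable state still matches the state the spec predicts.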

I'd love to see you implement this and write about your experiences.
