How I Learned to Stop Worrying and Love the State Machine (raganwald.com)
700 points by gregorymichael on Feb 26, 2018 | hide | past | favorite | 208 comments

I've been studying state machines and specifically statecharts (which are essentially state-machines with hierarchy) for a while now. Here's a great resource to learn more https://statecharts.github.io/

I absolutely fell in love with them and haven't looked back ever since I learned more about how they work. As the author explains in this post, we're implicitly writing state machines all the time in the form of code. Making them explicit, rather than implicit, helps us visualize behavior that would otherwise be hidden in code.
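To make "explicit rather than implicit" concrete, here's a minimal sketch (the state and event names are invented for illustration) of a state machine as a single transition table instead of scattered if/else branches:

```python
# A minimal explicit finite state machine: all states and transitions
# live in one table instead of being buried in flow-control code.
# (State and event names are made up for illustration.)
TRANSITIONS = {
    ("idle", "fetch"): "loading",
    ("loading", "success"): "loaded",
    ("loading", "failure"): "error",
    ("error", "retry"): "loading",
}

def step(state, event):
    """Return the next state, or stay put on an unhandled event."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["fetch", "failure", "retry", "success"]:
    state = step(state, event)

print(state)  # loaded
```

Because the whole behavior is one data structure, you can print it, diagram it, or review it at a glance.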

I'd also highly suggest reading this paper by David Harel, the creator of statecharts, "On Visual Formalisms": http://www.wisdom.weizmann.ac.il/~harel/SCANNED.PAPERS/Visua...

If you're a frontend developer, check out my post on the subject, related to React and Redux specifically: https://medium.freecodecamp.org/how-to-model-the-behavior-of...

State machines have been the bread and butter of embedded systems for a long, long time. Many years ago we used statecharts as the basis for a machine controller for a complex machine, and the result was extremely robust. It's very hard to forget to handle something when you're working with this model. The downside, though, was that we didn't have code generation tooling: whenever we wanted to make a change we had to change the visual statechart and then go to the code and make the changes there, which was a bit of a pain. If you have the right tooling, though, that becomes a lot easier.

Because state machines are a weaker computational model, they are easier to reason about: you can prove things about state machines that you can't prove about a general program, and you can do things like state minimization.
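Since a finite state machine is just data, those analyses can be mechanical. A sketch (illustrative states, not from any real system) of one such check, finding states that can never be reached:

```python
# Because a finite state machine is just a table, properties like
# reachability can be checked mechanically with a plain BFS.
# (States and events are illustrative.)
from collections import deque

def reachable(start, transitions):
    """Breadth-first search over the transition table."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

machine = {
    ("off", "power"): "on",
    ("on", "power"): "off",
    ("broken", "repair"): "off",  # "broken" is never entered
}

print(sorted(reachable("off", machine)))  # ['off', 'on']
```

The same trick extends to dead-transition detection or minimization; none of it is possible for an arbitrary program.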

>I absolutely fell in love with them and haven't looked back

Same for me. In university you really just get a taste of them, usually limited to one section of a design patterns course. But man, for certain cases they really make it easy to reason about complex workflows that would otherwise be buried in generic flow-control statements spread across multiple classes or modules. In our app, there was a flaky and bug-prone area. It wasn't always like that: it started very simple, but as features grew the workflow became more complex. Any time someone added anything related to that module it almost always broke something else, or introduced a leak that we found 3 months later. It was an area where you really had to focus whenever you made a change, and even code reviews would miss it.

It took me about a week to document the implicit state machine, another week to roll my own state machine implementation tailored for that component, and one more week to refactor the code. It's been 5 years, our dev team tripled in size and it's been probably the most rock-solid module in that time. I'm not worried that some junior dev will introduce some esoteric regression issue downstream.

Great comments on statecharts previously here: https://news.ycombinator.com/item?id=15835005

I find the top few comments compellingly skeptical.



I've seen executable state machines (via UML) adopted in a few large companies that thought they could cut out all the boilerplate and increase productivity. But they didn't realize the same level of effort would go into nailing down the behavior and its edge cases. Then you were fighting a tool to get the last few bugs worked out. Ultimately we saved no time and alienated developers.

This could potentially save me a lot of time. Thank you very much! Just recently I was involved with a project where what I did was essentially code up a complex state machine, complete with guards and, in a way, events.

Plenty of other stuff I have worked on would have benefited from this as well.

Plenty of stuff I am going to work on in the future will benefit from this.
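For anyone unfamiliar with the "guards" mentioned above, a hedged sketch (all names invented) of transitions that only fire when a predicate passes:

```python
# A state machine whose transitions carry guard predicates: a transition
# fires only if its guard passes; otherwise later candidates are tried.
# (All state, event, and context names are illustrative.)
TRANSITIONS = {
    ("idle", "submit"): [
        (lambda ctx: ctx["valid"], "submitting"),
        (lambda ctx: True, "invalid"),  # fallback when the guard fails
    ],
    ("submitting", "done"): [(lambda ctx: True, "success")],
}

def step(state, event, ctx):
    """Try each guarded transition in order; stay put if none fires."""
    for guard, target in TRANSITIONS.get((state, event), []):
        if guard(ctx):
            return target
    return state

print(step("idle", "submit", {"valid": True}))   # submitting
print(step("idle", "submit", {"valid": False}))  # invalid
```

Events are just the second element of the table key; a queue of them feeding `step` gives you the event-driven flavor.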

Is there much in the way of FOSS statechart implementations? It's been a while since I looked into the ecosystem.

I remember that the canonical one was always the Quantum framework, but I never wanted to deal with the licensing aspect of it.

Hey! I'm the creator of xstate: https://github.com/davidkpiano/xstate - a pure, functional, declarative implementation of statecharts in TypeScript. It has zero dependencies, is frameworkless, and fast (O(n) state and transition look-up time for most cases).

I have some important updates to announce at JSConf Iceland, including the ability to translate to/from SCXML, and an improved visualizer.

Let me know what you think!

I think that looks awesome! Makes me wish I was a JS programmer so I could try it out :)

Qt has QStateMachine http://doc.qt.io/qt-5/qstatemachine.html which is modeled after State Chart XML https://www.w3.org/TR/scxml/ (but no XML is required to use it)

It's LGPLv3 if you use Qt > 5.6 (LGPLv2 earlier), and 5.6 is getting old now.

I use state machines a lot in embedded systems and decorate the code with comments that are easy to translate into Graphviz's "dot" language.

Also played around with doing dynamic charts as the code runs, but stopped doing that as it was more novelty value than any other kind of value :)
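If the machine already lives in a table, generating the "dot" source directly (rather than translating comments by hand) is only a few lines; a sketch:

```python
# Emit Graphviz "dot" source from a transition table, so the diagram
# is generated from the code instead of maintained alongside it.
def to_dot(transitions, name="fsm"):
    lines = [f"digraph {name} {{"]
    for (src, event), dst in sorted(transitions.items()):
        lines.append(f'  "{src}" -> "{dst}" [label="{event}"];')
    lines.append("}")
    return "\n".join(lines)

# A toy two-state machine for illustration.
machine = {("off", "power"): "on", ("on", "power"): "off"}
print(to_dot(machine))
```

Pipe the output through `dot -Tpng` and the chart can never drift out of sync with the code.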

Definitely not FOSS, but Stateflow is used a ton in scientific computing (Automotive, Control Systems, etc): https://www.mathworks.com/products/stateflow.html

Though you should always have NIH syndrome in the back of your mind, state machines aren't very difficult to implement, which means that when you write your own you can really tailor it to your use case.

Statecharts are a very different beast than a traditional state machine. Putting together a good statechart implementation is a boatload of work.

Missed that part. You're correct.

Eric Lippert's "Wizards and Warriors" blog posts seem appropriate here, especially as an example of how tracking state and OO can clash pretty badly.

A common problem I see in object-oriented design is:

A wizard is a kind of player.

A warrior is a kind of player.

A staff is a kind of weapon.

A sword is a kind of weapon.

A player has a weapon.

But before we get into the details, I just want to point out that I am not really talking about anything specific to the fantasy RPG genre here. Everything in this series applies equally well to Papers and Paychecks, but wizards and warriors are more fun to write about, so there you go.

OK, great, we have five bullet points so let’s write some classes without thinking about it! What could possibly go wrong?


I think a lot of bad things happened when the advice around how to determine classes started by looking at the nouns in the problem domain.

OO is fundamentally about messaging, not abstract data types. So if you start your design by trying to determine ADTs instead of working out the kinds of message conversations that need to go on, you end up with a mess of types.

In Wizards and Warriors we see a large amount of modelling of a domain with rules. But even at the end, we still have little idea of what messaging is going to go on; there is a hint about attacking werewolves. Yet we've already made a lot of assumptions about what our first-class entities are. Often what happens is that, while we thought we modelled it OK and even sorted out a rule-based system, it still seems difficult for our objects to have conversations.

Maybe what would be better is something that can attack other things, where wizards, weapons, and warriors are all simply modelled by some composable set of stat modifiers.

The original advice was to use nouns to "identify candidate classes", rather than "determine classes".

Sadly when confronted with a problem, the majority of programmers can get no further than writing down lots of nouns, perhaps dozens of them, with no idea how to combine the few actually required into an object-oriented design.

OO isn't fundamentally about messaging. That incorrectly assumes object-orientation began with Smalltalk, Alan Kay, and Dan Ingalls. Smalltalk's "messaging with objects, all the way down" approach largely ended with its popular use and the widespread adoption of Java. Java's object model is entirely based on Simula (James Gosling, Sept 2017).

The battle is lost now, but I think it's unfortunate that we use "OO" both for class-oriented domain modelling and for object-oriented state persistence with message passing for composition.

Technically the latter is about object instances, the former about object/class (data/domain) modelling.

I love this series. One little rule I've stumbled onto over the years that this series revolves around is:

When I'm stuck on a software design problem, I pick some random part of the program and see what happens if I make it first class.

In this case, Eric takes the game rules and turns them into objects. (Essentially the Command pattern[1], which is close to my heart[2].)

You can go overboard with this, of course, but I've found time and again if it seems like I can't get my code to hang together, it's usually because I'm missing a noun — a reification of some part of my problem that I can pass around and do stuff with.

[1]: https://en.wikipedia.org/wiki/Command_pattern

[2]: http://gameprogrammingpatterns.com/command.html

PS: Another solution to Eric's initial problem with warriors, wizards, swords, and staves is to be more precise about what capability Player has. If Warriors can only wield Swords and Wizards can only wield Staves, then it's not the case that Player's Weapon field can be set with any weapon.

So one option is to make Weapon an abstract getter in Player. Then add setters and fields in Wizard and Warrior for the specific types. If all you have is a Player, you can see what they're wielding, but not change it. Then, to wield something, you need to know what kind of player you're dealing with first.

Of course, that doesn't scale very well to lots and lots of business rules, as seen later in the series. But it works if you have a relatively small number of constraints in your subclasses: you just push them up such that the superclass API only exposes the intersection of all of the subclasses' allowed operations.
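A sketch of that shape in Python (class names follow the series; the exact structure is my assumption, and note Python only enforces the weapon types via a type checker, not at runtime):

```python
# The pattern described above: the base class exposes a read-only
# weapon, while each subclass owns the setter for its specific type.
from abc import ABC, abstractmethod

class Weapon: ...
class Sword(Weapon): ...
class Staff(Weapon): ...

class Player(ABC):
    @property
    @abstractmethod
    def weapon(self) -> Weapon:
        """Read-only at the Player level: you can see it, not change it."""

class Warrior(Player):
    def __init__(self) -> None:
        self._weapon: Sword = Sword()

    @property
    def weapon(self) -> Sword:
        return self._weapon

    def wield(self, weapon: Sword) -> None:
        self._weapon = weapon  # only Swords are accepted here

class Wizard(Player):
    def __init__(self) -> None:
        self._weapon: Staff = Staff()

    @property
    def weapon(self) -> Staff:
        return self._weapon

    def wield(self, weapon: Staff) -> None:
        self._weapon = weapon  # only Staves are accepted here
```

With only a `Player` reference in hand there is no `wield` at all; you must downcast to a concrete class before changing the weapon, which is exactly the constraint described above.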

Building a game is probably the best way to learn and apply state machines, since game logic is largely about managing state effectively: there are so many pieces of state, and so many interactions between them.

Also, since Bob is too humble to say it - http://gameprogrammingpatterns.com is an amazing architecture resource for programmers in all disciplines, not just gamedevs.

He's got another book called http://craftinginterpreters.com/ that I've been eager to check out.

Game programming is probably the best way to learn to program many, many kinds of things ;-)

On the topic of state machines, I totally agree with the observation that games are a prime example of their usefulness. I've been working on and off on a relatively simple game that uses a Lua scripting backend to implement the game logic, and over time I've refactored this particular part many times, slowly converging on a solution resembling an event-driven state machine implemented using reactive programming techniques. It's very interesting to see how even a very simple game forces you to either apply concepts like state machines and events, or end up with horrible, crappy code that is hard to debug and not fun to work on.

Crafting interpreters is great!


The first four articles are interesting and I was following along, but I am not buying the last solution: it is way too complicated and over-engineered. It is telling that, unlike the other four articles, the last one has no implementation code.

What I would do is simply have Item (the base for Weapon, Potion, etc.) provide an "IsUsableBy(Player p)" method that returns true in the default implementation (which would be the case for the majority of items), and have the objects with special considerations check the player class themselves.

Similarly, an Animal (or Entity or Organism or whatever) class would have an "OnDamage(Animal source, int damage)" that by default calls, say, "ApplyDamage(int damage)", but special animals - like the werewolves - may want to filter the result so that the damage is doubled when they are on holy ground or when the class of the source is Paladin.

Also, the IsUsableBy and OnDamage functions could check against Item objects in an inventory. For example, Item could also provide a "FilterDamage" and a "VetoUsageOf" so that a HolyCross can multiply all damage from Undead enemies by 0.25 but veto using any UnholyItem-based class (truth be told, you probably need a tag system for some of that stuff instead of relying on classes alone). They could likewise check an active Effect list, which would be able to do the same sort of vetoing and filtering; the Animal class could also provide a HasEffect method for use by the other classes to make decisions (e.g. a PaperDoll might refuse to talk to you if you have the OnFireEffect :-P).

To me that sort of setup feels more natural and intuitive than the user-command-state-rule stuff mentioned in the article. You can't really generalize it in a way that applies to other things (e.g. a MagicSpell would need its own set of functions and such), but I believe there is such a thing as too much generalization.

(Note that the above applies to the common Java/C++/C#-like OOP languages; in something like C I'd go with a more data- and/or script-driven approach...)
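A minimal Python sketch of the IsUsableBy/OnDamage approach described above (class names match the description; the details are illustrative, not from the articles):

```python
# Default-permissive hooks that only the exceptional cases override,
# as described in the comment above. (All names are illustrative.)
class Player:
    def __init__(self, char_class):
        self.char_class = char_class

class Item:
    def is_usable_by(self, player):
        return True  # the default for the majority of items

class Staff(Item):
    def is_usable_by(self, player):
        return player.char_class == "wizard"  # the special case

class Animal:
    def __init__(self, hp):
        self.hp = hp

    def on_damage(self, source, damage):
        self.apply_damage(damage)  # default: pass damage straight through

    def apply_damage(self, damage):
        self.hp -= damage

class Werewolf(Animal):
    def on_damage(self, source, damage):
        if source.char_class == "paladin":
            damage *= 2  # werewolves take double damage from paladins
        self.apply_damage(damage)

wolf = Werewolf(hp=20)
wolf.on_damage(Player("paladin"), 5)
print(wolf.hp)  # 10
```

The appeal is that the common case costs nothing and each exception lives next to the entity it concerns.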

I agree with you up to a point, and that point is when the game becomes "large" in the number of rules and entity varieties. Eric is approaching this very much like a compiler developer. We always have the option of 'lofting' concepts out of the code and into types. E.g. we usually start by writing:

  int a = 1;
  int b = 1;
  int c = a + b;
And we have the option of writing, alternatively:

  class Integer { ... }
  class Operation { ... }
  class Expression { ... }
Or something 'meta' like that. The upside is that it's flexible, powerful, expressive to turn code into data. The downside is that it's slower, and there's a whole extra system to maintain.

However, in my experience as a game developer, past a certain large size, pretty much all systems seem to want to converge to Lippert's solution. You want to be able to put the entity data and the rules into tables, and not have to edit code to change the game rules. For small games, not worth it. But for large games, and also for game engines that aspire to be generic, it makes everything a lot easier.
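The "entity data and rules into tables" point can be sketched as follows (the table contents are invented for illustration):

```python
# Rules live in data, not code: changing who can wield what means
# editing a table, perhaps loaded from a file the designers own.
# (The table contents are invented.)
WIELD_RULES = {
    "warrior": {"sword", "dagger"},
    "wizard": {"staff", "dagger"},
}

def can_wield(char_class, weapon):
    """The only code: a lookup. Everything game-specific is data."""
    return weapon in WIELD_RULES.get(char_class, set())

print(can_wield("wizard", "staff"))  # True
print(can_wield("wizard", "sword"))  # False
```

Adding a monk class or a polearm weapon is then a data edit, with no recompile and no type-hierarchy surgery.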

I wanted to stick with the language side since that is what the original article was about. But yes, in large games you wouldn't do all that stuff in C#/C++/etc.; you'd most likely use a data-driven approach. However, this can still sit on the gameplay side of the engine to provide the base. For example, if you look at the mod tools for RPG engines like Aurora, Creation Kit, and REDengine, you'll see that they have inherent knowledge of base item types like armor, monster, etc., but they rely on data for the specific items (and scripts for the exceptional behavior). The easiest place to see that is in the Creation Kit-based games like Skyrim, since it gives you a nice GUI to create/edit each different item type.

Thanks for helping to qualify when the proposed solution starts to become appropriate. I'd still really need a solid example of when this starts to be beneficial. Could anyone oblige?

This is all about composition over inheritance. What if I want a sword that has a flask built in? How do I reconcile the fact that Potion and Weapon are two different branches of the inheritance tree? His points about pushing things out to a data file are also very valid.

Granted, the author may have taken it a little far, but the sentiment is right.

What you describe is too special and exceptional a case to have it affect everything else. Personally, what I would do in such a case is have the Weapon object own a WeaponFlask object; when the weapon is added to or removed from the player's inventory it automatically adds/removes the WeaponFlask object too, and the WeaponFlask itself can veto being removed from the inventory. You'd need to add extra events for that, but assuming we're talking about an RPG-style game, you most likely will have functionality for stuff like that anyway for other items (e.g. cursed items that cannot be removed, enchanted items that apply/remove an Effect when they are added/removed from the inventory or when the player wears them, etc.).

That's one special case, but eventually you end up with a few dozen special cases.

I'd do something similar for the item types, although instead of a floating IsUsableBy() I'd make it a member of the Player (or probably Creature, with Player deriving from Creature) so you could go player->CanEquip(item).

I also think trying to squeeze it into the language's type system is the wrong approach (for this specific problem, I mean; I'm not saying there aren't cases where doing it in the language is useful), purely because we're going to want the art team to add more monster types and player types, and to be able to change the rules for attacks and damage and so forth quickly, on the fly, to try out new game mechanics. We're going to want it all in data very soon, so it'd be better to jump straight to a data-driven design.

I think either way is fine; the main reason I placed the check on the item is that it is usually an item attribute (e.g. daggers are usable by warriors, thieves, and wizards; staves are usable by wizards and monks; swords are usable by warriors and thieves; etc.).

Generally, if you look at mod tools for existing games (e.g. Skyrim) you'll see that the engine does have inherent support for the base classes (Weapon, Armor, etc.) but relies on data for the specific items, so what I describe above can be the engine (gameplay code) side of that.

This is on the right track. You actually sort of need both but it's important to get the language right.

Objects, particularly highly sophisticated objects, generally have minimal requirements that determine who can use them successfully. Requirements can be anything from strength to knowledge. At the same time people have skills (or don't have skills) and attributes which lets them use objects. Some people are more skilled than others.

Modelling this requires both the ability to ask an object "What are the minimal requirements needed for somebody to use you?" and the ability to ask a player "How skilled are you really at using this object?"

As a smell test, it doesn't make much sense to ask a sword "can I use you?" How would the sword know? But you can ask the sword, say, "how heavy are you?" It does make some sense to ask a person "can you use this sword?"

And yes, it's not clear at all that every object type corresponds to a first-class language type. You might have a Sword type... but in reality, unless the game is very sophisticated, all bladed weapons do pretty much the same thing. This is where it's useful to take a step back and focus not so much on nouns but also on verbs in our domain language. Swords, axes, ninja stars: they all do cutting damage. Spears, knives, polearms: they all do stabbing damage.

> It does make some sense to ask a person "can you use this sword?"

Does it really? I mean, if we're talking about an actual person then sure, since a person can think about the problem and come up with a creative answer. But the object representing a person in a game? That would imply that the "person" object must know all about swords and the requirements for using each of them. How is that better than requiring the "sword" object to know about people and their capabilities?

Swords have attributes (e.g. weight). People have attributes (e.g. strength). Whether a particular person can wield a particular sword is not a question either a sword or a person can answer without introducing unnatural dependencies. It is a property of the environment in which the player and sword both exist, and IMHO is best modelled as some form of external "rule" object, or via a multiple-dispatch method (if your language supports that paradigm).
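One way to model that external rule (the attribute names here are assumptions for illustration; a multiple-dispatch method would serve the same purpose):

```python
# The wield check lives outside both Sword and Person: it's a rule of
# the environment that reads attributes of each. (Names illustrative.)
from dataclasses import dataclass

@dataclass
class Sword:
    weight: int

@dataclass
class Person:
    strength: int

def can_wield(person, sword):
    """An external rule: neither party can answer this alone."""
    return person.strength >= sword.weight

print(can_wield(Person(strength=10), Sword(weight=8)))  # True
print(can_wield(Person(strength=5), Sword(weight=8)))   # False
```

Neither class knows about the other, so new rules (two-handed grips, class restrictions) can be added without touching either type.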

Great series. He really gets to the major problem with OOP: it's not at all obvious what should be modeled or how, and changing things once they're wrong is close to impossible. No wonder so few beginners pick it up and do it well.

In the last part, what he's describing reminds me of Clojure: a language where both data and functions are first-class citizens, and where functions operating on data is the default paradigm. That's the huge difference I see: in a language oriented around data like Clojure, the architecture he describes is almost obvious from the start to experienced programmers, and it can be built up slowly even if one starts down the wrong path. In OOP languages, there are so many wrong paths, and the right path is almost never clear or obvious. Even when the right path is clear, getting there from the current state of the code might be impossible without starting over (think how one would get from an architecture based on his first examples to one based around rules without a rewrite).

While the author defends his use of OOP, it's clear IMO he's still using the wrong tool for the job. Why use an OOP language that makes data a second-class citizen hidden in objects, rather than a first-class citizen that the language is designed around? As they say: show me your data structures and I will know your architecture; show me your code/functions and I know nothing.

This is exactly my impression after taking a quick glance through these articles as well.

If anyone is curious and would like to see a practical demonstration of this data-oriented approach of modelling problems, I highly recommend this talk from Mark Bastian in Clojure/conj 2015: https://www.youtube.com/watch?v=Tb823aqgX_0

In it, he contrasts the data-oriented modelling approach with a traditional OOP approach for implementing a board game, and was able to come up with a complete implementation of the game with the former approach that was shorter than even the structural boilerplate for the OOP version.

Another related talk that I loved was Chris Granger's 2013 talk on Light Table: https://www.youtube.com/watch?v=V1Eu9vZaDYw

Where he walks through his process of building a game in ClojureScript using an Entity-Component-System architecture, which is very well suited to this data oriented modeling approach.

Second that for Clojure. OOP is a mental straitjacket. When I started back-end programming I was faced with a fork in the road signposted on one side "Java" and on the other "Perl". I spent a long time with Deitel and Deitel's Java tome, but somehow OOP just felt wrong. In Perl you just seemed to get on with it: data and functions. Yes, you could bless a hash to get your OOP if you really wanted to, but it was only later that the community became obsessed with OOP and Moose. So I travelled down the Perl road and consequently found functional programming, and Clojure in particular, natural and easy to learn. Listening to other programmers who followed the traditional OOP path, I get the impression that they will defend OOP even when its glaring deficiencies are staring them in the face. What is it about OOP? I don't get it. Get rid of it and give yourself a chance to approach problems differently. Bottom-up programming, especially with Clojure and its REPL, makes programming a real joy. OOP is Stockholm-syndrome masochism.

If you look at modern Java web application code, there’s barely any object orientation to speak of.

(Now it’s mostly Annotation Oriented Programming, especially with Spring. Which brings its own difficulties.)

You know you're in a world of OOP pain, especially with Java, when you have to employ all manner of scaffolding - annotations, dependency injection, presenters, service objects - before you begin the main task of solving your business problem.

Talk is cheap, and so are Rich Hickey's dime-store musings on OOP.

Show us some code, how would you solve it with functions and data?

In Haskell, this would look something like:

  data Player =
      Player (Maybe Weapon) Class
  data Weapon =
      | Staff
      | Dagger
  data Class =
      | Wizard
  type Error = String
  mkPlayer :: Maybe Weapon -> Class -> Either Error Player
  mkPlayer (Just Sword) Warrior = Right (Player (Just Sword) Warrior)
  mkPlayer (Just Dagger) Warrior = Right (Player (Just Dagger) Warrior)
  mkPlayer Nothing Warrior = Right (Player Nothing Warrior)
  mkPlayer (Just Staff) Warrior = Left "A Warrior cannot equip a Staff"
  mkPlayer (Just Staff) Wizard = Right (Player (Just Staff) Wizard)
  mkPlayer (Just Dagger) Wizard = Right (Player (Just Dagger) Wizard)
  mkPlayer Nothing Wizard = Right (Player Nothing Wizard)
  mkPlayer (Just Sword) Wizard = Left "A Wizard cannot equip a Sword"
A player always has a defined class (wizard or warrior), but they may not have a weapon equipped. This solution is a bit wordy, but it comes with the benefit that if you ever add a new weapon/class, the compiler will scream at you if you haven't handled the case for it properly.

You would only export the mkPlayer function from the library, and you could potentially have much fancier error handling, such as building a data structure that contains an 'invalid' player anyway (e.g. `Left (Player (Just Sword) Wizard)`) so you can custom-build an error message at the call site ("A $class cannot equip a $weapon"), or even completely ignore the error if that is a potential use case (such as building an armor/weapon preview tool, where you don't care whether they can use the weapon/armor).

Modifying it is pretty easy too. Say I wanted to allow for 2handed weapons, plus offhand weapons (shields, orbs, charms, etc.) I could encode that in a data type like:

  data EquippedWeapon =
        TwoHanded TwoHandWeapon
      | OneHanded (Maybe OneHandWeapon) (Maybe Offhand)
      | Unequipped
and swap it into the Player definition:

  data Player =
      Player EquippedWeapon Class
And now I wouldn't be able to compile until I fixed the mkPlayer function and any other place that uses a Player and is dependent upon the weapon portion of the data structure.

e.g. This function wouldn't need to change

  areYouAWizardHarry :: Player -> Bool
  areYouAWizardHarry (Player _ Wizard) = True
  areYouAWizardHarry (Player _ _) = False

> This solution is a bit wordy

I'm not a Haskell programmer, but I understood it. It looks like an ML language, minus the `|` and `*` for guards and tuples. I like your solution a lot.

The main features that allow you to code this solution in such a safe way are the Maybe and Either types. It's high time OO programmers - and OO programming languages - learned the lessons FP languages have taught us and included these constructs in the standard library. They're just so much cleaner than the usual alternatives (nullable types, checked exceptions), and there's no reason they can't be defined as small objects.
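As the comment says, Either can be defined as a pair of small classes in any OO language; a sketch in Python (the `Right`/`Left` names follow the Haskell convention; the mk_player rule is lifted from the example above):

```python
# Either as two small classes: no nulls, no checked exceptions,
# just an explicit success-or-error value that composes.
class Right:
    """The success case: wraps a value."""
    def __init__(self, value):
        self.value = value

    def map(self, fn):
        return Right(fn(self.value))

class Left:
    """The error case: wraps an error and short-circuits map."""
    def __init__(self, error):
        self.error = error

    def map(self, fn):
        return self  # errors pass through any further mapping untouched

def mk_player(weapon, char_class):
    if char_class == "wizard" and weapon == "sword":
        return Left("A Wizard cannot equip a Sword")
    return Right((weapon, char_class))

result = mk_player("staff", "wizard").map(lambda p: p[1].title())
print(result.value)  # Wizard
```

The caller must unwrap the result explicitly, so the "no weapon rule was checked" failure mode simply cannot compile past a code review.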

I'm going to be That Guy and suggest you give Rust a try. It's got the best of imperative and functional mixed in.

The implication being that OO is the same thing as imperative? I'm in strong disagreement with that!

I've tried to learn rust a few times, but never with much tenacity. It's on my list because it seems to hit a good point wrt expressiveness and performance.

Not what I meant, no.

In oversimplified terms, Rust has objects but not classes. It skews more toward:

- from a C dev's perspective: data-driven design

- from a Haskell dev's perspective: typeclasses and ADTs

From what I can tell (I'm no Haskell programmer), in that solution there's no declared relationship between the class and the weapons it accepts, right? It's just a side effect of the fact that you can't create a Player without passing through the gauntlet of mkPlayer?

If so, is that a practicality issue, or a Haskell limitation?

The answer is very much "practicality issue". Haskell's more advanced type-level features (including GADTs and type families) are very well suited for this, but they're also the sort of thing that gives Haskell a reputation for being complicated. If you're just using Haskell's core features the way the parent post does, Haskell is a very simple, very elegant language.

But better yet, it certainly does have the big guns which you can pull out.

    -- Just like before, we define `Class` and `Weapon`:
    data Class = Warrior | Wizard
    data Weapon = Sword | Staff | Dagger

    -- The one really annoying thing is that
    -- at the moment you have to use a little bit
    -- of annoying boilerplate to define singletons
    -- (not related to the OOP concept of singletons, by
    -- the way), or use the `singletons` library. In the
    -- future, with DependentHaskell, this won't be necessary:
    data SWeapon (w :: Weapon) where
      SSword :: SWeapon 'Sword
      SStaff :: SWeapon 'Staff
      SDagger :: SWeapon 'Dagger

    -- Now we can define `Player`:
    data Player (c :: Class) where
      WizardPlayer :: AllowedToWield 'Wizard w ~ 'True => SWeapon w -> Player 'Wizard
      WarriorPlayer :: AllowedToWield 'Warrior w ~ 'True => SWeapon w -> Player 'Warrior
This last part shouldn't be too difficult to understand if you ignore the SWeapon boilerplate: Player is parameterized over the player's class, with different constructors for warriors and wizards. Each constructor has a parameter for the weapon the player is wielding, which is constrained by the type family (read: type-level function) named AllowedToWield.

AllowedToWield isn't that complicated either, it's just a (type-level) function that takes a Class and a Weapon and returns a `Bool` using pattern matching:

    type family AllowedToWield (c :: Class) (w :: Weapon) :: Bool where
      AllowedToWield 'Wizard 'Sword = 'False
      AllowedToWield 'Wizard 'Dagger = 'True
      AllowedToWield 'Wizard 'Staff = 'True
      AllowedToWield 'Warrior 'Sword = 'True
      AllowedToWield 'Warrior 'Dagger = 'True
      AllowedToWield 'Warrior 'Staff = 'False
And there it is. What do you gain from all this? Something that is very hard to get in certain other languages: compile-time checking that no code will allow a wizard to equip a sword, or a warrior to equip a staff.

Once again, I want to make it clear that you absolutely don't need to do this, even in Haskell. You're absolutely allowed to write the simple code like in the parent post. But in my opinion, this is an extremely powerful and useful tool that Haskell lets you take much further than many other languages.

So, long story short, the answer to your question is that it is indeed a "practicality issue", although I don't think my code is that impractical. It is certainly not a Haskell limitation: in fact, if anything, Haskell makes it a bit too tempting to go in the other direction and go way overboard with embedding this kind of thing in the type system.

Thanks for the detailed explanation! I'm mostly a dynamic languages programmer, but I've been reading (and enjoying) Type-Driven Development with Idris, and I have a plan to learn Haskell after that. If such a relationship wasn't modellable, I'm not sure I would have bothered after all.

mkPlayer can be simplified further!

  mkPlayer :: Maybe Weapon -> Class -> Either Error Player

  mkPlayer (Just Staff) Warrior = Left "A Warrior cannot equip a Staff"
  mkPlayer (Just Sword) Wizard = Left "A Wizard cannot equip a Sword"
  mkPlayer weapon klass = Right (Player weapon klass)

You lose exhaustiveness checks if new weapons or classes are added, though. May or may not be worth the trade-off.

Holy cow, these blog posts are everything I've been trying to say about OOP but never been able to put into succinct examples and explain clearly... and then some more. Thanks so much for sharing this (and to him for writing). Now I can finally point people somewhere instead of just getting a blank stare back when I tell them OOP is hard.

I read once that the biggest issue with OOP is that everyone gets taught the “Dog inherits Animal” style which is completely worthless.

One of the biggest issues with OOP is that it’s hard ;) I gave up attempting to get good at it after trying to read Bertrand Meyer’s Object-Oriented Software Construction book. If I need to be able to keep 1300 pages worth of info in my head to write well designed software, maybe it’s the wrong approach.

If I had my druthers, I'd put the teaching-ratio of interfaces versus inheritance at like 95%/5%. Treat concrete inheritance primarily as a useful work-saving shortcut.

Basically, I'm one of those people who might unironically write `class DefaultDog extends AbstractAnimal implements Dog`.

Yes: using composition instead of inheritance, along with a "Structure of Arrays" (SoA) layout instead of an "Array of Structures" (AoS), to form a loose collection of arrays you can iterate over very quickly and add properties to at run time, is a mature game architecture described at http://en.wikipedia.org/wiki/Entity_component_system, and it has been popular over the last decade.

Oddly Eric never mentions ECS or SoA in the series, even though it solves the problem. (He only suggests in part 5 to use a Rule Engine, instead of inheritance)
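For the unfamiliar, here is a toy sketch of the AoS-versus-SoA distinction in Python (the component names are made up for illustration):

```python
# Array of Structures (AoS): one object/dict per entity.
aos = [{"x": 0.0, "hp": 10}, {"x": 1.0, "hp": 8}]

# Structure of Arrays (SoA): one parallel array per component;
# entity i is simply index i in every array.
positions = [0.0, 1.0]
healths = [10, 8]

# A "system" iterates over only the arrays it needs, e.g. movement:
positions = [x + 0.5 for x in positions]

print(positions)  # [0.5, 1.5]
```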

Shame that Swift is pretty much confined to iOS app development.

    protocol Weapon {}
    class Staff: Weapon {}
    class Sword: Weapon {}

    protocol Player {
       associatedtype T: Weapon
       var weapon: T { get set }
    }

    class Wizard: Player {
       var weapon = Staff()
    }

    class Warrior: Player {
       var weapon = Sword()
    }

If you look through the later items in the series, it seems to get more complex than this. In particular, a character can use either 0 or 1 weapons at a particular moment, and either class can also use the Dagger weapon. It's not that each character inherently only has, and uses, the only weapon appropriate to that character -- this would make things quite a bit simpler.

Well, maybe someday Dotty will be a mainstream language...

    trait Weapon
    class Sword extends Weapon
    class Staff extends Weapon
    class Dagger extends Weapon
    trait Player {
      type T <: Weapon
    }
    class Wizard extends Player {
      type T = Dagger | Staff
      var weapon: T = null
    }
    class Warrior extends Player {
      type T = Dagger | Sword
      var weapon: T = new Sword
    }

Python already is a mainstream language:

    from typing import Optional, Union

    class Weapon: pass
    class Sword(Weapon): pass
    class Staff(Weapon): pass
    class Dagger(Weapon): pass

    class Player:
        weapon: Weapon
    class Wizard(Player):
        weapon: Optional[Union[Dagger, Staff]] = None
    class Warrior(Player):
        weapon: Optional[Union[Dagger, Sword]] = Sword()

    class Wizard extends Player {
      type T = Dagger | Staff
      var weapon: T = null
    }
The billion-dollar mistake in a new language? What a waste.

Yeah that freaking blows. They keep it for compatibility with java. It's not idiomatic code though, you would use an Option.

Until you want to model a bar-fight and then a chair (a piece of furniture) becomes a weapon, i.e. "class Chair: Furniture {}" turns into "class Chair: Weapon {}". Or when the winter gets tougher and longer and "class Chair: Furniture {}" changes into "class Chair: Firewood {}". I know that there's probably a "solution" involving object "composition", but that would only mask the problem, not really solve it.

Honest question: does anyone have a clean solution to this problem? I've run into this often and always end up hacking something ugly to make it work.

You probably want to solve "the expression problem".

I'm not sure I understand your objection. Those are protocols, so Chair can adhere to all of them at once.

class Chair: Furniture, Weapon, Firewood {}
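The same idea can be sketched in Python with structural `typing.Protocol`s: Chair satisfies all three roles without declaring any of them (the method names here are invented for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Weapon(Protocol):
    def damage(self) -> int: ...

@runtime_checkable
class Furniture(Protocol):
    def comfort(self) -> int: ...

@runtime_checkable
class Firewood(Protocol):
    def burn_minutes(self) -> int: ...

class Chair:  # no inheritance needed; conformance is structural
    def damage(self) -> int: return 2
    def comfort(self) -> int: return 5
    def burn_minutes(self) -> int: return 30

chair = Chair()
print(isinstance(chair, Weapon), isinstance(chair, Firewood))  # True True
```

Note that `runtime_checkable` `isinstance` checks only verify that the methods exist, not their signatures.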

That's why protocol extensions were implemented.

That forces wizard to always have a staff and warrior to always have a sword?

warrior can also use a staff

both can use daggers

I don't think an up-vote is enough. Thanks for this. I love his explanation of Liskov substitution principle.

Eric Lippert is smart, I love his Stack Overflow answers.

Think it's worth submitting on its own?

Absolutely. Though would be better as a single article rather than a series.

Looks like someone else got it.

It may be better as ShowHN rather than a single article. But it was a great read.

I like and agree with his posts, but the last one does feel like a bit of a bait and switch:

The first post talks about the difficulty of getting compile time errors for Wizard wielding a Sword

The last post completely abandons the notion of statically checking anything about the Weapon assigned to a Player (beyond the most general case of is Player and is Weapon).

I don't actually have enough experience with Erlang/Elixir to know but I thought their function overloading (http://raganwald.com/2014/06/23/multiple-dispatch.html) still provides multiple dispatch with compile time checks?

The proposed Rules solution is more elegant to use (for the programmer, if not the person defining the Rules) but it does nothing (or at least very little) to help you check that the business logic is correctly encoded. Yes, you're likely going to avoid your program crashing, but that's just because you've given up on encoding the business logic entirely. At least where compile-time checking is concerned.

Maybe I'm expecting too much of the type system and that's the actual lesson of the blog posts.

One way to get at the end conclusion that the original modeling is wrong is simple logic.

If given

1. Player can have a Weapon

2. Sword is a Weapon

3. Wizard is a Player

Then from the statement "Wizard cannot have a Sword" follows the contradiction that "Player cannot have a Sword".

I realize Lippert holds more knowledge about OOP and programming in his spleen than I have in my brain, but he mentions in the first post that the first attempt violates C# rules. But, if working in C++, the interface could specify either (Weapon & weapon) or (Weapon * weapon) could it not?

This is great. I've come up with just about every one of his "what if we..." examples on my own trying to find the same sorts of solutions over the years.

Does anyone know a good code example (preferably C# but anything similar would do) of a rules system like what he describes in the final part?


NRules is possibly what you're after, and maybe http://www.antlr.org/ to parse rules.


Yes, because you don't have to inherit "all the things".

I am suspicious any time I see inheritance in my C# code base. It's rarely needed and causes a lot of pain when used extensively. I'm looking at you Credit Note and Invoice!

The antithesis is when inheritance should be used but isn't. Inexperienced programmers tend to ideological extremes in either camp.

Inheritance considered harmful. Go got this right.

Inheritance is a way of relating two or more pieces of code; it's a programming abstraction. If you try to apply it to the problem domain, by "modeling" things, for example, you're in for a bad time.

That's mixing up properties, ownership and state. Those are very different concepts.

That's effectively the conclusion that the series comes to in the end.

Indeed. Whether you use OO or state machines depends on the architecture of your problem. There is unlikely to be a one-size fits all solution to every class of problem.

State machines are used extensively by HDL hardware design tools, and are easier to debug.

It may be obvious to you but it's very hard to see this without a fair bit of experience.

So what if you did away with "is-a" and used "has-a" instead? Add "Weapons Skill" objects, including those which represent incompetence.

I prefer the related example of enchanted weaponry.

* A Sword is a type of weapon.

* A Fire Sword is a type of Sword.

* A Mace is a type of weapon.

* A Mace of Ice is a type of Mace.

* A Sword of Ice and Fire is a type of Sword.

What about if you hold the sword by the blade and hit someone with the pommel, is it a kind of Mace then?

Just use an inverted sword asset on the mace weapon if you're lazy. Melee is what you're talking about, though. It's probably best to make melee a different weapon configuration and change to it when wielding, or make melee a weapon modifier. The assets would differ, at least in how the attack is drawn, but we could be talking about only a backend here, and UI is irrelevant.

So there's some weird Sword method that changes the sword to a Mace? Seems bug-prone. What if there's a special quest that triggers only if you're carrying a Mace? The message boards will be full of clever people saying, "Guess what, if you switch sword grips, you can actually get the Mace quest!"

Erlang, which traditionally deals with concurrency and protocol parsing, has had a built-in standard FSM module for a long while:


It even got a recent re-write and the new one is called gen_statem: http://erlang.org/doc/man/gen_statem.html

This new version is already used to handle some of the TLS and SSH stuff from what I've heard. Here is the TLS connection code: https://github.com/erlang/otp/blob/11cd0f1d000be5849bba2466b...

I've done them in C and Python before as well. In C I like the table + function pointers approach when possible. Here is an example: https://stackoverflow.com/questions/133214/is-there-a-typica...

Even without the framework, Erlang/Elixir processes very naturally lend themselves to state-machine-like behaviour. Unless you're doing something really weird, the messaging behaviour is basically:

    new state = old state + message
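That shape is essentially a fold of the message stream over the state. A rough Python sketch (the account example and message names are made up):

```python
from functools import reduce

# A hypothetical account "process": each message yields a new state.
def handle(state, message):
    kind, amount = message
    if kind == "deposit":
        return {**state, "balance": state["balance"] + amount}
    if kind == "withdraw" and state["balance"] >= amount:
        return {**state, "balance": state["balance"] - amount}
    return state  # unknown or invalid messages leave the state unchanged

mailbox = [("deposit", 100), ("withdraw", 30), ("withdraw", 500)]
final = reduce(handle, mailbox, {"balance": 0})
print(final["balance"])  # 70
```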

Indeed. In Erlang I had never actually used any of the FSM modules in practice because in my case a simple gen_server works just as well.

> I've done them in C and Python before as well. In C I the like the table + function pointers approach when possible. Here is an example: https://stackoverflow.com/questions/133214/is-there-a-typica....

In that particular example, I'd just like to point out how super clever it is to put `NUM_STATES` at the end of the enum in the first line. I'm fairly comfortable in C, but I never really used it with/around people who have been using it forever, so I love seeing these little tricks.

It’s a standard technique in C...

Also, it produces tons of false warnings when you try to enforce exhaustive switch-cases on enum values.

>Also, it produces tons of false warnings when you try to enforce exhaustive switch-cases on enum values.

Dispatch tables are the way to go. Nested switch statements turn into a nightmare.


I've been thinking about extending this data structure visualizer I've been working on to also deal with state machines. Part of my thinking is that it could be integrated (done through something like logging calls) into various popular state machine libs—then people using the libs could just set a flag once and see their state machines evolving in real-time (or transformed time) as their application executes. Video and more information here: http://symbolflux.com/projects/avd

I'd be curious to hear anyone's thoughts/opinions on the desirability/feasibility of these, or any other suggestions. Thanks!

What would a state machine with syntactic sugar look like? Is there a language where it's integrated?

Here's a tip for state machines. If you've got a transition that takes some time then you often actually need another state to represent the state that the system is in during the "transition".

This is usually modeled better on backend systems (a state while waiting for a network response to arrive) but is often modeled poorly in front-end systems (a state while waiting for an animation to finish).
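For example, here is a minimal Python sketch of making the waiting state explicit (the state and event names are hypothetical):

```python
import enum

class State(enum.Enum):
    IDLE = enum.auto()
    FETCHING = enum.auto()  # the explicit "in transition" state
    LOADED = enum.auto()
    FAILED = enum.auto()

# Every legal (state, event) pair; anything else is a modeling bug.
TRANSITIONS = {
    (State.IDLE, "fetch"): State.FETCHING,
    (State.FETCHING, "response"): State.LOADED,
    (State.FETCHING, "error"): State.FAILED,
    (State.FAILED, "retry"): State.FETCHING,
}

def step(state: State, event: str) -> State:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state.name}")

s = step(State.IDLE, "fetch")  # FETCHING: the request is in flight
s = step(s, "response")        # LOADED
print(s.name)  # LOADED
```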

This is one of the reasons why I love ReactJS. You quickly realize that it just won't work unless you model all of those states in your component. And the act of "forcing" you to model all of those states often catches logic bugs or issues that wouldn't have been found until much later in the process.

And that's exactly why I'm loving Elm right now. The compiler is also incredibly helpful in finding out where you might have missed a state.

This is something I had to deal with pretty recently. The sequence I needed to model was:

Operator pushes Start

Turn motor on

Motion triggers switch to ON state

Motion triggers switch to OFF state

Turn motor off

Machine changes state to RUNNING

The time between pushing Start and RUNNING is only a couple seconds, but there's a chance that something could jam, the switches could fail, or something I have no knowledge of could go wrong. Question becomes, do I now need an intermediate "In Motion" state for the six or so cases where this happens?

In the end, I decided no, because there was no other transition out of that state than the expected behaviors, so modeling the intermediate state didn't offer me any benefit[1]. Failure to get to the OFF state within a certain time puts the system into a global error condition that requires manual intervention to fix.

I don't have a real comment here :-) other than to back up your point that modeling isn't as obvious as it first appears to be.

[1] I would have created this intermediate state if this was a complex controller that was likely to have new rules added later. However, knowing it was a simple one-off, I'd just be giving myself extra work for no benefit if I did it now.

I prefer having nominal intermediate states just for debugging purposes. "What am I doing now? Does being in START imply it's already started the process? Does being in RUNNING imply the process was completed successfully?" vs "What am I doing now? In motion!"

In the example given, I'd probably stick to a single intermediate state and let the language work as my implicit state machine for the internals.

Depends a lot on the boilerplate required to insert states though, my preferences owe a lot to the HFSM implementations I've used in the past.

I think it's fine to leave out these states so long as you're conscious of what you're doing and you've thought about it.

A lot of people don't even realise they've made a modeling error. Then you invariably see code written to try and handle "unexpected errors".

Love seeing more about this. It's a great design pattern for many systems.

We used this approach back at Heroku to power Heroku Postgres and follow the same to power our database as a service at Citus as well (https://www.citusdata.com/blog/2016/08/12/state-machines-to-...). The approach has scaled extremely well due to us following a few key principles (most importantly that we don't change the state of a running database, rather we failover to some new one with the new state).

I distinctly remember the first time I looked at an overly long and complicated model class and thought: "OMG, this is a state machine!" - a shortish refactor later and it was much simpler to read and debug.

However, now I'm at a loss on how to teach the junior programmers at my company how to recognize which patterns scream "turn me into a state machine!" Several booleans triggering certain code paths in several methods is a pretty sure sign. Are there more?

The word "protocol" is a big red flag that you should be using a state machine.

The words "event loop" probably mean you need a state machine.

If "time" is an input variable, you probably have a state machine.

I've also found the presence of a mode variable a good indicator that a FSM should at least be considered.

Your second and third points are why I love having state machines in the games I develop.

If you have a "status" field in your object, chances are you can represent it as a state machine

I remember reading about a language from Microsoft called P that compiles to state machines in C. It's for complex and asynchronous state changes, like programming a protocol for an automated drone.


I'm slightly surprised that nobody (as far as I've seen) has mentioned SDL: https://en.wikipedia.org/wiki/Specification_and_Description_...

SDL is widely used in telecom and datacom systems to develop the control plane. Most tools are of the draw-your-code type, which ends up being an unmanageable mess, in direct contrast to what management thinks. But the tools will generate state machines that pass messages between each other and to controlling interfaces based on states and inputs.

The point being made for SDL is that you can easily simulate the whole system and watch/test that the control plane will work as intended before the HW has been built.

Coincidentally, I wrote something about this two weeks ago (in Portuguese, unfortunately: https://epxx.co/logbook/entries/sm.html), and about Petri Nets as well. It is difficult to understand why so many junior programmers use a dozen boolean state variables instead of a state machine, given that they learn it in CS or similar courses. My guess is they are taught in an overly scientific fashion, so they go to the same dusty corner where so many other CS topics that are never used in practice lie already.

> go to the same dusty corner where so many other CS topics that are never used in practice lie already

I've been meaning to write a letter to the CS department of one UC school: what the hell are they teaching in CS nowadays? If presented with a situation where the user can create circular references, that school produces graduates with 3.8+ GPAs, virtually none of whom (among those I've interviewed) can give an implementable description of how to detect circular references. As a general pattern, they also tend to say silly things, like that a null pointer member of a structure uses up no data. I could go on and on. It seems to me they are being taught by TAs who would also flub such points in an interview.

Detecting circular references is one of those "tricks" which end up being a bad interview question because you either know the trick or you don't.

GPAs are not normalized across universities. The higher the average GPA for the school's graduates, the more likely grade inflation is at work.

How is being able to detect circular references a useful skill?

> How is being able to detect circular references a useful skill?

For all kinds of systems, where there are references input by users as data, one needs to be able to contend with this. Systems which crawl webpages have to contend with this. Parser-transformation systems need to detect these. Such detection could be useful in memory management. It's not just the ability to detect a circular reference. It's the ability to contend with problems of that type.

Furthermore, two of those UC system CS grads had eerily similar responses, consisting of, "Oh, is that a graph algorithm?" plus a literal handwave. Is there some pool of TAs somewhere that has that attitude towards graph algorithms?

Hmm, this seems like a kind of understandable attitude if I'm reading it correctly. Hearing that it's a (standard) graph algorithm means it's a solved problem so you don't need to worry about it: pick up a text book or Google and compare the available options—easy.

It makes sense to me that junior programmers would not jump to state machines for something like, e.g., handling complex UI state. I see the reasons as twofold:

1) How would you describe the reason for using state machines over handling the state and transitions through if-statements and sprinklings of booleans? The best I can come up with is that it allows you to be explicit about which states exist, how they can be moved between, and what other code should execute when transitioning. It basically comes down to the architectural principle of 'cohesion': put related things near one another. This is pretty fuzzy until you've actually had concrete experience getting bitten by the lack of cohesion and the attendant unmanageable complexity. To sum up, it's something you need to learn through experience, especially because another aspect of the problem is identifying situations where it really makes sense, rather than applying some simplistic heuristic that state machines are always better.

2) The way they're taught in school is generally about the theory of automata, or they're used to diagram the behavior of some simple little system. Using them to control state transitions in applications is likely not taught at all, and isn't really directly implied by the seemingly related things which are taught.

> It is difficult to understand why so many junior programmers use a dozen boolean state variables instead of a state machine

It's not just juniors. The only times I've encountered state machines in the wild are almost exclusively games and virtual machines/interpreters.

The reason, to me at least, is pretty clear. In both games and VMs, you mostly know the rules ahead of time. You have a clear picture of the opcodes you want to implement (add, mul, call, jump, etc.) or the game rules you need.

But for most development, especially with multiple stakeholders or customers and multiple developers working independently, the rules aren't known at the start. In fact, the majority of requirements are brought up months or years after the first line of code is even written. It's like the parable of the blind men and the elephant (https://en.wikipedia.org/wiki/Blind_men_and_an_elephant). Each person, working separately, has a different idea of what this thing is that you're creating. Communication being an incredibly difficult thing, the requirements are never fully rendered coherent by the entire team at the moment where it would benefit them most (i.e. the beginning).

In my experience, this is when you need the skill to know to refactor to a state machine and/or refactor to some other form of composition (strategy objects, has-a, or whatever).

As taught in school, State Machines are formalisms that fit well when everything can be formally specified. As I have learnt, State Machines in programming help to reorganize things that have evolved haphazardly.

  > they go to the same dusty corner where so many other CS
  > topics that are never used in practice lie already.

> The most powerful automaton is the Turing Machine, to which every general-purpose computer is equivalent.

It's the most powerful realizable automaton (although it's not even fully physically realizable because of the infinite tape), but people have studied far more powerful automata:




In these models the oracle is so called by analogy to prophetic oracles of antiquity, who were believed to obtain reliable information from an incomprehensible and supernatural source. :-)

> It is difficult to understand why so many junior programmers use a dozen boolean state variables instead of a state machine, given that they learn it in CS or similar courses.

If you think that's a head-scratcher, the fact that senior developers consistently do the exact same thing must really confuse you!

This reminds me of my old mentor Julian Noble's Forth code that makes state machine transition tables valid code: http://galileo.phys.virginia.edu/classes/551.jvn.fall01/fsm....

This is another important argument against class-based object-oriented programming. There are only very few "objects" as small as the ones conventionally created in OOP. If money is withdrawn from a bank account, it's going to be put somewhere else. Withdrawing money from an account is not an operation on an account, but on a larger system that includes the account. So making an "account" class with a "withdraw" method is a bad idea.

Very often a program has only one or very few meaningful state machines, and pretty much always they are "global", i.e. there is only one of its kind in a process.

Just use global state, it has the best possible syntax for OOP :-)

In the real world nobody would ever create an Account class with a 'withdraw' method. Withdrawals are always first-class entities that have unique IDs and states of their own. Every Withdrawal is itself a request with a lifecycle that, only if completed successfully, results in an event and changes to the system.

This example is very reductive. In a rich and constantly changing domain like banking using a state machine to model core entities like an Account almost always ends in disaster. It simply doesn't work because businesses are complex and ever-changing. Virtually every time I've seen a 'STATE' property in a core entity, especially if it's a public property, it has evolved over time to become a horrendous thing that nobody really understands but it is now integrated throughout so many codebases that people have no choice but to continue supporting it. Ugh.

The real point here is that state machines are mathematical, highly specified constructs. They do not change well. If you are working on a protocol that can be specified precisely, sure use a state machine. If you're working on a business entity think long and hard about whether these requirements are truly fundamental and will never change. And if you really do think you have a state machine at least don't make it public.

What you often see instead of a simple state enum is 5 boolean getters and a bunch of logic that has things like "panic if this bool is true and this one isn't, that should never happen." Whereas with a simple state enum representing the same thing you can often just make the invalid combinations have no representation at all.
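A small Python sketch of that refactor (the account states here are invented):

```python
import enum

# Boolean soup: four independent flags admit 16 combinations,
# most of which are meaningless ("open AND closed?").
# is_pending = is_open = is_frozen = is_closed = False

# One enum admits exactly the states that actually exist.
class AccountState(enum.Enum):
    PENDING = enum.auto()
    OPEN = enum.auto()
    FROZEN = enum.auto()
    CLOSED = enum.auto()

def can_withdraw(state: AccountState) -> bool:
    # No "panic if both flags are set" branch is even possible.
    return state is AccountState.OPEN

print(can_withdraw(AccountState.OPEN), can_withdraw(AccountState.FROZEN))  # True False
```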

Having worked on actual banking software, I agree that an Account object with a "withdraw" method is not a real-world thing.

But having also blogged for a while... There are no good examples that are complex enough to be realistic and yet simple enough that we can concentrate on the programming principle and not be distracted by the thing being modelled :-D

Now as to one or two state machines... This has not been my experience. I have worked on some systems that had dozens of state machines, nearly every domain model was a state machine of some kind.

Yep - The only useful explicitly coded state machines I have seen are the massive, generated ones. Like DFAs representing regular expressions.

And once you're comfortable with FSMs, you might try Abstract State Machines (ASMs) for formally specifying or verifying software:


Microsoft even had a gool, AsmL, for this method. TLA+'s PlusCal specs look similar to some I've seen in ASMs, too.

Microsoft has the best gools.

Microsoft has the only gools.

State machines also make excellent database constraints. Trigger based enforcement removes the need to implement machines in application languages and prevents bad code from inserting bogus states. This only matters if you're storing state data in a DB. For example I wrote a simple trigger based fsm extension for postgres:


Shameless plug: I’ve been working on some open-ended state-machine-like constructs in Clojure and JavaScript - https://github.com/notduncansmith/factfold

The idea is to capture the useful properties of state machines without being constrained to enumerable states.

I’m currently working on a program to edit these machines visually.

Fun fact: The J language implements general finite state machines as a language primitive -- `;:`


State machines are my #1 „magic trick“ for writing reliable code in Swift. In Swift you can attach values to enum cases, so you have implicit data stored with each state. UI states are very well suited for this (e.g. .loading, .success(value), .error(error, cached)), as are API request states (.initial, .fetching(isRetrying), .success(value), .error(error)). Each can cycle through each state, and each state transition can be handled and animated or ignored. I cannot stress enough how much this improves the resulting code. Most bugs I get now are almost „intended“, in that I chose a certain behaviour for a certain state but that decision was wrong (rather than the code flying in my face unexpectedly).

I find the original example to be much clearer. If you want to know about all the valid transitions from the 'held' state, search for all usages of 'held'. Adding complicated dispatching doesn't help readability.

This seems a lot like the expression problem [1]. There are no perfect solutions. A problem can be split along multiple dimensions, but not all at once, or at least not as code saved in a text file.

Whether you split the problem first by method or by state, you will have code that's located far away from closely related code.

[1] https://en.wikipedia.org/wiki/Expression_problem

Eh. State machines have their place, and I find them especially helpful for simultaneous code of all kinds, but for run of the mill business logic I find they just get in the way of the code. My rough rule of thumb is this:

1. Can simultaneous actions hurt here?

2. Would a database transaction work instead?

Iff the answer is "yes" to the first and "no" to the second, then consider state machines. Make sure the answer to #1 is really "yes", though. For things like animations, consider libraries like Ember Concurrency, which give you fine-grained control over queuing and restarting things without the muck of a bunch of states and transitions.

It's unfortunate that the article (and Wikipedia, too) uses "state machine" to refer to finite state machines. In program semantics and formal verification, "state machine" is a common term of art, but it refers to either a finite- or an infinite-state machine, with infinite being the presumed default unless otherwise stated. If people come to associate the concept of a state machine with the useful but more narrowly applicable concept of a finite state machine, they may miss out on this important and general concept.

In program semantics and formal verification, a state machine is a mathematical mechanism for describing an often nondeterministic discrete (software or hardware) system as a mathematical relation, →, on states, where if `s → t` (i.e., the pair <s,t> is in the relation), then a transition from state `s` to state `t` is possible. For example, both Turing machines and the lambda calculus are often naturally described as state machines. A finite state machine is just a state machine with a finite number of states.

There are more generalized concepts of great interest, including infinite state machines and (as mentioned elsewhere in this discussion) hierarchical state machines.

However... A blog post with the stated aim of alerting programmers to the opportunities for refactoring a convoluted domain model into something with more structure has to start somewhere, and leave something out.

The great thing about discussions like this is that more experienced programmers such as yourself can chime in with exactly this kind of observation.

I invite you to post a link or two for others to read. I'm sure the community will appreciate it.

May I chime in? A gentle introduction to process algebras [^1] shows some examples of how you can represent programs as (possibly infinite) state machines. Essentially you define some operators to compose smaller programs (aka. processes) into larger ones, hence the name "process algebra". These definitions allow you to "see" your process as a generalization of a state machine known as a labeled transition system [^2]. And then you can exploit this representation to verify properties about the process. Lots of fun!

[^1]: https://pdfs.semanticscholar.org/12d9/eae1638729aeb237b5be44...

[^2]: These rules are called an operational semantics. There are other ways of giving semantics to a formal system: for instance, denotational semantics exploit recursive mathematical definitions. Each way has its pros and cons.

State machines (which include both labeled and unlabeled transition systems) may indeed serve as operational semantics of some formalisms, but they can be denotational semantics of others, or even be studied completely separately from any programming formalism.

Not only process algebras, but even the lambda calculus is naturally expressed as a state machine, where the states represent the expression, and transitions represent possible reductions (lambda calculus is nondeterministic).

Oh, I was merely commenting on the use of the unqualified "state machines" as referring only to finite state machines. Wikipedia indeed claims this is the default meaning (i.e., finite unless stated otherwise), but I am not familiar with it being the default, and, in any event, don't think it should be, as that would be confusing.

Thanks for the reply!

Now as to my request... Any particular essay or other resource about infinite state machines you'd care to suggest?

Agreed. I have worked with Petri nets in the past, and found them less restrictive than FSMs (which are just a subset of PNs).

But giving your classes explicit state is certainly very valuable. I even wrote a tool that would take an ASCII-art diagram of a PN embedded in the comments of a class, and generate the state definitions and methods within the class.

The sample code is fine for JavaScript, but it begs to be written using pattern matching, like that in Elixir/Erlang. If you have the current state and an event to apply to that state (similar to Redux), you can pattern match on pieces of the current state and the event to apply to that state. Your event handler could look like:

```
def handle(%{state: "open", balance: balance}, %Deposit{amount: amount}) do
  {:ok, %{state: "open", balance: balance + amount}}
end

def handle(%{state: "closed"}, %Deposit{}) do
  {:error, "invalid action: account closed"}
end
```

Every state/event pairing is explicitly defined as a separate clause of the `handle` function. There are countless ways to build a state machine like this, but the basic pattern-matching primitive seems to simplify things a lot.

An extension to JavaScript to support this has been proposed; I'm hopeful that it gets considered seriously: https://github.com/tc39/proposal-pattern-matching

The state machine is a powerful, clear metaphor for aligning your design. IMO hierarchical state machines (aka Harel statecharts) are a superior variant that makes behavior even clearer and easier to design.

> Harel statecharts

I wonder how those map to "Behavior Trees" in game development?

FWIW, I wrote one for a game engine, and I'm aware of several others. I can safely say they map cleanly enough that the names are more or less synonymous, at least as far as the underlying concept goes.

A behaviour tree is sort of a hierarchical state machine but one in which the states are not explicit. They fall out of how you arrange the nodes in the tree.

Actual hierarchical state machines are pretty common for driving a variety of things. Probably most commonly animation but also driving game state.

> Actual hierarchical state machines are pretty common for driving a variety of things.

My understanding of hierarchical state machines is that they're equivalent to context-free languages in terms of computational power.

When state machines are finite, and are well-formalized, they are fairly easy to understand and debug.

State machine descriptions can lie. Suppose, for instance, that the implementation of a state machine grows hair that is not folded back into the specification: originally, the outputs of state transitions were just pure outputs; now they are secretly fed back into the machine and give rise to additional states.

Or, originally, the state machine just consumed input events and changed state. Now, without it being clear in the diagrams and documentation, the machine can push back any number of events into the event stream to process again, including rearranged and edited versions of the events. This is just modeled as an "output" of certain transitions. Oops! (Why, it must be an output, because it's coded in some "output handler" function in the state machine implementation).
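To make that failure mode concrete, here is a hypothetical sketch (all names invented) of an interpreter whose transition handlers can quietly re-enqueue synthesized events -- behavior that the state diagram alone would never show:

```javascript
// A "state machine" runner whose transitions may secretly push edited
// events back onto the front of the queue, disguised as an "output".
function run(machine, events) {
  const queue = [...events];
  let state = machine.initial;
  const trace = [];
  while (queue.length > 0) {
    const event = queue.shift();
    const transition = machine.states[state][event.type];
    if (!transition) continue; // ignore events with no transition
    state = transition.to;
    trace.push(state);
    // The hidden hair: an "output handler" feeding events back in.
    if (transition.pushBack) queue.unshift(...transition.pushBack(event));
  }
  return trace;
}

const machine = {
  initial: "closed",
  states: {
    closed: { open: { to: "open" } },
    open: {
      // On "retry", quietly replay a synthesized "open" event.
      retry: { to: "closed", pushBack: () => [{ type: "open" }] },
    },
  },
};

console.log(run(machine, [{ type: "open" }, { type: "retry" }]));
// → ["open", "closed", "open"]
```

Two input events produce three transitions; the extra one exists only in the handler code, not in the documented machine.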

Another in the series "Show HN what is it that people actually learn in their first two years of undergrad CS"

You'd think that...

I did both EE and CS undergrad (we didn't have Computer Engineering at the time). In EE we started talking about FSMs very early on, in quite a bit of depth. In CS though, we didn't really talk about them until the Computability course in 3rd or 4th year, and even then it was in the very theoretical Finite Automata sense; applying FSMs to software design wasn't really a thing that was ever covered.

From the EE material, I started loosely applying those concepts to software design, but it wasn't until my first job in industry where I was lucky enough to be exposed to a system that was very deliberately designed as a system of FSMs.

We based the design of ours off of a pretty famous paper, whose name is entirely escaping me at the moment. Damn.

Are you thinking of Harel's paper "On Visual Formalisms"?


I'm curious to know what famous paper it is so that I can go and read it!

That wasn't it... but this also looks super interesting.

I emailed the mentor who first introduced me to it to see if he remembers what it was called. Watch this space!

Slightly offtopic, but I grew up in Regina and went to college at the U of S :)

I was in Saskatoon for a long time. Undergrad 2002-2007 (EE+CS), CS MSc 2009-2015ish (ran out of funding, ended up contracting a bunch). Any chance we crossed paths?

Sadly, while many people learn about state machines in their first two years (to varying degrees of formality), they often don't learn how to design code using them.

I think the only course where I explicitly designed programs using state machines was compilers (other than writing things to simulate state machines in my theory courses).

  > Another in the series "Show HN what is it that people
  > actually learn in their first two years of undergrad CS"
This is my entire technical blogging output from 2004-2018. Combinatorial logic, surreal numbers, function-oriented programming, everything.

I learned about state machines, along with push-down automata and others but these were in a formal automata class where we learned things like the Church-Turing thesis and other similar theory.

So there's learning about state machines in that context and then there's learning about state machines by actually building them in complex pieces of software, or replacing spaghetti code with FSMs. That's when I personally realized how valuable they are.

That is something I never learned in school - and something I learned throughout a career of experimenting, making mistakes, reading from books like http://gameprogrammingpatterns.com, learning from peers, and reading articles like this.

There's learn and there's learn... The way state machines were described at my uni was from the highly theoretical side, right next to context-free/sensitive languages. While it was a good description, I wouldn't expect anyone finishing the course to know how to actually implement one in real software - or, more importantly, why.

There's CS and there are coding jobs. And sometimes they don't overlap...

It's wonderful, isn't it? Can you imagine how useful this is to programmers that didn't have the opportunity to go to university or to programmers that are still in high school?

I learned about finite state automata in my first year of undergrad CS, but only in the context of recognizing regular grammars.

Finite state machines as used to accomplish practical computing tasks other than grammar recognition were something I didn't see at all in undergrad CS (with the very slight exception of networking where we did see a state diagram for TCP).

700 points, 200 comments. Clearly, you've been misinformed.

(Also, I did my CS undergrad at UC Berkeley and I don't recall ever studying state machines. All the useful engineering bits and design patterns I picked up after getting my first job.)

Which is totally OK and awesome. Simple ideas can be revisited and applied to solve complex problems.

I wish I'd learned about state machines... Most of the potentially interesting CS courses at my college were taught by terrible professors who were allergic to code.

Thankfully I enjoyed screwing around with game programming, and picked up Matt Buckland's Programming Game AI by Example[1]. I've used essentially the same FSM design described there over and over in my career.

[1] http://amzn.to/2FxZ54c

Check out statechart.js. It is an implementation of a Harel statechart and it has worked out well for my company.


Talking to some programmers over the last year, I was surprised that state machines were unknown to them. In embedded systems, and especially in HW design (FPGAs and ASICs), you have state machines everywhere controlling the behaviour of the data-processing part (the data plane) of the design.

I first started to really learn about finite state machines (FSMs) in compiler class. The parser in a compiler, for example, is an FSM. Tools like Bison generate FSMs based on state rules. And in general, as programmers we write code to parse stuff all the time. So how can FSMs be unknown? I'm a bit baffled.
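As a sketch of that parsing connection: a hand-rolled lexer is often just an FSM over character classes. This toy example (my own names, not Bison output) tokenizes words and numbers by tracking a current state and flushing a buffer on every state change:

```javascript
// A toy FSM lexer: states are "start", "number", and "word", and the
// character class of each input character drives the transition.
function lex(input) {
  const tokens = [];
  let state = "start";
  let buffer = "";
  const flush = () => {
    if (state === "number") tokens.push({ type: "NUMBER", value: buffer });
    if (state === "word") tokens.push({ type: "WORD", value: buffer });
    buffer = "";
  };
  for (const ch of input) {
    const cls = /[0-9]/.test(ch) ? "digit"
              : /[a-z]/i.test(ch) ? "letter"
              : "other";
    const next = cls === "digit" ? "number"
               : cls === "letter" ? "word"
               : "start";
    if (next !== state) flush(); // leaving a state emits its token
    state = next;
    if (state !== "start") buffer += ch;
  }
  flush(); // emit whatever token we were in at end of input
  return tokens;
}

console.log(lex("abc 42"));
// → [{ type: "WORD", value: "abc" }, { type: "NUMBER", value: "42" }]
```

Generated lexers work the same way, just with the transition table computed from regular expressions instead of written by hand.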

I spent the majority of my programming career developing business process automation solutions on top of a graph database that incorporated state management in the middleware layer as a first come citizen. State transitions were triggered, and it had a strict schema.

It was amazing the number of features we could push out with great stability and scalability using this middleware. We built huge systems that just ran like clockwork and were easy to explain and reason about.

The sad part is that the middleware is not commercially available without buying the vendor's domain-specific products.

Typo: ...first class citizen...

xstate is a nice javascript library which even generates charts out of your defined states in your app: https://github.com/davidkpiano/xstate

Similarly, I have an early-stage JS library for hierarchical statecharts. It revolves around nested contexts and an explicit message-passing interface for controlling states between parent/child/sibling. Example:


Off-topic, but why do you use 'let' instead of 'const'? In royal.ts you do that all the time even though your variables don't change! Is there a reason for that?

I have no idea why they did it, but I have argued for quite some time that const has very little practical value, because its effect is always lexical and local.

To demonstrate that this is so, I suggest that we can write a processor that converts all lets to consts where possible. I also demonstrate that this is so by asking, "What bug will const catch that our tests will not catch?"

_Immutable Data_, on the other hand, is marvellous. Immutability has nonlocal effects, and it is not something that can be trivially verified.
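The distinction can be shown in a few lines of JavaScript: `const` only forbids rebinding the name, while freezing protects the value itself wherever it travels (a small illustrative sketch; the variable names are mine):

```javascript
// `const` prevents rebinding, not mutation -- its effect is purely lexical.
const account = { balance: 100 };
account.balance = -50;  // allowed: const never protected the data
try {
  account = {};         // TypeError: rebinding is the only thing forbidden
} catch (e) {}

// Immutable data, by contrast, protects the value itself.
const frozen = Object.freeze({ balance: 100 });
try {
  frozen.balance = -50; // TypeError in strict mode; silently ignored otherwise
} catch (e) {}

console.log(account.balance, frozen.balance); // → -50 100
```

Which is the point about nonlocality: any code holding a reference to `account` can corrupt it, `const` or not, whereas the frozen object stays intact no matter who receives it.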

I agree with your assessment, but for me it is a message from the programmer to the reader: hey, this value will not be re-assigned during the following scope block. If I see a let, I know I must be careful. Of course this doesn't apply to properties of objects... which is a pity!

Thanks! I have some awesome updates to announce very soon!

The last time I looked at this (when having to deal with spaghetti-like execution flows through an Actor-based code base), I really liked the ideas of model checking with Colored Petri Nets and code generation (http://cpntools.org/), and the TLA+ language.

TIL that VHDL can be more readable than Javascript.

Just wait until you start connecting things with different types by mistake and watch the weird results you'll get.

And JavaScript is eating software... :sob:

No JavaScript support, but for nice finite state machines in C, C++, C#, D, Go, Java, and Ruby there is Ragel:


Worth pointing out that a Ragel-generated parser was used to create a pretty bulletproof HTTP parsing implementation in the Mongrel web server. I think that implementation is still used in other HTTP servers.

After playing around with some state machine libraries, I find a simple switch statement against a State enum to be the most straightforward approach. Especially if your language can guarantee an exhaustive switch.
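Plain JavaScript can't check exhaustiveness at compile time (TypeScript's `never` trick gives that), but a throwing `default` is a common runtime stand-in. A minimal sketch with invented names:

```javascript
// A switch over a frozen state "enum"; the default branch catches any
// state that was added to the enum but not handled below.
const State = Object.freeze({ OPEN: "open", HELD: "held", CLOSED: "closed" });

function deposit(state, balance, amount) {
  switch (state) {
    case State.OPEN:
      return { state, balance: balance + amount };
    case State.HELD:
    case State.CLOSED:
      throw new Error(`cannot deposit while ${state}`);
    default:
      // Unreachable as long as every member of State is handled above.
      throw new Error(`unhandled state: ${state}`);
  }
}

console.log(deposit(State.OPEN, 100, 25)); // → { state: "open", balance: 125 }
```

The appeal is that the whole state/event table is visible in one place, and an unhandled combination fails loudly instead of silently falling through.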


A colleague raised the point that state machines (or their richer variations) work best in models with low cohesion.

Anybody here have some thoughts/experience with this point of view?

I have related experience with QStateMachine versus hand-written (state variable(s) + switch + some freeform manipulation): if your state machine is "pure", using the framework is nice. If your state machine needs certain interesting transitions such as undo which don't fit in, you may be better off writing it yourself. If it fits at all, a funny state machine is better than no concept for your state variables. You still get most of the benefits.

I guess domain-specific transitions like undo and cohesion mean similar things to state machines: requirements that don't cleanly fit into conventional state machine models.

My (admittedly limited) experience had some high-cohesion models and it worked ok, with the caveat that they often didn't change independently. It was basically one FSM for a cluster of high-cohesion objects, and the events would mutate some subset.

I agree with your colleague somewhat. If you have a system where every domain object is inextricably linked to every other domain object, then you're going to have a state explosion trying to capture the states and transitions.

My gut reaction looking at a system like that is to start decomposing it, looking for accidental coupling that doesn't need to be there.

I'll admit it: I always had a kind of murky idea of how you might use state machines in a real program. This article gave me some idea. Good stuff.

When people talk about state machines, do they mean deterministic finite automata? Because I can see state transition tables being ubiquitous in code, but less use for deciding whether to accept or reject a sequence of inputs, which is essentially what a DFA does.

Not usually. "State machine" is typically used to refer to the more general concept of a finite state machine, which includes Mealy and Moore automatons. DFAs typically are not viewed as having an output (or arguably a single bit for the whole input sequence that represents acceptance), while FSMs can have outputs on each transition or in each state, which means it can induce some action.

So people basically mean a transition table/function, but without the start state or accepting states?

No. They mean a set of states, a set of input symbols, a set of output symbols, an initial state, a transition function, and an output function. The output function maps from the state to the set of output symbols (or sometimes from the state and input to the output).

Is there a formal name for this?

It's probably closest to a finite state transducer. No guarantees that what I described above is exactly the definition, though.
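A toy version of that definition -- states, an input alphabet, a transition function, and an output function on states (i.e., a Moore machine) -- might look like this in JavaScript (names are mine, for illustration only):

```javascript
// A Moore machine: the output depends only on the current state.
// This one reports whether the count of "1" inputs seen so far is
// even ("E") or odd ("O").
const moore = {
  initial: "even",
  transition: {
    even: { "0": "even", "1": "odd" },
    odd:  { "0": "odd",  "1": "even" },
  },
  output: { even: "E", odd: "O" },
};

function runMoore(m, inputs) {
  let state = m.initial;
  const outputs = [m.output[state]]; // Moore machines emit on state entry
  for (const symbol of inputs) {
    state = m.transition[state][symbol];
    outputs.push(m.output[state]);
  }
  return outputs.join("");
}

console.log(runMoore(moore, "1011")); // → "EOOEO"
```

A Mealy machine differs only in that the output function takes the current state *and* the input symbol, so it emits per transition rather than per state.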

Horrible propaganda, pure "1984 Newspeak"!

Just because the State is a big horrid Machine is no reason to "love" it!

Interesting... I wonder whether there's any Python implementation?

What fraction of developers don't know what a state machine is?

At least 90% by my observation.

This is the key point IMO. Putting stuff in your code that developers don't understand is a RISK.

Font on this website is so thin that it's unreadable.
