
Immutability is not enough - r4um
https://codewords.recurse.com/issues/six/immutability-is-not-enough
======
jorams
> This is exactly the kind of problem that functional programming was supposed
> to help us avoid!

No, no it isn't. Just like you wouldn't expect 2 * 3 + 8 to suddenly return 22
when you intend it to, you shouldn't expect the order in which you apply
functions to not matter.

When you pass around a changing state object to every step of your program,
you are effectively doing normal imperative programming with global-ish state.
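A quick sketch of the point (TypeScript, mine, not from the article): both functions below are pure, yet swapping the order in which you apply them changes the result.

```typescript
// Two pure functions: same input always yields the same output, no side effects.
const triple = (x: number): number => x * 3;
const addEight = (x: number): number => x + 8;

// Purity does not make application order irrelevant:
const a = addEight(triple(2)); // 2 * 3 + 8  = 14
const b = triple(addEight(2)); // (2 + 8) * 3 = 30
```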

~~~
coldtea
> _No, no it isn't. Just like you wouldn't expect 2 * 3 + 8 to suddenly
> return 22 when you intend it to, you shouldn't expect the order in which you
> apply functions to not matter._

Only the latter (that the order in which pure functions are applied doesn't
matter) has been touted forever as a benefit of functional programming, and as
what supposedly makes it "trivially parallelizable".

~~~
coldtea
Not sure what the downvotes are for.

I'm not myself saying that order doesn't matter with pure functions (a claim
that would be wrong).

I'm saying that the notion that "order doesn't matter" (without
qualifications) has been touted as a benefit of pure functions and FP -- and
indeed it has. Wrongly, but it has -- and that's an impression many people get
from reading such "introductions to FP".

In fact, the author of the original post we're discussing makes the exact same
observation: that order was supposed to "not matter", but he shows how it
does.

And if you want some more examples of what I said, here's a very lazily
collected sample; I just searched Google for "pure function" and "any order" etc.:

"A pure function is also robust. Its order of execution doesn’t have any
impact on the system." --
http://www.nicoespeon.com/en/2015/01/pure-functions-javascript/

"Finally, and here's the coup de grâce, we can run any pure function in
parallel since it does not need access to shared memory and it cannot, by
definition, have a race condition due to some side effect". --
https://drboolean.gitbooks.io/mostly-adequate-guide/content/ch3.html

"Another consequence is that it doesn't matter in what order functions are
evaluated — since they can't affect each other, you can do them in any order
that's convenient". --
http://stackoverflow.com/questions/4382223/pure-functional-language-haskell

"Pure functions may be run in any order, so it's more easy to parallelize
their execution". --
http://leonardo-m.livejournal.com/99194.html?page=1

"Pure functions can be evaluated in any order, the result is always the same.
Therefore, pure functions calls can be run in parallel". --
https://medium.com/@yannickdot/functional-programming-101-6bc132674ec5#.ukc2q41fl

Note how none of these examples bothers to give any qualification about
input-data dependencies between some of your pure functions and the race
conditions that arise from them...

What do newcomers to FP learn from those?

~~~
the_af
I think you're being disingenuous. Those articles aren't claiming what you say
they are claiming. Their wording may be imprecise (I agree with you on that),
but nobody is claiming that FP makes all chains of transformations magically
commutative.

You know what these people mean when they say pure functions may be evaluated
in any order: that as long as they have the _same input_ , the result will be
the same. But in contrast, the "magical commutativity" property doesn't mean
the functions get called with the same input! Or, to say the same in other
words, you cannot change a function and expect your program to be the same,
and we know that h1 = f . g and h2 = g . f are, in the general case, NOT the
same function! In fact, one of the two might not even compile!

 _At best_ you could argue the explanations you linked to are incomplete or
unclear. But it's dishonest to say they claim you can order the functions in
any way, regardless of their input -- which is what TFA was having problems
with.

~~~
coldtea
> _I think you're being disingenuous. Those articles aren't claiming what you
> say they are claiming. Their wording may be imprecise (I agree with you on
> that), but nobody is claiming that FP makes all chains of transformations
> magically commutative._

But I didn't say they do [say the latter].

Only that they sidestep the issue -- often confusing application with
evaluation (using them interchangeably, or e.g. using the imprecise "run in
any order" instead of "evaluate").

Very few of those articles mention commutativity, the distinction between
evaluation and application, data dependencies, etc. And even those mentioning
"order of evaluation" forget to clarify that it is not the same as
application, and confuse matters further by adding that we can "easily run
them all in parallel" (giving newcomers the impression that the order of
application doesn't matter at all).

> _At best you could argue the explanations you linked to are incomplete or
> unclear. But it's dishonest to say they claim you can order the functions
> in any way, regardless of their input -- which is what TFA was having
> problems with._

TFA author had those problems (or, more precisely, had the idea that they
shouldn't exist) because he was influenced by such articles -- coming to
believe that FP was "supposed to free us" from ordering issues.

You say that "it's dishonest to say they claim you can order the functions in
any way, regardless of their input", but that's just what this statement
implies:

"A pure function is also robust. Its order of execution doesn’t have any
impact on the system."

Potential issues with input not mentioned at all, as if they don't exist.

Or this:

"Pure functions can be evaluated in any order, the result is always the same.
Therefore, pure functions calls can be run in parallel"

Here they indeed write "evaluated", but the "can be run in parallel", sans
qualifications, muddies the waters.

~~~
the_af
I don't follow you. When people say "you can run these functions in any order"
(or "in parallel"), how do you interpret this to also imply "...regardless of
their input"?

Why does it matter if they say "run" instead of "evaluate"? What's the
difference, in your opinion?

Do people really have trouble with the (universal, not FP-specific) notion
that the input to a function matters?

Do we both agree that FP/immutability doesn't promise what the author of the
article thinks it promises, and that people (regardless of how clearly they
say it) do not claim it does?

~~~
coldtea
> _Why does it matter if they say "run" instead of "evaluate"? What's the
> difference, in your opinion?_

"Evaluation" solely implies passing an actual value to a function and getting
the result back. Taken in isolation, each evaluation (of a pure function) is
independent of whether 10 or 200 other evaluations happened before or after --
it depends only on the input it gets.

Saying "run", on the other hand, can be conflated to mean running a pure
function in the course of a program's execution, and there the order of
execution can matter with regard to the overall program's behavior, e.g. when
it affects the derived input of a subsequent function.

> _Do people really have trouble with the (universal, not FP-specific) notion
> that the input to a function matters?_

Yes, people do expect that pure functions will free them from race conditions,
for example -- only to discover, as the author did, that those still exist at
a higher conceptual level.

Heck, if we are to discuss the original article, isn't it a given that such
people as its author do exist and have the same confusion?

Or does the author seem like some totally ignorant programmer who is not
representative of other people (and whom we'd expect to get it all wrong)? It
doesn't come across that way to me.

------
jasonkester
Oh dear.

I read the first half of this article watching him take the simple OO game
loop and iteratively make it worse and worse, leaving it nearly unreadable,
its logic impossible to follow, with no way to know what the state would be at
any given moment. Then I came to this line:

 _Let’s take a moment to appreciate a few of the things we’ve gained by
rewriting this code in a functional style_

He actually thinks that he has improved the program. By piling functions up
six deep and replacing pos.x += 3 with four lines of object accessor code
(using strings for property names). Oh dear.

The second half of the article, where he realizes that he has not actually
gained anything by ruining his game, is just the icing on the cake.

I don't feel as though I've been sold on the benefits of Immutability today.

~~~
junke
I swear someday someone will write a blog post titled: "Controlled mutation of
state variables in a purely functional paradigm: a syntactic approach."

And just after that we will rediscover GOTO.

~~~
charlesism
You're joking, but "Goto" is coming back into fashion again. It's in Swift,
for example. There are few things more self-destructive than taking any of
these programming memes too seriously.

------
victorNicollet
The bugs described in the article stem from the existence of a "state" type.
If you're allowed to call draw() on the result of handleCollisions(), but not
on the result of processInput(), that means there are actually two state types
in your program:

    
    
        draw : cleanState -> unit
        handleCollisions : dirtyState -> cleanState
        processInput : cleanState -> dirtyState
    

These signatures allow draw(handleCollisions(processInput(state))) but forbid
the incorrect ordering draw(processInput(handleCollisions(state))).

Effect systems boil down to "discovering" these types eventually, but it is
easier for a human to think in terms of what invariants are enforced by a
given type. Even if there is no language support for actually checking those
invariants before execution.
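The signatures above can be sketched in TypeScript (the cleanState/dirtyState names follow the comment; the field and function bodies are invented for illustration). TypeScript is structurally typed, so a phantom "brand" field is needed to keep the two state types distinct:

```typescript
// Phantom brand fields make the two state types nominally distinct.
type CleanState = { x: number; readonly _brand: "clean" };
type DirtyState = { x: number; readonly _brand: "dirty" };

const processInput = (s: CleanState): DirtyState =>
  ({ x: s.x + 3, _brand: "dirty" });

const handleCollisions = (s: DirtyState): CleanState =>
  ({ x: Math.min(s.x, 100), _brand: "clean" });

const draw = (s: CleanState): void => { /* render s.x */ };

const state: CleanState = { x: 0, _brand: "clean" };
draw(handleCollisions(processInput(state)));    // typechecks
// draw(processInput(handleCollisions(state))); // rejected: DirtyState is not CleanState
```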

~~~
shiro
I think the uber point of the article is that, even with the help of types,
you cannot design the whole type structure until you know the entire picture.
Suppose you add some autonomous behavior to your character (e.g. path
planning, preprogrammed behavior, subcomponents moving autonomously or
otherwise interfering with other objects or user input, etc.). Then there are
many types of state, and you'll have to design which operation gets which type
of state and returns which type of state -- and that's essentially the same
design problem as chaining State -> State functions or working with imperative
procedures. Sure, types will keep you from shooting yourself in the foot: once
you design the whole type chain, the compiler will detect if you swap or drop
operations inadvertently. But during the stage of designing the type structure
itself, the issue described in the article still exists.

~~~
victorNicollet
My point is that you don't have to design the whole type chain ahead of time,
you can discover new type constraints as you go and apply them straight away
(in the same way that you don't have to wait until you finish your design to
start writing unit tests).

Maybe an experienced game programmer would notice straight away that it could
be a problem to draw a state that has not undergone collision detection, and
design this state constraint into the game straight away (as a type, as an
assert, etc.) and maybe another programmer wouldn't notice this requirement
until they ran the game and noticed something odd (or someone from QA notified
them) and had to dig through the system to find the underlying reason. But
once they shot themselves in the foot once, it makes sense to implement
_something_ to detect when they make the same mistake again.

~~~
shiro
During the design stage, you need working components, independently
implemented, and you need to experiment with various combinations. If you use
types, that means frequently rewriting them to match each reorganization of
the control flow. Types still help to prevent mistakes, but the main part of
the actual design process (like working on clay, or scrapping and rebuilding
with Lego pieces) is just as tedious as with other approaches. I read the
original article as trying to address that problem.

------
zby
Can anyone explain why it is:

    
    
      var newState =
          drawManuel(
          drawBackground(
          processInput(state)));
    

and not:

    
    
      var newState = processInput(state);
      drawManuel(newState);
      drawBackground(newState);
    

When I read the first, my intuition is that the application of drawManuel and
drawBackground matters for the value of newState. The second one makes it
clear that newState does not depend on drawManuel (or drawBackground).

It would also make the first bug easier to spot:

    
    
      var newState = handleCollisions(state)
      newState = processInput(newState)
      drawBackground(newState)
      drawManuel(newState)
    

The problem with the order of handleCollisions and processInput is now more
visible, because it is obvious that just those two change the state -- so we
concentrate on them. And there are fewer parentheses.

(Initially I used another variable here to make it more functional - but then
I decided this is useless - we are still in an imperative language)

I have a feeling that the author has some strange ideas about what functional
means. I stopped reading after this part.

~~~
dkersten
Indeed.

If the state is immutable, then drawing (which shouldn't modify the state) can
be, conceptually, done in parallel. While in practice this is unlikely to
actually happen, your code snippet reflects this conceptual property better
and is therefore, in my opinion, much easier to read: I know where the state
is modified and where it is merely read. I also know that drawManuel cannot
mess with drawBackground's data and vice versa.

~~~
ajuc
It's not only a conceptual difference.

It is a good practice to decouple state manipulation from drawing (even to do
drawing in different thread, and with different frequency, than state
manipulation).

This allows computers with vastly different processing power to run the same
simulation in sync, while the more powerful computer can draw 10 times as many
intermediate frames.

Alternatives are variable step length (makes it hard to write deterministic
simulation), or fixed step length and capping framerate to it (sacrifices
animation quality on better computers for easy implementation).

EDIT: also, the ordering between drawManuel and drawBackground probably does
matter, and there's probably some state related to drawing (like animation
frame numbers etc.), so there should be a separate screenState that reflects
that.

    
    
      state = processInput(state);
      screenState = drawBackground(state, screenState);
      screenState = drawManuel(state, screenState);

~~~
dkersten
I meant conceptually because of what you say in your edit. As far as your
state variable is concerned, they are parallel. But as you point out, there is
implicit screen state (either in the code or in the GPU or wherever) and when
updating _that_ , order certainly does matter.

If each draw* function drew to a temp buffer to be composited later, then they
could be parallel. But none of this is really relevant to the game-state
discussion, since we were both saying (and agreeing, I think) that decoupling
is good regardless. Even if just to aid reasoning.

------
fpoling
It is interesting that the solution to the order-dependency problem
essentially turns functions that update the state into functions that write a
program in a mini-language. When interpreted, that language updates the state.
In Haskell this is known as the Operational monad [1].

The nice thing with that approach is that it decouples complex application-
specific logic from state management, allowing both components to be tested
separately. Another bonus is that it enables things like Elm's time-traveling
debugger [2].

The drawback is that this adds a maintenance burden, as one now has to deal
with the intermediate language. If that language is a poor fit for the
problem, the burden can be rather high. This is especially bad in JavaScript,
where the lack of rich static types makes generating programs and writing an
interpreter rather error prone.

[1]
[https://wiki.haskell.org/Operational](https://wiki.haskell.org/Operational)

[2] [http://debug.elm-lang.org/](http://debug.elm-lang.org/)
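The idea can be sketched in a few lines of TypeScript (the command names and state shape are invented for illustration): update functions emit descriptions of changes, and a separate interpreter is the only place the state is actually threaded through.

```typescript
type State = { x: number; y: number };

// Update logic produces *descriptions* of changes, not new states.
type Command =
  | { kind: "move"; dx: number; dy: number }
  | { kind: "teleport"; x: number; y: number };

// A tiny interpreter: applies a list of commands to a state.
const interpret = (s: State, cmds: Command[]): State =>
  cmds.reduce((st: State, c: Command) =>
    c.kind === "move"
      ? { x: st.x + c.dx, y: st.y + c.dy }
      : { x: c.x, y: c.y },
    s);

// The command list is plain data: it can be logged, tested, or replayed
// (the basis of a time-traveling debugger).
const program: Command[] = [
  { kind: "teleport", x: 10, y: 10 },
  { kind: "move", dx: 3, dy: 0 },
];
const result = interpret({ x: 0, y: 0 }, program); // { x: 13, y: 10 }
```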

------
Kiro
> However, we still don’t know what they do with the state. Do they modify it,
> or do they only read it?
    
    
       var newState =
            drawManuel(
            drawBackground(
            processInput(state)));
    

I don't see how this makes it more readable. I still don't know which
functions actually modify the state. This is what I imagined the result to be:

    
    
        var newState = processInput(state);
    
        drawBackground(newState);
        drawManuel(newState);

------
cousin_it
Don't try to use functional programming for GUIs, simulations, or games.
That's what OO was invented for.

Don't try to use OO for compilers or formal verification. That's what FP was
invented for.

Don't try to use OO or FP for operating systems. That's what imperative
languages were invented for.

Anyone who insists on a single style for the whole software industry is
probably underinformed and overconfident.

~~~
Illniyar
I really like your sentiment, but I'm not sure that "OO was invented for X" or
"FP was invented for Y" is right.

They are abstract paradigms that were created with very little connection to
their underlying usage at the time (FP in particular stems from a mathematical
background, but most of OO does as well).

Rather, I would say "OO is more suitable for GUIs" and "FP is more suitable
for compilers" (though I'm not sure I agree with that).

------
raspasov
Here's a ClojureScript + core.async + Om + my own animation lib solution. No
surprises, no weird bugs. Dead simple functional code + CSP = WIN! : )

https://gist.github.com/raspasov/e894a0c6c26e5814d752b3fc9075c601

Here's also the anim lib gist, if anyone is interested
[https://gist.github.com/raspasov/f9ca712571efd932169e](https://gist.github.com/raspasov/f9ca712571efd932169e)

~~~
retrogradeorbit
Here's a complete web game a mate and I did for global game jam this year in
less than 24 hours from scratch using ClojureScript and core.async targeting
pixi.js.

[https://github.com/retrogradeorbit/moonhenge](https://github.com/retrogradeorbit/moonhenge)

It even has multiple weapons, enemy waves and an end sequence if you complete
the game!

The code is not as clean as it could be. For instance it has many more atoms
than it needs and lots of code that could be simplified and re-factored. But
time was extremely tight and it was a massive rush job.

But even so in the end it came together seamlessly with no weird bugs or
crashes of any kinds and it is rock solid. I attribute this to the immutable
data structures and pure functions used. I've done games before using mutable
OOP and in my experience they _never_ come together this easily or with so few
bugs.

~~~
raspasov
This is awesome - great job! : )

------
ktRolster
"You will never find a programming language that frees you from the burden of
thinking about bugs" [https://xkcd.com/568/](https://xkcd.com/568/)

~~~
pron
Moreover, the following theorem is easily proved: there cannot exist a useful
general-purpose programming language in which all (or even most) programs can
be efficiently verified to be correct, either by man or machine[1].

Where "useful general-purpose" means any language with forward branching and
some form of looping or recursion (even if bounded) -- it does not need to be
Turing complete -- and "efficient" means done in time polynomial in the size
of the program.

[1]: ... unless PSPACE = P, in which case some small class of languages that
are not quite general purpose but useful nonetheless may have this property,
because verifying useful finite-state machine languages is PSPACE-complete.

~~~
pron
For those interested: the proof for the Turing-complete and nearly-TC (as in
total functional programming) cases is the same as the proof of the time
hierarchy theorems[1], and for finite state machines the proof is based on
results showing that verification of a (concise program expressing a) finite
state machine is PSPACE-complete. Those results are not surprising, as it is
easy to see how any problem in computer science can be reduced (efficiently)
to verification, so being able to efficiently prove correctness in a language
capable of describing algorithms with super-polynomial complexity (which, BTW,
only requires the ability to loop/recurse to depth 2) would yield a
contradiction.

[1]:
[https://en.wikipedia.org/wiki/Time_hierarchy_theorem](https://en.wikipedia.org/wiki/Time_hierarchy_theorem)

------
theseoafs
It seems what the author learned is that "programming languages sometimes let
you do things in the wrong order", which is so obvious that it's kind of a
non-observation.

Anyway, functional programming on its own doesn't get you there, but
functional programming with a solid type system does prevent this bug from
happening.

------
akkartik
Great article, feeds all my prejudices. For example, I'm tempted to link to it
in
[http://akkartik.name/post/modularity](http://akkartik.name/post/modularity)
at the point where I say that "There seems the very definite possibility that
the sorts of programs we humans need to help with our lives on this planet
intrinsically _require_ state."

~~~
mbrock
That's why Haskell programmers, for example, try to represent state in clear
ways.

The basic mathematical model of functions and values turns out to be able to
represent state in several ways: the "monadic" way of composing state
transforming functions; the "functional-reactive programming" way of modelling
signals and events (which offers interesting solutions to the OP's game
problems); various "interpreter" patterns; and so on.

I point this out just because there's a widespread misunderstanding that pure
functional programming is somehow opposed to modelling state. It's definitely
not. It just involves building state models on the coherent basis of functions
and values.

Put simply, in Haskell, when your problem requires state, you can just use the
"State" type, just like when you require I/O, you use the "IO" type.
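Haskell's `State` boils down to pure functions of type `s -> (a, s)`; a minimal TypeScript rendering of the idea (all names here are mine, not a real library API) looks like:

```typescript
// A stateful computation is just a pure function: state in, (result, new state) out.
type State<S, A> = (s: S) => [A, S];

// Wrap a plain value without touching the state.
const pure = <S, A>(a: A): State<S, A> => (s) => [a, s];

// Sequence two computations, threading the state through explicitly.
const bind = <S, A, B>(m: State<S, A>, f: (a: A) => State<S, B>): State<S, B> =>
  (s) => {
    const [a, s2] = m(s);
    return f(a)(s2);
  };

// A counter: returns the current value and increments the state.
const tick: State<number, number> = (n) => [n, n + 1];

const twoTicks: State<number, [number, number]> =
  bind(tick, (a) => bind(tick, (b) => pure([a, b])));

twoTicks(0); // [[0, 1], 2]
```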

~~~
idanoeman
Absolutely. In "The Awkward Squad," SPJ says "Haskell is the finest imperative
programming language in the world," not entirely tongue-in-cheek.

:)

------
jw-
The problem doesn't really have anything to do with immutability; the problem
is dependency and composition. E.g.

f :: A -> B

g :: B -> C

will let me write g(f(x)), but not f(g(x)). It does not matter whether f and g
are 'pure'; the point is that f must come before g. I think the mistake is to
think that functional programming is about immutability, and that immutability
solves these problems. Immutability is there to keep the types honest; it is
fine to have effects as long as we are upfront about them and don't conceal
them under false names such as "int" and "bool".
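The point is easy to demonstrate in TypeScript (the concrete types and functions here are illustrative, not from the article):

```typescript
// Three distinct types force a direction on the pipeline.
type A = { raw: string };
type B = { parsed: number };
type C = { doubled: number };

const f = (a: A): B => ({ parsed: parseInt(a.raw, 10) });
const g = (b: B): C => ({ doubled: b.parsed * 2 });

const ok = g(f({ raw: "21" })); // typechecks: { doubled: 42 }
// f(g(...)) is rejected by the compiler: f needs an A, but g produces a C.
```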

------
fnordsensei
Just as it's possible to model immutability on top of mutable structures, it's
possible to model mutability on top of immutable structures. Regardless of
which way you go, you end up with the properties associated with the kind of
structure you're building. This is not too surprising, I would think.

------
DanielBMarkham
There's a funky thing going on here because of the environment this guy is
using: Javascript and a JS library to handle immutability.

Because of the way he's coding, he's doing a lot of horsing around with
several things at once. He's not writing functions and then composing them.
He's refactoring multiple functions at the same time. As it turns out, this
difference is important. Looking at it from one angle, from small pieces into
larger pieces, you'll be solving the order of functions almost immediately.
Looking at it from another angle, from big pieces into smaller pieces, you
solve problems in a different order. You make pieces with names on them that
do what you want, then you look at whether there's support in the code to do
what you want to do, regardless of how that integrates. In fact, you end up
(as he does) integrating later. You're not _composing_ , you're _decomposing_.

This is why one of the recommended ways to do FP is to use the REPL and get
the core, toughest function working first. Then start composing. Work outwards
from there.

The first function you'd probably write in this case would perhaps be
moveTheGuy which would move the guy from one place to another. You would
immediately be solving the problem of whether or not the guy could actually
move. Contrast this with the first function being updateTheWorld, where you
can start inventing all kinds of stuff that might or might not happen. Then
you horse around trying to figure out what should go under updateTheWorld.

I remember the first conference talk I saw with a guy talking about F#. It was
still new in the industry, and many of the folks who were using it were true-
blue OO coders.

The guy gets up, shows some code, then goes behind the scenes to show us the
"real" C# code. "As you can see, all this really is? It's a do-while loop. You
could code this way with C# and never touch F#."

Yeah, sure, but the environment you're in can help lead you to write better or
worse code. It seems to me that if I wanted to do pure functional stuff in
JavaScript I'd be pretty freaking careful how I coded. Lots of things there
point you in the wrong direction. Functional code isn't just regular code with
all sorts of weird syntax; it's actually a different way of looking at solving
problems.

------
_pmf_
It seems to be a common trend to claim that functional programming solves the
problem of state.

It can solve the problem of accidental state in the solution space, but it
obviously cannot change the fact that a specific problem space has stateful
characteristics.

Don't introduce accidental state, yes. But embrace the problem's states as
first class elements of your solution and don't try to hide them.

~~~
the_af
FP is all about embracing state as a first class element and reducing
accidental state. I hope nobody claims otherwise :)

The confusion is that people sometimes say "state" when they mean "mutable
shared state".

------
RichieAHB
I don't see how functional programming would ever solve the problem of
chronology. If the collision engine needs to know the new position _in that
frame_ then it needs to run that code first. You can't just run code whenever
and expect functional programming to fix it. Nor will assigning different
areas of the state to specific functions fix that either.

I think immutable data solves a lot of headaches and using Redux and Immutable
together in our current stack has shown me some of the benefits of
immutability and a more functional programming style. However, much of this
article was about immutability not being a silver bullet for things that, IMO,
were never going to be solved by immutability. It's like saying it would all
be solved by writing everything in Haskell; there are some "complexities" that
can't be hidden, e.g. the order of events. Maybe I missed something.

~~~
kristiandupont
My typical analogy is this: mutable state is to software as movable parts are
to hardware. There is a group of problems that cannot be solved without it,
but introducing it makes everything a bit more fragile.

------
yason

         var newState =
            drawManuel(
            drawBackground(
            processInput(state)));
    

This isn't how functional programming must work to be functional programming.
The point is always to write as much as possible in a functional, preferably
immutable, style, wherever it fits well. You can write a large portion of your
program that way.

The next sacrifice is to glue the functional calls together with more of a
stateful approach, but preferably keep the mutable state contained. For
example, have a single function that handles the state, calls into the
immutable functional code to do the heavy lifting, then sews everything
together and ideally returns something that is immutable and stateless --
little bubbles of state where absolutely needed. For example, the draw() calls
in the above example can't be functional, as drawing relies on side effects.
There's nothing to return from a drawing function, so those can't naturally be
chained into nested function calls.

The final level is usually some hodge-podge state management at the very top
level, where it becomes unavoidable in order to keep things organised and
beautiful. There's always some state, so you'll have to handle it somewhere,
but the more localized and minimized the state management, the better.
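One way to picture those "bubbles of state" (a sketch; the function names and state shape are made up): a pure core does the heavy lifting, and a single stateful function at the top glues it together.

```typescript
type GameState = { x: number; vx: number };

// Pure core: all the interesting logic, trivially testable in isolation.
const applyInput = (s: GameState, dir: number): GameState => ({ ...s, vx: dir * 3 });
const step = (s: GameState): GameState => ({ ...s, x: s.x + s.vx });

// The single bubble of state: mutation and side effects live only here.
let current: GameState = { x: 0, vx: 0 };
function frame(dir: number): void {
  current = step(applyInput(current, dir)); // pure pipeline
  // a side-effecting draw(current) would go here; nothing to return
}

frame(1); // current is now { x: 3, vx: 3 }
```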

------
susi22
I've said this a few years back in some other thread, but it's so fitting here
I'll say it again:

A few years down the road, I hope the JS folks take a very, very close look at
rule engines. These kinds of errors can be easily avoided with them.

There are actually folks in the ClojureScript world using clara-rules for
browser state/rendering, and it apparently works very well for them. I really
wish some bigger player would push in that direction.

~~~
Chris_Newton
_In a few years down the road I hope that the JS folks take a very very close
look at Rule engines. These kind of errors can be easily avoided with them._

I agree up to a point, but I think off-the-shelf rule engines will only ever
get us so far.

Front-end web development in recent years has been going through its “Visual
Basic phase”. We’ve been developing tools to help automate basic UI
interactions and communications between front- and back-end. Those often get
the job done better than the more manual techniques we used to use, but mostly
they work best with simple presentation like forms or charts, and with simple
data models and limited relationships — in other words, the kind of thing you
need for a simple CRUD app UI where the tricky issues like scalability are
handled on the back-end, like a lot of the web apps being developed today.

However, you didn’t see many people trying to build more demanding UIs in
Visual Basic: things like CAD applications or DTP software or flight simulator
games or tools for visualising and exploring tricky data sets. I agree that to
build more sophisticated web apps, the front-end world is going to need to
broaden its horizons. Tools that let you systematically represent large
numbers of rules without hand-crafting a custom function for every one of them
will surely be necessary in some cases. As with so much of web development, a
lot of lessons learned some time ago in other programming fields will be
relearned by a new group of developers.

However, that’s all still relatively easy. The hard part is how you reconcile
the effects of those rules when they conflict, and there aren’t any universal
right answers to that one. In fact, often there aren’t any good answers at all
that can be fully automated. That can create a whole new world of nasty,
awkward interactions where users have to actively deal with those conflicts.
Maybe even that isn’t going to be scalable or efficient enough, and you have
to design the entire system to operate with only incomplete or potentially
incorrect information and still have acceptable if not always ideal behaviour.
There are lots of interesting problems at this level, but a basic rule engine
is never going to solve them alone.

------
lliamander
I really liked this post. It reminds me of a similar post by James Hague[0]

A number of people seem to be hung up on what FP advocates mean by "order
doesn't matter". In all the literature I read when I first learned about FP,
it seemed pretty clear that it meant "function composition is associative",
_not_ "function composition is commutative".

But whether this is obvious is beside the point. The hypothetical confused
programmer is merely a rhetorical device to set the reader up to think more
deeply about how they handle state. FP teaches us to push state changes to the
boundaries of the system. OK, once it is there, now what do we do with it?

We handle it declaratively[1]. That's what treating the updates as data
allows.

[0][http://prog21.dadgum.com/189.html](http://prog21.dadgum.com/189.html)
[1][https://awelonblue.wordpress.com/2012/01/12/defining-
declara...](https://awelonblue.wordpress.com/2012/01/12/defining-declarative/)
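
The associative-but-not-commutative distinction can be sketched directly (a
minimal example; `compose` is a hypothetical left-to-right composition helper,
not from any library):

```javascript
// Hypothetical helper: compose two unary functions left-to-right.
const compose = (f, g) => (x) => g(f(x));

const plus1 = (x) => x + 1;
const double = (x) => x * 2;

// Associative: how we group the compositions doesn't matter.
const a = compose(compose(plus1, double), plus1);
const b = compose(plus1, compose(double, plus1));
console.log(a(3)); // 9, i.e. ((3+1)*2)+1
console.log(b(3)); // 9, same pipeline, different grouping

// Not commutative: the order of the functions does matter.
console.log(compose(plus1, double)(3)); // 8, i.e. (3+1)*2
console.log(compose(double, plus1)(3)); // 7, i.e. (3*2)+1
```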

------
hellofunk
Here is a related article for anyone curious about, or who uses,
persistent/immutable data structures:

[http://concurrencyfreaks.blogspot.nl/2013/10/immutable-
data-...](http://concurrencyfreaks.blogspot.nl/2013/10/immutable-data-
structures-are-not-as.html)

------
dschiptsov
Immutability as the absence of overwrites or mutations of existing values (in
Erlang one just introduces a new binding for a new value) is obviously enough.

State and I/O are easily encapsulated inside closures with CPS or with
explicit message passing.
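
A minimal sketch of that idea (all names hypothetical): an Erlang-style
"process" as a closure over its current state, updated only by explicit
message passing, with no binding ever overwritten:

```javascript
// Illustrative sketch: each message to the "process" yields a brand-new
// closure holding a new binding; the old state is never mutated.
function counter(count) {
  return {
    send(msg) {
      switch (msg.type) {
        case "inc":
          return counter(count + msg.by); // new closure, new binding
        case "get":
          return count;
        default:
          throw new Error("unknown message: " + msg.type);
      }
    },
  };
}

let proc = counter(0);
proc = proc.send({ type: "inc", by: 2 });
proc = proc.send({ type: "inc", by: 3 });
console.log(proc.send({ type: "get" })); // 5
```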

For a lazy language one needs some ADT, like Monad, for explicitly "enforcing"
a strict order of evaluation for a specific set of expressions, because I/O
_implies_ a strict order. Think of it as a transformation from a set to a
sequence (by _defining an order_ as an ADT).

Lifting State or IO into the monadic world is merely a trick to satisfy a
typechecker _and_ enforce a strict order by using an ADT. A strict language
with optional laziness could use ordinary type-tags.

Nothing to see here.

~~~
marcosdumay
> A strict language with optional lazynes could use ordinary type-tags.

And then you'll have a total order, which makes your code hard to parallelize.

More importantly, it mixes state updating (which can actually be done in
several orders, can be rewound, and has several other interesting properties)
with IO, list mapping, exception handling, and everything else.

------
sesquipedalian
First of all, it's great that you're exploring functional programming
paradigms and their applicability to game programming/state management.

That being said, I have to point out some things that bothered me: I believe
you're conflating immutability and commutativity -- these are not the same
thing. Functional programming does not absolve you of paying attention to the
order of operations. For example, if you had two functions plus1 and plus2,
you could order those any way you like, since summation is commutative.
However, if you had plus1 and divide2, you would have to pay attention to the
order of application due to non-commutativity.
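
Spelled out in code (a minimal sketch; the function names are just the ones
from the comment):

```javascript
const plus1 = (x) => x + 1;
const plus2 = (x) => x + 2;
const divide2 = (x) => x / 2;

// Additions commute: either order gives the same result.
console.log(plus1(plus2(10))); // 13
console.log(plus2(plus1(10))); // 13

// Addition and division do not: order changes the answer.
console.log(divide2(plus1(10))); // 5.5, i.e. (10+1)/2
console.log(plus1(divide2(10))); // 6, i.e. (10/2)+1
```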

------
wodenokoto
I've never really dived into functional programming but often end up reading a
lot about it here.

There's always been something about state and immutability that irked me the
wrong way and I'm very happy to see it being put into words and code examples.

~~~
ktRolster
Well, immutability is something that can help you in non-functional
programming languages too: even C has const. It doesn't solve every problem,
but it reduces the cognitive load.

~~~
wodenokoto
You are absolutely right. What I meant, and what I should have written, was
"_completely_ stateless and completely immutable".

~~~
minitech
You can still model state pretty straightforwardly when everything’s
immutable. This article shows that different designs have different
applications more than anything, and, well, duh.
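
One common way to model state straightforwardly with immutable values is to
fold a sequence of events into a current state (a minimal sketch, not from the
article; names are illustrative):

```javascript
// Every intermediate state is a fresh value; nothing is mutated in place.
const events = [
  { type: "add", n: 2 },
  { type: "add", n: 5 },
  { type: "mul", n: 3 },
];

// Pure transition function: old state plus event yields new state.
const step = (state, e) => (e.type === "add" ? state + e.n : state * e.n);

const finalState = events.reduce(step, 0);
console.log(finalState); // 21, i.e. ((0+2)+5)*3
```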

------
chriswarbo
I think this is an example of something I see often, where a particular idea
(e.g. immutability, interfaces, modularity, etc.) gets confused with a
particular feature of a particular language which shares the same name. In
this case, the author's code isn't using immutability; it just-so-happens to
be creating an object called "Immutable.Map".

Following the author's terminology, mutability is when we replace the contents
of a memory location with different values, e.g.:

    
    
         Time | RAM
        ------+--------------------------------
         0    | foo
         1    | bar
         2    | baz
              .
              .
              .
    

The author is using the term immutable to mean never replacing the contents of
a memory location, so in their view this is immutability:

    
    
         Time | RAM 1 | RAM 2 | RAM 3 |
        ------+-------+-------+-------+--...
         0    | foo   |       |       |
         1    | foo   | bar   |       |
         2    | foo   | bar   | baz   |
              .       .       .       .
              .       .       .       .
              .       .       .       .
    

However, this is really just an implementation detail; Javascript programming
isn't really about values in RAM (unlike, e.g. C where pointers are first-
class values in the language). Instead, Javascript is conceptually built
around variables, objects, scope, etc. So let's see how these RAM details
translate to the values of variables.

The "mutable" version:

    
    
         Time | "state" var
        ------+-------------
         0    | foo
         1    | bar
         2    | baz
              .
              .
              .
    

The "immutable" version:

    
    
         Time | "state" var 0 | "state" var 1 | "state" var 2 |
        ------+---------------+---------------+---------------+--...
         0    | foo           |               |               |
         1    | foo           | bar           |               |
         2    | foo           | bar           | baz           |
              .               .               .               .
              .               .               .               .
              .               .               .               .
    

So far so good. However, scope plays a large part in Javascript programming:
we can only access variables which are in scope, and what's more the _same_
variable name can refer to _different_ values depending on the scope we're in.
This is where the author confuses the _concept_ of immutability with the name
of a library they just-so-happen to be using. Their code keeps generating new
scopes, in which the variable name "state" refers to a different value than in
the previous scopes. Conceptually, the variable "state" is not immutable
because it keeps referring to different things, even though the implementation
details could be argued to be immutable.

As an analogy, consider a display which shows the current value of a stock
price. That is certainly not immutable, and trying to use that reading to
perform calculations will be tricky since it keeps changing. This is like the
author's original, mutable "state" variable.

Now consider a plotter drawing a graph of how that price varies over time:

    
    
              |
              |    _--.      ____
        Price |   /    \    /    \
              |__/      \__;      \__/
              |                       
              +-----------------------------------
                   Time              ^
                                     |
                                    Now
    

This is like the author's "immutable" version, where values are never changed
once they're defined. This seems more useful for performing calculations;
however, because the author keeps creating new scopes, they have no access to
these old values! We have no access to the graph paper, we're only allowed to
see the current position of the pen; which is exactly the same as if we only
had the original display!

If we take scope into account, we actually get the following:

Mutable version:

    
    
         Time | Scope | Current "state"
        ------+-------+-----------------
         0    | 0     | foo
         1    | 0     | bar
         2    | 0     | baz
              .       .
              .       .
              .       .
    

Immutable version:

    
    
         Time | Scope | Current "state" | "state" var 0 | "state" var 1 | "state" var 2 |
        ------+-------+-----------------+---------------+---------------+---------------+--...
         0    | 0     | foo             | foo           |               |               |
         1    | 1     | bar             | foo           | bar           |               |
         2    | 2     | baz             | foo           | bar           | baz           |
              .       .                 .               .               .               .
              .       .                 .               .               .               .
              .       .                 .               .               .               .
    

I do think the author has a few good points to make; however, I don't think
it's an argument between being mutable vs. being immutable, so much as one
between being implicit vs. being explicit.
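
A sketch of that implicit-vs-explicit distinction (using frozen plain objects
in place of the article's Immutable.Map, to stay dependency-free):

```javascript
// "Immutable" in name only: each step rebinds `state`, so the earlier
// values fall out of scope; we see only the pen, never the graph paper.
let state = Object.freeze({ price: 100 });
state = Object.freeze({ price: 105 });
state = Object.freeze({ price: 103 });
// Only the latest price is reachable now.
console.log(state.price); // 103

// Explicit: keep every value in scope, like the plotter's graph paper.
const history = [Object.freeze({ price: 100 })];
history.push(Object.freeze({ price: 105 }));
history.push(Object.freeze({ price: 103 }));
console.log(history.map((s) => s.price)); // [100, 105, 103]
```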

~~~
pdubroy
Author here. You've actually summarized one of the main points of the article
-- here's what I wrote in the "What's going on?" section:

> In a sense, we are modelling a mutable object. Implementing that model with
> immutable data structures does not eliminate the difficulties of dealing
> with state updates -- it just shifts them to a different conceptual level.

~~~
G4BB3R
I thought you would recommend Elm in the end of the article :P

------
tome
The point of functional programming (or any good programming style, really) is
to eliminate unnecessary dependencies (by which process the necessary
dependencies are clarified), _not_ to remove necessary dependencies (which by
definition is impossible).

------
leshow
reminds me of Elm's Effects system.

great article.

------
hathym
sounds like a solution looking for a problem.

------
kordless
Not one mention of the blockchain in the story or here. How interesting.

------
riprowan
As a non-game-programmer, the idea of "collision detection and prevention"
seems like a case of bad modeling.

In the real world, two objects don't start by occupying the same space, then
deciding that's not a valid state and taking corrective action.

~~~
Kiro
Can someone explain why I cannot just check for collision before moving the
character?

~~~
catmanjan
If the character moves faster than the space between him and the block, he
will not be able to stand flush with the block; this can be fixed in other
ways, though.
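
A 1-D sketch of the problem and one of those fixes, clamping the move so the
character lands flush (all names and numbers are illustrative, not from the
article):

```javascript
// A wall occupies position 100; the character may stand at most at 99.
const wallX = 100;

// Naive pre-move check: refuse the whole step if it would cross the
// wall, so a fast character gets stuck short of it.
function naiveMove(x, dx) {
  return x + dx >= wallX ? x : x + dx;
}

// Clamped move: travel as far as possible, up to flush with the wall.
function clampedMove(x, dx) {
  return Math.min(x + dx, wallX - 1);
}

console.log(naiveMove(90, 20));   // 90, stuck 10 units from the wall
console.log(clampedMove(90, 20)); // 99, flush with the wall
```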

