
A Worst Case for Functional Programming? - platz
http://prog21.dadgum.com/189.html
======
crntaylor
"Turning your code inside out" is a great piece of advice, as it often opens
up abstractions and refactorings that you didn't realize were there to begin
with. The same idea is behind several common object-oriented design patterns
(Command, Mediator, Strategy, Visitor), but it's baked into many functional
programming languages.

For example, say we want to write a function to compute square roots. A common
approach to computing sqrt(n) is to start with a guess of x = 1.0, and keep
replacing x with 0.5 * (x + n/x) until the relative difference between
subsequent guesses is small enough.

    
    
      sqrt n = loop x0 x1
       where
        loop x y = if converged x y
          then y
          else loop y (0.5 * (y + n/y))
        converged x y = abs (x/y - 1) < 1e-10
        x0 = 1.0
        x1 = 0.5 * (x0 + n/x0)
    

That's good, but it has the test for convergence all mixed up with the logic
for generating the guesses. What if we could factor out the code that
generates an infinite sequence of guesses?

    
    
      sqrtGuesses n = go 1.0
       where
        go x = x : go (0.5 * (x + n/x))
    

Note that this works in Haskell because of laziness, but it's simple in any
language that has a mechanism for delaying computations. Now that we've
decoupled the method for generating guesses, we can write a function that
checks for relative convergence

    
    
       converge (x:y:rest) = if abs (x/y - 1) < 1e-10
         then y
         else converge (y:rest)
    

and define the square root function in terms of these

    
    
       sqrt n = converge (sqrtGuesses n)
    

The logic of the program is now much cleaner, and we've got a useful function
'converge' which can be re-used in other parts of the program.

This kind of 'turning inside out' is often possible in functional languages,
often leads to more compact and more compositional code, and is one of the
reasons that I enjoy programming functionally so much.

~~~
revelation
C# can do this:
[http://dotnetfiddle.net/dCm475](http://dotnetfiddle.net/dCm475)

(Sorry, it's just a not-so-well-known but awesome feature)

~~~
profquail
Now here's the same example in F# (a functional language); it compiles into IL
similar to that produced by your C# code. Which do you find more readable?

    
    
        module Program =
            let n = 42
        
            let rec sqrtGuesses x = seq {
                yield x
                let next_x = 0.5 * (x + (float n / x))
                yield! sqrtGuesses next_x }
    
            sqrtGuesses 1.0
            |> Seq.pairwise
            |> Seq.pick (fun (x, y) ->
                if abs (x - y) < 1E-10 then Some y else None)
            |> System.Console.WriteLine
            
            System.Console.WriteLine (sqrt (float n))

~~~
username223
It's a wash. To understand that thing you posted, I'd have to look up what the
exclamation point does to "yield", why adding it forces you to repeat the
function name, and whether those "|>" sequences are typos or some weird
operator. The other version has some random StudlyCaps I have to look up.
Meh.

~~~
codygman
Well it's only a wash if you are familiar with one and not the other (and
that's the case IIUC). You don't intuitively know what the C# version does
either.

For future reference, "|>" (like "$" in Haskell) is just shorthand for
wrapping everything up to the end of the expression (or the next "|>" or "$")
in parentheses.

So in Haskell:

    
    
        sum $ filter (> 2) [0..10]
    

is a less noisy way of saying:

    
    
        sum (filter (> 2) [0..10])
    

About the exclamation point, I believe it forces the expression to be
evaluated. At least that is the case in Haskell.

~~~
platz
|> would be the flip of $ since the arg comes before the function
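
Defining |> in Haskell makes the symmetry explicit (|> is not in the Prelude;
this definition just mirrors F#'s operator):

```haskell
-- '$' applies a function to the argument on its right; '|>' feeds the
-- argument on the left into the function, i.e. ($) with arguments flipped.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- Same computation, read in opposite directions:
pipelineF :: Int
pipelineF = [0..10] |> filter (> 2) |> sum   -- F#-style, left to right

pipelineH :: Int
pipelineH = sum $ filter (> 2) [0..10]       -- Haskell-style, right to left
```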

~~~
dllthomas
So equivalent to Control.Lens.(&)

------
fiatmoney
I also find it to be the exact other way around.

A lot of mathematical transformations, on the inside, break down neatly into
sequential mutations, and you really do need to express them imperatively for
maximum performance. Fortunately, you can look at the math and verify they
compose appropriately, and use testing to verify you've implemented each step
correctly.

But if I'm doing anything involving state, _particularly_ interdependent state
like a simulation, functional algorithms make a lot of sense as a way to
ensure that I always have a valid object handle (because I'm always returning
a new, valid object as a result of a computation) and give you a somewhat
easier route to refactor while maintaining the same semantics (because there
are no rogue mutations or global state updates I need to conform to), as well
as parallelizing updates to the object graph for performance.

Clojure's semi-famous "ants" demo is a good example of this.

[http://www.youtube.com/watch?v=dGVqrGmwOAw](http://www.youtube.com/watch?v=dGVqrGmwOAw)

[https://gist.github.com/spacemanaki/1093917](https://gist.github.com/spacemanaki/1093917)

~~~
tikhonj
Hmm, if the mutation is just purely sequential, it's just an implementation
detail. Ideally, you would express the solution at a high-level, close to the
mathematics and _without_ mentioning mutation, and have your code optimized to
the mutable version _for you_. After all, it's just a series of
transformations over some data--exactly what functional programming is!

This sounds like an appeal to a "sufficiently smart compiler", but really
we're already part of the way there with things like vector and stream fusion
in Haskell. I know that some of Haskell's vector functions are already smart
enough to run in-place if they can get away with it.
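
A stdlib-only illustration of the fusion idea: with optimisations on, GHC
rewrites this pipeline into a single loop (for lists via foldr/build fusion;
the vector library's stream fusion plays the same role for arrays):

```haskell
-- With -O, GHC's foldr/build fusion rewrites this pipeline into one
-- accumulating loop: no intermediate list is ever allocated, even
-- though the source reads as "build a list, map over it, sum it".
doubledSum :: Int -> Int
doubledSum n = sum (map (* 2) [1 .. n])
```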

~~~
username223
> This sounds like an appeal to a "sufficiently smart compiler"

Because it is. It's a long road from "gee, I changed something, better copy
everything" to something as good as FFTW or ATLAS. GHC is pretty smart, but
not smart enough to be trusted.

~~~
shoki
> It's a long road from "gee, I changed something, better copy everything" to
> something as good as FFTW or ATLAS.

CUDA implementations of FFTs and matrix operations are faster than both FFTW
and ATLAS, and they are neither sequential nor functional.

CUDA, C, and Haskell each have domains in which they typically outperform the
others. The math vs. simulation divide sketched in this blog post is more an
expression of the author's own psychology than anything else.

~~~
kkjkok
To be fair, the FFTW/CUDA thing is due to fundamentally different hardware
architectures which drove design constraints for these types of libraries.
FFTW was never meant to run on a dedicated, ultra-parallel processor with
highly optimized floating point instructions (GPU), but it is incredibly fast
considering it runs on general purpose hardware. I am sure the FFTW authors
could have done _something_ to squeeze out more performance if they controlled
both the hardware and software as NVidia does. And the transfer time to/from
the GPU does matter, especially for smaller/more frequent operations...

All that aside, the psychology of pure functional vs. pure OOP vs. some hybrid
methodology is really interesting, and even the view of what a "clean
solution" is becomes tainted based on past experiences with other code written
in that style.

------
fhars
When he started to write about writing (military) simulations in a functional
style, I really expected a reference to this paper
[http://haskell.cs.yale.edu/?post_type=publication&p=366](http://haskell.cs.yale.edu/?post_type=publication&p=366)
which is one of the rare examples of a real empirical experiment in software
engineering. (And it is going to turn twenty this year; we are not learning
very fast as a community.)

~~~
NigelTufnel
This is not the fairest paper (it could be described as: two Haskell language
designers kick the ass of a program manager struggling with pre-STL C++), but
it's an interesting read.

It would be great to see this paper's experiment conducted in the modern
world. Haskell vs Python vs C++11 vs Clojure vs whatever.

Actually it would be great to have a site like
[http://benchmarksgame.alioth.debian.org/](http://benchmarksgame.alioth.debian.org/)
that focuses on program readability, development speed etc.

~~~
BSousa
Not a holy grail, but take a look at Rosetta Code:
[http://rosettacode.org/wiki/Rosetta_Code](http://rosettacode.org/wiki/Rosetta_Code)

------
dmlorenzetti
There's nothing wrong here, but it all feels to me like a bit of a false
dichotomy. When discussing the functional approach to the simulation, the
author still talks about objects, such as tanks and shells, whose state gets
tracked. So it isn't as if an object approach has been completely abandoned.

Similarly, there's nothing preventing an implementation in an object-oriented
language from making the states of things at the beginning of a time step
immutable. In fact, this is done all the time in scientific simulation
(necessarily so, because you might try a time step that is too long, and have
to re-try a shorter step from the same initial state).
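
That retry-from-the-same-state pattern can be sketched like this (the ODE,
the Euler step, and the step-doubling error estimate are all toy stand-ins,
not anyone's production integrator):

```haskell
-- Adaptive time stepping: the state at the start of a step is never
-- mutated, so a rejected step is simply retried from the same state
-- with a smaller dt.
type State = Double

step :: Double -> State -> State
step dt s = s + dt * negate s                -- explicit Euler for s' = -s

-- Compare one full step against two half steps to estimate the error.
errorEstimate :: Double -> State -> Double
errorEstimate dt s =
  abs (step dt s - step (dt / 2) (step (dt / 2) s))

advance :: Double -> Double -> State -> (Double, State)
advance tol dt s
  | errorEstimate dt s < tol = (dt, step dt s)          -- accept the step
  | otherwise                = advance tol (dt / 2) s   -- retry from same s
```

For example, `advance 1e-4 1.0 1.0` keeps halving dt from the same initial
state until the estimate passes, then takes the step.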

So the author's conclusion that _The second [fix] is to avoid any kind of
intra-frame pollution of object data by keeping a list of all changes..., then
applying all of those changes atomically as a final step_ sounds like a
letdown to me. The imperative codes I work with do this all the time. It's
just that the comments say mundane things like "Update the pressures" or
"Report out final concentrations" rather than "Apply changes atomically to
avoid intra-frame pollution."

~~~
bjourne
Try to implement infinite levels of undo in an imperative solution. The
functional approach makes it trivial, since everything in the simulation is
just a big reduce function applying a sequence of changes to a state object.
Mutable state on the other hand, makes it very hard to write.
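
A toy version of that reduce-plus-undo structure (the Change type here is
made up purely for illustration):

```haskell
-- The simulation is a fold of changes over an initial state; keeping
-- every intermediate state (scanl instead of foldl) is all it takes
-- to get unlimited undo.
data Change = Deposit Int | Withdraw Int

apply :: Int -> Change -> Int
apply s (Deposit n)  = s + n
apply s (Withdraw n) = s - n

history :: Int -> [Change] -> [Int]
history s0 = scanl apply s0          -- every state, oldest first

undo :: [Int] -> [Int]
undo hist
  | length hist > 1 = init hist      -- drop the newest state
  | otherwise       = hist
```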

~~~
hrjet
I didn't quite follow your comment. If the imperative solution tracks a list
of changes why couldn't it implement infinite levels of undo in a similar
fashion to the functional approach?

~~~
dllthomas
It's a question of persistent data structures and sharing. Efficient code
without mutability tends to recreate only the parts that change, and so you
don't have to store a fresh copy of the whole world each update. Of course
there is nothing preventing you from doing this in an imperative language, but
a preference for mutability discourages it.
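
For instance, with Data.Map (a persistent structure from the containers
package that ships with GHC), keeping the full history is cheap because
successive versions share structure:

```haskell
import qualified Data.Map as Map

-- Each insert returns a new version of the Map that shares most of its
-- structure with the previous one, so keeping every version (e.g. as
-- an undo history) costs O(log n) per update, not a copy of the world.
versions :: [Map.Map String Int]
versions = scanl (\m (k, v) -> Map.insert k v m) Map.empty
                 [("tank1", 10), ("tank2", 20), ("tank1", 7)]
```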

------
espeed
Thomas Kristensen recently released a Propagator implementation for Clojure
([https://github.com/tgk/propaganda](https://github.com/tgk/propaganda)) --
Propagators as in Sussman's paper "Art of the Propagator" and his StrangeLoop
talk "We Really Don't Know How to Compute"
([http://www.infoq.com/presentations/We-Really-Dont-Know-
How-T...](http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-
Compute)).

"Art of the Propagator":
[http://web.mit.edu/~axch/www/art.pdf](http://web.mit.edu/~axch/www/art.pdf)

"Revised Report on the Propagator Model":
[http://groups.csail.mit.edu/mac/users/gjs/propagators/](http://groups.csail.mit.edu/mac/users/gjs/propagators/)

Original Scheme implementation:
[http://groups.csail.mit.edu/mac/users/gjs/propagators/propag...](http://groups.csail.mit.edu/mac/users/gjs/propagators/propagator.tar)
[tar file]

Thomas' 2013 Clojure Conj talk:
[http://www.youtube.com/watch?v=JXOOO9MLvhs](http://www.youtube.com/watch?v=JXOOO9MLvhs)

------
Pxtl
Yeah, I've often been interested in RTS games and that kind of lock-step
"flip" between two "pages" of states sounds like the right way to go.

I tinker with SpringRTS a bit and actually its developers have had a lot of
trouble with _not_ using that kind of frame-by-frame approach - since every
unit is fiddling with the same shared state, it makes threading problematic.

Actually, my complete idea (thinking of this using a more procedural language
with mutable objects) was to have two state-pages, but only _one_ state-page
is read-only. Basically, you have two state-pages: Present and Future. Present
is read-only, and Future can only be edited by _the same actor_. This lets the
actor do self-modification without the overhead of the message-queue, but
peer-modification goes through the message queue. Peers can read each other's
read-only "Present", but not their mutable "future", and can insert queue
entries to modify each other's future. Of course, you'd have to sort the
queue-entries before executing them during the queue-processing step to make
it deterministic (RTS games use lockstep synchronization so _everything_ must
be deterministic).

This would be appropriate if your code had a lot of procedural logic going on
like programmatic animation or something (again, SpringRTS) and so putting
_every_ self-modifying operation into a queue might be too heavyweight. But
that's not very functional.
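
One frame of the Present/Future scheme can be sketched (here functionally,
with toy types standing in for real actors and messages):

```haskell
import Data.List (sortOn)

-- Every actor reads the frozen Present and emits messages; the messages
-- are sorted for determinism, then applied atomically to build the
-- Future, which becomes the next frame's Present.
type ActorId = Int
type World   = [(ActorId, Int)]      -- toy world: actor id -> hit points

data Msg = Damage ActorId Int        -- toy peer-modification message

intents :: World -> [Msg]            -- stand-in for per-actor logic
intents w = [Damage aid 1 | (aid, hp) <- w, hp > 0]

applyMsg :: World -> Msg -> World
applyMsg w (Damage aid n) =
  [(a, if a == aid then hp - n else hp) | (a, hp) <- w]

frame :: World -> World
frame present = foldl applyMsg present (sortOn order (intents present))
  where order (Damage aid _) = aid   -- deterministic processing order
```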

------
eonil
Though I am not very familiar with functional programming, as far as I have
tried, one of the hardest problems in functional style is _referencing_ a
mutating object.

Where an object is immutable, it has to create a new instance of itself for
mutation. That invalidates existing references (if the language supports
object referencing…). The only solution I can figure out is putting a unique
tag on each object and looking it up each time. In other words, I have to
reference objects indirectly, which introduces a lookup cost. And this cost is
usually unacceptable.

If the data structure is a pure tree, functional style is usually superior for
simulations: easier write-up, and easier debugging by retaining all the
intermediate states. But most games are graph-structured. They usually contain
many links to arbitrary nodes, which are very hard to update together with the
target object's mutations.

If I am wrong, please correct me. I want to use functional style in games, but
I don't have a good solution to the referencing problem.
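
A minimal sketch of that tag-and-lookup indirection (all names here are
hypothetical, for illustration only):

```haskell
import qualified Data.Map as Map

-- Entities refer to each other by stable IDs; only the Map knows the
-- current value behind an ID, so replacing an entity never invalidates
-- references held by others. The price is an O(log n) lookup per
-- dereference -- the indirection cost described above.
type EntityId = Int

data Entity = Entity { hp :: Int, target :: Maybe EntityId }
  deriving (Eq, Show)

type World = Map.Map EntityId Entity

damage :: EntityId -> Int -> World -> World
damage eid n = Map.adjust (\e -> e { hp = hp e - n }) eid
```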

~~~
judk
You can look at "FRP".

The pure-function approach is to have one outer loop to update state (and
_all_ earlier references are frozen, not just some), so there are no
references from previous-iteration state to future-iteration state (but the
reverse is OK).

~~~
eonil
I have heard of it. But the problem is that the data structure is not a pure
tree, so we need arbitrary referencing _within a version_ of the state, and
FRP doesn't seem to provide a solution for that.

------
rwmj
A couple of anecdata here:

\- I wrote a (nearly) pure functional Katamari-type 3D game (OpenGL + ODE for
physics). You can find the source here[1]. I didn't find the functional style
to be a problem at all.

\- At about the same time I was involved in a large closed-source project that
simulated the criminal justice system of a well-known European country.
Criminals, police, judges, prisons, etc. and their interactions with each
other. It was written in OCaml in a pure functional style. Again, no problems
writing it functionally.

[1] [https://github.com/blue-
prawn/OCamlODE/blob/master/examples/...](https://github.com/blue-
prawn/OCamlODE/blob/master/examples/katamari.ml)

~~~
kriro
I'm assuming you can't really give more information, but is there some
research paper, or at least a name, for said European simulation? It sounds
pretty interesting and somewhat relevant to what I'm currently doing :)

~~~
rwmj
I'm afraid I really can't say any more about that. I got an email from the
contractor asking me to remove a posting last time I mentioned more details
about that project (FWIW I have no idea why they were so secretive/sensitive
about it).

------
rbehrends
Worst case is probably something like Warshall's algorithm (in-place mutation
of a bit matrix for optimum performance). See warshall.c in BYacc or
lib/bitsetv.c in Bison (relevant part reproduced below).

    
    
      void
      bitsetv_transitive_closure (bitsetv bsetv)
      {
        bitset_bindex i;
        bitset_bindex j;
    
        for (i = 0; bsetv[i]; i++)
          for (j = 0; bsetv[j]; j++)
            if (bitset_test (bsetv[j], i))
              bitset_or (bsetv[j], bsetv[j], bsetv[i]);
      }
    

While the algorithm can obviously be reproduced in a functional language, it
does rely on destructive updates for its performance.
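
For what it's worth, the destructive updates can be kept in Haskell while
still presenting a pure interface; a sketch using an STUArray of Bool in
place of the bitsets:

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- The same in-place algorithm as the C version, but the destructive
-- updates are confined to runSTUArray: callers only ever see the final
-- matrix as a pure value.
closure :: Int -> [Bool] -> UArray (Int, Int) Bool
closure n bits = runSTUArray $ do
  m <- newListArray ((0, 0), (n - 1, n - 1)) bits
  forM_ [0 .. n - 1] $ \i ->
    forM_ [0 .. n - 1] $ \j -> do
      ji <- readArray m (j, i)
      when ji $                               -- bitset_test (bsetv[j], i)
        forM_ [0 .. n - 1] $ \k -> do
          ik <- readArray m (i, k)
          jk <- readArray m (j, k)
          writeArray m (j, k) (jk || ik)      -- bitset_or: row j |= row i
  return m
```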

~~~
judk
This is why Haskell (ST) and Clojure (transients) provide _local_ mutable
state in the form of "linear types" where it is impossible to read a mutable
value outside of tightly controlled conditions.

As Guy Steele wrote, "Lambda, the ultimate imperative"

~~~
tel
I'm not sure I would call ST a linear type to any significant degree. It's
only very slightly such a type.

~~~
judk
The quotes were the part where I was bluffing. :-/

------
_random_
Yet another one confusing functional and immutable. Is it OK that I am happy
with 50% OOP, 50% functional and 100% immutable?

~~~
andrewflnr
How does one use immutability extensively without adopting functional style?

~~~
malkia
By keeping old data, imperative style.

~~~
andrewflnr
It seems to me that if you go far enough down that route, you're going to end
up thinking functionally, even if the data flow is obscured by out parameters
and such.

------
gejjaxxita
The author's main observation is that with a non-functional approach one has
to _avoid any kind of intra-frame pollution of object data by keeping a list
of all changes then applying all of those changes atomically as a final step_.
This doesn't seem to have any obvious drawbacks: the simulation algorithm
defines all objects as updating at once, so it's not unnatural in an OO world
to have this "update step".

The author calls this universal update step a "fix" and then goes on to say
that a functional approach would avoid it, but never actually explains why a
problem exists in the first place.

------
falsedan
I don't get it. There's a straw man of mutating state mid-step, which no sane
simulation would do.

    
    
      var new_state = current_state.clone();
      run_tick( current_state, new_state );
      current_state = new_state;
    

Get your infinite undo with copy-on-write semantics for your objects & pushing
current_state onto an undo list before replacing it with new_state.

I get that a purely functional simulation would have these free.

------
alanning
I made a load-testing tool using Clojure [1] and it did seem like a bit of an
unusual use case for functional programming.

Upon researching further, I found that was actually to be expected as Object
Oriented Programming was originally implemented as a way to help model
simulations:

"The formal programming concept of objects was introduced in the 1960s in
Simula 67, a major revision of Simula I, a programming language designed for
discrete event simulation..." [2]

[1] [https://github.com/alanning/meteor-load-
test](https://github.com/alanning/meteor-load-test)

[2] [http://en.wikipedia.org/wiki/Object-
oriented_programming#His...](http://en.wikipedia.org/wiki/Object-
oriented_programming#History)

------
abecedarius
The solution of queuing updates is natural and concise in E with its eventual
sends:

    
    
        tank<-move(displacement)
    

is essentially like

    
    
        setTimeout(0, function() { tank.move(displacement); });
    

except it also works if the tank lives in another process or machine. The
distributed/concurrent case was what it was for, but it can be nice for
sequential programming too.

------
mtrimpe
The talk "Deconstructing Functional Programming" by Gilad Bracha should really
be a must-watch if you're thinking about having any meaningful debate about
functional programming.

tl;dr Almost everything we call 'Functional Programming' is equally applicable
to 'Object Oriented Programming' if it's done in a sufficiently flexible
language.

~~~
platz
I enjoyed Bracha's talk; criticism like that needs to be heard. Of course FP
isn't the solution to all problems. Still I felt he employed a number of
strawmen. The HN discussion it provoked was excellent, however
[https://news.ycombinator.com/item?id=6941137](https://news.ycombinator.com/item?id=6941137)

~~~
codygman
Well I agree criticism must be heard, but much of what Bracha talked about was
unfounded and in some cases just plain wrong.

As more experienced functional programmers in that thread pointed out, Bracha
showed many misunderstandings which FP newbies typically have.

------
lmm
When you have a discretized timestep the functional approach falls into place
quite easily. I think the worst case for functional programming is where you
have an incoming stream of events (e.g. user requests) and need to update
(your internal model of) the world based on those.

~~~
tikhonj
This is where functional reactive programming really shines. It lets you model
time explicitly. You can easily work with streams of events, but you can also
work with things that change _continuously_ over time.

With a standard callback/state-based approach, you end up _implicitly_
modelling time through mutable state. This has a few major disadvantages: it's
difficult to talk about history and time is distinctly second-class. The
"resolution" over time is also implicit and ends up depending on your
implementation.

An FRP approach lets you make time explicit and a _first-class citizen_. In
turn, this allows you to directly talk about the relationships between time-
varying values and their histories--for example, you can "integrate" over time
explicitly.

My favorite example comes from a little game of life applet[1] I wrote.
Basically, I have an explicit timer that controls animation steps (called
life)--it's a stream of () which has a value every _n_ milliseconds. I also
have a behavior called "paused" which corresponds to whether the pause button
is pressed. I can combine these directly:

    
    
        when (not <$> paused) (step <$ life)
    

(The step :: Life -> Life function represents iterating a single generation.)

I also have a stream of (x, y) coordinates from the mouse. There's a modify ::
(Int, Int) -> Life -> Life function that flips a cell at the given point. I
can combine these into _another_ Life -> Life stream thanks to partial
application:

    
    
        modify <$> clicks
    

Now that I have these two streams, I can just combine them with union:

    
    
        when (not <$> paused) (step <$ life) `union` (modify <$> clicks)
    

The cool thing about this short snippet is that it shows how I can combine
three inputs--a button, a timer and the mouse--into a single stream very
easily. It's also very flexible. Right now, the "when" only applies to the
first input: you can modify using the mouse even when you're paused. If I
wanted to change this, I could just wrap the whole thing in the when:

    
    
        when (not <$> paused) (step <$ life `union` (modify <$> clicks))
    

Anyhow, I think FRP is an awesome approach for this sort of problem. Another
thing that I did not mention is that you can write your program in a way
that's "resolution-independent" over time. If your library models time
continuously, you get the moral equivalent of an SVG except for reactive code!
I think that's pretty cool too, although it turns out to be tricky to
implement.

[1]: [http://jelv.is/frp](http://jelv.is/frp)

~~~
lmm
The game of life is about as discrete-time as it gets. As you say, continuous
treatment of time is fiddly, and that's what I was trying to talk about. (And
you're right that FP is probably a better approach if you need history).

(I suspect _correct_ implementation using an object-oriented approach is also
tricky, tbh - but you can write the naive OO version and it will work most of
the time, and fail less badly than a functional version at the same level of
completeness.)

~~~
tikhonj
The buttons and clicks that I _also_ support in my tiny example are not
discrete. Or rather, not in the sense you meant. The grid of life is
modifiable at any time--whether it's running or not--and can be paused.

I only talked about it a bit, but we can also model _continuous_ time. Not
things like events but things like the mouse position. After all, how you
sample the mouse position _should_ be an implementation detail! This is what I
meant when I was talking about being "resolution independent". The fiddly part
is not the FRP abstraction but implementing it efficiently. And, again, I'm
talking about continuous time not like _events_ (which may happen at any time
but are otherwise discrete) but like a continuous function in calculus.

------
jmpeax
The blog post tries to discredit imperative programming by describing naively
incorrect simulation implementations and attributing them to imperative
programming. A final sentence describes the well-known, obvious, and correct
way where two world states are maintained, i and i+1, and state i+1 is
calculated based on state i.

Then the post briefly touches on functional programming being able to
automatically somehow transform an incorrect imperative solution into a
correct functional programming solution, without actually showing how it is
achieved.

The post contains no content of value.

------
minor_nitwit
To me, simulations always seemed to fall into a nice place for concurrent
programming with messaging. In that case, the abstractions are not as numerous
as when forcing things into a single-threaded world.

------
obblekk
Isn't this attacking a strawman? Can't you always make object mutations
construct new objects and return those? Basically "fake" the immutable part of
functional programming?

~~~
judk
Immutability is one aspect of the family of techniques called "functional".
Yes, you can right immutable code in C or Java, but the compiler can't verify
that intent in general, which means that a whole class of bugs are possible,
and certain kinds of high-level general functions (combinators) and automatic
optimizations (inlining and rewrites and skipping computations) are not
possible to write/apply.

------
jheriko
i think interactive applications are a good example

we have what i consider 'a pile of hacks' (monads) in functional programming
languages to handle these, because it's very difficult to make useful software
which doesn't allow user input. whilst the result is functional in some sense,
most of the usual benefits are undone and we are left effectively with
imperative code written in a functional style.

however the libraries hide the implementation details for us...

personally though i don't think its as cut and dry as 'OO is good for X' or
'FP is good at Y'

OO and FP are tools that should sit with good ol' procedural programming,
array programming and other paradigms - they all have their place, and
languages with too much emphasis on one in particular end up with weird
constructs to work around that.

I/O monads are an example for FP, but if we look at an OO language, say C#, we
have static classes as a way to turn an object into a bunch of free-floating
state and stand-alone procedures - because sometimes /procedural/ is the only
approach and we must shoehorn it into OO with singletons, which usually incur
needless performance pain - the static keyword then hides for us that our
object is actually a load of procedural code.

~~~
dllthomas
Monad is a wonderful abstraction and not at all hacky. In my C, I miss the
ability to _parametrically_ combine "things that produce values".

~~~
jheriko
i guess its a matter of taste - i consider it a clever trick and totally
against the spirit of FP, but thats my take...

~~~
dllthomas
How would you describe "the spirit of FP"?

------
fexl
The purpose of any computer program is to produce side effects. A program
which has no side effects has no observable behavior and is therefore useless.
In that sense, side effects are good, so embrace them.

That implies that the purpose of a _functional_ program is also to produce
side effects.

The only problem with side effects is managing them properly. Side effects can
become tangled and conceptually incoherent. Functional programs can help
manage side effects.

For example, if I call (draw_line P1 P2), I expect a line to appear on the
screen joining points P1 and P2. That is a side effect.

As a simpler example, if I call (say "Hello"), I expect the line "Hello" to be
output to my terminal. That too is a side effect.

Those are good side effects.

The way I like to encapsulate side effects in a functional language (such as
Fexl, a language which I wrote), is to use closures. Take your mutable data,
and bury it inside a closure.

If you object to closures (pun intended), then think again. If you're a C
programmer and you write printf("Hello\n"), you are in effect using a closure.
How? Because stdout is effectively buried inside printf -- i.e. you don't have
to pass it as a parameter.

If I say (draw_line P1 P2), I am using a closure. How? Because most likely
some X window gadget is buried somewhere down inside draw_line.

Fexl does provide a "var" construct which _is_ a mutable variable. Pure
functionalists might object to that. But why? The X window gadget is a
variable. Your screen memory is a variable. All the memory in your computer is
a variable. The buffer for stdout is a variable. The whole _world_ is a
variable. Stop fighting it.

I would prefer to avoid using "var" in Fexl, and aim toward pure functions,
but sometimes that's ridiculous. For example, what if I wanted to _simulate_
the behavior of draw_line, so that instead of going to an X windows gadget,
the effects get stored in memory where I can use them differently, maybe for
testing.

In that case I might use a var which simply collects all the draw_line and
other graphic commands into a big list.

So let's stop fighting side effects and learn to embrace them, and recognize
the value of functional languages in managing those side effects rationally.

------
malkia
LZ-based compression?

------
michaelochurch
Mutable state is an optimization. Sometimes a powerful optimization with
little loss of clarity, sometimes utterly ruinous to any hope of understanding
the code. Usually it's between the two extremes. But I think the default
should be the functional style. To reach for imperative tools immediately is
usually a premature optimization.

~~~
millstone
It's not just an optimization. Functional style is often less clear and more
confusing than using mutable state.

For example, I move my browser window. My browser models this as a Window
object whose Position property is mutated to a new value. This is clear and
easily understandable, in part because it matches the user's experience of
moving a window.

A functional approach might be to construct a new window at the new position
that shares some structure with the old window. This seems totally weird, and
furthermore, it's hard to think of any benefit to keeping the stale,
no-longer-onscreen window around, while it's easy to think of problems it
might cause. We can pile more machinery on top (FRP, lenses, etc.) but we only
make the functional solution more complex.

(On that note, GUIs have got to be close to a worst case for functional
programming.)

~~~
Peaker
> it's hard to think of any benefit to keeping the stale, no-longer-onscreen
> window around,

Undo? No aliasing issues? Transactionality of the set of changes being done?

------
angersock
Years ago a good friend and I wrote a zombie outbreak simulator, and ran into
a similar problem.

Our solution was to have "move" and "think" phases for all entities (usually
run think->move->think because a movement could result in a collision with
somebody else who had wanted to move), and to use a messaging system to have
entities announce their intentions ("attack entity X", "open door", etc.). The
messaging system also let us filter messages and do other clever things--a
later evolution of this same architecture became a double-buffered entity
simulation framework with green threads for handling updates.

It's a hell of a lot simpler to write a simulation with objects who can
understand certain types of messages--the code is simpler, the mental model is
tons simpler, and in general it seems to be just easier.

~~~
lloeki
> "because a movement could result in a collision with somebody else who had
> wanted to move"

Similarly useful when simulating physics: evaluate the forces affecting
entities, then evaluate movement for the next step _regardless of collisions_,
and then
resolve the collisions by applying a virtual repulsive force moving the
objects out of the "impossible superpositions".
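
For circles, that resolution step can be sketched as (k is a hypothetical
stiffness constant; a real engine would also cap the force):

```haskell
-- Penalty-based collision response: let objects overlap for one step,
-- then apply a repulsive force proportional to the penetration depth.
penetration :: Double -> Double -> Double -> Double
penetration r1 r2 dist = max 0 (r1 + r2 - dist)   -- circle-circle overlap

repulsiveForce :: Double -> Double -> Double -> Double -> Double
repulsiveForce k r1 r2 dist = k * penetration r1 r2 dist
```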

~~~
jjoonathan
How do you address the problem where you get impossible superpositions that
are difficult to resolve via repulsive forces, leading to a massive deposition
of energy in the objects?

[http://i.imgur.com/ZgMNXpS.gif](http://i.imgur.com/ZgMNXpS.gif)

~~~
lloeki
Cap the forces and/or trigger a more refined resolution mechanism, and maybe
also force-set the objects to a plausible {position, velocity}. This kind of
simulation is by definition an approximation. You can detect impossible
superpositions because they result in impossible forces. This is of course not
applicable to the general case, so it depends on whether you want realism
racing or massive-unit-count RTS physics.

