

Functional Programming Doesn't Work (and what to do about it) (2009) - tosh
http://prog21.dadgum.com/54.html

======
blakehaswell

      Imagine you've implemented a large program in a purely functional way. All the
      data is properly threaded in and out of functions, and there are no truly
      destructive updates to speak of. Now pick the two lowest-level and most
      isolated functions in the entire codebase. They're used all over the place,
      but are never called from the same modules. Now make these dependent on each
      other: function A behaves differently depending on the number of times
      function B has been called and vice-versa.
    

That sounds like a terrible design decision.

Having two functions which are used all over the place by different modules,
and then linking their behaviour to some piece of state that wasn’t passed
into either of them is a terrible idea. Suddenly that state becomes an
implicit argument AND return value, and I have to know about that before I can
call the function… Not only that, now I’ve lost my ability to reason about the
behaviour of the function independently from the execution of the program.
Further, functions A and B are now impossible to test in isolation. I need to
actually call A multiple times before I can test B properly. Then once I’ve
done that, well I’ve already called B a few times, so how do I test A? I’d
have to have assertions for the behaviour of A and B tangled up in the same
tests…

This is a mess, whatever programming language/paradigm you prefer.
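To make the coupling concrete, here is a minimal Python sketch (hypothetical functions, not from the post) of what "A behaves differently depending on how often B has been called" looks like. Note how testing either function now requires setting up the other's call history first:

```python
# Hidden module-level state couples two otherwise unrelated functions.
_calls = {"a": 0, "b": 0}

def a(x):
    _calls["a"] += 1
    # a's result silently depends on how often b has been called
    return x + _calls["b"]

def b(x):
    _calls["b"] += 1
    # ...and vice versa
    return x * 2 + _calls["a"]

print(a(10))  # 10: b has not been called yet
b(1)
print(a(10))  # 11: same argument, different answer
```

The second call to `a(10)` gives a different result than the first, which is exactly the loss of local reasoning being described.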

~~~
millstone
No, the example is unmotivated, but not unreasonable.

For example, say the two functions are computationally expensive, and you want
to make them faster. The "piece of state" is a shared cache, i.e. memoization.
Furthermore, there's reason to believe that if function A is called with value
X, then function B will not be called with Y anytime soon. So if A(X) is
called, B(Y) ought to be evicted from the cache.

This is awkward to implement in pure functional languages, but it's not a
terrible idea. You can still reason about the functions, test them in
isolation, etc.

~~~
thu
Your example is very different. It seems you're still describing a pure API:
testing those new function implementations can be done with the same test
suite used for the slower variants.

In the blog post the author chose to specify that the observable behaviour is
not pure.

~~~
Veedrac
What if the cache involves IO? The behaviour is then assuredly not pure.

~~~
asQuirreL
What sort of a cache would use IO? Not any cache I've heard of. (Genuinely
interested)

~~~
asQuirreL
I guess I should clarify and say "What sort of a cache would use IO and make a
pure function impure?". Even if the cache is so large that it doesn't fit in
memory, its behaviour would still be referentially transparent.

~~~
Veedrac
That's only if you don't count the IO itself as a side-effect.

Does a language like Haskell give you the choice? If so, would it be idiomatic?

------
BringTheTanks
"Imagine you've implemented world peace. Now let's try to drop couple of nukes
in the middle of this..."

Anyway, the point of functional programming is to have as many "immovable
parts" as possible, because mutation stands in for parts in motion. And
everyone in engineering knows parts that move are usually the first point of
failure.

You might need a moving part here or there in a usable design eventually (not
always), but you should do your darn best to keep them isolated and as few as
possible.

~~~
cousin_it
Aha! Moving parts are failure-prone because of friction. And friction happens
when something's changing under you. Which means mutation. Nice analogy ;-)

------
sz4kerto

      Imagine you've implemented a large program in a purely functional way.
      All the data is properly threaded in and out of functions, and there are no truly 
      destructive updates to speak of. Now pick the two lowest-level and most isolated
      functions in the entire codebase. They're used all over the place, but are never
      called from the same modules. Now make these dependent on each other: function A
      behaves differently depending on the number of times function B has been called 
      and vice-versa.
    
      In C, this is easy! It can be done quickly and cleanly by adding some global 
      variables. In purely functional code, this is somewhere between a major 
      rearchitecting of the data flow and hopeless.*
    

That's the actual problem with C -- people can (and therefore do) make
remote parts of the program depend on each other. One global is fine, but then
comes another one, and another... If two very remote, hidden functions
suddenly need to depend on each other then yes, it's OK for this to be hard,
because you are fundamentally changing the information flow in your system.

~~~
blakehaswell

      …you are fundamentally changing information flow in your system.
    

And this is the key point. That the language/paradigm adds friction here is _a
good thing_; it guides you towards a more appropriate implementation.

------
mercurial
> In C, this is easy! It can be done quickly and cleanly by adding some global
> variables. In purely functional code, this is somewhere between a major
> rearchitecting of the data flow and hopeless.

"Cleanly" and "global variables" never mesh.

> Here's a simple statement:
    
    
        if (a > 0) {
            a++;
        }
    

> In single-assignment form a new variable is introduced to avoid modifying an
> existing variable, and the result is rather Erlangy:
    
    
        if (a > 0) {
           a1 = a + 1;
        } else {
           a1 = a;
        }
    
    

Er. Plenty of functional languages will let you shadow existing bindings:

    
    
        let a = if a > 0 then a+1 else a
    

However, if you held a reference to 'a' before the shadowing, its value will
not change.
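The same point holds in Python, for what it's worth: because the conditional is an expression, the "Erlangy" single-assignment dance collapses to one line, and rebinding the name shadows the old value without mutating anything a prior reference saw:

```python
# Conditional-as-expression: no else-branch assignment boilerplate needed.
def bump_if_positive(a):
    a1 = a + 1 if a > 0 else a
    return a1
```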

------
alkonaut
Functional programming has some serious problems, but that it doesn't allow
quickly changing behaviour by introducing global state isn't one of them (I'd
say it's one of the features).

It takes a whole lot less mental effort to iteratively produce a bad solution
in OO than a good solution in FP.

I don't have anything to back this claim up, but to me it seems the "solution
space" in FP is a lot smaller than in OO. As soon as you have mutability
everywhere, you are just one more if() or one more state away from achieving
what you want. In OO there are a million bad solutions to every problem, so
the chance that a group of mediocre developers eventually stumble over one of
the OO solutions to a problem is pretty good. It's so good that you can almost
be sure that given enough time, any group of developers will reach the goal of
a working program (for some definition of working). This makes it appealing to
business.

~~~
spacemanmatt
Having recently completed my first (small) Clojure project for a client, I
don't think the solution space is small unless you're talking about a pretty
constrained problem. Given a broad enough scope, there are many
approaches/solutions in FP that can yield similar results.

------
pjc50
This sort of thing stems from fundamentally different ways of thinking about
the task of programming, depending on what kind of problems you're trying to
solve. Let me sketch some categories of programmer:

Mathematician: primarily concerned with numbers and the stateless universe of
mathematical objects. Primary audience for functional programming.

Roboticist: because "control systems engineer" doesn't trip off the tongue.
Primarily concerned with physical machinery. Trained on things like Kalman
filters and lift controller state machines. Would like the certainty of
functional programming but tends to write spaghetti assembler instead in order
to reduce interrupt latency.

Plumber: someone who would like to build a system by joining together parts.
Perpetually disappointed by how hard this is. Labview is the archetype system
here, although a lot of web framework/"stack" talk shows this kind of
thinking.

Bureaucrat: someone for whom the records are the primary concern. Tends to
build systems which look like CRUD but actually embed a complex workflow which
is divided between humans and computers.

The thing is, only the mathematicians are really comfortable hiding all the
state in the world behind monads. "Plumbers" might be the next easily
persuadable, as FRP maps quite well to this way of thinking, as does
map/reduce. But there isn't quite the tooling to enable building _software_
out of pieces joined by pipes.

In the electronic and bureaucratic control systems, _changing state_ is the
point of the program. The thinking revolves around state, when it should be
updated, and what it should be updated to. Stuffing this all behind a monad
results in unnatural-feeling programs. There's a hybrid point where you have a
"state" part of the program and a pile of "next state" logic which is
basically the only way to write Verilog/VHDL - and it's hard to learn and not
so naturally expressive.

Perhaps the way in is something like C++'s "const": you take a language that
isn't functional and mark parts of your code as non-state-modifying, making it
easier to subject them to formal reasoning.

~~~
mbrock
I appreciate this kind of distinction. However, I wouldn't be so quick to say
that purely functional state transitions are unnatural or difficult. If you
just formulate it slightly differently, it makes lots of sense: the point of
the program is to calculate the next state given a current state and some
input. And that's natural to formulate as a pure function of type (Input,
State) → State.
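That shape is easy to sketch in any language; here is a hypothetical toggle-with-counter machine in Python, folding a stream of events through a pure step function:

```python
from functools import reduce

# A pure step function: next_state = step(state, event).
# State is (light_on, press_count); nothing is mutated.
def step(state, event):
    on, presses = state
    if event == "press":
        return (not on, presses + 1)
    return state  # unknown events leave the state unchanged

# Running the machine is just a fold over the input stream.
final = reduce(step, ["press", "press", "press"], (False, 0))
```

The whole "program" is then one fold over the input stream, and every transition is independently testable.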

Let's also not overestimate the elegance, naturalness, or correctness of the
typical imperative approaches to control systems... Most programs are already
difficult to understand, think about, predict, and change. And a huge reason
is that they tend to lack principles. How often do you look at code and feel
like there's no reason to believe that it works correctly, other than the fact
that it seems to? And then you look at the growing bug count in the issue
tracker...

State monads are (from one perspective) an implementation pattern used in
Haskell to encapsulate state transitions. They're not necessary, and actually
they're mostly used because they enable some syntactic sugar.

I've written very stateful code in Haskell, professionally, and it's been
quite nice. No worse than any other language, and nicer in many ways. STM and
light-weight threads go a long way.

I wish I had an example of a well-designed CRUD-ish system in Haskell to point
to. Maybe there are some. I haven't had the opportunity to make one, but I
feel confident that some very nice design patterns could emerge.

To give some indication that semi-nice stateful things can be written in
Haskell in natural styles, here is a very simple component of an "IRC relay"
system I never quite finished. [1] It's an independent process that reads IRC
messages from a RabbitMQ queue, and writes "PONG" for every "PING". It's based
around the `pipes` library and as you can see the basic effect of the program
is explicitly stated in the main function.

[1]:
[https://github.com/mbrock/klatch/blob/master/Klatch/Ponger/P...](https://github.com/mbrock/klatch/blob/master/Klatch/Ponger/Ponger.hs)

------
raverbashing
It seems to me there are several issues:

1 - You don't know what's happening. Even with higher level imperative
languages you have a (direct) correspondence between code and machine code (or
something similar like bytecode). Not in functional languages.

2 - Sometimes you want side-effects, in ways you didn't know they were side
effects (like writing/reading a file, random numbers, etc). Now Haskell's way
of solving this may be mathematically elegant, but it's clunky in real life.

3 - Doing things functionally requires a different mindset. You think of a
for/if cond() break, and you need to rethink this in functional style (and a
lot of other things).

~~~
tome
> 1 - You don't know what's happening. Even with higher level imperative
> languages you have a (direct) correspondence between code and machine code
> (or something similar like bytecode). Not in functional languages.

There is a direct correspondence between Haskell code and machine code. GHC
translates from Haskell into an imperative language called C--, which is
basically a restricted subset of C. Sure, the translation is of a different
style than you are used to in the imperative world, but it's still a direct
translation.

> 2 - Sometimes you want side-effects, in ways you didn't know they were side
> effects (like writing/reading a file, random numbers, etc). Now Haskell's
> way of solving this may be mathematically elegant, but it's clunky in real
> life.

Arguably, but after several years with Haskell I now find unrestricted side-
effects "clunky" and effect typing much more practical.

> 3 - Doing things functionally requires a different mindset. You think of a
> for/if cond() break, and you need to rethink this in functional style (and
> a lot of other things).

I agree. It requires a significant change to your way of thinking about
programming.

~~~
cousin_it
Well, all languages translate to machine code eventually, but that doesn't
make all languages equally easy to understand.

Specifically, some people try to understand code by mentally stepping through
it. IMO that's a lot easier in imperative languages than in Haskell. In
imperative languages, you step down one line at a time, and loop back in
easily specified places. In Haskell, on the other hand, you need to imagine
graph reduction in your head. That's already pretty hard for most people, but
it gets worse! You can't mentally step through a Haskell function in
isolation, because the sequence of graph reduction steps depends on what the
caller will do with the answer.

That's not just a theoretical difficulty. Take a look at this thread:
[http://lambda-the-ultimate.org/node/3127](http://lambda-the-
ultimate.org/node/3127). A bunch of programming language researchers and long-
time functional programmers are looking at a six-line Haskell program
implementing the Sieve of Eratosthenes, and cannot figure out its big-O
complexity. I can't, either.
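For readers who haven't seen it, the program in question is the short filter-chain "sieve"; here is a Python generator rendering of the same idea (not the Haskell code from the thread). Each prime adds another filter layer over the candidate stream, which is exactly why the asymptotic cost is so hard to see from the source:

```python
from itertools import count, islice

# The "unfaithful sieve": each prime wraps the remaining stream
# in one more divisibility filter.
def sieve(stream):
    p = next(stream)
    yield p
    yield from sieve(x for x in stream if x % p != 0)

primes = list(islice(sieve(count(2)), 10))
# primes == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```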

~~~
tome
Maybe I am splitting hairs then. I still hold that the _translation_ is fairly
simple. On the other hand, the operational steps that the machine code goes
through can be mind bending, and I still have trouble figuring it out myself.
Perhaps that's what raverbashing meant all along.

------
jwr
Clojure strikes an impressive balance between functional purity (which I'll
agree isn't always the most natural solution) and mutability. You can write
purely functional programs, but you can also carefully introduce state and
mutability in a controlled manner.

~~~
spacemanmatt
Clojure is my first FP language and I am really enjoying it. Looking forward
to introducing it at work.

------
yason
Yeah, _purely_ functional programming can approach an academic exercise. Real-
life programs are different: hosting mutable state is inherent in many tasks
and programs and cannot be avoided, so it makes no sense to trick the language
into doing something that it can't easily do.

But functional programming doesn't need to mean all purely functional
programming. Functional programming is easy to reason about, but it's only
useful as long as it's applied to the parts of the program for which it is a
fit. That's quite like how object-oriented programming is a perfect fit for
programming user interfaces but not necessarily for managing a data flow.

Write as much in functional style as possible. That will form a nice set of
tools upon which you can build. If you need to modify state, you might be able
to parameterize a pure function and handle the updates outside but if things
get too complex, just pass in a struct or a dict. You can always refactor the
function into smaller functions that are, again, purely functional. It's just a
matter of drawing the line where it's simplest.
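That split -- pure core, state handled outside -- can be sketched in a few lines of Python (hypothetical names, just to show where the line gets drawn):

```python
# Pure core: takes the old state, returns a new one; trivially testable.
def apply_updates(state, updates):
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Imperative shell: the single place where state is actually replaced.
class App:
    def __init__(self):
        self.state = {"count": 0}

    def handle(self, updates):
        self.state = apply_updates(self.state, updates)
```

All the logic lives in `apply_updates`, which never mutates its input; only the shell performs the swap.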

Just as it doesn't make sense to write a program using only global state and
gotos it doesn't make sense to write a purely functional program. (And it
doesn't make sense to write a program using only objects, either. I'm looking
at you, Java.)

There needs to be an interface between the functional code and the code that
has side-effects. I like Clojure there a lot. It supports mutable state but
makes access to it explicit and controlled. It also supports writing a function
that looks pure but internally uses a temporary mutable store for data
crunching, which is then explicitly finalized before it's returned back as an
immutable piece of data.
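That last pattern (Clojure's transients) translates to any language: mutate a private temporary inside, hand back something immutable. A hypothetical Python example:

```python
# Looks pure from the outside: equal inputs always give equal outputs.
def running_totals(xs):
    acc = []            # temporary mutable store, never escapes
    total = 0
    for x in xs:
        total += x
        acc.append(total)
    return tuple(acc)   # "finalized" into an immutable result
```

No caller can observe the internal mutation, so the function is referentially transparent despite the loop and the appends.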

~~~
Dewie3
> There needs to be an interface between the functional code and the code that
> has side-effects.

Yeah, like some kind of _IO_ type or context. Gosh, someone should try and
invent that sometime. :-)

------
michaelfeathers
I think FP works okay, but at the appropriate scale.

Tell Above, and Ask Below.

[https://news.ycombinator.com/item?id=3876136](https://news.ycombinator.com/item?id=3876136)

When functional code gets unwieldy, it's probably time to place an object
layer above it. You see the model in Erlang (where processes act as objects)
and the messaging middleware of many IT systems.

~~~
kenbot
It's time for this terrible "Objects in the large, functions in the small"
idea to be put to bed.

FP can do side-effects, asynchrony etc just fine, it's just about controlling
them so you can still reason about your program and compose its elements
easily.

Kay-style message-sending object graphs are neither modular nor composable;
state & effects leak transitively by observation, and they cannot be
recombined in self-similar patterns. Local reasoning is impossible.

We can talk about specific situations where say, Actors models might have pros
and cons, but a blanket recommendation that "it's probably time for an object
layer" is rubbish.

Why not design for composability and modularity top to bottom?

------
sparkie
> You could pass a global state in and out of every function in your program,
> but why not make that implicit?

Because making it explicit allows you to reason about what a piece of code
does without inspecting every little part of it for the global state it
touches.

~~~
ajuc
At the cost of having to practically rewrite it if the control flow changes
even slightly.

If you don't know the exact control flow when you start it's a lot of busywork
to modify it each time you change your mind.

~~~
sparkie
I'm not sure where you get the idea that much would need rewriting, but if
anything does, the advantages of having modular code that you can reason about
are worth it.

If you read what I quoted again, he is basically arguing for the use of plain
old imperative code over OOP. It's widely considered bad practice to use
global state at all in OOP languages, except where strictly necessary for
handling specific resources anyway.

OOP languages still have global variables via "static", but good OOP code
design will try to avoid static entirely, and pass any global state explicitly
down through functions and classes all the way from main. The reason is
simple: It allows you to reason about what every part of code does. The only
problem is that you can't reason about some code someone else wrote, as it may
use static internally.

And this is really what FP intends to fix. Instead of just "trying" to avoid
static, forbid it entirely. Now you can take some code written by anyone and
be sure that it isn't going to interfere with another piece of code you write
separately. Any time you explicitly want parts of code to interact in such
ways, making those interactions explicit still allows you to reason about
their collective behavior (and isolate that behaviour from the rest of your
code).

Hague suggests that with FP, you need to write in SSA form whenever you're
passing state around explicitly, but this argument is nonsense - FP has all
the tools to _avoid_ the use of assignment at all. Control flow is abstracted
away as functions and they have useful names (or operators). What makes
Hague's suggestion most absurd though is his use of an example from imperative
languages, where he uses an if STATEMENT. Of course, you probably do need to
use variable assignment if your language prevents you treating constructs as
expressions.
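That point -- control flow abstracted away behind named combinators rather than reassigned accumulator variables -- looks like this in Python (illustrative example, not from the thread):

```python
from functools import reduce

# Instead of a `best` variable reassigned inside a loop, the loop is a
# named combinator and the "assignment" is just the fold's step function.
def longest_word(words):
    return reduce(lambda best, w: w if len(w) > len(best) else best, words, "")
```

No single-assignment renaming is needed because no name is ever reassigned in the first place.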

~~~
ajuc
Sometimes it's worth it, sometimes it's not. Exploratory programming is a
thing. Example:

Spec 1.0 update position of objects according to velocity

    
    
        for in imperative, map in FP
    

Spec 1.1 what if we made the objects that are far from the center of the group
move towards the center?

    
    
        for followed by for in IP, reduce and map in FP
    

Spec 1.2 scratch that, just remove the objects that are far from the center

    
    
        for and for in IP, reduce, map, filter in FP
    

Spec 1.3 and spawn explosions around them

    
    
        add 1 call in ip, refactoring required in fp
    

Spec 1.4 also they should play explosion.mp3

    
    
        add another call in IP, another refactoring required in FP
    

Spec 1.5 this should happen for all destroyed objects

    
    
        move both calls to function that removes object in IP, another refactoring (with passing a lot of state in new ways) in FP
    

And so on.
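For reference, specs 1.0-1.2 really are just a short pipeline in FP style; here is a hypothetical 1D sketch in Python (the refactoring pain starts at 1.3, when explosions and sounds are effects rather than values):

```python
from functools import reduce

def step(objects, max_dist):
    # objects: list of (pos, vel) pairs in 1D, to keep it short
    n = len(objects)
    center = reduce(lambda s, o: s + o[0], objects, 0.0) / n       # Spec 1.1
    kept = [o for o in objects if abs(o[0] - center) <= max_dist]  # Spec 1.2
    return [(pos + vel, vel) for pos, vel in kept]                 # Spec 1.0
```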

After you're OK with the result you can make the code nice in IP too. And you
can do this once, instead of doing this every time anything changes.

It's like a text editor that forbids you to have temporarily non compiling
code. Only refactorings are available, you can't write if by hand. Pain in the
ass.

Yes, I know about paredit; people still copy-paste and write parens by hand.

~~~
sparkie
It'd be nice if 1.3 and 1.4 could be done with just one call, but you're
missing the fact that pretty much any graphics and audio library will have
some "context" singleton through which it is called due to the way these
interact with the operating system. The APIs are deliberately designed this
way to encapsulate the global state they use.

Sure you can make a global variable for the context and avoid passing it
around - but the refactoring route is the preferred option in the imperative
world anyway.

If you're writing in Haskell with any expectation of using IO for some
exploratory programming like this, the obvious solution is to make your
functions of type (:: MonadIO m => ... -> m ...), then any refactoring you
need to do to add additional facets to your state can be specified elsewhere
as part of a MonadIO transformer. Your calls to "play explosion.mp3" and
"render explosion" just need a liftIO on them.

------
Kiro
> Make those globally accessible and modifiable and all of a sudden a large
> part of the code has shifted from imperative to functional

Isn't it the other way round?

