
“Mostly functional” programming does not work (2014) - wldlyinaccurate
https://queue.acm.org/detail.cfm?id=2611829
======
dang
Previously:
[https://news.ycombinator.com/item?id=8084302](https://news.ycombinator.com/item?id=8084302),
[https://news.ycombinator.com/item?id=7654601](https://news.ycombinator.com/item?id=7654601).

------
coldtea
Considering that

1) mostly imperative programming has worked wonders for ages, with a huge body
of work to show for it (almost every piece of basic infrastructure, all major
OSes, and almost all popular programs are written in one of
C/C++/Pascal/Obj-C/Fortran/Ada/Java). It might have its security and stability
warts, but it has shown pragmatic results.

2) "fully functional" might have its security and stability assurances, but it
has little of importance historically to show for all its boasting (the AI
winter is not something to be really proud of).

I'd call BS, and say that "mostly functional" works fine, thank you very much,
in that it takes (1) that already works (with some issues) and makes it even
better with a touch of (2).

~~~
bumeye
I think the author means to say that it's kind of an all or nothing thing. You
either go purely functional and get all the benefits, or you don't and barely
get any benefits at all.

You could say that "almost functional" does not really exist, just like
"almost secure" does not exist.

And yes, you can still program just fine in a non-functional language; he
does not refute that.

~~~
coldtea
> _I think the author means to say that it's kind of an all or nothing thing.
> You either go purely functional and get all the benefits, or you don't and
> barely get any benefits at all._

Yes. And what I say is that "I get enough benefits from going halfway
functional, thank you very much".

> _You could say that "almost functional" does not really exist, just like
> "almost secure" does not exist._

You could say that but you would be wrong. Most Lisp coders for example have
also used imperative code where they need it, with side effects and all.
Nobody cared much about such total "functionalness" and purity as you get
today from Haskell programmers (most of them non-professional, of course,
since there are far more Haskell programmers than Haskell jobs) and the like...

------
bjlkeng
One thing that bothers me with these articles is how they never address the
software engineering challenges of writing large complex systems in a pure
functional style. There are some tasks where just adding a sprinkle of
imperative programming can make the design of the entire system much easier to
understand (a kind of "mostly functional"). An article that really
crystallized this point is here:

[http://prog21.dadgum.com/54.html](http://prog21.dadgum.com/54.html)

~~~
TazeTSchnitzel
That post's been on HN before. I don't know if it's really making a good
point.

> Imagine you've implemented a large program in a purely functional way. All
> the data is properly threaded in and out of functions, and there are no
> truly destructive updates to speak of. Now pick the two lowest-level and
> most isolated functions in the entire codebase. They're used all over the
> place, but are never called from the same modules. Now make these dependent
> on each other: function A behaves differently depending on the number of
> times function B has been called and vice-versa.

This is a complaint that the programming language is preventing you from
introducing a hidden dependency. This is a strange complaint, given that
hidden dependencies are a problem in software maintainability.

I mean, yes, it's convenient _now_ to be able to "just" add global state here
and there, but it will come back to bite you later.

> [Single-assignment form] is cleaner in that you know variables won't change.
> They're not variables at all, but names for values. But writing [single-
> assignment form] directly can be awkward.

Well, you don't have to write single-assignment form functional code. If you
want to modify state within a function in the same way we all know and love
from C, you actually can do that. In Haskell you could do this with the state
monad, for example. Purely functional programming enables the composing of
operations in many different ways, so you actually have a huge amount of
freedom in what style you write your code.
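The point that pure-on-the-outside code can still be written in a mutate-as-you-go style can be sketched in Python (the Haskell version would use the State monad, as mentioned above); `running_totals` is a made-up example:

```python
# Sketch: the function mutates a local accumulator internally, but
# observed from outside it is pure -- the same input always yields
# the same output, and no shared state is touched.
def running_totals(xs):
    total = 0          # local, single-owner state
    out = []
    for x in xs:
        total += x     # imperative update, invisible to callers
        out.append(total)
    return out
```

In Haskell, the State monad plays the same role: the "mutation" is threaded by the monad, so the function's type still advertises purity.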

> For me, what has worked out is to go down the purely functional path as much
> as possible, but fall back on imperative techniques when too much code
> pressure has built up. Some cases of this are well-known and accepted, such
> as random number generation (where the seed is modified behind the scenes),
> and most any kind of I/O (where the position in the file is managed for
> you).

Random number generation doesn't require threading the seed if you really
don't want to. In Haskell, for example, you can also:

* Generate an infinite list of random numbers

* Call the IO monad function to get a new random number, which advances the generator behind-the-scenes
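The first option (an infinite stream of random numbers) can be sketched in Python, with a generator standing in for Haskell's lazy list; `random_stream` and the seed value here are invented for illustration:

```python
import random

# A deterministic, lazily produced stream of "random" numbers: given
# the same seed it always yields the same sequence, so the seed
# threading is hidden inside the generator rather than done by hand
# at every call site.
def random_stream(seed):
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 99)

stream = random_stream(42)
first_three = [next(stream) for _ in range(3)]
```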

While the author might have a point that some things are simpler in imperative
code, I don't think their examples really support this. Perhaps they had not
delved very deep into functional programming.

------
moron4hire
Well now, what the _hell_ have I been doing all this time?

According to TFA, apparently nothing that "works".

If you're working in C#, you're most likely working in Visual Studio (Mono is
just such a terrifically small proportion of C# users, and even then, some
Mono users are using Visual Studio), so you have access to one of the best
(and easiest to use!) debuggers in the world. This isn't embedded C++. It is
common for .NET developers to know how to use the debugger. People _don't_
use printf-style debugging, and even if they thought they would want to, there
is no easy way for such an undertrained developer to do it. You have to make a
special build to get the "console" window, and it's a setting most developers
I've encountered just plain don't even know exists.

And if you're using LINQ, it's because you _know_ you want deferred execution
and that's explicitly what you're hoping for. Developers who don't know this
feature/limitation of LINQ just plain don't know about LINQ.

So after all these C# strawmen, his proposed solution is... rewrite your
entire world in Haskell. Yeah, that's totally going to happen. I'm going to
get my stakeholders on board for that, I'm going to have all of the libraries
I've come to love available to me, or reasonable analogues (Just looked at
HDBC, it looks like a toy compared to ADO.NET). We're just going to switch to
Haskell and everything is going to be peaches and cream.

~~~
TazeTSchnitzel
> People don't use printf-style debugging

Firstly, no, people still do. Not everyone, but people do.

But even if nobody did, that doesn't change anything. The article isn't about
the dangers of printf, it's about the dangers of side effects: printf is just
an example. A printf inside an otherwise pure function is not the only
possible example of a side effect.

> And if you're using LINQ, it's because you know you want deferred execution
> and that's explicitly what you're hoping for. Developers who don't know this
> feature/limitation of LINQ just plain don't know about LINQ.

So? You can still make mistakes here and your compiler won't catch them.
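The kind of mistake the compiler won't catch can be sketched in Python, with a generator standing in for a deferred LINQ query (the names here are invented for illustration):

```python
calls = []

def tag(x):
    calls.append(x)       # side effect buried in the query body
    return x * 2

def doubled(xs):          # deferred, like a LINQ Select
    for x in xs:
        yield tag(x)

q = doubled([1, 2, 3])    # defining q runs nothing...
assert calls == []        # ...so no effects have happened yet
result = list(q)          # enumeration is what triggers them
```

Each re-enumeration (in LINQ, each foreach over the query) would repeat the effects, which is exactly the surprise the article builds its examples around.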

~~~
moron4hire
But what he's showing is "not even close to functional", not "mostly
functional".

~~~
TazeTSchnitzel
How so? LINQ is functional.

~~~
sqeaky
I think that the author's point was that C# is imperative and LINQ is
functional, but the merge of the two does not give you the best of both
worlds; rather, it gives you the worst of both.

In pure functional coding, the lack of side effects guarantees safety during
parallelism at the cost of imperativity (code readability, for those who have
only ever written imperative code).

In purely imperative code you get a guaranteed order of execution at the cost
of simple parallelism because you are managing shared state.

With both you get to juggle the order of execution and shared state with none
of the guarantees!

It can be done and I have seen it done successfully, but I would argue it is
not a good starting point for a project or a language design.

------
bunderbunder
This article's core premise, given in the first sentence, strikes me as a
blatant straw man.

Normally I've seen mostly functional programming being sold as a pragmatic
solution that merely gives developers some tools that are helpful in certain
situations. It's deeply uncharitable to suggest that users of multiparadigm
languages are so naive as to believe that the mere presence of paradigm A
erases the pitfalls of paradigm B.

I'm also deeply suspicious of any language evangelism that makes a whole lot
of noise about the pitfalls of other languages and then implies that one's own
favored language has no pitfalls whatsoever. This one just comes down to
Occam's razor. Which is more likely: That your favorite language just happens
to be the first perfect, wart-free language ever invented, and its relative
lack of popularity is just because the rest of the world can't recognize its
blinding awesomeness? Or that you've become so attached to your favorite
language that your perception of it has become deeply biased?

~~~
white-flame
The specific strawman that is being presented is the free intermixing of
functional and imperative. We have really nice codebases where the execution
of side effects is specifically constrained to known locations.

From an optimization perspective, imperative behavior can be introduced if the
functional API is not affected. Caching systems fall into this category. The
functional benefits and freedom of usage are not impeded; the difficulty of
creating and optimizing a caching system is the same as for any other project.

Any quality in a project comes from discipline. Even purely functional
programs can become unwieldy spaghetti with copy/paste code, memory leaks,
etc. It's less about the programming language, and more about an experienced
perspective on what to do and how to design.

~~~
bunderbunder
C# is a language where functional and imperative are freely intermixed, most
notably through the LINQ library that shows up in some of the author's C# code
examples. The purpose in doing so is not to constrain side effects. It's to
make code more declarative and enable the use of compositional patterns.

I think that's where the article really comes across as tone-deaf. If you're
going to write an article with a title like "Mostly functional programming
does not work," then you had better be prepared to demonstrate that mostly
functional programming fails to accomplish the thing it sets out to do.
Instead, what he really demonstrates is that he fails to understand that there
are desirable things about functional programming aside from purity and
laziness. Which is in itself a rather fascinating thing to miss, considering
that the majority of functional languages are neither lazy nor pure.

------
mpu
I find it hilarious that all the examples he gives in the beginning have weird
semantics only because of LAZINESS. Mixing effects with laziness gives
nonsense; give the language a call-by-value semantics and you get the intended
behavior (intended even by HIM, who is arguing for laziness). Unbelievable.

Edit: in my comment, I oppose pureness (which is put forward in his rant) to
laziness (which he is unintentionally arguing against).

~~~
tome
And yet the first example is in C#, which is not a "lazy language". The
problem is that in _any_ language you do want lazy behaviour from time to
time. For example, in "get me the first element of this list that is positive"
you do not want to traverse the rest of the list after you find your positive
element. Laziness _always_ has a weird interaction with effects, and you
_always_ want (the potential for) some laziness; hence the problem.
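That "first positive element" example can be sketched in Python, where a generator expression gives the same early termination; the helper `is_positive` is made up here to make the effect visible:

```python
checked = []

def is_positive(x):
    checked.append(x)     # an observable effect inside the traversal
    return x > 0

# next() over a generator expression stops pulling elements as soon
# as a match is found -- the lazy behaviour described above.
first_pos = next(x for x in [-3, -1, 2, 5, 9] if is_positive(x))
# first_pos == 2, and checked == [-3, -1, 2]: the tail [5, 9]
# was never examined.
```

The `checked` list is exactly the kind of side effect whose timing becomes surprising once evaluation is lazy: how much of it runs depends on how far the consumer pulls.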

~~~
marcosdumay
1 - Why would you ever want to add a logger to a pure function?

2 - If you really, for some unknown reason, want to do that, you can pass the
logger as an argument to f2 and have it run inside the IO monad somewhere
outside of that fragment. But really, read #1 first.

~~~
tome
Did you reply to the wrong comment?

~~~
marcosdumay
Oops. I did.

------
RyanZAG
As far as I can tell, the argument is that allowing side effects inside
functions allows programmers to make mistakes, and therefore "mostly
functional" programming is broken.

I'll happily extend this: "fully functional" programming does not work because
I can define a mathematical function that divides by 0, and therefore does not
work. I'd go a step further even: all programming does not work, because I can
solve a problem in the wrong way if I make a mistake.

~~~
calpaterson
Nonsense! That works fine in Haskell:

    
    
      cal@hp-elitedesk ~/s/b/i/ops> ghci
      GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
      Loading package ghc-prim ... linking ... done.
      Loading package integer-gmp ... linking ... done.
      Loading package base ... linking ... done.
      Prelude> 1/0
      Infinity

~~~
RyanZAG
1/0 is not infinity; it is undefined. The limit of 1/x as x tends to 0 is
infinity, but there is no value for 1/0, which is why doing a calculation that
involves a divide by 0 is mathematically invalid. That Haskell lets you do it
and gives an approximate answer is OK, but if it were in a calculation for the
number of apples you can eat, it would be a useless answer. You could end up
with that if you made a mistake in defining your logic. In the same way, you
can make a mistake when doing all manner of other things, such as using a non-
pure function or adding instead of subtracting. They're all forms of logic
error that are either allowed or disallowed by tools. The tool isn't broken if
it lets you make mistakes, though.

A power drill will let you drill a nail into your leg. That doesn't make power
drills broken. A smart drill that turned itself off if your leg was in front
would be better, but we still use drills without that function because there
are trade-offs when you make your drill do that. There are trade-offs in
functional languages too. That doesn't make C# "broken" for being a tool that
lets you drill into your leg if you point your drill there.

~~~
thoth
Well then your formula which contains a division by zero is mathematically
invalid as well. What do you think is valid - a floating point exception?
That's how the hardware handles it.

I think you're being overly pedantic on this specific example, division by
zero is a special case. Floating point numbers on all architectures are only
approximate - 1/7 times 7 is seldom exactly equal to 1, in any language, but
that doesn't mean floating point math on a computer is useless.

~~~
RyanZAG
Sure, but that wasn't my point - my point is that if you're calculating the
number of apples you can eat and you make an error in the logic and end up
with a calculation of x/y with x=22.9 and y=0, you get an exception or an
answer of 'infinity'.

My point was: either answer is obviously wrong. You can't eat infinity apples
any more than you can eat undefined apples. It wasn't the real answer to the
question you were trying to solve. Haskell (or whatever language) allowed you
to enter logic that gave an invalid answer. You used the mathematical and
functional operators to blow off your leg. You could probably use a more
constrained language dealing specifically in "apples and eating" that would
prevent you from getting an answer that wrong. Obviously you'd have trade offs
in using that language though.

So let me try again: Haskell isn't any more a broken language because it lets
you calculate apples incorrectly than C# is a broken language because you can
print things multiple times.

------
qznc
What if you could do "fully functional" programming within a language that
also allows OO, procedural, and generic programming?

My exhibit would be D. Here is the 'identity' declaration:

    
    
        T identity(T)(const T me) pure nothrow @safe @nogc;
    

It cannot have side effects (do IO, start a thread, etc.) because of "pure". It
cannot throw any exceptions because of "nothrow". It cannot do any pointer-magic
shenanigans because of "@safe". It cannot modify the argument because of
"const". It even guarantees that no memory will be allocated through the
garbage collector because of "@nogc".

(Yes, pure by default etc would be better, but such a change would break too
much code at this point. Yes, Haskell looks cleaner, but this article is not
about beauty.)

~~~
TazeTSchnitzel
How does "pure" work? How does the D compiler know the functions it calls do
not have side effects?

More importantly, however, D's type system is not as expressive as Haskell's.

~~~
qznc
Pure functions can only call pure functions.

Details: [http://dlang.org/function.html#pure-
functions](http://dlang.org/function.html#pure-functions)

And Haskell's type system is not as expressive as D's. E.g. records.

~~~
egorl4r
I'm not very good at D, but if you are referring to row polymorphic records,
Haskell can do that too (e.g. [https://github.com/nikita-volkov/record-
preprocessor/blob/ma...](https://github.com/nikita-volkov/record-
preprocessor/blob/master/demo/Main.hs))

------
erikb
I started out reading not believing that pureness is a necessity, and sadly
the arguments weren't framed in a way that I could understand. So I'm still
quite unconvinced, because I haven't seen how it doesn't work. But of course I
know from experience that things can be true that I haven't seen yet. Can
anybody try to explain it for people who don't live on Haskell planet?

Example time: I program in Python, not C++. And in the example with the
try-catch, I would have wrapped it around the loop anyway. So for me it wasn't
quite convincing that there is a problem in the first place. Sure, someone who
hasn't crashed a few programs due to lazy evaluation might run into trouble in
such a situation. But after a few times you understand that being defined and
being executed are different things, and then you wrap the execution. As far
as I can see, the whole problem described in the example doesn't exist if you
intuitively wrap the foreach loop and not the definition statement for q.

So, would really appreciate if someone could explain it in a more nooby
fashion to me. Thanks.
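The wrap-the-loop-not-the-definition point can be sketched in Python; the query `q` and the fallback value are invented for illustration:

```python
def q(xs):
    for x in xs:
        yield 10 // x      # raises ZeroDivisionError lazily, at iteration time

items = q([5, 2, 0])       # defining the query never raises -- wrapping
                           # *this* line in try/except would catch nothing

results = []
try:
    for r in items:        # the division actually runs here,
        results.append(r)  # so this is where the handler belongs
except ZeroDivisionError:
    results.append(-1)     # hypothetical fallback value
```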

~~~
maehwasu
> But after a few times you understand that being defined and being executed
> are different things and then you wrap the execution.

In this example, you've had to internalize a bit of extra knowledge, and have
an extra "gotcha" to watch out for. A job that could have been done by a
computer, if things were kept pure, now has to be done by you, incurring
slight amounts of mental overhead.

I don't program routinely in Haskell (although things are quickly headed in
that direction), but I think the sentiment of that community is:

1\. Keeping a large system consistent in your head is hard.

2\. Offloading the constant checking of that consistency, in any amount, to a
computer is a huge win that compounds quickly over time.

Note that the "hardness" of holding a large system in your head, and the
"hardness" of Haskell's learning curve have to be qualitatively different for
those assumptions to hold. In my experience, they are; essentially, you're
making a large upfront cognitive investment in order to save slight amounts of
mental overhead every day.

~~~
AnimalMuppet
I think you just refuted the article.

> Offloading the constant checking of that consistency, in any amount, to a
> computer, is a huge win that compounds quickly over time.

True. And therefore offloading a less amount to the computer (by being mostly
but not totally functional) is less of a win, but still a win.

------
maehwasu
The example of the exception being thrown late (essentially, an exception
being a type of side-effect) is quite good.

Would it be fair to say that this is an advantage of the Haskell approach of
(for example) encapsulating an operation that might fail inside a Maybe? You
get to make the failure aspect explicitly part of a monad, and make sure you
"know" where it's happening?

~~~
TazeTSchnitzel
Well, Haskell also has something similar to the more traditional exceptions
within the IO monad (ioError).

------
bitL
I would suggest the following:

\- write all functions related to math/geometry/combinatorics in pure
functional style, even better with accompanying mathematical proofs to be sure
they are principally sound (you won't avoid problems with floating-point
numbers etc. that way)

\- think about using functional programming in parallel/distributed tasks if
you can manage communication complexity directly (i.e. this is not handed off
to some opaque, un-debuggable monad)

\- write everything else in imperative style of your choice

I/O is the elephant in the room of functional programming, monads are
"imperative" hacks to make functional programming useful in any real-world-
related way, and most monad tutorials will only confuse you about what they
are actually.

~~~
TazeTSchnitzel
> monads are "imperative" hacks to make functional programming useful in any
> real-world-related way

Monads are a quite clever solution to control IO: instead of allowing
functions to directly perform IO, allow them to compose a sequence of IO
operations and return them, such that you can compose a program which the
runtime will then execute.

They're not imperative: they represent programs in a purely functional way,
and they don't let you opt out of any of the guarantees a purely functional
language provides. And they're not a hack, they fit properly into the type
system.
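That "compose a sequence of IO operations and return them" idea can be sketched in miniature in Python; this toy `IO` class is invented for illustration and is not how GHC actually implements it:

```python
# An "IO action" is just a value describing an effect. bind composes
# actions without running them; only run(), at the edge of the
# program, executes anything.
class IO:
    def __init__(self, thunk):
        self.thunk = thunk            # the deferred effect

    def bind(self, f):                # run self, feed its result to f
        return IO(lambda: f(self.thunk()).thunk())

    def run(self):                    # the only place effects happen
        return self.thunk()

log = []

def get_line():                       # stand-in for real input
    return IO(lambda: "world")

def put_line(s):                      # stand-in for real output
    return IO(lambda: log.append(s))

program = get_line().bind(lambda name: put_line("hello " + name))
# Nothing has happened yet; `program` is only a description.
program.run()   # now log == ["hello world"]
```

The payoff is the one described above: functions build and return descriptions of effects, so composing them stays pure, and execution is confined to one place.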

Unlike "mostly functional" programming, use of monadic IO in a purely
functional language will force you to cleanly separate IO and computation. Not
only does this make a more maintainable program (no side effects everywhere),
but you get nice bonuses, like lazy computation.

> and most monad tutorials will only confuse you about what they are actually.

Perhaps because many monad tutorials try to hide how they work, despite this
really being quite simple.

~~~
viraptor
I've been very excited by many approaches before, then Haskell, and now I'm
past it. While a pure functional language will force you to cleanly separate
computation and IO, with a big enough app you'll one day find yourself in a
situation where independent data from 3 different places comes together in a
way where the only reasonable thing to do is log a warning, tweak the value,
and continue the computation. And that's what you'll do in any impure
language. In a pure one, you'll start to think about how to redesign the
computation to allow this... It's not wrong, but in many cases it's pointless.

~~~
TazeTSchnitzel
> In a pure one, you'll start to think how to redesign the computation to
> allow this... It's not wrong, but in many cases it's pointless.

Are you sure it's pointless? Having to show I/O in the type signature means
you can't end up with accidental side-effects.

~~~
viraptor
In some cases - yes, pointless. You rarely care if a function logs something
or not. If you do care, you can still return the log lines as an extra value
and use the pure solution. At least you have a choice.
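That "return the log lines as an extra value" option can be sketched in Python; the function and messages here are made up for illustration:

```python
# A pure function that, instead of logging as a side effect, hands
# its warnings back to the caller alongside the result.
def tweak(value, limit):
    logs = []
    if value > limit:
        logs.append(f"clamped {value} to {limit}")
        value = limit
    return value, logs

result, warnings = tweak(150, 100)
# result == 100, warnings == ["clamped 150 to 100"]
```

The caller decides what to do with the returned log lines (print them, collect them, drop them), which keeps the computation itself pure.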

------
AdieuToLogic
Reading many of the comments in this thread reminds me of Plato's "Allegory of
the Cave":

[http://platosallegory.com/TheStory.aspx](http://platosallegory.com/TheStory.aspx)

Food for thought and all that.

------
commentzorro
Every time I try to play with Haskell I give up in just a few days. Too much
math! Maybe you don't need a PhD, but from my perspective you do need more
than an undergraduate degree in Engineering.

So, what's a good purely functional language that's easier to learn than
Haskell? One that I can learn more by hacking about with than by trying to
keep so many things juggling in my memory all at once?

Ideally one that runs native on Windows and Linux and is suitable for web and
business apps? So I might actually end up with something I could put in
(light) production.

~~~
cschneid
Don't confuse new terminology with math. Programming has lots of terminology,
and some of it is backed by mathematically rigorous definitions, but you don't
need to know that in order to use them.

OO has 'Inheritance', 'Subclassing', 'SOLID', 'Liskov', every design pattern
name, 'Object', 'Class', 'Overloading', 'Factory', etc, etc.

Haskell has similar words, covering both core concepts and common patterns of
implementation. 'Monoid', 'Monad', 'State', 'Reader', 'Function', 'Lambda',
and 'Typeclass' are all examples of terminology in Haskell.

None of those words are inherently more difficult than the OO ones, you're
just not familiar with them.

The best way to learn Haskell is to follow along with a course that has you do
exercises, and actually do the exercises. See the
[https://github.com/bitemyapp/learnhaskell](https://github.com/bitemyapp/learnhaskell)
course for some class recommendations.

~~~
commentzorro
Not just terminology. It's more like there are only two levels of Haskell:
first it's understandable, and I say, "Okay, just step by step and I got
this." Then all of a sudden we run smack into something like this big wall of
WTF:
    
    
      * The infix application function (ma>>=\a->f(a)), commonly called bind, executes the computation ma to expose its effects, calling the resulting value a, and passes that to function f. Evidently, the result of this application (f a) should again be a potentially side-effecting computation; hence, the type signature for bind looks like this:
    
      (>>=):: M a -> (a -> M b) -> M b.
    
      * The injection function (return a) injects a pure value into a computation. Its type signature is: return :: a -> M a.
    
      In practice, each effect usually comes with so-called nonproper morphisms. These domain-specific operations are unique to that effect ...
    

Seriously, how can I hack around and understand this?

~~~
marcosdumay
Oh, God, you just came to the Monad definition...

From my experience, it only really falls into place after you write your first
or second monad. To learn that, go in this order:

1 - Get used to using the IO monad in do notation.

2 - Learn a couple more monads. I'd recommend State and one of Parsec or
Attoparsec. (As a bonus, use a bit of applicative syntax too.)

3 - Get used to writing some somewhat complex one-liners without the do
notation.

4 - Abuse the List monad for practice.

5 - Write your own.

All the time, look at the types as much as you need, but do not obsess over
them.
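The bind signature quoted in the parent comment, `(>>=) :: M a -> (a -> M b) -> M b`, can be transcribed into Python for one concrete M, a Maybe-like value; the tuple encoding here is invented for illustration:

```python
# A computation either succeeds with a value, ("just", x), or fails,
# NOTHING. bind runs the computation, exposes its value, and passes
# it to f -- short-circuiting as soon as a failure appears.
NOTHING = ("nothing",)

def just(x):
    return ("just", x)

def bind(ma, f):                 # bind :: M a -> (a -> M b) -> M b
    if ma == NOTHING:
        return NOTHING           # failure propagates untouched
    _, a = ma
    return f(a)                  # expose the value, continue with f

def safe_div(x, y):              # a computation that can fail
    return NOTHING if y == 0 else just(x / y)

ok = bind(just(10), lambda a: safe_div(a, 2))      # ("just", 5.0)
bad = bind(safe_div(1, 0), lambda a: just(a + 1))  # NOTHING
```

Haskell's `>>=` is the same shape generalized over any effect M, which is why the quoted passage insists the result of `f a` must again be a computation of type `M b`.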

------
davexunit
I don't buy this. The paradigm I use often depends on the level of abstraction
I'm working with. A functional runtime is ultimately built upon an imperative
foundation. I write _mostly_ purely functional code in Scheme, but from time
to time I add my own imperative components to the foundation. I don't really
want to use languages that restrict me to a single paradigm.

~~~
TazeTSchnitzel
> A functional runtime is ultimately built upon an imperative foundation.

Not necessarily. Functional language compilers are very sophisticated and do
not necessarily compile code the way you might think.

~~~
knz42
The functional _runtime_ (i.e. not the compiler) is built upon an imperative
foundation. This is necessarily so as long as hardware computers are built
upon state-modifying processors and random-access, in-place-updatable
memories.

~~~
innguest
So what? It's Turing-complete so it can simulate any non-Von Neumann machine
that can compute. And in fact it does so very well, see Forth, Lisp, and other
languages from different paradigms for which there are dedicated processors.

If you use an abstraction (your job as a programmer), the innards of the
processor should be inconsequential.

------
chaoky
Scheme is not "fully functional". Neither is Common Lisp. No one would say
they are mostly imperative either. But they work; before the AI winter, many
machines and OSes were built on top of them and their immediate predecessors.

If this author's definition of "does not work" means that they can't be used
to build operating systems or high level AI programs well...

------
j-pb
Here's your perfect counterexample.

The only language that allows you to build things that won't fail even when
the hardware breaks.

Erlang.

~~~
hueving
What? The only possible way that constraint could be satisfied (working under
any hardware failure) would be if erlang didn't let you write any programs
that did anything.

~~~
fnordsensei
Excuse the aside, but this reminds me of this old Bash quote:

    
    
      <FreeFrag> The most secure computer in the world is 
                 one not connected to the internet. 
      <FreeFrag> Thats why I recommend Telstra ADSL.
    

Also, agreed. Some elaboration on that point would be most welcome.

~~~
TazeTSchnitzel
[http://bash.org/?168859](http://bash.org/?168859)

------
stewbrew
It's funny to read this from somebody who called Scala "his new love".

~~~
visarga
He's still soul searching.

------
sebastianconcpt
What if you do not-functional software in ways where parallelism and
concurrency are non-problems? there is no elephant in the room?

~~~
TazeTSchnitzel
Pure functional programming provides benefits in domains where you don't need
parallelism or concurrency, too. It controls side effects, allowing safe
composition of functions.

~~~
seanmcdirmid
Right. Just like it is often useful to avoid functional code in cases where
parallelism is needed for performance reasons (e.g. in CUDA).

~~~
sebastianconcpt
What's the paradigm that fits best for CUDA?

~~~
seanmcdirmid
SIMT, of course; it is more similar to array programming than to functional
programming (though array and relational programming both have relationships
with FP).

