
Why Functional Programming Matters (1984) [pdf] - tosh
http://www.cse.chalmers.se/~rjmh/Papers/whyfp.pdf
======
kutkloon7
A lot of pleas for functional programming give a long list of convincing
examples that work well with functional programming. You can do this with
almost any language, and it is also quite deceptive, as it is often done
to convince beginners to learn another language: "look at how simple it is to
program this contrived example in <language X>!"

To use a programming language or paradigm well, it is especially useful to
know what its weaknesses are. Functional programming does not work well in
programs which handle a lot of state. It does not work well in programs with
strict performance or memory requirements. It does not work well for programs
which have to do low-level stuff.

These are serious disadvantages, and I would like to see them highlighted more
often in these introductory-style articles. Nevertheless, I think functional
programming is a must for writing simpler programs, and I think programmers
should write functions (i.e. methods that only depend on their arguments)
whenever possible, for modularity and correctness reasons. I think the
disadvantages can be solved by non-pure functional languages, and that there
is a lot to gain for new programming languages in this area.

~~~
jlouis
Functional programming largely has two schools: Treat "commands" as a separate
entity from expressions, and bake "commands" into expressions. The former is
largely Haskell, Clean, ... and the latter is exemplified by e.g., Standard ML
or OCaml.

There are trade-offs between the two, but I definitely belong to the second
school: we simply add an imperative subset to our functional language. This
means we can exploit any efficiency trick an imperative program can use,
including low-level stuff. The seasoned FP programmer will then proceed to
encapsulate the efficiency trick in an abstract module such that the rest of
the program doesn't have to worry about it. Of course, the price to pay for
this is that you lose purity. I think this is a fair trade-off, but others
disagree.

As for performance and memory usage: they are _always_ a property of the
architecture or system, not of the programming language. Dropping to a
low-level language, such as C, usually doesn't buy you too much these days.
What is more important is that most C compilers in use have vastly more time
invested in optimizing routines than the typical FP compiler. Apart from
that, you can easily manage the same kinds of data in, e.g., OCaml as you can
in C.

The reason FP can beat the performance curve in practice is that you operate
at a higher level of abstraction. You get more attempts at finding the right
architecture, and it is easier to change over time. Since most real-world
problems are heavily time-constrained, this is what beats low-level solutions:
by the time the C programmer has written the first working version, the FP
programmer has tried 5 different solutions.

There is one area where FP tends to fare poorly: CPU-bound tasks where an
inner loop has to squeeze out performance (video encoding comes to mind). But
most low-level programming fares poorly there as well: either you use
assembly, write GPU-level programs, use an FPGA, or create your own ASIC/SoC
solution. Also note that moving to faster solutions here costs an order of
magnitude in time and in dollars: FPGAs are, relatively speaking, expensive
beasts.

~~~
michaelfeathers
The key insight is that immutability is not an end in itself, it's a tool to
give us referential transparency.

If a language can allow mutability in an area of code without allowing side
effects to escape from it, we can have all of the reasoning advantages that FP
gives us at a level above the mutations. We can also have the performance we
want.
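This is exactly what Haskell's `ST` monad provides; a minimal sketch (the
function name `sumST` is mine): `runST` lets the code inside mutate local
state freely, but the type system guarantees the mutation cannot escape, so
the function is pure from the outside.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Pure from the outside: the mutation inside runST cannot escape.
sumST :: [Int] -> Int
sumST xs = runST $ do
    acc <- newSTRef 0                  -- local mutable accumulator
    mapM_ (modifySTRef' acc . (+)) xs  -- imperative-style loop
    readSTRef acc                      -- result is an ordinary Int
```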

~~~
stcredzero
_The key insight is that immutability is not an end in itself, it's a tool to
give us referential transparency._

How about a programming environment tailored for small simulations or games? I
could imagine such an environment maintaining referential transparency without
strict immutability. Rather, such a system could provide a kind of
"poor-man's" immutability by only allowing pure functions that take state from
tick N and output state for tick N+1.

Perhaps such a system could even achieve high performance by exploiting its
constraints? Maybe the language could essentially be built on top of a custom
VM and around the mechanism of bump allocation, read/write barriers, and
Beltway garbage collection?
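A hedged sketch of that tick idea in Haskell (all names here are hypothetical,
not from any real engine): the whole world state advances via one pure
function per tick, so referential transparency holds even though the state
conceptually "mutates".

```haskell
-- Hypothetical world state for a tiny simulation.
data World = World { tick :: Int, pos :: Double, vel :: Double }
  deriving (Eq, Show)

-- Pure function from the state at tick N to the state at tick N+1.
step :: World -> World
step w = w { tick = tick w + 1, pos = pos w + vel w }

-- Running the simulation is just iterated application.
simulate :: Int -> World -> World
simulate n w0 = iterate step w0 !! n
```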

~~~
jpt4
Are you familiar with Urbit [0]?

[0] urbit.org

~~~
dTal
Is anyone?

------
cousin_it
The top link on /r/haskell right now [1] is someone complaining that their
program runs out of space due to a subtle interaction between laziness and IO.
I think it's safe to say that we have tried laziness as the default and have
learned that it's the wrong default, because it plays havoc with space and
with IO.

[1]
[https://www.reddit.com/r/haskell/comments/5h6emf/haskell_run...](https://www.reddit.com/r/haskell/comments/5h6emf/haskell_run_out_of_memory/)
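The classic instance of that interaction is `foldl`: under lazy evaluation it
builds a chain of unevaluated thunks, while the strict `foldl'` runs in
constant space. A minimal sketch (not the code from the linked thread):

```haskell
import Data.List (foldl')

-- Lazy foldl accumulates (((0+1)+2)+...) as thunks; on a large enough
-- list this exhausts memory before anything is actually added.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step: constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```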

~~~
kqr
Look, I fixed their code in so many ways:

    main =
        runConduitRes                        -- dealing with finite resources
             ( sourceFileBS "input.txt"      -- read input.txt as binary data
            .| decodeC utf8                  -- decode assuming UTF-8
            .| linesC                        -- split into lines
            .| mapC parseList                -- parse each line into list of text
            .| mapC (get 5)                  -- get sixth element of list
            .| catMaybeC                     -- discard lines with no sixth element
            .| encodeC utf8                  -- encode as UTF-8
            .| sinkFileBS "output.txt"       -- dump into output.txt
             )

And it's still very functional, if not more so!

This will run in constant space and linear time, it will buffer reasonably, it
will not leak file handles, it will gracefully clean up on exceptions, it will
not crash when it fails to parse something correctly, and it makes the
encoding assumption explicit (you cannot split into lines unless you know the
encoding).

Dealing with I/O is not hard when you use the correct primitives.

~~~
tdb7893
It seems to me that part of the problem he was having is that he didn't really
understand how these methods were implemented internally. I know that all
languages inevitably have these problems, but how does the number of leaky
abstractions compare in Haskell to other languages?

~~~
kqr
I don't think Haskell is worse than any other language in that regard. What
perhaps sets it apart is how much of its functionality is implemented in
third-party libraries, which may be difficult for a beginner to come to grips
with. "Why should I download a library to use efficient arrays? Shouldn't they
be built in, like in Python?"

I can come up with two explanations for this reliance on third-party
libraries.

1) Haskell has always been a quickly evolving language attracting
research-minded people, who in turn go on to develop really cool libraries
that are much better than the conventional ways of doing things. The
interpretation of this explanation is that it's simply not possible to keep
the standard library up to date with the latest library developments.

It may also be the case that

2) Haskell has always been a really powerful language _capable_ of offloading
important tasks to libraries. What would need to be built-in functionality in
other languages can be implemented as libraries with no sort of special
treatment in Haskell, so people do it that way because they can, and because
it keeps the base simple.

~~~
cousin_it
> > _how does the number of leaky abstractions compare in Haskell to other
> languages?_

> _I don't think Haskell is worse than any other language in that regard._

Many Haskellers are happy about this kind of stuff:

    min = head . sort

which is the very definition of a leaky abstraction. The blame lies with
laziness, because it allows code to depend on implementation details of other
code in crazy ways: "sort is O(n log n) unless you ask for only the first
element, in which case it's O(n). What if you ask for the last element?
Uhhh..."
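To make the complaint concrete, here is that definition spelled out
(`minViaSort` is my name for it). Both functions compute the same value, but
under lazy evaluation `head . sort` only forces enough of the sort to yield
the first element, which is where the O(n) claim comes from, and why the cost
of asking for the _last_ element is so much harder to reason about.

```haskell
import Data.List (sort)

-- Same value as the standard minimum, but its cost depends on how
-- lazily the sort's implementation produces its output.
minViaSort :: Ord a => [a] -> a
minViaSort = head . sort
```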

~~~
codygman
I wouldn't depend on invocations of sort to be anything but the worst case.

------
mpweiher
To me, this is the most important part of the text (p2, bottom):

"The ways in which one can divide up the original problem depend directly on
the ways in which one can glue solutions together. Therefore, to increase
one's ability to modularise a problem conceptually, one must provide new kinds
of glue in the programming language."

Functional programming is great because it provides two (new) kinds of glue:
function composition and lazy evaluation.
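Both glues fit in one line; a small sketch: composition glues the stages of a
pipeline together, while laziness glues an infinite producer to a consumer
that decides how much to demand.

```haskell
-- filter and map are glued by composition; take n is glued to the
-- infinite list [1..] by lazy evaluation, so only n elements are built.
firstEvenSquares :: Int -> [Int]
firstEvenSquares n = take n (map (^ 2) (filter even [1 ..]))
```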

Even if you accept that (I certainly do for function composition, less
enthusiastic about lazy evaluation), I would say that it _only_ provides two
new kinds of glue.

We need _lots_ of kinds of glue, in other words, lots of _architectural
connectors_. And that means linguistic means of defining and varying
architectural connectors. [http://objective.st](http://objective.st)

~~~
hota_mazi
> Functional programming is great because it provides two (new) kinds of glue:
> function composition and lazy evaluation.

Certainly true for composition but laziness is more the exception than the
rule in today's FP languages. And it's getting an increasingly bad reputation
to the point that even Haskell is slowly (and reluctantly) being dragged in
the strict direction (which it will never fully reach because so much of it
would break).

~~~
bunderbunder
There's one spot where I like laziness, and that's where it supports
composition.

I'm gonna hop over to C# because that's where my favorite example lives: LINQ
is a functional library that lets you describe queries on data that are
executed lazily. The reason the laziness is great in this scenario is that it
lets you separate the tasks of constructing a data-processing pipeline and
executing it.

The spot where it's tricky, though, is that it's a very leaky abstraction.
It's easy to forget that these expressions might actually represent a lot of
work, so if you get your lazy sequence object (IEnumerable<T> in C# terms),
check whether it has any values in one expression, and calculate its sum in
another, you might end up accidentally round-tripping a database twice.

Because of those sorts of stumbling blocks, I think laziness is a power that
needs to be handled with care. I'm pretty sure that means you most certainly
should _not_ make it the default behavior.

~~~
suzuki
I hope you will be interested in the C#/LINQ code found in the following
pages:

      "Why Functional Programming Matters" solved with C#
      http://www.oki-osk.jp/esc/cs/whyfp.html
      http://www.oki-osk.jp/esc/cs/whyfp2.html
      http://www.oki-osk.jp/esc/cs/whyfp3.html

------
inputcoffee
Does someone have a better scanned version of this? The format is not great on
this, but it looks really interesting.

~~~
mgr86
I am not sure if this is _better_. But here is a scanned copy of the original
journal article. I put it on google drive. Do you have a preferred PDF host?

[https://drive.google.com/file/d/0B8_iX4Icv1BmQkh3TWtqTXY5ajQ...](https://drive.google.com/file/d/0B8_iX4Icv1BmQkh3TWtqTXY5ajQ/view?usp=sharing)

~~~
inputcoffee
Thank you, this does read better (thicker font, better resolution) on my
screen.

I was fine with the PostScript, but for some reason it seemed thin and there
was a slight blur.

I just use my browser to read the PDF (usually chrome, sometimes Firefox or IE
or even Opera).

------
m0th87
See also Raganwald's Why Why Functional Programming Matters Matters:
[http://weblog.raganwald.com/2007/03/why-why-functional-
progr...](http://weblog.raganwald.com/2007/03/why-why-functional-programming-
matters.html)

~~~
bunderbunder
Ironically, his "rules of Monopoly" analogy helped me realize why I haven't
completely given up on OOP, especially for non-hobby work.

He holds up the idea that every piece of the game has the rules for what you
can do with it tacked on as some sort of horrible mess, but I'm finding that,
from a practical perspective, it drives an amazing convenience: code
completion.

Take Python, which is a language that I'm still learning. If I have an object,
but I'm not sure what I can do with it - or, more particularly, I'm not sure
of _the names_ for the things I can do with it - I can get a quick reference
by hitting '.-tab' to bring up an autocompletion menu, just so long as I'm
interacting with a more OO Python library. If I'm trying to work with a more
procedural library such as matplotlib, though, I'm SOL and end up having to
dive through the documentation. (I can't think of a really great functional
library for Python that I use, but the same is true for the more functional-y
bits of numpy and pandas.) And matplotlib is a big library, so there's a lot
of documentation. Far from being a form of organization, that fabled central
store of the rules that the author holds up as an ideal ends up being an awful
quagmire to wade through.

Granted, this is dependent on having an editor that does tab completion. And
I'm sure it could be done with a functional library, too, but probably only if
you're using a statically typed functional language, and I've no idea what a
good UX would look like given how functional syntax works.

But still, given the current situation, I think I've realized my main reason
for thinking that object-oriented programming also matters: Because right now,
when you're working with large and complicated systems, object-oriented
programming still offers the more pragmatic, human-friendly user experience.

------
Kenji
Why Functional Programming ended up not mattering much at all, after all
(2016) [pdf]

------
AimHere
For those whose browsers don't speak bare postscript, a pdf also exists:

[http://www.cse.chalmers.se/~rjmh/Papers/whyfp.pdf](http://www.cse.chalmers.se/~rjmh/Papers/whyfp.pdf)

~~~
executesorder66
Out of curiosity, which browsers do support native postscript rendering?

~~~
lmm
Konqueror supports it in the sense that it will embed the appropriate KPart,
just as it would for e.g. PDF. How "native" you consider that is an open
question.

~~~
executesorder66
Thanks. By native I meant it renders without plugins or any configuration by
the user, on a default install of the browser.

~~~
lmm
The line between plugin and not is pretty blurry for Konqueror - even its HTML
renderer is a plugin in a sense.

------
skeptic2718
Is it that time of the year again?

------
joshuapassos
I love this paper

------
GrumpyNl
You can program functionally in almost any language. It's more a mindset. I
cannot emphasize this enough: keep it simple.

~~~
chrisdone
I think what you mean by "functional" and what the author means by
"functional" are distinct. Are you aware of the distinction?

