
Why functional programming matters (1990) [pdf] - znt
http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf
======
epidemian
Finally read this paper a couple of days ago. Even though I thought I already
understood the concepts of higher-order functions and laziness, and had used
them in my everyday programming, the paper was still quite enlightening.

One of the things i hadn't considered before is how much laziness can help to
build modular code. The last section in the paper, about implementing the
minimax algorithm, is a great example of this. The `evaluate` function is
first naively defined as a simple chain of a couple of functions that iterate
the tree of all possible game positions. Then a series of improvements are
applied to it (e.g. pruning the tree to allow the algorithm to work on
infinite trees, or ignoring branches that cannot yield favourable moves), but
what's really nice is that, thanks to lazy evaluation, these improvements can
be implemented as independent functions that are simply "plugged into" the
chain, without having to modify the original functions. For example, the original
function that generates the whole game tree doesn't need to be changed to
implement pruning.
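A loose sketch of that modularity argument in Python (not the paper's Miranda code; `game_tree`, `prune` and `maximize` are hypothetical stand-ins, with generators playing the role of lazy lists):

```python
# A hypothetical, infinitely deep "game tree": each node is (position, children),
# where children is a *lazy* generator, so the full tree is never materialized.
def game_tree(pos):
    return (pos, (game_tree(pos * 2 + i) for i in (1, 2)))

# Pruning is an independent function "plugged into" the pipeline: it cuts the
# lazy tree at depth n without touching game_tree itself.
def prune(n, tree):
    pos, children = tree
    if n == 0:
        return (pos, iter(()))
    return (pos, (prune(n - 1, c) for c in children))

# A naive evaluator over the (pruned) tree: best reachable leaf position.
def maximize(tree):
    pos, children = tree
    scores = [maximize(c) for c in children]
    return max(scores) if scores else pos

# The infinite tree is safe to build because nothing forces it until pruned:
print(maximize(prune(3, game_tree(1))))  # 22
```

The point is exactly the one the comment makes: `game_tree` stays untouched when pruning is introduced, because laziness lets the consumer decide how much of the producer's output to force.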

I'd really recommend the paper to anyone who might be wondering "why all this
fuss about functional programming?". It's not only very accessible, but also
full of beautiful code examples solving very practical and concrete problems.
I personally felt a bit sad having to go back to the imperative programming
world in my everyday job after reading it.

------
kinow
I read this paper for the first time while working on an Apache project.
Shortly after, I created a subreddit, and it now has almost 1,700 subscribers.

Feel free to share, vote, comment there too :)
[http://www.reddit.com/r/functionalprogramming/](http://www.reddit.com/r/functionalprogramming/)
A very healthy community, lots of interesting stuff in different programming
languages.

------
thr0wawayhn
This paper was written 25 years ago; have any studies yet justified the
extravagant claims made by Hughes and other FP advocates of substantial
increases in productivity, correctness, and modularity?

~~~
mafribe
Credible empirical investigations of programming language efficiency are too
hard to carry out (though they could be done in principle). However, what we
can do instead is look at the evolution of programming languages: what features
do the latest creations have, what features are older languages retro-fitted
with? What ideas have seen little uptake?

(i) Higher-order functions: Yes, slam-dunk win. Essentially all new languages
have them, Java and C++ retrofit them.

(ii) Lazy evaluation order by default: mostly dead. All new languages use eager
evaluation; even SPJ is sceptical that lazy evaluation should be the default.
Scala has an interesting approach: eager by default, but you can switch to CBN
(although not full laziness) if required. This hybrid form enables you to
define control-operators easily, while keeping the simplicity and efficiency
of eager evaluation. Exceptions are languages with dependent types that are
really theorem provers (e.g. Agda).
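The control-operator point can be mimicked with explicit thunks; in an eager language like Python, a zero-argument function delays evaluation roughly the way Scala's by-name (`=> T`) parameters do implicitly (a sketch of the idea, with a made-up `my_if`):

```python
# In an eager language, a user-defined control operator needs explicit thunks:
# zero-argument lambdas delay evaluation of each branch.
def my_if(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

# Neither branch is evaluated until chosen, so the "dangerous" branch is safe:
result = my_if(True, lambda: "ok", lambda: 1 // 0)
print(result)  # "ok"
```

With ordinary eager arguments, `1 // 0` would be evaluated before `my_if` ever ran; call-by-name makes the thunk-wrapping invisible to the caller.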

(iii) Purity: dead. All new languages have side-effects. The widely used FP
languages like Haskell, OCaml and F# have side-effects. Exceptions are
languages with dependent types for Curry-Howard-based theorem provers (e.g.
Agda), because there is no convincing Curry-Howard correspondence using state.

(iv) Everything is function application: open. This may work reasonably well
in purely sequential languages, but there are theoretical reasons to believe
that message passing is not function application (while function application
is a special case of message passing).

(v) Types. The answer on this one depends on whether one is a proponent of
statically typed languages or not, clearly an unresolved question. For
dynamically typed languages, the question is moot, so the rest of the
discussion is irrelevant to those who think that static typing is a bad idea.

(vi) Types (1), type inference: slam-dunk winner. All new languages have
inference, or wish they could have it but don't know how to do it (Scala).
There are probably some hard trade-offs between expressivity of types and
efficient decidability of type inference (B. Pierce termed Hindley-Milner a
"sweet spot"), so we might have to tolerate some type annotations being
required, e.g. in local or bidirectional inference.

(vii) Types (2), algebraic data types, pattern matching: winner. Most new
languages have them in some form.

(viii) Types (3), monadic encapsulation of effects: open. Most new languages
have not yet embraced this. Some conceptual problems not yet solved, e.g.
destructors for state monads. There are also expressivity problems, e.g. not
all effects can easily be expressed as monads, and composition of monads
doesn't always work as desired. Moreover there are pragmatic questions about
whether monadic encapsulation should be required or optional.
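To make the "encapsulation" point concrete, here is a minimal Maybe-style sketch in Python (my illustration, not anything from the paper): failure is reified as a value and threaded by a `bind` function, instead of being an implicit side effect like an exception.

```python
# A minimal Maybe-monad sketch: None represents failure, and bind threads
# it through a chain of computations.
def bind(value, fn):
    return None if value is None else fn(value)

def safe_div(a, b):
    return None if b == 0 else a / b

# Failure anywhere in the chain short-circuits the rest:
ok = bind(safe_div(10, 2), lambda x: safe_div(x, 5))   # 1.0
bad = bind(safe_div(10, 0), lambda x: safe_div(x, 5))  # None
```

The type system can then force callers to acknowledge the possible failure, which is the reasoning benefit monadic encapsulation is meant to buy.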

(ix) Types (4), higher-kinded types: open. Not all new languages have them;
HKTs are generally considered useful but not essential, cf. Rust's evolution.
If monadic encapsulation of effects is used, then HKTs become highly
desirable, cf. the recent Scala fork.

Any other suggestions?

~~~
tel
I really have to say I disagree with (iii). Purity is gaining ground day by
day. From your other comments you seem to have adopted a fairly non-standard
definition of purity, and I'd encourage you to stick with the usual one, as it
will let you talk about more interesting things; see my further comment on
(viii).

I don't think (iv) is open at all. The fact that you can encode everything as
function passing is an interesting trick, but it's merely one of several (you
can, e.g., also encode everything categorically or via supercombinators and
graph reduction, which turns out to be more practical for lazy languages).
Function application is just one really well understood semantic foundation.

I find (v) far more subtle than you make it out to be. The standard
static/dynamic types divide is silly because both sides are talking about
different things. The static side wins completely because honestly, taken at
face value, they don't claim that much. All languages are typed, some have
interesting type theories, some compilers reject programs which are
syntactically OK but violate the type theory.

(viii) is interesting. In my mind, you've conflated this with what most people
would call purity; however, monads are merely _an_ implementation of effect
typing/purity. Others exist and are being experimented with. Also, composition
of monads always works as desired: there's no strange edge case to it. It's
merely that monads don't compose the way functions do, and when you examine
what such composition would entail, you learn that they ought not to.

(ix) HKTs are sort of a gateway drug to type-level computation. It's
remarkable how many languages just more or less let their type-level language
wither and die. I'm not claiming that HKTs are a slam dunk victory, but I
would be willing to bet that the evolution of languages will lead us to (a)
having them universally in the next two decades and (b) wondering why everyone
was so hung up on HKTs---I mean, I suppose I can understand singletons, but
HKTs? Really?

~~~
mafribe

         monads are merely an implementation of effect typing/purity. 
    

Yes, and it's the one Haskell has chosen. Is it the right way? Most newer
languages are not following Haskell here; instead they have effects like state
and exceptions directly. That's what I was pointing out. And yes, composition
of monads is a bit non-canonical.

As to HKTs, I agree, languages should have them. I was just pointing out the
state of the art, post Haskell.

~~~
tel
I would suggest that effects which are "directly" embedded are impure effects
(w.r.t. our other thread). Essentially, without reifying them into values with
types it's harder to reason about them.

A more interesting dividing line are things like computation effect typing in
languages like Bauer and Pretnar's "Eff" or McBride's "Frank".

[http://www.eff-lang.org/](http://www.eff-lang.org/)

[http://homepages.inf.ed.ac.uk/slindley/papers/frankly-draft-march2014.pdf](http://homepages.inf.ed.ac.uk/slindley/papers/frankly-draft-march2014.pdf)

But yes, if you consider other "practical, modern" languages, many retain
impurity.

~~~
mafribe
There are very different options.

First, approaches like Eff take pure functional computation as a basis and
layer effects like call/cc or concurrency on top of that. I think it's better
to go the other way: take concurrency, or, if you want to start sequential, a
lambda calculus with jumps a la call/cc or lambda-mu, as a basis, and see pure
functional computation as a _restriction_ of the base calculus.

Second, it's unclear that effects should be reflected in types for reasoning.
That road leads to complicated typing systems without type inference (see
also dependent types). I think the following alternative is more appealing:
stick with lightweight types (think Hindley-Milner) and leave reasoning to an
external calculus (variants of Hoare logic).

~~~
tel
That "analytic" style you suggest is really interesting, too. I'm not sure I
have a dog in the ultimate race and I'd love to see both analytic and
synthetic styles flourish, but from a mere terminology point of view I'd
distinguish between the two languages in means of describing them as "pure" or
not.

I also really worry that external proofs will ultimately be neglected in
practice. I suppose this is again a "UX of Types" thing, but I'm something of
a believer that anything that you do not literally demand of the programmer
will eventually be sacrificed.

~~~
mafribe
I'm not totally sure what you mean by "external proofs will ultimately be
neglected".

------
vezzy-fnord
Probably one of the most popular reposts on HN, if you search for this title.

------
Ono-Sendai
I don't find the parts on lazy evaluation very convincing.

------
tuukkah
FP and the arguments for it have evolved in the 25 years since this paper.
Laziness by default was important because it forced language and library
developers to invent new purely functional constructs such as monads and
functional reactive programming. Functional programming has won: there are
lambdas in every language, and even mainstream GUIs are being programmed with
React.js, the pure Flux architecture and Immutable.js.

~~~
tormeh
Given that "anonymous function" is the same as "lambda", please use "anonymous
function", because it's self-explaining.

------
denim_chicken
If functional programming actually mattered we would stop having to remind
ourselves why it matters.

