

What is a functional programming language? - e1ven
http://enfranchisedmind.com/blog/posts/what-is-a-functional-programming-language/

======
anatoly
I found this post to be a poor attempt at shoehorning the notion of
functional languages into lambda calculus. Sure, functional programming is
inspired by lambda calculus, but it doesn't reduce to it, and, importantly,
never has - from the very first LISPs on.

So, for example, Church's encoding of naturals is introduced, which is totally
irrelevant to all practical functional languages. Despite what the author is
saying, using computationally sane integers is not "nothing more than an
optimization", but a fundamental part of the language (and don't get me
started on floats). It's just that actual functional programming languages
model a lambda calculus with atoms - basic objects, like integers, that are
_not_ functions. It's really OK - not everything needs to be a function! The
fact that it's possible, in pure lambda calculus without atoms, to model
integers using Church's trick is cool and has theoretical significance - but
no relevance at all to functional languages.
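
To make the encoding under discussion concrete, here's a rough sketch of Church numerals in Python (names like `to_int` are just illustrative, not anything from the article):

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition chains the two iterations: m applications of f after n of them.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert back to a native int by counting applications of +1 -
# exactly the "computationally sane integers" a real language uses directly.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```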

Similarly, functional programming has always included languages with mutable
data. LISP was mutable from day 1, and so was the even more functionally-
inclined Scheme. There's nothing in "functional programming", as has been
practiced for decades, that necessarily excludes mutable data. Now, "pure
lambda calculus" is another story... but it really _is_ another story, one
that serves as a motivation for functional programming rather than the whole
of it.

The requirement that a functional language, in the author's opinion, _must_
have tail call optimization, is weird (many LISPs don't have that), and its
explanation is weirder yet: a functional language must have TCO because it's
based on lambda calculus where everything is done with recursion, so you need
infinite stacks (???), but real stacks aren't infinite, so "you need tail call
optimization to allow the use of recursion for looping". Well, what if you
don't use recursion for looping, just as you don't use Church's encoding for
integers, because _you are not after all writing in lambda calculus_?
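
The point is easy to see in a quick Python sketch (Python, like many LISPs, has no TCO; function names here are just for illustration):

```python
# "Lambda calculus style": loop by tail recursion. Valid in theory,
# but without TCO each call eats a stack frame.
def sum_rec(n, acc=0):
    return acc if n == 0 else sum_rec(n - 1, acc + n)

# Not using recursion for looping: an ordinary loop, constant stack.
def sum_loop(n):
    acc = 0
    while n > 0:
        acc, n = acc + n, n - 1
    return acc

print(sum_loop(100_000))  # fine
try:
    sum_rec(100_000)      # blows past the default recursion limit
except RecursionError:
    print("stack overflow without TCO")
```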

Overall, I felt that the author let his intuition of what a functional
language could be settle too firmly on Haskell (with the understanding that
static typing is optional), and is trying to redefine functional programming
in accordance with this vision.

~~~
anc2020
Lisp was by all accounts an attempt to make a working lambda calculus[1].
However the machines weren't very quick then and speed was a real issue. Even
later on when Scheme was designed, speed was an important consideration, and
it probably did affect the resulting language.

What the author was saying about TCO was that, given that all our machines
today use stacks, and that we don't want programs that break(!), any
implementation of lambda calculus on these machines must have TCO, to make as
much of the lambda calculus as possible available in the programming
language, lest otherwise-correct programs be rejected.

But make no mistake, Lisp[2] and functional programming languages ARE based on
the lambda calculus. The article really did hit the nail on the head.

--

[1] "Lisp was originally created as a practical mathematical notation for
computer programs, based on Alonzo Church's lambda calculus."

-- Wikipedia (<http://en.wikipedia.org/wiki/Lisp_%28programming_language%29>)

(and lots of other sources agree with this)

[2] Sorry to say it, but Lisp isn't the center of the Universe. It's a cool
trick, but it isn't that great.

~~~
bitdiddle
I'm not sure I agree with this first statement. For sure scheme was motivated
by lambda calculus, but I do not believe McCarthy (a functional analyst by
training) was aware of lambda calculus when he invented Lisp. Rather he was
motivated by symbolic calculations often performed in functional analysis.

NOTE: I stand corrected, I just checked and McCarthy's Lisp paper does cite
Church's thesis work. Somehow I had thought Algol was motivated by lambda
calculus.

~~~
apgwoz
interestingly, I believe scheme was a failed[1] attempt at building OOP

[1] of course a closure is an object with exactly one method, apply
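
The footnote's equivalence is easy to demonstrate; here's a sketch in Python (the counter is just an arbitrary example):

```python
# A closure is an object with one method, "apply": both pair hidden
# state with a single operation.
def make_counter():
    count = 0
    def apply():
        nonlocal count
        count += 1
        return count
    return apply  # calling the closure IS the apply

class Counter:
    def __init__(self):
        self.count = 0
    def apply(self):
        self.count += 1
        return self.count

c1, c2 = make_counter(), Counter()
print(c1(), c1())              # 1 2
print(c2.apply(), c2.apply())  # 1 2
```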

~~~
silentbicycle
No, it was an attempt to implement the Actor model in Lisp. The Actor model
isn't what people usually have in mind when someone says OOP - it's more like
Erlang or Termite.

------
antipax
Really enjoyed the explanation of how addition falls out of the lambda
calculus, as I'd never seen that before.

~~~
ambulatorybird
You can also do sequencing (assuming applicative order) and conditionals:

    
    
      'a; b' ==> (fn x -> b) a
    
      true  = fn x -> fn y -> x
      false = fn x -> fn y -> y
      if    = fn p -> fn c -> fn a -> p c a
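
The same definitions run directly in Python (note the applicative-order caveat: both branches are evaluated before the boolean selects one):

```python
# Church booleans select one of two arguments; "if" is just application.
true  = lambda x: lambda y: x
false = lambda x: lambda y: y
if_   = lambda p: lambda c: lambda a: p(c)(a)

print(if_(true)("then-branch")("else-branch"))   # then-branch
print(if_(false)("then-branch")("else-branch"))  # else-branch

# Sequencing 'a; b' ==> (fn x -> b) a: a is evaluated, discarded, b returned.
result = (lambda x: "b's value")("a is evaluated first")
print(result)  # b's value
```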

------
arohner
I really enjoyed the distinction between languages based on turing machines
and languages based on lambda calculus. I'd had a fuzzy idea of the
distinction before, but this article set it out clearly.

------
jes5199
okay, here's the argument as I read it: 1) functional languages are made of
lambdas a) when they aren't made of lambdas, it's "just an optimization." 2)
non-functional languages aren't made of lambdas b) when they have lambdas,
it's just "style"

~~~
silentbicycle
Try this:

Functional programming languages have an underlying model based on _function
composition_ (with one-argument functions, etc.). To the extent that they
have functions with multiple arguments, etc., it's ultimately just syntactic
sugar and optimization for common cases - while nobody generally writes the
low-level "lambda calculus virtual machine language" stuff by hand outside of
academic papers or bootstrapping, Scheme's "let" is implemented in terms of
lambda at compile-time. (Scheme without set! is representative.)
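
That let-to-lambda desugaring, sketched in Python for lack of Scheme:

```python
# Scheme:      (let ((x 2) (y 3)) (* x y))
# desugars to: ((lambda (x y) (* x y)) 2 3)
# The same shape in Python: bind by applying a function to the values.
result = (lambda x, y: x * y)(2, 3)
print(result)  # 6
```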

Imperative languages have a fundamental underlying model of cells in memory
and sequential operations that modify them, branch based on them, etc.,
including an instruction pointer specifying where instructions are currently
read. (C is representative.)

Intuitively (and without busting out the Greek letters): compare languages in
which a function is a list of _expressions_, returning the value of the last,
versus languages in which a function is a list of _statements executed
sequentially_, the last of which can optionally return a value. A lot of
other issues are ultimately due to how that choice directs default behavior.
Even when a language has the amenities to borrow the other's tools, the
defaults channel common usage.

Likewise: Relational databases are ultimately based on the relational model,
even when parts of SQL fill in conceptually irrelevant details for sake of
performance; Prolog is based on backtracking and Horn clauses, and well-
structured Prolog can be reasoned about in terms of them, even when cuts are
used to avoid wandering about needlessly in known dead ends. The ultimate
result would (eventually) be the same. Both models are fundamentally different
from pure functional or imperative programming, as well, and the languages
built on them are as different as Erlang is from Smalltalk.

I'm self-taught and not a mathematician, so I couldn't give a really
exhaustive or formal analysis of the distinction -- probably the best way to
understand it (and learn quite a bit else, PS) is to work through the
exercises in SICP and CTM. The second chapter of SICP includes Church encoding
(<http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-14.html#%_sec_2.1.3>),
among many other things. SICP covers functional
programming quite well, but CTM makes the fundamental differences between the
different programming models and their strengths and weaknesses clearer.

You can use functional idioms in imperative languages, but they may lack
features such as tail call optimization or garbage collection, which means
you're working at cross-purposes to both the semantics of the language and
the cases for which its implementation has been optimized. Whether this means
attempting to do so is actually infeasible (e.g. stack overflows in Python) or
just slow and/or verbose depends on the language. Likewise, modeling problems
in functional languages that are intrinsically better-suited to imperative
languages can be messy and confusing.
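
One common workaround for the stack-overflow case, sketched in Python (a toy trampoline; names are illustrative, and it assumes the final result is never itself callable):

```python
# Trampoline: a tail call is returned as a thunk instead of made
# directly; a plain loop unwinds the thunks in constant stack space.
def trampoline(f, *args):
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)  # thunk, not a direct recursive call

print(trampoline(countdown, 100_000))  # "done", no stack overflow
```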

(OO is a completely different direction, but as people have wildly
inconsistent ideas of what it actually refers to, I'm not going there.)

------
jberryman
That was really well-written. I liked this line:

> The Lambda Calculus is innocent of the notion of mutability

------
JulianMorrison
A student once asked his teacher "is Scala a functional programming language?"
The teacher gave a shout and began hitting the student mercilessly with a
chair, until he was forced to flee the room.

The student was enlightened, and wrote his program in Scala.

------
lutorm
This strongly reminds me of listening to the first couple of lectures of SICP.

------
twism
Didn't know clojure tries to do OO

~~~
lg
Aside from all the Java crap it has multimethods, but it doesn't really make
sense to call them OO because there's no object that they're attached to.

~~~
jrockway
OO is not about methods, it's about the data. Single-dispatch OO is just a
special case of multimethod-based-OO, anyway.

(Remember the whole "sending a message to an object" thing? You do that with
both variants. In the single-dispatch case, the method is a structure in the
metaclass instance. In the multiple-dispatch case, some other helper object
finds the right method. Same idea, though.)
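
A toy sketch in Python of that "helper object finds the right method" idea (this is an illustration, not how Clojure or CLOS actually implement it):

```python
# Multimethods: dispatch on the types of ALL arguments. Single-dispatch
# OO is the special case where only the first argument's type matters.
methods = {}

def defmethod(name, types, fn):
    methods[(name, types)] = fn

def call(name, *args):
    # The "helper" looks up the method by the full argument-type tuple.
    return methods[(name, tuple(type(a) for a in args))](*args)

defmethod("collide", (int, int), lambda a, b: "int-int")
defmethod("collide", (int, str), lambda a, b: "int-str")

print(call("collide", 1, 2))    # int-int
print(call("collide", 1, "x"))  # int-str
```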

~~~
lg
Multimethods are ambient actions. You could implement them with functions and
any sort of type system. Maybe also a way to have variable-length parameter
lists. If that's OO then functional programming is object-oriented
programming.

~~~
jrockway
Yes, functional programming is very similar to OO.

OO:

    
    
        class SomeType {
            <data>;
            SomeOtherType function1() {
                this.whatever;
                ...
            }
        }
    

FP:

    
    
        data SomeType ...
        f1 :: SomeType -> SomeOtherType
        f1 this = whatever this >> ...
    

Same idea, different syntax. (Type classes will allow polymorphism, if that's
what you're after.)

You are thinking of the abstraction in terms of its implementation. There is
no reason why a method in a class would physically be in that class; the class
just acts as a namespace:

    
    
       class Foo { method }
       Foo.method();
    

and

    
    
       (defmethod Foo.method ...)
       (Foo.method)
    

are the same.
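
Python happens to let you write both spellings side by side, which makes the equivalence concrete (the `Point` example is just for illustration):

```python
# Same operation as a method and as a free function taking "this" first:
# only the namespace the definition lives in differs.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def norm2(self):
        return self.x ** 2 + self.y ** 2

def norm2(this):
    return this.x ** 2 + this.y ** 2

p = Point(3, 4)
print(p.norm2())  # 25
print(norm2(p))   # 25
```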

