
“Mostly functional” programming does not work - Bootvis
http://queue.acm.org/detail.cfm?ref=rss&id=2611829
======
yason
I think that "Mostly functional" is actually the sweet spot.

Going to any extreme makes some things horribly difficult and going to another
does the same for other things. So, optimally, multiple paradigms coexist in
a single codebase, applied where they're most useful.

Functional programming with as many immutable bits as possible is definitely a
good start. I generally do that for whatever problem I'm solving: I have a
model for the data and then I write (if at all possible) pure functions to
transform the inputs into meaningful outputs. But then you need to know where
to hand off the ball and move over to some other paradigm that does something
else right and merely drives the functional parts from the top level.

For example, a data analysis library can be written with minimal state and
using only pure functions, but if -- and when -- you need some sort of user
interface so that the program can actually be used, an imperative/procedural
approach is generally the most natural fit because UIs are basically I/O. If
you're adding a graphical user interface, you might use an object-oriented
approach to build the UI tree, which is probably the world's most idiomatic,
canonical use for OO anyway. But even those are generally driven by an
innately imperative event loop.

Also, note that the different approaches or paradigms aren't language specific
either.

In the first stage, languages are tools that shape your thinking into
accepting new programming paradigms but at some point you have a number of
different ways of thinking in your head, and you can just forget about the
languages they came from.

But in the second stage, you can just think directly in paradigms: you can
consider different ways to build different parts of your program but you might
actually use only one language to implement everything. You can write
functional, imperative, object-oriented, and whatever code in C. Or you can
use several languages with strengths in each paradigm, depending on what
trade-offs produce the best engineering in each case.

~~~
_pmf_
I agree. Part of the reason Clojure sees production use and Haskell does not
leave its academic closet very often is that the former makes interaction with
the non-functional parts of a system painless.

~~~
_delirium
I don't regularly use either one, but I was under the impression that the
situation is rather the reverse: Clojure is quite popular among hobbyists but
doesn't get a lot of serious industry usage (perhaps excepting some stuff in
the web space, which I don't really follow), while Haskell has a bunch of
industry users. I see ads for Haskell jobs pretty often, anyway, while I'm not
sure I've ever seen a Clojure job ad outside of HN threads.

~~~
adrianm
My experience has been the reverse, which just goes to show you how bad
anecdotal evidence is at proving any point. Without any hard data, it's
impossible to say for certain. All I know is that there is definitely enough
demand for Clojure work that I didn't have to search for any leads when I left
my last job; I already had four promising prospects to choose from that either
emailed me directly or messaged me on LinkedIn within the previous month.

------
seanmcdirmid
Debugging a functional program is so difficult (data flow debuggers don't
really exist) that equational reasoning is necessary, because otherwise you
won't be able to fix your code. But really, mixing list comprehensions with
effects is a bad idea, and we C# programmers have no trouble avoiding it.

There are ways to tame side effects without going monads, which don't really
fix the complexity problem anyway (they just make all effects explicit). See
this paper for ideas on how to do that:

[http://research.microsoft.com/pubs/211297/managedtime.pdf](http://research.microsoft.com/pubs/211297/managedtime.pdf)

We shouldn't treat state like an unwanted but necessary evil, we should
embrace it and deal with it.

~~~
lomnakkus
Are there any practical systems that are usable today that implement "managed
time"? (I haven't read the paper yet(!), but I just thought I'd ask to shorten
the turnaround time.)

Btw, are you familiar with David Barbour's Reactive Demand Programming and if
so, what are your thoughts on it?

~~~
doxydexydroxide
Yes there are.

Glitch is an approximation to Backus's "Applicative State Transition Systems".
See Backus, "Can programming be liberated from the von Neumann style?":
www.thocp.net/biographies/papers/backus_turingaward_lecture.pdf

Since 1980, I have applied ASTS in the development of Hard Real Time Avionics
Systems Software for Military & Commercial Aircraft and Spacecraft.

I have licensed this code exclusively to Aerospace companies over the years.

It has proven its value in the development of verifiable software.

For reasons that Backus states, the approach is not easy to comprehend or
apply, and requires very specialized tools (a data flow debugger & proof
system).

The tool is known as "Synthesis" in the Aerospace Industry.

~~~
seanmcdirmid
Not really. What Backus is advocating is applicative-style programming (what
we know as FP today); what Glitch is advocating is anything but! Some of the
code looks similar (indeed, we are inspired by FRP and earlier reactive
languages like Esterel), but Glitch keeps everything in the world of explicit
control flow rather than burying everything in data flow.

~~~
dllthomas
I only skimmed it, but it sounds like it's in a conceptual space near STM and
LVish?

------
Kutta
I disagree with some of the sentiment expressed in the article; mostly
functional programming works much better than profoundly non-functional code,
and more functional programming usually delivers marginal benefits.

The fact that effects can be used to simulate other effects does not imply
that programmers would typically try doing that. In fact, programming style is
more often shaped by trivial inconveniences (often syntactic, or the
availability of certain libraries) than most of us care to admit. A
programming language that merely _discourages_ people from writing spaghetti
code should still result in less spaghetti code being written. Freak bugs will pop
up from time to time, but not as often.

Shifting an existing language to a more functional style should be a net
negative thing only if the resulting extra complexity and bloat gets
excessive. Starting from an OOP/imperative position, it gets harder and harder
to support an additional (marginal) piece of FP functionality, and after a
while it's just better to start over and switch to pure FP, but current OOP
languages flirting with FP are not particularly close to that point. They
haven't started in earnest to introduce purity to an impure core language
(which would be rather awkward, as the article points out). In this situation
it is somewhat early to talk about mostly functional programming not working.

Scala might be closest to the point of excessive bloat, but I'm not really
familiar with it, and I think that the FP support there is a good thing,
especially in contrast to the Java code that it might displace.

(Disclaimer: I program chiefly in Haskell and I love it).

~~~
tethis
And it's important to note that with Scala, a big part of its complexity comes from
the practical philosophy of its creators: purity is sacrificed in order to
_actually make it work on the JVM the way we want it to_.

I used to work with Java on the server-side, but I'm programming almost
entirely in Scala now (I'm at a small shop where I was lucky enough to
convince the boss to let me give it a go on a project last year) and I've got
to say that it's completely changed the way I think about and solve problems.
I learned Haskell in university and I'm trying to learn more, but I don't see
it ever being accepted in our office.

The Scala syntax does (especially when working with async programming /
Futures) suffer from problems, like the nested callback problem that
Javascript also has, but there is an elegant solution in the language... you
just have to know how to use it. But on the other hand, it doesn't look like a
completely foreign language to Java developers and it's not too hard to get
our new hires productive with it.

~~~
muhuk
How do you manage nesting from callbacks and matches and such? Inlining short
functions like _ + _ is fine, but how do you organize more advanced
operations?

(I'm just constantly looking for ways to make my scala code more accessible.)

~~~
tethis
I am indeed referring to for-comprehensions. Any time you have nested
flatMap/maps you can replace them with a for, since that's all a
for-comprehension really is...

    
    
      computation1.flatMap { result1 =>
        computation2(result1).flatMap { result2 =>
          computation3(result2).map { result3 =>
            result2 + result3
          }
        }
      }
    

Becomes...

    
    
      val foobar = for {
        result1 <- computation1
        result2 <- computation2(result1)
        result3 <- computation3(result2)
      } yield {
        result2 + result3
      }
    

Or, since I'm using the async Postgres module which returns Futures, to make
them run in parallel you need to create the futures beforehand. I often have
something like this...

    
    
      val fProject = Project.findById(projectId)
      val fTask = Task.findById(taskId)
    
      for {
        projectOption <- fProject
    
        taskOption <- fTask
    
        students <- projectOption match {
          case Some(project) => Student.findByProject(projectId)
          case None => Future.successful(IndexedSeq[Student]())
        }
    
        result <- (projectOption, taskOption) match {
          case (Some(project), Some(task)) => { 
            /* do something with project, task and students */ 
          }
          case (None, _) => Future.successful(NotFound(s"Project $projectId not found"))
          case (_, None) => Future.successful(NotFound(s"Task $taskId not found"))
        }
      }
      yield result
    

By creating the futures first, they both run in parallel and their results are
'collected' by the for comprehension. In the first example, the computations
necessarily run in sequence.

I'm using the Play framework; for me, these for-comprehensions are usually
found in my controllers, and the final result is an HTTP result.

Passing along failure can still be tricky but I find this much more organized
than nesting callbacks.

------
PaulAJ
The software engineering world is in danger of repeating the mistake it made
with objects two decades ago.

Back then there were legacy "structured" languages like C and Ada, and new
exciting "object oriented" languages like Smalltalk and Eiffel. C++ was
promoted as a "middle way" that let you "choose the best tool for the job".
This made the pure OO languages look extremist. So, it was argued, if you had
a problem best solved by structured programming you could do that, and if you
were doing an application with objects in it then you could use those. It also
meant that your old C programmers could pick up the tool and start using it
immediately without having to relearn how to design a program.

"Aversion to Extremes" is a well-known cognitive bias, and these arguments
play up to it, but of course it didn't work well in practice. OO features
didn't dovetail neatly with the existing structured features, leading to an
exponential explosion in the rules defining how the various features
interacted. The mess was not helped by experienced structured programmers who
felt they should use the new sexy OO features; the result was often a
conventional structured design with some random virtual functions sprinkled
around.

The book "Industrial Strength C++" is a case in point. It is basically a
catalogue of C++ language features that interact in dangerous ways.
[http://www.amazon.com/Industrial-Strength-Recommendations-In...](http://www.amazon.com/Industrial-Strength-Recommendations-Innovative-Technology/dp/0131209655)

Today we have the same story happening again. On one hand we have legacy OO
languages like Java and C++, and on the other hand we have pure functional
languages like Scheme and Haskell. So along come "hybrid functional" languages
like Scala which basically make the same promise as C++: if your problem has
lots of objects then you can carry on doing the same OO designs you know and
love, but if you think that these magic first-class functions would be useful
in some complicated algorithms then you can use those as well. And it's going
to fail for the same reasons that C++ failed: the OO and functional features
don't interact well, so we are going to have lots of messy rules about them
that cause subtle bugs, along with attempts by OO programmers to use chains of
map and filter functions that work inefficiently because the compiler can't
optimise them. And in ten years time there will be an "Industrial Strength
Scala" book consisting of a long list of features that should be avoided if
you want reliable software.

~~~
cynicalkane
How about Clojure, which is a Lisp-like language with objects? Explicitly, you
have protocols and multimethods; implicitly, almost everything under the hood
is done with Java interfaces (and you can make new first-class datatypes by
implementing the interfaces, though this is mildly discouraged). I cannot
recall anyone complaining about the OO clashing with the functional patterns.

There's also O'Haskell, and in regular Haskell you can get something like
polymorphism (implemented under the hood with actual polymorphism) with
forall-qualified datatypes.

It's not OO and functions that clash... it's functions and _state_. There's
(literally) no law of computer science that says objects and functions must
clash.

~~~
coolsunglasses
I'm an experienced Clojure user with work done on the job and in open source.
If you're a Clojure user, it's very likely you've used a library I've worked
on or made.

Don't bother. Go straight to Haskell and just Haskell.

No excuses, no compromises, no mental backflips to justify not learning
something new. Learn Haskell properly and then see for yourself why "hybrids"
are a waste of time.

Hybridized approaches are like asking for a hole in your bathroom floor when
you're being offered indoor plumbing with a porcelain toilet instead of an
outhouse.

~~~
cynicalkane
I find your rant slightly insulting, off-topic, and lacking in substantial
information. "Learn Haskell because I say so." Why? No.

But let me do something you didn't do, which is to actually explain my
position. Hybrid languages afford a flexibility and power not available to
language puritans. It's very nice to have things like first-class DSLs that
don't rely on slow monad stacks, heterogeneous lists that don't rely on
existential quantification, stateful programming that isn't a type system hack (do I really
need to understand existential uninstantiated types to twiddle a bit in a
vector?), first-class side-effects without monad stack weirdness (unsafe
escapes don't count), Turing-complete macros at load time, inheritance and
class qualification that isn't crippling and/or dependent on weird compiler
extensions... In the course of learning Haskell I find myself banging my head
against a type-system wall to do something that would be trivial in Clojure or
Scala.

Now I like Haskell, but Clojure and even Scala offer very flexible and
powerful defaults together with programming escape hatches anywhere you want
them. This is very powerful. It's not clear how to achieve something similar in a
strongly typed strict language. Monads and weird compiler extensions don't cut
it.

~~~
caughtexception
I am fed up with these "pure" people.

They have hijacked every sane discussion about programming into a
condescending -- "Do you have monads and typeclasses?".

It's absolutely unhealthy.

State is not Evil.

Languages like Clojure and Scheme take imperative features and give them more
beautiful abstractions.

In what profession do you find people complaining about the very foundations
and thinking it's cool? It's like musicians saying rhythm is stupid.

If you haven't written a State Machine with gotos and never marvelled at its
beauty ... please just try it.

~~~
coolsunglasses
>State is not Evil.

I agree!

You're mistaken if you think Haskell users don't take advantage of state or
side effects. They do, you just don't understand the difference.

Try this course:

[http://www.seas.upenn.edu/~cis194/lectures.html](http://www.seas.upenn.edu/~cis194/lectures.html)

Then see LYAH's section on the State monad:

[http://learnyouahaskell.com/for-a-few-monads-more](http://learnyouahaskell.com/for-a-few-monads-more)

Then reflect on ST:

[http://www.haskell.org/haskellwiki/Monad/ST](http://www.haskell.org/haskellwiki/Monad/ST)

Then this example using mutable variables and closures:

[http://bitemyapp.com/posts/2014-03-25-when-nested-io-actions...](http://bitemyapp.com/posts/2014-03-25-when-nested-io-actions-are-wanted.html)

Side-effecting closures mutating variables!

It's not "no state or side effects".

It's about making state and side effects typed and explicit so they can be
properly composed and manipulated.
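
To make that concrete, here's a minimal Haskell sketch (the names are mine,
purely illustrative): a pure function whose type rules effects out, next to a
side-effecting counter closure over an IORef whose IO type advertises exactly
what it does.

    
    
      import Data.IORef (modifyIORef', newIORef, readIORef)
    
      -- The type alone guarantees this can do nothing but compute.
      double :: Int -> Int
      double x = x * 2
    
      -- An explicitly effectful counter: the IO in the type advertises
      -- the mutation, and callers compose it like any other IO action.
      mkCounter :: IO (IO Int)
      mkCounter = do
        ref <- newIORef 0
        pure $ do
          modifyIORef' ref (+ 1)  -- a side-effecting closure mutating a variable
          readIORef ref
    
      main :: IO ()
      main = do
        tick <- mkCounter
        a <- tick
        b <- tick
        print (a, b)              -- (1,2)
    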

------
nhaehnle
From the article:

 _The infix application function (ma >>=\a->f(a)), commonly called bind,
executes the computation ma to expose its effects, calling the resulting value
a, and passes that to function f._

This is the kind of imprecise language that really made life extraordinarily
difficult for me when I first learned about monads. I think this is an
important point:

>>= does _not_ execute ma!

If it did execute ma, that would violate the purity of the language.

Instead, >>= takes the computation ma and combines it with the function f to
build a new, larger computation that is _composed_ of smaller parts. The
resulting computation (and ma) might never be executed (depending on the rest
of the program).
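
A tiny Haskell sketch of that point (the names are made up for illustration):
an action built with >>= is just an ordinary value, and nothing happens unless
something actually sequences it into main.

    
    
      import System.Exit (exitFailure)
    
      -- Built with >>=, but never executed: evaluating 'bomb' to a value
      -- is not the same as running it.
      bomb :: IO ()
      bomb = putStrLn "about to exit" >>= \_ -> exitFailure
    
      main :: IO ()
      main = do
        let _unused = bomb            -- composed, never run
        putStrLn "program finished normally"
    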

~~~
lostcolony
Before learning about monads in Haskell, you probably should learn a bit of
Haskell first. Most tutorials assume that. So the fact that function
application is lazy is assumed as prior knowledge, since any approach to
learning Haskell would cover that before monads.

~~~
nhaehnle
This has nothing to do with laziness, because laziness does not affect the
semantics of a program that has bounded recursion. [0]

The problem is that the quoted part of the article is written as if the >>=
operator had side-effects (whether lazy or not), and that's just plain false.

Now I agree that ordinarily, a student of Haskell has learned very early on
that There Are No Side-Effects in Haskell, and should therefore not be
confused. However, introductions to monads typically start out by stating that
monads are how you _can_ get side-effects in Haskell, and so they explicitly
"deactivate" the No Side-Effects-assumption that students have. That's what
causes the confusion.

(In fact, the moment I finally understood monads was precisely when I realized
that a useful way of thinking about it is that Haskell code with monads does
_not_ have side-effects after all. This is totally obvious in hindsight, but
it seems that the best way to get this point across in teaching material has
yet to be found.)
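
To make that way of thinking concrete, here is a small illustrative sketch
using the standard State monad from mtl: the "stateful" computation is just a
pure function from an initial state to a result and a final state.

    
    
      import Control.Monad.State (State, get, put, runState)
    
      -- Looks imperative, but there are no side effects anywhere.
      bump :: State Int Int
      bump = do
        n <- get
        put (n + 1)
        pure n
    
      -- runState exposes the plain function underneath:
      -- the same input state always yields the same pair.
      demo :: (Int, Int)
      demo = runState bump 41   -- (41, 42), every time
    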

[0] Obviously, this is only true in a side-effect-free language, but we're
talking about Haskell here...

------
lomnakkus
Seems about spot on.

I find it interesting that he's now advocating monads considering his earlier
stance on static type systems[0]. Of course it may just be that he's changed
his mind -- it happens.

(Yes, I consider monads as fundamentally requiring a static type system, at
least if you're using monad transformers or similar advanced techniques. In
practice you're _not_ going to be able to get things right without compiler
assistance when you have a stack of N monad transformers.)
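
(For illustration only, not from the linked discussion: even a tiny stack
built with the standard mtl library already leans on the type checker to
decide which layer each operation lands in.)

    
    
      import Control.Monad.IO.Class (liftIO)
      import Control.Monad.Reader (ReaderT, ask, runReaderT)
      import Control.Monad.State (StateT, evalStateT, get, modify)
    
      -- A three-layer stack: read-only config, a mutable counter, IO at the base.
      type App a = ReaderT String (StateT Int IO) a
    
      step :: App ()
      step = do
        name <- ask                      -- resolved to the ReaderT layer
        modify (+ 1)                     -- lifted through to the StateT layer
        n <- get
        liftIO (putStrLn ("hello " ++ name ++ ", step " ++ show n))
    
      runApp :: App a -> IO a
      runApp app = evalStateT (runReaderT app "world") 0
    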

I also find the bit on having a "pure" annotation vs. having explicit monads
particularly insightful. The difference between a type system that only
handles "pure/impure" vs. a type system that handles "pure/state-
effect/writer-effect/network-effect/etc." is huge.

[0] [http://lambda-the-ultimate.org/node/1311](http://lambda-the-ultimate.org/node/1311)

~~~
taeric
If "in practice, you're _not_ going to be able to get things right..." than
what the hell? That just strikes me as crazy.

Note that I am not necessarily against monads. However, this idea that they
are both a good answer and require fairly extensive programmatic help seems
contradictory.

I realize we can never reduce programs to things which are trivial and easy to
comprehend. However, any new paradigm/trick that will always require compiler
assistance doesn't sound like a step forward.

(Of course, in my mind a step forward means tools that don't necessarily need
you to change your current languages and programs. Which is one of the things
that annoys me with many new languages. Seems we always get a new wave of
effectively solved areas of programming with incomplete solutions that are
"cool" because they are in the new language.)

~~~
lomnakkus
_Any_ kind of static typing (regardless of how primitive) is compiler
assistance. Do you think that all static typing is worthless?

~~~
taeric
I'm not sure how that even follows from what I wrote.

Note, I am all for extensive static analysis. To the point that I am excited
about such tools as Coverity and friends.

I am beginning to take exception to requiring ever more from the programmer.
To the point that a programmer can't "in practice" specify a program correctly
without a type checker. (Which... is what the parent post says. Right?)

I would much rather have it such that "in practice" we can specify programs
without help. Since that implies that we can "in practice" read and reason
about programs without extensive help, as well.

~~~
lomnakkus
Sorry, didn't mean "compiler assistance", I meant "programmatic assistance"
(as in your original reply). I hope my post makes more sense with that change!

~~~
taeric
So, it sounds like my post is still the more nonsensical. I'm not against any
programmatic assistance. I am growing weary of the ones that require rather
large stretches from the programmer to show dividends.

So, I would rather have a static analysis tool let me know that I am using
data straight from the user than have to build a rather large type system
that encodes this. See GWT and the "SafeHtml" joy for an example of what
sucks in programming.

Now, it can easily be argued that the problem there was Java not being quite
strong enough, but even in languages like Scala, things can be difficult.

Of course, I have grown to love the Lisp world, where people have pretty much
agreed to write in S-expressions. Not because they are the most readable
form, but because they really are ASCII art of the structure of what you are
trying to say.

So, yeah, I'm a jumble of conflicting feelings on this. :)

------
userbinator
_The average programmer would [...] because that's the way the program was
written, as evidenced by the semicolon between the two statements._

Seriously? I don't even know C#, and the "var q0" was enough to suggest that
the type of whatever Where returns is not an array of int (as opposed to the two
functions above, with int and bool types), so why would I expect it to have
filtered the array and returned it?

Ditto for the 2nd example: in this one it's even more clear that Select is
returning IEnumerable<int>, not int[].

 _In C# the using statement causes the variable initialized at the entry of
the block to be automatically disposed of when control flow reaches the end of
the block. [...] surprising exception far away in time and space_

There should be nothing "surprising" about that; maybe it is if you don't know
C#. One of the first things that beginning C and C++ programmers learn rather
quickly is never to return pointers to local variables or put those someplace
where they'll need to be used after the function returns. How is this any
different?

 _Imperative programs describe computations by repeatedly performing implicit
effects on a shared global state. In a parallel/concurrent/distributed world,
however, a single global state is an unacceptable bottleneck_

Except that this "global state" is not really one thing, and it's not like all
of its parts are modified by any one effect.

It appears that this "fundamentalist functional programming" the article is
advocating is attempting to make programming more like math and distancing it
farther from the real world by adding more abstraction, and if anything, I
think abstraction is one thing that a lot of software these days needs far
_less_ of.

(Sorry if this is too rantlike, I have somewhat of a visceral reaction to
these "the sky is falling!" style of articles...)

~~~
MichaelGG
It is possible that Erik has a better (or more cynical) idea of the "average
programmer" than you or I might. I'm of the opinion that so long it makes
sense and has an elegance to it, then it's fine. If "average" programmers
can't handle it, they can use another language or something.

~~~
metasim
I can't substantiate this, but my sense is that the generation of general-
purpose, non-academic programming languages after C++ lowered the barrier to
writing software in the industrial context (think VB.Net, etc.). This can be
seen as a good thing in these business contexts, but those who are driven to
be deeper and more concise in their problem solving, and are interested in
solving more challenging problems, are only starting to realize that we've
not been expecting enough out of our languages and compilers. Furthermore,
the last two decades of software tool development have gained "lowest common
denominator" accessibility at the cost of "dumbing down" the ways we
generalists think about and solve problems. The first thing that picking up
Scala did for me was make me realize I was expecting _way_ too little from
the compilers I use. Why should I be figuring out what the damn type of a
value should be (while not throwing out types altogether)!?!

Since Meijer is an employee of Microsoft I'd extrapolate to say he's
heavily influenced in his experience of the "average programmer" by the
primary clients of his company.

~~~
endeavour
He no longer works for Microsoft

------
eigenrick
I would just like to point out that the author seems to be confusing Pure-
Functional-Lazy with just Functional.

I absolutely agree that if you buy into Lazy programming, you have to buy into
entirely Pure Functional as well.

However, many languages and frameworks have demonstrated a high degree of
success mixing in functional paradigms (mostly centered around collections).

I would like to refer people to the concept of _Collection Oriented
Programming_. In this paradigm, applications specify most of their operations
as mapping and reducing functions across different collections (trees,
vectors, lists, etc). Not only does it promote safety, but because it works
in such high-level constructs, it allows the compiler/interpreter to optimize
the operation in many ways, such as optimizing out the lambda calls, and even
parallelizing the operations.

To name a few Language+Libraries for which this is hugely successful: Ruby,
Clojure, C++11 w/ std::algorithm, Scala, and Haskell, of course.

~~~
sparkie
He isn't confusing the terms, he's just using the correct one.

Functional programming is about programming with functions - "Purely
functional" is redundant. It should be obvious that Functional means
functions, and "function" has fairly precise meaning which predates
computation, and certainly didn't include anything about side-effects.
Languages which don't use functions are not functional - they would be best
described as pseudo-functional, nearly-functional, or mostly-functional, as he
uses in the article title.

The only reason we've had to invent new terms like "purely functional" is
because the original term has been abused to mean what it never meant - it was
used to describe pseudo-functional languages, so we needed a new term to
distinguish the two.

~~~
eigenrick
I disagree that '"Purely functional" is redundant'. You can have a functional
language that still uses mutable state. Lisps are a good example. Pure
functional implies no mutable state. Pure functional is the only way to safely
achieve laziness. Which seems to be the author's thesis.
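
A rough Haskell sketch of why laziness and implicit effects mix badly (using
the standard Debug.Trace escape hatch, which prints as a side effect of
evaluation; the values are illustrative):

    
    
      import Debug.Trace (trace)
    
      -- Under lazy evaluation you cannot predict when, or whether,
      -- the traced "effect" happens.
      values :: [Int]
      values = map (\x -> trace ("computing " ++ show x) (x * x)) [1, 2, 3]
    
      main :: IO ()
      main = do
        putStrLn "list built, nothing traced yet"
        print (values !! 1)   -- forces one element; only "computing 2" is traced
    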

~~~
dllthomas
Well, from the parent poster's position, Lisps would not be "functional" -
they'd be "pseudo" or "mostly" functional or something. Obviously this
disagrees with common usage, but the expressed motivation "mutation disagrees
with 'function' in mathematics" isn't crazy. I do think it's insufficient,
however, in the face of common usage given that we've collectively chosen that
"purely functional" should mean that.

------
pron
I disagree, because the entire article rests on the premise that imperative
programs are bad because they rely on shared mutable state. Here's the thing,
though: every complex-enough program relies on shared mutable state; even
Haskell programs[1]. Pure functional code just might outsource shared mutable
state to an external database.

The solution, then, is not doing away with shared mutable state, as that's
downright impossible, but providing transactional semantics for that
state. Once shared state has clear transactional semantics, the pure-
functional nature of the programming language becomes secondary.

Let's imagine a programming language with perfect STM (i.e. an STM
implementation that executes the most efficient code possible in every
transaction). That language would have none of the problems described in the
article, even if it were completely imperative. Hence, the problem is managing
shared mutable state, and not existence of side-effects in general.

Pure functional programming could have been one solution to the problem,
except that it's not really a solution at all: Haskell programs still
need a database. But the article assumes no other possible solutions, which is
wrong. It focuses on one particular solution (which isn't even really a
solution), rather than exploring many approaches. It is simply begging the
question.

EDIT: I do agree that _some_ partially-functional approaches are inherently
dangerous, but I do not agree with the conclusion that the answer is going
pure functional.

[1]: Except maybe for compilers, which are probably the most common complex
software built in Haskell.

~~~
nmrm
> Pure functional code just might outsource shared mutable state to an
> external database.

There's an important distinction between essential side-effects and
inessential side-effects.

Some data you have to store in a database is an essential side effect.

Modifying an iterator or a flag (e.g. bool isOpen) is a non-essential side
effect.

Transactional semantics are probably part of the story for the former, but the
author was talking about the latter.

~~~
pron
In either case it does not follow that pure functional programming is the
answer, especially as it makes "essential side effects" (which are, well, very
essential) quite cumbersome. Clojure isn't purely functional; it makes
essential side-effects easy, and non-essential (or, rather, dangerous) side
effects hard.

I'm not saying that Clojure is the silver bullet, it's just that the article's
conclusion does in no way follow from the premise.

------
_random_
I am not sure what I am doing wrong, but using functional techniques improved
my C# quite a lot.

~~~
PaulAJ
It's an example of the "blub paradox": if you haven't used a pure functional
language then it's hard to see what the problem is.

The crucial thing about pure functional languages is that they decouple the
logic of the program from the order of the computation. In an imperative
language control flow and data flow are explicitly interleaved, with complex
dependencies between the two. In many cases a particular bit of code is only
correct if another bit of code has been executed previously, and it's up to the
programmer to keep track of all these dependencies.

In a pure functional language this coupling between data flow and control flow
is broken because all the data dependencies are made explicit and visible to
both the compiler and the programmer. That frees the programmer from bothering
about it (and automating low level programming issues is always a Good Thing),
and it also enables the compiler to optimise it. So for instance in Haskell
the compiler will rewrite this expression

    
    
        map f (map g xs)
    

into this

    
    
        map (f . g) xs
    

The first line would iterate through the list "xs", building up an
intermediate result list by applying "g" to every element. It would then
iterate through this intermediate list applying "f" and building up the
result.

The second line iterates through the list only once, applying "g" and then "f"
to each element in turn. Haskell can do this because "f" and "g" are
guaranteed by the type system to have no side effects, so it doesn't matter
what order they are executed in. In impure languages the order of execution
matters, so the compiler can't switch things around in this way without
changing the meaning of the program.

The programmer also gets the benefit. If you see "x = complexThing" you can
always replace "x" with "complexThing" and vice-versa anywhere that "x" is in
scope, without changing the meaning of your program. That makes it much easier
to reason about what your program does.
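
A tiny illustration of that substitution property (the function is made up
for the example): both definitions below mean exactly the same thing, so the
compiler and the reader are free to pick whichever is convenient.

    
    
      -- Naming a subexpression...
      sumOfSquares :: [Int] -> Int
      sumOfSquares xs = total + total
        where
          total = sum (map (^ 2) xs)
    
      -- ...or substituting its definition cannot change the result,
      -- because 'sum (map (^ 2) xs)' has no side effects.
      sumOfSquares' :: [Int] -> Int
      sumOfSquares' xs = sum (map (^ 2) xs) + sum (map (^ 2) xs)
    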

~~~
_random_
I am disciplined enough to keep my functions pure and use immutable data
structures where they are useful. I try to apply each paradigm where it makes
most sense. What is a purely functional way of implementing GUIs, for example, to
use instead of MVC/MVVM patterns?

~~~
nmrm
This SO post does a decent job of answering this question:

[http://stackoverflow.com/questions/2672791/is-functional-gui...](http://stackoverflow.com/questions/2672791/is-functional-gui-programming-possible)

It's also worth noting that frameworks such as Java's Swing are somewhat
functional in their architectural style, if not in their implementation.

------
foobar23511
I think it is funny that the most hardcore group of FP enthusiasts seem to
sympathize with constructive mathematics (e.g. proof systems a la Coq), yet the
law of the excluded middle is used in the argument here. :)

------
RivieraKid
The notion that functional programming is somehow better than imperative or
object-oriented is completely and utterly wrong. It has its benefits. In some
situations, it's the best approach. But in most real-world projects I've come
across, a mix of different paradigms is optimal.

~~~
PaulAJ
The trouble starts when you try to get your mixed paradigms to fit well
together. The original article was making the point that the cost of mixing
functional with imperative code means that the functional code is crippled.

And I've never seen a project where a mix of paradigms was optimal. Any
project, and any part of a project, can be tackled in a functional or OO
paradigm.

~~~
RivieraKid
The original article was about strictly side-effect free FP, wasn't it? Yeah,
that obviously mixes badly with programming styles that use side-effects.

Most situations I come across are solvable with a reasonably elegant mix of
programming styles.

------
maxiepoo
I just don't get Erik Meijer, though I think he's quite entertaining. He seems
to enjoy taking the other side wherever he is.

Here's a talk I was at a couple years ago where he makes fun of the ideas he's
presenting in this article:
[http://www.youtube.com/watch?v=a-RAltgH8tw](http://www.youtube.com/watch?v=a-RAltgH8tw)
. Key quote: "obsession with monads is a medical condition".

~~~
the_af
Indeed, in last year's Reactive Programming course from Coursera, Erik's
position was that one shouldn't be too fundamentalist about programming
languages, and instead pick whatever works from each language.

He seems to be having a bit of fun with us all. That said, I respect people
who can change their minds.

------
coldtea
> _Unfortunately, just as "mostly secure" does not work, "mostly functional"
> does not work either._

I call BS. Pure OO and pure imperative have worked for half a century (a
timespan in which functional languages have given us almost NO programs of
importance, with the exception of Emacs, AutoCAD and a handful of others).

It's not like people have abandoned C/C++/Java/C#/Go/etc because they don't
work anymore.

Plus, the need to get more out of multicore machines is quite exaggerated --
most programs can do just fine with just one core (if anything, they are
unoptimized even for that). As for the others, programs like Premiere, Final
Cut Pro X, Logic, Cubase, Maya, AAA games, etc, that is multimedia and number
crunching stuff where performance is at a premium, those are not done in
functional languages (the particular examples are almost all C++).

As for high volume internet systems and services, those have found that
Go/Scala/Clojure etc work well for them, to tap those cores.

So, yeah, "mostly functional", will do just fine.

~~~
sparkie
Appealing to tradition doesn't really help to solve any problems we might face
in the future - just because things work "now" doesn't mean they're great.
Still, the examples you gave are kind of biased because you're only
considering a specific kind of _walled garden_ software, which does one job,
but has limited extensibility for further development (although this is
intentional for most games).

It should be understood when Erik says "does not work", he is not saying these
languages are useless or have no practical use today - he is suggesting that
they are incapable of solving the problems of tomorrow. When looking for
solutions to the problems we're facing now or in future, it's useful to have a
look at how we actually build software - what are the "units" which make up
the bulk of our software, and how do we combine them. Let's have a look
through the decades and reason a little about what these units were.

    
    
        1940s: Instructions
        1950s: Subroutines
        1960s: Procedures/Structured programming
        1970s: Interfaces over data
        1980s: Objects
        1990s: Libraries
        2010s: Services
        future: ???
    

Obviously these are only approximations, but they give a fair idea of our
industry's development. For example objects were in use before the 80s, but
they were popularized by C++. None of these were new in their day, but they
became the primary units of software which we use in our programs - because
it's simply too much effort for anyone to write them all from scratch - we are
all using other people's software in our own.

Each stage in this development is an attempt to simplify the previous one, by
encapsulating, or hiding the implementation detail, and presenting a
simplified interface for another programmer to consume. Part of the idea is
that _you shouldn't need to know how the encapsulated system is
implemented_; you only need to consume it in the ways specified.

So while people are building all this software on top of services now using
Go/Scala/Clojure and whatnot - what languages are people going to be
using a decade or two from now to _combine_ these into bigger programs?

The suggestion of purely functional programming is one that removes the need
to know how the program or service you consume deals with state, because
effects are made explicit. The idea of purely functional programming as the
solution to multicore/concurrency is just a _consequence_ of having explicit
knowledge of state, because we need it to reason about race conditions.

We don't really know what the future will be like, but I imagine it will be
one where programs are written to be entirely independent of the hardware on
which they run - as they will be intended to run in clouds with heterogeneous
architectures - other software will be making those decisions for us, but it
can only make them if it can reason about their state. Which suggests we
either need to make it explicit, or vastly improve our theorem provers to
figure it out for us.

------
overgard
So many weasel words and strawmen in this article.

> Recently, many are touting "nearly functional programming" and "limited side
> effects" as the perfect weapons against the new elephants in the room:
> concurrency and parallelism.

Who is this "many", and when did they say it was "perfect".

I think the premise is silly too. Even if you don't get the full benefit of
functional programming without a hardcore functional language, you obviously
get some. Limiting side effects is almost always a good thing.

~~~
PaulAJ
The problem is that the languages don't limit the side effects; they leave
that to the programmer.

------
jerf
I am going to take the apparently unique position here (after 90 comments)
that Erik is correct, at least about pure functional programming (the more
modern sense of "functional" rather than the older one that is "merely" about
first-class function objects). The value of pure functional programming comes
from creating programs out of very mathematically-small pieces... a function
of Int -> Int can only do so many things to the output Int, as compared to a
function in an impure language which may be only able to do so many things to
the output Int but may _also_ arbitrarily manipulate the world in
uncontrollable ways. The pure-functional function is exponentially "smaller"
than the impure version. Much of the study of the Haskell world right now
comes in how to use these much simpler pieces to still build real-world
programs.

If you "pragmatically" say, "Oh, but this is so _hard_ , let's just let
ourselves use a _little_ bit of arbitrary-world-manipulation in our
functions", you've basically returned back to the original world of
programming with exponentially-complicated pieces again. As he points out in
the article, even with a tiny crack in the wall, the compiler is back to being
unable to assume purity. Programs must once again function as if an Int -> Int
closure might read from the disk or hit the network. You're really back in the
world of OO + old-school functional addons. I'm a bit more pragmatic and will
agree that's a nice and useful paradigm, especially if you've learned
discipline from time in the pure-functional world, but it is _not_ the pure-
functional world, and you will not reap the benefits.
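
To put that in (purely illustrative) code: the moment IO shows up in a type,
the guarantee about how "small" the function is disappears; the file name
below is hypothetical, standing in for "the world".

    
    
      -- The type says everything this can do to its argument.
      pureStep :: Int -> Int
      pureStep n = n + 1
    
      -- Once IO appears, the function may touch the disk, the network,
      -- or anything else; the exponentially larger state space is back.
      impureStep :: Int -> IO Int
      impureStep n = do
        contents <- readFile "offset.txt"   -- hypothetical file, for illustration
        pure (n + length contents)
    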

(Which A: yes, they do exist and B: no, they aren't necessarily "mandatory" or
the "only way to program". But still, see A. Personally I'd keep pure
functional around for either applications that need high quality assurance
without breaking the development budget, or programs of high complexity where
the state space is big enough even before you start using horrifically
unconstrained pieces to try to build your solution.)

I'd rephrase the title a bit... _Mostly_ functional programming is not _pure_
functional programming, and whereas I think "pure OO" didn't have a sweet spot
where you insist everything is 100% OO, pure functional programming does. It
isn't the only sweet spot. I think careful use of a very-not-pure language
like Go, where one merely uses convention to avoid shared state (a very "pure"
idea), can still be a sweet spot on its own. But there is another, where you
go "purely pure", and for that one, you really do have to go _purely_ pure, or
you're not using it... and if you've never done it yourself, because you've
only used impurely pure languages, you don't really have an opinion yourself
because you've never tried it. Very nearly the only practical way to try it
right now is Haskell.

~~~
marcosdumay
Now that I've read your post, I'd say that impurity is much more similar to
"goto" than with object orientation.

A language with "goto" is not structured. It does not matter how often it's
used, or how similar the rest of the language is to a structured one. The same
is true for side effects.

(Funny thing that the most used language has both.)

~~~
orbifold
Whether code is structured or not is not really a property of a language;
every function call is effectively a "goto", although with more convenient syntax. If
your functions are partitioned in a strange way, you can just as easily
produce spaghetti code. Same with nested if statements. You can write well
structured code in Fortran 77 for example, even though most standard control
structures involve a goto. Absence of a goto statement is neither necessary
nor sufficient for enforcing structured code. One real advantage is perhaps
that the compiler has more invariants to work with.

~~~
marcosdumay
There is an entire class of compiler optimizations that can only be done on
structured languages. If you include a "goto" command in a language you must
either do a huge amount of static analysis to map your language into a
structured one, or live without such optimizations.

The fact that some code does not use a feature of the language does not help
the compiler generate a faster program.

Also, no, function calls are not equivalent to "goto".

~~~
astrange
Compiler IL reduces all branches to the equivalent of "goto", so adding a few
more is just no problem at all.

More important obstacles to optimization include use of exceptions (now that
makes control flow complicated), memory aliasing (any write to a char * is a
scheduling barrier), overly-defined int math (can't prove loop iteration
count), and the fact that your compiler has no idea what the hell is going on
inside an x86 chip anyway.

------
spullara
"Completely functional" obviously doesn't work as there would be no side
effects aside from your computer getting warm. Every functional language has
escape mechanisms that allow you to see what the program is doing. "Mostly
functional" is as close as you can get to functional.

~~~
the_af
Erik addresses this in the article. What he calls "mostly functional" isn't FP
with some carefully used escape mechanism, but imperative languages adding FP
features here and there. He argues that the benefits of true FP get negated in
hybrid languages.

------
eranation
Just an observation: Erik was one of the lecturers in the Coursera course
"Introduction to Reactive Programming", co-taught by Martin Odersky, creator of
Scala. Erik was teaching reactive extensions for Scala (a port based on his
work at Microsoft). The course is highly recommended, by the way.

~~~
saryant
He's also a keynote speaker at this year's Scala Days conference.

------
solomatov
I think this is a marketing article (the author is working as a consultant
now). With all due respect to Erik Meijer for his contribution to functional
programming, this article disseminates FUD that you are using functional
features of programming languages incorrectly.

In my experience, using imperative programming with elements of functional
programming is very productive, and I don't need to introduce monads
everywhere to be more productive. Separating side effects and making code as
pure as possible was a good practice in old-style OO programming and will be
so in the future, and introducing smart-sounding words for this which most of
the readers don't understand (BTW, I do understand what a monad is) is just a
marketing trick.

~~~
Paradigma11
Nah, Erik has held this opinion for a long time and has expressed it
repeatedly in various videos on channel9:
[http://channel9.msdn.com/Tags/erik+meijer](http://channel9.msdn.com/Tags/erik+meijer)

------
ruricolist
Is this a satire? I mean this seems like a satire of the fact that the
unfortunate framing of functional programming in terms of "purity" and
"impurity" clouds a very abstract question with the intense instinctive
reactions we have to questions of personal hygiene. Notwithstanding the fact
that a programming language is "impure", you cannot catch anything from it. It
cannot defile, pollute, or contaminate you. Nor is there any power that will
reward you in this world or the next for your supererogatory devotion to
"purity" in programming.

~~~
the_af
"Impure" doesn't have negative connotations in this context, and in fact
"pure" and "impure" are common informal terms when discussing languages with
or without control of side-effects.

------
dorfsmay
I'm assuming the middle languages are Clojure, OCaml and Scala (Python to a
certain extent), and pure is? Just Haskell?

------
roryhughes
The site seems to be down now, probably because of the load. Anyone have a
version they could put somewhere?

~~~
bjxrn
[http://webcache.googleusercontent.com/search?q=cache:queue.a...](http://webcache.googleusercontent.com/search?q=cache:queue.acm.org/detail.cfm%3Fref%3Drss%26id%3D2611829)

------
Uncompetative
C functions are really procedures, except when they have no side effects on
global variables. Indeed, a C function can be regarded as pure even if it
contains imperative state changing code provided that all the mutable
variables involved are local temporaries that only exist for the duration of
the function call. Provided that f(x) always returns the same result for the
same value of x it shouldn't matter how it is implemented.
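
Haskell's ST monad captures exactly that idea; a rough sketch (my example,
not from the article): imperative, mutating code on the inside, a pure
function observed from the outside, because runST guarantees the local
reference cannot escape.

    
    
      import Control.Monad.ST (runST)
      import Data.STRef (modifySTRef', newSTRef, readSTRef)
    
      -- Local mutation only; sumTo n always returns the same result for the same n.
      sumTo :: Int -> Int
      sumTo n = runST $ do
        acc <- newSTRef 0
        mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
        readSTRef acc
    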

APL element-wise operators are pure functions in an imperative language that
has destructive assignment. However, it is also well suited to being used to
implement GPGPU parallelism with local temporaries inside imperative
procedures that appear to be pure functions from the outside.

Another form of parallelism arises from pipelined dataflow where tasks can
simultaneously work on different parts of the whole linear computation so that
those waiting for the results (further down the 'conveyor belt') receive more
data from a potentially unlimited input stream 'just in time' whilst they
consume more new data from their supplier or the source.

Taking things in the opposite direction, a purely functional program can be
seen as a frozen moment in time that is subject to extrinsically defined
constraints. Much like a spreadsheet these values can transition between
epochs so they become ordinary mutable variables in a high-frequency event
loop. This is the basis of exploratory "live" programming as an interpreter
'reacts' to dynamic changes in its source. It is also a facility provided by
Mathematica where it permits the user to manipulate a graph of some complex
function through recalculations based on new values of some sliders.

Every technique has its proper place and the latency incurred by Erlang
mailboxes in the pursuit of fault tolerance and convenient hot-swapping of
distributed modules is less of a performance issue when there would be a delay
anyway given that the code is running over a network of computers. Erlang
solves every significant problem of concurrency and it is highly reliable,
unlike C# or Visual BASIC for which he is responsible. He really isn't in a
position to criticise.

Both FP and OOP are extremes. Really, you can get by with Prototypes and have
something type-oriented rather than class-based like Barbara Liskov's CLU.
These can be given the capability of operating like Actors to simply take
advantage of multicore / multiprocessor / multicomputer architectures. It
helps your clarity of purpose if your language encapsulates persistent state
in these Prototypal Actors without silly workarounds like C++'s friend
function to make it go faster.

A lot of your global state can live 'outside' of the program only to be seen
from within as a set of global constants that change every 1/60th of a second
when the runtime is reborn as if run from scratch with a slightly edited
source. This may mean that, apart from the output pipe of your dataflow, you
can only put stuff IN to the BLACK BOX and must trust it to create its own
views as to its current epochal value, just as a videogame renders a new
frame of animation.

All of the proponents of FP and OOP and for that matter Actors seek to prove
that all programs can be written using only their newly hyped paradigm. This
is typical of ivory tower academia unsullied by the necessary pragmatism of
the workplace.

If you admit that there is something good about Actors then you don't have to
learn Monads. If you admit that 'cloning' is a cleaner solution than the often
abused (for the sake of convenience) 'implementation inheritance' and come to
realize you can easily recreate classes / interfaces as abstract prototypes
(i.e. they are just a pattern), you can jettison a whole lot of distracting
OOP terminology and inscrutable UML diagrams as the work of self-promoting
"architecture astronauts". Yet, if you embrace symbolic programming as seen in
Mathematica you get to optimise your algorithms whilst not losing an insight
into how their individual terms are transformed in your super-accessible
declarative 'executable specification', at which point you realise that a
symbol with an unknown value (i.e. a conventional mathematical variable) which
can only become attached to a value once (per epoch) can be viewed as awaiting
dataflow from a pipeline without any extra syntax obscuring your intent - you
just call a variadic function and let the receiver await a finite list of
arguments and then, if there are more in the stream waiting beyond those it
has already taken, it awaits the same number of parameters again.

Really, the whole is greater than the sum of its parts - especially when those
parts (paradigms) are dovetailed nicely.

I've been working on my own multiparadigm programming language for many years
and Erik Meijer just seems too damn bleak.

------
metasim
If you strongly restrict mutability (significantly facilitated in Scala with
case classes), then OO and FP dovetail quite elegantly. The Kiama language
processing library is a fantastic example of getting the best of both (there
are many others):

[http://code.google.com/p/kiama/wiki/Dataflow](http://code.google.com/p/kiama/wiki/Dataflow)

That said, one of Scala's most compelling (business) features--great Java
interop--is also its greatest liability. While I appreciate the great
interop, Java's lack of state mutation controls in the language and JVM
instruction set interfere greatly with the FP/OO impedance matching,
generating misconceptions about the viability of hybrid FP/OO approaches.

From 1994 to 1996 I went from being an 80%/20% C++/Python developer to a
90%/10% Java/C++ developer. My productivity went through the roof, but the
lack of something akin to C++'s `const` references and `const` methods was a
glaring mistake, one that remains to this day a shadow mandating contorted
defensive programming techniques. Adding `final` almost made it worse, as
it's nuanced and overloaded, and ultimately doesn't do what less-experienced
developers think it does. Because of that one serious flaw, was I ready to go
back to C++? Of course not, because I was able to render my ideas into
working software at a faster pace--an extremely important factor--but I had
to move forward being ever aware of the language's weaknesses and how to
effectively ameliorate them.

Over the last year I've gone from developing in Java 90% of the time to Scala
75% of the time. In this transition I have seen a similar jump in my
productivity (almost, but not quite as big as C++ to Java, and after a longer
learning curve). However, I have approached it with the same multi-dimensional
awareness of one's paradigmatic assumptions and how they play with and against
the language facilities.

For me, Scala was my gateway into the world of FP thinking, which radically
changed the way I think about software problems. Scala has also had a profound
impact in developing a deeper appreciation of the power of a more formal type
system. Both of those paradigm-shifting features of the language have allowed
me to be more creative, expressive and concise in my software writing, with
bountiful rewards on multiple axes.

However, I've also stumbled along the way--becoming enamored of features I
didn't fully understand, being too expressive when simplicity would suffice,
being FP for FP sake, etc.--but I sure am glad I had those opportunities to
stumble, and do so in a "fail-fast" manner. The process has been invaluable,
and I'm a _much_ better programmer today for it. I never entered the process
assuming the FP/OO/CT academic visionaries or the
Smalltalk/C++/Scala/Haskell/ML/OCaml language inventors offered me any
"promises". They gave to the world constructs for others to think about and
solve software problems, take it or leave it, to live and die in the ecosystem
of ideas. I know it is _my_ responsibility as a professional programmer to
understand the pros and cons of those constructs and tools, weigh them against
my goals, experience and intelligence, and go into a relationship with these
tools knowing _I'm_ ultimately the one responsible for the final product,
and need to know what I'm doing.

All this is to say, I don't think sweeping generalizations or pointed
nit-picks help in assisting people select the best language and paradigm for
their problem at hand, understanding the strengths _and_ weaknesses, and how
to manage those trade-offs. It isn't, and doesn't have to be, a
"one-size-fits-all world" (as someone else here already referenced the great
Stroustrup quote: "There are only two kinds of languages: the ones people
complain about and the ones nobody uses."). And at the end of the day, the
ultimate responsibility
rests in the hands of the individual professional developer. In my developing
awareness of the FP viewpoint I have most appreciated and benefited concretely
from those pragmatic viewpoints in the middle. It is from the middle that one
can more clearly see _both_ perspectives, and from that develop a third, more
holistic and encompassing viewpoint that harnesses the power of both.

