
Deconstructing Functional Programming [video] - newgame
http://www.infoq.com/presentations/functional-pros-cons?utm_source=infoq&utm_medium=QCon_EarlyAccessVideos&utm_campaign=QConSanFrancisco2013
======
Peaker
Gilad Bracha sounds like he hasn't used a typed language long enough to stop
struggling with basic type errors.

As such, it is from a position of extreme ignorance that he speaks of the
uselessness of type checking and inference.

Claiming Smalltalk has the best closure syntax shows he doesn't understand
call-by-need. Haskell defines easier-to-use control structures than Smalltalk.
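
A minimal sketch of that point, with names of my own choosing: because Haskell
is call-by-need, new control structures are just ordinary functions, since an
unused branch is simply never evaluated.

```haskell
-- Both branches are passed as plain arguments; only the chosen one is
-- ever evaluated, so no special block or closure syntax is needed to
-- define new control structures.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = putStrLn (myIf (3 > 2) "then-branch" (error "never evaluated"))
```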

Claiming patterns don't give exhaustiveness checks, while ignoring the extra
safety they provide, shows Gilad doesn't understand patterns.

Claiming monads are about particular instances having the two monad methods,
when they are about abstracting over the interface, shows Gilad doesn't
understand monads.

Claiming single argument functions have the inflexibility of identical Lego
bricks shows he doesn't understand the richness of function types and
combinators.

In short, Gilad sounds to me very much like a charlatan who'd benefit greatly
from going through LYAH (Learn You a Haskell).

------
mafribe
I found Bracha's talk poor. That guy really has a chip on his shoulder
vis-a-vis functional programming. A lot of things he said were not well
thought out.
Here are some examples.

- He claimed that tail recursion could be seen as the essence of functional
programming. How so?

- He complained that tail recursion has problems with debugging. Well, tail
recursion throws away stack information, so it should not be a surprise. You
don't get better debug information in while loops either. And you can use a
'debug' flag to get the compiler to retain the debug information (at the cost
of slower execution).

- His remarks about Hindley-Milner being bad are bizarre. Exactly what is his
argument?

- His claims about pattern-matching are equally poor. Yes, pattern matching
does some dynamic checks, and in that sense is similar to reflection. But the
types constrain what you can do, removing large classes of error
possibilities. Moreover, typing of patterns can give you compile-time
exhaustiveness checks. Pattern matching has various other advantages, such as
locally scoped names for subcomponents of the thing you are matching against,
and compile-time optimisation of matching strategies.
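
A small sketch of those last two points (the Shape type is invented for
illustration):

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

data Shape = Circle Double | Rect Double Double

-- Each clause binds locally scoped names (r, w, h) for the matched
-- subcomponents, and the incomplete-pattern warning reports at compile
-- time if any constructor of Shape is left unhandled.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```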

- He also repeatedly made fun of Milner's "well-typed programs do not go
wrong", implying that Milner's statement is obviously nonsense. Had he
studied Milner's "A Theory of Type Polymorphism in Programming", where the
statement originated, Bracha would have learned that Milner uses a particular
understanding of "going wrong" which does _not_ mean complete absence of any
errors whatsoever. Milner uses a technical, precise meaning, and in Milner's
sense, well-typed programs do indeed not go wrong.

- He also criticises patterns for not being first-class citizens. Of course
first-class patterns are nice, and some languages have them, but there are
performance implications of having them.

- His critique of monads was focussed on something superficial, how they are
named in Haskell. But the interesting question is: are monads a good
abstraction to provide in a programming language? Most languages provide
special cases: C has the state monad, Java has the state and exception monad
etc. There are good reasons for that.

- And yes, normal programmers could have invented monads. But they didn't.
Maybe there's a message in this failure?

~~~
the_af
Indeed, I found his talk pretty poor as well. A lot of it comes down to not
wanting to learn new terminology, and forgetting that a lot of "common sense"
terminology from, say, Java, is also learned. I don't get more insight from
"FlatMappable" than from "Monad"; in both cases I must learn about them first,
and neither is intuitive without prior knowledge.

It is instructive to read Bracha's blog too, mostly for the comments where
readers refute a lot of what he claims.

His argument against Hindley-Milner seems to be that "he hates it", and that
type errors are sometimes hard to understand. It is true IMO that they are
hard to understand (even though, like everything in programming, you get
better with practice), but what is the alternative? Debugging runtime errors
in production?

He also presents Scala as a successful marriage between OOP and FP, but in
reality this is a controversial issue. Some of the resistance to Scala
(witnessed here in Hacker News, for example) is due to it trying to be a jack
of all trades and master of none. Scala's syntax is arguably _harder to read_
than that of other FP languages.

Some of his "funny" remarks sounded mean-spirited to me. Nobody in his right
mind claims that FP invented map or reduce, for example.

The only point of his talk I somewhat agree with is that language evangelists
are annoying. Oh, and that "return" is poorly named.

~~~
newgame
> His argument against Hindley-Milner seems to be that "he hates it", and that
> type errors are sometimes hard to understand. It is true IMO that they are
> hard to understand (even though, like everything in programming, you get
> better with practice), but what is the alternative? Debugging runtime errors
> while on production?

He pointed out that a more nominal type system is a solution: when you give
meaningful names to your types, the error messages become clearer and are not
full of long, inferred types that reveal potentially confusing or unimportant
implementation details.
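
As a rough illustration of that suggestion (the type names here are invented):
wrapping structurally identical types in named newtypes makes inferred types,
and therefore error messages, speak in domain terms.

```haskell
-- UserId and Age are both Ints structurally, but the nominal wrappers
-- keep them distinct in inferred types and in error messages.
newtype UserId = UserId Int deriving (Eq, Show)
newtype Age    = Age    Int deriving (Eq, Show)

greet :: UserId -> String
greet (UserId n) = "user #" ++ show n

-- greet (Age 30) is rejected at compile time, and the error talks about
-- UserId and Age rather than some long anonymous inferred type.
```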

~~~
mafribe
Most programming languages with Damas-Hindley-Milner do not prevent you from
using explicit type annotations and inventing semantically meaningful type
names.

More importantly, I think the reason why error messages are sparse and not
meaningful in languages with Damas-Hindley-Milner is that nobody bothered to
improve the situation. And the reason why nobody bothers is that it's simply
not a problem in practice. Any even moderately experienced programmer can
easily detect and fix typing errors as they are given in Haskell, OCaml, F#,
Scala etc.

------
taeric
First, thanks for all involved in getting this posted!

I'm somewhat curious why the industry has such an aversion to simulating
things in our minds, especially since this seems to be one of the arguments
employed against monads in this talk: that it basically couches something
known under an odd name that is not known. Isn't this just saying that it is
bad _because_ it confuses the simulator that is the reader?

That said, the live coding aspect is something that I am just now learning
from lisp with emacs. Being able to evaluate a function inline is rather nice.
It is somewhat sad, as I still wish I could get a better vote in for literate
programming. (Betraying my appeal to the human factor more so than the
mechanical one.)

~~~
catnaroek
Monads have nothing to do with simulating anything. They are just a commonly
recurring pattern of computational contexts (more precisely, functors) that
also provide two basic operations:

1. entering the context (pure :: a -> m a)

2. collapsing nested contexts into one (join :: m (m a) -> m a)

Together with some coherence laws that ensure these operations do exactly
that: no more and no less than entering the context and collapsing nested
instances of it.
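
A minimal illustration in Haskell, using Maybe as the context:

```haskell
import Control.Monad (join)

-- pure enters the context; join collapses a nested context into one.
entered :: Maybe Int
entered = pure 42                  -- Just 42

collapsed :: Maybe Int
collapsed = join (Just (Just 42))  -- Just 42

swallowed :: Maybe Int
swallowed = join (Just Nothing)    -- Nothing
```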

~~~
taeric
Did you watch the video? I'm not referring to monads simulating something. I'm
referring to the observation that when reading code you are simulating its
execution. My understanding of the video's complaint against monads is that
the signature of monads is actually quite simple and well understood in
different contexts _by different names._

The video goes on to display an environment where you do not have to simulate
the code in your head.

This progression seems somewhat interesting to me. As does the desire to not
have to simulate code in your head.

~~~
asdasf
>This progression seems somewhat interesting to me. As does the desire to not
have to simulate code in your head.

But none of that has anything to do with monads.

~~~
taeric
Ok... I think I'm getting trolled at this point.

I am taking issue with the video's critique of monads. Wherein it is claimed
that monads manage to take a common and understandable behavior and make it
laughably impossible to explain to people by giving it a weird name.
Essentially, the claimed problem with monads is that they are difficult to
"simulate" under the name "monad" for many individuals.

This part, I actually feel makes sense and resonates well. Simply follow the
progression in the video and see how "FlatMappable" becomes less and less
intuitive as it is given worse and worse names.

The part that is interesting to me, is how this then progresses into a point
on how programmers should not have to simulate the code in their head. Now, I
realize there is a big difference between "should not have to" and "is
difficult to intuitively do so". Still seems an odd progression, though.

~~~
asdasf
>Ok... I think I'm getting trolled at this point.

If you don't want to discuss something, then don't post. You are not making
any sense, and calling people trolls does not help at all.

~~~
taeric
I should have put a smiley on that, then. While feeling trolled, I highly
suspect this is just a rather amusing case of poor communication.

At no point was I trying to describe or discuss monads. That is something a
response to me thought I was trying to do. When referring to "simulating" a
system, I was referring to where the video refers to the process of reading
"dead code" in a text editor. There is a large rant on monads _in the video_
where the argument appears to be that the problem is strictly with the name.
The reason given is that it takes something understood and hides it behind
non-obvious names. I extrapolated this to mean that it makes the program and
the idea "hard to simulate" for the coder reading the code.

------
bunderbunder
Great talk. Particularly the bit on the value of naming things - I rather wish
he'd flogged that a bit harder.

As time goes on I'm finding it more and more frustrating to try and maintain
code that relies entirely on anonymous and structural constructs without any
nominal component. Yes, I do feel super-powerful when I can bang out a bunch
of code really quickly by just stacking a bunch of more-or-less purely
mathematical constructs on top of each other... but as the story of the Mars
Climate Orbiter should teach us rather poignantly, when you're trying to
engineer larger, more complex systems it turns out that meta-information is
actually really useful stuff.

~~~
the_af
I'd say static typing and purity as advocated by FP are some of the tools one
wants when trying to engineer larger, more complex systems.

I wasn't familiar with the Mars Climate Orbiter case, but a cursory reading
suggests one of the causes was a type error (confusing newtons with pound-
force).

~~~
bunderbunder
As advocated widely in the FP blogosphere... not necessarily as commonly
practiced in FP programming culture, or supported by many FP languages.

For example, I strongly prefer F# to its cousin OCaml largely because F# uses
nominal typing and OCaml uses structural typing. I've also got some misgivings
about being overly reliant on type inference. Both structural typing and
advanced type inference are admittedly incredibly convenient. What worries me
is that they also seem to be incredibly convenient as ways to obfuscate the
programmer's intent w/r/t types and their semantics.

~~~
the_af
I'd say not so much as advocated by the blogosphere (which can be annoying, as
fans of almost anything often are), but by the people actually designing and
using FP languages.

In any case, there is certainly valid criticism of FP, but Bracha's just isn't
it. My impression is that the guy -- as clever as he may be in other areas --
barely understands FP, and makes disparaging remarks about things he isn't
familiar with. Read his blog; every assertion he makes is shown to be
incorrect or misleading by people who do understand FP, like Tony Morris or
(very politely) Philip Wadler himself.

------
vitd
I'm just learning functional programming with Haskell, and it was great to
hear him explain that learning Haskell is really hard because of the
terminology. I feel a little (just a little) less stupid.

That said, he's a terrible presenter. His smarmy style was really off-putting,
and his motives a little sketchy. He spends a good portion of the talk
slamming just about every language in existence except for the two he works on
(Dart and Newspeak). It seemed very disingenuous and I don't need another
ranting nerd spouting venom about why something's not very good in that
holier-than-thou tone. I would have rather had a straightforward talk showing
the strengths and weaknesses than the bitter tone this had.

------
agentultra
This is a brilliant talk. It's getting far too easy to annoy the FP cult(ure).

As an aside, Scala is not unique in marrying an FP approach with an OO system.
CL has had CLOS, IMO one of the better implementations of "OO" outside of
Smalltalk, for much longer than Scala.

Definitely watch this!

~~~
catnaroek
Scala and Common Lisp are not particularly functional languages. Functional
programming in Scala is doable, although it takes a nontrivial amount of
effort (see: scalaz), and it is outright impractical in Common Lisp.

As an aside, CLOS multimethods resemble Haskell's multiparameter type classes
(except CLOS is dumber: you cannot provide any guarantee that the same types
will provide two or more common operations) more than they resemble anything
else also called "object-oriented".

~~~
agentultra
It is a common mistake I've heard from many CL newbies who believe CL is an
"FP" language.

The best descriptor I can find to date (of CL) is "programmable programming
language," which allows it to encompass almost every desired feature one may
need, including many that fall under the FP umbrella, which may be where the
confusion stems from.

However, one of the opening points of the talk was that "FP" is not a
rigorously defined term and is subject to interpretation, which leads to
bikeshedding over language features and a lot of hype.

I believe it also leads to a lot of misplaced faith in the purity and
completeness of mathematics (it's almost as if the popular notion of FP is
being reborn as a modern _Principia Mathematica_ ).

CL obviously cannot be called an, "FP," language since its inception seems to
predate the popular notion of the term. Scala may suffer in the same way due
to its reliance on the JVM and the expression semantics it has carried over
from Java. However many of the features one tends to associate with modern FP
languages (though not all) are present in both languages.

As for your aside, how so? Perhaps a discussion we can have over email if
you're interested. You sound smart. However I don't understand your statement
and would like to know more.

~~~
catnaroek
> As for your aside, how so?

CLOS multimethods do not "belong" to an object or even to a class declaration.
Particular implementations of generic methods are declared globally, just like
Haskell type class instances. Although, as Peaker noted, type classes can
dispatch on any part of the type signature. It is impossible to make a CLOS
multimethod with signature:

    
    
        (SomeClass a b) => String -> (a, b)
    

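For contrast, a sketch of how a multiparameter type class can express that
signature, dispatching on the result types alone (the class and method names
here are illustrative):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

-- Instance selection is driven by the result types a and b, which never
-- appear among the arguments; argument-based multimethods cannot
-- dispatch this way.
class SomeClass a b where
  make :: String -> (a, b)

instance SomeClass Int Bool where
  make s = (length s, not (null s))
```

For example, `make "hi" :: (Int, Bool)` yields `(2, True)`.
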
> Perhaps a discussion we can have over email if you're interested.

Sorry, I never check email. But I am almost always on Freenode. My nick is
pyon.

> You sound smart.

Not really. The regulars in #haskell - now _they_ are frigging smart.

~~~
agentultra
I think the comparison to type classes is specious and ends there. They look
similar but they tackle very different problems. You've actually explained why
rather well.

> Not really. The regulars in #haskell - now they are frigging smart.

Don't sell yourself short.

~~~
Peaker
It seems to me type classes have a superset of the features of CL
multimethods. Why not compare them?

------
jstratr
Interesting talk! Bracha has some good arguments against features that I
generally enjoy in programming languages, like Damas–Hindley–Milner type
inference and pattern matching.

Regarding Haskell: The points he makes against obtuse names based in category
theory are valid, but then again, Haskell has its roots in research
programming languages. Math-based terminology makes more sense for an academic
audience.

~~~
asdasf
>The points he makes against obtuse names based in category theory are valid

No, they aren't. When you have a class of "things" that doesn't have a name
most people are familiar with, you are left with two options. Either choose a
name people are familiar with, but which is wrong and misleading. Or choose
the correct name and people have to learn a name. Are we seriously so pathetic
as an industry that learning 3 new technical terms is a problem?

~~~
thinkpad20
To an extent, I think it's a valid criticism. There are two main problems with
the mathy names that many concepts in Haskell have.

The first is that they hide the meaning. For example, "Monoid" is a really
scary term, and explaining it further as "something with an identity and an
associative operation" really doesn't help much either. Calling it instead
"Addable" or "Joinable", and explaining it instead as "things with a default
'zero' version, and which have a way to add two of them together", while
perhaps not a perfect definition, would be much more intuitive for the
majority of people.

That brings me to the second problem I see, which is that the esoteric
terminology in Haskell creates a barrier between those who understand it and
those who don't, and contributes to a sense of Haskell culture being
exclusionary and cult-like, which discourages cross-talk.

Criticizing Hindley-Milner, on the other hand, I'm confused by. It's such a
useful and powerful system. I suppose it can make compiler errors more obscure
at times, but you get used to reading them and they aren't so bad. Hindley-
Milner isn't just a type inference system; it's a typing system which allows
for the most general typing to always be used, so that the functions one
writes are as general as possible, encouraging modularity and code reuse.
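
A quick example of that generality (the function is my own):

```haskell
-- With no annotations at all, Damas-Hindley-Milner infers the most
-- general type, equivalent to (b -> c) -> (a -> b) -> a -> c, so
-- compose is usable at any three types without being told so.
compose f g x = f (g x)

example :: Int
example = compose (+ 1) (* 2) 3   -- 7
```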

~~~
Peaker
"Addable" will not actually be more informative than "Monoid", to someone who
doesn't know "Monoid".

"Monoid" will be very informative to anyone who learned it from mathematics.

A "Monoid" is a type which supports an associative operation (`m -> m -> m`)
and a neutral element (`m`) which forms its identity element.

"Addable" suggests it is an "addition". Does this mean it is commutative? For
the sake of preciseness, I'd hope so! (Monoids aren't commutative). Does this
mean it has a negation? No. So it is not "addition", why use a misleading name
for the sake of some false sense of "intuition"?

The actual explanation of what a Monoid is _precisely_ is so short and simple,
it makes no sense to try to appeal to inaccurate intuitions.
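
The whole contract fits in a few lines of code (the Minutes type is an
invented example):

```haskell
-- A Monoid is exactly: an associative operation plus its identity.
newtype Minutes = Minutes Int deriving (Eq, Show)

instance Semigroup Minutes where
  Minutes a <> Minutes b = Minutes (a + b)   -- associative operation

instance Monoid Minutes where
  mempty = Minutes 0                         -- identity element
```

The laws then say `mempty <> m == m`, `m <> mempty == m`, and
`(a <> b) <> c == a <> (b <> c)`, and that's all there is to it.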

~~~
thinkpad20
That's a completely valid point of view. You're not wrong at all. I'm
guessing, though, that you had learned it before from mathematics. My point is
one of pragmatic, not theoretical, distinction. To those without a
mathematical background (most people are not going to learn monoid unless
they've studied abstract algebra), or who are less interested in mathematics
in general, an obscure term like that is discouraging. I know that the Haskell
community is heavily mathematical, and have little interest in "dumbing down"
the language for the sake of those who are put off by theory, but it is a real
tradeoff and one of the things that is likely to impede the introduction of
Haskell into the mainstream.

~~~
Peaker
I've learned Monoid in Haskell, not maths. It's just so simple and easy that
there's really no dumbing down necessary.

Monad is simple and hard, but Monoid is simple and easy.

~~~
thinkpad20
With respect to monoid, you're right. It's really quite simple when you get
down to it. I don't have any arguments there. Indeed, the fact that monoids
are really so simple is kind of my point. In almost any other language, were
such a thing to exist, monoids would not be called monoids but by some
descriptive term which conveyed an intuitive sense of their meaning and use;
it would be the purview of the mathematically inclined to write articles
explaining how "actually, what we call the Joinable type class is known in
abstract algebra as a Monoid, and its use extends beyond just joining things;
for example..."

My point isn't really specifically about monoids; they're just an example of
what often goes on in Haskell, which is that people put theory before
practicality and mathematical (and hence often esoteric) definitions before
practical, real-world definitions. Like I've said a few times, this isn't
incorrect at all. Nor is it surprising given Haskell's origins, nor is it
without purpose since it deepens your understanding of what's going on in the
language. It's just a simple fact that the mathematical jargon is a turn-off
to newcomers and those who don't feel they want to be forced to learn math
while they're programming, or might think they're incapable of doing so.

As it turns out, I'm not one of those people; I love the mathematical side of
Haskell and I love that I've learned what a Monoid is and developed an
interest in type theory, category theory and all kinds of other things. But
not everyone is like that, and that's the point I'm making.

~~~
jejones3141
Well, yeah, but... the term "monoid" already exists, and has a definite
meaning. A different name might give people an intuition for it--but it will
be a wrong one that they'll have to unlearn later, like the infamous burrito
(not that you or anyone has suggested that monads be renamed burritos, I am
happy to say!).

------
namelezz
In his talk, when discussing currying, he mentioned that relying on the type
system is not a good thing. Does anyone know the reasons behind his view?

~~~
latk
Currying can obfuscate what is applied to what. Consider in any ML language "a
b c d" – we can see that "a" is a function, but we have no idea of its arity.
Uncurried, it could be: "a(b, c, d)", "a(b, c)(d)", "a(b)(c, d)", "a(b)(c)(d)"
(oh, that's the curried form again). Especially when function definitions are
implied through pattern matching, it is hard to understand the contract of a
function at a glance.

Since a reader of that code cannot easily verify that the number and types of
the arguments are correct, one has to trust the type checker that everything
will work out.

However, this is more of a criticism of ML syntax than of currying – all
things are good in moderation.

~~~
thinkpad20
It's actually simpler in some ways, because we know that "a" must have arity
1. What we know is that "a" should be a function which takes a "b", that "a b"
should be a function which takes a "c", and "a b c" should take a "d".

As a practical consideration, this rarely if ever becomes an issue, and if it
does, the type checker will tell you straight away.

Type annotations can make clear what isn't intuitively clear with a function's
signature, and since the correctness of the type checker is rigorously proven,
I don't see anything particularly wrong with "relying" on the type checker.

~~~
latk
The argument that every function has arity 1 is technically true (this is the
whole point of currying) but is not useful when definitions like "let a b c =
..." suggest other semantics. It's possible you've had a difference experience
with this, but I tend to get confused when the semantic argument list isn't
delimited.

There is nothing wrong with relying on the type checker, except that it tends
to add cognitive overhead.

~~~
thinkpad20
In my experience, the more you use currying, the more intuitive it becomes
(surprise, surprise). In any case, you very quickly develop an understanding
that `let foo bar baz = qux` is just syntactic sugar for `let foo = \bar ->
\baz -> qux`. Of course, if you want to simulate higher-arity functions, you
could just use tuples. It's perfectly acceptable to write `let foo(bar, baz) =
qux`.
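
A side-by-side sketch of the two styles (the bodies are invented):

```haskell
-- Curried: sugar for nested lambdas, giving partial application for free.
foo :: Int -> Int -> Int
foo bar baz = bar + baz

foo' :: Int -> Int -> Int
foo' = \bar -> \baz -> bar + baz

-- Tupled: a single fixed "argument list", closer to conventional arity.
fooT :: (Int, Int) -> Int
fooT (bar, baz) = bar + baz
```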

------
delinka
Can we get an [audio] indicator?

------
kaeluka
YES, I've been waiting for this! Thanks so much! :)

------
DanWaterworth
TL;DR FP hater talks about FP.

------
RyanZAG
I'm going to save these HN comments for 5 years time when the hype on
functional programming has died down a bit. Will be very humorous to read this
again then.

~~~
jonsen
No chance. The hype will recurse forever. Even on stackoverflow.

~~~
platz
Deploy the canaries!

