
Functional Programming For The Rest of Us - netvarun
http://www.defmacro.org/ramblings/fp.html
======
rauljara
Kudos to the piece for being well written, readable, and clear. But I'm
worried about a "for the rest of us" that starts off first with Plato and then
lambda calculus.

The problem with "the rest of us" is that these concepts aren't inherently
neat. Disclaimer: I'm not one of the "rest of us". I'm learning Haskell just
because I enjoy having to think in a new way.

A lot of FP explanations start off with immutability. After Plato and lambda
calc, that's where this one goes, too. I think that's because it's a relatively
simple concept, and kind of mind-blowing that you could do something useful
without incrementing i in a for loop. And then the follow-up example is
normally how you can replicate a for loop in functional programming. Which is
again neat.

But if you aren't into neat things for their own sake, you've got to ask why.
Why replicate a for loop when you could just use one? (Answer: you don't
really replicate for loops in FP.) They aren't asking why in the sense that
they're curious. They're asking why in the sense of "Why would anyone do that?
That's stupid."

If you aren't into having your mind blown for no apparent reason (some aren't)
the only really compelling thing about the first several pages of the piece is
the "Benefits of FP". That's the motivation to learn these crazy new concepts.
It should really come first. I think it should come first in any discussion of
FP for people who aren't familiar with it.

Because for most people to learn, they need a motivation. Your programs will
be faster because you can do concurrency super easy is pretty good motivation.
Sadly, "because it's neat," is generally not.

~~~
tomp
This article is another one of those that confuses functional programming with
_pure_ programming (which is, I believe, by definition also functional).

You can do a lot of functional programming in a language where values are
mutable. I use it all the time in python: the map and filter functions, list
comprehensions, zip, ... For me, functional programming means having first-
class functions that can be passed to/from other functions. A standard library
of common functional combinators helps, but you can always write your own.
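
A quick sketch of what I mean in Python (`compose` here is a hand-rolled
combinator, not a builtin):

```python
# First-class functions: functions are values you can pass around and build on.
def compose(f, g):
    """A hand-rolled combinator: compose(f, g)(x) == f(g(x))."""
    return lambda x: f(g(x))

inc = lambda n: n + 1
double = lambda n: n * 2

# map and filter take functions as arguments; no mutation anywhere
print(list(map(compose(double, inc), [1, 2, 3])))    # [4, 6, 8]
print(list(filter(lambda n: n % 2 == 0, range(6))))  # [0, 2, 4]
```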

Funny, but some of the most often mentioned functional languages, such as
OCaml and Scala, allow liberal use of mutable variables. They just make it a
bit harder/more explicit, so the programmer has to give more consideration
to it.

~~~
zby
OK - I checked wikipedia:

In computer science, functional programming is a programming paradigm that
treats computation as the evaluation of mathematical functions and avoids
state and mutable data.

Sounds like exactly what I was thinking - map is not a mathematical function -
because it can interact with the global state (you can apply map twice to the
same data and get different results - for example if in the function you are
mapping you use 'time') - and thus using it is not 'functional programming'.
If your first-class functions were mathematical functions - then we could
agree - but map is not one.
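
To sketch the point in Python (using a mutable counter as a deterministic
stand-in for 'time'):

```python
# map applied twice to the same data, with an impure function:
counter = {"calls": 0}          # stand-in for global state such as time

def impure(x):
    counter["calls"] += 1
    return x + counter["calls"]

data = [10, 10, 10]
print(list(map(impure, data)))  # [11, 12, 13]
print(list(map(impure, data)))  # [14, 15, 16] -- same input, new output

# with a pure function, the result is repeatable:
pure = lambda x: x + 1
print(list(map(pure, data)) == list(map(pure, data)))  # True
```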

~~~
polymatter
"map is not a mathematical function - because it can interact with the global
state (you can apply map twice to the same data and get different results"

I don't think I quite understand you. I think when you say "mathematical
function" you mean "pure function". And when you say "global state" I assume
you mean "impure operations".

But (from my understanding) map is purely functional and doesn't do anything
impure like IO. It's purely a mathematical mapping between a collection of
input values and a collection of output values, based on applying a function to
each input value in turn. You seem to be suggesting that evaluating "map f x"
twice (that is, applying map twice to the same data) need not give the same
value, and therefore it's not "mathematical".

~~~
zby
OK - I can agree with the interpretation that the 'impurity' is not inside map
- but rather in the argument it takes. You can reconcile it with the
mathematical definition of function by saying that some 'programming
functions' take the implicit 'state' argument and that map is not one of them.

It still does not resolve the definition problem - is functional programming
programming with pure mathematical functions - or is it using first-class
functions (not in mathematical sense)? Which definition is more useful? Which
is more widely used?

------
eof
Aside: I am maybe 50-60 hours into my first Haskell project, using Yesod. I
have read about half of "Real World Haskell" and about half of "Learn You a
Haskell".

Other than this, all of my experience is in imperative languages
(C++, Python, PHP).

One thing that comes up is "if your program compiles, it's almost certain to be
correct"; and it has been true again and again for me. It's an odd, eerie
feeling, and honestly I haven't quite grokked how it is happening that way.

I don't completely understand what I am doing: a combination of not thinking
like an FPer, not knowing the idioms of Yesod, and not being entirely familiar
with the syntax of Haskell. So I am sometimes stuck on "how" to get the type
that I need.

So I will sometimes try a couple things almost randomly; then it will compile
and it basically is always doing what I want it to do.

~~~
chousuke
I think the "if it compiles, it works" effect has to do with Haskell's type
system being much more expressive than that of most static languages, and with
the fact that most of your code will be declarative and pure.

Purity helps because your program will necessarily be composed of small, self-
contained modules (usually functions) that have no implicit dependencies.

The type system helps because you can encode a lot of the code's intent in
type signatures, which prevents code that does not match that intent from
compiling.

Monads are a good example. Whenever operations have some special execution
requirement (e.g. ordering for IO), you may be able to express the
requirement as a monad, and then any operation outside that monad can't
accidentally mix with it.

In addition, because monads allow you to enforce how operations are composed,
there is no way for the programmer to forget to adapt the output from
operation 1 so that it works as an input for operation 2. That's abstracted by
"bind".
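
To make "bind" a bit more concrete, here is a minimal Maybe-style sketch in
Python (my own illustration; Haskell's real Maybe monad is analogous, but
enforced by the type system):

```python
# A minimal Maybe-style "bind": None stands for a failed computation.
def bind(value, f):
    """Feed value into f, unless an earlier step already failed."""
    return None if value is None else f(value)

def half(n):
    return n // 2 if n % 2 == 0 else None   # fails on odd numbers

# bind adapts each output into the next input, and short-circuits on failure
print(bind(bind(8, half), half))   # 2
print(bind(bind(6, half), half))   # None: 6 -> 3, then half(3) fails
```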

I'm a Haskell newbie myself, but there are a lot of mind-bending tidbits
around the Internet on how to encode _intent_ in various ways using the type
system... It's not the most powerful system imaginable for that purpose, but
as far as practical languages go, it's probably at the top.

~~~
tmhedberg
Agreed; after getting used to working with Haskell's type system, I've found
it kind of painful to use languages with weaker, less expressive type systems
(which, unfortunately, is basically every practical language other than
Haskell). The ability to transform all manner of bugs into compile-time errors
via the typechecker is indispensable. Once you start digging into GHC
extensions like GADTs and type families, it gets even better. I never feel
nearly as confident about the correctness of code I write in, say, Python,
even when using a full test suite.

~~~
beagle3
> which unfortunately, is basically every practical language other than
> Haskell

Is this also true of OCaml? I have experience in neither, but my understanding
from reading up on this is that OCaml and Haskell are roughly equivalent, and
if anything, OCaml is even more expressive.

Anyone with real world experience who can comment on this?

~~~
tmhedberg
I can't answer you with any real authority, since I haven't used OCaml. But
from what I understand, it lacks type classes, which are absolutely central to
the richness of Haskell's type system. Whether or not it makes up for this in
other ways, I can't say.

OCaml is also strict by default, with optional laziness, whereas Haskell is
the opposite. This could be seen as either an advantage or a disadvantage, but
either way, it is a very fundamental difference.

If you are considering learning one or the other, I'd advocate for Haskell
purely on the basis of it being the closest to mainstream of all the static
functional languages. There is a huge variety of libraries available on
Hackage, and the community is very active, welcoming, and friendly. Haskell
(GHC in particular) is also the laboratory in which the majority of
interesting FP research is currently being done, if you care about that at
all.

Or just learn both languages, so you can compare the benefits and
disadvantages for yourself. :)

~~~
njs12345
OCaml has a powerful module system which can be used in a way quite similar to
typeclasses (<https://ocaml.janestreet.com/?q=node/37>).

The main difference is the lack of purity, which means that the side effects
of a function aren't part of a function's type. It also doesn't have the wide
variety of extensions which you can use to do really crazy stuff (like
verifiably correct red-black trees:
<https://github.com/yairchu/red-black-tree/blob/master/RedBlackTree.hs>).

~~~
gtani
That modules vs. type classes debate has been going on for a while:

<http://news.ycombinator.com/item?id=2622166>

<http://www.cs.washington.edu/education/courses/cse505/98au/slides/slides.10-26.ps>

<http://news.ycombinator.com/item?id=3481700>

<http://news.ycombinator.com/item?id=3641355>

------
adamc
I was disappointed because I was hoping it was a new piece -- in general, I
have found coffeemug's writing well worth my time.

------
zvrba
Functional programming for the rest of us: please structure your code, to the
extent that it is possible and practical, around functions whose return values
depend only on their arguments.
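
A tiny sketch of that advice in Python (the names are just illustrative):

```python
# Before: the result depends on hidden module-level state.
tax_rate = 0.25

def price_with_tax(price):
    return price * (1 + tax_rate)      # answer changes if tax_rate changes

# After: the return value depends only on the arguments.
def price_with_tax_pure(price, rate):
    return price * (1 + rate)

print(price_with_tax_pure(100, 0.25))  # 125.0 -- always, for these inputs
```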

------
fahad-sheikh
I found this article very useful, and helpful in de-mystifying the world of
functional programming. Are there other articles that cover functional
programming theory in plain words like this?

~~~
gtani
Glad you asked! I did this in about 7 minutes, but I could spend a week
talking about how Lispers and the ML, Clojure, Scala, and Haskell communities
approach this.
<http://www.reddit.com/r/haskell/comments/qxmh3/what_are_some_good_and_short_max_3_pages_articles/>

<http://stackoverflow.com/questions/tagged/functional-programming?sort=votes&pagesize=15>

esp. this one (Harrop, Ramsey, and Brian are FP heavyweights):

<http://stackoverflow.com/questions/485418/distinctive-traits-of-the-functional-languages>

------
agentgt
Is it me, or does "A Walk in the Park" sound like Sheldon Cooper from The Big
Bang Theory?

"Fire up the time machine. Our walk in the park took place more than two
thousand years ago, on a beautiful sunny day of a long forgotten spring in 380
B.C. Outside the city walls of Athens, under the pleasant shade of olive trees
Plato was walking towards the Academy with a beautiful slave boy. The weather
was lovely, the dinner was filling, and the conversation turned to
philosophy."

Big Bang: <http://www.youtube.com/watch?v=lDxv9hSKdY4>

~~~
Groxx
Other way around, I believe. The date on this article is 2006, BBT started
late 2007.

------
bazzargh
Nice article, but it's a bit irksome that the MessageHandler example drops the
conditional and doesn't replace it - the code at the end of that section isn't
equivalent to what he started with.

------
sreyemhtes
This is a much clearer explanation of the benefits than most of the other
attempts I have seen.

------
evincarofautumn
A great read. Just a couple of things that jumped out at me:

“In functional languages automatic analysis of functions and finding good
candidates for concurrent execution is as trivial as automatic inlining!”

It isn’t exactly trivial. If you can evaluate _anything_ concurrently, then
you still have to figure out what’s worth the overhead of spinning up a thread
to run concurrently. And many operations are inherently sequential even if
they aren’t imperative or side-effectful—parsing, for example. Still, the gist
is that concurrency is easiest in a pure language and a functional style, and
I have no bones to pick with that.

“…infinite data structures, something that's much more complicated in a strict
language.”

Infinite structures are still pretty easy to construct in strict languages.
For example, you can use ranges of the sort in D’s std.range library, as
described in Alexandrescu’s “Iterators Must Go!” talk.
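
Even in Python, for instance, generators give you this almost for free
(itertools is standard library):

```python
# An "infinite" structure in a strict language: elements exist on demand.
from itertools import count, islice

evens = (n * 2 for n in count(0))   # conceptually infinite lazy sequence
print(list(islice(evens, 5)))       # [0, 2, 4, 6, 8]
```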

~~~
fusiongyro
Of course benchmarking is essential, but the Control.Parallel library can
decouple expressions to run in parallel (called sparks) from threads, and
manages the whole shebang for you. You more-or-less say x `par` y and the RTS
will try to evaluate x on another thread while it computes y. The RTS statistics will
tell you how many sparks got "converted" to run on another CPU and whatnot, so
you can compare and see if you actually got an improvement, but it is a bit
easier than spinning up dedicated threads, assuming you're parallelizing pure
computations. For true concurrency, you still need to deal with threads.

------
squires
Interesting, well written article.

However, I was surprised to learn of the existence of a logician named
Haskell Curry. The name struck me as the punchline to a joke told and
appreciated only by the uber-nerds in the halls of elite university
mathematics departments.

------
jiggy2011
Good article; it cleared a few things up for me.

One thing I am still a bit hazy on though. He mentions that continuations
could be used in the context of a web app to maintain state between HTTP
requests.

I'm not quite clear how this would actually work. Let's say we have a counter
we want to increment on each request, with something like this (in
pseudocode):

    doHTTPRequest(requestVars)
    {
        renderPage(requestVars)
        pause()
        return requestVars + 1
    }

    doHTTPRequest(doHTTPRequest)

Is that approximately how this works?

In which case, how would it work for doing more than 2 requests in a row?

~~~
Periodic
Node.js is built around a lot of continuations, if I recall. It's very common
in event driven computing.

I believe the basic way to do it is that the function that processes the
original request returns both the output and the continuation; then the next
time the user connects, it uses the continuation instead of the original
function.

        function doHTTPRequest(requestVars, renderPage)
        {
            (content, renderNextPage) = renderPage(requestVars);

            return (content, function (newRequestVars) {
                renderNextPage(newRequestVars);
            });
        }

It's a little awkward, and maybe not the best example of CPS. But basically
what happens is the request gets handled and returns the output and something
to do next. Then you return your output and some additional computation that
will get executed later, in this case a function that renders a different
page. This is computation that hasn't been evaluated, but which can be passed
around to evaluate later. Note that this whole function could be replaced by
renderPage, but I wanted to make things more explicit. A good language would
let you write

        doHTTPRequest = renderPage;

Also, your config would probably specify that it should call renderPage, so
that function would get passed around or held by your controller/dispatcher
until the actual doHTTPRequest is called.

~~~
lemming
What you're describing is CPS, but that's not how continuations would
typically be used in a webapp; it's really a compiler technique (I've always
thought it was crazy that node.js forces you to do by hand something that
should be done by the compiler). A language that supports full continuations
as first-class objects, for example some Scheme implementations, allows you to
actually suspend the current execution (including the entire call stack),
store it somewhere, and then start executing it again at a later point in
time. So your web container can start running a thread of execution for a
particular request, suspend it after it writes its response, then resume it
again when it receives the next request corresponding to that web session. It
allows you to write code like:

        function handleSession() {
          renderFormPage()   // writes response with HTML for form
          suspend()          // wait for next request in the session
          processForm()      // process the form data - notice this is the same function
        }

It's a very powerful technique and can make web programming very intuitive
since you can have a single linear flow for a complex web interaction.
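
You can get a feel for this even without first-class continuations; here is a
rough Python sketch where a generator's `yield` plays the role of `suspend()`
(a limited imitation, since it suspends only one frame rather than the whole
call stack):

```python
# One linear session flow; each `yield` suspends until the next request.
def handle_session():
    form_data = yield "<form>...</form>"   # render the form, then suspend
    yield "processed: " + form_data        # resumed with the next request

session = handle_session()
print(next(session))                # first request: renders the form
print(session.send("name=Plato"))   # next request: resumes after the yield
```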

Check out Chris Double's tutorial on this:

<http://double.co.nz/scheme/modal-web-server.html>

Or SISC-Web for a simple but complete implementation:

<http://siscweb.sourceforge.net/overview.html>

------
yathern
Anyone who liked this read might also enjoy "Gödel, Escher, Bach: An
Eternal Golden Braid". It's such an interesting book, and it teaches through
very simple means.

------
brooksbp
> Programmers are procrastinators. Get in, get some coffee, check the mailbox,
> read the RSS feeds, read the news, check out latest articles on techie
> websites, browse through political discussions on the designated sections of
> the programming forums. Rinse and repeat to make sure nothing is missed. Go
> to lunch. Come back, stare at the IDE for a few minutes. Check the mailbox.
> Get some coffee. Before you know it, the day is over.

You're fired.

~~~
veb
So you'd rather have some dinosaur sitting at a computer? Someone who doesn't
care about programming, doesn't care about the latest or newest ways to do
things? Doesn't care about improving themselves?

I must admit, I'm like this _sometimes_. But that's only because I'm using PHP
:) If you have a problem with your developers doing this, you're doing
something wrong.

------
fabjan
Just a nitpick: Erlang is not an academic language; it was designed in
industry, for industrial use.

------
michaelochurch
After racking my brain for years trying to figure out why shitty "design
patterns" have been winning while functional programming (despite its
superiority to typical object-obfuscated architecture) remains on the back-
burner, I've come to a few conclusions.

1\. The name "functional programming" doesn't sell us; it just sounds
impractical. What we do isn't really "functional programming" in a purist
sense. We need side effects all the time. We just have an understanding of the
need to _manage_ state (not eliminate it) properly. We need a better term for
the hybrid functional-preferred/imperative-when-appropriate/very-occasionally-
OO style that good engineers use, but all I can come up with (for how we
"functional" programmers do things as opposed to the enterprise hoi polloi) is
"non-shitty programming", and that sounds biased.

2\. Usually, when we showcase FP, it looks like we're hawking complexity. We
get into tail recursion and abstract data types right away. In fact, proper
use of FP _reduces_ complexity a great deal. For example, Scala is actually
less complex than Java _by far_. On the other hand, it's hard to present Scala
to a Java programmer without it appearing that one is trying to shove more
complexity into his workflow, because it's only after a few months of learning
how FP works (and that you don't need classes to program, you just need
functions, because that's what algorithms actually are) that people realize
how much simpler code becomes when done properly, and that functional
programming (unlike <managerially-empowered flavor-of-the-month X> in Java)
does provide life-long, substantial boosts in productivity and product
quality. Until people get to that point, they'll feel like we're just shoving
a tough learning curve on them.

~~~
lemming
_Scala is actually less complex than Java by far._

I'm sorry, but I can't let this one go by. I can accept the argument that FP
may be less complex once you get used to it (not sure I agree, but let's go
with it), and immutability by default is great, but just because Scala and/or
its surrounding community allow or encourage this style doesn't mean that
Scala-the-language is less complex than Java-the-language. That's a statement
bordering on the absurd - I would say that Scala is demonstrably, massively
more complex than Java. Many of the things that make Scala great _are_
extremely complicated, both theoretically and practically. That doesn't stop
it being great, but it does make it complex.

~~~
michaelochurch
If I recall correctly, the Scala spec is about one-third the size of the Java
spec.

~~~
CarlSmotricz
If that's true, then Scala was simply specified less precisely than Java. As
lemming said, Scala is a massive superset of Java's functionality.

Off the top of my head, I can rattle off a slew of functionality in Scala
that's not in Java:

* Regexp literals

* partial/curried functions

* for comprehension

* argument defaults

* case classes (and switches thereon)

* tuples

* combinators

* XML interpolation

* Mixins

* Fine-grained visibility

Caveat: I'm no Scala expert. Some of this may be in the libraries and not the
language proper.

Now... what Java functionality can you name that's not in Scala?

~~~
michaelochurch
Not that this diminishes your point, but many of these features (e.g. case
classes and pattern matching, list comprehensions) actually make real-world
code _less_ complex.

You're right that every bit of complexity Java has also trickles into Scala.
For example, Java's broken generics certainly add undesired complexity (type
erasure, wonky types) to Scala. Still, I think that idiomatic Scala generally
is less complex per unit functionality delivered. It's true that "full Scala"
must be more complex than Java because it contains Java as a subset, but I
think that more can be accomplished in Scala without making code complex or
verbose to the point that it becomes a problem.

In other words, to do comparable things requires far less complexity in Scala
than in Java, because Scala delivers the best bang-for-buck by picking the
most efficient abstractions. For example, Java concurrency requires a JVM
expert to get it right (did you know that doubles and longs aren't thread-
safe?) whereas Scala concurrency just requires someone who has a basic
understanding of the Actor model.

------
AdrianRossouw
Such a wonderful article, sullied by Java-based examples.

------
dustingetz
I wrote this article off at about the point where it makes incorrect
comparisons between immutable values and final variables. For someone who has
read it: is the rest of the article worth reading, or even accurate?

There's certainly a need for articles like this, targeting an audience of
Java devs, if only they were correct. I haven't found much.

~~~
spacemanaki
Yes, it's worth reading and it's basically accurate, although I'm not sure the
accuracy is really as important as the general big picture, since there aren't
that many examples.

Is there something specific that you can point out where the article is
actually _incorrect_? There are only a few paragraphs in the beginning that
mention final variables (and those only use Java primitives). I agree that
there's a distinction to be made but I don't know how useful that would be in
this kind of article.

If you're going to be splitting hairs over immutable values and final
variables, you're probably not the target audience. On the other hand, Java
devs who can't be bothered to finish a 20-page article probably aren't either.

FWIW, I think that Rich Hickey's Clojure videos and some of his more general
talks on state, time and identity are probably some of the best "intros to FP
for Java devs" out there.

~~~
calpaterson
To be fair to the grandparent, the difference between immutable data
structures and final variables in imperative languages is large and
fundamental. If the author is making this analogy (and I don't know, because
I haven't read the article recently), then it is going to be extremely
confusing for some people.

~~~
spacemanaki
The article doesn't make this analogy, that was my point. It doesn't really
delve into immutable data structures at all, and just discusses FP in broad
strokes. This isn't really a bad thing, since I don't think it was meant to be
a deeply technical article.

------
bryogenic
/me adds this article to my pile of articles to read

