
Adventures of an imperative programmer in the land of fp - jacquesm
http://jacquesmattheij.com/Adventures+of+an+imperative+programmer+in+the+land+of+fp
======
sofuture
I've recently jumped into Erlang, and have come to some of the same
conclusions. I read Armstrong's Erlang book, and thought 'that seems cool' and
didn't at all grok what was going on beyond the superficial. Then a few months
later, just sat down and started solving Project Euler problems. At first it
was strange and foreign, and I was mad that variables were immutable. A day
later it just clicked, and I'd never had that much fun writing code.

The FP paradigm is (to me, even after 15 years of imperative programming) so
much more natural for development, since you're able to sanely start attacking
the problem directly -- instead of architecting a big-picture solution up
front that's probably wrong anyway because you've ignored some detail you
haven't yet discovered.

Building things with actions instead of objects just makes much more sense.
Reuse comes much more naturally, and doesn't seem as contrived as a lot of OOP
reuse does. I've noticed that working out my FP muscles has made me a much
better imperative programmer -- I write more clever and effective code (not
clever as in 'tee-hee-no-one-will-ever-figure-this-out'!).

~~~
WilliamLP
One thing that raises my eyebrows is how much functional programmers talk
about Project Euler problems. The actual programming involved in solving these
problems is in fact extremely trivial. They require some mathematical insight,
especially after the first hundred or so; you need to do some external
research on things like Pell's equation to avoid getting stuck, and you need a
fraction library if your language doesn't have one built in. But am I wrong in
thinking that these kinds of problems are almost no test or strain of your
actual _programming_ at all?

~~~
sofuture
They're an absolutely fantastic way to start 'getting' any language. They're
small, discrete, and varied tasks that require you to build different types of
operations and structures. Sure, they're not at all reflective of 'real
programming' nor are they necessarily particularly challenging, programming
wise.

I'd rather cut my teeth in a new language on the first 50 or so PE problems
than take on a bigger, less well-defined, or more domain-limited task.

I'm conversant in Python and Erlang entirely because of PE problems. They've
enabled me to start actual projects in both languages.

~~~
WilliamLP
> varied tasks

That's the part I'd argue with. It would be like saying you're an all-around
gamer when you only play chess, just with different openings every time.

Varied simple tasks would be doing some Euler problems, doing some basic
algorithms (Dijkstra's with a heap, an A* pathfinder on a 2D map, etc. --
TopCoder problems are great for this), writing a Mandelbrot zoomer, a Conway's
Life app with position setup, step-through, and save/load, a Tetris clone, a
basic HTML form builder, a blogging engine, a multi-user chat room server, a
simple side-scrolling shooter, a basic Roguelike, a simple text adventure.
Things like this can all be afternoon projects.

Every time I read someone claiming that Project Euler is for developing
general-purpose programming, I roll my eyes more than a little.

~~~
silentbicycle
Any similar sites you'd suggest, besides Project Euler and TopCoder? I've
worked through much of PythonChallenge, but I'm more interested in sites that
aren't as closely tied to a specific language.

~~~
WilliamLP
Do you know about Sphere Online Judge? (<http://www.spoj.pl/>) It supports a
lot of languages. It still contains mostly algorithmic problems though, making
it a superset of the kinds of problems at Project Euler but still a tiny
subset of general programming.

~~~
silentbicycle
That's what I had in mind, thanks.

------
chriseidhof
If you really start studying FP literature you'll discover that an enormous
amount of ideas that are in vogue in today's languages have been invented
years ago by functional programmers. For example, this paper from 1965(!)
describes DSLs: <http://www.cs.cmu.edu/~crary/819-f09/Landin66.pdf>

I think learning a language like Haskell can be extremely good for you as a
programmer. The problem is that you just can't expect to be productive if
you're new to it, and that might be very frustrating if you're trying to do
real work. However, just as jacquesm writes, if you're doing it for fun you'll
learn a lot (that you can sometimes apply directly to your normal
programming).

------
beagle3
There are about a thousand FP programmers for every imperative programmer, if
not ten thousand. They use a tool you may have heard of, called "Excel",
which is quite limited in scope.

It is an FP subset which is at the same time trivial to understand (much more
so than imperative programming, for most users!) and, coupled with usable I/O
capabilities, surprisingly sufficient for many uses.

(IDE, documentation, maintainability all suck, though; I wouldn't recommend it
as your main FP tool if you can avoid it)

~~~
ww520
And many Java programmers do FP unknowingly when writing an Ant script.

~~~
silentbicycle
No - ant is _logic programming_, not functional programming. Same with make.

------
bufo
It's funny; in French engineering schools we have the opposite reaction. In
"prep" schools, students are taught OCaml, which is their first programming
language if they're not geeks.

Functions returning functions seem a natural thing as they are used to the
exact same kind of abstraction in math (and even sometimes order 3+ functions
when you study duality!). Conversely, they are initially puzzled when they are
taught Java in their engineering school because of the difference between
static variables and attributes, constructors and other unnatural concepts.

~~~
eru
`Unnatural' in the sense of `not relating to natural transformations'?

~~~
Dn_Ab
Unlikely, since natural transformations are a statement about how a particular
collection of functions behaves under composition.

------
ww520
It's great that more people are exposed to the functional programming style.
Kudos to OP for trying something new.

It bugs me whenever FP people talk about state being bad as if it should be
avoided at all cost. State is bad if its scope is not carefully managed.
Global state is generally bad because its scope allows the whole program to
modify it, making it difficult to reason about its validity. Local state
maintained as a local variable in a function is perfectly fine: its scope is
small and its changes can be tracked easily. Pure functional code actually
maintains state implicitly too, in its parameters and return values, and in
the passing of return values as parameters to the next function.

~~~
silentbicycle
It's ultimately about making code easier to reason about. Immutability (at
least as a default) makes the dataflow between independent portions of your
program clearer, since every value is determined by where it came from, not
where it came from _and_ everything that could have potentially touched it
along the way.
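
To make that concrete, here's a little Python sketch of my own (not from the
article):

```python
# With mutation, a value depends on everything that may have touched it:
totals = [10, 20]

def sneaky(xs):
    xs.append(99)  # hidden mutation through the alias

sneaky(totals)
# totals is now [10, 20, 99], even though we "only passed it along"

# Immutable style: every value is fully determined by where it came from.
totals2 = (10, 20)
extended = totals2 + (99,)  # a new value; totals2 is untouched
```

To know what `totals` holds, you have to audit every function it was ever
passed to; to know what `totals2` holds, you only look at where it was made.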

~~~
silentbicycle
Oh: Easier for the compiler to reason about, too. Not just the programmers.

------
DrJokepu
One thing you haven't mentioned, but which is related to anonymous functions
(or lambdas) and is an important part of the FP style, is _passing around
functions as first-class objects_. It is quite unusual in imperative
programming, where it is normally only used to provide callbacks.

Let's say, you want to write a function that exports your video library to an
arbitrary medium. In FP, one way to do this is to create generator functions
that create DVDs, Blu-Ray discs, etc. Then your export function would take the
input and a generator function, and export the library using that function. In
Common Lisp:

    
    
      (defun make-dvd () ... )
      (defun make-blu-ray () ... )
      (defun export-library (data make-medium) ... )
    

And then you would

    
    
      (export-library *my-data* #'make-dvd)
      (export-library *my-data* #'make-blu-ray)
    

Or if you want to use an inline lambda (anonymous) function:

    
    
      (export-library *my-data* (lambda () ...))

~~~
jacquesm
I'm going to do a completely separate post on the subject of first-class
functions because I think it is too complicated a subject (and with too many
implications) to be squeezed in there as an aside. Besides, I don't think I'm
qualified just yet to write that article in a strong enough way. More
understanding is required first.

It must be funny to all the FP gurus here to see someone struggling to
understand the things that are second nature to them, but I find that it is
surprisingly hard to teach this old dog a new trick. One part of me wants to
say 'enough of this' all the time and reach for a C compiler just to get the
job done :)

~~~
jules
The real trouble here is probably closures.

    
    
        function foo(x) { return function(y) { return x+y; }; }
    
        bar = foo(3)
        bar(5) 
        // Returns 8, wait how did the x in the inner function remember the 3?! 
        // It's supposed to be gone already because the call to foo is over...
    

One way to understand this is to notice that it always "just works". It seems
like magic. The 'aha' for me was when I learned how closures are implemented.
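
In Python, for instance, you can actually poke at the implementation: CPython
hangs the captured variable off the function object itself, so nothing needs
to survive on foo's stack:

```python
def foo(x):
    def inner(y):
        return x + y  # x is captured, not read from foo's dead stack frame
    return inner

bar = foo(3)
print(bar(5))  # 8

# No magic: the captured variable travels with the function object,
# in a "cell" that CPython exposes via __closure__.
print(bar.__closure__[0].cell_contents)  # 3
```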

~~~
silentbicycle
But you'd probably only get hung up on that being "magic" if your mental model
of programming assumed stack-based scope for everything.

Besides, how about this pseudocode?

    
    
        class Foo {
            field x;
            constructor(x) { this.x = x; }
            fun add(y) { return x + y; }
        }
        
        bar = Foo(3);
        bar.add(5)  // returns 8, wait how did the x in the object remember the 3?

~~~
jacquesm
Stack-based 'mental' scope is one of the bigger things to 'unlearn'.

When I look at a function I build up a mental image of what is going on on the
stack as the function executes. Things that I don't see declared within the
function, or that are explicitly allocated, go on the heap. Getting away from
that image is very hard to do at first.

I compare it to learning real-life languages, like English versus German. If
you've never seen a language that has declensions in the endings of words,
it can completely overwhelm you. Likewise, tonal languages are strange
when you've only spoken English _or_ German.

Each of those opens up the perspective you had before you learned that
language to show you that there is more under the sun than what you thought
was 'normal', and that your way is not always the better one.

In programming languages it is much the same way.

~~~
silentbicycle
Indeed. For some reason, closures never tripped me up, but continuations took
a while to wrap my head around. (I think because I tried to learn
continuation-passing-style _first_. While also learning OCaml.)

Since you know C, it might help to think of continuations in terms of
setjmp/longjmp, but with multiple, persistent stacks. (Same with coroutines,
but coroutines and continuations have a lot of overlap.)
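
Python generators give a rough feel for the "multiple persistent stacks" part
(they're coroutines rather than full continuations, so treat this as an
analogy only):

```python
def counter(start):
    # Each generator instance keeps its own suspended frame ("stack"),
    # which persists between switches -- roughly the setjmp/longjmp picture.
    n = start
    while True:
        yield n
        n += 1

a = counter(0)
b = counter(100)
# Jumping back and forth: each one resumes exactly where it left off.
print(next(a), next(b), next(a), next(b))  # 0 100 1 101
```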

~~~
jacquesm
> in terms of setjmp/longjmp, but with multiple, persistent stacks.

That's exactly the thing I used to help me visualize it :) Funny you came up
with the exact same model.

Setjmp and longjmp are by the way probably two of the most abused calls in the
C runtime package. But when you need them, you need them badly.

------
blintson
Towards the end of his entry he mentions something about readability. I
believe postfix notation is superior in both readability and writability for FP.

Take: (reduce (lambda (x y) ...) (map (lambda (x) ...) data-set))

When you actually read this what do you do? You work inside-out to understand
it. You figure out what 'data-set' is, then you figure out what '(lambda (x)
...) does to it, and so on.

You (or I, at least) also write the code inside-out. You start with the data,
and an idea of how to transform it, and you work your way towards that
transformation.

Compare to:

((data-set (lambda (x)...) map) (lambda (x y) ...) reduce)

Of course, this brings up a lot of edge-cases. Ex.: Where does 'define' fit
into this? You really want define and the variable name at the beginning.

~~~
jules
This, in my opinion, is one of the greatest strengths that OOP has over FP:
data.map(...).reduce(...). It doesn't really have anything to do with OOP; you
could just as well have such a syntax for calling functions. In F# you do have
the |> operator, which does something like this.

This may seem superficial, but it helps readability a lot. The human mind (or
mine at least) is just not well suited to unraveling nested structures.
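
A sketch of what such a syntax could look like without any OOP, using a
hypothetical little `pipe` helper in Python (my own toy, not an F# feature):

```python
from functools import reduce

def pipe(value, *fns):
    """Apply functions left to right, like F#'s |> operator."""
    for fn in fns:
        value = fn(value)
    return value

data = [1, 2, 3, 4]
result = pipe(
    data,
    lambda xs: [x * x for x in xs],             # the "map" stage
    lambda xs: reduce(lambda a, b: a + b, xs),  # the "reduce" stage
)
print(result)  # 30
```

The stages now read top to bottom in evaluation order, with no nesting to
unravel.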

~~~
koenigdavidmj
Whoa! The UNIX shell's pipelines!

~~~
jimbokun
The odd thing is that the semantics of the Unix pipeline model are more
similar to lazy sequences in Haskell or Clojure.

It is common to have one function generating a lazy sequence, another taking
the output of that function and generating another lazy sequence, and so on
until you get your final results out at the other end. The nice thing is that
at no point in time is it necessary to have the entire sequence in memory.

A Unix pipeline is similar, in that one process consumes the output of another
process as it becomes available, as opposed to having to wait for the first
process to complete its task before the next process in the pipeline can
start.
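
In Python, the same shape falls out of generators (a toy sketch of my own):

```python
from itertools import islice

def numbers():
    # An unbounded producer, like a process writing to a pipe.
    n = 0
    while True:
        yield n
        n += 1

def squares(xs):
    # Consumes values as they become available, like the next pipe stage.
    for x in xs:
        yield x * x

# Only five values are ever produced; the "infinite" sequence never
# exists in memory all at once.
first_five = list(islice(squares(numbers()), 5))
print(first_five)  # [0, 1, 4, 9, 16]
```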

------
Goladus
Stuart Halloway has a great explanation of anonymous functions in "Programming
Clojure." He identifies the specific conditions where you might choose to use
an anonymous function given that, for readability reasons, naming functions is
usually a good idea.

------
martingordon
I tried wrapping my head around Clojure but I just couldn't. I'm currently
diving into Scala and finding it much easier to get into coming from an OOP
background (I'm a Java developer by day and a Rails/Objective-C developer by
night).

------
eru
Interesting read.

From the article: "You have to train yourself to start the understanding of
code you're looking at from the innermost expressions, which are the first to
be evaluated."

The author is in for something even more mind-bending when he eventually has a
look at lazy languages.

~~~
jacquesm
Lazy evaluation: evaluate an expression only when you are actually going to do
something with the result. For instance, in a generator you can generate
'infinitely long lists' because only those values that are consumed are
actually generated.

Is that what you mean?

Or do you mean when that principle is expanded to the whole language?

~~~
eru
To the whole language. Then you can't easily tell what's evaluated when.

------
DanielBMarkham
Welcome to the party, Jacques! I think you and I started about the same time.

FP seems much richer than IP, but maybe that's just me. I know that once you
get all the basic set operations, then you move on to continuations and Monads
and it's like wow! A whole _other_ world opens up. Then you can move on to all
sorts of other cool stuff like super-scaling which kind of just "falls out" of
FP. So it seems like there is more depth here for geekiness.

As far as bugs go, I'd guess that the vast majority are related to either
"state leakage" -- somebody tickling your variables while you're not looking
-- or off-by-one errors. FP eliminates both of those. I know I try to stay as
immutable as possible, and my code feels a lot more solid than it used to.

