

Functional programming didn't take off because we're imperative at heart. - jacquesm

Please criticize:

The argument usually goes that functional programming is inherently mathematical and that's why it isn't as big as it could have been, but I think that doesn't hold water. It's no harder than 'imperative' programming; it just requires a slightly different mindset.

But when you look at it from a different perspective, not the 'maths/engineering' viewpoint but how humanity works, we are all acting in an 'imperative' mode towards each other.

It structures our beliefs about authority, it's how (most, unfortunately) parents deal with their children, and it is what we picture in science fiction books or movies when robots and humans, or computers and humans, interact:

      Do x, y or z and then report back for more work.

Functional programming forces you to break that habit: you no longer 'tell something what to do', you have to get into a different mode of thinking.

That, to me, seems to be the key obstacle to mass adoption of functional programming. It simply does not fit the mindset of the people who are programming, because they're just like everybody else: this machine literally needs to be told what to do.
======
nostrademons
Technically, that's more like actor-model concurrency than imperative
programming.

Anyway, it's dangerous to make generalizations about how people think, because
people think different ways in different situations. For example, some people
may think: I'll hand in my homework, and it'll come back with a grade on the
top. If I get enough As, that results in a passing grade for the course. If I
pass 32 courses, I get a diploma. If I get a diploma, I get a job at Big Corp.
All of these are essentially "functions", where you pass in an input and get
back an output.
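Written out as code, that chain is just function composition. A minimal Haskell sketch; all the type names and thresholds here are made up:

      -- Each step is a pure function: pass in an input, get back an output.
      -- All names and thresholds are hypothetical.
      data Grade   = A | B | C | F  deriving Eq
      data Diploma = Diploma
      data Job     = JobAtBigCorp

      passCourse :: [Grade] -> Bool              -- enough As => the course is passed
      passCourse grades = length (filter (== A) grades) >= 4

      earnDiploma :: [[Grade]] -> Maybe Diploma  -- pass 32 courses => a diploma
      earnDiploma courses
        | length (filter passCourse courses) >= 32 = Just Diploma
        | otherwise                                = Nothing

      getJob :: Diploma -> Job                   -- a diploma => a job at Big Corp
      getJob _ = JobAtBigCorp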

Honestly, I think the real reason functional programming didn't catch on is
because when the computer industry was new, you needed the performance
available through mutation. So programmers learned to depend upon it, and when
new programmers entered the industry, they picked up the conventions and
culture of old programmers, and so on. There's nothing intrinsic about
functional or imperative programming that makes one better than the other:
it's the network effects from imperative being the default in industry today.

In fact, I thought MIT released some research showing that _among students
with no previous programming experience_ , those who were taught in functional
languages picked it up more quickly and easily than those in imperative
languages. You could make a good case that functional programming is closer to
how people actually think, but that programmers' brains have been warped by
having to deal with imperative programs all these years.

------
tumult
Hopefully you'll excuse a short rant with no data to substantiate it.

It's easy to give someone a list of something to do. You already know how it's
done. Need a sandwich? "Go to the store. Buy these x items. Come home. Take
two slices of bread..."

It's more difficult to say, "A sandwich is all or part of a meal. It's a set
of ingredients that are roughly planar, layered between two pieces of bread.
Ingredients frequently include lettuce, tomato, deli meats, and cheese. A
grocery store is a place where you can go to buy.."
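The same contrast in code might look roughly like this (purely illustrative; the types are made up): the imperative version is an ordered list of steps to carry out, the declarative version just says what a sandwich is.

      -- Imperative flavour: an ordered list of actions to perform.
      makeSandwich :: [String]
      makeSandwich =
        [ "Go to the store"
        , "Buy bread, lettuce, tomato, deli meat, cheese"
        , "Come home"
        , "Take two slices of bread"
        , "Layer the ingredients between them"
        ]

      -- Declarative flavour: a sandwich *is* layered ingredients between two slices of bread.
      data Ingredient = Lettuce | Tomato | DeliMeat | Cheese
      data Sandwich   = Sandwich { fillings :: [Ingredient] }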

Or let me put it another way. One of the places where you see much less FP
than elsewhere is games and simulation. Is it because performance is really
that important? Haskell is very fast and provides more straightforward
parallelism. Certainly you can make Haskell code run faster than some of the
stuff people make decent games with every single day.

I think it's because simulations of a world are really goddamn hard to
understand. How can you hold that much in your head at once? Baseball sim.
"Pitcher throws a ball. It travels towards home plate, where someone swings at
it with a bat. Test for collision on the baseball's vector and the bat's
arcing collision box. If the ball hits, then.."

vs.

"The speed of a baseball as thrown by a pitcher is a product of the amount of
power chosen by the player and a selection from possible pitches. A pitch
selection is a product of all input from the player's controller who is
currently on the field. The player who is currently on the field is a product
of the number of outs from the previous inning. The outs from the previous
inning are.."

It's really hard to think like that, because these problems are really
actually very complicated. We've been "cheating" all along by not really
understanding what the hell is actually going on in our simulations. It's
completely chaotic.
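A rough sketch of what the declarative version asks of you, with every name and number invented: each quantity is a pure function of the things it depends on, all the way down the chain.

      -- Hypothetical sketch: nothing here is from a real game engine.
      newtype Power = Power Double            -- how hard the player chose to throw
      data Pitch    = Fastball | Curveball | Slider

      pitchSpeed :: Power -> Pitch -> Double
      pitchSpeed (Power p) Fastball  = 90 + 10 * p
      pitchSpeed (Power p) Curveball = 70 + 10 * p
      pitchSpeed (Power p) Slider    = 80 + 10 * p

      -- ...and so on up the chain: the pitch selection is a function of the
      -- controller input, the player on the field is a function of the outs,
      -- the outs are a function of the previous inning, etc.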

Massively complex 3D games with physics tend to be very very buggy, in case
you haven't noticed. Companies with big budgets seem to be able to pull it off
because they can playtest everything to death, and special-case everything as
needed to avoid the most visible problems in the simulation. Falling through
geometry, physics going crazy, AI behaving incomprehensibly. This can all be
suppressed somewhat by lots of hard, manual work by large teams of people.
It's obviously not ideal from a programming perspective.

It's a lot easier to just list the effects of what you're seeing happen. You
don't have to understand the system itself, just what the behaviors look like.
Understanding what's actually going on is very difficult.

------
rikthevik
I think it has more to do with hardware and history. The mapping of iterative
concepts to standard microprocessor instructions is very straightforward. In
the old days languages were imperative just because you couldn't get decent
performance with high-level abstractions like FP.

------
psygnisfive
I think this is sort of correct, but not entirely. Natural language, for
instance, affords us a great many mechanisms that work very much like a
functional programming language. Rather than something like this:

    
    
      for (x in men):
        if john saw x:
          the_man = x
          break

      mary knows the_man

we can do something like

    
    
      mary knows men.select { |x| john saw x }
    

namely, relative clauses: "mary knows the man that john saw".
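In actual (if contrived) Haskell, the relative clause really is just a filter, with no loop and no mutable the_man variable; the names here are invented:

      -- Contrived sketch: "the man that john saw" as a filter over men.
      data Person = Person { name :: String } deriving Eq

      johnSaw :: Person -> Bool                  -- stand-in for "john saw x"
      johnSaw p = p `elem` [Person "bill"]

      theManJohnSaw :: [Person] -> Maybe Person
      theManJohnSaw men = case filter johnSaw men of
                            (m : _) -> Just m
                            []      -> Nothing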

But relative clauses are difficult to process when they're not tail recursive
(think of The House that Jack Built in English vs. Japanese, let's say).

So what it might suggest is that the brain is a functional tail-call optimized
system, rather than generically imperative or generically functional. This
might explain the apparent preference for imperative programming.

------
CyberFonic
Computers are imperative to the core!

The silicon in your CPU implements the mechanisms for executing instructions,
which in turn manipulate data in registers, memory and persistent storage.

Everything else is implemented in layer upon layer of abstraction on top of
this imperative core. With a bossy human and a subservient sliver of silicon,
the imperative is "imperative". Everything that sits in the middle can either
smooth the way or get us tied up in knots.

------
wwalker3
I think functional programming never took off because the world around us is
made of stateful, interacting objects.

Pure FP uses a completely different paradigm, so there's a huge impedance
mismatch when you try to connect FP programs to the real world or run them on
real hardware.

I've written my share of Lisp code, but it never seemed to be a good fit for
things other than AI and propositional calculus.

------
zeynel1
I wonder if the OP can give a representative example (I am not a computer
scientist). For instance, superficially, Fibonacci in C and in Haskell look
identical:

Fibonacci in C:

      unsigned int fib(unsigned int n) {
        return n < 2 ? n : fib(n-1) + fib(n-2);
      }

Fibonacci in Haskell:

      fib :: Integer -> Integer
      fib n | n == 0 = 0
            | n == 1 = 1
            | n > 1  = fib (n-1) + fib (n-2)

From here: <http://en.literateprograms.org/Fibonacci_numbers_(Haskell)>

~~~
nostrademons
Fibonacci in C would more often be written like this:

    
    
      unsigned int fib(unsigned int n) {
        unsigned int a = 0, b = 1, temp;     /* invariant: a = fib(i), b = fib(i+1) */
        for (unsigned int i = 0; i < n; ++i) {
          temp = a + b;
          a = b;
          b = temp;
        }
        return a;                            /* fib(0) = 0, fib(1) = 1, ... */
      }
    

This also happens to be significantly more space and time efficient.
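For comparison, the same accumulating idea is easy to write in Haskell as well (a sketch, not from the original comment): a tail-recursive helper carries the running pair of values instead of mutating them.

      -- Tail-recursive accumulator version; agrees with fib 0 = 0, fib 1 = 1.
      fib :: Integer -> Integer
      fib n = go n 0 1
        where
          go 0 a _ = a
          go k a b = go (k - 1) b (a + b)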

