
Don’t Be Scared Of Functional Programming - subbz
http://www.smashingmagazine.com/2014/07/02/dont-be-scared-of-functional-programming/
======
cessor
I am not so much scared by functional programming as I am annoyed by
functional programmers. I feel that they often come across like mathematics
professors who fill the chalkboard at lightning speed and look at you
like "seriously, how dare you ask this dumb question". They go on to
explain how your language is crap for all its side effects, smugly
explain how their language doesn't have those, and then explain how
Simon Peyton Jones shoehorned them into Haskell with a monad. Also, you
always seem to fail to understand monads and they always fail to explain
them in an understandable way (although they're not that hard to get).

Functional programming is great and feels right, and I am amazed how
knowledge of FP affects my JS, C# and Python to this day. But from a social
perspective: lose the smugness, and then we'll talk FP.

~~~
innguest
The reason you might be perceiving this is that once you're past the
learning-curve hump, the concepts become extremely obvious and the
definitions stick in your head.

It takes me a few seconds to rewrite foldl from rote understanding: foldl f z
(x:xs) = foldl f (f z x) xs, because it takes a function, a 'zero', a list of
head and tail, and calls itself again with the same function, a zero 'going
up' (so accumulating) and a list 'going down' (being exhausted). So as you see
this is an almost visual way to understand it, which is natural to those who
are already familiar with these abstractions, but is meaningless to those who
aren't.

If someone then says "how dare you ask that" then they have the wrong approach
to things. Socrates asked every question and thought they were all valid. By
asking the basics you learn the abstraction as if you're proving it to
yourself, and that's more powerful.

So it boils down to familiarity, is what I'm trying to say. I usually come
across as more passionate than arrogant about it. If you stick to it, it will
make sense. It took me 4 years of Haskell exploration for it all to click
really hard and now I'm trying to shorten other people's paths. In my opinion
the gap is around mathematical abstractions, how they're created, what do they
mean, how they actually relate to reality, and what work do they accomplish
for us that we then don't need to do anymore.

~~~
mbell
> It takes me a few seconds to rewrite foldl from rote understanding: foldl f
> z (x:xs) = foldl f (f z x) xs, because it takes a function, a 'zero', a list
> of head and tail, and calls itself again with the same function, a zero
> 'going up' (so accumulating) and a list 'going down' (being exhausted). So
> as you see this is an almost visual way to understand it, which is natural
> to those who are already familiar with these abstractions, but is
> meaningless to those who aren't.

I think this type of explanation falls into what the parent post is talking
about. That is a technically accurate but very dry way to describe a
construct that simply calls the function it's given for every element in a
collection and returns the accumulation of the results. Granted, you can dive
into all sorts of discussions about what that _really_ means and what power
it gives you, but save that for much later; it isn't introductory material.

An analogy to how functional programming often is taught would be if I tried
to teach a new programming language by going over the BNF grammar first
without context instead of starting with how the language is actually used in
practice. It may be more 'technically correct' but it's a terribly ineffective
way to learn for most people, even those who know and understand BNF. People
are often more interested in what they can do or create with this new thing,
not the nitty gritty details of how it works mathematically even if they
intend to get into the details later.

~~~
innguest
You're absolutely right, but I wasn't trying to explain it, just showing how
easy and natural it gets once you get past the hump. I was assuming the author
already understands the benefits of FP but finds it difficult, and I was
saying it definitely does get a lot easier.

Having said that, for an example of how I would actually explain some FP to
someone, search this page for "DomainList" and I have an explanation there.

I agree with you 100%; my belief is that mathematics and its notation often
gets introduced too early, before students have had a chance to build
intuition.

------
slapresta
Smashing Magazine on functional programming, getting most of it wrong. This is
the very moment functional programming jumps the shark.

It's too mainstream for me now, I'll have to switch to reactive programming.

~~~
pacala
Reactive programming is functional programming for events. You can run but you
can't hide.

~~~
lightblade
"Functional" reactive programming is the functional programming for events.
Reactive programming is actually pretty broad.

~~~
jonsterling
No, Functional Reactive Programming is not about events.

~~~
slapresta
Isn't it? It's about event streams, right?

~~~
jonsterling
Nah, FRP is fundamentally about continuous time.

------
grayrest
If you're interested in FP in JavaScript, you should check out the relatively
new Ramda.js [1]. It switches from the underscore-style, collection-first
call pattern to a collection-last pattern and automatically curries its
functions. The combination allows for easier function composition.

[1] [https://github.com/CrossEye/ramda](https://github.com/CrossEye/ramda)
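Here's a loose Python sketch of the difference (the function names are mine, not Ramda's):

```python
# Collection-first (underscore style): the data comes first, so you
# can't build a pipeline step without already having the data in hand.
def map_first(coll, f):
    return [f(x) for x in coll]

# Collection-last with currying (Ramda style): supplying only the
# function returns a reusable, composable transformer.
def map_last(f):
    return lambda coll: [f(x) for x in coll]

double_all = map_last(lambda x: x * 2)  # a pipeline step, no data yet
```

Now `double_all` can be composed with other such steps before any collection ever shows up.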

More info on the topic:

[http://fr.umio.us/why-ramda/](http://fr.umio.us/why-ramda/)

[http://www.youtube.com/watch?v=m3svKOdZijA](http://www.youtube.com/watch?v=m3svKOdZijA)

[https://speakerdeck.com/raganwald/javascript-combinators](https://speakerdeck.com/raganwald/javascript-combinators)

~~~
vqc
I believe Javascript Allonge takes the same approach w/r/t the call pattern:
[http://allong.es/](http://allong.es/)

The book is also very good: [https://leanpub.com/javascript-allonge](https://leanpub.com/javascript-allonge)

------
droithomme
"Don't be scared" rhetoric is generally used to promote arguments not
primarily through logic, but by painting the audience as fearful luddites,
scared of progress and the inevitable better future based on whatever the
speaker is advocating. The goal is to encourage the listener to convert to the
new religion or paradigm, or be considered backwards, behind the times, and
ignorant or stubborn. It's an effective tactic because any push-back is
pre-framed as evidence the listener is among the ignorant backwards people.

~~~
notacoward
Hear, hear. I had exactly the same reaction. Maybe the author is pitching the
article toward novice programmers and is genuinely trying to reassure. OTOH,
such phrasing is far more often used as a pure rhetorical ploy.

Personally, since I work in storage which is where we have to store all the
"state" that the FP folks have defined into Somebody Else's Problem, I have
little tolerance for that particular flavor of the month.

~~~
slapresta
You seem to be confusing state with data. Storage is supposed to store data,
not state.

Still, even if you store your internal state, that's hardly what functional
programming means when opposing state.

~~~
notacoward
That kind of response starts to sound like "no true Scotsman" pretty darn
quick. How do you distinguish between state that is on the stack (which FP
still has), state that is on the heap, or state that is on external storage?
Are those the same distinctions/evasions that I'd get from the next three FP
advocates I asked? I keep hearing about how FP saves us from all that evil
mutable state, but every time I look at a program written in a functional
language I see plenty of mutable state. Half of it is entangled with control
flow, which I do not see as an unalloyed win. When that doesn't suffice, those
same programs often resort to externalizing their state (e.g. into a
database), often incurring a significant and unnecessary performance penalty
just so they can give it a name and treat it as something outside of their
program. Aristocrats never like to get their hands dirty, and that's _exactly_
how most FP advocacy comes across. "Let them eat monads." Can't wait for the
guillotine.

~~~
innguest
There are answers to those questions but they do not help you to understand
functional programming as a practice. Because you're asking the equivalent of
"But how does OOP distinguish stack from heap from storage?" and it's not a
practical question.

The answer is something along the lines of: the compiler is usually very
different from most; it uses graph reduction and rewriting techniques, and
garbage collectors optimized for block reuse to handle all the heap churn
that all those function frames _would_ generate if implemented naively
(that's why recursion ends up being as fast as a loop); this is my cursory
understanding of it. But yes, I would suggest Lisp in Small Pieces and
Appel's compiler books.

For practical purposes what is important is that FP is inviting you to program
in this world where it is not important to talk about how those things end up
getting implemented on the machine. In fact it lets you quickly escape that
world whenever possible by allowing you to alias the String type to FirstName
for instance; so you can talk about the problem you're trying to solve, and
not the computer that will run it. That's the point of declarative programming
and FP is declarative.

It's the same in Prolog, I'd think. I'm sure they don't care about how these
things are actually being created and destroyed on the machine; the power of
Prolog's declarative style is precisely that it allows you to never talk about
the machine if you don't want to. That ends up being more general than the
usual abstractions of the day, which still model themselves after the Turing
machine.

There's no state to speak of. FP builds the blueprint of the program (one
level removed from the way OOP and procedural see things) and gives it to the
computer to run it. Now we make the execution order implicit (it's now
implicitly threaded through the functions' caller-callee relationships) and we
lose the ability to do state (because state depends on ordering things, which
can be represented by the semicolon in curly-brace languages). We also lose
the ability to do one thing after the other. But although it sounds like a bad
thing, it's actually a really good thing, because apparently most bugs live
inside semicolons.
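A toy Python contrast of what I mean by bugs living inside semicolons (the names are made up for illustration):

```python
# Imperative: meaning depends on statement order, i.e. on the semicolons.
balance = 100
balance = balance - 30   # swap this line with the next one
balance = balance * 2    # and the final value silently changes

# Declarative: ordering is implicit in the caller-callee relationships,
# so there is no statement sequence to get wrong.
def withdraw(b, amount):
    return b - amount

def double(b):
    return b * 2

result = double(withdraw(100, 30))  # the data flow fixes the order
```

In the second version the order of the two definitions in the file is irrelevant; only the nesting of the calls matters.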

~~~
seanmcdirmid
> That's the point of declarative programming and FP is declarative.

Declarative programming is a gradient. You can be exposed to a bunch of
tedious details in FP and you can abstract over them in OOP. Saying FP is
somehow magically declarative (as if that were a thing) leads to profound
disappointment when complexity is inevitably encountered.

> It's the same in Prolog, I'd think.

"Cut" it out :) Also see taming space leaks in Haskell.

> There's no state to speak of. FP builds the blueprint of the program (one
> level removed from the way OOP and procedural see things) and gives it to
> the computer to run it.

The primary difference between pure FP and OOP is identity: there is no
identity to speak of, things don't have names or addresses, they are just
values whose equality is determined purely by structure. The inability to name
things in FP is a huge drawback, because things actually have names in the
real world.

State doesn't depend on ordering. It depends on time; imperative and even
functional languages often conflate the two concepts, but not always (see
real FRP, not fake FRP ala Rx). Functional languages that eschew state are
merely eschewing the notion of time, which is what makes equational reasoning
hard; it takes something like FRP to bring time back into the fold by making
it explicit. Still, it's quite hard to program interactive apps in pure FP
given the intrinsic aversion to time.

------
chris_mahan
I'm not scared. It just does not make sense to me.

Just because I speak French and English fluently does not mean I can speak
Japanese. My wife is Japanese and speaks it fluently. We've been together 19
years. I've tried to learn; it just does not make sense. It would be nice,
because I have relatives there and could go live there, legally, but it just
doesn't make sense in my brain.

What can I do?

Functional programming is the same. Every couple of years I take a look, and I
recoil in confusion.

It also doesn't help that people make fun of me for seeming dumb.

~~~
jeffasinger
What worked for me was taking functional programming concepts, and applying
them in languages where I was already comfortable. That way, I wasn't learning
a new syntax and a new programming paradigm at the same time.

~~~
sanderjd
Which concepts did you find the most applicable? For me it is applying
functions to collections in different ways, e.g. map, fold, etc.
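For instance, those concepts map directly onto Python's built-ins, which is a gentle place to practice them:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

evens   = list(filter(lambda x: x % 2 == 0, nums))  # keep matching elements
squares = list(map(lambda x: x * x, nums))          # transform each element
total   = reduce(lambda acc, x: acc + x, nums, 0)   # fold into one value
```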

------
valarauca1
Maybe it's me being thick, but I honestly can't wrap my head around what the
fundamental good thing in functional programming is. Whenever I work in a
functional language I just find myself missing objects.

OOP just feels more elegant to look at, personally.

    
    
            Struct.function(arg1, arg2, arg3); 
    

feels more aesthetically pleasing to my mental model of programming than

    
    
            function(struct, arg1, arg2, arg3); 
    

This means more complex function definitions. In OOP I can define multiple
close() methods that each operate only on their own connection type, i.e.
tcpConnection and serialConnection can each call their own close(). In FP I
have to actually write either:

    
    
         fn close<T>(&:<T>){
               if(<T>::isType(tcpConnection)){
                    tcpConnectionClose(<T>);
               }else if(<T>::isType(serialConnection)){
                    serialConnectionClose(<T>);
               }else{
                   //throw exception
               }
        } 
    

Or I just make primitive calls to each connection type

    
    
        tcpConnectionClose(*);
    

This just doesn't sit right with me. It feels like bulky, unnecessary coding
once you take into account that objects exist. I'm interested in FP; I feel
it has to offer something for people to jump on it, but I just don't get it.

~~~
sanderjd
Not sure what language that is supposed to be, but in all the functional
languages I've used, you would specify that T implements close and write an
implementation for a TcpConnection and a SerialConnection, letting the
compiler figure out which one to call when and obviating the runtime exception
entirely.
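Python can only approximate that compile-time dispatch at runtime, but `functools.singledispatch` gives the rough shape (the type and function names here are invented for illustration):

```python
from functools import singledispatch
from dataclasses import dataclass

@dataclass
class TcpConnection:
    port: int

@dataclass
class SerialConnection:
    device: str

@singledispatch
def close(conn):
    # Fallback: in a real typeclass system a missing instance is a
    # compile error; at runtime the best we can do is raise.
    raise TypeError(f"no close implementation for {type(conn).__name__}")

@close.register
def _(conn: TcpConnection):
    return f"closed tcp connection on port {conn.port}"

@close.register
def _(conn: SerialConnection):
    return f"closed serial connection on {conn.device}"
```

Each `close` implementation lives next to its own type, and the dispatcher, not an if/else chain, picks the right one; in Haskell the compiler would verify every call site statically.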

~~~
innguest
Let me stress your point to the parent:

> letting the compiler figure out which one to call when and obviating the
> runtime exception entirely.

That is how FP eliminates a whole category of bugs. You have to have
"experience", which some love to boast, to know that it's better to have an
else clause there and throw an exception.

If you think that's basic knowledge, you'd be surprised how little
"experienced" programmers who get paid very handsomely actually know; I've
met programmers with 30 years of experience who do not believe that the
final-else-then-exception pattern is a good idea, because they think
"crashing the program is never good". So what ends up happening, of course,
is that the program ends up crashing anyway (supposing you have passed an
invalid connection in that case), but now the stack trace is potentially much
deeper than it needs to be, and therefore misleading. All stack traces lose
their value collectively because now we can't tell the good ones from the
bad. That can make debugging harder than it needs to be.

But in FP, because it's a fundamentally different way to program, that
situation doesn't even arise in the first place, obviating the need for a
solution. In FP
things are modeled differently from OO, it's not just "procedures without an
object jacket". The advantage here is the compiler verifies the "object"
selection for you (which "class"/type of connection) and can provide some
mathematical guarantees. Those guarantees are what metonymically "eliminate
bugs".

So we can rely less on the programmer and more on the compiler. You know, like
it's meant to be. After all we are sitting in front of a computer, so we
should just let it do its thing without telling it how. Turns out autonomy in
coming up with a solution is good both for the programmer and for the computer
(and then again for the programmer).

~~~
sanderjd
Thanks for the interesting expansion. This technique isn't actually unique to
functional programming languages, but rather languages with good type systems
and compilers. It just so happens that a lot of functional programming
languages have that.

------
badman_ting
"Functional programming is the mustachioed hipster of programming paradigms."

Well, _that's_ not helping. But I agree that devs should learn FP, and I now
write in a much more functional style than I used to. I resisted for a long
time, and that was a mistake.

------
peaton
> The literature relies on somewhat foreboding statements like “functions as
> first-class objects,” and “eliminating side effects.”

I personally find this view a little overbearing.

In "lay programmer's" terms, functions as first-class objects can often come
down to being able to pass functions to other functions or function
composition. Most of us learned about function composition in Algebra 2... So
that's pretty straightforward even at its worst.

Eliminating side effects is equally straightforward in that all it means is
that any variables outside the scope of a given function are not changed by
the function.
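In Python terms, the distinction is just this (hypothetical names):

```python
total = 0

def impure_add(x):
    # Side effect: mutates a variable outside the function's own scope,
    # so the same call gives different answers depending on history.
    global total
    total += x
    return total

def pure_add(running_total, x):
    # Side-effect free: the result depends only on the arguments.
    return running_total + x
```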

~~~
mneary
Function composition is very different from passing one function into another
and the latter is definitely a concept that most Algebra 2 students would not
be able to grok.

~~~
peaton
I knew I was making a strong assertion when I mentioned function composition.
I'm grateful for your feedback, but perhaps you could elaborate? I don't see
why they have to be "very different". I think you called me on a certain case,
but an example like this demonstrates that they are not so "very different":

    
    
        def a(b):
            m = 0
            return b(m)
    

Or even:

    
    
        def a(b):
            m = b(1)
            return m+7
    

I don't see how these are not a form of function composition. I'm definitely
no expert. But I'd really appreciate an elaboration so I can learn from any
mistake I'm making.

~~~
mneary
Function composition is a single, special case of a function accepting
functional arguments. You could define the compose operator as below:

    
    
        compose : (a -> b) -> (b -> c) -> (a -> c)
        a `compose` b = \x -> b(a(x))
    

However, there is a whole spectrum of additional possible functions which can
be built to accept functions as arguments. Here are a couple of examples:

    
    
        partial3 : (Integer -> Integer -> Integer) -> (Integer -> Integer)
        partial3 f = f 3
        map : [a] -> (a -> b) -> [b]
        map [] f = []
        map (x:xs) f = (f x):(map xs f)
    

The first one takes a function and applies an argument, 3; the next one takes
a list and a function and maps the list using the provided function. The idea
of functions as arguments is a very powerful one, and from my understanding
one with which people sometimes struggle.

------
yxhuvud
To be honest, this looks more like an explanation of the concept of
abstraction than of the concept of functional programming.

------
doctorKrieger
One of the points of the article is wrong: FP isn't stateless at all, it just
tends to represent state in a different way - look at monads in Haskell.

~~~
innguest
No, it is indeed stateless. The fact that it _represents_ state is proof of
that. FP uses representations for things instead of the real things. For
instance, OO purposefully causes changes in RAM to make things happen,
whereas in FP changes in RAM are incidental and the programmer need not
manage them.
So it is stateless because you program indirectly by using these conceptual
representations that are free of implementation.

~~~
dllthomas
Modulo IORef and similar.

~~~
innguest
No, even IORef is stateless.

You will know if there's state in a language if you can build a function that
returns different results given the same values over time; IORef does not
allow you to do that.
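A Python sketch of that litmus test (names invented for illustration):

```python
def make_counter():
    # A closure over mutable state: the same call returns a different
    # result each time, which is the tell-tale sign of statefulness.
    count = 0
    def tick():
        nonlocal count
        count += 1
        return count
    return tick

def square(x):
    # Stateless: identical inputs always produce identical outputs.
    return x * x
```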

~~~
dllthomas
The bit that I was intending to modify was your statement that _" in FP
changes in RAM are incidental and the programmer need not manage them"_. With
IORef, changes in RAM are _not_ incidental, and the programmer is explicitly
managing them. If IORef isn't direct enough for you, substitute Ptr. The fact
that it happens in the runtime system does not _necessarily_ mean that it is
incidental or that it does not need to be managed by the programmer.

As to your broader claim here, while I understand what you're getting at
(and probably wouldn't have bothered responding, but for the above) I think
it's most precise to say that IORef is an encoding _of_ state. This does not
have to mean that state gets updated by means of side effects.

------
innguest
If you're interested in learning FP but struggling and would like to beta test
didactic material for beginners, I urge you to get in touch with me. I'm
making a series of videos explaining the concepts and abstractions of
functional programming in a Socratic way and from scratch (so not assuming
even that numbers exist). I'm 'uploaded' at google's email service.

