
Follow up to "Functional Programming Doesn't Work" - steveklabnik
http://prog21.dadgum.com/55.html
======
anc2020
I really don't get it, what does this guy mean by 100% pure functional
programming?

Because we all know already that a program needs to print values or "do"
something to be useful, so in that way no program is ever 100% pure - is that
what he's talking about?

Or is he talking about 100% pure functional languages which allow functional
ways of writing IO code, like Haskell?

In Haskell can't you still program nearly 100% imperatively, just by wrapping
stuff in the IO monad?

    
    
        mysquare  x = x * x
        mysquarer x = do
                        putStrLn "Hey look, it squared"
                        return (mysquare x)
        
        main = do
                 putStrLn "Starting program..."
                 x' <- mysquarer 9
                 putStrLn (show x')
    

Can't he just write his program like that?

Or is he just arguing that we should write programs in functional languages
but just with an imperative style (no points-free and simple use of IO)?

~~~
pwnstigator
Please correct me if I'm wrong, but my understanding is that most Haskell code
doesn't generate values, but "thunks" that evaluate to values if needed.

When you write

    
    
      x = 4 + 5
    

you aren't setting x to 9, but creating a thunk that evaluates 4 + 5 at
runtime. An Integer is a thunk that returns an integer and has no other
effects. An IO Integer is one that does some I/O before returning that
integer.
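A minimal sketch of that thunk behavior (a hypothetical example, not from the article): an infinite list is perfectly representable, because each element stays an unevaluated thunk until something demands it.

```haskell
-- Each element of this conceptually infinite list is a thunk;
-- nothing is computed until `take` demands a finite prefix.
squares :: [Integer]
squares = map (\x -> x * x) [1 ..]

main :: IO ()
main = print (take 5 squares)  -- only five thunks get forced
```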

As I understand it, the only function that has the power to "do" anything
(computation or IO) under normal circumstances is main, which always has type
IO ().

~~~
jrockway
This is true, but not really relevant to the original article or the comment
you are replying to.

When you sequence computations with >>=, like the grandparent does, you
generally evaluate the left side of the operation before running the
computation. That is the point of monads: sequencing computations and
controlling the order of evaluation. Since the rhs depends on the lhs, the
sequence is "evaluate lhs completely", "evaluate rhs completely", and so on.
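As a sketch of that point, the grandparent's do-block desugared into explicit >>= chains (same names as the original comment's code), which makes the forced ordering visible:

```haskell
mysquare :: Int -> Int
mysquare x = x * x

-- `do` notation desugared: each >>= runs the action on its left
-- before passing its result to the function on its right.
mysquarer :: Int -> IO Int
mysquarer x =
  putStrLn "Hey look, it squared" >>
  return (mysquare x)

main :: IO ()
main =
  putStrLn "Starting program..." >>
  mysquarer 9 >>= \x' ->
  putStrLn (show x')
```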

------
icey
I feel like I'm missing the punchline here somewhere. Functional programming
"doesn't work" at 98% but if you turn some imaginary slider to 85% then
everything is awesome? I honestly don't get this.

Can anyone shed some light on the WTF-ness of this article? (Other than the
author disliking purely functional development and "98% functional"
development, whatever that means).

~~~
jacquesm
What's so hard to understand? I think he means that if you treat functional
programming as a religion and try to shoehorn everything into the functional
paradigm (I hate that word), you are making life hard on yourself.

But by using the right mix of functional and imperative you get the best of
both worlds.

I don't necessarily agree with that but I have no problem understanding it.

The more I learn about 'functional' programming, the more I'm beginning to see
that it is not so much a simple technique as a wholly different approach to
solving problems, one that often leads to tremendous insights which in turn
lead to huge optimization possibilities.

Witness the 'hashlife' link that I posted last week; I never thought such an
optimization was even possible.

~~~
icey
The author explained that purely functional development is hard to do in his
first posting. I don't understand the point of his follow-up; all he says is
that there's some magical barrier at which functional programming is suddenly
okay, and that magical barrier is somewhere between 98% and 85% on the mystical
"functional programming is hard, let's go shopping" scale.

~~~
statictype
I don't know why you're taking his post so literally.

Forget about the 98 and 85 numbers.

The point of the follow-up was to merely say that functional programming works
as long as you don't try to rigorously and dogmatically apply it to
everything. There's a sweet spot where most of your code is functional, but
you also let certain inconsistencies in so that the whole system stays
palatable.

This was actually what he was trying to say in his first post, but he didn't
really get it across clearly, hence, the follow-up.

------
stcredzero
_It does work, but you have to ease up on hardliner positions in order to get
the benefits._

That's one of the best last lines in a blog post I've seen in a while.
Absolutely true about a whole lot of things!

------
chipsy
The reactions to Hague's previous article reminded me of allergic responses -
that is, in a programming community that has become utterly soaked with "more
FP = better" commentary, any statement to the contrary must be shrugged off as
irrelevant or incorrect. And so we see people falling over themselves to say
that something _must_ be wrong with Hague or his arguments.

He was clear enough in the first article that he was advocating compromise
over purity, specifically addressing pure FP's faults as a tool for software
development. The real problem is not with him; it's that FP has reached the
point where the hype has induced people to start bellowing various forms of
the "USE FP EVERYWHERE, FP > IMPERATIVE" meme from the rooftops. See the memes
around TDD, design patterns, etc.

In conclusion, this is a prime indicator that the "interesting new stuff" in
programming practice has gone elsewhere. Functional style is quickly entering
the mainstream, it has its zealots, and it's well on its way to becoming just
another tool. So the question now is: what's next?

~~~
davidmathers
_The barbarians are at the gates. Hordes of Java programmers are being exposed
to generics and delegates; hundreds of packages have been uploaded to Hackage;
the Haskell IRC channel has nearly hit 500 users; and it’s only a matter of
time before Microsoft seals that multi-billion dollar bid for Hayoo.

The time has come to retreat and climb higher into our ivory tower: we need to
design a language that is so devious, so confusing, and so bizarre, it will
take donkey’s years for mainstream languages to catch up. Agda, Coq, and
Epigram are some approximation of what functional programming might become,
but why stop there? I want strict data, lazy codata, quotient types, and a
wackier underlying type theory._

<http://www.haskell.org/sitewiki/images/0/0a/TMR-Issue10.pdf>

------
tensor
I don't claim to be very experienced in functional programming, but doesn't
his whole argument simplify to "State is hard in a pure functional setting" ?

I was under the impression that this was a simple and fairly well understood
problem. Either use a functional abstraction for state (a reduce that carries
state, or perhaps a monad if the state needs to flow through many kinds of
operations), or manage it in an impure way, perhaps using STM with global
state.
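As a small illustration of the first option (a hypothetical example, not from the thread): state can be threaded explicitly through a fold-style recursion instead of being mutated in place.

```haskell
-- Running totals over a list: the "state" (the accumulator) is
-- passed along explicitly through the recursion, so no variable
-- is ever mutated.
runningTotal :: [Int] -> [Int]
runningTotal = go 0
  where
    go _   []       = []
    go acc (x : xs) = let acc' = acc + x
                      in  acc' : go acc' xs

main :: IO ()
main = print (runningTotal [1, 2, 3, 4])  -- [1,3,6,10]
```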

I don't see how this is any knock against the functional programming paradigm.
I always understood the rule of thumb to be "use purity as much as possible,
and be impure only when it's explicitly needed." Modifying a global state
seems to be a pretty obvious case for using an impure function.

Am I misunderstanding this?

~~~
statictype
>I don't see how this is any knock against the functional programming
paradigm. I always understood the rule of thumb to be "use purity as much as
possible, and be impure only when it's explicitly needed." Modifying a global
state seems to be a pretty obvious case for using an impure function.

This is pretty much exactly what he's saying.

His response was targeted at people who think functional programs should
wholly avoid state and side-effects. (I don't actually know of too many people
who believe this, but apparently he does)

------
naghip
I have experience with Lisp, Java, Scheme and Clojure. I am surprised at how
vehemently people are pushing back on James (the author). I tried coding in
Clojure and found myself working very hard at things that would have been
straightforward in Lisp. Could it have been done in Clojure? The answer is
"yes!" But I didn't want to get sidetracked thinking about how to make the
programming language work for me rather than actually work on the task.

~~~
Raphael_Amiard
Thing is, Clojure is not at all a 'pure' functional programming language; it
has all kinds of facilities for mutating state (and very well-thought-out
ones, in my opinion). If you had to categorize it, it would be more like the
80% functional language James is talking about.

If you are struggling in Clojure, it's probably because you haven't coded
enough in it for this style of programming to become natural (I know this is
a common argument, but I think it's true).

Moreover, and on a side note, I'd very much like an example of what was very
hard in Clojure and straightforward in Lisp, whatever Lisp that is; I assume
CL. It would be interesting to me to know what kind of code seems hard to
produce at first in Clojure.

------
rvirding
Agreeing with @pwnstigator, I think the important thing is to separate the
pure functional code from the impure code with side effects, and to be very
explicit that this code is impure and does change state, so that when you use
it you must take this into account. If this is done properly, you generally
find that the impure code need only be a small portion of the whole program.
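One way to sketch that separation in Haskell (hypothetical names, assuming the input would really come from the outside world): keep all the logic in pure functions and confine IO to a thin shell.

```haskell
-- Pure core: all the interesting logic lives here and is
-- trivially testable, since it cannot touch the outside world.
summarize :: [Int] -> String
summarize xs = "count=" ++ show (length xs) ++ " total=" ++ show (sum xs)

-- Impure shell: the only place with side effects.
main :: IO ()
main = do
  let input = [3, 1, 4, 1, 5]  -- imagine this was read from a file
  putStrLn (summarize input)
```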

------
astine
Maybe put another way: some algorithms are more simply expressed in an
imperative style than a functional one, and sometimes the benefits of easily
understandable code outweigh the benefits of having no side effects.

~~~
jrockway
Is effectful code ever easy to understand?

(The answer is yes, because it often follows expectations. This is why
dynamically-typed languages work. Although there is no explicit structure, you
can generally see what type the author wants something to be by understanding
what the code is trying to do. With imperative programming, the same is true.
The "+" function probably doesn't depend on your fooBarBaz global variable, so
you can pretend it doesn't. But there is no guarantee -- and programming based
on guarantees is safer than programming based on hoping the author is doing
what you think he's doing.)

------
pmarin
What doesn't work is believing that one programming paradigm is the best
solution for all problems.

------
pwnstigator
Functions should have side effects if and only if that's what the function is
designed to do: e.g. perform IO, write to disk, et cetera. Then "side effect"
is a misnomer, because the alteration to state is intended.

The hatred for "side effects" comes largely from a prehistoric tendency of
programmers to write highly optimized in-place operations that destroyed the
original data. An example would be a matrix multiplication that destroys one
of the original matrices, or Common Lisp's NCONC (a faster APPEND that
destroys some of the lists passed as arguments).

The general guiding principle of good code is that visible state changes occur
only when requested. (For optimization, private state changes can be used,
such as caching/memoization, but these are behind a layer of abstraction and
don't violate the referential transparency of the API.)
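A classic sketch of such hidden state in Haskell (a standard idiom, not from the comment): `fib` is backed by a lazily built table, but callers only ever see a pure function, so referential transparency is preserved.

```haskell
-- Memoization behind a pure interface: `table` caches results,
-- but no caller can observe the caching, only the answers.
fib :: Int -> Integer
fib n = table !! n
  where
    table = map compute [0 ..]
    compute 0 = 0
    compute 1 = 1
    compute k = table !! (k - 1) + table !! (k - 2)

main :: IO ()
main = print (fib 30)  -- 832040
```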

~~~
lmkg
>Functions should have side effects if and only if that's what the function is
designed to do: e.g. perform IO, write to disk, et cetera. Then "side effect"
is a misnomer, because the alteration to state is intended.

In medicine, they have a saying, "There are no side effects, only effects." I
first heard that from a psychiatrist, discussing how he initially selects
antidepressants for his patients based on side effects, such as weight loss or
gain. Viagra was originally a heart medication.

What the discussion really comes down to is, what is the intent of the
programmer? If the reason for a function call is a return value, but there's
also a state change, then a problem will almost always arise because you want
one but not the other. That's why changes of state and global variables are
usually maligned: they couple disparate behavior, sometimes in unintentional
or obfuscated fashion.

Further reading: <http://en.wikipedia.org/wiki/Command-query_separation>

Eiffel is apparently designed such that anything with a return value causes no
change of state, and vice versa. It's an interesting idea because it means
that a change of state, like you said, is never a side effect but always the
intention of the caller.
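The same idea can be sketched in Haskell types (a hypothetical Account example): a query returns a value and changes nothing, while a command returns only the new state.

```haskell
newtype Account = Account { balance :: Int }

-- Query: inspects state, changes nothing.
getBalance :: Account -> Int
getBalance = balance

-- Command: produces a new state and nothing else.
deposit :: Int -> Account -> Account
deposit amt (Account b) = Account (b + amt)

main :: IO ()
main = print (getBalance (deposit 50 (Account 100)))  -- 150
```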

Like pretty much everything, anywhere, ever, it's probably better as a general
programming practice than as a language-enforced restriction. And that's
basically what the author was saying about purity: it's better as a guideline
than a commandment.

~~~
pwnstigator
I generally agree with the principle of command-query separation. One point
where I would differ is that, when the command's effects aren't entirely known
to the caller, it's often useful to return this information.

For example, a function that creates a user account with a unique numerical
user ID (which isn't known until the account is created) can return the
user's ID, e.g.

    
    
      (create-user-account "Bob") => 9001
    
      (defn create-acct-and-immediately-do-something [username]
         (let [user-id (create-user-account username)]
            (do-something-with-acct user-id)))

