
Clojure for the Brave and True – Functional Programming - nonrecursive
http://www.braveclojure.com/functional-programming/
======
taeric
Has there been any exploration into an idea where the problem with side
effects is not that they make functions impure, but they are often against the
metaphor of the instructions being given? That is, if the metaphor that one is
trying to model is traditional mathematics, then of course side effects are
terrible.

However, consider a program where the main metaphor is controlling something.
Logo, for example. Few people argue, I would think, that the traditional
imperative styling there hurts and confuses things.

More extreme, consider stack based languages. These are strictly based on the
current state of the program, yet my understanding is if you can fit your mind
to that metaphor, it works very very well.

Or, my favorite category, cookbooks. Look at traditional baking directions:
"Begin heating oven to XXX, mix dry ingredients in bowl, add butter, whip,
..." Doesn't get any more imperative than that, and yet people around the
world often have great success replicating the desired results. (Granted, I
often think that the difference in programming and teaching is that humans
make an effort to understand what you were communicating, computers typically
don't.)

Does this make sense?

~~~
samatman
(serve (fry (add-pan kale (fry (add-pan (dice onions))))))

~~~
cynicalkane
Better is

    
    
        (->> onions
          dice add-pan fry (add-pan kale) fry serve)
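
Since `->>` is just a macro, you can ask Clojure to show the rewrite it performs (the cooking functions are hypothetical, as in the parent comments):

```clojure
;; ->> threads the value through each form as the LAST argument.
;; macroexpand shows the nested call it rewrites to:
(macroexpand
  '(->> onions dice add-pan fry (add-pan kale) fry serve))
;; => (serve (fry (add-pan kale (fry (add-pan (dice onions))))))
```

Bare symbols like `dice` get wrapped in a call, so `dice` becomes `(dice onions)` before threading continues.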

~~~
agscala
Amazing. What's the ->> operator called?

~~~
breckinloggins
A monad ;-)

No, but seriously, you can think of (->> foo bar baz quux)

As a way of saying:

    
    
        Within the context of computing one thing and passing 
        that result to the next thing with a common carrier 
        object of "foo", do bar, then baz, then quux within
        that context, each operating on foo.
    

In VB you could do:

    
    
        with someFile
            readIt()
            processIt()
            writeIt()
        end with
    

See the similarity?

By the way, monads are a way to abstract this even further so that not only
can you model the context of "do something to a thing then do the next thing
then do the next", but almost every other kind of context you could think of.

EDIT: as has been pointed out, this isn't really a monad, but it's an example
of something monad-like that helps me grasp the larger concept better. Also,
fixed the syntax.

~~~
tel
(->>) is not really a monad, though. It's a concatenation of endomorphisms,
which sort of has the smell of a monad, but is simpler.

~~~
dllthomas
Not necessarily _endo_ morphisms, though, is it?

~~~
tel
True, and then it's a category.

~~~
dllthomas
Well, hmm, I don't think that's quite right either. There is a category where
the objects are clojure types and the arrows are clojure functions, but I
don't see how the threading operator _embodies_ it. There could totally be
some perspective I'm missing, though.

~~~
tel
No, I think I'm just playing more fast and loose than I should. I don't think
it fits into any particular semantic mould because it's really a syntactic
thing—it is a list processing function that's applied to source code
represented as lists. The basic usage pretty much traces out a path in the Clj
category, so perhaps if you built Paths(Clj) then (->>) is a forgetful functor
from Paths(Clj) -> Clj. In which case it's almost a monad, since if you play
with Paths and (->>) you can turn them into a forgetful/free adjunction pair
and make a monad.

~~~
dllthomas
Hmm, I think that's right. Heh.

------
brudgers
Rich Hickey has a great description of how to think about working with
immutable values - what we usually want when we change a variable is simply
the next value and so long as we are getting the value we expected, there's no
need to name it.

In other words if our current position in an array is 2, what we need to
access the next position is the value 3. Creating a variable _int i=2;_ and
then mutating it _i++;_ introduces the possibility of side effects. This is not
to say that at an abstraction layer below our programming language a register
won't get incremented, only that our brains don't need to worry about the
mechanisms most of the time if we use a functional language.

~~~
dllthomas
Or a language with a foreach construct...

~~~
brudgers
Exactly.

There's nothing magical about functional programming - it is no more than
syntactic sugar to help programmers control the state of computing devices.

Foreach is a higher level abstraction that makes reasoning about our code
easier. The bits still change.

~~~
Dewie
> There's nothing magical about functional programming - it is no more than
> syntactic sugar to help programmers control the state of computing devices.

In the same way that objects in Java are just syntactic sugar for
encapsulating state. In the same way that a compiler is just a machine that
translates the syntactic sugar of language A into the actual language A.

> Foreach is a higher level abstraction that makes reasoning about our code
> easier. The bits still change.

So how do you write a Foreach loop from a regular for-loop in your typical
imperative language? How do you compose that Foreach loop to make even higher
level abstractions like map, filter and fold/reduce?

~~~
dragonwriter
> So how do you write a Foreach loop from a regular for-loop in your typical
> imperative language?

By walking through the collection in the update step of the for loop (for
languages with a less-general version of for than C and friends, this may
actually require a while loop rather than a for loop).

> How do you compose that Foreach loop to make even higher level abstractions
> like map, filter and fold/reduce?

By abstracting it behind a function call that takes a function (or function
reference/pointer, or in particularly limited languages a location to jump the
current instruction pointer to) as an argument.

Yes, languages that focus on the functional paradigm (or which have been
influenced by it even if it isn't their central paradigm) have syntax which
makes this more straightforward than it might be in some other languages.
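
As a sketch of that in Clojure - a hypothetical `my-map` that hides the explicit loop behind a function taking a function:

```clojure
;; map built from an explicit loop: the "foreach" lives inside,
;; and the caller just passes a function in.
(defn my-map [f coll]
  (loop [acc [] xs coll]
    (if (empty? xs)
      acc
      (recur (conj acc (f (first xs))) (rest xs)))))

(my-map inc [1 2 3]) ;; => [2 3 4]
```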

~~~
dllthomas
In the case of foreach, it's simple to be sure, and I'm pretty fine with
calling foreach "syntactic sugar". Even something as simple as function
composition breaks down a bit in C. You can manage it, but you're either not
really using C anymore (writing out machine code in memory and treating it as
a function, which is going to be tremendously compiler and architecture
specific) or you're not really dealing with C functions anymore (passing
around a structure that stores the things to compose, "calling" that function
with a separate "apply" function).

Yes, you can ultimately write all of Haskell in C and code at the higher
level, exceedingly verbosely, but to say that's "just syntactic sugar" would
be absurd - "fundamentally all these languages are Turing equivalent" already
encapsulates that observation, there's nothing new the notion of "just
syntactic sugar" is adding.

Typically I restrict "just syntactic sugar" to things that are simple
transformations that can be done locally. The archetypical example being array
syntax in C, which can be described more precisely by a syntactic
transformation than a semantic operation:

Given:

    
    
        char *a = "abcd";
        int i = 2;
    

It turns out that:

    
    
        a[i] = *(a + i)
    

but that's equivalent, by commutativity of +, to:

    
    
        *(i + a)
    

and so, counter-intuitively if you're thinking of [] as a semantic "array
indexing" operation, you get the same results with:

    
    
        i[a]

~~~
dragonwriter
> In the case of foreach, it's simple to be sure, and I'm pretty fine with
> calling foreach "syntactic sugar".

Yeah, I was responding specifically to the "foreach" and specific things
layered on top of foreach, which are fairly straightforward in most popular
non-FP languages.

> Even something as simple as function composition breaks down a bit in C.

Quite. No argument there. At least with standard C. (I think Clang and GNU C
both have extensions -- but not the same ones -- that make this reasonably
straightforward in simple cases, but still much less elegant than, say,
Haskell.)

------
jafaku
As someone who still doesn't get functional programming, I would like to see a
real-world example of how to deal with side effects. Eg: write something into
a file or a DB. Because _of course_ methods/functions with no side effects are
easier to deal with, it sounds good, but I don't think it would be easy or
convenient to separate every side effect in a real-world program (as opposed
to academic programs, where you just write a Fibonacci or whatever). Even in
good OOP, you can have lots of side effects in a method.

Halp!

~~~
ufo
Using side effects in Haskell is easy. The only thing is that they need to be
explicit and you aren't allowed to hide them inside pure functions.

For a database example, here is the chapter from Real World Haskell about
databases (a bit old but still relevant):

[http://book.realworldhaskell.org/read/using-
databases.html](http://book.realworldhaskell.org/read/using-databases.html)

And here is some example code:

    
    
        do
          conn <- connectSqlite3 "test1.db"
          stmt <- prepare conn "INSERT INTO test VALUES (?, ?)"
          execute stmt [toSql 1, toSql "one"]
          execute stmt [toSql 2, toSql "two"]
          execute stmt [toSql 3, toSql "three"]
          execute stmt [toSql 4, SqlNull]
          commit conn
          disconnect conn
    

As you can see, it looks very similar to how you would write it in a regular
imperative language. The main difference is that since the database
operations are IO operations, they need to be in the special do-notation
block and cannot be mixed with regular code.

As for the question of wanting to mix pure and impure code, I don't think it
actually comes up that much. First of all, I find that I don't often have a
pure function that I want to seamlessly add a side effect to in the middle -
side-effecting stuff tends to be side-effecting from the beginning. Secondly,
if I do want to turn a pure function into an impure one, then forcing me to
change the interface helps make sure that I am not breaking any code that used
to work because it assumed it was working with pure code. Additionally,
Haskell is really good at abstracting code, so it's not that hard to take a
big chunk of logic and split it into a pure and an impure part. Finally,
converting code from pure to impure is not that bad - sure, you need to change
the syntax a bit, but the type checker helps you find all the places that need
to change and also helps you check all the code that calls the method you
changed, so you can be sure that you aren't breaking those.

> to academic program

It's a little pet peeve of mine, but it seems that nowadays "academic" doesn't
have a real meaning and just tends to refer to whatever people don't like. :)

~~~
dllthomas
"do" is just syntax. You could write the above as:

    
    
        connectSqlite3 "test1.db" >>= \ conn ->
          prepare conn "INSERT INTO test VALUES (?, ?)" >>= \ stmt ->
            execute stmt [toSql 1, toSql "one"] >>
            execute stmt [toSql 2, toSql "two"] >>
            execute stmt [toSql 3, toSql "three"] >>
            execute stmt [toSql 4, SqlNull] >>
            commit conn >>
            disconnect conn
    

and it's "just" a bunch of nested lambdas with no "do" in sight. What's
different from "regular" code is that those lambdas 1) return IO actions, and
2) are strung together with monadic bind (>>=) to build one big IO action.

To be sure, in this case the do version is much easier to read (that's why "do
notation" exists). It can be used with any monad, though - nothing ties it
particularly to IO.

~~~
ufo
Maybe I should have been clearer but the basic point is that monadic code has
a different interface

    
    
        bar (foo x) --pure Haskell
    

vs

    
    
        foo x >>= bar --monadic Haskell
    

vs

    
    
        bar(foo(x)) -- in C you write both cases like this.

~~~
dllthomas
You're close, but it's not really so clear cut.

foo x ++ bar -- "pure" or "monadic"?

foo x >>= baz -- "pure" or "monadic"?

What if (foo x) is a list in both cases?

(>>=) is just an operator. The values that it operates on are anything with a
Monad instance, just like the values that (+) operates on are anything with a
Num instance. There is nothing voodoo about monads or about bind (>>=); such
voodoo as there is lies solely in IO, which is interacted with the _same_ way
as other values (though certainly what you can do with (IO a) is more limited
than what you can do with (Maybe a)).

~~~
Dewie
I used to think that Monad was some alternative realm in Haskell, a realm
where purity didn't exist and everything was written in an alternative, built-
in language (do-notation). That's because people tend to refer to using Monads
as "working in the X-Monad", as if it is some... _place_. But then it turned
out... Oh, so it's basically just a signature/interface/type class.

~~~
ufo
True! But one neat thing is that, if you have a black-box abstract data type
then you can force people to use the provided Monad interface if they want to
do stuff with it. The IO monad does exactly this to create its "alternative
realm"; there is no way to get "out" of IO once you get in. The only way
to run an IO computation is to have it be directly or indirectly called by "main",
and the only primitive way to compose IO computations is with the monadic
combinators.

------
geuis
I want to take a leap and talk about 2 core concepts that I got after spending
last week learning Clojure. I'd love some comments about them.

I've been writing javascript for years but functional programming didn't make
sense until I realized:

1) It's essentially inverted callbacks. You can read and reason about your
code by working from the inside out. The results of a function are passed up
to the outside.

2) When you stop modifying variables and just pass them around, it becomes a
lot easier to think about code in abstract ways. There are of course some
situations where modifying a variable is needed, but it really feels icky and
you think twice about doing it.

The problem I found trying to learn this stuff is that once you learn to think
functionally it's hard to think like someone who doesn't get it. That makes it
hard to translate concepts in ways that non-functional people can grasp easily.
I'm keeping mental tabs on ideas as I understand them to hopefully help with
this problem.

Also, macros. Clojure people love talking about them. They aren't explained
well most of the time though.

Simplest explanation is that macros let you write code that rewrites other
code - you can even define forms that shadow native ones. Since Clojure is a
compiled language, at compile time all the source code gets read in and
pre-processed. When the compiler reaches some code that was defined in a
macro, it keeps rewriting that code until all the macros are expanded and the
code has been rewritten. Then everything is compiled to bytecode for the JVM.
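
A minimal sketch of that rewriting (`unless` here is a made-up macro, not part of clojure.core):

```clojure
;; A macro receives its arguments as unevaluated code and returns
;; new code; the compiler substitutes the result before compiling.
(defmacro unless [test then else]
  `(if ~test ~else ~then))

(unless false :yes :no)
;; rewritten at compile time to (if false :no :yes), which yields :yes
```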

~~~
johncagula
Your use of the language "passed up to the outside" is interesting... Up and
outside imply some sort of intrinsic directionality. Considering that
functional programming seems to capture the mathematically inclined, can
anyone speak to the concept of topology applied to code structure or
interpretation? For instance, does a closure imply some sort of filtration
structure on the "code space?" (by filtration here, I mean the concept from
algebraic topology)

~~~
geuis
Oh man, you lost me. I'm definitely not one of the mathematically inclined. My
analogies are rough and largely wrong and the only thing I'm trying to relate
is the particular way of thinking that at a critical moment helped me
understand a little more. I wouldn't put too much credence on the veracity of
"up to the outside". If it does hit on some underlying important point, it's
entirely by accident, I swear.

------
alexvay
I found LightTable* a great way to grok functional programming - you can
actually see how data moves around. Really brilliant.

*Lighttable IDE: [http://www.chris-granger.com/lighttable/](http://www.chris-granger.com/lighttable/)

------
kineticfocus
Just watched this decent intro released yesterday... (OSCON 2013: "Functional
Thinking" \- Neal Ford)
[http://www.youtube.com/watch?v=7aYS9PcAITQ](http://www.youtube.com/watch?v=7aYS9PcAITQ)

------
username223
> It always returns the same result given the same arguments. This is called
> "referential transparency" and you can add it to your list of five-dollar
> programming terms.

> It doesn't cause any side effects, e.g. it doesn't "change the external
> world" by changing external mutable objects or outputting to i/o.

These have little to do with one another. "sort :: [a] -> [a]" sorts a list,
perhaps using quicksort. If you want to randomize the pivots, it's "sort ::
[a] -> IO [a]", and everything in your program that sorts things has a
different type, even though it produces the same value.

------
gbog
I have been tempted by functional programming, and it is interesting, but I'm
not sure how the result would look for highly complex code.

~~~
eru
It is especially suited to highly complex tasks. (Your code should stay as
simple as possible.)

------
pat_shaughnessy
Another great post, Daniel... keep 'em coming!

~~~
nonrecursive
Thanks!

------
SeoxyS
The third example has an error: ((rand) > 0.5) should say (> (rand) 0.5)

~~~
nonrecursive
Thanks, fixed!

~~~
JPKab
I like your post, and think we need more people like you doing this. One
critique: I think you need to do a much better job explaining your first part
on recursion.

You need to walk through exactly what you are doing in your sum function,
because it is not at all clear to the uninitiated where in the hell the "acc"
variable comes from. Instead of explaining any of this part, which is perhaps
the most difficult part for a non-FP person to grasp, you jumped straight to
"by the way, use loop when there's a lot of numbers."

My coworker's comments when I sent him the link:

"Ok, I get it...... what's going on with the sum?..... whaaat?"

Thanks for doing what you're doing. I look forward to reading more of your
work.

~~~
nonrecursive
OK - I've updated that section. I hope it helps!

~~~
JPKab
Yes, that is a fantastic explanation. My coworker immediately got it, and was
simultaneously blown away by the concept of "arity."

Please keep it up.

One area you may be interested in: I've recently been playing around with
Incanter (a library for data analysis, similar to R or Python Pandas).

There is a distinct lack of tutorials for using it, and the documentation
isn't exactly friendly....

The reason I mention it is because I think data analysis is a place where
Clojure absolutely shines, and someone with your expertise could probably work
it into useful examples.

~~~
nonrecursive
Thanks for the suggestion! I know next to nothing about data analysis so I
might not ever get around to using or writing about Incanter, but I've seen it
in action and it looks super cool. Thanks again :)

