What's Wrong with the For Loop (notes-on-haskell.blogspot.com)
87 points by jrheard on Feb 23, 2012 | 74 comments



Is it just me, or was this article entirely about lambdas and higher-order functions? There wasn't a single example that captured a variable outside the scope of the function and passed it around.

Lambdas by themselves are incredibly useful, but closures have a lot more benefits. Your API can return functions that manipulate values through references without exposing the values, which gives you a lot of flexibility.

Personally, I consider higher-order functions and lambdas convincing enough to have them added to a language, but it takes more than that to convince a broad majority that does not have much exposure to functional programming. While showing how cool you can sum things up with Haskell is certainly nice, the degree of flexibility and usability that is added to APIs is much more convincing.

Edit: Had to write this in a hurry and now came back to make it more substantial.
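
For example, a minimal JavaScript sketch of that idea (makeCounter is made up for illustration): the API hands out functions that share and mutate a hidden value, without ever exposing it directly.

    function makeCounter() {
        var count = 0;  // private: reachable only through the closures below
        return {
            increment: function () { count += 1; },
            current:   function () { return count; }
        };
    }

    var counter = makeCounter();
    counter.increment();
    counter.increment();
    counter.current();  // 2 -- and there is no way to assign to count from outside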


Yes, the author seems to be referring to higher-order functions as "closures".

> Listing 1. The simplest possible closure

> 3.times {puts "Inside the times method."}

that's not the simplest possible anything. it's magical syntactic sugar gone wild.


Not really “magical syntactic sugar gone wild”, just “everything is an object”. In Ruby, integers can respond to messages just as well as anything else, and the built-in “times” message takes a lambda (“closure” in the article). There is some small irony in that “times” is just a very thin wrapper for a “for” loop.


You're right; I think judging the merits of closures on a comparison with for loops and lambdas alone is not comprehensive enough. Closures have been called "a poor man's way of doing OO", and they are in no way less powerful than any OO language I have seen; Javascript is a living example of this. To state one of the advantages of closures: I use currying constantly in my production code base. Java can do this with inner classes rather than natively, which means you have to write much more code, and more code for doing the same thing is only justified when it gives you more power. Even with more code to simulate closures, Java gives you only "incomplete" closures (final/class-level variable access only in the inner scope). And for currying one writes: "return new Function<K, V>(){ public void apply(...){...}}". Composing functions is also harder. So it's not just the for loop; there is more to it.
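
For example, a small currying sketch in JavaScript (the names and URLs are made up):

    // makeUrlBuilder returns a specialized function; baseUrl stays captured
    // in the closure rather than being re-passed on every call.
    function makeUrlBuilder(baseUrl) {
        return function (path) {
            return baseUrl + "/" + path;
        };
    }

    var apiUrl = makeUrlBuilder("https://example.com/api");
    apiUrl("users");  // "https://example.com/api/users"
    apiUrl("posts");  // "https://example.com/api/posts"

One line per specialization, versus the `new Function<K, V>()` ceremony above.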


This is a valid criticism, and if you wanted to really make the case, I think you'd say: "think about every time you have ever written an anonymous class. That anonymous class was probably an ugly closure." So Java programmers basically already know that if you want to list your MP3s you might have to write:

    String[] mp3s = new File("/home/drostie/Music").list(new FilenameFilter() {
        public boolean accept(File dir, String name) {
            return (name.length() > 4) &&
                name.substring(name.length() - 4).toLowerCase().equals(".mp3");
        }
    });
In Node.js this becomes:

    var mp3s = fs.readdirSync("/home/drostie/Music").filter(function (name) {
        return name.slice(-4).toLowerCase() === ".mp3";
    });
But as you said, this doesn't use the environment. The question is, when do you really need the environment?

One answer is that it's usually important when that environment is changing and you're modifying that environment; then you have to worry, "Am I modifying the right value? Do I have the right value?" And that's a much more difficult case. MIT's Abelson-Sussman lectures are very clear about this when they discuss assignment in Scheme: why do you care about assignment? Because sometimes you want new Date(), or Math.random(), or any number of other constructs which don't return the same thing when you call them twice in a row.

There are some simple use cases for this, like tracking how much work you're doing. In client-side JavaScript you might write something like this:

    var el_text_area = document.getElementById("wordbox"),
        el_charcount = document.getElementById("charcount"),
        clock;
    
    el_text_area.onblur = function () {
        // stop updating when the user isn't typing.
        if (clock !== undefined) {
            clearInterval(clock);
        }
    };
    
    el_text_area.onfocus = function () {
        el_text_area.onblur(); //clear the clock if it somehow already exists.
        clock = setInterval(function () {
            el_charcount.value = el_text_area.value.length;
        }, 50);
    };
All of those external variables are being dynamically modified.

The last case where I really used closures was as a debugging tool; I wrote a function which would take as input a function and 'wrap' it up for tracing its execution. Here it is in its full gory detail:

    function log_call(fn) {
        var count = 0;
        return function () {
            var out, name = fn.name + "#" + (count += 1);
            function log(verb, value) {
                console.log(name + " " + verb + " ", value);
            }
            log("<--", [].slice.call(arguments, 0));
            try {
                out = fn.apply(this, arguments);
                log("-->", out);
                return out;
            } catch (e) {
                log("--X", e);
                throw e;
            }
        };
    }
I use it by writing code like this:

    function fibs(n) {
        return n <= 1 ? n : fibs(n - 2) + fibs(n - 1);
    }
    // fibs seems slow, let's see what's going on here
    fibs = log_call(fibs);
    
    //later in the code
    fibs(22);
The above code is in fact horribly inefficient, and gets to fibs#57313 before it starts to construct the Fibonacci numbers. The debug trace shows all of this, if you run it.

We can solve that by now defining a caching decorator:

    function cached_onearg(fn) {
        var cache = {};
        return function (x) {
            if (cache.hasOwnProperty(x)) {
                return cache[x];
            } else {
                return (cache[x] = fn.call(this, x));
            }
        };
    }
    fibs = log_call(cached_onearg(function (n) {
        return n <= 1 ? n : fibs(n - 2) + fibs(n - 1);
    }));
The logger now shows a much more reasonable 43 calls.


I love closures. I can't live without them.

And I never, ever, use them to replace a for loop. Approaching closures from the "because they can replace loops in great ways!" is IMO pretty boneheaded.

Let's take something from iOS land, where blocks (Obj-C closures) make my life easier than it ever has been before. If you want to move an object, then destroy it once the animation finishes:

[UIView animateWithDuration:0.2 animations:^{ myObj.position.x += 10.0; } completion:^(BOOL finished) { [myObj destroy]; }];

How would you implement this without closures? How much of a clusterfuck will it be? Will you have to install something that will poll for the animation to be finished? Will you write a big "animation manager" class that will throw around a bunch of function pointers and callbacks and marshall all of your animations for you? Will you have to declare a load of protocols and support classes just to implement said callbacks?

The biggest boon to closures, for "regular" programmers, is the dramatic simplification of async code.


I don't know a lick of iOS or Obj-C, but this line is absolutely true:

> The biggest boon to closures, for "regular" programmers, is the dramatic simplification of async code.

Consider the way it's currently done in Java:

    listener.addCallback(new AsyncCallback<ReturnType>(Parameters ..) {
        public void onSuccess(ReturnType returned) {
            System.out.println("Successfully returned!");
        }

        public void onFailure(Throwable thrown) {
            System.out.println("Unsuccessfully returned!");
        }
    });
This is unbelievably ugly, and yet this is about as easy as it gets in Java. In order to make asynchronous callbacks "easier" you have to instantiate an anonymous class with very precisely named methods.

It's not only wasteful, but the potential for error is huge. Closures could make this much, much simpler.
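
With closures (here in JavaScript, using the Node-style error-first convention; `listener` is a hypothetical stand-in for the Java API above), the same thing is just:

    // hypothetical stand-in so the sketch runs; a real API would call back later
    var listener = { addCallback: function (cb) { cb(null, "result"); } };

    listener.addCallback(function (err, returned) {
        if (err) {
            console.log("Unsuccessfully returned!");
        } else {
            console.log("Successfully returned!");
        }
    });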


You also need to do this with closures if you have more than one method.


In a language without closures [1], you'd do something like this:

    class __AnimationHandler(superclass):
        def __init__(self, obj):
            self.obj = obj

        def animations(self):
            self.obj.position.x += 10.0

        def onCompletion(self, finished):
            self.obj.destroy()

    ui_view.animateWithDuration(0.2, handler=__AnimationHandler(my_obj))
Although I definitely prefer closures, this is hardly a clusterfuck. But it would be fuglier in C/C++.

[1] Yes, python has closures, which obey confusing rules and often work. But the pythonic way is an object handler.


Python's closures are exactly like, say, scheme's, except that bindings are local by default.


No, they aren't exactly the same:

    x = 0
    def foo():
        x = x + 1
        print x

    UnboundLocalError: local variable 'x' referenced before assignment
The javascript version works fine.

    var x = 0;
    var foo = function() { x = x + 1; console.log(x); }
I realize there are ways around this, but they are considered unpythonic. For example:

    x = [0]
    def foo():
        x[0] = x[0] + 1
        print x[0]
This works as you'd expect.


As I said, except that bindings are local by default.

Python3's nonlocal allows explicit enabling of scheme-like behaviour.


> How would you implement this without closures?

You don't have to look too far to find out: blocks haven't always been supported, and the older animation APIs still exist on UIView. You need to set a completion callback selector instead of passing a block. It's certainly uglier, on both sides of the API, but particularly on the implementation side. Blocks wrap up the state they need by default (well, for the majority of cases) which saves a lot of tedious work.


Animations were possible before blocks appeared. The non-block-based APIs are still around:

-beginAnimations:context: -commitAnimations

etc.

The "completion" feature can be implemented via delegating. One thing to note here is: If you don't need the completion block you can also just write:

myObj.position.x += 10.0

without blocks at all. At least on OS X, every animatable object has an animator proxy that works like this:

[myObj animator].position.x += 10.0;

Then an implicit animation kicks in. The default implicit animation duration is 0.25s, but this can all be configured.

The original Core Animation (without blocks) made working with animations so much easier and cooler. You could do everything you can do with the blocks API - although the blocks API is much nicer of course.


Arguing that closures are great because they let you replace for loops is like saying the iPhone is great because it has a better texting interface than a Nokia 3310: it's entirely missing the point.

What makes closures powerful is that they open up a whole new way of coding. Two things specifically make this possible: that they can be treated as data[1], and that they can capture local variables.

Closures are, for example, what makes the entire premise of OS X's Grand Central Dispatch framework possible. You can create a closure and seamlessly execute it in a background thread, on a different core. You can build complex queues and dependencies of operations and use all cores in parallel, while keeping a very simple interface to the whole system.

Here's an example pulled from my toying around in Ruby, trying to implement a machine learning algorithm.

    thetas = gradient_descent 0.1, 10 do |theta0, theta1|
        J theta0, theta1, training_set
    end
This code uses a gradient_descent function and passes it a closure to find the values of theta0 and theta1 which minimize the function. It will automatically figure out how many arguments to minimize, based on the closure's `arity`. The closure captures the `training_set` variable and uses the arguments to call the cost function `J`. This would not have been possible, or nearly as easy, without closures. Peruse the entire <60-line implementation at [2].
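
For readers more at home in JavaScript, here is a rough sketch of the same arity trick; gradient_descent_js, the numerical gradient, and the data are all made up for illustration (this is not the code from [2]):

    function gradient_descent_js(alpha, iterations, cost) {
        var n = cost.length;  // arity of the cost closure, like Ruby's Proc#arity
        var thetas = [];
        for (var i = 0; i < n; i++) { thetas.push(0); }
        var eps = 1e-6;
        for (var it = 0; it < iterations; it++) {
            var grads = thetas.map(function (_, j) {
                var plus = thetas.slice();  plus[j] += eps;
                var minus = thetas.slice(); minus[j] -= eps;
                // central-difference numerical gradient along dimension j
                return (cost.apply(null, plus) - cost.apply(null, minus)) / (2 * eps);
            });
            thetas = thetas.map(function (t, j) { return t - alpha * grads[j]; });
        }
        return thetas;
    }

    // The closure captures training_set, just like the Ruby version does:
    var training_set = [[0, 1], [1, 3], [2, 5]];  // hypothetical (x, y) pairs from y = 1 + 2x
    var thetas = gradient_descent_js(0.1, 1000, function (theta0, theta1) {
        // squared-error cost for a straight-line hypothesis
        return training_set.reduce(function (acc, p) {
            var err = theta0 + theta1 * p[0] - p[1];
            return acc + err * err;
        }, 0) / (2 * training_set.length);
    });
    // thetas is now approximately [1, 2]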

Closures open up a whole new world of possibilities.

[1]: Most languages cannot serialize a closure, though. Lisp is pretty unique there.

[2]: https://gist.github.com/48a7006e138460b65173


> Most languages cannot serialize a closure, though. Lisp is pretty unique there

Perl can also serialise closures using the Data::Dump::Streamer module - https://metacpan.org/module/Data::Dump::Streamer


If you want to sequentially execute a block of code with a variable that takes successive values, a for loop is probably just what you want. Nothing wrong with that.

If you want to process a sequence of values, and filter, aggregate etc., a for loop is probably too low-level; with judicious use of some libraries, you can work to a higher level abstraction. This is no different in C than it is in Haskell; it's just harder in C, because C is low level to start with, and doesn't have very powerful abstraction features.

The real risk here, I think, is using the wrong language to solve a set of problems, rather than using the wrong feature.


There's nothing wrong with a for-loop. Nothing at all. Using closures instead of an explicit loop is only a minor gain.

The real use comes when you want to perform more complex control-flow logic with different transformations. When you are working with planar undirected graphs instead of arrays (for example, HoMM3's hexagonal grid), repeatedly writing out your breadth-first-search code to do simple graph modifications/transformations gets old really fast.


Well no, there is something wrong with a for loop--it tells you the "how" rather than the "what".

With a map you know it's just transforming the list with a function; with a reduce you know it's combining the elements of a list with a function.

With a for loop, it could be any of those, or other things besides. And it takes more code--more to read, more to think about--to encode any of them.


That's pretty trivial though: one line of code instead of 2 or 3? Writing a for-loop to iterate over an array is so simple that Python is cutting out "reduce" entirely because for-loops aren't that bad, really.

The real gain is when you are working with data structures other than arrays; then a closure is 1 line rather than a 25-line breadth-first traversal! If you think writing lots of for loops is tedious and confusing, try writing lots and lots of BFSs, Queue and all.


Well no, it isn't trivial. It's just like using a loop instead of a goto: it makes your meaning clearer. The whole point is to make the semantics of your code easier to understand, not just to conserve lines. (Although conserving lines is also important--in my experience, a short one-line function is much easier to understand than a short three-line function.)

The reason Python is cutting out reduce is that Van Rossum has an irrational antipathy towards functional programming, and that Python is really an imperative language throughout. (I know all too well as a functional programming aficionado forced to use Python.)


A for loop is definitely not the place to convince people that closures are any good. A for loop is already essentially defining a block of code to be run on each item in the loop, and it's hardly very different.

My closure AHA! moment was writing asynchronous networking code for the iPhone. Instead of dealing with delegates and writing protocols, I could just pass it a block of code to run when the request completed. It took a multi-file, many-line design and brought it down to just a couple of lines. It was great.


> A for loop is definitely not the place to convince people that closures are any good. A for loop is already essentially defining a block of code to be run on each item in the loop, and it's hardly very different.

Well, yes and no. A `for` loop is a general-purpose iteration method (a pretty shitty one, when it's C-style); closures let API authors and developers craft more precise iterations with clearer meanings, or nice shortcuts.

It's indeed not where closures shine, though. Closures tend to shine when developers build their own new control structures, which is very hard to demonstrate to people far from "enlightenment".

Take Smalltalk's conditional. I find it one of the most beautiful pieces of code I know of, in usage and implementation alike. But for 9 java developers out of 10 the reaction will most likely be "java's got `if`, why would you care?"
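
In Smalltalk, the booleans are objects and the conditional is just a message that takes two blocks. A minimal sketch of that shape in JavaScript (all names invented for illustration):

    var smalltalkTrue  = { ifTrueIfFalse: function (t, f) { return t(); } };
    var smalltalkFalse = { ifTrueIfFalse: function (t, f) { return f(); } };

    // comparison "methods" return one of the boolean objects...
    function greaterThan(a, b) { return a > b ? smalltalkTrue : smalltalkFalse; }

    // ...and "if" is just a message send carrying two closures
    greaterThan(3, 2).ifTrueIfFalse(
        function () { return "bigger"; },
        function () { return "not bigger"; }
    ); // "bigger"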


I find Smalltalk's conditional mundane and relatively uninteresting. It's a straightforward use of closures to implement switching on boolean truth values; it removes one concept that's frequently fundamental in languages, at the expense of making the language harder to analyze statically. Rather than being able to infer conditional control flow syntactically, you need data flow analysis; and without static types, you don't know for sure that something is even boolean until runtime. I'm not sure the beauty outweighs the costs.

I think your reaction - seeing it as beautiful - is more likely if you're relatively new to closures and code as data. The fact that treating code as data lets you write your control flow as libraries, doesn't mean that it is good to write your control flow via libraries. The power of an abstraction is not necessarily well correlated with the desirability of its use; I rather suspect the reverse may be the case.


I've certainly found them handy to encode + self-document specific loops and reduce duplication, e.g.:-

https://github.com/lorenzo-stoakes/weak/blob/master/board/he...

(helper functions for unit tests within a chess engine.)

Having closures allows you to be very flexible in the way you interact with a block of code, and there are some things you just cannot de-duplicate without them.
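
A sketch of the kind of de-duplication I mean (the helper names here are hypothetical, not from the linked code): shared setup/teardown lives in one place, and the varying part arrives as a closure.

    // setUpBoard/tearDownBoard are hypothetical helpers
    function withBoard(testBody) {
        var board = setUpBoard();   // shared setup, written once
        try {
            testBody(board);        // the part that varies, passed in as a closure
        } finally {
            tearDownBoard(board);   // shared teardown runs even if the test throws
        }
    }

    withBoard(function (board) {
        // assertions against `board` go here
    });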

However they are just a tool and as the cliché goes - 'after a while everything starts to look like a nail' - so as with all programming it's a matter of using taste and good judgement to use them where they reduce duplication/coupling + increase maintainability and [the endlessly subjective but equally vastly important] readability.


Define a for-loop as 'local closure' and you are a hipster again.


Not entirely unfair, actually.


Don't get me wrong, I love closures, but all of these examples are basically as miserable as the "3.times" example.

"Oh, hey! We can replace a for loop doing addition with a fold, with only the added conceptual complexity of ensuring that (+) is an associative operator, and making sure that we don't accidentally blow our stack by unexpectedly recursing too far because we assumed the wrong evaluation order." (Which by the way his example does.)


Wrong evaluation strategy, really. He uses “foldl” (lazy) but gives it the semantics of “foldl'” (eager).


I have never seen a non-trivial example of fold* (i.e. anything other than "+" or "*") where I didn't have to spend several seconds or minutes rewriting it (mentally or in typing) as a loop to figure out WTH it is doing.


What about these?

    f1 list = foldr (&&) True list    -- && is logical and
    f2 list = foldr (||) False list   -- || is logical or
    f3 list = foldl max (head list) (tail list)
    f4 list = foldl min (head list) (tail list)
Do you consider them trivial, or too hard to understand without rewriting as a loop? Honest question.

I think if you use a non-associative function as an argument, then fold is hard to understand. But then an explicit loop won't help much.


Trivial; they're in the same category as add and multiply.

Check out http://lambda-the-ultimate.org/node/587 for some non trivial examples.


Coincidentally, I've always found finding the maximum or minimum of a list with a for loop really finicky. It's much easier with a fold.
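
For example, in JavaScript:

    // max of a non-empty array as a fold: no index bookkeeping to get wrong
    var list = [3, 1, 4, 1, 5, 9, 2, 6];
    var max = list.reduce(function (a, b) { return a > b ? a : b; });
    // max === 9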


I know exactly what you mean. The fact is, every single program you write is going to be executed by a little stateful machine ticking along. So if your programming language doesn't model such a machine and instead uses higher-order constructs, then I will have to mentally translate everything you write to make full sense of it.


> then I will have to mentally translate everything you write to make full sense of it.

But why? A SQL query or a Java/Python/whatever program might get translated to zillions of CMP and JMP instructions, but I see no reason why you should have to do that translation yourself in order to read the code. In fact that would practically preclude the use of any minimally high-level programming language.


I believe that was the GP's point, he just omitted the sarcasm tags ;)


I'm surprised by the hostility in the comments.

Also, everybody's focusing on closures, which makes sense given the article, but I think the main point is about higher-order functions which require first-class functions (what the author calls closures).

Using higher-order functions in place of loops is much like using structured programming instead of goto.

A for loop is just a specialized form of goto that makes what it's doing more explicit, reduces spaghetti code, and makes mistakes harder to make.

A fold or a map is just a specialized form of a for loop that makes what it's doing more explicit, reduces spaghetti code, and makes mistakes harder to make.

Everybody decrying the article would probably never dream of using gotos for control flow--structured programming is patently superior! And yet you are also reluctant to use higher-order functions.

I think this is a perfect realization of PG's ideas in his Blub essay.


I can't speak for everyone, but I'm hostile because I have read it a hundred times before. A nontrivial number of programmers discover functional programming and then write identical blog posts with identical trivial examples.


The problem is that nontrivial blog posts are only going to be interesting to people who already like functional programming. Since people who like functional programming are no doubt a minority on HN, you're going to see far fewer articles like that.

Coincidentally, I remember when I was first learning Perl, then C++ and then Java, a lot of the stuff I read was far more evangelical about OOP.


"Well," I said to myself as I read this article, "this wouldn't work if you had to iterate through multiple arrays simultaneously." Then I was enlightened as I realized the intended purpose of the zip function.

This was followed by disappointment, as I was unable to experience a similar enlightenment for cases that did not have completely embarrassing data parallelism. For instance, given arrays A, B, and C, we want to set A[i] = B[i-1] + C[i+1] (with special cases at the boundaries). How would you do this with the map/filter/fold primitive recursion patterns?

Edit: Upon thinking about it more, would slices do the trick? So, in Python:

  A = [0,0,0,0,0]
  B = [1,2,3,4,5]
  C = [6,7,8,9,10]
  A[1:-1] = map(sum, zip(B[:-2], C[2:]))
Is there a better way to do this? I would say it is much less easy to read than A[i] = B[i-1] + C[i+1]. I would probably have to add a comment to the effect of "this is what this line is doing", which goes against the self-documenting argument.


Yes. You can say

  a = zipWith (+) (0 : b) (tail c ++ [0])
and that would make a[i] = b[i-1] (where b[-1] is 0) + c[i+1] (EDIT: where the last element of c is 0 (thanks tkahn6!)).

Also, interestingly, if you have lazy evaluation, you can use a zip to produce the fibonacci numbers: just zip the list up with itself.

  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
so you cons 0 to 1 to the zip, and the zip is combining 0 with 1, and then 1 with 1, and then 1 with 2.

  fibs[0] = 0
  fibs[1] = 1
  fibs[i, i>=2] = fibs[i-2] + fibs[i-1]
and this looks like the math behind it.


In Q/KDB:

    /1 million random floats between 0 and 10
    a:1000000?10.0; b:1000000?10.0; c:1000000?10.0;
    sum each flip (a;prev b;next c)

    /To do it in parallel
    sum peach flip (a;prev b;next c)

Sometimes it is like living in a world where people refuse to use sheet music/clefs/bars and instead they read and write music by listing the notes, their frequencies and how long to play them for. And you can't tell them otherwise. People who recoil at APL because there are a few unusual symbols involved will gladly play a guitar and deal with all sorts of arcane symbols. If we really wanted to do a good job we'd go beyond ASCII.


Hey, stick around and comment more. The HN APL/J/K/Q mafia needs all the help we can get :)


Perhaps you want the Haskell zipWith function? It takes two lists and, instead of zipping them together -- i.e., applying the tuple constructor (,) -- it lets you apply an arbitrary function.

I think from your python, you want the first element of A to be 0? (Python is not my best language.) Thus in Haskell,

    a = 0 : zipWith (+) b c


Say you gave me two papers, each with a column of numbers, and an empty one for the result, and asked me to process them in the manner given. I would place them side by side, move B down a line, C up a line, then add them row by row.

The lining up is a matter of prepending the lists with some border values; then, use map to do the add. Finally, discard the edge cases.


That is a good analogy, thank you. It does seem to produce code that is more difficult to understand, however.


I'm not sure your solution computes what you want.

A[2] = B[2-1] + C[2+1]

A[2] = B[1] + C[3]

A[2] = 2 + 9

A[2] = 11

Your solution gives 12 for A[2].

    f xs ys = zipWith (+) (0:xs) $ tail $ ys ++ [0]

    f [1..5] [6..10] == [7,9,11,13,4]
I assume you want B[-1] and C[5] to yield 0 in this case.


> But it does highlight the key failing of for loops: they conflate three separate kinds of operations -- filtering, reduction and transformation.

Unsurprising for a Haskeller, but he forgot the fourth and likely most common use of for loops: iterative code that has side effects.


Or in fact anything that is sequential. There are a lot of loops that don't necessarily have side effects but are still much easier to express as sequential calculation. If f(a[i]) depends on f(a[i-1]) then you must necessarily recurse, and if you have limited stack space then you must either use a loop or rely on tail-call optimization. Both are valid, but suggesting that "filtering, reduction and transformation" will help is absolutely not true.


> If f(a[i]) depends on f(a[i-1]) then you must necessarily recurse

I think the author already covered this case. It's basically a classic usage of fold:

    fold f a 0 = f(a[i], f(a[i-1], f(..., f(a[0], 0)...)))
He even points out that it's better for the compiler if code is written this way. I.e.:

    a = map g inputData
    b = fold f a 0
The compiler now knows the first line can be parallelized, whereas the second cannot. If the map were mixed up with the fold code (which it often is in big for loops), the compiler would have a harder time of it.
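
The same separation, sketched in JavaScript (g and f are stand-ins for whatever transformation and combining function you have):

    var inputData = [1, 2, 3, 4];
    var g = function (x) { return x * x; };          // independent per element
    var f = function (acc, x) { return acc + x; };   // inherently sequential

    var a = inputData.map(g);    // [1, 4, 9, 16] -- parallelizable in principle
    var b = a.reduce(f, 0);      // 30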


> Unsurprising for a Haskeller, but he forgot the fourth and likely most common use of for loops: iterative code that has side effects.

In haskell, this is done as follows:

    sequence_ (map (\x -> putStrLn (show x)) [1..10])
This is equivalent to:

    for i in range(1, 11):
        print str(i)
However, in my experience, this is a highly uncommon use of for loops. I've almost never done it, either in haskell or in python. Consider the side effectful code:

    for line in open("urls.txt").readlines():
        scrape_url(line)
Do I really need to do this in order?


I don't think this is the most common--even when you have side-effects, most loops are conceptually going over some collection. Python takes this to an extreme with its for...in loop which is just an ugly map, but it's true in most languages.

Just because you have side-effects does not mean you can't express your computation more neatly using a higher-order function. And some of the imperative uses of a loop are easy to abstract into a higher-order function as well: while (true) {...} becomes forever, for example.
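
A sketch of that last point in JavaScript (`poll` is a hypothetical body):

    // while (true) { ... } abstracted into a user-defined control structure;
    // note that forever(body) never returns.
    function forever(body) {
        while (true) { body(); }
    }

    forever(function () { poll(); });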


map is a pretty bad example in python due to it being eager while for can be made lazy with yielding instead of returning. Further, list comprehensions and generator expressions tend to be (at least in my experience) fewer characters than map/filter. To get lazy map/filter you need to go out of the core into itertools.


All of that sounds like faults of Python, particularly with its horrible lambda syntax. (E.g. 'map (+ 2) list' is much easier to read than a list comprehension.) Besides, I am not sure why you couldn't generalize map to lazy structures the same way you do with for--Python might not do that, but that is again Python's fault.

Conceptually, a for...in loop is basically like a map, except that it doesn't return a list. Of course, this is yet another fault of Python with an arbitrary separation of statements and expressions. If the for...in loop was just an expression--and that's basically what a list comprehension is, if you squint--it would probably be exactly like map.

My real point was that the only way to get C-style for-loop behavior in Python is to do for i in range(a,b), which is basically equivalent to map fn [1..10].


It's clear: closures actually solve the problem of random array access in high-level languages. You are either after one value or all of them, and these are two very easily optimized access patterns. Out-of-bounds array access isn't a constant threat.

Edit: I consider loop based programming my most essential coding pattern.


Turns out I don't seem to understand closures.

I mean, I use what I think are closures in Javascript, but I thought they were to "prevent the pollution of the global namespace" (a noble endeavor). I don't get how they relate to map, reduce (fold?) & filter.


Although it's more precise to use the word to mean "a fresh environment that inherits bindings from the parent environment" (your Javascript meaning), a lot of people for whatever reason just use "closure" to mean "any anonymous function" (the meaning in the article.)


Honestly this has been the first time when I've seen them interchanged.


Long story short, OP is conflating closures with lambdas.

A closure is a lambda that is closed over its free variables, meaning that the variables which are not defined in the body of the function (variables that are not locals or formal arguments) are bound to their values in the lexical scope at the time the closure is constructed, and do not change with the calling environment.

In javascript if I do:

    var x = 2;
    var f = function() {
        return x;
    };
    console.log(f());
    (function() {
        var x = 3;
        console.log(f());
    })();
    
It will print '2' and '2'. f is a closure: it is closed over its free variable 'x', and 'x' in f is bound to the nearest 'x' that was in scope when f was written.

A lambda is just a function, it does not necessarily imply there are free variables which it is closed over.

    function() { var y = 23; return y * 40; }
is a lambda with no free variables.

Technically the following implementation of evens is a closure:

    isEven x = (x `mod` 2) == 0
    
    evens xs = filter isEven xs
Can you see why? `isEven` is a free variable in evens which is bound to the isEven defined above it. evens is a closure, but generally we wouldn't refer to it as such.


Thanks for the explanation, closures have been a little tricky for me to wrap my head around.

I'm a little confused about whether the closure closes over the "value" of free variables, or the reference.

For example, in python:

    i = 1
    def f():
        return i

    i = 2
    def g():
        return i
Both f() and g() return 2, even though when f() was defined, shouldn't i=1 have been closed over? That's what I find confusing here: changes to the environment outside the closure affect things inside the closure.


This is a property of how javascript and python do closures. In the above code, i refers to the same piece of memory in every case, so f and g are closed over i. The reference is preserved, but the value is not.

This behavior is the motivation behind the do keyword in CoffeeScript.
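
A quick JavaScript illustration of "the reference is preserved but the value is not", plus the workaround that CoffeeScript's do automates (a minimal sketch):

    // The classic symptom: all three closures share the one binding of i.
    var fns = [];
    for (var i = 0; i < 3; i++) {
        fns.push(function () { return i; });
    }
    fns.map(function (f) { return f(); }); // [3, 3, 3], not [0, 1, 2]

    // The usual fix: an immediately-invoked function creates a fresh binding
    // per iteration, which is exactly what CoffeeScript's do automates.
    var fns2 = [];
    for (var j = 0; j < 3; j++) {
        (function (k) {
            fns2.push(function () { return k; });
        })(j);
    }
    fns2.map(function (f) { return f(); }); // [0, 1, 2]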

Here's more on this:

http://stackoverflow.com/questions/233673/lexical-closures-i...


Yeah I wouldn't let that guy near any code at our company.


Articles like this really make me think... that maybe the Haskell and FP people have their heads in the clouds.

I've played with OCaml, Haskell, Erlang, Scheme and Common Lisp, and I always end up going back to Python, C, and C++ for all of my "real" development.

IMO the abstractions available in the more mainstream languages are easier to understand and use in almost every case. They also perform better on existing hardware.

This particular article is really bad because most of its arguments are straw men.

For example:

"Instead, higher level operations like sum, prod and concat are defined in terms of folds. And your code is then written in terms of these higher-level reducing operations, making your code more concise, easier to read, easier to write, and easier to understand."

Building higher level abstractions from the language's lower level abstractions is basically the entire point of programming in any language. It's convenient that Haskell's library includes functions like sum, prod and concat, but it's hardly the only language including them, and it's trivial to write them correctly and reuse them in the languages without them.

And then he says stuff like "If you're still not worried about bugs, think about the possible typos. Then think about how often you write loops like this." Because nobody ever makes typos while using folds and closures, right?


The bog-standard for loop that transforms an array mentions the array twice and the index variable five times, giving you, let's say, 6 unnecessary chances to goof something:

    for (int i = 0; i < array.length; i++) { newarray[i] = f(array[i]); }

by contrast, the functional-programming method allows you to define the new array immediately in terms of the old one, just mentioning the latter once and without an index variable:

  newlist = map f oldlist
To my eye, this gives me far fewer chances to botch it, and my reader can instantly tell that

  (1) newlist corresponds to oldlist in length and order
  (2) the only difference between the two is that the elements are related by the function f.
Every time I come across a for loop that builds up an array, I worry whether the bounds checking is wrong, whether the right index variable is used (esp. if there are nested loops), and whether any elements were added to the array before or after the loop. Lots to worry about.


Or you can worry about performance. Or memory usage. Or blowing the stack. Or...

There's no free lunch.


You still have to worry about performance and memory usage with a for loop. You don't have to worry about blowing the stack, but you wouldn't have to worry about that in a language with proper tail call support either.

Admittedly, you do have to worry about the stack in Haskell, but that is entirely the fault of laziness and not recursion or higher-order functions. You're not really worried about a call stack, you're worried about having a really big thunk (deferred expression).

There may be no free lunch, but some are much better value for money than others.


> Because nobody ever makes typos while using folds and closures, right?

If you type flod instead of fold, the compiler will complain. If you have an off-by-one error in your for loop, not so much.


If you type foldr instead of foldl it won't complain.


In most cases it will because they have different types:

    foldl :: (a -> b -> a) -> a -> [b] -> a
    foldr :: (a -> b -> b) -> b -> [a] -> b
So the only time it will be the same is if a and b are the same type.


> So the only time it will be the same is if a and b are the same type.

Which they will be in most of the example code he used. They're not the best examples.
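
For instance, in JavaScript terms (reduce/reduceRight being rough analogues of the left and right folds): when the element and accumulator types coincide, swapping the fold direction type-checks fine but silently changes the answer.

    // Same element and accumulator type, so a fold-direction mixup
    // is not a type error, yet the results differ:
    ["a", "b", "c"].reduce(function (acc, x) { return acc + x; }, "");      // "abc"
    ["a", "b", "c"].reduceRight(function (acc, x) { return acc + x; }, ""); // "cba"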


This is all language astronauts. There's nowt wrong with the for loop for most cases I can think of.

Closures are way more expensive if you include the call overhead.


Whether they are more expensive or not is a function of implementation and the language runtime.

There is something wrong with a for loop, just like there is something wrong with goto--it's at too low a level of abstraction. A goto could signify any sort of jump; a for signifies any sort of repeated operation. A fold or a map or a filter or a zip signifies a very specific type of repeated operation that is hard to mess up.


Rubbish - your head is in the clouds. This is the real world where we solve problems with the tools we know and work on a common ground to everyone else.

I get the feeling a lot of people on HN just fell out of university or work in a niche area. The rest of us don't care for such isolation and prefer to work with deterministic and well known tooling.

<sarcasm>

If that is the case, then the summation operator in mathematics should be replaced with Lambda Calculus or the world will fall apart...

Linux obviously doesn't work because it uses for loops in the Kernel.

Windows needs to be rewritten in Haskell or LISP to make it pure.

OMG I just realised, I'm not allowed to walk a list in C without implementing car and struct pair. Damn my stack just blew.

</sarcasm>


Your examples reveal that you don't get the point. Using a for loop instead of a higher-level abstraction like reduce or sum is a lot like using lambda calculus instead of a summation in mathematics. We're trying to climb out of the Turing tarpit here by introducing useful abstractions, just like the summation operator was once introduced to make mathematics easier.



