A Worst Case for Functional Programming? (dadgum.com)
132 points by platz 1258 days ago | 110 comments

"Turning your code inside out" is a great piece of advice, as it often opens up abstractions and refactorings that you didn't realize were there to begin with. The same idea is behind several common object-oriented design patterns (Command, Mediator, Strategy, Visitor), but it's baked into many functional programming languages.

For example, say we want to write a function to compute square roots. A common approach to computing sqrt(n) is to start with a guess of x = 1.0, and keep replacing x with 0.5 * (x + n/x) until the relative difference between subsequent guesses is small enough.

  sqrt n = loop x0 x1
    where
      loop x y = if converged x y
        then y
        else loop y (0.5 * (y + n/y))
      converged x y = abs (x/y - 1) < 1e-10
      x0 = 1.0
      x1 = 0.5 * (1.0 + 1.0/n)
That's good, but it has the test for convergence all mixed up with the logic for generating the guesses. What if we could factor out the code that generates an infinite sequence of guesses?

  sqrtGuesses n = go 1.0
    where go x = x : go (0.5 * (x + n/x))
Note that this works in Haskell because of laziness, but it's simple in any language that has a mechanism for delaying computations. Now that we've decoupled the method for generating a sequence of guesses, we can write a function that checks for relative convergence:

   converge (x:y:rest) = if abs (x/y - 1) < 1e-10
     then y
     else converge (y:rest)
and define the square root function in terms of these

   sqrt n = converge (sqrtGuesses n)
The logic of the program is now much cleaner, and we've got a useful function 'converge' which can be re-used in other parts of the program.
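As a quick illustration of that reuse (my example, not the parent's): the same converge works unchanged for any other fixed-point iteration, e.g. computing the golden ratio from x -> 1 + 1/x.

```haskell
-- Same convergence test as above, repeated so this sketch is self-contained.
converge :: [Double] -> Double
converge (x:y:rest)
  | abs (x/y - 1) < 1e-10 = y
  | otherwise             = converge (y:rest)
converge _ = error "converge: sequence too short"

-- The golden ratio is the fixed point of x -> 1 + 1/x.
goldenRatio :: Double
goldenRatio = converge (iterate (\x -> 1 + 1/x) 1.0)
```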

This kind of 'turning inside out' is often possible in functional languages, often leads to more compact and more compositional code, and is one of the reasons that I enjoy programming functionally so much.

Nice example, but to those considering implementing this in their own code, please don't use that convergence test in practice.

First, you really don't want to do that division x/y, which is slow, and which fails if y==0. It's much cheaper and safer to compare "abs(x-y) < 1e-10*y".

Also, you almost always want to compare (x-y) to an absolute convergence limit (in addition to your relative tolerance), in case x and y are very near zero.

Finally, if you really want a generic converge function that can be re-used elsewhere, you might want to allow for non-convergent processes. This requires tracking the number of iterations, and bailing when it gets too large.
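Folding all three of those fixes into a converge function might look like this (a hedged sketch; the Maybe-returning signature and the names are my own, not from the original post):

```haskell
-- Relative tolerance rtol, absolute tolerance atol, and an iteration cap;
-- Nothing signals a non-convergent (or too-short) sequence.
converge :: Int -> Double -> Double -> [Double] -> Maybe Double
converge maxIter rtol atol = go maxIter
  where
    go 0 _ = Nothing                                -- bailed out: too many iterations
    go k (x:y:rest)
      | abs (x - y) < rtol * abs y + atol = Just y  -- no division, no x/y blow-up
      | otherwise                         = go (k - 1) (y:rest)
    go _ _ = Nothing                                -- sequence ended without converging
```

The absolute tolerance atol is what rescues the case where the guesses approach zero, and the iteration cap turns a hang on a non-convergent process into an explicit Nothing.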

By the way, now that your convergence function wants to know (x-y) rather than x and y individually, you might consider rewriting your logic functions to return the predicted change, rather than the final state. This avoids forcing a re-calculation of the change, which typically was already known in the logic function. It also avoids floating-point problems in which the calculated change x-y differs from the predicted change that produced y in the first place.

Of course, part of the beauty of his approach is that you can incorporate most of your changes just by editing the converge function and not messing up the rest of the code.

The last point does require changing the main code, but it's still easier to do with an external converge function like this.

> First, you really don't want to do that division x/y, which is slow, and which fails if y==0. It's much cheaper and safer to compare "abs(x-y) < 1e-10*y".

I might be missing something, but won't your cheaper and safer fragment also fail when y is zero (or negative)?

Sorry, I wasn't clear. I meant that performing the test, as originally given, will induce a divide-by-zero error. The modified test is safe against that; however, as you point out, it will still fail to indicate convergence (which is one reason to include an absolute tolerance as well).

And as you point out, you need to take the abs(y) as well, on the right-hand side.

Thanks for adding clarity.

The other nice thing about factoring code like this: it often turns up patterns that match existing library functions. For instance, most of sqrtGuesses can be replaced with the "iterate" function:

    sqrtGuesses n = iterate (\x -> (0.5 * (x + n/x))) 1.0

Am I dreaming, or is this an example straight from SICP? (I remember something like this...)

EDIT: Ok, so obviously not "straight" from SICP. But pretty close.

Chapter 1: "Example: Square Roots by Newton's Method" http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-10.html...

SICP has something to do with square roots in the first chapter, although I don't think they do this particular refactoring. I first saw it (or something like it) in John Hughes' Why Functional Programming Matters.

Which, by the way, is an excellent paper and totally worth reading.


Well, it can't be straight out of SICP because SICP uses Scheme and this code is in Haskell. And, in fact, if you did a straightforward translation of the Haskell code back into Scheme it wouldn't work. Figuring out why it doesn't work in Scheme but it does in Haskell is left as an exercise.

Of course, in later chapters SICP covers both streams [0] and lazy evaluation in the interpreter [1]. It's a really great book! I'm completely jealous of anyone who was introduced to CS via a serious studying of SICP.

[0] https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-24.htm...

[1] https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-27.htm...

scheme lists aren't lazily generated?

You use streams a la SRFI-41 for this.

Or Lazy Racket [in so far as Racket is a Scheme].


This is the kind of `combinator` that you can see in "Functional JavaScript" by M. Fogus or "JavaScript Allongé" by R. Braithwaite. IIRC On Lisp also talks about this kind of thing (not surprisingly).

C# can do this: http://dotnetfiddle.net/dCm475

(Sorry, it's just a not-so-well-known but awesome feature)

Now here's the same example in F# (a functional language); it compiles into IL similar to that produced by your C# code. Which do you find more readable?

    module Program =
        let n = 42
        let rec sqrtGuesses x = seq {
            yield x
            let next_x = 0.5 * (x + (float n / x))
            yield! sqrtGuesses next_x }

        sqrtGuesses 1.0
        |> Seq.pairwise
        |> Seq.pick (fun (x, y) ->
            if abs (x - y) < 1E-10 then Some y else None)
        |> System.Console.WriteLine
        System.Console.WriteLine (sqrt (float n))

You could use `Seq.scan` to get rid of the manual recursion:

    let n = 42.
    Seq.initInfinite float 
    |> Seq.scan (fun x _ -> 0.5 * (x + n/x)) 1.0 
    |> Seq.pairwise
    |> Seq.pick (fun (x, y) -> if abs (x - y) < 1E-10 then Some y else None)
    |> printf "%f"

    printf "%f" <| sqrt n

Thank you for posting this.

When I saw the original version in F# I thought "I think I can do better" after a long absence from using F#. When I got back to my laptop and searched for the to-be-improved snippet I saw yours.

Now I'm hooked again.

Note that given appropriate `MyLinqExt.Scan` and `MyLinqExt.Pairwise` extensions, you could write this in C# just as concisely.

Hmm.. No. :) I guess that's discussed all over the thread here. I do like C# in general and agree that it can be used in a functional style, but 'just as concisely'? Can't agree with that.

Needs the boilerplate (class definition, Main vs. .fsx for example), most keywords/methods seem to be longer (especially the Linq ones, SelectMany, FirstOrDefault etc) and there's no inherent way to build a pipeline as in "no |>".

Obviously you can write quite similar code in C# and I would agree that C# doesn't make it especially hard, is even a roughly decent language for that sort of stuff. Still, the F# snippet looks a lot nicer to me.

Which leads to .. Just a matter of preference, ymmv etc - there's no claim of superiority here.

It's a wash. To understand that thing you posted, I'd have to look up what the exclamation point does to "yield", why adding it forces you to repeat the function name, and whether those "|>" sequences are typos or some weird operator. The other version has some random StudlyCaps I have to look up. Meh.

Well it's only a wash if you are familiar with one and not the other (and that's the case IIUC). You don't intuitively know what the C# version does either.

For future reference: "|>", like "$" in Haskell, is just shorthand for an opening parenthesis whose matching close sits at the end of the expression (or at the next instance of "|>" or "$").

So in Haskell:

    sum $ filter (> 2) [0..10]
is a less noisy way of saying:

    sum(filter (>2) [0..10])
About the exclamation points, I believe they force evaluation - it has to do with ensuring the expression is evaluated. At least that is the case in Haskell.

Imperative / shell programmers may be more familiar with |> as a pipe.

  echo "Hello World" | wc -c

  let wc (s: string) = s.Length
  "Hello World" |> wc
are equivalent

|> would be the flip of $ since the arg comes before the function

So equivalent to Control.Lens.(&)
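Indeed; a quick Haskell illustration of that flipped-$ operator (nowadays (&) also lives in Data.Function in base, and Control.Lens re-exports the same operator):

```haskell
import Data.Function ((&))  -- (&) is flip ($)

withDollar :: Int
withDollar = sum $ filter (> 2) [0..10]   -- function before argument

withAmp :: Int
withAmp = [0..10] & filter (> 2) & sum    -- argument flows left to right, like F#'s |>
```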

yield and yield! are both keywords used in sequence expressions. To compute a sequence of type seq<'T>, you can use a sequence expression, in which the yield keyword takes a value of type 'T, and adds that value to the computed sequence, while yield! takes a value of type seq<'T> and appends that value to the computed sequence.

You also have keywords "return" and "return!" for F#'s version of monads. In Haskell, "return" is just a function, while "return!" is not needed (though I wish it was present in Scala).

In F#, yield emits a new element in the sequence (just as in C#), while yield! emits all of the elements from another sequence you provide. This effectively allows you to "splice" sequences together, or create a new sequence by prepending/appending some elements to an existing sequence.

As another commenter mentioned below, yield! has nothing to do with repeating the function name -- the name is repeated because it's a recursive function (i.e., a function which calls itself). So the sqrtGuesses function produces a sequence which emits (yields) the value it's given (x), computes the next value of x, stores it in the variable next_x, then emits all of the values in the sequence produced by calling sqrtGuesses for the next value of x (next_x).

The |> is called the "forward pipeline operator" in F#, and it's very simple -- whereas you normally call a function f with a value x by writing f x, you can swap the order of the arguments with |> to write x |> f. This effectively allows you to write nested function applications without all of the parentheses: f(g(h(x))) can instead be written x |> h |> g |> f or h x |> g |> f or whatever.

The function name is repeated because it's recursive!

For those complaining about the verbosity of c#, here is an alternate c# representation:

    public static void Main()
    {
        const int n = 21;

        Observable.Generate<double, double>(1.0, x => true, x => 0.5 * (x + (n / x)), x => x)
            .Scan(new { prv = 0d, cur = 0d }, (prev, curr) => new { prv = prev.cur, cur = curr })
            .FirstAsync(_ => Math.Abs(_.cur - _.prv) < 1E-10)
            .Select(_ => _.cur)
            .Subscribe(Console.WriteLine);

        // this is just to compare values, so is not part of the solution
        Console.WriteLine(Math.Sqrt(n));
    }
That is everything you need to run the program (technically one expression; the line breaks are for readability). And it's non-destructive, has no side effects, and is pretty easy to read.

I would never argue that c# is a 'proper' functional language, but the poster above wrote in the style of idiomatic Java, something a lot of c# devs do. I don't think c# has an 'idiomatic' style. It is very flexible, and while never truly functional, can also be used like Scala.

Having someone re-write my 7 line Haskell program into 40+ lines of C# is the best argument I can think of for why it's worth learning a functional programming language ;)

This kind of lying (why count the usage example?) and gloating attitude is what I find repulsive in the Haskell community.

Downvoted. I think the ";)" indicates pretty well that this is tongue in cheek. I wouldn't draw any conclusions about the Haskell community, which is very nice, from this.

I also find it to be the exact other way around.

A lot of mathematical transformations, on the inside, break down neatly into sequential mutations, and you really do need to express them imperatively for maximum performance. Fortunately, you can look at the math and verify they compose appropriately, and use testing to verify you've implemented each step correctly.

But if I'm doing anything involving state, particularly interdependent state like a simulation, functional algorithms make a lot of sense as a way to ensure that I always have a valid object handle (because I'm always returning a new, valid object as a result of a computation) and give you a somewhat easier route to refactor while maintaining the same semantics (because there are no rogue mutations or global state updates I need to conform to), as well as parallelizing updates to the object graph for performance.

Clojure's semi-famous "ants" demo is a good example of this.



Hmm, if the mutation is just purely sequential, it's just an implementation detail. Ideally, you would express the solution at a high-level, close to the mathematics and without mentioning mutation, and have your code optimized to the mutable version for you. After all, it's just a series of transformations over some data--exactly what functional programming is!

This sounds like an appeal to a "sufficiently smart compiler", but really we're already part of the way there with things like vector and stream fusion in Haskell. I know that some of Haskell's vector functions are already smart enough to run in-place if they can get away with it.

> This sounds like an appeal to a "sufficiently smart compiler"

Because it is. It's a long road from "gee, I changed something, better copy everything" to something as good as FFTW or ATLAS. GHC is pretty smart, but not smart enough to be trusted.

> It's a long road from "gee, I changed something, better copy everything" to something as good as FFTW or ATLAS.

CUDA implementations of FFTs and matrix operations are faster than both FFTW and ATLAS, and they are neither sequential nor functional.

CUDA, C, and Haskell all have domains in which they typically outperform one another. The math vs. simulation divide sketched in this blog post is an expression of the author's own psychology more than anything else.

To be fair, the FFTW/CUDA thing is due to fundamentally different hardware architectures which drove design constraints for these types of libraries. FFTW was never meant to run on a dedicated, ultra-parallel processor with highly optimized floating point instructions (GPU), but it is incredibly fast considering it runs on general purpose hardware. I am sure the FFTW authors could have done something to squeeze out more performance if they controlled both the hardware and software as NVidia does. And the transfer time to/from the GPU does matter, especially for smaller/more frequent operations...

All that aside, the psychology of pure functional vs. pure OOP vs. some hybrid methodology is really interesting, and even the view of what a "clean solution" is becomes tainted based on past experiences with other code written in that style.

When he started to write about writing (military) simulations in a functional style, I really expected a reference to this paper http://haskell.cs.yale.edu/?post_type=publication&p=366 which is one of the rare examples of a real empirical experiment in software engineering. (And it is going to turn twenty this year; we are not learning very fast as a community.)

This is not the fairest paper (it could be described as: two Haskell language designers kick the ass of a program manager struggling with pre-STL C++), but it's an interesting read.

It would be great to see this paper's experiment conducted in the modern world. Haskell vs Python vs C++11 vs Clojure vs whatever.

Actually it would be great to have a site like http://benchmarksgame.alioth.debian.org/ that focuses on program readability, development speed etc.

Not a holy grail, but take a look at Rosetta Code: http://rosettacode.org/wiki/Rosetta_Code

Interesting paper.

When I look at the outputs, I think "this looks like a very simple homework assignment". If they had provided sample input, as well as sample output, I'd be able to test my belief. I imagine I can produce functionally-equivalent Python code in a few hours (note: Python was not a reasonable option in 1993).

In any case, this is a test in prototyping, with guidance of "5 person-days of effort", so you would certainly expect languages like C++ and Ada to fail spectacularly.

It was interesting to read how bloody fast the Lisp implementation was written. That seems insanely fast. Presumably the author was one heck of a specialist in the field. And... it is notable that, with so much less documentation, it still did not score much worse than the Haskell solutions did. (I really need to learn Lisp well... Also, to be somewhere I could use it.)

There's nothing wrong here, but it all feels to me like a bit of a false dichotomy. When discussing the functional approach to the simulation, the author still talks about objects, such as tanks and shells, whose state gets tracked. So it isn't as if an object approach has been completely abandoned.

Similarly, there's nothing preventing an implementation in an object-oriented language from making the states of things at the beginning of a time step immutable. In fact, this is done all the time in scientific simulation (necessarily so, because you might try a time step that is too long, and have to re-try a shorter step from the same initial state).

So the author's conclusion that "the second [fix] is to avoid any kind of intra-frame pollution of object data by keeping a list of all changes..., then applying all of those changes atomically as a final step" sounds like a letdown to me. The imperative codes I work with do this all the time. It's just that the comments say mundane things like "Update the pressures" or "Report out final concentrations" rather than "Apply changes atomically to avoid intra-frame pollution."

Your imperative version isn't atomic in a multithreaded environment, which is a major concern. (Or if it is, it has a complicated system for providing atomicity.)

Try to implement infinite levels of undo in an imperative solution. The functional approach makes it trivial, since everything in the simulation is just a big reduce function applying a sequence of changes to a state object. Mutable state on the other hand, makes it very hard to write.
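A toy sketch of that "big reduce" shape (State and Change here are stand-in types, invented just to show the idea):

```haskell
-- The entire history is the running fold of changes over the initial state,
-- so "infinite undo" is just indexing into the history list.
type State  = Int          -- stand-in for a real simulation state...
type Change = Int          -- ...and a real change/event type

applyChange :: State -> Change -> State
applyChange s c = s + c

history :: State -> [Change] -> [State]
history s0 cs = scanl applyChange s0 cs

undoTo :: Int -> [State] -> State
undoTo n states = states !! n   -- step back to the state after n changes
```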

I'm sure you're right about infinite levels of undo being hard to implement in an imperative language. I do think functional languages have a lot going for them (even though I didn't think the original post made a very convincing case for functional languages).

I would be interested in hearing what advantage anybody thinks "infinite undo" would bring to a scientific simulation. I can sort of conjure up a scenario where you're doing a probabilistic optimization, and you want to be able to re-visit every single time step, in order to perturb the boundary conditions and watch how the system evolved differently. But the cost of storing all the meaningless "false" states that the solver generated as it searched around in the solution space for each time step, hardly seems worth it (compared to, say, some kind of way-pointing capability, which many scientific software programs have got already).

Infinite levels of undo does sound like something that would cause a big memory leak for a long-running simulation. It could be accomplished in an imperative language if it was written in a functional style (assuming the language is sufficiently powerful).

I didn't quite follow your comment. If the imperative solution tracks a list of changes why couldn't it implement infinite levels of undo in a similar fashion to the functional approach?

It's a question of persistent data structures and sharing. Efficient code without mutability tends to recreate only the parts that change, and so you don't have to store a fresh copy of the whole world each update. Of course there is nothing preventing you from doing this in an imperative language, but a preference for mutability discourages it.

Thomas Kristensen recently released a Propagator implementation for Clojure (https://github.com/tgk/propaganda) -- Propagators as in Sussman's paper "Art of the Propagator" and his StrangeLoop talk "We Really Don't Know How to Compute" (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...).

"Art of the Propagator": http://web.mit.edu/~axch/www/art.pdf

"Revised Report on the Propagator Model": http://groups.csail.mit.edu/mac/users/gjs/propagators/

Original Scheme implementation: http://groups.csail.mit.edu/mac/users/gjs/propagators/propag... [tar file]

Thomas' 2013 Clojure Conj talk: http://www.youtube.com/watch?v=JXOOO9MLvhs

Yeah, I've often been interested in RTS games and that kind of lock-step "flip" between two "pages" of states sounds like the right way to go.

I tinker with SpringRTS a bit and actually its developers have had a lot of trouble with not using that kind of frame-by-frame approach - since every unit is fiddling with the same shared state, it makes threading problematic.

Actually, my complete idea (thinking of this using a more procedural language with mutable objects) was to have two state-pages, but only one state-page is read-only. Basically, you have two state-pages: Present and Future. Present is read-only, and Future can only be edited by the same actor. This lets the actor do self-modification without the overhead of the message-queue, but peer-modification goes through the message queue. Peers can read each other's read-only "Present", but not their mutable "future", and can insert queue entries to modify each other's future. Of course, you'd have to sort the queue-entries before executing them during the queue-processing step to make it deterministic (RTS games use lockstep synchronization so everything must be deterministic).

This would be appropriate if your code had a lot of procedural logic going on like programmatic animation or something (again, SpringRTS) and so putting every self-modifying operation into a queue might be too heavyweight. But that's not very functional.

Though I am not much familiar with functional programming, as far as I have tried, one of the hardest problems in functional style is referencing a mutating object.

Where an object is immutable, it has to create a new instance of itself for mutation. That invalidates existing references (if the language supports object references). The only solution I can figure out is putting unique tags on each object and looking them up each time. In other words, I have to reference them indirectly, which introduces a lookup cost. And this cost is usually unacceptable.

If the data structure is a pure tree, functional style is usually superior for simulations - easier to write, and easier to debug by retaining all the intermediate states. But most games are graph-structured. They usually contain many links to arbitrary nodes, which are very hard to update together with the target object's mutations.

If I am wrong, please correct me. I want to use functional styles on games, but I don't have a good solution for referencing problem.

You can look at "FRP".

The pure-function approach is to have one outer loop to update state (and all earlier references are frozen, not just some), so there are no references from previous-iteration state to future-iteration state (but the reverse is OK).

I have heard of it. But the problem is that the data structure is not a pure tree, so we need arbitrary references within a version of the state, and FRP doesn't seem to provide a solution for that.

Here's a (web based) pure functional version of the game 'bomber man'; the readme should answer your questions.


But that doesn't answer the GP's worry about lookup cost. The thing is, it is hard to keep direct references in a nested data structure (like a graph or tree) even in a language that supports direct references.

An example: I want to create a tree in which each node keeps a track of its children as well as its parent. AFAIK, it is impossible to keep track of both children and parent (with direct references) in an immutable data structure.

The lookup cost is O(log N). Why do you claim this is "usually unacceptable"? I think it is virtually always acceptable.

The constant factor is huge compared to a straight pointer.

Most parts of most programs aren't performance centric and can take a hit. As evidence, see all the Ruby/Python programs that have constant penalties that go up to 200 in many cases.

I was talking about pretty realtime applications like games. I'm sorry that was unclear.

In such applications, I wouldn't even want to get the GC penalty, so I'd be constrained to a very small set of languages.

Years ago a good friend and I wrote a zombie outbreak simulator, and ran into a similar problem.

Our solution was to have "move" and "think" phases for all entities (usually run think->move->think because a movement could result in a collision with somebody else who had wanted to move), and to use a messaging system to have entities announce their intentions ("attack entity X", "open door", etc.). The messaging system also let us filter messages and do other clever things--a later evolution of this same architecture became a double-buffered entity simulation framework with green threads for handling updates.

It's a hell of a lot simpler to write a simulation with objects who can understand certain types of messages--the code is simpler, the mental model is tons simpler, and in general it seems to be just easier.

> Our solution was to have "move" and "think" phases for all entities

Oh good, that was my really simple solution for multithreading a large agent-based simulation on the order of 100k entities. Each step has a read-only phase (decide) followed by a write-only phase (act).

I figure that's good enough (if not quite optimal) for maximizing CPU usage on a simulation running on one computer.
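In a functional rendering, that decide/act split might be sketched like so (the Entity type and its decision rule are made up purely for illustration):

```haskell
-- Phase 1 (decide) reads only the frozen current frame; phase 2 (act)
-- applies every decision at once, producing the next frame.
data Entity = Entity { pos :: Int } deriving (Eq, Show)

decide :: [Entity] -> Entity -> Int
decide _world e = if pos e < 10 then 1 else 0   -- arbitrary toy rule

stepFrame :: [Entity] -> [Entity]
stepFrame world =
  let actions = map (decide world) world        -- everyone decides against the old frame
  in zipWith (\e d -> e { pos = pos e + d }) world actions
```

Because every decide call sees the same immutable frame, the per-entity decisions are trivially parallelizable.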

> "because a movement could result in a collision with somebody else who had wanted to move"

Similarly useful when simulating physics: evaluate the forces affecting entities, then evaluate movement for the next step regardless of collisions, and then resolve the collisions by applying a virtual repulsive force moving the objects out of the "impossible superpositions".
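A minimal 1D sketch of that penalty-force idea (the radii, stiffness, and overall setup are invented for illustration):

```haskell
-- Two unit-radius bodies at x1 <= x2: any overlap left over after the naive
-- position update produces a virtual repulsive push proportional to the overlap.
resolve :: Double -> (Double, Double) -> (Double, Double)
resolve stiffness (x1, x2) =
  let overlap = max 0 (2 - (x2 - x1))   -- bodies touch when x2 - x1 == 2
      push    = stiffness * overlap
  in (x1 - push, x2 + push)             -- move both out of the superposition
```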

How do you address the problem where you get impossible superpositions that are difficult to resolve via repulsive forces, leading to a massive deposition of energy in the objects?


Cap the forces and/or trigger a more refined resolution mechanism, and maybe also force-set the objects to a plausible {position, velocity}. This kind of simulation is by definition an approximation. You can detect impossible superpositions because they result in impossible forces. This is of course not applicable to the general case, so it depends on whether you want realistic racing physics or massive-unit-count RTS physics.

Repenning proposed anti-objects to solve this problem, using collaborative diffusion to take the logic out of the actor and place it in the environment:


It's the way to go. Just don't forget to decompose entities into components.

Components are a thing--depending on the language, even a good thing.

A couple of anecdata here:

- I wrote a (nearly) pure functional Katamari-type 3D game (OpenGL + ODE for physics). You can find the source here[1]. I didn't find the functional style to be a problem at all.

- At about the same time I was involved in a large closed-source project that simulated the criminal justice system of a well-known European country. Criminals, police, judges, prisons, etc. and their interactions with each other. It was written in OCaml in a pure functional style. Again, no problems writing it functionally.

[1] https://github.com/blue-prawn/OCamlODE/blob/master/examples/...

I'm assuming you can't really give more information but is there some research paper or at least the name of said European simulation. It sounds pretty interesting and somewhat relevant to what I'm currently doing :)

I'm afraid I really can't say any more about that. I got an email from the contractor asking me to remove a posting last time I mentioned more details about that project (FWIW I have no idea why they were so secretive/sensitive about it).

Worst case is probably something like Warshall's algorithm (in-place mutation of a bit matrix for optimum performance). See warshall.c in BYacc or lib/bitsetv.c in Bison (relevant part reproduced below).

  void
  bitsetv_transitive_closure (bitsetv bsetv)
  {
    bitset_bindex i;
    bitset_bindex j;

    for (i = 0; bsetv[i]; i++)
      for (j = 0; bsetv[j]; j++)
        if (bitset_test (bsetv[j], i))
          bitset_or (bsetv[j], bsetv[j], bsetv[i]);
  }
While the algorithm can obviously be reproduced in a functional language, it does rely on destructive updates for its performance.

This is why Haskell (ST) and Clojure (transients) provide local mutable state in the form of "linear types" where it is impossible to read a mutable value outside of tightly controlled conditions.
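For instance, Warshall's closure can be written against Haskell's ST-based mutable unboxed arrays: the updates are destructive inside runSTUArray, but the function as a whole stays pure (a sketch of my own, with the n x n Bool matrix flattened row-major):

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (newListArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (elems)

-- In-place transitive closure of an n x n Bool adjacency matrix.
closure :: Int -> [Bool] -> [Bool]
closure n cells = elems $ runSTUArray $ do
  m <- newListArray ((0, 0), (n - 1, n - 1)) cells
  forM_ [0 .. n - 1] $ \i ->
    forM_ [0 .. n - 1] $ \j -> do
      ji <- readArray m (j, i)
      when ji $                         -- j reaches i: OR row i into row j
        forM_ [0 .. n - 1] $ \k -> do
          ik <- readArray m (i, k)
          jk <- readArray m (j, k)
          writeArray m (j, k) (jk || ik)
  return m
```

The ST type variable guarantees the mutable array cannot escape, which is exactly the "tightly controlled conditions" the parent describes.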

As Guy Steele wrote, "Lambda, the ultimate imperative"

I'm not sure I would call ST a linear type to any significant degree. It's only very slightly such a type.

The quotes were the part where I was bluffing. :-/

Yet another one confusing functional and immutable. Is it OK that I am happy with 50% OOP, 50% functional and 100% immutable?

How does one use immutability extensively without adopting functional style?

By keeping old data, imperative style.

It seems to me that if you go far enough down that route, you're going to end up thinking functionally, even if the data flow is obscured by out parameters and such.

The author's main observation is that with a non-functional approach one has to "avoid any kind of intra-frame pollution of object data by keeping a list of all changes then applying all of those changes atomically as a final step". This doesn't seem like it has any obvious drawbacks: the simulation algorithm defines all objects as updating at once, so it's not unnatural in an OO world to have this "update step".

The author calls this universal update step a "fix" and then goes on to say that a functional approach would avoid it, but in reality never explains why a problem exists in the first place.

I don't get it. There's a straw man of mutating state mid-step, which no sane simulation would do.

  var new_state = current_state.clone();
  run_tick( current_state, new_state );
  current_state = new_state;
Get your infinite undo with copy-on-write semantics for your objects & pushing current_state onto an undo list before replacing it with new_state.

I get that a purely functional simulation would have these free.

I made a load-testing tool using Clojure [1] and it did seem like a bit of an unusual use case for functional programming.

Upon researching further, I found that was actually to be expected as Object Oriented Programming was originally implemented as a way to help model simulations:

"The formal programming concept of objects was introduced in the 1960s in Simula 67, a major revision of Simula I, a programming language designed for discrete event simulation..." [2]

[1] https://github.com/alanning/meteor-load-test

[2] http://en.wikipedia.org/wiki/Object-oriented_programming#His...

The solution of queuing updates is natural and concise in E with its eventual sends:

    tank <- move(displacement)

is essentially like

    setTimeout(0, function() { tank.move(displacement); });
except it also works if the tank lives in another process or machine. The distributed/concurrent case was what it was for, but it can be nice for sequential programming too.

The talk "Deconstructing Functional Programming" by Gilad Bracha should really be a must-watch if you're thinking about having any meaningful debate about functional programming.

tl;dr: Almost everything we call 'Functional Programming' is equally applicable to 'Object Oriented Programming' if it's done in a sufficiently flexible language.

I didn't find this talk all that informative. In particular, Gilad's rant about Milner type systems is really aimed at a strawman and shows his ignorance of the topic. The rest of the rant, about monads, seems to boil down to "I don't like how they are named in Haskell", which to me seems like a very shallow criticism.

I enjoyed Bracha's talk; criticism like that needs to be heard. Of course FP isn't the solution to all problems. Still, I felt he employed a number of strawmen. The HN discussion it provoked was excellent, however: https://news.ycombinator.com/item?id=6941137

Well, I agree criticism must be heard, but much of what Bracha talked about was unfounded and in some cases just plain wrong.

As more experienced functional programmers in that thread pointed out, Bracha showed many of the misunderstandings which FP newbies typically have.

I think Gilad is right to claim that FP is almost completely devoid of meaning as a technical term (though it certainly names a particular movement and community). I think, however, that after that point in his talk he becomes more obsessed with describing his own favorite OO language and ascribing its features to this term FP, which by his own argument is not terrifically meaningful.

He then finishes with a scathing review of the things that a Haskell programmer would actually call novel and interesting.

It's a very strange talk. I don't think more than 10% of it is content though.

Turing-complete programming languages are equivalent. That's the beginning of the comparison, not the end.

When you have a discretized timestep the functional approach falls into place quite easily. I think the worst case for functional programming is where you have an incoming stream of events (e.g. user requests) and need to update (your internal model of) the world based on those.

This is where functional reactive programming really shines. It lets you model time explicitly. You can easily work with streams of events, but you can also work with things that change continuously over time.

With a standard callback/state-based approach, you end up implicitly modelling time through mutable state. This has a few major disadvantages: it's difficult to talk about history, and time is distinctly second-class. The "resolution" over time is also implicit and ends up depending on your implementation.

An FRP approach lets you make time explicit and a first-class citizen. In turn, this allows you to directly talk about the relationships between time-varying values and their histories--for example, you can "integrate" over time explicitly.

My favorite example comes from a little game of life applet[1] I wrote. Basically, I have an explicit timer that controls animation steps (called life)--it's a stream of () which has a value every n milliseconds. I also have a behavior called "paused" which corresponds to whether the pause button is pressed. I can combine these directly:

    when (not <$> paused) (step <$ life)
(The step :: Life -> Life function represents iterating a single generation.)

I also have a stream of (x, y) coordinates from the mouse. There's a modify :: (Int, Int) -> Life -> Life function that flips a cell at the given point. I can combine these into another Life -> Life stream thanks to partial application:

    modify <$> clicks
Now that I have these two streams, I can just combine them with union:

    when (not <$> paused) (step <$ life) `union` (modify <$> clicks)
The cool thing about this short snippet is that it shows how I can combine three inputs--a button, a timer and the mouse--into a single stream very easily. It's also very flexible. Right now, the "when" only applies to the first input: you can modify using the mouse even when you're paused. If I wanted to change this, I could just wrap the whole thing in the when:

    when (not <$> paused) (step <$ life `union` (modify <$> clicks))
Anyhow, I think FRP is an awesome approach for this sort of problem. Another thing that I did not mention is that you can write your program in a way that's "resolution-independent" over time. If your library models time continuously, you get the moral equivalent of an SVG except for reactive code! I think that's pretty cool too, although it turns out to be tricky to implement.

[1]: http://jelv.is/frp

The game of life is about as discrete-time as it gets. As you say, continuous treatment of time is fiddly, and that's what I was trying to talk about. (And you're right that FP is probably a better approach if you need history).

(I suspect a correct implementation using an object-oriented approach is also tricky, tbh - but you can write the naive OO version and it will work most of the time, and fail less badly than a functional version at the same level of completeness.)

The buttons and clicks that I also support in my tiny example are not discrete. Or rather, not in the sense you meant. The grid of life is modifiable at any time--whether it's running or not--and can be paused.

I only talked about it a bit, but we can also model continuous time. Not things like events but things like the mouse position. After all, how you sample the mouse position should be an implementation detail! This is what I meant when I was talking about being "resolution independent". The fiddly part is not the FRP abstraction but implementing it efficiently. And, again, I'm talking about continuous time not like events (which may happen at any time but are otherwise discrete) but like a continuous function in calculus.

I disagree. In particular, the actor model is easy to express in FP, and it corresponds quite nicely to what you're looking for.

An incoming stream of events is easy -- it is just a virtual clock instead of a physical clock. Multiple multithreaded streams are trickier.
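
To make the "virtual clock" point concrete, one hedged sketch is to treat the event stream as data and fold over it (the event types here are made up):

```javascript
// The world after n events is just a fold of those events over the
// initial world; replaying history and testing are the same call.
function applyEvent(balance, event) {
  switch (event.type) {
    case 'deposit':  return balance + event.amount;
    case 'withdraw': return balance - event.amount;
    default:         return balance;
  }
}

function worldAfter(events) {
  return events.reduce(applyEvent, 0);
}
```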

I think the implication was not a single user, but many. Think store. And, like most arguments, I'm sure it can successfully be used on all sides of the war.

For fun, expand on that store. There is a real sense where a nice simple and convenient function helps a ton. Deciding when and where you need to restock something. Now, remove more and more assumptions and at some point the "implementation" details that make it no longer "functional" start to get in the way. Of course, at some point there isn't much you can do about it, and you have to be in the weeds.

The blog post tries to discredit imperative programming by describing naively incorrect simulation implementations and attributing them to imperative programming. A final sentence describes the well-known, obvious, and correct way, where two world states are maintained, i and i+1, and state i+1 is calculated based on state i.
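
For what it's worth, that two-state approach is tiny to sketch (the Tank shape here is hypothetical):

```javascript
// State i+1 is computed purely from state i: every tank reads only
// the old world, so update order within a tick cannot matter.
function stepWorld(oldTanks) {
  return oldTanks.map(t => ({ x: t.x + t.vx, vx: t.vx }));
}
```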

Then the post briefly suggests that functional programming can somehow automatically transform an incorrect imperative solution into a correct functional one, without actually showing how that is achieved.

The post contains no content of value.

To me, simulations always seemed to fall into a nice place for concurrent programming with messaging. In that case, the abstractions are not as numerous as when forcing things into a single threaded world.

Isn't this attacking a strawman? Can't you always make object mutations construct new objects and return those? Basically "fake" the immutable part of functional programming?

Immutability is one aspect of the family of techniques called "functional". Yes, you can write immutable code in C or Java, but the compiler can't verify that intent in general, which means that a whole class of bugs remains possible, and certain kinds of high-level general functions (combinators) and automatic optimizations (inlining, rewrites, and skipping computations) are not possible to write/apply.

I think interactive applications are a good example.

We have what I consider 'a pile of hacks' (monads) in functional programming languages to handle these, because it's very difficult to make useful software which doesn't allow user input. Whilst functional in some sense, most of the usual benefits are undone and we are left with what is effectively imperative code written in a functional style.

However, the libraries hide the implementation details for us...

Personally, though, I don't think it's as cut and dried as 'OO is good for X' or 'FP is good at Y'.

OO and FP are tools that should sit with good ol' procedural programming, array programming and other paradigms - they all have their place, and languages with too much emphasis on one in particular end up with weird constructs to work around that.

I/O monads are an example for FP, but if we look at an OO language, say C#, we have static classes as a way to turn an object into a bunch of free-floating state and stand-alone procedures - because sometimes /procedural/ is the only approach, and we would otherwise have to shoehorn it into OO with singletons, which usually incur needless performance pain. The static keyword then hides that our object is actually a load of procedural code for us.

Monad is a wonderful abstraction and not at all hacky. In my C, I miss the ability to parametrically combine "things that produce values".

I guess it's a matter of taste - I consider it a clever trick and totally against the spirit of FP, but that's my take...

How would you describe "the spirit of FP"?

The purpose of any computer program is to produce side effects. A program which has no side effects has no observable behavior and is therefore useless. In that sense, side effects are good, so embrace them.

That implies that the purpose of a functional program is also to produce side effects.

The only problem with side effects is managing them properly. Side effects can become tangled and conceptually incoherent. Functional programs can help manage side effects.

For example, if I call (draw_line P1 P2), I expect a line to appear on the screen joining points P1 and P2. That is a side effect.

As a simpler example, if I call (say "Hello"), I expect the line "Hello" to be output to my terminal. That too is a side effect.

Those are good side effects.

The way I like to encapsulate side effects in a functional language (such as Fexl, a language which I wrote), is to use closures. Take your mutable data, and bury it inside a closure.

If you object to closures (pun intended), then think again. If you're a C programmer and you write printf("Hello\n"), you are in effect using a closure. How? Because stdout is effectively buried inside printf -- i.e. you don't have to pass it as a parameter.

If I call (draw_line P1 P2), I am using a closure. How? Because most likely some X window gadget is buried somewhere down inside draw_line.

Fexl does provide a "var" construct which is a mutable variable. Pure functionalists might object to that. But why? The X window gadget is a variable. Your screen memory is a variable. All the memory in your computer is a variable. The buffer for stdout is a variable. The whole world is a variable. Stop fighting it.

I would prefer to avoid using "var" in Fexl and aim toward pure functions, but sometimes that's ridiculous. For example, what if I wanted to simulate the behavior of draw_line, so that instead of the effects going to an X windows gadget, they get stored in memory where I can use them differently, maybe for testing?

In that case I might use a var which simply collects all the draw_line and other graphic commands into a big list.
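
That trick isn't specific to Fexl; here's a hedged sketch in JavaScript, with made-up names, burying the mutable command list inside a closure:

```javascript
// Hand out a drawLine that records commands instead of touching a
// screen; the mutable list is invisible from outside the closure.
function makeRecorder() {
  const commands = [];   // mutable state, buried in the closure
  return {
    drawLine: (p1, p2) => { commands.push(['line', p1, p2]); },
    getLog:   ()       => commands.slice()   // defensive copy
  };
}
```

A test can then swap the recorder's drawLine in for the real one and inspect the log afterwards.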

So let's stop fighting side effects and learn to embrace them, and recognize the value of functional languages in managing those side effects rationally.

LZ-based compression?

Mutable state is an optimization. Sometimes a powerful optimization with little loss of clarity, sometimes utterly ruinous to any hope of understanding the code. Usually it's between the two extremes. But I think the default should be the functional style. To reach for imperative tools immediately is usually a premature optimization.

It's not just an optimization. Functional style is often less clear and more confusing than using mutable state.

For example, I move my browser window. My browser models this as a Window object whose Position property is mutated to a new value. This is clear and easily understandable, in part because it matches the user's experience of moving a window.

A functional approach might be to construct a new window at the new position that shares some structure with the old window. This seems totally weird, and furthermore, it's hard to think of any benefit to keeping the stale, no-longer-onscreen window around, while it's easy to think of problems that might cause. We can pile more machinery on top (FRP, lenses, etc) but we only make the functional solution more complex.
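
Concretely, the functional approach described here would look something like this (the Window shape is made up):

```javascript
// The "new" window shares its unchanged fields with the old one;
// nothing obliges the caller to keep the stale value alive.
function moveTo(window, position) {
  return { ...window, position };
}
```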

(On that note, GUIs have got to be close to a worst case for functional programming.)

> it's hard to think of any benefit to keeping the stale, no-longer-onscreen window around,

Undo? No aliasing issues? Transactionality of the set of changes being done?
