
He simply made a lot of assertions that certain styles of programming are impossible. Those assertions feel convincing if you (a) don't actually know the techniques for achieving some of the benefits outlined and (b) try to program in a language that doesn't support them well, in a style which doesn't work well with them.

In some sense, despite the author's assertion that FP is mind-expanding and a great learning experience, I believe he still had a lot to learn: mostly that most of these techniques and benefits are workable, but that they take a genuine change in perspective (such as outlawing ambient state entirely).




Well, at least he gave some concrete examples. I was hoping for some concrete examples of what he was missing from you, because I am curious.

My own exposure didn't show me that much that was new/different in ways that were practically useful (and no, it wasn't writing FORTRAN or BASIC or Pascal in a different language).

Mind you, I have, for example, used (mostly) immutable data structures and generally write in what many would probably consider a somewhat functional style. I also like higher-order mechanisms, but none of these are exclusive to FP languages.

So as I wrote, I am looking for concrete examples.


Myth 1: If you restrict variables to be references alone, rather than mutable slots, then this argument vanishes. Additionally, these kinds of computations need not take forever. Case in point:

    fix f = f (fix f)  -- ought to take forever, right?
    fact' rec n = if n == 0 then 1 else n * rec (n-1)
Here `fix` appears to set up an infinite computation, but applying it as `fix fact'` produces a terminating function, the factorial.
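To make the snippet concrete, here is a self-contained version you can load into GHCi (type signatures added; the names match the snippet above):

```haskell
-- Lazy evaluation is what saves us: `fix f` only unfolds as far as
-- `f` actually demands its recursive argument.
fix :: (a -> a) -> a
fix f = f (fix f)

-- `fact'` takes its own recursion as a parameter; it never calls itself.
fact' :: (Int -> Int) -> Int -> Int
fact' rec n = if n == 0 then 1 else n * rec (n - 1)

-- Tying the knot with `fix` yields ordinary factorial.
factorial :: Int -> Int
factorial = fix fact'
```

Evaluating `factorial 5` in GHCi yields 120; the "infinite" unfolding stops as soon as `fact'` hits its base case and stops demanding `rec`.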

Finally, the lack of simultaneity in LC doesn't much hold you back from using concurrency, because you have two avenues: (1) model your concurrency abstractly within LC and then execute it specially, or (2) have concurrency applied implicitly, with optional hinting.

Both of these are a little non-obvious, but extremely workable.
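A toy sketch of avenue (1), with hypothetical names (`Proc`, `interleave`): represent a computation's steps as pure data inside the language, then interpret that data with an explicit scheduler. Nothing here is "really" concurrent, but the interleaving is under your control:

```haskell
-- A computation whose intermediate steps are visible as data.
data Proc a = Done a | Step (Proc a)

-- Run a process to completion, ignoring step structure.
run :: Proc a -> a
run (Done a) = a
run (Step p) = run p

-- A tiny round-robin scheduler: advance each process one step per
-- round until one finishes, then drain the other.
interleave :: Proc a -> Proc b -> (a, b)
interleave (Done a) q          = (a, run q)
interleave p (Done b)          = (run p, b)
interleave (Step p) (Step q)   = interleave p q
```

A real version would use a richer step type (e.g. effects or messages at each `Step`), but the shape is the same: concurrency modeled purely, executed by an interpreter you wrote.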

Myth 2: Summarized as: Turing completeness holds, perhaps? It seems to attack a certain strawman, but in doing so depends heavily on some definition of "functional" as a totally different form of computation. It's hard to respond concretely to such an abstract notion, but I'll try with two points.

1. LC and TMs are not equivalent on higher-order computation. While any function Int -> Int can be represented equally in each, functions like (Int -> Int) -> Int differ: in LC you cannot examine the input function (with tradeoffs and benefits), and in a TM you can (with tradeoffs and benefits). That's purely theoretical, though, since you can model higher-order computation as a function Int -> Int for most practical purposes.
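One way to picture the distinction in Haskell (an illustrative sketch, not from the article): a raw `Int -> Int` argument is opaque and can only be applied, while a defunctionalized representation, reified as data, can be inspected, which is the TM-style capability:

```haskell
-- LC-style: the argument function is opaque; all we can do is apply it.
applyTwice :: (Int -> Int) -> Int -> Int
applyTwice f = f . f

-- TM-style: reify functions as data ("defunctionalization"); now a
-- higher-order operation can examine the representation of its input.
data Fn = AddK Int | MulK Int

apply :: Fn -> Int -> Int
apply (AddK k) n = n + k
apply (MulK k) n = n * k

-- This inspection is impossible on a raw Int -> Int value.
isAddition :: Fn -> Bool
isAddition (AddK _) = True
isAddition _        = False
```

`isAddition` distinguishes two `Fn` values that might denote different functions, something no Haskell function can do to an `Int -> Int` directly; that is the theoretical gap being described.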

2. Turing completeness is not an end goal. Some languages today even question whether you want TC to hold all of the time (or only sometimes) and push right up against the border of TC from below. I think they demonstrate that taking TC as your ultimate arbiter of language comparison is shortsighted.

Myth 3: Referential transparency is not a property of a language or of a (poorly defined) style of language. It's a property of an analysis of a language: the language's definition/semantics/statics. I've discussed this elsewhere on this page in another comment concerning C.

The short of it is, however, that RT depends upon what you're willing to analyze as a frame of reference, and for any language you can pick frames which provide RT and frames which break it.

The author's chosen mechanism for demonstrating RT breakage is to pretend that variables in Perl are references and then show that mutability and dynamic scoping break them. He even goes on to pretend that universal use of dictionaries models reference, and breaks that too.

These are strawman arguments: deliberately pushing models beyond the domain where they hold, and then observing that they no longer hold.

Ultimately, this culminates in an unjustified argument that "referential transparency isn't all that desirable" due to "incredibly tight constraints" being untenable.

As a concrete counterexample, I give you pretty much the semantics of Haskell, which has a ton of practical code written in it and referentially transparent variable semantics by default. It goes much further than his ideas do, and achieves it in such a way that the programmer cannot break the model.

(For instance, if Haskell let you modify its own running memory and fiddle with arguments on the stack, then you could certainly use those facilities to break RT... but you can't.)
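A small illustration of what RT-by-default buys you (hypothetical names): any name can be replaced by its definition without changing the result, so sharing a computation and repeating it are observably identical:

```haskell
f :: Int -> Int
f n = n * 2

-- Name the result once and reuse it...
shared :: Int
shared = let x = f 21 in x + x

-- ...or substitute the definition at every use site.
inlined :: Int
inlined = f 21 + f 21
```

`shared` and `inlined` are guaranteed equal (both 84) because `f 21` cannot observe or mutate anything; in a language with ambient mutable state, the two forms can diverge.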

Myth 4: This is a total strawman: he asserts that FP does not allow "assignment", defines assignment however he chooses, and then shows that his own model of assignment can be modeled without assignment. He does this by embedding an imperative language in a "functional" style in Perl and then observing the effects of running that embedded language.

Then he shows... something utterly unrelated to referential transparency, but claims that it's the same. Since you asked for concrete, here's his example in Haskell:

    data Op = Inc | Dec | Zero

    counter :: Int -> [Op] -> [Int]
    counter n []       = []
    counter n (op:ops) = case op of
      Inc  -> n + 1 : counter (n+1) ops
      Dec  -> n - 1 : counter (n-1) ops
      Zero -> 0     : counter 0     ops

    -- modified to return only the last value
    counter' :: Int -> [Op] -> Int
    counter' n ops = last (counter n ops)

    x = [Inc, Inc, Inc]
    y = [Inc, Inc, Inc]

    test1 = counter' 0 x == counter' 0 x            -- True
    test2 = counter' 0 x == counter' 0 y            -- True: equal inputs, equal outputs
    test3 = counter' 0 x == counter' 0 (x ++ [Inc]) -- False, but of course: the inputs differ
---

Ultimately, while he does spend a little time talking about how reference works (and he's not completely wrong in his definitions), his application of this idea to "FP" (whatever he defines it as) is full of logical fallacies, unbacked assertions of impossibility, and ultimately just flawed arguments.

It turns out you can have much of what he's asking for, even in Perl if you like. But to do so you must intentionally restrict yourself from using certain parts of your language. If you move the goalposts on your restriction over and over then, yes, you can discover a lot of weird counterexamples to your own broken models.



