While knowledgeable-sounding... I kind of feel like he was missing out on what pure FP can and does achieve. There were many statements to the effect of "don't pretend X is Y, because while Y is obviously great, it's also true that any language which did Y couldn't actually work (here's an example in Perl)".
I reject that last bit, that these things Y lead to a useless language, though I do think the proof was far from the pudding 9 years ago.
"I kind of feel like he was missing out on what pure FP can and does achieve."
So what exactly was he missing? When I was programming in FP languages, the process didn't seem all that different from other languages, except generally more cumbersome.
"When I was programming in FP languages, the process didn't seem all that different from other languages"
Isn't there a rather famous phrase, 'You can write Fortran in any language'?
I kinda skimmed over the OP, admittedly, and the Lambda the Ultimate responses; I find myself far more on the side of the Lambda the Ultimate responses (that this guy is largely strawmanning/misunderstanding FP), but that's still immaterial to me, as I'm not really interested in academic arguments.
I've been programming in Erlang (which, though neither lazy nor pure, still embraces an FP approach), and I've found that after embracing the language's idioms, structuring my code and my thinking to take advantage of the language, I'm far more productive than I was in any imperative language (and became so in less time than I had spent in those imperative languages). And things like concurrency, while still complex, have become trivial compared to the crap I was having to deal with in those imperative languages.
Now, that may be a combination of other factors (a REPL, more concise language, fewer teammates writing better code), and it's anecdotal, same as your response, but ultimately I think personal experience is going to trump academic argument every time.
No matter how much someone extols the benefits of FP, if you don't experience them you won't be swayed. Similarly, no matter how academic an argument against FP someone tries to make, having experienced real benefit from using a functional language, such arguments are of little interest to me, and I'm curious to try more functional languages.
All I'd say is, to your experience, examine whether you were simply trying to bend the language to fit your existing models and approaches, or if you were willing and able to replace your models and approaches with those the language required (e.g., in Haskell, not merely "Okay, I need to do a side effect here, so time to write 'do'", but "Hmm, I have a side effect here... is this the right place for it? What other side effects do I have? Can I minimize where they're being called, so I have comparatively few functions that require the IO monad?" etc.). If it was the latter, fair enough. My experience was different; both are valid.
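(To make that last bit concrete, here's a toy sketch of the "push IO to the edges" discipline; the names are made up:)

-- Pure core: all of the logic, no side effects anywhere.
summarize :: String -> String
summarize = unlines . map ("> " ++) . lines

-- Thin IO shell: the only function whose type permits effects.
main :: IO ()
main = getContents >>= putStr . summarize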
So true. Especially if the domain is not "naturally" functional - say an HTML5 game. I am sure brilliant programmers will come up with a functional model even in such cases, but I don't think that is how most people would think. I am more fond of languages like JavaScript, which has features associated with functional programming (higher-order functions, closures) but feels as natural to me as C.
You should look into http://elm-lang.org/. A lot of the examples are specifically for web games. I haven't used it myself, but from research I did into it months ago, I remain very intrigued.
He simply made a lot of assertions that certain styles of programming are impossible. That assertion feels great if you (a) don't actually know the techniques for achieving some of the benefits outlined and (b) try to program in a language that doesn't support them well, in a style which doesn't work well with them.
In some sense, despite the author's assertion that FP is mind-expanding and a great learning experience, I believe he still had a lot to learn. Mostly that most of these techniques and benefits are workable, but they take a genuine change in perspective (such as outlawing ambient state entirely).
Well, at least he gave some concrete examples. I was hoping for some concrete examples of what he was missing from you, because I am curious.
My own exposure didn't show me much that was new/different in ways that were practically useful (and no, it wasn't writing FORTRAN or BASIC or Pascal in a different language).
Mind you, I have, for example, used (mostly) immutable data structures, and I generally write in what many would probably consider a somewhat functional style. I also like higher-order mechanisms, but none of these are at all exclusive to FP languages.
So as I wrote, I am looking for concrete examples.
Myth 1: if you restrict variables to be references alone, rather than mutable slots, then this argument vanishes. Additionally, these kinds of computations do not need to take forever. Case in point:
fix :: (a -> a) -> a
fix f = f (fix f) -- ought to take forever, right?

fact' :: (Int -> Int) -> Int -> Int
fact' rec n = if n == 0 then 1 else n * rec (n - 1)
Here `fix` appears to set up an infinite computation, but applying it as `fix fact'` produces a terminating function, the factorial.
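A quick sanity check, using the definitions above:

main :: IO ()
main = print (fix fact' 5) -- prints 120; laziness stops the unfolding once n == 0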
Finally, the lack of simultaneity in the lambda calculus (LC) doesn't much hold you back from using concurrency, because you have two avenues: (1) model your concurrency abstractly within LC and then execute it specially, and (2) have concurrency applied implicitly, with optional hinting.
Both of these are a little non-obvious, but extremely workable.
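For instance, here's a minimal sketch of avenue (2), assuming GHC and the `parallel` package (the example itself is mine):

import Control.Parallel (par, pseq)

-- `par` is only a hint: it suggests evaluating its first argument in
-- parallel, without changing the meaning of the expression at all.
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs
    b = sum ys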
Myth 2: Summarized as: Turing completeness holds, perhaps? It seems to be attacking a certain strawman, but in doing so it depends a lot upon some definition of functional as being a totally different form of computation. It's hard to even concretely respond to this abstract notion, but I'll try with two points.
1. LC and TMs are not equivalent on higher-order computation. While any function Int -> Int can be equally represented in each, functions like (Int -> Int) -> Int are different. In LC you cannot examine the input function (with tradeoffs and benefits), and in a TM you can (with tradeoffs and benefits); see the toy illustration after these two points. That's totally theoretical, though, since you can model higher-order computation as a function Int -> Int for most practical purposes.
2. Turing completeness is not an end goal. Some languages today even challenge whether you want TC to hold all of the time (or only sometimes) and push right up against the border of TC from below. I think they demonstrate that taking TC as your ultimate arbiter of language comparison is shortsighted.
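(The illustration promised above; a toy of my own, in Haskell: all pure functional code can do with a function argument is apply it.)

-- We can observe `f`'s behavior at whatever points we choose,
-- but there is no way to inspect its source or internal structure.
probe :: (Int -> Int) -> Int
probe f = f 0 + f 1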
Myth 3: Referential transparency is not a property of a language or of a (poorly defined) style of language. It's a property of an analysis of a language: its definition/semantics/statics. I've discussed this on this page in another comment concerning C.
The short of it is, however, that RT depends upon what you're willing to analyze as a frame of reference, and for any language you can pick frames which provide RT or frames which break it.
The author's chosen mechanism for demonstrating RT breakage is pretending that variables in Perl are references and then showing that mutability and dynamic scoping break them. Then he even tries to pretend that universal use of dictionaries models reference, and breaks that too.
These are strawman arguments: he deliberately pushes models beyond the domain in which they have theoretical value and then shows that they no longer have theoretical value.
Ultimately, this culminates in an unjustified argument that "referential transparency isn't all that desirable" due to "incredibly tight constraints" being untenable.
As a concrete counterexample, I give you pretty much the semantics of Haskell, which has a ton of practical code written in it and referentially transparent variable semantics by default. It goes much further than his ideas do, and achieves it in such a way that the programmer does not have the ability to break the model.
(For instance, if Haskell let you modify its own running memory and fiddle around with arguments on the stack then you'd certainly be able to use those facilities to break RT... but you can't.)
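A trivial example of my own to show what "referentially transparent by default" buys you:

x :: Int
x = 2 + 3

-- Because `x` can never be reassigned, replacing it with its definition
-- is valid in any context whatsoever:
demo :: Bool
demo = (x * x) == ((2 + 3) * (2 + 3)) -- True, unconditionally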
Myth 4: This is a total strawman: he asserts that FP does not allow "assignment", defines assignment however he chooses, and then shows that his own model of assignment can be modeled without assignment. He does this by embedding an imperative language in a "functional" style in Perl and then observing the effects of running that embedded language.
Then he shows... something utterly unrelated to referential transparency, but claims that it's the same. Since you asked for something concrete, here's his example in Haskell:
data Op = Inc | Dec | Zero

-- Interpret a list of operations as the successive values of a counter.
counter :: Int -> [Op] -> [Int]
counter _ [] = []
counter n (op:ops) = case op of
  Inc  -> n + 1 : counter (n + 1) ops
  Dec  -> n - 1 : counter (n - 1) ops
  Zero -> 0     : counter 0       ops

-- modified to return only the last value
counter' :: Int -> [Op] -> Int
counter' n ops = last (counter n ops)

x, y :: [Op]
x = [Inc, Inc, Inc]
y = [Inc, Inc, Inc]

test1 = counter' 0 x == counter' 0 x            -- True
test2 = counter' 0 x == counter' 0 y            -- True
test3 = counter' 0 x == counter' 0 (x ++ [Inc]) -- False... what?
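All three tests are ordinary value comparisons: x ++ [Inc] builds a new list rather than mutating x, so test3 is False simply because the inputs differ, and nothing here ever breaks referential transparency.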
---
Ultimately, while he does spend a little time talking about how reference works (and he's not completely wrong in his definitions), his application of this idea to "FP" (whatever he defines it as) is full of logical fallacies, unbacked assertions of impossibility, and... ultimately just totally flawed arguments.
It turns out you can have much of what he's asking for, even in Perl if you like. But to do so you must intentionally restrict yourself from using certain parts of your language. If you move the goalposts on your restriction over and over then, yes, you can discover a lot of weird counterexamples to your own broken models.
That it's possible to write nontrivial programs using immutable data (see Clojure in particular), which makes debugging highly concurrent applications much, much easier. That it's possible to achieve precise control over effects with only a moderate amount of overhead. Look at the perennial debate in Ruby-land over whether you should test at the unit or the integration level. In Haskell that's not a choice: it's clear from the very type of a function whether it's a pure function or a function that makes use of context. And when debugging, this lets you rule out whole swathes of your program as areas that couldn't possibly be causing the problem. (Contrast Java, where I've spent days tearing my hair out only to discover that a problem was caused by a bug in a getter method; by convention getters are short and simple and can't possibly go wrong, but there's nothing enforcing that at the compiler level the way there is in Haskell.)
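Concretely, that type-level distinction looks like this (a toy sketch of mine):

-- The type alone guarantees this function performs no effects:
pureStep :: Int -> Int
pureStep n = n + 1

-- Whereas this one's type admits interaction with the outside world:
effectfulStep :: Int -> IO Int
effectfulStep n = do
  putStrLn ("stepping " ++ show n)
  pure (n + 1)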
Concurrent access to immutable data is super easy since... no real concurrency is going on. But what about when I actually need concurrent updates? One can manage the time at which updates become visible (so it is deterministic), but there are many ways to accomplish this beyond the "one mutable reference to an immutable world" style that Clojure seems to promote.
I'm sympathetic to that viewpoint; I prefer the Haskell style where some things are mutating and some things are not and you can see which and control when they happen. But I've seen the Clojure style used effectively, building real-world webapps, and I'd take either over the imperative-language style where your tools give you no help and it's up to you to keep track of when mutations can and can't happen.
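For instance (a sketch of my own, using GHC's stm package), the mutable parts are impossible to miss:

import Control.Concurrent.STM

-- Pure code can't touch a TVar; every update happens inside an explicit
-- `atomically` block, so you can see exactly where mutation occurs.
incrCounter :: TVar Int -> IO ()
incrCounter v = atomically (modifyTVar' v (+ 1))

readCounter :: TVar Int -> IO Int
readCounter = readTVarIO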
Here is something I worked on recently. Let's say you can re-parse code while executing it. A lock is needed so you don't modify a statement list while executing it (you don't want execution to see the partial list). This is a case where you want one mutable reference to the statement list, replaced wholesale... so you could use an immutable list at this point. But you could just as well create a new mutable list at this point and save some cycles.
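In Haskell terms the first variant might look like this (a hypothetical sketch; `Stmt` and the function names are mine):

import Control.Concurrent.MVar

type Stmt = String -- placeholder statement type (an assumption)

-- One mutable reference to an immutable statement list: the re-parser
-- swaps in a complete new list, so execution never sees a partial one.
swapProgram :: MVar [Stmt] -> [Stmt] -> IO ()
swapProgram ref newStmts = modifyMVar_ ref (\_ -> pure newStmts)

-- The executor takes a consistent snapshot and runs it.
executeAll :: MVar [Stmt] -> ([Stmt] -> IO ()) -> IO ()
executeAll ref run = readMVar ref >>= run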