It would be nice if the language required such functions, and the modules in which they reside, to be flagged. (I don't know whether Haskell does something like this with mutation or not.)
It also seems that an "actor model" would be a good way to encapsulate the updates in an otherwise FP program by having a loop/reduce/fold wrapped around the mutable data responding to request-events and generating responses. This allows the other pure/immutable/idempotent type of code to remain isolated from it.
IO, ST, and STM mentioned by tome better match the description of "escape hatch" - though for ST and STM they are very carefully shaped escape hatches.
Also, I've never needed an unsorted dictionary, and parallelism is actually great in Haskell. http://chimera.labs.oreilly.com/books/1230000000929/index.ht...
This piece, to me, seems well-researched and factual. It's not perfect, but it's good. It has plenty of good citations for areas that aren't simply opinionated. A piece that is honest about the shortcomings of a piece of software can only help that software's community.
You claim this piece will push away newcomers. As a newcomer to Haskell, I disagree. There are good things and bad things to every language. What does actually push away newcomers are the pitfalls that they're not aware of.
Also, consider the fact that even though you don't need something out of a language, someone may. And those people will now know to avoid Haskell for those things they can't get.
So, while potentially everything in the article is true, it's all likely overinflated and/or has workarounds. I'd not put much weight on anything in here.
It's a shame he'd rather flamewar over "best language" than honestly discuss the issues of both.
An immutable collection isn't a place to be mutated, a bucket to fill. Saying that immutable collections don't support concurrent updates misses the point entirely, because not allowing concurrent updates is the point of immutable collections.
But then, I challenge the author to show me concurrent dictionary implementations that have the non-blocking, lock-free property. Such implementations exist, mind you, but they are an area of active research and not usually part of standard libraries. And there goes the author's argument about the established industry. One such implementation is the TrieMap, a new and interesting implementation of a lock-free Map that actually relies on techniques used in persistent data-structures ;-)
On the other hand, shove a persistent data-structure into an AtomicReference and you get a non-blocking concurrent collection for free, which of course includes any kind of persistent Map implementation you want. And lo and behold, the layman can achieve non-blocking concurrent data-structures without a PhD.
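The "persistent structure behind an atomic reference" idea can be sketched in Python. Python has no AtomicReference, so this sketch relies on CPython's atomic reference assignment and serializes writers with a plain lock; a real persistent map (e.g. a HAMT) would share structure instead of copying, and a language with compare-and-swap would retry instead of locking. All names here are illustrative.

```python
import threading

class AtomicMap:
    """Copy-on-write map: readers never block and never see a torn update.

    A sketch of the 'persistent structure behind an atomic reference' idea.
    The published dict is treated as immutable; writers build a fresh copy
    and atomically swap the reference.
    """
    def __init__(self):
        self._snapshot = {}                  # treated as immutable once published
        self._write_lock = threading.Lock()  # serializes writers only

    def get(self, key, default=None):
        # Lock-free read: grabs whatever snapshot is currently published.
        return self._snapshot.get(key, default)

    def insert(self, key, value):
        with self._write_lock:
            new = dict(self._snapshot)  # copy; never mutate the published dict
            new[key] = value
            self._snapshot = new        # atomic publish

m = AtomicMap()
m.insert("a", 1)
m.insert("b", 2)
print(m.get("a"))  # 1
```

Readers pay nothing here; the cost is concentrated on writers, which matches the read-heavy use cases the comment describes.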
There are of course gotchas. The real strength of mutable concurrent data-structures is that they can distribute the contention over multiple nodes, whereas with an immutable data-structure you end up driving that contention to a single root. That can be really, really bad for writes. But then again, concurrent reads for immutable data-structures come basically for free. This means such a data-structure is bad for storage, but really good for message passing (like from producers to consumers). Hence actual databases making use of FP techniques, like for example Cognitect's Datomic, are implemented in a mixed style, best tool for the job and all that.
But instead the author's argument is just a rant that doesn't delve into any interesting details.
A similar weak point is number 7. First of all, what the author describes isn't parallelism, and I don't really follow his ramblings about absolute performance; absolute single-core performance is not the goal of parallelism. The goal of parallelism is scalability, i.e. the ability to throw hardware at a problem in order to decrease processing time. That will naturally tax the number of operations per second that a single core can achieve, which can be acceptable if you can make up for it with hardware. And his linked benchmark uses at most 8 cores, which is too few to support a sweeping generalization like that. The conclusion simply doesn't follow from the benchmark; what he's seeing is basically Amdahl's law in action, which he fails to recognize.
It's a fact.
> Are any of his critiques incorrect?
Probably. That's what makes a real troll effective.
Harrop had a business around OCaml and later F#. He often posted controversial statements to other FP communities, with links to his offerings added. He started out not knowing much about those other languages or FP in general, but in later years he learned from people and his attacks got more sophisticated. His controversial marketing did not make him many friends, though.
Going after the real or perceived drawbacks of purely functional programming in Haskell and also its data structures was one of his favorite targets.
Before that he used similar tactics in the Lisp community. For example, I remember that he once said that Lisp is not used, and that this could be seen from the fact that there were no Lisp mailing lists archived at GMANE. Well, that one was easy: I explained to him that GMANE has a top-level hierarchy for Lisp, where all the mailing lists are. But that did not stop him; he discovered or made up many more problems with Lisp. ;-)
"Why is Haskell used so little in the industry?" - http://flyingfrogblog.blogspot.de/2010/05/why-is-haskell-use...
Go and grab some popcorn before you move on to the comment section...
It's from 2010, I'd be interested in an update, just out of mild curiosity.
The answer to that 2010 blog post is that the ecosystem has dramatically improved in the last 6 years.
There are plenty of words, and claims are made, but benchmarks and code are nowhere to be seen. Why should anyone take these claims at face value?
> Furthermore, most functional programming languages (OCaml, Haskell, Scala) are incapable of expressing a fast generic mutable hash table because they lack the killer combo of: reified generics, value types and a fast GC write barrier.
Sounds plausible? Maybe. "Incapable" is a strong word. I'm not an expert on mutable hash tables, so I don't know for sure. If I claimed that you can't implement a mutable hash table without 2 square feet of badger fur and a pound of whale fat, would you believe me? I would like to see a citation.
I have written a lot of Haskell and there was never a situation when I said to myself "if only Data.HashMap (unordered-containers) was faster". Just make sure you're using the right tool for the job (which might not be Haskell).
The author doesn't seem to write any code himself but instead links to code that other people have written and then makes claims about F# being better (I can't find the F# solution to parallel quicksort from one of the linked posts). I see very little value in that.
> The author [...] links to code that other people have written
Here's a Haskell programmer looking for a faster hashmap:
Shake in fact does use reified generics (a.k.a. Dynamic/Typeable) and a Value type, and recommends disabling idle GC. I'm not sure if there's a "GC write barrier", Shake just uses MVar locks.
So the first 7 points are true because pure functional programming, by definition, does not support mutable data structures very well. It does what it says on the tin.
If you go on Stack Overflow you'll find no shortage of ill-informed, unhelpful input about imperative programming languages. I think it's very strange to present a social phenomenon that is universal in software and affects every language community as a disadvantage of something as broad as "purely functional programming". If you want good discussion, go on the mailing lists or the IRC channels. #haskell is a very good channel with a lot of active professionals who love answering questions.
I am a professional Haskell programmer, AMA.
As to whether that actually counts as "purely functional programming", I can't say. Honestly the whole term seems quite misleading. https://chadaustin.me/2015/09/haskell-is-not-a-purely-functi...
It's a great puzzle, but it's not a huge drawback because subbing in IntMap or something really _does_ give adequate performance for typical union-find cases (unification or the like). Which is to say, the performance gap left to be closed is almost certainly _not_ where the important improvements to unification algorithms are made (which have to do with being able to impose a lot more particular structure on the types of things being unified, etc).
I have settled on a style of doing "functional-like" programming in Python and C++. It's more about the high level architecture than low level coding details.
It means being very paranoid and rigorous about state -- but still having great tools to express it! For example: Using ZERO mutable globals, and passing state in as a parameter to functions. Using immutable/persistent data structures. These techniques are done very naturally and effectively in Python and C++. You just have to be disciplined.
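A minimal Python sketch of this discipline, with no mutable globals and state passed explicitly (names are illustrative):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # frozen => instances are immutable
class Counter:
    count: int = 0

def increment(state: Counter) -> Counter:
    # No global is touched; a new version of the state is returned.
    return replace(state, count=state.count + 1)

s0 = Counter()
s1 = increment(s0)
assert s0.count == 0 and s1.count == 1  # the old version is untouched
```

The function is trivially testable because everything it depends on walks in through the parameter list.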
To me there's no real advantage to expressing something like "split a string by a delimiter" in a purely functional style. Either way, you have a trivial referentially transparent function you can reuse without causing complexity in your program. You might as well do the obvious imperative thing.
However there IS a benefit to threading state explicitly throughout the application, and encapsulating it in OBJECTS (yes objects).
For me, the thing that sealed the deal against functional languages for "real work" was trying to write a production quality sh parser.
I went down a long path of trying to do this first in OCaml and then in Lisp. The VERY first thing that hits you over the head -- lexing -- is heavily and inherently stateful. To me, ocamllex and ocamlyacc are evidence of the poverty of purely functional solutions here: they're just transliterating solutions from C. Well, I might as well use C then.
Actually, I decided to use C++, which was like my 5th choice of language. Aside from headers and compile times, it's a good choice. I use "functions and data" (a la Rich Hickey).
Except my functions and data are both CLASSES. Data objects are basically structs, except they can do things like print themselves and answer simple queries based on their values, which helps readability (e.g. one-liners like word.AsFuncName()).
Function objects are simply classes with configuration passed to constructors. They usually have a single method, but multiple methods are also often useful. Calling this method is basically equivalent to calling a curried function. This is supremely useful for both compilers and servers, because often you have config/params that are constant once you reach main(), and then params that vary per request or per file processed. Many functions depend on both, and it's cleaner to separate the two kinds of params.
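The split between constant config and per-call params can be sketched like this (class and method names are hypothetical):

```python
class Renderer:
    """A 'function object': config fixed at construction, varying args per call.

    Calling render() is roughly equivalent to calling a curried function
    whose first argument was applied once, in main().
    """
    def __init__(self, indent: int):
        self.indent = indent            # constant once flags are parsed

    def render(self, line: str) -> str:
        # Per-request parameter combined with the fixed config.
        return " " * self.indent + line

r = Renderer(indent=4)                  # configured once at startup
print(r.render("hello"))                # "    hello"
```

The constructor runs once; the method runs per file or per request, which keeps the two kinds of parameters visibly separate.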
So both "functions and data" are effectively and usefully implemented as classes. The key is to make some classes like functions, and some classes like data. And have more of a bipartite dependency graph, where functions depend on data, and data depends on functions.
When all your classes are an equal mix of data and behavior, that's when they start getting "hard-coded" weird-shaped dependencies, and your program turns into fragile spaghetti. Functions and data are a useful architectural technique, and to me it doesn't have that much to do with Clojure or Lisp, although Hickey is certainly a great advocate.
However I came to the conclusion that the latest improvements in C#, Java and C++ are already quite good for doing FP-style programming.
Yes, I feel a bit dirty not being able to use more pure FP languages, but I have to work within the context that customers allow us to do.
Although it feels like blasphemy, C++14/17 can be seen almost as an impure Haskell with curly brackets.
Learning OCaml really improved my appreciation of C++. C++ code is just all over the place, but you can write it in a pretty clean style (admittedly with considerable effort.)
Thanks to support for blocks (lambdas), Smalltalk collections already had LINQ-style programming (think algorithm.h), for example.
I'm more of a Python person, and Python doesn't really use blocks. I like the duality mentioned in this post: http://journal.stuffwithstuff.com/2013/01/13/iteration-insid... (In summary, it's "for item in L: f(item)" vs "L.each(|item| ...)".)
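The two styles from that post, side by side in Python, with a plain callback standing in for a block:

```python
nums = [1, 2, 3]

# External iteration: the caller drives the loop (the Python way).
out_external = []
for item in nums:
    out_external.append(item * 2)

# Internal iteration: the collection drives the loop and calls you back,
# which is roughly what Smalltalk/Ruby blocks give you.
def each(seq, f):
    for item in seq:
        f(item)

out_internal = []
each(nums, lambda item: out_internal.append(item * 2))

assert out_external == out_internal == [2, 4, 6]
```

Same result either way; the difference is who owns the loop, which matters once you want early exit, laziness, or resource cleanup.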
I don't really think of C++'s algorithm.h as LINQ? I guess I don't use it that much. To me that is more "functional in the small", e.g. map or filter with lambdas, vs. "functional in the large". But I'd be interested to hear more details on this analogy.
The best free implementations to play around are Squeak and Pharo.
Or if you want to avoid installing anything, Amber gives some taste of it.
Now, regarding the LINQ like stuff, in Smalltalk doing this type of stuff was already possible back in those days.
vec := #(1 2 3 4 5 6).
sumPairs := (vec select: [:x | (x \\ 2) = 0]) inject: 0 into: [:l :r | l + r].
(PS - also did some work with Lisp in Uni back in the 80s, and sundry imperative/oop/func ; static/dynamic things since)
Classes with one method are useful, but so are classes with multiple methods.
OCaml and Lisp both have problems with polymorphic print. There were some recent posts on HN about the expression problem, and the duality between FP and OOP, which I liked. But I have to say that print should be polymorphic, full stop. And if your functional language has problems with that, that's the language's problem and not my problem :)
It's funny that one of the other commenters mentioned Smalltalk and blocks. Version 5 of the "Clipper" language I was using at work, which came out around 1990, added (borrowed, stole) blocks to an "XBase" style language. I didn't quite know what to do with them, so I used them to simulate virtual method tables in a non-OOP language :-)
It wasn't until I started tinkering with Ruby around 2006 or so that all the block / closure / lambda stuff REALLY sunk in ("Hmm. I could have been doing this - subroutines as closures, and not just "blank" call-backs - in all the Perl 5 code I've written the last 8 years"), some 20 years or so after I was in college. They did a good job of brainwashing proto-OOP into us back in the mid 80s.
Never used any Lisp macros, but "eval" and dynamic languages sure help shorten many tasks vs static types bondage and discipline. There's another Yegge essay on that sort of thing ("code compression"). The PC "4GL" type languages I used at work in the 80s were dynamic, with runtime types, and quite productive for the jobs they were designed for (pre MS-Windows...). I'm tired of the static type authoritarians punishing us all for the sins of C/C++. (I need to stop before I go into another rant)
Emacs Lisp now has lexical closures, too.
Common Lisp has no problems with 'polymorphic print'.
But it turns out that this axiomatic core is too impoverished for a lot of programming. You DO need something like Common Lisp on top. And I'm not really willing to open that can of worms, both for this specific project, and in general.
I would liken it to the language "Factor", which I've also experimented with... you took a specific and elegant paradigm, and tried to extend it to all paradigms, and ended up with a mess. A square peg in a round hole. Factor is basically Forth with OOP and a whole bunch of other stuff.
The other problem I pointed out is that the first problem you hit when writing a language is writing a lexer. I don't see anything that Lisp offers you in that respect. I looked at how Julia does it:
Julia is actually bootstrapped in femtolisp. If you look at the lexer, it just looks like C code written with Scheme syntax. There's nothing helpful about this. It doesn't help you manage state. I might as well just write C.
(And actually I chose the code generator re2c, which is BETTER than C.)
Of the axiomatic core. But not of any programming language.
> But it turns out that this axiomatic core is too impoverished for a lot of programming.
It's not even a programming language. It's just an axiomatic core. If you try to use it for programming you must be doing something wrong.
> is writing a lexer. I don't see anything that Lisp offers you in that respect.
Lisp is a language family, not a language implementation with a standard library that happens to include a lexer.
If you mean Common Lisp as a programming language with implementations, there are portable lexers.
Writing your own lexer in Lisp shouldn't be too hard. People have written applications in Lisp which includes lexer functionality.
And I'm not saying it's not possible to write lexers in Scheme or Common Lisp. I'm saying that there's no real benefit to doing so over C or even Python. You're using the exact same algorithms and just transliterating it into a different language with more awkward syntax for that problem. The code isn't any shorter.
Related to the other commenter as well, the production quality Julia parser in Lisp doesn't use parser combinators. It uses recursive descent. It's C code written in Lisp syntax.
I have never seen anyone developing software with the 'axiomatic core'. But I see Emacs Lisp, Common Lisp, Scheme, etc. developers.
> And I'm not saying it's not possible to write lexers in Scheme or Common Lisp. I'm saying that there's no real benefit to doing so over C or even Python.
Depends on what level you program. With Lisp it is possible to develop a compact syntax, which expresses domain-level concepts, very easily. Interactive development is many times more convenient than in C.
> You're using the exact same algorithms and just transliterating it into a different language with more awkward syntax for that problem. The code isn't any shorter.
I have a surprise for you: it's perfectly legal to write imperative code in Lisp. Lisp is at its heart a multi-paradigm language with the option to add many other paradigms.
With Python you develop mostly object-oriented code, and in C it's mostly imperative. With something like Common Lisp you can do what you want.
I went down an almost identical path as you many years ago, but found that after a certain point maintenance became the problem: just being away from the codebase for a couple of months made it quite hard to produce coherent modifications that fit the design.
1. Shell is a fixed problem domain. It's not changing, so you can spend time getting the components and interactions right, and know that the rug won't be pulled out from under you. (Though I am planning significant enhancements; it's not a "nostalgia" project).
2. But it is a complex problem domain. What I'm writing is a superset of full-fledged POSIX shell, i.e. not a toy. You really do need to control complexity... or you'll end up with something like bash (at least 175 K LOC)
3. It's "systems code". I find that systems code needs to be more highly reliable than application code (or at least the market tolerates less sloppy systems code than app code). So good software architecture matters.
4. It's not a commercial product. I am doing all the design by myself, for better or worse, and the only deadlines are my own.
I think my main thing now is designing with ZERO mutable globals. I've probably been doing this for about 5 years now. Once you get in the habit, and get a few idioms down, I find it makes things go a lot faster. Things break less.
My main issue now is C++ headers and compile times. I started the current codebase with C++, but then switched over to Python to figure out the design. There were a lot of nontrivial issues to solve. Admittedly, implementing everything twice is pretty extreme! But maybe not -- I can write Python so fast that it saves time overall.
But that issue is entirely orthogonal to FP vs OOP. As far as that is concerned, Python and C++ are identical -- they have an imperative history, with significant OOP additions, and support a functional-in-the-small style (lambdas, list comprehensions, etc.). functional-in-the-large is already covered by classes, as mentioned.
I've looked at dozens of parsers, both for shells and for other programming languages, and none of them use this technique. I've studied many forms of parsing, and written my own lexing and parsing library/language. This included going on a pretty deep dive into PEGs.
I'm going to be a bit rude and say that parser combinators look appropriate for toy programs only. shell is the opposite of a toy language... it's fairly big, and has a very precise spec with all the hairs and scars of evolution.
Functional style is simply better and is applicable in many languages. Modern C++ is a great example. Throw away much of OOP clunkiness, use simple functions, win! (OOP has its place, but in small quantities and with moderation.)
With respect to purity I'm on the sidelines. I think what's far more important is algebraic data-types, pattern matching and value-returning conditionals (values, not statements!). Rust has them and it is great and not pure.
"Nobody uses recursion."
Why doesn't anybody use recursion?
For various reasons, I chose to use OCaml, and from my experience, the difficulty part is only true insofar as the learning curve goes. Once past that, FP-style problem-solving is spectacular in that the vast majority of time you spend goes not into the coding, but rather (a) thinking about what exactly your problem is, and (b) how to decompose and solve it. I can't tell you how many times in Matlab, people start writing code before they've even fleshed out (a) and (b), simply because there's so much drivel and overhead that one naturally tends to feel they're "getting things done" by jumping in and writing this code.
Once I had a sufficient grasp of OCaml concepts and syntax, I would find myself getting frustrated by how little code I might get done in a day. Upon stepping back and reflecting, that was because I had not worked my way through (a) and (b), and that is where the human thinking is really required. I have since been delighted at the utility of each line of code I write and am similarly annoyed by the low signal-to-noise of something like Matlab.
With respect to slower, I think my points above address "slowness" in terms of development time. In terms of runtime, it's very much a function of your choice of language/ecosystem and paradigms. If GC and typical functional overhead are a constraint in your problem domain, one can always bring over a functional style into modern languages like Rust, where performance and lack of GC are first-order design goals. But you can't tar all of FP with the label of "slow" any more than anyone can make a blanket statement that "solving matrix problems is slow". It just doesn't make sense.
Lastly, as others have pointed out, Jon Harrop's points refer to purely functional styles and languages. In the end, striving for functional purity may not make sense. To illustrate, one can easily drop into an imperative style where needed in OCaml, for example when doing I/O. I use this quite a bit. But make no mistake that adopting a functional approach where it makes sense can pay dividends.
I think the concept of immutability confuses people a bit. It really clicked for me when I stopped thinking of it in terms of things not being able to change and instead in terms of every version of things having different names.
Functional programming from that perspective is like version control for your variables. It makes explicit, not only which variable you are accessing but which version of it.
The reason it seems like you need to copy a variable each time you modify it is just to give each particular mutation a different name. Note that the compiler can optimize this; it doesn't need to keep every version in memory. If it sees that you are not going to reference a particular version again, it can physically overwrite it with the next one, so that in the background "var a = i; var b = a + j" might run as something like "var b = i; b += j".
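The "version control for variables" view, made concrete in a trivial Python snippet (illustrative only):

```python
# Each "mutation" gets its own name; earlier versions remain valid.
i, j = 10, 5
a = i          # version 1 of the running value
b = a + j      # version 2, derived from version 1

# An optimizing compiler that sees 'a' is never referenced again is free
# to compile this as if it were: b = i; b += j
assert a == 10 and b == 15
```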
It's clear that some other aspects of FP, such as first-class functions, closures, etc. are very useful in practice.
Immutability does have its advantages in that it supplies strong guarantees about what your code does. It avoids spaghetti code where everything can and does mutate everything else.
Myself, I tend to use it whenever its advantages outweigh its inconveniences. Immutability is especially handy at interface boundaries.
"It's OO, and it can't be both OO and FP."
Art, as something subjective, also means opinions differ on difficulty and slowness. That's perhaps why the downvotes: a statement like this, presented as objective fact, draws scepticism from those who have different experience with the subject.
Slower - Optimize for developer time. Same reason most people write their backends in Python/Ruby instead of C++.
Unfortunately, it does have the challenges listed in the OP, so it seems to make most sense where either (a) it is ok to leave performance on the table in the interest of clarity or (b) you create a little sandbox in which you use functional programming for clarity, and something outside your sandbox compensates for the performance cost.
React and Om are examples of (b). They let you write pure functions that return the entire DOM. Then, outside that sandbox, a DOM differ makes it so you don't force the browser to re-render the entire DOM every time.
Also, there are a lot of cases where you need a total language with all its guarantees.