To use a programming language or paradigm well, it is especially useful to know its weaknesses. Functional programming does not work well in programs which handle a lot of states. It does not work well in programs which have high requirements for performance or memory. And it does not work well for programs which have to do low-level stuff.
These are serious disadvantages, and I would like to see them highlighted more often in these introductory-style articles. Nevertheless, I think functional programming is a must for writing simpler programs, and I think programmers should write functions (i.e. methods that only depend on their arguments) whenever possible, for modularity and correctness reasons. I think the disadvantages can be solved by non-pure functional languages, and that there is a lot to gain for new programming languages in this area.
There are trade-offs between the two, but I definitely belong to the second school: we simply add an imperative subset to our functional language. This means we can exploit any efficiency trick an imperative program can use, including low-level stuff. The seasoned FP programmer will then proceed to encapsulate the efficiency trick in an abstract module such that the rest of the program doesn't have to worry about it. Of course, the price to pay for this is that you lose purity. I think this is a fair trade-off, but others disagree.
As for performance and memory usage: it is always a property of the architecture or system, not of the programming language. Dropping to a low-level language, such as C, usually doesn't buy you much these days. What is more important is that most C compilers in use have had vastly more time invested into their optimizing routines than the typical FP compiler. Apart from that, you can easily manage the same kinds of data in, e.g., OCaml as you can in C.
The reason FP can beat the performance curve in practice is that you operate at a higher level of abstraction. You get more attempts at finding the right architecture, and it is easier to change over time. Since most real-world problems are heavily time-constrained, this lets FP solutions beat low-level ones: by the time the C programmer has written the first working version, the FP programmer has tried 5 different solutions.
There is one area where FP tends to fare poorly: CPU-bound tasks where an inner loop has to squeeze out performance (video encoding comes to mind). But most low-level programming fares poorly there as well: either you use assembly, write GPU-level programs, use an FPGA, or create your own ASIC/SoC solution for this. Also note that each move to a faster solution here costs an order of magnitude in time and in dollars: FPGAs are, relatively speaking, expensive beasts.
If a language can allow mutability in an area of code without allowing side effects to escape from it, we can have all of the reasoning advantages that FP gives us at a level above the mutations. We can also have the performance we want.
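Haskell's ST monad is one existing shape of exactly this idea; a minimal sketch (the function mutates internally, but runST guarantees no effects escape):

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Externally a pure function; internally an imperative loop over a mutable cell.
    sumTo :: Int -> Int
    sumTo n = runST $ do
      acc <- newSTRef 0
      mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
      readSTRef acc

    main :: IO ()
    main = print (sumTo 100)   -- 5050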
How about a programming environment tailored for small simulations or games? I could imagine such an environment maintaining referential transparency without strict immutability. Rather, such a system could provide a kind of "poor-man's" immutability by only allowing pure functions that take state from tick N and output state for tick N+1.
Perhaps such a system could even achieve high performance by exploiting its constraints? Maybe the language could essentially be built on top of a custom VM and around the mechanism of bump allocation, read/write barriers, and Beltway garbage collection?
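A minimal sketch of that tick discipline in Haskell, with a made-up World type (all names here are hypothetical, just for illustration):

    -- Hypothetical state for a tiny simulation.
    data World = World { tick :: Int, playerX :: Double } deriving Show

    -- The only way to "mutate": a pure function from the state at
    -- tick N to the state at tick N+1.
    step :: World -> World
    step w = w { tick = tick w + 1, playerX = playerX w + 1.5 }

    main :: IO ()
    main = print (iterate step (World 0 0) !! 3)   -- the world after 3 ticks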
As far as I can see and tell from experience, this is exactly the power of the black-box processes of Flow-Based Programming (FBP), which communicate only via message passing (on buffered channels).
FBP processes can be compared to "functions", but are free to change state, etc. The "only communicate via message passing" rule means funny side-effects are effectively contained within the process itself.
Furthermore, I would argue that FBP, by keeping the network connectivity separate from the process implementations, makes FBP programs vastly more composable than most typical FP programs, which often hard-code references to other functions inside function code.
It is like making the call graph of an FP program declaratively configured by a list of processes and connections:
A (ports out1, out2)
B (ports in1, out1)
C (ports in1, out1)
A.out1 -> B.in1
A.out2 -> C.in1
I wrote a little text processing app in this style some time ago (the network connectivity scheme can be seen here: https://github.com/rdfio/rdf2smw/blob/master/main.go#L100-L1... ) and was amazed by the clarity and composability that emerged, even though I just used my own little experimental subset of full FBP ( http://flowbase.org ). As far as I can tell, this helped me go from the first line of code to a finished app extremely fast: the app was written in 2 days without ever really getting stuck in any strange bug due to bad code organization, which used to happen all the time otherwise.
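To make that concrete without pulling in a framework, here is a toy version of the same wiring in Haskell (the thread's lingua franca), with the network definition kept separate from the black-box process bodies; procA and procB are made-up names:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forM_, replicateM)

    -- Process bodies know nothing about each other, only their own ports.
    procA :: Chan Int -> IO ()                     -- A (port out1)
    procA out1 = forM_ [1 .. 5] (writeChan out1)

    procB :: Chan Int -> Chan Int -> IO ()         -- B (ports in1, out1)
    procB in1 out1 = forM_ [1 .. 5 :: Int] $ \_ ->
      readChan in1 >>= writeChan out1 . (* 2)

    main :: IO ()
    main = do
      -- The network definition: processes plus connections, nothing else.
      aToB <- newChan                              -- A.out1 -> B.in1
      bOut <- newChan
      _ <- forkIO (procA aToB)
      _ <- forkIO (procB aToB bOut)
      replicateM 5 (readChan bOut) >>= print       -- [2,4,6,8,10]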
... none of which are necessarily faster than the other 4 solutions. It's not just the architecture of your code that influences performance, it's also – sometimes even more so – the underlying details of your implementation, like arrays vs. pointer-based data structures; Athas' comment describes this pretty well.
As a consequence, this statement:
> it is always a property of the architecture or system, not of the programming language
is wrong, simply because your programming language will heavily influence a lot of your productivity vs. performance tradeoffs.
This is true in principle, but not in practice. It is not just a question of whether a language is particularly amenable to optimisation and clever compilation (C is not, by the way), but also whether the baseline of naive compilation is efficient in itself. A naive C compiler will generate vastly better code than a naive compiler for most functional languages, if only because naive C compilation will primarily allocate statically and on the stack, while a functional language will perform an enormous amount of heap allocation. Furthermore, natural C programming style tends towards cache-friendly arrays and bulk allocations, while natural functional programming style tends towards lots of pointers pointing everywhere. Certainly, there are functional languages with efficient array libraries and the like, but they are less natural, and their use is often considered an "optimisation". And of course, most functional languages give you some way of accessing raw memory and essentially just writing C-in-Haskell or whatever, but then you're not really doing functional programming anymore.
Of course, if the problem is at its essence about pointer chasing, or composing IO pipelines, then functional programming is a fine choice, because the performance of the language is less important, and the ability to reason about complicated control flow is important. What I find interesting about Haskell is that due to the clear reification of IO, the compiler can actually perform optimisations on IO pipelines, such as fusion. Think about it - IO is usually the prime example of a fusion inhibitor, but GHC can actually do it for libraries such as conduit! It is a little ironic that Haskell is probably the best language I know of for describing complex IO operations.
An interesting twist is of course when you construct a functional language with an eye towards efficient compilation from the start. Then the primary compound data type is no longer the linked list, but the array, and you tend to end up with an array language. Examples include SISAL, Single Assignment C, NESL, Accelerate, Futhark, Lift, etc., which are naturally fairly efficient due to their programming model, and which provide strong functional invariants that the compiler can then exploit. These languages are all still pretty experimental and unwieldy in practice, though.
FP code tends to be heavy on recursive algorithms, which means you need mutually tail recursive functions. And laziness. Somebody mentioned the State monad and the IO type. Well, these are lazy abstractions with a memory-safe bind/flatMap operation that makes them tick.
Well, the problem with these abstractions is that they'll require building a data structure that cannot be an array. And in the case of languages like Scala or Clojure that don't have real tail recursion because of the JVM, you need to manage your own trampoline as well. We now have the Free monad, which is awesome, except that it is very heap unfriendly.
Actually, for FP you need persistent data structures everywhere, not just lists: maps, hashes, vectors. And all known implementations are some sort of trees. This is an active area of research, but building cache-friendly trees is a hard problem.
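For instance, with Haskell's tree-based Data.Map, an "update" returns a new version while the old one stays valid, with most structure shared between the two:

    import qualified Data.Map.Strict as M

    main :: IO ()
    main = do
      let m1 = M.fromList [(1, "a"), (2, "b")]
          m2 = M.insert 3 "c" m1   -- m1 is untouched; the two trees share structure
      print (M.toList m1)          -- [(1,"a"),(2,"b")]
      print (M.toList m2)          -- [(1,"a"),(2,"b"),(3,"c")]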
More generally tail call optimization allows you to avoid pushing a stack frame for any call in tail position. The JVM (or even LLVM) does not give you the necessary tools to implement this yourself, since you can't play games with your stack frame and return address directly.
I couldn't find this right away, but I vaguely recall reading that SISC transforms to CPS to enable this (and call/cc).
I think Kawa scheme supports two calling conventions: the standard JVM one and a second one for which it can do full TCO. The second one is a bit slower so the default in Kawa is not to do full TCO.
I was implicitly thinking there's nothing forcing you to use the JVM calling convention. For example, your compiler could transform the source to CPS. That probably would generate slow code, and interop would be awkward (because normal Java libraries aren't already in CPS).
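A small illustration of the CPS idea (in Haskell for brevity): after the transform, every recursive call is in tail position, because the "rest of the work" travels along as a continuation instead of living on the stack:

    -- Direct style: the multiplication happens after the recursive call
    -- returns, so the call is not a tail call.
    fact :: Integer -> Integer
    fact 0 = 1
    fact n = n * fact (n - 1)

    -- CPS: the recursive call is the last thing that happens; the pending
    -- multiplication is captured in the continuation.
    factK :: Integer -> (Integer -> r) -> r
    factK 0 k = k 1
    factK n k = factK (n - 1) (\r -> k (n * r))

    main :: IO ()
    main = print (factK 10 id)   -- 3628800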
I say that as a Scala developer who loves Scala and the JVM - the lack of tail call optimization at the JVM level is a pain in the ass, because in order to work around it you have to effectively build your own call stack.
You can do that in a pure language as well (i.e. ST monad.) Purity shouldn't be given up lightly.
This made me curious, could you list some?
Handling IO exceptions and masking them as different errors.
Thread, process, or machine coordination in parallel execution.
> Anything that needs rewriting a memory address without losing time by allocation and garbage collection.
ST gives you mutable references. Or just use IO.
I think the trade-off is unnecessary. If your language can embed a convenient method for analyzing the imperative state in the way it is typically used (F* is a good example), you can get a similar result by lifting your internally stateful but externally pure code into truly pure functions, even if it wants to do something more complicated than simpler state monads can do conveniently. The compiler can then understand the language's state construction and write the imperative code directly when generating machine code, with little more infrastructure than OCaml needs to allow you to do so anywhere.
This way you get to keep purity without requiring you to lose performance or jump through hoops to keep your pure code downstream of your impure code.
You state earlier that programmers can and perhaps should write in a functional style (as far as feasible) in the languages that do excel at the above problems. So you probably meant that "functional languages" ---rather than "functional programming"--- "do not work well" there.
> (..cont) programs which handle a lot of states. It does not work well in programs which have high requirements for performance or memory. It does not work well for programs which have to do low-level stuff.
The main problem with these issues is that there's no serious investment into optimizing the compilers of the very few pure-and-lazy languages (for me the holy grail of FP; almost all of academic origin) for real-world loads such as the ones you outlined. The described problem spaces themselves ("lots of states" -- huh?) are surprisingly simple to model in the type systems of the languages I'm into, such as Haskell. Side effects are most elegantly solved, and defining them yields succinct and non-ambiguous code. And really, just as OOP people design classes to describe their problem spaces and stuff their logic into, so you start from your types in FP.
That being said, so far I'm rather satisfied with GHC speeds and resulting binary performance. That is, of course, for non mission-critical-realtime-high-frequency-trades-while-raytracing-while-guiding-missiles use-cases only so far. A lot of automation I do, I really couldn't give a hoot about its performance as long as I'm not billed by the second (which you shouldn't be anyway). Because, hey it's automation! It means I can do other stuff! Likewise I don't care whether a dishwasher takes 1 or 3 hours. Adjust expectations and use the time freed: slower automation yields more time freed!
The major advantage of functional programming here is that it models the domain really well: the vast majority of my work involves taking a small number of data sets, doing a long sequence of processing, and outputting a small number of data sets. Everything in between benefits substantially from error-preventing techniques such as purity and static types.
The biggest disadvantage is how hard it is to analyze and improve performance. Working with Spark means that a pipeline that works on X GB of data will often fail on 5X with out-of-memory errors, and it's nontrivial to diagnose and fix. It's not clear to me how much of this is due to Spark itself vs. laziness.
None of this is true. This is just the generic set of plausible-sounding but meaningless complaints people who haven't actually used FP for any of these purposes tend to repeat.
> a lot of states
What is this supposed to mean? ADTs are by far the best tool for state management available in programming today, and yet there are very few non-functional languages that support them. Languages like Haskell offer extremely powerful tools for state management like monad transformers and ADT-based exception management.
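A small example of what ADT-based state management buys you, with a made-up connection type: only the legal states are representable, and the compiler can warn about unhandled cases:

    -- The legal states of a connection, and nothing else.
    data Conn
      = Closed
      | Connecting Int      -- retry count
      | Open String         -- session id

    describe :: Conn -> String
    describe Closed         = "closed"
    describe (Connecting n) = "connecting, attempt " ++ show n
    describe (Open sid)     = "open session " ++ sid

    main :: IO ()
    main = putStrLn (describe (Connecting 2))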
> high requirements for performance or memory
This is an old meme that doesn't apply at all anymore. Haskell has a number of world-class packed data management tools (repa, vector, bytestring, etc.) that can actually do a ton of cool optimizations that a similar library in e.g. C++ could not do. Any marginal overhead incurred by using a functional style (which you are not obligated to use) is typically more than offset by the fact that high-efficiency techniques that are typically inconvenient to represent (like cache-sized chunked text management) are extremely easy to use with strong enough types and flexible enough combinators. For example, if you're doing streaming text management, you can get higher performance in C than Haskell, but it's going to take 100x more effort over just using lazy ByteStrings. Tight numerical code will get unpacked and turned into more or less C-equivalent assembly.
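A minimal sketch of that lazy-ByteString style (the file names are placeholders): this uppercases a file of any size in roughly constant memory, streaming cache-friendly chunks under the hood:

    import qualified Data.ByteString.Lazy.Char8 as L
    import Data.Char (toUpper)

    main :: IO ()
    main = do
      contents <- L.readFile "input.txt"                 -- read lazily, in chunks
      L.writeFile "output.txt" (L.map toUpper contents)  -- processed chunk by chunk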
> low-level stuff.
I'm not sure what your definition of "low-level" is. For me it's doing register operations on microcontrollers, and in that case I agree. But for anything you can do on Linux, you are incorrect. Haskell has better low-level support via FFI mechanisms than, say, Java.
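For example, binding a libc function takes one line of FFI; a minimal sketch:

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types (CDouble (..))

    -- Call libm's cbrt directly; no wrapper or glue code needed.
    foreign import ccall unsafe "math.h cbrt"
      c_cbrt :: CDouble -> CDouble

    main :: IO ()
    main = print (c_cbrt 27)   -- 3.0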
Apparently I'm not alone, here's John Carmack's take on it after "trying it on" for a while in C++, he has some very strongly positive things to say about it: http://www.gamasutra.com/view/news/169296/Indepth_Functional...
You haven't listed the specific disadvantages of FP. One I noticed with a bit of dismay is that the code for a basic quicksort in functional languages looks like this (Elixir in this case: https://gist.github.com/rbishop/c742ab53b12efc162176) and is actually quite beautiful BUT performs fairly horribly. (It is rare for me to find code that looks this good/simple and yet is among the worst-performing.) The same algorithm is also slow in Haskell (which is where I first saw it).
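For reference, the Haskell version being alluded to is the classic two-liner. It reads like the textbook definition, but it builds and concatenates fresh lists at every level instead of partitioning in place, which is exactly why it performs poorly:

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

    main :: IO ()
    main = print (qsort [3, 1, 2 :: Int])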
Some might cite the preponderance of recursion instead of looping as a fault or flaw (since the language implementation then needs to implement TCO in order to not trivially blow the stack... and the programmer needs to be AWARE of how to trigger TCO), but in practice I prefer it, because a semantically-infinite-recursing process ends up being a "nicer" paradigm even if in actuality it's implemented underneath with a loop construct.
We don't necessarily need new languages for this. There are already pragmatic languages out there, for example, OCaml (and if the syntax is off-putting there's ReasonML from Facebook). I'm sure there are other examples too.
From what I understand, isn't this what functional programming excels at?
Pure languages make this complexity apparent, and thus give the programmer a true sense of how hard it is to do right. This makes it seem "harder" than in impure languages.
Impure languages let you get away with handwaving the state space, which – to be fair – is often enough. But when it isn't, you pay dearly for it after the fact.
Imperative languages in general have poor data structures for representing state (no ADTs!), but nothing is more explicit and straightforward than using mutable data or mutable containers for modeling actual mutable state. That's why languages like Scala exist. Functional without religious paradigm enforcement.
...gives me code that looks weird since I did it imperative but straightforward. The second link even shows some simplicity and less tangling vs. imperative examples. I wonder what makes you think FSMs in Haskell turn into disasters if these examples were so easy.
Besides, you shouldn't be hand-coding FSMs to begin with. The various sub-fields of programming and IT I've studied all came from different directions to the same best practice for FSMs: DSLs that generate them. LISP people did it forever. iMatix did it for reliable, distributed apps in C. Haskell embeddings go from stuff like that up to hard real-time C. There are also open-source projects that compile easy descriptions of them to about any language. The DSL can be included in the repo right next to the resultant code for documentation purposes. Hardware designers also synthesize and transform them with automated tools.
Given that, the difficulty of doing FSMs by hand should never be a problem for any language. Just don't do it by hand. Automate the tedious stuff machines are good at. Hand-code the stuff humans are good at.
Apache Spark is written in Scala. It can handle extremely large amounts of state across large clusters of machines. Is the argument here that Scala is not purely functional? Or what am I missing?
Functional paradigms can still be used (e.g. HackerNews, though let's be honest... I love it but it's not the most challenging feature set). But FP isn't as natural a fit for these problems as for the ones chosen to introduce it.
> Functional programming does not work well in programs which handle a lot of states.
I've rewritten stateful, greedy, backtracking algorithms from Java to Scala. Scala (scalaz in particular) made these algorithms much simpler with no noticeable performance penalty. The backtracking in particular became trivial thanks to immutability and the State monad.
> It does not work well in programs which have high requirements for performance or memory.
There is a cost to immutability, but mostly it doesn't matter. When it does, FP has a lot of ability to encapsulate the mutability/ugliness used for performance improvements in different ways. Haskell in particular is great at this. It has ST for mutability, and rewrite rules/inline pragmas for performance gains. There is still a lot of room to improve here, however.
> It does not work well for programs which have to do low-level stuff.
Haskell itself isn't ideal for low-level programming (low-level as in microcontrollers and FPGAs; a Raspberry Pi isn't low-level and can handle Haskell fine). But Haskell has EDSLs that compile to C/VHDL/etc. (C: Ivory; VHDL: various Lava variants). With an EDSL approach, you end up using Haskell as a macro language for your low-level language. And unlike CPP, Haskell is a powerful and easy-to-reason-about macro language!
There are other cases where immutability helps, even with regard to performance. At my day job, we use C#, with little focus on immutability. Performance has been a huge problem lately. One of the things we did was to change lots of constructors which initialized empty lists. In most cases, these lists remain empty, so we created a whole bunch of objects unnecessarily, which put a lot of pressure on the GC. The "solution" was to initialize them to null instead, which made the code a lot more brittle and cumbersome. Had we used immutable lists instead, we could've shared one single, empty object (per type, of course).
We also have lots of copy constructors which we use often - these would be completely unnecessary with immutable data.
I'd say this is completely false, and almost no one agrees with this if they've done any actual functional programming.
I've been working with Phoenix lately, which basically just hands request/response details down a pipe of functions, each of which can change the state of the response. There can be a lot of state contained in this response, but when you use Phoenix, you don't really notice it. The abstraction makes everything seamless - best web framework I've used to date.
By the same token, whenever this conversation comes up, a series of vague and poorly-formed criticisms appears, saying "you can't do low level things" without defining what on earth that means. Or, perhaps worse, conflating functional programming with Haskell in all its massive (and, perhaps fairly, a bit bloated) confusion of ideas. People can and do write high performance code in functional style and with functional tools. People even do it with laziness as a core abstraction.
The main gate to functional languages participating in, say, the Linux kernel is NOT that they are "too slow" or that "laziness makes them too confusing". It's that the Linux kernel is written entirely around the unique weirdness and expectations of C, and only languages based on or descendant to C do well there.
It's difficult to treat the core question, "What are the disadvantages of functional programming?" in the same way that it's difficult to answer, "What are the weaknesses of OO programming?" Both use terms that encompass a very wide variety of approaches and decisions, but the way we evaluate them is bottom-up, on a case-by-case basis for each individual language.
Sure, but the usual functional style has intrinsic issues that prevent it from being feasible for writing kernels in general. A kernel (especially a microkernel) spends most of its time managing state. You can use a functional language as a metalanguage for an imperative DSL (as the Atom DSL for constant-space programming uses Haskell), but you won't be writing code that looks remotely functional.
It's not that they wrote the logic in a fundamentally different paradigm from C; it's that they take care to give their system the information it needs to generate both C code and most of the desired proofs simultaneously. The functional language is fulfilling the role of metalanguage excellently, but little of that ends up in the generated code. The one functional thing that does is pattern matching, since the C equivalent (if statements and unions) is much harder to verify (and use).
> A kernel (especially a microkernel) spends most of its time managing state.
So does every computer program though. The idea that functional languages can't support mutation is a strangely persistent myth even in the face of multiple counter-examples AND 20 years of improvement via research and practical work.
> but you won't be writing code that looks remotely functional.
Many useful & powerful functional abstractions can be written to use constant space, even in Haskell.
Yes, you can embed those semantics inside functional semantics, and even use the functional language to add more static verification at the type level via things like F-Star's Hoare logic. But you're still mostly going to be shoving bits in specific places based on the result of a shallow pure function applied to bits you yanked from a specific place.
On top of that, you're not going to be able to abide the kind of allocations that functions in Haskell, OCaml, etc. can do with little provocation (and which are hard to avoid categorically), so you'll need to work within an especially restrictive DSL. Definitely no lambdas or partial application. So in the end you'll be in "Generic Stack-focused Pointer-pushing Procedural Imperative Language: The Monad". Where in this do you see any functional-ness, outside of the fact that you'll probably call your procedures functions?
> Many useful & powerful functional abstractions can be written to use constant space, even in Haskell.
Yes, but not to the degree that Atom's use cases require (where the program must allocate all memory ahead of time and therefore know a specific upper bound). This necessitates deviation from usual practices of any kind, including Haskell's.
The difference being that this "very fine imperative language" has much stronger type safety guarantees.
Maybe we can stop having ring0 buffer overflows some day when C programmer pride is sufficiently assuaged. I doubt it though... My experience with the linux kernel community is that it is a limitless void of insecurity and infighting.
> On top of that, you're not going to be able to abide the kind of allocations that functions in Haskell, OCaml, etc. can do with little provocation
OCaml does much better here (in fact, really quite amazingly well, on par with some of the greatest Common Lisp compilers, which were stunningly good at it). But yeah, Haskell has a very poor focus on the needs of the "industry" when said industry is focused on extremely tight optimizations.
That said, I refuse to confuse a specific example of FP with a traditionally academic and research focus with the discipline as a whole. That's a dodge.
> So in the end you'll be in "Generic Stack-focused Pointer-pushing Procedural Imperative Language: The Monad".
Honest question: what's the problem with this if it offers additional safety and promotes the use of stateless functions? If it all compiles down to similar code, then it's fine. People act like monadic code is not functional, when it is in fact extremely functional code.
That's what's funny about all this: imperative programming is expressible succinctly and easily in functional languages.
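A tiny illustration: this is ordinary monadic Haskell, yet it reads exactly like the imperative loop it denotes:

    import Control.Monad (forM_)
    import Data.IORef (modifyIORef', newIORef, readIORef)

    main :: IO ()
    main = do
      total <- newIORef (0 :: Int)      -- a mutable cell
      forM_ [1 .. 10] $ \i ->           -- an imperative-style loop
        modifyIORef' total (+ i)
      readIORef total >>= print         -- 55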
Would you consider a C program transliterated into a representation of C inside a functional language and then annotated with proofs to be "functional"? My argument is what you get if you write a true kernel (no sitting on top of a runtime written in something else) is going to look a lot like that would.
Is that a result of the specific language, rather than FP?
That is: One could think about building a procedural language that had... well, I'm not sure it could have Hindley-Milner types, because I'm not sure it could have higher-kinded types, but it could come close, couldn't it?
And from the other side: Does FP require very strong types? Or can it be done with something equivalent to C/C++'s type system? Or Python's?
Here's a table of processes. We want it to be an array, rather than a linked list, for efficiency reasons. When a new process is created, we don't want to copy the array, also for efficiency reasons. So we mutate the array.
> So does every computer program though.
Why do you think that FP doesn't have tools for this? Do you genuinely think that in 20+ years of research no one has thought of this? Have you investigated it?
I don't know. If you think that things like ST aren't suitable please say why (other than the larger problems with monad transformers, of course).
Are the FP tools as efficient as the direct, C-style approach? For an OS, that matters.
Are the FP tools as easy to reason about correctly (especially in a section you're not familiar with)? For an OS that's worked on by thousands of people, that matters.
In this context, what is "ST"? And, what are the larger problems with monad transformers?
I'm sorry, I decline to continue this conversation further.
In many ways the early MIT Lisp Machine morphed into a Flavors Machine in the early 80s.
Remember, Lisp is a multi-paradigm language.
At least in Linux, the table of processes is implemented as a (doubly) linked list.
Can you elaborate on why this is?
I see the parent post mentioned "lazy evaluation." Can you say why this is relevant to discussions on resource utilization and performance?
Non-strict (what you call "lazy") evaluation can make it more difficult to predict when resources are needed. In strict languages, the resources needed to evaluate f() are needed exactly at the point where you typed "f()".
With non-strict evaluation, those resources may be needed then, later, or not at all! If those resources happen to be needed when a) they are no longer available, or b) at the same time as a bunch of other computations need resources, you have problems.
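The stock example of "later, or not at all": a lazy foldl quietly builds ten million unevaluated thunks before any addition happens, so the resources are consumed far from the call site; the strict foldl' does the work where you typed it:

    import Data.List (foldl')

    main :: IO ()
    main = do
      -- foldl (+) 0 [1 .. 10 ^ 7 :: Int] would allocate a huge thunk chain first
      -- and only collapse it (or blow the stack) when the result is demanded.
      print (foldl' (+) 0 [1 .. 10 ^ 7 :: Int])   -- strict: constant space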
In particular, this example convinced me I'm missing out on something:
int x = 42;
int y = x / 2;
double z = sqrt(x*x + y*y);
return 1 + z;
Yes, functional programming matters. It lets you add two things together in C without worrying about allocating a destination operand for the result, whose clobbering won't affect anything anywhere else.
This sort of thing in turn makes it a heck of a lot easier to write OS schedulers, drivers, memory managers, codecs, ray tracers, database engines, ...
Probably zero of them, in any non-Lisp machine.
Lisp is a fairly nuts-and-bolts language suitable for device drivers, depending on what you include in it.
The basic Lisp evaluation model is close to machine language: Lisp values readily map to words (32 bit, 64 bit, whatever) stored in registers, and pushed onto a conventional stack during function calling.
Lisp compilers can optimize away environments: they can tell when some local variables or function parameters are not being captured by a closure and can live on the stack.
Lisp can compile to re-entrant machine code.
Dynamic memory allocation in contexts such as interrupt time is not off the table. In the Linux kernel, ISR's can call kmalloc; they just have to pass the GFP_ATOMIC flag. Similarly, a Lisp interrupt service routine can still cons up cells or other objects, probably in a limited way that can't trigger a full GC, or block for a page fault.
Parts of such a system can be written in a Lisp notation for a non-Lisp language, such as, for instance, a "Lispified" assembly language. Thus the saving of registers on entry into an interrupt can still be notated in Lisp; it's just not the normal Lisp, but some S-expressions denoting architecture-specific machine instructions (register-to-memory and register-to-register moves and such). When the system is built, an assembler written in Lisp converts that to the executable code.
To prove a point one is forced to create a system just to prove the others wrong, without any ROI.
On a current typical OS (Windows, Android, iOS, macOS, Linux, ...) the chance of ever seeing a Lisp-based device driver in action is near zero.
runConduitRes -- dealing with finite resources
( sourceFileBS "input.txt" -- read input.txt as binary data
.| decodeC utf8 -- decode assuming UTF-8
.| linesC -- split into lines
.| mapC parseList -- parse each line into list of text
.| mapC (get 5) -- get sixth element of list
.| catMaybeC -- discard lines with no sixth element
.| encodeC utf8 -- encode as UTF-8
.| sinkFileBS "output.txt" -- dump into output.txt
This will run in constant space and linear time, it will buffer reasonably, it will not leak file handles, it will gracefully clean up on exceptions, it will not crash when it fails to parse something correctly, and it makes the encoding assumption explicit (you cannot split into lines unless you know the encoding).
Dealing with I/O is not hard when you use the correct primitives.
I can come up with two explanations for this reliance on third-party libraries.
1) Haskell has always been a quickly evolving language attracting research-minded people which in turn go on to develop really cool libraries that are much better than conventional ways of doing things. The interpretation of this explanation is that it's simply not possible to keep the standard library up to date with the latest library developments.
It may also be the case that
2) Haskell has always been a really powerful language capable of offloading important tasks to libraries. What would need to be built-in functionality in other languages can be implemented as libraries with no sort of special treatment in Haskell, so people do it that way because they can, and because it keeps the base simple.
> I don't think Haskell is worse than any other language in that regard.
Many Haskellers are happy about this kind of stuff:
min = head . sort
I can say that newcomers also seem to get a lot of advice that's directly at odds with how the Haskell community seems to think people should use the language. For example, Learn You A Haskell starts right off the bat by encouraging people to use the "list of characters" version of strings, even though that approach courts serious performance concerns.
Python does in fact use a third-party library for efficient arrays (Numpy) and its use is so widespread as to be a defacto language unto itself.
I would guess those languages get built-in array syntax specifically because they have built-in arrays. The arrays come first, the syntax later.
There really isn't a serious attempt to build an industry-friendly programming language with functional features and laziness as the default. You'd make many decisions differently than Haskell did:
- No more language extensions, you'd fix the feature set.
- Better transparency on memory growth and GCs in tooling
- It'd be easier not to generate a ton of garbage, so the GC could be designed such that very large working sets could be handled with low latency.
- You wouldn't structure IO the way they have. Modern programs have demands for many, many side effects, and threading an IO monad through every place they can touch is common practice, but not useful. A reduced-power version of IO that would let developers push events to an IO-enabled functional reactor would be in the stdlib.
- Monad Transformers would almost certainly not be included, preferring: http://lambda-the-ultimate.org/node/4786
We shouldn't hold Haskell as the perfect expression of functional and lazy programming. It was never meant to be and it shows. It's just the tool we have right now to go to bat with.
In the meantime Haskell may well be the best option for various circumstances even with that flaw though. (And note that strict evaluation in non-total languages brings its own problems)
EDIT: It's not even a lazy IO problem since no lazy IO functions are used there.
That's what I get for not reading the code carefully...
To tell you the truth, I can't understand why the code the OP talks about has a problem, and I can't even reproduce the problem (and never saw any of the problems of lazy IO in practice either). But I also cannot say the code has no problem, and that is a big issue with the language.
I think there's a significant difference between undefined behavior (hello security flaws!) and the program crashing in a (reasonably) well-defined way.
"The ways in which one can divide up the original problem dep end directly on the ways in which one can glue solutions together Therefore to increase ones ability to mo dularise a problem conceptually one must provide new kinds of glue in the programming language."
Functional programming is great because it provides two (new) kinds of glue: function composition and lazy evaluation.
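A two-line Haskell illustration of both kinds of glue: composition snaps small parts into a pipeline, and laziness lets a finite consumer attach to a conceptually infinite producer:

    -- take/map are glued with (.); laziness makes the infinite [1..] safe.
    firstSquares :: Int -> [Int]
    firstSquares n = take n . map (^ 2) $ [1 ..]

    main :: IO ()
    main = print (firstSquares 5)   -- [1,4,9,16,25]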
Even if you accept that (I certainly do for function composition, less enthusiastic about lazy evaluation), I would say that it only provides two new kinds of glue.
We need lots of kinds of glue, in other words, lots of architectural connectors. And that means linguistic means of defining and varying architectural connectors. http://objective.st
Certainly true for composition but laziness is more the exception than the rule in today's FP languages. And it's getting an increasingly bad reputation to the point that even Haskell is slowly (and reluctantly) being dragged in the strict direction (which it will never fully reach because so much of it would break).
I'm gonna hop over to C# because that's where my favorite example lives: LINQ is a functional library that lets you describe queries on data that are executed lazily. The reason why the laziness is great in this scenario is that it lets you separate the tasks of constructing a data processing pipeline, and executing it.
The spot where it's tricky, though, is that it's a very leaky abstraction. It's easy to forget that these expressions might actually represent a lot of work, so if you get your lazy sequence object (IEnumerable<T> in C# terms) and then check if it has any values in one expression, and calculate its sum in another, then you might end up accidentally round-tripping a database twice.
Because of those sorts of stumbling blocks, I think laziness is a power that needs to be handled with care. I'm pretty sure that means you most certainly should not make it the default behavior.
"Why Functional Programming Matters" solved with C#
Exactly. I also wrote I am somewhat less enthusiastic about laziness.
So that leaves just one kind of glue as FP's contribution, which goes back to my point about the contribution being good but not sufficient.
I was fine with the post-script but for some reason it seemed thin and there was a slight blur.
I just use my browser to read the PDF (usually chrome, sometimes Firefox or IE or even Opera).
He holds up the idea that every piece of the game has the rules for what you can do with it tacked on as some sort of horrible mess, but I'm finding that, in practice, it's used to drive an amazing convenience: code completion.
Take Python, which is a language that I'm still learning. If I have an object, but I'm not sure what I can do with it - or, more particularly, I'm not sure of the names for the things I can do with it - I can get a quick reference by hitting '.-tab' to bring up an autocompletion menu, just so long as I'm interacting with a more OO Python library. If I'm trying to work with a more procedural library such as matplotlib, though, I'm SOL and end up having to dive through the documentation. (I can't think of a really great functional library for Python that I use, but the same is true for the more functional-y bits of numpy and pandas.) And matplotlib is a big library, so there's a lot of documentation. Far from being a form of organization, that fabled central store of the rules that the author holds up as an ideal ends up being an awful quagmire to wade through.
Granted, this is dependent on having an editor that does tab completion. And I'm sure it could be done with a functional library, too, but probably only if you're using a statically typed functional language, and I've no idea what a good UX would look like given how functional syntax works.
But still, given the current situation, I think I've realized my main reason for thinking that object-oriented programming also matters: Because right now, when you're working with large and complicated systems, object-oriented programming still offers the more pragmatic, human-friendly user experience.
Haskell's evaluation strategy is not something you can just change and have the rest of the language stay the same. If Haskell were strict there would be a good chance that it wouldn't be pure (see: ML variants); if it wasn't pure then IO would not be a problem; if IO wasn't a problem then Phil Wadler wouldn't have needed to invent typeclasses, etc.