Why Functional Programming Matters (1984) [pdf] (chalmers.se)
181 points by tosh on Dec 8, 2016 | 140 comments



A lot of pleas for functional programming give a long list of convincing examples that work well in a functional style. You can do this with almost any language, and it is also quite deceptive, since it is often done to convince beginners to learn another language: "look at how simple it is to program this contrived example in <language X>!"

In order to be able to use a programming language or paradigm well, it is especially useful to know what its weaknesses are. Functional programming does not work well in programs which handle a lot of states. It does not work well in programs which have high requirements for performance or memory. It does not work well for programs which have to do low-level stuff.

These are serious disadvantages, and I would like to see them highlighted more often in these introductory-style articles. Nevertheless, I think functional programming is a must to be able to write simpler programs, and I think programmers should write functions (i.e. methods that only depend on their arguments) whenever possible, for modularity and correctness reasons. I think the disadvantages can be solved by non-pure functional languages, and that there is a lot to gain for new programming languages in this area.


Functional programming largely has two schools: Treat "commands" as a separate entity from expressions, and bake "commands" into expressions. The former is largely Haskell, Clean, ... and the latter is exemplified by e.g., Standard ML or OCaml.

There are trade-offs between the two, but I definitely belong to the second school: we simply add an imperative subset to our functional language. This means we can exploit any efficiency trick an imperative program can do, including low-level stuff. The seasoned FP programmer will then proceed to encapsulate the efficiency trick in an abstract module such that the rest of the program doesn't have to worry about it. Of course, the price to pay for this is that you are losing purity. I think this is a fair trade-off but others disagree.

As for performance and memory usage: it is always a property of the architecture or system, not of the programming language. Dropping to a low-level language, such as C, usually doesn't buy you too much these days. What is more important is that most C compilers in use have vastly more time invested into optimizing routines than the typical FP compiler. Apart from that, you can easily manage the same kind of data in e.g., OCaml as you can in C.

The reason FP can beat the performance curve in practice is that you operate at a higher level of abstraction. You have more attempts at writing the correct architecture, and it is easier to change over time. Since most real-world problems are heavily time-constrained, this lets FP solutions beat low-level ones: when the C programmer has written the first working version, the FP programmer has tried 5 different solutions.

There is one area where FP tends to fare poorly: CPU-bound tasks where an inner loop has to squeeze out performance (video encoding comes to mind). But most low-level programming fares poorly here as well: either you use assembly, write GPU-level programs, use an FPGA, or create your own ASIC/SoC solution for this. Also note that moving to faster solutions here costs an order of magnitude in time and in dollars: FPGAs are, relatively speaking, expensive beasts.


The key insight is that immutability is not an end in itself, it's a tool to give us referential transparency.

If a language can allow mutability in an area of code without allowing side effects to escape from it we can have all of the reasoning advantage that FP gives us at a level above the mutations. We can also have the performance we want.


The key insight is that immutability is not an end in itself, it's a tool to give us referential transparency.

How about a programming environment tailored for small simulations or games? I could imagine such an environment maintaining referential transparency without strict immutability. Rather, such a system could provide a kind of "poor-man's" immutability by only allowing pure functions that take state from tick N and output state for tick N+1.

Perhaps such a system could even achieve high performance by exploiting its constraints? Maybe the language could essentially be built on top of a custom VM and around the mechanism of bump allocation, read/write barriers, and Beltway garbage collection?
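
Concretely, I imagine something like this (a toy Haskell sketch, all names made up):

    -- World state at tick N; the only "mutation" is producing tick N+1.
    data World = World { tick :: Int, pos :: Double, vel :: Double }

    step :: World -> World
    step w = w { tick = tick w + 1, pos = pos w + vel w }

    -- A whole run is just a lazy stream of frames:
    frames :: World -> [World]
    frames = iterate step

Each frame is pure and immutable from the program's point of view, but a sufficiently constrained runtime could overwrite tick N in place once nothing refers to it anymore.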


Are you familiar with Urbit [0]?

[0] urbit.org


Is anyone?


> If a language can allow mutability in an area of code without allowing side effects to escape from it we can have all of the reasoning advantage that FP gives us at a level above the mutations. We can also have the performance we want.

As far as I see and can tell from experience, this is exactly the power of the black-box processes of Flow-based Programming (FBP), communicating only via message passing (on buffered channels) [1].

FBP processes can be compared to "functions", but are free to change state etc. The "only communicate via message passing" means funny side-effects are effectively contained to the process itself.

Furthermore, I would argue that FBP, via its specification of keeping the network connectivity separate from the process implementation, makes FBP programs far more composable than most typical FP programs, which often allow references to other functions to be hard-coded inside function code.

It is like making the call graph of an FP program declaratively configured by a list of processes and connections:

Processes:

  A (ports out1, out2)
  B (ports in1, out1)
  C (ports in1, out1)

Connections:

  A.out1 -> B.in1
  A.out2 -> C.in1

etc.

I wrote a little text processing app in this style some time ago (The network connectivity scheme can be seen here: https://github.com/rdfio/rdf2smw/blob/master/main.go#L100-L1... ) and was amazed by the clarity and composability that emerged, even though I just used my own little experimental subset of full FBP ( http://flowbase.org ). As far as I can tell, this helped me go from the first line of code to a finished app extremely fast ... the app was written in 2 days without ever really getting stuck in any strange bug due to bad code organization, which used to happen all the time otherwise.
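
A minimal sketch of the idea in Haskell, using plain channels (all names illustrative; a real FBP runtime does much more, e.g. bounded buffers and port metadata):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan
    import Control.Monad (forM_, replicateM_)

    -- Each process sees only its own ports (channels) and may keep local state.
    source :: Chan Int -> IO ()
    source out = forM_ [1..5] (writeChan out)

    double :: Chan Int -> Chan Int -> IO ()
    double inp out = replicateM_ 5 (readChan inp >>= writeChan out . (* 2))

    sink :: Chan Int -> IO ()
    sink inp = replicateM_ 5 (readChan inp >>= print)

    -- The network is wired up separately from the process bodies:
    --   source.out -> double.in, double.out -> sink.in
    main :: IO ()
    main = do
        c1 <- newChan
        c2 <- newChan
        _  <- forkIO (source c1)
        _  <- forkIO (double c1 c2)
        sink c2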

[1] https://en.wikipedia.org/wiki/Flow-based_programming


This is what StateT in Haskell is for.


> when the C programmer has written the first working version, the FP programmer has tried 5 different solutions.

... none of which are necessarily faster than the other 4 solutions. It's not just the architecture of your code that influences performance, it's also – sometimes even more so – the underlying details of your implementation, like arrays vs. pointer-based data structures; Athas' comment describes this pretty well.

As a consequence, this statement:

> it is always a property of the architecture or system, not of the programming language

is wrong, simply because your programming language will heavily influence a lot of your productivity vs. performance tradeoffs.


> As for performance and memory usage: it is always a property of the architecture or system, not of the programming language. Dropping to a low-level language, such as C, usually doesn't buy you too much these days. What is more important is that most C compilers in use have vastly more time invested into optimizing routines than the typical FP compiler. Apart from that, you can easily manage the same kind of data in e.g., OCaml as you can in C.

This is true in principle, but not in practice. It is not just a question of whether a language is particularly amenable to optimisation and clever compilation (C is not, by the way), but also whether the baseline of naive compilation is efficient in itself. A naive C compiler will generate vastly better code than a naive compiler for most functional languages, if only because naive C compilation will primarily allocate statically and on the stack, while a functional language will perform an enormous amount of heap allocation. Furthermore, natural C programming style tends towards cache-friendly arrays and bulk allocations, while natural functional programming style tends towards lots of pointers pointing everywhere. Certainly, there are functional languages with efficient array libraries and the like, but they are less natural, and their use is often considered an "optimisation". And of course, most functional languages give you some way of accessing raw memory and essentially just writing C-in-Haskell or whatever, but then you're not really doing functional programming anymore.

Of course, if the problem is at its essence about pointer chasing, or composing IO pipelines, then functional programming is a fine choice, because the performance of the language is less important, and the ability to reason about complicated control flow is important. What I find interesting about Haskell is that due to the clear reification of IO, the compiler can actually perform optimisations on IO pipelines, such as fusion. Think about it - IO is usually the prime example of a fusion inhibitor, but GHC can actually do it for libraries such as conduit! It is a little ironic that Haskell is probably the best language I know of for describing complex IO operations.

An interesting twist is of course when you construct a functional language with an eye towards efficient compilation from the start. Then the primary compound data type is no longer the linked list, but the array, and you tend to end up with an array language. Examples include SISAL, Single Assignment C, NESL, Accelerate, Futhark, Lift, etc, which are naturally fairly efficient due to their programming model, and which provide strong functional invariants that the compiler can then exploit. These languages are all still pretty experimental and unwieldy in practice, though.


And it's not just about linked lists vs arrays, but also about the actual call stack.

FP code tends to be heavy on recursive algorithms, which means you need mutually tail recursive functions. And laziness. Somebody mentioned the State monad and the IO type. Well, these are lazy abstractions with a memory-safe bind/flatMap operation that makes them tick.

The problem with these abstractions is that they'll require building a data structure that cannot be an array. And in the case of languages like Scala or Clojure that don't have real tail recursion because of the JVM, you need to manage your own trampoline as well. We now have the Free monad, which is awesome, except that it is very heap unfriendly.

Actually, for FP you need persistent data structures everywhere, not just lists: maps, hashes, vectors. And all known implementations are some sort of trees. This is an active area of research, but building cache-friendly trees is a hard problem.
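
To make the mutual tail recursion point concrete, here is the classic pair. GHC compiles all tail calls to jumps, so this runs in constant stack space; a JVM language has to push a frame per call unless it trampolines:

    isEven, isOdd :: Int -> Bool
    isEven 0 = True
    isEven n = isOdd (n - 1)

    isOdd 0 = False
    isOdd n = isEven (n - 1)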


Saying that Clojure and Scala don't do tail call optimization because of the JVM could be a little misleading. The JVM is a pretty general compilation target; I'm sure you could implement tail call optimization in a compiler targeting the JVM in any of the usual ways. If Clojure and Scala don't do it, it must be for some other reason. If I had to guess I'd say it's probably (1) that the developers don't think TCO is that important (lots of language implementations don't have it and are perfectly usable), (2) for ease of implementation, (3) for better interop with Java libraries, or a mix of these reasons.


You can implement tail recursion optimization on top of the JVM (which is far, far away from a general compilation target, BTW), but only in the restricted form where the function calls itself. These cases can always be trivially rewritten into loops, and I know at least Scala does this just fine.

More generally tail call optimization allows you to avoid pushing a stack frame for any call in tail position. The JVM (or even LLVM) does not give you the necessary tools to implement this yourself, since you can't play games with your stack frame and return address directly.


An example of a language implementation that does full TCO and targets the JVM: http://sisc-scheme.org/

I couldn't find this right away, but I vaguely recall reading that SISC transforms to CPS to enable this (and call/cc).

I think Kawa scheme supports two calling conventions: the standard JVM one and a second one for which it can do full TCO. The second one is a bit slower so the default in Kawa is not to do full TCO.


I think you're implicitly thinking that a sensible compiler targeting the JVM would compile whatever functions the source language has to JVM methods and call them via the usual JVM method call instruction. That's certainly a good thing to do if you want to have simple Java interop and to reuse the JVM's call logic instead of simulating your own calling convention with other instructions.

I was implicitly thinking there's nothing forcing you to use the JVM calling convention. For example your compiler could transform the source to CPS. That probably would generate slow code and interop would be awkward (because normal Java libraries aren't already in CPS).
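
Roughly the CPS idea, sketched in Haskell: every non-tail call becomes a tail call whose "return" is an explicit continuation, so the JVM stack is never relied upon for pending work:

    -- Direct style: the call to f is not in tail position.
    direct :: Int -> Int
    direct n = 1 + f n
      where f x = x * 2

    -- CPS: every call is a tail call; the pending work lives in the closure k.
    directK :: Int -> (Int -> r) -> r
    directK n k = fK n (\v -> k (1 + v))
      where fK x k' = k' (x * 2)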


It appears that your claim "Scala or Clojure that don't have real tail recursion because of the JVM" is in error.

http://stackoverflow.com/questions/1677419/does-scala-suppor...

http://clojure.org/reference/special_forms#recur


I specifically mentioned mutually tail recursive functions. Self-recursive functions, like what you're describing, are much less useful or interesting.

I say that as a Scala developer that loves Scala and the JVM - the lack of tail call optimization at the JVM level is a pain in the ass, because in order to work around it you have to effectively build your own call stack.


It's not exactly an error. Neither language offers enough tail recursion to handle complex cases, though they both offer simple support.


As your own link makes clear, Scala does not have real tail recursion because of the JVM. It has a very limited form of tail recursion.


> This means we can exploit any efficiency trick an imperative program can do, including low-level stuff. The seasoned FP programmer will then proceed to encapsulate the efficiency trick in an abstract module such that the rest of the program doesn't have to worry about it.

You can do that in a pure language as well (i.e. the ST monad). Purity shouldn't be given up lightly.
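
For example, the standard sumST sketch: mutation on the inside, a pure function on the outside, with runST guaranteeing that the mutable state cannot escape:

    import Control.Monad.ST
    import Data.STRef

    sumST :: [Int] -> Int
    sumST xs = runST $ do
        ref <- newSTRef 0                       -- a genuinely mutable cell
        mapM_ (\x -> modifySTRef' ref (+ x)) xs -- imperative loop
        readSTRef ref                           -- result leaves, the ref doesn't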


I think he was talking about something like unsafePerformIO. There are plenty of tricks you cannot implement with pure monads, and there are many Haskell libs that use the unsafePerformIO trick too.


> There are plenty of tricks you cannot implement with pure monads

This made me curious, could you list some?


Anything that needs to rewrite a memory address without losing time to allocation and garbage collection.

Handling IO exceptions and masking them as different errors.

Thread, process, or machine coordination in parallel execution.


You don't need unsafePerformIO for any of these.

> Anything that needs rewriting a memory address without losing time by allocation and garbage collection.

ST gives you mutable references. Or just use IO.


You need it for doing what the OP is talking about.


ST monad is not state; it's a convoluted way of threading a value through a chain of functions. State means we put the value into the same memory location (or CPU register, or piece of tape or whatever).


ST is exactly that. You probably mean the State monad.
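
For the record, the State monad is the value-threading one; State s a is just a function s -> (a, s) in disguise, and nothing is mutated anywhere:

    import Control.Monad.State

    counter :: State Int Int
    counter = do
        n <- get
        put (n + 1)
        return n

    -- runState counter 0 == (0, 1): old value out, new state threaded on.

ST, by contrast, gives you genuine mutable cells (STRef), just scoped by runST so they can't leak.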


> There are trade-offs between the two, but I definitely belong to the second school: we simply add an imperative subset to our functional language. This means we can exploit any efficiency trick an imperative program can do, including low-level stuff. The seasoned FP programmer will then proceed to encapsulate the efficiency trick in an abstract module such that the rest of the program doesn't have to worry about it. Of course, the price to pay for this is that you are losing purity. I think this is a fair trade-off but others disagree.

I think the trade-off is unnecessary. If your language can embed a convenient method for analyzing the imperative state in the way it is typically used (F* is a good example), you can get a similar result by lifting your internally stateful but externally pure code into truly pure functions, even if it wants to do something more complicated than simpler state monads can do conveniently. The compiler can then understand the language's state construction and write the imperative code directly when generating machine code, with little more infrastructure than OCaml needs to allow you to do so anywhere.

This way you get to keep purity without requiring you to lose performance or jump through hoops to keep your pure code downstream of your impure code.


> Functional programming does not work well in (cont..)

You state earlier that programmers can and perhaps should write in a functional style (as far as feasible) in the languages that do excel at the above problems. So you probably meant that it is "functional languages", rather than "functional programming", that "does not work well" there.

> (..cont) programs which handle a lot of states. It does not work well in programs which have high requirements for performance or memory. It does not work well for programs which have to do low-level stuff.

The main problem with these issues is that there's no serious investment into optimizing the compilers of the very few pure-and-lazy languages (for me the holy grail of FP; almost all of academic origin) for real-world loads such as you outlined. The described problem spaces themselves ("lots of states" -- huh?) are surprisingly simple to model in the type systems of the languages I'm into, such as Haskell. Side effects are most elegantly solved, and defining them yields succinct and non-ambiguous code. And really, just like OO people design classes to describe their problem spaces and stuff their logic into, so you start from your types in FP.

That being said, so far I'm rather satisfied with GHC speeds and resulting binary performance. That is, of course, for non mission-critical-realtime-high-frequency-trades-while-raytracing-while-guiding-missiles use-cases only so far. A lot of automation I do, I really couldn't give a hoot about its performance as long as I'm not billed by the second (which you shouldn't be anyway). Because, hey it's automation! It means I can do other stuff! Likewise I don't care whether a dishwasher takes 1 or 3 hours. Adjust expectations and use the time freed: slower automation yields more time freed!

Wait..


The dishwasher case comes down to: period between meals; and capex/storage for duplicating crockery and kitchen utensils. So I would say there is a threshold around 3 hours, 1 hour good, 5 hours bad. It is also a factor of volume, so 1 person unnecessary, 4 person family (or whatever they call it these days), probably essential.


That's one of the reasons I loved the "programming languages" course on Coursera [0]. The course went through 3 languages (ML, Racket and Ruby), giving 2 weeks to each language, and then spent time contrasting the weaknesses and strengths of the different languages and their paradigms.

[0]https://www.coursera.org/learn/programming-languages (Starts up again on the 9th of January :) )


Seconded! Programming Languages is wonderful. Easily the best course I've taken on Coursera (and better than Odersky's courses).


I write code for a living (data science), but am not really a programmer. So far, my favorite language is Scala with Spark. Other things I've worked with: Python, Clojure, Haskell, R, SAS, tons and tons of SQL.

The major advantage of functional programming here is that it models the domain really well: the vast majority of my work involves taking a small number of data sets, doing a long sequence of processing, and outputting a small number of data sets. Everything in between benefits substantially from error-preventing techniques such as purity and static types.

The biggest disadvantage is how hard it is to analyze and improve performance. Working with Spark means that a pipeline that works on X GB of data will often fail on 5X with out-of-memory errors, and it's nontrivial to diagnose and fix. It's not clear to me how much of this is due to Spark itself vs. laziness.


> Functional programming does not work well in programs which handle a lot of states. It does not work well in programs which have high requirements for performance or memory. It does not work well for programs which have to do low-level stuff.

None of this is true. This is just the generic set of plausible-sounding but meaningless complaints people who haven't actually used FP for any of these purposes tend to repeat.

> a lot of states

What is this supposed to mean? ADTs are by far the best tool for state management available in programming today, and yet there are very few non-functional languages that support them. Languages like Haskell offer extremely powerful tools for state management like monad transformers and ADT-based exception management.
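
A trivial sketch of what I mean (domain made up): the state space is a closed set of cases, so the compiler checks that every transition handles every state:

    data Conn
        = Disconnected
        | Connecting Int      -- retry count
        | Connected  String   -- session id

    onTimeout :: Conn -> Conn
    onTimeout (Connecting n) | n < 3 = Connecting (n + 1)
    onTimeout _                      = Disconnected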

> high requirements for performance or memory

This is an old meme that doesn't apply at all anymore. Haskell has a number of world-class packed data management tools (repa, vector, bytestring, etc.) that can actually do a ton of cool optimizations that a similar library in e.g. C++ could not do. Any marginal overhead incurred by using a functional style (which you are not obligated to use) is typically more than offset by the fact that high-efficiency techniques that are typically inconvenient to represent (like cache-sized chunked text management) are extremely easy to use with strong enough types and flexible enough combinators. For example, if you're doing streaming text management, you can get higher performance in C than Haskell, but it's going to take 100x more effort over just using lazy ByteStrings. Tight numerical code will get unpacked and turned into more or less C-equivalent assembly.

> low-level stuff.

I'm not sure what your definition of "low-level" is. For me it's doing register operations on microcontrollers, and in that case I agree. But for anything you can do on Linux, you are incorrect. Haskell has better low-level support via FFI mechanisms than, say, Java.


Is the 'A' in ADT algebraic or abstract?


Algebraic.


The longer I've spent programming, the more the functional paradigm at least (even within non-functional languages) has made sense, mostly as a labor-saving device (like all things in programming) resulting in fewer bugs, easier modularity, easier unit testability, etc. etc.

Apparently I'm not alone, here's John Carmack's take on it after "trying it on" for a while in C++, he has some very strongly positive things to say about it: http://www.gamasutra.com/view/news/169296/Indepth_Functional...

You haven't listed the specific disadvantages of FP. One I noticed with a bit of dismay is that the code for a basic quicksort in functional languages looks like this (Elixir in this case: https://gist.github.com/rbishop/c742ab53b12efc162176) and is actually quite beautiful BUT performs fairly horribly. (It is rare for me to find code that looks this good/simple and yet is one of the worst-performing.) The same algorithm is also slow in Haskell (which is where I first saw it).
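
The Haskell version reads the same way and shares the same problem: it is not a true in-place quicksort, since every level of recursion allocates fresh lists:

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p]
                ++ [p]
                ++ qsort [x | x <- xs, x >= p]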

Some might cite the preponderance of recursion instead of looping as a fault or flaw (since the language implementation needs to then implement TCO in order to not trivially blow the stack... and the programmer needs to be AWARE of how to trigger TCO), but in practice I prefer it because a semantically-infinite-recursing process ends up being a "nicer" paradigm even if in actuality it's implemented underneath with a loop construct.


> I think the disadvantages can be solved by non-pure functional languages, and that there is a lot to gain for new programming languages in this area.

We don't necessarily need new languages for this. There are already pragmatic languages out there, for example, OCaml [1] (and if the syntax is off-putting there's ReasonML [2] from Facebook). I'm sure there are other examples too.

[1] http://ocaml.org

[2] http://facebook.github.io/reason/


> Functional programming does not work well in programs which handle a lot of states

From what I understand, isn't this what functional programming excels at?


Handling "a lot of states" (rather, a large state space) is a hard problem in any paradigm.

Pure languages make this complexity apparent, and thus give the programmer a true sense of how hard it is to do right. This makes it seem "harder" than in impure languages.

Impure languages let you get away with handwaving the state space, which – to be fair – is often enough. But when it isn't, you pay dearly for it after the fact.


Yes. You can model state easily in functional programming languages but the technique is unfamiliar to imperative programmers since it's more explicit. This is a very good thing.


More explicit? Try more convoluted. Ever try to build an event-driven FSM in Haskell? I've seen several try, including myself...the results of which have all been complexity disasters.

Imperative languages in general have poor data structures for representing state (no ADTs!), but nothing is more explicit and straightforward than using mutable data or mutable containers for modeling actual mutable state. That's why languages like Scala exist. Functional without religious paradigm enforcement.


I don't do FP or Haskell but did do FSM's in imperative languages. Looking up Haskell's event-driven FSM's...

https://wiki.haskell.org/Real_World_Applications/Event_Drive...

https://accu.org/index.php/journals/2199

...gives me code that looks weird since I did imperative but straight-forward. The second link even shows some simplicity and less tangling vs imperative examples. I wonder what makes you think FSM's in Haskell turn into disasters if these examples were so easy.

Besides, you shouldn't be hand-coding FSM's to begin with. The various sub-fields of programming and IT I've studied all came from different directions to the same best practice for FSM's: DSL's that generate them. LISP people did it forever. iMatix did it for reliable, distributed apps in C. Haskell embeddings go from stuff like that up to hard real-time C. There are also open-source projects that compile easy descriptions of them to about any language. The DSL can be included in the repo right next to the resultant code for documentation purposes. Hardware designers also synthesize and transform them with automated tools.

Given that, the difficulty of doing FSM's by hand should never be a problem for any language. Just don't do it by hand. Automate the tedious stuff machines are good at. Hand-code the stuff humans are good at.


What's the problem with event-driven state machines in Haskell that leads to "complexity disasters"? Maybe you could show an example of a non-disastrous imperative variant, and we'll see if I or someone else can come up with a Haskell translation that isn't disastrously complex?
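
For a flavor of it, a bare-bones event-driven FSM is just an ADT plus a step function (a toy sketch, nothing like a real protocol machine):

    data Door  = Open | Closed | Locked deriving Show
    data Event = Push | Pull | Lock | Unlock

    step :: Door -> Event -> Door
    step Closed Pull   = Open
    step Open   Push   = Closed
    step Closed Lock   = Locked
    step Locked Unlock = Closed
    step s      _      = s          -- ignore events that don't apply

    run :: Door -> [Event] -> Door
    run = foldl step

The interesting complexity usually lives in what the events carry and what effects transitions trigger, so I'd genuinely like to see the imperative variant this style can't match.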


-- Functional programming does not work well in programs which handle a lot of states

Apache Spark is written in Scala. It can handle extremely large amounts of state across large clusters of machines. Is the argument here that Scala is not purely functional? Or what am I missing?


By "lots of state" I don't think he meant "lots of data", but rather data that get changed a lot: Q&A forum, CRUD application, etc.

Functional paradigms can still be used (e.g. HackerNews, though let's be honest... I love it but it's not the most challenging feature set). But FP isn't as natural a fit there as it is for the problems chosen to introduce it.


Not saying you're wrong about what he meant, but Spark can also keep state across the cluster, not just data. Check out the "UpdateStateByKey Operation" in the following page.

http://spark.apache.org/docs/latest/streaming-programming-gu...


I haven't taken a look at the Spark codebase, but it is very easy to write Scala code that is not purely functional.


NOTE: I agree with your comment as a whole but I want to give some counterpoints to some things you've said. These counterpoints all do require more "advanced" FP ability, but as FP becomes more popular, these techniques will become less "advanced."

> Functional programming does not work well in programs which handle a lot of states.

I've rewritten stateful, greedy, backtracking algorithms from Java to Scala. Scala (scalaz in particular) made these algorithms much simpler with no noticeable performance penalty. The backtracking in particular became trivial thanks to immutability and the State monad.

> It does not work well in programs which have high requirements for performance or memory.

There is a cost to immutability, but mostly it doesn't matter. When it does, FP has a lot of ability to encapsulate the mutability/ugliness used for performance improvements in different ways. Haskell in particular is great at this. It has ST for mutability, and rewrite rules/inline pragmas for performance gains. There still is a lot of room to improve here, however.

> It does not work well for programs which have to do low-level stuff.

Haskell itself isn't ideal for low-level programming (low-level as in microcontrollers and FPGAs. Raspberry Pi isn't low-level and can handle Haskell fine). But Haskell has EDSLs that compile to C/VHDL/etc (C: Ivory, VHDL: various Lava variants). With an EDSL approach, you end up using Haskell as a macro language for your low-level language. And unlike CPP, Haskell is a powerful and easy-to-reason-about macro language!


With regards to performance, sometimes taking the penalty of immutability in the small leads to gains in the large. The React wrappers in ClojureScript are a great example of this - since all data is immutable, shouldComponentUpdate is trivially and automatically implemented for you, meaning React doesn't have to diff parts of the virtual DOM that are guaranteed to be identical.

There are other cases where immutability helps, even with regards to performance. At my day job, we use C#, with little focus on immutability. Performance has been a huge problem lately. One of the things we did was to change lots of constructors which initialized empty lists. In most cases, these lists remain empty, so we created a whole bunch of objects unnecessarily, which put a lot of load on the GC. The "solution" was to initialize them to null instead, which made the code a lot more brittle and cumbersome. Had we used immutable lists instead, we could've shared one single, empty object (per type, of course).

We also have lots of copy constructors which we use often - these would be completely unnecessary with immutable data.


>Functional programming does not work well in programs which handle a lot of states.

I'd say this is completely false, and almost no one agrees with this if they've done any actual functional programming.


I'd say that state is actually easier to handle in functional languages. You just need the right abstractions for it.

I've been working with Phoenix lately, which basically just hands request/response details down a pipe of functions, which all can change the state of the response. There can be a lot of state contained in this response, but when you use Phoenix, you don't really notice it. The abstraction makes everything seamless - best web-framework I've used to date.


> In order to be able to use a programming language or paradigm well, it is especially useful to know what its weaknesses are. Functional programming does not work well in programs which handle a lot of states. It does not work well in programs which have high requirements for performance or memory. It does not work well for programs which have to do low-level stuff.

By the same token, whenever this conversation comes up, a series of vague and poorly-formed criticisms come up saying, "You can't do low level things" without defining what on earth that means. Or, perhaps worse, conflating FP with Haskell in all its massive (and, perhaps fairly, a bit bloated) confusion of ideas. People can and do write high performance code in functional style and with functional tools. People even do it with laziness as a core abstraction.

The main gate to functional languages participating in, say, the Linux kernel is NOT that they are "too slow" or that "laziness makes them too confusing". It's that the Linux kernel is written entirely around the unique weirdness and expectations of C, and only languages based on or descendant to C do well there.

It's difficult to treat the core question, "What are the disadvantages of functional programming?" in the same way that it's difficult to answer, "What are the weaknesses of OO programming?" Both use terms that encompass a very wide variety of approaches and decisions, but the way we evaluate them is bottom-up, on a case-by-case basis for each individual language.


> The main gate to functional languages participating in, say, the Linux kernel is NOT that they are "too slow" or that "laziness makes them too confusing". It's that the Linux kernel is written entirely around the unique weirdness and expectations of C, and only languages based on or descendant to C do well there.

Sure, but the usual functional style has intrinsic issues that prevent it from being feasible for writing kernels in general. A kernel (especially a microkernel) spends most of its time managing state. You can use a functional language as a metalanguage for an imperative DSL (as the Atom DSL for constant-space programming uses Haskell), but you won't be writing code that looks remotely functional.


Does COGENT look functional or imperative in programming style to you? Genuine question as I don't do FP. COGENT is the latest in attempts at bringing functional, verified languages into systems space. A team already redid the ext2 filesystem with it.

https://ts.data61.csiro.au/projects/TS/cogent.pml


COGENT takes pretty much precisely the approach I described below: it uses domain-specific structures in a functional language to encode verified imperative code. If you look at pretty much any code in the ext2 pilot (here[1] is as good as any) you'll notice that a lot of the functions called are things like destroy_Ext2DirEnt (which unless they named things poorly, takes a clear action), as well as a lot of passing around of memory addresses to functions that write them (e.g. deserialise_Ext2DirEnt2).

It's not that they wrote the logic in a fundamentally different paradigm from C, it's that they take care to give their system the information it needs to both generate C code and most of the desired proofs simultaneously. The functional language is fulfilling the role of metalanguage excellently, but little of that ends up in the generated code. The one functional thing that does is pattern matching, since the C equivalent (if statements and unions) is much harder to verify (and use).

[1]: https://github.com/NICTA/cogent/blob/master/impl/ext2/cogent...


Thanks for the insightful reply. I agree that code does look very imperative. Likely due to goal of C synthesis as you said.


> Sure, but the usual functional style has intrinsic issues that prevent it from being feasible for writing kernels in general.

Not really?

> A kernel (especially a microkernel) spends most of its time managing state.

So does every computer program though. The idea that functional languages can't support mutation is a strangely persistent myth even in the face of multiple counter-examples AND 20 years of improvement via research and practical work.

> but you won't be writing code that looks remotely functional.

Many useful & powerful functional abstractions can be written to use constant space, even in Haskell.


My point is not that functional languages can't support mutation, I'm well aware of the whole gamut from State to F-Star, Eff, and Idris. My point is that you're going to spend almost all of your time explicitly mutating things, using whatever functional language as a "very fine imperative language."

Yes, you can embed those semantics inside functional semantics, and even use the functional language to add more static verification at the type level via things like F-Star's Hoare logic. But you're still mostly going to be shoving bits in specific places based on the result of a shallow pure function applied to bits you yanked from a specific place.

On top of that, you're not going to be able to abide the kind of allocations that functions in Haskell, OCaml, etc. can do with little provocation (and which are hard to avoid categorically), so you'll need to work within an especially restrictive DSL. Definitely no lambdas or partial application. So in the end you'll be in "Generic Stack-focused Pointer-pushing Procedural Imperative Language: The Monad". Where in this do you see any functional-ness, outside of the fact that you'll probably call your procedures functions?

> Many useful & powerful functional abstractions can be written to use constant space, even in Haskell.

Yes, but not to the degree that Atom's use cases require (where the program must allocate all memory ahead of time and therefore know a specific upper bound). This necessitates deviation from usual practices of any kind, including Haskell's.


> My point is not that functional languages can't support mutation, I'm well aware of the whole gamut from State to F-Star, Eff, and Idris. My point is that you're going to spend almost all of your time explicitly mutating things, using whatever functional language as a "very fine imperative language."

The difference being that this "very fine imperative language" has much stronger type safety guarantees.

Maybe we can stop having ring0 buffer overflows some day when C programmer pride is sufficiently assuaged. I doubt it though... My experience with the linux kernel community is that it is a limitless void of insecurity and infighting.

> On top of that, you're not going to be able to abide the kind of allocations that functions in Haskell, OCaml, etc. can do with little provocation

OCaml does much better here (in fact, really quite amazingly well, on par with some of the greatest Common Lisp distribution compilers, which were stunningly good at it). But yeah, Haskell has a very poor focus on the needs of the "industry" when said industry is focused around extremely tight optimizations.

That said, I refuse to confuse a specific example of FP with a traditionally academic and research focus with the discipline as a whole. That's a dodge.

> So in the end you'll be in "Generic Stack-focused Pointer-pushing Procedural Imperative Language: The Monad".

Honest question: what's the problem with this if it offers additional safety and promotes the use of stateless functions? If it all compiles down to similar code, then it's fine. People act like monadic code is not functional, when it is in fact extremely functional code.

That's what's funny about all this: imperative programming is expressible succinctly and easily in functional languages.


It's not a problem at all, it's exactly the direction I'd like to see things go as well. It's just that your code is still a stranger in a strange land in these cases. Your proofs will be full of the typical hallmarks of functional programming, and for the stuff that is executed at runtime you'll definitely want some pattern matching, but there will be no closures or higher-order functions and limited opportunities for monads, functors, etc.

Would you consider a C program transliterated into a representation of C inside a functional language and then annotated with proofs to be "functional"? My argument is what you get if you write a true kernel (no sitting on top of a runtime written in something else) is going to look a lot like that would.


> The difference being that this "very fine imperative language" has much stronger type safety guarantees.

Is that a result of the specific language, rather than FP?

That is: One could think about building a procedural language that had... well, I'm not sure it could have Hindley-Milner types, because I'm not sure it could have higher-kinded types, but it could come close, couldn't it?

And from the other side: Does FP require very strong types? Or can it be done with something equivalent to C/C++'s type system? Or Python's?


A kernel spends most of its time managing mutable state.

Here's a table of processes. We want it to be an array, rather than a linked list, for efficiency reasons. When a new process is created, we don't want to copy the array, also for efficiency reasons. So we mutate the array.

> So does every computer program though.

Not really.


> A kernel spends most of its time managing mutable state.

Why do you think that FP doesn't have tools for this? Do you genuinely think that in 20+ years of research no one has thought of this? Have you investigated it?

I don't know. If you think that things like ST aren't suitable please say why (other than the larger problems with monad transformers, of course).


I know that FP has tools for that. But if the problem is primarily managing mutable state, isn't a tool that lets you directly see what you're doing a better fit?

Are the FP tools as efficient as the direct, C-style approach? For an OS, that matters.

Are the FP tools as easy to reason about correctly (especially in a section you're not familiar with)? For an OS that's worked on by thousands of people, that matters.

In this context, what is "ST"? And, what are the larger problems with monad transformers?


What's most frustrating about these conversations is what I've noted in another thread: people feel really comfortable talking about what FP can and can't do without even acquainting themselves with 10+ year old techniques in the field.

I'm sorry, I decline to continue this conversation further.


How do you think Xerox PARC, Genera and TI managed to write Lisp based OSes?


By using all the imperative, procedural and object-oriented features of Lisp.

In many ways the early MIT Lisp Machine morphed into a Flavors Machine in the early 80s.

Remember, Lisp is a multi-paradigm language.


But that is the thing, none of the successful FP languages in the industry are pure FP, rather multi-paradigm, even if functional first.


> Here's a table of processes. We want it to be an array, rather than a linked list, for efficiency reasons.

At least in Linux, the table of processes is implemented as a (doubly) linked list.


I wasn't going to say anything but...


TIL.


Here is cat in brainfuck

  ,[.,]
Why it is not the year of brainfuck programming is beyond me


>"It does not work well in programs which have high requirements for performance or memory"

Can you elaborate on why this is?

I see the parent post mentioned "lazy evaluation." Can you say why this is relevant to discussions on resource utilization and performance?


Higher levels of abstraction are harder to translate to efficient machine code.

Non-strict (what you call "lazy") evaluation can make it more difficult to predict when resources are needed. In strict languages, the resources needed to evaluate f() are needed exactly at the point where you typed "f()".

With non-strict evaluation, those resources may be needed then, later, or not at all! If those resources happen to be needed when a) they are no longer available, or b) at the same time as a bunch of other computations need resources, you have problems.
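
The textbook Haskell example of this:

    import Data.List (foldl')

    leaky, fine :: Integer
    leaky = foldl  (+) 0 [1 .. 10 ^ 7]   -- builds ten million (+) thunks first
    fine  = foldl' (+) 0 [1 .. 10 ^ 7]   -- forces each partial sum; O(1) space

Both mean the same thing; only the evaluation time of the intermediate sums differs, and that difference is the whole resource story.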


That makes total sense. Thanks for the explanation.


I'm too much of a noob to know what I'm talking about, but so far this series of talks has convinced me FP with strong typing brings something new to the table, in the same way that, say, C brings something over assembly and s-expressions in Lisp bring something over non-s-expression languages.

http://fsharpforfunandprofit.com/

In particular this example convinced me I'm missing out on something

http://fsharpforfunandprofit.com/posts/designing-for-correct...


Well, C brings ... functional programming over assembly, right? Look, calculation with machine words, but without side effects:

   int x = 42;
   int y = x / 2;
   double z = sqrt(x*x + y*y);
   return 1 + z;
We're just binding fresh variables, and nesting expressions that evaluate terms and return results. No assignment anywhere.

Yes, functional programming matters. It lets you add two things together in C without worrying about allocating a destination operand for the result, whose clobbering won't affect anything anywhere else.

This sort of thing in turn makes it a heck of a lot easier to write OS schedulers, drivers, memory managers, codecs, ray tracers, database engines, ...


I don't believe many are advocating functional programming as a replacement for hand-crafted C, but it is a good alternative to imperative high-level languages which often make big performance sacrifices but offer only small improvements in reliability and abstraction/composition. I absolutely believe the correct way to build software is with language layers. Ideally using FP or declarative languages where you can. Python gets this partially right by using C libraries for all its performance-critical work. C++ is IMHO the wrong approach, a jack of all trades and a master of none.


Why not? Lisp did it before C was even born.


How many device drivers have ever been written in Lisp? I'm not necessarily advocating C as the best way to write high performance, non-allocating code either.


Why, all of them, in a Lisp machine.

Probably zero of them, in any non-Lisp machine.

Lisp is a fairly nuts-and-bolts language suitable for device drivers, depending on what you include in it.

The basic Lisp evaluation model is close to machine language: Lisp values readily map to words (32 bit, 64 bit, whatever) stored in registers, and pushed onto a conventional stack during function calling.

Lisp compilers can optimize away environments: they can tell when some local variables or function parameters are not being captured by a closure and can live on the stack.

Lisp can compile to re-entrant machine code.

Dynamic memory allocation in contexts such as interrupt time is not off the table. In the Linux kernel, ISR's can call kmalloc; they just have to pass the GFP_ATOMIC flag. Similarly, a Lisp interrupt service routine can still cons up cells or other objects, probably in a limited way that can't trigger a full GC, or block for a page fault.

Parts of such a system can be written in a Lisp notation for a non-Lisp language. Such as, for instance, a "Lispified" assembly language. Thus the saving of registers on entry into an interrupt can still be notated in Lisp; it's just not the normal Lisp, but some S-expressions denoting architecture-specific machine instructions (register-to-memory and register-to-register moves and such). When the system is built, an assembler written in Lisp converts that to the executable code.


Nice write-up. I'm not sure "Lispified" assembly language really counts as functional programming though. It suggests the layering approach that I suggested originally. Of course, Lisp's ubiquitous linked lists and dynamic types are issues for efficient compilation that are not shared by every functional language. The Mirage project is having a lot of success writing a lot of low-level code in OCaml.


Why would you slavishly try to stick to a functional approach when making device drivers in Lisp (or anything else)?


Ever heard of Interlisp-D, Lisp Machines?


Yes I have. But my context was modern mainstream hardware. We won't convince anyone of the merits of functional programming by pointing out the existence of these old machines. My argument was "have your cake and eat it too": use FP and low-level languages together.


The problem is that, like Alan Kay states, the industry has become pop culture, and very few believe in what can be achieved; rather, they take what they can see today.

To prove a point one is forced to create a system just to prove the others wrong, without any ROI.


Completely agree. I should have worded my post better.


The problem: this was long ago on an OS written largely in Lisp.

On a current typical OS (Windows, Android, iOS, macOS, Linux, ...) the chance of ever seeing a Lisp-based device driver in action is near zero.


Just because those ideas failed to get mass adoption doesn't mean we should ignore them.


I don't think anyone is intentionally ignoring them, just accepting the present reality of hardware designed for C. When Intel finally starts selling Reduceron-style hardware graph reduction for my Haskell, I'll be the first in line.


The top link on /r/haskell right now [1] is someone complaining that their program runs out of space due to a subtle interaction between laziness and IO. I think it's safe to say that we have tried laziness as the default and have learned that it's the wrong default, because it plays havoc with space and with IO.

[1] https://www.reddit.com/r/haskell/comments/5h6emf/haskell_run...


Look, I fixed their code in so many ways:

    main =
        runConduitRes                        -- dealing with finite resources
             ( sourceFileBS "input.txt"      -- read input.txt as binary data
            .| decodeC utf8                  -- decode assuming UTF-8
            .| linesC                        -- split into lines
            .| mapC parseList                -- parse each line into list of text
            .| mapC (get 5)                  -- get sixth element of list
            .| catMaybeC                     -- discard lines with no sixth element
            .| encodeC utf8                  -- encode as UTF-8
            .| sinkFileBS "output.txt"       -- dump into output.txt
             )
And it's still very functional, if not more so!

This will run in constant space and linear time, it will buffer reasonably, it will not leak file handles, it will gracefully clean up on exceptions, it will not crash when it fails to parse something correctly, and it makes the encoding assumption explicit (you cannot split into lines unless you know the encoding).

Dealing with I/O is not hard when you use the correct primitives.


It seems to me that part of the problem he was having is that he didn't really understand how these methods were being implemented internally. I know that all languages inevitably have these problems, but how does the number of leaky abstractions in Haskell compare to other languages?


I don't think Haskell is worse than any other language in that regard. What perhaps sets it apart is how much of its functionality is implemented in third-party libraries, which may be difficult for a beginner to come to grips with. "Why should I download a library to use efficient arrays? Shouldn't they be built in, like in Python?"

I can come up with two explanations for this reliance on third-party libraries.

1) Haskell has always been a quickly evolving language attracting research-minded people who in turn go on to develop really cool libraries that are much better than conventional ways of doing things. The interpretation of this explanation is that it's simply not possible to keep the standard library up to date with the latest library developments.

It may also be the case that

2) Haskell has always been a really powerful language capable of offloading important tasks to libraries. What would need to be built-in functionality in other languages can be implemented as libraries with no sort of special treatment in Haskell, so people do it that way because they can, and because it keeps the base simple.


> > how does the number of leaky abstractions in Haskell compare to other languages?

> I don't think Haskell is worse than any other language in that regard.

Many Haskellers are happy about this kind of stuff:

    min = head . sort
which is the very definition of a leaky abstraction. The blame lies with laziness, because it allows code to depend on implementation details of other code in crazy ways: "sort is O(n log n) unless you ask for only the first element, in which case it's O(n). What if you ask for the last element? Uhhh..."


I wouldn't depend on invocations of sort to be anything but the worst case.


Absolutely agreed! 'head . sort' is efficient by accident!


I'm leaning toward #2, largely because I know that's the Scheme approach, and I get the impression that Schemers and Haskellers have similar feelings about protecting the design purity of the language. By contrast, the Python community has always been pragmatic, arguably to a fault.

I can say that newcomers also seem to get a lot of advice that's directly at odds with how the Haskell community seems to think people should use the language. For example, Learn You A Haskell starts right off the bat by encouraging people to use the "list of characters" version of strings, even though that approach courts serious performance concerns.


>"Why should I download a library to use efficient arrays? Shouldn't they be built in, like in Python?"

Python does in fact use a third-party library for efficient arrays (NumPy) and its use is so widespread as to be a de facto language unto itself.


It might be simpler than that. Array syntax is not built into Haskell, therefore the array implementation needn't be built in.


I'm sure I misunderstand you, because it sounds to me like other languages were designed on the basis of "Wait! How did this array syntax get built into my language? Oh well, now that it's there I guess I might as well build in arrays..."

I would guess those languages get built-in array syntax specifically because they have built-in arrays. The arrays come first, the syntax later.


I suspect that typically both arrive at the same time!


I'm not sure why we keep pretending Haskell is supposed to work well. It's half programming environment, half research environment. This shows in MANY ways. And don't get me wrong, the Haskell team tries, but ...

There really isn't a serious attempt to build an industry-friendly programming language with functional features and laziness as a default. You'd make many different decisions than Haskell:

- No more language extensions, you'd fix the feature set.

- Better transparency on memory growth and GCs in tooling

- It'd be easier not to generate a ton of garbage, so the GC could be designed such that very large working sets could be handled with low latency.

- You wouldn't structure IO the way they have. Modern programs have demands for many, many side effects, and threading an IO monad through every place they can touch is common practice, but not useful. A reduced-power version of IO that would let developers push events to an IO-enabled functional reactor would be in the stdlib.

- Monad Transformers would almost certainly not be included, preferring: http://lambda-the-ultimate.org/node/4786

We shouldn't hold Haskell as the perfect expression of functional and lazy programming. It was never meant to be and it shows. It's just the tool we have right now to go to bat with.


I believe PureScript solves most of these problems.


Thanks! After your comment I finally got around to checking out PureScript, and it looks nice (aside from the crazy tooling requirements which are typical for 2016).


Sure. I'll be moving to Idris as soon as it matures, since by enforcing totality you gain the advantages of laziness without the problems.

In the meantime Haskell may well be the best option for various circumstances even with that flaw though. (And note that strict evaluation in non-total languages brings its own problems)


No, that's not laziness per se, that's "lazy IO", something somewhat different.

EDIT: It's not even a lazy IO problem since no lazy IO functions are used there.


The problem is laziness. Lazy IO just makes it happen with files, which is more noticeable. In regular lazy code it happens with memory instead (space leaks).


EDIT: I was wrong. It seems to actually just be a hideously inefficient in-memory representation of the data (akin to List[Char]) perhaps plus a space leak.

That's what I get for not reading the code carefully...


These are not problems, only small details that you have to be careful of... not a large tradeoff to pay.


That's like saying that bounds overflow in C isn't a problem, just something you have to be careful of.

To tell you the truth, I can't understand why the code the OP talks about has a problem, and I can't even reproduce the problem (and never saw any of the problems of lazy IO in practice either). But I also can not say the code has no problem, and that is a big issue with the language.


> That's like saying that bounds overflow in C isn't a problem, just something you have to be careful of.

I think there's a significant difference between undefined behavior (hello security flaws!) and the program crashing in a (reasonably) well-defined way.


To me, this is the most important part of the text (p2, bottom):

"The ways in which one can divide up the original problem dep end directly on the ways in which one can glue solutions together Therefore to increase ones ability to mo dularise a problem conceptually one must provide new kinds of glue in the programming language."

Functional programming is great because it provides two (new) kinds of glue: function composition and lazy evaluation.
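
The paper's own square-root example shows both kinds of glue at once. A rough transcription into Haskell (the paper uses Miranda-style notation; names lightly adapted):

  -- Producer and consumer are written separately, then glued together;
  -- laziness means only as many approximations are computed as needed.
  next :: Double -> Double -> Double
  next n x = (x + n / x) / 2

  within :: Double -> [Double] -> Double
  within eps (a : b : rest)
    | abs (a - b) <= eps = b
    | otherwise          = within eps (b : rest)

  sqroot :: Double -> Double -> Double -> Double
  sqroot a0 eps n = within eps (iterate (next n) a0)

Calling sqroot 1 0.000001 2 walks the infinite list of approximations only as far as the tolerance demands.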

Even if you accept that (I certainly do for function composition, less enthusiastic about lazy evaluation), I would say that it only provides two new kinds of glue.

We need lots of kinds of glue, in other words, lots of architectural connectors. And that means linguistic means of defining and varying architectural connectors. http://objective.st


> Functional programming is great because it provides two (new) kinds of glue: function composition and lazy evaluation.

Certainly true for composition, but laziness is more the exception than the rule in today's FP languages. And it's getting an increasingly bad reputation, to the point that even Haskell is slowly (and reluctantly) being dragged in the strict direction (which it will never fully reach, because so much of it would break).
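
One sign of that drift: recent GHCs ship the Strict and StrictData extensions, which flip the default for a whole module (a minimal sketch):

  {-# LANGUAGE StrictData #-}

  -- With StrictData, constructor fields are evaluated when the value is
  -- built, as if each were written with an explicit bang: Point !Double !Double.
  data Point = Point Double Double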


There's one spot where I like laziness, and that's where it supports composition.

I'm gonna hop over to C#, because that's where my favorite example lives: LINQ is a functional library that lets you describe queries on data that are executed lazily. The laziness is great in this scenario because it lets you separate constructing a data-processing pipeline from executing it.
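
The same separation falls out of ordinary lazy lists in Haskell (a loose analogy only; LINQ additionally translates queries to SQL, which plain lists don't):

  -- Describes the work without performing any of it.
  pipeline :: [Int] -> [Int]
  pipeline = map (* 2) . filter even

  -- Elements are computed only when, and as far as, they are demanded.
  firstFive :: [Int]
  firstFive = take 5 (pipeline [1 ..])  -- [4,8,12,16,20]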

The spot where it's tricky, though, is that it's a very leaky abstraction. It's easy to forget that these expressions might actually represent a lot of work, so if you get your lazy sequence object (IEnumerable<T> in C# terms), check whether it has any values in one expression, and calculate its sum in another, you might end up accidentally round-tripping a database twice.

Because of those sorts of stumbling blocks, I think laziness is a power that needs to be handled with care. I'm pretty sure that means you most certainly should not make it the default behavior.


I hope you will be interested in the C#/LINQ code found on the following pages:

  "Why Functional Programming Matters" solved with C#
  http://www.oki-osk.jp/esc/cs/whyfp.html
  http://www.oki-osk.jp/esc/cs/whyfp2.html
  http://www.oki-osk.jp/esc/cs/whyfp3.html


> laziness is more the exception than the rule in today's FP languages

Exactly. I also wrote I am somewhat less enthusiastic about laziness.

So that leaves just one kind of glue as FP's contribution, which goes back to my point about the contribution being good but not sufficient.


Laziness by default is only a property of the Miranda branch of FP languages, not a feature of all FP languages.


Does someone have a better scanned version of this? The formatting of this one is not great, but it looks really interesting.


I am not sure if this is better, but here is a scanned copy of the original journal article. I put it on Google Drive. Do you have a preferred PDF host?

https://drive.google.com/file/d/0B8_iX4Icv1BmQkh3TWtqTXY5ajQ...


Thank you, this does read better (thicker font, better resolution) on my screen.

I was fine with the PostScript, but for some reason the text seemed thin and there was a slight blur.

I just use my browser to read the PDF (usually chrome, sometimes Firefox or IE or even Opera).


See also Raganwald's Why Why Functional Programming Matters Matters: http://weblog.raganwald.com/2007/03/why-why-functional-progr...


Ironically, his "rules of Monopoly" analogy helped me realize why I haven't completely given up on OOP, especially for non-hobby work.

He holds up the idea that every piece of the game has the rules for what you can do with it tacked on as some sort of horrible mess, but I'm finding that this arrangement drives an amazing convenience from a practical perspective: code completion.

Take Python, which is a language that I'm still learning. If I have an object, but I'm not sure what I can do with it - or, more particularly, I'm not sure of the names for the things I can do with it - I can get a quick reference by hitting '.-tab' to bring up an autocompletion menu, just so long as I'm interacting with a more OO Python library. If I'm trying to work with a more procedural library such as matplotlib, though, I'm SOL and end up having to dive through the documentation. (I can't think of a really great functional library for Python that I use, but the same is true for the more functional-y bits of numpy and pandas.) And matplotlib is a big library, so there's a lot of documentation. Far from being a form of organization, that fabled central store of the rules that the author holds up as an ideal ends up being an awful quagmire to wade through.

Granted, this is dependent on having an editor that does tab completion. And I'm sure it could be done with a functional library, too, but probably only if you're using a statically typed functional language, and I've no idea what a good UX would look like given how functional syntax works.

But still, given the current situation, I think I've realized my main reason for thinking that object-oriented programming also matters: Because right now, when you're working with large and complicated systems, object-oriented programming still offers the more pragmatic, human-friendly user experience.


Why Functional Programming ended up not mattering much at all, after all (2016) [pdf]


For those whose browsers don't speak bare postscript, a pdf also exists:

http://www.cse.chalmers.se/~rjmh/Papers/whyfp.pdf


Out of curiosity, which browsers do support native postscript rendering?


Konqueror supports it in the sense that it will embed the appropriate KPart, just as it would for e.g. PDF. How "native" you consider that is an open question.


Thanks. By native I meant it renders without plugins or any configuration by the user, on a default install of the browser.


The line between plugin and not is pretty blurry for Konqueror - even its HTML renderer is a plugin in a sense.


Renders fine in Safari. I assume it uses pdf.js or something in the same vein, though; the page scrolling is very slow.


My Firefox does not.


Thanks, we've updated the link.


[flagged]


Please don't do the programming language flamewar thing here. There are plenty of other ways to discuss them.


Framing this essay as apologia for Haskell's laziness is weird, because Haskell was created exactly because there was a proliferation of non-strict functional languages and the Haskell committee recognized that having a common language would be beneficial. So it seems disingenuous to separate Haskell and laziness: the latter partly defines the former.

Haskell's evaluation strategy is not something you can just change and have the rest of the language stay the same. If Haskell were strict, there would be a good chance it wouldn't be pure (see: the ML variants); if it weren't pure, then IO would not be a problem; if IO weren't a problem, then Phil Wadler wouldn't have needed to invent typeclasses, etc.


Is it that time of the year again?


I love this paper


You can program functionally in almost any language. It's more a mindset. I cannot emphasize this enough: keep it simple.


I think what you mean by "functional" and what the author means by "functional" are distinct. Are you aware of the distinction?



