What are potential disadvantages of functional programming? (reddit.com)
119 points by tosh on July 27, 2017 | 127 comments



Biggest disadvantage for me comes when trying to implement a complex algorithm where the paper uses pseudocode that's imperative (which is almost always the case). In the best case you can implement parts of it functionally, but when it comes time to put those parts together you will likely use some kind of loop, and that's not possible in "purely" functional languages.

Otherwise -- and in particular when coming up with algorithms -- I find it preferable for languages to be efficient with functional idioms (recursion, map, filter, etc.).

Edit: I understand that anything's possible in a functional language that's possible in other languages. My point is that conversion from imperative to functional can be complex and might not be worth the trouble, and that doing it purely functionally might defeat the purpose: it will be nearly impossible to read or incredibly inefficient.


If what you care about is "everything is a beautifully curried function that actually doesn't do anything", the concern about loops is definitely a thing (but any loop can be turned into a recursive function so long as your stack is big enough, even if they're usually hideous). But pretty much everybody I've ever worked with considers functional purity in the logic layer to be "functional programming" at its most useful (Gary Bernhardt's functional-core, imperative-shell talk is a good one). Is it easier to write with a loop? Then use a loop! If that loop is contained in a pure function that has no side effects and encapsulates something that isn't the most beautiful crystalline code formation anybody ever did see, nobody cares.

It's the mindset, not the implementation, that actually makes functional programming so valuable a tool.


The concern about stack space is slightly misleading, since many languages, e.g. Scheme, can easily optimize such functions to use O(1) space by reusing stack frames across recursive calls. The problem is that while Scheme guarantees this tail-call optimization in the spec, many other languages leave it up to the implementation, or intentionally omit it because they don't want to lose stack traces.
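
For instance, here is the kind of directly tail-recursive function such implementations run in constant space, sketched in Haskell (where GHC performs the same optimization):

    -- The recursive call is in tail position, so the stack frame can be
    -- reused: O(1) space instead of O(n).
    sumTo :: Int -> Int -> Int
    sumTo 0 acc = acc
    sumTo n acc = sumTo (n - 1) (acc + n)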

Anyway I agree with the overall sentiment, and will check out the talk you linked. There's another good one along similar lines: "The Clean Architecture in Python" by Brandon Rhodes: http://pyvideo.org/pyohio-2014/the-clean-architecture-in-pyt...


Functional API, efficient implementation. It's always a good rule of thumb.


> Is it easier to write with a loop? Then use a loop!

Yes, but, say, in Haskell, that's not an option.

It's useful to note that Haskell is usually touted as efficient, and lack of efficiency is the #1 concern around functional programming. That concern I personally find revolting: Haskell is slower than C, yes, but orders of magnitude faster than many widely used languages.


> Yes, but, say, in Haskell, that's not an option.

    import Control.Monad (forM_)

    forM_ [1, 2, 3, 4] $ \n ->
        print n


>Yes, but, say, in Haskell, that's not an option.

Haskell does have the ST monad, which lets you encapsulate a non-pure algorithm (as in, one that relies on mutable state) within a pure function.
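
A minimal sketch of that encapsulation (the helper name is mine): the implementation mutates an STRef, yet the function's type is pure, and runST guarantees the mutation cannot leak out.

    import Control.Monad.ST
    import Data.STRef

    -- Pure type, mutable internals: the STRef never escapes runST.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
        acc <- newSTRef 0
        mapM_ (\x -> modifySTRef' acc (+ x)) xs
        readSTRef acc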

In fact, any monad will let you write imperative-looking code (whose behavior is determined by the semantics of the particular monad); libraries even provide for and while loops as ordinary functions.

You can also look at OCaml, which is very clearly a functional language, but has first-class support for side effects and loops.


> in Haskell, that's not an option.

I'd say you have at least three fairly straightforward options.

1. Mechanically translate the loop to a recursive call. Not always idiomatic, but easy and pretty much guaranteed to compile to the same/similar assembly for a tight loop.

2. Use StateT and lenses. Your code can even look almost exactly like C if you want, with for and while loops with the same semantics as in C. I often use this when implementing heavily imperative papers in Haskell. It's still pure and everything, but looks just like an imperative language and performs well (if maybe not always full C speed).

3. If for some reason neither of those approaches works (e.g. if you're writing an algorithm that really benefits from mutation), you can use IO or ST for mutable variables. ST is really cool; it provides local mutability, but uses the type system to ensure that mutability never "escapes" into pure Haskell.


> 2. Use StateT and lenses. Your code can even look almost exactly like C if you want, with for and while loops with the same semantics as in C. I often use this when implementing heavily imperative papers in Haskell. It's still pure and everything, but looks just like an imperative language and performs well (if maybe not always full C speed).

I am no Haskell expert and that sounds quite interesting. Do you have any links to examples?


Shoot, I had some great example code from a few years ago but it was part of a uni assignment so the university made us take it off github.

Here's a fairly in-depth analysis of the approach: http://www.haskellforall.com/2013/05/program-imperatively-us...

The gist of it is that StateT gives you nice syntactic sugar around state-update functions of type `state -> (result, state)`.

Let's say your state type is like

    data Boss = Boss {
        health :: Int,
        ...
    }
    
    data Player = ...
    
    data State = State {
        boss :: Boss,
        player :: Player,
        ...
    }

Using Lenses, you can construct StateT operations like this:

    boss.health -= 1
    player.stats.score += 5

Where each of those updates the state as you imagine. So your code just looks like a bunch of stateful OO-style updates, but it's actually pure. This also lets you do a bunch of cool stuff that you can't do with real imperative code in this fashion, like re-interpret the semantics of assignment or modification, but that's for another time.
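
For the curious, a hedged sketch of a compilable version (assuming the lens and mtl packages; the Game type and its fields are made up for the example):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens
    import Control.Monad.State

    data Game = Game { _bossHealth :: Int, _score :: Int } deriving Show

    makeLenses ''Game  -- generates the bossHealth and score lenses

    -- Reads like imperative mutation, but is a pure state transition:
    -- execState hitBoss (Game 100 0) yields bossHealth 99, score 5.
    hitBoss :: State Game ()
    hitBoss = do
        bossHealth -= 1
        score += 5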


> Haskell is slower than C, yes, but orders of magnitude faster than many widely used languages.

Unless you make the mistake of using a String, in which case it will be orders of magnitude slower than any widely used language.


> > Is it easier to write with a loop? Then use a loop!

> Yes, but, say, in Haskell, that's not an option.

Sure it is! It might not be a "for" loop or a "while" loop with a "for" or "while" keyword, but a tail recursive function is a loop, and pretty much every loop in an imperative language can easily be translated into a recursive function in a functional language.
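
For example, a hypothetical C counting loop and its direct tail-recursive translation:

    -- C:  for (i = 0, acc = 1; i < n; i++) acc *= 2;
    power2 :: Int -> Int
    power2 n = go 0 1
      where
        go i acc
            | i < n     = go (i + 1) (acc * 2)
            | otherwise = acc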


Something of a tangent, but:

> any loop can be turned into a recursive function so long as your stack is big enough, even if they're usually hideous

What amuses me to no end is how TCO turns around and converts that recursive function call right back into a loop before executing it.

And this summarizes my thoughts about the main disadvantage of functional programming - you're working against the way the computer fundamentally works, and relying on the compiler to get it right.


It's not that silly, really. The same argument could be made regarding GOTO. The compiler can't really avoid turning everything into jumps, because it's all the machine understands. And yet structured programming is hands down far more advantageous for the human who has to be able to reason about the program at the end of the day.


YES. Functional programming is more of a pattern. I can write functional ruby/es6, but neither of those are purely functional languages.


> that's not possible in "purely" functional languages

To be clear, it's always possible in a purely functional language, just not always reasonable or efficient.

As far as your point goes, I totally agree. There are algorithms that are better suited to functional languages (ones that involve trees and lists) and there are algorithms that are better suited to imperative languages (disjoint sets, hashing algorithms, lots of graph algorithms).


Not to get all FP on you, but loops and recursion are isomorphic. It is a hurdle to convert imperative code to functional code, just like the opposite, but it is unconditionally possible.


I'm aware of that. It's still not always trivial to implement.


Isn't it trivial to implement a loop via recursion?

    loop :: (a -> b -> b) -> (a -> Bool) -> (a -> a) -> a -> b -> b
    loop fn cond update i acc
        | cond i    = acc
        | otherwise = loop fn cond update (update i) (fn i acc)
I'm not sure how using such a loop wouldn't be "functional", since you're not updating anything. (I believe you get TCO, so it shouldn't be that bad performance wise.)


That's "functional", but I sure don't want to read it.


One of the things that I really love about Erlang is that it unrolls loops into recursion like:

  count_to_ten(N) ->
    count_to_ten(N+1);
  count_to_ten(10) ->
    done.
which is much cleaner than how the same code is presented in 99% of functional programming languages.


That's just a combination of pattern matching and tail call optimization. That code would look exactly the same in Haskell:

    countToTen n = countToTen (n + 1)
    countToTen 10 = done


The 10 case would have to be first, or that's an infinite loop. Cases are checked in order. That said, the compiler will give you a warning about it.


For readability and conciseness you would most likely use a fold instead.

The fold, or Foldable, interface factors out the mechanism of traversal of the relevant data structure: list, vector, tree, etc.

Functional languages are an improvement in this respect over C/C++/Java etc with their explicit loop iteration statements.
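
For example, summing a list: the fold owns the traversal, and the caller supplies only the combining step (a minimal sketch).

    import Data.List (foldl')

    -- foldl' handles the iteration; (+) and 0 are the only
    -- problem-specific parts.
    total :: [Int] -> Int
    total = foldl' (+) 0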


> For readability and conciseness you would most likely use a fold instead.

Agree strongly with this. Long, complex for-loops in imperative languages that mutate a counter along the way, build up a list along the way, have "if (...) { continue; }" parts, have nested loops, etc. make me cringe now.

Break your code up into a series of folds, maps and filters and it's miles more readable. I find a lot of imperative programmers just don't think in this way and mangle folds, maps and filters into one big confusing ball.
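
A small sketch of the decomposed style, with one stage per concern instead of one loop doing all three jobs:

    import Data.List (foldl')

    -- Select, transform, accumulate: each stage does exactly one thing.
    sumEvenSquares :: [Int] -> Int
    sumEvenSquares = foldl' (+) 0 . map (^ 2) . filter even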


> Functional languages are an improvement in this respect over C/C++/Java etc with their explicit loop iteration statements.

Not to put too fine a point on it, but Smalltalk had folding way back when (inject:into:). Higher order functions are not a specialty of functional programming; technically, even C has higher order functions, they're just more verbose because C doesn't support inline closure literals.


> The fold, or Foldable, interface factors out the mechanism of traversal of the relevant data structure: list, vector, tree, etc.

So does an iterable/enumerable in anything else (writing a red-black tree that supported the Iterable interface in Java was something I did in college). It's certainly not a material improvement over explicit code that, unless you have a rather particular shop, your fellow programmers won't have to scratch their heads about three weeks after you commit it.

I have been the too-clever-by-half developer before. It was a bad idea. The principles of functional purity as a design goal do not have to have the baggage of "look at this neat thing I know about," yanno?


Why?

I'm genuinely curious why you think that it's worse in functional languages than, eg Python or C.

It's not like you don't specify all of those variables when using a loop in C -- you just have special syntax for it. I was outlining the actual recursion mechanism, not showing what you'd have to code....

You can make it look basically the same, with a signature like...

    loop (idx, test, update) fn acc
...which in practice will look like...

    loop (0, \x -> x < 5, (+1)) fn 0
The only annoyance is the slightly verbose lambda syntax for the test, but if you're using a function as a test, it becomes a smaller issue.


Python (well, sort of) and C (despite its numerous other flaws) have a special syntax that doesn't read poorly, yes. The code you just described reads awfully. It's line noise. One of the most important aspects of a programming language is encouraging code that can be trivially read by someone who is familiar-but-not-reflexive with the language and does not necessarily understand the domain well enough to be managing both application state and syntactic state. If I have to think about what the code's syntax means, I'm not thinking about the problem.

And, frankly, the idea that you have to soft-pedal it as "you could do this" should be a red flag for you. I could meditate about a `loop` function or I could write the code I actually care about.


Are you sure you've got it the right way around? For-loops in C are syntactic garbage. "This is what I'm used to" is not the same as "this is good."


Why not? "What I'm used to" will tend to win in most usability studies. Weird and unfamiliar are harder to use.

Looking at how human languages work, it's hard to see how it could be otherwise. Your own language is always going to be easier to use than an unfamiliar one.

And this is why language debates can't be settled via logic alone.


I ran into this exact problem and figured out you can fix it with StateT and Lenses. You can reproduce imperative code exactly, but in pure Haskell. You can even write things like for loops, no problem.


yeah, but functional operations just abstract imperative statements.. so your pseudocode is almost like a granular expansion of the eventual implementation


One of my professors used to say that functional programming makes all problems equally hard. There's some truth to that. For example, for I/O, most functional languages fall back on imperative idioms. Those that don't (Haskell, mainly) basically tell beginners to pretend that this weird thing (the IO monad) is just an imperative code block. You can figure out how it works once you're more familiar with the language.

Since doing I/O is a pretty common task, functional languages generally include imperative constructs to fix this, but these constructs often feel tacked on.

The main drawback I see in practice is that functional programming can encourage people to be more clever than they ought to be. In the worst case, you can get some of the worst Java-like issues, where there's a function that returns a function that returns another function that actually does what you want, even if it's only used in two places anyway. In a more typical case, you'll get someone who likes to unnecessarily write everything in continuation-passing style, or jumps through hoops to move recursive calls into tail position even though the recursive depth never exceeds 5.

If the team has the discipline to just avoid that kind of thing, the only real downside I see is being farther from the hardware.


People can shoot themselves in the foot with practically any language and framework. I'd be hard-pressed to call that a FP-specific problem.

When I am writing Elixir, all the lessons I learned from Martin Fowler and Kent Beck on refactoring become 50x more useful; I don't go 5 levels deep into making my code uber clever. I recognize these 5 levels, but I only do 1 level and stop there.

It's very important to get clever only when there is a valid reason to do so. Many colleagues very easily forget that this is a job and you shouldn't get carried away; do what you must, do it very well so that your future self doesn't hate you -- but don't try to do it as if your code is gonna persist for millennia on a space probe.


Inability to practice hardware sympathy is the biggest drawback for me. With imperative code, my intuition can be useful in estimating what instructions the compiler is likely to generate (what loops get unrolled, how registers get packed, etc). It's still not easy, mind you, but with functional languages the generated code goes through another battery of passes and abstractions that obscure this even more.


If you go deeper, then we start to see a lot of similarities. Functional programming strongly resembles circuit logic. Maybe a strange loop?

See also http://www.clash-lang.org/


I've coded in VHDL and Verilog. Working at the gate level is definitely more functional in nature and has the same problem (it's not intuitively obvious how the code you write maps to gate logic); hence why novices introduce accidental latches and metastability. I've actually used Clash for one non-trivial project, and there's something interesting there for sure.



Functional code can do better fusion, inference and unrolling than imperative code.


Citation needed. Not saying it's not true, just haven't seen it in practice. Usually if you want efficiency, you have to drop down to intrinsics or assembly, no matter how much the compiler is helping you.


Look at the publications here http://benl.ouroborus.net/

Also look at Disciple, the language of the Disciplined Disciple Compiler (related to Haskell).


Thanks for the links!


It's not a question of whether it "can" or not sometimes. Say there's a performance issue in your code. In my experience, I have a much easier time figuring out where the bottleneck is just by inspection with imperative code than with functional code (especially when concurrency is involved). Sometimes, transparency is more important than raw output.


I personally much prefer functional programming, however it has some downsides.

The biggest practical hindrances I have found are:

1. Performance can be significantly worse depending on the environment you are writing your code in: immutability causes allocations that thrash the cache, and some languages allocate when calling into closures or fail to inline them properly.

2. It's less common, so there is a higher chance other programmers are going to struggle with your code, or that libraries aren't going to be as well supported.

Take lodash/fp for example. Vanilla lodash has really good @types support in something like TypeScript; for lodash/fp it's an open ticket (last I checked). This kind of thing happens a lot: there just isn't the same number of folks writing code in FP style, so you don't get the same level of long-tail support.

3. This is maybe a less common view, but in an object-oriented style with code completion, discoverability is usually better. The IDE has better context about what you could be doing when you chain method calls with a "." rather than having the whole world of functions at your disposal when invoking completion. For example, when writing "abc".r<TAB> compared to r<TAB>, the IDE has a significantly narrowed scope to suggest from (methods that are legal on a string). This can be very useful, and I suspect it narrows the breadth of the space the programmer must conceptually keep in their head from token to token. It's a big reason why things like extension methods, which really don't have to exist (they could easily just be static helper methods), are frequently a highly requested addition to a language.

4. Somewhat related (and there are ways around this), I prefer a threaded/piping style as opposed to composing calls inside-out, e.g. `(1 + 1) / 2` or `(->> 1, (+ 1), (/ 2))` as opposed to `(/ (+ 1 1) 2)`. To me the code reads better from left to right than from the inside out. But that is possibly (probably?) a result of being an imperative programmer for so long before switching to functional programming. Also, I feel like I am doing a lot more paren matching and jumping backwards for something like f(g(h(x))) as opposed to h(x).g().f(). Again, this could be my imperative programming upbringing leaking through.


For point #4, you can use a "backwards application" operator like F#'s |>

    x |> h |> g |> f
|> is defined by (x |> f) = (f x). I often end up defining it myself when it is not in the standard library of my FP language du jour.
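
In Haskell, for instance, the definition is a one-liner (the same operator already exists there as (&) in Data.Function):

    import Data.List (sort)

    (|>) :: a -> (a -> b) -> b
    x |> f = f x
    infixl 1 |>

    -- usage: [3, 1, 2] |> sort |> take 2  ==  [1, 2]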


Very much agree with your point 3. It's really a namespace issue, with classes providing a natural division of the namespace. Also, OO best practices have over time encouraged more modularization and encapsulation, with heavier use of restricted visibility. At least in my experience, functional code tends to pollute the global namespace with a bunch of not-so-obvious functions.


It depends on the language but you often have the option of grouping functions accepting a specific input type in a dedicated module, and using them qualified.
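
For example, Haskell's containers library follows this convention (a small sketch):

    import qualified Data.Map as Map

    -- The qualified prefix narrows completion to map functions,
    -- much like typing "." after an object would.
    newScores :: Map.Map String Int
    newScores = Map.insert "alice" 10 Map.empty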


It seems like your points 3 and 4 could be solved by using a Forth-like syntax. "abc" r<TAB> would only suggest functions that can operate with a string at the top of the stack. Threading code would be `1 1 + 2 /`. There are no parens to match in `h g f`.

Of course it's even more obscure than functional programming (point 2) and if the semantics aren't changed, point 1 isn't solved either.


3 is the reason why I write a lot of functions as static functions of a class. That way you get a list of applicable functions with auto complete.


It's surprising to me to see how few discussions of Functional and/or Object Oriented programming end up even mentioning the Expression Problem: http://wiki.c2.com/?ExpressionProblem

    The expression problem is a new name for an old problem.
    The goal is to define a datatype by cases, where one can 
    add new cases to the datatype and new functions over the 
    datatype, without recompiling existing code, and while 
    retaining static type safety (e.g., no casts).
FP and OOP are two axes in a space of problems around building and extending programs. To be "purely functional" or "strictly OO" is basically painting yourself into a corner that you will eventually learn to regret.

Eli Bendersky wrote an excellent piece on it: http://eli.thegreenplace.net/2016/the-expression-problem-and...
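
For a concrete sense of the two axes, here is the classic Shape example sketched in Haskell: with a sum type, adding a new function is non-invasive, while adding a new case means editing every existing function (the OO situation is the mirror image).

    data Shape = Circle Double | Square Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s

    -- A new function over existing cases: no existing code changes.
    perimeter :: Shape -> Double
    perimeter (Circle r) = 2 * pi * r
    perimeter (Square s) = 4 * s

    -- But adding a Triangle case would force edits to both functions above.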


I was planning on mentioning it as soon as I saw this post as well. I think most people who try using a functional programming language that encourages you to use sum types (at least when you're beginning) run into this in some way, but most people probably aren't aware that it has an actual name.

Here's a post on solving it using type classes in Haskell, if people are curious! http://koerbitz.me/posts/Solving-the-Expression-Problem-in-H...


I was going to link the same article!

I think the interesting part is the similarity between Haskell and Clojure. Essentially, programming to very small classes/protocols. The linked article is cool, because I think the Java approach would generalize to the OP's C++ solution, and be able to eliminate that gross dynamic cast.


Yes, the Bendersky link explains it very well.

The c2 wiki Shape example is in fact an instance of the Interpreter pattern (although I believe it doesn't mention this), which is the closest thing to pattern matching in Java (Visitor would be the other way of sort of doing pattern matching in Java, and closer to FP, with the same drawbacks and advantages).

There is another good explanation in this PDF: https://ocw.mit.edu/courses/electrical-engineering-and-compu...


Yes, it's a sample problem, but the natural conciseness of the Haskell compared to the C++ is almost comical.

You can count non-whitespace lines or lexical constructs and get a 3x reduction. But conceptually, the ability to have that much code so concise is way more than a 3x productivity gain.


Immutability: The property of functional programmers that prevents them from shutting up about pure functional programming. - @raganwald

Computation is not just about functions. If computation were about functions then quicksort and bubble-sort were the same because they're computing the same function. As I said a computing device is something that goes through a sequence of states and what an assignment statement is doing is it is telling you here is a new state, and also there's the notion of it's non-determinism, so the new state is not a function of the old state. So functional programming in a sense - functions - don't solve the problem of programming. - Leslie Lamport

Perhaps functional programming is best for S-Programs and P-Programs...

S-Programs are programs whose function is formally defined by and derivable from a specification. P-Programs [are problems] that can be precisely formulated but whose solution must inevitably reflect an approximation of the real world. E-Programs are inherently even more change prone. They are programs that mechanize a human or societal activity. - Meir M. Lehman, Proceedings of the IEEE, Vol. 68, No. 9, September 1980.

... from my fortune clone @ http://github.com/globalcitizen/taoup


In the same spirit:

The nice thing about declarative programming is that you can write a specification and run it as a program. The nasty thing about declarative programming is that some clear specifications make incredibly bad programs. The hope of declarative programming is that you can move a specification to a reasonable program without leaving the language. - The Craft of Prolog, Richard O'Keefe (1990) (copied from CTM)


I'm gobsmacked that someone as smart as Lamport could make so lazy a criticism. Maybe he's criticising functional programming as it was decades ago when he last looked at it. Modern functional programming is by no means (merely) about "functions". It's basically about composability and reuse.


The main disadvantage is that the next guy/gal will have to rewrite it all in Java - the language he/she actually can read.


And then the next person will rewrite it in Python. It would be faster and less costly to just learn the language...


Rewriting a large codebase in Python (or any dynamically typed language) would be a mistake.


That's true; if you wanted a large codebase you should stick with Java.

Only half joking.

I like to think the goal is to create a working system. I haven't seen sufficient evidence that statically typed languages as a category are more likely to produce working systems than dynamically typed ones. Feel free to link anything you have on the subject; I would take the time to read meta-studies on the subject, though, many of which seem to conclude there isn't much evidence either way.

I do believe that "Discovered" languages are more capable than "Invented" ones. Philip Wadler talks a bit about the difference here: https://www.youtube.com/watch?v=V10hzjgoklA&t=2176s


Optimizing runtime performance can be a little tricky and nonintuitive. Debugging can similarly be tricky-- inspecting internal return values is not super fun.


For me one of the biggest hurdles is using immutable data structures all the time. In the real world you want to add to a list or change a map's value. Typically functional programming languages use linked lists to make h::t efficient, but if you need the performance characteristics of arrays and good performance/low GC, you are going to have a lot of impedance mismatch with the typical functional programming patterns based on built-in lists.


> In the real world you want to add to a list or change a map's value.

Having spent a good amount of time doing production FP, and then taking what I learned into imperative work, I rarely find myself doing this. In the real world, what we usually want is the result of adding to a list or changing a value in a map. But also in the real world, we rarely want to consider whether we've affected other things that supplied that list or map (which they generally would prefer we did not do).

> Typically functional programming languages use linked lists to make h::t efficient, but if you need the performance characteristics of arrays and good performance/low GC, you are going to have a lot of impedance mismatch with the typical functional programming patterns based on built-in lists.

I can't speak for the wide variety of FP languages out there, but it sounds like you've been exposed to languages with a poor set of data structures. Clojure, for example, provides a wealth of great immutable/persistent data structures that provide the semantics you'd want and perform really well.


I've done a bit of functional programming too, and don't get me wrong, I love the FP cultural victory of Java/C# etc. getting immensely useful things like map, reduce, filter, lambdas and higher-order functions. But if you are going to use purely immutable data structures, you're going to make trade-offs. In the kind of work that I do, generating excess garbage or using extra memory can be unacceptable, so I think I'd struggle with that, but I can still apply a good % of FP principles with mutable structures, of course.

If you read introductory functional programming material, you will agree it is full of h::t style pattern matching, and the default list types in OCaml, Haskell and Scala are simple linked lists (I sadly haven't tried Clojure yet). I'm open to the possibility of advanced patterns with other data structures overcoming some of these problems that I'm not very familiar with, but that's what I mean by impedance mismatch: you will need to step away a bit from the clean, elegant patterns of "everything is recursion + pattern matching on lists" of typical FP teaching.


And if you do any serious functional programming, you will quickly learn that using these lists as a data structure is almost always a mistake. A mistake that happens to be encouraged by the language, but that is largely a historic wart of languages that are decades old at this point.


Sure, agreed, but even if you use more sophisticated immutable structures you either accept the performance/memory trade offs or you have to fall back on mutable structures.


But that's quite a different claim from your original one!


Maybe it helps if I explain better. I find it difficult to use immutable structures if I want certain performance characteristics. It's possible I need more training in certain techniques, but I think that's why I find it difficult; the default data structures and introductory literature seem to be too naive. And in the end, no matter what I use, there will be certain trade-offs for purely immutable structures, and some stuff I just find easier to think about in mutable terms.


OK, that's a reasonable position. It is easy to read your original claim as "All real world programming requires mutable values", and that is certainly not a reasonable position.


> In the real world you want to add to a list or change a map's value

Do you? I don't. I've written loads and loads of production code that never did this. I know it never did it because I wrote it in Haskell and never used an IORef or STRef!


I do. I'm not saying it's not possible to reach that state. Just that there are many things I tend to find easier to think about in mutable terms. And when trying to implement immutable equivalents there are also trade offs. I don't start from the point of view that functional programming is superior and everything should be done in terms of lambda calculus and purely functional data structures. I also don't deny that is good for some, maybe many use cases.


It's fair enough if those are your needs but I think your comment would be better expressed as "For the sorts of programming I do I want to add to a list or change a map's value". Otherwise you give uninformed people the impression that Haskell cannot be used for "real world" programming. It can be and it is.


Ok you are right, now that I read what I wrote it could be interpreted that way and it wasn't my intention at all. I know Haskell can be used for real world programming because I own a book called "real world haskell" :) I was abusing the expression to put forward a case in which I believe the mutable thinking comes across as more natural (imo).


Fair enough. Language is rather ambiguous!


Functional languages give powerful expressions for proving code correctness. Imperative languages provide good semantics for reasoning about performance. I don't see them as rivals, rather as two paradigms operating on different domains.

Object-oriented languages on the other hand....


Immutability can have performance issues I believe...not sure you'd use it to write a game engine for a modern FPS.


You might want to read what John Carmack has to say on the matter, for example http://functionaltalks.org/2013/08/26/john-carmack-thoughts-...

I'm not saying he disagrees with you, but he doesn't agree with you either and it might help you refine your point of view.


Persistent data structures can generally overcome a lot of the performance issues with immutability, though usually at the cost of memory.


Ehhhhhh, hard-ass real-time stuff like heavy DSP or a game engine: you're on the metal and you're pitching your code to the CPU, not to humans. And the CPU thinks (kind of) imperatively. And the GPU does its own thing, which is basically neither, haha.

Stuff like web apps, or honestly, the vast, vast majority of programming, functional is just fine. The deep stuff is gonna stay imperative, but that's a small fraction of the work that needs doing out there.


That sums up my thoughts better.


I feel that Haskell's learning curve comes with the benefit that the curve actually goes in a hyper-productive direction. I have ~3 years of professional experience but with Haskell I've been able to create robust and extensible systems in deterministic amounts of time on a consistent basis, often working alone or with other "junior" devs. I think this is because I've spent the last ~6 years constantly learning FP/Haskell, and while it's been hard, every new thing I learn pushes me to be a more self-sufficient, capable engineer. If I weren't so lucky and didn't get the opportunity to only work in pure FP in Haskell/Scala from my first job out of college onwards, I don't think I would be as capable as I am now.


Well, I have one: next to no one can read/understand functional programming stuff, much less write it.

Yeah it might be hyped in academia and in high performance/scaled systems but the average non-academic coder will never interact with FP code.


That's because you've learned imperative coding first. If you learned functional coding first you'd probably find imperative coding complex. I'd certainly rather code in OCaml than C.


A growing number of super-performant API servers and websites written in Elixir / Phoenix say hello.


Your first statement is unequivocally false, as evidenced by the myriad of people and projects using purely functional languages, let alone non-pure functional ones.


Thinking that any variant of "functional programming" stands for all variants?

For me, moving to F# from C#, and then trying to get my team to do likewise, it's far more like "this encourages good CS101 practices" than anything else. The only disadvantage is the occasional CPU burp (usually easily resolved).

To the reddit commenter: I'll agree a lot of OO ideals are baked into GUI, and are actually quite good for it (decorator pattern, etc). Though a lot of GUI conventions (winforms binding, wpf mvvm (sorry)) paint you into a bit of a corner.


Is there any example of a usable Graphical UI written in a functional language?

I've seen something very minimal in Erlang, but it was so resource-intensive it barely functioned at all.


How about xmonad? http://xmonad.org


A colleague of mine who wants to play with Haskell tried it out and threw it in the too-hard basket; apparently, to configure it you write Haskell in the config files, which is not where he wanted to be when trying his hand at the language.


Does the combination of Clojurescript+HTML count? [1]

[1] https://reagent-project.github.io/


Mobile apps: See the Apps section on http://cljsrn.org/ (and there are more mentioned in the talks).

For purely functional, have a look at Elm.


The Yi editor contains a minimal Emacs-like interface.

https://github.com/yi-editor/yi/


It depends how deep you want to go. I would say that 'functional' can mean different things to different people, but usually I see it as the methodical use of certain programming features over others.

1. The language supports functions as values, anonymous functions, higher-order functions, and the ability for functions to access enclosing scope via closures. At this base level there are no disadvantages to preferring these, because they allow some nice abstractions (as evidenced by the many functional libs in JS).

2. Using recursion over loops can be a good fit for expressing some algorithms. It often requires some support for optimisation in the language (i.e. TCO), so that your stack doesn't explode. In my experience you'd use higher-order functions like map and fold/reduce most of the time anyway, instead of dealing with loops/recursion directly.

3. Pure functions. This is the first interesting trade-off in functional programming: how much should you use functions that depend only on their inputs to produce their output (and for the same input the output stays the same)? There are clear advantages: you can test these well, understand them in isolation, and refactor them more easily, and memoization is almost free.

On the other hand, only using pure functions would limit the usefulness of the language (it is said that the first version of Haskell was only able to do a transformation from stdin to stdout, because it had just pure functions). Languages that are big on being 'pure' usually allow you to make impure functions, but then track their usage in the type system. This is one of those things that I often think is more trouble than it's worth.

4. Immutable data structures, which come in handy when you want to create pure functions that return a copy with modifications rather than modify in place (because that might have unintended consequences elsewhere). It takes some time to get used to these, but their performance is on par with regular data structures nowadays (a small sketch of this style follows below).

5. Laziness, where you compute only when necessary. It can make certain programs more efficient and easier to comprehend, and others harder, IMO.

6. No global state.

7. A comprehensive type system is sometimes viewed as necessary for a real functional language (how else would you check that you don't do side effects in the wrong places?).

For me, 1, 3, 4 and 6 are the ones I consider important, and they have few enough drawbacks that I try to use them 80-90% of the time.
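
A small sketch of the copy-on-modify style from point 4 (assuming the containers package): the update returns a new map, the original stays valid, and the two share most of their structure internally.

    import qualified Data.Map as Map

    -- Pure update: returns a modified copy; the original map is untouched.
    bump :: Map.Map String Int -> Map.Map String Int
    bump scores = Map.insertWith (+) "player1" 10 scores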


The big one for me is a practical one: the current ecosystems around them. I'd love to code mobile apps and web apps in OCaml but it would just be going against the grain too much for it to be worth it in the vast majority of cases I think. Can you easily deploy on most cloud services? Are there good libraries you can use? Is there a community that can help you when you get stuck?

It's not a fault of functional languages but that is the issue, it's that the vast majority of coders and ecosystems are entrenched in imperative languages still.

I'll use strong static typing where I can though e.g. TypeScript is decent in terms of practicality.


I believe Clojure's made a lot of progress porting to Android's JVM. Last I heard, the Clojure runtime took a couple seconds to boot, which is shitty if you're real serious about your apps, but not a dealbreaker. And the last time I messed with it was a couple years ago; I would hope to see improvements in that amount of time.


For Clojure it seems React Native + ClojureScript is the more popular option. It has good startup performance and works also on iOS: http://cljsrn.org/


Functional Programming is a big tent, too big to be very useful. All good programming provides some proof of correctness, through test coverage, the type system, or a bit of reasoning done to the side (worst case). Ease of reasoning from FP in the large is not very valuable but does eliminate some cognitive overhead. There are no disadvantages to higher order functions and referential transparency except those we impose on ourselves with our choice of hardware. Type systems are where the meat is at, and a good type system clearly states the advantages and disadvantages in its design.


Higher-order functions and indirection in general can lead to increased cognitive overhead. Try reading or debugging a deeply nested point-free functional expression and it's obvious there are limits.


In my own (limited) experience, I'd much rather debug a nasty point-free expression than debug a deep nest of loops while keeping state in mind.

Perhaps it is just different kinds of cognitive overhead.


How would you even go about stepping through a point-free expression, though? Have Haskell/Scala debuggers improved significantly in this regard?


I wasn't aware these debuggers existed. I'll have to check them out.

The purpose of writing point-free functions is to encourage thinking about chaining functions together instead of considering what really happens to low-level values. It's like using pipelines. This means that for debugging, "stepping through" wouldn't help you much IMO even if you could do it![0] (A whiteboard might help, though.)

I'll try to give an example: if you're looking at an unnecessarily point-free expression like "mapReduce = (. map) . (.) . foldr1", the logic is difficult to read. I'm not an expert in point-free-fu yet, so in that case I'd recommend pointful[1], which deobfuscates that expression to "mapReduce x x0 x2 = foldr1 x (map x0 x2)", a map followed by a fold. So the problem is again reduced to looking how functions are being chained together. For larger examples of point-free expressions I'd try to isolate individual functions as where clauses.

[0] As mentioned by cgmg, the Writer monad is useful if you need to know the values going in and out of functions.

[1] http://github.com/23Skidoo/pointful


You should try the Writer monad, which adds an extra value for logging: http://learnyouahaskell.com/for-a-few-monads-more
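
A minimal sketch of the idea (assuming the mtl package; the function and log format are made up):

    import Control.Monad.Writer

    -- Each stage logs the value it passes along, so a chained pipeline
    -- can still be inspected afterwards with runWriter.
    double :: Int -> Writer [String] Int
    double x = do
        tell ["double: " ++ show (2 * x)]
        return (2 * x)

    -- runWriter (double 3 >>= double) == (12, ["double: 6", "double: 12"])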


> Ease of reasoning from FP in the large is not very valuable but does eliminate some cognitive overhead.

I really wouldn't underestimate this. Formal proofs and verified programs pretty much always involve functional languages at the core, because there's just less going on to complicate the proof. Once you start introducing state, manual memory management, unrestricted recursion, pointers, undefined behaviour, etc., it becomes much, much harder to write proofs about programs and justify why they're bug-free. This also carries across to humans: it's much easier to reason in your own head about why a program is correct when the program's behaviour is simple.


True, but I would want that reasoning to have some representation in the code as well, which doesn't happen if you do a proof to the side.


Massive hype squeezing functional programming into all kinds of imperatively shaped holes, code that's going to remind you of your dad's 80's pictures someday.


Disadvantages: Sometimes it takes more work. Quick and dirty can be easier to read, if the function is short.

Advantages: You can mix and match. Pure functions are more composable and easier to test because you don't have to worry about side effects.


Difficult to write a good compiler.

Optimizing the code generated by the frontend for a functional language is a nightmare.

There are several theoretical reasons why that is the case, so do not expect significant breakthroughs for general purpose functional programming.


Care to go into any detail?


Well, I love FP, but the reality is that FP is not even an afterthought in the libraries that underlie numerical computation -- most significantly BLAS and LAPACK. Those libraries rely entirely on mutable data structures.

It has very little to do with their age -- in fact these libraries are continuously updated (the last major rewrite of LAPACK was in 2008). What's more, FP is not new; it was a well-known paradigm when, say, MKL was conceived (2003). But apparently even the most cleverly designed persistent data structures (e.g., Scala's Vector) aren't enough.


No disadvantage. Learn your FP, it's good for you. ;)


It's fine as long as you use spaces not tabs.


one disadvantage is all of the people telling you to rewrite it in Rust. Seriously, why are you such a bad person?

*just kidding. mean no offense.


I am currently studying the excellent https://www.amazon.com/Concepts-Techniques-Models-Computer-P..., and I believe that this book answers the questions in the OP very clearly, and although maybe you find it too theoretical, it does in fact provide loads of practical advice, and is very readable; not for the faint of heart though ;)

Anyway, just to practice what I've learned so far I will try to answer some of your questions from the top of my head; apologies in advance for my verbosity:

First of all, let's define functional (in fact, to be strict, declarative; more on this below):

An operation (i.e. a code fragment with a clearly defined input and output) is functional if for a given input it always gives the same output, regardless of all other execution state. It behaves just like a mathematical function, hence the name.

This gives a functional (more precisely, declarative) operation the following properties:

1) Independence: nothing going on in the rest of the world will ever affect it.

2) Statelessness (same as immutability): there is no observable internal state; the output is the same every single time it is invoked with the same input.

3) Determinism: the output depends exclusively on the input and is always the same for a given input.

So what is the difference between functional and declarative? Functional is just a subset of declarative: functional = declarative minus dataflow variables.

These properties give a functional program the following key benefits:

1) It is easier to design, implement and test, because of the above properties. For instance, since the output will never vary between invocations, each input only needs to be tested once.

2) Easier to reason about (to prove correct). Algebraic reasoning (applying referential transparency, for instance: if f(a) = a^2 then all occurrences of f(a) can be replaced with a^2) and logical reasoning can be applied.

To further explore the practical implications of all this: since all functional programs consist of a hierarchy of components (clearly defined program fragments connected exclusively to other components through their inputs and outputs), to understand a functional program it suffices to understand each of its components in isolation.

Basically, despite other programming models having more mindshare (without, as far as I can tell, being better understood, and this includes by me ;), because of the above properties functional programming is fundamentally simpler than more expressive models, like OO and other models with explicit state.

Another very important point is that it is perfectly acceptable and feasible to write functional programs in non-strictly-functional languages like Java or C++ (although not in C; I won't explain why, it's complicated, but basically the core reason has to do with how memory management is done in C).

This is because functional programming is not restricted to functional languages (where the program will be functional by definition no matter how much you mess up).

A program is functional if it is observably functional, i.e. if it behaves in the way specified above.

This can be achieved in, say, Java, with some discipline and if you know what you are doing; the Interpreter and Visitor design patterns are exactly for this, and one of the key operations to implement higher order programming, procedural abstraction, can easily be done using objects (see the excellent MIT OCW course https://ocw.mit.edu/courses/electrical-engineering-and-compu... for more on this).

Because of its limitations, it is often impossible to write a purely functional program. This is because the real world is stateful and concurrent. For instance, it is impossible to write a purely functional client-server application. How about IO or a GUI? Nope. I don't know Haskell yet; it seems they somehow pull it off with monads, but this approach, although impressive, is certainly not natural.

Garbage collection is a good thing. Its main benefit to functional languages is that it totally avoids dangling references by design. This is key to making determinism possible. Of course, automatically managing inactive memory to avoid most leaks is nice too (but not all leaks: think references to unused variables inside a data structure, or any external resources).

However, functional programs can indeed result in higher memory consumption (bytes allocated per second, as opposed to memory usage, which is the minimum amount of memory for the program to run), which can be an issue in simulators, in which case a good garbage collector is required.

Certain specialised domains, like hard real time where lives are at stake, require specialised hardware and software anyway, never mind whether the language is functional or not.

So, for me, for the reasons above, the take home lesson so far is:

Program in the functional style wherever possible, it is in fact easier to get right due to its higher simplicity, and restrict and encapsulate statefulness (and concurrency) in an abstraction wherever possible (this common technique is called impedance matching).

Each programming problem, or component, etc, involves some degree of design first, or modelling, or a description, whichever word you prefer, it is all the same. There are some decisions you must make before coding, TDD or no TDD.

What paradigm you choose should depend first on the nature of the problem, not on the language. Certain problems are more easily (read: more naturally) described in a functional way, as recursive functions on data structures. That part of the program should be implemented in a functional way if your language of choice allows that.

Other programs are more easily modelled as an object graph, or as a state diagram (awesome for IO among other things), and this is the way they should be designed and implemented if possible. But even in this case, some components can be designed in a functional way, and they should be wherever possible.

There is no one superior way, no silver bullet, it all depends on the context. It is better to know multiple programming paradigms without preferring one over the other, and apply the right one to the right problem.


Higher-order functions (functions that take or return functions) can in practice lead to code that is very convoluted.


The article describes functional programming in its purist form: 0 side effects. In practice that simply doesn't exist in any language except Haskell and maybe variants of LISP. Next is the simple fact that a pattern or functional idiom has never and will never save your project, so adopting it for those reasons is simply not practical. Also it is not practical to reduce your hiring pool by one or more orders of magnitude. No profitable company you've worked with this week relies completely on functional programming. All of the most profitable companies in the world have side effects in their code. Deviate from the path at your own risk.


> The article describes functional programming in its purist form: 0 side effects.

Incorrect, reread the first two paragraphs:

> Of course one can define functional programming so that no local mutable state and no side effects are possible, and then point out the obvious disadvantages. But that's perhaps a "no true Scotsman" kind of argument. If you would define object-oriented programming with the same strictness, everything that is not an object and uses mutable values would be forbidden. Even something such as y = sin(x), copy-on-write or returning constant objects or containers from a function would be off-limits. That is, I am asking with a very pragmatic definition of FP in mind!

> What I am wondering is rather, do you observe pragmatic functional programming as John Carmack described it here as valuable? And of course this question goes also to people who have tried it to some extend, as a purely theoretical discussion would be boring. I am interested in good examples!


Pragmatically speaking, most non-trivial programs consist of functional and, yes, even object-oriented styled code. Doing functional programming over those nounish state-indexed things doesn't make them non-objects, even if you prefer to call them "entities" instead, while most OOP programmers use lambdas freely without any feelings of regret.


The thing about OOP that is avoided in FP isn't calling an entity an "object", it's the fact that the same object changes over time. It's hard to reason about "I let this so-called function change a thing and I can't know how or why". It's much easier to reason about "I gave this function a piece of data and it gave me the change back".


Entities externalize state that can vary over time. So it isn't f(g) but f(g(t)). Time (and hence mutation) really does exist, whether it is implicit or explicit.

It is only easier to reason about when the data you give the function is limited. You can always pass the world into the function and take it as an output, and then it no longer matters. Effect systems are possible with objects as well; they just aren't a feature of functional languages.


Well, there are languages like OCaml where you can have mutable state if you want. If you can avoid state in most of your code and isolate it to small parts, it'll still be worth the benefit. Imperative languages are all pushing immutability as a good thing now, but at least with functional languages immutability has always been the default, and good defaults can have a big impact.


Au contraire. Recruit for a Haskell job and you'll need a stick to beat them off!

But if you need a kilotonne of developers stick with a mainstream language.


I am genuinely convinced that all such topics have nothing to do with science. Such a question is more philosophical or religious. Many people will try to explain their ideologies and beliefs about how things should work. Each side may try to say what is correct, while nobody has a proof.

Discussing such topics has no scientific meaning; it is a waste of time and should be considered clickbait :)


OP didn't mention science at all. They were asking for real-world experiences, actually.

So, you're meaninglessly correct. Such questions do not have to do with science, which is why science was not featured in the question.


The subject is scientific, so the question sounds like, e.g., "what are the disadvantages of complex numbers". I am not talking just about science; I mean the whole purpose.

Such a discussion will have no effect on people who truly know what functional programming is. It may also have incorrect or misleading effects on people who don't fully understand what functional programming is.


To expand on Ranger's point, the OP's question is specifically requesting people's anecdotal experiences and opinions concerning applied knowledge. Science is the repeatable process of discovering and accurately modeling nature (for varying definitions of "nature").



