Hacker News
Why functional programming matters (1990) [pdf] (kent.ac.uk)
102 points by palerdot 12 days ago | 103 comments





If anyone is looking for a really eye-opening video, take a look at this: "Domain Modeling Made Functional"

It applies Functional Programming to plain old Enterprise line-of-business apps in F#. The author argues why it's so much simpler, and simply walks through an example, case by case. It's hard not to be persuaded. It leaves behind all of the lofty, crazy maths you see in a lot of presentations of FP, and just shows how it's a simpler technique for solving real-world problems.

Domain Modeling Made Functional - https://www.youtube.com/watch?v=Up7LcbGZFuo&feature=youtu.be... (Watch on 1.25x speed)

It's really impressive, and it's also surprising that more people promoting FP don't show its tangible benefits and simplicity vs. the theoretical/abstract way it's usually presented.

It's also a great introduction to 'Domain Driven Design' and data modeling regardless of the language you use.


Agreed 100%, sometimes the FP community is its own worst enemy. Too many FP enthusiasts seem to think that "real" FP requires mastering category theory, which is cool and all but really just a particular subset of the paradigm, and a particularly Haskell-centric subset at that.

I encourage anyone who wants to learn functional programming to pick up the classic "Little Schemer" by Friedman and Felleisen. It's a charming book, designed for undergraduates, that teaches you all the important and appealing aspects of FP. And you won't be made to feel like an idiot because you don't know what a monoid is.


I'll dust it off my nightstand. I really should get to it; I guess now is a good day.

Thanks!


After watching it for 20 minutes, I want to know more.

Where can I learn more about this magic called functional programming? Is there a good course? I want to do it for BLOBAs (as he calls them), as I'm not too math-heavy.

Also, to what extent does JS support FP? I use anonymous functions/callbacks (I mean red functions, pun intended) and closures (even as poor man's objects, lol) but I don't really know why that is FP and what makes it FP.

This is a bit too big of an ask, now that I think about it. I sound like a person who has never programmed before asking: so, what programming language should I use?

Is there anything good around for JS to create BLOBAs?


So I didn't include it in the original link, but he has a book he wrote under the same title (Domain Modeling Made Functional). I've been going through it now. It's a much more in-depth treatment of the same topics. He walks through a small product from start to finish.

Additionally, he does a really great job of walking through two things that I don't think are covered well enough at all for beginning and intermediate programmers (and that even experienced ones like myself may have missed along the way).

1. How to effectively gather customer requirements, what questions to ask, what things to dig into, how to organize them. A simple walk-through, better than the hand-wavy stuff most people do when gathering requirements.

2. How to model a domain / problem space before implementing. How to effectively reason about your entities, services and actions. And iterate on these with your customer before coding.

I seriously wish I had run across this when I first started coding. A really great collection of tangible, actionable advice.

I've never done it in JavaScript so I won't try to guess, but the first two parts of the three-part book are really applicable regardless of the language. I'll have to finish the third to see how it is.

Domain Modeling Made Functional - Scott Wlaschin

  https://pragprog.com/book/swdddf/domain-modeling-made-functional

  https://www.amazon.com/Domain-Modeling-Made-Functional-Domain-Driven/dp/1680502549
(I have zero affiliation with the author and get nothing from any links)

His blog https://fsharpforfunandprofit.com/


Thanks! I bought the book.

You can write JS functionally, but to get the benefits from the video, like modeling your data as sum & product types plus static type checking, you need a statically typed language that compiles to JS.

Of the languages that compile to JS, Elm (https://elm-lang.org/) looks a bit more like the F# in the video, and TypeScript (https://www.typescriptlang.org/) is much more like standard JS.
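
(For anyone unsure what "sum & product types" means in practice, here is a rough sketch in Haskell syntax with made-up names; the F# in the video and Elm look very similar.)

  -- A product type: an Order always carries a customer AND an amount.
  data Order = Order { customer :: String, amount :: Int }

  -- A sum type (discriminated union): a payment is EITHER cash, OR a card
  -- with a number, OR a transfer with an IBAN, and nothing else.
  data Payment
    = Cash
    | Card String
    | Transfer String

  describe :: Payment -> String
  describe Cash         = "paid in cash"
  describe (Card n)     = "paid by card " ++ n
  describe (Transfer i) = "paid by transfer from " ++ i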


You can also use Fable (https://fable.io/) to compile F# to JS.

Scott's website, F# for Fun and Profit, is really good and has some great articles. His article on Railway Oriented Programming is a particular standout.

As for typed, ADT-based functional languages that compile to JavaScript, there's F#, Elm, PureScript, and ReasonML. I'm a big fan of F# and Elm.


It's very easy to write JS in a functional style, since as you point out, it already supports first-class functions, which is the most important building block.

To really go whole hog, try setting one constraint for yourself: only use immutable data. Make all your variables `const`s, and only use immutable data structures (Immutable.js is a useful library for this).

You'll discover that it's basically impossible to write code in the familiar imperative style: everything needs to be a pure function that just takes an input and returns a consistent output, without mutating state. This really is the "secret sauce" of FP.


This seems to miss the point the GP makes: a significant advantage of FP is not so much the functions (or even the immutability) as the domain modelling. In JavaScript a float is a float, and the fact that it is immutable won't save you from accidentally confusing two things that happen to be numbers (say, the width of an element and the duration of an animation, or the price of a product and the amount in stock, or a pipe length in feet and a thickness in millimetres).

The other domain-modelling-in-FP feature missing from JS is discriminated unions (aka sum types or enums in some languages).

This domain modelling is an important feature in Rust too (with more invariants expressible via ownership constraints), but no one claims Rust is a particularly functional language. Indeed, I don't think one needs FP for the kind of domain modelling we get via ML-family languages.
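
(A tiny Haskell sketch of the float-confusion point, with invented names; F#/OCaml newtypes work the same way.)

  -- Zero-cost wrappers: both are doubles underneath, but the compiler now
  -- refuses to let a duration be passed where a width is expected.
  newtype WidthPx    = WidthPx Double
  newtype DurationMs = DurationMs Double

  animate :: WidthPx -> DurationMs -> String
  animate (WidthPx w) (DurationMs d) =
    "grow to " ++ show w ++ "px over " ++ show d ++ "ms"

  -- animate (DurationMs 300) (WidthPx 100)   -- rejected at compile time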


Yep, true, DDD with FP looks practical. I love defining new words, and that talk in F# simply made it really clear.

It was weird to see a talk like that and realize to myself "but this is just how my brain works!" I happen to be quite a noun-driven person [1]. And I guess I fell into the trap that this made everything seem so easy and understandable!

Which is why I got excited :P I just bought his book, lol. That was too easy to understand and yet entertaining at the same time while feeling useful as well.

[1] Sometimes even coining new nouns in the Dutch language to make arcane but important concepts clear. My most fun noun is "search term scavenger hunt" (in Dutch it's a single compound word) for the situation where you know what you're looking for but don't know the right term to get hits on a search engine, and you're hunting for the right word.


I'd also recommend https://youtu.be/tV4pHW_WOrY

On one end we have FP, which is great for expressiveness and its declarative nature of getting things done, side-effect free, with nearly Perl-like terseness :)... but on the opposite end you have von Neumann CPU architectures which know nothing about FP. The utopian "silver bullet" is always something in the middle of course, a language that can give you the power and expressiveness of FP without sacrificing all the things that FP style compels you to give up, like mutability, performance optimizations/fine-tuning, etc.

Personally, I find languages like Scala to be right there in the middle, where neither performance nor FP expressiveness is sacrificed severely.

That makes Scala flippin' complex, like C++ was, but it is probably the best attempt to bridge this gap if we all just want to learn "one language to rule them all": it incorporates both FP and imperative/mutable-style techniques that milk the hardware for all it's worth, as well as giving you the power/expressiveness of FP.

Of course one can write an operating system in Haskell... just as one can imitate functional-style programming in C... or we can come up w/ some "middle ground" language that takes the best from both worlds and call that "the best".

My point is, this quest or journey for the best language/paradigm never ends and always shifts across the years as hardware develops and as we discover better ways of doing more with less (e.g. the crux of FP).

IMHO, ultimately you have to pick an appropriate language (tool) for the job at hand. If we forget that languages/programming paradigms are just tools... then, you'll forgive me for saying, we become tools ourselves. Or... fools, rather.

To me, whoever has realized this trade-off is a true zen master of programming/engineering... not the one who espouses one paradigm/language over another. That's just me tho... YMMV.


Is Scala really that much more performant than Haskell? I haven't done much high performance computing in either, but the Scala compiler is significantly slower than the Haskell compiler (both self hosted).

JVM can be extremely performant, although sometimes memory hungry. I haven't done any Scala, but I've tried comparing Haskell and similar Clojure code. Most of the time it was easier to write more performant Clojure code, and accidentally write Haskell code that was not as performant. I have to admit though that my Haskell knowledge is pretty weak.

Slower compilation doesn't mean slower execution. Scala has "Native" and JVM implementations with different performance characteristics. Scala is a hybrid that allows imperative code with a lower level of abstraction than Haskell, closer to hardware. Thus it is potentially more performant.

Sure, but has it been benchmarked? From what I've seen on the benchmarks game GHC is neck and neck with the JVM in most stuff, which is damn impressive. That is admittedly a silly comparison, though.

Of course they have been benchmarked. Both compile to native code, so performance can be roughly the same if the algorithms are the same. However, figuring out how to design a Haskell program that reaches maximum performance is very difficult, and the code probably won't be idiomatic.

> Both compile to native code, so performance can be roughly the same if the algorithms are the same.

Can and will are very different statements here though, particularly as the quality of a compiler makes an enormous difference in the performance of the resulting code.


My point is that both compilers are written in their own languages. If Scala can be performant, surely the compiler would be too?


This is an odd article, brilliant in parts, somewhat less so elsewhere.

The analysis of why we need modularity and that this means we need new kinds (plural) of "glue" is spot on. Note the plural. But then he goes on to present exactly two kinds of glue, so the smallest N for which the plural is justifiable.

I contend that he was right that we need lots of different kinds of glue, which means that the glue must be user-definable. And what is "glue"? Well, architectural connectors, that's what.

Why Architecture Oriented Programming Matters https://blog.metaobject.com/2019/02/why-architecture-oriente...


Purely functional algorithms and data structures are typically slower. Imperative code is also more concise than equivalent functional code.

The only real benefit of functional programming is that it makes code much easier to reason about. But this difference is so great that in most cases I am ready to pay the price.

P.S. Just to clarify my position to the commenters who want to convert me: I write code in functional languages most of the time for a living, so I know well enough how the ST monad works, about the "O(log n) == O(1)" meme, and so on. But I also have some background in HPC, where I used imperative languages. So I have plenty of experience in both paradigms, and I know exactly what their weak and strong parts are.


I find this to be largely untrue. There are certainly classes of algorithms and data structures that benefit from mutation for performance reasons, but at that point you can simply implement mutation in your pure functional language (e.g. with monads). Even ignoring that case, there are a great many data structures and algorithms that don’t meaningfully suffer from a lack of mutation at all, especially when the compiler is able to leverage assumptions from being in a pure language.

This is borne out in practice: Haskell, for example, may not necessarily outperform highly optimized C, but it is frequently able to keep pace with Java and will almost always beat imperative Python code.

In terms of being concise it's laughable to me to even make the comparison. I've written a lot of C, Python and Ruby in my career in addition to Haskell, and Haskell wins on expressiveness to such an absurd degree that the comparison feels unfair.

The one area where I think FP, though really Haskell in particular, suffers is in being hard to reason about. Performance and memory usage are of course notoriously hard to reason about in lazy languages, but it can also be challenging to understand what's happening in large code bases that may have some deep MTL stack or be using multiple interpreters over some free monad, since the semantics of how computations are being evaluated can often be defined very far from the computation itself.


When it comes to Haskell, I want to add that especially when you start doing parallel programming, it becomes even harder to reason about. You might expect that doing that extra data type conversion in parallel rather than in sequence is going to make things faster, but you could find that the GC needs to do more work for whatever reason, and it ends up actually becoming slower. It's very counter-intuitive. It's a shame because writing parallel code in Haskell can be very elegant. If only the behavior was more obvious it would be so great.

> Imperative code is also more concise than equivalent functional code

Functional code is almost always more concise than the equivalent imperative code. It's one of the main reasons people prefer functional code, and almost everyone who looks at it pretty much agrees. The only case where it will not be is when someone directly re-writes an imperative algorithm into a functional language; then the functional version can be longer. But for the most part, that's translating < 50 lines of code or something. For anything of substance, functional is more concise.

If you haven't seen that to be the case, then you may not have looked at any substantial functional code.

Regardless of whether people prefer functional or not, that's almost a universal take-away from anyone re-writing an imperative project into a functional language. It's usually a factor of 2 or 3 shorter and simpler.


First of all, please stop making sweeping generalizations, "people prefer", "almost everyone", etc... Without proper citations, these claims are meaningless.

Second, more concise doesn't always mean better, let alone more readable or more maintainable. It's often the opposite.

Third, functional programming is usually preferred because of aspects such as composition or immutability, not conciseness.

Fourth, rewrites of an existing code base usually lead to shorter code bases, simply because of the acquired knowledge, regardless of the language or the techniques.


>Imperative code is also more concise than equivalent functional code.

Okay, I'm really hoping there's a good surprise behind this, because I've been an imperative programmer for 10 years and a functional programmer for 5 and I have only rarely seen imperative code that was more concise than equivalent functional code. Faster? Sure. More efficient? I'll buy it. Perhaps even easier to read, especially when you get into point-free style on the functional side. But more concise overall? I find that hard to believe, but I'm genuinely excited now! The ultimate trump here would be APL but that's admittedly an eccentric case.


In imperative languages you don't need Edward Kmett to invent lenses for you.

Firstly, that's a non-sequitur, but secondly, in an imperative language with immutable data structures, yes, you do.

But, this is a problem specific to Haskell and its poor record syntax, not the functional paradigm in general.

I...sure? But you could get a lot of help from Phil Wadler and Simon Peyton-Jones!

> Purely functional algorithms and data structures are typically slower.

Functional persistent data structures bring many advantages, even to imperative programmers. For example, if your text editor used a persistent data structure, it probably wouldn't need to lock you out during a save and would support better undo. Mutation is efficient, but comes with many drawbacks, especially when state is shared or provenance is required. This is why functional data structures and ideas are appearing outside of functional programming, for example ZFS, git, Blockchain, Spark, Kafka etc etc
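
(To make the persistence point concrete, a minimal Haskell sketch using Data.Map from the containers library; the editor scenario above is just the motivation, not what this toy does.)

  import qualified Data.Map as Map

  main :: IO ()
  main = do
    let v1 = Map.fromList [(1 :: Int, "draft")]
        v2 = Map.insert 2 "edited" v1    -- v2 shares most structure with v1
    -- The old version is untouched and still fully usable (say, by a thread
    -- that is busy saving it), with no locking required:
    print (Map.toList v1)                -- [(1,"draft")]
    print (Map.toList v2)                -- [(1,"draft"),(2,"edited")]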

> Imperative code is also more concise than equivalent functional code.

This couldn't be further from the truth. Just as Fortran liberated arithmetic expressions from the verbosity of imperative programming, FP allows us to move beyond non-compositional, word-at-a-time sequences of mutation commands and build much higher-level abstractions. Modern imperative languages have borrowed many ideas from FP, but are still nowhere near as expressive as e.g. Haskell. Of course, imperative programming is still very useful and that's why Haskell has good support for it.


> This couldn't be further from the truth.

In imperative languages you don't need Edward Kmett to invent lenses for you.


Lenses are not necessary to practice FP, just sometimes useful (and Edward didn't invent them). Try updating a deeply nested record in your favourite imperative language without mutating the original (!)
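
(For reference, here is the no-lens version in plain Haskell record syntax, with made-up record names; it works, but the nesting shows why people wanted lenses.)

  data Address = Address { street :: String, city :: String } deriving Show
  data Person  = Person  { name :: String, address :: Address } deriving Show

  -- Build a new Person whose nested city differs; the original is untouched.
  moveTo :: String -> Person -> Person
  moveTo newCity p = p { address = (address p) { city = newCity } }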

You are correct that purely functional data structures are slower, but the difference is not as significant as you might think. Haskell is actually pretty fast, and I'd say for the vast majority of applications, Haskell is just fine performance-wise. See a few examples here: https://two-wrongs.com/on-competing-with-c-using-haskell

For me (and I'm going to sound like a massive fan boy here) Rust has surpassed functional programming. It gives me mutable data when needed and immutable when shared. So ONLY having immutable data structures now seems stupid. We can have best of both worlds.

Read this if you don't follow: "In Rust, ordinary vectors are values" http://smallcultfollowing.com/babysteps/blog/2018/02/01/in-r...


Being an OCaml/F# fan, I find this comment amusing. Rust is in fact heavily influenced by functional languages, particularly OCaml - OCaml-style algebraic datatypes and pattern matching are huge advantages Rust brings over C++ in terms of reasoning about complex domain logic. And OCaml/F# both have variables which are immutable by default but can be labelled as mutable in (e.g.) performance-sensitive regions. So it's odd to say Rust "surpassed" what other functional languages have been doing for literally decades now.

I almost think of Rust as a lower-level OCaml (faster and better support for memory management, but with somewhat weaker support for typeclasses and an inconvenient ADT syntax). An ML language you can conceivably write an OS in, but I would personally be nervous to write a complex finance application vs. OCaml.

And of course Rust also has typeclasses, with a design strongly influenced by (but somewhat weaker than) Haskell. Monads and Readers are all over the shop in modern Rust precisely because they are so useful in modern Haskell.

Basically, the reason why Rust is such a good low-level programming language is the reason that functional programming matters.


Fun fact: Rust's original implementation was in OCaml.

I sometimes lament it's not more OCamlish, to be frank ;P Learning Rust inspired me to go learn OCaml. Not being a systems programmer and wanting pervasive garbage collection I don't want to go back now!

OCaml's type system isn't capable of preventing shared mutable references. So you don't get the same compile-time safety guarantees in OCaml when you use mutation.

> you don't get the same compile-time safety guarantees in OCaml when you use mutation.

OCaml is a garbage-collected, single-threaded language.

You obviously have the same guarantees in that you can't do what would be problematic.

As a side note, a lot of the people currently writing Rust would imho be better served by OCaml. You rarely need the protections Rust provides, and they have a real complexity cost.


>OCaml is a garbage-collected, single-threaded language.

Well, yes, but that's the point. Rust gives you the same guarantees without limiting you to using a single thread or requiring you to use garbage collection.

>You rarely need the protections Rust provides and they have a real complexity cost.

Agreed.


From the abstract, the main point of the article is:

> ...higher-order functions and lazy evaluation, can contribute significantly to modularity.

You can not have lazy evaluation without referential transparency, and you can not get referential transparency with mutable variables.
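
(A small Haskell illustration of that connection, in the spirit of the paper's generate-then-select examples; the function is invented.)

  -- An infinite generator composed with a bounded consumer: only the demanded
  -- elements are ever evaluated, and skipping the rest is only safe because
  -- evaluating an element has no side effects.
  firstThreeSquaresOver :: Int -> [Int]
  firstThreeSquaresOver n = take 3 (filter (> n) (map (^ 2) [1 ..]))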

Rust makes good use of higher-order functions, but gets nothing near the modularity of Haskell code.


> Rust makes good use of higher-order functions, but gets nothing near the modularity of Haskell code.

This has not been my experience.

After years of doing pure functional programming (in Elm) professionally, I was surprised how much Rust felt like writing in an ML family language.

I still prefer referential transparency (though I don't think it would have been the right design for Rust), but "nothing near" does not fit my experience. I'd say the ergonomics of borrow checking versus a GC (which you don't have to think about) is the bigger gap.

John Hughes is a big believer in the value of laziness. There are many of us who believe laziness turned out to be a dead end; it makes some code nicer/more elegant, makes performance optimization much harder, and brings space leaks into your life. Are those costs worth the benefits? Not even close to worth it, if you ask me. I'll gladly take the referential transparency and pass on the laziness.

Almost all the researchers who started working on Haskell specifically because they wanted to explore the power of laziness...ended up pivoting to research type theory instead. I don't think that's a coincidence.


Don't get me wrong, I like Elm a lot, but the drop in expressiveness/ability to build abstractions from Haskell is substantial. Especially when you're looking at modularity, there are a bunch of things that Elm can't abstract out that both Rust and Haskell can manage just fine.

I haven't used Rust heavily enough to comment on how it compares in great detail, but comparing to Elm as a proxy for Haskell doesn't really work.

Frankly, paradigms are a really lousy way to think about languages. I wrote a series of blog posts about this[1], but I think this opening lecture from one of Brown's PL courses does a better job of making the point:

https://www.youtube.com/watch?v=3N__tvmZrzc

It's useful to talk about what say, GC, laziness, lifetimes, ownership, typeclasses/traits, higher-kinded types, higher rank types, variants, elm-style records, etc. do to a language, and how they compose, but I think you can't go very far talking about how "paradigms" compare.

[1]: https://zenhack.net/2018/07/14/three-funerals-in-the-name-of...


> It's useful to talk about what say, GC, laziness, lifetimes, ownership, typeclasses/traits, higher-kinded types, higher rank types, variants, elm-style records, etc. do to a language, and how they compose, but I think you can't go very far talking about how "paradigms" compare.

Sure. My experience has been:

* GC, lifetimes, and ownership are all high-benefit and high-cost. The cost with GC is at runtime (where the cost is so high that in many domains GC is not tolerated at all; in many others, of course, we take it for granted as fine), and the high costs of lifetimes and ownership are at development time.

* Variants and records are high-benefit, low-cost.

* Higher-rank types are low-cost, low-benefit.

* Laziness and higher-kinded types are both features with costs that significantly outweigh their benefits.

It sounds like you disagree with the last bullet point. If so, then either we've had different experiences or we walked away with different conclusions from them.


> * Laziness and higher-kinded types are both features with costs that significantly outweigh their benefits.

> It sounds like you disagree with the last bullet point. If so, then either we've had different experiences or we walked away with different conclusions from them.

I more or less agree on laziness (at least lazy-by-default; having a lazy type as found in OCaml available is a big win for little downside).

Re: Higher-kinded types: I'm curious as to what you think the high costs are? My impression is that they've mostly been left out of Elm due to pedagogical concerns. Is it just that or are there other things?


Learning curve is a very high cost by itself; HKP is the reason Haskell is notoriously difficult to learn. (Compare to Elm, which is also a referentially transparent typed ML with parametric polymorphism and row types, with nearly identical syntax to Haskell...and which is notoriously easy to learn.) Haskell beginners don't need to get into GADTs and type families and higher-ranked types, but HKP is practically unavoidable on the road to understanding a Haskell program that prints Hello World.

Another cost is in standard library complexity. You can't have HKP and not have a stdlib with Functor/Monoid/Monad etc. As Scala has demonstrated, if you have HKP but don't put these in the stdlib, a large faction will emerge pushing an alternative stdlib that has them. A larger, more complex stdlib is a cost, and so is a fractured community and ecosystem; HKP means you'll have to pick one of those two.

API design is another. Without HKP you write a function that takes a List. With HKP you now need to decide: should it actually take a List, or is it better to take a Functor/Semigroup/Monoid/Applicative/Monad instead?

If you choose one of the more generic ones, now it takes more mental steps to collapse the indirection when reading it. (1. I have a Maybe Int. 2. This function expects an `s`. 3. `s` is constrained by `Semigroup s`. 4. Can I pass a Maybe Int to something expecting a Semigroup? Compare to "This function takes a `Maybe a`", and multiply that small delta of effort by a massive coefficient; this is something everyone who reads these types will do many, many times.)
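
(Concretely, a hypothetical signature of the kind described, not from any particular library:)

  -- The generic version works for ANY Semigroup...
  combineAll :: Semigroup s => s -> [s] -> s
  combineAll = foldl (<>)

  -- ...so to call it with a Maybe Int the reader must recall that Maybe a is
  -- a Semigroup only when a is, and that Int alone is not (you have to pick
  -- Sum or Product):
  --   combineAll (Just (Sum 1)) [Just (Sum 2), Nothing]   ==> Just (Sum 3)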

This indirection also has implementation costs; in theory you could make docs and error messages about as nice if HKP is involved as if not, but there's an implementation cost there, and it seems like it must be pretty steep if you stack up languages with HKP and their quality of error messages and docs against other typed languages that don't.

So I'd say it's one huge cost (automatic induction into the highest tier of learning curve steepness), one big cost (either a larger and more complex stdlib or fractured community), and several smaller costs with high coefficients because they come up extremely often.

Yeah there are benefits too, but I don't think they get anywhere near outweighing the costs.


To indulge in fisking a tiny bit -

> HKP is practically unavoidable on the road to understanding a Haskell program that prints Hello World.

  main :: IO ()
  main = putStrLn "Hello World" 
Doesn't require an understanding of HKP at all. There isn't even any LKP. I do agree that it's necessary for a productive employed Haskeller, but not a beginner playing around with simple command line apps. There's a distinction to be made between understanding enough to make it run (Just label the IO bits with IO and think of 'do' as kind of like imperative programming but not really) and understanding more deeply, which only becomes necessary later.

> With HKP you now need to decide: should it actually take a List, or is it better to take a Functor/Semigroup/Monoid/Applicative/Monad instead?

It doesn't seem like a decision that would involve a lot of cognitive overhead. In general I'd probably just go with whatever type GHC infers. Failing that, it kind of arises naturally from like, what the function is about. Is it about reducing the List to a single value? Use Foldable. Is it about transforming the elements of the list? Use Functor. Is it about nondeterminism, but the code isn't necessarily specific to that computational context? Use Applicative if there are no sequential dependencies (which the type checker knows anyway) and Monad otherwise. Ok, maybe it seems a little complex when you write it out, but it's really fairly instinctual.
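
(A made-up example of that kind of generalization:)

  import Data.Monoid (Sum (..))

  -- Written against lists only:
  totalLengthList :: [String] -> Int
  totalLengthList xs = sum (map length xs)

  -- The same logic against Foldable: it now also works on Maybe String,
  -- Map values, Sets, trees, ... with no new code.
  totalLength :: Foldable t => t String -> Int
  totalLength = getSum . foldMap (Sum . length)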


The point about how difficult it is to learn is well made and well taken. This difficulty is very contextual though - this is a problem for the language in terms of adoption, but it's not a problem if, for instance, you're a team of experienced Haskell programmers deciding on which language to use for your Important Business Applications.

I think that you overstate the cognitive overhead of reading polymorphic type signatures to those who are reasonably familiar with common idioms. Taking a second to remember that `Maybe a` is a semigroup if its argument is seems like a small cost to pay to me.

I think there's a significant benefit to highly polymorphic functions in the standard library which seems to rarely be brought up. Polymorphic functions are applicable more often. So then if the function is already written for you, and you use it, then anyone who reads your code has to look at one less definition to understand it (if they're familiar with the library function). This forms a larger common vocabulary and in some ways imposes a smaller cognitive overhead on the reader.

Speaking more broadly, people are often frustrated by the number of abstractions from the standard library that they have to learn to be productive. But they don't notice all of the abstractions they now don't have to learn in individual codebases - because they don't have to exist. And this effect adds up - every time there would have been a slightly less well implemented, proprietary, monomorphic version of a function in a codebase and you use the polymorphic standard library one instead, everyone who reads that code has one less thing to get their head around. It pays for itself.

I also feel like you understate the benefits - the amount of times I think for 20 seconds and realize that the complicated function I was about to write is just like, `traverse` or something is incredible to me.
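
(A toy example of that experience, not from the parent: "parse every item, but fail as a whole if any single parse fails" is exactly traverse.)

  import Text.Read (readMaybe)

  parseAge :: String -> Maybe Int
  parseAge = readMaybe

  parseAges :: [String] -> Maybe [Int]
  parseAges = traverse parseAge

  -- parseAges ["12", "30"]    ==> Just [12,30]
  -- parseAges ["12", "old"]   ==> Nothing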

I think it's possible to have legitimate disagreements here, which are often driven by differing personal experiences, and by the kinds of domains people are operating in and so on. I don't think there's a one size fits all answer.


> If you choose one of the more generic ones, now it takes more mental steps to collapse the indirection when reading it. (1. I have a Maybe Int. 2. This function expects an `s`. 3. `s` is constrained by `Semigroup s`. 4. Can I pass a Maybe Int to something expecting a Semigroup?

Semigroup doesn't involve higher kinds at all. What you seem to be discussing is either type classes or just a pile of junk from abstract algebra. Fwiw, Elm has semigroup too -- it's called appendable (which is a much much better name...).

> Learning curve is a very high cost by itself; HKP is the reason Haskell is notoriously difficult to learn.

I don't think any one feature of Haskell is why it's hard to learn. I think the reasons are much more mundane, the main ones being:

1. The language is just enormous. It's a lot to need to have in your head to understand some bit of code you come across. Folks end up picking a (small) subset of it just to stay sane, but this doesn't help you when you run into a new library; you basically need to have most of the language in your mind somewhere to understand $RANDOM_NEW_LIBRARY reliably. And because it's an issue of sheer size, there's no short-cutting it. It has a lot of features with heavy overlap in use cases, so you spend a lot of time thinking about silly things like "Should I use FunctionalDependencies or TypeFamilies?" "I'm writing a library that needs to generate a bunch of boilerplate code, should I use GHC.Generics, TemplateHaskell, or something else?".

2. The community is really lousy about pedagogy. They tend to lead with the abstraction, which is just not how people learn. I really wish this[1] had been written like a year earlier; it would have saved me a lot of trouble wading through useless instructional material trying to learn this stuff. It doesn't seem like the bulk of the community took that to heart though, and while there are some good learning resources out there, there's a sea of worse-than-useless ones.

3. There's a culture of complexity/over-engineering. I don't think this is unavoidable, but it's particularly a hazard of being a research language, where to a large extent the whole point is to play with crazy ideas. The maintainers still see the language as primarily a platform for experimentation, so KISS can be a hard thing to push for.

> and it seems like it must be pretty steep if you stack up languages with HKP and their quality of error messages and docs against other typed languages that don't.

I'm not really sure I buy this; I think the list of languages that have these things and have seriously made good error messages a priority is pretty short (empty?). I can point to some simpler ML dialects that still have some really lousy error messages.

[1]: https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...


> you can not get referential transparency with mutable variables

Sure you can: https://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.ht...

Rust's ownership types and lifetimes actually allow for this just fine. The latter is the essence of how Haskell's ST Monad works; you can use lifetimes to get locally-mutable state without violating global invariants, since once they go out of scope they can't be reused.
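
(A minimal sketch of that ST pattern in Haskell; sumST is an invented example.)

  import Control.Monad.ST (runST)
  import Data.STRef (modifySTRef', newSTRef, readSTRef)

  -- Mutation on the inside, a pure function on the outside: runST's type
  -- guarantees the mutable reference cannot escape this scope.
  sumST :: [Int] -> Int
  sumST xs = runST $ do
    ref <- newSTRef 0
    mapM_ (\x -> modifySTRef' ref (+ x)) xs
    readSTRef ref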

It's interesting to observe that, without "magic" standard library functions and `unsafe`, Rust's type system actually completely constrains mutability, and if a function doesn't have `mut` somewhere in its type signature, it doesn't break referential transparency.

That said, in practice, the language does have magic functions that violate this property, and they do so in a way that means you can't use the above reasoning principle at all. Also, mutability being constrained by the types is not the same thing as typical code not using it everywhere, which is the situation with rust-as-found.


> You can not have lazy evaluation without referential transparency, and you can not get referential transparency with mutable variables.

"… allowed to up­date the unique ob­ject de­struc­tively without any conse­quences for referential transparency."

Uniqueness Typing

https://clean.cs.ru.nl/download/html_report/CleanRep.2.2_11....


I love Rust but come from an OCaml/F# background (and, as others have said, the inspiration is palpable), and you seem to have a common misunderstanding around immutable data structures.

Immutable data structures in functional languages are not data structures that cannot be mutated; they are data structures that efficiently produce a new, updated version on each operation (such as the cons list). While they are slower than traditional data structures for a typical use case, they tend to be a lot easier to think about, make parallel code much easier, and let you easily go back to previous versions of your data (when discovering Rust, I was actually disappointed that they were not available in the std).

The im[0] crate offers what seem to be good Rust implementations of that kind of data structure (with a better explanation of why you would want them).

[0]: https://docs.rs/im/13.0.0/im/


>Immutable data structures in functional languages are not data structures that cannot be mutated,

This claim seems dubious. But in any case, I think the OP was just talking about immutable data, i.e., in Rust terms, data to which there is no mutable reference. The data in question could be a simple integer.


It seems dubious because the FP community tends to use the expression 'immutable data structures' where 'persistent data structures' would be the proper term.

As a matter of fact, I have seen data structures in FP languages called immutable while having operations to mutate them in place.

I believe the OP spoke of both Rust's immutable data and the fact that most FP languages have only immutable data structures (different mechanisms to deal with similar problems).


I think they were referring to an actual difference between Rust and OCaml, Haskell, etc. Rust allows you to mutate data via mutable references that are guaranteed to be unique. Mainstream functional programming languages do not allow you to do this (though linear typing may land in GHC soon).

Rant ahead.

Functional programming discussions on HN are pretty depressing. Many of the statements about FP that I see here right now are the same old shit I've heard about Java in the mid-00s. You just need to mentally translate some buzzwords, but the essence is the same. Seems like the software industry is just running in circles. Something gets hyped, people jump on it, fail, then search for the next bandwagon.

Some examples:

1. Endless yammering about low-level correctness. As if it's the biggest problem in software engineering right now. In reality, most domains don't need perfection. They just need a reasonably low defect rate, which is not that hard to achieve if you know what you're doing.

2. Spewing of buzzwords, incomprehensible tirades about design patterns. FP people don't use the term "design pattern" often, but that's what most monadic stuff really is. Much of it is rather trivial stuff once you cut through the terminology. (Contrast this with talks by someone like Rich Hickey, who manages to communicate complex and broad concepts with no jargon.)

3. People who talk about "maintainability" of things while clearly never having to maintain a large body of someone else's code.

Etc.

---

The #1 problem in software right now is not correctness or modularity or some other programming buzzword. It's the insane, ever-growing level of complexity and the resulting lack of human agency affecting both IT professionals and users.


For me, and this is having grown and maintained some large functional code bases, the fact that your day-to-day code does have an enhanced low-level correctness is exactly what helps you attack what you describe as the major problem: complexity.

The way I've dealt with complexity in large code bases is through being fearless about refactoring. Refactoring may not reduce complexity in terms of what the software does, but it reduces the complexity of understanding the code base tremendously by realigning the structure of the code with the actual problems being solved.

Refactoring gets a lot less scary when you have greater confidence in the low level correctness of the code.

On your second point, yes, I have found that FP has some, shall we say, interesting jargon. But I have trouble thinking of succinct names for a lot of FP constructs that are nonetheless useful, such as monads. A lot of more colloquial terms that come to mind in brainstorm sessions might even undermine understanding by providing a false equivalence. I think the same argument can be made for mathematical notation.

In summary I'd turn around your last sentence a bit. Yes, the #1 problem is complexity, but you can reduce complexity significantly by applying correctness and modularity and other programming 'buzzwords'.

You can rail against complexity itself, but I think we're probably on the bottom end of a very large complexity slope over the next decades. So we'll need better and better constructs to deal with it.


If you think the solution to large codebases is refactoring, IMO you haven't seen codebases large enough: the kind your compiler can't help with, because you start hitting data modeling, legacy, regulatory, and people issues.

Hmm. Most recently I managed a refactoring design process across a multi hundred person org at a large tech company, also needing to include a few other multi hundred people orgs. It wasn't the first time I've done that. Most of the effort involved was people issues, to your point.

At that point you're talking multiple codebases and the complexities become managing transactions, data transformations, and contracts across discrete processes.

I'm not sure how that's germane to the discussion at hand. In fact, to the opposite point, I've found that in multi organization refactors and designs functional programming continues being a useful mine for concepts to simplify thinking around data transformations, immutability, and data contracts.


Amen. When I read that paper, it was clear that the author's definition of modularity was very different from my own.

When I think about an algorithm like merge sort, mini-max decision trees or other low-level algorithms, the concept of modularity doesn't even enter my head. It doesn't make any sense to modularize an algorithm because it is an implementation detail; not an abstraction and not a business concern.

Modularity should be based on high level business concerns and abstractions. The idea that one should modularize low-level algorithms shows a deep misunderstanding of what it means to write modular software in a real-life context outside of academia.

It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation.

Referential transparency is not abstraction, in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.


Which business concern of a machine is not an algorithm? An algorithm is a pretty general thing. If modularity works at the lowest levels by induction it works great at the higher levels as well.

I've developed and maintained projects that are more than a million lines of code (probably much more), and I've also written large Haskell programs (5000 lines is large, since it encompasses what would have taken me maybe 30000 lines in C++). I can say that the maintenance time and error rate of my Haskell programs is dwarfed by that of any C++ program I've written or maintained.

We've also learned in software engineering that the defect rate is mainly correlated with the code size, ie. the complexity of the code and how much there is, or simply the entropy of the code. With functional abstraction, the abstractions aren't "leaky" and actually allow you to reduce complexity and forget about the lower level details entirely.


This is absolutely wrong. OOP is the one that blurs the meaning between modularity and implementation.

Think of it this way. In order to make something as modular as possible you must break it down into the smallest possible unit of modularity.

State and functions are separate concepts that can be modularized. OOP is an explicit wall that stops users from modularizing state and functions by forcing the user to fuse state and functions into a single entity.

Merge sort is a good example. It can't be broken down into smaller modules in either OOP or functional programming. The problem exists at a higher level.

In FP, mergeSort can be composed with any other function that has the correct types (see the sketch at the end of this comment). In OOP, mergeSort lives in the context of an object and theoretically relies on the instantiated state of that object to work. So to re-use mergeSort in another context, a MergeSort object must be instantiated and passed along to another ObjectThatNeedsMergeSort in order to be reused. ObjectThatNeedsMergeSort has an explicit dependency on another object and is useless without the MergeSort object. Remember, modules don't depend on one another, hence this isn't modularity; this is dependency injection, a pattern that promotes the creation of objects that are reliant on one another rather than objects that are modular.

I know there are "Design patterns" and all sorts of garbage syntax like static objects that are designed to help you get around this. However, the main theoretical idea still stands: I have a function that I want to re-use, and everything is harder for me in OOP because all functions in OOP are methods on an object, and to use a method you have to drag along the entire parent object with it.
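
(To put the FP side of that in code: a textbook Haskell mergeSort, standing alone, plus one arbitrary composition with it.)

  mergeSort :: Ord a => [a] -> [a]
  mergeSort []  = []
  mergeSort [x] = [x]
  mergeSort xs  = merge (mergeSort front) (mergeSort back)
    where
      (front, back) = splitAt (length xs `div` 2) xs
      merge [] bs = bs
      merge as [] = as
      merge (a:as) (b:bs)
        | a <= b    = a : merge as (b:bs)
        | otherwise = b : merge (a:as) bs

  -- No object to construct or inject; it composes directly with anything of
  -- a compatible type.
  top3 :: Ord a => [a] -> [a]
  top3 = take 3 . reverse . mergeSort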


>Modularity should be based on high level business concerns and abstractions. The idea that one should modularize low-level algorithms shows a deep misunderstanding of what it means to write modular software in a real-life context outside of academia.

Modularity in functional programming languages penetrates to the lowest level. Functional programming encourages the composition of powerful, general functions to accomplish a task, as opposed to the accretion of imperative statements to do the same. With currying, a function that takes four arguments is trivially also four separate functions that can be further composed. The facilities for programming in the large are also arguably more general and expressive than in OOP languages: take a look at a Standard ML-style module system, where entire modules can be composed almost as easily as functions.
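
(Concretely, with invented names:)

  -- One definition with four arguments...
  logLine :: String -> String -> Int -> String -> String
  logLine level component code msg =
    level ++ " [" ++ component ++ "] " ++ show code ++ ": " ++ msg

  -- ...and several more functions fall out of it by partial application:
  warn :: String -> Int -> String -> String
  warn = logLine "WARN"

  dbWarn :: Int -> String -> String
  dbWarn = warn "db"

  dbTimeout :: String -> String
  dbTimeout = dbWarn 408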

>It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation.

I'm not sure I understand you here entirely, but implementation details matter. Is this collection concurrency safe? Is this function going to give me back a null? Is it dependent on state outside its scope that I don't control? Etcetera. Furthermore, when it's necessary to hide implementation details, it's still eminently possible. Haskell and OCaml support exporting types as opaque except for the functions that operate on them in their own module, which is at least as powerful as similar functionality in OOP languages.

>Referential transparency is not abstraction, in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.

Yeah, I've lost you here. Would you mind clarifying?


Currying is just another example of poor abstraction. You have a function which returns another function which may be passed around to a different part of the code and then called and it returns another function... Abstraction doesn't get any leakier than this. It literally encourages spaghetti code. I despise this aspect of FP: code ends up passed around all over the place, and keeping track of what came from where is a nightmare.

I've written plenty of very short OOP programs. They don't have to be huge to be effective. The reason why you sometimes see very large OOP software and rarely see large FP software is not that FP makes code shorter; it's that FP logic would become impossible to follow beyond a certain size.

My point about black box and referential transparency is that a black box hides/encapsulates state changes (mutations) by containing the state. Referential transparency prevents your function from hiding/encapsulating state changes (mutations) and thus it prevents functions from containing the state which is relevant to them; instead, the relevant state needs to be passed in from some (usually) far-flung outside part of the code... A part of the code which has nothing to do with the business domain which that state is about. To make proper black boxes, state needs to be encapsulated by the logic which mutates it.


Then don't use it. Not all functional languages force you to use currying and anonymous functions. I'm a functional programmer and I agree that passing around functions as first-class values can get kind of messy. Don't do it. Data is data and functions are functions; pass data into the pipeline, not functions. But if you pass an object into a pipeline (which OOP forces you to do) it's 10x worse.

Keep in mind that OOP is essentially forced currying. A method that returns an object full of other methods is identical to currying, except the method isn't returning a single function... it's returning a group of functions that all rely on shared state... way more complicated.


Can you explain why you see currying as an abstraction leakage? What implementation details does it betray?

I haven’t found a single large OOP program in the line of business that was easy to understand. Quite contrary to my experience with large FP code bases, of which many exist, to be clear. They are just a lot smaller than what equivalent OOP code would look like, and I challenge you to refute that with evidence.

I completely disagree about black boxes and think they are actually a complete scourge on software engineering. I should know everything that is relevant to me from a function’s type signature. In languages with pervasive side effects, this is not possible.


I thought my example about being able to call a function to get another function and then passing it to some other part of the code and calling it there was enough to illustrate the kind of confusion and disorganization that currying can cause.

For me, the most important principles of software engineering are:

1. Black boxing (in terms of exposing a simple interface for achieving some results and whose implementation is irrelevant).

2. Separation of concerns (in terms of business concerns; these are the ones that can be described in plain language to a non-technical person)

You need these two principles to design effective abstractions. You can design abstractions without following these principles, but they will not be useful abstractions.

Black boxes are a huge part of our lives.

If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there. The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.

As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys. These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals.

I couldn't explain to anyone anything about how airplanes work but I could easily explain to them how to use a plane ticket to go to a different country.

With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.


>I thought my example about being able to call a function to get another function and then passing it to some other part of the code and calling it there was enough to illustrate the kind of confusion and disorganization that currying can cause.

I generally think of spaghetti code as code that has unclear control flow (e.g. GOTOs everywhere, too many instance variables being used to maintain global state, etc.) Currying, plainly, does not cause this.

>1. Black boxing (in terms of exposing a simple interface for achieving some results and whose implementation is irrelevant).

Sure, completely possible in ML-family languages and Haskell. Refer to what I said about opaque types earlier.

>2. Separation of concerns (in terms of business concerns; these are the ones that can be described in plain language to a non-technical person)

Again, nothing in functional languages betrays this. You are talking about code organization at scale, and none of what you have said so far is precluded by using pure functions and modules and such.

>If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there. The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.

I do not like analogies in general, though for this one I will suggest that you should at least know what the baseline social expectations are of the place you are traveling to. That is, plainly, what I am arguing that functional programming makes clearer and easier to deal with.

>As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys. These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals.

Yes, and underneath that program in a functional programming language are lots of small, carefully composed functions that are often just as applicable to many other problems and problem domains.

>I couldn't explain to anyone anything about how airplanes work but I could easily explain to them how to use a plane ticket to go to a different country.

This is why I don't like analogies. I have no idea what you are talking about here.

>With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.

What makes functionally-styled APIs hard to explain to a junior developer?


It's not abstraction leakage imo. But it's generally not good to use this feature excessively in FP. First off it's equivalent to instantiation in OOP. An FP program that passes along functions as first class all over the place is similar to an OOP program that is passing along Objects all over the place. Closures contain the concept of state and method just like an object. If you use the pattern of passing closures everywhere you are essentially mimicking the complexity of an OOP program and you will encounter the same problems.

This is true, but it’s also a lot harder to accumulate that amount of messy state with pure functions. Besides, most of the power is in a few functions, and I’m mostly talking about curried and composed forms of fold and map which are very useful.
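
(For example, the sort of thing meant by curried and composed forms of fold and map, as a trivial made-up pipeline:)

  totalWordCount :: [String] -> Int
  totalWordCount = foldr (+) 0 . map (length . words)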

> Yeah, I've lost you here. Would you mind clarifying

Some time after leaving university, when your statements are no longer labeled as correct or incorrect, people like the GP start strongly believing that any nonsense that enters their head is a fundamental truth. The GP's statement is an example of the above.


The goal of FP would be to make it straightforward to modularize both large and small details. The notion that you can modularize a low level algorithm does not preclude modularizing a high level business process.

This is a good thing because I've found in my own experience that it's hard to say what level you're at when implementing something. Implementing a standard deviation function? Is it happening over in-memory data or persistent data? Is it going to happen in parallel when it can? Is it going to be distributed across servers? Suddenly you're back at the high level.


> 3. People who talk about "maintainability" of things while clearly never having to maintain a large body of someone else's code.

This is the point from your rant which bugs me the most.

Quite literally due to maintainability my team uses an FP language. It is so easy to pick up code you wrote 3 years ago, code someone else wrote 6 months ago, code the guy who left 4 weeks ago wrote, code 10+ people are working on at the same time, and then continue to maintain that code without fear of misunderstanding the deep complexity that is also associated with it on top of the additional complexity you are about to add to it.

My team doesn't need to waste days on archaeological digs just to comprehend the 18+ projects we now maintain. We simply grab the code base, modify the code, which follows the idiomatic "write small components, build bigger components from those" mentality that comes with writing in an FP language, and then fix the chain of compiler errors along the way until our new feature works.

By attempting to be more precise (through either terminology or code correctness), introducing higher confidence levels (programmer confidence, code operability confidence), and wrangling complexity through well designed idioms (monads, proper effect handling), you end up delivering a large amount of value to your customers which impact their bottom line. Faster feature delivery, lower bug rates, nearly zero risk of data leaks, uncrashable software, maintainable custom software over 5+ year timelines.

I've been in this industry for 20 years now and have seen a wide spectrum of good and bad. Using FP over the last 3 years has definitely moved the bar in the "good" direction a lot further than I originally anticipated, but I suppose YMMV.


>My team doesn't need to waste days doing archeological digs and comprehensions [...] We simply grab the code base, modify the code [...] and then fix the chain of compiler errors along the way until our new feature works.

"We don't need to read and understand stuff. We just grab the code by the horns and spur it with changes until the compiler stops thrashing. Yeehaw!"

There are myriad things that distinguish long-term maintenance from greenfield development. Like transferring application ownership from one team to another, reverse engineering, adapting to the changes in external systems you integrate with, investigating bug reports and performance issues, doing monitoring, etc. etc. If your only concern in "maintenance" is making some changes while avoiding the kinds of bugs that can be prevented by a static type checker, then something seriously does not add up.


If you would stop thinking of your relationship with the compiler as adversarial and start thinking about it as a tool which guides development I think you might start to see where we're coming from.

I was literally commenting about your point on “maintenance“.

Yes, you were. And you effectively proved the point I made in my root comment. If your notion of large-scale maintenance involves only the problems that can be caught by a type checker, you don't know what real maintenance looks like. Which means that even your comments about the issues that can be prevented by a type checker are suspect.

Again, the vast majority of claims made about FP on Hacker News right now (even in this thread) were made about Java in the early 00s. Almost word for word, except for some terminology. Unfortunately, back then I didn't have enough experience to spot the issues with those claims, and the people who did have real experience were mostly silent.


This reads to me as, “You filthy FP advocate, you have no clue what real maintenance is!” I’m not saying the type checker alone is enough to perform large scale maintenance, you and I both know there is more to it than that.

At the end of the day, sure, we all have opinions on tools and how they make our lives better. You and I may not agree that FP is the right tool, but I'm not going to make sweeping generalizations that you know nothing about large-scale development and maintenance solely based on your language paradigm choice and a few focused HN comments.


I'm guessing that you're talking about a strongly typed functional language here? ML family?

Yes. I didn’t outright say “Haskell” because it seems to draw even crazier comments out. That said, I’m aware not all of my points apply to numerous FP languages.

> FP people don't use the term "design pattern" often, but that's what most monadic stuff really is.

This is exactly backwards. A monad is not a design pattern - a design pattern is an awkward manual reimplementation of a monad (or another category). In OO design patterns the structure behind what you're actually doing is buried under both the clunky type system of most OO languages and the arcane memorization of patterns and their names.

The whole reason design patterns exist is that in C++[1]/Java/Smalltalk/etc. the type system is not quite good enough to enforce consistency with certain complex designs, failure management in concurrent systems being an excellent tricky example. In imperative/OO-dominant languages there is inevitably a huge amount of boilerplate around checking nulls, wrapping things in try/catches, and so on. Design patterns are a useful abstraction of this boilerplate in a way that's easy to maintain (and, more importantly, they are an intuitive common language for many programmers). But they are no substitute for categories, which allow the compiler to make sure the design pattern is actually properly implemented.
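
As a small illustration (a Haskell sketch with invented types, not the only way to do it): the nested null checks an OO caller would write by hand collapse into one Maybe computation, and the type checker forces every caller of lookupCity to handle the missing case:

    import qualified Data.Map as Map

    newtype Address = Address { city :: String }
    newtype User    = User    { address :: Maybe Address }

    lookupCity :: String -> Map.Map String User -> Maybe String
    lookupCity name users = do
      user <- Map.lookup name users   -- "null check" #1
      addr <- address user            -- "null check" #2
      pure (city addr)

    main :: IO ()
    main = print (lookupCity "ada" (Map.fromList [("ada", User (Just (Address "London")))]))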

But this stuff is quite complicated. It really does sound like a lot of navel-gazing mathy jargon. But the way you've phrased this makes me wonder - and I am aware this is condescending - if you haven't actually used categories in practice.

My thinking on this has been strongly influenced by this excellent blog series from Mark Seemann: https://blog.ploeh.dk/2017/10/04/from-design-patterns-to-cat...

This blog series is a very very good lower-level introduction from the same blog, with examples in C#/F#/Haskell: https://blog.ploeh.dk/2018/03/22/functors/

[1] That said, you can be pretty fancy in template C++ with category-level type programming.


A monad isn’t a category.

Tbh I think the whole “category theory” obsession one sees in some parts of online FP evangelism needs to die. Haskell has (endo)functors which are a useful concept for which one needs to know zero category theory (similarly for monads). But otherwise, FP basically only ever has one category which has all the nice properties one could want (ok if you have a weird type system based on a weird logic you might have a slightly different category), so the thing people call category theory is just the theory of the one category you live in. One doesn’t say group theory is category theory just because there is a category of groups.
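
To be concrete about that: this is all the category theory a working programmer needs for functors (a minimal sketch; the Tree type is invented for illustration). A functor is just "a structure you can map a function over":

    data Tree a = Leaf a | Node (Tree a) (Tree a)

    instance Functor Tree where
      fmap f (Leaf x)   = Leaf (f x)
      fmap f (Node l r) = Node (fmap f l) (fmap f r)

    -- fmap (+1) (Node (Leaf 1) (Leaf 2))  ==  Node (Leaf 2) (Leaf 3)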

Actual category theory isn’t much about specific categories so much as it is about the relationships between them, and ideas common to many categories (natural transformations, limits, terminal objects, etc).

I’m not saying category theory is bad, but I think to get much out of it one needs more than one category (and hopefully one can think of a category which isn’t a topos too), and one doesn’t tend to come across categories in day to day functional programming. Some definitions and constructions from category theory may be useful in constructing a type system for a new ML-family programming language.


> This is exactly backwards. A monad is not a design pattern - a design pattern is an awkward manual reimplementation of a monad (or another category).

This isn't correct either. Design patterns are simply cocategories.


Monad is a design pattern.

1. Google knows what they are doing. How many zero-days does Chrome have this month?

2. Yeah, monads and applicatives aren't too hard, but really understanding monad transformers well is challenging (see the sketch after this list).

3. I mean, yes, there are a lot of junior-ish devs who see the potential of FP and then spout how much better it is, but isn't that just like complaining about how annoying people are on twitter? I'm not sure it's meaningful to make this criticism, or at least I'd like an example.
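
On point 2, here is a rough mtl-style Haskell sketch (all names invented for illustration) of a small transformer stack. Each layer brings its own run function and its own set of operations, which is where most of the learning curve sits:

    import Control.Monad.Reader
    import Control.Monad.State
    import Control.Monad.Except

    data Config = Config { limit :: Int }

    type App = ReaderT Config (StateT Int (Except String))

    step :: App ()
    step = do
      n   <- get                            -- from the StateT layer
      lim <- asks limit                     -- from the ReaderT layer
      if n >= lim
        then throwError "limit reached"     -- from the Except layer
        else put (n + 1)

    runApp :: Config -> Int -> Either String ((), Int)
    runApp cfg n0 = runExcept (runStateT (runReaderT step cfg) n0)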


The best way to tell if someone has no idea what they’re talking about and are just mindlessly repeating other ideas is when they say the phrase “easier to reason about.”

It appears in thousands of talks and blog posts. It’s completely subjective and unquantifiable what you or I think is easier to reason about. It’s largely (entirely?) about aesthetics.


Pure functions are quite concretely easier to reason about, though, because when you are looking at a function or calling it, you don't have to worry about its effects on state or whether it is a leaky abstraction in that regard. I compare it a lot to looking at code in a dynamically scoped vs. lexically scoped language. It is genuinely a massive step forward in clarity.
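
A tiny Haskell illustration (hypothetical names): the first function below can be understood at any call site from its arguments alone; the second depends on, and silently changes, hidden state:

    import Data.IORef

    applyDiscount :: Double -> Double -> Double
    applyDiscount rate price = price * (1 - rate)   -- pure: same inputs, same output

    applyDiscountIO :: IORef Double -> Double -> IO Double
    applyDiscountIO rateRef price = do
      rate <- readIORef rateRef                     -- reads mutable state...
      writeIORef rateRef (rate / 2)                 -- ...and silently changes it
      pure (price * (1 - rate))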

I have the opposite experience. My simple OO plane->setSpeed(400) call would become a functional monstrosity that takes in the entire previous world and returns new World, Military, AirForce, and Fleet objects based on the old ones, to contain a new plane with the new speed.

The entire function is way harder to reason about, because I can't tell from the outside what other parts of the world it may have accidentally modified. The lack of basic encapsulation really turns me off to FP.

Supposedly the best FP answer to the problem was lenses, which would drive the complexity of my code up into the stratosphere. Shortly after trying those out, I migrated my project to imperative OO and haven't looked back since.


I thought getters and setters were considered an antipattern by most OO people anyway?

Anyway, beyond that, the relevant part is not that one function call in isolation but wherever it's getting called from and why it is getting called there. Without more context I don't know how I would solve that, but suffice it to say that getting into lenses and the entire world and such sounds massively unnecessary.
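
For what it's worth, here is a hedged sketch (invented types, plain Haskell records, no lenses) of how a speed update can look without threading the entire world around; only the containers on the path to the plane get rebuilt:

    data Plane = Plane { planeId :: Int, speed :: Double } deriving Show
    data Fleet = Fleet { planes :: [Plane] }               deriving Show

    setSpeed :: Double -> Plane -> Plane
    setSpeed v p = p { speed = v }

    setSpeedInFleet :: Int -> Double -> Fleet -> Fleet
    setSpeedInFleet pid v (Fleet ps) =
      Fleet [ if planeId p == pid then setSpeed v p else p | p <- ps ]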


> The best way to tell if someone has no idea what they’re talking about and are just mindlessly repeating other ideas is when they say the phrase “easier to reason about.”

"easier to reason about" is not someone's opinion, it's math. Please educate yourself on formal verification methods before drawing broad conclusions including substrings like "no idea" or "mindless repeating".


You can roughly quantify it as the number of things you need to keep in mind when looking at a piece of code. For example, if you need to be aware of how a variable is modified outside of the scope currently on your screen, that makes it harder to reason about.

Possibly interesting: https://overreacted.io/the-bug-o-notation/


>The #1 problem in software right now is not correctness or modularity or some other programming buzzword. It's the insane, ever-growing level of complexity and the resulting lack of human agency affecting both IT professionals and users.

Typed functional programming is part of the solution to this. It's not just correctness and modularity (both of which reduce complexity, btw).

Also you're right about the whole monad thing, it is a pattern, and like patterns in OOP, doesn't necessarily reduce complexity.


I'm not sure complexity itself gets in the way of human agency so much. It's more the risk and fear of problems, including job security, when trying to wrestle with that complexity and invariably breaking unforeseen things.

Maybe that's what the FP people mean by DDD and bounded contexts. If you constrain the complexity into something bite-size that in turn has well-documented interface points, then it all starts getting easier to manage again.


> (Contrast this with talks by someone like Rich Hickey, who manages to communicate complex and broad concepts with no jargon.)

It really seems to me that a big reason Rich Hickey seems so profound is that what he's saying means 10 different things to 10 different people.


One of my favorite papers. Incidentally, although I'd spent my entire career chasing some elusive notion of "good design", this paper was the first I have seen to explicitly define "good design" as "modular".

> It is now generally accepted that modular design is the key to successful programming... However, there is a very important point that is often missed. When writing a modular program to solve a problem, one first divides the problem into subproblems, then solves the subproblems, and finally combines the solutions. The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together. Therefore, to increase one’s ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language. Complicated scope rules and provision for separate compilation help only with clerical details — they can never make a great contribution to modularization

I now tend to see all the other design principles as ways to improve modularity. Everything else is mostly just fluff.


I think functional programming is a useful (and probably fun, for some people) exercise in seeing how far you can go when solving problems with a limited or "pure" set of tools. We got, and are still getting, a lot of insight from it. Like forcing a boxer to box with only one hand while the other is tied behind his back: I'm sure he would learn quite a few new tricks not to get battered in the ring that way. Things do get weird when people start forgetting that the world is a bigger place, that there are a lot of ways to do the same thing, and that reality doesn't care about purity. That makes some people bitter, and they dig down even harder into their limited world and start evangelizing it even harder to signal their smartness. In their bitterness and anger they do not see that functional programming has already made it big: almost every important language today supports multiple paradigms, including functional.

If it mattered, we wouldn't have needed articles like this. It's a useful niche for certain tasks, and that's about it. Certain types of people with certain thinking patterns strongly prefer it; other people don't. :shrug:

People should learn a functional language, even if they do not intend to ever use it professionally. Languages are tools for thought and if you only know imperative programming, it will limit your thinking (for example, one well-respected OOP programmer on stackoverflow seriously suggested modelling a bank account with a mutable number). Functional programming ideas are gaining traction and have made big contributions to distributed computation, filesystems and databases recently. Don't get left behind by dismissing it all as niche. You'll also get a preview of the upcoming Java and C# features.
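
For example, one common functional take on the bank-account case (a sketch, not the poster's actual code) is an immutable history of transactions, with the balance as a fold over that history:

    data Transaction = Deposit Double | Withdraw Double

    balance :: [Transaction] -> Double
    balance = foldl apply 0
      where
        apply acc (Deposit  x) = acc + x
        apply acc (Withdraw x) = acc - x

    -- balance [Deposit 100, Withdraw 30, Deposit 5]  ==  75.0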

I agree; I recommend that everybody master imperative, OOP, functional, logic, and constraint-based declarative languages. They are all useful, and certain things are much easier in one type or another.


