What's functional programming all about? (2017) (lihaoyi.com)
161 points by Ivoah 12 days ago | 153 comments





Author here. This blog post is actually kind of funny; I had a flash of clarity one afternoon that wouldn't go away, so I spent 8 hours in a manic frenzy banging it out in one pass with no editing. Not how most of my writing happens (typically it's a tedious slog with multiple passes of editing and refinement).

Anyone who likes this article on my blog should check out this presentation on applying this philosophy to build tools

https://youtu.be/j6uThGxx-18?si=ZF8yOEkd4wxlq84X

While the blog post is very abstract, the video demonstrates how to take the abstract philosophy and use it to help solve a very concrete, very common problem


Hi Li, appreciate your work. How do you feel about the state of Scala these days? I took the EPFL intro on Coursera years ago, but I was always disappointed by two things: the community feels very fragmented outside of IDEA (RIP ENSIME — oh, is it back now?), and it seems like Spark completely overwhelms the rest of the Scala ecosystem. I've mostly moved on these days but still think fondly of it from time to time.

Have you read Backus' original FP paper "Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs"?

https://dl.acm.org/doi/10.1145/359576.359579


As a programmer, I don't know if it's still relevant to make a strict separation between programming paradigms. You can use immutable types, pure functions, closures and so on in most languages. Conversely, you can define mutable types and imperative code in most functional programming languages.

I'm always surprised reading comments on these topics, with people saying they don't grasp FP. But don't we use higher-order functions, closures, and combinators all the time in most mainstream languages? How hard can it be to learn OCaml or Clojure for someone who uses closures all over the place in JS?

Monads have a steeper learning curve, but outside of Haskell they aren't that pervasive. And there are constructs with a similar flavor in mainstream languages too (result types in Rust...)


True, the conceptual difference between (pure) functional and imperative programming is disguised by the many functional patterns most mainstream languages have absorbed. While these patterns are useful, there is more to say about pure functional programming.

I recently gave a talk to some colleagues about that, which was divided into two parts: Practical functional programming patterns we can use today in our codebases (we use mainly C++, Python) and a more abstract part about pure functional programming.

The first part basically boils down to using functions as first-class "objects", while the point of the second part was that there can be no implicit state in a pure functional language. The consequence is that there is no strict order of execution, in direct contrast to imperative programming, which is all about lists of statements executed one after another.

I presented small code examples in Haskell and showed corresponding execution graphs to emphasise that the compiler can easily optimise the execution order.
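
For instance, a toy sketch of the kind of example I mean (this one is made up for this comment):

    result :: Int
    result = a + b
      where
        -- a and b share no state and have no effects, so nothing fixes
        -- their evaluation order; the compiler may schedule them freely
        a = length (filter even [1 .. 1000])
        b = sum [1 .. 1000]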

I like that POV, because it clearly distinguishes imperative from functional programming. Starting from there, it's also easy to understand the motivation behind monads or elaborate on architectural patterns like "functional core, imperative shell".


I don't understand the bit about the execution order, nor how it's relevant in your day-to-day programming.

When you execute a function in a mainstream language, the arguments are usually evaluated first, then the function is called with the result of that evaluation.

E.g. the following in JS will always show the confirm dialog. But in a lazy language like Haskell, since the x parameter is never used, it would never be shown.

  (function(x) { return "bar"; })(confirm("foo"));
This can help with implementing things like infinite streams (e.g. iterate[0] in Haskell), which may be a pleasant pattern for solving some problems. On this example alone, I wouldn't call it highly relevant for day-to-day work, but there may be more interesting use cases I'm not familiar with (I'd guess there are interesting implications in concurrent settings).
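A quick sketch of the infinite-stream idea:

    powersOfTwo :: [Integer]
    powersOfTwo = iterate (* 2) 1   -- conceptually infinite: [1, 2, 4, 8, ...]

    firstTen :: [Integer]
    firstTen = take 10 powersOfTwo  -- laziness ensures only 10 elements are ever built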

[0]: https://hackage.haskell.org/package/base-4.20.0.1/docs/Prelu...


Mainstream languages are not expression orientated like true FP languages are. Most people working in mainstream languages aren’t aware of the significance of this and wonder why FP seems awkward in their language, despite it having closures, some immutable types, etc.

One thing that struck me while learning nushell is that instead of:

    $ echo "hello world"
    hello world
I can do:

    $ "hello world"
    hello world
Is this an indication that it is expression oriented? (just checking that I've understood the phrase).

This is the statement/expression distinction; `echo "hello world"` is a statement that, when evaluated, has the side effect of printing "hello world" to stdout (or wherever), and has no value (for the purposes of this example).

`"hello world"` is an expression with the literal string value "hello world"; your REPL probably prints out that expression's value once it is evaluated for convenience.

FWIW I'm not actually familiar with nushell and am speaking fairly generally about statements vs. expressions.


Yes.

Essentially, statements have no value (or return no value). Expressions have a value.
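
A small sketch of the distinction in an expression-oriented language like Haskell, where even `if` is an expression that yields a value:

    sign :: Int -> String
    sign n = if n >= 0 then "non-negative" else "negative"
    -- the whole if/then/else evaluates to a value; there is no statement-style
    -- "assign a variable in each branch as a side effect"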


I will add that the other big difference is how much the compiler enforces the style.

I don't think we should think of things as having a strict separation, but certainly some languages push you harder than others toward certain programming paradigms, and some make other paradigms difficult or awkward to use.

For example, while I can do FP in Rust, I would not really call Rust a FP language. It does have some features that make doing FP possible, but a lot of the code I see (and write) is a mix of imperative and FP.

But if I'm writing Scala, I'm going to be mostly writing FP code, and the language helps me there. Recent versions of Java make it easier to write FP code if you so choose, though you'll still see a lot of people write more imperative code in Java.

(I think if Rust had Scala's for-comprehensions, I'd write more FP-ish Rust. I know there are crates that emulate them, but they're not the most ergonomic, and I don't want to write what ends up being unidiomatic Rust code.)


For-comprehensions are actually not functional programming; their raison d'etre is to embed traditional imperative programs in expression-based, referentially transparent FP programs.

I think if Rust had Scala's for-comprehensions, it would be a strictly worse language than it is now. FP languages' do- and for-comprehensions are highly unergonomic, not least due to all the ways they can be overloaded. Haskell has tons of extensions; Scala's is basically desugared into flatMap calls. This gets abused a lot in the wild, and the way it depends on implicits causes practical problems when you write that code.

Scala's new wave of direct syntax, and the general direction from monads to algebraic effects, is saner in how it presents compiler messages, and opens the way to better performance.


Monads are pervasive: async-await.

We just don’t call them monads.


How is async-await a monad? Isn't it just syntactic sugar over callbacks?

Consider async-await a syntactic sugar over Promises (from JavaScript). Then, Promises constitute an instance of the Monad typeclass where monadic `bind` or (>>=) is `Promise.then()`, and `return` is `Promise::resolve()`.

Here is a translation of a modification of the example given in [1]:

    const promise1 = Promise.resolve(123);

    promise1.then(v => v * 2).then((value) => {
      console.log(value);
      // Expected output: 246
    });
into Haskell:

    ghci> let promise1 :: IO Int = return 123
    ghci> promise1 >>= (return . (* 2)) >>= print
    246
One key discrepancy worth pointing out is that in the `Promise.then()` API of JavaScript, the function provided to `then` (e.g. `v => v * 2` above) is implicitly composed with a call to `::resolve` in order to turn that function's pure return value, the `Number` 246, into a Promise resolving into the `Number` 246; in Haskell, this operation must be made explicit (hence the composed function `return . (* 2)` passed to the first application of (>>=) or `bind`).

You could say that the instance method `Promise.then()` expects an argument of type (a -> b), with the return value of type `b` being implicitly and automatically wrapped into a Monad `m b`, whereas Haskell's bind expects an argument of type (a -> m b) on that operator's right-hand side, with the return value explicitly wrapped into a Monad `m b` by the provided function argument itself.
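
In Haskell terms, the two behaviours that `.then()` conflates are split between `fmap` and (>>=); a sketch:

    doubled :: IO Int
    doubled = fmap (* 2) (pure 123)             -- callback returns a plain value, like v => v * 2

    doubled' :: IO Int
    doubled' = pure 123 >>= \v -> pure (v * 2)  -- callback returns a wrapped value, like v => Promise.resolve(v * 2)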

[0] https://wiki.haskell.org/Monad

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Maybe I'm confused, but I don't see how Promise.then() corresponds to bind? If I understand correctly, the point of the bind function is that you pass a callback which itself returns the monad type. But the Promise.then() callback should not return a monad, just the regular result value of invoking the callback.

So in essence Promise.then() is like Array.map() while bind is like Array.flatMap()

Edit: It seems you are correct; you can indeed return a promise in the callback and it composes correctly. But in your example the callback does not return a promise and is therefore not an example of monadic bind.


I had a caveat in my original post re. the implicit wrapping of the `Promise.then()` callback's (pure) return value in a new Promise, and how this differs from Haskell's monadic bind; I had hoped to make a loose analogy while pointing out the differences for the sake of illustration. However it is indeed also possible to return a Promise from `.then()`'s callback, which is closer to Haskell's bind: [0]

    The behavior of the returned promise (call it p) depends on the handler's
    execution result, following a specific set of rules. If the handler function:

    * returns a value: p gets fulfilled with the returned value as its value.

    [...]

    * returns an already fulfilled promise: p gets fulfilled with that promise's value as its value.
... then you can obtain a solution closer to the Haskell translation by using the behaviour of the second cited bullet point from the MDN article:

    const promise1 = Promise.resolve(123);

    promise1.then(v => Promise.resolve(v * 2)).then((value) => {
      console.log(value);
      // Expected output: 246
    });
[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

This article argues that while promises support the required operations for monad, they do not support the monad laws:

https://www.siawyoung.com/promises-are-almost-monads/


It's a good point. `Promise::resolve()` "flattens nested layers of promise-like objects (e.g. a promise that fulfills to a promise that fulfills to something) into a single layer." [0]

The example was meant to be more of an illustrative analogy than an exact correspondence.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


At that moment, the student became enlightened.

Callbacks are not the same as monads if that is what you are implying.

That's right, but only because it's a 1-to-many comparison error.

Callbacks are one particular monad, not monads in general: https://jsdw.me/posts/haskell-cont-monad/


But then CPS machinery is often used to implement Monads.

> Programming with monads is strongly reminiscent of continuation-passing style (CPS), and this paper explores the relationship between the two. In a sense they are equivalent: CPS arises as a special case of a monad, and any monad may be embedded in CPS by changing the answer type.

https://cs.uwaterloo.ca/~david/cs442/monads.pdf


Promises in JavaScript are not monads as far as I can tell. While they have a somewhat similar interface, they do not appear to obey the monad laws.

There is one particular edge case in which they do not satisfy the laws. That happens to make them much more practical in day to day coding than a strict interpretation would be.

So having Promises be monads would make them less useful? This is an interesting point.

It seems to confirm the argument that monads are not that useful or pervasive outside of Haskell.


We might consider promises "applied monads" or "engineered monads". Monadic in inspiration, and they solve the same core problem, but they aren't straitjacketed into the ivory-tower "laws" in absolutely every edge case (they do satisfy them in the vast majority of cases). Which is good, because it means we never need to write things like "await await await await await foo()".

> Which is good, because it means we never need to write things like "await await await await await foo()"

But what you're describing is the actual value proposition of monads! It's literally the construct you use to avoid excessive unwraps.

Look at the type signature of bind:

  m a -> (a -> m b) -> m b
It returns m b. Not m (m b). Not m (m (m b)).
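
A concrete sketch with Maybe (my own toy example) showing why this matters: `fmap` nests, bind flattens.

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    nested :: Maybe (Maybe Int)
    nested = fmap halve (Just 12)       -- Just (Just 6): one extra layer per step

    flat :: Maybe Int
    flat = Just 12 >>= halve >>= halve  -- Just 3: always stays one layer deep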

> but they aren’t straightjacketed into the ivory tower “laws” in absolutely every edge case

In other words, you can't abstract over them. If you have libraries that manipulate monads, they will be incompatible or buggy, because they will assume (by design) behaviour that a non-monad will not live up to.


I should not need to explain to you that the problem is when b itself is internally an (m b’). Or when it is a sum of many types, of varying m-nests.

I can assure you that JS libraries work with Promises just fine.


This is all beside the root topic though, which is that CPS and monads are isomorphic. Which is true. Promises are a different thing.

I was not actually thinking of Javascript (or any particular language) when I mentioned async-await. I was just referring to the concept.

Async-await in Rust isn't a Monad.

> people saying they don't grasp FP. But don't we use higher-order functions, closures, combinators all the time in most mainstream languages?

We don't really use them in mainstream languages. They exist as idioms for shallow application, but one doesn't go full-on combinators in JavaScript, for performance reasons at least. What we do in mainstream languages is about as FP as walking stairs compared to rope climbing.

That said I think deep FP makes no sense and only entertains bored intellectuals by being a subspecies of code golf.


"People who closures all over the place in JS" have IME much to gain in learning about and using FP.

- understanding paradigms of state handling enables you to make conscious choices about what to aim for, how to design your data model, etc

- learning the benefits of having all data as values, except at the edges of the system (instead of object refs encompassing mutable state and/or IO handles)

- e.g. the resulting ability to reason about your system in a principled way (this can still be hard to mentally reach, even given all the ingredients, if you're very used to the "poking the mystery ball" way of relating to your app)

- how it comes together so you can just test a subset of your code at your REPL, being able to paste the input datastructure there (because it's just values, not active objects) without having your app framework up and running in a stateful context in browser or server environment.

Another strength of FP is parallel programming. The bottleneck in parallelizing code across multiple threads is concurrency bugs, which come from mutable data. Immutable data and referential transparency are close to a silver bullet for this. (But JS programmers aren't as likely to appreciate this, since JS is nearly always single-threaded.)


There is an interesting trend where things with a strong mathematical definition tend to have the advantage. "Functional Programming" doesn't have a precise definition at all as far as I know, so it is likely it doesn't refer to a real thing. Most of the paradigms are similar as they don't seem to actually mean anything. People seem to want to describe something (in today's article, data flow) but they don't quite have the language to do it.

If you go to https://en.wikipedia.org/wiki/Functional_programming right now you will see "functional programming is a programming paradigm where programs are constructed by applying and composing functions" which is a bit of a ... it is quite hard to program without doing that. Even SQL is basically applying and composing functions. There are languages that are branded together because they have similar communities or techniques, but the brand isn't describing something beyond people thinking that things belong in a basket together.


> "Functional Programming" doesn't have a precise definition at all

It may be that different people mean different things when they talk about FP. I come from the Haskell corner and from my point of view that's a completely insane claim, at least when it comes to pure FP.

[ ] Are you writing a procedure that just processes its arguments into a return value?

[ ] Does the procedure also operate on a non-static context?

[ ] Does the procedure generate additional side effects?

If you answer yes|no|no, you are programming purely functionally.

The formal litmus test is the question of whether your procedure actually fulfills the definition of a mathematical function with regard to the relationship between the arguments and the return value. If it also does not generate any side effects, it is a purely functional procedure!

Then you can apply the principles of mathematical composition of functions to the construction of programs. Purely functional languages force you to program this way by default and to use special constructs for everything that has to do with side effects: IO monad, algebraic effects, The Elm Architecture.

How are you supposed to program like this? This is where (in addition to the concepts mentioned above) the higher-order functions that everyone knows come into play: map, fold, filter, flatMap, ... If you only know these and their use in impure languages, as if this were nothing formally thought through, but just a programming style for hipsters, then I can understand how the impression arises that there are no precise definitions here.
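
For example, a typical pure pipeline (a toy sketch):

    sumOfSquaredEvens :: [Int] -> Int
    sumOfSquaredEvens = foldr (+) 0 . map (^ 2) . filter even
    -- sumOfSquaredEvens [1..10] == 220: three pure functions composed
    -- like mathematical functions, each passing the litmus test above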

In practice, the concepts are often very distorted or not understood at all. For many programmers, OOP mainly seems to mean that you get the drop-down list with completions in Rider when you press “.”.


I also come from the Haskell corner and I agree with roenxi. I don't think that FP is a well-defined concept, nor do I understand your characterization of pure FP. Let's take a simple program

    import Data.IORef

    main = do
      v <- newIORef 0

      let procedure = do
            n <- readIORef v
            print n
            modifyIORef v (+ 1)

      ...

and have a look at the conditions you laid out

* Are you writing a procedure that just processes its arguments into a return value?

  No, it has no arguments and its return value is just (), but it does more besides.
* Does the procedure also operate on a non-static context?

  Yes, it operates on v
* Does the procedure generate additional side effects?

  Yes, it prints to the terminal.
So I have answered no | yes | yes, the exact opposite of what I should have to be classed as doing pure functional programming.

You might say "Ah, but the return value of `procedure` is not (), it's IO ()", but I don't see that that changes anything. I can write my entire program in IO if I want, in a way that's hard to distinguish from having written it in, say, Python. Is that not pure functional programming, despite the fact that it's being carried out in Haskell? Then you might say "Ah, but the difference is that you could have written your program purely, without IO". But again I fail to see how that differs from Python. I can write programs that don't do any IO in Python too.

So what is it that makes Haskell a pure functional language and Python not?

My response to all this is that the notion of "pure" is very unhelpful and the correct way to describe this property that Haskell has is "referential transparency", that is

    let x = rhs in ... x ... x ...
has the same result as

    ... rhs ... rhs ...
regardless of how many times (or zero) x occurs.

main is never pure. Your `procedure` is also not pure (and also not referentially transparent). You can write impure code in Haskell, which your example demonstrates very well. But then it is necessarily IO code. Non-IO code, on the other hand, is always pure (and referentially transparent).

> but I don't see that that changes anything

It changes everything, but I can't explain it any better than I did above.

> So what is it that makes Haskell a pure functional language and Python not?

If you give me a Haskell procedure `f1 :: Int -> Int` without showing me the content, I still know that it is referentially transparent and pure. If f1 evaluates x to y once, I know that it ALWAYS evaluates x to y. If a main program in which f1 is used generates a screen output, for example, I can safely exclude f1 as the source of this side effect. If you give me a function `f2 :: Int -> IO Int` instead, I can't draw all these conclusions. In Python, these conclusions would be invalid from the start.

Now you can say, “So what? Who cares?” But it helps me enormously when designing and structuring programs, understanding programs, localizing program behavior, etc. If you don't see any added value in this distinction (ensured by the compiler), we don't have to argue about it. But I simply cannot agree with you that this does not represent a conceptual and formally justified difference between Haskell and Python.

Of course you can program purely functionally in Python. Just as you can program in a dynamically typed language as if you were dealing with static data types. Or in Java without throwing exceptions and null references around everywhere. Or in C without segmentation faults.

These are not normative or moral comparisons, but other examples of the fact that you can of course program in such a way that the code has certain properties, even if the compiler does not secure these properties. The question is whether you want that.


> main is never pure. Your `procedure` is also not pure

Ah, it's possible I misunderstood you. I missed that you said "Purely functional languages force you to program this way by default". I mistakenly assumed you were saying that in a purely functional language you can only program purely.

So you are saying that Haskell is a pure functional language, but you can also write non-pure things in it? So being a "pure language" is more a case of what sort of style is encouraged rather than what sort of style is possible?

> (and also not referentially transparent)

Interesting. Could you explain what you mean by that? (The definition of referential transparency that I know doesn't apply to functions but to languages.)

> You can write impure code in Haskell, which your example demonstrates very well. But then it is necessarily IO code. Non-IO code, on the other hand, is always pure (and referentially transparent).

That's interesting. How do you distinguish these two pieces of code:

    f = do
      v <- newIORef 0
      modifyIORef v (+1)
      readIORef v

    g = do
      v <- newSTRef 0
      modifySTRef v (+1)
      readSTRef v
is the former impure and the latter pure? Or are both impure?

> So what is it that makes Haskell a pure functional language and Python not?

> If you give me a Haskell procedure `f1 :: Int -> Int` without showing me the content, I still know that it is referentially transparent and pure ... In Python, these conclusions would be invalid from the start.

Do you mean because of the type? If not, I don't understand what distinction you're drawing, and what makes the conclusions invalid for Python. If so, then does a pure language have to be typed? Would it be impossible for a pure language to be untyped?

> Now you can say, “So what? Who cares?” But it helps me enormously when designing and structuring programs, understanding programs, localizing program behavior, etc. If you don't see any added value in this distinction (ensured by the compiler), we don't have to argue about it.

We don't have to argue about it. I program in Haskell every day and reap the benefits :)

> But I simply cannot agree with you that this does not represent a conceptual and formally justified difference between Haskell and Python.

I agree there's a conceptual and formally justified difference between Haskell and Python. I just don't agree with you about how to characterize it :)

To reiterate my characterization, it is that Haskell has "referential transparency", that is

    let x = rhs in ... x ... x ...
has the same result as

    ... rhs ... rhs ...
regardless of how many times (or zero) x occurs. That's it! Nothing to do with "purity", "IO", "processing arguments", "non-static contexts" or "side effects". Now I may be wrong. That's why I'm trying to understand what you think the characterization is.

But so far I have never discovered a benefit of programming "purely" in Haskell that is not ultimately a consequence of what I call "referential transparency" above. Do you know one?


This will be a bit more detailed. I apologize in advance.

> So being a "pure language" is more a case of what sort of style is encouraged rather than what sort of style is possible?

The crucial point is that the (potentially) impure code is clearly and explicitly differentiated from pure code (IO code vs. non-IO code). Haskell uses the type system to draw this distinction.

>> (and also not referentially transparent)

> Could you explain what you mean by that? (The definition of referential transparency that I know doesn't apply to functions but to languages.)

Functions are expressions in Haskell (and in Python). Applications of functions to their arguments are also expressions. I use "procedure" as a synonym for "function".

As I understand it, referential transparency is a property of expressions. But if we understand a language as the total set of expressions that result from its alphabet and its production rules, then we can say that a language is referentially transparent if and only if all its expressions are referentially transparent. Consequently, the views are compatible.

An expression is referentially transparent if it can be replaced by its value (what the expression evaluates to) without changing the behavior of the program.

This implies that if side effects are emitted during the evaluation of an expression, then the expression cannot be referentially transparent, because these side effects would disappear if the expression were simply replaced by its value in the program. If a dynamic (i.e. changeable) context is also processed during the evaluation of an expression, then the expression cannot be guaranteed to be referentially transparent over time either.

A function (i.e. procedure) is pure if it (1.) always returns the same return value for the same arguments and (2.) does not produce any side effects. Note that the definition is also compatible with functions that do not process arguments. The requirement is the same: the same return value must always be returned.

This implies that a function is pure if the relation between its arguments (even if there are 0 arguments) and its return value is like that of a mathematical function. I already wrote that above. It simply boils down to the fact that you can execute the function as many times as you want: it will always return the same value (for the same arguments). This is only guaranteed if the function does not operate on a dynamic context in addition to its arguments (including system time, values from a random generator, etc.).

Pure functions are just stupid simple things that deterministically transform their arguments into some value. I want these things to make up as much of my code as possible, because they combine like Lego and are pretty foolproof to handle.

The value proposition of purely functional programming languages is that they semantically distinguish these pure functions from impure functions. In Haskell, I only have to look at the type of a function. If it is something like `t1 -> t2 -> ... -> IO tn`, then the function is potentially impure because it evaluates to a value that is wrapped in the IO monad, so to speak. For any other function in Haskell, I know for sure that it is pure.

So I can write as much pure code as possible and then add a bit of IO code to interface the pure code with the execution context, or with other subsystems of the program where it is unavoidable to work with shared state. I can also do all that in Python. But in Haskell, the language semantics ensure that my code intended to be pure is really pure, and recognizably different from impure code.

To answer your question: purity implies referential transparency. I can understand that from the definitions. Conversely, the implication does not apply. I don't understand exactly why, but so far it hasn't been a priority for me to understand this.

> is the former impure and the latter pure?

Correct. You can recognize this by the data types:

    f :: IO Integer
    g :: GHC.ST.ST s Integer

You use do notation in both cases, but that has nothing to do with it. You can write not only IO code with the do notation, but all kinds of other code.

    foo :: String
    foo = do
      s <- "hello"
      pure s
The do notation is only syntax to avoid the bind operator. Otherwise your functions would look like this:

    f :: IO Integer
    f =
      newIORef 0 >>= \v ->
      modifyIORef v (+1) >>
      readIORef v

    g :: GHC.ST.ST s Integer
    g =
      newSTRef 0 >>= \v ->
      modifySTRef v (+1) >>
      readSTRef v
> ...

> Do you mean because of the type?

Exactly. I can see from the data type that f1 is pure and f2 is impure.

> what makes the conclusions invalid for Python.

Consider this code:

    def add(a: int, b: int) -> int:
        print("hello")
        return a + b
The function is impure because of the print. Without the print, it would be pure. But the type annotation would be exactly the same. The example is a bit silly, because type annotations are not taken into account in Python anyway (unless you use typeguard).

In Haskell you cannot simply print something in a procedure of type `Int -> Int -> Int`. The type checker would reject it. If you want to do that, you have to type it as `Int -> Int -> IO Int`. You can then no longer simply use it as if it were typed as `Int -> Int -> Int`. It is then simply a completely different procedure: an impure one.
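
A sketch of the Haskell counterpart to the Python example above (the names are mine):

    add :: Int -> Int -> IO Int
    add a b = do
      print "hello"   -- the side effect forces IO into the type
      pure (a + b)

    -- By contrast, the following would be rejected by the type checker,
    -- because `print "hello"` has type IO () and cannot hide inside
    -- a procedure typed Int -> Int -> Int:
    --
    -- add' :: Int -> Int -> Int
    -- add' a b = do { print "hello"; pure (a + b) }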

> does a pure language have to be [statically] typed?

Very good question! I don't know. I don't think it's formally a requirement, but I only know languages that handle it via the type system (IO, algebraic effect handlers, TEA).

> But so far I have never discovered a benefit of programming "purely" in Haskell that is not ultimately a consequence of what I call "referential transparency" above. Do you know one?

I think you're right about that. Purity and referential transparency somehow seem to be almost the same thing but not 100% congruent. I have a gap in my understanding at this point.

https://stackoverflow.com/questions/4865616/purity-vs-refere...

This seems to suggest that I won't be able to close this gap any time soon. In the end, it probably doesn't matter. I assume we both mean the same thing.


> This will be a bit more detailed. I apologize in advance.

No need to apologize. Thank you for your detailed response!


It's arguable that SQL is "basically applying and composing functions," given that its mathematical underpinnings lie in the relational algebra (hence relational databases). Further, while it's maybe technically correct to make statements such as, all programming languages are based on lambda calculus, a universal model of computation [1], this is about as precise a statement as saying that all mathematics is based on set theory. While it may be true, it may be too low a level of abstraction, and there are probably higher-order constructs or machinery that get closer to your actual domain of study or application (e.g. real numbers, functions, vectors).

I have mentioned downthread [0] a few characteristics that distinguish FP from other paradigms - namely, first-class functions and higher-order functions (the ability to pass functions to functions); referential transparency, "pure functions" without side effect, and equational reasoning (the ability to replace expressions with their value and keep the behaviour of the program identical); and a tendency to prefer parametric and ad-hoc polymorphism (generics and trait/typeclass bounds) to subtype polymorphism, the "inheritance" boogeyman of OOP.

Again as mentioned in [0] the "expression problem" [2] is a good model for discussing the ergonomics and tradeoffs of FP vs. OOP.

[0] https://news.ycombinator.com/item?id=41450851

[1] https://news.ycombinator.com/item?id=41452258

[2] https://wiki.c2.com/?ExpressionProblem


The distinguishing characteristics don't distinguish FP from anything. Those are features like first class functions that I would expect to find support for in Python or C++, for example. And it isn't possible to write a useful program that doesn't have side effects because there would be no IO.

It is a case of the paradigm not really meaning anything. It is an arbitrary bunching of language features and code properties that doesn't provide a particularly useful guide. Anything that is a good idea in one paradigm is also a good idea in any other paradigm too.


It's worth noting that both languages cited (Python, C++) are multi-paradigm and therefore contain aspects of both OOP and FP; there are very few languages that are exclusively object-oriented or functional. The naming of the functools [0] stdlib package for Python's higher-order functions is illustrative in this respect, and there is an entire HOWTO for "implementing programs in a functional style" [1] in the Python docs.

If you want to see object-orientation taken to the extreme, and how it makes writing programs in a functional style extremely unergonomic, you can read old-school Guava code from Java 6 and try to decipher all the line noise; there is even a caveat in its official documentation [2] that states,

    functional programming without Java 8+ requires awkward and verbose use of anonymous classes.

    Excessive use of Guava's functional programming idioms can lead to verbose, confusing, unreadable,
    and inefficient code. These are by far the most easily (and most commonly) abused parts of Guava,
    and when you go to preposterous lengths to make your code "a one-liner," the Guava team weeps.
> It is a case of the paradigm not really meaning anything. It is an arbitrary bunching

This is a case of throwing the baby out with the bathwater; just because we can't come up with a perfectly precise definition doesn't mean that we should dispense with rough heuristics and (perhaps crudely drawn) boundaries that serve as a useful clustering of related features. Try asking a machine learning or computer vision algorithm for the criteria it uses to categorize something as a "dog;" it's unlikely that you'll come up with something as precise as the definition of the Platonic ideal of a dog, but we can still use its model to recognize dogs with a reasonable degree of accuracy.

[0] https://docs.python.org/3/library/functools.html

[1] https://docs.python.org/3/howto/functional.html#small-functi...

[2] https://github.com/google/guava/wiki/FunctionalExplained


But if we take a holistic view here, languages that treat paradigms as a thing have been trounced in the marketplace of ideas. And to find an example of a non-functional programming language your first instinct was to talk about Java 7 - where there is a strong consensus that if you were going to create a new program today it should not be used. We're up to Java 22.

The attempts from decades past to draw a distinction between OOP and FP paradigms has largely failed. Consensus seems to be that whatever is concise and easy to understand is best and attempting to force code into a paradigm is inferior to looking at specific individual features and techniques in context of the problem being tackled.


> "Functional Programming" doesn't have a precise definition at all as far as I know, so it is likely it doesn't refer to a real thing. Most of the paradigms are similar as they don't seem to actually mean anything.

> The distinguishing characteristics don't distinguish FP from anything.

> The attempts from decades past to draw a distinction between OOP and FP paradigms has largely failed. Consensus seems to be that whatever is concise and easy to understand is best and attempting to force code into a paradigm is inferior

This last statement is a marked departure from your original, first assertion which is a descriptive claim that functional programming, and indeed all paradigms, don't actually exist or mean anything; in it you instead make the normative claim that "attempting to force code into a paradigm is inferior," on the practical bases of concision or comprehensibility.

Moreover, if paradigms don't actually exist as per your first assertion, I find it difficult to understand what exactly it is we are "forcing code into" in your second statement. Clearly there exists some cluster of related features you are referring to when you point out the pitfalls of trying to pigeonhole your problem into a certain style of solution. As mentioned, Python even has a name for this cluster, and it's called the `functools` module, which lives alongside other implementation styles (OOP, procedural, etc.) in its multi-paradigm space of solutions.

> And to find an example of a non-functional programming language your first instinct was to talk about Java 7 - where there is a strong consensus that if you were going to create a new program today it should not be used. We're up to Java 22.

The claim was basically that OOP and other paradigms don't exist; to refute the claim I supplied a counter-example of a language that leans so heavily towards the OOP side of the spectrum that writing FP code within it is prohibitive. Just because that language has been updated to incorporate techniques from other paradigms doesn't mean that the OOP paradigm ceased to exist once Java 8 came out. Just because following a "purist" FP or OOP approach is impractical does not mean that such categorizations don't exist, even if only as a limit or extreme to which varying languages tend to with varying degrees of resemblance. Java 22 can be considered mixed-paradigm, but it's probably still more OOP than Racket Lisp.

There is a difference between claiming that a distinction does not exist, and that drawing distinctions is an inferior or impractical approach.

In general I agree with the sentiment that one should be comfortable working with multiple paradigms, because as per the "expression problem" a strictly FP or OOP approach comes with tradeoffs, and which tradeoffs should be made depends on the particular problem being tackled.


Here's a good enough definition: "The same input yields the same output".

> "functional programming is a programming paradigm where programs are constructed by applying and composing functions"

This only seems like a useless truism if you take 'function' to mean 'method' or 'procedure'. If you nail down 'function' to sameinput->sameoutput then it starts to make more sense.


No. You missed the meaning. The paradigm is referring to the fact that the ENTIRE program must be constructed this way.

So let's say you have this in your code:

   x = 1
   b = x + 2
   x = b + x
You have composed three procedures here in order. This is illegal. You must ONLY construct programs via composing functions; there can be no other way.

Not so fast. This next is legal Haskell:

  let x = 1
      b = x + 2
      x = b + x
  in <do something with b and x>
The third line creates a second binding of x which shadows the first binding, and shadowing is considered by most people to be within the functional paradigm. What is not considered functional by most people is assigning a variable in a loop although even here you can do it in Haskell. Specifically, you can call readIORef and writeIORef every loop iteration, but last time I tried, that was about 1000 times less efficient than in an imperative language.
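
A sketch of such a loop, for the curious (my own toy example):

    import Data.IORef

    sumTo :: Int -> IO Int
    sumTo n = do
      acc <- newIORef 0
      let go i
            | i > n     = readIORef acc
            | otherwise = do
                modifyIORef' acc (+ i)  -- mutate the reference on every iteration
                go (i + 1)
      go 1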

The intent of my syntax is to show mutation and procedures, not variable shadowing and let-in.

I don't think I missed anything. You presented non-functional code as an example of code which isn't functional. There's no contradiction.

And good riddance, too! I'm so sick of seeing this anti-pattern at $DAYJOB: First, create a thing which isn't what you called it (e.g. 'result'), and then mutate it in place until it is what you called it.


It's funny you say that because I'd say FP has a much more well defined mathematical foundation compared to OOP. In fact, before it became as dominant as it is today, it was often criticized as being too scholarly and theoretical. It's built mostly on the foundations of Lambda Calculus (and maybe some Type Theory).

I don't think OOP ever had nearly as strong of a mathematical/scholarly foundation as FP has


> It's built mostly on the foundations of Lambda Calculus (and maybe some Type Theory).

Can you describe a programming language that isn't based on lambda calculus? It is a universal model of computation.


In that sense, every programming language (as well as lambda calculus) is based on Java, since Java is a universal model of computation. So I don't think it's very useful to take "is based on" to be as broad as "is interpretable using".

Conspicuously absent in that thought is an alternative way to interpret "is based on". Because I suspect that in most senses that FP is based on lambda calculus we probably can claim it is also based on a subset of Java. Everything useful is multi-paradigm these days, programming paradigms turned out to not be a useful lens for developing useful programming languages.

> Conspicuously absent in that thought is an alternative way to interpret "is based on".

"Has designers who consciously adopted significant ideas from" seems good enough for me.


That is a path-dependent property though. So if we interpret it that way we can't tell if a language is based on lambda calculus without interviewing the author.

Indeed, the definition admits that we could have two almost identical languages but only one of them is based on lambda calculus. I don't think it is a reasonable way to interpret "is based on", it requires too much knowledge of history. It is more proper to have a technical definition that is rooted in the language itself.


C'est la vie: it's always going to be a very fuzzy concept when applied, short of broadening it until it's unrecognizable. Not every property of a thing needs to be perfectly decidable with no room for guesswork.

To put it pithily, "This popular notion is provided 'as is', without warranty of any kind, express or implied, including but not limited to fitness for a particular purpose." If you really want a property that's fully intrinsic, I'd suggest disregarding fuzzy 'based on'-ness and instead considering some particular aspect you care about, e.g., "How difficult is it to express such-and-such a technique in this language?"


FP is math. There's not much distinction. The language of math is FP.

> I don't know if it's still relevant to make a strict separation between programming paradigms.

Consider whether the same input to your code will result in the same output (aka functions); that's a good place to draw the line.

You can stick fancy 'FP' types and closures into a method; that doesn't turn it into a function.


Most JS codebases I see (frontends, typically in React) have only a basic sense of functional programming/closures. It is a massive paradigm shift to move to Clojure from where modern JS is today. It was probably less so in the jQuery days, funnily enough.

I wouldn't think of paradigms as strictly separate, but there are definitely clusters of related techniques that go well together. Higher order functions work best with immutability and pure functions, and struggle without them. Currying is somewhat useful by itself, but much more valuable with function composition or HOFs.

It's also important to teach the techniques together because functional programming tools are typically (deliberately) less powerful, which makes them easier to analyze, but means you need more tools.
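
A small sketch of that synergy in Haskell:

    scale :: Double -> Double -> Double
    scale k x = k * x

    -- currying alone: a reusable partial application
    double :: Double -> Double
    double = scale 2

    -- combined with composition and a higher-order function, it pays off:
    doubledShifted :: [Double] -> [Double]
    doubledShifted = map (double . subtract 1)  -- no lambdas needed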


The article itself was well written, although I'd appreciate if the author was more "to the point" with the examples.

FP never resonated with me and never fit the mental model I have for programming. I tried learning Prolog and Haskell a few times, and I never felt like I could reason about the code. This line from the article:

"[..] With functional programming, whether in a typed language or not, it tends to be much more clear when you've made a trivial, dumb error [..]"

wasn't my experience at all. In my experience, what made it clear when I made a trivial/dumb error was either having good typing present, or clear error messages.

I do always try to use the aspects from it that I find useful and apply them when writing code. Using immutable data and avoiding side effects when possible being the big ones.

I'm glad FP works for the blog author - I've met a few people that say how FP makes it easier for them to reason about code, which is great.


I’ve wondered for some time how this breaks down along preferences for math vs (human) language, or for proofs/formula thinking vs algorithmic.

I find FP concepts easy enough to grasp (provided they’re not demonstrated in e.g. Haskell) and even adjacent stuff like typeclasses or monads or what have you aren't a stumbling block, and I'm plenty comfortable with stuff like recursion.

… but I'm firmly on the language-is-more-natural-for-me side of things, and find non-algorithmically-oriented math writing incredibly difficult to follow—I have to translate it to something more algorithmic and step-oriented to make any headway, in fact. I find languages like Haskell nearly illegible, and tend to hate reading code written by dedicated FP fans in just about any language.


the "FP is for math people" meme IMO is incorrect and comes from FP mainly being used to refer to Pure FP, aka Haskell.

While Monads and co. are interesting constructs, I think the main thing with FP (pure or not) is immutability by default.

That alone makes code so much easier to think about, in my experience.

One can do FP in languages not commonly associated with FP by just not (re)assigning variables. FP languages just make it increasingly hard to do so.


To me it wasn't really math, even though there was some of that; it was about the number of concepts and devices that have to work together.

The ability to reason about expressions means everything can be run and have a result without side effects (unless you allow, say, Lisp's effectful builtins). The fact that everything is built around functions means you can always unplug or compose them. All this with a very reduced set of ideas and syntax (at least for the core).

On the other hand, most imperative languages require you to pay attention to state, which rapidly becomes a mental dead end, with a lot more ceremony and syntax. At least before the 2010s... nowadays everybody has expression-oriented traits and lambdas.

After learning ML/Haskell and doing interesting things with a kind of clarity, I tried going back to C/Python, and suddenly a lot of errors, issues and roadblocks started to appear.

Then you have the parallel case.


When I was younger, I found writing essays, mathematical proofs, and imperative code very similar activities. Functional programming was difficult, because I could not find the right way to think about it.

Over time, I have slowly become better at functional programming. But I'm now worse with proofs and essays due to the lack of practice.


I think I find a lot of FP code harder to both write and read (even code I've written myself), but when the FP code is written and compiles, it is much more likely to be correct.

It's an annoying trade off, to be sure. With a language that makes writing FP code ergonomic and idiomatic, though, I'm usually going to choose to write that way.

But even in languages where writing in an FP style is a chore (if it's even possible at all), you can take some lessons. The C I write today is more "functional" than 25 years ago, when I'd never even heard of functional programming. I try to avoid global state and mutation and side effects, and write more pure functions when I can. I think about my programs as transformations of data, not as a series of instructions, when I can. My C code today is not very FP at all when you'd compare it to something written in Haskell or Scala or whatever, but it's, IMO, "better" code than what I used to write.

In 2022 I went back to a C-based open source project that I used to work on heavily in the mid-'00s. Reading a lot of that old code makes me cringe sometimes, because the details at the micro level make it really hard to reason about behavior at the macro level. As I've been working on it and fixing things and adding features again, I'm slowly morphing it into something easier to reason about, and it turns out that using functional concepts and idioms -- where possible in C without having to go into macro hell -- is essentially what I'm doing.


>I think I find a lot of FP code harder to both write and read (even code I've written myself), but when the FP code is written and compiles, it is much more likely to be correct.

You're probably thinking about Haskell. It's like this because the type checking combined with the FP makes Haskell especially robust.

That being said, FP is programming nirvana. The FP function is the most modular unit of computation in CS. By writing an FP program composed of functions, you have by definition broken down all sections of your program into the smallest form.


>it is much more likely to be correct

It is not. FP programs measurably have at least as many defects as non-FP programs.

There is literally no reason to subject yourself and, worse, your users, to the garbage of FP.


I even know where this comes from: some very popular circa-2000 book on programming coined this dogma without proof, based on the fact that writing a type-correct program in Haskell isn't trivial.

FP doesn't have to be a religion; in Common Lisp it's just an idea, an ideal to aim for perhaps.

Same can be said about Clojure. Although Clojure is usually described as an FP language, it's not "purely FP". In general, I find that Lispers typically don't concern themselves with the popularity of certain tools or techniques; they don't care for things like MSFT marketing shoveled in your mouth. Die-hard pragmatists, they'd use an instrument if it makes sense. They don't give two shits about OOP propaganda or extreme functional purity, or notions such as "not using static types is immoral" and shit like that. They'd use object orientation where it makes sense; metaprogramming, where desired; constraints, if applicable; concurrency, when required, etc. All that without any zealotry - just pragmatic use of the right tools for the job, embracing Clojure's flexibility and power while staying true to the only core "philosophy": "simple made easy".

As a dyed-in-the-wool Clojurist, I appreciate leveraging the functional aspects as much as the immutable-by-default data structures. It takes a while to stand back and realize, but once you have a large program that is mostly passing immutable hashmaps around to static functions, testing becomes so much easier, and reasoning about how all the code is glued together becomes far easier. You know with certainty that the code doesn't have all the ordering problems that come with objects: whether you called some random function on some object at some time that cached some instance variable that should have been shared with some other object. Reasoning about that is nuts, and it's the status quo for most OO code I've seen. If you know that most code is implemented with pure functions over immutable data, then the ordering questions are nearly completely gone. You can also refactor so much more easily, without risk of subtle ordering-related breakage. And then Clojure has atoms and channels, which are very nice thread-safe constructs that also make it far easier to know your code won't have memory-safety issues. I have dabbled at learning Rust or Haskell or Swift, but Clojure gets so much so right.

> You know with certainty that code doesn't have all the ...

You have this certainty to the extent that the compiler enforces it.

If you like language X because it's 'pragmatic' instead of 'religious', then you make the pragmatic choice of not knowing with certainty.


I know, right? It may feel weird at first - with immutability, parentheses and prefix notation, but once you grok REPL-driven interactivity and structural editing - so much about programming becomes intuitively simpler.

I have the same experience. I learned Haskell because of the aura of prestige around it, and must admit I still love the syntax, but moved over to Clojure as my daily-driver as the practicality of "just use maps" is incredible.

IMO the article misses the point there. Immutability is the "thing" in FP.


Yes! I spent months (maybe even years) trying to understand Haskell, and for the love of god, I just couldn't wrap my head around so many things. I just wanted to build stuff with it, but it felt like becoming a doctor - getting a bachelor's, passing MCAT, then four years in medical school, then residency program, then getting a license and a board certification. All that for the sake of knowing how to properly apply a band-aid (or something). Except, with Haskell, there's no rigid structure, there's no single, proven, 100% guaranteed path to glory - it's all blur. There's no "10 steps to build a website with Haskell". And then I picked up Clojure, and OMG! everything suddenly started making so... much... sense... All the things in Haskell, for which I had either no clue or had some shallow understanding - that all became so much more intuitive.

It definitely took me at least 3 years of reading and practicing to get to the point where I (at least think I) understand it, and I still think I would be considered pretty incompetent by people that do Haskell daily.

There is no doubt in my mind that the purity concept in Haskell is taken to an impractical degree. I happen to like it, but it does not make developing software with it any less of a troubled experience. The defining concept in comparing Haskell and Clojure for me is the Pareto principle :D Clojure gets 80+% of the way there, without 80% of the self-flagellation.

There is also a consideration of static typing vs dynamic but that's a whole other can of worms.

Simply put, in Haskell all mutability is delegated to the runtime, while in Clojure one has the freedom and responsibility to manage it directly, and boy does it gain in simplicity for it!

I know I am preaching to the choir but it's pleasant to find a similar experience and share thoughts!


It's possible that what makes functional languages easier to reason about and debug is the fact that they allow the types to capture a lot more information than the imperative languages.

That would also explain why Prolog has none of those benefits. If you don't use the extra information, it can't do any good.

But if it is really just that, it can be replicated over imperative languages. Anyway, Rust is evidence that there is something to that idea.


Not disagreeing with you, though I'd say Rust syntax also feels very hard to understand most of the time. Function signatures in particular, I distinctly remember, can get very complicated.

Which is unfortunate, as I like the principles behind it. I wonder if someone will ever write a Rust-like language that has a syntax closer to Java or Haxe.


Oh, Rust type signatures are harder to use than even Haskell.

But that's because there is a lot of low-level detail that must go into them. Most of the complexity that C developers tend to ignore (and thereby create wrong programs) goes there, explicitly. If you don't want the detail, you can make them simpler.

That said, I have a long rant about how Haskell-like types ignore the entire "algebra" thing from algebraic types, and could be way more expressive and simpler to use.


I'm personally a fan of FP. It offers clear benefits: simplified parallelization, improved testability, and reduced side effects. These advantages often lead to more maintainable and robust code.

However, FP's benefits can be overstated, especially for complex real-world systems. These systems frequently have non-unidirectional dependencies that create challenges in FP. For example, when component A depends on B, but B also depends on a previous state of A, their interrelationship must be hoisted to the edges of the program. This approach reduces races and nondeterministic behavior, but it can make local code harder to understand. As a result, FP's emphasis on granular transformations can increase cognitive load, particularly when dealing with these intricate dependencies.
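
To make that concrete, here's a toy Python sketch (all names invented) of what I mean by hoisting the interrelationship to the edge of the program:

    def compute_b(a_prev):   # B depends on A's *previous* state
        return a_prev + 1

    def compute_a(b):        # A depends on B
        return b * 2

    def step(a_prev):        # the A/B interrelationship lives here, at the edge
        return compute_a(compute_b(a_prev))

    a = 0
    for _ in range(3):       # the only state-threading in the whole program
        a = step(a)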

Effective codebases often blend functional and imperative styles. Imperative code can model circular dependencies and stateful interactions more intuitively. Thus, selectively applying FP techniques within an imperative framework may be more practical than wholesale FP adoption, especially for systems with complex interdependencies.


> As a result, FP's emphasis on granular transformations can increase cognitive load, particularly when dealing with these intricate dependencies.

Does it increase cognitive load, or does it just make the cognitive load more apparent? Sure, it's easier to write multithreaded code if you assume race conditions can't happen, but that assumption doesn't match what actually happens.

Perhaps FP just makes explicit, at typing/coding time, what would otherwise be uncovered hours/days/weeks later as a bug?


Also worth mentioning in terms of mixing functional/imperative techniques: it can be very helpful to use languages (and frameworks, libraries, interfaces) which are functional-first/-by default, not necessarily pure but which provide specific affordances for managing side effects. This can be seen in languages like Clojure (with reference types distinct from most of the rest of the language/stdlib). It’s also a hallmark of many projects with a reactive paradigm (which in some form or another have dedicated APIs for isolating state and/or effects).

These aren’t strictly necessary for effectively using functional techniques in an imperative environment, but they can go a long way toward providing useful guardrails you don’t have to figure out yourself.


I don't know exactly what you are trying to model with this hypothetical circular dependency.

However, circular dependencies can be represented with lazy (aka non-strict) references and deferred function calls (aka thunks / call-by-name), and are IMO easier to reason about than mutable imperative techniques for representing such relationships. They also have the advantage of being totally thread-safe.
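
Even vanilla Python shows the idea, since a function body defers name lookup until call time; two definitions can refer to each other with no mutation involved (a toy sketch):

    # Mutually recursive definitions: each refers to the other lazily,
    # because the lookup is deferred until the call actually happens.
    is_even = lambda n: True if n == 0 else is_odd(n - 1)
    is_odd = lambda n: False if n == 0 else is_even(n - 1)

    print(is_even(10))  # True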

The OP (Lihaoyi) is the author of an important set of Scala libraries, and Scala is an example of what you are describing. Scala is a hybrid OO / FP language that is not dogmatic about purity. You can be totally imperative if you want.

It is common in the Scala ecosystem to implement performance-critical library code using local mutability and null references. Internally the function is imperative; to the caller, it is functionally pure.
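
A tiny Python illustration of that pattern (not from any particular library):

    def running_max(xs):
        # Imperative inside: a local accumulator mutated in a loop...
        best = float("-inf")
        out = []
        for x in xs:
            best = max(best, x)
            out.append(best)
        # ...but none of the mutation escapes: same input, same output.
        return out

    print(running_max([3, 1, 4, 1, 5]))  # [3, 3, 4, 4, 5]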


It offers none of those things, and provably doesn't produce more robust code.

Maintainable code? I dunno. From what I’ve seen, FP is much worse for changing when you have shit deep in your call stack. I wouldn’t call that maintainable.

More readable? Nah. Nearly everyone has a much easier time consuming code when they're not constantly context-switching between function jumps all over the place.

Additionally, FP only makes threading “easier” in one very specific circumstance. Once you need to parallelize the same workload, FP is generally much, much harder to thread, whereas this is trivial in other languages.


Interesting how the post doesn't mention the word "side effect" once.

To me, all of this could be summarized by "no side effect".


He didn't need to mention it, because it's implicit in data flow. Instead of the side-effecting / state-changing

    whip(cream)
that needs to be under flow control, you have

    whipped_cream = whip(cream)
describing the flow of data. Data flow describes the relationship between non-changing entities and thus there are no side effects.

While they could have mentioned it, it wouldn't really change the message.


Similar reaction. The first thing I did was search for "lambda calculus", and I was pretty weirded out by the "0 results".

To me (someone who really doesn't like FP as a religion), FP is about function application: everything should just be "pure" function calls taking values (not references) and returning values; immutability is indeed a corollary of that, static typing really isn't (isn't Church's original LC untyped, btw?).


Maybe the author was trying to avoid potential fallacies that follow that phrase around though:

* If you don't have side effects, you can't do anything.

* Haskell can do things? Then it has side-effects, so it's not functional.

* Computer getting hot is a side-effect.

* i++ is not a side-effect, because I intended to do it.


No, as in zero, (side) effects would mean no program at all for most domains.

For me, the essence of FP is the minimization of global state.


I think the point of the article is to illustrate why "no side effect" is important.

While I agree, you can get everything in the article's FP example in, say, C++ through liberal const usage. So is "const-y" C++ functional programming?

It's not enough to take out the 'bad bits' - you have to put in 'good bits' too.

For example, Java collections are mutable, but enough ink has been spilled about the dangers of shared mutable state, that there are various recommended defences, e.g. make defensive copies when you return collections to a caller. Or use one of the immutable collections that will throw a runtime exception if someone tries to mutate it.

Now consider if you wanted to evaluate 3 + 5. Should you throw an exception which tells the caller off for trying to change the value of 3? No, the caller just wanted 8!

This is what's missing from the 'just make things immutable' approach to FP mimicry. I want to be able to combine collections. The Java standard library Set<> still doesn't have union and intersection, for Christ's sake.
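
Compare Python's frozenset, which gets this right: immutable, but with the combining operations built in:

    a = frozenset({1, 2, 3})
    b = frozenset({3, 4})

    print(a | b)   # union, returned as a new value: frozenset({1, 2, 3, 4})
    print(a & b)   # intersection: frozenset({3})
    # No exception thrown, no mutation -- you just get the combined value back.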


I think liberal const-usage breaks move optimizations so it's not used in practice.

Perhaps not these days, but back in pre-C++11 days it was used very liberally. It's something I miss in other languages, precisely because it allowed for much easier reasoning.

This is wrong! OCaml is a functional programming language with side effects.

To be fair, OCaml is multi-paradigm.

I mean, then we get into defining multi-paradigm :P Does it have objects that could, theoretically, be used in a Java-like object-oriented system? Yes! Does anybody do that? Not really! It's far more functional than any other paradigm, and that's what counts.

Related:

What's Functional Programming All About? - https://news.ycombinator.com/item?id=15138429 - Aug 2017 (139 comments)

What's Functional Programming All About? - https://news.ycombinator.com/item?id=13487366 - Jan 2017 (2 comments)


My problem usually appears when, rather than clearly laying out the types of each returned value like in the article, all the FP guys I've worked with want to build giant chains that look like: return beat(whip(mix....(eggs)))

Likewise with the OO guys.

  Eggs.mix()
      .whip()
      .beat();
There's even a term for it - 'fluent API' - described as:

> Fluent API is a way of implementing object oriented API in a way that it provides readable code. [https://www.progress.com/documentation/sitefinity-cms/for-de...]


Mine too. I worked at a shop that adopted this practice some time ago. A very common pattern was to declare a relatively large Python dictionary with function calls and list comprehensions nested deep in the dict [sub]properties. Nice to glance at, terrible to debug and reason about.

Very nice article, I liked it a lot! It personally resonated with me and my own conclusion that the core "benefit" of FP is (for lack of a better term; stupid Bitcoin) "proof of work".

Writing functions in FP is essentially all about returning results from a function, which is proof that a computation has occurred. If you don't have that return value, that proof, then obviously the rest of your code can't and shouldn't continue, and FP makes this obvious compared to more traditional imperative approaches.

And this idea extends into so many other things that people consider core to, or at least originating from, FP. Result/Option types come to mind, making the possible failure of the "proof of work" explicit in the type signature, so people are forced to consciously handle it. It also leads into the whole idea of type-driven design. One of my favorite articles, "Parse, don't validate" [1], describes this as well: making types that clearly restrict and set the expectations needed for the "proof of work" to always be returned from a function.
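
A quick Python sketch of the parse-don't-validate idea (types and names invented for illustration):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NonEmptyList:
        items: tuple

        @staticmethod
        def parse(xs):
            # Parsing can fail here, once, at the boundary...
            if not xs:
                raise ValueError("expected a non-empty list")
            return NonEmptyList(tuple(xs))

    def head(ne: NonEmptyList):
        # ...so downstream code carries the "proof of work" in the type
        # and has no failure case left to handle.
        return ne.items[0]

    print(head(NonEmptyList.parse([1, 2, 3])))  # 1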

[1] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...


This is borderline nonsense.

You’re too kind. There’s nothing borderline about it.

> Anything vertically separated can be done in parallel.

This assumes no contention on a limited number of bowls or having only one kitchen tool for beating or whisking etc. :P

I point that out not to demand that the metaphor be bulletproof, but because I think it may help explore something about state-handling and the FP/imperative split.

How might this tiramisu-tree model change if we were limited to N bowls and 1 beater and 1 whisk, unable to treat them as statelessly-shareable or endlessly-duplicable?
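
For fun, a toy Python sketch of how you might bolt that constraint on: keep the data flow pure, but gate the contended tools behind semaphores (the counts are made up):

    from threading import Semaphore

    bowls = Semaphore(3)      # hypothetical: only 3 bowls in the kitchen
    beater = Semaphore(1)     # and a single beater

    def beat_with_real_tools(x):
        with bowls, beater:   # steps still run in parallel, but serialize here
            return f"beaten({x})"

    print(beat_with_real_tools("eggs"))  # beaten(eggs)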


You're describing (safe) locking of resources, best put forward by Dijkstra in 1965 as the Dining Philosophers problem.

I have yet to see a more practical, elegant solution than Haskell's STM for this: https://www.adit.io/posts/2013-05-15-Locks,-Actors,-And-STM-...


Sounds like you need a Kitchen Monad!

It's hard to characterize what FP is. A lot of people think FP is a bunch of techniques like map, reduce, immutability, higher-order functions, or monads.

None of those things is exclusive to FP. They are just tricks developed by FP.

Here's how to get a high-level characterization of what FP actually is:

You know how in math and physics they have formulas? Formulas for motion, for area, etc.

Functional programming is about finding the formula for a particular program.

Typically when you code you write an algorithm for your program. In functional programming you are writing a formula. Think about what that means.

The formula has side-effect benefits that make it easier to manage complexity and correctness. That's why people like it. These side effects (pun intended) are not evident until you've programmed with FP for a certain amount of time.

Obviously, though, people naturally reason about things in the form of procedures rather than formulas, so FP tends to be harder than regular programming.


> Functional programming is about finding the formula for a particular program.

This can be disproved trivially. Anyone can code up a program that generates the nth prime for some n by iteratively accumulating n primes starting from 2 using trial division. OTOH, an actual formula that produces the nth prime would be an earth-shaking event.


Iteration doesn't exist in functional programming. If you ever use iteration in FP, you're doing it wrong.

So I ask you... how does a functional program do the algorithm you ask for above?

Recursion and the ternary operator. All iterative programs can be written with recursion, and vice versa.

Another way of thinking about this: rather than formula, think expression. Functional programming is about coding up an expression that can fit on one line.

Anything that breaks the program into multiple lines means you're turning your functional expression into a list of procedures.
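
E.g., here's a toy nth-prime in Python built from nothing but recursion and conditional expressions (hypothetical, just to show the shape):

    def is_prime(n, d=2):
        return True if d * d > n else (False if n % d == 0 else is_prime(n, d + 1))

    def nth_prime(n, c=2):
        return c if is_prime(c) and n == 1 else nth_prime(n - 1 if is_prime(c) else n, c + 1)

    print(nth_prime(10))  # 29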


There exist many formulas that encode the program you mentioned or other sieving methods, so it is unclear what you mean by "actual formula". See https://en.wikipedia.org/wiki/Formula_for_primes

I prefer to make a distinction between recurrence, identity, and formula; otherwise the whole discussion is moot. For example, to compute the nth Fibonacci number, we have Binet's formula. We don't need to bootstrap: we just take the golden ratio and its conjugate, exponentiate, take the difference, and scale by root 5. That's a genuine formula. OTOH, if one says the nth Fibonacci is just the sum of the previous two Fibonaccis, that's technically not a formula: you would need to bootstrap to get those previous ones, so that's a recurrence. Still very useful, but not a formula, in the sense that you can't just turn the crank and be done. In that specific sense, I agree with Wilf and Dudley that the procedures outlined by Mills, Wright, Wilson, etc. are essentially worthless as formulas. They are nice identities and recurrences, but not genuine formulas, in that you can't get the nth prime by turning the crank; you have to bootstrap with so many extraneous params that it makes the whole exercise futile.
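
In Python the contrast looks like this (floating-point Binet is fine for small n):

    from math import sqrt

    def fib_formula(n):                # Binet: turn the crank, no bootstrapping
        phi = (1 + sqrt(5)) / 2
        psi = (1 - sqrt(5)) / 2
        return round((phi ** n - psi ** n) / sqrt(5))

    def fib_recurrence(n, a=0, b=1):   # recurrence: bootstrapped from F(0), F(1)
        return a if n == 0 else fib_recurrence(n - 1, b, a + b)

    print(fib_formula(10), fib_recurrence(10))  # 55 55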

The definition of "formula" does not preclude recurrence relations.

Evaluating a formula will always require some form of "turning the crank", whether it involves recurrence or not.

The evaluation of a formula lives in sort of a separate universe from the formula itself. For example, an integral or a limit represents something as a formula, but sometimes algorithmic evaluation of such things is not possible. A formula can exist EVEN when an algorithm, a "turning the crank" evaluation, doesn't exist. It's a separate thing and does not influence the definition of "formula".


You are talking past each other on what formula means. In FP, a recursive function is a valid “formula”. P.S. I’m not arguing for that characterization.

Hint: it's a recursive formula. There's an isomorphism.

I believe that a good definition of the "core" functional programming is: it's a practical way of using the λ-calculus. Similarly, imperative programming is a practical way of using Turing machines.

It's always somewhat possible to express what are typically considered imperative features in a functional fashion (e.g. monads), or to bend imperative languages to behave somewhat like functional ones: I think the differences become clearer once we reach for the underlying theoretical models.
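
You can see the λ-calculus peeking through even in Python; here's the classic Church-numeral toy:

    # Church numerals: numbers encoded purely as function application.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))

    def to_int(n):                   # decode by counting applications
        return n(lambda k: k + 1)(0)

    print(to_int(succ(succ(zero))))  # 2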


> there are probably just as many people using FP without static types: in some parts of the Javascript community, Clojure, Scheme or one of the many other Lisps.

Erlang & Elixir too.


> It might look like a bit of a mess, but if you look carefully, you will see

I “grasp” FP, but that pretty much sums up my experience with it. Half of its promises it delivers in the form of “you got used to reading a multi-faceted inside-out mess”. I think FP is actually harmful, because it masks the fact that we don’t properly teach regular programming.


While I agree that that's the case for the promises in many such tutorials, I think this article explicitly shows that that isn't inherent to the thing that FP is about.

Your quote is specifically about the code formatted in such a way, with arrows drawn all over it, to match the box diagram. But if you look at the code as it would actually be written, in a functional style:

        def make_tiramisu(eggs, sugar1, wine, cheese, cream, fingers, espresso, sugar2, cocoa):
            beat_eggs = beat(eggs)
            mixture = beat(beat_eggs, sugar1, wine)
            whisked = whisk(mixture)
            beat_cheese = beat(cheese)
            cheese_mixture = beat(whisked, beat_cheese)
            whipped_cream = whip(cream)
            folded_mixture = fold(cheese_mixture, whipped_cream)
            sweet_espresso = dissolve(sugar2, espresso)
            wet_fingers = soak2seconds(fingers, sweet_espresso)
            assembled = assemble(folded_mixture, wet_fingers)
            complete = sift(assembled, cocoa)
            ready_tiramisu = refrigerate(complete)
            return ready_tiramisu
...then surely you would agree that that doesn't look like a mess at all, compared to the imperative version a couple of lines below it? And yet, it brings all the benefits described in the following paragraphs!

There’s nothing particularly FP about it though. A function taking a value and returning a new value is as “FP” as a good deed is “Christian”.

That's mostly a discussion about semantics, which isn't that interesting IMO. It demonstrates the use of pure functions that avoid side effects, and argues that this is how you can get some important benefits associated with functional programming without having to "read a multi-faceted inside-out mess".

If you have a definition of FP that intrinsically includes that mess, then sure, maybe that's harmful, but that seems orthogonal to the article in question.


I wonder if there is a Python PEP for adding a pipe operator |> to the language. This could be pretty handy, as described in the article.

The article is not about FP.

> Languages like Java encourage patterns where you instantiate a half-baked object and then set the fields later.

Maybe it did before 2010. For many years now, everyone has preferred immutable objects (so that an object and its builder have different types, or, in the simpler case, no setters, only constructor initialization). You can see it in pretty much all frameworks.

I'm OK with both functional and procedural languages; I just don't think this article is about functional programming. Come on, FP is all about monads!

Moreover, the "imperative" code example would be impossible anyway with immutable objects and no side effects. So what I think the article is really about is immutable data types. Everyone agrees that immutable types are better; we resort to mutable ones only when optimizing for CPU/RAM.

And BTW, concurrency is typically easy to achieve if variables are not plain objects but Future/Observable/Flow/Stream -- pick your poison. They have all passed the hype hill already and become "just a tool", I think.
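
E.g. in Python, with concurrent.futures, the independent branches of a data-flow graph parallelize almost for free (the step names are stand-ins):

    from concurrent.futures import ThreadPoolExecutor

    def whip(x): return f"whipped({x})"
    def dissolve(a, b): return f"dissolved({a}, {b})"

    with ThreadPoolExecutor() as pool:
        # Vertically separated steps share no state, so just submit both.
        cream_f = pool.submit(whip, "cream")
        espresso_f = pool.submit(dissolve, "sugar", "espresso")
        print(cream_f.result(), espresso_f.result())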


Define what you mean by "everyone" -- there are times when the cost of immutability can be overwhelming, such as in high-traffic systems with overly complex data structures, which you are required to use because someone who should have known better insisted upon writing them.

(Sorry, bitter personal experience.) And yes, that was explicitly "modern" Java code written by a lead engineer and "Java Champion" in 2023.


Yes, I feel you. As I said, we often need to sacrifice immutability for performance, and that's OK. If they insisted on using immutable structures in high-performance applications, then functional programming won't help anyway.

Hopefully we won't have to make that trade off for too long

https://www.microsoft.com/en-us/research/uploads/prod/2020/1...


Even reference counting needs memory management. Granted, allocations are very fast on the JVM heap, but they cannot be faster than no allocations.

Came here to write this.

There's also a (research) language that uses the Perceus algorithm: https://koka-lang.github.io/


The classic dichotomy drawn between functional programming (FP) and object-oriented programming is the "Expression Problem" [0]; in the former approach you optimize for extensibility of the number of operations in your model, at the cost of reduced developer ergonomics when attempting to extend the number of types you can operate upon. In the latter, object-oriented approach, you have the inverse tradeoff.
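
A minimal Python sketch of the FP side of that tradeoff (types and names invented):

    from dataclasses import dataclass

    @dataclass
    class Lit:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def evaluate(e):    # one function per operation...
        return e.value if isinstance(e, Lit) else evaluate(e.left) + evaluate(e.right)

    def pretty(e):      # ...so adding a new operation is just one new function
        return str(e.value) if isinstance(e, Lit) else f"({pretty(e.left)} + {pretty(e.right)})"

    e = Add(Lit(1), Lit(2))
    print(evaluate(e), pretty(e))  # 3 (1 + 2)
    # But adding a new *type* (say, Mul) forces edits to every existing function.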

In FP the fundamental unit of composition is the function; your solution is expressed as a composition of functions, which are treated as first-class objects (first-class meaning, functions can be manipulated via higher-order functions, or "functions of functions"), a feature not always seen in object-oriented or multi-paradigm languages. Polymorphism is most frequently achieved through parametric polymorphism, or "generics," and ad-hoc polymorphism, or "trait bounds"/"typeclass constraints".

In OOP the fundamental unit of composition is the object; your solution is expressed as a composition of objects, whether through actual composition/aggregation (objects containing and possibly delegating to other objects [1]), or subtype polymorphism, also known as "inheritance." Parametric and ad-hoc polymorphism can often feature in OOP languages as well, but subtype polymorphism is a distinguishing characteristic of OOP.

Functions, particularly pure functions without side effects in the "real world" such as I/O or hardware access, are akin to equations in which an expression can be replaced by its value - the "left-hand side" equals the "right-hand side." Mutable state often does not enter into the picture, especially when programming in this "pure" (side-effect free) style. This makes functional programs easier to reason about equationally, as one can determine the value of an expression simply by inspection of whatever variables are in the function's scope, without having to keep track of the state of the entire program.

Objects are distinguished by their often stateful nature as they bundle together data/internal state, and operations over that internal state. Often such internal state is hidden or "encapsulated" from the client, and the internal state is only modifiable (if at all) via the object's class' set of public methods/operations. Objects with immutable internal state are more akin to closures from functional programming - that is, functions with access to a "parent lexical scope" or "environment."

Between the two extremes exists an entire spectrum of mixed-paradigm languages that incorporate features of both approaches to structuring and modelling a software solution.

[0] https://wiki.c2.com/?ExpressionProblem

[1] https://en.wikipedia.org/wiki/Composition_over_inheritance


Missed the key point: FP serves to make someone feel like they're smarter than someone else.

Sounds like a take from an individual who a) does not see the benefit or b) just does not get it.

I have refactored large enterprise systems and always tend to write in an FP style, depending on the language. I'm not talking about full-blown Haskell here, but traditional C-like languages.

Just keeping it simple, avoiding mutation, isolating side effects, and writing pure functions will take you a LONG way toward better software.

You don't have to go balls-deep in category theory for FP, and usually the language you use is not really suited to a monadic way of doing things; I don't like to shoehorn that stuff in if I'm not working with something like OCaml, etc.


>The core of Functional Programming is thinking about data-flow rather than control-flow

That's not right. The difference between data and control flow oriented programming is the difference between laziness and eagerness. The difference between imperative and functional programming is largely one of abstraction, not a feature in itself.

The genuine core feature of FP is the separation of data and methods, that stands in contrast not to imperative programming but object oriented programming, whose core feature is fusion of data and methods. Functional programming tries to address complexity by separation of concerns, pulling data and methods apart, OO tries to address complexity by encapsulation, pulling data and methods into purely local contexts.

This is also where the association between FP and static typing that the post briefly mentions comes from. Pulling data and functionality apart lends itself to programming in a global, sort of pipe-based way, where types act like a contract between different interacting parts, whereas the information hiding of OO lends itself to late binding and dynamic programming, taken to its most extreme in, say, Smalltalk.


One of the foundational building blocks of FP is the closure, the purpose of which is to couple together data and the function operating on it.

ML, one of the standard-bearing functional programming languages, is at least partially defined by its powerful module system. And an ML module serves a similar sort of encapsulating purpose as the class does, often binding together a type and the functions that operate on it, with the internals of the type sealed away such that only those functions can operate on that data.


>One of the foundational building blocks of FP is the closure

Yes, because the closure solves a problem in functional programming by injecting a bit of OO. Closures are just stateful function objects with a single call operator, where the captured local context serves the same purpose as private variables in an object.

It's exactly because FP otherwise lacks the coupling of data and methods that closures are so important, and it's why, the other way around, in languages where functions are literally first class objects, you achieve that through closures.
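
The classic counter makes the equivalence concrete (a Python toy example):

    def make_counter():
        count = 0              # plays the role of a private field
        def increment():       # the single "call method"
            nonlocal count
            count += 1
            return count
        return increment

    c = make_counter()
    print(c(), c(), c())       # 1 2 3 -- an object in all but name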


If paradigm A is defined, in part, by a particular concept that was created wholly independently of paradigm B, even if that concept has some analogs in paradigm B, it's a real stretch to say that it's merely "injecting a bit of" paradigm B into paradigm A.

A more parsimonious ontology is that there's an underlying concept shared by both paradigm A and paradigm B, and that you're making your cut in the wrong place - that this concept is not where the split is.


> it's a real stretch to say that it's merely "injecting a bit of" paradigm B into paradigm A.

Not at all. Leibniz and Newton both invented the same thing independently, calculus, even though each probably called it something else. The reason why this whole discussion is so confused is because people tend to do what you proposed, which was argue from the top down by association. We say ML is "functional", ML contains closures, closures encapsulate state and behavior, therefore that's "functional programming". But that's like saying, Python is "object oriented", X is in Python, therefore that's object oriented programming. Which is of course not true, you can write functional code in Python just fine.

If we want a real ontology we've got to look at what structures in a program do, regardless of language. Closures behave like objects, and the modules you brought up are basically just static classes. In fact, the OCaml manual gives a good example of how classes, modules and objects can be expressed in interchangeable ways (https://ocaml.org/manual/5.2/advexamples.html). There's a reason that OCaml is a fairly logical extension of Caml; it didn't suddenly become the ontological opposite because you added the O to it.

Working up from clear conceptual differences I think also leads to a much more sensible conclusion, which is that a lot of languages combine both functional and object oriented features and that labelling an entire language as either one is just wrong.


> by injecting a bit of OO.

OO is defined by message passing. What does a closure have to do with message passing?


In most OO languages, message passing is just a fancy term for function calling. It's really not fundamentally different from function application on a closure value in FP.

OOP is defined by encapsulation and subtyping (polymorphism and inheritance). In fact, the one thing that doesn’t exist in standard FP is inheritance. (Encapsulation sort-of exists with closures, and polymorphism exists with function values.)


> In most OO languages, message passing is just a fancy term for function calling.

Having objects does not make a language oriented around those objects. By your definition C++ would be considered OO, but we know it is not; Kay quite explicitly stated that C++ is not OO. I expect you're thinking of object-based programming.

If you look at the actual definition of OO, it basically is just a laundry list of Smalltalk features. It is probably not usable outside of Smalltalk for that reason, despite Joe Armstrong's playful insistence that Erlang is the only OO language in existence.

You might be able to include Ruby and Objective-C (and, if you're really stretching, Swift with @objc enabled), but that's about the extent of it.


I don’t really care much about what Kay says. His view of OO isn’t what caught on. C++ as a descendant of Simula, which is considered the first object-oriented programming language, certainly supports OO. (C++ is really a multi-paradigm language.)

Object-based means OO without inheritance or subtyping.


Simula long predates OO. To call it an OO language is laughable.

Simula has objects, but that does not imply that the objects are oriented.



Implying that the inventors of Simula had a time machine?

I am inclined to think the more logical explanation is that they didn't actually take OO back in time, and that Simula isn't an OO language at all (perhaps people are confused by the first OO language being created with Simula?), but admittedly I'm rooting for the time machine! I look forward to you telling us more about it.


> The genuine core feature of FP is the separation of data and methods

Couldn't disagree more. By this definition, the C language would be the epitome of functional programming... but it is not.


That's exactly what the parent comment said, since C is an imperative programming language:

> The genuine core feature of FP is the separation of data and methods, that stands in contrast not to imperative programming but object oriented programming, whose core feature is fusion of data and methods


Yes, the original comment posits that functional programming and imperative programming are the same thing, but that does not address the practical concerns of the parent. The harsh reality is that we have both "functional programming" and "imperative programming" terms with an intent for them to have different meanings.

Let's accept temporarily that functional programming and imperative programming are, indeed, the same thing. But now you have two terms with the same meaning that still want to mean something different. A conflict in need of resolution. So, from this point forward, what can we say makes them different?

A trait I often see in languages that are considered functional, and not found in C, is the closure. Its purpose is to bind data with functions. That suggests to me that, as we seek a division in meaning, separation of data and functions is better left for the imperative definition rather than the functional definition.

Perhaps functional programming is best described as the intersection of imperative programming with "OO" programming? That seems to be the view the original commenter ends up taking in subsequent comments.


There's always some rambling answer in every FP post about what FP _actually is_ which then devolves into senseless drivel and arguments.

If you wrote this in a book and gave it to me as a way to learn what FP is, I'd be pretty pissed if I paid for that book is all I'm saying.

Also, what in the actual fuck are you on about? FP is many things, and some of them are less enforced than others depending on the implementation, but I'm pretty sure each FP book I've looked at has mentioned exactly this idea of data-flow. So either you are an amazing genius who sees what others can't, or you are just generating nonsense. Either way I don't care; HN is kind of a joke to me anymore.


Now you’re just defining FP as a contrast to OO. That’s wrong and boring.


