Functional programming should be the future of software (ieee.org)
341 points by g4k on Nov 2, 2022 | 504 comments



It would be helpful if the article started off defining what a functional language is. A lot of languages have functional features but are not “purely” functional. I think most would agree there’s a spectrum; dynamic vs static, eager vs lazy, mutable vs immutable.

So what flavor of functional programming, one might ask, since JavaScript is a dynamically typed flavor that is ubiquitous nowadays? The fine article suggests, drum roll... Haskell! The author believes a statically typed and lazily evaluated language is what we all should be using, unlike the various dynamically typed options like Smalltalk, Scheme, or Lisp. Standard ML and OCaml, being statically typed, are eagerly evaluated.

Most popular languages have added a lot of functional features in recent years so it’s clear the world is moving that way, imo.


To quote "Stop Writing Dead Programs" [1]: "If what you care about is systems that are highly fault tolerant, you should be using something like Erlang over something like Haskell because the facilities Erlang provides are more likely to give you working programs."

[1] https://www.youtube.com/watch?v=8Ab3ArE8W3s


That quote is absurd, because the vast majority of applications on the planet are not written in Erlang and work just fine. Working and fault tolerance are in no way related. Even being generous, the majority of applications with very high uptime are also not written in Erlang.


This. I came across this quote:

> If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in

RabbitMQ might be the most famous example of a product written in Erlang. It's great, but I've seen it fall over. In my experience, cluster failures are typically caused by a handful of root causes: hardware error, resource exhaustion, network partitioning, or operator error. Whether a system is built in Erlang or Go, I'd imagine these same root causes would exist.

I'd love to read in depth why RabbitMQ's Erlang underpinnings make it better than, say, ActiveMQ or Kafka. Assuming three perfectly built clusters that aren't mishandled, will RabbitMQ somehow "win" over the other two because of some particular greatness in Erlang?


> the vast majority of applications on the planet are not written in Erlang and work just fine

The vast majority of applications on the planet are bug-ridden, fragile, over-budget, under-thought, mark-missing, user-hostile garbage. They most certainly do not "work just fine."


And also the vast majority of "working" applications have a full devops team, legions of highly paid senior developers, etc. "You" do not.


Ok, so we’re adding “developer productivity” to the list. Aside from moving the goalposts, one does not have a lot of faith in the knowledge of the people making these claims. Where’s the proof?

Hint: read Joe Armstrong’s dissertation.


Is this actually true? I feel like the majority of software that exists is non-business software, simply because there's zero cost...


This quote isn't about "vast majority of software".

It's about tools we use and waste our time on.


Cloud Haskell brings ideas from Erlang to Haskell, but I don't know how they compare.


> A lot of languages have functional features but are not “purely” functional.

That was my first thought. I work mostly in Java because that's what they pay me to do, but I've almost never worked with a Java programmer who could actually write Java code using the OO features that the language is based around. When I see their Scala code... it's mostly var, rarely val, because it's easy to think about.


> I've almost never worked with a Java programmer who could actually write Java code using the OO features that the language is based around

I don't understand this. The language is based around primitive, flawed, simplistic OO features, right? Like "class Dog : Animal"? I never write code like that either, because it's bad practice. But you're saying they can't write code like that? Or that they don't use classes at all? How can you even write any Java that way?


> they don't use classes at all

As much as they can avoid it, yes. The only classes I ever see are those that are auto-generated from some JSON or ORM processing tool - everything else is a static function.


Unfortunately, a lot of Java programmers write imperative code in Java. Although I have seen "good" Java programmers write "good clean code", it's the exception, rather than the norm. Most of the good Java programmers that I know have moved on to better languages and platforms.


Would like to see a collection of good and bad Java code examples.


Whenever I see "interesting" object modeling beyond very basic inheritance, I groan. I've never experienced it being worth the hassle (templates, factories, etc..).

But my code is flush with collection.stream().filter().map().collect() etc. I was initially critical of it in code reviews (coming from C), but have been totally converted.


Templates (I assume you mean Generics, if we're talking Java?) aren't worth the hassle, but your code is flush with .filter() and .map()? Which rely entirely on generics to provide any typing support at all? You should learn to embrace generics; they're absolutely critical for the code you just said you write, and more. And they are most certainly not going away.


Functional programming won't succeed until the tooling problem is fixed. 'Tsoding' said it best: "developers are great at making tooling, but suck at making programming languages. Mathematicians are great at making programming languages, but suck at making tooling." This is why Rust is such a success story in my opinion: it is heavily influenced by FP, but developers are responsible for the tooling.

Anecdotally, the tooling is why I gave up on OCaml (given Rust's ML roots, I was seriously interested) and Haskell. I seriously couldn't figure out the idiomatic OCaml workflow/developer inner loop after more than a day of struggling. As for Haskell, I gave up maybe 20 minutes into waiting for deps to come down for a Dhall contribution I wanted to make.

Institutionally, it's a hard sell if you need to train the whole team to just compile a project, vs. `make` or `cargo build` or `npm install && npm build`.


I think tsoding has the wrong idea here. Most mathematicians are not working on GHC or Haskell standards, or even using Haskell. Most are still doing mathematics with pen and paper. Many use packages like Sage or Wolfram Alpha. Few are using interactive theorem provers like Lean.

Haskell is a poor language to be doing mathematics in.

I’d say the majority of people working on GHC are software developers and CS researchers. They’re a friendly bunch.

What’s holding back tooling is that the developers of the de-facto compiler for Haskell are spread out amongst several organizations and there isn’t tens of millions of dollars funding their efforts. It’s mostly run by volunteers. And not the volume of “volunteers” you get on GCC or the like either.

That makes GHC and Haskell quite impressive in my books.

There are other factors of course but the tooling is improving bit by bit.

The whole “Haskell is for research” meme needs to go into the dustbin.


I think he counts academic computer scientists more as mathematicians than developers.


C++ and Java wouldn't be where they are without CS researchers, so I think the point still stands. These people aren't doing useless work. They're the ones making sure Concepts won't break the entire ecosystem and that have made the JVM the beast it is.

Grit and determination will get you far but like it or not there is a ton of work that requires knowledge of mathematics and theory that you can't avoid if you want to make good things.

What's kind of neat about Haskell is how closely researchers can work with users and collaborate on solutions.

Remember, the GHC team is pretty small. Their IRC channel isn't huge. Releases still get made fairly regularly and GHC is running one of the most advanced industrial strength programming languages out there with a large ecosystem.


> C++ and Java wouldn't be where they are without CS researchers, so I think the point still stands

I'm not sure how you gleaned that they aren't important/haven't made important contributions/aren't doing useful work from that comment. Tsoding specifically said that they don't make good tooling, but make great languages. It's not memeing about "Haskell for research," it's talking about things Haskell needs to improve (potentially to break free of that meme).

For what it's worth, C++ tooling also sucks. We've just layered tons of kludges on top of it that make the ecosystem somewhat bearable. It's not ideal; contributing to a project for the first time usually requires some troubleshooting to get a working build.


> Tsoding specifically said that they don't make good tooling

And I disagree.

Mathematicians aren't building Haskell.

CS Researchers aren't noodling around either. I know a few of the people working on GHC who are researchers and are doing the hard work of improving error messages and reporting because they care deeply about tooling. They are also users after all!

> For what it's worth, C++ tooling also sucks.

Yeah, so does Haskell's. I think it's just an unfortunate fact of life that nothing's going to be perfect.

My point is that the reason for Haskell's situation is less to do with researchers and more to do with funding and organization.

To that effect, the Haskell Foundation is relatively new and gaining steam. It might change. But it's nowhere near the funding levels that get poured into TypeScript and C# or even Java, gcc, etc.

Update: Put it this way, the set of people building Python packaging is probably mostly developers and very few, if any, researchers. The tooling isn't great either. I don't think researchers make bad tooling and programmers make good tooling. I think programmers make stinking bad tooling all the darn time. Programming is hard. Tooling is hard. And programmers are a fickle bunch that are really hard to please.


He's talking about Programming Language Theory, which includes various type theories and such. It's where math and logic meet programming languages. In other words, the math of analyzing programing languages, not using programing languages to implement applied math.


The URL's domain on your profile seems to be expired and hosts some random stuff.


You gave up using a programming language after a day? And Haskell after installing/building some dependencies for 20 mins? Tbh, this sounds like you were not really trying. What kind of experience with a programming language do you expect to have after a mere day? Learning takes time. Anyone might spew some non-idiomatic code within a day, but really becoming proficient usually takes longer.

Do you have any references for the "Rust is heavily influenced by FP" thing? To me it does not feel that much like FP. I have (for now) given up writing FP-style code in Rust. ML influence -- yeah, maybe, if I squint a bit.


> What kind of experience with a programming language do you expect to have after a mere day? Learning takes time.

It does, which is why you need tooling to get out of the way and let you actually learn. Working out obscure tooling commands to build a hello world app then having to grok the error messages absolutely destroys the learning loop.

For rust it took me approximately 3 minutes from scratch to install, bootstrap a project and run the hello world CLI. The rest of the day was spent purely, 100% learning rust. Not Cargo.

20 minutes to install some beginner-level dependencies, presumably with little feedback as to what is going on? Dead.


You shouldn't need to install any dependencies beyond `base` for hello world in Haskell:

```
module Main where

main :: IO ()
main = putStrLn "Hello, world!"
```

If you mean installing Dhall's dependencies (https://github.com/dhall-lang/dhall-haskell/blob/master/dhal...), those aren't too crazy, but they're definitely not all "beginner level". Template Haskell in particular is quite heavyweight.


As I see it, this is a completely legit beginner-level perspective. When I first touched a programming language (C, Borland compiler) I was able to run a first program within minutes. I completely understand the frustration with development environments that frustrate an aspiring user unnecessarily - I had this experience with F#, VSCode, and Ionide, which is a very ugly case. Haskell was not that bad, but it certainly is not for the faint of heart.


> You gave up using a programming language after a day? And Haskell after installing/building some dependencies for 20mins...

I'd do the same. It's 2022. There are so many options without this friction, why would you fight your way through it?

If either language had some magic power or library, that'd be one thing, but their only selling point is the FP paradigm, which is only arguably somewhat better than what other languages do. Not only that, but most other languages let you do FP to varying degrees anyway.


Haskell's promise is not only FP. That's part of it though, of course: to have an ecosystem which encourages you to continue in an FP style. Haskell's promise is also strong type safety and, as a distinguisher from many languages, laziness by default. Aside from that, its implementation is quite performant, if one needs to worry about such things.

I will take a 20-minute build process for dependencies (probably a few commands, which hopefully are documented in the project's readme, and probably run only once for most of the lifetime of a project on your personal machine) over a language that is quickly up and running but breaks any number of basic principles (looking at JS, for example) and lets me shoot myself in the foot. Some languages, and the lessons we take from learning them, are worth some initial effort. Of course it is not great that things are not as easy as they maybe could be, but if the language has other perks making up for that, it might still be worthwhile.


> but if the language has other perks making up for that, it might still be worthwhile.

Totally agree, I just think for the vast majority of developers the trade-off here isn't worthwhile. This is reflected in Haskell's stagnant growth despite its model being better for a lot of things.


Sure -- people have the right not to try things, which is basically what messing around with a programming language in a new language paradigm for 1 day is. That's fine, although it should also be expected that their opinions on things they haven't tried won't be given much weight.


Day one experience matters a lot to most people. For better or worse, your community won't grow if newcomers have to spend a day hating things first.

It won't matter if you're looking for a language to make your baby for the next 5+ years, but most people are trying to solve small problems on an incremental basis.


> Do you have any references for the "Rust is heavily influenced by FP" thing? To me it does not feel that much FP.

The original implementation of Rust was in an ML dialect (I think OCaml?), so from that we know immediately that the original authors were familiar with FP and used it for their own purposes. It seems odd, then, to assume that there would be no influence of their own language.

But if we look at the actual feature set, we find a lot of things that previously belonged almost entirely to the realm of FP. The type system as a whole has a fairly FP feel, plus algebraic data types, exhaustive structural pattern matching, maps and folds, anonymous functions, the use of the unit value for side-effecting work, the functioning of the semicolon operator (which is identical to the OCaml usage)... there's quite a lot, and those were just the examples off the top of my head!
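For instance, a tiny illustrative sketch (all names here are mine, not from any real codebase) that bundles several of those features together: an algebraic data type, exhaustive pattern matching, a map over a closure-friendly iterator, and the OCaml-style semicolon-discards-to-unit behavior.

```rust
// An algebraic data type (sum type), as in OCaml/Haskell.
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

// Exhaustive structural pattern matching: the compiler rejects
// a `match` that misses a variant.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    let shapes = vec![Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
    // Maps and folds over iterators; `area` is passed as a plain function.
    let total: f64 = shapes.iter().map(area).sum();
    // The trailing semicolon discards a value, yielding `()` -- the unit
    // type -- just as `;` sequences unit-typed expressions in OCaml.
    println!("total area: {total}");
}
```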


Yep, though it's got type-classes specifically from Haskell.


(It was OCaml, yes)


My job/hobby is to produce code. It is not fighting a toolchain. It does not bode well for long-term efficient use of my [limited] time if I'm spending a whole day fighting toolchains from the outset.

User experience matters, and developers are ultimately users.


IME learning the (usually shitty) tools and the culture/ecosystem eat up way more time when starting with a new language than learning the language itself does.

I can absolutely understand abandoning such an effort if one's earliest interactions with the tools and/or ecosystem are very unpleasant.


Immutability, most things being expressions, no nulls. I think this is what they mean; it's a good experience if you want to go purely functional, though they took influences from everything.
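A small sketch (toy function names of my own) of two of those points -- `if` as an expression, and `Option` standing in for null:

```rust
// `if` is an expression, like in ML-family languages:
// it evaluates to a value rather than just branching.
fn parity(n: i32) -> &'static str {
    if n % 2 == 0 { "even" } else { "odd" }
}

// There is no null; absence is the `Option` sum type, and the
// compiler forces callers to deal with the `None` case.
fn parse_doubled(s: &str) -> Option<i32> {
    s.parse::<i32>().ok().map(|v| v * 2)
}

fn main() {
    println!("{}", parity(7));
    println!("{:?}", parse_doubled("21"));
    println!("{:?}", parse_doubled("not a number"));
}
```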


Other things require my attention and there are only so many hours in a day.

Either get those initial minutes right or lose me.


I think FP suffers from what I call the "ideological gruel" problem. A lot of niche ideology-oriented communities with very strong opinions tend to ignore usability issues not necessarily because it's a theory/practice dichotomy, but that the highly ideologically excited community ignores the problems because they're so positively motivated by the ideology. So even if you're eating gruel, if the gruel is produced by an ideology you identify strongly with, the identification alone is enough to make the gruel taste better than mere gruel. A lot of folks that use FP are willing to overlook the myriad of rough edges around tooling because they're so excited to work with FP that they often get used to the tooling. Rust made tooling UX an explicit focus of the project which is why it was able to escape the "ideological gruel" curse.

Additionally most prominent FP projects are old. Both Haskell and Ocaml date from a time when UX expectations around language tooling were much lower (think C++.) The inertia around the projects never cared much for UX anyway so now in 2022 when languages like Rust and Go have raised the floor of expectation for PL tooling, Haskell and Ocaml struggle to keep up.


That's interesting, I often find the tooling one of the best things that FP languages offer. In OCaml for instance I found Dune to be fantastic and extremely intuitive. Another very good experience I had was with Elixir and Hex. In Haskell I personally think that there are indeed quite a few things that could be improved around build system and packaging, but overall it's not really that bad once you learn the quirks.


It would do the OCaml community a great service if you wrote up a beginner's tutorial on working with Dune.


I felt the small section on dune here was enough for me.

https://ocaml.org/docs/up-and-running

Teaching anything beyond dune build or opam install feels out of place for a beginner tutorial.

However, there really should be more examples of “how to do X in dune”. It took me a bit to learn how to pin a git repo for local use and installation.


Tooling is why Go gets its foot in the door much quicker than other languages IMHO. A single binary with no dependencies that does pretty much everything.


Indeed. Also formatting decided at the language level (gofmt) and a standard library powerful enough to accomplish most business tasks, so you can avoid 3rd party dependencies entirely if you want, while still being productive.


I've seen go programs fail to start because they were dynamically linked to some .so file I happened to not have.


They mean the `go` binary, not Go programs in general.


Yeah ‘Fully static’ requires some extra linker flags.


As someone who likes both mathematics and programming, I find this comment and the article too divisive for divisiveness’ sake.

Programming is applied mathematics. An assignment is not 'sloppy' as the article posits; it is just another kind of operation that can be done.

A proof is very much like programming, except you are also the parser, the compiler, and the (comparatively slow) computer. Learning to write proofs helps immensely with learning how to program.

We should strive to make our proofs and programs easier to understand, no matter the paradigm.


I think F# has a good tooling story, since it's part of .NET and a first-class citizen in Visual Studio. It doesn't get as much love from Microsoft as C#, but it's still quite nice to use.


After learning a bunch of programming languages and their corresponding ecosystems (incl. Rust, Lisps, Scala), F# is still my favorite by a long shot. Its only serious shortcoming, imo, is the tooling.

Visual Studio is great, but if you're not on Windows, your only practical choices are VS Code + Ionide (I was a sponsor for a while; ultimately lost hope), or JetBrains Rider, which is powerful, but heavy.

Comparing my 10+ years focused on C# with ~5 years focused on F#, I was ultimately more productive in F#. But:

1. Tools for refactoring and code navigation were better for C#

2. Testing was more predictable with C#; I often just tested from the CLI with F# (so much love for Expecto, though)

3. Paket and dependency management between projects caused days of pain, especially during onboarding


Microsoft has come a long way in supporting .NET on non-Windows boxes, but I agree that it's still not as good as it could be.

C# certainly has better tooling, but C# has possibly the best tooling of any programming language on the planet, so that's a high bar to meet.


I used Visual Studio on macOS for some small F# projects and it worked fine. The only trouble I had was with some setup instructions being spotty because there was a shift in how things were done in recent versions at the time a few years back.


There’s also the JetBrains IDE, which I’ve heard good things about compared to Ionide.


Yes, we ended up buying JetBrains Rider; I used it for a few years. I was much more productive in it than Ionide.

Code analysis was excellent, though it would sometimes get weird on multi-project repos. Unit testing (incl. Expecto tests) was a little flaky. Like many JetBrains products, it was a resource hog on large projects.

Probably the worst part of the Rider experience had more to do with other devs using different tools, i.e. Ionide or Visual Studio. (We were a "bring your own whatever" kind of startup.) Each IDE/toolchain had its own opinions about .fsproj files. A CI step to keep things "normal" would have been great, but there wasn't (isn't?) anything available, and we wouldn't spend the time building our own.

tl;dr - Rider better than Ionide; whole team should use it


Last time I used F# on Linux, the REPL was a mess and mostly unusable. Compilation takes forever. You have to edit an fsproj rather than inferring modules from the file system structure like most modern languages.

It’s a great language— maybe my favorite, but the tooling stinks if you’re not using VS. I’m not switching to Windows, so that leaves me in limbo.


PSA: Visual Studio for Mac[0]. I know you're on Linux but for others.

[0] https://visualstudio.microsoft.com/vs/mac


And just for clarity, it's not proper Visual Studio, but rather an updated MonoDevelop. Was missing quite a lot of functionality (for Unity/C#) compared to VS, so I used Rider instead.


I didn't know Rider supported F#, thanks for the tip.


I'd argue Rider from Jetbrains is better than VS for F#, though that is a subscription based IDE (though you can stop paying and keep the version you originally "bought" after)


The part about fsproj is due to module order being significant, which I don't think they can change without breaking existing code.


Honestly Haskell's tooling is really surprisingly good these days. It works fine cross platform, there is a single "blessed" path without too many options to spend time agonizing over.

Today the Haskell example is just `cabal install --only-dependencies && cabal build`.


The npm install experience should be the baseline for newer languages. Simply let me get into hacking fast. This is one of the top reasons I like tinkering with JS: it just works. (Yes, I know all the weaknesses of the JS ecosystem, but getting started is really easy.)


Please, no. Node tooling is such a mess, and I can almost never get anything running easily on Nix because Node developers download binaries from the internet without understanding the system. All of these executables fail because of linked libraries. If instead they told you what libraries and executables you'd need, rather than loosey-goosey installing garbage all over my system, more things might work.


Sure, npm is not a silver bullet. But I want the npm experience. The technical implementation can be improved, as you say, and I agree.


Then you should be a fan of Nix


I do agree.

I think .Net has got it right. And dotnet-script [https://github.com/dotnet-script/dotnet-script] has been a game-changer for me with a REPL-like experience for unit testing and writing command-line utilities.


Tooling is not the only problem. Name one FP language that I can use for high-performance and systems programming. Is there any besides ATS?

And ATS is pretty hard (unlike C, C++ and Rust). I think it will take a while until linear & dependent type languages will hit mainstream. Rust already succeeded in that regard, so it's a great stepping stone.


Most computers follow the von Neumann architecture. Any imperative language with no GC would do great because of the small number of abstractions needed to make a program run. AFAIK, C only requires setting up a stack and the registers.

When we build something with lambda calculus at its core, you might want to revise that opinion.


There’s some truth to this - imperative languages with state make sense because the underlying hardware is a series of imperative instructions and a large amount of state. What does lambda calculus hardware look like?



> it is heavily influenced by FP

Is it really? I agree with the rest of your post, that Rust provides great tooling, but not sure it's "heavily influenced by FP", at least that's not obvious even though I've been mainly writing Rust for the last year or so (together with Clojure).

I mean, go through the "book" again (https://doc.rust-lang.org/book/) and tell me those samples would give you the idea that Rust is a functional language. Even the first "real" example has you mutating a String. Referential transparency would be one of the main points of functional programming in my opinion, and Rust lacks that in most places.


Considering Rust pretty much started as a way to have an ML for systems programming and was written in OCaml, yes, I think it's fair to say it was heavily influenced by FP.

It became less and less ML-like as time went on, but it still has a ton of features it inherited from OCaml and Haskell: variant types, pattern matching, modules, traits (which come directly from type classes), etc.


I think Rust is not particularly FP because it encourages using loops instead of recursion and “let mut” is quite idiomatic in my understanding. Those two characteristics are more relevant than the type system. For example, Scheme and Clojure don’t have type classes but are clearly FP because recursion and immutability are idiomatic.

In Rust, even though it is true that .map, .fold, .filter, and .zip exist, first of all they also exist in Python, and second, they need to be sandwiched between .iter and .collect unless one is already working with iterators, which makes the code noisier and pushes the needle toward loops.
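To make the comparison concrete, here is a toy example (function names are mine) of the same transformation written both ways, including the .iter()/.collect() "sandwich" being described:

```rust
// Imperative style: idiomatic `let mut` plus a loop.
fn evens_squared_loop(xs: &[i32]) -> Vec<i32> {
    let mut out = Vec::new();
    for &x in xs {
        if x % 2 == 0 {
            out.push(x * x);
        }
    }
    out
}

// FP style: filter/map sandwiched between .iter() and .collect().
fn evens_squared_iter(xs: &[i32]) -> Vec<i32> {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .collect()
}

fn main() {
    let xs = [1, 2, 3, 4];
    // Both versions compute the same result.
    assert_eq!(evens_squared_loop(&xs), evens_squared_iter(&xs));
    println!("{:?}", evens_squared_iter(&xs));
}
```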

The influence of OCaml and Haskell is clear though, and it makes the language more pleasant to use.


The claim was that Rust is "heavily influenced by FP"; I think that's clearly the case, while "Rust is FP" is probably not (which case you make pretty well).


FPLs yes, FP no.

We might argue it's influenced by FP indirectly, because those adopted features from ML etc. also jibe well with functional programming - for example, pattern matching as a control-flow construct...


Eh, I think that's a distinction we could make, but it's not clear to me it's useful, and FPLs themselves are (definitionally?) shaped by FP.


It's true it's arguable. But if we start calling the languages that adopt FPL pioneered features "heavily influenced by FP" we end up putting a lot of languages in that set, like Java, Python etc for having GC and closures. So in order to use that as a distinguishing feature I think it's warranted to make the distinction.


I would phrase this as: adopted PL features pioneered by FP languages, but not in support of functional programming.

Like Java.

I think we come upon a phenomenon in the cultural treatment of FP that makes it like AI: over time, some of the stuff initially invented and used in FP languages becomes adopted in mainstream languages (e.g. closures, garbage collection), gets gradually detached from its FP association, and mainstream programmers aren't even aware of the FP origins.

(The AI analogy being: particular approaches start out being called AI and, if they work out, end up being called just normal programming techniques once adopted in the mainstream - https://en.wikipedia.org/wiki/AI_effect).


It's not a FP language, but it's clearly heavily influenced by FP.

Mutability is a significant part of Rust, but it's much more sharply curtailed than any non-FP language I've ever seen. To be allowed to mutate something, you have to prove that nothing else holds a reference to it. That means that any code that doesn't use mutability itself can pretend that mutability doesn't exist.
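A minimal sketch of that rule in action (the commented-out line shows what the compiler would reject; the E0502 error code is the standard one for this case):

```rust
// "Mutation XOR aliasing": you may mutate a value only while
// no shared references to it exist.
fn main() {
    let mut v = vec![1, 2, 3];

    {
        let first = &v[0]; // shared borrow of `v` begins
        // v.push(4);      // error[E0502]: cannot borrow `v` as mutable
        //                 // because it is also borrowed as immutable
        assert_eq!(*first, 1);
    }                      // shared borrow ends here

    v.push(4);             // no outstanding borrows, so mutation is allowed
    assert_eq!(v, [1, 2, 3, 4]);
}
```

So code holding only shared references gets immutability guarantees "for free", which is the FP-like property being described.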


The entire "idea" of Rust is the ability to achieve memory safety by mutation XOR multiple references, which is a different way to achieve the benefits of referential transparency/immutable data structures without the loss of performance.

Rust is not a "functional language" in that sense, but that was not the claim made, which is that Rust is heavily influenced by FP. This is most clearly seen in the trait system (typeclasses) and iterator patterns.

Influence doesn't mean you're doing exactly the same thing. If I make a rock band influenced by classical music, that doesn't mean I'm doing classical music, but I'm still very obviously influenced by it.


Most (all?) dependency management systems are single threaded and download thousands of tiny files one… at… a… time…

I have gigabit internet and I’m lucky if some package manager can get more than a couple of megabits of throughput.

Most industries would never accept less than 0.5% efficiency, but apparently software developers’ time is just too expensive to ever be “wasted” on frivolous tasks like optimisation.

I kid, I kid. The real problem is that the guy developing the package manager tool has the package host server right next to him. Either the same building or even a dev instance on his own laptop. Zero latency magically makes even crappy serial code run acceptably well.

“I can’t reproduce this issue. Ticket closed, won’t fix.”


I'm really happy with Meson, as a lot of Wayland (the new display protocol for Linux, the "successor" of X11) apps seem to be built in C, so using Meson is super simple and I don't have to worry about tooling (I don't deal much with C/C++, so let me make my change and run away please).

Rust is the same; you can even define a nightly version if you want, so the correct version is run via rustup. It's fantastic, and I can contribute much more easily to projects without worrying about tooling.


> I don't deal much with C/C++

that is because there is no such thing.


the slash symbol is often used to denote multiple entities. For example, "I don't deal with foo / bar" usually means that the commenter doesn't deal with either foo or bar. The commenter is not claiming that foo and bar together constitute a single entity.


C/C++ is just 1 in the limit, right?


two things wrong with your statement:

- the ++ operator only acts on integer types, not floats or doubles, so there is no limit to speak of here

- the expression "C++" has value equal to C before incrementing, hence the expression "C/C++" is just one for positive C, even when C is small


Hey, wait a minute, ++ is defined for floats.

It might be a bad idea to use it in many cases (since there are values for which the result is just rounded back to the original value), but it works!


So, luckily my mistakes cancel out -- C/C++ = 1 always, so it must also hold in the limit. Once we figure out how to define the limit.


> the ++ operator only acts on integer types

no, I believe it works on pointer types and enums as well


It is also defined for floats.

Using it seems like a bad move though -- for large values it can round back to the input value.

Indeed, the following stupid test program works, although it may heat up your laptop slightly.

    #include <stdio.h>

    int main(void)
    {
      float C   = 1.0f;
      float Cin = 0.0f;
      int i=0;

      while (Cin != C)
      {
        i++;
        Cin = C;
        C++;
      }
      printf("%i %e\n", i, C/C++);
    }
And, it finally lets us confirm what the mathematicians never could. When does the limit happen? 16777216. No further questions.


The only problem I had with F# tooling was getting my dev env set up. The only help I needed from the editor was navigating between classes/functions, plus build/run.

Even refactoring was easier because types are sometimes left to be inferred rather than named everywhere. F#'s type inference being weaker even helps with both compile speed and readability, since annotations are needed both to help the compiler and the reader.

Perhaps on larger projects other things become important, but I got the sense that it's on the devs to name things well, use type annotations where helpful, and otherwise document non-obvious aspects.


There are plenty of non-academic languages with tooling issues as well. I think the larger issue is simply if the language has significant usage in a large corporation who can sponsor tooling development, or not.

Most tooling issues are pretty minor for small apps, it’s once one employs scores of developers that the lack of tooling begins to hurt (and by hurt, I mean cost money).


> I seriously couldn't figure out the idiomatic Ocaml workflow/developer inner loop after more than a day of struggling.

I tend to agree, but compiling C++ isn't just about typing "make" either. And it did take me more than one day to figure out the Python/JS workflow.


> Functional programming won't succeed until the tooling problem is fixed.

I think different people have different wants and needs with tooling. I make (and use) binaries with Haskell. I wish more mainstream languages could make binaries.


I don't know any purely functional languages[], so my viewpoint is skewed here. Whenever I have seen a python developer drink the functional kool-aid they end up either storing state in global variables, environment variables or in a dictionary they end up passing to every function. I then have to explain to them that passing a dictionary around and expecting variables to be in it is just half-assed OOP.

My rule of thumb is anything that needs state should be in a class, and anything that can be run without state or side effects should be a function. It is even good to have functions that use your classes, as long as there is a way to write them with a reasonable set of arguments. This can let you use those functions to do specific tasks in a functional way, while still internally organizing things reasonably for the problem at hand.
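As a sketch of that rule of thumb in Python (the names here are invented for illustration): the state lives in a class, while the calculation is a pure, independently testable function that the class merely calls.

```python
from dataclasses import dataclass, field

def total_price(prices, discount):
    """Pure function: the result depends only on the arguments."""
    return sum(prices) * (1 - discount)

@dataclass
class Cart:
    items: list = field(default_factory=list)  # the state lives here

    def add(self, price):
        self.items.append(price)

    def checkout(self, discount=0.0):
        # the class delegates the stateless calculation to the pure function
        return total_price(self.items, discount)

cart = Cart()
cart.add(10.0)
cart.add(5.0)
print(cart.checkout(discount=0.1))  # 15.0 * 0.9
```

Here total_price can be unit-tested or reused in a functional way without ever constructing a Cart, while the class keeps the stateful bookkeeping in one place.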

The minute people start trying to force things to be all functional or all OOP, then you know they've lost the plot.

[] I have been wanting to learn lisp for over a decade, I just never get around to it.


I'm mathematically trained, so in the beginning I really liked writing Python programs in a pure functional style. Until I ran into this issue.

But adopting OOP doesn't mean one has to give up on the pure functional paradigm; there's a book which is basically about how you can incorporate more of the functional paradigm into OOP.

One way to look at it is (perhaps trivially) that methods are just functions, and a class is just something with properties. So in a sense a class defines a type, and the methods/functions expect this type.

In Python I find that properties (and the related cached properties), dataclasses, etc. can really make the above more apparent in construction. To give just one example, a property that returns a pure function acts practically the same as a method (though unfortunately not in docs and related auto-completion kinds of things).

Yet another way of looking at this is to treat a Python class as single dispatch on the first argument only.

I find thinking this way enables me to reap the benefits of both OOP and pure functional paradigm.
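A hypothetical sketch of that "single dispatch on the first argument" view, using `functools.singledispatch` from the standard library (the shape classes are made up for illustration):

```python
from functools import singledispatch

class Circle:
    def __init__(self, r):
        self.r = r

class Square:
    def __init__(self, side):
        self.side = side

# A free function that dispatches on the type of its first argument --
# behaving much like a method, but living outside any class.
@singledispatch
def area(shape):
    raise NotImplementedError(f"no area for {type(shape).__name__}")

@area.register
def _(shape: Circle):
    return 3.14159 * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.side ** 2

print(area(Square(3)))  # → 9
```

The same call site works for any registered type, just as a method call would, which makes the "methods are functions dispatched on their first argument" framing concrete.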


I strongly agree. Rule one: avoid and simplify state (e.g. recalculate data if it's cheap instead of updating it when any of its inputs change). Rule two: if you need state, keep it coherent by tightly managing it (go through functions that maintain the invariants) and exposing as little as possible. That is where classes come in.
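A tiny Python illustration of rule one (the class and names are invented here): derive the value on demand instead of caching and hand-updating it.

```python
class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h  # the only stored state

    @property
    def area(self):
        # recalculated on every access: cheap, and it can never go stale
        # the way a hand-maintained cached value could
        return self.w * self.h

r = Rect(3, 4)
r.w = 5        # no extra bookkeeping needed anywhere else
print(r.area)  # → 20
```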


Pure functional programming is orthogonal to that. In other words: you can have your class that contains maximally simplified state, no problem. Extending this to be purely functional means that, in addition to everything else you said, the calls to the class that manages the state now need "special treatment", in the sense that you can't merely call them; you also have to explain what should happen if there are multiple calls. I.e. the order of execution no longer depends on the order of lines of code, but is defined through the means/syntax that the programming language gives you.


Could you ELI5 or perhaps give an example? I’m not sure I understand.


It's not easy but I'll try:

    counter = new Counter

    currentValue = counter.value
    newValue = currentValue + 5
    counter.set(newValue)
In most programming languages, each of those lines is executed sequentially. Therefore we are used to it.

If this is just a script then it's simple, and pure functional programming (PFP) has no benefits here. The reason is that the order of calls is always the same (it's the same as the order of the lines).

Things change when the order of calls is not static anymore but becomes dynamic. Think about a webserver, or any system that receives calls from the outside -- or has something "running" like a cron job.

In that case, you can't just look at the lines of code to understand how the program operates. You now have to simulate not only the state, but also the accesses/changes to the state (including external state).

Here PFP comes in, making those things explicit and therefore decoupling them from the order of lines of code.

In the example of a simple script, this is just annoying because we now have to be explicit even though we know everything should be ordered like the lines of code:

    counter = new Counter

    currentValue = counter.value
    newValue = currentValue + 5 // does not compile, because currentValue now is an "effect"
    counter.set(newValue)
currentValue is now an action/effect that might be run at some point or maybe not. Therefore we have to rewrite it:

    counter = new Counter

    currentValue = counter.value
    newValue = currentValue.onceItHappenedModify(value -> value + 5)
    // newValue is now also an effect
    updateCounter = newValue.onceItHappenedExecuteOneMore(value -> counter.set(value))
    updateCounter.execute()
   
In the end we have to execute the "updateCounter" effect, because until this point it is just a data structure. A blueprint for an execution, if you will. However, in PFP we don't actually execute it ourselves -- that's the whole point! We just pass the blueprint around and it gets bigger and bigger, until the point where we return it as a data structure to the main method. And then the programming language's runtime executes it!

If you find that complicated, you are right. That's why PFP only works in languages that support this concept and make it ergonomic. I often use languages that don't (e.g. TypeScript), and in those I don't use this technique because it has more drawbacks than benefits.

Anyways, it becomes more interesting once things happen in parallel/concurrently and from different points in the application. The reason is that when you work with those blueprints, you are forced to explicitly combine/merge effects.

You can, for instance, do this:

    fireRockets = fireRocketsEffect()
    activateLasers = activateLasersEffect()
Nothing has happened so far. We only created two blueprints. In other languages, things would be running already, but not here. We now explicitly have to decide how to run those:

    fireRockets.onceItHappenedExecuteOneMore(activateLasers)
or

    activateLasers.onceItHappenedExecuteOneMore(fireRockets)
or

    activateLasers.executeAtTheSameTimeAs(fireRockets)

And so on. As you can imagine, you quickly end up with combinators for e.g. running a list of effects in parallel, or sequentially, or in parallel but with at most X at the same time, and so on.

I hope that explanation makes sense. I found it hard to grasp without actually building something myself.
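For the curious, here is a minimal Python sketch of the same blueprint idea (all names invented, mirroring the pseudocode above): an effect wraps a zero-argument function, and nothing runs until `execute()` is called.

```python
class Effect:
    """A description of a computation -- a blueprint, not a result."""
    def __init__(self, thunk):
        self._thunk = thunk  # zero-argument function, not yet run

    def once_it_happened_modify(self, f):
        # returns a bigger blueprint; still nothing has executed
        return Effect(lambda: f(self._thunk()))

    def once_it_happened_execute_one_more(self, f):
        # sequence this effect, then the effect produced from its result
        return Effect(lambda: f(self._thunk()).execute())

    def execute(self):
        return self._thunk()

counter = {"value": 0}
program = (Effect(lambda: counter["value"])
           .once_it_happened_modify(lambda v: v + 5)
           .once_it_happened_execute_one_more(
               lambda v: Effect(lambda: counter.update(value=v))))

print(counter["value"])  # → 0  (program is still just a data structure)
program.execute()
print(counter["value"])  # → 5
```

Real PFP runtimes add a lot on top of this (error handling, concurrency, resource safety), but the core trick -- building and combining descriptions instead of running code line by line -- is the same.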


Imo functional programming is one of those things that makes sense from a theoretical perspective, but comes with compromises when it comes to reality.

The thing about functional programming is that the confidence you get from immutability comes at the cost of increased memory usage thanks to data duplication. It's probably going to create a ceiling in terms of the absolute performance which can be reached.

There are just other ways to solve problems like NPEs and memory safety, such as ADTs and ownership rules, which don't come at the same cost as FP.


This is actually an area where there’s room to improve FP languages.

If you track ownership in functional languages, you can statically determine if a value is used more than once.

If it’s only used once, you can apply any updates to that value in place without allocating more memory.

This gives the performance benefits of mutability with the safety benefits of immutability, in some common cases.

The main trick is adjusting memory layouts accordingly. You can keep it simple by only applying this optimisation for functions of A -> A, or if you’re replacing it with a different type you can analyze the transformations applied to a value and pre-emptively expand the memory layout.

If a value is likely to be used only once, but might be used multiple times, you can also apply the same approach at runtime by reference counting and updating inplace when there’s only a single reference (for functions of A -> A at least).

I believe the Roc folks are aiming to have aspects of this functionality, and I also believe there’s similar stuff becoming available in Haskell under the guise of linear types.

Finally, if you really need a shared mutable value, that can be achieved with mechanisms like the State type in Haskell.

In short, the pieces are there to create a functional programming language that doesn’t introduce needless memory usage overhead, but I don’t think anyone has put all the pieces together in a convenient and accessible way yet.


I get that not everyone does, but a large part of why I use Clojure is because it makes a whole class of concurrent designs easier. In particular, sharing that immutable data across multiple threads.

As a simplified example: one thread modifies the data, and another thread writes a snapshot of the data.

In the programs I write, it would pretty much never benefit from this optimization.


This article tries to push FP as a solution to NPE??? What the?! NPEs are a problem due to dodgy type systems... it's a type system problem, not a language paradigm one... i.e. it has no relation to whether a language is functional or not.

For example, Dart 2 is null-safe! No one in their right mind would claim Dart is a FP language. Even Java can be null-safe if you use some javac plugin like the Checker Framework.

Also, a language can totally be functional and yet suffer from NPE, like Clojure or Common Lisp, but I suppose the author may be forgiven here because they are talking only about "purely functional programming languages"... (they didn't mention "statically typed" though, but that's clearly implied in the content)...

I believe the author is inadvertently pushing for two things that are pretty much unrelated to FP, even if they are a requirement in most purely-functional languages:

* immutability

* strong, static type systems

I would mostly agree with both (except that local mutability is fine and desired, as anyone trying to implement a proper quicksort will tell you - also, see the Roc language[1], which is purely functional but uses local mutability), but I write my Java/Kotlin/Dart just like that and I wouldn't consider that code purely functional. I don't think whether the code is purely functional actually matters much at all compared to these 2 properties, which can be used in any language regardless of their main paradigm.

[1] https://www.roc-lang.org/
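For instance, the immutability half is available off the shelf in plain Python -- hardly a purely functional language -- via frozen dataclasses (example names invented):

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
try:
    p.x = 99.0               # any attribute assignment raises
except FrozenInstanceError:
    print("immutable")       # → immutable
```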


If the first thing you talk about is performance and not quality and maintainability, then you're already missing the point. Most software just isn't in some super high-perf environment - what matters is fewer bugs, easier maintainability, better communication with other engineers (through declarative code).

The code we work on in the 2020s is much, much more complex than code written 20 years ago. We need better primitives to help our weak and feeble brains deal with this complexity. FP (particularly pure FP) gives us that. It isn't a panacea, but it's a major step in the right direction.


I just disagree that performance isn't a concern in almost every context. There's a hierarchy of concerns, to be sure, and if you haven't written reliable code which solves the problem yet you shouldn't be worried about performance, but if your PL itself imposes a performance tax, that's something which has to be paid every time your program gets executed, by every user.

As programmers, our job is not to play with abstractions; it's to move electrons and make hardware do things. We can't afford to abstract away the complexity of the hardware completely. Indeed, the trend in PL popularity over the past 20 years has been to move back closer to the hardware and away from highly abstracted environments like scripting languages and the JVM.


But Python/Ruby (and JS? Not sure how far the optimization there has gone or what the comparison would be) are very slow compared to Haskell. So people are paying that price all the time without getting any of the benefits that Haskell (etc.) can offer on top of it. I agree with the GP; performance is really not very interesting for most projects, and most programmers/companies (Python/JS are the top dev languages by far, I think) are agreeing with that by using low-performance environments that make them productive. So productivity seems to win out.

For the sake of the environment and of hardware upgrades, I think we definitely should make an effort, and we can see that improvements in compilers and PL theory do help when the goal is a practical programming language using these techniques; Rust does this, while Haskell was meant as an academic language for a long time.

I think robustness/security should go first anyway as hierarchy of concerns; that's where things are really breaking now.


Python and JS are top programming languages for very specific reasons. Python because:

1. It has a low barrier of entry for non-programmers, which makes it suited for applications like data science and ML.

2. There is vast library support for math and science, which makes it basically the only choice for those domains.

But python is basically used as an API for highly optimized C libraries since any kind of a hot loop in python is basically an anti-pattern unless you own stock in energy companies or hate getting results quickly.

JS has its place because until very recently it had a monopoly on web front-end development, which is one of the largest programming domains.

So for both of those examples, it's the use-case determining the PL, not the programmer.


Can you give some examples of increased code complexity over the last 20 years? I am blanking on that.

I have noticed a lot more ops complexity and additional library usage, but not complexity in the code I am responsible for.


Why do you think that functional programming results in data duplication? I would think it's rather the opposite.

With strong immutability like in Haskell, you can share values even between threads and can avoid defensive copying. Two versions of the same immutable tree-like data structure can also share part of their representation in memory.

(Haskell has other problems causing increased memory usage, but not related to data duplication in my mind)


> Why do you think that functional programming results in data duplication? I would think it's rather the opposite.

I suspect it has something to do with the perceptions around always returning a new thing from a function rather than the mutated input to the function. For example, if you need to mutate the property of an object based on other inputs, the perception is you would clone that object, modify the property appropriately, and then return the cloned object.

[Edit: formatting]


So, ELI5: What do you do instead, where you return a modified object, but still have immutability? Or do you avoid the problem by not trying to do that?


In most situations, you use persistent data structures where you only have to copy the modified leaves.

If you really need your type to be backed by a contiguous block of memory, you batch updates (stencils, SIMD, etc.)
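A rough Python sketch of path copying in an immutable binary tree (the structure and names are invented): an update rebuilds only the nodes on the root-to-leaf path, and untouched subtrees are shared between the old and new versions.

```python
class Node:
    def __init__(self, left=None, right=None, value=None):
        self.left, self.right, self.value = left, right, value

def updated(node, path, value):
    """Return a new tree with the leaf at `path` ('l'/'r' steps) replaced."""
    if not path:  # reached the leaf: copy just this node
        return Node(value=value)
    if path[0] == 'l':
        return Node(left=updated(node.left, path[1:], value), right=node.right)
    return Node(left=node.left, right=updated(node.right, path[1:], value))

old = Node(left=Node(value=1), right=Node(value=2))
new = updated(old, 'l', 10)

print(new.left.value, old.left.value)  # → 10 1  (old version intact)
print(new.right is old.right)          # → True  (right subtree is shared)
```

For a balanced tree of n elements this copies O(log n) nodes per update rather than the whole structure, which is the trade-off being discussed in this subthread.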


But if you modify the leaf, don't you also have to modify the branch node that points to the leaf, so that it points to the new one? And every node from there to the root?


Yes, you (or rather, core library developers) need to pick a data structure appropriate to your access patterns: linked lists for iteration, search trees for concatenation, finger trees for random access, and so on. But you should be doing this anyway for clarity, even if you face no performance constraints.


Not the poster you’re responding to but I think they’re referring to the current need to allocate more memory when updating immutable data structures.

Not that there aren’t ways to represent mutability in Haskell, just that the de facto use of immutability causes excess allocation.


Efficient functional programming often uses tree-like data structures. These can be immutable but still avoid duplication.

Consider if you "duplicate" a Data.Sequence Seq (finger tree) for modification. You're not actually copying the whole structure, you are creating a new root node and re-using as much as possible of the common structure.

The end result is that a bit more memory is used in the simplest case, but not due to duplication I think.

The benefit is that a thread can make a modified value at cheaper cost without affecting another thread that is still using the original value. I also think it's easier for the programmer to understand the code.


Won't this still result in a lot of fragmentation? I.e. won't you have disjoint allocations for those new branches of the tree? Sounds pretty cache-unfriendly.


In a strict language or with poorly optimized lazy code, yes. If you can get good stream fusion, not really. If your code fuses really well (common for lists, harder elsewhere) the whole thing will happen on the stack.


That's duplicating part of the structure. That uses more memory than just modifying a value in-place, but less than duplicating the whole tree.


Sure, but let's assume that the program has more than one thread and that another thread could still be using the old value. In that case, an imperative program might be required to copy the whole structure or sleep until the existing users are done, which is often less efficient and is always more complicated.

If it's ok to support only a single concurrent user of the value, then a mutable structure is indeed more efficient. Even in Haskell we have mutable structures that can be used for such purposes.

The interesting question to me is, what should be the default? I think there is a good argument that it should be the safer, simpler immutable structures.


"Simpler". If you're only single-threaded, the mutable structures are simpler.

If you're multi-threaded, you have to choose: Do I go with immutable, which means that the other thread gets something safe? It also means that the other thread gets something stale (out of date), which can have its own consequences.

Or do I use a lock or mutex, which means that I have to place guards in all the right places, which is error-prone and can break in weird ways if I miss one?

Or do I use mutable data without guards, and just let it crash? (Because it's probably going to crash if I do that...)


> Imo functional programming is one of those things that makes sense from a theoretical perspective, but comes with compromises when it comes to reality.

Ironically, what you say is true in theory, but not true in the real world.

Source: real world Senior Haskell programmer


My problem with functional programming is refactoring. Without side effects, when you want to do something deep in the call tree to data that lives in a completely different branch, you have to extract it all the way up to the common parent and pass it through all the intermediates just so that one function deep down can access it.

It's incredibly frustrating when you work in a functional language, and yet it's the main benefit (no side effects).

I'd like to have a language that is imperative when written and functional when read :)


> I'd like to have a language that is imperative when written and functional when read :)

Depending on the language, monads give you exactly that bridge between imperative and functional.

In your example, you can always choose to have side effects in that deep call.

Personally, I like a type-driven approach. Then I do not care so much where functions are (they will be grouped logically, but could be anywhere), as long as the types in and out match.


I've handled this (in Clojure) either by passing a context map through the entire stack or binding some context atom at the top and using it lower down the stack. The binding is less obvious at first glance, so I prefer to pass the context, but both make testing quite easy and reduce the need for heavy refactoring.


I like that aspect because it gives you a hint that your call tree is not how it should be. Lifting out side-effects and passing the data back in is one approach, but not the only one.

Turning a deep call tree into a flat pipeline is another popular approach and often leads to less complexity and better composability.


I like it when I'm done. I very much dislike it when I'm not sure yet what my code should do so it changes constantly. And that's most of the time in gamedev for example.


Immutability does not even have to be bound to functional programming. One can use persistent (= immutable from the user's perspective) data structures, like immutable.js and get most of the benefits.

Moreover, persistent data structures can be optimized well (see Clojure), so that the performance issues may be relegated to the inner loops, as usual, and the rest of the program may use safe-for-sharing data structures.


The extent to which immutability leads to duplication seems a matter of implementation rather than a principle.

The compiler/runtime could optimize such that memory is reused, as long as all other laws are obeyed.


To some degree. But it really is the case that a persistent functional data structure is going to have a slower insert operation than a traditional mutable set. There's no getting around that.


https://hackage.haskell.org/package/containers-0.6.5.1/docs/...

log(n) slowdown to add at the end

but the cost to add in the middle is cheaper than for a plain array (see insertAt)


The asymptotic behavior is not the entire story. Because persistent data structures almost necessarily need to have their data allocated non-contiguously they have terrible cache performance and prefetching/speculation behavior in comparison to data structures that take advantage of contiguous memory locations.

I also mentioned sets, not lists.


Not that this answers all your objections (it does not, caching might be a problem!)

But sets are also a mere log(n) away

https://hackage.haskell.org/package/containers-0.6.6/docs/Da...


Duplication problem is solved by immutable data structures with structural sharing. They are part of Clojure standard library and exist in some other languages.


From my experience, if you are not dogmatic/extremist about it you can gain a lot, so I try to do things in a functional way in my non-functional languages.


Yeah I totally agree. I think writing code which is "functional where possible" is super powerful and offers a ton of advantages. Trying to cover that last 20% of cases where you have to bend over backwards to create a performant functional solution, to solve a problem which is trivial with a little bit of mutability, doesn't make sense.


The other problem with functional programming is it's harder to look at code and figure out the time complexity. At least with non-functional languages I can easily figure out why there are performance problems with it. Stuff like lazy evaluation may seem cool, but it's not when you have to figure out why something's slow.


That's a Haskell issue, not a functional programming issue. Haskell is pretty much the only functional programming language which is lazy by default; it's the sole one, for the reason you give. All the others are eager by default, and you can opt in to lazy evaluation when needed.
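Python makes the same eager-by-default, lazy-on-request trade-off: expressions evaluate immediately, and you opt in to laziness with generators (a small sketch):

```python
import itertools

def naturals():
    n = 0
    while True:  # an "infinite" sequence is fine: nothing runs yet
        yield n
        n += 1

squares = (n * n for n in naturals())            # still nothing computed
first_five = list(itertools.islice(squares, 5))  # forces exactly 5 elements
print(first_five)  # → [0, 1, 4, 9, 16]
```

Because the laziness is opt-in and syntactically visible, it is easy to see where deferred work (and its cost) lives, which is the reasoning-about-performance point made above.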


In addition to sibling’s recommendations, consider OCaml too for a strict language that is vaguely similar to Haskell and performs similarly well in benchmarks:

https://cs3110.github.io/textbook/cover.html


This is lazy vs strict, not functional vs whatever. I'm a fan of functional programming, not so much a fan of lazy evaluation. Look at PureScript or Idris perhaps?


Data duplication is coming to all mainstream languages these days, whether as a community best practice or a requirement of some libraries (e.g. streams in Java).

I'm not saying that in-place operations don't have their uses, but to me these use cases look more and more like niche cases, as opposed to being the default choice as in the past. That is well reflected in new languages like Rust, where a new binding is immutable by default and must specifically be flagged as mutable in order to be changed.

It means that the benefits of immutability and the performance tradeoffs people are willing to take are evolving. I would assume larger codebases and faster hardware means performance is less valuable, and clarity comes at a premium.

I do agree with you that NPEs and memory safety are unrelated problems, though.


Data duplication is a tool which can be used to avoid large classes of bugs, but there's no question it comes at a cost to performance.

If anything, I would say explicit mutability allows us to decrease data duplication. By being explicit about what's mutable and what's not we don't have to resort to holding "safe copies" to ensure shared memory is not being mutated unexpectedly.


I will certainly be downvoted for this, but I want to be honest so here it is anyway:

In all my programming years, 20+ years that is, I've met hundreds of programmers, and 95%+ of them handled imperative programming languages just fine, with very few actual bugs coming from each one.

Each time there is such a conversation, I have yet to see some actual concrete proof that functional programming provides a substantial increase in productivity over well-implemented imperative code.

In other words, I am still not convinced about the merits of functional programming over imperative programming. I want some real proof, not anecdotal evidence of the type "we had a mess of code with C++, then we switched to Haskell and our code improved 150%".

Lots and lots of pieces of code that work flawlessly (or almost flawlessly) have been written in plain C, including the operating systems that power most of Earth (i.e. Unix-like operating systems, Windows, etc.).

So please allow me to be skeptical about the actual amount of advancement functional programming can offer. I just don't see it.


I think personal experience with multiple paradigms is essential for a developer to decide which one they prefer.

The opinions of programmers who have only used one paradigm are less than worthless, since they are demonstrating a basic lack of curiosity and lack of willingness to invest in their craft.

You can always find excuses not to learn.


I agree fully.

My viewpoint, as a functional programmer, is software is a young field.

We don't know much of anything about it.

We're living in the first 100 years after the Gutenberg printing press.

All I know is NLP code generation will be a dramatic change.

Everything else, we are still figuring out.


I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all.

The reason why multiple paradigms exist is because here in the real world, the competing issues and constraints are never equal, and never the same.

A big part of engineering is navigating all of the offerings, examining their trade-offs, and figuring out which ones fit best to the system being built in terms of constraints, requirements, interfaces, maintenance, expansion, manpower, etc. You won't get a very optimal solution by sticking to one paradigm at the expense of others.

One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)


This is just an appeal to the law of averages. I don't believe that you're actually considering ada, cobol and forth for new projects.

> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

This is never really a reason for anything, it's a personal attack on people who advocate the thing that you don't want to do. FP people are not bullying you, or shoving anything down your throat.


To be charitable to kstenerud, FP advocacy does often sound like proselytizing. When promoting FP to folks, I think we can mitigate that issue by communicating a couple points:

1. There's no dichotomy between FP and OOP. Sprinkle it into places where it makes sense to you. Adoption can come by degrees.

2. Just thinking in composable, pure functions on projects buys you serious mileage. You don't have to wrangle monads (or worse, victimize team members with advanced FP ideas). Just KISS.

FP often feels to non-practitioners like a pretentious and byzantine fad. We'll have better luck with widespread adoption if we can be inviting and dispel those negative associations. We'll seem less zealous if we can frame FP as a practice to dip one's toes into, rather than a religion to be submerged & baptized into.


I think a lot of the "religious" feeling I get from FP advocates is the constant focus on purity and the "No, you AREN'T DOING IT RIGHT!!!" kinds of tantrums. You HAVE to understand the nuance. You HAVE to participate in these strange little paradigms that esoteric thinkers have come up with. Ultimately, I think of FP languages like Haskell as playgrounds to pilot bleeding-edge ideas before the best ideas get incorporated into languages that are widely used in production.

Ultimately, some real-world processes are just procedural, and procedural languages are easier to apply to them. When it comes to callbacks, mapping functions to a list, etc., it's pretty clear to me that FP paradigms are superior.

In other words, I think it's part fad/religion, but as with anything, you can learn from it and take the good parts.


Most languages have already taken the good parts. We barely even consider them a "FP thing" anymore. Callbacks and map() are just things you can do in most languages.

What remains when people advocate for more FP is the more esoteric stuff. (And non-nullability, which inexplicably gets associated with FP surprisingly often despite being orthogonal to everything else in the paradigm.)


Problem is, it's hard to distinguish the chaff from the wheat. A lot of practices advertised themselves as "the way of the future", and most of them faded into nothingness.

At some point people try to save time and stick with practices that stand the test of time.


> Just thinking in composable, pure functions on projects buys you serious mileage

This is exactly where I was called a zealot and gave up, because it was obvious the other person felt attacked, as if I had told them "you are an idiot for using a procedural and imperative style all your career", which I absolutely didn't say or even imply.

Ultimately, selling someone on FP is selling them a new mental model, and I think a lot of us underestimate how hard that is. People by default don't want to change the way they think; it's a fact of life.


> FP people are not bullying you, or shoving anything down your throat.

Have you ever worked with an FP evangelist? Every single experience I've had has been with a person who simply won't take "no" for an answer. The impatience, ego, and pettiness are second to none, really.

Aside from that, there are objective reasons to be skeptical: it's inefficient when it comes down to actual implementations that have to work on actual computers and solve actual problems ("but but tail recursion and copy optimizations lolol you moron you just don't understand it"). Also, most FP tends to become much more complex than an equivalent imperative program solving the same problem, usually through a terrible confluence of code density and mazes of type definitions.


As someone who's left Scala and come back to .NET land... depending on the domain, functional programming results in much cleaner solutions. A lot of the code messes in functional paradigms that I dealt with, and also created, were mostly due to people who didn't understand how to think functionally. It took a long time for me to figure it out. I find myself missing features all the time now, and spending a whole lot more time fixing bugs that simply wouldn't exist had people just applied more functional principles in their code. I understand where you're coming from with your frustrations, but I would personally attribute that to the LISP culture problem that I also saw with the hardcore functional programmers. Everyone would just nerd-snipe themselves into re-inventing the wheel.

Also, my .NET contemporaries on average do not care as much about code quality as people did when I was in Scala land. But this might just be a company culture thing.


This rings true - experiencing different paradigms opens you up to more elegant solutions. That's why years later I now see the value of the "Programming languages" course I had in university - every lecture covered a new language and had homework to implement a program in it. I haven't touched LISP or Prolog since, but just being exposed to them made me write better code in "traditional" languages.


Agreed. I took a year (long ago) to learn lisp just to be exposed to a different way. And it has paid dividends in terms of providing alternate ways of thinking about code in other languages.

That said, I'd never want to use lisp in a real production project. Same for FP. Get exposed to these things, take the learnings and bring them into KISS boring production code that is fast and maintainable.


.NET, and C# especially, are absolutely fantastic tools for programming. The problem is just that they are embraced by large companies, and that is where crappy engineers tend to congregate, in my experience. The bigger the company, the easier it is to get lost in the crowd.

I enjoy functional programming, but I'm a wizard with C# and .NET, and I can pull a much higher salary continuing to focus there. I can write my own fun stuff in Haskell for myself and will continue to enjoy corporate money.


It's also laziness and controversy. Unit testing is something aggressively pushed in many of these places, but then they continue to litter code with side effects, long flows, loads of complex objects, void methods, and more, which inherently make it more difficult to test. FP guidelines help with a lot of these cases, and you don't need to understand monoids, monads, or any of that to grasp the concept of simple functions with clear output.

Meanwhile, FP is kind of a pain to connect things in, for most people.


Before learning FP I used to ask myself: "Assuming that by tomorrow my memory gets erased and my IQ diminishes by 50 points, will I understand this code?".

That's why I try to juggle as few variables as possible at the same time, while trying to have my code read like a cooking recipe or very simple literature. From that, FP feels almost natural.

Even more so, I think that most software bugs and spaghetti code are made by people who were too confident in themselves. They thought they could handle a lot of random unrelated variables floating around all the time, while having their functions do a lot of random algorithms at the same time.

And yes, they could handle it. The first time. But, in the long run, most people forget, or they leave and another person ends up having to maintain it, and that super brief-and-clever code becomes an unmaintainable mess.


I'm trying to push unit testing, as a means to get rid of some of those negatives you pointed out. But it's not a panacea. And obviously, I don't want to test for the sake of it either.


I'll be honest, most of the unit tests and integration tests I write are mostly because it's more fun than cranking out "features"


... there are lots of optimizations available when you have immutable types and no assignment operator if you're curious.

Pure, immutable data structures might not be the right choice if you don't have the right language/tooling/runtime to take advantage of the benefits but they're there.
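For the curious, one such optimization can be sketched even in plain JavaScript: because nothing can mutate a frozen record, an "update" can safely share the untouched parts instead of deep-copying them. (A hedged sketch; languages with real persistent data structures do this pervasively and far more efficiently than a hand-rolled spread.)

```javascript
// Structural sharing: an immutable "update" reuses unchanged sub-structures.
const user = Object.freeze({
  name: "Ada",
  prefs: Object.freeze({ theme: "dark", lang: "en" }),
});

// Build a new record; the old `prefs` object is shared, not copied,
// which is only safe because neither record can be mutated.
const renamed = Object.freeze({ ...user, name: "Grace" });

console.log(renamed.name);                 // "Grace"
console.log(user.name);                    // "Ada" -- original untouched
console.log(renamed.prefs === user.prefs); // true  -- shared, not copied
```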


I have spent the time with Haskell to learn how to not just use it a bit, but program it idiomatically and with some fluidity. I think there is a very interesting point you can reach if you go 100% all the way in. There are some types of programs that you can write, like compilers, that are kinda painful and dangerous and quirky in imperative or OO programming and are just beautiful in full on, no compromises functional programming.

I am waaaaay less impressed by the half-assed "I borrowed a few paradigms and bashed them into an OO or multi-paradigm language but it's still fundamentally imperative" approach.

Reconceptualizing your entire program as consisting of recursion schemes and operations that use those recursion schemes (what I think is the deep, true essence of functional programming as a paradigm) is very interesting and has a lot of benefits that are hard to obtain any other way, along with some costs you might never otherwise think about if your mindset is too deeply rooted in one particular world.

Rewriting a single for loop with five maps, reduces, or filters is nothing. It's a parlor trick. It buys you nothing of consequence. For avoiding bugs I almost never have, this requires a lot of extra syntax and costs huge runtime performance unless the compiler is smart enough to optimize it away.
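For concreteness, here is the kind of surface rewrite being called a parlor trick (a generic JavaScript sketch with invented names; both versions leave the surrounding program architecturally imperative):

```javascript
// Imperative version: explicit loop and accumulator.
function evensDoubledLoop(xs) {
  const out = [];
  for (const x of xs) {
    if (x % 2 === 0) out.push(x * 2);
  }
  return out;
}

// "Functional" surface rewrite: same logic, same architecture underneath.
const evensDoubledChain = (xs) =>
  xs.filter((x) => x % 2 === 0).map((x) => x * 2);

console.log(evensDoubledLoop([1, 2, 3, 4]));  // [4, 8]
console.log(evensDoubledChain([1, 2, 3, 4])); // [4, 8]
```

The two are interchangeable line for line, which is exactly the point: swapping one for the other is not a paradigm shift.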

That doesn't mean it's bad, per se. I use maps and filters as appropriate. In some languages, they're syntactically convenient, and if I'm mapping or filtering with something that is already a function anyhow, the performance issue is moot. I'm not saying this is bad and you should never use those things.

What I am saying is that it is completely uninteresting as a "paradigm shift". Writing architecturally imperative programs with a light sprinkling of map and filter is nothing. Come back to me when your program is no longer architecturally imperative, and the top level of your program is now a two-line expression of a chain of mapMs and <*>s that represents your entire program. Now you have something fundamentally different.

Frankly, I think those who bang on about how vital it is to replace for loops with maps and filter chains have actually managed to miss the entire interesting point about FP. One could perhaps blame FP advocates for not explaining it very well with some justification. But it still is what it is. These people are banging on about how square a brick is, and how hard it is, and how nicely you can use one to bash things open, and how nicely you can polish up a brick if you want, but the point of bricks is that you can put them together to build walls. Sticking a couple of bricks in the middle of a mud hut leaves you with a weird looking mud hut.

The point of functional programming isn't map and filter; it is taking the class of things for which those are only two easy examples (and not even necessarily the most useful in general), and then building your entire program out of that class of things. You end up with something very different if you do that.

The linked article also bangs on about not having null references. Nothing stops you from having an imperative programming language that lacks null references. It is not a good or interesting example of why FP is interesting.

Anyways, upon re-reading, I lost a bit of focus on my original point, which is that you do need to watch out for which of the types of FP are being advocated. I have very different reactions between the two of "consider rewriting your fundamental view of the world to be composing recursion schemes instead of as the stringing together of imperative operations in various fancy ways" and "you shouldn't use for loops". You may not love the first; it is certainly a trip and not necessarily for everyone and everything, but it will at the very least expand your mind. The second is, well, mostly annoying to me because of the vigor of its advocates far out of proportion to either its programming utility or its mind expansion characteristics.


> The second is, well, mostly annoying to me because of the vigor of its advocates far out of proportion to either its programming utility or its mind expansion characteristics.

Thanks to closures, map/reduce function callbacks in Javascript are extremely handy.
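A small sketch of that point (hypothetical names; the closure captures `rate`, so the callbacks passed to map and reduce need no extra plumbing):

```javascript
// Returns a function whose map/reduce callbacks close over `rate`.
function totalWithTax(rate) {
  return (prices) =>
    prices
      .map((p) => p * (1 + rate))       // `rate` captured by the closure
      .reduce((sum, p) => sum + p, 0);  // fold to a single total
}

const total = totalWithTax(0.1);
console.log(total([10, 20])); // ~33 (within floating-point error)
```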

Please link me to something like a stackoverflow answer with "vigor" greater than this, I'd love to read it.


If you want to see this for yourself, be it because you don't believe it or out of honest intellectual curiosity, find a forum (StackOverflow isn't actually the best, since it's not really a "forum") and express skepticism about the utility of map and filter, suggesting that a for loop works just as well. Be polite, but be firm and don't back down.

You will be called names. You will be insulted and told that you just don't understand functional programming, probably because you're dumb.

I know this, because I had to deliberately cultivate a writing style that learned how to avoid this (which is why I start out waving around the fact that I don't just know a bit of Haskell, but actually know Haskell; I would not normally make a deal out of that, for instance). I wouldn't have had to deliberately learn how to write around this if it doesn't exist.

Now, the tradeoff I get is people who don't believe such advocacy exists, because it is no longer appearing in the replies to my posts like it used to. A worthy tradeoff even so, I think.


If you were to write a post about taking an example imperative program and showing the conversion to FP, I’d be very interested to read it. Or if you have a link handy to someone else’s post?

For me it’s difficult to immediately understand what you mean about the top-level program, mapM, etc. I guess what you mean is that the entire program can be expressed as one call with some args? I feel like there is probably more nuance that I am not grasping.

I am curious to add this way of thinking to my toolbox, and I learn best by examples.

I have read things like Mostly adequate guide to FP and whatnot. They always get stuck on side effects and containers, which is fine, but doesn’t really address the larger scope of program design.


In the second part of the "Solving Problems the Clojure Way"[0] talk, the speaker shows a step-by-step transformation of heavily imperative (book-keepish) JavaScript code to a more functional approach.

As a result of the transformation, ~95% of the code is easily unit-testable, and the only impure code is the call to main, which basically starts the dominoes falling.

[0] https://www.youtube.com/watch?v=vK1DazRK_a0


Where are all the FP software products I can use? It doesn't seem to dominate a lot of software niches, no?


How do you distinguish between people who are in a religious fervor and people who have figured out that something is obviously beneficial?


> people who have figured out that something is obviously beneficial?

If it's so beneficial then those people can prove it by building something exceptional. I worked with Haskell and Scala for almost a decade and libraries were full of bugs and performance problems. A lot of bugs just hadn't been reported because so few people were using them. FP certainly has its strengths but so far there's very little evidence that it produces better software in the long term.


In life, there are a lot of obvious things that are very difficult to teach.

For example, I'm a biochemist with 12 years of education, and I can't seem to be able to convince anyone that vaccines are safer than no vaccines.

Humans just don't like new things or changing their mind, and that's a very generalizable fact, even among very smart groups of intelligent engineers.


The typical argument for vaccines is much less shaky than the typical argument for pure FP. For one, vaccine people can actually explain why vaccines are good. FP people seem to mostly just repeat variations on "it's easier to reason about" and showing trivial functions that are generally not easier to reason about.

Imagine trying to get people to accept vaccines, but in a world where we have other types of medicine that usually works well enough for all the things you've developed vaccines for, and the only argument you're allowed to make is "it's obviously better, look at how amazingly liquid the stuff in this syringe is".

You'll still get some resistance, but in our current world, pure FP is much more of a niche thing than pro-vaccine stances. There's a reason for that.


> The typical argument for vaccines is much less shaky than the typical argument for pure FP.

I suspect you aren't that knowledgeable about either.

> For one, vaccine people can actually explain why vaccines are good.

Unless you have a very particular background, I'm suspicious that you can actually follow the frontier of that argument. More likely, you're getting the ELI5 explanation.

> FP people seem to mostly just repeat variations on "it's easier to reason about" and showing trivial functions that are generally not easier to reason about.

Well-designed FP languages reify coherent low-complexity formal semantics. When FP advocates say "reason about", they mean in a (semi-)formal sense. No popular imperative language has any kind of denotational semantics, so good luck reasoning about the compositional behavior of C++ libraries, unless your definition of "reason about" is "think about informally".


> Unless you have a very particular background, I'm suspicious that you can actually follow the frontier of that argument. More likely, you're getting the ELI5 explanation.

People who actually understand complex things are able to provide ELI5 level explanations for people who haven't the background for more rigor. You're presenting a false representation of the OP's comment anyway: it never suggested the explanation for why vaccines are good required "frontier" level discourse.

> Well-designed FP languages reify coherent low-complexity formal semantics.

This is such a great example of what so many of us have noted about FP evangelists. It's so divorced from writing software as to be gibberish.


> It's so divorced from writing software as to be gibberish.

Only if the software you're writing is A) aggressively simplified, so as to be amenable to informal analysis, or B) garbage.

If you're only interested in writing garbage, or perhaps you don't even notice that's what you're doing, then the appeal of FP is significantly reduced.


Minus the tone, I agree with the substance of your comment. In my experience, people who don't care about this don't really understand it either.


> are able to provide ELI5 level explanations

Yes, in theory, and this sometimes works. But it rarely works in general.

In practice, people come to your explanation pre-conditioned with a lot of (often politicized) misinformation, Dunning-Kruger type overconfidence in their own ability, very little curiosity or openness to new ideas, and exhibit the attention span of a 26th percentile squirrel.

People tend to listen very little and are more skeptical of others than they are of their own understanding: instead of searching for ways in which their mental models need adjusting, they try to poke holes in your explanations. They'll repeat whatever objections they've seen or heard somewhere, whether or not the objection is relevant or adequate. This undermines both "ELI5" approaches (because what you gain in simplicity you lose in nuance and correctness) and more pedantic approaches (which require more prerequisite knowledge, experience, or patience).

If you disagree with me because that is not your experience, it's possible you surround yourself with unusually insightful and wise people.


It simply can't be done. It's always possible in theory but never in practice. Or at least certainly not in this special snowflake case.

Until someone bucks the trend and does it.

But for that you need someone with actual intelligence and empathy; a Feynman of that sphere so to speak.

In every gatekeeper community this is the order of things until someone finally destroys the gates to the knowledge and the monopoly of the priesthood. Until then, it's basically "blame the victim" for their own lack of understanding (e.g. "people listen very little", "people are more skeptical of others", "they try to poke holes in your explanations", "They'll repeat whatever objections they've seen or heard somewhere", etc).


Well, I don't disagree with you, but also all of these things you list in the parentheses are just how humans generally behave. They are real obstacles.

And regardless of where we place the blame, at the end of the day:

> In life, there are a lot of obvious things that are very difficult to teach.


> a Feynman of that sphere so to speak

Hopefully people who read QED don't go around thinking that they know all they need to know to make an informed judgement about lagrangian QM.


> there's very little evidence that it produces better software in the long term.

Have you used other software? 99% of it is dogshit.

> libraries were full of bugs and performance problems

This is a really wild claim to me. IDK about Scala, but libraries in Haskell, Ocaml, and Elixir are all vastly better than libraries in C++, Java, Python, despite a smaller userbase.


>figured out that something is obviously beneficial?

Case in point for the problem with FP communication. When you speak in smug absolutes, it comes off as religion.

I'm pretty big into the FP world. I love Purescript. I dig Idris and exploring dependent types. I spend a lot of time at work teaching "functional" thinking. And yet, I still find most "hard core" FP evangelists insufferable for smugly explaining things as "obviously beneficial." ugh.

There is a lot that sucks about pure FP. It's not all upsides. The FP community could do a lot better at "evangelizing" by talking like engineers rather than salespeople.


Where did I say that FP was obviously beneficial? You're reading too much into my question. I was questioning the GP's epistemology.

> talking like engineers rather than sales people.

They do talk like this - go to any Haskell conference and it's 90% engineering papers - much better than e.g. a JS conference. It's just that most people don't even know what PL engineering even looks like, let alone how to interpret it.


> Where did I say that FP was obviously beneficial? You're reading too much into my question. I was questioning the GP's epistemology.

cmon fuckin guy


Care to elaborate? You literally wrote 3 words


> How do you distinguish between people who are in a religious fervor and people who have figured out that something is obviously beneficial?

Seems easy to me.

If it's obviously benefitial, show me:

- Performance benchmarks showing how much faster the compiled code runs

- Case studies demonstrating faster development and/or less time spent debugging

- Studies showing consistently better maintainability

If it is lacking this data, it comes across as religious fervor.


If you believe in the epistemic power of "studies" to convey useful information in these domains, I have bad news for you - you are the one in a religious fervor.

The only semi-objective metric you've proposed is compiled code performance, which is a fairly small component of most people's utility function, especially when the difference disappears in all but numerical-computation-centric applications.


I was once at a distributed systems meetup where a member of the audience interrupted the talk to give an extemporaneous presentation on why we should all be using Haskell.

FP advocates have a reputation for being zealous and pretentious. The only question is whether they’re right to be.


An anecdote of one person with bad manners shouldn't be representative of the whole.

I'd estimate there are three kinds of FP advocates:

1) People (like me) who have experienced personal pain in building or maintaining complex systems in imperative or other paradigms, and are genuinely astounded and relieved when we learn how Haskell and other FP languages can mitigate or eliminate that pain. They tend to advocate FP as a solution to specific problems they've personally encountered.

2) PLT academics who are just generally fascinated with building a mathematical programming language that can elegantly express the most advanced notions in math and logic.

3) People who like to feel superior and rudely interrupt presentations to expound on monads and endofunctors and the like.

Try to focus on and/or associate with #1 and #2 and ignore #3.


> An anecdote of one person with bad manners shouldn't be representative of the whole.

It is so incredibly widespread, it's not just "an anecdote of one person".

The entry-level courses at TU-Berlin where I studied had just been taken over by FP disciples when I started studying, and it was crazy. "Let me tell you about our Lord and Saviour Functional Programming, Hallelujah".

And of course the reality didn't come close to matching the advertising.

And it never does.

Another nice example was in a lecture by SPJ on parallel Haskell, where he says "This can only be done in a functional language.". Audience member: "We've been doing exactly this in HPC for decades. In FORTRAN". Instead of conceding or apologising for the gaffe, SPJ doubles down with something along the lines of "well, then what you are using is an FPL". Jeez. Oh, and when it comes to the results he reveals that the overhead is so high that you need 7-8 cores running full tilt to be equivalent to a single core C program. Jeez.

In general, there is also the widespread phenomenon I call "functional appropriation", where a similarity is noted between some aspect of FP and some other mechanism or paradigm or some such, and then the claim is made that this obviously means that the other mechanism/paradigm is just thinly veiled FP.

Newsflash: let me tell you about Turing machines. Or NAND gates...

For example FRP, "Functional Reactive Programming". Which is really just dataflow, and badly packaged dataflow at that. Or the React people's claim that the core concept or React is that the UI is a "pure function" of the model. Well, it turns out it is not. Not a pure function at all. And not really a function either. Just a mapping. And any UI had better be some sort of mapping of the model, or it's hard to see how it would qualify as a UI.

I could go on (and on, and on, and on...) but I'll stop now.


> Not a pure function at all. And not really a function either. Just a mapping.

A function is a mapping of an input to an output.


But not all mappings are functions.


imho the whole thing about FP is not that it's an objectively superior way of coding, but a way of coding that's better suited to the current high-level, highly distributed world.

There are A LOT of developers, a lot of custom made software and a lot of web applications. And unless you really need performance, most web services are an ideal use case for FP practices. Even if you do it in plain Javascript.

Some people are zealots about it, yes. But imho FP is a better way to trickle down those concepts than telling people to spend decades becoming imperative code masters.


> imho the whole thing about FP it's not about a objectively superior way of coding,

OK.

> a way of coding that's better suited to the current high level, highly distributed world.

But is it objectively better suited? One might even say "superior"? ;-)

Also, I believe FP is actually quite ill-suited to the high-level and particularly the highly distributed world.


> current high level, highly distributed world

That's one area it's well-suited for. Also in error-handling and prevention, and in refactoring, and configuration management. All personal pain points for me previously.


> in a lecture by SPJ on parallel Haskell, where he says "This can only be done in a functional language.".

Can you provide a link?


> Try to focus on and/or associate with #1 and #2 and ignore #3.

I disagree. We are too tolerant with the zealots.

They are an obstacle to the exchange of knowledge. They don't want people to learn and get better at programming. They want to live in their own Mr Robot personal fiction.

FP is a tool. A great tool, yes. But a tool in the end. And we should promote it as such.


If there is a way of doing things that you find more efficient or enjoyable than a more widespread one, just show other people. If they don't look interested, changing your communication style next time might help. Becoming more of a proselytizer will almost never pay off.


I understand that it looks like that, but I believe most of the FP evangelists are just too excited about the cool tech and want to share it with everybody so everybody else can feel the joys of functional programming. At least that's why I often mention and talk about FP to other devs.


> so everybody else can feel the joys of functional programming

That is the kind of knowing-better-than-thou that undermines the evangelism.


I used to program in Erlang. The joy of it doesn't lie in FP. Not even closely.

It's in the runtime, effortless processes etc.

If all you can offer is "joy of programming in FP", you're in a cult.


So like a Christian who just wants you to share the same joys of Jesus Christ that they do correct?


In my experience growing up as an evangelical Christian, the proselytizing has very little to do with wanting strangers to “share the same joys of Jesus Christ”.


People do still consider Ada. I wouldn't, but I don't need that tool. I feel like you are creating a strawman. The GP wasn't talking about languages but about paradigms. OO was overused, but it has its place, and both imperative and functional programming have strengths. There are some languages that can do everything, or almost everything, but multitools are never quite as good at being pliers as pliers are. And they are pretty bad at being a hammer. There is nothing quite like having exactly the right tool for the job, to the point that I would rather have a poorly made version of the exact right tool than a well-made version of almost the right tool.


> This is just an appeal to the law of averages. I don't believe that you're actually considering ada, cobol and forth for new projects.

First, it greatly depends on what "new project" means here. There are myriad new projects within existing code bases for which the technologies already in use are the major factor.

When it’s about creating a brand new product from a blank slate, sure, COBOL won’t be your first idea for a startup. Maybe Ada might come to mind if a high level of reliability is required, as in aeronautics and the like.

It also depends on the team’s skills. If you have teammates who are all advanced Haskell practitioners, surely imposing J2EE for the next big thing might not be the brightest move in terms of crew motivation.


> I don't believe that you're actually considering ada, cobol and forth for new projects.

I'm not considering Erlang either.


Addendum after watching this thread blow up:

What's interesting to notice about this thread is how many messages are just oozing with smug superiority and disdain for anyone who doesn't share their knowledge. Yes, some are from genuinely humble and even-handed FP practitioners, but when we look at people who are vocal about FP, this small sample puts around 90% of them in the gatekeeper camp. And the punchline is I don't think they can even see what they've become. This is what adds to the insidiousness of it all: they genuinely believe themselves to be helpful and positive.

This is a huge cultural problem, and one that I've seen many times in other communities over the years.

Take Linux, for example. Nowadays, it's trivial to get up and running with Linux, but it wasn't always so. Back in the 90s the installation was tricky, to say the least. The end user tooling was iffy at best and buggy as hell, there were tons of sharp edges, but most frustrating of all were the gatekeepers. People can handle challenges and pitfalls, but nothing quite takes the wind out of their sails like a smug asshole belittling them when they ask for help or express frustration.

Similarly with video codecs. In the 2000s when video codecs were a new thing, they had so many obscure options that almost nobody knew what would produce a good result, and so the gatekeepers came out of the woodwork to hold the sacred knowledge prisoner. They brought us byzantine software like the appropriately named Gordian Knot, and once again the forums were full of smugness, arrogance, and abuse as people despaired over how the hell to rip their DVDs into something viewable.

And it's similar in the Nix community, and countless others I've observed over the years.

In my experience, gatekeeping goes hand-in-hand with poor tooling and educational resources. The worse the UX, the more gatekeepers it attracts (because gatekeeping fulfills a need they have, so they flock to gatekeeping opportunities). Linux used to require an understanding of init scripts and environment variables and bash and crontabs and kernel recompiling and all sorts of esoteric knowledge just to get started. But now with mature tooling, it's easy to get started and then dig deeper at your leisure via the wealth of newbie friendly articles littering the internet.


I think it's funny: the blog is literally the opposite of gatekeeping, trying to get non-FP people into FP. It even provides a book and a course, and doesn't throw around math terms, and yet your argument is "Well, some FP practitioners are smug and suck, so why bother?"

Elixir is a functional programming language (it's not pure like Haskell or PureScript, but that's beside the point).

The Elixir community is absolutely awesome! All are welcome.

There are soooo many resources for people to learn FP. The gatekeeping thing may have been true ten years ago, but I certainly don't think that's the case today.


On the other hand, it's really frustrating to explain this kind of thing to most juniors. Many of them are too confident in their code because "they have everything in their heads", and anything else looks like incoherent, overcomplicated ceremony. "Why do all that when I can do a simple index.js file and add a simple if and that's it?"

There's a catch with FP: many of its constraints push you to write better code. But going functional isn't mandatory for that, and I always try to explain the importance of those fundamental principles to other people so they can apply them in whatever style they prefer.
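To make those "fundamental principles" concrete, here's a minimal sketch (not from the thread; the pricing functions are invented for illustration) of the difference the commenter is describing: the impure version silently depends on shared state, while the pure version makes every input explicit and is trivially testable.

```python
# Hypothetical example: a discount calculation, written impurely and purely.
discount_rate = 0.1                       # hidden, mutable global state

def price_after_discount_impure(price):
    # Result silently changes whenever someone mutates the global.
    return price * (1 - discount_rate)

def price_after_discount(price, rate):
    """Pure: same inputs always produce the same output, no hidden reads."""
    return price * (1 - rate)

assert price_after_discount(100, 0.1) == 90.0
```

The pure version can be tested and reasoned about in isolation, which is exactly the property worth teaching regardless of paradigm.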

But when someone just doesn't want to learn, they see me as smug and gatekeeping, and I can't help but see them as people who just don't care at all.


In contrast, I find juniors the easiest to teach. Those with more experience are stuck in their ways, often stubbornly.


Do you have actual, recent, examples of this? I keep reading the same attitude you have here, but I have never seen it first hand.


Git, Vim and GUIs in general come to mind, specifically on this site. There is always someone berating individuals for not using shortcuts, for using a visual tool instead of the console, for using an IDE, etc. Using Git primarily through a GUI is a great way to get flak from several subreddits, too.

NB: This is not an invitation for discussion on the pros and cons, to berate either preference, or gatekeep.


I only see people saying it's superior. But that's not the same thing as berating.


OMG you just reminded me of waiting outside some 101 class on day one typing on my laptop and this guy asks me what OS I'm running. When I told him "Ubuntu", he snickers and says "Talk to me when you're done playing with a toy"


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

That's exactly what worked for OOP: have we collectively forgotten just how hard OOP was pushed everywhere 15–20 years ago? Way more aggressively than any FP advocacy I've seen, and I've seen a lot. When I was learning to program, every single book and tutorial pushed OOP as the right way to program; procedural programming was outmoded, and functional programming was either not mentioned or confused with procedural programming.

I still have to deal with the fallout from OOP evangelism; I've had colleagues who unironically use "object-oriented" as a synonym for "good programming" and managers who believe design patterns are software engineering.


And yet today we only have partially-OOP languages being mostly used in procedural ways.


Exactly my point above, about Java.


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

Eh, I doubt it. I've encountered a few Haskell snobs in my time, but most people that I know who use FP do so completely silently and will discuss the merits (and challenges) with you freely. I think the real issues with FP are lack of good tooling, package management, and it being a major shift in thinking for most developers. It's a common theme for someone to say that learning FP (seriously) was the most impactful thing they've done to improve their software chops.

> The other major impediment is gatekeeping

??? what? I guess you could argue that pure vs non-pure functions are gatekeeping, but there are absolutely legitimate benefits to pure functions that basically everyone can agree on.


"A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

And it's not just the statement. It seems to me (from my outside perspective) that category theory is often used in a gatekeeping way.

In contrast, take SQL. How much of the mathematical theory of relations do you need to know to be able to write SQL queries? Yes, it might help, but the SQL gurus don't try to drag you into it every time you get close to a database.


> "A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

The original attribution of this line about monads comes from the (intentionally) comedic article "A Brief, Incomplete, and Mostly Wrong History of Programming Languages", published by James Iry in 2009: http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...

It is not a stance I have ever seen taken up in a serious manner by anybody in the FP community, and I work in PL academia (the region of the community from which I would expect to see the most snobbery). Please stop misrepresenting people based on a misunderstood joke.

There are people in the FP community who gatekeep and take on a snobbish tone — with that I do not disagree. However, their prevalence is generally overstated; it's a vocal minority situation. Most people I know or have talked to are very welcoming.


That article went round the Edinburgh mailing list when it was published, and Phil Wadler, who got monads into Haskell, replied saying something like "I didn't know this. Does anyone have the proof?"

The actual quote that monads are monoids in the category of endofunctors comes from MacLane, and is intended for mathematicians.


I'm honestly not sure the MacLane reference applies here.

Although the abstract phrasing can be traced to him, the particular use of "just" in the parent comment's quote tells me they're specifically thinking of the (deliberately condescending) version from the Iry post, especially since that's the version that gets memed throughout the FP community. After all, that particular phrasing is meant to convey a sense of "this is obvious and you are stupid if you don't understand it immediately", which is a far cry from the MacLane version (since that one is, as you said, intended for mathematicians).

But I probably ought to have included the full provenance regardless; thank you for bringing it up!


"an X is just a Y" is a common turn of phrase in mathematical writing. It means that Xs and Ys are the same thing, whereas "an X is a Y" may, depending on context, mean only that every X is a Y. A human is a mammal, but a human is not just a mammal.

The original quote (from Categories for the Working Mathematician) is:

> All told, a monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor.


Nobody claims you have to understand category theory to write Haskell. In fact, most will tell you it's not needed.

The saying "a monad is just a monoid..." is a cliché and an in-joke, not gatekeeping. It's the community having a laugh at itself.


I've had someone do that on this very website. I was assured that affine types in Rust are too difficult to understand without a solid grounding in Category Theory.

The years have proven that ease of programming and the burden of knowledge are the two most important elements of a programming language. FP zealots simply won't accept that their chosen paradigm is opaque to most people, in exchange for benefits that can't seem to be articulated in plain language.


> I've had someone do that on this very website

On this website you will hear anyone say the wildest assertions. There's the whole human range of expression here. But what makes you think actual FP practitioners (and Haskellers) really believe this?

The proof wouldn't be what some random person here on HN tells you. The proof would be you getting involved in an actual FP community trying to write an actual FP project and being told that you just cannot do this unless you understand category theory. Which, as I said, is not something that happens... at least not in my (limited) experience.

Never confuse what people tell you here on HN, random forums or even throwaway StackOverflow comments with what actually happens in the actual communities when trying to achieve real goals and not just chat about stuff.


But why is it funny? Isn't it funny because the community knows it comes off that way, at least some of the time (and/or some of the people)?

> Nobody claims you have to understand category theory to write Haskell.

I've seen the claim that you can't really use Haskell without it, here on HN, more than once. (Or at least something that I interpreted as being that claim...)


It's funny because the community knows some people too deep in the rabbit hole come across that way; but precisely because the community acknowledges this is a hilarious assertion that would alienate newcomers if said with a straight face, it cannot be gatekeeping.

Gatekeeping would be if the community said this with a straight face and everyone got impatient when you just "don't get it".

> I've seen the claim that you can't really use Haskell without [knowing category theory], here on HN, more than once

I've almost never come across this assertion; it's certainly not common in the community (or wasn't when I was learning Haskell). I can tell you it's certainly false: you can very well write Haskell without knowing or studying category theory.


Well, try understanding the Haskell docs around Applicative, Functor or Monad without understanding at least basic category theory - not to mention the heavy use of notation/operators in place of traditional programming idioms (named functions).

Here [0] is an example:

> This module describes a structure intermediate between a functor and a monad (technically, a strong lax monoidal functor). Compared with monads, this interface lacks the full power of the binding operation >>= [...]

[0] https://hackage.haskell.org/package/base-4.17.0.0/docs/Contr...


Counterpoint: I learned about Applicative, Functor and Monad by reading docs and tutorials, and I haven't the faintest idea about category theory.

These are the building blocks of Haskell. You learn them as you go, just as you learn what a "method" is when learning OOP. You don't need to know category theory.


Learning what Applicative, Functor and Monad are is category theory. It's like saying "you can learn how to do unions, intersections, and differences on collections of unique objects without understanding set theory".


That seems a little silly to me, and I think we're splitting hairs over what it means to do category/set theory. I don't know category or set theory, so I hope you will forgive me for using yet another analogy.

Let's say I make a type class for Groups (in the abstract algebra sense). The rationale behind this is that there's an algorithm for exponentiation which is O(log(n)) versus the naive O(n) algorithm. So if you make an instance that's a Group, you get to use this fast exponentiation.

Sure, to understand and use this type class you have to understand what a Group is. However, I think it's a bit of a stretch to tell someone "in order to use Group you must first learn abstract algebra", because they'll think you're telling them to take a university-level course. In actuality, they don't have to know much at all (they don't even need to understand _why_ the exponentiation algorithm works); they just need to know what a lawful Group is.
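As a sketch of the hypothetical Group interface described above (everything here is invented for illustration): the user supplies only an identity element and an associative operation, and exponentiation by squaring, O(log n) instead of the naive O(n), comes for free.

```python
def power(op, identity, x, n):
    """Combine x with itself n times using repeated squaring (O(log n)).

    Only needs the "lawful Group" facts: `op` is associative and
    `identity` is its identity element. No abstract algebra course.
    """
    result = identity
    base = x
    while n > 0:
        if n & 1:                 # include this power-of-two factor
            result = op(result, base)
        base = op(base, base)     # square the base
        n >>= 1
    return result

# Integers under addition: "exponentiation" is multiplication.
assert power(lambda a, b: a + b, 0, 7, 13) == 91
# Integers under multiplication: ordinary exponentiation.
assert power(lambda a, b: a * b, 1, 2, 10) == 1024
```

A user of this interface only needs to supply a lawful (op, identity) pair; the speedup is inherited automatically.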

The first day of the intro to abstract algebra course I took introduced much more information than you'd need to use this made up type class, and I expect the same of the others.

Like this is my understanding of Functors/Applicatives/Monads. I kind of know their shape and how to use them. If you asked me about any of the underlying math I would shrug and maybe draw a pig (https://bartoszmilewski.com/2014/10/28/category-theory-for-p...).


That's not really true. When some people claim you must learn category theory to use Haskell, they mean you have to learn the capital-letters math subject Category Theory, which goes way beyond programming and comes with a lot of baggage.

They never mean you have to learn Functor, Applicative, etc as used in Haskell and taught in every tutorial. If they did mean the latter, it would be a tautology, which is not very useful.

Compare "in order to do OOP you have to learn what an object truly is, its essence, and possibly take some courses on the philosophy of being" vs "you have to learn about methods and objects".

> It's like saying "you can learn how to do unions, intersections, and differences on collections of unique objects without understanding set theory".

You can totally do unions, intersections and differences without knowing set theory.


SQL is intrinsically tied to set theory and borrows a ton of terminology and logic from it. Whether you understand that it comes from mathematics or not doesn't really matter. You are using set theory and its terminology regardless.

I've only really ever seen monoids referred to in Haskell. If you actually read my comment, Haskell is not all of FP (no PL is). It's not even a significant portion of FP. So just don't use Haskell; it's hardly the first FP language I would reach for, and it's probably not the one you should either.

Also, any person willing to learn would be intrigued by the terminology, not reject it as something that's "gatekeeping". Such a defeatist attitude will not get you very far in a field as complex as this.


SQL was explicitly designed to appeal to business people, and uses strictly familiar words and phrases. It was very explicitly designed (though I would say it has mostly failed) to read like pseudo-natural language.

SQL DBs and the precise semantics of SQL are somewhat based on relational algebra, but SQL syntax is definitely not. It's also not using set theoretic terms in general, with the exception of UNION and INTERSECT.


SQL tables are not actually sets at all since you can insert duplicate rows! In practice they usually are though.


You can't actually insert a complete duplicate row, because the ROWID (or whatever your particular RDBMS calls it) is an implicit but unique property of each row (and corresponds to the offset in the file where the row is located). Further, while a table can be thought of as the collection of all the records it contains, it isn't really a set itself. Usually when people associate set theory with SQL, it's with queries (and their results).


Nobody says that seriously.

It says a lot that people reach for common jokes to complain about a community. It's easy to fall for that if you've never actually interacted with the people.


> "A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

You just tried to badmouth the whole community of FP programmers by taking a running gag seriously. Wow.



> "A monad is just a monoid in the category of endofunctors". Anyone who says that - or anything anywhere close to it - is gatekeeping, no matter how true the statement is.

Quite frankly, if you don't understand that sentence then you are in no position to judge whether the concepts could be made more accessible. Sometimes the important question is not "could this be easier to learn?" but "is learning it worth the effort?".


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.

This is a part of it. The other part is FP's a bit of a mind fuck if you're used to procedural programming. Take a classic example: Haskell. To do anything remotely productive it's advisable to understand how monads and the various monad design patterns fit together. This difficulty is further compounded by monads having their foundation in category theory, a branch of mathematics. But hey, they're just monoids in the category of endofunctors, it can't be that hard ;) You can rely on `do` syntax a lot, but it really helps to learn how they work.

> I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all.

I get where you're coming from but this is the end state as far as I'm concerned. Pure FP is where we're all headed. I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.

Rust was one of the first steps toward this. The language is procedural, but a lot of the idiomatic ways of doing things are functional at heart. Not to mention most of its syntax constructs return values. You can still mutate state, but it's regulated through the type system and you can avoid it if you really want to.

Rust's enums, arguably one of its killer features, are algebraic data types, and they're combined together the way you would use monads in Haskell. This not only avoids null typing (if you ignore FFI), but also provides a bullet-proof framework for handling side effects in programs.

I could be totally crazy, but I reckon many years from now, we'll joke about mutating state the same way we joke about de-referencing raw pointers.


I'm not sure why you're getting downvoted.

> I get where you're coming from but this is the end state as far as I'm concerned. Pure FP is where we're all headed. I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.

Having been around the block long enough to catch a couple of "FP will rule the world" cycles, I can say this is likely untrue. While FP is useful and I personally enjoy it, it is not always the most efficient, nor is it the clearest. This is true in your example of Rust as well. Functional constructs are often slower than their procedural counterparts. For example, having to pass around immutable data structures becomes pretty memory-intensive even with a highly developed GC.

> I could be totally crazy, but I reckon many years from now, we'll joke about mutating state the same way we joke about de-referencing raw pointers.

Dereferencing raw pointers has been joked about for as long as I've been in the industry, and probably was in the 70s too. We still dereference pointers today. Similarly, state is a very natural thing to reason about. Sure, you can argue, as many FP fans do, that stateful programs can be rewritten with pure functions. I have yet to see FP code that does this without negatively affecting readability. Even something as simple as a large map/reduce/filter chain can quickly become extremely difficult to debug compared to a very simple loop.
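For illustration (not the commenter's code), here is the trade-off side by side: the same computation as a functional chain and as a plain loop. The chain is compact, but stepping through it in a debugger is harder than breakpointing inside the loop body where the intermediate state is visible.

```python
from functools import reduce

data = [1, 2, 3, 4, 5, 6]

# Functional chain: sum of the squares of the even numbers.
chained = reduce(lambda acc, x: acc + x,
                 map(lambda x: x * x,
                     filter(lambda x: x % 2 == 0, data)),
                 0)

# Equivalent loop: trivial to inspect `total` at every step.
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

assert chained == total == 56
```

Neither version is wrong; the question is which one the next maintainer can debug faster.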

All this to be said there's a lot of benefit many languages can take from FP paradigms. Map, reduce, etc are great examples. Offering immutable state, and algebraic data types could also be beneficial especially in the areas of concurrent and parallel programming. In my professional experience the problems usually start when you begin talking about pure functions which in theory are awesome but sometimes don't map to a problem domain well, or become extremely hard for Joe Developer to get used to. Often times I will think in a functional way, but rewrite things into a procedural way, because communicating your idea is often just as important as the code you write.


> In my professional experience the problems usually start when you begin talking about pure functions which in theory are awesome but sometimes don't map to a problem domain well, or become extremely hard for Joe Developer to get used to.

It's a ripe field for research. I do wonder if there is a better way to program in a pure functional style that conforms to the impurity of the real world. I agree with you though: while pure FP is a tantalising abstraction, it must obey the whims of the hardware. The only way you're getting pure FP all the way to the bottom is if you can monadically formalise the hardware you're running on, which has been attempted [1].

> Often times I will think in a functional way, but rewrite things into a procedural way, because communicating your idea is often just as important as the code you write.

I think this is perhaps the most commonly cited benefit of learning FP. It provides an alternate way of expressing the same solution, but maybe in a simpler fashion. The reverse is also true: what is not straightforward in FP may be expressed more simply, procedurally.

[1] https://www.cl.cam.ac.uk/~mom22/itp10-armv7.pdf


> The other part is FP's a bit of a mind fuck if you're used to procedural programming

It's not "if you're used to procedural programming" - FP is simply hard for people to reason about. You can often very intuitively reason about imperative programs, while FP always requires putting your abstract-thinker cap on. Look at any non-software person describing how to do something and see how often that sounds like an imperative program vs an FP one. Hell, most algorithms papers themselves use imperative pseudo-code, not FP pseudo-code (unless they're specifically describing novel algorithms important for FP, of course).

> I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.

Concurrent mutation is fundamentally hard regardless of what paradigm you chose to implement it in. Parallelism can be made safer by FP, but parallelism is actually pretty easy in every paradigm, with even a little bit of care. And concurrent updates of mutable state are just a fundamental requirement of many computer programs - you can try to wrap them in monads to make them more explicit, but you can't eliminate them from the business requirements. The real world is fundamentally stateful*, and much of our software has to interact with that statefullness.

* well, at least it appears so classically - apparently QM is essentially stateless outside its interactions with classical objects (the Born rule), but that's a different discussion.


Eh, it really depends. Some people find trees easier to navigate through recursion than iteration, same for lists and graphs. Some do better using decomposition on a list to get a combined value or create permutations than doing it through iteration and indexing. FP commonly becomes problematic when things are far more trivial to do stateful than stateless, with side-effects than without, and more.

FP usually shines in small sections of easily isolated code, where it is still trivial for most people to grasp and its strengths show more than its weaknesses - assuming the code isn't more easily expressed using iteration anyway. The moment things have to be woven together without leaving any side effects is when the difficulty jumps to 11.


Sure, there are contexts where the functional approach is actually more natural - especially already highly abstract contexts. I actually like to use certain functional paradigms in my day to day programming (or did, before switching to Go). I just dislike the claim that imperative code is simply more familiar and not more natural/intuitive (in general).


> It's not "if you're used to procedural programming" - FP is simply hard for people to reason about.

It's relatively straightforward once you understand the patterns and idioms, just like procedural programming. `age |> add10 |> subtract9` is about as readable as `subtract9(add10(age))`, but we're just so used to the latter.
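A runnable sketch of that comparison (a toy `pipe` helper standing in for the `|>` operator; `add10`/`subtract9` are the hypothetical functions from the comment):

```python
from functools import reduce

def add10(x): return x + 10
def subtract9(x): return x - 9

def pipe(value, *fns):
    """Thread a value through the functions left to right, like `|>`."""
    return reduce(lambda acc, f: f(acc), fns, value)

piped = pipe(30, add10, subtract9)   # reads left to right
nested = subtract9(add10(30))        # reads inside out
assert piped == nested == 31
```

Both spellings denote the same composition; the pipeline just states the steps in the order they happen.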


Using Haskell as the classical example of FP is a bit like using Rust as the classical example of imperative programming.

You can do it but they're much harder than most other languages in the class and not anywhere near representative of the whole class.

My FP journey was: Scheme (different but okay) -> Haskell (wtf) -> Erlang -> Clojure -> Elixir. Nowadays when I reach for an FP it's Clojure or Elixir, mostly based upon the problem at hand.


I’ve used Erlang, Elixir, and Elm and found them all quite easy to learn. There are different coding strategies, sure, but for me FP is much less frustrating than OOP.


Totally agree. As a Javascript dev I got functional code pushed on me in 2016.

It's definitely good practice for devs to learn some of the pitfalls that FP prevents and solves, but implementing it on a massive scale front-end application just seems impractical.

Having worked on a large streaming service, and considering the author's 3-MONTH struggle after 40 YEARS of experience, I'd estimate that a rewrite of our codebase there would have taken our 20 devs over a decade.


Functional programming, and its degenerate cousin, cramming random functional constructs (array comprehension methods, willy-nilly currying, ...) into Javascript, has been the worst thing to happen to web programming.

What that has done is made a whole generation of developers completely detached of the impact on heap allocation and GC. If web programs are slow and bloated, it's partly because of using nuggets from functional programming just for the sake of it.


I'm actually all for the FP things that have been added to imperative languages, which increase their power tremendously and make a lot of tasks a helluva lot easier. But like any tool, it has its place. I'd be equally leery of a pure imperative solution as I would be a pure FP solution.


> implementing it on a massive scale front-end application just seems impractical.

I was using Clojure/ClojureScript around the same time. I’ve since worked primarily in TypeScript/JavaScript. I’m sure the FP experience influenced my opinion, but it seems impractical to me not to use FP techniques for a large scale frontend application. The applications I inherited and maintain now, which were certainly not originally implemented with FP techniques, have been gradually becoming much more maintainable as I’m able to eliminate or at least ruthlessly isolate mutations. Not only because the code itself is easier to reason about, but also because it’s easier to reason about dependencies between different parts of the systems—even parts which haven’t changed.


> implementing it on a massive scale front-end application just seems impractical.

That's true if a team was trying to do so from scratch in js or ts. However React borrows a lot from FP and works at scale. A better example would be Elm.


Anything beats the hundreds of incompatible classical inheritance models rammed on top of proptypes.

Today, functional has won so completely that devs don't even notice. Using classes is almost entirely an antipattern. Factories with object literals and closures reign supreme. Everyone prefers map, filter, reduce, etc. over manual looping. Const and copy-by-default immutability are preferred to mutation. Nobody thinks twice about higher-order functions everywhere.


> As a Javascript dev I got functional code pushed on me in 2016.

What do you mean?


> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)

I think one of the major reasons is because IO doesn't really fit well into the FP paradigm. All the theory and niceties take a second place when you find out that something as simple as "print a line here to console" isn't as simple as you thought it should be.


I think the context here is pure functional programming. So not just replacing e.g. loops with mapping and such, but actually making effect handling explicit. The IO type (or a similar deferred type) is essentially for that. Saying it doesn't fit into the FP paradigm doesn't make sense: without IO, FP is useless because you simply cannot "do" anything at all.
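A tiny illustrative sketch of "effects as values" (plain Python, not any real FP library's IO type): an effect is represented as a deferred description of what to do, and nothing actually happens until the program's edge runs it.

```python
class IO:
    """A value describing an effect; nothing runs until .run() is called."""
    def __init__(self, thunk):
        self.thunk = thunk              # the deferred action

    def map(self, f):
        return IO(lambda: f(self.thunk()))

    def flat_map(self, f):              # monadic bind: sequence two effects
        return IO(lambda: f(self.thunk()).thunk())

    def run(self):                      # the single impure entry point
        return self.thunk()

def put_line(s):
    return IO(lambda: print(s))

program = put_line("hello").flat_map(lambda _: put_line("world"))
# Building `program` printed nothing; the effects fire only here:
program.run()
```

Composing `program` is pure: it just builds a bigger description. The impurity is confined to the one `run()` call at the edge, which is the whole point of explicit effect handling.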


Another problem is that as nice as avoiding state and side effects is, for any decently complex application there's a minimum amount of state that you simply have to handle, and as a general rule OOP languages seem to make dealing with state much easier. Mind you, there are plenty of good ideas in FP, and discriminated unions are the feature I want added most to C#.


> as a general rule OOP languages seem to make dealing with state much easier

I disagree. One of the most significant benefits of FP (and pure FP in particular) is being able to deal with mutable state safely in concurrent programs. This is essentially the core problem which effect systems try to solve.


Isn’t this IO thing a bit overused? There are more FP languages that are not as pure as Haskell, but even if we only do IO through a special monad it is not too bad at all.

And I say that as someone who is very much for "practical FP" — I do think that local mutability is sometimes simply the better fit for a problem.


> there are more FP languages that are not as pure as Haskell

I mean, that's precisely the point I'm making, that IO doesn't fit into the FP paradigm and that languages that make IO easier necessarily deviate from it. And once you bring non-FP concepts into an FP language, most people will reasonably question why do that instead of the easier way, which is bringing FP concepts into imperative languages.


Pure functional languages model IO with monads, which are very much functional (function composition within a context) and IMO easy to use. This ease of use is probably why JS adopted the pattern with native promises and async/await. They fit well into otherwise imperative code and people seem to like them.


Pure functional languages can very well have normal IO. It's Haskell's laziness that mostly forced it to use monads for IO, for better or for worse. In a pure FP language with strict evaluation semantics, IO can easily be implemented as a pure function of type (World, Input) -> (World, Output).
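A toy model of that `(World, Input) -> (World, Output)` signature (everything here is invented for illustration): the "world" is threaded explicitly through pure functions, so each call consumes one world value and returns the next. Here the world is just an immutable log of performed effects.

```python
def print_line(world, text):
    """Pure in this model: returns a *new* world plus an output (None)."""
    new_world = world + (("printed", text),)
    return new_world, None

def read_line(world, fake_input):
    """Reading likewise returns the next world and the value read."""
    new_world = world + (("read", fake_input),)
    return new_world, fake_input

w0 = ()                                         # the initial world
w1, _ = print_line(w0, "what is your name?")
w2, name = read_line(w1, "Ada")
w3, _ = print_line(w2, "hello " + name)
assert name == "Ada" and len(w3) == 3
```

Because each world value is used exactly once and never mutated, every function stays referentially transparent; the sequencing of effects is just ordinary data flow.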


That's not true at all. If you could use that implementation in a strict language (which you can't) you could equally-well use it in Haskell.


Arguably the best platform for the general io use case is functional.

From the way standard IO is redirected to you when you console into a cluster and run a command on another node, to how TCP/UDP sockets are managed by the stdlib, it's really hard to beat Erlang.


100%.

IMHO, the ideal "future of programming" is like how a lot of game dev has C on the "back-end," and Lua "in-front."

Why one language or paradigm? Mix it up.


It is similar to a religion, because if everyone agrees to adopt the same reality, then there's easy communication and a shared background understanding that makes it easy for everyone to work together. Picking a religion is a society-wide coordination problem, and strategies of shaming and in-group/out-group exclusion make sense in this context, where they wouldn't about flavors of ice cream, for example. We see the same thing happening in tech all the time. I don't want to learn a new technology, which means I don't want you to learn it, because I don't want it to become something that engineers are expected to know. And so on.

The point is to understand that "right tool for the job" and "C++ for everything" are religious positions, too. There might even be a dominant religious position, which likes to style itself as just the disinterested rational viewpoint.

The FP folks might be a somewhat more fervent (crazy mathematical) sect, but there's no getting away from religion, though we can call it by other names (fashion, "best practices", ...).


Tooling. Tooling is the major reason why no one uses functional programming languages in industry. I was very into Haskell a decade ago. Today I don't even know where to start when it comes to installing a new package. If Debian doesn't have it in their repos, I usually just try building it from source and hope `make all` works. If it doesn't, well, I probably didn't need that package anyway.


My company does FP and I think the tooling in Elixir is pretty great.


Thank you. I learned functional programming first, and I do think there is a lot of merit to it in a large proportion of software development tasks. But there are far too many people trying to fit square pegs in round holes with it. Pure functional applications which manage large amounts of complex and messy state are a nightmare to work with. There is a reason why game developers and simulation developers have almost defaulted to object oriented programming. There is a reason why cryptography developers have defaulted to procedural programming. There really should be a No Free Lunch theorem for programming methodologies.


I believe game developers aren’t against functional programming (John Carmack has great things to say about Haskell for example).

Pure functions excel at state management and reducing bugs. If there were a reason to make games with a functional language, this is the reason.

The big issues seem to be deterministic performance and resource usage. Garbage collection, lazy evaluation, etc. all result in bubbles of weird performance. That doesn’t matter for the overwhelming majority of programs, where the human limit on responsiveness is hundreds of milliseconds, but it does matter when latency must scale into single-digit milliseconds.

There are functional languages that have this capability, but outside of the partially functional Rust, they are basically unknown.


Until John Carmack actually writes a game in Haskell, I'm gonna choose to interpret his comments as "Hey, I like these ideas" over "Hey, this would be perfect to write a game in". He's had a lot of nice things to say about a lot of different languages, but it is far more telling to consider what he has actually written significant amounts of code in.

Pure functions excel at reducing bugs, I'll give you that. They also excel at transforming state. But managing it? Encapsulating it? No thank you. I'll take plain old classes over every state management idea that has ever been conceived for Haskell. Does anybody seriously believe they will manage the state for hundreds of thousands of different sprites, environments, bots, and players using the State Monad? I'd rather castrate myself. It's one thing to take immutable/functional ideas and code and use it in a stateful system, and another thing entirely to constrain 100% of your code so that it all fits in that neat theoretical box you've built for yourself.


Carmack ported Wolfenstein 3D to Haskell. That seems like a big enough project to make serious statements about the language in the context of games.


Really? Are we thinking of the same game from 1992? Innovative for its time, sure, but nowhere near the magnitude of complexity of modern games. Porting an extremely old game that can be written in a few thousand lines of code sounds like a hobby project to help you learn a new language...not a proof that the language could take over decades of OOP-driven game design.

Do you know what would be a better statement about the language in the context of games? A game engine. And actual games. That people want to play in 2022.


It took a long time to write that engine and porting the whole thing properly also takes time. It just moves goalposts. Why didn’t he spend 80M on a new AAA game? If he spent any less than that, he certainly can’t draw any useful conclusions.

Have you ever looked over the codebase? It’s plenty large enough to draw useful conclusions from for most people let alone someone with his vast game experience.

https://github.com/id-Software/wolf3d/tree/master/WOLFSRC

Meanwhile, you are drawing bad conclusions with no credentials or evidence. As to actual games, setting aside the fact that Wolfenstein still sees play, loads of popular games are written in JS. Lots of others are in Java or C#. None of these make your case, as Haskell, Ocaml, and StandardML (SML) are in the same performance range.

As to your argument about the efficiency of objects, what do you think functional languages use? Let’s use SML as an example. There are real arrays and they are also optionally mutable (yes, there are linked lists too, but those can be used in C++ too).

Records are basically just C structs (they are immutable by default, but can contain refs which are mutable pointers). They can contain functions because functions are first class without the mess that many languages create.

You associate functions with datatypes which gives you the best part about methods. They also give you a kind of implicit interface too due to structural typing. I’d note that closures are mathematically equivalent to objects.

Finally, modules are everything a language like Java tries to get from classes (and more), but without any of the downsides of classes themselves.

People generally like the JS paradigm of factories and object literals (even if they hate the stuff like dynamic typing or type coercion). StandardML offers the same kinds of patterns, but with sound typing, simpler syntax without the warts, more powerful syntax, and performance in the same range as go or Java.

To me, your argument sounds like the people arguing that goto is better and more natural than looping constructs or the procedural guys arguing against OOP. I think if you messed around with StandardML, it would change your mind about what programming could be in the future.
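For readers unfamiliar with the ML record style described above, here is a rough sketch in Python rather than SML, purely for accessibility (the Player type and its fields are invented for illustration):

```python
from dataclasses import dataclass, replace
from typing import Callable

# An immutable record, roughly analogous to an SML record: fields
# cannot be reassigned, and a function is just another field value.
@dataclass(frozen=True)
class Player:
    name: str
    hp: int
    attack: Callable[[int], int]

# "Updating" builds a new record; untouched fields are shared as-is.
def damage(p: Player, dmg: int) -> Player:
    return replace(p, hp=p.hp - p.attack(dmg))

p = Player("hero", 100, lambda d: d * 2)
p2 = damage(p, 5)
assert p2.hp == 90    # new value computed...
assert p.hp == 100    # ...original record untouched
```

In SML the `damage` function would live in a module alongside the type, which is where the encapsulation-without-classes point above comes from.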


Not only have I used SML, but I've also used F# and Scala extensively, both of which are well rooted in the ML philosophy. And for completeness, I've also used Common LISP, Haskell, Clojure, and Erlang.

Not once have I made an argument about the efficiency of objects oriented languages. Although that is important for performance wherever performance matters, it isn't my point in the slightest, and I already would agree with you that there are functional languages out there that are equivalent in performance.

My point is that the raw exposed complexity of functional state management (and especially pure functional state management) makes it worthless for game development. And until someone actually develops a real game, up to modern AAA standards using a functional language, all of your posturing is theoretical.

I can carry a pallet worth of goods to a supermarket using a pickup truck. It might be a little messy cause it can't pull up to a dock, so I'd have to unload it by hand instead of using a pallet jack, but the time I save by driving faster more than makes up for it. I could argue until I'm blue in the face about the theoretical benefits of using faster and more nimble pickups instead of slow and cumbersome lorries, but until someone actually builds a supermarket logistics network with them, those benefits are just theoretical. More importantly, the people who actually work in logistics industry would never attempt to do so because they can immediately see the problems with it.

So if you're that convinced your functional language would be better for game dev, then just do it already, cause nobody in the industry is gonna do it for you. I can guarantee it won't take long for you to understand what it is like when your paradigm choice puts you on the wrong side of the Expression Problem. And by the end of your experiment, you'll just be another crusty old insider that is too dumb and stuck in your ways to consider the amazing benefits of functional programming, just like what you're trying to do with me here.


What encapsulation exists in OOP or procedural code, but not in functions and modules?

The complexity in games is peanuts compared to any random business UI logic at whatever random company. Managing this complexity is an explicit reason to use functional paradigms. If OOP were better at managing complexity, why has everything been shifting from OOP to functional paradigms?

What about the functional approach makes it inferior at state management in your mind?


Nah, I'm not gonna write an essay just for some random dude on the internet. Try writing a game. Like a real one, not tic-tac-toe. You'll learn quickly enough.


An Entity Component System is literally the functional approach to programming under a different name and with mutation by default. There’s even been a huge move toward keeping ECS data classes separate from behaviors. At that point, it’s just a less ergonomic functional approach with better performance due to lower-level languages (though ATS or Rust are just as fast as C).

Maybe you’re thinking of FRP (functional reactive programming), but that isn’t even super mainstream among most functional programmers. I know some small games have used it, but it’s mostly used for specific kinds of UI in languages that enforce immutability and no side effects.


I don't think any paradigm is inherently better at managing complex and messy state. I've worked on several very complicated applications written in a pure FP style which handled complex, shared mutable state really elegantly (and many OO applications where state management was a terrible mess...). And more generally I think that pure FP and specifically effect-oriented pure FP gives you really nice, composable tools to handle complex state safely in concurrent applications.

But yeah, for game development pure FP is probably a bad fit because you need pretty precise control over memory allocation and layout (from what I understand, never actually done any game dev).


The popularity of various programming paradigms is mostly historically contingent and driven by social/business dynamics, not because of any reasonable constrained optimization.


I've always kind of wondered about something weird in relation to this type of programming style:

reversible computing.

Ever since I read some stuff Feynman wrote years ago about computing, it seems like quantum computing and/or maximum energy efficiency would be achieved if information was not destroyed.

And I thought functional programming might be a programming paradigm to support this kind of thing.


Discussing FP without mentioning induction and proofs always makes me wonder.


why?


>> navigating all of the offerings, examining their trade-offs

the amount of time you're afforded for that exercise better fit neatly inside a very small window of implementation (in other words you're probably not going to do it, or at least do it justice)

>> figuring out which ones fit best to the system being built in terms of constraints

constraints will always get ya. It always ends up being what is the tool/paradigm that you (and your project manager) are most comfortable with, because you'll most likely use it in lieu of anything else (especially given you are not the only one that has to be convinced -- engineers don't operate in a vacuum, and they love predictability, ie what they already know)

>> You won't get a very optimal solution by ...

YAGNI. Premature Optimization. KISS.

I am not saying that ^^^ is true, I'm just introducing you to your new conversation buddies for the foreseeable future. People always bring'em along for the conversation.

>>> trying to talk you into a religion

advocacy among unbelievers is always gonna come off like this, especially when the evangelists have dealt with so many naysayers and the apathetic majority. And this is probably the crux of the entire issue. Students' first languages in school are generally imperative languages. They are taught in their youth a path away from functional programming. Which is funny to me because my school was always promoting that their studies were there to help you grow one's intellect and not necessarily fit into some cog of the industry. But, I don't recall much playtime given to functional languages (not as much as they deserved at least).

My point is that it would be nice if FP was the first tool that was given to students that pour out of universities into developer teams. It would be nice if the easy button for a group was FP, not the other way around. Then it would be much easier to beat the other drums (like the testing drum).


I am sorry, but you sound like you know for a fact that functional programming is better, but you have trouble making others recognize that fact.

Imho, FP has many tangible weaknesses, just a few off the top of my head:

- Immutability is non-intuitive: If I were to ask someone to make an algorithm that lists all the occurrences of a word on a page, they wouldn't intuitively come up with an algorithm that takes a slice of text and a list of occurrences, then returns an extended list (a linked list with one more occurrence).

- Immutability can cause more problems than solve: If I were to create a graph of friendships between people, chances are that if I were to add a link between A and B, then not only A and B would need to be updated, but everyone who transitively knows A and B (which, since no man is an island would probably mean, everyone). This is highly complex, and probably just as bad as living with mutability.

- FP is not performant: FP code tends to be full of pointers and non-performant data structures like linked lists that have very little memory locality. It also makes optimizations such as updating things in a certain order to avoid recalculation impossible.

- FP has to deal with side effects: The real world has side effects, and your FP code is probably a bit of business logic that responds to an HTTP request, and fires off some other request in turn. These things have unavoidable side effects.
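For what it's worth, the occurrence-listing algorithm from the first bullet comes out quite short when written as pure accumulation; a hedged sketch in Python (the page representation here is invented):

```python
# Each page is (number, text); the result list is built by pure
# accumulation (a filter over pages) rather than in-place mutation.
def occurrences(word, pages):
    return [num for num, text in pages if word in text.split()]

pages = [(1, "some cheese here"), (2, "no match"), (3, "cheese again")]
assert occurrences("cheese", pages) == [1, 3]
```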


FP isn’t just limited to Haskell while your criticisms seem aimed only at that one language.

Immutability may not be intuitive, but neither are pointers, hash tables, structured loops, OOP, etc. In any case, maintaining immutable code is certainly more humane than trying to reason about 10th order side effects. Finally, the majority of functional languages are immutable by default with optional mutation when needed.

Your immutable argument is a red herring. It’s the wrong algorithm. Bubble sort is much more intuitive than heap or quick sort, but you certainly wouldn’t recommend it.

You are flatly wrong about performance. Ocaml and StandardML are just as fast as languages like Go or Java. This is incredible when you realize these FP languages are maintained by a handful of people with basically no budget and are keeping pace with languages that have tens or hundreds of millions of dollars pushed into them.

FP doesn’t mean no side effects (though it does tend to encourage limiting and separating them). Outside of Haskell and Miranda, basically all the other FP languages have side effects and even Haskell has an escape hatch available.


Haskell has to have an escape hatch in order to work, though fortunately it is hidden away.

Every program has effects. A program that does not have any effects (IO) would not be useful, as you can't get anything into or out of it. In FP we manage those effects, in order to help ensure correctness, with the additional benefit that good effect management makes the program easier to comprehend (nee reason about).

Contrast a procedural program with effects littered throughout the codebase, with a program wherein effects are pushed to the edges. In the latter, you and your team know exactly where all of the effects occur. Everything else is pure code: easier to test, easier to comprehend, easier to compose.

Category theory is not required for good effect management. It just so happens that monads like IO fit the problem space nicely; although the same could be achieved with a lazily evaluated definition of computation (i.e. a function).
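The "effects at the edges" layout described above can be sketched in a few lines; Python here for accessibility, and the function names are made up:

```python
import sys

# Pure core: no IO at all, so it is trivially testable in isolation.
def greeting(name: str) -> str:
    return f"Hello, {name.strip()}!"

# Effectful edge: the only place where input/output happens. If this
# layer stays thin, the team knows exactly where all effects occur.
def main() -> None:
    line = sys.stdin.readline() or "world"
    print(greeting(line))

if __name__ == "__main__":
    main()
```

Everything below the edge composes like ordinary values, which is the "easier to test, easier to comprehend" claim above made concrete.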


On this friendship example, I think already FP is showing you something useful about the way you are designing your model.

A friendship is really an edge in a graph. if you manage friendships by recording a collection of friend PersonId's on each person, you are modelling a graph by recording the same edge in two different places, and then hoping the programmer is diligent enough to never make a mistake and update the edge in one place but not the other. If you model the friendship graph as a collection of id pairs, not encapsulated inside a specific person structure, this invalid state is no longer possible.
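A minimal sketch of that edge-pair model in Python (names invented); each update builds a new set, and an edge can never end up recorded in one "place" but not the other:

```python
# Normalize so (a, b) and (b, a) denote the same edge.
def edge(a, b):
    return (a, b) if a < b else (b, a)

# The whole friendship graph is one immutable set of edges;
# "adding" returns a new set and leaves the old graph untouched.
def add_friend(graph, a, b):
    return graph | {edge(a, b)}

def are_friends(graph, a, b):
    return edge(a, b) in graph

g = add_friend(frozenset(), "bob", "ann")
assert are_friends(g, "ann", "bob")   # direction doesn't matter
```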

I'm not sure what you are on about regarding making transitive updates - that idea is obviously not going to work on any moderately sized network. Imagine if Facebook tried to record all your transitive friendships on your account!

Sure immutability is non-intuitive, but it's really helpful! Like borrow checking in Rust, immutability helps eliminate an entire class of bugs. And immutable code is often much easier to reason about, you need to keep significantly less of the context in your head.


I think you haven’t read much of the literature and don’t have any real experience with it on a real project.

There is no intuition when it comes to programming. Everyone has to learn it. You’re appealing to familiarity of concepts you’ve already learned and assume everyone else thinks the same way.

There are highly efficient graph algorithms using pure immutable data structures. You don’t seem to be aware of how much sharing is enabled by immutability nor the algorithms for efficiently updating graphs in a pure functional way. Try reading this book [0].

Again there is a whole zoo of optimizations available exclusively to pure functional languages that you don’t seem to be aware of. Try reading Counting Immutable Beans. Some of the optimizations GHC can do are impressive. It’s not all bad performance.

Want to talk about cache misses? Check out an average C++ code base. It has a dedicated cache-miss operator.

Side effects are everywhere. And yes, in a purely referentially transparent language, it becomes a pain being forced to be explicit about every mutation, network call, and file handle operation when you’re used to throwing caution to the wind and flying free with them. Improper handling of side effects includes common use-after-free errors, etc. After years of dealing with this mud all over software, at least for me, explicit handling of effects becomes exactly what you want in order to keep your sanity intact.

[0] https://books.google.ca/books/about/Purely_Functional_Data_S...
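The sharing point above is easy to demonstrate with a hand-rolled cons list; a Python sketch (cells are plain tuples here, purely for illustration):

```python
# A minimal cons list: each cell is (head, tail). "Adding" an element
# allocates exactly one new cell; the old list is reused as the tail,
# not copied -- which is safe precisely because nothing can mutate it.
def cons(head, tail):
    return (head, tail)

xs = cons(2, cons(3, None))
ys = cons(1, xs)

# The tail of ys is literally the same object as xs, not a copy.
assert ys[1] is xs
```

Persistent trees and the structures in the book linked above generalize this: an update copies only the path to the change and shares everything else.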


> There is no intuition when it comes to programming. Everyone has to learn it.

There is no intuition in anything aside from the very few things we are born with. Everything else is learned from experiences.

And to that end, chances are someone has learned to follow procedure-like steps like a recipe at a very early age - indeed, in some schoolbooks recipes and other "procedural" works are among the first things someone does.

Children learn things like "to do X, first do Y and then Z" from a much earlier age, often before even learning how to read and write (e.g. via simple papercrafting in kindergarten), than something like "X = Y*Z". Giving and taking commands (i.e. "imperative" programming) as a concept is something people meet in their lives much sooner and are more exposed to than abstract problems.


I think the jury is out on which programming paradigm is easier to teach to kindergarten aged children.

Let alone adults with no prior training with programming.

I don't believe it's as natural as you seem to think.


I think you're completely missing an entire world of FP languages which are not pure like Haskell: OCaml, Elm, etc. There are languages with side effects as an easily accessible escape hatch that don't do things which make debugging a pain, like data encapsulation and inheritance.

Also, there are FP langs out there that get better data/heap locality even with pointer-indirected data structures, because they don't share memory between processes, and so the pointers aren't tied to anywhere except the local heap where they are directly running.

I completely disagree with immutability being non-intuitive. Try explaining to a junior/bootcamp JavaScript or python developer why when you pass an integer into a function it doesn't reflect changes you made in the called function when you jump back into your calling frame... But if you do the same with a object/dictionary...

For junior programmers immutable passing is absolutely the default assumption and so I think it's reasonable to claim it's the intuitive choice.

There's even crazymaking shit where in python if you have a default array parameter you can seriously fuck up your model of what's going on if you mutate that array anywhere.


>> I am sorry, but you sound like you know for a fact that functional programming is better

I can see the room for misunderstanding. My only argument is that FP is worthy of more playtime than it currently enjoys in undergraduate academia. I'd wager that the current percentage of FP/Imperative playtime is 95/5.


Ah I see. I've been out of school for a while so I don't know how students are taught - but if you take a modern language such as Kotlin, you'll see it's a mishmash of concepts from various paradigms, including FP.


> - Immutability is non-intuitive: If I were to ask someone to make an algorithm that lists all the occurrences of a word on a page, they wouldn't intuitively come up with an algorithm that takes a slice of text and a list of occurrences, then returns an extended list (a linked list with one more occurrence).

What do you base this on? Did you run a study of people who had never programmed before and then asked them to invent a pseudo-programming language to solve this problem? Or are you talking about the "intuition" of people that have already learned to program, most likely in an imperative programming language, in which case your claims to "intuition" don't mean anything.

> - Immutability can cause more problems than solve: If I were to create a graph of friendships between people, chances are that if I were to add a link between A and B, then not only A and B would need to be updated, but everyone who transitively knows A and B (which, since no man is an island would probably mean, everyone). This is highly complex, and probably just as bad as living with mutability.

Firstly, that's not true, it depends entirely on what sort of data structure is used. For instance, a trivial model is a simple list of graph modifications, ie. type Action = AddFriend(a,b) | RemoveFriend(a,b), type FriendGraph = List(Action). Any change to the graph thus requires allocating only a single node.
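A minimal, hedged rendering of that Action-log model in Python (the names and the replay strategy are illustrative, not canonical):

```python
from dataclasses import dataclass

# The graph is just a list of actions; any change to the graph
# allocates a single entry, and the full history is preserved.
@dataclass(frozen=True)
class AddFriend:
    a: str
    b: str

@dataclass(frozen=True)
class RemoveFriend:
    a: str
    b: str

# Replay the log oldest-to-newest: the latest matching action wins.
def are_friends(log, a, b):
    status = False
    for action in log:
        if {action.a, action.b} == {a, b}:
            status = isinstance(action, AddFriend)
    return status

log = [AddFriend("ann", "bob"), RemoveFriend("ann", "bob")]
assert not are_friends(log, "bob", "ann")
```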

Secondly, if you were to model this mutably, a node removal means you would lose information about the fact that two people were friends at one time.

> - FP is not performant: FP code tends to be full of pointers and non-performant data structures like linked lists that have very little memory locality. It also makes optimizations such as updating things in a certain order to avoid recalculation impossible.

This argument is based on assumptions about the functional programming idioms, the compiler, the machine it's running on and more. I could just as easily say that imperative programming doesn't scale because you can't easily parallelize it across cores, and since single-core performance is now a dead end, even if everything you just said were true it's basically irrelevant for the question of performance.

> - FP has to deal with side effects: The real world has side effects, and your FP code is probably a bit of business logic that responds to an HTTP request, and fires off some other request in turn. These things have unavoidable side effects.

I'm not sure who denies this. The question is not whether useful software has side-effects, it's what sort of side-effects you actually need for any given program and how those side-effects should be modeled so you can properly reason about them and achieve the program properties you need.

If your programming language has pervasive side-effects then you have to deal with them everywhere, and the failure modes multiply. If your programming language does not support pervasive side-effects, then the side-effects are pushed to the edges of your program where they are more amenable to reasoning. Questions like "how did we get into this state?" are considerably simpler to backtrace as a result.


1. Immutability is non-intuitive. If I were to hand a regular person a book and ask him how he would list all the page numbers where the word cheese appears, he would probably say something like this: 'I would read through the book, keeping a scrap of paper at hand. Whenever I saw the word cheese, I would write down the page number.' This is an imperative algorithm. I could give more examples, but I hope you get the point.

2. I just wrote the 'friends' example to make a point - common, complex programs often have huge graphs of interconnected objects, whether one would like it or not. Your solution is to build a journal of friendships and breakups - it's not a typical solution for object relations - seeing if 2 people are friends is a linear operation. Keeping the history around might not be necessary or useful.

3. Your FP code runs on a modern OoO CPU whether you like it or not - so it's safe to make that assumption. And multithreading might not be a silver bullet. FP does nothing to break long dependency chains. As for sharing values between CPU cores, considering the latencies involved, it might be cheaper to compute the value locally, but multithreaded optimization is a black art - it's not always obvious if sharing is better.

Another example is when I made a small toy Excel-like spreadsheet that supported formulas - while Excel itself is FP, I realized that the best way to update cell values, is to topologically sort all dependent expressions, and evaluate them one after the other - an imperative concept.

4. I was just making the point that it's easy to write a function in e.g. Java that is imperative on the inside, but pure when called from the outside. Other languages will even allow the compiler to enforce this, while still being imperative. So both 'regular' and FP languages can avoid some side effects to some extent, but have to deal with others.
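That "imperative on the inside, pure from the outside" shape looks like this in Python (the function is invented for illustration):

```python
# Internally this uses a local accumulator mutated in a loop, but from
# the caller's point of view it is pure: same input, same output, and
# no observable side effects escape the function.
def running_max(xs):
    result = []
    best = float("-inf")
    for x in xs:
        best = max(best, x)
        result.append(best)
    return result

assert running_max([3, 1, 4, 1, 5]) == [3, 3, 4, 4, 5]
```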


> 1. Immutability is non-intuitive. If I were to hand a regular person a book and ask him how he would list all the page numbers where the word cheese appears, he would probably say something like this: 'I would read through the book, keeping a scrap of paper at hand. Whenever I saw the word cheese, I would write down the page number.' This is an imperative algorithm.

No, that's a purely functional algorithm. Note how he's just adding numbers to the end of a list and emphatically not mutating the existing items in any way. Tell me what you see as the functional real-world solution to this problem. Do you consider an accounting general ledger to also be imperative? Because it clearly isn't, it's an append-only log where all previous entries are immutable, which is exactly the same thing as the list noting the pages containing the word "cheese".

> 2. I just wrote the 'friends' example to make a point - common, complex programs often have huge graphs of interconnected objects, whether one would like it or not. Your solution is to build a journal of friendships and breakups - it's not a typical solution for object relations - seeing if 2 people are friends is a linear operation. Keeping the history around might not be necessary or useful.

The example doesn't make a point. In either imperative or functional programming, the operations you need and their time and space complexity will dictate the data structures and algorithms to use. Saying that programs have "huge graphs of interconnected objects" is not evidence of anything specific.

> 3. Your FP code runs on a modern OoO CPU whether you like it or not - so it's safe to make that assumption.

Make what assumption exactly? Microcontrollers are in-order. GPUs are in-order. You also made claims that FP programs are full of pointers and non-performant data structures. I have no idea what out of order execution has to do with this claim. Certainly early FP languages were pointer heavy, but early imperative programs were goto heavy, so I'm not sure what exactly you think this says about FP in general.

> And multithreading might not be a silver bullet. FP does nothing to break long dependency chains.

FP encourages and often forces immutability which does help significantly with parallelism and concurrency.

> Another example is when I made a small toy Excel-like spreadsheet that supported formulas - while Excel itself is FP, I realized that the best way to update cell values, is to topologically sort all dependent expressions, and evaluate them one after the other - an imperative concept.

Prove that that's the best way, and not merely the best way you know of, or the best way available to your programming language of choice.


> No, that's a purely functional algorithm. Note how he's just adding numbers to the end of a list and emphatically not mutating the existing items in any way.

They are clearly mutating the piece of paper to add new numbers to it. There is no linked list in sight. They are also flipping the pages of the book in order, another imperative paradigm.

The closest you could come to a pure FP algorithm expressed in physical terms would have been "Say there are still pages I haven't looked at, and say I have a stack of post-it notes on the current page; then, I would check if the word cheese appears on this page, and if it does, I would produce a post-it note with the page number on it, adding it over the stack of post-it notes; I would then flip one page; on the other hand, if I am looking at the last page of the book, I would add another post-it note to the stack if needed, and return the whole stack of post-it notes to you otherwise".

Imperative:

  foreach page in book
    if 'cheese' in page
      write(paper, page.number)
Functional:

  search page:rest_of_book post-it-stack = 
    search rest_of_book 
           (if ('cheese' `on page) 
              then number(page):post-it-stack 
              else post-it-stack)
  search [] post-it-stack = post-it-stack
Of course, we could write this easier with a map and filter, but I don't think there is any good way to express those in physical terms (not with a book - filters have nice physical interpretations in other domains though).

Later edit:

A version of the imperative algorithm closer to what was described in prose:

  repeat:
    word = read_word(book)
    if word == 'cheese':
      write(paper, current_page_number(book))
    if no_more_words(book):
      return paper


> They are clearly mutating the piece of paper to add new numbers to it.

No, they are simply adding a new record for the word occurrence. If the paper runs out, they grab another sheet and continue. This is clearly an append-only ledger, just like that used in accounting. These are both immutable abstractions.

The fact that this is happening on a single sheet of paper is merely an optimization, it's not a property of the underlying algorithm they're employing. The post-it equivalent you describe is simply a less space efficient version of exactly the same algorithm. You're basically saying that tail call elimination makes FP no longer FP.

> There is no linked list in sight.

What do linked lists have to do with anything? You don't think that FP or immutable programming have to use lists do you?

> They are also flipping the pages of the book in order, another imperative paradigm.

Traversing an immutable sequence in order is now an imperative algorithm? Since when?

What's really happening here is that you've already assumed that physical reality and people's intuitions are imperative, regardless of the contortions required.

This example of counting words and the general ledger are perfect examples: it's absolutely crystal clear that all of the recorded entries are immutable and that this log of entries is append-only, which is a characteristic property of FP and immutable abstractions, and yet you have to contort your thinking into looking at the paper itself as some mutable state that's essential to the process in order to call this an imperative process.


> The fact that this is happening on a single sheet of paper is merely an optimization, it's not a property of the underlying algorithm they're employing.

The person is describing the abstraction, not any optimization. If you were to translate their words directly into code, you would have to write code that modifies the piece of paper, because this is what they described.

In contrast, when using a persistent data structure, the abstraction says that I create a new structure that is the old one + some change, but, as an optimization, the computer actually modifies it in place. The implementation is imperative, but the abstraction is functional.
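A minimal illustration of that split between abstraction and implementation, sketched in JavaScript with a hand-rolled cons cell (helper names are mine):

```javascript
// A persistent singly-linked list: "cons" builds a new list without
// touching the old one.
const cons = (head, tail) => ({ head, tail });

const xs = cons(2, cons(3, null));   // the "old" list: [2, 3]
const ys = cons(1, xs);              // a "new" list: [1, 2, 3]

// Abstraction: ys is a brand new value. Implementation: ys.tail *is* xs --
// the old cells are shared, not copied, which is the optimization.
console.log(ys.tail === xs); // true
console.log(xs.head);        // 2 -- the old list is unchanged
```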

> You're basically saying that tail call elimination makes FP no longer FP.

No, I'm saying that there is a difference between writing an algorithm using tail-call recursion, and writing it using iteration; even if the compiler produces the same code. The tail-call recursive version is FP. The iterative version is imperative. How they actually get executed is irrelevant to whether the algorithm as described is FP or imperative.

In contrast, you're basically claiming that any algorithm that could be abstracted as FP is in fact FP. So, `for (int i = 0; i < n; i++) { sum += arr[i]; }` is an FP algorithm, as it is merely the optimized form of `sum (x:xs) acc = sum xs (acc + x); sum [] acc = acc` after tail-call elimination.
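To make the comparison concrete, here are the two formulations of the same sum, sketched in JavaScript (names are mine):

```javascript
const arr = [1, 2, 3, 4];

// Imperative: iterate and mutate an accumulator variable.
let sum = 0;
for (let i = 0; i < arr.length; i++) { sum += arr[i]; }

// FP: tail-recursive function application; no mutation in the abstraction,
// even if a compiler would turn this into the loop above.
const sumRec = (xs, acc) =>
  xs.length === 0 ? acc : sumRec(xs.slice(1), acc + xs[0]);

console.log(sum === sumRec(arr, 0)); // true -- same result, different abstractions
```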

> Traversing an immutable sequence in order is now an imperative algorithm? Since when?

Immutability and FP are orthogonal. Append-only ledgers are a data structure, not an algorithm; and it's algorithms that can be functional or imperative.

On the other hand, yes - traversing a data structure in order is an imperative idiom. In pure FP, the basic operation is recursive function application, not traversal. For loops and tail-call recursion obtain the same thing in different ways.

> This counting words and the general ledger are perfect examples: it's absolutely crystal clear that all of the recorded entries are immutable and that this log of entries is append-only, which is a characteristic property of FP and immutable abstractions, and yet you have to contort your thinking into looking at the paper itself as some mutable state in order to call this an imperative process.

By your definition, I understand this is an FP algorithm for producing a list of the first 100 natural numbers, since it's append-only:

  xs := []int{}
  for i := 0; i < 100; i++ {
    xs = append(xs, i)
  }

You can certainly define FP = append-only, but that is emphatically not what others mean by the term. Instead, most people take FP to mean "expressing the problem in terms of function applications (and abstractions based on them), where function == pure function in the mathematical sense".
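For contrast, a sketch of what that usual definition looks like for the same task: the first 100 naturals as a single expression rather than a loop over mutable state (JavaScript, since the thread already mixes languages):

```javascript
// No loop variable, no mutation: the list is the result of one
// function application over the index range 0..99.
const xs = Array.from({ length: 100 }, (_, i) => i);

console.log(xs[0], xs[99]); // 0 99
```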

> What's really happening here is that you've already assumed that physical reality and people's intuitions are imperative

I've actually shown what I think is actually an FP algorithm represented in physical terms, and how that translates 1:1 to FP code (and the alternative imperative algorithm). I don't think it's fair to accuse me of assuming that physical reality is imperative - at least not without showing your 1:1 mapping of the description to FP pseudo-code.

Edit to add:

Perhaps a fairer representation of the imperative algorithm should have been:

  repeat:
    word = read_word(book)
    if word == 'cheese':
      write(paper, current_page_number(book))
    if no_more_words(book):
      return paper
This is even closer to the prose version, and makes it clearer that it was describing an imperative traversal.


To any leads reading this,

>>> navigating all of the offerings, examining their trade-offs

>> the amount of time you're afforded for that exercise better fit neatly inside a very small window of implementation (in other words you're probably not going to do it, or at least do it justice)

Every project I have worked on that has done a gap analysis has executed faster and smoother than those that didn't. A gap analysis is a more rigorous version of that exercise: an analysis of tradeoffs and of solution capabilities against requirements/existing software, sometimes with scoring/ranking (often with three recommended options depending on the client's choices in tradeoffs).


>> the advocacy usually feels like someone trying to talk you into a religion

> advocacy among unbelievers is always gonna come off like this, especially when the evangelists have dealt with so many naysayers and the apathetic majority.

If an advocate sees unbelievers and naysayers all around, that says more about the advocate's tactics. No offense intended. Most developers will gladly hear an overview of how a new technology can make doing X easier, better or faster, even when there is a steep learning curve. They may not switch to it quickly or at all, but they are likely to form a positive impression of it.

But if an advocate starts with "your tools are bad", "stop doing Y", "you need to unlearn Z", he is usually digging a hole for his technology, not advancing its adoption. If an advocate cannot produce powerful, convincing examples showing the goodness of his technology (without bashing existing tech), he should stop advocating and start learning. My 2c.


For the most part, agreed. But the arguments for FP are old and, as you mention, could be sold easily by the right orator (those teachers/evangelists exist, by the way, in droves on YouTube and elsewhere). I'm only saying that, given what academia and collective professional experience have established around FP, it would be nice if FP or FP patterns were the first options stressed to budding software engineers, with imperative architecture as the afterthought (not the other way around, as it probably is in our current state).

In other words, there would be little need to evangelize for FP if it was the base of understanding.

And while there are a lot that want to improve the field, I’d wager that most 9-5’s are tired, wanna finish a good days work and rest.


But why should FP be the basis for understanding?

Let's say I run a CS department. And let's say that I've accepted that what my department is really doing is jobs training for software engineers. Well, what are their jobs going to be, imperative or functional? For most of them, for most of their careers, the jobs are going to be imperative. Why should we start with FP?

You could say "FP is better", but then we're back at the OP's point. FP is better for some things, and worse for others.

For you to say that FP should be the starting point, you need to show that FP is better for the majority of programming situations, or else to show that FP is the better way to learn programming for the majority of students. I have never seen anyone seriously attempt to show either one. (Some argue "it fits how we think better", but what they mean is that it better fits how they personally think, ignoring that different people think differently.)


It’s about rope. Imperative languages generally give you a lot of flexibility and, therefore, rope to hang yourself. I don’t believe that starting students here, from a pedagogical perspective, is a good strategy. FP languages/paradigms, on the other hand, are all about restrictions (immutability, controlled side effects, etc.), and thus less rope. Fewer places to hang yourself, so to speak.

Also, even though, as you’ve stated, software engineers tend to work in an imperative environment (which I’m arguing is an artifact of their formative years), junior engineers should at least start with a bit of trepidation about using that rope (if only in their heads).

Plus, utilizing a functional style (and understanding the whys of functional style, where pragmatic) would improve many aspects of industry (e.g. reducing the friction of adding tests - did I mention that I love tests??)
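On the testing point, the usual argument is that a pure function needs no setup or mocking: inputs in, assertion on the output. A toy JavaScript sketch (function name is mine):

```javascript
// Pure: the result depends only on the arguments, so a test needs no
// database, no clock, no mocks -- just values.
const countOccurrences = (word, text) =>
  text.split(/\s+/).filter(w => w === word).length;

console.assert(countOccurrences("cheese", "cheese and more cheese") === 2);
```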


This rope only matters for production oriented systems. Most programmers are doing quotidian processing tasks. Manipulating CSVs, processing data to get statistics on, plotting points on maps, maybe writing a simple automation. Almost every software engineering class I read about when I was a graduate student teaching undergrad classes spent time discussing the pitfalls of the "rope of mutability" and explicitly discussed the idea of immutability to make this kind of programming safer. I agree with another poster that it's just much easier to teach general programming skills and thinking procedurally. I do think that programmers have to unlearn some of this when writing production-grade software, but most programmers will never write anything like that.


FP has one distinct advantage over many other paradigms, it has very sound and well understood theoretical foundations, being essentially based on the formal mathematical logics (e.g. System F and its derivatives). This has huge implications for correctness and reasoning.


I'll get made to take shortcuts to get the product out before I'll be allowed to be strict on correctness if it means pushing a deadline back even a single day. I love correctness, but it's just not often important to people who aren't me and for reasons completely outside of my control.


Correctness is important to everyone. Presumably everyone involved would rather have software work than not work. Time commitments are what they are, and sometimes quick and dirty is what you have to do, but I don't actually think there is some hard tradeoff between fast and correct. The whole idea of having rigorous foundations is that you have a set of primitives which compose together in well-understood ways, so you can build correct programs quickly without re-inventing the wheel each time.


Another reason is that imperative languages have a lot of business inertia around them. It's expensive to rewrite existing code or switch to a new language, and most businesses can't justify this cost.

I love functional programming, but I doubt most companies that sell CRUD apps care about it.


Another reason is that imperative languages are all that's necessary for many (maybe even the majority) of business use cases.

Really a lot of it boils down to if this else that, and little more.


"all that's necessary" implies there's something fundamental about imperative languages that's intrinsically more basic. In fact, the opposite is true. If your problem statement is "in response to event X, transform Y into Z, stick it in a database, then take A from the database, reshape it into B, and send it back to the user", then weak functional approaches are the natural solution (languages like Elixir or idiomatic JavaScript).
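That problem statement is essentially a pipeline of transformations, which is why it maps so directly onto those languages. A hypothetical JavaScript sketch (the handler and step names are made up):

```javascript
// Each step is a plain function; the handler is just their composition.
const parseEvent = raw => ({ userId: raw.user, payload: raw.data });
const transform  = y   => ({ ...y, payload: y.payload.toUpperCase() });
const reshape    = a   => ({ user: a.userId, result: a.payload });

// "In response to event X, transform Y into Z, reshape it, send it back."
const handle = raw => reshape(transform(parseEvent(raw)));

console.log(handle({ user: 7, data: "ok" })); // { user: 7, result: 'OK' }
```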


We use Clojure at my work and it's basically functional Java with parens. No snottiness or evangelism needed. The fact that I don't have to write Java OOP boilerplate is a big enough advantage to me. Even if you just write it like Python or JS, it's still leagues better.


What does gatekeeping mean in this context? Genuine question.


Something like "oh, you're using map() and filter() in your C++ program? you're not actually doing FP unless [...]".

See https://news.ycombinator.com/item?id=33438320 for an example of this exact attitude.


Is that gatekeeping? It reads to me like they're saying it's not enough to just use a few functions to call it FP. Their point seems to be that it's more than just using some functions, which seems reasonable.

