Hacker News
Why no one uses functional languages (1998) [pdf] (acm.org)
66 points by todsacerdoti 3 days ago | 126 comments

FP is going mainstream. In the same way that mainstream OO wasn't pure OO in the manner of Smalltalk but instead hybrid languages like C++ and Java (and more recently Python), mainstream FP isn't Haskell but languages like JS (w/ Typescript), Scala, Kotlin, Rust, Swift, and friends taking core ideas from academic FP and presenting them in a way that works better for industry.

I almost think the Swift/Rust approach is the right one for 99% of practical cases. In these languages I end up writing a lot of functional code, but it's important to have an escape hatch to just make the computer do what I tell it when I need to. Especially when using libraries which are not themselves functional.
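
A sketch of what that looks like in practice (in Rust; the function names here are made up for illustration). The functional version composes iterator combinators, and the imperative version is the escape hatch doing the same thing by hand:

```rust
// Functional style: transform data with iterator combinators.
fn sum_of_even_squares(nums: &[i32]) -> i32 {
    nums.iter()
        .filter(|&&n| n % 2 == 0)
        .map(|&n| n * n)
        .sum()
}

// The escape hatch: plain imperative code when that's clearer or required.
fn sum_of_even_squares_imperative(nums: &[i32]) -> i32 {
    let mut total = 0;
    for &n in nums {
        if n % 2 == 0 {
            total += n * n;
        }
    }
    total
}

fn main() {
    let nums = [1, 2, 3, 4];
    assert_eq!(sum_of_even_squares(&nums), 20);
    assert_eq!(sum_of_even_squares_imperative(&nums), 20);
}
```

Both compile to essentially the same loop; the point is you can switch styles per function, not per language.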

Agreed. I mainly work in Scala and I really like this ability.

Something like Elm is also perfectly learnable in mainstream contexts. Haskell gets too much attention.

Agreed, Elm does seem very nice from my very limited experience. I'm not sure it's going to go mainstream, though, but we'll see!

The pessimist in me thinks that going mainstream means it will be less useful for screening applicants/keyword searching job postings. Sort of repeating what agile has brought us.

Sure, but we'll all be into dependently typed languages then, or perhaps it will be the rebirth of logic programming, or maybe something that's not even on the radar. :-)

Those languages aren't FP though. They aren't even close. Kotlin advertises itself as 'multi-paradigm', like C++.

One reply observes they don't have pure functions, which is rather fundamental to most modern definitions of an FP language. But they also use imperative flow control in which statement ordering matters, their APIs are object oriented, there's no laziness, type inference is intentionally limited.

I don't think they've really borrowed many ideas from academic FP. Most such ideas were in fact explicitly rejected.

I think Ocaml is the O.G. impure functional language. I am having fun getting familiar with it again.

You also didn't mention F# or ReasonML.

Or are those too much like FP for your point?

Yes, Reason, O'Caml, and F# are in the mix as well, though I think adoption is much lower than some of the other languages I mentioned.

(I think Scheme and ML both have a claim to be O.G. strict impure FP, as both were developed at about the same time in the 1970s. O'Caml is more recent IIRC.)

Mmm maybe, but none of those languages (afaik) have explicitly pure functions which is the keystone of functional programming.

It's really really hard to get things right with pure functions. Haskell struggled with this for decades. You can't expect to be able to import this in some random language and just make it work.

So random languages just pick what they like (lambda expressions, weaker forms of algebraic data types, sometimes the complete typeclass concept, various forms of Monads from "unnecessarily specialized because whoever picked it didn't understand it" to "properly done").

And while having pure functions certainly forces you to get things right, you don't need to do this when copying ideas. And there are functional languages (some very old, like Lisp) that never had pure functions in the first place.

OCaml is a functional-enough language. It ain't pure either.

A language that can track purity is great, but not absolutely required.

Btw, the D language does have facilities to track purity. It's not a functional language, though.

Actually it depends on who you ask. There are kind of two schools of thought.

One says FP is about using functions with closures, higher order functions and anonymous functions as the main building blocks for modeling your program logic. This style is championed by Scheme and other Lisps. It is also slowly creeping up in all major languages.

The other is about immutable data and pure functions, and modeling your program using those as your main building blocks. Either as much as possible, and sometimes even going as far as restricting yourself only to this.
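
Both styles can be sketched in one place (Rust here, with names invented purely for illustration). The first school is about functions as building blocks; the second returns fresh values instead of mutating:

```rust
// School one: higher-order functions and closures as building blocks.
fn apply_twice<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(f(x))
}

// School two: immutable data and pure functions. Instead of mutating,
// a pure function returns a new value derived from its input.
struct Account { balance: i64 }

fn deposit(account: &Account, amount: i64) -> Account {
    Account { balance: account.balance + amount }
}

fn main() {
    let inc = |x| x + 1;
    assert_eq!(apply_twice(inc, 0), 2);

    let a = Account { balance: 100 };
    let b = deposit(&a, 50);
    assert_eq!(a.balance, 100); // original untouched
    assert_eq!(b.balance, 150);
}
```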

Tooling is such a huge factor. I'd love a functional language that had build tools as nice as rustup and cargo.

Indeed I'd place Rust as an example of a language that got it right. Sure, Rust isn't functional, but it's confusing on first usage, so close enough :D

In all seriousness, the big issue that functional languages have to get over is that most programmers are not great mathematicians, so declarative programming feels very weird and confusing. With Rust the big issue is all the rules about memory. How did Rust solve it?

For one, they made the language as familiar as possible. Rust's syntax is extremely close to C/C++, even if the semantics aren't. As much as PL people claim syntax doesn't matter, it does. ReasonML is a project with a similar philosophy.

Next, they put a lot of work into documentation and explanations, especially at the compiler level. Rust's compiler messages are fantastic and often anticipate beginner mistakes. I'd love a functional language that detects when the code looks kinda imperative and gently guides you towards the functional option.

And finally, they had a killer app like Wadler mentions. Performant memory safety is pretty hard to argue against. Even a manager could see the potential upsides. I wonder what sort of killer app one could create for functional languages? Reason with ReasonReact could have robust UI as one, but that's still not as good of a sell as performant memory safety.

Performant memory safety is not that big of a feature. GCs are fast enough unless you have real time constraints.

If you have real time constraints, you probably do not care about memory safety that much, unless your system is safety critical.

If your system is safety critical, you are using lots of tooling for validation beyond memory safety - and probably some form of model based code generation. That tooling just does not exist for newcomer languages.

Where do you have real time constraints but don't care about things blowing up?

I guess there's lots of places that have soft real time constraints like in games and other entertainment? But hard real time usually means you do care about not blowing up.

You answered your own question.

Tracing GCs are only fast and predictable enough if you devote roughly half of your memory and a significant share of CPU/battery to them.

And of course garbage collection is only part of what's needed for memory safety.

I don't think the CPU overhead is that significant. The memory storage overhead is significant, but that's "solved" by just requiring more memory. I guess we can all thank Java for keeping the DRAM business in good shape.

I'm not talking about what's optimal or efficient, I'm talking about what's evidently "good enough" in the real world.

All GC overheads obviously depend a lot on the specific workload, but if I'm provisioning for a varied workload I would assume 20% CPU overhead and 100% extra memory.

Whether or not that's good enough depends on the task as well as on your margins, business model and competition.

The cost of hosting grows roughly in proportion with memory capacity (Yes I know it's more complex and nuanced than that).

If you deploy on user provisioned hardware (such as on mobile devices or existing/aging PCs), using half as much memory and significantly less CPU/battery than your competitors may give you a competitive edge.

That's not been true for a while.

GCs like ZGC can handle tiny heaps with no pauses, pretty minimal heap wastage and a ~10% throughput hit vs a more middle of the road GC. You're thinking of the Go GC and assuming they must be all like that.

I'll be very happy once claims about improved GC performance turn into actual reality for my workloads.

What is "pretty minimal heap wastage"?

As far as I can see, ZGC (which is currently marked as experimental) doesn't give any specific performance guarantees. It'll be interesting to try.

"How much headroom is needed very much depends on the allocation rate and the live-set size of the application. In general, the more memory you give to ZGC the better."


Heap wastage is tunable vs CPU time, but it's less than the 100% Go uses. If you have spare RAM your app will run faster; that's the usual tradeoff.

> I'd love a functional language that had build tools as nice as rustup and cargo.

For what it's worth, speaking from experience on Ubuntu, stack for Haskell has worked just as flawlessly for me as cargo has for Rust. (I might get some flack for this, but I'd encourage one to ignore the existence of cabal as much as possible for Haskell in 2020 and beyond.)

I'm quite happy for people to acknowledge the existence of stack, but I think it's worth pointing out that Cabal has worked flawlessly for me since version 3.

>> most programmers are not great mathematicians, so declarative programming feels very weird and confusing

You do not need to be a mathematician to appreciate declarative programming.

I think languages by themselves rarely become popular. There is some killer feature that a language offers which makes a certain class of problems very easy to solve. For example, Rails did it for Ruby, Go's stdlib makes it very easy to write performant network services, Java's promise of platform-independent code, huge stdlib and a free compiler and (later) free IDEs when almost all other compilers and IDEs were paid, C# - Microsoft's deeply integrated ecosystem, Rust's memory safety and the promise of fearless concurrency.

Functional languages lack such killer features that would make them 10x more productive for widely faced problems. I am learning Clojure to learn more about the functional paradigm, but I don't see any particular type of software that I could do with it which I couldn't achieve with Java, Go, JS or Python, all of which I already know well enough.

I think you could argue that type-safe STM is such a feature for Haskell. It's something that I really miss when using threads or async essentially anywhere else. I would probably jump for Haskell over Rust as a result for heavily concurrent or multithreaded programs. Rust is memory safe, but it can't express purity and it can't enforce that STM transactions should be safely retryable.

I would also argue that Elm offers close to a 10x factor for certain types of front-end web applications. It's a very closed system, which is a matter of great controversy, but in areas where you can exist within that system it is a dramatic leap forward compared to anything else in my experience.

If you consider Erlang/Elixir functional languages, I would argue that BEAM can offer close to a 10x factor for certain types of distributed or concurrent applications as well.

You mentioned Clojure, though, and I don't know what the killer 10x feature would be there.

I suggest trying Elixir instead of Clojure. Webdev is extremely productive in it. There's also a new reactive web framework called Phoenix LiveView that's eye-poppingly powerful. (Yes you can do similar things with Blazor, but I promise you when something goes wrong with your Blazor code you will have either a security regression or a hair-tearing-out session, because it's relatively hard to grok what C# is doing under the hood to transform your code.)

I partially agree, but I wouldn't say: "a certain class of problems very easy to solve". Instead I'd say: "which have a product that first delivers on a new market use case".

As someone who loves Clojure, I make this correction because, for example, Clojure has no such product, and in that sense it isn't very popular. Yet I can still use it to solve many classes of problems easily and often faster than it takes me in other languages.

For example, Ruby got big because it delivered on a new kind of web framework just at the time they were becoming popular.

Go became popular only because it has a runtime with more modern tuning, prioritizing responsiveness over performance just when that started to become the new priority due to the shift to software as a service.

Python became popular only once it gave rise to NumPy and the plethora of data-science libs it offers.

Java became popular because it delivered on the first enterprise grade virtual machine.

JavaScript because it was embedded in the browser.


Which takes me back to Clojure. Since you say you are learning it. I'd say for me Clojure's special in that it focuses not on any product as such, but instead on the language itself. Using it is more about one's enjoyment and productivity. The key part is, it gives this enhanced language while letting you choose what existing product you want to use it with. That's the hosted nature of it. In that sense, it provides a more enjoyable interface to some of the best products. So you can use Clojure's better syntax, core functions and abstractions and language semantics and extensibility and expressiveness and interactivity with existing high quality products like everything the JVM or JS offers, with some extras like .Net, Erlang, subset of C++, etc.

Disclaimer: I like Clojure, so am biased. Not everyone will find its syntax, functions, abstractions, interactivity, and expressivity to their liking, and if not, Clojure does not have a killer product and thus provides little value.

We should draw a distinction between FP as religion and FP as tool kit.

FP as religion has failed to gain acceptance because it imposes too much cost on the user. I have to rethink my whole stack in terms of category theory AND deal with your terrible ecosystem? Hard pass.

FP as toolkit, on the other hand, has been a smashing success. Most of the core ideas of FP are mainstream now and some of the latest advances in non-FP ecosystems (React, for example) are based on FP ideas.

FP ideas have been seeping into the mainstream for quite some time now.

The oldest: GC was originally invented for Lisp. It's common now.

Type inference was big in FP before it made the jump to languages like Java or C++ much more recently.

Generics were a natural idea in a typed FP context. Mainstream languages got them, now.

I'm looking forward to algebraic data types becoming really common. (The simplest explanation is that they are C-style unions with tags to tell you which case you are in. The compiler enforces that the tags correspond to how you use them.) Some mainstream languages are starting to add them.
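
Rust already has them, and a toy example shows the "tagged union the compiler checks for you" idea concretely:

```rust
// A sum type: a value is exactly one of these cases, and the tag
// travels with the data. `match` forces you to handle every case.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
        // Omitting a variant here is a compile error, not a runtime surprise.
    }
}

fn main() {
    let r = Shape::Rect { width: 3.0, height: 4.0 };
    assert_eq!(area(&r), 12.0);
}
```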

> The oldest: GC was originally invented for Lisp. It's common now.

LISP was so ahead of its time, its parents still haven't met yet. Not very FP, but another gem from SBCL: saving and restoring program state for later use. Now there's the CRIU [0] project for doing this with Linux and Docker containers.

> I'm looking forward to algebraic data types becoming really common.

I'm not too familiar with the full scope of algebraic data types. Wondering: does Typescript have this or is it still missing a few key components? Really like how it has Union types, which I wish Scala would have.

[0] https://www.youtube.com/watch?v=LrHW7Vvbie4

Algebraic data types mostly just means union types.

(That's the + in the algebra. The * comes from bundling multiple values together, like in a tuple or in a C-style record, virtually all languages already have that.)

There's also Generalized Algebraic Datatypes (GADT). They are a bit more complicated. So I don't expect mainstream languages to pick them up anytime soon.

About GC: you _can_ do pure functional programming without a GC. But it requires lots of big guns from more advanced theory. (Mostly stuff like linear typing.) However imperative programming without a GC is comparatively simple.

So it's no wonder that historically, GC was invented for FP first, and GC-free FP was only discovered later.

(And for general CRUD or web programming, or basically anything outside of low level systems programming, GC is more productive in terms of programmer time than other approaches. At least given currently known techniques.)

Exactly. You can use FP ideas in pretty much any language (and most people do who like reliability). People who do not see value in immutability do all sorts of tricks to avoid the pitfalls of sharing mutable state. One example is in Java design patterns when they recommend creating a copy of the object you are handling. I can't remember the exact name of this pattern, but it is kind of funny.

Or the visitor pattern, which is.... the map function.
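
For instance (a Rust sketch), `map` subsumes a lot of what the visitor pattern does by hand: apply an operation to each element without writing the traversal yourself.

```rust
fn main() {
    let names = vec!["ada", "grace"];
    // One call replaces a hand-written accept/visit traversal.
    let upper: Vec<String> = names.iter().map(|s| s.to_uppercase()).collect();
    assert_eq!(upper, vec!["ADA", "GRACE"]);
}
```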

One thing I will note from recent experience: programming for a long time with immutability cripples your ability to think about mutable things. I was writing up a streaming merge sort this week and it was a brutal nightmare because of all the state I had to deal with. Seems like a call to action to deal with a bit of mutable state every now and then. Everything is too pure these days. We're programmers, not mathematicians dammit.

A more recent view: "Who Cares About Functional Programming?"


> 'Much of the attractiveness of Java has little to do with the language itself, but with the associated libraries ... (Much of the unattractiveness is due to the same libraries)'

Jane Street is a very successful company that uses OCaml for everything. Any other companies like that?

Brazilian Nubank; they mainly use Clojure.

And they recently acquired Plataformatec, the company behind Elixir.

Siscog is a Portuguese company heavily focused on Lisp.

SimCorp uses OCaml as well.

And APL. As I understand it, they have quite a bit of APL code, and only a little bit of OCaml and F# (and a crapload of C#).

Programs are incentivized by need, and businesses and users are the primary consumers of programs. If FP and OOP both satisfy those needs nearly indistinguishably, but one of them is easier for the vast majority of programmers, then guess which one everyone is going to use?

Working for a company has taught me that people will try to get away with doing as little as possible when it comes to satisfying a business need.

> Working for a company has taught me that people will try to get away with doing as little as possible when it comes to satisfying a business need.

Aka productivity.

Not really. Sometimes the more productive thing to do is to step back and rewrite everything. The up-front cost gets amortized over the long term benefits. The asymptotics become important for a business that intends to exist far into the future.

Sure but skipping all best practices in favor of fast delivery might bite you later on.

Functional programming is mostly touted as being able to keep productivity up in the long term.

(And detractors mostly complain that it takes too long to get started, not that the long term is unmaintainable.)

I think in the current crop of languages, the "functional language" concept has shifted its purpose:

1. GP languages have adopted FP-enabling features, so we can do FP just fine in many mainstream languages, and it is in fact very common (see e.g. the popularity of React and Ramda on the frontend and many recent FP features in Kotlin, Java, C++, etc).

2. At the same time, FP-leaning languages are more popular than ever, to the point it would be ridiculous to claim "nobody uses functional languages" given the visible positions of Clojure, Erlang/Elixir, ReasonML/OCaml, F#, Scala, etc on the scene.

I think adding 1+2 together tells us that FP is on a real streak. People choose to use FP languages not because of their capabilities, but because of the mindsets, ecosystems and culture they promote, along with the promise of consistently paving the way for FP problem solving instead of often falling back to imperative code.

(1) keeps cropping up but these new languages are hardly "FP" or even using "FP" features. I think we give FP research way too much credit.

Pure functions? No, none of those languages can even express the concept except maybe C++ with the const keyword (which is as old as Haskell itself).

Immutable data structures? Even very modern languages like Kotlin don't really have them. You can define a data structure where the fields are immutable after construction but it's not transitive. There are read only collections but not immutable collections, without doing extra work. Maybe immutable collections will turn up at some point, there's a proposal to add them, but it'll just be a library not a language feature.
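
The transitivity point is easy to see by contrast with a language that does have it. In Rust, for instance (a toy sketch, with a made-up `Config` type), a binding's immutability extends to everything it owns:

```rust
struct Config { tags: Vec<String> }

fn main() {
    let c = Config { tags: vec!["a".to_string()] };
    // c.tags.push("b".to_string()); // compile error: `c` is not mutable,
    // so neither are its fields -- immutability is transitive.
    assert_eq!(c.tags.len(), 1);

    let mut m = Config { tags: vec!["a".to_string()] };
    m.tags.push("b".to_string()); // fine: mutability was requested explicitly
    assert_eq!(m.tags.len(), 2);
}
```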

Type inference? New languages don't use full blown Hindley-Milner TI, they do much more limited TI that's designed to strike a balance between readable error messages, documented APIs, IDE performance and so on. Meanwhile the concept of TI is obvious and required no inspiration from anywhere - anyone who ever looked at source code will be struck by the repetition required.
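
Rust is a concrete example of that middle ground: locals are inferred, but function signatures must be annotated, so APIs stay self-documenting:

```rust
// Annotations are required at the function boundary...
fn double_all(xs: &[u8]) -> Vec<u8> {
    // ...but inside, types are inferred locally.
    xs.iter().map(|x| x * 2).collect()
}

fn main() {
    let xs = vec![1u8, 2, 3]; // Vec<u8>, inferred from the literal
    assert_eq!(double_all(&xs), vec![2, 4, 6]);
}
```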

Generics? C++ templates started being discussed in 1985, and it's again a pretty obvious concept that would occur to anyone who wanted to offer type safe collections.

I honestly can't see the huge impact FP supposedly had on mainstream programming. All the ideas its advocates try to claim for their own are either not used, or are obvious but usually not implemented the same way. My guess is if Haskell had never existed modern languages would look exactly the same.

Many FP languages also lack these.

Try these: closures, first class functions, garbage collection, anonymous functions.

All except first class functions are found in Java. I don't think many people would claim Java is an FP language.

I think the discussion is in danger of going the way of a Möbius strip... Language features originating in FP have been adopted by GP languages, which was the point :)

But those features were found in Java 1.0 so the influence can't be that strong, given it was heavily advertised as an object oriented language at the time.

From the list, only GC was in early Java. GC was popularized by Lisp (was added in the 60s?) and other FP languages like ML (had gc from the start in 1973) decades before.

Maybe another reason is: there are two kinds of software. One is programs which process input data and transform it to output data. And then there is software which operates hardware, like steering the stepper motors of a robot, or the firmware of a washing machine. Functional languages are (rightfully) immediately dismissed for these kinds of problems. Then there is a grey zone: GUI programming, which is almost hardware programming, and FP has been trying but not doing well in this domain (might be wrong though). Hardware programming doesn't come up often in IT departments, so maybe this is a blind spot, but basically you are losing a big share of the market.

PS: something special might be going on with the Erlang telco systems, but I have no idea about them.

One could argue that all software is operating hardware. In many cases the hardware you're operating is a CPU, GPU, memory and storage.

Most of the time it doesn't feel like that if you're doing something like web programming. But for something like computer graphics, where you're thinking carefully about how to lay out memory so that a multi-core CPU can traverse it in the most efficient way, functional programming feels similarly lacking as you're describing in the robotics case.

In my experience after "looking behind the curtain" for this kind of task where performance and low-level control is absolutely necessary, it's difficult to ignore all the inefficiencies which must exist in a more abstract system. But I'm not sure that's entirely justified.

This is true, but it depends which language we're talking about. There are two definitions of FP. The stricter one is about having immutable data and pure functions. In that one, it's fundamentally only a computing language, not a programming language, in that by definition it cannot tell the computer to perform any action, as purity implies no side effects.

That means you always need an additional layer which then drives the side effect. For a lot of programs, that adds some complexity to the program. The programmer has to model their side effects with this layer in mind. The layer means the programmer loses control over the fine grained details of the side effect as well, which might hurt performance, or make certain things more difficult.

That said, there are impure FP languages, where you can program the machine to do whatever you want, and also do computation functionally with immutable data and pure functions.

Clojure is such a language. I'm very biased towards it FYI, as it's the language I enjoy most as of now. But I think it's important to bring it up. Defaulting to immutable and pure computations, but letting you perform all kinds of controlled mutation and side effect really seems like a good future to me.

Rust in a sense is pretty similar. While it doesn't use immutability and purity of data and functions in the strict functional sense, the borrow checker tracks all changes to data and assigns ownership of access, which is kind of similar to immutability and purity. Yet it lets you do unsafe things when required, often for interop with peripherals and the machine itself.

Erlang like you mentioned is another example of this.

I think those languages might have a better future, and are slowly starting to pick up on popularity. Even in JS for example, lots of people are adopting this style. So if not a programming language, at least the paradigm seems to become more and more popular.

As an aside, for example, there is a Clojure dialect called ferret https://ferret-lang.org/ which is designed for use on microcontrollers. It has immutable data structures and supports all the functional aspects of Clojure, yet targets real time control applications.

Immutable data and pure functions aren't as big an obstacle as you might think.

Uniqueness types allow you to use immutable variables and still directly model side effects and state changes. Arguably better than mutable variables do so.

(Haskell is not a good language for that kind of control. Though you could make a language that looks quite a lot like Haskell that fits. And people have done so.)

Isn't uniqueness typing just a more restrictive form of Rust's linear types? And since Rust itself is often forced to rely on unsafe blocks for programming tasks (as opposed to computing tasks), wouldn't that be true of a language with uniqueness types as well?

Also, I'm guessing you mean Clean or Miranda? Any idea why they aren't more popular than Haskell? Or why Haskell doesn't adopt uniqueness typing?

Linear types and uniqueness typing are so close together that people often mix them up. Yes, that's basically what I am talking about.

For general usage monads are much easier to deal with than Clean's "pass around the world" system. (I only used Clean for a bit, but haven't used Miranda, yet.) However, linear types for Haskell have been in the works for some time. See eg https://www.reddit.com/r/haskell/comments/dpr276/what_is_the...

(Similarly, GHC now supports strict Haskell, too, on a per module basis.)

Uniqueness typing shines when you want to split up your 'real-world' object, so you can eg manipulate individual memory cells or ports etc independently.

Rust goes into unsafe mode for multiple reasons. You could eliminate most of those, if you were willing to take the complexity of dependent types. Ie more or less that means the programmer taking on a greater burden of proving code to be safe to the compiler; instead of relying on Rust's more automatic but conservative system.
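
Rust's move semantics give a flavor of the "use a value at most once" discipline being discussed (Rust's types are affine rather than strictly linear, but the mechanics are close). A minimal sketch:

```rust
// Takes ownership of its argument: the caller's binding is dead afterwards.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("world");
    let n = consume(s);
    // println!("{}", s); // compile error: `s` was moved into `consume`
    assert_eq!(n, 5);
}
```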

> is software which operates hardware, like steering of the stepper motors of a robot, or the firmware of a washing machine. Functional languages are (rightfully) immediately dismissed for these kinds of problems

Given that there is literally a version of Haskell (https://clash-lang.org/) designed for programming FPGAs and ASICs, I don't think the dismissal is rightful. I've used Clash to program FPGAs with computer vision algorithms.

I have to disagree with you on the GUI side. The most pleasant GUI programming I have done has been with ClojureScript/re-frame and with Elm, despite all the horrors of the web platform.

Good to know that things are starting to move. I was looking a while back at Lisp and Haskell for GUI programming, and at best it looked a bit half-baked. To me it was then obvious why people would implement their GUI in VB or the Win32 C API.

I've done GUI programming in a Haskell dialect. It was rather pleasant, thanks to a nice library by Neil Mitchell that took ideas from functional reactive programming and lenses.

You can get arbitrarily close to the hardware with functional languages. It's just that the flagship functional language that everybody knows and loves, Haskell, is not the one for the task.

You want a language with linear variables / uniqueness types for that.

Could you name the dialect and the name of the GUI lib? While Neil Mitchell mentioned he worked on a GUI on his blog, I somehow couldn't identify it.

The dialect is Mu, Standard Chartered's internal Haskell clone. I don't remember the name of the library, but it was also internal.

I've used JavaScript's React last year, and it feels somewhat similar in some respects; though a bit more messy and cumbersome.

Just a couple gripes, they seem like non-problems today:

Training -- Schools should be covering the training problem in this PDF, and I think that would help a lot.

Availability -- GHC is no longer an 'adventure' to install

Packagability -- Can't you just build standalone programs with most FLs?

Popularity -- Aren't they getting more popular?

"They don't get it" -- I don't think the example given of "i need some documentation and stability" applies anymore.

> Training -- Schools should be covering the training problem in this PDF, and I think that would help a lot.

This just raises a chicken-and-egg problem: why should schools (primary, secondary or tertiary) teach a language that is not "mainstream"?

Is there a language that is more "functional" than Python or Lua that's designed to be a teaching language?

"[...] this allows Haskell to be used as a scripting language for Microsoft's Internet Explorer web browser". Interesting. Never heard of that.


EDIT: Add link

If WASM had access to the DOM, I imagine we'd start seeing a replacement of JS and a democratization of web languages. I however dread the day I get to read a full-stack C/C++ webpage.

You can get a full-stack .net webpage with Blazor already

I’ve been playing with it in the last two days, and while it’s a bit annoying to have to deal with CSS for UI instead of XAML and the recompilation cycle, the framework is impressive. A lot of things can be done without touching JavaScript directly, and when in need the interop works well.

You can give WASM access to the DOM. What you describe is already possible, but most language runtimes are too big to be served for a website.

Are there efforts to cache runtimes? I imagine most programs would be rather small if you don't have to ship the runtime.

There isn't really a good workflow regarding dynamic linking and libraries in WASM right now, but things are moving.

Even then, what are the odds that your new user already has the particular version of your runtime cached already?

If you really have a non-trivial application, then downloading the runtime isn't that big a deal I guess. If you just have a website, it wouldn't be warranted.

Having said that, if you're careful, you can use Rust[1] or C/C++[2] with minimal overhead. I doubt that's going to become popular though.

[1] https://www.hellorust.com/demos/add/index.html

[2] https://floooh.github.io/sokol-html5/index.html

Well the chances are low if it's the first time he is visiting my site. But since this will be used for web apps the chances are very high that a user will revisit.

That case is already covered by browser caching.

Conjecture: programming will become split between “just build my app” and an elite set of programmers who do the most demanding work, eg cloud infrastructure for Amazon, Google etc. The latter will mostly use FP, the former will use JS et al.

I think the reason most coders (and I include myself) don’t use FP is that we don’t have enough training in it, we often don’t have a strong mathematical background, and we have an imperative mindset. It’s harder to learn those things, it’s a different way of thinking than we use in our day to day lives - much like mathematics is difficult for most people to grasp, because it doesn’t come naturally to them. Humans are not theorem provers.

As competition for jobs soars and wages plummet (hello, remote work revolution!), more of the top X% of coders will be forced to learn FP to maintain their competitive advantage, and you’ll end up with a lot more people using it. Some of that will trickle down, but most of us will be writing JS rather than Haskell for the rest of our lives.

>> an elite set of programmers who do the most demanding work, eg cloud infrastructure for Amazon, Google etc. The latter will mostly use FP, the former will use JS et al.

I don't think this is true. In fact Go seems to be at the front and center of most of the Cloud Native infrastructure projects such as Kubernetes, Docker, Prometheus, and many others.

Here is a list of companies using Clojure & ClojureScript:


Lists like this strike me as somewhat meaningless. Those companies probably also use bash scripts, hardly represents a significant chunk of the product. How much of their codebase is in Clojure?

FWIW if you're not using algebra to write your software you're not quite doing FP just yet.

Doing math on software to make new software is the overlooked part of FP in my opinion. You hear about "first-class functions" and "referential transparency" a lot but not as much about "equational reasoning".
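For a concrete (toy) illustration of what equational reasoning buys you, here's a map-fusion rewrite sketched in Python; the function names are my own, just for illustration. Because the functions are pure, the two expressions are provably equal, so you can rewrite one into the other purely for performance:

```python
# Equational reasoning, illustrated: since `double` and `inc` are pure,
# map(double, map(inc, xs)) == map(double . inc, xs)  -- "map fusion".
# The rewrite changes how many passes we make, never the result.
def compose(f, g):
    return lambda x: f(g(x))

double = lambda x: x * 2
inc = lambda x: x + 1

xs = [1, 2, 3, 4]

two_passes = list(map(double, map(inc, xs)))    # traverses xs twice
one_pass = list(map(compose(double, inc), xs))  # traverses xs once

assert two_passes == one_pass == [4, 6, 8, 10]
```

In a language like Haskell the compiler performs this rewrite for you; the point is that purity is what makes the rewrite valid at all.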

We have had these discussions for many years (even pg wrote about Blub IIRC), but the real reason (IMO) is that language choice matters far less than we like to think.

What business we choose to build with the language will outweigh the impact of language choice a hundredfold.

Our decisions on CI/CD, our discipline on building clean architecture with whatever language we like matters more.

In short, the things that make development a productive and enjoyable experience are linked less to the language than to the skill and professionalism of the development team - and "weird" languages tend to have more skilled and professional developers working in them, so the causality / correlation tends to get mixed up.

The upshot is: worry less about learning Lisp at work, and more about fighting your corner on why we should rewrite that 3,000 line monster engine module that we all call each day but that is a nightmare to change.

> "What business we choose to build with the language will outweigh the impact of language choice a hundredfold."

This is on one hand a truism that actually adds nothing to the discussion. In achieving success, of course the business matters. Even more so the best simplifications come from understanding the business domain, not from the tools used.


On the other hand it's also patently false. Build a web browser in Ruby. Build an operating system in JavaScript. Build your average web app in C/C++.

The tools used have inherent limits, being optimized for certain domains and not others, can make us more productive, can lower the defect rate and can keep us happy enough to see the project released.

In my experience _some_ static FP languages can lower the defect rate, can make certain problems easier to tackle, can make the code easier to maintain and refactor and due to performance being reasonable, they tend to be more general purpose than other languages — this trait being a consequence of static typing, making performance good enough, coupled with the languages themselves still being high level enough.


> "learning Lisp at work"

Ah, I see the problem ... many of you thought Lisp is some sort of secret sauce. It isn't, it never was, Lisp is a freaking mess.

The real problem in our industry is not applying the scientific method for advancing it. We thus rely on fashion and salesmen to sell us magic pills.

> On the other hand it's also patently false.

The sentence "language choice matters far less than we like to think", which means something like "switching between most pairs of languages in most situations doesn't make a big difference", and your examples which mean "there exist pairs of languages as well as situations where switching between them makes a big difference" are completely compatible.

In fact, I would say that the latter is not interesting because of course we know there are situations where high-performance languages are more or less appropriate, but the interesting question is whether languages that have similar aims and performance, and differ mostly in their linguistic features make a big difference.

> In my experience

Unfortunately, people will also support the effectiveness of homeopathic medicine based on their experience, and much more enthusiastically. I'm not doubting your sincerity, nor even the facts of your situation, but this doesn't count as evidence. On the other hand, the fact that different languages are tried in different places, in an industry with strong selective pressures, the fact that no single language or paradigm completely overtakes all others is evidence in favor of there not being a very big bottom-line impact.

> the fact that no single language or paradigm completely overtakes all others is evidence in favor of there not being a very big bottom-line impact.

Or perhaps because no paradigm is universally applicable to all cases. There will be a bottom line impact if the wrong tool is picked for the job.

Sure, but what I'm saying applies when comparing different languages that are reasonable choices for a given domain.

Anyway, this isn't a poll, and while opinions -- one way or another -- shape the popular opinion, they don't settle the empirical question.

>>> this trait being a consequence of static typing,

This is kind of my point. The goal is to build the right thing, right. Languages aren't the point; the advantage is the techniques/capabilities that come with them. 20 years ago memory management was a huge advantage in building the right thing right, and it had to be built into the language. But 20 years ago I had what I would now call a CD system, built around pxe-boot and bash. It made building the right thing right much easier, and had nothing to do with language choice. As do all the good things we consider - from engaged business teams to unit testing and pager duty. Our language choice impacts some of that; more often our organisational maturity matters more.

Yes some languages are "better" than others - but if the goal is to have an organisation doing something, then tools to build the organisation matter more. Software will of course be at the heart of that, but the language choice won't (much).

>What business we choose to build with the language will outweighs the impact of language choice a hundredfold.

This isn't always the case. If I tried to build a HFT company using Ruby for literally everything, it's almost guaranteed to be a complete failure, not only because it's extremely difficult to write allocation-free low level code in pure Ruby, but also because anyone I tried to hire with relevant experience to build the platform would be like "WTF, why are you building a HFT platform entirely in Ruby, that's such a stupid idea, no way I'm working on that".

Or for that matter if I decided to start a company to build a web browser in pure Ruby.

I kinda agree but can't help observing that quite a few HFT firms use Java, where it's also hard to write allocation free code. They use various approaches like pauseless GCs or just sizing the heap so that they don't need to collect during the day.

HFT is also a continuum. There's a latency/strategy iteration tradeoff involved. Going "quite fast" can be better than going "super fast" if your quite fast strategy is better, but keeping at the edge of good strategies requires constant code changes. Hence the benefits of high level statically typed GCd languages.

That's basically the crux of it: syntax matters a lot less in language adoption. Most mainstream languages benefit from a "killer" app, feature, software class or platform, which is why Swift became hugely popular because of iOS, Java/Kotlin got a boost from Android, browsers popularized JS, and many games & OS software are still written in C++ for its zero-overhead abstractions.

Ecosystem, native libraries, community, knowledge, tooling, familiarity, domain suitability, hireability and overall general dev productivity & UX matters a lot more in language adoption.

I'm not seeing any big draw card for FP, a killer app that clearly highlights its superiority over more popular alternative languages. There are many niche areas where FP shines, like implementing compilers or statistics & math models, but I don't see much appeal in mainstream usage where it adds enough value over dominant languages to overcome its deficiencies in other areas. A clear example is AI, an area FP should excel at, but which is still dominated by Python & its ecosystem.

The value proposition of pure FP languages also gets diminished as mainstream languages adopts FP language features.

Language choice matters. But, except for specific cases, and within a usually wide margin, it matters less than you think. http://www.paulgraham.com/avg.html

Yes, you'll shoot yourself in the foot if trying to write a website in C++ or maybe Java. But between Python/Ruby/Modern PHP, they will probably work fine.

And the main issue I think is that, while some languages like Haskell (or Elixir, though it seems much better than Haskell) or Lisp or F#, etc might excel at some places, their advantages won't compensate for their disadvantages in the 80% of cases you don't need them (this is comparing with the modern language landscape - and even Java now is much better than Java 20yrs ago).

I love this phrase from the article above: "The safest kind were the ones that wanted Oracle experience." though I think there's a deeper meaning: they're not safer because they're using Oracle. They're safer because they're picking Oracle (hence showing that their world view is narrow). Sure, they could be geniuses in disguise, picking Oracle for that 1% of cases where it really is way better than the competition, but most likely they aren't.

Amen. Or as Jobs said, "You've got to start with the customer experience first and work backwards to the technology. You can't start with the technology and try to figure out how you're gonna sell it"


And yet, iPad.

I'm responding to "You can't start with the technology and try to figure out how you're gonna sell it", not saying iPad is bad.

I was referring to the remarkable history of iPad, which was designed to have certain features, not designed for a particular use. Jobs started on it before iPhone, attempting to combine his and Ive's favorite features: simple, multitouch, thinner!, no keyboard, no mouse, no stylus, no buttons, lightweight, handheld (not used plugged it), great screen, etc.

It was not designed as a "solution" but as a great...thing...that would turn out to be great for...something. It was his baby, and he insisted that they keep working on it until they found a use, which they did: they turned it into an iPhone.

That was smaller (screen size) than Jobs had planned, so he made them keep going on bigger versions. They would be so wonderful, they would be great at...something. When they released the first one in 2010, they were intentionally vague about its target market. They presented it as maybe a substitute for laptops for productivity apps, they presented it as sort of a better Kindle for reading, as an artist's tool (for artists who fingerpaint), etc., with the hope that with such a great collection of features, it would be like the original LaserWriter and turn out to be perfect for some market.

Regardless of what that market would end up being, it did in fact start with the technology that two passionate designers wanted to exist for its own sake without knowing in advance what it would end up being used for.

What about it? Are you claiming the iPad started from the user experience or technology?

What do you mean by that? I’ve been using iPads as my main computing devices for the past five years (and almost exclusively for the past three). It depends on your workflows.

My feeling right now is that I disagree with this. I think there is a certain clarity in functional languages that more imperative languages are, perhaps, striving for as they develop. You seem to be able to express more with less in most functional languages. Out of curiosity, do you yourself work well and quickly in functional languages?

CI/CD is also separable from this issue, I think. But the thing that you can do well with it, which is write and regularly run tests, is not.

My thinking is that the main problem is people just aren't exposed to functional languages enough, and get stuck working to line their and other people's pockets before they have had enough exposure.

I also have some gripes with the actual document, which I will post in a top-level reply.

While true for the more trivial cases there are examples that show the opposite e.g. Whatsapp's choice of Erlang allowed it to scale up easier with less engineers - https://www.wired.com/2015/09/whatsapp-serves-900-million-us...

Yes, most languages you can use functional style

I like framing FP and OO as technologies, and technology takes money and time to improve. The technological aspect comes from the ability of language and IDE teams to translate your programming style into an efficient program.

I think there is a fundamental reason. The computer is inherently mutable. As a result, the basic abstractions of operating systems are mutable.

Thus, except for nicely contained problems such as text transformation (see Pandoc for an awesome use of Haskell), interfacing with the computer and the rest of the world becomes an impedance mismatch. Yes, monads are awesome, but you shouldn't need a math PhD to print "Hello World" to the screen and understand what it is doing.

The other issue is performance. Yes, performance is generally good enough, except when it is not, and when it is not, it is not obvious why you are having issues or what you can do to fix them. This once again comes to the abstraction mismatch between the mutable computer and the pure functional language.

Monads are among the simplest abstractions you can find in computing. Compared to almost anything you need to understand in software development (how the hell do I make webpack do the thing I need?) they're ridiculously easy.

The fact that so many devs are scared of monads and believe outrageously silly things like you need a math phd to understand them seems to be similar to being afraid of drinking dihydrogen monoxide. Even though it's only water, it has a complicated sounding scary name, so it must be bad.
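To make that concrete, here's a minimal sketch of the idea behind the Maybe monad, written in Python so no Haskell syntax gets in the way; the names (`bind`, `parse_int`, `reciprocal`) are my own for illustration. The whole "scary" concept is: chain steps that may fail, and let failure short-circuit the rest:

```python
# A tiny Maybe-style bind: if the value so far is None (failure),
# skip the next step; otherwise feed the value into it.
def bind(value, fn):
    return None if value is None else fn(value)

def parse_int(s):
    # Returns None instead of raising on bad input.
    return int(s) if s.lstrip("-").isdigit() else None

def reciprocal(n):
    return None if n == 0 else 1 / n

def recip_of(s):
    # Two fallible steps chained; no try/except, no if-ladder.
    return bind(parse_int(s), reciprocal)

assert recip_of("4") == 0.25
assert recip_of("0") is None    # division guarded
assert recip_of("abc") is None  # parse failure propagates
```

Real monad implementations add types and laws around exactly this pattern, but the pattern itself is no deeper than "None short-circuits".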

There’s also a huge number of people who are fairly bad at explaining what exactly it is. I don’t think I should fault them, per se, it’s sort of trying to explain why water is wet.

Modern CPUs re-order your instructions based on data dependencies.

If you want to argue from the bare metal, that should count for something, or not?

In any case, compilers are pretty good at making you not have to care too much at what goes on in the lowest level.

(Unless you care eg about absolute speed. But then, an imperative-only understanding is not going to cut it either.)

> you shouldn't need a math PhD, to print "Hello World" to the screen and understand what it is doing

Which one requires a PhD?

    main = putStrLn "Hello world"

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, World!");
        }
    }
No doubt, however, writing programs using immutable variables makes programming easier. The compiler can do the transformation from immutable to mutable much safer than the human mind.
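As a small contrast, sketched here in Python rather than Haskell for neutrality: both versions below compute the same sum, but the fold threads its accumulator through a pure function, so there is no in-place state for the programmer to get wrong; the compiler (or runtime) handles the mutable details.

```python
from functools import reduce

xs = [3, 1, 4, 1, 5]

# Imperative style: a mutable accumulator updated in place.
total = 0
for x in xs:
    total += x

# Functional style: reduce threads the accumulator through a pure
# function; our code never reassigns a variable.
total_fp = reduce(lambda acc, x: acc + x, xs, 0)

assert total == total_fp == 14
```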

    main = putStrLn "Hello, world!"
No PhD required.

Interestingly written 3 years after the introduction of a functional language that eventually went on to become the most popular language there is.

JS has some properties that are typically associated with functional languages but is missing others. But then again, so does Java if you're willing to ignore the lack of syntactic sugar for them, so what do definitions even mean anymore.

Java only got some functional trappings fairly late in its life.

JavaScript was a relatively transparent reskin of a (rushed) Lisp from the start.

Which language are you talking about?

The one that controls the web application you're currently using.

It controls the comment collapsing; everything else doesn't require JS.


My experience so far with functional languages (as an OOP-er):

* After a whole evening studying I am able to subtract 1 from 5.

* After spending hours checking what is wrong it just does not compile.

* After the hello world part there are no libraries that can be used for web development.

* Lack of good documentation.

* Lack of good tutorials.

But lately I discovered Elixir and I believe this could be a language that would change my mind about functional programming. Phoenix looks great and LiveView matches .NET Blazor (which I use a lot).

But then there is still the question: why even bother using a functional language? Because I have little experience in functional languages this is hard to answer for me.

Edit: "After a whole evening studying I am able to subtract 1 from 5." I should have formulated this differently. What I meant was: sometimes you follow a very long tutorial about a functional language. You will learn a lot about the syntax, but in the end when you try to use it, you will only get to the basics of writing a function that subtracts x from y.

Which language did you try to come to terms with, and what material did you use?

How much are you exaggerating?

I dunno man, I don't think I believe you. After a whole evening of studying you can subtract 1 from 5?

Funnily enough, 5 - 1 works almost the same way in every language. Maybe with different syntax, but I can't think of any FP language where I couldn't do it.

I mean Lisp. But It's pretty debatable whether Lisps are functional languages. Imo, static typing is a requirement, at least culturally.

As far as I can tell, at the International Conference on Functional Programming the Schemers are treated as full members of the functional programming community, despite lacking static types.

Static types are neat, and FP people tend to really like them, but I wouldn't call them a strict requirement even culturally.

You can make a statically typed Lisp, of course. Just like you can make a dynamically typed ML-like language. Haskell even sort-of supports that. See https://gitlab.haskell.org/ghc/ghc/-/wikis/defer-errors-to-r...

Yeah but (- 5 1). Maybe I just do not care about syntax anymore. Static typing is great but I can live without it. Static typing without the advanced features is not really as helpful. Ada got many things right about types. Like subtypes with constraints for example.

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact