Haskell is a Bad Programming Language (2020) (shitiomatic.tech)
340 points by fpoling 10 days ago | 307 comments





Haskell is a language for experimenting; success is not its goal, and pursuing it would divert resources from actual research.

They had promises and async/await in 1995. They're trailblazers that other languages can follow (as Rust did).

I would add that, after Stack, tooling is not a problem anymore. It's not as good and polished as Rust's cargo, but it's ahead of several other languages.

Still, as a language, Haskell is not ideal for teaching and productivity.

There are too many different ways of doing things (eg. strings, records); compiler errors need improvement; the prelude has too many exception-throwing functions (eg. head); exception handling is not something I want in a pure language; GHC extensions change the language so much that using some extensions almost feels like having to learn another language.
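For reference, the standard workaround for the exception-throwing `head` is a total variant returning `Maybe`. A minimal sketch (`headMay` is the name used by the `safe` package):

```haskell
-- A total alternative to Prelude's partial `head`:
-- it returns Nothing on an empty list instead of throwing.
headMay :: [a] -> Maybe a
headMay []      = Nothing
headMay (x : _) = Just x

main :: IO ()
main = do
  print (headMay ([] :: [Int]))  -- Nothing
  print (headMay [1, 2, 3])      -- Just 1
```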

On documentation, I can't say I feel the need for it, but I understand some developers may be used to programming against documentation and feel lost without it.

I think that Haskell is a great language to prototype pure business logic because of the type system and focus on purity, but it has several warts, because haskellers focus more on language research than DX.

The reason I stopped using Haskell is that I was bitten by exception handling (which is a feature shared by many other languages, incidentally!) and by GC spikes.

I still like Haskell; it's closer to my "ideal" language than any other. But for production, Rust is more usable (albeit a bit uglier).


> Haskell is a language for experimenting; success is not its goal, and pursuing it would divert resources from actual research.

You rightfully point to Rust, which took a lot of inspiration from Haskell, but I think it's worth emphasizing just how much of the progress in programming languages in the last two decades was inspired by functional programming (many features were not invented in Haskell, but some like type inference were popularized by it).

For example: proper type inference, algebraic data types (enums in Rust) and consequently option types, pattern matching, property-based testing, immutability by default, parametric polymorphism (generics), ad-hoc polymorphism (type classes/traits/...), first class functions (very old idea but only recently common in mainstream languages), ...
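Several of those features fit in a few lines of Haskell. A sketch showing an algebraic data type, pattern matching, parametric polymorphism, and type inference together (the `Shape`/`area` names are made up for illustration):

```haskell
-- An algebraic data type (roughly, a Rust enum).
data Shape
  = Circle Double
  | Rect Double Double

-- Pattern matching; the compiler can warn if a case is missed.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- First-class functions and parametric polymorphism:
-- `map` works for any element type, and the definition
-- of `areas` needs no annotations to type-check.
areas :: [Shape] -> [Double]
areas = map area

main :: IO ()
main = print (areas [Circle 1, Rect 2 3])  -- [3.141592653589793,6.0]
```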


The Rust docs claim most things actually came from ML and OCaml: https://doc.rust-lang.org/reference/influences.html

They only mention Haskell's type classes and type families as inspiring a Rust feature directly (traits).


It's kind of alluded to, but, while FP languages, as a broad category, were the progenitor of a lot of different PL ideas, Haskell ended up implementing many of them in a single place. It's fair to say "we were influenced by (this other language that originated an idea)", but it's also fair to add "and Haskell also has that feature". Do it enough times, and you start to see why people claim that Haskell is important; being a research language, it's able to have all of these ideas implemented in one place in pursuit of its design goals, where other FP languages pick and choose in pursuit of different ones.

I think it would be best to create a spin-off of Haskell: a small subset with only the good parts of Haskell, a few hand-selected extensions, a new prelude, a focus on performance, very easy tooling/build chain, and use in production for businesses: "Production Haskell Lite".

The original Rust compiler was written in OCaml.

I learned type inference in Caml Light, when Haskell was still Miranda, and I bet the designers of all major languages have similar backgrounds with regard to ML type inference.

Even type class ideas can be found in CLU, ML, and Objective-C, before the paper that gave rise to their adoption in Haskell.


For history buffs: https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_sy... was invented in 1969 (and rediscovered by Milner in 1978).

AFAIK, that was the basis of type inference in many of the non-mainstream languages before they branched off into their own more powerful type systems.


from lisp and ML, but yes, haskell is a white-coat language.

a white-collar language

no.

white-coat as in white lab coat, i.e. a research language.


> I think that Haskell is a great language to prototype pure business logic because of the type system and focus on purity, but it has several warts, because haskellers focus more on language research than DX.

This is very true. We recently started https://github.com/digitallyinduced/haskell-ux to keep track of haskell DX improvements. Some of our suggestions are already issues on the GHC (haskell compiler) bug tracker.

Here's an example of a great UX improvement in GHC: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4711


Good to see that the article is at least underselling the fact that people are actively working on this.

Exactly. It's a research vehicle. Not everything in Haskell is useful for business or productivity.

The reason I stopped using it is performance. Despite having state-of-the-art optimizations, it's still too slow for my needs, and writing fast code is way too hard compared to C++ or C.

I was also quite disappointed to learn that a lot of useful concepts (Monad transformers/stacks) have a runtime performance impact when it looked to me like I was just playing with types.


> It's a research vehicle.

Well, it's also one that happens to make people lots and lots of money. Standard Chartered, for example, uses it extensively. Facebook uses it for spam-filtering logic. Niche, perhaps, but painting it as purely for research is just incorrect.

Yes, it's not going to be great if all you're writing is integrations with various vendors using SOAP or just legacy or odd protocols. Those have libraries or code generators on the JVM/.Net/etc. platforms... not so much in Haskell. However, this has nothing to do with the language, it's just a matter of people actually doing the work to support those things.

Everything else, though... you're golden. It has a learning curve, but there's a reason that Scala is moving ever closer to monads, adopting proper syntax for type classes, etc.


This is a good point.

The language doesn’t have to be good to make money; in fact it can be quite bad.

Oddly, while modern development embraces agility, many things often benefit from stability, and a bad language has slow change built in.

Why? Well, if the language is bad, you have to pay your developers well to retain them, since few want to program in that language. The developer comes aboard because of the money and the challenge. Once they join the company and become a developer in a bad language, there are fewer alternatives for the developer to find another job in that bad language. This means that they have to stay around and get to know that bad language better, making it even harder for the business to hire others to help or replace them. So, the development doesn’t suffer as much from team scaling problems, and change can’t happen as quickly.

This isn’t what you want for everything of course. Especially when talking with a VC about a startup.

But JS, an ok language, has gotten to a similar level of nonsense through the difficulty and complexity of its rapidly changing ecosystem and changing browsers.

And few languages have really avoided unnecessary difficulty over time in their ecosystems.


Absolutely.

I think we in the IT community should really strive to be better at nuance when discussing these things.

I mean, as much as I've seen "Haskell sucks" posts, I've also seen quite a lot of "Haskell solves all your problems!" posts. That's not how anything works in the Real World(TM) when solving concrete business problems -- whatever that business domain may be. Rather, it's trade-offs all the way down. It does get tiresome to read these re-hashes of debates which should have been over already. (EDIT: For anyone wondering, the answer is: It depends.)

I think the "tribes" blog post mentioned in https://news.ycombinator.com/item?id=25699983 is very insightful for the meta-level of this discussion.

EDIT: Final edit while I can. We see this a lot with "lol PHP" style comments... and all those achieve is a sense of elitism and making people who actually get lots of important work done in PHP feel bad. I don't want that world.


> The language doesn’t have to be good to make money; in fact it can be quite bad.

In fact, I think that there's an inverse relationship between how good a language is, and how much money has been made with it, on the condition that you have heard of it.

Why? Well if you've heard of a language, it's likely because it is a language that has proliferated through the community. The worse a language is the more it must have made people money to 'survive' and proliferate.

Think of the worst/ugliest languages you can (coming to my mind: Visual BASIC, C++, PHP, JavaScript). These are languages that made an exceptional amount of money, and that allows them to survive despite being so bad.


Hi.

I am a professional software developer and my full time job is currently writing software in Haskell for a bank.

I also stream projects and open source work I do in Haskell once a week.

I am not a researcher. I don’t have a horse in that race. Although I am thankful for the research that is done as things like STM and type system extensions benefit my work greatly.

I really wish Haskell could shed this meme that it’s a “research” language and that “nobody uses it.”

I’m Nobody. I use Haskell every day. It’s a practical language for industrial strength programming in a pure functional language.


The fact that Haskell is a good industrial strength language is a byproduct of its quest for language design excellence.

"Avoid success at all cost" implies an unwillingness to sacrifice on design to please corporate needs.

I don't think it's a "meme"; there really is a focus on research over being production-ready. Monads, arrows, dependent types: Haskell is where a lot of language research happens (and it's the language chosen by a lot of researchers).

Sure, you can ship Haskell in production if you're happy to fill in the gaps. I did some of that for a non-Haskell company, and it wasn't as easy to deploy as apps written in other languages, and community-provided solutions for logging/monitoring were lacking.

That said, I'm always happy to hear about people using Haskell in production and wish you the best


> I would add that, after Stack, tooling is not a problem anymore. It's not as good and polished as Rust's cargo, but it's ahead of several other languages.

In a way Cabal 3 is even ahead of cargo, by being able to share built libraries between different projects while still having a sandbox-like build for each project.

> There are too many different ways of doing things (eg. strings, records), compiler errors need improvement, the prelude has unsafe functions (eg. head), exception handling is not something I want in a pure language, GHC extensions change the language so much that using some extensions almost feels like having to learn another language.

For such a self-proclaimed safe language, some things are almost comedy, like getting exceptions for uninitialised record fields or unhandled alternatives in case expressions.
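Both of those compile (with warnings under the relevant flags) and only blow up at runtime. A minimal sketch, with made-up `Person`/`classify` names:

```haskell
data Person = Person { name :: String, age :: Int }

-- Compiles with a -Wmissing-fields warning; evaluating
-- `age p` throws a "missing field" exception at runtime.
p :: Person
p = Person { name = "Ada" }

-- Compiles with a -Wincomplete-patterns warning (under -Wall);
-- `classify 1` throws a "Non-exhaustive patterns" exception.
classify :: Int -> String
classify 0 = "zero"

main :: IO ()
main = do
  putStrLn (name p)        -- fine: "Ada"
  putStrLn (classify 0)    -- fine: "zero"
```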

But Haskell still has a place in my heart and I'm still following its development. For my side projects, though, Rust has replaced it, by being in some respects even safer, and above all safe in the places that matter most to me, and also quite a bit more pragmatic. For me Rust combines the best parts of Haskell and C++.


> some things are almost comedy

We're trying to fix that! https://github.com/ghc-proposals/ghc-proposals/pull/351


> Haskell is a language for experimenting; success is not its goal, and pursuing it would divert resources from actual research.

Its somewhat facetious motto is "Avoid success at all cost".


   Avoid $ success at all costs

I dunno why people downvote, obviously this means

    Avoid (success at all costs)
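That reading follows from the fixity of `($)`: it is plain function application declared `infixr 0`, the lowest precedence, so everything to its right groups first. A small sketch:

```haskell
-- f $ g $ h x  parses as  f (g (h x)),
-- because ($) binds less tightly than any other operator.
result :: Int
result = negate $ sum $ map (* 2) [1, 2, 3]
-- i.e. negate (sum (map (* 2) [1, 2, 3])) == negate 12

main :: IO ()
main = print result  -- -12
```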

> async/await in 1995

Unix had fork and wait before 1979.

Unix shell:

  # run some commands asynchronously

  $ command1 &
  $ command2 &
  $ command3 &

  # wait for all of them

  $ wait

Multiprocessing and parallel programming are a different thing from async/await, which primarily has to do with green threads and coroutines. You're right that the ideas go way back (hell, Knuth was writing about coroutines in TAoCP in the '70s!), but this does not qualify.

Unix was originally green threads, at least in kernel mode: it was a non-preemptable kernel running on a uniprocessor. This means that kernel code ran until hitting a voluntary context switch. User space was preemptible.

(User space being preemptible doesn't really make a semantic difference to the shell & and wait examples, unless some of the commands contain lengthy CPU-bound loops.)


> They had promises and async/await in 1995.

Really? Do you have a source on this? I was under the impression that C# was the pioneer here.


Async/await is specialised syntax for the more generic concept of monads, and Haskell was the language in which this was most heavily researched.

Specifically, the continuation monad describes asynchronous computation (promises) and was one of the motivating examples in the early '90s, going back all the way to Moggi's original "notions of computations" paper from 1991: http://homepages.inf.ed.ac.uk/wadler/topics/monads.html
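The connection is direct: a promise is a computation that hands its result to a callback, which is exactly the continuation monad. A minimal sketch (the real version lives in `Control.Monad.Cont` in the `mtl`/`transformers` packages; `addAsync` is a made-up stand-in for a callback-taking operation):

```haskell
-- A computation that, instead of returning an `a`,
-- passes it to a callback `a -> r` -- i.e. a promise.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont cf <*> Cont ca = Cont $ \k -> cf (\f -> ca (k . f))

-- (>>=) chains callbacks: exactly what `await` desugars to.
instance Monad (Cont r) where
  Cont c >>= f = Cont $ \k -> c (\a -> runCont (f a) k)

addAsync :: Int -> Int -> Cont r Int
addAsync x y = pure (x + y)

main :: IO ()
main = runCont (do a <- addAsync 1 2   -- "await" the first result
                   addAsync a 10)      -- then chain the second
               print                   -- prints 13
```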

Async/await and LINQ were the brainchild of Erik Meijer, a Haskell researcher.

Also see https://news.ycombinator.com/item?id=25565229 from a recent discussion here.

Specifically http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.... where Erik tells his story about bringing Haskell concepts to .NET.


FWIW, C# wasn't even the first .NET language with it. F# had async computation expressions a few years earlier.

But F# doesn't have async/await. As you write, it has "computation expressions", which are inspired by Haskell's "do-notation", but are more generalised and powerful (than both Haskell's "do-notation" or Scala's "for-comprehension", let alone async/await).

One of the things I liked best about F#.


Idk if you can strictly say that.

Haskell allows more generalized do blocks because of HKTs. I blogged about this a few years ago. https://www.sparxeng.com/blog/software/higher-kinded-fun-in-.... (Granted, I've never had a use for that beyond code golf).

And C# is nice because you can await anywhere in an expression instead of just at a "let!" declaration. This is something I do miss when using F#.

But agreed, F# is usually my language of choice.


I think the Haskell abstraction is what allows monad transformers. Saw some hacks in F# to allow similar, but you lose generic "lift", for instance.

More precisely, I was referring to monads and do-notation with that phrase.

Here are some references on when it was invented / implemented: https://www.reddit.com/r/haskell/comments/8rkrgq/when_was_do...

Also a JS gist on async/await and do-notation: https://gist.github.com/MaiaVictor/bc0c02b6d1fbc7e3dbae838fb...


promises, futures, etc... are much older - it takes 15 years for stuff pioneered in research to make it into general purpose langs :p

- https://d1wqtxts1xzle7.cloudfront.net/50657913/p260-liskov.p...

- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97....


Even though the concepts weren't invented for C#, you are technically correct in that C# pioneered the keywords async and await.

But since industry has adopted Haskell, it now has to worry about backward compatibility, which AFAIK makes some parts of Haskell ugly. I wish there were something like the RHEL/Fedora model, rather than Simon Peyton Jones being dictated to by industry through ceding to backward compatibility.

> It's not as good and polished as Rust's cargo, but it's ahead of several other languages.

Until cargo supports sharing of binary crates it isn't as polished as I care about.

All the languages I use, except for scripting languages, support distribution of binary artifacts, including Haskell.


> prelude has unsafe functions (eg. head),

`head` is not unsafe, it is a partial function. An unsafe function can lead to u.b.; the behavior of `head` on an empty list is very much defined. Haskell is not a total language.


As an aside -- Rust's definition of "unsafe" (e.g. "can lead to u.b.") is not the only definition of unsafe one can use for a programming language, which can have different safety guarantees.

As a motivating example, in many languages converting a reference to an integer containing a memory address is unsafe (e.g. Java[1]/golang[2]/C#[3]/Haskell[4]), but this is considered safe in Rust.[5] All these languages literally use the word "unsafe" for it.

[1]: See sun.misc.Unsafe, which indirectly provides this https://stackoverflow.com/a/7060500/315936

[2]: https://golang.org/pkg/unsafe/#Pointer

[3]: pointer types are only allowed with the unsafe keyword, see: https://docs.microsoft.com/en-us/dotnet/csharp/programming-g...

[4]: I believe this requires unsafeCoerce https://stackoverflow.com/a/18563789/315936

[5]: https://doc.rust-lang.org/reference/unsafety.html


I think in all these cases unsafe actually means 'leads to UB'.

Converting a pointer to an integer address is problematic if you have a garbage collector due to compacting. Technically it's still the dereferencing that actually causes the UB, but I don't think this is too much of a leap.

C# marking all pointers unsafe also makes sense in the same way, because "passing pointers between methods can cause undefined behavior."


I agree, I'm just pointing out that not all languages have the same safety model, even if they are similar.

Converting a pointer to an integer address and never dereferencing it (e.g. to print it) does not lead to U.B., but it does leak ASLR information. Some languages consider that safe (Rust) and others do not. I think that is an important distinction.


It is Haskell's definition, however.

All functions that have `unsafe` as part of their name have it because they can lead to u.b. with improper use.


Is it? The definition I see in the GHC docs seems to be broader than that: https://downloads.haskell.org/ghc/latest/docs/html/users_gui...

Regardless, I agree that the function under discussion, "head", is not "unsafe" in either sense. That's why I said "as an aside".


Thanks for the correction, changed to exception-throwing - that's the behaviour I don't like

I think it's worth pointing out that your values[0] may not align with Haskell's values. That's absolutely fine, but it doesn't mean that Haskell is "bad" in some objective sense.

AFAICT, most of your 'complaints' would apply equally to, say, Java or C#.

[0] I can't recall which exact presentation it was, but Bryan Cantrill had a brilliant segment on this in one of them. Perhaps others around here can remember?


> I can't recall which exact presentation it was, but Bryan Cantrill had a brilliant segment on this in one of them. Perhaps others around here can remember?

I think that's his discussion of language values in "Rust and Other Interesting Things" at Scale By The Bay 2018. https://www.youtube.com/watch?v=2wZ1pCpJUIM

[edit: This talk is also titled "Platform values, Rust, and implications for systems software". I'm not sure which title is preferred.]


Yup, sounds about right. That was an excellent insight from him.

They didn’t argue that it was “bad in some objective sense”, they pointed out a feature in Haskell that they specifically don’t like. Unless you consider Haskell to be a perfect language, discarding an opinion that was labeled as exactly that seems silly to me. Especially with such a broad strokes “love it or leave it” reply...

Those complaints apply to those languages too, yes. What’s your point here?


I may have overreacted to the title of the blog post and carried that over to the post I replied to :). That's my bad.

Haskell has plenty of legitimate criticisms for use as a production language.

However this article isn't it. Please don't read this and repeat the opinions posted here. A lot of what the author says is just plain wrong and shows a misunderstanding of basic functional idioms.

For example: "Functors are basically an Object with a internal state changing method in typical OOP terms." Uh, no. Functors are stateless, with a well defined semantic. This is nothing like an object with internal mutable state, with instance methods that mutate it in god-knows-what-way.

There are a lot of mischaracterizations like this in the article.


>>Haskell is not too hard to learn.

Obviously he did not even manage that, then.


> programming languages are meant to ease the task of creating computer programs as opposed to writing assembly by hand

This! The haskell ecosystem is missing a certain kind of pragmatism. There's a lot of beautiful type abstractions, talking about monads, etc., but not enough builders doing actual application development. In my opinion it's not the language that is bad, but the ecosystem. Missing documentation, missing tooling and infrastructure, no focus on actually building applications.

We're trying to fix that with IHP, a new Haskell framework with a focus on actually building applications. Imagine the productivity of Rails combined with the type safety of Haskell.

Six months after its release it's already the second-biggest Haskell framework, and we just had a new record of weekly active users last week. To me this shows that by fixing the ecosystem, Haskell can reach a lot more people than it currently does.

Check it out: https://ihp.digitallyinduced.com/ GitHub: https://github.com/digitallyinduced/ihp


This indeed seems to be the main/only point of criticism in this post that is valid: however, it is not that Haskell has no "pragmatic" libraries to get stuff done (e.g. WAI/Warp, Yesod, Servant, ... are top notch, practical libraries if you are writing network/HTTP services). They do seem to drown in the sea of libraries/blog posts that are focused on the academic/abstract stuff. The end result does not feel consistent, and reminds me of the horrors we had with the early C++ metaprogramming efforts: it looks cool, but you end up fighting the language and produce unreadable code.

For Haskell to become a successful "industrial" language, I think most of the dependent-typing stuff should probably go (to Idris/Agda etc.), so that a clear and consistent Haskell subset can be defined.

The other arguments in the article are just weak. The rant about data not having a type misses the point. Sure, often you receive data that you need to inspect to know what it is. You can easily do this in Haskell (just label it with an UnknownData type, and have a function that inspects it and returns, depending on the contents, the right type). The big advantage is that you don't have to keep doing this same check.
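A sketch of that pattern (the `UnknownData`/`Parsed` names are made up for illustration): inspect the raw data once, and every downstream consumer gets a precise type with no repeated checks:

```haskell
newtype UnknownData = UnknownData String

data Parsed
  = AsInt Int
  | AsText String
  deriving (Eq, Show)

-- Inspect the raw payload exactly once...
classify :: UnknownData -> Parsed
classify (UnknownData s) =
  case reads s :: [(Int, String)] of
    [(n, "")] -> AsInt n
    _         -> AsText s

-- ...after which every consumer works with the precise type.
describe :: Parsed -> String
describe (AsInt n)  = "an integer: " ++ show n
describe (AsText t) = "text: " ++ t

main :: IO ()
main = do
  putStrLn (describe (classify (UnknownData "42")))  -- an integer: 42
  putStrLn (describe (classify (UnknownData "hi")))  -- text: hi
```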

Types being the cause of difficulty in refactoring when business requirements change is the opposite of my experience. In large dynamically typed codebases, being sure that a large refactor caught everything is very hard / costly in test coverage. I have seen this go wrong many times. Having the compiler point out what you have missed, based on the types, is very helpful.

While I think the arguments are a bit weak, I do agree that it is at least unclear if Haskell is a sound choice as a production language at this time. Fighting against an ecosystem is not something you want to be doing while building your product. But in contrast to the author, I do think this is fixable, and see steps happening in the right direction (e.g. with IHP, but also with the efforts around the Haskell Foundation and the Haskell Language Server)


> but not enough builders doing actual application development.

It's simply because the community is small.

People compare Haskell to other languages like Java or JS as if there were an equal amount of manpower, and therefore, if not everything has a library, it must be because Haskellers are wasting their time on useless stuff. It is not.

Other small languages communities have exactly the same issues, it's not specific to haskell. The only thing that is specific is outsiders feeling entitled to an ecosystem and blaming "these damn fp ivory tower types" for not providing it.


No, it's really not. Haskell is a pain in the ass to program in. It's a language developed for mathematicians, not the real world. It makes the same mistakes as OpenCyc and expert systems.

I know assembly, C, C++, Scheme, Common Lisp, Python, MATLAB, C#, some D, some Javascript, some Mathematica. Programming in Haskell looks promising initially, but just drains all the joy and productivity out of my programming.


> [Haskell] just drains all the joy and productivity out of my programming.

It's fine that you don't like the language. I mean, not everyone has to enjoy a language. On the other hand, there's a difference between voicing an opinion, which is personal, and making claims like:

> Haskell is a pain in the ass to program in. It's a language developed for mathematicians, not the real world.

Because then, you are discarding other people's experience about the language, while you claim your own experience is relevant.


> > [Haskell] just drains all the joy and productivity out of my programming.

> It's fine that you don't like the language. I mean, not everyone has to enjoy a language. On the other hand, there's a difference between voicing an opinion, which is personal, and making claims like ...

Indeed. Different people find different things fun:

https://news.ycombinator.com/item?id=25711790


I don't think that quote is complete enough though. If you really believe that programming languages are just tools for building programs (in particular "useful" programs that make money) then yes, Haskell has not much to offer you over PHP or Ruby and other languages in that vein.

On the other hand if you are interested in research, exploring the extent of the design space for programming languages and what you can (or can't) express in them, then Haskell is an excellent language for you. Both for the community of like-minded researchers and for the flexibility of the language/compiler itself.

On a wider scale, I've always liked https://josephg.com/blog/3-tribes/ as an explanation for why this topic comes up again and again and again. The writer of the article is more of a "type 3" programmer. He wants to build a house but does not care much which tools he uses as long as they don't get in the way. It is the end result they are interested in, not the process. Haskell is, by design as a research language, more process oriented than result oriented. And yes, you can still get results with it, as IHP and SC and Jane Street have shown. But they seem to be the exceptions that prove the rule.


I think for making stuff the speed of iteration is the most important metric.

Haskell is great for this because:

- In dev mode it's super fast, see https://www.youtube.com/watch?v=nTjjDo57B8g&feature=youtu.be

- You can change features of your product more easily thanks to the type system. Doing a big refactoring with Rails always adds many bugs. With Haskell you spend less time on these kinds of changes because the compiler tells you what needs to be changed.


> With Haskell you spend less time on these kinds of changes because the compiler tells you what needs to be changed.

This. I think all the other folks in this thread are just throwing shade on Haskell after struggling through a tutorial (admittedly, it can be pedagogically difficult to introduce its concepts compared to Python or Ruby or JS, etc.).

Perhaps Haskell will never beat C++ & friends (Rust, C, maybe Swift?) at performance, but what it will beat is the myriad other web apps written in Python and Ruby, which I bet a lot of folks on this site use. All that it lacks are the same levels of batteries-included frameworks like Django and Rails.

I can't even begin to mentally enumerate the number of times I've seen Python (and, much less often, Ruby) codebases at the places I've worked that were critical to the infrastructure (often having been the outgrowth of the initial POC that got the product started) that were mysterious Monoliths, which if you tried to change would be some Hydra situation: for every bug fixed, another 2 or 3 or 10 are created.

Perhaps a team of really well-disciplined Python professionals can go 0-60 very quickly due to the language's flexibility, but that counts on the discipline being maintained going forward in the code, and on training new hires or hiring only experts to work on the project.

Haskell is basically like encoding this discipline into the compiler itself. You don't have to spend time praying that your test suite is adequate (although tests are still needed), or picking over the minutiae of some pull request trying to remember all the things that the `|` operator may have been overloaded with by that "coding ninja" who decided to put esoteric things in your Python codebase before he got poached by some other company and peaced out without documenting anything.

For all of the pitfalls that Haskell has from a language perspective (which can often be avoided by using an alternate prelude), the advantages when compared to other modern languages made for "moving fast and breaking things" are very prominent: I no longer have to wonder what the `data_object` parameter of my `update_callback_param_cache` method is supposed to be and waste minutes of my life desperately grepping through the code or, if possible, `pdb`ing my way through a live version and trying to trigger that code path.


Your example is live updating html in a browser, NOT live updating the type system. In other words, your evidence is trivial, unrelated to your main claim, and unconvincing.

> If you really believe that programming languages are just tools for building programs (in particular "useful" programs that make money) then yes, Haskell has not much to offer you over PHP or Ruby and other languages in that vein.

I'd agree with this generally. During grad school I loved hacking my research on haskell.

But this doesn't seem to be a consistent claim from the community. It'd be helpful if the core maintainers published this somewhere. Because I consistently get haskellers telling me that I'd be more productive as an engineer and my programs would have fewer errors if only I worked in Haskell.

Some other languages do have this sort of vision statement (or, at least a fairly well implied one) that makes it clear what they are offering and what they aren't. C++ wants to give you speed and flexibility without overhead for things you aren't using. That's been consistent for decades.


> Because I consistently get haskellers telling me that I'd be more productive as an engineer and my programs would have fewer errors if only I worked in Haskell.

i still think it's true

i have worked in haskell and i currently get paid to write more mainstream languages.

the mainstream languages are literal wastes of my time in comparison. not my money tho, so i'm fine with my employer paying me to waste time.


The quote

> programming languages are meant to ease the task of creating computer programs as opposed to writing assembly by hand.

is wrong: Assembly is certainly a programming language, which the author acknowledges when writing `writing assembly`.

As mentioned elsewhere, the whole blog post appears to be a troll post and should definitely not be taken as a part of a sober discussion on programming languages.


He should have written “as opposed to writing object code by hand,” because assembly does save some effort there. Some architectures were even designed around making assembly code more readable, rather than making it the compiler’s problem.

Thank you, will check it out. I was going to work on a web app in Haskell and am scouting libraries.

Not sure if you are the right person to talk to but there's a small mistake on the front page. Missing "language" in functional programming language (at least on my phone)

The line which sprung out most to me was the one claiming that "Clojure took the world by storm". This is simply not the case. JavaScript has taken the world by storm, as has Python. Ruby, PHP and others have taken the world by storm before and are slowly fading. Clojure has never taken the world by storm and (IMO) never will. Neither will Haskell btw.

To me, Clojure is very much on the same level as Haskell: an extremely niche programming language with a small (sub-1% in TIOBE) but dedicated following that is unlikely to grow much beyond what it already is.

Rest of the article: meh. It presupposes a lot of what it thinks Haskell should aspire to without investigating whether those things are actually what it is aspiring to be.


Yeah, I think Clojure people have a lot of insecurity around Haskell and it’s rarely a good look. Clojure is a language of sensible compromises without any real philosophical slam dunks, which doesn’t seem to satiate engineers’ natural desire to win internet points.

> Clojure people have a lot of insecurity around Haskell

I beg to differ. I think it's the other way around. I rarely see Clojure folk bashing Haskell - they usually acknowledge the strong points of Haskell or any other language. They very often borrow ideas from other languages, libraries, and tools. I've heard they are figuring out interop with Python and R. They've built Clojure-like Lisp dialects that work on Golang and Erlang, compile to Lua, etc.

Haskellites, however, get sad and defensive anytime someone mentions that it's not so widely used in the real world.


That’s a nice thought, but I haven’t seen examples of that myself. And your claim of real-world use seems to be the opposite of what I found with a quick Google search:

https://redmonk.com/sogrady/2020/02/28/language-rankings-1-2... https://www.tiobe.com/tiobe-index/ https://insights.stackoverflow.com/survey/2020

Obviously this is far from scientific, but Clojure being a new, hyped language, it wouldn’t surprise me if that skewed people’s perception of how much it was used in production code.


I feel like at this point, it's neither new (coming up on 12 years old) nor hyped (it's been used in anger a lot by now).

I'm not sure how you read real-world utility from looking at graphs of SO and GitHub. It's like claiming that garbage trucks have no real-world utility because they are vastly less popular than cars.

Clojure is hovering around the same space as Rust, and beating WASM and Cuda. All of these languages occupy a useful niche, and a more-than-cursory search will reveal a lot of companies deploying them where they can play to their respective strengths.


I think you are ignoring the facts: Clojure at the moment is the most widely used programming language with a strong FP emphasis.

Clojure gathers more conferences and meetups, has more active podcasts, books, jobs, etc. Many companies are using Clojure in production: Apple, Walmart Labs, Cisco, Pandora, CircleCI, Roamresearch, Grammarly, Pitch - are just a few that come to mind. And they're not using it for the small stuff; for example, Cisco has built its entire security platform with it.

Clojure is the third most popular JVM lang. To be fair though, this is mainly due to Kotlin taking over Scala. Also, Clojure probably is the second most popular lisp dialect on GitHub (after Emacs-lisp).

So I think the claim is correct. At least within the FP world - Clojure currently dominates over Haskell, OCaml, F#, Scala, Elixir, Erlang, etc.


Are you not just moving the goalposts? The original article was "took the world by storm", not "took the FP world by storm". As part of the wider programming world FP languages are a tiny minority, regardless of how many impressive companies we rattle off. JS and Python absolutely rule the roost at the moment.

I read it as an emotional outburst by the author, and I don't think it actually contains any information relevant to the rest of the article.

If we wanted to treat it as an argument, we'd have to nail down the parameters for what "taking the world by storm" means objectively, or the author would have to clarify what they meant subjectively.


You claim that they’re ignoring facts, but provide no facts of your own beyond “some companies use it in production,” which is undoubtedly also true of Haskell.

> but provide no facts

- more active podcasts:

https://www.fpcasts.com/

- more conferences:

https://purelyfunctional.tv/functional-programming-conferenc...

- more books:

https://www.amazon.com/s?k=Clojure

- more companies:

https://clojure.org/community/companies

- more jobs:

https://www.google.com/search?q=functional+programming+jobs

I've maintained an active interest in FP languages over the past several years. I'm not arguing here about the merits of choosing Haskell over Clojure, etc., or about other blogpost opinions (I'm afraid I have to disagree with many of them). I'm merely sharing what I know about the current state of FP in the industry.


The entire article just reads like a badly regurgitated Hickey talk.

Too bad the tone is so vitriolic, because there is an argument hidden in there somewhere.

Type systems allow us to add theorems to our programs about how they operate and have the compiler check their consistency.

We fight with bugs: programs behaving unlike we planned and specified. We encode plans and specifications as types, and now have formal proof that a certain class of bugs is eliminated. Excited, we want to use this strategy to eliminate more bugs.

A larger part of the programming task is moved over to this higher order type layer. As it becomes less trivial, bugs and antipatterns start to manifest.

We do not have any answers about the objectively best division of labor between static and dynamic aspects. This is a research project that's bigger than any single person, community or language, and I'd argue we need more of Haskell, Rust, Idris and their ilk rather than less. To merely argue that they did not result in a dramatically better OS or a word processor is beside the point.


I for one upvoted not because I agreed with the article but for what I hope is a constructive discussion.

Personally, and without wishing to sound entitled, I would love for it to be a lot easier to dip one’s toe in the water. A great start would be a web dev tutorial that starts from scratch with a sane Prelude that hides away anything I don’t need to know now (or perhaps ever). With of course a database, json, and html.

Coming from Flask where I was well served in that regard (no pun intended).

Aside: Thought ocaml might get there first but I’m not seeing that either


Vitriolic is an exaggeration. It's a little cynical, for sure, but not necessarily without reason.

I agree; the benefit of this is that it creates a frontier where math can be applied more rigorously to programs. There is a related xkcd comic about this.

We’re in a “not there yet” kind of situation.

The author likely comes from a very mainstream product centric domain, as most do. In this context one has to agree with the criticism and maybe also admit that this is not what the language is for?


This is a good point. But I also want to add that as a freelance developer, I've done quite a number of medium sized projects for business use cases in Haskell singlehandedly. Empirically, the results were very low in bugs and remain at use at the companies in question without requiring much maintenance.

So this kind of thing is something that is well served by Haskell, in my opinion. I wouldn't seriously recommend someone else that the best path to doing this kind of work is to go and learn Haskell but I also feel this is something I wouldn't have been able to do as well using many other more popular languages.


I don't have the experience or knowledge necessary to judge this post, but I’m glad it exists and got posted to Hacker News because I believe the discussion will end up being insightful.

I am not generally a functional programming person. I grew up on C and C++ and went on to learn Python, JS, etc. Eventually I wrote actual code in Scala and thought it was fine. Didn’t love the syntax, not great at “thinking” functionally, but loved sum types and pattern matching.

I feel like Rust is the perfect mix for me because it gives me the aspects I like about functional programming languages (sum types, pattern matching, strong type inference) combined with composition based polymorphism and an almost C++ like attitude towards meta programming (though obviously with a clean slate.) I do think that learning to “think functionally” better would be good; maybe I should listen to the 4chan meme and read SICP after all. :)


I’ll be interested to see if this generates thoughtful discussion. It feels like a flame-y, ignorant piece of writing and my gut response is to say flame-y, marginally less ignorant things in response. Which I won’t, because that would be unhelpful and make everyone’s life who read it slightly less pleasant for no reason.

In my experience so far, SML hit a really sweet spot between ease of learning and features. I never had a "what would I use this for?" moment. Kotlin comes close, but is less elegant and especially lacks full pattern matching.

Unfortunately, SML is used approximately nowhere. Not sure why, but I also haven't used OCaml yet. Maybe one day. My hosting provider (hcoop.net)'s domtool is written in SML, and I hope to hack on it at some point.


+1 for SML. I've found F# scratches some of that itch. It is a CLR language with all the up- and downsides that come with .net

I kinda want every would-be Haskeller to just go use Rust for a few years. That gives us Haskellers more time to pursue "success at minimal costs".

One heuristic for the quality of a language is to literally ask "what is it good for?"

Just off the top of my head (I'm skipping a lot of categories and languages):

C/C++/Rust are great for systems programming, games, and performance critical applications.

Go/JS/Python are great for creating web servers, data processing (Python mostly), and small apps/scripts.

Haskell? I have no idea.


I am personally not a huge fan, but like many statically-typed functional languages, Haskell is great for

- creating new programming languages and DSLs

- problems in finance, energy, medicine, etc., that are more correctness-sensitive than performance-sensitive (it is widely used in Bitcoin brokerages / exchanges / etc, and in general blockchain technologies)

- highly-concurrent scenarios that would otherwise involve a lot of painful management of threads/locks/etc

Facebook’s Haxl is a good example of software that uses Haskell effectively: https://engineering.fb.com/2014/06/10/web/open-sourcing-haxl...
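On the concurrency point: GHC's software transactional memory (from the standard `stm` package) is a good illustration of avoiding manual thread/lock management. A minimal sketch (the bank-transfer scenario and names are invented for illustration):

```haskell
import Control.Concurrent.STM

-- Transfer between two balances atomically: either both updates
-- happen or neither does, with no explicit locks to get wrong.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- blocks/retries the transaction if underfunded
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```

Because `transfer` is an `STM` action rather than a lock-protected critical section, it composes: two transfers can be combined into one larger atomic transaction.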


I'm not a big fan of DSLs. Yes, they allow for convenient notation to express things within specific domains, but they also create more barriers. It is both more impressive and more useful if one is able to create abstractions in mechanical sympathy with idiomatic uses of existing languages. Of course, this is harder to do.

I think DSLs are a bit of a lazy cop-out.

This is not a new idea. The myth of the tower of Babel and the observation that by creating lots and lots of communication barriers and interfaces, you lose the ability to coordinate major efforts is indeed very old.


I disagree. DSLs give you a new set of primitives which is more suited to the problem domain and makes it harder to make mistakes by accidentally encoding things which don't exist in the problem domain.

Of course, just making a DSL isn't enough. Some DSLs are badly designed and inelegant, or their features do not compose well with each other. But a well designed DSL can be a thing of both beauty and clarity.

I think this shouldn't be surprising as even modern programming languages are DSLs of a kind when compared to machine code.


>DSLs give you a new set of primitives which is more suited to the problem domain and makes it harder to make mistakes by accidentally encoding things which don't exist in the problem domain.

But domains are interrelated. If there is no common set of primitives upon which various different domain abstractions are built, then you have to relearn absolutely everything for each domain, even things that are not domain specific at all.

You end up with a huge number of special cases. Mistakes are made because only very few people will be able to remember all the semantics of all the DSLs they need for their work.


I agree, and that's why DSLs are not magical pixie dust and have to be used with taste and judiciously. But in cases where they do work, they are strictly better than free form code, IMO.

Designing good DSLs is hard. If you recommend that people construct DSLs to solve their problems they will. Most of the time these DSLs will not be good and they will present either barriers or bottlenecks.

The reason DSLs often become a problem is that people often do not think about the cost of maturing and maintaining them.

I've seen this happen quite a few times. Someone gets the idea that "we need a DSL for this". It is implemented, but the resources and time to do a proper job isn't there so the documentation, tooling and roadmap is severely lacking or entirely absent.

More requirements are uncovered and development becomes hampered by what the DSL can express or how it is implemented. If you are really unlucky, the original architect leaves or loses interest. I've seen entire products grind to a halt for 6 months because the only person with deep understanding of a DSL on which everything hinges has left the company.

DSLs, like any programming language in general, can be useful. But they are expensive pieces of software to write and maintain and not the "easy solution" that people tend to be misled into thinking that they are. And if expense is spared in their making, it has to be paid for later in troubles encountered further down the road.


Haxl is actually an example of "the more impressive and more useful" approach: it is an abstraction embedded within a Haskell library, that allows you to express your "intent" in simple, standard Haskell code, decoupled from the way this intent will be reached (i.e. the execution is determined/optimized by the library)

Haskell allows this to be done in a way that makes it very hard for the user to break this abstraction (the type system will prevent you from doing this), while still allowing the full language power of Haskell to be used.


> It is both more impressive and more useful if one is able to create abstractions in mechanical sympathy with idiomatic uses of existing languages.

... so, EDSLs?


Many of those communication barriers and interfaces are a feature, not an obstacle.

For example, in a typical enterprise application it's good to know that SQL queries and web resource access control (two typical DSL uses) cannot interfere with one another accidentally.


SQL is a DSL that is fairly heavy in that a lot has been invested in it over multiple decades. (It even has extensions or outgrowths that turn it into more of a general purpose language)

This results in ample tooling, lots of documentation, multiple implementations, some of which are really good and a lot of community knowledge.

One may like or dislike SQL, but it is undeniable that it is useful to and mastered by a great many people.

Most DSLs have few or none of these properties. In some environments DSLs tend to be presented as something you can create casually. That it is a lightweight solution. And what starts as a small solution can often grow - usually to a point where not having put in a lot of work in the basic design will make the language unsound.

I don't think DSLs are a lightweight solution at all. I think that giving inexperienced programmers the idea that DSLs are something they ought to be designing probably isn't helpful.


> problems in finance, energy, medicine, etc., that are more correctness-sensitive than performance-sensitive

What does correctness-sensitivity mean in this context and how is it missing from a language like Python for example?


It means that the type system is flexible enough to encode specific domain logic in a way that can be verified by the compiler (and therefore always be correct at runtime unless there is a bug in the compiler). Likewise, it means that plainly incorrect domain logic (eg due to a typo) is far less likely to occur than in a Python program. Some of this can be encoded by statically typed languages like C (“the average of an array of floats is a float”) but not all of it (“given a function that maps floats to floats, applying that function to an array of floats returns an array of floats”).

Python of course has none of this (“this average function actually returns the string ‘fix me!’ on some inputs because the programmer forgot to fill in an else branch”). Mypy is a useful tool but it doesn’t stop type-incorrect code from being executed and isn’t as powerful as Haskell. Likewise, the Haskell compiler certainly doesn’t catch everything, but it does completely eliminate many common Python bugs.

There is obviously a “spectrum” since no general purpose language has a truly rigorous type system.
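The parent's array example is exactly the type of `map`; a small sketch of what the compiler enforces (function names here are invented for illustration):

```haskell
-- The type reads: given a function from Double to Double and a list of
-- Doubles, you get back a list of Doubles -- checked at compile time.
applyAll :: (Double -> Double) -> [Double] -> [Double]
applyAll = map

average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)
-- Returning the string "fix me!" from `average` would be rejected by
-- the compiler, not discovered on some unlucky input at runtime.

main :: IO ()
main = print (average (applyAll (* 2) [1, 2, 3]))  -- 4.0
```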


Idris is not rigorous enough?

The compiler and type system allows you to more robustly encode and enforce correctness. Even as someone who's written relatively little Haskell compared to Python, knowing that I can run it and it's not going to break in some weird, unexpected way is a fantastic feeling. If I had to write something that needed to be logically "bulletproof" and correct, I'd feel orders-of-magnitude more comfortable writing it in Haskell/Rust than Python. Python has too much magic, too many ways to do something unchecked, too many ways to work-around some issue. It's too easy to write poor, difficult-to-comprehend/maintain code in Python, at least with Haskell/Rust I can re-factor something and _know_ that I didn't break anything or change any behaviour - the latter especially is straight up not a guarantee I could make with Python in my experience.

Haskell is really good when:

1. You can afford the costs of a garbage-collected system.

2. Your collaborators know it or are eager to learn it.

3. You intend to maintain your system over an extended period of time while adding features.

4. You don't need close integration with the platform GUI.

That last one isn't impossible to do in Haskell, but it's painful enough that it overrides the benefits in a lot of cases.

What products niches does this leave? Servers and command-line tools, primarily, though there is room for GUI applications that use various cross-platform UI toolkits and games that build their entire UIs from scratch. (Yes, both of those exist. Even games. Unity uses C#, it's not like garbage collection is incompatible with games - it just has overhead you'll have to live with and work around.)

Honestly, condition 2 above is the biggest limit. A lot of people who like to talk about judging things on their merits refuse to learn Haskell because it's not yet another shallow skin over the same programming concepts.

My personal experience is that any time you think Ruby would be a good choice for a long-term project, Haskell is a far better option. It supports the same joy of green-field development but is less of a tarpit when the green field is 5 years behind you.


Seems like you'd be better off using Rust for most of these. A lot of the same correctness benefits of Haskell, but a much more practical language with a more complete ecosystem of libraries.

Rust? It's a fine C++ replacement. But it has nothing on Haskell in terms of practical expressiveness.

I'll give it another look when it's capable of expressing an idea as simple as traverse.
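For readers unfamiliar with the reference: `traverse` walks any traversable structure with an effectful function, combining the effects and rebuilding the structure, and the same single function works for every container and every effect. A minimal sketch:

```haskell
import Text.Read (readMaybe)

-- traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
-- Parse every string; the whole result is Nothing if any single parse fails.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])  -- Just [1,2,3]
  print (parseAll ["1", "oops"])    -- Nothing
```

The same `traverse` call works unchanged if the effect is `IO`, `Either e`, a parser, etc. - that generality over the effect type is what the parent is saying Rust cannot yet express.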


Amen. The real benefit of Haskell is not some hyper performance optimization (when you actually truly need C++ or Rust), but for business critical applications that need to perform fairly well and, most critically, need to not be filled with bugs as it continues to evolve.

Whatever amount of money someone might lose by using Haskell compared to Rust or C++ (e.g. in hiring trainers to train your engineers, the overhead in terms of your PaaS bill due to GC or whatever else) is very small compared to the savings:

- Compensating a customer for an SLA violation due to some inadequately tested code path that caused an outage
- Wasted developer time trying to act like a Human Compiler (e.g. including a bunch of extra code to check type expectations and handle violations gracefully... at runtime)
- Wasted developer time trying to understand the dynamic behavior of some code in a PR
- Wasted developer time trying to understand old crufty parts of the codebase when you refactor

Perhaps it's slow(er) to compile than you might like or not as optimal as C++, but unless you can afford to hire the absolute best Python / Ruby developers available and have some airtight culture of documentation and best practices, I would venture that it's better off to stake one's intellectual property on something that can survive employee churn without that knowledge walking out of the building.


Rust is also very good for correctness for a lot of the same reasons as Haskell.

Rust is great and borrows many things from Haskell, but it's still very very far from Haskell.

Depends, I would certainly not pick Rust over C++ for graphics or GPGPU programming as I don't plan to be one doing the ground work to build something that has the industry acceptance of SYSCL, CUDA, Qt, COM/UWP, MSL.

Ironically C++ is closer to Haskell in regards to expressiveness.


Bit sad that everyone forgets OCaml when comparing Haskell.

Rather than merely lamenting that no one's mentioned OCaml, how about providing something specific about OCaml you think is relevant to contribute to the discussion?

For example, pointers about how OCaml compares to Haskell's weak spots:

- Basic features (e.g. records, strings, portability)
- Undocumented, fragmented and/or half-baked libraries
- Bizarre and inconvenient "advanced" syntax
- Extravagant memory usage

OCaml also can't implement traverse, so....

Rust is only an option if one needs deployment scenarios where having a GC is not an option.

For everything else you are better off with a language that supports automatic memory management, and now Haskell is even supporting linear types anyway.


That depends on whether you find having to manage memory more of an impediment, or whether you find not having easy access to imperative constructs more of an impediment.

People keep forgetting that GC enabled languages like F#, Haskell, D, C#, OCaml, also provide mechanisms to fine tune memory allocation, its location on the stack and global memory segments besides GC heap, and even native heap.

Additionally some of them, like Haskell being discussed here, are also extending their type systems to provide Rust-like guarantees for resource management, for the 1% of use cases where it really matters.

As for my short experiments doing graphics programming with Rust, memory management is naturally an impediment there, which supports the point that Rust's place is as a systems programming language filling the same role as C++ does on Android, UWP, or Apple platforms: low-level drivers, the compiler toolchain, the window system compositor.


It definitely depends on your domain. For web servers I've found there's little to no overhead for doing it in Rust.

It is still something that everyone using the language has to deal with, regardless of how little it is.

Whereas with automatic memory management languages, when it becomes a hurdle, a performance expert can pick up a profiler and optimise only what is required.

Using .NET as example, then one can think about struct vs class, new vs stackalloc vs Marshal.AllocHGlobal, direct arrays vs Span<>.

Or based on experience, already pick the right data structure and memory location right from the start.


My point is that there are other things in other languages that you similarly always have to deal with. For example, in Haskell writing imperative code is a pain. Not impossible, but difficult. In C# there are no sum types, so expressing an "or" data structure is difficult.

Which of these things you find to be more of an impediment is somewhat subjective.
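For concreteness, here is the kind of "or" structure being discussed, expressed as a Haskell sum type (the payment example and names are made up):

```haskell
-- A payment is exactly one of these alternatives; with -Wall the
-- compiler warns if any pattern match leaves a case unhandled.
data Payment
  = Card String    -- card number
  | Cash Int       -- amount in cents
  | Invoice Int    -- invoice id

describe :: Payment -> String
describe (Card n)    = "card ending " ++ takeLast 4 n
describe (Cash c)    = "cash: " ++ show c ++ " cents"
describe (Invoice i) = "invoice #" ++ show i

-- Keep the last n elements of a list.
takeLast :: Int -> [a] -> [a]
takeLast n xs = drop (length xs - n) xs

main :: IO ()
main = putStrLn (describe (Card "4242424242424242"))  -- card ending 4242
```

Encoding the same three-way alternative in a language without sum types typically means class hierarchies or tagged fields, where nothing forces every case to be handled.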


Except that .NET is a polyglot runtime: F# is also an option, with the same low-level programming capabilities as C#, and it has sum types.

I just gave one concrete example, among many possible ones.


Haskell is pretty nice for building web applications.

Thanks to its type system you can build much more stable web apps in less time. Usually, later in the application life cycle, you have a hard time refactoring when working with e.g. JS or Rails. Without tests you will definitely break stuff. With Haskell you can confidently refactor your code without worrying about breaking things; the compiler will tell you.

The way data structures are declared in Haskell is also very nice for domain modeling. You don't have as much boilerplate as, e.g., when using PHP with Doctrine.

The performance is also pretty good compared to python or rails.
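As a sketch of the low-boilerplate domain modeling mentioned above (the types and fields are invented for illustration):

```haskell
-- newtypes make it a compile error to pass an OrderId where a
-- UserId is expected, at zero runtime cost.
newtype UserId  = UserId Int  deriving (Show, Eq)
newtype OrderId = OrderId Int deriving (Show, Eq)

data Order = Order
  { orderId    :: OrderId
  , customer   :: UserId
  , totalCents :: Int
  } deriving Show

main :: IO ()
main = print (Order (OrderId 1) (UserId 42) 1999)
```

The whole domain model is a handful of declarations, with equality, printing, and field accessors derived for free.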


Can you use it to write code that will be used (compiled) both server-side and client-side?

Because these days this is my bar for a language that is "nice for building web applications". I've gotten so much mileage out of Clojure+ClojureScript just because of this, it's not even funny.


You can, but it's... a bit messy unless you're using Nix as the build platform. Hopefully there will be a WASM backend for GHC... I mean, it's bound to happen right? :)

If you want statically checked types, I'd probably say that Scala is better at the server+client game (i.e. when you want to have both). Of course, Scala has its own drawbacks wrt. Haskell: lack of typed effects is a big one for me.

(Just for context: I work on a website+SPA written entirely in Scala which has been around for years and years. I also have quite a lot of experience in Haskell.)


> Scala has its own drawbacks wrt. Haskell: lack of typed effects is a big one for me.

There are libraries like ZIO, Cats Effect or Monix. Give them a try! Some people might even say that for example ZIO is even better than Haskell's IO.


Oh, I know about them. In fact, I'm trying to introduce ZIO in my Scala-mostly company. The problem is that a stray UUID.randomUUID() can destroy any and all guarantees. (We're already using Monix.)

That truly is the singular reason that I still prefer Haskell over Scala. I can get over syntax awkwardness, etc. etc. The impure-in-pure is... difficult when you can't just grep for unsafe*, etc.

EDIT: Fwiw, ZIO is definitely better than Haskell's IO. Better than RIO? Perhaps. Is it better than polysemy, tho? I don't think so. Btw, I know about polysemy's issues as well... hopefully lexi-lambda can get her GHC runtime changes merged so that we can have a true "free" effect system backed by a tailored runtime. Interestingly, Project Loom is also heading in a similar direction (first class continuations) on the JVM. Interesting times!


I'd pick Elm for the frontend as it's kind of similar to Haskell syntax-wise but more optimized for the frontend context. Here's a great tutorial for this: https://driftercode.com/blog/ihp-with-elm/

I would like to humbly second this notion.

Especially since you can autogenerate Elm decoders based on Haskell types to reduce boilerplate and duplication between frontend and backend, while mostly retaining a strong sense of type safety.

You can also use Elm as an introduction language to new developers, without all of the complexity of haskell’s higher order abstractions, and then introduce them to haskell when they’re comfortable with elm.

(I did this with Servant as a backend, but IHP is wonderful as well!)


Yes, Haskell had ghcjs long before we even had wasm

For most web apps correctness is not the most important thing, this is why Python and PHP are/were so popular.

What is important is flexibility, Haskell's strict typing definitely impedes flexibility so I disagree with you that it's nice for building web apps.


The value of type safety is more that you're more productive because you can change things faster (the compiler tells you what needs to be changed). This allows for faster iteration on your product.

Correctness is indeed not the most important thing, it's kind of a nice side effect.


Correctness is not the most important thing unless it is. Security bugs often stem from sloppiness and incorrectness. It depends on the use case.

You cannot create web applications without JavaScript these days. It does not matter what your server-side programming language is; you need to send JavaScript code to the client to be executed in the browser. And that in itself is a shitshow. End of story.

From another perspective you could say that JS is usually just a compile target these days, in many cases the source language is modern version of JS but in many other cases it's ClojureScript, TypeScript or something else.

Hmm. Time for a Haskell-to-Javascript compiler.


Empirically, the evidence suggests all those languages without Haskell's type system are doing fine, considering that the vast majority of the software actually running is not written in Haskell.

Frankly, hiring is just a way harder problem than building a web app, and hiring JS, Java, Python, or whatever engineers is significantly easier than finding reasonable Haskell engineers.


> the vast majority of the software actually running is not written in Haskell

The vast majority of software has horrible domain and security bugs! I am not one of those people who complains about the bloated state of websites or app stores, but the vast majority of software is not “fine,” especially software written in Python or JavaScript.


Utter nonsense to imply that Haskell would suddenly fix bad security.

Haskell code can have security issues too, hidden inside some too-clever-for-its-own-good, language-extension-ridden Haskell code.

To be honest, I think there are likely only a handful of people like Edward Kmett who actually grok and write effective Haskell code, and guess what: he'd write good C++ code as well.


I didn’t say Haskell would fix bad security! My point is that it’s nonsense to suggest that existing languages are “just fine.”

But many security bugs in C or C++ come down to sloppy types, sloppy pointers, or sloppy concurrency, all of which are almost impossible to do in Haskell. Haskell is not a replacement for C or C++ but its ideas are (and should be) influencing systems programmers.

I don’t actually like Haskell. But there is a reason why major organizations are considering Rust over C and C++: those languages are simply not sufficient for writing secure and robust software in the 21st century, and Rust has taken many of the “best parts” of Haskell to improve systems programming.


The vast majority of software is successful despite those bugs.

Haskell is good for correctness.

Where correctness really matters, you might find Haskell. That makes it a very niche language, as most developers don't really care about correctness.

But you do find Haskell in places where complexity is high and correctness matters. Mostly banking, infrastructure, defense, and research applications.
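One common way that correctness focus shows up in such codebases is making invalid states unrepresentable, so review and testing never have to consider them. A hedged sketch (the `Email` type and its "@" check are a deliberately toy validation):

```haskell
-- With the Email constructor kept private to its module, the only way
-- to obtain an Email is through the smart constructor, so every Email
-- in the rest of the program has already passed validation.
newtype Email = Email String deriving Show

mkEmail :: String -> Maybe Email
mkEmail s
  | '@' `elem` s = Just (Email s)
  | otherwise    = Nothing

main :: IO ()
main = do
  print (mkEmail "ops@example.com")  -- Just (Email "ops@example.com")
  print (mkEmail "not-an-email")     -- Nothing
```

Downstream code takes an `Email`, not a `String`, so the "unvalidated email" bug class simply cannot occur past the boundary.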


Everyone makes this point about correctness, but I have never seen an easy to understand illustration of this. Now I am by no means a leet programmer but I have been reading about programming for more than a decade and every article about this point feels confusing. I even tried writing small stuff in haskell, still don't get this point.

Correctness is only an interesting problem when it's not easy to understand. Easy bloggable examples aren't complex enough to be correctness challenges.

Ok. Then what book can I read to understand this? I've read two books on Haskell, but they were beginner stuff.

Closest I can think of is Purely Functional Data Structures (Chris Okasaki) or Category Theory for Programmers (Bartosz Milewski), but they're not exactly what you're after.

I would submit that this paper from FB is more in the vein of what you're looking for: https://research.fb.com/wp-content/uploads/2020/08/Eliminati... .

This interview might also be insightful: https://www.microsoft.com/en-us/research/podcast/functional-...

The truth is that Haskell is still very academically focused, so you're likely to see papers of substance much more often than books or blog posts.

When you hire people to write Haskell there are two groups that always show up: young enthusiasts who are frustrated with imperative programming in their (usually first or second) day job, and academics who got transplanted into industry to work on hard problems. I've interviewed people doing formal verification of CPU circuits at Intel, people who work on compilers, people who work on verifying termination of programs (for missile guidance), and people who work on financial institution backends.

What I haven't seen is too many experienced, pragmatic engineers (rather than computer scientists) who have spent their career writing Haskell.


The thing is correctness should not be viewed as the role of the language alone. You have to look at the whole ecosystem and organizational engineering practices.

Obviously people were writing life-critical code in C or ASM. You wouldn't expect a car manufacturer to write an ECU in the same way a game dev studio writes a game, but C and ASM can accommodate both.

From my experience Haskell seems to make easy things hard. It's the same mistake as a small startup thinking they need to do whatever Google / Facebook / Amazon are doing, when they're operating at one millionth the scale.

EDIT> By not focusing on correctness as the role of the language alone, you have a smooth path from quickly iterating in a non-safety-critical environment to whatever level of safety you need. In engineering safety is empirical, anyway. You don't just build a system and then assume it will work absent tests, so the idea of getting the code perfect is a bit of a red herring. You're going to have to test it rigorously anyway before anyone will get on your airplane.


There's a large body of academic research on this, and many, many failures to learn from, including many that cost lives or cost hundreds of millions of dollars. In practice, there are lots and lots of bugs in any C program of any complexity. That's why flight system software is written in Ada, not C.

In the military where you need to verify correctness of security controls or answer questions like "does my missile guidance program terminate", you can't use C or Assembly.

To achieve what you're talking about with C or Assembly, you need this kind of process: https://www.fastcompany.com/28121/they-write-right-stuff

To achieve it with a dynamic language like Python...well, you can't (and shouldn't even attempt it).

The trouble is, nobody has the budget for that. Not even most military groups. So instead of relying on human process, we leverage machine-driven verification through the use of type systems, provers, SAT solvers, contracts, randomized testing and other academically sound techniques for quality assurance.

> From my experience Haskell seems to make easy things hard.

Haskell isn't designed to make undergraduate programming exercises easy. It's designed to make hard programming problems tractable. If you have an easy problem you shouldn't use Haskell. Hell, if you have a hard problem you probably shouldn't use Haskell.


Agree with most of what you've written -- thanks for the thoughtful response. Was going to respond that I recall the F-35 uses C++, but that's probably not a good argument ...

Regarding choice of language, I've reached a level of general competence where I don't really find any intractable problems that I can't handle with my toolset. So part of this may be just personal preference where there's no justification for slogging through the cost of learning yet another language, especially if I don't find it fun to program in. I used to invest a lot of time in learning new languages but I no longer find this a good use of my time.

I own a handful of Haskell books, and I was originally interested in Haskell for abstraction / DSLs. One idea was to go directly from a human-readable binary file-format description to an importer/exporter for that file type.

Questions on a couple of themes:

1) I have gone back to uni to study biophysics and learning biology deeply as well as previous machine-learning experience is making me think that fuzzy / probabilistic / redundant is actually the way to go for reliable complex systems. What we consider to be complex computer systems are actually ridiculously simple compared to biological mechanisms.

2) Are we in a transitional phase such that inventing better programming languages is an inefficient (though of course interesting and possibly instructive) path? Do we get machine-programmers soon enough that programming-language design stops mattering? By analogy think of all the effort still being spent on machine-readable data formats when we're close to having machines that read. At that point anything human-readable becomes machine-readable and your schema doesn't really matter.


> Was going to respond that I recall the F-35 uses C++

AFAIK its critical flight systems are still in Ada, but it's a beast of a software platform. My company joined the effort last year and found that even our tiny area was an incredible mess of different technologies.

> Regarding choice of language, I've reached a level of general competence where I don't really find any intractable problems that I can't handle with my toolset. So part of this may be just personal preference where there's no justification for slogging through the cost of learning yet another language, especially if I don't find it fun to program in. I used to invest a lot of time in learning new languages but I no longer find this a good use of my time.

Personally I wouldn't learn new tools unless your existing toolset proved inadequate. But I'm pretty jaded at this point. Learning new techniques, on the other hand, I would never stop doing. Everything from design patterns to compiler design, from dynamic programming to category theory - it all has made me a better programmer regardless of the tools I have at my disposal.

> 1) & 2)

I agree on 1, to a certain extent. The trouble with that approach is that the development time of solutions is pretty excessive. Ironically, probabilistic programming suffers from the same issue. I would be satisfied if software engineering were just actual engineering, which it isn't.

I don't think we'll achieve the transition of software into an engineering discipline until we have better methods of communicating to both the human and computer in parallel. My guess is we might be 50 to 100 years away still.


Just to clarify, when Haskellers say that Haskell is good for "correctness" they don't mean it is good for "verification"[1]. "Verification" means something like "formally proving the behaviour of a system". It is very hard. "Correctness" to Haskellers means modest things like "the compiler tells me when I passed a NULL (Nothing) to a function that didn't expect it" or "the compiler tells me when I mixed a part name for a part serial number".

Modest, but nonetheless Haskell does this in an ergonomic way that is very effective for writing quality software.

[1] or at least any reasonable ones don't
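The "part name for a part serial number" mix-up above is caught with one newtype each; a sketch with invented names:

```haskell
-- Zero-cost wrappers: both are Strings at runtime, but the compiler
-- rejects passing a PartName where a SerialNo is expected.
newtype PartName = PartName String deriving (Eq, Show)
newtype SerialNo = SerialNo String deriving (Eq, Show)

priceFor :: SerialNo -> Maybe Int
priceFor (SerialNo s) = lookup s [("SN-1", 100), ("SN-2", 250)]

-- priceFor (PartName "widget")   -- would be a compile-time type error
```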


Haskell was born out of academia and PL research, so it has a great facility for adding extensions to the language.

If you have a concept PL feature it is probably easiest to create a prototype in Haskell.

A handy guide to its [extensions](https://limperg.de/ghc-extensions/)

Not much practical purpose for most people but nonetheless a useful feature.


Haskell is fantastic for programming systems where types are not just a tool but central to the goals of the program. E.g. a program for converting one type of markup file to another, like pandoc [1], which indeed is written in Haskell.

[1]: https://github.com/jgm/pandoc


The advantage of Haskell is obviously that it's much easier to write bug-free code than in most languages.

Java and Go are widely used; Haskell has comparable performance but certainly fewer bugs.


These are just wild unsubstantiated claims.

In my experience the worst errors are those of broken referential integrity in distributed systems, misinterpreted product specifications and plain old "this doesn't do what i thought it did" code


"This doesn't do what I thought it did" should have a strong connection to needing to rely on reading documentation made from comments. In some sense that form of literate programming is about limiting the scope of each individual misunderstanding without eliminating any of the misunderstandings.

The software engineering research community has almost no idea how to measure "ease of writing correct programs". So it becomes very difficult to make meaningful claims that Haskell programs will contain fewer bugs.

As far as I understand, Haskell's original intent was to be a testing ground for research into functional language design, which I think it really succeeded at.

Additionally, I do know of more than a few production deployments of Haskell in finance, but that doesn’t necessarily mean that that is what Haskell excels at.


It's probably quite a good language for “transformation of data” without side effects where correctness is more important than performance.

As someone who has extensive experience in both Haskell and Clojure, I can say that the latter is definitely better suited for “transformation of data”.

I don't think this adds much to the discussion. I also have extensive experience in both and I'd argue the reverse.

I think it does, as the OP was asking what Haskell is great at. Clojure actively positions itself as a language great for data transformations, as does its community, whereas Haskell doesn't particularly cater to this. In fact, this is the first time I've ever heard someone say Haskell is a great choice for that.

I hear many more Haskellers argue it’s a great language for writing parsers, and with that I very much agree.


But positioning as such doesn't necessarily make it true. Just because Haskell isn't advertised as the "data transformation" language doesn't mean it does better or worse than any other language at it.

In my personal experience of using Clojure, Haskell and Python for data transformation and parsers, Haskell does the best job for both. So this is the second time you hear it ;) Anyway, we are just throwing anecdata at each other. Personal stories are still useful for programming experiences since "which is better" isn't that easy to measure. I would be happy to hear more about your experiences with Clojure.


Haskell is great for screwing with undergraduates' minds. Box proofs anyone? I did it more than two decades ago and for some reason haven't decided to pick it up again.

Honestly, I feel like the success of Go has eaten some of Haskell's lunch.

I used to use it when I was reaching for a native binary with near-C performance, but wanted an easier to maintain, terser, GC language.

Nowadays that's pretty much Go. But if the problem would benefit from a lot of higher-level abstraction, Haskell's still a good choice, given Go's lack of generics.


D is my go-to when I have similar requirements.

Haskell: 5 line quicksort and parsers/compilers. I do like the language, though.

Actually: in-place quicksort in Haskell is as verbose as in other languages. The 5-line quicksort uses additional space linear in the input size.

The thing with Haskell is that one often doesn't use in-place updates, and the language is designed so that this can be more efficient than in many other languages, though OCaml probably does this better with its incremental garbage collection strategy.

Well, that 5-line quicksort is fake. Haskell is good for writing other languages, like Elm for example.

You can write quicksort in C++ in about the same number of lines using std::partition from the standard library.

Haskell is good for transformation type programs, e.g. compilers or document conversion (e.g. Pandoc).

It sucks for pretty much everything else.


As a mobile dev, I don't see much effort in the Haskell community on a toolchain for mobile apps.

They seem more interested in web frameworks. Hmm...


Compilers and logic programming

Making other languages feel like they are missing out :).

Building DSLs in finance

I’ve always felt like Haskell was purely an academic language used to argue about what monads are.

In re: Haskell's goodness or badness, compare and contrast with, say, PHP (crap language with wild success.) Or Prolog (a stately Elven language with deep but obscure success.) Haskell is what it is.

In re: types and data, FP is good for that. See e.g. "Domain Modeling Made Functional" by Scott Wlaschin ( https://www.youtube.com/watch?v=Up7LcbGZFuo ) it's about F# but the concepts apply cross-language.

In re: FP PLs "done right" I submit Elm lang. I've been using Elm recently and it gets the job done. It's weird though: on the one hand, as an experienced professional it feels like a toy. The error messages feel almost insulting, like I'm being patronized. On the other hand, once I got over that (silly) reaction, they're awesome. Changing code is a breeze, because Elm leverages the crap out of the type system, and the structure of the code and runtime prevent whole vast categories of errors.

Combine that with the sort of Domain-Driven development that Wlaschin is talking about and "baby, you got a stew going!"


I was introduced to the ML family through Elm - a deeply opinionated language with strict guard rails. In Elm, there is either a happy path or there is no path. As a newcomer, you're not overwhelmed or paralyzed with a plethora of choices on how to get things done, simply because Elm limits your choices. Each tool in your toolbox is documented with simple examples of how to use it.

After finally jumping the fence and exploring/devving with other ML-family languages (specifically F#), I still come back to Elm to see how it and the community do things: namely their best practices and explanations of FP concepts.


> A very clear indication of this is how Haskell treats programming terms. Instead of explaining Monads like all other design patterns out there, they insist on using some obscure definition from category theory to explain it.

The author is bashing people for explaining a concept from Category Theory with... dum dum dum... Category Theory?! Just because OP was looking for articles explaining monads as design patterns doesn't mean there aren't other people who, God forbid, are looking for theoretical articles about CT/Monads explained with Haskell.

Yes, there is value in pragmatism. Rant posts, on the other hand, have little value IMHO. Haskell has shortcomings like any other programming language. That absolutely does not make it a bad programming language.


Well to be fair, Functor and Monad in base are pretty far removed from the Category Theory originals.

https://hackage.haskell.org/package/categories has something much closer to the math.

Maybe we should just save the math explanation for the latter, and just call the former something else? :O


Explaning monads in terms of category theory is like explaining regular expressions in terms of finite automata. It's a good idea if you're writing a textbook, but maybe not so much if you're writing documentation for users of a programming language.

> Rant posts, on the other hand, have little value IMHO.

Sure ;=)

But Haskell's explanations of Monads are, in my experience, really not adequate. Category Theory is the underlying theory, but do you need to know about mechanics and gears to drive a car?

There are explanations of Monads which are much much easier to understand for most people (people not having much to do with Category Theory).

E.g. by now the concept of map/flat_map is fairly widespread in most programming languages, and you can teach Monads in terms of that fairly easily. Then add the additional abstraction layers used in the context of Monads, like abstracting over the "external world state" (IO) and "combining computation descriptions".
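That framing sketched in Haskell itself (function names are mine): (>>=) on lists is exactly flat_map, and the very same operator on Maybe chains possibly-failing steps.

```haskell
-- (>>=) on lists behaves like flat_map.
pairsUpTo :: Int -> [(Int, Int)]
pairsUpTo n = [1 .. n] >>= \x -> [x .. n] >>= \y -> [(x, y)]

-- The same operator on Maybe short-circuits on Nothing,
-- like a chained null check.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter n = halve n >>= halve
```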


> But Haskell's explanations of Monads are, in my experience, really not adequate. Category Theory is the underlying theory, but do you need to know about mechanics and gears to drive a car?

Monads are part of the underlying category theory. So asking for a full explanation of Monads you are asking to explain part of the "mechanics and gears" in your car analogy.

> E.g. by now the concept of map, flat_map is fairly wide spread in most programming languages and you can teach about Monads in terms of that fairly easy. And then add the additional abstraction layer used in context of Monads like abstracting over the "external world state" (IO) and "combining computation descriptions".

I think I agree with your main point here that teaching the internal details first is often not the optimal method. You do not need to know how Monads work to use them to great effect in Haskell and other languages.


> but do you need to know about mechanics and gears to drive a car?

Depends on what you are trying to do. You don't need that knowledge in order to drive a car, but there are drivers that absolutely should know about mechanics and gears.


I think that 99% of programmers can get approximately all the value of monads by reading an article explaining how promises, options and lists all have this pattern in common, without any mention of monoids or endofunctors. I enjoyed the courses I did in category theory very much, but the benefit to my code has been zero.
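One way to show the shared pattern without monoids or endofunctors (a sketch; the helper name is my own): write a function once against the Monad interface and reuse it for options and lists. A promise type with a Monad instance would work the same way.

```haskell
-- Defined once against the Monad interface, reusable for Maybe,
-- lists, IO, parsers, promises, ...
pairWith :: Monad m => m a -> m b -> m (a, b)
pairWith ma mb = ma >>= \a -> mb >>= \b -> return (a, b)
```

For instance, `pairWith (Just 1) (Just 'x')` yields `Just (1, 'x')`, while `pairWith [1, 2] "ab"` yields all four combinations.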

You either think the astronomical number of bugs in delivered software is a problem or you don't (and good luck with that). The use of Haskell is a huge win on this metric and demonstrably so. You don't have bugs in any of the Haskell programmed OSes you actually use, nor your editor written in Haskell, your Haskell mp3 player nor your Haskell time machine and Haskell en-truthenator.

Pandoc is good. Some like the xmonad window manager; git-annex has some fans. There are probably 4 or 5 more too! Mostly centred around parsing.

And for all that you absolutely should learn Haskell. You'll enjoy it and it will enable you to think about programming in new and powerful ways. Just don't fall so deep you expect to actually ship anything you write.


The only Haskell software I regularly interact with is Hasura, and it has plenty of bugs (even ones around nulls and other things Haskell is supposed to magic away).

I wish functional advocates would stop saying this; it just isn't true. There are many types of bugs, and Haskell may help prevent some of them.

I wonder if there is any good research/data for this claimed correlation between bugs and the language used. I know there is some for development practices, but that's independent of the language.

You might like these:

An Empirical study on the impact of static typing on software maintainability, Stefan Hanenberg, Sebastian Kleinschmager, Romain Robbes, Éric Tanter, Andreas Stefik/. Empir Software Eng, (2013-12-11). DOI: 10.1007/s10664-013-9289-1.

An Empirical Investigation of the Effects of Type Systems and Code Completion on API Usability using TypeScript and JavaScript in MS Visual Studio. Lars Fischer, Stefan Hanenberg, Proceedings of the 11th Symposium on Dynamic Languages (154--167), 2015.

A large-scale study of programming languages and code quality in GitHub. Ray et al., 2014

The TL;DR is: typing matters, but so does tooling. However, programmers in dynamic languages are slightly slower and appear to produce more defects. There is a measurable benefit of static typing, but it's small.


The paper by Ray et al. has been harshly criticized (https://dl.acm.org/doi/pdf/10.1145/3340571).

Interestingly, in this paper, Haskell displays a negative correlation in defect rate

Economics of Software Quality has function point to language conversion data and function point to quality charts, so you could possibly infer something from that.

> You either think the astronomical number of bugs in delivered software is a problem or you don't (and good luck with that).

We don't need luck. Competition has already given us the answer that most bugs are ok. The vast majority of the software that creates literally trillions of dollars of economic activity, and pays most of our bills, is not mission critical. Software fails all the time with the only significant consequence being a developer has to spend some time fixing it. Sure sometimes money is lost. So is money lost when factory equipment needs repair.

Some software does deserve to be bug free when it might put lives at risk, like flight software or medical software. Perhaps even operating systems. But the vast majority of the software I use on a day to day basis does not fit that category. What significant harm does a bug in VS Code or Slack or even my OS cause me? None.

If bug free software gave a significant economic competitive advantage, smart folks would start writing it and win big in the marketplace. Considering this has had decades to happen, and has not, it's very unlikely that bug free is the winning competitive advantage when it comes to software. I'd guess the winning advantage is that the software is useful. Much like a car with many small problems is still useful.

The truth is that much of the software that exists today simply would not be worth building bug free and would never be profitable.

You can extend this outside of software development to see that it's true in a more general sense. Most of the manufactured products we buy are not perfect and do not last forever. Some even have flaws from the day you buy them, but flaws that can be worked around. Once on vacation I bought a screwdriver at a dollar store. It was poorly manufactured, and it does have "bugs" compared to something I would have paid 10x the price for. But years later I still have it and it's good enough for some jobs.

You could potentially build a car that doesn't fail for any reason for a few hundred years. Only Bezos and friends could afford it. With that said, I'd like to see the negative externalities of pollution and waste included in the true cost of things we buy, so that we don't produce so many disposable things that society pays for in the long run. But that's a different discussion.

Please don't take this to mean that I don't take great pride in writing quality software that is as bug free as possible. I do. I also take great pride in meeting budget goals and deadlines. All successful businesses understand that competing goals must be balanced against each other.


A well thought out reply, thank you. Haskellers would typically agree with you. Unfortunately harry8 who you were replying to isn't one. He was being sarcastic.

> The use of Haskell is a huge win on this metric and demonstrably so.

So.... where are all the haskell competitors that benefit from this lower bug count? (edit: ah! satire!)


Your respondents don't seem to have realised that you are being sarcastic.

It's always amusing to read critiques on Haskell written by people that don't know Haskell.

This entire post is borderline trolling and it's sad so many people are falling for it.


I definitely agree with the documentation side of things, particularly lack of concrete examples.

I found that overall however, learning Haskell made me a better programmer, even if it's more useful as more of an academic than practical language.

Dealing with pure functions and having no loops makes recursion paramount, and some of the tail recursion and pattern matching stuff is rather beautiful and useful.

The Haskell quicksort is the classic example of this:

  quicksort [] = []
  quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where
        lesser = filter (< p) xs
        greater = filter (>= p) xs

> The Haskell quicksort is the classic example of this:

The Haskell quicksort is also a classic example of something that is small, beautiful and still misses the point. Yes, it contains the core idea of quicksort (partition the list and divide and conquer), but it completely fails on the quick part, because Haskell lists are a leaky abstraction of real computer memory.

A real quicksort in Haskell is much more convoluted.
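For comparison, here is a sketch of what the in-place version can look like, using mutable STUArrays from the standard array package (Lomuto partition; illustrative, not tuned):

```haskell
import Control.Monad (when, forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newListArray, readArray, writeArray, getElems)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- In-place quicksort on a mutable unboxed array, wrapped in runST so
-- callers still see a pure [Int] -> [Int] function.
quicksortInPlace :: [Int] -> [Int]
quicksortInPlace xs = runST $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
  let swap i j = do
        vi <- readArray arr i
        vj <- readArray arr j
        writeArray arr i vj
        writeArray arr j vi
      go lo hi = when (lo < hi) $ do
        pivot <- readArray arr hi
        iRef  <- newSTRef lo
        forM_ [lo .. hi - 1] $ \j -> do
          xj <- readArray arr j
          when (xj < pivot) $ do
            i <- readSTRef iRef
            swap i j
            modifySTRef' iRef (+ 1)
        i <- readSTRef iRef
        swap i hi               -- move the pivot into its final slot
        go lo (i - 1)
        go (i + 1) hi
  when (n > 0) $ go 0 (n - 1)
  getElems arr
```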


What's a real quicksort anyway? You could argue that a reasonably fast implementation of quicksort is much more convoluted, which it most certainly is, but that doesn't make this implementation any less real.

> What's a real quicksort anyway?

A key aspect of quicksort is that it sorts the list in-place. If you don't sort in-place, you don't have quicksort, and if you don't need an in-place sort, then quicksort is the wrong choice anyway.

Of course, canonical Haskell does not have a concept of in-place, which makes showing quicksort in Haskell also a questionable idea.

> You could argue that a reasonably fast implementation of quicksort is much more convoluted, which it most certainly is, but that doesn't make this implementation any less real.

A reasonably fast implementation of quicksort is straight-forward in any language that has arrays/vectors with destructive updates. This implementation will have issues with pathological cases, but that's a problem of the quicksort algorithm, not of the implementation (whereas the Haskell one shown above has problems in the implementation).


Sounds like a hardware issue.

Which hardware does not have this issue?

One can do this in Python, or any other language for that matter:

  def quicksort ( xs ):
   if len(xs) == 0:
    return []
   else:
    less = [x for x in xs[1:] if x <= xs[0]]
    more = [x for x in xs[1:] if x >  xs[0]]
    return less + [xs[0]] + more
Not the most efficient implementation in either language.

True, although you forgot the recursion. The Haskell filter expression is much nicer as well. You could perhaps be terser by using ternary conditionals:

  def quicksort ( xs ):
   return [] if len(xs) == 0 else quicksort([x for x in xs[1:] if x <= xs[0]]) + [xs[0]] + quicksort([x for x in xs[1:] if x > xs[0]])

And IMO, that python version is one million % more readable. In particular, because you defined more and less before using them, vs after in the Haskell example. I don't know if there's a way that could be achieved in Haskell too though.

Edit: you forgot the recursive calls though.


Haskell has a `let ... in` syntax. Quoting from (1):

    quicksort1 :: (Ord a) => [a] -> [a]
    quicksort1 [] = []
    quicksort1 (x:xs) =
      let smallerSorted = quicksort1 [a | a <- xs, a <= x]
          biggerSorted = quicksort1 [a | a <- xs, a > x]
      in  smallerSorted ++ [x] ++ biggerSorted
1: http://learnyouahaskell.com/recursion

I used to think so as well, until I realised at some point that defining things like this means you focus on the actual "business logic" up front, but the applicable definitions are never far (visually, spatially and logically). In my opinion it lets you get to grips on the overall logic before hassling you with certain specifics.

The term "business logic" itself tends to be somewhat controversial:

https://adsharma.github.io/flattools-programs/


@wheybags I recommend achieving defined-before-use quicksort by throwing in some unnecessary Haskell syntax extensions:

  {-# LANGUAGE LambdaCase, ViewPatterns #-}
  import Data.List (partition)

  qs :: Ord a => [a] -> [a]
  qs = \case
    [] -> []
    (x : (partition (< x) -> (qs -> as, qs -> bs))) -> as <> [x] <> bs

It's worth noting though that in-place quicksort really doesn't play to Haskell's strengths - there are some examples of the code for it here: https://stackoverflow.com/questions/7717691/why-is-the-minim...

That is quite elegant for that variant, which is a great fit for Haskell's features. How easy is it to write a different variant, like using a different pivot, or sorting in place?

> I found that overall however, learning Haskell made me a better programmer

I had the same experience, but with ML. It almost feels like Haskell with the annoying parts removed.


Quicksort in one line:

    Quicksort = { []->[], [p,,m]->this(m??<p) + (m??==p) + this(m??>p) }

> I found that overall however, learning Haskell made me a better programmer,

I definitely had that experience, too!


> Look at Clojure. Weird parentheses, and yet it took the world by storm

Am I on another planet? I'd rate Clojure slightly above Haskell in terms of market share. Is my radar broken?


That raised my eyebrow as well.

Semi-recently I was interested in learning Datalog so I wanted to kick the tires on Datomic (written in Clojure). I ended up getting stuck on some things; posted questions to StackOverflow and didn't get a response. I then realized the Clojure tag is very low volume + engagement.

Then someone on Twitter told me that I'd be better off asking questions in the Clojurians Slack channel. So I go to sign up and... the signup link is broken. Had to flag down someone on Twitter or IRC to fix it.

As an outsider I felt a whiff of decay from the Clojure community (no offense)


The whole article feels like it's coming from a traditional/industrial OO perspective (people happy about UML usually don't see the world as I do).

Broken Slack links or not, the community is not as large as the author seems to convey.


Do people actually use UML in industry?

I've never actually seen it used (or mentioned for that matter) in the few years I've been a software engineer...



Most probably, yeah; at least there are some informal uses: whenever you make diagram docs, you'll end up aligning with UML 'vocab'.

I helped a European project on UML graph versioning between various industrial UML applications, but I'd consider these niche.

IBM has been heavy on UML since they bought the Rational suite, but I know IBM didn't use Rational themselves (at least not my department), though surely Rational users did.

I think heavily regulated sectors are the most prevalent users of UML.. they like having a standard, having a lot of documentation etc.


Yeah, it’s kind of mind-blowing to me that this post is so highly rated when to me it just seems like a bunch of unsubstantiated claims like this, as well as blatant misunderstandings of fundamental functional programming concepts.

I guess Haskell is just rare enough on HN that FP posts get a boost just for that. The content isn't worth spending much time on, IMO. To each his own; if the guy really suffers with Haskell then so be it, may he have a lot of fun with Perl or JS.

If you try to set a google alert for "Clojure jobs" and "Haskell jobs", or just go through "HN: who's hiring" of the recent years and compare search results for Clojure and Haskell, you'd see that it's not "just slightly above". Clojure currently is the most widely used FP lang.

Doesn't Scala beat Clojure in industry? That would be my experience and it has higher number of jobs on Who's Hiring. My ranking for adoption goes: Scala > F# > Elixir > Clojure > Haskell > OCaml. F# doesn't make much of a showing on Who's Hiring but I think it has stronger adoption in industry than Clojure.

It used to be that way. But it looks like Scala is slightly losing (not to Clojure), mainly to Kotlin: https://snyk.io/blog/kotlin-overtakes-scala-and-clojure-to-b...

Note that I'm not bashing or defending any of the PLs mentioned. It is merely the fact - today, Clojure is the most popular FP language being utilized in the industry. Check the number of podcasts dedicated to Clojure https://www.fpcasts.com; the number of conferences: https://purelyfunctional.tv/functional-programming-conferenc... list of companies using it, job listings, etc.

It doesn't mean that this all makes the language better or worse. Also, the overall share of languages with strong FP semantics is still way too small compared to the use of imperative PLs. That fact doesn't make OOP better than FP and vice-versa.


> Functors are basically an Object with a internal state changing method in typical OOP terms.

This lets me safely ignore the rest of the article.


Wow, I honestly wonder if this was caused by mixing up “functor” in the C++ and Haskell senses…

To expand on that, for those in the audience:

In Haskell, a functor is a type constructor (like “list”, “optional”, “future”, “I/O request”, &c.) with a way to map a function over it, covariantly, in a way that preserves its structure—i.e. without changing the shape of the container or structure of the action represented by the constructor, just the contained elements or produced result.

This is based on the more general notion of a functor in mathematics, which is a mapping between categories. The Haskell version is much more constrained, though: it only maps between Haskell types, and it’s parametric (iow completely generic), not just any old mapping.

While in C++, a functor is a completely different thing: an object that can be called like a function. It’s thus equivalent to a closure, where the object fields are the captured values. And that sounds like the description being used here.
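To make the Haskell sense concrete, here's a minimal sketch (the names `doubled`, `maybeDoubled`, etc. are just illustrative) of the same function mapped over two different functors. The "shape" is preserved in each case; only the contents change:

```haskell
-- fmap applies a function inside a structure without altering its shape.
-- For lists, the length is preserved; for Maybe, Just stays Just and
-- Nothing stays Nothing.
doubled :: [Int]
doubled = fmap (* 2) [1, 2, 3]        -- [2, 4, 6]

maybeDoubled :: Maybe Int
maybeDoubled = fmap (* 2) (Just 21)   -- Just 42

stillNothing :: Maybe Int
stillNothing = fmap (* 2) Nothing     -- Nothing
```

No hidden state anywhere: each call just produces a new value from an old one.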


That’s a pretty reasonable colloquial description for anyone who does not grok functional programming.

If the author was using the description to explain Functors to someone who only knew OOP, it's a reasonable start. But I got the impression the author was implying that is basically all you need to understand Functors, and that's not the case.

Care to say why? Or just going to hit and run?

Functor is a typeclass, which is the equivalent of an interface in Java. It's very basic, providing the ability to lift a function and execute it in the context of the functor (whatever that is; this is the interface, remember): a generalization of map.

So a lot of types have an implementation of Functor. In theory one of those implementations could be guilty of using hidden state and all that, but in practice all of them are just straightforward functions transforming values into a new value, not mutating them.

In short, not a single word of the description is correct.


Now I'm more confused. If there is no hidden internal state what is the difference between a function and a Functor? If it's just taking input and giving output without any internal state, that's just a function isn't it?

Edit: ok I did some more refreshing of memory. So Functor is an interface with some properties like identity[1] and distributive morphism (I think I'm wording that right). That's just an interface. I can implement that in Java or F# if I want. How is haskell helping here?

[1] https://wiki.haskell.org/Functor#Minimal_Complete_Definition
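For reference, the two Functor laws are identity and composition, and they can be spot-checked directly (a sketch on lists; real instances must satisfy these for all inputs, not just these samples):

```haskell
-- Identity law:    fmap id == id
-- Composition law: fmap (g . h) == fmap g . fmap h
lawIdentity :: Bool
lawIdentity = fmap id [1, 2, 3] == id [1, 2, 3]

lawComposition :: Bool
lawComposition =
  fmap ((+ 1) . (* 2)) [1, 2, 3] == (fmap (+ 1) . fmap (* 2)) [1, 2, 3]
```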


You can't define the interface in either language.

Implementations of Functor consist, in part, of type-level functions. In Haskell terms, these are "higher-kinded types". The standard example is the list type "[]" which, as a type-level function, takes an element type and gives back the type of lists whose elements are drawn from that element type.

In Java and F#, the only way to talk about the List type is in its fully applied context, where you've attached the element type. So maybe you've got "List<Int>", or you've got "List<String>" or you may have a generic "List<A>". What you don't have is the type-level function that's not been applied to anything. So there's no equivalent to the Haskell Functor implementation:

   instance Functor [] where
      fmap _     [] = []
      fmap f (x:xs) = f x : fmap f xs

This is barely half the story. What makes this useful in Haskell is the typeclass overloading, which makes it effortless to write functions that abstract over arbitrary Functors, and use "fmap" multiple times locally for different Functor instances, letting the type system figure out what implementation is needed to map over the particular type you're working with. And in such abstract code, where you may know very little about the Functor instance you're working with, it's extremely important that they all be absolutely law-abiding: in many cases, the laws are all you have to work with.

These two features, higher-kinding and typeclass polymorphism, make it worth talking about Functors, and I don't think you can appreciate Functors in Haskell without seeing the interaction of these features and just how much it impacts the code style of the average Haskeller.
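To make "abstracting over arbitrary Functors" concrete, here's a minimal sketch (the `tag` function is hypothetical, purely for illustration): it works for lists, Maybe, IO, or any other Functor, because it relies on nothing but `fmap` and the laws.

```haskell
-- A function polymorphic in the Functor: it knows nothing about f
-- except that fmap exists and is law-abiding.
tag :: Functor f => f a -> f (a, String)
tag = fmap (\x -> (x, "seen"))
```

The same definition then covers every instance: `tag [1, 2]` gives `[(1, "seen"), (2, "seen")]`, while `tag (Just 5)` gives `Just (5, "seen")`. There's no equivalent way to write this once, generically, in Java or F#.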


Man, nothing against your dedication in explaining this to me, but every time I talk about Haskell it feels like a jargon salad. I very, very humbly ask you: so what? You wrote a short essay on this, and I still can't grok even in the slightest why this matters. Every other language I talk about can at least tell me why a certain feature is helpful, even if I don't get it. What I got from this is that it allows abstracting mapping over types. But what does that give you?

Think about it this way. If the primary importance of something is only apparent from the big picture how is a user supposed to decide whether to use it or not? The big picture is rarely available to most programmers.


> Think about it this way. If the primary importance of something is only apparent from the big picture how is a user supposed to decide whether to use it or not? The big picture is rarely available to most programmers.

Maybe a user isn't supposed to. I believe that's the premise of Paul Graham's Blub Paradox [1]. I certainly didn't learn, say, Common Lisp, because I was doing a feature comparison. I had no idea what a lexical closure was at the time, and a couple of toy examples wouldn't have convinced me of their worth. I'd been programming quite happily without them for some time by then.

[1] https://en.wikipedia.org/wiki/Blub_paradox#The_Blub_paradox


That is an excellent point. Haskell is such a fundamental shift in thinking, perhaps the only way to learn its real application is to make something with it. But I would still maintain every language at least has a highly simplistic example of why certain features work well. In fact, my curiosity reignited by this discussion, I found this video series[1] on YouTube which somehow made it 100x clearer where Functors, Applicatives and Monads are to be used. The tree example is pretty abstract, but it helped me connect these features to my work. I still don't know how to get the most use out of it, but I think I get the USP. I still have the question about how Haskell is helping here, because I can write a hidden-state-changing function in Julia if I want. But one step at a time.

[1] https://www.youtube.com/watch?v=xCut-QT2cpI


> How is haskell helping here?

Haskell's functions are pure, which makes the typeclass laws more meaningful.


I will take a stab at it.

> Functors are basically an Object with a internal state changing method in typical OOP terms.

If I was writing a functional language, in an oop language, I could implement functors at least partially with an Object with an internal state changing method. I could not model/implement an Object with an internal state changing method in a fp language via a functor.

The main issue with the author's statement is that it makes a claim of approximate equivalence and does not back it up with additional evidence or examples.


> I could not model/implement an Object with an internal state changing method in a fp language via a functor.

I don't think anyone claimed that.

> I could implement functors at least partially with an Object with an internal state changing method.

Now we are getting somewhere. You said partially, what features are being left out?


>> I could not model/implement an Object with an internal state changing method in a fp language via a functor.

> I don't think anyone claimed that.

I was not trying to refute the opposite claim. I was giving info on the differences between functors and the referenced OOP feature. My points run somewhat counter to the author's claim that "Functors are basically an Object with a internal state changing method in typical OOP terms", or at least what I think some readers walk away with. Hard to say what is in the author's head with the provided text.

> Now we are getting somewhere. You said partially, what features are being left out?

I did not mean to imply features would be left out but rather I would use more than the one oop feature, "an Object with an internal state changing method", to implement functors.


> I did not mean to imply features would be left out but rather I would use more than the one oop feature

Awesome, we are still getting somewhere. What other oop feature would you be using


> Awesome, we are still getting somewhere. What other oop feature would you be using

I'm glad you think it was productive so far. I am not convinced it is a productive use of our time to continue/extend the thought exercise though.


I find the "I'm more productive with Python" claim in the post to be specious.

Maybe I've been abused (and abusive) by bad programming practices with Python in the past, but it seems you really need to lint and exercise Python code to have any kind of confidence that something you think is correct won't blow up at runtime.

False confidence that something is "ready to go", is the worst, and it can cost a lot of money, and sometimes human life.

Erlang has this problem too. It's so late binding you can make spelling mistakes and it won't be caught until runtime. It's actually the basis of some very powerful features, but you really have to know that coding in Erlang is not like coding in Rust, Haskell, or something else strongly typed.

So while I think the author has a point that concretions can get you into an inflexible, hard to refactor mess over time, I think sometimes those concretions don't have to be as bad as they seem.

Consider Go. Interfaces are a form of concretion too, but the advice is to keep them small. An interface of exactly one function can be a beautiful thing. I think it's better to have a type implement many interfaces than to have a type implement one huge interface.


Python 3 has type checkers; I like pyright. You can use Protocols, which are similar to Go interfaces.

The benefit of Go is really in performance and packaging.


I disagree with this:

> One, types are a concretion. If you’re looking for higher level of abstractions to get flexible behaviour, you’re ultimately going to have a world of pain

I don't see why types should be in the way of flexible behaviour. Frameworks like Spring in Java use types to direct dependency injection and it works well.

Also this:

> Types wrap data and treat it like a black box whereas schema describes the shape and content of data.

Types can be made abstract and blackboxy, and sometimes that's what you want. But doesn't a record type give info about its fields? Doesn't a sum type give info about possible alternatives in the values?
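For instance (hypothetical `User` and `Payment` types, purely for illustration), a record type names the fields and their types, and a sum type enumerates the legal alternatives; both read a lot like a schema:

```haskell
-- A record type spells out the shape of its data: two named fields
-- with concrete types.
data User = User
  { userName :: String
  , userAge  :: Int
  } deriving (Show, Eq)

-- A sum type enumerates the possible alternatives a value may take.
data Payment
  = Cash
  | Card String  -- the card number
  deriving (Show, Eq)
```

Unlike an opaque black box, both definitions document the data's structure right in the type, and the compiler enforces it.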

> As such haskell ultimately suffers a lot when they have to interact with the real world. Suddenly they are left reeling as they find out that the real world is, in fact, dynamic.

The trick is to know what we are really modelling. If we are deserializing domain objects from JSON, it makes sense to have types for the domain objects. If we are writing a tool like, say, jq, perhaps we should merely have a datatype for the JSON tree itself: http://hackage.haskell.org/package/aeson-1.5.5.1/docs/Data-A...
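In Haskell terms, such a generic JSON tree is just a sum type. Here is a simplified, self-contained sketch (aeson's real `Value` is similar in spirit but uses more efficient container types):

```haskell
-- A simplified model of a JSON document as a sum type: the "dynamic"
-- real world is captured as one closed set of alternatives.
data Json
  = JNull
  | JBool Bool
  | JNumber Double
  | JString String
  | JArray [Json]
  | JObject [(String, Json)]
  deriving (Show, Eq)

-- A jq-style generic traversal: count all the leaf values in a tree.
leaves :: Json -> Int
leaves (JArray xs)   = sum (map leaves xs)
leaves (JObject kvs) = sum (map (leaves . snd) kvs)
leaves _             = 1
```

Tools that manipulate arbitrary JSON can then pattern-match over this one type, with no domain-specific schema needed.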

> Bottom up design is something we’ve learnt collectively as a good way to be much more flexible in responding to change.

Even if you want to design bottom-up, the moment you want to add anything to your program, you need a little top-down thinking, if only at the micro-level. You want to create something new that isn't there, and then think how to accomplish that with the tools you have.


I discovered Haskell in a comparative programming languages course I took at university last semester, and it completely changed how I think about programming. I can't speak for an industrial use case, but for a hobbyist writing open source software and personal projects, programming in Haskell has been an absolute joy and has reinvigorated my love for programming.

Tools like IHP (https://ihp.digitallyinduced.com/) are a great example of not only the beauty of the language, but when combined with Nix and the IHP IDE, a better development experience than I ever had with Rails or any other language.

If you are pragmatically minded, you can get stuff done in Haskell. In my experience there's a lack of online resources for this kind of work in Haskell, but that's what I and others especially in the IHP world are working on. If you just want to experiment, Haskell is great for that too.

