With Haskell, one always finds articles explaining how great it is and how its features (pure functional programming) lead to correct code. But articles explaining core use cases, and how this language in particular enabled solving them, are rare. I'd love to read someone's decision postmortem that goes something like: 'Here is a problem we were looking to solve. Here is how specific features A/B/C of Haskell helped us solve it well. Here's why Haskell was the correct choice for this problem.' What we have instead is thousands of articles trying to teach us monads. Until such articles become mainstream, it will remain a niche language.
The whole 'avoid success at all costs' line looks more like a meme to me than an actual strategy. There are functional languages (Clojure(Script), Elm) which made pragmatic decisions about their feature sets and have found commercial success. But with Haskell, it seems the reverse process was followed: it starts with some (questionable?) assumptions, like 'lazy evaluation is good', and builds on top of them. For commercially viable software, the process is generally the reverse: you start with specific problems and derive a feature set from them.
The best way to understand this academic vs. pragmatic divide is to look at Golang's relative success vs. Haskell's. No matter how much PL-researcher snobs complain about Golang, its creators started from real problems and chose their constraints carefully. The result is its widespread adoption in its niche (networked infrastructure code that can tolerate GC). Such cases are rarely, if ever, documented for Haskell.
It is perfectly alright, and better, to say: look, this was a research language meant to find out how far we can go with certain choices and assumptions. But constant evangelism about how elegant it is does not help.
>But, articles explaining the core use cases and how only this language enabled solving it are rare
From my (limited) experience with Haskell, my impression is that it's strongest in domains that map _very_ well to formal mathematical models. Mathematical computation is an obvious example, but things like programming language parsers are also really intuitive to build in Haskell.
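To make the parser point concrete, here's a minimal sketch using `Text.ParserCombinators.ReadP` from base (the grammar, digit runs separated by `+`, is just an illustrative toy of mine):

```haskell
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP

-- A toy grammar: one or more numbers separated by '+', e.g. "1+2+3".
number :: ReadP Int
number = read <$> munch1 isDigit

sumExpr :: ReadP Int
sumExpr = do
  ns <- sepBy1 number (char '+')
  eof                         -- demand the whole input was consumed
  return (sum ns)

parseSum :: String -> Maybe Int
parseSum s = case readP_to_S sumExpr s of
  [(n, "")] -> Just n
  _         -> Nothing

main :: IO ()
main = print (parseSum "1+2+3")  -- Just 6
```

The definitions read almost like the grammar's BNF, which is a big part of why parsers feel so natural to build in Haskell.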
My experience trying to use Haskell for other stuff — mainly, building web backends, my bread and butter back then — was painful. Simple CRUD against a database is actually pretty decent thanks to there being decent Postgres drivers and libraries, but the moment you need to handle arbitrarily-nested data structures (like a JSON request/response with a nontrivial schema), you run up against a lot of awkwardness, to the point that for the longest time this was known in Haskell as the "records problem."
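A minimal sketch of what the "records problem" looks like in practice (the type names here are my own invention): before the `DuplicateRecordFields` extension, two records in the same module could not share a field name, which bites immediately when modelling nested JSON schemas.

```haskell
-- A record field defines a top-level accessor function, so a field name
-- can only be used by one record per module:
data User = User { name :: String }

-- data Project = Project { name :: String }   -- rejected: `name` clashes

-- The traditional workaround is prefixing every field, which gets verbose:
data Project = Project { projectName :: String, projectOwner :: User }

main :: IO ()
main = putStrLn (name (projectOwner (Project "demo" (User "Ada"))))
```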
Lenses and similar tools have tried to solve this problem over the years, and maybe they have, but I moved on from the language when I saw how deep that rabbit hole went.
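For context, the lens approach to the nested-update problem looks roughly like this (a sketch assuming the third-party `lens` package; the record types are invented):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Data.Char (toUpper)

data Address = Address { _city :: String } deriving Show
data User    = User    { _name :: String, _address :: Address } deriving Show

makeLenses ''Address   -- generates the `city` lens
makeLenses ''User      -- generates the `name` and `address` lenses

main :: IO ()
main = do
  let u = User "Ada" (Address "london")
  print (view (address . city) u)                -- read a nested field
  print (over (address . city) (map toUpper) u)  -- update it, immutably
```

Composing lenses with `.` scales to arbitrary nesting, which is exactly where plain record syntax struggles; the cost is the machinery above, and how deep that rabbit hole goes is the question.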
None of this is to say Haskell isn't fun or rewarding, but it definitely has its strengths and weaknesses.
I'm having trouble understanding what your actual point of criticism is.
The records problem is basically a small syntactical quirk. Undesirable, yes, but not what will make or break production software development.
Lenses solve some of that and much more. Lenses strictly improve on fields/records/getters/setters/etc. in any other language you know. In other words, they accomplish what they do and then some.
If that was your only problem with Haskell, then it might be one of your favourite languages if you get to know it again.
That was the most memorable issue I had with Haskell; thinking back, other stuff that bugged me was some of the idioms needed to move between pure and impure code. You may be right though, it's something I should probably revisit. It's been a couple years and I did end up taking some of the lessons learned from Haskell into other languages (mostly not at a low level, though -- it's more that Haskell informed how to approach some higher-level design problems). Railway-oriented programming in particular stuck with me as a _great_ pattern that's consistently useful across languages.
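For readers unfamiliar with the term: railway-oriented programming is essentially chaining `Either`-returning steps so that the first failure short-circuits the rest. A minimal sketch (the validation steps are invented for illustration):

```haskell
-- Each step either stays on the success track (Right) or switches to the
-- failure track (Left); >>= wires the tracks together.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] -> Right n
  _         -> Left ("not a number: " ++ s)

checkRange :: Int -> Either String Int
checkRange n
  | n >= 0 && n < 130 = Right n
  | otherwise         = Left ("out of range: " ++ show n)

validateAge :: String -> Either String Int
validateAge s = parseAge s >>= checkRange

main :: IO ()
main = do
  print (validateAge "42")    -- Right 42
  print (validateAge "200")   -- Left "out of range: 200"
  print (validateAge "abc")   -- Left "not a number: abc"
```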
I still think it's worth learning by anyone who's got the time for it.
What?? I've gotten paid big money to write Haskell for like a decade. None of the work had anything to do with formal models. That's comically wrong. But it was good software and Haskell was a big part of that.
Thanks for writing about that experience; it makes sense. My first two forays into the language ended the same way: I tried to do mainstream programming stuff in it, realised I wasn't making much progress in the first few days, and eventually moved on.
It sounds like you folks didn't give the language enough of a chance and may have felt the friction of trying to write Haskell in the style of another language. It takes a while to learn how to go with the grain, in any language.
While I totally sympathize with this, I'm not sure it's a very good vantage point from which to judge a language.
Which goes back to the root comment and the issue it raised:
Where are the practical Haskell tutorials? No preaching, no "you're holding it wrong", ONLY this: "You need to handle arbitrarily nested JSON? Here's how."
> But with Haskell, it seems a reverse process was followed. It starts with some (questionable?) assumptions like lazy evaluation is good and then builds on top of it.
Haskell was created by researchers who wanted to study the properties of lazy functional programming. I've never heard anyone say they did it because lazy evaluation was "good"; it was their area of research. The co-creator has even said that if he had to do it over again, it might be better if it were strictly evaluated.
I worked in Haskell for a few years before switching over to a very closely related but strictly evaluated language, PureScript.
The pureness and laziness of Haskell forced the creators to work out monads for IO which I find extremely useful. Haskell-like monads certainly can (and have) been implemented in strictly evaluated languages.
I don’t miss implicit laziness very often, but I’ll say with partially applied functions, it’s nice to know that values will be calculated as needed and results shared across invocations once those functions are fully applied. That’s something I wish we had in PureScript. Note that in addition to laziness this also requires a compiler optimization that moves let bindings out of inner functions, aka let-floating.
Laziness is also nice as it allows you to be very declarative, defining what things are without ever worrying about the calculations running when you don’t need the result. In practice it’s easy enough to create thunks manually and evaluate them when needed. Doing this can get tedious and the resulting code is less succinct.
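The two benefits mentioned above, sharing and declarativeness, can be sketched in a few lines (both examples are my own toys):

```haskell
import Data.List (sort)

-- Sharing: `sorted` is a lazy thunk created once per partial application
-- `member xs`, then shared by every later call of the resulting function.
member :: [Int] -> Int -> Bool
member xs = \y -> y `elem` sorted
  where
    sorted = sort xs

-- Declarativeness: define the whole infinite sequence up front; only the
-- demanded prefix is ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = do
  let check = member [5, 3, 1]
  print (check 3, check 9)  -- (True,False); `sorted` computed once
  print (take 8 fibs)       -- [0,1,1,2,3,5,8,13]
```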
> But, articles explaining the core use cases and how only this language enabled solving it are rare.
Articles of that form are usually BS though, for any language. These are general purpose programming languages. They all have different benefits and drawbacks, but you can usually use any of them to do anything you want.
I worked in a Haskell environment that used a lot of formal methods, and some of those tools, such as Agda, would generate Haskell that could then be optimized for performance. It was pretty wild, but for me, a test engineer, it was a pain for more boring tasks like JSON parsing or calling REST APIs, etc. :) It got me into property-based testing though, and now I'm much more type-driven, so there were some really cool perks of working in such an env.
> The result is its widespread adoption in its niche
And everywhere else, unfortunately.
Go is an overreaction in the opposite direction. Even in 2010, I can't agree that there was a good reason to make a language without generics and reasonable error handling from the very start.
Go is only successful because Google spent so much money propping it up by paying people to write stuff in it. It has so many major flaws that it would have had no chance of taking off otherwise. If Google had spent the same amount of money on Haskell instead, it'd be even more popular than Go is today.
Go is very simple, provides decent static type safety, has a nice concurrency model, and compiles to fast static binaries across all major platforms. People choose it all the time because it's a great choice. A wild number of popular open source tools are written in Go, very few of them created by Google. As someone who wrote Go for many years, I think you're overstating how limiting its "major flaws" are. And FWIW, Go ranks much higher than Haskell on Stack Overflow's annual developer survey.
This is a pretty hot take. I'm in the early stages of learning Go. So far I like it a lot, but I come from the perspective of a data person who only knows Python. I want to learn a compiled language; what would you suggest instead?
I'm kind of with you about Google being heavily responsible for the level of success Go achieved, but that has been the case (backing by large companies) for many popular programming languages such as C, C++, Java, Rust, etc. It helps, but not every corporate-backed language is successful or makes it to the top 20.
I don't completely buy the argument that a Google backed Haskell would have been as popular as Go. Readability (which is subjective), familiarity (often C is the reference), and the ease of which the language can be learned are still major factors in widespread acceptance.
> But, articles explaining the core use cases and how only this language enabled solving it are rare.
There isn't much that only one programming language can do, so it's not fair to ask for this. The benefits of using Haskell over other programming languages are that it's easier to reason about code in it, and you can have more assurance that code is correct.
I may get some flak for this but Haskell still being called a research language is doing Haskell a huge disservice for adoption. The new ideas of Haskell are now old and well-known in programming languages research for over a decade. Seeing Haskell as a research language causes the community, by and large, to remain in an academic-mindset. But what the Haskell community needs now is an engineering-mindset. I think Doctor_Ryner in the OP states it perfectly:
“In other communities people mostly discuss regular production problems and data structures, while in a Haskell chat people discuss monads, applicative functors, crazy types and things like that.”
This is why write-ups of use cases, and of how only Haskell enabled solving them, are rare.
I suspect that the community is overall quite happy with increased adoption being a non-goal. That is, it does not need to shift to an engineering-mindset because it does not value the consequences of that shift.
I don't think that's correct. Especially since there's a community-led organization with the specific goal of increasing adoption https://haskell.foundation/
I think there are many different people with different opinions in the community.
The unofficial motto of 'avoid "success at all costs"' is not a joke. It's easy-ish to achieve success if you allow yourself to pander to the masses, who don't always understand what they are missing, and there has always been an element in the Haskell community that tries to do what's right, notwithstanding what the large masses think.
I agree that there are many opinions and that Haskell people don't want to compromise on what they see is the value of the language, but I've also seen a lot of people express their desire for more adoption. For example in threads talking about the haskell.org download page, in threads about improving learning resources, and with the Haskell Foundation.
This desire is usually expressed with sentiments like "let's make this easier to learn" and "Let's fix this so it's not a footgun", and not "success at all cost".
'Trying to do it right notwithstanding what the masses think' and commercial success are not mutually exclusive. Two examples. Clojure's maintainers very carefully vet what goes into the core language. It has been a source of frustration for some contributors, too, but the net result has been more positive than negative, and the language has been successful (relative to Haskell). The second example is Lua. There was a comment by its creator (I'm paraphrasing from memory): 'Lua grows by answering why, not by answering why not.' Yet it has still found (again, relatively) more success, despite being conservative about its feature set.
The lesson I derive from Haskell is that "If it compiles, it will work"
1. is inflated nonsense
2. has a lot less to do with Haskell-the-language than with Haskell-the-type-system
I have observed a similar phenomenon with Rust. That statement is not literally true; it's hyperbole. Compiling doesn't mean the program is correct (otherwise, why bother writing code? We'd just write function signatures and call it a day).
But Rust's type system is incredibly sophisticated, just like Haskell's.
And guess what... Rust supports mutability. It doesn't just support it, it encourages it. It tells you "Step over that fence, it's okay, I got your back".
It's so liberating.
The secret here is not functional programming. It's not immutability.
Rust supports mutability that's bounded to a single program scope. That's a lot closer to functional immutability than to default-shared mutability as used in most procedural languages. Shared mutability is very clearly opt-in, and usually exceptional.
In my experience, people who ask questions like "why should I learn about X when I don't already see its practical applications" are just not people who learn a lot of things, relatively and generally speaking. That's not a value judgment, just an observation.
I don't think it takes any special ability or particular intelligence to be drawn to a language like Haskell-- All you really need is real genuine curiosity, an inclination to learn something you don't already know, and that's something that "geniuses and academia" have a lot of.
A good principle to abide by in software is that if the majority of people are complaining about something being too difficult to use, it is probably too difficult to use. Someone who uses it every day and is heavily invested in the software is unlikely to (want to) understand this, but that doesn't make it less true. I could write an essay about everything wrong with Haskell but the fact of the matter is the community driving it fails to see the forest for the trees.
The principles driving Haskell and the functional paradigm are fine by developers, and they don't need Haskell to learn them: nearly every developer is already familiar with these ideas. Case in point: the success of Elixir, Swift, Scala, etc.
There are many things competing for people's time and attention. Why should I learn this is a valid question when you have several dozen other potential things you're looking at learning.
While that's true, in practice I have not found that people tend to spend their time learning other unfamiliar things-- rather, they often either waste their time, or spend their time learning things they are already somewhat familiar with.
There's nothing wrong with that, but going outside the comfort zone is how you maximize perspective. In other words, it's pretty much completely impossible to convince someone that it would be really worth their time to learn more maths (eg.: calculus, linear algebra, whatever)... The only way they'll see the practical value of doing so is if they actually do it.
Yes, I think there is a culture in mathematics where people believe this to be the case, and I think it's not correct.
I can think of several ways to motivate software engineers to learn more category theory (e.g. "did you know that MapReduce is basically an application of monoids?" or "did you know you can solve fast document layout updates with monoids?").
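The MapReduce remark can be made concrete in a few lines: the "map" step produces monoid values and the "reduce" step is `mconcat`, which may be split across machines precisely because `<>` is associative. (The word-count task is my own toy example.)

```haskell
import Data.Monoid (Sum (..))

-- Map phase: each document becomes a monoid value.
docCount :: String -> Sum Int
docCount = Sum . length . words

-- Reduce phase: mconcat. Associativity means any grouping of partial
-- results gives the same answer, so the fold can be distributed.
wordCount :: [String] -> Int
wordCount = getSum . mconcat . map docCount

main :: IO ()
main = print (wordCount ["hello world", "foo bar baz"])  -- 5
```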
One thing I wonder about though. You say "curiosity" - can you explain what this means to you? Why are you curious about a certain concept? There are thousands of things to learn in the world - are you curious about all of them?
I think I use "curiosity" in the usual sense of the word, but perhaps what I mean by it is also coloured by my experience.
There are a number of subjects I used to really have zero interest in when I was younger (chemistry, accounting, law, finance, economics...), and in each instance I eventually realized how valuable time spent learning the subject was. After this happened a few times, I began paying closer attention to when I felt disinterested by some academic topic.
I don't think I've felt "uninterested" by any academic topic in years, so I suppose maybe I am curious about it all.
I don't buy that observation. Asking for practicality means asking if a thing connects with the outside world and whether it spans across its own domain. Most programmers with wide skillsets and experience I know are inherently practically oriented people.
Focusing on practical things means avoiding digging yourself into an ivory tower, it's important to remember that programming languages are human artifacts and it's very easy to get lost in inventions we made up. Practicality is a measure of outwardness, not of curiosity. Many geniuses are very prone to get stuck in their own sky castles.
In fact that's funnily mirrored in Haskell itself. I/O in Haskell was only added after the fact, the language started out kind of solipsistic leading to monadic IO as a solution to the question of "wait, how do we actually talk to the outside world guys?"
The essence of my comment was meant to emphasize that going outside one's comfort zone very often requires some leap of faith.
I'm not advocating learning only one thing and spending your life on that-- on the contrary, I'm saying if there's a subject you're not interested in (eg.: chemistry, accounting, actuarial science, invertebrate biology, whatever), it's probably because you don't know enough about it.
The people who do know about topic X, do see its applications. There's almost always more to gain from learning a different thing than we tend to believe at the onset.
There are two issues that are really hard to articulate in a language critique:
1) There is no easy way to learn the language. Try explaining that without looking ignorant.
2) The software written in this language is too hard to read and understand.
Do either of these problems manifest in Haskell? No idea. But I've seen them manifest in other systems and once I know about them suddenly there is an obvious and suspicious silence of people talking about clear problems. When a group of developers see something they don't understand they go quiet because it is hard to tell if the thing is inherently too hard or if the developer themselves just doesn't get it.
Happens a bit in the Lisps, I think. There isn't any privileged syntax for loops and conditionals, so there are no hints about which control flow constructs to use. People aren't about to stand up and say they don't really understand which control structures should be used when. They'd just look silly. So the topic doesn't get a lot of discussion despite being important, and people go use languages where loop statements get special syntax.
> Do either of these problems manifest in Haskell?
Haskell code varies a ton. Some people write simple, straightforward stuff in Haskell with lots of pure functions, relatively simple types, and straightforward definitions. Some people invent complicated type systems for their app. Some use unusual styles, like point-free or continuation passing. Some rely heavily on abstractions like monads, applicative functors, arrows, zippers, lenses, unfolding, etc. Some write straight imperative code.
That's not really what I'm referring to. There's actually a tool in Haskell that can rewrite code into point-free style, and it can handle much more complicated cases than simple composition: multiple uses of a variable, eliminating multiple variables, and so on. You can create some truly incomprehensible stuff this way.
> Point-free style can (clearly) lead to Obfuscation when used unwisely. As higher-order functions are chained together, it can become harder to mentally infer the types of expressions. The mental cues to an expression's type (explicit function arguments, and the number of arguments) go missing.
> Point-free style often times leads to code which is difficult to modify. A function written in a pointfree style may have to be radically changed to make minor changes in functionality. This is because the function becomes more complicated than a composition of lambdas and other functions, and compositions must be changed to application for a pointful function.
> Perhaps these are why pointfree style is sometimes (often?) referred to as pointless style.
I have observed some relatively extreme examples of pointfree in the wild, and that's what I'm referring to.
Not in simple cases. It can get weird very quickly, though. If I didn't know the intuition behind `(.) . (.)`, I would have trouble deciphering it mentally and would have to derive it with pencil and paper.
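For the curious, here is the result that pencil and paper would produce (a standard exercise, not anything project-specific):

```haskell
-- `(.) . (.)` composes a one-argument function after a two-argument one:
--   ((.) . (.)) f g x y  =  f (g x y)
compose2 :: (c -> d) -> (a -> b -> c) -> a -> b -> d
compose2 = (.) . (.)

main :: IO ()
main = print (compose2 negate (+) 2 3)  -- -5, i.e. negate (2 + 3)
```

Readable once you know the trick, but a good example of why heavy point-free style raises the decoding cost for everyone else.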
I've tried to articulate this through pointing out that the documentation culture of Haskell is lacking and indeed I've been called "willingly ignorant, and proud to state that fact publicly"
IMO thats really the only remaining problem of Haskell. Most of its documentation is just plain terrible. Rust is probably more difficult to learn as a language, yet the documentation culture is so awesome that one can power through.
A large part of this (and why you will find it hard to find Haskell developers who admit the state of the documentation is lacking) is that documentation isn't bad -- it's specifically apprentice-level documentation for specific libraries/use-cases that is bad.
In other words, journeyman users who are experienced in "following the types" will find plenty of really high-quality documentation. Total novices who need an introduction to the language will also find high-quality material (though this is more of a recent development).
However, once you're past the novice stage and at the apprentice level where you want to accomplish something specific using some libraries, there's nothing for you there. You have to ask someone at the next journeyman level to sit with you and guide you through following the types.
By "documentation", do you mean introductions/tutorials, or references? I'll agree the former could use some work, but I find the latter to be excellent.
I find the reference is not really great. Some of it stems from the fact that function argument names aren't really present in the documentation. Most languages that have excellent documentation will explicitly state what each argument is, something to the point of mind numbing repetitiveness. Often the documentation is written in a way that allows you to start reading anywhere and still understand everything, which is less common with Haskell
Until recently, learning Haskell for solving "real-world" problems was not at all easy. But there are now a lot more resources one can consult.
I think it is highly rewarding to learn to program in Haskell. It offers you another perspective of approaching a computational problem and has a great ecosystem for many applications. And if you ever want to get into theorem-proving with Coq or Agda, knowing Haskell is a huge bonus.
My suggested learning path is as follows:
1. Read Get Programming with Haskell and do the Haskell MOOC at https://haskell.mooc.fi/ at the same time.
2. Read sections in Haskell Programming from First Principles not covered in 1.
3. Read Haskell in Depth.
Many years ago, I tried to learn Haskell from Haskell Programming from First Principles but couldn't get to the point where I could write meaningful applications. I gave up after about a year. Early this year, I completed the Haskell MOOC and read most of Haskell in Depth, and now I have a much better command of the language. Doing exercises on CodeWars, HackerRank, and Exercism really helps, too.
I would advise ignoring both the glowing praises and scathing criticisms. Learn it well enough and form your own judgement, assuming of course that you have time to do so.
How do you do this in Haskell: read a variable number of lists from standard input, until the nil symbol is seen. The lists are heterogeneous: they can contain any mixture of numbers, symbols, strings or any other objects. Then zip them together (truncating them to the length of the shortest one).
TXR Lisp:
1> [apply zip (gun (read))]
(1 2 3)
(a b c)
("foo" 3 :bar)
nil
[Ctrl-D][Enter]
((1 a "foo") (2 b 3) (3 c :bar))
CL:
[2]> (apply #'mapcar #'list (loop for x = (read) while x collect x))
(1 2 3)
(a b c)
("foo" 3 :bar)
nil
((1 A "foo") (2 B 3) (3 C :BAR))
That's the sort of thing I worry about when I'm coding in C.
Here, the parsing is done; the intermediate data structure is built in. For now, it's printed to the console and thrown away (becomes garbage), but if anything else is done with it, it will be in that same form.
If you want a language where the compiler does not care about how you use your data (and will not help you avoid silly mistakes) then Haskell is not the language for you.
It's possible to ask the Haskell compiler to not care, but this would be so far from idiomatic Haskell that it wouldn't help you if I wrote that code here, I don't think.
Is the question not just one of defining the sets of "things" that are being accepted here? It's not that the compiler shouldn't care, but that you have defined an appropriate data structure and set of acceptable types that you would nicely pattern-match on when using them later.
To @kazinator's point, it's not quite going to be just lists of random things, in that Haskell's list elements are all the same type, unlike what Common Lisp lets you do. I'd be more curious whether this could be lists of "Thing", where that Thing type is a typeclass that is inclusive of all the sorts of data you would expect to handle.
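To sketch that "Thing" idea (everything here, including the elided input parsing, is my own assumption about how one might approach it):

```haskell
import Data.List (transpose)

-- A closed sum type covering the shapes we expect to read. (A typeclass
-- could make this open-ended, at the cost of more machinery.)
data Thing = N Integer | S String | Sym String
  deriving (Show, Eq)

-- Reading stdin until "nil" and parsing lines into [[Thing]] is elided;
-- this shows only the zip step, truncating to the shortest list first.
zipAll :: [[Thing]] -> [[Thing]]
zipAll xss = transpose (map (take shortest) xss)
  where
    shortest = minimum (map length xss)

main :: IO ()
main = mapM_ print (zipAll
  [ [N 1, N 2, N 3]
  , [Sym "a", Sym "b", Sym "c"]
  , [S "foo", N 3, Sym ":bar"]
  ])
```

The up-front cost relative to the Lisp versions is exactly the `Thing` definition; the payoff is that every consumer of the result has to pattern-match, and so has to handle each case.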
I really don't like the elitism coming off the title here. You don't need to be a genius or an academic to use languages like this. If people want to promote what the ML family can do they should concentrate on the problems that these languages solve and what tools the languages offer programmers.
Yes, Haskell started as a pure academic endeavour. In fact I think the creators tried very hard not to compromise on its pure Math-y focus. But that exactly contributed to its success (ironic). I see Haskell being used by Big tech as well.
I’ve found this explanation to be useful and easy: Haskell is the Mercedes of programming languages.
It has features you didn’t know you wanted or needed and eventually other languages will get there, and Haskell will seem average/harder to differentiate.
I’ve written a more practical post on why Haskell is worth using[0], excuse the clickbait title.
You're right, and that's why I like my analogy -- Toyotas get you there, but they got you there a lot more dangerously before airbags or ABS were invented (Mercedes innovated those things).
You don't need a Mercedes these days, and that's a result of other companies adopting things that and increase safety, etc. We should expect this of a healthy ecosystem!
I think it's better to put aside the hype about "geniuses" and break this down into two separate questions:
1. Is functional programming really better than other approaches? (My opinion: Yes, for most applications, although it can sometimes feel more difficult.)
2. Is Haskell really better than other functional programming languages. (My opinion: Haskell has some powerful features that other FP languages lack, but it is often less practical.)
Bottom line: The most important thing is to grok functional programming as a paradigm. There are several good functional programming languages to choose from. Personally, I'm a big fan of F#.
Laziness is "cool", but is it really needed? I don't know any other commonly known languages that have that as a feature. Is it needed in general, or is it needed in Haskell in particular?
Same thing seems to apply to monads. Haskell programmers use them a lot but other languages seem to work just fine without them.
So my guess is that many of the special features of Haskell are solutions to problems you will have if you insist that language is lazy and pure.
The common argument goes that "laziness is not the best choice for all problems, but defaulting to it forces you to make some hard decisions (referential transparency, immutability, controlled effects, control structures as first-class citizens) that are beneficial for a large class of problems."
In other words, you don't necessarily want laziness for its own sake (though sometimes you do) but you do want it for its second order benefits.
Could you get those benefits without laziness? Yes, you could. But in the history of programming languages that has happened very rarely, because under strictness it is very easy and tempting to sneak in a little mutability or uncontrolled effects. Under laziness you can't get away with that.
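One of those second-order benefits, control structures as first-class citizens, is easy to demonstrate: under lazy evaluation, an ordinary function can serve as a conditional, because the untaken branch is simply never evaluated.

```haskell
-- A user-defined `if` as a plain function; no special syntax needed.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = putStrLn (myIf (3 > 2) "yes" (error "never evaluated"))
```

In a strict language, this `myIf` would blow up evaluating the `error` branch before the call; in Haskell, it prints `yes`.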
Other languages have monads too. They just don't call them that, and give each individual monad's bind operation its own name instead of having a single interface they all share. For example, std::optional in C++ is a monad, with and_then being the monadic bind operation, arrays in JavaScript are monads, with flatMap being the monadic bind operation, and async functions are monads, with await being the monadic bind operation.
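The practical payoff of naming the shared interface is that the same code works for every monad at once. A small sketch:

```haskell
-- `pairs` is written once against the Monad interface and works for Maybe
-- (short-circuiting on Nothing), lists (all combinations), IO, and so on.
pairs :: Monad m => m a -> m b -> m (a, b)
pairs ma mb = do
  a <- ma
  b <- mb
  return (a, b)

main :: IO ()
main = do
  print (pairs (Just 1) (Just 'x'))  -- Just (1,'x')
  print (pairs [1, 2] "ab")          -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```

`and_then`, `flatMap`, and `await` each give you one monad; the typeclass gives you code that is generic over all of them.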
There is the argument that lazy evaluation improves composability and thus code reuse.[1] Edward Kmett also described how he uses laziness on Reddit sometime ago.[2]
There are definitely scenarios where laziness is crucial. Haskell is the only mainstream language I know of where laziness is the default, but many languages offer laziness when needed. (E.g. IEnumerable interface in C#.)
Monads are essential for "effectful" programming in a functional language. Imperative languages don't need monads, because everything they do is effectful (which leads to unnecessary side effects, and from there to a plethora of errors).
I believe Clojure has laziness out of the box as well.
While both languages are functional, in contrast to Haskell, Clojure doesn't have functors or monads. My understanding is that the same problems are essentially solved in a different way, with macros.
Except for teaching what a functional programming language is, what use cases does Haskell actually serve for employers? I've never seen a functional language used in industry. Did I miss it?