If Haskell is so great, why hasn't it taken over the world? (2017) (pchiusano.github.io)
123 points by tosh on May 20, 2018 | hide | past | favorite | 149 comments

The author notes the success of Go. Go's developers knew when to stop. Go is a mediocre language, but it has all the parts needed for server-side software. That's a big market.

I suspect that part of the success of Go is simply because its libraries are heavily used. You know that the library is serving a zillion requests a second in Google data centers. Major bugs have probably been found by now. Go has one solidly debugged library for each major function. Some other languages have a dozen half-debugged partly finished libraries. I once discovered that the all-Python connector for MySQL failed when you loaded more than a megabyte or so with LOAD DATA LOCAL. It passed the unit test, but clearly nobody was using it in production.

There's an intermediate stop between imperative programming and functional programming - single assignment. Variables are initialized once and are then immutable. This is becoming the default in newer languages such as Rust and Go - if you want mutability, you have to ask for it. This gives you the immutability advantages of functional programming without the run-on sentence syntax. Also, results have names, which improves readability and gives you something to display when debugging.

In other words, Go is designed for practicality. It shows design consideration to favor engineering while Haskell, as far as I can tell, barely cares.

Haskell actually doesn't care by conscious choice: It's meant as a research language, and has "avoided success at all costs" (to use the tongue-in-cheek quote) to retain the freedom to play around with things. Which in turn led to quite a few inventions that are now being copied by other modern languages (like Rust), though they rename them to make them less scary (apparently programmers are easily scared by names ... like the one that starts with M.).

That's a bit sad. On the other hand: Imitation is the sincerest form of flattery.

I don't know that it's sad at all. If you want your language to be a research language, there's nothing wrong with making it one. And if that's what you want, then being influential, having other languages steal your ideas, is what success looks like.

If you want your language to be used by working software engineers, that's a different metric of success. But then, you'd do things differently if that was your goal.

Haskell, in achieving some real-world use, has achieved beyond the wildest dreams of a research language.

It's actually "avoid (success at all costs)", rather than "avoid success, at all costs".

>In other words, Go is designed for practicality. It shows design consideration to favor engineering

To favor 1969-1975's engineering, you mean.

> implying that 1969-1975's engineering was worse than today's

Spoilers: it was (at least in regard to software development)

At least post-1975 software development had the Mythical Man-Month to reflect on.

Furthermore, advances in memory safe languages and type safe languages can both be considered good things. Unfortunately, Go is "almost there but not quite" in these two departments.


>I'm going to go out on a limb and guess that you've never had to work in that space. It's really easy to insult Go's solutions

It's really easy to insult my opinion as well. I have worked in really large codebases.

The only thing where Go helps "really large codebases" is quick compile times. But it is really easy to create a fast compiler when the language is anemic in features; this is no feat at all.

If Go wanted to be useful for "really large codebases" it would have supported good exception handling (take a look at Common Lisp for exemplary exception handling), for a start. And don't get me started on the horrible kludge of using interface{}.

And for "really large codebases" you want to keep mindless boilerplate code repetition (which is the noise that puts haze over the signal) close to zero. Go has many design choices that do exactly the opposite -- increase boilerplate to the max.

> It's really easy to insult my opinion as well.

Fair enough, and somewhat deserved. (But only somewhat - your tone kind of led me into temptation. Still, I apologize.)

> The only thing where Go helps "really large codebases" is quick compile times.

Which was, in fact, explicitly their intent. In comments on this same article, people have been complaining about how long it takes to compile GHC. How long would it take for Haskell to compile a 10 million line program? For a program that has, say, 50 people working on it, and that lives for 20 years, that adds up to real money.

Now, you could argue that, if Go didn't increase boilerplate so much, you wouldn't have so many lines to compile. That's a fair criticism. Still, just focusing on compile times, would you rather compile 10 million lines at Go's speed, or one million lines at Haskell's?

> But it is really easy to create a fast compiler when the language is anemic in features, this is no feat at all.

The "feat" is choosing to focus on compile time rather than on features.

I disagree with you on exceptions. Go's error handling approach means that you only have to worry about your function and the functions you directly call. Java's approach (or worse, C++'s, since the exceptions are unchecked) means that you have to worry about what every function in the call graph can throw. As the size of the program increases, knowing everything that can be thrown by everything you call becomes unreasonable. (I don't know enough about Lisp's approach to know whether it suffers from this problem or not.)
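For comparison, the value-based style described above can be sketched in Haskell as well, where possible failure is part of the return type via Either and only the direct caller has to handle it (the function and messages here are invented for illustration):

```haskell
import Text.Read (readMaybe)

-- Failure is an ordinary value in the result type, so a caller only
-- needs to inspect the functions it directly invokes, much like Go's
-- explicit error returns.
parsePort :: String -> Either String Int
parsePort s = case readMaybe s of
  Nothing -> Left ("not a number: " ++ s)
  Just n
    | n < 1 || n > 65535 -> Left ("out of range: " ++ show n)
    | otherwise          -> Right n

main :: IO ()
main = do
  print (parsePort "8080")  -- Right 8080
  print (parsePort "high")  -- Left "not a number: high"
```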

>This gives you the immutability advantages of functional programming without the run-on sentence syntax...

What exactly do you mean by this? Have you looked into something like the let expression in Haskell? With it you can define as many symbols as you would like, and use them in the evaluation of the final expression...

Agree, one can even throw the routine, distracting definitions to the bottom in a "where" clause, which I always thought was a beautiful feature.
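To make the two styles concrete, here is a minimal sketch (with invented names) of the same function written once with let and once with where:

```haskell
-- Names are bound once; the let form puts definitions first.
areaWithLet :: Double -> Double -> Double
areaWithLet w h =
  let border = 2.0
      innerW = w - 2 * border
      innerH = h - 2 * border
  in innerW * innerH

-- The where form states the result first and pushes the routine
-- definitions to the bottom.
areaWithWhere :: Double -> Double -> Double
areaWithWhere w h = innerW * innerH
  where
    border = 2.0
    innerW = w - 2 * border
    innerH = h - 2 * border
```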

My experience with Haskell has been that if I've written a "run-on sentence", it can be factored into separate functions, usually resulting in better readability.


What facility does Go have for immutable values?

(I'm presuming you mean the val vs var in Scala.)

From my own experience, Haskell is hard to learn. I have no grounding in maths, but that hasn't stopped me learning most languages. It's a pretty massive barrier to Haskell though, and I find that a lot of the community really struggle to explain things in terms that non-haskellers will understand.

The biggest problem I've found is a real inability to explain _why_ the things it does are cool, or what real-world applications they have.

I must admit that I also don't think it's a very pretty language. I like my code to be easy to read, and I just don't feel like Haskell fills that requirement. But that's obviously very subjective.

At the end of the day, I've found Rust more fun to learn and use, so I've been putting my time into that lately. Haskell is cool, but the time investment is high and the community is small, and I like to build things rather than spend hours trying to figure out what a string of symbols means.

Sounds relatable.

I've tried to have a look at it a couple of times and usually gave up. And it was never fun, even though I usually enjoy trying out new languages a lot. My only positive experience was when I finally got something done in PureScript, which, for lack of deeper knowledge, looks very similar to me.

I especially detest languages with an overly "cryptic" alphabet of non-word modifiers, operators, etc. Perl was bad (but I've come to like it, 20 years later), Scala was worse, and Haskell takes the crown with all the <*> and <=> and so on :|

I think it's hard to explain why the things it does are cool, because it's related to the aesthetic sense of beauty, and it's very difficult to explain that one. It's like you take a person who is doing number theory, and you ask them to explain why they are so fascinated that you can approximate the distribution of prime numbers with a particular function.

I think it's related to the aesthetics of seeing how things are connected into a "bigger picture". It gives you an immense sense of beauty once you are able to see that things you thought were unrelated are in fact connected into a bigger picture and very much related.

In Haskell a lot of things are about this "bigger picture", while in many popular languages the things are just a bunch of practical tools that have nothing to do with each other.

One of the issues with learning Haskell concepts, I think, is that once you learn and understand a certain concept, it becomes obviously easy. At which point it becomes difficult to relate to someone who doesn’t yet understand that concept, and to whom what seems intuitively obvious to you seems insurmountably hard.

> I just don't feel like Haskell fills that requirement.

About readability: Haskell is unreadable until you understand some basic precedence rules really well and until you internalize a handful of things like the function composition operator or the $ operator.

Then it becomes extremely readable.
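A small illustration of the two operators mentioned, with invented example functions: (.) composes functions right to left, and the low-precedence ($) mostly just removes parentheses.

```haskell
import Data.Char (toUpper)

-- Read right-to-left: drop the spaces, then upper-case what's left.
shout :: String -> String
shout = map toUpper . filter (/= ' ')

-- f $ g x is the same as f (g x); here ($) saves two pairs of parens.
result :: Int
result = sum $ map (* 2) $ filter even [1 .. 10]
```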

Think of writing in shorthand [1]. For someone who does not know it, it looks quite cryptic. But once you internalize it, it becomes extremely concise and information dense.

So it is a case of a big initial investment and low long term/usage cost vs low initial investment and much higher usage cost.

So that, in a way, answers the question raised by the post: why hasn't Haskell taken over? It is because human beings are risk averse, i.e. we have a preference for a sure outcome over a gamble with higher or equal expected value.

[1] https://en.wikipedia.org/wiki/Shorthand

Shorthand isn't necessarily a good act to follow, as most shorthand requires a solid understanding of context, and it's not uncommon that the person who wrote the shorthand is the only one who can completely understand it in original form.

Personally, I'm familiar enough with Haskell to be able to slowly work my way through most code, but I'm not at the level where I would be confident in writing anything but simpler programs.

I find idiomatic Haskell suffers from what I call "have to limit my line length-itis" - i.e. "Ord" instead of "Ordered", using "x" and "xs" as identifiers instead of e.g. "first" and "rest". This makes for compact code, but not necessarily very readable code. And for anyone who wants to tell me that it's closer to mathematical notation, well, frankly, maths could take a few hints from modern software engineering about readability.
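For reference, the naming convention in question shows up in code like the textbook definition of map; the second version below uses the longer names suggested above (both are sketches, not library code):

```haskell
-- The conventional one-letter names: x for the head, xs for the tail.
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []
myMap f (x : xs) = f x : myMap f xs

-- The same function with the more descriptive names.
myMap' :: (a -> b) -> [a] -> [b]
myMap' _ []             = []
myMap' f (first : rest) = f first : myMap' f rest
```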

Then there's the dissonance between actually declaring an algorithm a la Haskell/FP, and actually describing how the algorithm works.

It is sometimes nice to declare algorithms compositionally. This thing I want is the max of this, joined on to five of these, filtered by this criterion. Lovely. But opaque as to the implementation.

I've found that in my experience with most software I write, the parts where the exact implementation isn't that important don't take a long time to implement. Immutability, idempotency, and removing side effects can work just the same in non-FP languages here. The other parts, which almost always involve some sort of IO, require very precise control over implementation, or state, or timing, and are often very difficult to declare with "is" - sometimes the only sensible way I can seem to think of them is a series of processes.

When I try to implement this sort of thing functionally it feels like a retrofit, and never as elegant as the simple imperative "do this, then this, then that".

I'm not trying to be anti-Haskell or anti-FP - I love and use FP principles every day. But I'm definitely in the camp that thinks "pure FP" is the best solution to only a small set of problems.

To me, where Haskell fails is that it has very little to offer for these imperative problems. Which, for many applications, makes it almost worse than even a crappy old imperative lang.

>When I try to implement this sort of thing functionally it feels like a retrofit, and never as elegant as the simple imperative "do this, then this, then that".

Not sure what you mean by elegance. I am not interested in elegance. I am interested in the code being readable many weeks from now. I am interested in the guarantee that there are no hidden dependencies in the code I am looking into. I am interested in the guarantee that the code/computation won't end up being a mud ball comprising a dozen mutable variables and their transient state, that can go arbitrarily wrong in a million ways involving half a dozen loops, and that cannot be examined in isolation. Those are the things I use Haskell for.

Also, I think a reason for the kind of difficulty you describe might be a lack of fluency in the vocabulary of FP, which is things like maps, folds, zips, filters etc. Knowing these functions is one thing. Being fluent in their use, by combining them, is different.

The frequently encountered "Haskell is not readable" mindset stems from the fact that there are so many people, still somewhat new to Haskell, who know these functions but are not fluent in their use and common patterns (a fact to which they are oblivious), trying to read code written by people who are fluent in them...
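As a concrete (invented) example of that vocabulary combined: pair names with scores, keep the passing ones, and pull the names back out.

```haskell
-- zip pairs the lists, filter keeps passing scores, map extracts names.
passing :: [String] -> [Int] -> [String]
passing names scores =
  map fst . filter ((>= 50) . snd) $ zip names scores

-- A fold expresses the running accumulation directly.
total :: [Int] -> Int
total = foldr (+) 0
```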

I agree around the desired outcomes regarding readability and side effects, but it's not like other languages can't be used in this manner. If you're an OO-ist and you design good interfaces with good contracts, the 'million ways involving half a dozen loops' can be very quickly limited in scope to a couple of methods in a small class.

That's not to say that it always happens, and certainly some less, uh, experienced developers will write terrible code. But they'll write terrible code in Haskell as well (or no working code at all, which has been my experience at least once).

I'd consider myself very comfortable with maps, folds, filters and zips. I wouldn't consider myself super fluent with their use in Haskell, which definitely contributes to my own struggles with the language.

But for something like the canonical quicksort example in Haskell, it takes me quite a while to figure out whether it's actually implementing quicksort by the book, or whether it's implementing something that sounds like quicksort but isn't. This is because I have to map the declaration to the underlying implementation when it matters. Probably more often than not it doesn't matter at all as long as the code works and isn't causing problems (which is where the functional primitives are fantastic), but I do find that digging deeper into a more complex functional algorithm can be a difficult task.
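For concreteness, the canonical example usually looks like the sketch below. It sorts correctly, but it builds new lists at every step and traverses the tail twice per level, so it is quicksort-shaped rather than the in-place, by-the-book algorithm - which is exactly the declaration-versus-implementation gap being described.

```haskell
-- The famous two-case definition: partition around a pivot by
-- building two fresh lists, then concatenate the sorted halves.
quicksort :: Ord a => [a] -> [a]
quicksort []       = []
quicksort (p : xs) =
  quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]
```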

This is because you have to think about how every part might work behind the scenes. Am I doing something stupid like mapping my whole data set with a computationally intensive function? Is one of the innocent-looking predicates in my list comprehension actually some super intensive function that's had an operator overload? Am I going to be applying this predicate to the entirety of a massive list where in an imperative context I would have a really obvious switch case or set of ifs, a clear non-symbol invocation of reallyExpensiveFunction, and exited my loop early?

It's a little difficult to describe I guess, but for me reading [what I believe to be] idiomatic Haskell code at a high level is reasonably straightforward, if somewhat slow due to the compactness, but actually understanding what that code is doing can be incredibly difficult.

In some ways it's the same type of issue I have against liberal use of recursion. It might be reasonably easy to describe a recursive algorithm, but to really get in and understand it requires a much deeper understanding that's often easily more difficult than understanding its imperative cousin. There are real reasons why comp sci students struggle with it.

Because the industry isn’t dominated by well thought out solutions. Most programmers I talk to love the idea of spending time and using the perfect tools to design outstanding solutions to problems. They discuss how much they supposedly like to learn.

But when push comes to shove, your 9–5 career growth is probably going to be best optimized by cobbling a bunch of Python together and shipping it. Especially if you wrap around it a bunch of buzzword frameworks and deployment technologies. Unless your boss or your boss’s boss are technical and opinionated about good software engineering, nothing is going to be optimized for such. And a $100k+/year paycheck is a hell of an amount of inertia.

A two month boot camp is sufficient to pump out an individual who can be opinionated about their technology choice, and even produce an individual who can duct tape some services together to produce some semblance of value. Likewise a stock 4-year compsci degree. Be it a boot camp, a university, an online coding school, or an Ivy League: They’re not going to teach you Haskell (or Lisp or ...), they’ll teach Python, because it’s easy, even if it rarely produces simple solutions. And once somebody has put in the energy to learn one thing that has become profitable for them, they’ll need an enormous amount of rationalization to learn and invest in something else.

> Because the industry isn’t dominated by well thought out solutions. Most programmers I talk to love the idea of spending time and using the perfect tools to design outstanding solutions to problems. They discuss how much they supposedly like to learn.

I find that it's because "well-thought-out" means different things to different developers. To developers who are familiar with Haskell, well-thought-out means a sound categorical mapping of the program, almost in a Hask-is-a-category way. To enterprise developers, well-thought-out means UML diagrams. To a greybeard, well-thought-out means fitting everything nicely into 256 bytes.

I'm exaggerating in every way of course, and yes, the more mathematically minded will see that they're quite essentially the same (UML, categories, and even some sort of goedelization).

More importantly, I think it's that because there are different ways of expressing "well-thought-out"-ness, developers tend towards the things they are familiar with - unit tests are easy to understand. Hindley-Milner not so much - I find from teaching that it requires a conceptual leap to making formal inductions.

But it doesn't seem too popular in the open source world either. The only large project I can think of that uses it is Xmonad.

Pandoc is also pretty well-known even outside the Haskell world.

The Elm compiler is written in Haskell.

Interestingly, the comp-sci program at the University of Maryland does start with a Lisp-esque system. They start with a very basic language and slowly add new features throughout the first course.

My experience at the U of MD (probably dated, but man your summary sounds familiar...): http://www.dadhacker.com/blog/?p=755

Thanks for this!

My writing is of course a bit hyperbolic. Some universities do teach non-imperative and non-Modula-style OO as part of their computer science courses. But my experience shows that, unsurprisingly, fresh grads are most comfortable with a sprinkling of the most popular languages. To me, this is an indicator that industry popularity has tremendous influence on education popularity. For computer science, I disagree that industry should have such a steep bearing on the construction of a good computer science education. For something like a vocational program, trade school, or whatever else is close to SW engineering, letting industry strongly influence the program is agreeable.

The master's computer engineering program at the Faculty of Engineering of the University of Porto, Portugal, starts off with Scheme in the first semester. The fourth year includes logic programming with Prolog.

I don't think any CS education teaches language ecosystems. Students learn them on their own for whatever language they want. The choice they make is probably much more influenced by the popularity of the language or a job market, but not in which language they have to do all those tiny mostly algorithmic assignments.

Off topic, but: I think that CS departments are teaching CS, not software engineering, and I think that 95% of their grads are going to work as software engineers. There's a bit of a mismatch there. One of the places it shows is precisely in not teaching ecosystems.

> Unless your boss or your boss’s boss are technical and opinionated about good software engineering

And if they don't care about ever making money off of your work. But most of us aren't doing [academic] research.

Technically correct is the worst kind of correct when your company runs out of money/revenue.

It’s possible to be non-academic and productive without pumping out mediocre, greedy-constructed code that doesn’t take advantage of the fruits of academic research, don’t you think? I think this is a characteristic of a pretty good software engineer.

Since when does "good software engineering" mean that you won't make any money?

Since the bar in that comment was set at "the perfect tools to design outstanding solutions to problems"

Ain't a lot of people got time and money for perfect and outstanding.

At my university (Chalmers University of Technology), Haskell is one of the first courses comp.sci students take to get introduced to programming. Haskell is immensely popular there with a lot of software engineering & comp.sci students.

Same at my uni (Dresden University of Technology). The first two semesters have "Algorithms and Data Structures" where the first semester is taught in C (probably because students should have an unobstructed view to how data structures are laid out in memory and how algorithms traverse the pointers that make them up), but the second semester switches to Haskell.

This is the same at Imperial College London (my university), and while some students complain (mostly those who were already familiar with programming before coming to college), some are very interested; in fact, every year we have about 2-3 groups (each of 3-4 people) attempting (and succeeding) to write a compiler for an in-house language in the next academic year.

Ironically, here is a Haskell course taught at UPenn, an Ivy League school.


Oh yeah, this course is famous! People seem to recommend the Spring 2013 version usually, I'm not completely sure why though.

> I would not be surprised if Haskell were 100x better than Java for writing compilers.

The author sounds like someone who has glimpsed the truth, but is only willing to take minuscule baby steps away from their mistaken position.

Think about what it means to say Haskell is 100x better than Java for writing compilers. If you really believe this, quit your job and spend a year writing a Graal competitor in Haskell. You'd own the market for Java, Python, Ruby, Perl. You'd have built your own LLVM JIT. Literally every week you spend in Haskell is two man years of Java programming!

Of course this is nonsense. There is perhaps a 10x difference between Haskell and x86 assembly. I'd doubt it, but it's at least plausible. But compared to C? If Haskell really was some blessing from the gods, you'd actually know of more than a handful of random compilers written in Haskell. Like what actually is there, GHC and Elm? How is that a shining array of successes?

To stick to your guns and say, well, abstract composability is super important but maybe it breaks down around IO, is just not supported by anything approaching evidence. Yes, C++ causes LLVM issues, but writing LLVM isn't hard because of C++! The difficulty of a problem is not total_difficulty / language_expressiveness. It doesn't matter a damn how expressive your implementation language is when every step forward is a research project.

There's a dangerous mythology in programming that we can fix all the complexities of programming by piling on ever more contrived tooling. All I've seen this lead to is an inability to remember that you're actually programming to solve problems. Maybe you solve all the really easy problems with a different choice of language, but the really easy problems don't matter. To end with a Torvalds quote on Rust that applies just as well here:

> To anyone who wants to build their own kernel from scratch, I can just wish them luck. It's a huge project, and I don't think you actually solve any of the really hard kernel problems with your choice of programming language. The big problems tend to be about hardware support (all those drivers, all the odd details about different platforms, all the subtleties in memory management and resource accounting), and anybody who thinks that the choice of language simplifies those things a lot is likely to be very disappointed.

>all the subtleties in memory management and resource accounting), and anybody who thinks that the choice of language simplifies those things a lot is likely to be very disappointed

Those are exactly two of the things that Rust and many languages with complex type systems tackle and very successfully help out with:

Instead of having to get it right every time and having to pay special attention in the edge cases, you define it well and then let the compiler help you catch misuses.
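A minimal Haskell sketch of the same "define it well, let the compiler catch misuses" idea (all names invented): wrapping raw numbers in newtypes turns mixed-up units into compile-time errors at zero runtime cost.

```haskell
-- Meters and Feet are both Doubles underneath, but the compiler
-- refuses to confuse them.
newtype Meters = Meters Double deriving (Show, Eq)
newtype Feet   = Feet   Double deriving (Show, Eq)

addMeters :: Meters -> Meters -> Meters
addMeters (Meters a) (Meters b) = Meters (a + b)

-- addMeters (Meters 1) (Feet 3)   -- rejected at compile time
```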

Let's say you write the best image decoder library for the next great image format, and everyone wants it on every device that is connected to the internet.

Writing it in Haskell would mean that only Haskell users, or people willing to link in a large runtime, could use it.

So, you write it in C or Rust, so anyone can link it in and not care what it was written in. Maybe also write a native Java version, because that can already run on billions of devices.

Makes a lot of sense. Haskell comes with baggage and has an impedance-mismatch with other commonly used languages.

Similar reasons must have been to blame for the demise of Smalltalk. Many believe it was the best solution of its time, but it was not easy to integrate with other languages and other ways of doing things.

Also, let me add that it was difficult to create small Smalltalk programs that wouldn't require most of the base library to be dragged along. And you couldn't import them into another environment where the base libraries were somewhat different.

D can also link to only the C runtime library.

GHC can produce native libraries just like C or Rust.

But you carry around a runtime with managed objects that can’t be freely traded across the boundary to C libraries.

Certainly it isn't the most attractive solution - still it is possible and works well.

When it comes to moving things across the FFI barrier, that is also very possible, as there are all kinds of types (including stable pointers) in the Foreign.* modules that map onto C types.
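As a tiny sketch of that mapping, here is a plain foreign import: CDouble from Foreign.C.Types stands in for C's double, and the imported C function is called like any Haskell function.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- Bind C's cos from math.h; CDouble marshals directly to C's double.
foreign import ccall "math.h cos" c_cos :: CDouble -> CDouble

main :: IO ()
main = print (c_cos 0)  -- 1.0
```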

But you have to link in the runtime, including the garbage collector.

I don't see how that is a big problem.

It's a larger footprint, possibly drains battery when you don't expect, may mess with signal handlers and other global state.

And it does not really scale to many libraries. What happens when everyone thinks their language is best and wants to link in a big runtime? You need 30 different runtimes for 30 libraries? Ouch.

And if not careful about versioning, you could end up with symbol conflicts, etc.

All these problems can be overcome, but are just not acceptable if you want to be a ubiquitous library like libjpeg. Needs to be super simple from a runtime perspective.

Not to mention trying to get different garbage collectors to play nicely together.

History has shown that sophisticated, superior technologies are almost never the winning ones.

Instead, technologies that prevail are often the ones that are more accessible to a broader audience of developers and the ones that manage to evolve faster in order to engage with the latest trends without breaking backwards compatibility.

The same philosophy applies when trying to find a solution to an engineering problem.

A good solution is rarely the super-well-optimised and super-efficient solution that applies all the best patterns written in the sacred books.

A good solution is the one that solves the problem quickly and at the minimum short-term and long-term cost, including development, running and maintenance.

Depends on where you put your grading of superior.

Maybe Haskell is the end-all-be-all for some criteria, but it certainly isn't for others. So one could say it's not superior for all the use cases where these criteria don't match. I'd agree more with your points if I saw more of a unified picture of people saying "Haskell is the superior programming language"; of course you see a lot of converts who say this, but it's a very vocal minority. This ends in a tautology, with the broad masses still unconvinced.

Also a lot of them concede that it's not practical for everyone for everyday use.

I spent a large amount of personal time becoming an intermediate Haskell developer from 2009-2015, culminating in landing a job doing exclusively functional programming for an analytics team in a large company.

My experience made me give up on Haskell & functional programming entirely, despite my feeling that the principles of functional programming are often “better” than object orientation and other paradigms.

The only language-specific thing that turned me off of Haskell in a significant way was that so many important concepts in Haskell are implemented via pragmas that extend the language and either enforce syntax restrictions, enable totally new (and often esoteric) syntax, or change the meaning of existing syntax.

This is really painful and makes you generally avoid great new features and artificially limit yourself to more basic designs, because of the learning curve and how committed to one siloed set of patterns you can become.
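Two of the milder examples of that mechanism: each pragma below switches on syntax that plain Haskell rejects, so a reader has to know the extension before the code even parses.

```haskell
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE LambdaCase    #-}

-- TupleSections: (, 0) is shorthand for \x -> (x, 0).
pairWithZero :: [a] -> [(a, Int)]
pairWithZero = map (, 0)

-- LambdaCase: \case fuses a lambda with a case expression.
describe :: Maybe Int -> String
describe = \case
  Nothing -> "none"
  Just n  -> "got " ++ show n
```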

It reminds me of Dennis’s coffee shop idea in 30 Rock:

Dennis: One word. Coffee. One problem. Where do you get it?

Liz: Anywhere, you get it anywhere.

Dennis: Wrong. You get it at my coffee vending machine in the basement at K-mart. You just go downstairs. You get the key from David. And boom, you plug in the machine.

^^ that’s what it feels like reading Haskell tutorials when all you want to do is multiple dispatch or heterogeneously type some built-in container, simple things in so many languages.
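For what it's worth, the usual pragma-free answer to "heterogeneously type some built-in container" is to wrap the alternatives in a sum type (names invented); the more general existential-type route does require a language extension, which is the comment's point.

```haskell
-- One list, three kinds of payload, no extensions needed.
data Item = AnInt Int | AStr String | ABool Bool
  deriving (Show, Eq)

items :: [Item]
items = [AnInt 1, AStr "two", ABool True]

render :: Item -> String
render (AnInt n) = show n
render (AStr s)  = s
render (ABool b) = show b
```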

This alone wasn’t what put me off though. The real problem is sociological.

Most companies hardwire a feedback loop between development teams and product or business managers that teaches developers they will be rewarded for their ability to unsafely hack things into a Jenga tower of system components for the sake of rapidly addressing ad hoc business questions even when, perhaps especially when, nobody has the slightest idea if answering the ad hoc business request is likely to be worth the additional instability the hacks will put into the Jenga tower.

This is very nearly philosophically at odds with the spirit of functional programming, from a first-principles level, which means even if you eke out a platform capable of dealing with this in a functional paradigm, you know for sure that the business will not see your work as valuable, and safety guarantees, correctness proofs, automatic parallelization, etc., will often not be rewarded, meanwhile just hacking stuff into some C++ or Python codebase that “just works” will be rewarded.

It’s not a satisfying phenomenon, but it convinced me that it’s not worth investing any more of my time into functional programming.

Could I summarize your objection to functional programming in a business setting as the fact that object orientation makes it simpler to model businesses and thus easier to solve their problems?

If you think about a function, it does just one thing. But the real-world "agents" that run a business do many things. An OO class has many methods, not just a single input and a single output.

And inheritance: businesses differ from one another incrementally. Inheritance is a way of modeling such incremental differences explicitly, to a degree. Even if inheritance is not perfect and can cause problems, it helps in many cases.

I personally believe almost the opposite. Even when I write Python, I use classes sparingly and if I ever choose to get a lower back tattoo it would probably say “Inheritance Sucks” in Chinese or something.

Think of a constructor function for a class. It is so easy to shove new complexity in there. Just add new optional arguments, assign them as instance attributes, and off you go.

If you have a class like MonthlyReport, and all of the sudden your boss wants to know how many widgets per month you sell specifically in Narnia, well you can probably hack this into the existing methods of MonthlyReport very quickly.

Doing it once maybe isn’t so bad. But doing it a dozen times quickly leads to a bunch of functionality shoved into MonthlyReport that probably should be refactored out, and lord help you if your boss all of the sudden says you need all of it to go into DailyReport, and you need a new YearlyReport class with overloaded behavior for FiscalYear, CalendarYear, and NarnianYear.

If I heard about a system like this, my guess would be that it’s got a crapload of copy/pasted code, constructors and general class methods with huge lists of parameters and switches, maybe a few subsystems where someone tried to rewrite this with misguided MixIn patterns and abstract base classes, and everyone is terrified to change anything because no one knows how it actually works.

Even an imperative design that just used boring modules of functions, maybe with a tiny amount of metaprogramming like decorators for repeated logic, would be waaay better, no functional programming needed. But well-crafted functional programming would be better still.
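As a hypothetical sketch (all names and data invented, not from any real codebase), the "boring functions" alternative to a swollen MonthlyReport class might look like this:

```haskell
-- Invented example: each reporting concern is a small pure function,
-- and a new requirement is a new composition, not another switch
-- inside a MonthlyReport constructor.
data Sale = Sale { region :: String, widgets :: Int }

-- Count widgets sold in one region.
widgetsSold :: String -> [Sale] -> Int
widgetsSold r = sum . map widgets . filter ((== r) . region)

-- "How many widgets in Narnia?" becomes one line of composition.
narnianWidgets :: [Sale] -> Int
narnianWidgets = widgetsSold "Narnia"
```

The daily/yearly/Narnian-calendar variants would then be further compositions over the same small pieces rather than subclasses.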

The problem is sociological. Business managers won’t tolerate being told, “No, I cannot get you this result for a daily Narnian calendar by tomorrow. For that, we’ll need to draw up a quick design plan to make sure we add it in a maintainable way.”

Functional programming mostly requires saying no, being careful, measuring twice and cutting once.

With object orientation, you can optionally push back, say no, and try to do it carefully, but you don’t have to: because of all the different buckets of mutable state, you can often find a place to shoehorn unplanned complexity in somewhere and leave it for future people to worry about when the shoehorned complexity causes a big problem.

> "and lord help you if your boss all of the sudden says you need all of it to go into DailyReport, and you need a new YearlyReport class with overloaded behavior for FiscalYear, CalendarYear, and NarnianYear."

Oh man, I've had to do this once or twice. The best approach I've found is to use macros (this was in the SAS language; its macros are kind of like C++ templates, if you are familiar with those). I ended up writing a generic "aggregation macro" that took two datetimes and an 'interval' argument and rolled up each variable within the supplied time range at 'interval' frequency.
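That generic-aggregation idea can be sketched in Haskell rather than SAS (all names invented): a function that takes the bucketing rule as an argument, the way the macro took its 'interval':

```haskell
import Data.Function (on)
import Data.List (groupBy)

-- Invented sketch: roll up (timestamp, value) pairs, where 'bucket'
-- plays the role of the macro's 'interval' argument. Assumes the
-- input is sorted by timestamp.
rollUp :: (Int -> Int)      -- maps a timestamp to its bucket
       -> [(Int, Double)]   -- (timestamp, value) observations
       -> [(Int, Double)]   -- (bucket, summed value)
rollUp bucket xs =
  [ (bucket t, sum (map snd grp))
  | grp@((t, _) : _) <- groupBy ((==) `on` (bucket . fst)) xs ]
```

Here rollUp (`div` 60) would give per-minute totals and rollUp (`div` 3600) per-hour totals, with no copy/paste between the two.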

Of course it doesn't help when you inherit someone else's code. In that case you are stuck with the possibly grueling job of refactoring, or you have to hack in a bodge, i.e. a hardcoded exception, and like you said, most managers just want it "done", so you end up with duct-taped-together crap that is a pain to maintain.

This, a thousand times this.

>The only language-specific thing that turned me off of Haskell in a significant way was that so many important concepts in Haskell are implemented via pragmas that extend the language and either enforce syntax restrictions, enable totally new (and often esoteric) syntax, or change the meaning of existing syntax.

god the ghc extensions. good luck figuring out why something is happening that you don't expect.

What? They are opt-in per module unless you manually enable them globally. Anyway, the entire reason Haskell was created was to be a base for future language research. The extension system was largely the point.

I've done a 9-5 in mostly Haskell since 2013. If you have specific questions about using Haskell in production, I can share my experience.

If you've had a bad experience with a piece of tech and draw a broad conclusion, I think it's reasonable if unfortunate that it left a bad taste in your mouth. But I also think there's a brighter picture to paint :)

I have a specific question: do you hire remotely? :)

Sometimes! We're not a remote-first company but we consider experienced candidates who live in a not too dissimilar time zone.

what's your timezone?

New York, currently in EDT. We're not hiring at the moment, but it's possible we'll post in the monthly who-is-hiring thread at some point in the future.

If you look at software at various scales, Haskell does a lot to get the fine-grained and maybe even medium-grained parts right and correct. Yet I don't see a convincing value proposition for Haskell at the coarser-grained scales. On that level, the benefits of Haskell as a language don't shine as brightly anymore, and this is an area that many teams struggle with.

And oh: Haskell's lazy evaluation is the reason why I wouldn't like to use it in a business context; it introduces all kinds of weirdness I don't want to deal with there.

You'd be surprised how much weirder it gets when everything is evaluated eagerly.

I disagree. If everything is evaluated eagerly, intuition usually works. Failing that, tools like logging and debugging and profiling will usually bridge the gap. A non-strict language like Haskell makes it much more difficult to predict run-time performance characteristics, unless you introduce overrides all over the place to force everything to be strict anyway.

exactly my experience. I personally find lazy evaluation a neat idea in principle. In practice, it introduces problems I never had in, let's say, java. One primary problem is that errors become delocalized in the code base. In a strictly evaluating language, the stack trace gives me a pretty accurate idea of when and where an error is occurring. In lazy languages, it can happen when you don't expect it.
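A tiny invented example of that delocalization: the faulty value is constructed in one place, but the error only fires where the thunk is later forced, which is where the stack trace will point.

```haskell
-- Invented example: laziness delocalizes the failure site.
ratios :: [Int] -> [Int]
ratios = map (100 `div`)        -- a 0 in the input causes no error here

main :: IO ()
main = do
  let rs = ratios [4, 0, 5]     -- the bad value slips in silently
  print (head rs)               -- fine, prints 25
  print (rs !! 1)               -- only NOW does "divide by zero" fire
```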

There is some irony in the fact that a well-received tool for working with haskell code bases (intero) has bugs (space leaks, iirc) that just cannot be tracked down properly.

> ...introducing all kinds of weirdness I don't want to have in a business context.

Despite being a Haskell worshiper, I strongly agree with this.

I consider myself a haskell fan (a 95-percenter, as I am not so happy with a few aspects, while being overall really positive about it). Unfortunately, the Haskell community is quite hostile to everyone not in line 100%.

> Unfortunately, the Haskell community is quite hostile to everyone not in line 100%.

Could you say a bit more about what you mean? This discussion is one of the highest voted threads ever on Haskell Reddit and suggests that many prominent Haskellers are not shy about offering criticism of the language:


Of course, my statement is one of subjective perception (I don't conduct empirical studies on this); you'll also find criticism and discussions about such things in the Haskell community (which is fairly large).

However, my impression is that in other language communities, even those that are tightly-knit and feature a certain "cult of pride" (such as Python), flaws and shortcomings are discussed much more openly.

Part of the problem is that (some) Haskellers are pretty quick to insinuate that the person complaining or asking "provocative" questions does not understand and should educate themselves more.

I remember being rudely shut down when suggesting that Haskell docs could benefit a lot from a few introductory examples here and there (because the type signatures tell it all, you know!)

It doesn't help that some Haskell enthusiasts seem to be allergic to the mention of certain words by the uninitiated. I think Matthew Rocklin (cool guy, admirable programmer, definitely with a mathematical education) once blogged and mentioned the word monad as an example of code with unnecessary abstraction/complexity, IIRC. I read some nasty Twitter remarks from people who were not capable of extracting the context and the intent from the surrounding text.

I think the cabal/stack schism has brought at least some change into the community and I see improvements. Until there are some massive improvements, I respectfully decline to actively participate in that community and restrict myself to occasionally consume Haskell material to educate myself.

> “It’s too hard to learn” (If your pet technology were 1000x more productive than, say, Java, would this learning curve really be a substantive barrier?)

I think the simple answer is "yes". It doesn't matter how much more "productive" something is if the vast majority of people simply aren't capable of learning it (or learning to use it well enough or doing it in a reasonable amount of time or whatever).

He does go on to point out:

> Though this question is more complicated than you think

And suggests one read his more detailed treatment of the question, including what I found most relevant, which is a more nuanced treatment of the notion of productivity, having to do with labor costs, fungibility of that labor, market competition, and overall limited resources.



I would expect that any technology that can be classified as sufficiently exotic is viewed as a huge risk, one not worth taking, by larger companies. That has traditionally included obsolescent tech and bleeding-edge/unproven tech, but it easily includes a language that's too hard for the average programmer to learn.

>>Writing CRUD apps? Haskell isn’t as much of a win.

If you are dealing with databases, having typed results from database queries and the ability to define tables with custom typed columns (the Opaleye library) can provide quite a lot of type safety.

That is just one example. If you are good with type-level programming, it is possible that you will be able to find a way to encode the invariants of your business domain in the types and have the compiler assist you in building correct programs.
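A minimal, hedged sketch of that idea (this is not Opaleye itself; all names are invented): a smart constructor can make an invariant such as "quantities are positive" impossible to violate downstream.

```haskell
-- Invented example: the invariant lives in the type, not in runtime checks.
newtype Quantity = Quantity Int deriving (Show, Eq)

-- If the module exports only mkQuantity (and not the Quantity data
-- constructor), every Quantity in the program is guaranteed positive.
mkQuantity :: Int -> Maybe Quantity
mkQuantity n
  | n > 0     = Just (Quantity n)
  | otherwise = Nothing

-- Consumers can rely on the invariant; no defensive check needed.
totalPrice :: Int -> Quantity -> Int
totalPrice unitPrice (Quantity n) = unitPrice * n
```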

> "Avoid success at all costs."

> I mentioned this at a talk I gave about Haskell a few years back and it’s become quite widely quoted. When a language becomes too well known, or too widely used and too successful suddenly you can’t change anything anymore. You get caught and spend ages talking about things that have nothing to do with the research side of things. Success is great, but it comes at a price. -- Simon Peyton Jones

There's not much in the way of marketing for Haskell. There isn't any drive to make it super-simple like Python, or super businessy like C#. It's fast if you make it so, but it's never going to be as fast as C. The cognitive defects required to like it are far less prevalent than those of JS-lovers. It's not built for the mainstream, even though it'll happily run GUIs and games and webpages and chat servers and whatever else is cool.

Even if it wanted to, taking over the world takes a lot of effort and luck. And it doesn't want to, and spends its effort and luck elsewhere.

Did you just call JS-lovers and Haskell-lovers cognitively defective?

It's not an uncommon meme that programmers and mathematicians possess a weird twist in the mind that allows them [in the strong form of the meme] to do, and [in the weak form of the meme] to enjoy doing, what they do. If you grant that, then it's a small step to admitting the existence of different sub-twists that incline you towards different parts of the abstract space. If you don't grant that, then consider it a rhetorical device for pointing out that comparatively few people like Haskell versus JS.

Why the word "defect" though? There must be other words that are less controversial. Maybe "abnormalities" or "specialities"?

How about "possess an outlier brain design" among the human or even general IT population.

Because less controversial words would be less funny.

"Interests"? "Appreciations"? "Proclivities"? "Personalities"?

But defects are what make diamonds have colours![0] I hope you didn't interpret that phrase as an attack.

[0] https://en.wikipedia.org/wiki/Crystallographic_defects_in_di...

I read it as more of an assertion that loving any language requires cognitive defects, some a bit more than others.

What are the cognitive defects associated with liking Haskell? I'm always happy to learn more about my cognitive limitations.

Besides the reason given in the article (most everyday applications have a large surface area where they interact with a larger system, and that system is largely imperative and untyped), I think that network effects are a big problem.

If you want to create some new application in Haskell, you'll probably need some collaborators. If all your potential collaborators know Haskell that's great, but the chances are a lot higher that most of them know Javascript or Python or Java. So, that's often what you end up using if the main goal is just to solve some immediate problem with the least hassle.

Another factor is that most of the high-profile projects people are familiar with (linux, gcc, firefox, llvm, inkscape, etc.) are fairly old projects. Haskell has also been around for quite a while, but it's really only in the last decade or so that it's become a pleasant language to use. That's both because of libraries and tooling, and also because it took a long time for the Haskell community to go from "we don't know how to do IO in this language" to "we have this IO monad thing, but we don't really know how to use it" to "this is a language we know how to write applications in and lots of people have done it."

At the time that many projects got started, Haskell either didn't exist or hadn't progressed to the point where every little thing wasn't blazing a trail into the unknown.

As a mathematician who works with category theory, I find Haskell a piece of cake. Much simpler than java, python, ruby, etc. It just requires a kind of thinking that is not taught to computer scientists or in any programming courses. I recommend the book Category Theory (Steve Awodey, 2006).

Simple: The productivity boost is not worth the time investment.

Which will give developers a better return on productivity?

200 hours spent learning Haskell or 200 hours spent learning more about the business domain they work in?

200 hours spent learning Haskell or 200 hours spent improving their soft skills?

200 hours spent learning Haskell or 200 hours spent networking?



Though, will 200 hours spent learning Haskell gain you, well, anything productive?

Surely a non-genius would require at least 1000.

Part of this is that Haskell is seen as a language by and for academics. You can't even read about the history of Haskell because it is only published through the ACM for a steep fee. So unless you are part of the academic circle you won't be exposed to it unless you are ready to invest.

The biggest growth driver for Haskell is the financial world where it has seen quite a bit of adoption.

> You can't even read about the history of Haskell because it is only published through the ACM for a steep fee

Not so! http://haskell.cs.yale.edu/wp-content/uploads/2011/02/histor...

I haven't ever encountered a Haskell-related paper or thesis that is paywalled. The authors usually put them on their personal homepages for anyone to access. Not saying there aren't such cases, but in actual Haskell learning, studying, and working practice it's not my experience, nor that of anyone I've heard from.

Cool, thank you!

Which finance companies have even >10k-line Haskell codebases?

Barclays, Standard Chartered, Tsuru Capital, Alpha Heavy Industries (not sure if they're still going), Karamaan Group

In his talk about "milestones in the development of lazy, strongly typed, polymorphic, higher order, purely functional languages", David Turner mentions that he wasn't averse to SASL (https://youtu.be/QVwm9jlBTik?t=1819) and Miranda (https://youtu.be/QVwm9jlBTik?t=2330), both predecessors of Haskell, being used in industry. Getting out of the ivory tower was not a last-minute idea, it seems.

The Peter Landin paper "On the Mechanical Evaluation of Expressions" (https://www.cs.cmu.edu/~crary/819-f09/Landin64.pdf) makes it totally clear that, for Landin, FP was an alternative to the bookkeeping required in systems programming.

Since he stands at the beginning of the tradition that led, via Turner's work, to Haskell, I think it's not a huge leap of the imagination to attribute Haskell's lack of success, as a language for building systems, to this inherited attitude that the programming system should be elegant, principled and mathematically structured. No concessions are made to the practical needs of someone writing, for example, an operating system, except to the extent that they provide an occasion for a new theoretical construct. (Lenses are a recent example of this.)

And take implicit data structures, for example. I know Edward Kmett has explored this idea in Haskell, but really, it's a totally alien concept for the FP philosophy. Just as traditional operating systems tend to require a little bit of assembly code in addition to C, the purist FP system that wants to manipulate genuine implicit data structures will need to call on some outside language with the power to manipulate them... That seems like an intentional state of affairs.

The article makes a good point. When you get to the edge of any system it behaves like a stateful object. So the only way to get rid of 'objects' completely is to extend your system to cover the whole world. I doubt the author thinks he can do this, but even what he describes sounds ambitious.

Another way to look at it is that it's good to learn FP and apply it where it makes sense, but don't neglect your object modeling skills (where object = something that exhibits behaviour in response to messages).

I think it's a question of inertia, people and organizations do things the way they have been doing them for some time. It has worked so far since the business is still up and running, so why take a risk and change.

Choosing a not widely adopted solution even if it would be somewhat better than others is always a risk.

Also it's not a question of whether to Haskell or not. There are many more choices, each with their pros and cons.

It's a bit like with religions, why should I pick just one :-)

It takes an insane amount of time to compile ghc. I can't stand apps that use it because of that. It could take hours on some machines; I would have considered riding on that bandwagon if it wasn't for that limitation.

I think it’s pretty safe to assume this limitation/problem affects almost nobody else, so not really relevant for most people.

Also: how do you manage running Firefox and Chrome? Do you download and build the whole chain of dependencies and build tools needed to build and run those? And if so, why?

I actually use firefox or chrome, so that's a non-issue. Ghc is a dependency, not the main app; therein lies my complaint. Depending on the use case I might compile some applications, and by no means was I implying others should build from source as well.

It's just that I have run into situations where a Haskell app pulls ghc and that needs a source build. If I ever build something in Haskell then there is a chance others might go through that as well.

I never had that problem with any other language (even a heavyweight language like java).

I use lots of versions of GHC across several architectures, and can't remember the last time I compiled GHC. I think you must be doing it wrong. It's not without its flaws, but `stack` can do a lot of the work for you too.

On some platforms that build from source, haskell apps pull ghc as a dependency and build it from source as well. I try to avoid those apps, but it is getting harder.

You can't compile GHC unless you already have a GHC, so why don't you just use the one you have?

You can use prebuilt binaries of ghc. If you don't need the bleeding edge builds.

My two cents: because, more often than not, worse is better.

And, recognizing that worse often wins, maybe we should consider the possibility that what we were categorizing as worse, was actually solving a different, more concrete problem.

Your last point is really key: programmers love to decide that one or two favorite features are game changers but no real-world decision should be that simple. I’ve seen people dismiss Python’s readability as a simple feature which anyone could copy but then decline to do so because they didn’t really respect that as a goal (“it’s just for newbies”, etc.), despite the mountain of evidence showing that it deserves more serious consideration.

(Oh, and since I left it out earlier: I’ve loved seeing the Rust team take usability issues so carefully. That’s a great model for the field.)

There is already a kind of Unison for Haskell:


I have to say that I have yet to see significant practical problems that functional programming solves better than procedural or OOP. I've had debates over this many times, and almost every example is a case of a bad framework and/or a bad language (such as Java's "stiff" OOP model). FP is a language band-aid, not a solution. There may be a few edge cases, but why complicate the 99% for the 1%? I'd be happy to debate more scenarios. Bring it on!

FP is not new. OO is not new. They are tools in the toolkit. Take an object with a bunch of instance methods. They are all equivalent to functions with a hidden first parameter. If you make 'this' explicit you've got pure functions. It's not magic.

Intellij even has an automated refactoring for it in kotlin. You can take a function and convert it to an extension method and back again:

    fun String.foo(a: Int)

    fun foo(self: String, a: Int)

The real fun of FP is building simple functions that operate on a single value and then stringing them together, generalising to lists of values, filtering, etc.

MapReduce is functional programming. At epic scale. Its name is literally a portmanteau of two of the most fundamental higher-order functions.

FP rocks.
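The "map stage then reduce stage" shape is visible even in a toy word count (an invented example, not from the thread):

```haskell
-- Invented example: a word count is literally a map followed by a reduce.
wordLengths :: String -> [Int]
wordLengths = map length . words          -- map stage

totalLetters :: String -> Int
totalLetters = foldr (+) 0 . wordLengths  -- reduce stage
```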

> Take an object with a bunch of instance methods. They are all equivalent to functions with a hidden first parameter. If you make 'this' explicit you've got pure functions.

Well, you’ve got normal imperative procedures that maybe return a value.

Pure functions mean something else.

The domain and use-case is not clear to me here. I want to know what real-world task is to be performed, not how to convert one language to another language.

I agree with this from my limited experience. However it feels like this ends abruptly. Did I skip over something or did I miss how we could remove composability boundaries at IO?

The first 3/4 of this article reminded me of Zed Shaw's "The Web Will Die When OOP Dies" (though for some reason I remembered that title as the converse).

It doesn't look like unison.cloud is aiming to replace the web as an end-user application platform, though. What would a pure functional platform look like? More like Datomic, I think, and maybe then I wouldn't run into 404s so much.

When the ratio of "introduction to monads" tutorial blog posts to useful applications gets somewhere below 1,000, perhaps Haskell can be considered more useful for performing data transforms than for entertaining the programmer.

Given the number of people who have learned the language, there is really not much written in haskell that is used for something other than programming a computer.

Let's look at an audience where Haskell (or FP in general) ought to be popular, if any of the hype is true: mathematicians and logicians writing pseudo-code in research papers.

What we see is that pseudocode is almost always imperative. Sometimes it even uses GOTO!

Sounds like another rediscovery of "Worse is better" https://en.wikipedia.org/wiki/Worse_is_better

See also, Lisp.

Lisp already took over the world. Most of the fancy things that originated in Lisp were adopted by other languages, and thanks to Lisp, these other languages are bearable and actually useful now.

Haskell's one of these languages.

Because it is not great at solving real-world problems.

because the world is too short

Because imperative languages require less optimization (hw wise) at first sight.

i have railed/ranted (occasionally belligerently) about haskell after having a professional brush with it two years ago. haskell hasn't taken over the world because the community is haughty about haskell's giant warts. yes it's very welcoming and open to indoctrinating you on "best practices" but as soon as you diverge in the slightest (because someone before you diverged or because you need to do something unconventional) they're completely obstinate about how ugly the language/experience can get.

the most obvious instance of this is point-free form combined with overloaded operators combined with 8? arities. if you bring this up on r/haskell or #haskell you will get shouted down about how it's not best practices to abuse that so why even bother discussing because you can write unreadable code in e.g. python. this is absolutely true but the ugliest python is still clearer than the brainfuck homage that haskell turns into with stuff like

$<><>$%%^^^ a b (@@@<> c d) $$$ e f

the point here is not to rehash the debate but that no one takes this complaint seriously because elegant haskell is the one true scotsman and everyone else's haskell is irrelevant.
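(For context: Haskell really does let anyone define brand-new symbolic operators, which is how code in that style can arise in the first place. Used sparingly the feature is benign; this operator is invented purely for illustration.)

```haskell
-- Invented example: minting a new operator is one line of fixity
-- declaration plus a definition.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- Readable when used sparingly:
evensTotal :: Int
evensTotal = [1 .. 10] |> filter even |> sum   -- 2+4+6+8+10 = 30
```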

another obvious thing is the complete lack of good tooling. stack/cabal/cabal-sandboxes/another cabal thing that's been created since i stepped away last year. there's no ide because real programmers write code using butterflies flapping their wings so why would you need anything other than cobbled together vim/emacs plugins. wanna debug? set a breakpoint in ghc but you can set breakpoints in ghc only 20 calls deep! don't even get me started on how slow the compiler is. does anyone listen to these complaints? no. why? my hypothesis is that it's like a fraternity with a hazing ritual (combined with many serious cases of stockholm syndrome).

i think a fair comparison is rust (despite being imperative). linear types are probably as foreign as purity/immutability to people and yet rust has 10x more mindshare than haskell. why? because rust maintainers care about ergonomics!

i could go on but i'm sure i'll get responses that chastise for not sticking with it long enough or not putting in the work or something else when the reality is that it wasn't worth it - i wanted a strong type system and the price i had to pay for it was too high. plenty of other strongly typed languages (ocaml, rust, f#) without all of the pain of haskell and the intransigence of the community.

This is not a valid argument. I don't know Haskell beyond basic fundamentals and I know that your argument is ridiculous.

Just consider C++ and boost::spirit. Overloading operators can lead to ridiculous syntax in pretty much any language that supports it.

C++ doesn't have any _real_ problem with overloading in _real_ code bases. I wouldn't expect Haskell to have this problem either. Being able to construct ridiculous code is not an excuse for a language not catching on.

Tooling is exactly the same point. Tooling in C++ land is completely ridiculous. In comparison to Rust there is no argument, Rust is strictly better. However, C++ is still doing quite well for itself in industry.

Also, Rust's type system is not linear, it's affine, you can declare a variable and not use it.

>Just consider C++ and boost::spirit. Overloading operators can lead to ridiculous syntax in pretty much any language that supports it.

you can't define new operators in c++; you can only overload existing ones, and not even all of them https://stackoverflow.com/a/8425207/9045206

>Tooling is exactly the same point. Tooling in C++ land is completely ridiculous.

Microsoft Visual C++ is arguably the best ide, period. make/cmake are rough, sure, but i'll take that in exchange for being able to set visual breakpoints.

>Also, Rust's type system is not linear, it's affine, you can declare a variable and not use it.

<rolls eyes> does that make it more or less foreign to users?

>$<><>$%%^^^ a b (@@@<> c d) $$$ e f

Have you ever heard of the C language? It is much worse:


Good rant and thanks for the example. Seriously, is this really Haskell:

$<><>$%%^^^ a b (@@@<> c d) $$$ e f

What does it do?

i made that up, but you can absolutely make this kind of thing in haskell because you can define arbitrary new operators

You can make completely unreadable code in most languages- in none of them, except esolangs like BF, would something like that be best practice... That seems a bit unfair...

You can't actually quite make that a thing, because you started with an operator.

Haskell is not a toy, but it's not a general purpose workhorse either. Part of the appeal is that it is independent of the practical side of things. It allows you to write theoretically interesting programs.

Just a piece of advice: read the article before writing a comment.

I also agree with you. The author’s assertion that Haskell has failed in the real world due to the Illuminati is clearly wrong.


The reason why Haskell didn't succeed is because programming languages don't actually matter.

The best programming language is the one that is used by the most developers in any given field. All other criteria don't really matter.

You can add as many constraints as you want to the language but it's not going to stop developers from writing bad code.

The bottleneck with programming is human incompetence not programming languages. The language changes nothing.

Haskell is a young language, and the vast majority (99%+) of programmers have not learned functional programming.

Another observation is that the programming industry does not care about correctness or writing stable software, since they have no liability for broken shit. There isn’t a strong push for writing consumer software that isn’t riddled with bugs.

Haskell is 28 years old.

And in a recent poll of programming popularity, Haskell came just behind PL/1 :-) I bet there are not many hn regulars who have used PL/1 in production.

It's also funny that Paul seems to think there were no high-level languages before C.

30-ish years is young for a programming language now?

PL/1 is approximately 54, and it was put together to cover the application domains of two previous language successes, Fortran and COBOL, which still live. Fortran is more than twice 30 years old, BASIC approximately 53, RPG 59.

Haskell is the youngest language that isn't derivative. Monads for IO weren't being seriously used until the late 90s. All the imperative C-derivative languages are much older.

A very young language that is (to my knowledge) not derivative of older ones unless you count Excel: https://www.luna-lang.org

It comes with C, JS, and Haskell interoperability, and it's implemented in Haskell, but the heart of the language is its visual representation. I have personally never seen anything like it before.

Some devs prefer being woken up during their on-call duty to fix a Javascript bug that could've been caught by GHC. It only makes sense when you put on the business hat; e.g., the big pro is that they are first to market, etc.
