
If Haskell is so great, why hasn't it taken over the world? (2017) - tosh
https://pchiusano.github.io/2017-01-20/why-not-haskell.html
======
Animats
The author notes the success of Go. Go's developers knew when to stop. Go is a
mediocre language, but it has all the parts needed for server-side software.
That's a big market.

I suspect that part of Go's success is simply that its libraries are
heavily used. You know that the library is serving a zillion requests a second
in Google data centers. Major bugs have probably been found by now. Go has one
solidly debugged library for each major function. Some other languages have a
dozen half-debugged partly finished libraries. I once discovered that the all-
Python connector for MySQL failed when you loaded more than a megabyte or so
with LOAD DATA LOCAL. It passed the unit test, but clearly nobody was using it
in production.

There's an intermediate stop between imperative programming and functional
programming - single assignment. Variables are initialized once and are then
immutable. This is becoming the default in newer languages such as Rust and Go
- if you want mutability, you have to ask for it. This gives you the
immutability advantages of functional programming without the run-on sentence
syntax. Also, results have names, which improves readability and gives you
something to display when debugging.

~~~
tanilama
In other words, Go is designed for practicality. It shows design consideration
to favor engineering while Haskell, as far as I can tell, barely cares.

~~~
dirkt
Haskell actually doesn't care by conscious choice: It's meant as a research
language, and has "avoided success at all costs" (to use the tongue-in-cheek
quote) to retain the freedom to play around with things. Which in turn led to
quite a few inventions that are now being copied by other modern languages
(like Rust), though they rename them to make them less scary (apparently
programmers are easily scared by names ... like the one that starts with M.).

That's a bit sad. On the other hand: Imitation is the sincerest form of
flattery.

~~~
AnimalMuppet
I don't know that it's sad at all. If you want your language to be a
research language, there's nothing wrong with doing so. And if that's what
you want, then being influential, having other languages steal your ideas,
is exactly what success looks like.

If you want your language to be used by working software engineers, that's a
different metric of success. But then, you'd do things differently if that was
your goal.

Haskell, in achieving _some_ real-world use, has achieved beyond the wildest
dreams of a research language.

------
robotmay
From my own experience, Haskell is hard to learn. I have no grounding in
maths, but that hasn't stopped me learning most languages. It's a pretty
massive barrier to Haskell though, and I find that a lot of the community
really struggle to explain things in terms that non-haskellers will
understand.

The biggest problem I've found is a real inability to explain _why_ the things
it does are cool, or what real-world applications they have.

I must admit that I also don't think it's a very pretty language. I like my
code to be easy to read, and I just don't feel like Haskell fills that
requirement. But that's obviously very subjective.

At the end of the day, I've found Rust more fun to learn and use, so I've been
putting my time into that lately. Haskell is cool, but the time investment is
high and the community is small, and I like to build things, rather than spend
hours trying to figure out what a string of symbols means.

~~~
buvanshak
> I just don't feel like Haskell fills that requirement.

About readability: Haskell is unreadable until you understand some basic
precedence rules _really well_ and until you _internalize_ a handful of
things like the function composition operator (.) or the $ operator.

Then it becomes extremely readable.
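
For instance, the two operators mentioned above, in a tiny made-up example. Once (.) and ($) are internalized, the pipeline reads naturally right to left:

```haskell
-- Tiny made-up example of the two operators mentioned above.
-- (f . g) x == f (g x), and ($) is just low-precedence application,
-- so `f $ g x` saves the parentheses in `f (g x)`.
import Data.Char (toUpper)

shout :: String -> String
shout = map toUpper . filter (/= ' ')  -- right to left: drop spaces, then upcase

main :: IO ()
main = putStrLn $ shout "so it goes"   -- prints SOITGOES
```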

Think of writing in shorthand [1]. For someone who does not know it, it looks
quite cryptic. But once you internalize it, it becomes extremely concise and
information dense.

So it is a case of a big initial investment and low long term/usage cost vs
low initial investment and much higher usage cost.

So that, in a way, answers the question raised by the post: why hasn't
Haskell taken over? It is because human beings are risk averse, i.e. we
prefer a sure outcome over a gamble with higher or equal expected value.

[1]
[https://en.wikipedia.org/wiki/Shorthand](https://en.wikipedia.org/wiki/Shorthand)

~~~
spronkey
Shorthand isn't necessarily a good act to follow, as most shorthand requires a
solid understanding of context, and it's not uncommon that the person who
wrote the shorthand is the only one who can _completely_ understand it in its
original form.

Personally, I'm familiar enough with Haskell to be able to slowly work my way
through most code, but I'm not at the level where I would be confident in
writing anything but simpler programs.

I find idiomatic Haskell suffers from what I call "have to limit my line
length-itis" - i.e. "Ord" instead of "Ordered", using "x" and "xs" as
identifiers instead of e.g. "first" and "rest". This makes for compact code,
but not necessarily very readable code. And for anyone who wants to tell me
that it's closer to mathematical notation, well, frankly, maths could take a
few hints from modern software engineering about readability.

Then there's the dissonance between declaring an algorithm a la Haskell/FP
and describing how the algorithm actually works.

It is sometimes nice to declare algorithms compositionally. This thing I want
_is_ the max of this joined on to five of these filtered by this criterion.
Lovely. But opaque as to the implementation.
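
In Haskell that compositional style looks something like the following, with all names invented for illustration. The result _is_ the maximum of one list joined onto five elements of another, filtered by a predicate; how the traversal actually proceeds is left entirely to the runtime:

```haskell
-- Hypothetical compositional declaration: the answer *is* the max of
-- `xs` joined onto (at most) five of the `ys` that pass the predicate.
-- Nothing here says how or in what order the work happens.
best :: [Int] -> [Int] -> Int
best xs ys = maximum (xs ++ take 5 (filter even ys))

main :: IO ()
main = print (best [1, 3] [2, 4, 6, 8, 10, 12, 99])  -- prints 10
```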

I've found that in my experience with most software I write, the parts where
the exact implementation isn't that important don't take a long time to
implement. Immutability, idempotency, and removing side effects can work just
the same in non-FP languages here. The other parts, which almost always
involve some sort of IO, require very precise control over implementation, or
state, or timing, and are often very difficult to declare with "is" -
sometimes the only sensible way I can seem to think of them is a series of
processes.

When I try to implement this sort of thing functionally it feels like a
retrofit, and never as elegant as the simple imperative "do this, then this,
then that".

I'm not trying to be anti-Haskell or anti-FP - I love and use FP principles
every day. But I'm definitely in the camp that thinks "pure FP" is the best
solution to only a small set of problems.

To me, where Haskell fails is that it has very little to offer for these
imperative problems. Which, for many applications, makes it almost worse than
even a crappy old imperative lang.

~~~
buvanshak
>When I try to implement this sort of thing functionally it feels like a
retrofit, and never as elegant as the simple imperative "do this, then this,
then that".

Not sure what you mean by elegance. I am not interested in elegance. I am
interested in the code being readable many weeks from now. I am interested in
the guarantee that there are no hidden dependencies in the code I am looking
at. I am interested in the guarantee that the code/computation wouldn't end
up being a mud ball comprising a dozen mutable variables and their
transient state, that can go arbitrarily wrong in a million ways involving
half a dozen loops, and that cannot be examined in isolation. Those are the
things I use Haskell for.

Also, I think a reason for the kind of difficulty you describe might be a
lack of fluency in the vocabulary of FP, which is things like maps, folds,
zips, filters, etc. Knowing these functions is one thing. Being fluent in
their use, by combining them, is another.

The frequently encountered "Haskell is not readable" mindset stems from the
fact that so many people still somewhat new to Haskell, who know these
functions but are not fluent in their use and common patterns (a fact they
are oblivious to), try to read code written by people who are fluent in
them...

~~~
spronkey
I agree around the desired outcomes regarding readability and side effects,
but it's not like other languages can't be used in this manner. If you're an
OO-ist and you design good interfaces with good contracts, the 'million ways
involving half a dozen loops' can be very quickly limited in scope to a couple
of methods in a small class.

That's not to say that it always happens, and certainly some less, uh,
experienced developers will write terrible code. But they'll write terrible
code in Haskell as well (or no working code at all, which has been my
experience at least once).

I'd consider myself very comfortable with maps, folds, filters and zips. I
wouldn't consider myself super fluent with their use in Haskell, which
definitely contributes to my own struggles with the language.

But for something like the canonical quicksort example in Haskell, it takes me
quite a while to figure out whether it's actually implementing quicksort by
the book, or whether it's implementing something that sounds like quicksort
but isn't. This is because I have to map the declaration to the underlying
implementation when it matters. Probably more often than not it doesn't matter
at all as long as the code works and isn't causing problems (which is where
the functional primitives are fantastic), but I do find that digging deeper
into a more complex functional algorithm can be a difficult task.
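
For reference, the canonical example in question is usually some variant of the following. It reads as a declaration (sort the smaller elements, then the pivot, then the larger ones), but whether it "is" quicksort is exactly the ambiguity described above, since it copies lists rather than partitioning in place:

```haskell
-- The canonical "quicksort" demo: quicksort's recursive structure,
-- but built from fresh list comprehensions on every call, not the
-- in-place partitioning of the textbook algorithm.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [a | a <- xs, a < x]
            ++ [x]
            ++ qsort [a | a <- xs, a >= x]

main :: IO ()
main = print (qsort [3, 1, 4, 1, 5, 9, 2, 6])  -- prints [1,1,2,3,4,5,6,9]
```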

This is because you have to think about how every part might work behind the
scenes. Am I doing something stupid like mapping my whole data set with a
computationally intensive function? Is one of the innocent-looking predicates
in my list comprehension actually some super intensive function that's had an
operator overload? Am I going to be applying this predicate to the entirety of
a massive list where in an imperative context I would have a really obvious
switch case or set of ifs, a clear non-symbol invocation of
reallyExpensiveFunction, and exited my loop early?

It's a little difficult to describe I guess, but for me reading [what I
believe to be] idiomatic Haskell code at a high level is reasonably
straightforward, if somewhat slow due to the compactness, but actually
_understanding_ what that code is doing can be incredibly difficult.

In some ways it's the same type of issue I have with liberal use of
recursion. It might be reasonably easy to describe a recursive algorithm, but
really getting in and understanding it requires a much deeper understanding,
often considerably more than its imperative cousin demands. There
are real reasons why comp sci students struggle with it.

------
reikonomusha
Because the industry isn’t dominated by well thought out solutions. Most
programmers I talk to love the idea of spending time and using the perfect
tools to design outstanding solutions to problems. They discuss how much they
supposedly like to learn.

But when push comes to shove, your 9–5 career growth is probably going to be
best optimized by cobbling a bunch of Python together and shipping it.
Especially if you wrap around it a bunch of buzzword frameworks and deployment
technologies. Unless your boss or your boss’s boss is technical and
opinionated about good software engineering, nothing is going to be optimized
for such. And a $100k+/year paycheck is a hell of a lot of inertia.

A two month boot camp is sufficient to pump out an individual who can be
opinionated about their technology choice, and even produce an individual who
can duct tape some services together to produce some semblance of value.
Likewise a stock 4-year compsci degree. Be it a boot camp, a university, an
online coding school, or an Ivy League: They’re not going to teach you Haskell
(or Lisp or ...), they’ll teach Python, because it’s easy, even if it rarely
produces simple solutions. And once somebody has put in the energy to learn
one thing that has become profitable for them, they’ll need an enormous amount
of rationalization to learn and invest in something else.

~~~
opnitro
Interestingly, the comp-sci program at the University of Maryland does start
with a Lisp-esque system. They start with a very basic language and slowly add
new features throughout the first course.

~~~
kabdib
My experience at the U of MD (probably dated, but _man_ your summary sounds
familiar...):
[http://www.dadhacker.com/blog/?p=755](http://www.dadhacker.com/blog/?p=755)

~~~
opnitro
Thanks for this!

------
Veedrac
> I would not be surprised if Haskell were 100x better than Java for writing
> compilers.

The author sounds like someone who has glimpsed the truth, but is only
willing to take minuscule baby-steps away from their mistaken position.

Think about what it _means_ to say Haskell is 100x better than Java for
writing compilers. If you really believe this, quit your job and spend a year
writing a Graal competitor in Haskell. You'd own the market for Java, Python,
Ruby, Perl. You'd have built your own LLVM JIT. Literally every week you spend
in Haskell is two man years of Java programming!

Of course this is nonsense. There is _perhaps_ a 10x difference between
Haskell and x86 assembly. I'd doubt it, but it's at least plausible. But
compared to C? If Haskell really was some blessing from the gods, you'd
actually know of more than a handful of random compilers written in Haskell.
Like what actually is there, GHC and Elm? How is that a shining array of
successes?

To stick to your guns and say, well, abstract composability is _super
important_ but maybe it breaks down around IO, is just not supported by
anything approaching evidence. Yes, C++ causes LLVM issues, but writing LLVM
isn't hard _because_ of C++! The difficulty of a problem is not
total_difficulty / language_expressiveness. It doesn't matter a damn how
expressive your implementation language is when every step forward is a
research project.

There's a dangerous mythology in programming that we can fix all the
complexities of programming by piling on ever more contrived tooling. All I've
seen this lead to is an inability to remember that you're actually programming
to _solve problems_. Maybe you solve all the really easy problems with a
different choice of language, but the really easy problems don't matter. To
end with a Torvalds quote on Rust that applies just as well here:

> To anyone who wants to build their own kernel from scratch, I can just wish
> them luck. It's a huge project, and I don't think you actually solve any of
> the really hard kernel problems with your choice of programming language.
> The big problems tend to be about hardware support (all those drivers, all
> the odd details about different platforms, all the subtleties in memory
> management and resource accounting), and anybody who thinks that the choice
> of language simplifies those things a lot is likely to be very disappointed.

~~~
mixedCase
>all the subtleties in memory management and resource accounting), and anybody
who thinks that the choice of language simplifies those things a lot is likely
to be very disappointed

Those are exactly two of the things that Rust and many languages with complex
type systems tackle and very successfully help out with:

Instead of having to get it right every time and having to pay special
attention in the edge cases, you define it well and then let the compiler help
you catch misuses.

------
jeffdavis
Let's say you write the best image decoder library for the next great image
format, and everyone wants it on every device that is connected to the
internet.

Writing it in haskell would mean that only haskell users, or people willing to
link in a large runtime, could use it.

So, you write it in C or Rust, so anyone can link it in and not care what it
was written in. Maybe also write a native Java version, because that can
already run on billions of devices.

~~~
vapourismo
GHC can produce native libraries just like C or Rust.

~~~
reikonomusha
But you carry around a runtime with managed objects that can’t be freely
traded across the boundary to C libraries.

~~~
vapourismo
Certainly it isn't the most attractive solution - still it is possible and
works well.

When it comes to moving things across the FFI barrier, that is also very
possible, as there are all kinds of types (including stable pointers) in the
Foreign.* modules that map onto C types.
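
As a small sketch of what those Foreign.* modules look like in use (no actual C code here, just the marshalling): copy a Haskell list into raw C memory and read it back, using the same CInt/Ptr types a real foreign call would take:

```haskell
-- Sketch of FFI-style marshalling with the standard Foreign.* modules:
-- allocate a temporary C array, fill it from a Haskell list, read it
-- back. The Ptr CInt involved is exactly what a `foreign import` takes.
import Foreign.C.Types (CInt)
import Foreign.Marshal.Array (peekArray, withArray)

roundTrip :: [CInt] -> IO [CInt]
roundTrip xs = withArray xs $ \ptr -> peekArray (length xs) ptr

main :: IO ()
main = roundTrip [1, 2, 3] >>= print  -- prints [1,2,3]
```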

------
ultim8k
History has shown that more sophisticated and superior technologies are
almost never the winning ones.

Instead, technologies that prevail are often the ones that are more accessible
to a broader audience of developers and the ones that manage to evolve faster
in order to engage with the latest trends without breaking backwards
compatibility.

The same philosophy applies when trying to find a solution to an engineering
problem.

A good solution is rarely the super-well-optimised and super-efficient
solution that applies all the best patterns written in the sacred books.

A good solution is one that solves the problem quickly and at the minimum
short-term and long-term cost, including development, running, and
maintenance.

~~~
wink
Depends on where you put your grading of superior.

Maybe Haskell is the end-all-be-all for some criteria, but it certainly isn't
for others. So one could say it's not superior for all the use cases where
those criteria don't apply. I'd agree more with your points if I saw a more
uniform picture of people saying "Haskell is the superior programming
language"; of course you see a lot of converts who say this, but they're a
very vocal minority. This ends in a tautology, with the broad masses still
unconvinced.

Also a lot of them concede that it's not practical for everyone for everyday
use.

------
mlthoughts2018
I spent a large amount of personal time becoming an intermediate Haskell
developer from 2009-2015, culminating in landing a job doing exclusively
functional programming for an analytics team in a large company.

My experience made me give up on Haskell & functional programming entirely,
despite my feeling that the principles of functional programming are often
“better” than object orientation and other paradigms.

The only language-specific thing that turned me off of Haskell in a
significant way was that so many important concepts in Haskell are implemented
via pragmas that extend the language and either enforce syntax restrictions,
enable totally new (and often esoteric) syntax, or change the meaning of
existing syntax.

This is really painful and makes you generally avoid great new features and
artificially limit yourself to more basic designs, because of the learning
curve and how committed to one siloed set of patterns you can become.

It reminds me of Dennis’s coffee shop idea in 30 Rock:

Dennis: One word. Coffee. One problem. Where do you get it?

Liz: Anywhere, you get it anywhere.

Dennis: Wrong. You get it at my coffee vending machine in the basement at
K-mart. You just go downstairs. You get the key from David. And boom, you
plug in the machine.

^^ that’s what it feels like reading Haskell tutorials when all you want to do
is multiple dispatch or heterogeneously type some built-in container, simple
things in so many languages.

This alone wasn’t what put me off though. The real problem is sociological.

Most companies hardwire a feedback loop between development teams and product
or business managers that teaches developers they will be rewarded for their
ability to unsafely hack things into a Jenga tower of system components for
the sake of rapidly addressing ad hoc business questions even when, perhaps
_especially_ when, nobody has the slightest idea if answering the ad hoc
business request is likely to be worth the additional instability the hacks
will put into the Jenga tower.

This is very nearly philosophically at odds with the spirit of functional
programming, from a first-principles level, which means even if you eke out a
platform capable of dealing with this in a functional paradigm, you know for
sure that the business will not see your work as valuable, and safety
guarantees, correctness proofs, automatic parallelization, etc., will often
not be rewarded, meanwhile just hacking stuff into some C++ or Python codebase
that “just works” will be rewarded.

It’s not a satisfying phenomenon, but it convinced me that it’s not worth
investing any more of my time into functional programming.

~~~
galaxyLogic
Could I summarize your objection to Functional Programming in a business
setting as the fact that Object Orientation makes it simpler to model
businesses and thus easier to solve their problems?

If you think about a Function it does just one thing. But real world "agents"
that run a business do many things. An OO-class has many methods, not just a
single input and a single output.

And inheritance: Businesses differ from others incrementally. Inheritance is a
way of modeling such incremental differences to a degree, explicitly. Even if
inheritance is not perfect and can cause problems, it helps in many cases.

~~~
mlthoughts2018
I personally believe almost the opposite. Even when I write Python, I use
classes sparingly and if I ever choose to get a lower back tattoo it would
probably say “Inheritance Sucks” in Chinese or something.

Think of a constructor function for a class. It is so easy to shove new
complexity in there. Just add new optional arguments, assign them as instance
attributes, and off you go.

If you have a class like MonthlyReport, and all of a sudden your boss wants
to know how many widgets per month you sell specifically in Narnia, well you
can probably hack this into the existing methods of MonthlyReport very
quickly.

Doing it once maybe isn’t so bad. But doing it a dozen times quickly leads to
a bunch of functionality shoved into MonthlyReport that probably should be
refactored out, and lord help you if your boss all of a sudden says you need
all of it to go into DailyReport, and you need a new YearlyReport class with
overloaded behavior for FiscalYear, CalendarYear, and NarnianYear.

If I heard about a system like this, my guess would be that it’s got a
crapload of copy/pasted code, constructors and general class methods with huge
lists of parameters and switches, maybe a few subsystems where someone tried
to rewrite this with misguided MixIn patterns and abstract base classes, and
everyone is terrified to change anything because no one knows how it actually
works.

Even an imperative design that just used boring modules of functions, maybe
with a tiny amount of metaprogramming like decorators for repeated logic,
would be waaay better, no functional programming needed. But well-crafted
functional programming would be better still.

The problem is sociological. Business managers won’t tolerate being told, “No.
I cannot get you this result for a daily Narnian calendar by tomorrow— for
that, we’ll need to draw up a quick design plan to make sure we add it in a
maintainable way.”

Functional programming mostly _requires_ saying no, being careful, measuring
twice and cutting once.

With object orientation, you _optionally can_ push back, say no, and try to do
it carefully, but _you don’t have to_: because of all the different buckets of
mutable state, you can often find a place to shoe-horn unplanned complexity in
somewhere, and leave it for future people to worry about when the shoe-horned
complexity causes a big problem.

~~~
bigger_cheese
> "and lord help you if your boss all of a sudden says you need all of it to
> go into DailyReport, and you need a new YearlyReport class with overloaded
> behavior for FiscalYear, CalendarYear, and NarnianYear."

Oh man, I've had to do this once or twice. The best approach I've found is to
use macros (this was in the SAS language; they are kind of like C++ templates,
if you are familiar with those). I ended up writing a generic "aggregation
macro" that took two datetimes and an 'interval' argument and rolled up each
variable between the supplied time range at 'interval' frequency.

Of course it doesn't help when you inherit someone else's code. In that case
you are stuck with the possibly grueling job of refactoring, or you have to
hack in a bodge, i.e. a hardcoded exception, and like you said, most managers
just want it "done", so you end up with duct-taped-together crap that is a
pain to maintain.

------
danharaj
I've done a 9-5 in mostly Haskell since 2013. If you have specific questions
about using Haskell in production, I can share my experience.

If you've had a bad experience with a piece of tech and draw a broad
conclusion, I think it's reasonable if unfortunate that it left a bad taste in
your mouth. But I also think there's a brighter picture to paint :)

~~~
wwwater
I have a specific question: do you hire remotely? :)

~~~
danharaj
Sometimes! We're not a remote-first company but we consider experienced
candidates who live in a not too dissimilar time zone.

~~~
wwwater
what's your timezone?

~~~
danharaj
New York, currently in EDT. We're not hiring at the moment, but it's
possible we'll post in the monthly who-is-hiring thread at some point in the
future.

------
wirrbel
If you look at software at various scales, Haskell does a lot to get the
fine-grained and maybe even medium-grained parts right and correct. Yet I
don't see a convincing value proposition for Haskell at the coarser-grained
scales. On that level, the benefits of Haskell as a language don't shine as
brightly anymore, and this is an area that many teams struggle with.

And oh: Haskell's lazy evaluation is the reason why I wouldn't like to use it
in a business context; it introduces all kinds of weirdness I don't want to
have there.

~~~
vapourismo
You'd be surprised how much weirder it gets when everything is evaluated
eagerly.

~~~
Silhouette
I disagree. If everything is evaluated eagerly, intuition usually works.
Failing that, tools like logging and debugging and profiling will usually
bridge the gap. A non-strict language like Haskell makes it much more
difficult to predict run-time performance characteristics, unless you
introduce overrides all over the place to force everything to be strict
anyway.
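
The classic example of those strictness overrides is choosing the strict fold over the lazy one (a textbook illustration, not a claim about any particular codebase):

```haskell
-- The classic strictness override: foldl builds a huge thunk chain
-- ((((0+1)+2)+3)+...) before forcing anything, while foldl' forces the
-- accumulator at each step. Same answer, very different space behavior.
import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- thunks pile up; can exhaust memory on big inputs
strictSum = foldl' (+) 0   -- constant-space accumulation

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- prints 500000500000
```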

~~~
wirrbel
Exactly my experience. I personally find lazy evaluation a neat idea in
principle. In practice, it introduces problems I never had in, let's say,
Java. One primary problem is that errors become delocalized in the code base.
In a strictly evaluating language, the stack trace gives me a pretty accurate
idea of when and where an error is occurring. In lazy languages, it can
happen when you don't expect it.
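
A minimal made-up sketch of that delocalization: the failing call happens on one line, but the exception fires only where the thunk is later forced:

```haskell
-- Made-up illustration of delocalized errors: `head []` is *called*
-- here, but under lazy evaluation nothing blows up until the thunk is
-- forced, possibly far away in the code base.
import Control.Exception (ErrorCall, evaluate, try)

main :: IO ()
main = do
  let x = head ([] :: [Int])  -- the bug is on this line...
  putStrLn "still running fine..."
  r <- try (evaluate x) :: IO (Either ErrorCall Int)
  case r of                   -- ...but only surfaces here
    Left e  -> putStrLn ("exception surfaced late: " ++ show e)
    Right v -> print v
```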

There is some irony in the fact that a well-received tool for working with
Haskell code bases (intero) has bugs (space leaks, IIRC) that just cannot be
tracked down properly.

------
mmt
> “It’s too hard to learn” (If your pet technology were 1000x more productive
> than, say, Java, would this learning curve really be a substantive barrier?

I think the simple answer is "yes". It doesn't matter how much more
"productive" something is if the vast majority of people simply aren't capable
of learning it (or learning to use it well enough or doing it in a reasonable
amount of time or whatever).

He does go on to point out:

> Though this question is more complicated than you think

And suggests one read his more detailed treatment of the question, including
what I found most relevant, which is a more nuanced treatment of the notion of
productivity, having to do with labor costs, fungibility of that labor, market
competition, and overall limited resources.

[https://pchiusano.github.io/2016-02-25/tech-adoption.html](https://pchiusano.github.io/2016-02-25/tech-adoption.html)

[https://news.ycombinator.com/item?id=17114935](https://news.ycombinator.com/item?id=17114935)

I would expect that any technology that can be classified as sufficiently
exotic is viewed as a huge risk, one not worth taking, by larger companies.
That has traditionally included obsolescent tech and bleeding-edge/unproven
tech, but it easily includes a language that's too hard for the average
programmer to learn.

------
sras-me
>>Writing CRUD apps? Haskell isn’t as much of a win.

If you are dealing with databases, having typed results from database
queries and the ability to define tables with custom typed columns (via the
Opaleye library) can provide quite a lot of type safety....

That is just one example. If you are good with type-level programming, it is
possible that you will be able to find a way to encode the invariants of your
business domain into the types and have the compiler assist you in building
correct programs...
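
Not Opaleye itself, but the basic idea in miniature, with invented names: newtypes keep different column values from being mixed up at compile time, and a smart constructor makes invalid values unrepresentable:

```haskell
-- Invented illustration of encoding domain invariants in types:
-- UserId and OrderId are both Ints underneath, but the compiler
-- rejects passing one where the other is expected, and Email can
-- only be built through a validating smart constructor.
newtype UserId  = UserId Int   deriving (Eq, Show)
newtype OrderId = OrderId Int  deriving (Eq, Show)
newtype Email   = Email String deriving (Eq, Show)

mkEmail :: String -> Maybe Email
mkEmail s
  | '@' `elem` s = Just (Email s)
  | otherwise    = Nothing

describeOrder :: OrderId -> String
describeOrder (OrderId n) = "order #" ++ show n

main :: IO ()
main = do
  print (mkEmail "not-an-address")      -- Nothing: invariant enforced
  putStrLn (describeOrder (OrderId 7))  -- fine
  -- describeOrder (UserId 7)           -- would be a compile-time error
```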

------
Y_Y
> "Avoid success at all costs."

> I mentioned this at a talk I gave about Haskell a few years back and it’s
> become quite widely quoted. When a language becomes too well known, or too
> widely used and too successful suddenly you can’t change anything anymore.
> You get caught and spend ages talking about things that have nothing to do
> with the research side of things. Success is great, but it comes at a price.
> -- Simon Peyton Jones

There's not much in the way of marketing for Haskell. There isn't any drive to
make it super-simple like Python, or super businessy like C#. It's fast if you
make it so, but it's never going to be as fast as C. The cognitive defects
required to like it are far less prevalent than those of JS-lovers. It's not
built for the mainstream, even though it'll happily run GUIs and games and
webpages and chat servers and whatever else is cool.

Even if it wanted to, taking over the world takes a lot of effort and luck.
And it doesn't want to, and spends its effort and luck elsewhere.

~~~
pitaj
Did you just call JS-lovers and Haskell-lovers cognitively defective?

~~~
Smaug123
It's not an uncommon meme that programmers and mathematicians possess a weird
twist in the mind that allows them [in the strong form of the meme] to do, and
[in the weak form of the meme] to enjoy doing, what they do. If you grant
that, then it's a small step to admitting the existence of different sub-
twists that incline you towards different parts of the abstract space. If you
don't grant that, then consider it a rhetorical device for pointing out that
comparatively few people like Haskell versus JS.

~~~
pitaj
Why the word "defect" though? There must be other words that are less
controversial. Maybe "abnormalities" or "specialities"?

~~~
tabtab
How about "possess an outlier brain design" among the human or even general IT
population.

------
mlevental
i have railed/ranted (occasionally belligerently) about haskell after having a
professional brush with it two years ago. haskell hasn't taken over the world
because the community is haughty about haskell's giant warts. yes it's very
welcoming and open to indoctrinating you on "best practices" but as soon as
you diverge in the slightest (because someone before you diverged or because
you need to do something unconventional) they're completely obstinate about
how ugly the language/experience can get.

the most obvious instance of this is point-free form combined with
overloaded operators combined with 8? arities. if you bring this up on
r/haskell or #haskell you will get shouted down about how it's not best
practice to abuse that, so why even bother discussing, because you can write
unreadable code in e.g. python. this is absolutely true but the ugliest python
is still clearer than the brainfuck homage that haskell turns into with stuff
like

$<><>$%%^^^ a b (@@@<> c d) $ _$_ $ e f

the point here is not to rehash the debate but that no one takes this
complaint seriously because elegant haskell is the one true scotsman and
everyone else's haskell is irrelevant.

another obvious thing is the complete lack of good tooling. stack/cabal/cabal-
sandboxes/another cabal thing that's been created since i stepped away last
year. there's no ide because real programmers write code using butterflies
flapping their wings so why would you need anything other than cobbled
together vim/emacs plugins. wanna debug? set a breakpoint in ghci, but you can
set breakpoints in ghci only 20 calls deep! don't even get me started on how
slow the compiler is. does anyone listen to these complaints? no. why? my
hypothesis is that it's like a fraternity with a hazing ritual (combined with
many serious cases of stockholm syndrome).

i think a fair comparison is rust (despite being imperative). linear types are
probably as foreign as purity/immutability to people and yet rust has 10x more
mindshare than haskell. why? because rust maintainers care about ergonomics!

i could go on but i'm sure i'll get responses that chastise me for not
sticking with it long enough or not putting in the work or something else when the
reality is that it wasn't worth it - i wanted a strong type system and the
price i had to pay for it was too high. plenty of other strongly typed
languages (ocaml, rust, f#) without all of the pain of haskell and the
intransigence of the community.

~~~
voxl
This is not a valid argument. I don't know Haskell beyond basic fundamentals
and I know that your argument is ridiculous.

Just consider C++ and boost::spirit. Overloading operators can lead to
ridiculous syntax in pretty much any language that supports it.

C++ doesn't have any _real_ problem with overloading in _real_ code bases. I
wouldn't expect Haskell to have this problem either. Being able to construct
ridiculous code is not an excuse for a language not catching on.

Tooling is exactly the same point. Tooling in C++ land is completely
ridiculous. In comparison to Rust there is no argument, Rust is strictly
better. However, C++ is still doing quite well for itself in industry.

Also, Rust's type system is not linear, it's affine, you can declare a
variable and not use it.

~~~
mlevental
>Just consider C++ and boost::spirit. Overloading operators can lead to
ridiculous syntax in pretty much any language that supports it.

you can't define new operators in c++. you can only overload existing ones,
and not even all of them
[https://stackoverflow.com/a/8425207/9045206](https://stackoverflow.com/a/8425207/9045206)

>Tooling is exactly the same point. Tooling in C++ land is completely
ridiculous.

Microsoft Visual C++ is arguably the best IDE, period. make/cmake are rough,
sure, but i'll take that in exchange for being able to set visual breakpoints.

>Also, Rust's type system is not linear, it's affine, you can declare a
variable and not use it.

<rolls eyes> does that make it more or less foreign to users?

------
elihu
Besides the reason given in the article (most everyday applications have a
large surface area where they interact with a larger system, and that system
is largely imperative and untyped), I think that network effects are a big
problem.

If you want to create some new application in Haskell, you'll probably need
some collaborators. If all your potential collaborators know Haskell that's
great, but the chances are a lot higher that most of them know Javascript or
Python or Java. So, that's often what you end up using if the main goal is
just to solve some immediate problem with the least hassle.

Another factor is that most of the high-profile projects people are familiar
with (linux, gcc, firefox, llvm, inkscape, etc..) are fairly old projects.
Haskell has also been around for quite a while, but it's really only in the
last decade or so that it's become a pleasant language to use. That's both
because of libraries and tooling, and also because it took a long time for the
Haskell community to go from "we don't know how to do IO in this language" to
"we have this IO monad thing, but we don't really know how to use it" to "this
is a language we know how to write applications in and lots of people have
done it."

At the time that many projects got started, Haskell either didn't exist or
hadn't progressed to the point where every little thing wasn't blazing a trail
into the unknown.

------
tuvok
To a mathematician who works with category theory, Haskell is a piece of cake.
Much simpler than Java, Python, Ruby, etc. It just requires a kind of thinking
that is not taught to computer scientists or in most programming courses. I
recommend the book Category Theory (Steve Awodey, 2006).

------
sheepmullet
Simple: The productivity boost is not worth the time investment.

Which will give developers a better return on productivity?

200 hours spent learning Haskell or 200 hours spent learning more about the
business domain they work in?

200 hours spent learning Haskell or 200 hours spent improving their soft
skills?

200 hours spent learning Haskell or 200 hours spent networking?

Etc.

~~~
spronkey
Indeed.

Though, will 200 hours spent learning Haskell gain you, well, anything
productive?

Surely a non-genius would require at least 1000.

------
jacquesm
Part of this is that Haskell is seen as a language by and for academics. You
can't even read about the history of Haskell because it is only published
through the ACM for a steep fee. So unless you are part of the academic circle
you won't be exposed to it unless you are ready to invest.

The biggest growth driver for Haskell is the financial world where it has seen
quite a bit of adoption.

~~~
mlevental
Which finance companies have even >10k-line Haskell codebases?

~~~
tome
Barclays, Standard Chartered, Tsuru Capital, Alpha Heavy Industries (not sure
if they're still going), Karamaan Group

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=13450828](https://news.ycombinator.com/item?id=13450828).

------
danidiaz
In his talk about "milestones in the development of lazy, strongly typed,
polymorphic, higher order, purely functional languages" David Turner mentions
that he wasn't averse to SASL
([https://youtu.be/QVwm9jlBTik?t=1819](https://youtu.be/QVwm9jlBTik?t=1819))
and Miranda
([https://youtu.be/QVwm9jlBTik?t=2330](https://youtu.be/QVwm9jlBTik?t=2330)),
both predecessors of Haskell, being used in industry. Getting out of the ivory
tower was not a last-minute idea, it seems.

~~~
theoh
The Peter Landin paper "On the Mechanical Evaluation of Expressions"
([https://www.cs.cmu.edu/~crary/819-f09/Landin64.pdf](https://www.cs.cmu.edu/~crary/819-f09/Landin64.pdf))
makes it totally clear that, for Landin, FP was an alternative to the
bookkeeping required in systems programming.

Since he stands at the beginning of the tradition that led, via Turner's work,
to Haskell, I think it's not a huge leap of the imagination to attribute
Haskell's lack of success, as a language for building systems, to this
inherited attitude that the programming system should be elegant, principled
and mathematically structured. No concessions are made to the practical needs
of someone writing, for example, an operating system, except to the extent
that they provide an occasion for a new theoretical construct. (Lenses are a
recent example of this.)

And take implicit data structures, for example. I know Edward Kmett has
explored this idea in Haskell, but really, it's a totally alien concept for
the FP philosophy. Just as traditional operating systems tend to require a
little bit of assembly code in addition to C, the purist FP system that wants
to manipulate genuine implicit data structures will need to call on some
outside language with the power to manipulate them... That seems like an
intentional state of affairs.

------
discreteevent
The article makes a good point. When you get to the edge of any system it
behaves like a stateful object. So the only way to get rid of 'objects'
completely is to extend your system to cover the whole world. I doubt the
author thinks he can do this, but even what he describes sounds ambitious.

Another way to look at it is that it's good to learn FP and apply it where it
makes sense but don't neglect your object modeling skills (where object =
something that exhibits behaviour in response to messages)

------
galaxyLogic
I think it's a question of inertia: people and organizations do things the way
they have been doing them for some time. It has worked so far, since the
business is still up and running, so why take a risk and change?

Choosing a not widely adopted solution even if it would be somewhat better
than others is always a risk.

Also it's not a question of whether to Haskell or not. There are many more
choices, each with their pros and cons.

It's a bit like with religions, why should I pick just one :-)

------
badrabbit
It takes an insane amount of time to compile GHC. I can't stand apps that use
it because of that. It can take hours on some machines; I would have considered
riding that bandwagon if it weren't for that limitation.

~~~
josteink
I think it’s pretty safe to assume this limitation/problem affects almost
nobody else, so not really relevant for most people.

Also: how do you run Firefox and Chrome? Do you download and build the
whole chain of dependencies and build tools needed to build and run those? And
if so, why?

~~~
badrabbit
I actually use Firefox or Chrome, so that's a non-issue. GHC is a dependency,
not the main app; therein lies my complaint. Depending on the use case I might
compile some applications, and by no means was I implying others should build
from source as well.

It's just that I have run into situations where a Haskell app pulls ghc and
that needs a source build. If I ever build something in Haskell then there is
a chance others might go through that as well.

I never had that problem with any other language (even a heavyweight language
like Java).

------
muxator
My two cents: because, more often than not, worse is better.

And, recognizing that worse often wins, maybe we should consider the
possibility that what we were categorizing as worse was actually solving a
different, more concrete problem.

~~~
acdha
Your last point is really key: programmers love to decide that one or two
favorite features are game changers but no real-world decision should be that
simple. I’ve seen people dismiss Python’s readability as a simple feature
which anyone could copy but then decline to do so because they didn’t really
respect that as a goal (“it’s just for newbies”, etc.), despite the mountain
of evidence showing that it deserves more serious consideration.

~~~
acdha
(Oh, and since I left it out earlier: I’ve loved seeing the Rust team take
usability issues so seriously. That’s a great model for the field.)

------
agocorona
There is already a kind of unison for Haskell:

[https://github.com/transient-haskell/transient](https://github.com/transient-haskell/transient)

------
tabtab
I have to say that I have yet to see significant _practical_ problems that
functional programming solves better than procedural or OOP. I've had debates
over this many times, and almost every example is a case of a bad framework
and/or a bad language (such as Java's "stiff" OOP model). FP is a language
band-aid, not a solution. There may be a few edge cases, but why complicate
the 99% for the 1%? I'd be happy to debate more scenarios. Bring it on!

~~~
erpellan
FP is not new. OO is not new. They are tools in the toolkit. Take an object
with a bunch of instance methods. They are all equivalent to functions with a
hidden first parameter. If you make 'this' explicit you've got pure functions.
It's not magic.

IntelliJ even has an automated refactoring for it in Kotlin. You can take a
function and convert it to an extension method and back again:

    fun String.foo(a: Int)

vs

    fun foo(self: String, a: Int)

The real fun of FP is building simple functions that operate on a single value
and then stringing them together, generalising to lists of values, filtering
etc.

MapReduce is functional programming. At epic scale. Its name is literally a
portmanteau of two of the most fundamental higher-order functions.
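(The map/reduce shape is easy to sketch in Haskell itself. A toy single-machine word count, not a distributed implementation; all the names here are hypothetical:)

```haskell
import qualified Data.Map.Strict as M

-- Map phase: each document emits (word, 1) pairs.
mapPhase :: String -> [(String, Int)]
mapPhase doc = [ (w, 1) | w <- words doc ]

-- Reduce phase: sum the emitted counts per key.
reducePhase :: [(String, Int)] -> M.Map String Int
reducePhase = M.fromListWith (+)

-- The whole job is just the composition of the two phases,
-- generalised over a list of documents.
wordCount :: [String] -> M.Map String Int
wordCount = reducePhase . concatMap mapPhase
```

A real MapReduce framework adds partitioning, shuffling, and fault tolerance around exactly this skeleton.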

FP rocks.

~~~
dragonwriter
> Take an object with a bunch of instance methods. They are all equivalent to
> functions with a hidden first parameter. If you make 'this' explicit you've
> got pure functions.

Well, you’ve got normal imperative procedures that maybe return a value.

Pure functions mean something else.
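(The distinction can be made concrete in Haskell. A minimal sketch with hypothetical names: making the receiver explicit only moves a parameter, it does not remove the effect.)

```haskell
import Data.IORef

-- Pure: the result depends only on the argument, so calling it
-- twice with the same input always gives the same answer.
double :: Int -> Int
double x = 2 * x

-- An "instance method" with 'self' made explicit. The hidden
-- mutable state is still there (an IORef), so the return type is
-- IO Int, not Int: exposing 'this' did not make it a pure function.
bump :: IORef Int -> Int -> IO Int
bump self x = do
  n <- readIORef self
  writeIORef self (n + 1)  -- mutates state: same input, different results
  pure (x + n)
```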

------
jchw
I agree with this from my limited experience. However it feels like this ends
abruptly. Did I skip over something or did I miss how we could remove
composability boundaries at IO?

------
ken
The first 3/4 of this article reminded me of Zed Shaw's "The Web Will Die When
OOP Dies" (though for some reason I remembered that title as the converse).

It doesn't look like unison.cloud is aiming to replace the web as an end-user
application platform, though. What would a pure functional platform look like?
More like Datomic, I think, and maybe then I wouldn't run into 404s so much.

------
harry8
When the ratio of "introduction to monads" tutorial blogposts to useful
applications gets somewhere below 1,000, perhaps Haskell can be considered
more useful for performing data transforms than for entertaining the
programmer.

Given the number of people who have learned the language, there is really not
much written in Haskell that is used for something other than programming a
computer.

------
xamuel
Let's look at an audience where Haskell (or FP in general) ought to be
popular, if any of the hype is true: mathematicians and logicians writing
pseudo-code in research papers.

What we see is that pseudocode is almost always imperative. Sometimes it even
uses GOTO!

------
2sk21
Sounds like another rediscovery of "Worse is better"
[https://en.wikipedia.org/wiki/Worse_is_better](https://en.wikipedia.org/wiki/Worse_is_better)

------
ska
See also, Lisp.

~~~
phoe-krk
Lisp already took over the world. Most of the fancy things that originated in
Lisp were adopted by other languages, and thanks to Lisp, these other
languages are bearable and actually useful now.

Haskell's one of these languages.

------
tanilama
Because it is not great at solving real-world problems.

------
agumonkey
because the world is too short

------
leowoo91
Because imperative languages require less optimization (hardware-wise) at
first sight.

------
anonlastname
Haskell is not a toy, but it's not a general purpose workhorse either. Part of
the appeal is that it is independent of the practical side of things. It
allows you to write theoretically interesting programs.

------
shp0ngle
Just a piece of advice: read the article before writing a comment.

~~~
olliej
I also agree with you. The author’s assertion that Haskell has failed in the
real world due to the Illuminati is clearly wrong.

;D

------
grosjona
The reason why Haskell didn't succeed is because programming languages don't
actually matter.

The best programming language is the one that is used by the most developers
in any given field. All other criteria don't really matter.

You can add as many constraints as you want to the language but it's not going
to stop developers from writing bad code.

The bottleneck with programming is human incompetence not programming
languages. The language changes nothing.

------
paulgramcracker
Haskell is a young language, and the vast majority (99%+) of programmers have
not learned functional programming.

Another observation is that the programming industry does not care about
correctness or writing stable software, since they have no liability for
broken shit. There isn’t a strong push for writing consumer software that
isn’t riddled with bugs.

~~~
boothead
30 ish years is young for a programming language now?

~~~
paulgramcracker
Haskell is the youngest language that isn't derivative. Monads for IO weren't
being seriously used until the late 90s. All the imperative C-derivative
languages are much older.

~~~
Smaug123
A very young language that is (to my knowledge) not derivative of older ones
unless you count Excel: [https://www.luna-lang.org](https://www.luna-lang.org)

It comes with C, JS, and Haskell interoperability, and it's implemented in
Haskell, but the heart of the language is its visual representation. I have
personally never seen anything like it before.

