
Why hasn't Haskell taken over the world? And the curious case of Go - atombender
https://pchiusano.github.io/2017-01-20/why-not-haskell.html
======
sandhundred
I struggle with this. After years of studying OO and design patterns in Ruby
and JavaScript I was having trouble building complex asynchronous systems
and stumbled into Go. I saw the value of types for managing that complexity
and of the runtime for supporting asynchronous primitives, but I felt very
limited by the lack of generics for things like collections and of
higher-order control flow for things like error handling.

Eventually I arrived at Haskell after strongly considering Clojure, and I'm
very happy about what I've learned and my new way of approaching programming
complexity. Unfortunately there is nowhere else to go from here since I can't
find employment as a Haskell programmer. OCaml, F#, Scala, Elixir, and Elm all
feel like a step back. Now I'm a Java programmer and quite miserable. I feel
hampered by the language nearly every day in terms of how easily I can express
my thoughts in code. Haskell isn't perfect but is the best fit for my
mathematical mindset.

I am trying to lead by example. I help host my city's Haskell meetup and
contribute to a Haskell reading group. The path is lonely, my co-workers poke
fun at me, and don't care to put in any effort to understand what I have to
say. I love teaching and explaining things but there is zero interest because
the machine keeps moving. Not many at the office even enjoy Java but are
resigned to do it for our large pool of enterprise clients. All in all, each
work day is a void I put 8 hours into, which is fine compared to most working
conditions; it causes no suffering beyond the lacuna.

I'm past the stage of trying to convince other programmers of anything. I
recognize many are happy with their tools. I yearn for that happiness and
don't seek to spread my misery. I offer my time to those who are interested
and want to learn more.

Anyway, this article nails it: Haskell has advantages, but they aren't enough
to change things without the infrastructure the author is building. I look
forward to being an early adopter of Unison and continue to remain hopeful
for the future despite the long odds.

~~~
Kenji
I have two questions for you:

- How does Haskell fare with large projects that use APIs that are inherently
stateful, like OpenGL? Don't things get messy and ugly as the pure world of
Haskell is being tainted?

- How do I optimise Haskell code without having studied the language for
decades? I had a lecture where we were taught Haskell, so I know the language,
but darn, even a simple hash map seems to be so very complex in the language.
The fact that everything is a linked list (bad for caching & performance) and
everything gets copied around really turns me off.

~~~
nbouscal
> Don't things get messy and ugly as the pure world of Haskell is being
> tainted?

Haskell is excellent at handling state. "Pure" doesn't mean no state, it means
all state is handled explicitly in a type-safe way.
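For instance (a minimal sketch using `Control.Monad.State`, which ships with GHC as part of the mtl boot package; the `tick` name is just for illustration), stateful code is ordinary code whose type says exactly which state it touches:

```haskell
import Control.Monad.State

-- A counter action: the type says this code reads/writes one Int of state.
tick :: State Int Int
tick = do
  n <- get      -- read the current counter
  put (n + 1)   -- write the incremented counter
  pure n        -- return the value we saw

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- (2,3): last result, final state
```

Nothing outside `runState` can see or mutate the counter, which is the sense in which state is explicit and type-safe.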

> How do I optimise Haskell code without having studied the language for
> decades?

It requires learning some new things, but honestly when I worked at a company
that used Haskell I didn't really have to worry about this much. You
occasionally make some things strict and there are some good heuristics about
when this is needed, but overall it just didn't come up often.
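For what it's worth, the canonical instance of "making some things strict" is swapping the lazy left fold for `Data.List.foldl'` (a hedged sketch; the `sumLazy`/`sumStrict` names are mine):

```haskell
import Data.List (foldl')

-- The lazy foldl builds a chain of unevaluated thunks (((0+1)+2)+...)
-- and can blow the stack on large inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, running in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```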

> I had a lecture where we were taught Haskell, so I know the language

No you don't. You can learn Python in a lecture if you know Ruby; you can't
learn Haskell in a lecture unless you already knew SML or OCaml, and even then
probably not.

> The fact that everything is a linked list (bad for caching & performance)
> and everything gets copied around really turns me off.

Everything isn't a linked list in production code. It's easy to swap lists out
for vectors wherever it's correct to do so, because you can write almost all
of your code generalized to the necessary type class rather than specific to
one single container type. Real Haskell code performs well, poorly-performing
code is just used for examples when teaching beginners because it's easier and
introduces fewer things at once.
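A sketch of what that generalization looks like in practice: write against the `Foldable` class instead of hard-coding `[]`, and the container becomes a one-line change at the call site (the `total` name here is mine, not from any library):

```haskell
import Data.Foldable (foldl')

-- Generalized to Foldable instead of []: lists, Maybe, Data.Sequence,
-- or (with the vector package) Vector can all be passed in unchanged.
total :: (Foldable t, Num a) => t a -> a
total = foldl' (+) 0

main :: IO ()
main = do
  print (total [1, 2, 3 :: Int])       -- 6
  print (total (Just 5 :: Maybe Int))  -- 5
```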

~~~
axlprose
> _you can't learn Haskell in a lecture unless you already knew SML or OCaml,
> and even then probably not._

I just want to emphasize this part because it's really quite true, as someone
who came to it from F#/OCaml.

The thing with learning "haskell", is that there's _faar_ more to it than
learning the basic language constructs. You can learn haskell _the language_
in about a day, but you can't really learn haskell _the paradigm
/philosophy/mathematical discipline_ in a day, much less actually _program_
haskell that quickly.

That being said, it's not as difficult as it's made out to be, the mountain
you need to climb is much shorter than people realize. The issue is largely
one of jargon, and getting used to using and thinking in all the new terms and
concepts that really have few equivalents in other languages.

~~~
jack9
> The thing with learning "haskell", is that there's faar more to it than
> learning the basic language constructs

That would be a compelling reason as to why it's not popular. This basically
means your programming experience is not transferable, leaving it as a weird
side language that some people stumble into due to uncommon factors.

~~~
axlprose
Oh it almost certainly is the main reason why it isn't popular. Then again,
that is also the same reason why abstract mathematics in general isn't
popular.

Saying the experience is "not transferable" is looking at it the wrong way.
Learning the discipline behind haskell isn't even remotely the same kind of
beast as learning how to use a niche legacy framework in COBOL for example.

It's a lot more akin to learning to read human language for the first time,
because all it's doing is familiarizing you with patterns that exist in code
and computation, _independent of Haskell_. The actual language part of the
haskell equation is literally the least relevant/significant, because what it
teaches you is to recognize mathematical/computational patterns that you can
find in any language, _despite_ the difficulty of expressing some of those
concepts succinctly in some languages (e.g. java).

The biggest thing that's 'not transferable' is the ability to communicate the
high level patterns to others who haven't gained literacy with that particular
branch of mathematics/CS yet. But that's true of all sciences and mathematics.
You can call it a "weird side language", but that's kinda doing it a
disservice as a language that's intended to express complex relationships and
computational patterns in the most general way possible.

------
88e282102ae2e5b
It's like I accidentally stuck my hand in a running blender and someone asks
whether I'm angry because of the noise of the motor or because the buttons
aren't arranged in a reasonable order.

I tried learning about Haskell just so I could understand a flurry of articles
I came across that were trying to explain monads. But mostly I found super
abstract notation, something about bind and unit and >>=, and I came away just
feeling dumb. I gave up pretty quickly, but to be fair I suppose I wasn't
super motivated to work through it.

Later, I learned Rust just for fun. Options? Sounds like a good way to express
uncertainty about a function call. Oh, and you can compose it with other
Options using this function called and_then(), and things will short-circuit
if they fail. Neat!

A year later I find out I had been productively using monads the whole time
without even knowing it. It's so frustrating knowing how easy it could have
been to learn this concept explicitly the first time around.
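For the record, Rust's `and_then` on `Option` is exactly Haskell's `(>>=)` on `Maybe`; a minimal sketch (the `safeDiv`/`pipeline` names are just for illustration):

```haskell
-- Division that fails with Nothing instead of crashing on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- (>>=) plays the role of and_then: each step runs only if the
-- previous one succeeded; any Nothing short-circuits the chain.
pipeline :: Int -> Maybe Int
pipeline n = safeDiv 100 n >>= safeDiv 1000 >>= \m -> Just (m + 1)

main :: IO ()
main = do
  print (pipeline 5)  -- Just 51
  print (pipeline 0)  -- Nothing
```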

------
Animats
Go is successful because it's exactly what Google needs to write back-end
code. It's (almost) memory-safe. It comes with a set of well-debugged
libraries for doing most of the things you need to do on a web server. That's
good enough for most web-related work.

From a language theory perspective, there's a lot to criticize about Go. The
"goroutine" and "channel" thing turned out to be less useful for general
concurrency than expected. But it's good enough to service lots of network
connections. The lack of generics means that functions like "sort" are
painful. But you can still get sorting done; it's just clunky.

Haskell hasn't taken over the world because it's designed by theorists for
theorists. It's too clever, or, "l33t", like LISP. Also, anything interactive
is dominated by I/O, while the functional model is better matched to pure
computation. The IT industry today is mostly I/O dominated interactive
applications.

(It's not clear yet if Rust is "too clever" for widespread use. The jury is
still out.)

~~~
kazinator
Google is the key operative word there.

Without backing/promotion from Google, Go would be completely obscure and
disused.

Google could promote a language that is a complete piece of crap and it would
be instantly more successful than something well-engineered from some unknown
hackers.

~~~
crystalPalace
What about Dart ([https://www.dartlang.org/](https://www.dartlang.org/))? Dart
is developed by Google and has features that are tailored for, or lend
themselves to, specific modern use cases, e.g. IoT, mobile, and web
applications. The language is only two years younger than Go but as far as I
can tell has attracted nowhere near the community or adoption.

~~~
blacksmythe
I can't find the reference, but I recall someone saying that Dart has huge
adoption measured by revenue resulting from its use as an internal Google
sales tool.

[https://news.ycombinator.com/item?id=7955819](https://news.ycombinator.com/item?id=7955819)

------
mark_l_watson
Haskell is one of my favorite languages, and I just wrote a book using Haskell
(you can read it free online:
[https://leanpub.com/haskell-cookbook](https://leanpub.com/haskell-cookbook))

That said, there are a lot of tasks for which I use Ruby (text wrangling),
Python (machine learning), or Java (huge number of useful libraries).

I use Haskell for NLP, some web apps, and coding algorithms that don't depend
on libraries unavailable for Haskell. If Haskell had a huge set of 3rd-party
libraries like Java's, then I would probably use it for most of my
development.

~~~
innocentoldguy
_Haskell Tutorial and Cookbook_ is one of the two books I've been using to
learn Haskell (the other one being _Haskell Programming_, by Christopher
Allen and Julie Moronuki). Thanks for the excellent book!

~~~
bubblesocks
I can vouch for Haskell Programming, but I haven't read the other book. Thanks
for the recommendation.

------
phs2501
I kind of like Haskell, but the learning curve is brutal and the documentation
is often intensely non-helpful if you don't want to learn a whole lot of
mathematical terms.

E.g., when browsing and trying to understand parts of the XMonad code, I came
across the "Endo" type. I Hoogle'd it, and the only documentation for it is
"newtype Endo a: The monoid of endomorphisms under composition." That's... not
helpful for me.

~~~
evincarofautumn
The biggest thing missing from Haskell docs is examples, I find. The
mathematical language is tempting when you’re writing docs, because it’s
succinct and precise, and can say a lot about how something’s _implemented_:

    
    
        -- Endo is…
        newtype Endo a
          = Endo { appEndo :: a -> a }
    
        --                      ^
        --                      |
        -- …the monoid of endomorphisms…
        --         |
        --         V
    
        instance Monoid (Endo a) where
    
          mappend (Endo f) (Endo g)
            = Endo (f . g)
    
          --          ^
          --          |
          -- …under composition.
    
          mempty = Endo id
    

But not so much about how to actually _use_ it:

    
    
        -- If you’ve got a list of functions…
        pipeline :: [Int -> Int]
        pipeline = [(+ 3), (* 2), abs]
    
        -- …you can compose them with the generic mconcat.
        run :: [a -> a] -> a -> a
        run = appEndo . mconcat . map Endo
    
        run pipeline (-1) == 5

~~~
stcredzero
None of that helped me understand anything.

~~~
yawaramin
OK, the basic idea is that you have two functions, `f_1` and `f_2`, of type `a
-> a`, and you want to run them both on an input value `x` to get an output
`y`. You can think of `f_1` and `f_2` as a data pipeline that `x` passes
through and then the output comes out the other end. So,

    
    
        y1 = f_1 x
        y = f_2 y1
    
        -- or,
    
        y = f_2 (f_1 x)
    

But what if you have an _arbitrary list_ of functions as your pipeline?

    
    
        y = f_n (f_{n-1} (... (f_2 (f_1 x)) ...))
    

Well, the solution is that functions of type `a -> a` are _composable_, so
you can 'add them up' as if they were real values (which they are). The
Haskell function composition operator is `.`, so:

    
    
        y = f_2 (f_1 x)
    
        -- is the same as,
    
        y = (f_2 . f_1) x
    

Now, how do you go from 'composing two functions' to 'composing a list of
functions'? Well, monoids are a typeclass (a statically-enforceable design
pattern) that encode the idea of being able to combine two things into one,
and 'for free' give you a function `mconcat` that combines a list of those
things into one. So if you have a monoid instance for functions of type `a ->
a`, you're now able to combine many of them together, get a single composed
function, and apply your input value `x` to that.

`Endo` is just a fancy name for 'function of type `a -> a`'. I agree to a
large extent that all this category theory stuff can get distracting. I
personally think type theory is a much more rewarding field of study. To
'endomorphism', a type theorist would say, 'Oh, you mean `a -> a`?'
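Putting the whole explanation together as runnable code (a tiny sketch; `runAll` is my name for it, not a library function):

```haskell
import Data.Monoid (Endo (..))

-- mconcat folds the list of Endo-wrapped functions into one composed
-- function; appEndo unwraps it back into a plain a -> a.
runAll :: [a -> a] -> a -> a
runAll fs = appEndo (mconcat (map Endo fs))

main :: IO ()
main = print (runAll [(+ 1), (* 2)] 3)  -- 7, i.e. ((+ 1) . (* 2)) 3
```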

~~~
lobster_johnson
This is an extremely lucid answer, thank you. Can you please now rewrite all
the Haskell documentation in the same way?

~~~
majewsky
This. Documentation is _the_ most frustrating part of the Haskell experience
to me. More than once I've come across a module on Haddock and the module
description just says

    
    
      This module is inspired by the following paper: <DOI>

------
montanonic
Honestly, I think for a lot of people who've used both Haskell and Elm, the
answer is pretty clear: lots of Haskell in the wild can be very hard to read
and think about, even with significant time invested in it; and when it is
simple to read, it's likely not using many more features than Elm. Yet, even with
a very strict subset of its features, it is a very expressive language: sum
and product types are an incredibly powerful concept; I'm blown away by how
much more straightforward managing state is with them. Expressive types and
superb type inference generally make refactoring a breeze. There are just many
huge wins overall.

But you can have the bulk of these wins in a simpler language like Elm, or a
language like Rust which tries to only add more type complexity in cases where
it would significantly improve code in practice, rather than in theory.

Haskell is necessary; it pushes the state of the art forward because of how
experimental it is (compiler extensions). But it's pretty clear that the
community is skewed towards research rather than industry, and culture
strongly influences what a language is practically capable of.

This does not at all mean that Haskell isn't fully capable of industrial
applications; we have direct evidence suggesting otherwise. But with a
philosophy to "avoid success at all costs", it should be fairly clear that
moving out of the realm of obscurity isn't exactly a goal of the language.

------
verletx64
In the end, Haskell's ramp-up time (to become productive) will slow its
momentum, and I feel that's true of all languages with a long ramp-up time.
Most popular languages have a pretty short ramp-up time. Part of that is
business concerns (getting a whole team productive in a language with a long
ramp-up time takes... well, time, and it's rarely the right option to have
them make that transition), but there's also the personal, human side of it:
we're bad at being bad at things, and the experience is hard to push through
for many people.

Actually, jumping back into mathematics outside of work, I'm having to go
through this period of uncomfortable 'badness' and resist my urges for the
greater good. It is emotionally taxing to be 'bad' at something. I think
that's part of why we favour languages that let us be productive quickly,
even when the harder path would probably pay off in the long run.

I wouldn't do it with everything (you need quick wins _somewhere_) but it's a
good skill to have, to be disciplined and persevere despite what your head's
saying.

I'm no psychologist though, so this is largely conjecture and anecdotes.

------
stcredzero
_And so the history of programming has been a series of advancements in both
removing barriers to composability, and building new programming technologies
that better facilitate composition._

This is like saying that the history of architecture has been a series of
advancements in reducing the cost of structures. _This is the fallacy of
narrowly focusing on a single sexy metric._ (See below.)

 _Stage 4: Composability is destroyed at program boundaries, therefore extend
these boundaries outward, until all the computational resources of
civilization are joined in a single planetary-scale computer_

Smalltalk tried to do this with the Image, which contained everything within
it, including compiler, source control, and development environment.
(Smalltalk started out as an Operating System.) However, what this actually
did was to sequester the community in the boundaries of the image while the
rest of the programming field was busy building varied forms of infrastructure
to allow interaction.

 _I think this hypothesis also offers an explanation for why Go is popular,
even though the language is “boring” and could have been designed in the
1970s._

Java won over Smalltalk because the Java community understood much better how
to win mindshare. Ruby took over Smalltalk's pure object schtick because it
could play nice with the rest of the programming world, and it had a killer
app in Rails.

Go wins because it is exceedingly well designed for convenience in a myriad of
ways where other languages drop the ball. You can have very fast incremental
compilation in C++, if you are careful. In Go, it just works. You can have a
good deployment process with Java, if you are careful. In Go, it just works.
You can have easy to use concurrency in other languages, like C#, provided you
find and learn the correct libraries. In Go, it's baked into the language, and
people can learn how to use it competently with about as much effort as
learning how to implement FSM using switch{}, and very good documentation and
tutorials are dead easy to find.

A Lamborghini is way faster than a Corolla. A Ford F300 can haul many times
more groceries. A Corolla doesn't have remotely near the potential for track
day fun as an Ariel Atom. The above 3 qualities are sexy metrics, but in terms
of overall utility and convenience for the largest number of people, for the
most favorable cost/benefit, the boring old Corolla kicks butt. (That said,
I'm pretty sure that someone can figure out how to disrupt programming in the
way that companies like Tesla are poised to disrupt the above story. It might
well be a "boring" version of functional programming that has many "just
works" qualities.)

------
evincarofautumn
A programming language doesn’t need to “take over the world” to be successful.
Nor does it even need to be “the best”, just serve a niche, as the article
goes into. Haskell is the best language I’ve found for the software I write,
so I use it. If other people aren’t using it, then I figure they have a good
reason, or it’s their loss.

------
elihu
As someone who has done a lot of programming in Haskell, I'm glad to see that
the limitations of composability across program boundaries are being recognized
as a problem. (And it may be a problem that Haskell can't fix without some
help from the operating system.) When you have to convert data to type
"String" in order to send it over a pipe, and then convert it back in the
receiving program, that sort of defeats the purpose of having a strong type
system in the first place. Similarly, if you have a bunch of processes, each
with their own IO Monad interacting with each other, it starts to have an
uncomfortable resemblance to the shared mutable state that Haskell tries so
hard to avoid.

As for the question of why Haskell isn't more popular, there are a lot of
possible reasons. GC overhead and less control over memory layout makes it
less performant than C/C++ and not really suitable for real time tasks. Some
algorithms are hard to express without mutation. (The ST Monad is usually
appropriate for those cases, but it's kind of cumbersome to use.) Laziness
makes memory and CPU utilization hard to reason about. The learning curve is
very steep. There aren't many books on "advanced" Haskell programming, or
writing system software in Haskell.

One of the things I really like about Haskell is that you never stop learning
new things, and new abstractions and libraries are being invented all the
time. After 25 years or so, we're still figuring out new ways to program in
Haskell. This has a couple of unfortunate side effects, though. One is that
programmers can become distracted from the task at hand and direct all their
effort at figuring out a better abstraction to solve their problem. A lot of
great things come out of such work, and sometimes it pays off in terms of
productivity, but I think the common criticism that there isn't enough
practical, "boring" software being written in Haskell is justified. Another
problem is that it's hard for Haskell programmers to communicate with each
other or understand each other's code when they haven't learned, or aren't
comfortable with, exactly the same set of abstractions.

I do think Haskell is on to something good. I don't expect Haskell to take
over the world, but I think in a hundred years people will look back and say
that ideas from Haskell were a major influence on later languages that were
more popular. Learning Haskell is probably a good way to prepare for languages
of the future that haven't been invented yet, in the same way that learning
Smalltalk would have been a good way to prepare for the object-oriented
languages that came after.

------
innocentoldguy
I completely agree with the heart of this article, and its reasoning for Go's
popularity (e.g. familiarity for people who know languages like C and Java).
It speaks to my experiences, anyway.

I went down the Go path myself for a while, because it _was_ familiar, but I
found that it had many of the same issues I was trying to get away from in
other languages. As the article points out, I also found Go "boring."

After searching for a while, I ended up wanting to use Haskell, but it was
taking me too long to become proficient with Haskell's unfamiliar syntax and
functionality, which is why, the author posits, Haskell hasn't taken over the
world. I ended up using Elixir instead. It offered me the functional paradigm
I was looking for in Haskell, and the ease of managing large-scale
applications, while also offering a familiar and easy-to-learn syntax. I'm
still studying Haskell, and may eventually employ it professionally, but for
now, Elixir is working just fine for me.

~~~
spudlyo
I like Go specifically because it _is_ boring. I really enjoy the simplicity
of it, which makes it easy to read and understand the core packages. I feel
like the language doesn't try to do too much and gets out of my way so I can
concentrate on the task at hand.

I don't write large complex applications though, thus far I've used it to
write programs that fit into Go's sweet spot -- high performance network
daemons that do a lot of I/O.

~~~
throwaway949
Go will lead to a generation of "stupid" developers writing "stupid" code
because they were told you don't need "this or that"; then the next generation
will rediscover things such as functional programming, generics, or type
classes. Or Go will evolve into something more complex, like every language
does, and early adopters will be pissed off that things aren't like the "good
old times" anymore, as they are incapable of using constructs they don't
understand.

Go developers already have that "dumb-down" reputation.

~~~
marcrosoft
I guess I am a "stupid" developer.

I hope Go does not "evolve" to support generics (whatever that is), because
thousands of others and I have written Go code that produced value for many
without it.

~~~
Zach_the_Lizard
And others of us have wasted thousands of man hours because now we need an
interval tree that works with date ranges and numeric ranges. Now we need to
generate code (extra build step, slowing down development) or copy and paste
(error prone). And then another repo adds another interesting use for this
single data structure and that type needs to be supported.

And that's one data structure. We've got thousands of developers, many of whom
are now writing Go. This comes up all the time and is not a niche use case.

At this scale it's quite costly.

IMO slapping on even basic Java / C# style generics does not harm you and if
anything lets you optionally choose to use higher order functions or novel
data structures you can't implement today without reflection.

~~~
marcrosoft
Can you point me to a real world use case for what you are describing? I am
honestly asking, so I can understand the use case.

Edit: I have never written code that required me to copy and paste or generate
code.

~~~
Zach_the_Lizard
Finding overlapping intervals and querying on those ranges. E.g. answering
this question: on January 2nd, what payment schedules apply (more than one
because different countries pay out differently)? Or: if I add this schedule,
does it overlap another one? If so, which ones?

That's a date range based use case for this interval tree structure.

Another team is doing something similar query-wise, but with integer based
ranges. I forget what their use case is, but our data structure is exactly
what they need, but we either need to use reflection (unacceptable) or copy
and paste (also garbage) or code generation (bad) to accept and return the
right types.

This is trivial in Java, where I have a version of this written and used
without complaint.
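For contrast, here's roughly what parametric polymorphism buys in a language that has it; a Haskell sketch (the `Interval` type and `overlaps` function are hypothetical, not from a library): one `Ord` constraint covers date ranges, integer ranges, and doubles alike, with no casting or code generation.

```haskell
-- A generic closed interval; works for any ordered type: Int timestamps,
-- Double GPS coordinates, Data.Time's Day, etc.
data Interval a = Interval a a

-- Two intervals overlap iff each one starts before the other ends.
overlaps :: Ord a => Interval a -> Interval a -> Bool
overlaps (Interval lo1 hi1) (Interval lo2 hi2) = lo1 <= hi2 && lo2 <= hi1

main :: IO ()
main = do
  print (overlaps (Interval (1 :: Int) 5) (Interval 4 9))             -- True
  print (overlaps (Interval (1.5 :: Double) 2.5) (Interval 3.0 4.0))  -- False
```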

~~~
marcrosoft
I could be totally off but, if all the intervals were integer based would this
problem go away?

If so (probably not), would that mean that the lack of generics in Go
influence good design from the beginning?

Edit: Thank you for the use case!

~~~
Zach_the_Lizard
>I could be totally off but, if all the intervals were integer based would
this problem go away?

No.

Even if you represent date ranges as pairs of integer timestamps in UTC (for
consistency) you now need to do annoying and possibly bug prone casting and
converting. Remember, code not written is bug free. Those tests won't write
themselves. Also think of N dimensions; those come up often and are hard to
handle with this input type.

The moment someone needs a pair of GPS coordinates that are doubles you're
SOL. Which incidentally is a common situation with interval trees: find things
in a viewport, intersecting roads, etc.

Even the tree itself is actually generic. You could build on a red-black tree
and, using generics, use an internal data type to order the tree with some
extra bookkeeping, thus using the same underlying tree implementation for,
say, an ordered set structure, interval trees, etc.

You can do this all now with error prone casting. I don't know if you've done
Java pre generics but it is a vastly more error prone world than post generics
Java.

Incidentally, the lack of these other structures leads to lots of
OrderedXYZTypeSet-type code in our codebase that is generated to avoid
casting.

With generics we'd not only have one implementation for all types, but we'd
likely open source it since the broader Go community could use it and add
additional useful types and functions.

Go has very few built-in data structures, which exacerbates this problem. It
at least partially has so few because the generics support in maps, slices,
etc. is basically a special-cased hack, since it's not baked into the language
and is thus likely hard to maintain.

------
alkonaut
Ask yourself this: are/were the designers of Haskell aiming to create a
popular/widespread language? No. They were not. Quite the opposite. They have
succeeded well in their aim to keep it a niche language.

Perhaps a more reasonable question is why F# isn't eating C#'s lunch.

------
nabla9
Haskell is a pure functional language, and it enforces purity. Haskell did
not take over the world for the same reason that pure OO programming, pure
logic programming, pure relational databases, or pure anything never took off.

Programmers have impure thoughts.

~~~
insulanian
We have dirty thoughts... Never thought about it that way :)

------
bubblesocks
I agree with the author that Haskell's unfamiliar syntax and functional
constructs are why it hasn't taken over the world. I also think that Haskell's
syntax and functional constructs are precisely why it should though. I also
like the fact that Haskell isn't a mega-corp-owned technology, but rather
grown by intelligent engineers to do intelligent things in intelligent ways.

I'm not the greatest Haskell programmer, but I love it. I recommend learning
the basics of Haskell, if you haven't yet. Doing so improved my code in other
languages quite a bit, so it was worth studying for that reason alone.

------
cwmma
This is an interesting article about Haskell because it does concede that
Haskell might not be the best language for everything, and that other
languages might be useful for certain kinds of apps. Though not exactly: the
actual claim was that Haskell was not superior enough to make learning it
worthwhile for CRUD apps.

The big disconnect between FP/Haskell devotees and other programmers is that
everyone else doesn't consider it self-evident that Haskell or FP is better,
and the Haskell and FP crowd does a terrible job of explaining why they are
better paradigms: they make assumptions like 'all side effects are bad in all
circumstances' or 'imperative programming is always a worse way to implement
algorithms' without justifying them empirically, and claim their paradigm is
better without justifying that empirically either.

------
Shank
You could ask a similar thing about any functional language. The internet has
a very wide variety of introduction points for imperative languages, and far
fewer for functional languages. This means the barrier to entry isn't just
"can learn a language": the vast majority of the time, you have to convince
programmers who already know another language to adopt a new one (and in an
entirely different design paradigm from the one they're familiar with).

Functional languages have to be penetrable to newcomers and offer huge
benefits to veterans to switch, or they're going to remain nearly esoteric. Is
mathematically provable code nice? Sure is. Is that enough to win people over?
Well, language adoption says otherwise.

~~~
Kenji
It has nothing to do with newcomers or skill or any of that. Often, we need
to make computers do a list of things. Conditionally, sometimes, but
nevertheless, a list of things. This is human thinking. A C function is
nothing other than a list of things the computer does, one by one. Functional
programming essentially throws rocks in the way of anyone who thinks like
this (which is pretty much everyone) and makes everything unnecessarily
complex. There are few areas where functional programming is great; parsing
and compilers come to mind. Other than that, in my eyes, it is mostly an
academic pursuit.

~~~
stcredzero
_Often, we need to make computers do a list of things._

How often do we need to do those things in a specific dependency order? How
often do we need to simply get those things done, regardless of order?

~~~
douche
All the time? Human beings, on average, are more apt to think imperatively by
default than otherwise, unless they've been explicitly trained to think
otherwise.

------
zzzcpan
Well, every language is a product that has to compete with other languages.
Nothing to do with composability or advancements in out-of-touch research
areas.

And these days languages compete on a lot of things, on performance,
reliability, security, productivity, on syntax, on tooling, on libraries, on
deployment and supporting infrastructure, on interoperability. Syntax is not
even that big of a deal, it's not hard to compete on it, you can just take
Go's approach or you can do it thoroughly and properly, not forgetting about
newcomers, their likely previous experience and designing the whole learning
experience into the language too. You can't leave it as is though or it's
going to be a huge disadvantage.

------
hacker_9
The main reason I don't make the jump to a fully functional language is the
difficulty of refactoring. It's nice if you get the program structure right the
first time, but of course that doesn't happen, and so the inability to easily
pivot makes me stay away. That said, I do get hit by bugs that come from
state, and each time I wish I'd used the functional approach, but it's not
enough of a reason to make me switch.

~~~
nbouscal
That's… exactly the opposite of my experience, and that of every Haskeller I
know. Refactoring is 100x easier in Haskell than any other language I've
worked in.

~~~
preordained
100 times easier than right-click refactor in a modern Java IDE? I beg to
differ--if we are talking about your typical Haskell vim/text-editor rig. For
a language with such powerful types, lacking a good IDE to take full advantage
of them is like having a gun with no bullets.

~~~
nbouscal
I've never used a modern Java IDE (as I've been fortunate enough to never have
to program in Java), but still feel confident in the claim that refactoring
Haskell is 100 times easier, yes.

~~~
zigzigzag
Then you probably should, before making such claims. IntelliJ can do
refactorings like the following:

\- Extract code into a function (automatically suggested by duplicated-code
detection), and inline functions back into call sites.

\- Convert imperative for-loops to functional-style streams and back again.

\- Change function prototypes globally by adding or removing parameters.

\- Extract classes and interfaces.

\- Common sub-expression elimination (i.e. select an expression, introduce a new
variable, and all uses of that expression can be replaced as well).

\- Detect dead code and automatically remove it.

\- Refactorings across languages.

\- Replace inheritance with delegation or vice versa.

\- Automatically generify code, with extraction of type variables.

\- Obviously, a whole suite of structural code changes like renamings and other
smaller things.

That's without getting into all the other code intelligence features.

Regardless of how fancy you feel the type system of a language is, these sorts
of keyboard-driven refactorings are tremendously helpful for getting code and
APIs right.

------
harry8
New language to learn? Write a toy. Spend more time on that toy than it
warrants, rewrite it, and maybe someone else finds it useful or fun.

These programs are almost wholly absent in Haskell. List all the programs
written in Haskell that are used for something other than writing code.

Xmonad, Pandoc, git-annex maybe... and?

Please really do add to this list; its length is informative and tells us
something about Haskell's strengths and weaknesses.

~~~
codygman
> List all the programs used for something that isn't writing code that were
> written in haskell.

As smugly as possible and with a huge grin: Purescript

Some I can think of off the top:

\- PostgREST

\- hoodle

\- Microsoft Bond

\- git-annex

\- darcs

~~~
harry8
So you don't use PureScript for writing code, according to your definition? OK.
Anyway, that aside, the Haskell pushers out there really need to address why
there's so very little in the way of applications written in Haskell. The
article notes that Haskell is used very successfully for compilers. The number
of successes elsewhere is very, very much smaller.

Why? It's important not to just yell "rah rah" but actually analyse it as a
problem and maybe, just maybe, you know, solve it.

~~~
nbouscal
Haskell is used successfully in industry, e.g. at Standard Chartered where
they have >1MM lines of Haskell in production. Take a look:
[https://wiki.haskell.org/Haskell_in_industry](https://wiki.haskell.org/Haskell_in_industry)

------
tikhonj
One critical assumption behind discussions like that is that popularity is
strongly correlated to quality. But that just doesn't seem true in
practice—popularity is the result of complex social dynamics and isn't
strongly correlated to _any_ intrinsic qualities of whatever becomes popular.
We come up with compelling narratives about why one thing gets popular and
another doesn't _after the fact_ , but these are just rationalizations; we
can't use them to make good predictions and they do a poor job of representing
the social processes involved.

We can see this in a microcosm when we consider music. What makes music
popular? To a large extent, it's music that's listened to by the right people
at the right time—perhaps seeded intelligently with exposure and marketing.
There are some minimal bars the music itself has to pass, of course: it can't
be _terrible_ and it has to be _accessible_ , but those aren't high bars to
clear.

There are thousands of bands as good or better than most of the ones you hear
on the radio but they don't get anywhere. You just never hear them, or they
never catch on among your friends and never make inroads into your social
network. (Or, more importantly, into the social networks of the labels that
push music in practice.) Unless they do, in which case you have some unknown
band "going viral"—virality says a lot about how something spreads and little
about the thing itself. It doesn't even matter what you mean by "better": some
objective notion of quality, musical sophistication, aesthetics, "catchiness",
pertinent lyrics... whatever. That's not the main driver of popularity.

Nautilus had a great article[1] about this a while back, based on some
experiments run with music. They created several large groups of participants
listening to the same set of 40 musical tracks over time. People in the
control group listened to music independently; the other groups all had social
feedback mechanisms _within_ the groups. The results were all over the place:
the popularity of songs was completely inconsistent across groups, and could
be traced to early chance decisions that snowballed over time.

The article had a great analogy about how the whole process worked:

> _…a single match is not the entire reason for a wildfire starting and
> spreading. But that’s exactly how we naturally think about social wildfires:
> that the match is the key. In fact, there are two requirements: a local
> requirement (a spark), and a global requirement (the ability of the fire to
> spread). And it’s the second component that is actually the bottleneck: If a
> forest is dangerously dry, any spark can start a fire. Sparks are easy to
> come by, and are not intrinsically special._

The programming language in question? Its qualities? Pragmatism? Purity?
Elegance? All just sparks. Which one starts the biggest fire depends far more
on its context than the spark itself.

[1]: [http://nautil.us/issue/5/fame/homo-narrativus-and-the-troubl...](http://nautil.us/issue/5/fame/homo-narrativus-and-the-trouble-with-fame)

(The article was written by a researcher in the field and cites a few papers
on the topic, if you want to read more.)

------
stonogo
Because the entire programming model breaks down at I/O. When you have to
compromise the language's _raison d'être_ to perform a basic function of the
computer, the cognitive dissonance is real.

~~~
douche
This is what always fucked me up with Haskell. To introduce printf debugging,
you get sucked into the morass that is the IO monad, and all the
clusterfucking that involves.

~~~
tene
You don't need IO at all for printf debugging. Just call trace or traceShow.
[https://www.stackage.org/haddock/lts-7.16/base-4.9.0.0/Debug...](https://www.stackage.org/haddock/lts-7.16/base-4.9.0.0/Debug-Trace.html)
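A minimal sketch of what that looks like (the factorial function here is just a made-up example): trace takes a string and a value, prints the string when the value is forced, and returns the value unchanged, so it drops into pure code without touching IO.

```haskell
import Debug.Trace (trace)

-- trace prints its message to stderr when the surrounding expression
-- is evaluated, then behaves as the identity on its second argument.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = trace ("factorial called with " ++ show n)
                    (n * factorial (n - 1))

main :: IO ()
main = print (factorial 5)  -- logs each recursive call, then prints 120
```

Because of laziness, the order in which trace messages appear follows evaluation order rather than source order, which is worth keeping in mind when reading the output.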

~~~
douche
Apparently things have changed since 2007... Or else my Ivy league CS
professors didn't know what the hell they were talking about. As I grow older
and more cynical, the difference between those options continues to narrow...

~~~
TheCoelacanth
I wouldn't be surprised at all if Ivy League CS professors didn't have in-depth
programming knowledge. They are Computer Science professors, after all, not
programming professors.

~~~
douche
If you're teaching a course in _X using Y_ , I don't think it's out of line to
expect someone to be proficient in both X and Y. That's kind of their job.

~~~
bubblesocks
I recently finished a Linux course at school, part of which was focused on
Vim. I got points taken away for using gg to jump to the top of a document and
G to jump to the bottom. My professor said neither option would work. While I
think it is reasonable to expect professors to know what they're talking
about, it is fairly common that they don't.

------
acchow
Who says Haskell is so great? What does "better" even mean?

~~~
allengeorge
From what I've inferred from the article "better" means: easier to write
large-scale, composable systems that engineers can reason about easily.

~~~
_yosefk
It doesn't seem easy to reason about the performance of a program under lazy
evaluation. Generally there are many aspects of a system that you may want to
reason about, and making it easier to reason about one often comes at the
expense of making it harder to reason about another. Pure functional code, for
example, makes it easy to reason about what outputs are produced from what
inputs, but that is one (important) aspect among many. (And to take one thing
that's really simple without purity and really tough with it, consider "pure
functional data structures" vs. data structures in imperative languages
assuming a mutable RAM.)
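The classic illustration of how hard lazy performance is to reason about is foldl versus foldl': both compute the same answer, so nothing in the result hints at the difference (a minimal sketch, using only the standard Prelude and Data.List).

```haskell
import Data.List (foldl')

-- The lazy foldl builds a chain of unevaluated (+) thunks proportional
-- to the list length before anything is added; the strict foldl' forces
-- the accumulator at each step and runs in constant space. Identical
-- results, very different memory behavior.
lazySum, strictSum :: [Integer] -> Integer
lazySum   = foldl  (+) 0   -- O(n) space: thunks pile up until forced
strictSum = foldl' (+) 0   -- O(1) space: accumulator forced each step

main :: IO ()
main = print (strictSum [1..1000000])  -- prints 500000500000
```

On a large enough list, the lazy version can blow the stack while the strict one runs fine, even though equational reasoning says they are the same function.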

~~~
solidsnack9000
Laziness is a real problem for reasoning.

Reasoning about pure functional data structures in the absence of laziness
isn't that difficult, any more than reasoning about the many mutable data
structures whose copying/rebuilding/rebalancing takes place only occasionally.
The benefit of pure functional data structures is most visible when there is
some kind of sharing -- concurrency of some kind -- in which case reasoning
about side effects becomes more difficult.

There are good approaches to handling in-place array update and similar
operations in pure functional programming (the ST monad, for example), it's
just not the default. As long as the side effect is somehow accounted for in
the return value, referential transparency is maintained; we might say the
return value is there to help us reason about side effects.
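To make the ST point concrete, here is a minimal sketch (the sumST name is made up): an imperative loop with genuine in-place mutation, sealed inside runST so that the function as a whole stays pure.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Mutation happens inside the ST computation, but runST's type
-- guarantees it cannot leak: sumST is an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1..100])  -- prints 5050
```

Callers can reason about sumST purely by its input/output behavior; the side effects are accounted for entirely within the ST type.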

------
mcphage
The gain from composition isn't that it's easier for our brains to comprehend;
lots and lots of tiny pieces are often harder for our brains to understand
than a much smaller number of larger pieces.

------
xyzzy4
Haskell doesn't make it clear which things will run fastest. For example, in
Java you can use hashmaps or arraylists depending on the situation, and it is
easy to optimize for run time. But I'd have no idea how to do this in Haskell.
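For what it's worth, the same choices do exist: the containers package's Data.Map (a balanced tree, O(log n) operations, roughly Java's TreeMap) and unordered-containers' Data.HashMap (closer to HashMap) play the analogous roles. A minimal sketch with Data.Map.Strict (the phoneBook name is made up):

```haskell
import qualified Data.Map.Strict as Map

-- Picking Data.Map vs. Data.HashMap vs. a plain list is the same kind
-- of data-structure decision as choosing between Java collections.
phoneBook :: Map.Map String Int
phoneBook = Map.fromList [("alice", 100), ("bob", 200)]

main :: IO ()
main = do
  print (Map.lookup "alice" phoneBook)                  -- Just 100
  print (Map.size (Map.insert "carol" 300 phoneBook))   -- 3
```

The harder part, as the reply below notes, is that pervasive laziness can still make the constant factors and memory behavior less predictable than in a strict language.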

~~~
catnaroek
Efficiency is a matter of using appropriate data structures and algorithms for
your problem, regardless of the programming language. Unfortunately, pervasive
laziness is a serious disadvantage here. (But being functional is not.)

