
Why does Haskell, in your opinion, suck? - kccqzy
https://www.reddit.com/r/haskell/comments/4f47ou/why_does_haskell_in_your_opinion_suck/
======
akavel
Haven't seen it listed either here or there, so not sure if I'm the only
one, but: for me, the first and currently blocking obstacle is the "graphical"
syntax. I'm the kind of person who hears the words they read as a voice in
their head, so when every line is interspersed with multiple "random" <:> >>=
<=< @>-,-'\-- and whatnot other ASCII art I can't verbalise, I distinctly feel
my brain stumble, mumble, and grind to a halt in sad emptiness. And I don't
even know how to google any one of them. I don't know how to solve this for
myself. I once found a list of suggested names for some common ones, but I
don't like having to learn all that by heart before I even start with the
language; and I also feel pretty certain those are not all of them, and many
libraries seem to like to invent their own new ones, so it's back to square one.

That is, I believe, the main reason I find SML more attractive: it seems to use
operators to a much, much lesser extent, apparently preferring alphanumeric
function names both in the core and in third-party libraries.

~~~
wirrbel
I felt the same way (and still do in general, with notable exceptions). To put
this first: I think a lot of the Haskell community is obsessed with
mathematical "cuteness", which they basically take to mean "infix-operator-
heavy" notation.

Nevertheless, I have learned to like some operators;

    <$>, <*>, *>, <*

are ones that come to my mind. The operator

    <$>

is basically fmap as infix notation, so `fmap f [1,2,3]` would be `f <$>
[1,2,3]`. This makes mapping as easy as a normal function call (and the
operator name `<$>` was not chosen arbitrarily: `$` by itself is the
function application operator, so `f $ x` does to a single element what `f
<$> x` does across an instance of Functor).

With applicatives, the pattern of

    f <$> a <*> b <*> c

is very common, with a type signature like

    f :: a -> b -> c -> ....

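A runnable sketch of that pattern, with an invented `volume` function and `Maybe` standing in for the Applicative (both are illustrative choices, not from the comment above):

```haskell
-- Hypothetical example: `volume` is made up for illustration.
-- `volume <$> a <*> b <*> c` applies a 3-argument function inside Maybe.
volume :: Double -> Double -> Double -> Double
volume l w h = l * w * h

main :: IO ()
main = do
  print (volume <$> Just 2 <*> Just 3 <*> Just 4)   -- Just 24.0
  print (volume <$> Just 2 <*> Nothing <*> Just 4)  -- Nothing: one missing input makes the whole result Nothing
```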
~~~
JadeNB
I think that your second indented block, which is currently

    <*>

, is supposed to be `<$>` (based on the rest of your discussion).

~~~
spion
Yes, `<$>` is the general `map` (`fmap`), while

    <*>

takes function(s) out of a mappable and applies it (them) to argument(s)
within a mappable.

    (+) <$> [1,2,3]              =  [(1 +), (2 +), (3 +)]
    (+) <$> [1,2,3] <*> [10, 20] =  [(1 +), (2 +), (3 +)] <*> [10, 20]  =  [11, 21, 12, 22, 13, 23]
    (+) <$> Just 3  <*> Just 5   =  Just (3 +) <*> Just 5  =  Just 8
    (+) <$> readLn  <*> readLn   =  ...

well that depends on the inputs :)
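
The list and Maybe lines above can be checked directly; a small runnable version (the `readLn` case is left out, since it depends on user input):

```haskell
-- Checking the <$>/<*> examples with the list and Maybe Applicatives.
-- The list instance pairs every function with every argument, in order.
main :: IO ()
main = do
  print ((+) <$> [1,2,3] <*> [10, 20 :: Int])  -- [11,21,12,22,13,23]
  print ((+) <$> Just 3 <*> Just (5 :: Int))   -- Just 8
```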

------
darksaints
Haskell has 3 major problems that are completely distinct in my opinion.

Biggest technical problem: lazy evaluation. Other people have said more than
enough here.

Biggest cultural problem: technical oneupmanship and code golf. Haskellers can
get so caught up in no-compromise stylistic competition that it becomes hard
to get anything done. If you finally finish up something that accomplishes
your goals, your teammate is gonna come in and rewrite everything you just
wrote so he can feel happy with it too. And then lecture you about how lenses
are wonderful and how you need to learn the principle of least power.

Biggest community problem: a small minority of extremely obnoxious and toxic
assholes. They raid other communities and make fun of their languages and
members. They rant against and mock everything deemed inferior. They treat
people like they're idiots for not understanding basic but unfamiliar
principles of the language. And nobody ever seems to shut them down, despite
the fact that the majority are pretty welcoming.

~~~
js8
> Biggest technical problem: lazy evaluation. Other people have said more than
> enough here.

I am actually wondering about that. Lazy evaluation is considered essential by
the classic paper "Why Functional Programming Matters", for better program
composition.

In particular, I wonder how you are supposed to pass around the IO monad (or
anything else whose evaluation will give you side effects) as a value if you
have strict evaluation. I thought the whole point of Haskell was to lazily
create a "recipe" as an IO value, which will then be evaluated after it is
returned from main. I also wonder how Idris deals with this problem, as I read
it has strict evaluation.

~~~
Peaker
Laziness and IO are unrelated.

Idris, as you mention, is eager and strict -- and is still pure and uses IO in
the same way as Haskell.

Evaluation in Haskell does not cause side effects, and that includes
evaluation of IO action values. It is the _execution_ of these IO actions (by
the RTS, due to their inclusion in `main`) that does.

For example:

    let x = map print [1..10] :: [IO ()]

x is just a list of values, each representing the action "print the number N"
(for varying N). Evaluating this list doesn't cause printing. Now if you use:

    let y = sequence_ x :: IO ()

You now have another pure value, y, which represents the action to print all
the values from 1 to 10. Evaluating y still doesn't cause anything to happen.

    main = y

NOW you've actually scheduled the effect denoted by 'y' to execute.

In other words, main is assigned a composition of effects, that by virtue of
being inside main, will get executed. This has nothing to do with evaluation
order, and when or how these effect descriptions got evaluated.
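
A tiny program sketching that distinction: forcing the list with `length` evaluates it, but nothing is printed until `sequence_` puts the actions into `main`'s effect chain.

```haskell
-- Evaluation vs. execution: `x` is an ordinary list of IO actions.
main :: IO ()
main = do
  let x = map print [1 .. 3 :: Int]  -- [IO ()]; nothing printed yet
  print (length x)                   -- forces the list's spine; prints only 3
  sequence_ x                        -- executes the actions; prints 1, 2, 3
```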

~~~
js8
Ah, OK, thanks for explanation!

------
zzzcpan
Most of these apply to other languages as well:

"Haskell sucks because compilation takes too long."

"Aye. And it takes too much memory too."

"My major gripe with haskell was that I could never tell the space/time
complexity of my code without serious analysis (that among other things
involves second-guessing the compiler's ability to optimize). This makes
writing good quality code harder than it needs to be."

"Debugging boils down to wolf-fences and print-statements most of the time"

"Profiling is needed for any stack traces, and is not enabled by default, so
you end up spending hours rebuilding your sandbox just to get a stack trace of
an error."

"Coming from a Scala background, the tooling seems terribly cumbersome and
immature. I couldn't get it to work in IntelliJ."

"You need to turn on dozens of language extensions to do pretty much
anything."

"I'm a developer on a pretty large Haskell codebase, and we're in a corner of
'can't add any libraries without them conflicting with each other.' Which is
OK, by itself, but leads to my biggest complaint of 'why aren't API changes
reflected by major/minor versioning?'"

"Haskell sucks because both the standard library and the ecosystem lack
guiding ideas."

~~~
YeGoblynQueenne
For those who, like me, wondered what wolf-fencing is:

"Wolf fence" algorithm: Edward Gauss described this simple but very useful and
now famous algorithm in a 1982 article for communications of the ACM as
follows: "There's one wolf in Alaska; how do you find it? First build a fence
down the middle of the state, wait for the wolf to howl, determine which side
of the fence it is on. Repeat process on that side only, until you get to the
point where you can see the wolf."[10] This is implemented e.g. in the Git
version control system as the command git bisect, which uses the above
algorithm to determine which commit introduced a particular bug.

[[https://en.wikipedia.org/wiki/Debugging](https://en.wikipedia.org/wiki/Debugging)]

So now you know what that thing you always end up doing is called :)

~~~
manojlds
So, binary search? Why did a new name come up for this well known thing in CS?

~~~
brianwawok
Binary search sounds kind of 2D. Wolf fence also lets you work with unsorted
data.

I.e. if I asked you to find a squeaky noise coming from somewhere in your
house. Does it make sense to "binary search" your 3D 3-story house?

Wolf fencing makes more sense: stand on floor 2 and check whether the noise is
on your floor, or coming from the stairwell to floor 3, or the stairwell to
floor 1. You just used one step to narrow your search space down to a third.

How would you sort your house so you can binary search it?

~~~
formula1
How can wolf fencing work on unsorted data if you have to find out where it
happens? Wouldn't that involve iterating over each item to find out where the
wolf is?

~~~
true_religion
Binary search is looking for a value. Wolf-fencing is looking for a side-
effect of that value.

As per analogy, you're not checking an animal in Alaska to see if it is a
wolf, you are waiting till the wolf howls (side effect of wolf-existence) then
focusing your efforts in that area.

So you can't really use wolf-fencing to say... find a number in an array; it
makes no sense.

~~~
brianwawok
I like this explanation ;)

So apply a binary search to an effect instead of a value.

------
YeGoblynQueenne
>> Haskell sucks because both the standard library and the ecosystem lack
guiding ideas. The former often feels like it is an amalgamation of PhD theses
that have been abandoned soon after being completed.

That's a bit on the nose. [1]

[1] As in "ouch".

------
davesque
My main gripe is that the syntax is too sugary. I've often wondered to myself
what a Haskell with syntax more in the spirit of Python would look like. I
just wish I didn't always feel like I had to literally memorize what various
custom operators meant with flashcards or something. Also, Haskell has
relatively complex parsing and evaluation rules. The meaning of well-written
code shouldn't be obscured by syntax.

And, along the lines of the syntax, the culture of style in Haskell. Single
letter variables abound. Not just throwaway vars where the meaning is pretty
clear from context. Entire libraries written with only single letter or
acronymic variable names. Good luck understanding what a library author was
trying to do by reading their code. I think this culture problem is largely
being driven by the fact (which I mentioned) that Haskell's base syntax is
already very opaque. I understand that the language takes some cues from the
mathematical community (not surprising given its history). However, the
language needs to ditch this to gain broader acceptance.

And, again, leading into my next point: the culture of the community in
general is too academic. I'll make another comparison with Python. The Python
community, among scripting languages, leans more toward the academic side but
in a good way. They've found a nice balance. It's easy to get started and, if
you want to learn more about the details, you can do so. With Haskell, you
immediately get hung up on the details. Most of the details have to do with
how Haskell fits into the framework of category theory. However, most
programmers don't give a shit about provability. They just want to write
software that is faster, safer, and more concise without being terse. I'm not
convinced that it's impossible to have this in a functional language with the
same goals as Haskell. Beginners should be able to get started quickly and
without much confusion. If they're curious about the underlying principles,
they should be able to gradually peel back the layers and learn more.

------
amelius
My biggest complaint would be about incomplete documentation of GHC. Given
that Haskell is a pure functional language, it should be possible to easily
use any part of the compiler, and include it in your own project. However, the
documentation is rather incomplete, difficult to understand, and always behind
the current state of the code. That said, the scientific literature on Haskell
is the complete opposite, and contains beautiful work.

Also, I'm wondering why there are so few fully compliant implementations of
the internet RFCs. Such specs seem an excellent opportunity to showcase the
power of functional languages, yet the implementations keep lacking.

~~~
lallysingh
I haven't found a language where that isn't the case.

~~~
TheEzEzz
The Roslyn C# compiler is exposed as various libraries (and is open source)
and is quite nice to work with. I was able to utilize it to make a C#-to-
HLSL/GLSL compiler fairly easily.

------
POSIXprog
Bryan Cantrill's take:
[https://youtu.be/0T2XFSALOaU?t=2021](https://youtu.be/0T2XFSALOaU?t=2021)

My answer would be lazy evaluation by default. It's extremely unusual, and
I've never seen a convincing enough justification for it, and it gives rise to
performance bugs (space leaks) that can be fiendishly difficult to track down
and fix and are disastrous in production. With the arrival of Idris, I think
we can pretty conclusively say this was a mistake and that the Miranda branch
of the PL family tree is likely to prove to be an evolutionary dead-end.

~~~
Tyr42
My perspective on this is that lazy by default made sticking to purity much
more compelling: if you just dropped print statements in, you weren't sure
exactly when they would get evaluated. This led to important developments like
IO (they started with user input just being a lazy list! Very possible to
accidentally block by trying to read too much), which might not have happened
otherwise.

But now, I do feel like laziness is best kept for core data structures and
places where you know you can use it, and a strict-by-default variant of
Haskell would be better, now that we have learned most of the lessons from
laziness.

~~~
oldmanmike
"My perspective on this is that lazy by default made sticking to purity much
more compelling: if you just dropped print statements in, you weren't sure
exactly when they would get evaluated."

And that's basically a death sentence to the feature because it reduces it to
a silver lining of desperate post-mortem optimism. As far as my understanding
goes, lazy evaluation was originally included in the language because it was
perceived to be a powerful optimization technique enabled by pure functional
programming. You reduce the amount of work a program is doing dynamically at
runtime while not trading off on readability or modularity. All win. Except
there were subtle trade-offs that have made themselves clear over the years
and like you said, most would agree that lazy-by-default is too extreme of a
feature and doesn't really pay its share of the rent at the end of the day.

My main gripe with lazy evaluation is that it's implicit behavior. It's to
evaluation what garbage collection is to memory or dynamic types are to types.
It hides something from the programmer that is useful to know more often than
not.

~~~
js8
> My main gripe with lazy evaluation is that it's implicit behavior. It's to
> evaluation what garbage collection is to memory

Are you saying you would like to return to the languages with explicit memory
management? Or why is, generally speaking, GC considered good and LE
considered bad?

~~~
oldmanmike
> Are you saying you would like to return to the languages with explicit
> memory management?

Not really, but that does seem to be the logical conclusion at the end of the
day. Haskell makes a really good case for the use of a garbage collector via
pure functional programming, but I'm open to more fine-grained programmer-driven
mechanisms for handling memory. I need to try out Rust's borrowing system on a
meaningfully large project before I can start saying anything conclusive.

------
bernalex
Hello. I see some of you are listing things here. It really would be very
helpful to me if you could list them on Reddit as well, using the format of
that thread, so that I only have one source to deal with.

I don't mean to instigate any Reddit vs. Hacker News rivalry, or whatever, but
I just don't have time to go over several sources for my talk and PDF. So this
is just a tip in case you want me to include your Haskell "complaint".

Glad to see the thread has sparked discussion here too!

~~~
shmageggy
You already have this page open... how much more time is needed?

~~~
arianvanp
Because it avoids duplicates. The idea is that we have one place where all the
problems reside, and where the problems are sorted by upvote count.

------
tlogan
I had the chance to debug a tool built using Haskell. I was amazed how clean
and easy to understand the Haskell solution was (it was a parsing library
generating better-optimized SQL queries).

The main problems are:

- it is hard to find examples/snippets of real-world solutions on the net
(the majority of examples are Hello-world-style and not very useful)

- lack of documentation about building real-world solutions

Maybe more work should be done explaining how Haskell can be used to solve
real-world problems. Parsing seems like a place where Haskell shines, and more
examples should be available.

So maybe just focus on "parsing with Haskell" and win that space.
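
To illustrate the "parsing with Haskell" point, here is a toy, self-contained parser combinator; every name in it is invented for this sketch (real code would reach for a library such as parsec or megaparsec):

```haskell
-- A toy parser-combinator sketch. `Parser`, `satisfy`, `number` are all
-- made-up names for illustration, not a real library API.
import Data.Char (isDigit)

newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

-- Consume one character matching a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy pr = Parser go
  where
    go (c:cs) | pr c = Just (c, cs)
    go _             = Nothing

-- Parse one or more digits and read them as an Int.
number :: Parser Int
number = read <$> some1 (satisfy isDigit)
  where
    some1 p = (:) <$> p <*> rest
      where rest = Parser $ \s -> case runParser (some1 p) s of
                                    Nothing -> Just ([], s)
                                    r       -> r

main :: IO ()
main = print (runParser number "42abc")  -- Just (42,"abc")
```

The point is the style: parsers compose with the same `<$>`/`<*>` machinery discussed elsewhere in this thread.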

~~~
taktoa
You may be interested in the State of the Haskell Ecosystem [1] and School of
Haskell [2].

[1]: [https://github.com/Gabriel439/post-
rfc/blob/master/sotu.md](https://github.com/Gabriel439/post-
rfc/blob/master/sotu.md)

[2]: [https://www.schoolofhaskell.com](https://www.schoolofhaskell.com)

------
angerman
Having these kinds of threads is, I think, very important for the community.
On a related note, there has been quite some fallout from the "Haskell
compilation times got worse" thread, which resulted in the development of
tooling to improve the situation.

~~~
tome
I agree. Thanks to everyone contributing ideas!

------
devishard
I agree with the poster complaining about the wide variety of symbolic
operators: readability becomes extremely important as programs grow large, and
Haskell simply doesn't have it.

Lazy evaluation really doesn't pay off in my experience. Any major Haskell
program ends up turning it off for performance reasons. And even when you
don't, performance isn't the only problem: lazy programs often produce really
inscrutable errors.

------
malisper
My reason is that there are eight different folding functions:

foldl, foldr, foldl1, foldr1, foldl', foldr', foldl1', and foldr1'.

There are twelve if you count:

scanl, scanr, scanl1, scanr1.

Also, the implementation of monad transformers is disgusting. It requires
O(n^2) code generation where n is the number of monads.

~~~
Peaker
Folds can be left-associative or right-associative. (l or r suffix).

Folds can be eager or lazy (prime suffix for eager).

Folds can require at least one element, avoiding the empty case ('1' suffix)
(e.g. a function like `maximum`).

This gives a 2^3 Cartesian space of folds. Each and every one may be useful in
some cases. For ordinary lists, though, foldr and foldl' cover virtually all
uses (just use 'error' for the empty case when you need to).
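
A short sketch of those two workhorses: strict `foldl'` for plain reductions, lazy `foldr` when short-circuiting pays off.

```haskell
-- foldl' accumulates strictly; foldr can stop early because its
-- combining function may ignore the rest of the list.
import Data.List (foldl')

main :: IO ()
main = do
  print (foldl' (+) 0 [1 .. 1000 :: Int])              -- 500500, in constant space
  print (foldr (\x _ -> Just x) Nothing [7,8,9::Int])  -- Just 7: stops after the first element
```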

Monad transformers don't require O(n^2) anything. You're talking about the
'mtl' package on top of the 'transformers' package. The 'transformers' package
is useful on its own and doesn't have this problem.

The 'mtl' automatic lifting framework does require something on the order of
O(n^2) instances (for n=6 or so). However, this is not really just 6 copies of
the same code over and over. The threading of monadic state through different
monads varies. For example, `Reader.local` is implemented very differently for
`WriterT` than for `ContT`. So there really are on the order of O(n^2)
different interactions to write code for.

------
yongjik
I only dabbled in it a little for fun, but in my beginner's opinion:

Pros: The compiler eliminates a lot of potential bugs, so once my code
compiles, I've saved hours that would have been spent debugging if I were
using, say, C++.

However,

Cons: It takes more hours to figure out why my code is not compiling!

Basically, instead of debugging my code, now I have to "debug" GHC, trying to
figure out what it is thinking and why it's rejecting my code.

Put another way, C++ allows me to try "quick and dirty" solutions first, and
then later iterate and improve. Haskell requires me to be perfect at first
try.

~~~
ufo
One thing that helps A LOT when it comes to those compiler error messages is
writing lots of type signatures. They help keep the error messages more
"localized" than they would if you rely on global type inference. I would
recommend at least writing a type signature for every top-level function in
your programs.

That said, Haskell still has many features that make errors more complicated
than other languages. One example is that the use of whitespace for function
calls means that some things (like forgetting a comma in a list) become type
errors instead of syntax errors. Another example is the overloaded numeric
literals, which add "invisible" typeclasses to any code you write that uses
numbers.

Coming back to your comment, I think that the "quick and dirty" bit has more
to do with you being more used to C++ than you are with Haskell. You can also
write dirty code in Haskell if that is what you really want :)

------
johan_larson
Haskell sucks because you can't really get anywhere before understanding
monads well, and monads are (take your pick) a) too hard for most programmers,
or b) too distant from the abstractions used in most programming languages.

Not that you're done mastering Haskell when you've figured out monads, of
course. There are plenty of harder abstractions running around. But monads are
the orgo of Haskell, the place many earnest dreams go to die.

~~~
devishard
This would be a reasonable critique except that monads are really why Haskell
ends up being so powerful and composable. Sure, they can be tough to wrap your
head around, but without them Haskell wouldn't be particularly special or
useful.

~~~
pron
It's more complicated than that. Monads don't make a language more powerful,
but they are an absolute necessity given Haskell's design, which is built
around extensional/value semantics, which basically approximates a
program/subroutine as the function it computes; this is a useful approximation
as it lets us treat computations as if they were referentially transparent.
That approximation has a cost, though, as computations are not quite functions
(they are _processes_ that _compute_ functions), and while the approximation
works well enough much of the time, some things require recapturing the more
accurate definition of computations as continuations (i.e. a computation can
block and then resume). If your language does not have continuations, only
functions, monads achieve the same thing. If, OTOH, your language has
continuations (as most non-pure languages do, although most don't have
_reified_ continuations, which are just as "programmable" as monads only more
composable), monads don't really help.

See [http://blog.paralleluniverse.co/2015/08/07/scoped-
continuati...](http://blog.paralleluniverse.co/2015/08/07/scoped-
continuations/)

~~~
Peaker
It's not really Monads making the language more powerful.

It's the taking away of non-pure primitives and libraries -- and replacing
those with type-labeled effects.

That these effects are composed monadically is a minor detail and
unfortunately the thing that's emphasized.

Haskell gains power in its framework for _restricting_ code. In Haskell, you
can know so much about what code _doesn't do_ -- and that's what makes
Haskell special and powerful compared to other languages. You have to
explicitly opt out of these restrictions, making the majority of code easier
to reason about, simply because there is so much it cannot do.

~~~
pron
> It's the taking away of non-pure primitives and libraries -- and replacing
> those with type-labeled effects.

PFP and typing are quite orthogonal. You can have effect systems in non-PFP,
continuation-based languages (i.e. languages that don't equate computations
with functions). You can have non-pure, non-monad-based type-labeled effects.

> Haskell gains power in its framework for restricting code.

Sure, but Haskell does this within a very specific design, based on
extensional functional semantics. There are other ways of restricting what
code can or cannot do that don't have the same semantics. Personally, I think
that various effect systems may hold promise and are certainly interesting
enough to try, but the PFP abstraction has so far failed to yield results
commensurate with its cost.

~~~
Peaker
> PFP and typing are quite orthogonal

They are orthogonal in one technical sense. But the benefits are reaped from
the combination.

> the PFP abstraction has so far failed to yield results commensurate with its
> cost.

You say this based on what? Other Haskell users and I believe that the costs
are very minimal and the benefits are quite huge.

~~~
pron
> But the benefits are reaped from the combination.

Well, I happen to think that some forms of rich typing are quite beneficial,
but that value-semantics is a negative. I don't see Haskell having any
tangible benefits whatsoever over, say, OCaml (other than ecosystem-related
stuff), which has one (relatively rich typing) but not the other (PFP).

> You say this based on what?

Based on the fact that the few Haskell shops out there -- despite them being
composed of avid enthusiasts and people who devote a lot of thought into the
Haskell way of thinking, are not reporting even 2x productivity gains
(although I don't know what huge means to you). I mean, some say they _feel_
those gains, but when you look at iteration speed, time to market etc., you
see negligible advantage if at all. As to the cost, I won't argue with you,
but I encourage people who are interested in languages as well as in software
engineering to try Haskell and judge for themselves.

~~~
Peaker
There are a lot of tangible benefits over, say, OCaml. Let me use one
particular representative one: STM.

Haskell can successfully implement a performant STM with static guarantees
regarding transactions. How would you add practical, guaranteed STM to OCaml?
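
A minimal sketch of that static guarantee, using the stm package that ships with GHC (the `transfer` example is invented for illustration): an `STM` action can only do transactional reads and writes, so inserting arbitrary IO into a transaction is a type error, not a runtime check.

```haskell
-- STM actions have type `STM a`; IO simply does not type-check inside them.
import Control.Concurrent.STM

-- Hypothetical example: atomically move `amount` between two accounts.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  b <- readTVar from
  writeTVar from (b - amount)
  modifyTVar' to (+ amount)
  -- A `putStrLn "log"` here would be rejected at compile time: IO () is not STM ()

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  (,) <$> readTVarIO a <*> readTVarIO b >>= print  -- (70,30)
```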

> are not reporting even 2x productivity gains

An overall 2x productivity gain is _huge_. A 10% productivity gain is worth
millions over the course of a year, for even a medium-sized software shop.

There are other, very productive languages (at least for initially writing
programs) but they tend to be far less reliable. There are other relatively
reliable languages (e.g: Ada) but they are far less productive.

~~~
pron
> How would you add practical, guaranteed STM to OCaml?

I don't see the relevance. Clojure has STM and isn't pure at all. I can't see
why OCaml cannot do the same.

> There are other, very productive languages (at least for initially writing
> programs) but they tend to be far less reliable. There are other relatively
> reliable languages (e.g: Ada) but they are far less productive.

I understand that the Haskell community wishes this to be true -- and maybe it
is -- but to date there's no data to support this. Whatever little data we
have on bugs (the "GitHub study", which might not be dependable, but that's
all we have) shows negligible-to-nonexistent advantage to Haskell over other
languages.

~~~
Peaker
> Clojure has STM and isn't pure at all.

Not _guaranteed_ STM. If you do IO in Clojure's STM, you just get a runtime
error (assuming the IO does not forget to use the runtime-check that it isn't
executing in STM context).

> but there's absolutely no data to support this.

Data of this sort is _extremely_ expensive to collect reliably.

I remember reading about a "GitHub study" that did not correctly classify what
a "type error" is. Is that the one?

~~~
pron
> Not guaranteed STM. If you do IO in Clojure's STM, you just get a runtime
> error

Two things. 1/ "Guaranteeing" effects has little to do with PFP. 2/ There's no
data to support that this level of guarantees has any effect on program
quality. I'm all in favor of effect systems (though not PFP) because they're
worth a try; but there's a long way to go from "interesting" and "it actually
works!" especially if you have no data to support this.

> Data of this sort is extremely expensive to collect reliably.

Fine, but the alternative is to use an unproven, negligibly adopted (Haskell
industry adoption rates are between 0.01-0.1%), badly tooled language, that
requires a complete paradigm shift on faith and enthusiasm alone. I don't
need, or want, to prove that Haskell isn't effective (TBH, I really wish
Haskell, or any other novel approach did have some big gains); it is Haskell's
proponents that need to support their claims with at least _some_ convincing
evidence.

> Is that the one?

I don't remember, but what does it matter? Again, I don't want to prove
Haskell's ineffectiveness; it's people who want to convince others to use
Haskell that should collect some evidence in its favor.

~~~
tome
Peaker started off by explaining that

> Haskell gains power in its framework for restricting code

and you have nudged the thread in the direction of your pet topic

> it's people who want to convince others to use Haskell that should collect
> some evidence in its favor

How exactly did we end up here?

~~~
pron
I started off by explaining why Haskell needs monads and why they don't add
power, and later why PFP and restricting what code can do are orthogonal.
Peaker then spoke of tangible gains, and I said that something is not tangible
if you can't show it, and then you nudged the thread in the direction of your
pet project which, apparently, is patronizing people.

More to the point, if you claim Haskell (or, in particular, monads) has
theoretical benefits, you need to be able to explain them (and restricting
effect is not a theoretical explanation as it doesn't require monads); if your
explanation is "this has benefits in practice" then you're really claiming
empirical, rather than theoretical, benefits, but then you need to be able to
support those. If you go around saying monads have theoretical benefits but
when debated claim empirical benefits and then don't support those, expect to
be called out for selling snake-oil (and just to be clear, my point can be
summarized as follows: Haskell takes a very clear, very opinionated
theoretical approach[1], which is beautifully elegant but is not theoretically
better or worse, just very different, with its particular pros and cons.
Empirically, I claim, Haskell has not yet shown significant benefits).

[1]: Subroutines as functions; mutation as effect; HM types (+ typeclasses).
Type system aside, there are obviously many alternatives (other than
"classical" imperative languages). For example, languages that require full
verification for safety-critical realtime code often employ the synchronous
model. In that model, each subroutine isn't a function (but a continuation),
but the program itself can be viewed as a function from one program state to
the next, and mutation isn't a side-effect, but is very much controlled (see
[https://who.rocq.inria.fr/Dumitru.Potop_Butucaru/potopEmbedd...](https://who.rocq.inria.fr/Dumitru.Potop_Butucaru/potopEmbeddedHandbook.pdf)).
This is a model that lends itself very nicely to formal reasoning, and there
are others.

~~~
tome
> if your explanation is "this has benefits in practice" then you're really
> claiming empirical, rather than theoretical, benefits, but then you need to
> be able to support those

I disagree strongly with your position on this.

Peaker has clearly found, as I have, that Haskell is more effective for him.
We have pointed out repeatedly what the benefits are (as well as pointed out
the drawbacks). It would be simply impossible for the stated benefits not to
be beneficial in practice. The only question is whether the benefits are
outweighed by the drawbacks. If you have already paid the one-off cost of
learning the gnarly corners of Haskell then those drawbacks are significantly
diminished.

You must realise that your insistence on empirical research is an idiosyncrasy
and that people make decisions on programming languages all the time without
such research. They are not, in its absence, merely "guessing" which language
to use. They are making a decision based on their understanding of their own
needs and on the strengths of the languages under consideration.

Furthermore, I wouldn't trust any empirical research on languages any more
than I would trust empirical research on cholesterol[1].

In the absence of convincing empirical research pointing in either direction I
think people should be free to make informal claims that "Haskell is more
reliable than Python and more productive than Ada" based on their own
experience and the experience of their colleagues. It's all we've got to go
on. It's not ideal, but it's also not wrong.

[1] [http://www.cbc.ca/news/health/old-cholesterol-warnings-
steep...](http://www.cbc.ca/news/health/old-cholesterol-warnings-steeped-in-
soft-science-may-be-lifted-in-u-s-1.2953462)

~~~
pron
> It would be simply impossible for the stated benefits not to be beneficial
> in practice. The only question is whether the benefits are outweighed by the
> drawbacks. If you have already paid the one-off cost of learning the gnarly
> corners of Haskell then those drawbacks are significantly diminished.

Provided that you think that the only significant drawback is the learning
curve. I think that the PFP abstraction itself is a drawback, and that you can
get most of Haskell's benefits (leaving aside their real-world value for a
moment) without it. In particular, I think that you cannot point at _any_
real-world benefits of the Haskell approach over, say, the OCaml approach, and
that's even before applying things like effect systems to OCaml.

> people make decisions on programming languages all the time without such
> research

Of course, but if they do, they cannot claim benefits that aren't real.
The reason some people use Haskell is that it fits well with how they like to
think about and write code; the reason many people don't use Haskell is
because it doesn't fit with their preferred style, _and_ because there is no
compelling evidence for why they should even try to change their methodology.

My only "insistence" is that you either claim actual benefits _and_ present
empirical data to support it, or don't present empirical data but claim only
personal preference. What you don't get to do is say, "Haskell leads to code
with significantly fewer serious bugs" _while at the same time_ not showing any
evidence that it does. The reason you don't get to do that is that such a
claim is one with serious (theoretical and financial) implications, and strong
claims require strong evidence (or at least some more convincing evidence than
what we have).

> Furthermore, I wouldn't trust any empirical research on languages any more
> than I would trust empirical research on cholesterol

OK, but you're saying that I should go full vegan based on even less than
that.

> It's not ideal, but it's also not wrong.

How do you know it's not wrong?

But let me refine this. I agree that it's very likely that "Haskell is more
reliable than Python and more productive than Ada", but that's not the real
argument. The real argument is that Haskell is _a lot_ more reliable than
Python and _a lot_ more productive than Ada. I don't see how you can possibly
claim that based just on personal experience.

But let me refine this further: there's personal experience and personal
experience. There's personal experience based on measuring actual project
costs and comparing them -- even though projects are not _exactly_ comparable
-- now _that's_ not ideal but not wrong, and there's personal experience
based on gut feeling. I don't know how you can say that that's not wrong.
Also, there's collected personal experience from hundreds of projects in many
domains and various sizes -- _that's_ not ideal but not wrong -- and there's
personal experience from a handful of projects, nearly all quite small, in one
or two domains. I don't know how you can say that's not wrong (unless you
qualify the domain, which you don't).

At this point in time, all we can say about Haskell is this: some people
greatly enjoy Haskell's programming paradigm; some people report possibly
significant but not big gains in the handful of medium-to-large production
projects where the language has been used. So far the approach is showing some
(though not great) promise and requires further consideration.

~~~
tome
> Provided that you think that the only significant drawback is the learning
> curve.

No, you misread me. I acknowledge other significant drawbacks, such as
immaturity of tooling and infrastructure.

> I think that the PFP abstraction itself is a drawback, and that you can get
> most of Haskell's benefits (leaving aside their real-world value for a
> moment) without it.

OK, that would be great! I am genuinely interested in understanding how to do
that. I would _love_ to see PFP as a drawback and obtain its benefits without
requiring its rigors. So far I have failed to understand your ideas about how
to do that and I can only continue to see PFP as a massive boon.

> In particular, I think that you cannot point at any real-world benefits of
> the Haskell approach over, say, the OCaml approach, and that's even before
> applying things like effect systems to OCaml.

(One of) the real-world benefit(s) is that I can write a substantial part of a
program and _know_ from inspecting only a single line (its type signature)
what effects it performs. How can OCaml give me that benefit?
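To make the property concrete, here is a minimal sketch (the names are illustrative, not from any real codebase):

```haskell
-- A pure function: the signature alone guarantees it performs no IO.
double :: Int -> Int
double x = 2 * x

-- An effectful action: the IO in its type tells the caller, from this
-- one line, that it may interact with the outside world.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

-- With mtl-style constraints (assuming the usual MonadReader/MonadWriter
-- classes from the mtl package), a signature can narrow the effects
-- further: this computation may read config and log, and nothing else.
-- step :: (MonadReader Config m, MonadWriter [String] m) => m Int
```

The last (commented-out) signature is the stronger form of the claim: the effect budget of a whole subcomputation is visible in one line.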

> What you don't get to do is say, "Haskell leads to code with significantly
> fewer serious bugs" while at the same time not showing any evidence that it
> does. The reason you don't get to do that is that such a claim is one with
> serious (theoretical and financial) implications, and strong claims require
> strong evidence (or at least some more convincing evidence than what we
> have).

I think that says more about how you interpret informal comments on the
internet than it does about those making the comments.

> > Furthermore, I wouldn't trust any empirical research on languages any more
> > than I would trust empirical research on cholesterol

> OK, but you're saying that I should go full vegan based on even less than
> that.

Interesting. Where did I say you _should_ go (the equivalent of) "full vegan"?

> > It's not ideal, but it's also not wrong.

> How do you know it's not wrong?

At least, it is not _known_ to be wrong.

> But let me refine this. [... snipped useful elucidation ...]

> At this point in time, all we can say about Haskell is this: some people
> greatly enjoy Haskell's programming paradigm; some people report possibly
> significant but not big gains in the handful of medium-to-large production
> projects where the language has been used. So far the approach is showing
> some (though not great) promise and requires further consideration.

Entirely agreed. I think my gripe with you at this point is that you read too
much in to people's informal claims and cause unnecessary aggravation by
derailing threads. It's quite clear that you could actually contribute
constructively, so I wish you would. Please can you explain in detail how I
can get the benefits of PFP without its drawbacks?

~~~
pron
> I would love to see PFP as a drawback and obtain its benefits without
> requiring its rigors.

> (One of) the real-world benefit(s) is that I can write a substantial part of
> a program and know from inspecting only a single line (its type signature)
> what effects it performs. How can OCaml give me that benefit?

That's a property, not a real-world benefit. It's like saying that one of the
real-world benefits of Haskell is that its logo is green. In any case, I don't
know about OCaml, but in Java I just click a button and get the call tree for
the method (or, inversely, get the reverse tree for all methods eventually
calling printf). More to the point, see some of Oleg Kiselyov's work on type
systems for continuations here:
[http://okmij.org/ftp/continuations/](http://okmij.org/ftp/continuations/)

> I think that says more about how you interpret informal comments on the
> internet than it does about those making the comments.

I think that you should decide whether this is a serious discussion or
grandstanding.

> Where did I say you should go (the equivalent of) "full vegan"?

Maybe you didn't say that I _should_, but you did say that it's _better_ to
be vegan (i.e. make a very significant "lifestyle" change without any evidence
to its effectiveness).

> At least, it is not known to be wrong.

That's true. But I wouldn't go around telling people that being vegan has
great health benefit if all we know is that it's not been found to kill you.

> I would love to see PFP as a drawback and obtain its benefits without
> requiring its rigors. So far I have failed to understand your ideas about
> how to do that and I can only continue to see PFP as a massive boon.

OK, but first, a few things: 1/ I don't know if by "rigors" you meant
difficulties or rigorousness, but if the latter, then I don't see why you
conflate rigor with the pure-functional abstraction. Most formally verified,
safety-critical software is written in languages that are far more rigorous
than Haskell, yet are not pure-functional. Which leads us to 2/ these are not
"my ideas"; if correctness is your goal (as it seems to be), most languages
guaranteeing correctness do not espouse the PFP abstractions. Haskell, Coq,
Idris and Agda are used far less than other approaches to ensuring software
correctness. Finally, 3/ I'd like to be careful when I say "benefits", because
we don't know whether they are true benefits, neutral or even detrimental to
software at large. All I can say that in this context, when I say "benefits" I
mean things that I (and you) believe to be positive and see as potentially
advantageous in the "real world".

Now, I will give you two examples (of languages used more than Haskell/Coq
etc.) of "correct" languages. Both are very rigorous in the sense of being
completely formal, yet they do not suffer from PFP's downsides, mainly by
being measurably _much_ easier to learn/teach/adopt. They are not generally
applicable, but neither is Haskell. The first is the set of synchronous
languages, now used by the industry to design safety-critical realtime
software, as well as a lot of hardware. Instead of PFP, it relies on what's
known as the "synchronous hypothesis". It has been proven over three decades
as an effective, practical method of writing verifiably correct software by
"plain" engineers in hundreds of critical real-world systems. You can read
more about it here[1]. A generalization of the approach is called Globally
Asynchronous, Locally Synchronous, or GALS, and I believe it has the potential
of being a great, more widely applicable way of writing software in a way that
lends itself to careful reasoning.

The second (you probably saw it coming), is TLA+. It's not a programming
language, so I won't compare it to Haskell but to Coq. Unlike Coq, TLA+ does
not rely on PFP, or Curry-Howard (neither do other verification tools, like
Isabelle) but goes a step further in not being typed at all. It is not
functional yet fully mathematical ("referentially transparent"), and its main
advantage is that it has the same "proving strength" as Coq _when it comes to
verifying algorithms_[2] while taking days to learn instead of months, and
not requiring any mathematical background beyond what any engineer already
has. I guess that the answer to "how?" would be "in a manner
bearing a lot of resemblance to the synchronous hypothesis".

There are, of course, other formal approaches (like CSP), but synchronous
programming in particular has had a lot of success in the industry.

If you want to know about (typed) monads vs. continuations, and their
relationship to typed effects, I'll refer you again to my blog post on the
subject: [http://blog.paralleluniverse.co/2015/08/07/scoped-
continuati...](http://blog.paralleluniverse.co/2015/08/07/scoped-
continuations/) and to Oleg Kiselyov's work, which I've linked to above.

[1]:
[http://link.springer.com/chapter/10.1007%2F978-3-319-11737-9...](http://link.springer.com/chapter/10.1007%2F978-3-319-11737-9_15)

[2]: Coq may be more powerful when proving general mathematical theorems, but
Coq was designed as a general theorem prover, while TLA+ is a language for
verifying software in particular.

~~~
tome
Firstly, and briefly, I don't agree with your approach to epistemology. I
think we're never going to agree there. Let's just agree to be mutually
antagonistic on that front so we can get to the important issue, which is
improving software development.

Secondly, I'm interested in general purpose programming, so as useful and
interesting as your explanation of synchronous languages and TLA+ are, they
are not relevant to me.

I _am_ interested, though, in your thoughts on effects, monads and
continuations. I've read everything you've written on the topic including your
code on Github (and much of what Wadler and Oleg have written) but I'm afraid
I'm no closer to understanding what you're getting at.

Does your notion of "continuation" require threads? If so, Python fails to
have "continuations", right?

~~~
pron
> I'm interested in general purpose programming, so as useful and interesting
> as your explanation of synchronous languages and TLA+ are, they are not
> relevant to me.

There's nothing non-general-purpose in that approach. See, e.g., the front-end
language Céu[1], by the group behind Lua (I think). The short video tutorial
on Céu's homepage can give you a good sense of the ideas involved (esp. with
regards to effects), and their very general applicability. I find that just as
the functional approach is natural for data transformation, the synchronous
approach is natural for composing control structures and interaction with
external events. I think it's interesting to contrast that language with Elm,
that targets the same domain, but uses the PFP approach. The synchronous
approach in Céu is imperative (there are declarative synchronous languages,
like Lustre, that feel more functional) and allows mutation, but in a very
controlled, well understood way. The synchronous model is very amenable to
formal reasoning, and has had great success in the industry.

It's just that hardware and embedded software have always been decades ahead of
general-purpose software when it comes to correctness and verification, simply
because the cost difference between discovering bugs in production and bugs in
development has always been very clear to them (and very big to boot). There
have been several attempts at general-purpose GALS languages (see SystemJ[2],
a GALS JVM language, which seems like a recent research project gone defunct).
OTOH, I believe Haskell would also be considered by most large enterprises to
not be production-quality just yet.

Also, I believe that spending a day or two (that's all it takes -- it's much
simpler than Haskell) to learn TLA+ would at least get you out of the
typed-functional frame of mind. Not that there's anything wrong with the
approach (aside
from a steep learning curve and general distaste in the industry), but I am
surprised to see people who are into typed-pure-FP who come to believe that
this is not only the best, but the only approach to write correct software,
while, in fact, it is not even close to being the most common one. In any
event, TLA+ is very much a general purpose language — it’s just not a
_programming_ language — and it will improve your programs regardless of the
language you use to code them: it is specifically designed to be used
_alongside_ a proper programming language (it is used at Amazon, Oracle,
Microsoft and more for large, real-world projects). What's great is that it
helps you find deep bugs regardless of the programming language you're using,
it's very easy to learn, and I find it to be a lot of fun.

> I am interested, though, in your thoughts on effects, monads and
> continuations.

Hmm, I’m not too sure what more I can add. Any specific questions? Basically,
anything that a language chooses to define as a side-effect (and obviously IO,
which is “objectively” a side effect) can be woven into a computation as a
continuation. The computation pauses; the side effect occurs in the “world”;
the computation resumes, optionally with some data available from the effect.
Continuations naturally arise from the description of computation as a process
in all _exact_ computational models, but in PFP computation is _approximated_
as a function, not as a continuation. To mimic continuations, and thus
interact with effects, a PFP language may employ monads, basically splitting
the program/subroutine into functions that compute between consecutive “yield”
points, and the monad’s bind that serves as the effect. Due to the insistence
of such languages on the function abstraction, having the subroutine return
just a single value, composing multiple monads can be challenging, cumbersome,
and far from straightforward. Languages that aren't so stubborn may choose to
have a subroutine declare (usually if the language is typed, that is) a normal
return value, plus multiple special return values whose role it is to interact
with the continuation’s scope. An example of such a typed event system is
Java’s checked exceptions. A subroutine’s return value interacts with its
caller in the normal fashion, while the declared exceptions interact with the
continuation’s scope (which can be anywhere up the stack) directly. This
normally results in a much more composable pattern, and one that is simpler
for most programmers to understand.
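The "splitting at yield points" idea above can be sketched with a toy effect monad (illustrative names, not a real library; real Haskell would use IO or a free-monad package):

```haskell
-- A toy effect type: a computation either finishes with a value, or
-- pauses at a "yield point", asking the interpreter to perform an
-- output effect before resuming.
data Eff a
  = Pure a
  | Output String (Eff a)  -- an effect request plus the continuation

instance Functor Eff where
  fmap f (Pure a)     = Pure (f a)
  fmap f (Output s k) = Output s (fmap f k)

instance Applicative Eff where
  pure = Pure
  Pure f     <*> x = fmap f x
  Output s k <*> x = Output s (k <*> x)

-- bind stitches the pieces between yield points back together: this is
-- the "monad's bind that serves as the effect" role described above.
instance Monad Eff where
  Pure a     >>= f = f a
  Output s k >>= f = Output s (k >>= f)

output :: String -> Eff ()
output s = Output s (Pure ())

-- A pure interpreter: "runs" the program by collecting its outputs.
run :: Eff a -> ([String], a)
run (Output s k) = let (ss, a) = run k in (s : ss, a)
run (Pure a)     = ([], a)
```

For example, `run (output "a" >> output "b" >> pure 3)` evaluates to `(["a","b"], 3)`: the effects were woven through the computation, yet everything stayed a pure value.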

> Does your notion of "continuation" require threads? If so, Python fails to
> have "continuations", right?

"My" notion of continuation requires nothing more than the ability of a
subroutine to block and wait for some external trigger, and then resume.
Languages then differ in the level of reification. Just as you can have
function pointers in C, but that reification is on a much lower level than in,
say, Haskell or Clojure, so too languages differ in how their continuations
are reified. So a language like Ruby is single-threaded and does not reify a
continuation at all (I think). You can't have a first-class object which is a
function blocked, waiting for something. Python, I think, has yield, which
does let you pass around a subroutine that's in the middle of operation, and
can be resumed. In Java/C/C++ you can reify a continuation as a thread
(inefficient due to implementation). In Go you can do that only indirectly,
via a channel (read on the other end by a blocked lightweight thread). In
Scheme, you can have proper reified continuations with shift/reset (and
hopefully in Java, too, soon, thanks to our efforts).
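The minimal notion described here, a subroutine blocked and waiting as a first-class value, can be sketched in a few lines of Haskell (illustrative, not taken from any of the languages mentioned):

```haskell
-- A reified "paused subroutine": either done, or blocked waiting for an
-- Int from outside, with the rest of the computation captured as a
-- plain function.
data Step a = Done a | Await (Int -> Step a)

-- A computation that blocks twice, then resumes with the sum.
addTwoInputs :: Step Int
addTwoInputs = Await (\x -> Await (\y -> Done (x + y)))

-- Because the paused state is a first-class value, a driver can resume
-- it whenever the external trigger arrives.
feed :: [Int] -> Step a -> Maybe a
feed _        (Done a)  = Just a
feed (i : is) (Await k) = feed is (k i)
feed []       (Await _) = Nothing  -- still blocked, no input left
```

The degree to which a language lets you name, store, and pass around a value like `Await k` is exactly the "level of reification" being discussed.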

[1]: [http://ceu-lang.org/](http://ceu-lang.org/)

[2]:
[http://dl.acm.org/citation.cfm?id=1823324](http://dl.acm.org/citation.cfm?id=1823324)

------
Tinned_Tuna
I actually really like the language, but there are several key points which
stop me from using it in any production systems:

First and foremost, the build system. Oh god, the build system. Cabal is
wonderful in some respects, but I have often found myself in a situation where
the answer is "delete the ~/.cabal and ~/.ghc-pkg". This should not happen.

Laziness. You get concise programs, but then the memory behavior is unknown
until you read the code carefully and/or learn to read GHC's IR. You know,
I'd really enjoy being able to know roughly how much memory I need to buy
for the systems I'm deploying.

Experimental or non-standard extensions. Just no. I'm not going to rely on
non-standard or experimental "stuff" that ties me to a single compiler. This is
made all the worse in that they're all hard-bound to various GHC versions and
you're expected to use them in production (I'm looking at you, web
frameworks). We have standards, stick to them.

It is for these reasons that I can't recommend Haskell to people as anything
more than an interesting language to learn. Until the above is fixed (many of
which are cultural) I will not use Haskell in production.

------
olavk
Haskell, like Perl, is optimized towards writing code rather than reading
code. I would like a language which is to Haskell what Python is to Perl.

~~~
PeCaN
How about OCaml? It's not as theoretically advanced as Haskell, but it gets
work done, is easy to read, fast, easy to reason about the complexity of, has
decent libraries, and is generally quite pleasant.

~~~
oldmanmike
I would be interested in learning more about the "easy to reason about the
complexity" part of OCaml. The problem I'm having with Haskell isn't that it's
too slow, but that it's so high-level and gets so aggressively optimized at
compile-time that the resulting binary has performance characteristics that
are unpredictable and non-deterministic. Performance can
regress between GHC releases, for instance. This just seems to be a problem of
high-level programming languages in general, they're really slow until you
start optimizing them at compile-time. And with each one of those
optimizations, you get another layer of indirection with regards to how
disconnected the speed of the code you wrote _should_ be vs. how fast the
final product actually is. Haskell has no business being as fast as it is
already, but GHC gives it to us at the price of long compile times and very
fuzzy performance properties that are very finicky and expert-friendly.

The problem I see here is just the unsolved computer science problem of how to
reduce the complicated, drawn-out, and error-prone job of a C programmer into
the succinct job of a functional programmer without fundamentally pretending
modern computers work in a way they don't.

~~~
sid-kap
Not trying to detract from your point, but it's interesting to note that even
C and x86 are "pretending modern computers work in a way they don't" (see
[http://blog.erratasec.com/2015/03/x86-is-high-level-
language...](http://blog.erratasec.com/2015/03/x86-is-high-level-
language.html#.VxQh7d8zqHs)).

------
bojo
Very interesting and constructive thread. A lot of information about various
Haskell warts, problems people have, and possible solutions to overcome some
of them.

------
tome
Can I encourage everybody to classify their complaints according to the
following scheme?

[http://h2.jaguarpaw.co.uk/posts/complaining-about-
languages/](http://h2.jaguarpaw.co.uk/posts/complaining-about-languages/)

I believe it would help everybody understand how we can improve people's
experience around Haskell.

~~~
pekk
These are all interacting parts which make up a system. You can't separate the
parts out so cleanly when they have so many interactions.

~~~
tome
Care to give some examples?

Some examples which from my point of view seem pretty clear cut:

Laziness and purity fall squarely under "language specification". There's no
way of seeing them as a problem with libraries or community.

'I think a lot of the Haskell community is obsessed with mathematical
"cuteness", which basically they take to mean "infix-operator-heavy"
notation.' is a problem of community (and arguably language specification),
certainly not of tooling or infrastructure.

"String should not be [Char]" is a problem with libraries (and arguably
specification). Certainly not community or tooling.

------
alectheriault
To everyone complaining about laziness, the new GHC 8.0 compiler (due to be
released any day now) has a ['Strict' pragma option][1].

[1]:
[https://ghc.haskell.org/trac/ghc/wiki/StrictPragma](https://ghc.haskell.org/trac/ghc/wiki/StrictPragma)

~~~
tome
Unfortunately (?) this doesn't really make Haskell a strict language.

------
Findeton
I just prefer to use languages without a Garbage Collector.

~~~
hkjgkjy
Any experience doing functional programming without a garbage collector?

I'm doing a lot of Clojure, and can't really see how it would be a pleasant
experience without GC but curious to hear any interesting thoughts.

~~~
qb45
ATS claims to support memory safe functional programming without GC. I never
used this language, though.

IIRC the idea is that allocating memory or writing something through a pointer
returns a magic "cookie" value which certifies that the pointer now points to
a block of certain size or data of certain type. This cookie can be bound to a
variable like any other value and passed to other code together with the
pointer so that the code can know that it's safe to access the pointer.

There are rules - cookies can't be pulled out of thin air or cloned, have to
be destroyed during deallocation, etc. All of this is checked during
compilation and removed from the final executable.

[http://www.ats-lang.org/](http://www.ats-lang.org/)
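The shape of that API can be sketched loosely in Haskell (hypothetical names; ATS enforces the rules below statically with linear types, while plain Haskell cannot, so this only shows the outline of the idea, not a safe implementation):

```haskell
-- A stand-in for a pointer to an Int (here it simply carries the
-- stored value, purely for illustration).
newtype Ptr    = Ptr Int
-- The "cookie": evidence that the pointer currently points to valid data.
data    Cookie = Cookie Ptr

-- Allocation yields the pointer together with the proof of validity.
alloc :: Int -> (Ptr, Cookie)
alloc v = (Ptr v, Cookie (Ptr v))

-- Access requires presenting the proof alongside the pointer.
deref :: Ptr -> Cookie -> Int
deref (Ptr v) _ = v  -- stand-in for a real memory read

-- Deallocation consumes the proof; in ATS the type system then rejects
-- any further use of it (no cloning, no conjuring one from thin air),
-- and the whole mechanism is erased from the compiled executable.
free :: Ptr -> Cookie -> ()
free _ _ = ()
```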

------
denim_chicken
Haskell sucks because mutability and strictness are extremely important for
writing large complex programs, and Haskell deliberately makes both of these
things inconvenient.

Also, Haskell's ecosystem and tooling is still crappy, probably because
writing complex programs in Haskell is needlessly difficult (due to laziness and
immutability).

And the "type-safety" guarantees are a red herring. They are not that useful
in practice.

~~~
tome
> Haskell sucks because mutability and strictness are extremely important for
> writing large complex programs, and Haskell deliberately makes both of these
> things inconvenient.

That's funny, because I'd say that _immutability_ is _essential_ for writing
large and complex programs.

Strictness, on the other hand, hmm, I can take it or leave it.

> Haskell's ecosystem and tooling is still crappy

No argument here.

> because writing complex programs in Haskell is needlessly difficult

Argument here :)

> And the "type-safety" guarantees are a red herring. They are not that useful
> in practice.

I've found the opposite. Type safety has saved me so many headaches I'm never
going back.

~~~
denim_chicken
Well, I can't say much other than that I encourage everyone to try Haskell for
themselves and form their own opinion -- if they can find the time to get over
the language's extremely steep learning curve.

------
virmundi
Mine is that it's built on a lie. A noble lie, but a lie nonetheless. In
order to be pure it has to "stop the world and record it" with monads to do
any IO. It pretends that the world can be snapshotted and immutable. In
reality the monad is only an extra box with a belief that the world stops.
The world in fact does not.

~~~
solidsnack9000
The IO monad doesn't involve actually recording anything... it's not like MVCC.
What are you actually trying to say here?

~~~
virmundi
It's that a monad looks pure, but actually isn't. Call addition with the same
args and you should get the same answer. Calling next on a stream of IO is
destructive.

~~~
solidsnack9000
> ...a monad looks pure, but actually isn't.

This is not true. The arguments for `next` include both:

* The target stream.

* A "state token" or universe.

You get different results with different state tokens.

At a high level, the thing to consider is that we can model all imperative
programming with functional programming -- we can model Turing Machines with
Lambda Calculus -- as well as go the other way around, because Turing Machines
and the Lambda Calculus are, for all intents and purposes, equivalently
expressive.

Modeling IO with state monads isn't a hack; it's the point.
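A toy model of the state-token view (GHC's real IO type is abstract and compiled quite differently; this only illustrates why "different state tokens, different results" preserves purity):

```haskell
-- A fake world: nothing but a queue of pending input lines.
newtype World = World [String]

-- An "action" is a function from one world state to the next world
-- state plus a result. Same action, different token, different result,
-- so referential transparency is never violated.
newtype MyIO a = MyIO (World -> (World, a))

runMyIO :: MyIO a -> World -> (World, a)
runMyIO (MyIO f) = f

-- "next" on the input stream: consumes one line of the world.
next :: MyIO String
next = MyIO (\(World (l : ls)) -> (World ls, l))

-- The token is threaded through bind internally; user code never
-- touches it, and nothing requires keeping old worlds around.
bind :: MyIO a -> (a -> MyIO b) -> MyIO b
bind (MyIO f) g = MyIO (\w -> let (w', a) = f w in runMyIO (g a) w')
```

Running two `next`s in sequence on `World ["a", "b"]` yields `("a", "b")`: each call received a different world token, which is what makes "calling next twice" consistent with purity.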

~~~
virmundi
Thank you for the explanation. Haskell's not as bad as I thought in this
respect. My understanding was that IO streams were a single-input function
(i.e. the stream). As a result the destructiveness would be less obvious.

Can you replay a get from stream by passing in an older token?

~~~
solidsnack9000
Because the token is passed only within the implementation of `>>=`, which is
private to the IO module, you never have access to it. So no need for the
runtime to keep snapshots lying around.

Monads provide a way to combine them and a way to inject values into them; but
not all monads provide the inspection interface that would allow you to
examine that state token. The definition of a monad guarantees only the
injection and the combination (the "unit" and "multiply").
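The "unit" and "multiply" named here can be made concrete with the list monad, where none of the machinery is hidden (standard textbook definitions, sketched by hand):

```haskell
-- "unit" injects a value into the monad.
unit :: a -> [a]
unit x = [x]

-- "multiply" (join) flattens one nested layer.
multiply :: [[a]] -> [a]
multiply = concat

-- >>= is definable from the two: map, then flatten.
bindVia :: [a] -> (a -> [b]) -> [b]
bindVia xs f = multiply (map f xs)
```

For example, `bindVia [1, 2] (\x -> [x, x * 10])` gives `[1, 10, 2, 20]`. Nothing in these definitions exposes any internal state, which is the point: a monad guarantees injection and combination, not inspection.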

