Hacker News new | comments | show | ask | jobs | submit login
What I Wish I Knew When Learning Haskell 2.0 (stephendiehl.com)
251 points by tenslisi 1396 days ago | hide | past | web | favorite | 94 comments

One thing that helps a lot is the various undergrad course syllabi that emphasize type systems and aspects of FP: immutability, segregating and strictly marking side effects, etc.



https://www.inf.ed.ac.uk/teaching/courses/inf1/fp/ (P Wadler)





Also, the Apress "Beginning Haskell" looks pretty good, though the writing isn't perfectly clear. The example topics and sample code look good, and that's what mostly counts.


The books by Hutton and Simon Thompson ("Craft of FP", 3rd ed.) are good intros as well. Haskell School of Music is really good, but I'm not sure about it for people who aren't versed in music topics (harmony/theory, composition, MIDI, DSP).



I want to highlight this particular course: http://www.seas.upenn.edu/~cis194/

It's REALLY good and can be gone through quickly while still getting a feel for the "point" of Haskell.

I like Hutton's book, it seems simple and elegant without the flair of LYAH (which is good too, don't get me wrong). I picked up the newer Beginning Haskell and it seems decent as well; it has a lot of material akin to Real World Haskell.

Also (I think OP linked the #cabal anchor to highlight sandboxes and other cabal hygiene), take note of the 1.20 release: dependency freezing, and no relinking by default.


I think it's kind of amusing that right after sternly warning the reader to avoid monad tutorials, he then proceeds to write a monad tutorial. I guess you just can't resist...

The monad tutorials the author suggests avoiding are those that lean on analogies: "Monads are like ...". The analogies are always leaky and pretty much always give some wrong impression to beginners.

The presentation here, however, just gives laws and usage examples. This approach makes it harder to give incorrect impressions to beginners.

This is my favorite monad analogy, though: "Monads are burritos?" http://chrisdone.com/posts/monads-are-burritos

After I finally understood monads I thought about writing a tutorial. My thought process was "Well I understand it now, why couldn't someone have just explained it this way". Then I reevaluated and realized everyone has those thoughts. It's just a hard concept to grasp on the first try. One really just has to grind away and force themselves to use monads for a long enough time to really grok them.

I wouldn't call monads "hard to grasp" so much as "hard to explain" or "easy to obfuscate." That's what the "mumble mumble endofunctors" joke is getting at: sometimes they're a useful generalization, but more often they're a way to make easy things sound hard, or to make bad teachers sound smart.

I agree that they often make things seem harder than they really are. The monad tutorial that works best for me is the "don't mention monads" tutorial. That's the one where you just show some specific task to complete, like composing (a -> Maybe b) functions, and show that there is a neat operator that does it. Then, at some future time, show another composition task, like composing (a -> [b]) functions, and show the same operator again.

Starting with the abstract just messes people up and gets them confused. You don't start throwing around Kleisli Arrows and endofunctors right off the bat.
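To make the "don't mention monads" progression concrete: the neat operator in both cases is Kleisli composition, (>=>) from Control.Monad. The helper functions below (safeRecip, safeSqrt, pairs) are made-up names for illustration, a sketch rather than anything from the article:

```haskell
import Control.Monad ((>=>))

-- Two functions of shape (a -> Maybe b):
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- The "neat operator" composes them, short-circuiting on Nothing:
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt

-- The very same operator also composes (a -> [b]) functions:
pairs :: Int -> [Int]
pairs n = [n, -n]

doublePairs :: Int -> [Int]
doublePairs = pairs >=> (\x -> [x, x * 10])
```

Only later do you mention that both compositions work because Maybe and [] happen to be monads.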

>"hard to explain" or "easy to obfuscate."

I wouldn't call it either of those things. They are so general that people have a hard time getting them. That's all.

>That's what the "mumble mumble endofunctors" joke is getting at: sometimes they're a useful generalization, but more often they're a way to make easy things sound hard, or to make bad teachers sound smart.

That is not at all what the joke is getting at (and that doesn't even make sense).

From my own experience teaching Haskell, monads are incredibly hard to explain in terms that are easy for CS students with a systems programming background to understand.

They are easy to explain to 2nd or 3rd year math students.

The difference is in the audience's cognitive bias.

If I understand correctly (and I may not), monads live in two different worlds.

A (Haskell) monad has to be a (mathematical) monad. Haskell is counting on it not just to have certain function signatures, but to have certain properties - certain identities that it satisfies. This lets Haskell do some transformations under the hood. If you hand Haskell something that you say is a monad, and it doesn't satisfy the identities, you're going to get some really unpredictable results.
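For the record, the identities in question are the three monad laws. Here is a hand-rolled spot check for Maybe (a sketch with made-up f and g, not a proof):

```haskell
-- Two arbitrary Maybe-producing functions for the check:
f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g x = Just (x + 1)

-- The three monad laws, instantiated at Maybe:
leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= f) == f 3
rightIdentity = (Just 3 >>= return) == Just 3
associativity = ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))
```

A type with the right signatures but failing these equations is exactly the "says it's a monad but isn't" case the parent warns about.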

That's the part that's easy to explain to 2nd or 3rd year math students.

But, unless you're writing an application in group theory, as a programmer you don't really need a monad. You need something that will let you operate on a value that's in a Maybe, and therefore you need a monad. That is, you don't care about it being a monad for the sake of monads, you care about it as a way to solve a particular programming problem.

That's the part that's hard to explain to CS students. They just want to operate on a Maybe. They don't understand why they have to care about it being a monad, or what the connection is between "monad" and "a kind of smuggler that will smuggle this function in to operate on a value that's stuck in jail in a Maybe" (or whatever analogy they're using).

If you see a place my understanding can be improved, feel free to correct it...
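Sticking with the parent's jail framing: the "smuggler" is just fmap or (>>=) on Maybe. lookupAge below is a hypothetical example, not anything from the thread:

```haskell
-- Hypothetical lookup that may fail:
lookupAge :: String -> Maybe Int
lookupAge "alice" = Just 30
lookupAge _       = Nothing

-- fmap applies a plain function inside the Maybe:
nextBirthday :: String -> Maybe Int
nextBirthday name = fmap (+ 1) (lookupAge name)

-- (>>=) chains another Maybe-producing step, short-circuiting on Nothing:
halfAge :: String -> Maybe Int
halfAge name = lookupAge name >>= \a ->
  if even a then Just (a `div` 2) else Nothing
```

The CS student's goal ("just operate on a Maybe") is satisfied without ever saying the word monad.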

    Words confound the mind
    Concepts will obscure the truth
    Code alone brings forth

There's also a very detailed tutorial on writing a toy compiler with Haskell and LLVM on this site:


On a small monitor (11" MBA), not all the topics in the list are visible and the list does not seem to be scrollable.

Somewhat related: for those who (like me) were searching for a long time for a quick and complete Haskell tutorial (but were left with very introductory guides for non-programmers like Learn You a Haskell for Great Good), look at this tutorial:

https://www.fpcomplete.com/school/to-infinity-and-beyond/pic... Haskell Fast & Hard.

I recommend LYAH even to programmers; there are enough new things in the language that it makes sense to go slow at first.

I think beginners get lost because they try to go too fast at first and get impatient.

For example, see below about the guy who tried to do GPU programming early on and got frustrated.

Maybe you're right. I'll never know because I've read LYAH almost entirely before getting to HF&H.

I thought this was supposed to be a list of things to learn _before_ learning Haskell, but most of these are things you'd have no idea about unless you already know at least a little Haskell.

Note that these are intermediate-level Haskell topics, as the intro describes.

This was very timely and interesting to me as I just started learning Haskell. Thanks!

The main force holding me back from learning Haskell is the cryptic syntax. I totally understand that the type system is (supposed to be) superior to other languages', but it simply makes it really hard to find the time. I am also unsure how useful this language can be in everyday use. There are so many great languages out there nowadays: Clojure, Go, Erlang. It is easy to cover the spectrum with these, and I don't see where Haskell would be a better fit. Maybe somebody could shed some light on the strengths of this ecosystem.

"The main force holding me back from learning Haskell is the cryptic syntax."

Then you should go ahead and start. The syntax is pretty thin. It's not what you're used to, but it's thin.

It's hard to describe what Haskell would be good for without sounding overenthusiastic, so I'll confine myself to saying that Haskell will expand your brain in ways that few other languages will. That said, Clojure is potentially one of those few other languages (depending on your personal development history, though IMHO enough stuff has been stolen from Lisp over the years that it's less of a mind-blow than it used to be), and you ought to learn about modern concurrency from at least one of Haskell, Go, Erlang, or Clojure (or possibly Scala via Akka). Haskell exposes you to the most correctly-implemented modern concurrency paradigms at once, but any of those will get you most of the experience you need.

Ah. So what isn't Haskell good for?

Haskell is not a great language for programs that need to fit within tight memory or latency limits. It's also slower in some cases than C/C++, but it's definitely competitive with other high-level languages. Other than that, it's pretty excellent for everything.

Well, in all seriousness, a situation in which a mind-expanding language isn't called for, because the minds in question don't currently work that way, and I am being 100% serious. I really like Haskell. It's among my favorite languages. However, I don't advocate for it where I work and tend to sort of tamp down on anyone else who tries because despite the fairly polyglot nature of where I work, we don't have the background to use Haskell very effectively at scale. I do this even as I advocate to all and sundry that they learn it anyhow for the mind expansion.

When one is fluent in Haskell I think the language naturally tends to drive you towards well-factored, reusable designs that can handle change well. I say "fluent" because I don't think one needs to be "expert" to get to this level, but certainly there's a level of experience required to be here. Beginners and pre-fluency intermediates in the language produce god-awful messes of code. (I know, because I have! Fortunately I had the foresight to do that to throwaway personal projects, rather than critical business projects.) I think a great deal of the Haskell propaganda is justified, but beware the Haskell enthusiast who recites the propaganda, but can't clearly explain in their own words the mechanisms that Haskell uses to accomplish it.

Actually, scratch Haskell out of that. Beware any tech enthusiast who can't give a coherent explanation in their own words of why the tech stack has the claimed effects, but just repeats the propaganda again.

There's some sound wisdom in this post, jerf. A couple of questions, though:

First, is it Haskell that drives you toward well-factored, reusable designs? Or is it functional programming? (I know, you can't do Haskell without functional programming. You can do functional programming without Haskell, though. Can you separate the two, and say which one is driving you here?)

Second, have you ever seen Haskell deployed in a large, long-lived production code base? How large? How long-lived? How did it work? (I ask because I remember reading an article on Lisp, which said that X in Lisp is just like Y in an object-oriented language, "with the proviso that in Lisp, all the types exist in the programmer's head". I'm pretty sure that won't work very well on a million-line code base that lives for 30 years. Haskell has a very solid type system, so it won't run into that particular problem but... does it have a track record for large-scale work out in the real world?)

"First, is it Haskell that drives you toward well-factored, reusable designs? Or is it functional programming?"

In my opinion, it's the purity, and the general simplicity of the individual pieces which forces you in certain directions. Since this conversation is dead-ish now, it's probably not a big deal to link to my longer musing on this: http://www.jerf.org/iri/post/2908 Even functional languages that make impurity easy and available end up making it too easy to jump down the escape hatch.

"Second, have you ever seen Haskell deployed in a large, long-lived production code base?"

I personally have not. As I said, where I work I tend to tamp down on any efforts to use Haskell, and there aren't many! But I trust the people who have and do, and who knows, perhaps I may join them someday.

As to Haskell's syntax, I think it's really just as complicated as it needs to be to express the ideas Haskell embodies succinctly and precisely. It doesn't have any more syntax than most other languages; it's just a very different syntax than most non-functional languages. And this is precisely because the semantics of the language are also very different. Haskell, for example, is not block-structured, as nearly all imperative languages are. (It does simulate this with do-syntax, but this is only a simulation). I understand though - I find Clojure practically unreadable, and this has kept me away from it to a degree.

As to where Haskell fits: you're correct, there are many other great languages out there. But I think Haskell is really unique in offering what it does. Clojure, Go, and Erlang are all great in their own regard, but none of them provides the degree of safety that Haskell does. For example, none of them enforces purity, and all three of them freely allow the use of `null` values (which is really quite surprising considering how many problems nulls cause). Clojure and Erlang avoid mutable state like Haskell does (though not with the same degree of rigor), but they don't provide the elegant solutions for simulating mutable state that Haskell does with its monads, which almost allow you to "have your cake and eat it too" with state. None of them encourages the degree of abstraction that you'll find in Haskell. Also, at least according to the Computer Language Benchmarks Game, Haskell beats all three of them in terms of speed as well. Haskell's "place" is in generating highly robust code with good performance, and it also works as a beautiful language for expressing certain mathematical concepts.

This doesn't make Haskell a perfect language, but it absolutely has a lot to offer that other languages don't. And it's so different from other languages that, at the very least, it can change your perspective on a lot of things.
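To illustrate the null point above: absence in Haskell is opt-in and visible in the type, so the compiler forces every caller to handle the empty case. A minimal sketch, using the classic safeHead example:

```haskell
-- A total version of head: the possibility of failure is in the type.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- The caller cannot forget the empty case; pattern matching makes it explicit:
describe :: [Int] -> String
describe xs = case safeHead xs of
  Nothing -> "empty"
  Just x  -> "starts with " ++ show x
```

There is no value of type `[Int]` that can make `describe` blow up with the equivalent of a NullPointerException.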

I think one has to be careful about performance claims.

Of course you can tweak things to be fast in Haskell, but being able to actually program Haskell in such a way as to beat other languages in performance is not a trivial skill.

There are numerous stackoverflow and r/haskell posts "Why is my haskell code 10x times slower than language X?" Or observe this set of web framework benchmarks: http://www.techempower.com/benchmarks/

Of course you can tweak things to be fast in Haskell, but being able to actually program Haskell in such a way as to beat other languages in performance is not a trivial skill.

It actually is trivial in a lot of cases. It's often as simple as swapping out lists for vectors and inserting a few strictness annotations into your types.
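As a sketch of what "a few strictness annotations" can look like (strict constructor fields plus foldl' from base; the vector swap is left out to keep this self-contained):

```haskell
import Data.List (foldl')

-- Strict fields: the bangs force both Ints when the constructor is built,
-- so a long fold doesn't build up a pile of unevaluated thunks.
data Stats = Stats !Int !Int  -- count, running total

step :: Stats -> Int -> Stats
step (Stats n s) x = Stats (n + 1) (s + x)

-- foldl' is the strict left fold; the lazy foldl would leak space here.
mean :: [Int] -> Double
mean xs = fromIntegral s / fromIntegral n
  where Stats n s = foldl' step (Stats 0 0) xs
```

That's the whole trick in many cases: the algorithm is unchanged, only the evaluation order is pinned down.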

...and using the ST monad. In the end it pretty much ends up looking like an imperative program.

The good news is that it stops at the function boundaries. Your quicksort may be implemented in the ST monad, looking like some bastard child of C, but the function type ends up being

  qsort :: Ord a => Vector a -> Vector a
And that's one of the likeable properties of Haskell: it doesn't deny that real-world programs sometimes require mutability, but it allows you to isolate it through the type system. So, even though qsort may use mutable memory for sorting, it is a pure and referentially transparent function.
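The parent's qsort is a bigger example than fits here, but the same property shows up in a few lines with STRef from base: the body mutates freely, yet the exported type is pure.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- An imperative-looking loop with genuine in-place mutation...
sumList :: [Int] -> Int
sumList xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
-- ...yet sumList :: [Int] -> Int is pure and referentially transparent;
-- runST's type guarantees the mutable reference cannot escape.
```

The mutability stops at the function boundary, exactly as described above.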

That good performance is not trivial to achieve is neither here nor there. Haskell is a difficult language, and many people come to it with assumptions from imperative or strict languages that don't hold up in a lazy functional language. That there are programmers using the language in a non-optimal way, and coming away with reduced performance, is not a mark against the language itself. To the extent that the benchmarks prove anything, they prove that you can write Haskell code to be competitive with any other language, at least on those specific problems. Of course, they certainly don't prove that there are no areas in which Haskell - as a language, not in regards to specific uses - falls short of the upper level of performance. As to the web framework benchmarks, I can't speak to them, as I don't know whether they were written expertly or with the latest available libraries. Maybe they aren't as fast. Regardless, it's clear that it's more than possible to write very performant code in Haskell, so ruling it out on those grounds makes little sense.

I don't think I made a mark against the language itself, or tried to rule it out, or tried to indicate it cannot attain the highest levels of performance -

Yes, my point was just that there are programmers using Haskell in a non-optimal way. Beginners seem to be surprised when their Haskell code isn't automatically in the upper level.

I'm not really a great fan of benchmarks, or this particular benchmark, but at least the source of every test is available:




> at least the source of every test is available

Also true of the benchmarks game. (Also the command line used to compile and run the program, etc)

Except the code used in the benchmarks game is hardly idiomatic

According to whose idea of what is or isn't idiomatic? Yours? Don Stewart's?


Is this "idiomatic": http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

"One can, with sufficient effort, essentially write C code in Haskell using various unsafe primitives. We would argue that this is not true to the spirit and goals of Haskell, and we have attempted in this paper to remain within the space of “reasonably idiomatic” Haskell. However, we have made abundant use of strictness annotations, explicit strictness, and unboxed vectors. We have, more controversially perhaps, used unsafe array subscripting in places. Are our choices reasonable?"

"Measuring the Haskell Gap" http://www.leafpetersen.com/leaf/publications/ifl2013/haskel...

The meteor example isn't bad at all, no.

No, that's not idiomatic Haskell code for an application. Foreign.Ptr (and companions) are hardly ever used in normal Haskell code unless you're optimizing for that last bit of performance -- and even then I'd be very hesitant about using it because of the lack of bounds checking, etc.

FWIW, I don't think Don Stewart[0] would consider that idiomatic either. He just knows a bit more than the rest of us about how to squeeze more performance out of GHC-compiled code and was willing to use non-idiomatic code to do so.

Also FWIW, personally I don't think language shootouts are very informative most of the time, but... IMO they should at least use idiomatic code -- otherwise they tell you even less than they already do about what real-life performance is like.

[0] Just an example, I know he didn't contribute the particular benchmark you linked to. He did contribute a few of the other programs AFAIK. I think the following sentences also apply to (essentially) all the Haskell shootout contributors.

>I know he didn't contribute the particular benchmark<

This otoh: http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

>Foreign.Ptr (and companions) are hardly ever used in normal Haskell code unless you're optimizing for that last bit of performance<

Do you think "optimizing for that last bit of performance" doesn't happen in the wild?

>IMO they should at least use idiomatic code<

Provide some!


GHC 7.8.2 installed, cabal 1.2 stuff available as-needed.

> Do you think "optimizing for that last bit of performance" doesn't happen in the wild?

Of course it does, but the cost increases non-linearly and you have to find the right trade-off for the situation at hand. Do you want maintainability or absolute performance?

> Provide some!

Of course one could, but what would that accomplish? Everyone who thinks these benchmarks have value just looks at the fastest-performing version of $YOUR_LANG_BENCHMARK. Hint: mostly it won't be idiomatic, regardless of the value of $YOUR_LANG.

>but the cost increases non-linearly<

Do you think the source code for those optimized for the last bit of performance programs tells us something about that?

>Of course one could, but what would that accomplish?<

Could one? Perhaps there'd be complaints that your "idiomatic programs" were just naïve.

A passive website like the benchmarks game can never be enough, by itself, to educate and inform all (or even most) readers -- reading selectively is far too commonplace and errare humanum est.

The benchmarks game is a resource. There are enough examples that when someone draws a misinformed conclusion, we can usually show them a counter-example and ask them to think.

But only if someone contributes those (whatever they consider idiomatic to be) idiomatic programs as counter-examples.

> Of course you can tweak things to be fast in hakell, but to be able to actually program haskell in such a way to, beat other languages in performance, is not a trivial skill.

No kidding:

It's mostly a matter of knowing C, then knowing how to torture GHC into doing what the C would do.

Linking to the benchmark game as if it means something should be a bannable offense. When you look at something that encourages speed at all costs, that is what you will see. People do low level optimizations like that to gain 0.5% better performance. That does not mean you need to do anything like that to get good performance.

> That does not mean you need to do anything like that to get good performance.

Please contribute your Haskell regex-dna program that does not do anything like that and gets "good" performance.

Here's the task description: http://benchmarksgame.alioth.debian.org/u64/performance.php?...

Here's "How to contribute programs": http://benchmarksgame.alioth.debian.org/play.php#contribute

GHC 7.8.2 installed, cabal 1.2 stuff available as-needed.

>Please contribute your Haskell regex-dna program

No. I stopped bothering with that cluster fuck like a decade ago. It is a complete waste of time.

We can stop bothering about your -- "That does not mean you need to do anything like that to get good performance."

You can do whatever you want. But drawing nonsense conclusions won't suddenly become reasonable just because you are defensive about the shootout.

My conclusion is that you've made a claim and done nothing to support that claim.

According to the benchmarks game, one has to be careful about performance claims :-)


The plain Haskell language, without extensions, has a pretty small and trivial syntax. The Haskell report describes it in approx. 250 lines [http://www.haskell.org/onlinereport/syntax-iso.html]. Admittedly, the syntax can be overwhelming when multiple extensions are involved, because there is no single resource which explains them all in full as succinctly as the report.

The difficulty one might find comprehending Haskell is probably due to the semantics rather than the syntax. Particularly if you have a background in languages like C++ or Java, the first thing you'll attempt is to find parallels between this new, obscure syntax and what you already know - but they don't necessarily fit, because the language is fundamentally different.

A common example of misunderstanding for beginners is function signatures - in Haskell, one simply defines a function name, followed by two colons, followed by a type declaration which states the function's type. It's as simple as

    add :: Int -> Int -> Int
If you're attempting to draw the parallel to the equivalent in another language such as C#, you might think this is the same as

    int Add(int a, int b) {
While pretty close, the Haskell equivalent of that really takes a tupled argument:

    add :: (Int, Int) -> Int
The C# equivalent of the original is closer to this

    Func<int, Func<int, int>> add = a => (b => ...);
At this point it should become a bit more obvious why that "cryptic" function signature is the way it is: all functions in Haskell are secretly single-argument functions, and additional arrows indicate that the return type is another function. In other words, the arrow (->) associates to the right, which makes it unnecessary to parenthesize the returned functions - but you can optionally write the parentheses for clarity:

    add :: Int -> (Int -> Int)
As a result, you can partially apply add to a single value, e.g. `add 5`, and get back a valid function which adds 5 to its argument: `(add 5) 6 == 11`. And because function application (denoted by a space) associates to the left, you can omit the parens and simply write `add 5 6`.
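Putting the whole discussion into one runnable snippet (the definitions are exactly the ones above):

```haskell
add :: Int -> Int -> Int
add a b = a + b

-- Partial application: add 5 is itself a function of type Int -> Int.
addFive :: Int -> Int
addFive = add 5
```

Here `(add 5) 6`, `add 5 6`, and `addFive 6` are all the same expression, 11, which is the currying story in miniature.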

Given that you have some Erlang background, the syntax for implementing functions should not be foreign to you, because they're quite similar - except perhaps do-notation, which won't be very intuitive until you grok monads.

Thank you, I found this very helpful. I'm being selfish, but I'd love some more examples like this where you compare the basics in Haskell against more mainstream, imperative languages - it makes it easier to digest.

Unfortunately, the semantics get even more different from there. It's like learning to speak a new language: the faster you stop trying to translate things in your head and just understand them directly, the easier it is.

For example, this is the default implementation of foldr, in the Foldable typeclass, in terms of a foldMap implementation:

    foldr :: (a -> b -> b) -> b -> t a -> b
    foldr f z t = appEndo (foldMap (Endo . f) t) z
This does not take long to grok in Haskell, but translating it to any imperative language would make no sense. It requires an understanding of the semantics of Monoids, currying, function application, and Endofunctors. None of these are hard, but none of them exist in more mainstream imperative languages.

But heck, this is at least something like what it is doing:

    from functools import partial

    def foldr(f, z, t):
        # fold from the right: f(t[0], f(t[1], ... f(t[-1], z) ...))
        for endo in (partial(f, i) for i in reversed(t)):
            z = endo(z)
        return z
Only, um, not quite: iterables are not exactly Foldables, it lacks the generality of actually calling the foldMap function (which can differ between types), it has no type signatures so it is less strict about its parameters, and this is cheating a bit because Python already has nice functional programming tooling compared to many other languages, and so on. But the gist is there, and even some of the performance qualities of laziness, if you get how iterators work in Python.

I personally suggest you just learn it from scratch. It's not just a different skin over the same ideas, like so many other languages.

I think my main beef with haskell isn't the syntax per se, but the culture of single-letter variable names, which this post so wonderfully demonstrates :P

(I know this is just a short demo rather than production code, but all the "good examples of haskell" I saw at university were like that too, and much of the online documentation I saw :/ )

I was a bit unnerved by the single-letter variable names as well. But frankly, using full-word variable names adds little actual clarity in most cases where they are used. What the variables are is clear from the types, and what the types are is usually stated very clearly within a few lines, barely a glance away for reference. I will concede, however, that the Python example could use variable names for clarity, since no such types are stated nearby. I have found that long variable names don't help, and sometimes even hinder, my understanding of Haskell code.

    foldRight :: (item -> accum -> accum) -> accum -> foldable item -> accum
    foldRight combiner initialAccum foldables = appEndo (foldMap (Endo . combiner) foldables) initialAccum
... that didn't help. After all, each variable was used once, and there were only 3 variables in the entire scope. Most functions are like that, and you can easily keep track of it without breaking a sweat. What is important to keep track of is what types those are, and what functions are valid for those types, and how those types are changed with function application and whatnot.

So it's not just because of the toy example. I actually copied and pasted the original out of the standard library, so it is about as canonical as it gets. The best possible description for f is (a -> b -> b), and while you might hand-wave it as a "combiner" or "accumulationStep" or whatnot, that does almost nothing to clarify how this function works.

The cultural choice is not arbitrary. The real cultural difference is in the level of abstraction so often being "above" the semantics. In java, or python, or whatever else, I will religiously use nice, expressive names. But the short names, and other even crazier things like pointfree syntax actually feel somewhat more natural in Haskell.

The problem is that the functions are so abstract, the variables barely have any meaning. They could be almost anything, so there aren't really any obvious names to give them.

I think this is what makes Haskell so difficult initially. Everything is abstracted to the extreme. Explanations must either stick to the abstractions (making it difficult to grasp), or give concrete examples (necessarily limiting the scope of what is learnt).

In the case of folding at least, I think there is a function, a value, and a collection -- personally I'd call them something like "func", "val", and "list" rather than "f", "z" and "t" -- still very generic, but not completely generic (though perhaps in haskell, functions, values, and collections are interchangeable? I can't imagine how that would work though, if you pass a collection in as the first parameter to foldr...)

There are some single letter names that are kind of standardized and don't get better with longer names, kind of like i, j and k. For example, f and g will usually stand for functions, a and b in a type signature will usually stand for some opaque "value" type and x and y will be used to refer to variables of these types. xs is a list of x and m usually stands for monad something (`m a` is a monadic type, mapM is the version of map for monads, etc).

Going back to the foldr example, it's actually not that bad. The f is already clearly a function (from name and type), and the full type of foldr is [1]

    foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
so you actually are told what the `t` is (it's a Foldable). I could see them using `zero` instead of `z`, but it's a 3-line function that does what you would expect from the name, so it's not that big of a deal. TBH, the bigger problem is having a folding abstraction in the first place. If no one ever teaches you about it, it's hard to get the point just by reading the source code, but once you know what a fold is you don't need to look inside the implementation that much.
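For anyone who was never taught the fold abstraction mentioned here, two standard instances show the idea; these are textbook definitions, not the library internals:

```haskell
-- foldr f z replaces every (:) with f and the final [] with z:
--   foldr f z [a, b, c] == f a (f b (f c z))
sum' :: [Int] -> Int
sum' = foldr (+) 0

-- Even map can be written as a fold:
map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x acc -> f x : acc) []
```

Once you have internalized that picture, the foldMap/Endo implementation above is just the same recipe expressed through a Monoid.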

That said, there are cases where I do think that longer variable names would help a bit. For example, you might come across types with 5 or 6 type parameters like [2]

    data Pipe l i o u m r
and I think in these cases the more descriptive names like in [3]

    newtype Form m input error view proof a 
might help a bit. It's still a tradeoff, though: type signatures get harder to read if they get too long, and types show up all over the place, so they tend to need shorter names than variables.

[1] http://www.haskell.org/hoogle/?hoogle=foldr

[2] http://hackage.haskell.org/package/conduit-0.5.0/docs/Data-C...

[3] http://www.happstack.com/docs/reform-0.1.1/doc/html/reform/T...

My guess is that this comes from the math culture where single letter variables are the norm.

I think this[1] blogpost has one of the best introductions to Haskell syntax that I have ever seen, and it doesn't get lost in other stuff. Perhaps you might find it useful.

After some experience, I think the biggest pain point in Haskell's syntax is that it is very light on keywords so many things that would show up as syntax errors in other languages might show up as type errors in Haskell. For example, forgetting a comma in a list will result in a function application between the two list elements instead of a syntax error.

[1] http://blog.ezyang.com/2011/11/how-to-read-haskell/

In this video Erik Hinton talks about using Haskell in the fast-paced New York Times environment:


This is one of the better Haskell videos I've seen - it's nice to see use cases in a high speed environment like the NYT.

I don't think it takes any longer to learn Haskell syntax than all of Clojure's reader macros (you will need them and the semantics is not obvious), or Scala's type system which has a lot going on, and I'm not sure about Erlang but I hear grumblings about syntax.

There isn't actually that much in the way of actual syntax in Haskell (LYAH is a great place to start, don't just try to eyeball random code snippets), and it becomes preferable after some familiarity.

What will take time is learning new concepts, programming purely and learning the tools to manage effects.

This is not true at all in my experience. Theano, a machine-learning library for the GPU, uses pure programming with syntax extremely similar to NumPy's, and it took about an hour to understand and "do useful stuff with".

I've spent more than 20 hours on Haskell syntax and feel like I am not even halfway there to "doing useful stuff".

EDIT: And I think time to be able to do something useful is a very important benchmark because after that point, further learning takes little effort or motivation.

Maybe it is the pattern matching aspect that is throwing you for a loop? Are you familiar with Prolog, Mathematica, Erlang, or syntax-case macros from Scheme? Or is it the dollar sign ($), or something else, that may be bugging you? Maybe post a simple example of one of the worst offenders.

It is not one thing in particular. It is just that there are so many things.

While I haven't used that library, there are plenty of ways to write obfuscated code in Haskell.

Some libraries may define their own operators (Scala suffers from this too), which can be impenetrable. Some libraries are better than others.

Although, I don't consider defining new operator names part of the syntax of the actual language itself.

I guess I'd just hope one doesn't start out jumping into a complicated library. It's easier to start with more foundational materials.

* I can see your point about speed to attain proficiency. I wish I had a good answer for this.

Are you sure it's the syntax? Haskell can be pretty complex to understand due to the abstractions involved, especially if you are less familiar with FP, laziness, etc.

When I struggle with Haskell it's more "I don't understand the type of this" than "I don't understand the syntax". Syntax is the easy part!

> It is easy to cover the spectrum with these and I don't see where Haskell would be a better fit. maybe somebody could shed some light on the strengths of this ecosystem.

One of Haskell's unique selling points is the fantastic ease with which you can build software that is correct-by-construction and concise at the same time.

For someone I'm mentoring, I recently came up with two examples of this [1][2].

I don't have time ATM to explain all the details (maybe someone else does), but based on them I intend to write in the near future a blog post titled "Using the expressive power of Haskell's type system to build software that is correct-by-construction". I'll submit the link to HN when that happens.

It will be part 3 of this[3] ongoing series.

[1] https://gist.github.com/dserban/11176875 [2] https://gist.github.com/dserban/11139419 [3] http://techblog.rosedu.org/haskell-part2.html

This is not idiomatic code. Briefly:

    data Inches = MkInches Double deriving (Eq,Show)
There is newtype for this.

    (MkInches u) \+/ (MkInches v) = MkInches ( u + v )
First two pairs of parens are redundant.

    from_inches_to_cm vin =
        MkCm (vin_as_pure_double * 2.54)
        where
            (MkInches vin_as_pure_double) = vin
Surely you want:

    inchToCm (MkInches x) = MkCm (x * 2.54)
In Haskell we use camelCase.
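Putting those suggestions together (newtype instead of data, pattern matching directly in the argument, camelCase), one plausible cleaned-up version of the snippet would be:

```haskell
-- newtype gives the same type safety as the original data declaration,
-- but with no runtime boxing overhead:
newtype Inches = MkInches Double deriving (Eq, Show)
newtype Cm     = MkCm     Double deriving (Eq, Show)

-- Pattern-match in the argument instead of a separate binding:
inchToCm :: Inches -> Cm
inchToCm (MkInches x) = MkCm (x * 2.54)
```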

Besides these fairly trivial mistakes there are simply better approaches for dealing with units.

> The main force holding me back from learning Haskell is the cryptic syntax. [...] There are so many great languages out there nowadays, Clojure, Go, Erlang.

Erlang's syntax is very similar to Haskell's, is it not?

Conceptually, there's a lot of stuff that's similar. I found Erlang's syntax to be a little unconventional, so it depends on your background.

If you just want to learn an FP language, I'd go with Haskell, because you learn a pure FP language that teaches you all the concepts that you will encounter in half-breed languages :) .. people will love me for that comment.

Mixed breeds tend to be healthier than pure breeds. All the narrowing of the gene pool tends to lead to a higher incidence of genetic disorders.

Reading the Learn You a Haskell book and digesting it really slowly was the way to go for me personally. I saw a lot of parallels with things I love in Python, and that drew me in.

>The main force holding me back from learning Haskell is the cryptic syntax

Haskell doesn't have a cryptic syntax. The same complaint would be just as (in)valid leveled at any commonly used language. We're not talking APL here.

>There are so many great languages out there nowadays, Clojure, Go, Erlang.

Go does not belong in that list.

>I don't see where Haskell would be a better fit

For writing software. I realize that sounds snarky, but that really is the answer. To see the problem with your question, just turn it around. Where would clojure or go be a better fit than haskell?

Haskell's a good language, but I would add to this list the use cases as to where Haskell is the right tool for the job. I think this would help a lot beginners get a better grasp of Haskell.

For example, I wouldn't tell someone to write a web server backend in Haskell if it's going to be in production. Sure, you can do it, but there are more adequate tools for the job. On the other hand, if you were looking to create a DSL or write a parser, Haskell would be great. This is how Galois and Facebook are taking advantage of Haskell.

There is greater leverage with Haskell in tackling hard problems or problems where there are existing quality libraries that can be leveraged (such as parsing). Haskell is probably not a good choice in comparison to C/Go/Java if having the best possible CPU performance is a requirement. I am not sure what you mean by a server backend, but that could potentially describe most Haskell code written in the modern networked world.

I think the idea though that one language is objectively better at a certain task gets strongly warped by subjective factors. Ultimately language decision is more about the team of programmers you have, the available libraries and ecosystems, and the team's understanding of it.

Perhaps not surprisingly, we have found that for programmers already experienced in Haskell, Haskell is a great language for most problems.

Yes, I should have been more specific. I meant web server backend. Changed.

I agree. Modern languages are converging on ideas more than ever, so it starts to become subjective. However, there exists a subtlety in feature focus. Compare concurrency and distributed programming in Erlang and Haskell, for example. If you're proficient in both and your domain demands high concurrency, why would you choose the lesser of the two, which is Haskell?

Why wouldn't you write a web server backend in Haskell? What is more adequate than Haskell for web server backends and why?

You give no support to the claim that "There are more adequate tools than Haskell for writing web server backends".

I'm surprised by this statement as well, considering the rather remarkable performance that the Warp server[1] has been able to achieve. Writing highly concurrent servers is arguably one of Haskell's biggest strengths.

1: http://aosabook.org/en/posa/warp.html

Warp seems plenty fast, but it still didn't seem to do as well as I'd hoped in this particular example; I wonder if they're doing it wrong.



Warp didn't do as well as I had hoped either. Keep an eye on the next TechEmpower web benchmark though, because GHC 7.8 and Mio made a lot of stuff faster.

The JVM, for one. The JVM far exceeds Haskell in this regard: superior libraries and equal if not better performance. I didn't say you couldn't use Haskell, but for production purposes the JVM is what I'd pick if I had to choose.

Define superior. Making such bold claims without any proof/examples is very poor form.

Superior is indeed difficult to define. I think most Haskell libraries define far more elegant abstractions than their Java counterparts.

However, as a user of both Haskell and the JVM, there are some things to be said for the Java library ecosystem. For instance, the most widely-used Java libraries are often more mature, maintained better (usually because their authors are paid to do so), and more API-stable (for most of my Haskell code, a dependency version bump usually implies updating my own code). Of course, this is all reasonable, since the Haskell ecosystem exploded far more recently than Java's. The older Haskell packages (bytestring, text, vector) tend to be very stable and predictable.

Another aspect where the JVM is currently ahead is VM monitoring. E.g. if there is a bottleneck, I can easily attach Visual VM or even my IDE to a running Tomcat and peek around. The JVM can do code hotswap (even more effectively with JRebel and LiveRebel).

Note: I posted this to provide some balance to the discussion. I believe Haskell is superior in many other respects, such as the type system, REPL-driven development, web frameworks with type-safe URLs, etc.

The JVM might have superior libraries (though I think that is debatable), but Haskell's performance is pretty top notch. If any compiler is approaching the sufficiently smart compiler, it's GHC.

Haskell happens to be excellent for writing programs.

So is C++ and every other modern language. Every language has its positives and negatives depending on the implementation. Haskell is no different. This is why having some impartiality to languages is important. You use the right tool for the job.

And what is the right job for whitespace?

I would like to think so too, yet if you suggest e.g. using Perl or PHP on HN, you'll get the typical backlash of "But why not Ruby or Python etc., it's so much newer and better".

This has been my personal observation since following HN for the past few years.

>For example, I wouldn't tell someone to write a web server backend in Haskell if it's going to be in production.

You probably shouldn't be giving advice then.

>This is how Galois and Facebook are taking advantage of Haskell

Facebook is using Haskell for a high-performance, high-concurrency server for filtering. You do realize how similar that is to a web server, right?
