Why Not Haskell? (neugierig.org)
200 points by yaaang on Oct 17, 2011 | hide | past | web | favorite | 130 comments

I've only written two notable Haskell programs.

One (git-annex) is a large-ish, serious work, and I have been very pleased with how Haskell has made it better, even though there was a learning curve (it took me two weeks to write the first prototype, which I could probably have dashed off in Perl in two days), and even though I have occasionally been blocked by the type system and had to do more work. One concrete thing I've noticed is that this is the only program where I have listed every single bug I fixed in the changelog -- because there have been so few, it's a genuinely notable change to fix one!

My other Haskell program was essentially a one-off piece of code, which converted ten years of Usenet posts from the '80s into "modern" Usenet posts. At that point I was over the learning curve, so I wrote it as fast as, or possibly faster than, I would have written the equivalent in Perl, banging out 800 lines of code in 12 hours or so. And the code is clean, pure, and even has reusable modules, which would never have happened in any other language I've used. And it all worked the first time. Converting the entire known corpus of A News articles to B News, and from there to C News, with success on the first try is an amazing feeling.

I'm going to be sticking with Haskell. I do worry that some of my Haskell code may need fiddling to keep working for 5 or 10 years, though.

I was very excited when I first discovered git-annex. When I saw it was written in Haskell, that sealed the deal for me. The Haskell code I run is all very robust, and I was very happy you'd selected it for g-a. Have you posted anything else about how specifically Haskell has made it better? I'd enjoy reading more about that.

I've been meaning to blog about that sometime soon. (Edit: I did write this post earlier: http://kitenet.net/~joey/blog/entry/happy_haskell_hacker/) I've also been meaning to do a screencast showing what I think of as type-driven refactoring, since while I've heard Haskell programmers discuss it, I've not seen it actually demonstrated.

I would love to watch that. Sounds really exciting.

Once you've managed to grok the basic stuff about typeclasses, Functors, Applicative, Monads, monad transformers, different notions of recursion, a minimal understanding of how laziness can be quite tricky, the multitude of ways errors and exceptions are handled in various libraries, and perhaps basic STM, Haskell has a tendency to become even more complex. If you want to do efficient IO and prevent space leaks, you have to learn about Iteratees, Enumerators, and Enumeratees. If you wish to create GUIs in a bearable manner, it's imperative that you learn about FRP and Arrows eventually. Then there's the innumerable number of GHC language extensions, each of which comes with only scarce documentation and examples but with a lot of theory (fundeps, type families, existential types, GADTs ...). But there's more: Comonads, Kleisli arrows, and probably things I haven't even heard of. The whole point being: if you want to be able to handle 'real world stuff' in Haskell, it's not enough to stay 'mostly pure' and use StateT, unfortunately. I like Haskell - I shudder when I think of C, C++, and Java - but at this point in time I find it overwhelming.

So what? I'm a computer scientist. I dig this stuff!

Take a look at this stuff in your favorite flavor of the Blub programming language. Most of the same stuff that the Haskell community calls with strange names exists in the imperative/OO world. However, in the imperative/OO world these things tend to be ad-hoc constructions ("design patterns"?) that don't have any theoretical background and are difficult to reason about.

Are most of those names for simple concepts / design patterns / types?

I'm finding more and more that the hardest part of Haskell is understanding the syntax and the terminology.

A lot of stuff is quite easy after reading http://learnyouahaskell.com/ - but there is a tendency in the Haskell community, perhaps not in the core but at the fringes, to look more elite and, with a lot of hand waving, to make simple things (Monads come to mind) sound complex.

Take the IO monad. Everyone who has written web/servlet code that needs to output stuff to the web across several classes (which might not be the best design to begin with) uses a Writer (or another class) to aggregate output. Since global vars are bad (and thread-locals are worse ;-), he therefore adds a Writer to every method.

public T doSomeStuff(T input, Writer io)

This is nasty and ugly, and the IO monad is in essence just another way to write this in a neat form, make it composable and control the IO environment.

public IO<T> doSomeStuff(IO<T> input)

But you would not get this insight from any of the monad tutorials written by Haskell people (except the link above). Simple monad insight comes from people outside the Haskell community, e.g. James Iry.
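To make the analogy concrete, here is a minimal sketch of that idea in Haskell, using a hand-rolled Writer monad (the real one lives in Control.Monad.Writer in the mtl package; the `step` function and log format are made up for illustration). Instead of threading an explicit Writer parameter through every method, the monad threads the accumulated output for you:

```haskell
-- Minimal Writer: a computation that produces a value plus appended output.
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

instance Monoid w => Applicative (Writer w) where
  pure a = Writer (a, mempty)
  Writer (f, w1) <*> Writer (a, w2) = Writer (f a, w1 <> w2)

instance Monoid w => Monad (Writer w) where
  Writer (a, w1) >>= f = let Writer (b, w2) = f a in Writer (b, w1 <> w2)

-- Append to the log.
tell :: w -> Writer w ()
tell w = Writer ((), w)

-- Each step computes a value and logs, with no explicit Writer argument.
step :: Int -> Writer [String] Int
step x = tell ["processed " ++ show x] >> pure (x * 2)

main :: IO ()
main = do
  let (result, logLines) = runWriter (step 1 >>= step >>= step)
  print result            -- 8
  mapM_ putStrLn logLines
```

The composition `step 1 >>= step >>= step` is the "neat form": the plumbing that Java passes by hand is handled by the monad's `>>=`.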


There was recently a survey on the State of Haskell in 2011 [1]. As many people here seem to be echoing, OCaml was the main possible "replacement language". Also included in the survey are ratings of various aspects of Hackage and also Haskell libraries in general.

It was a really interesting read. I made a follow-up post [2] that analyzed the free-form responses to the last question, "What do you think is Haskell's most glaring weakness / blind spot / problem?".

Number one by far was libraries (spread across: quality + quantity, library documentation, Hackage, cabal). There are a lot of people working on this though, and progress is being made.

The runners up were 'Tools', and 'Barrier To Entry'.

I think Haskell, at the very least, is a fantastic way to learn some different ways of thinking and good habits. The (little so far) coding in it I've done has been very enjoyable for me, but I don't expect everyone to feel the same way.

It has a very warm community, and a fair bit of momentum. As time goes on I predict (and hope) it will become more viable for a greater number of people to use.

[1] - http://blog.johantibell.com/2011/08/results-from-state-of-ha...

[2] - http://nickknowlson.com/blog/2011/09/12/haskell-survey-categ...

Haskell is the first language I was ever fluent in. These days I mostly work in python and erlang with a smattering of ocaml but learning haskell had a huge influence on the way I think about software.

I find it hard to articulate the ideas that I came away with. Something like 'code should be built out of small, composable abstractions that obey simple algebraic laws'. It's incredible how powerful this is in combination with pure code (which enables easier composition and algebraic reasoning), typeclasses (which make it easy to express the interface to an abstraction) and quickcheck/smallcheck (which make it easy to express and test the laws which the abstraction should obey).
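A tiny sketch of that combination, assuming the QuickCheck package is available (the property names are mine): the algebraic laws are stated as ordinary functions, and QuickCheck generates random cases to probe them.

```haskell
import Test.QuickCheck

-- Law: reverse distributes over (++) with the arguments swapped.
prop_reverseAppend :: [Int] -> [Int] -> Bool
prop_reverseAppend xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs

-- Law: (++) is associative (a monoid law for lists).
prop_appendAssoc :: [Int] -> [Int] -> [Int] -> Bool
prop_appendAssoc xs ys zs = (xs ++ ys) ++ zs == xs ++ (ys ++ zs)

main :: IO ()
main = do
  quickCheck prop_reverseAppend
  quickCheck prop_appendAssoc
```

Each `quickCheck` call runs 100 random test cases by default and reports the first counterexample, shrunk to a minimal failing input.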

Others have written about this more clearly than I ever could. In particular, Conal Elliott's writing on denotational semantics [1] and Chris Okasaki's Purely Functional Data Structures [2].

The Haskell community tends to be dominated by academic research, so it's easy to dismiss the typical examples as impractical. Right now I'm trying to apply the same ideas to Kademlia routing in erl-telehash [3]. Hopefully I will eventually be able to demonstrate what I struggle to articulate.

[1] http://conal.net/papers/type-class-morphisms/

[2] http://books.google.com/books/about/Purely_functional_data_s...

[3] http://github.com/jamii/erl-telehash

I wish I lived in the author's world. A world where you don't have to maintain the code you wrote - or worse yet, someone else wrote. Where you could declare a program "done" and walk away and never have to revisit it. A world where "mostly works" is good enough - secure in the knowledge that you're not going to be rousted out of bed at 2 in the morning on a weekend because some server in outer Mongolia hit that corner case you never thought to test. A world where all programs are small enough to fit into my head, completely, in all details. So I could simply know: if I change this, that will break.

That is not, unfortunately, the world I live in. If it is the world you live in, more power to you. Count your blessings and live happily. But the rest of us need all the help we can get.

Agreed. Maybe I haven't expanded my mind enough, but once I started using a widely used language (not saying which) in which I can write scalable, maintainable code relatively easily, find plenty of work, and pay the bills, my desire to experiment with other, more obscure languages kind of fell off. There has to be a reason why some languages are widely used and others aren't. No language is ever going to please everyone, but that's how life is; you can't always change the world to suit your preferences.

My company (ThoughtLeadr) is using Haskell to analyze big data (social networks) to great effect. Although, I will admit finding great Haskell programmers in the wild is rather challenging.

Network effect with programming languages is a difficult nut to crack. That's the reason most new languages build on the syntax and features of existing "successful" languages.

The toolchain aspect has been less of an issue for us since we're using it for pure data analysis rather than shoehorning it into building webapps etc.

I wrote a post that was picked up by HN a while ago about why I left Haskell, so I won't rehash my arguments, but ...

On reflection, I love(d) Haskell, but the tradeoff that created the IO monad was too much for me. After 5 years using Haskell and a couple of years away from it, I've become convinced that the way Haskell isolates IO and state is not the correct solution. I'm not smart enough to know the correct solution, but creating monads, monad transformers and arrows and then providing sugar to get everything to look imperative again feels wrong. Uniqueness Types feel a bit healthier.

Now that I've said that, let me say that I'm most certainly impressed with the work around IO and state in Haskell because it was an amazing bit of intellectual horsepower and is an incredible step. But it doesn't feel like the final step/answer. Unfortunately, I had to get work done for clients and had to step away from Haskell, but I'm excited to see what the community brings forth. It's the most beautiful language on earth.

I've been doing a whole lot of scientific Python lately, and every time I let my program run for like an hour only to crash on an array of the wrong dimensions, or pass in a scalar where an array is expected or vice versa, I swear and wish I was using a strongly typed language like Haskell. If only it had the same tools as SciPy I'd be all over doing my scientific work in Haskell. (Not to mention it would have faster run times.)

(There are a few array libraries, but I'm pretty sure nothing as complete as NumPy+SciPy+matplotlib exists for Haskell..)

Sounds to me like you're doing it wrong. I write scientific Python code every single day for work, and neither I nor anyone else who works with me runs into those kinds of problems. Why are you running simulations that take an hour before you even know your code works? Where are your unit and integration tests? Every piece of our software stack has (or should have...) tests that verify all the bits fit together correctly in a matter of seconds, or perhaps minutes tops.

Strong typing would be nice to have sometimes, but in practice I think it limits the amount of ad-hoc data analysis and exploration you can do while trying to piece together a piece of software. I think the scientists in our company would murder the software engineering team if they had to deal with strong typing to get things done.

The point is that a whole class of things that you would have to write unit tests for, you get "for free" with strong typing, and for a lot of the others, there's QuickCheck. Or as the old saying goes, why get a dog and bark yourself?

Except you don't get them "for free". You get them in the form of a ton of restrictions that you need to be mindful of when writing the code in the first place. You just pay for it in different ways. The biggest difference is that in dynamic languages you can sometimes get away with being lazy and not writing the tests to verify the things static languages enforce to varying degrees (whether or not that's a good thing is another issue).

Haskell frontloads type considerations and tests via a required, tightly-coupled type system. Dynamic languages backload them via optional testing.

In either case you still have to think about those issues when correctness is a requirement. When it isn't (ad-hoc analysis, prototyping), dynamic languages are more pleasant to work with. When it is a requirement, Haskell is uniquely powerful.

Problem is, many projects are subject to both requirements at the different times. Exploratory at the start, strict once the problem to be solved is identified. Know the tradeoffs and pick your poison.

Who wants to explore errors? Seriously a well designed array library would not limit exploration at all while preventing one from wasting time with trivial mistakes.

Funny that you mention ad-hoc data analysis in the bottom part of your comment, because while I was reading the top part, I was thinking "ha, no _way_ am I going to be writing extensive unit tests while doing exploratory data analysis..."

For what it's worth, of course I do rudimentary testing before running things for an hour, but there are often small things that are missed that don't show up on smaller datasets for one reason or another. Of course it couldn't catch every possible runtime error, but in my real-world experience I have definitely come across things that would have shown up during compilation with strong typing, _especially_ if the array sizes were encoded into the type system.

There's something practical about how Haskell makes you think about program structure and data representations ahead of time instead of it being an afterthought; but I agree, this can also be restrictive when you are being exploratory.

I agree, and this is something I am investigating also. Data Parallel Haskell is an awesome base for something like this, but it is hard to tell if anyone is actually still working on it - and even harder to tell how one would contribute. Especially since there are so many spin off projects like Repa which have some of the functionality but just don't quite make the cut (lack primitives, hard to implement new parallel operations, etc...).

The potential is there, it just needs to be exploited.

I think strong type systems for arrays are very much an open problem (as soon as you stick integers in there you go into dependent-types land), so for dimension matching you'd probably end up with runtime type errors even in Haskell.
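That said, a fair amount can already be done statically with GHC extensions. A minimal sketch of length-indexed vectors (the `Vec` type and `vadd` are illustrative, not from any library): adding vectors of mismatched length is rejected at compile time, though the caveat above stands for dimensions only known at runtime, which need existential or dependent-type machinery.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers to index vector length.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Elementwise addition only type-checks when the lengths match;
-- vadd (VCons 1 VNil) (VCons 1 (VCons 2 VNil)) is a compile-time error.
vadd :: Num a => Vec n a -> Vec n a -> Vec n a
vadd VNil VNil = VNil
vadd (VCons x xs) (VCons y ys) = VCons (x + y) (vadd xs ys)

toList :: Vec n a -> [a]
toList VNil = []
toList (VCons x xs) = x : toList xs

main :: IO ()
main = print (toList (vadd (VCons 1 (VCons 2 VNil))
                           (VCons 10 (VCons 20 VNil))))
```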

It's not Haskell, but it's closer than Python. Have you seen Scalala for Scala:


Save more intermediate values to disk so resuming upon failure is easier.

It's painful even when it only takes a single time step to reach the error (sometimes upwards of a minute or more). It's as if I've gone from an interpreted language with no compile time to a compiled language with horrendous compile times.

With the trend of SOA and webservers like Mongrel2, perhaps it's a good time to start putting the more esoteric languages into production in the form of small services (with ZeroMQ, for example).

Agreed, zmq and some form of cross language serialization library (we're using protocol buffers) is a seriously powerful tool to have in your box. There are some patterns that seem perfectly suited to a small piece of haskell at the end of a zmq socket.

I am using AQ to help get OCaml into "real" use: http://gaiustech.wordpress.com/2011/05/26/ociml-new-feature-...

I agree with the author, but for different reasons. I think Haskell suffers from a "network effects" problem. It's not made for real world use cases because nobody is using it for the real world. And guess what? To get it adopted in the real world, it needs to support real world use cases.

I think that Haskell is doomed to a life of being a research toy language for the simple reason that that's what people are using it for, not because functional programming is necessarily much harder (because I don't think it is).

I think we're actually just on the verge of the real world network effects starting to kick in. Technically, Haskell is something like 20 years old, but it reminds me in practice of how Python felt when I first came to it in the 1.5.2 era; promising, but missing a lot of infrastructure. It's starting to come in, though. The recent work on performant bytestrings and text representations was one big stopper, and the web frameworks for Haskell are, well, for one thing plural now, which is a nice sign, and many are improving rapidly.

If you're interested but unsure, you might take a look around but I don't feel all that guilty suggesting waiting another year or two.

Haskell has some notoriety in the financial industry. Obviously the language isn't used as extensively as, say, C++ in commercial applications, but going to such extremes as saying "nobody" uses it is plain wrong.

A couple of years ago, the only ads I would ever see on gmail in my haskell-cafe folder were for Jane Street Capital.

But they don't use Haskell. The ad basically asked if they could talk me down to OCaml. It came up for discussion on the list, and as I recall they preferred OCaml so they could write speedy code without getting hung up on lazy evaluation and its sometimes-difficult-to-predict execution time.

I still haven't looked at OCaml, although I had some exposure to SML in college. I'm looking at Scala these days (The type inference isn't even Hindley-Milner! Scandal!).

Have you seen all the libraries up on Hackage? Haskell probably has way more real-world stuff than you think.

Indeed I have. And most of those libraries are unfinished, poorly documented, or both. In short, it's great that people are putting the effort in. But Haskell has a long way to go still before it can compete with languages like python or ruby.

Don't get me wrong. I love Haskell, and am willing to put up with things I wouldn't put up with in other languages. But it makes it difficult for me to sell the language to others who will also have to work with it.

Sorry, but if we're talking about the quantity of poorly documented, broken, shat-straight-onto-github libraries out there, then ruby wins by a mile.

This isn't a challenge, but I'm curious what standout libraries that ruby or python have for which Haskell is lacking in a good alternative.

This is like claiming that Latin is a more useful language than English because almost all of the Latin you see is excellent writing by amazingly talented authors, whereas in English you have all these badly spelled text messages from your friends and Tweets from famous people and love notes from your sweetheart and baby babbles and second-grader scrawls and five-paragraph essays by high-school sophomores and job offers and breaking news and Reddit jokes and HN rants and Wikipedia articles and text-adventure games and comic-book speech balloons and bestselling dime novels to wade through in order to get to the well-written stuff.

And, of course, you rarely hear Latin mispronounced. In fact, the only thing rarer than hearing Classical Latin being mispronounced is hearing it being pronounced correctly. E.g.:


In Greek, during Caesar's time, his family name was written Καίσαρ, reflecting its contemporary pronunciation. Thus, his name is pronounced in a similar way to the pronunciation of the German Kaiser.

On the bright side, Latin does embody a vast array of fascinating tidbits like that one. Plus you can read two-thousand-year-old poetry and cast spells like Harry Potter.

It's not the quantity, it's the proportion.

I discovered this when I looked at using Scala for a particular project. The ratio of good-to-unfinished/poor libraries was much lower than for Ruby. It was also evident that code style/programming conventions had not solidified yet; it looked like the early stages of Ruby, where people weren't yet quite sure how to write "rubyesque" code. (Take a look at Ruby's standard library; it's for the most part very much out of date with "modern style" Ruby.)

I see the same kind of uncertainty with some Haskell projects, like the Text.Regex package, which I tried to figure out how to use, and failed, even with repeated Google searches to find examples. The author seems to attempt a certain ambitious programming style based on typeclasses, but since its usage is undocumented (or was, at the time I tried it ~6 months ago), you have to be a level 15 Haskell wizard to untangle its API. Similar Ruby experiments exist, and are eventually deprecated because of "too much magic".

Sure, Ruby developers create way too many projects that are never finished (or equally bad, are abandoned). But the stable of good-or-great libraries is actually very solid. Probably more so for web development than other things (Ruby doesn't have anything like Python's NumPy, for example); if you start a web project with Ruby you can get absurdly productive by just harnessing a few existing gems.

If the question is, should we use Haskell to solve this problem, or should we use JavaScript, I can't imagine that the answer is often Haskell. (Maybe if the question is, should we use OCaml or should we use Haskell...)

That said, what really came out of left field was the OP saying that he and his friends are more and more using Go.

Is Go adoption happening? I'd love to have a systems programming language that isn't C or C++, but I'd always assumed that Go was dead-on-arrival specifically because it wasn't C or C++.

This presentation has a few examples of real world uses: http://golang.org/doc/talks/io2011/Real_World_Go.pdf

Heroku and Atlassian being the notable names.

Also interesting to note, the 15-440 Distributed Systems class at CMU is being taught in Go this semester...

OP works for Google.

What I want to know is, is Go still undergoing major revisions? I don't seem to have been able to find an answer to this, and it's annoying. I remember there being some fairly big changes some time ago -- not so recently now, but after it was already public -- and I want to know whether that's going to still be happening before I bother learning it.

Go version 1, the first stable version of the language and implementation, is due out early next year. That sounds like a good time to start learning.

There have been many changes to both the Go language and the stdlib, but it is really easy to keep your code updated (especially since gofix was introduced: http://blog.golang.org/2011/04/introducing-gofix.html ), so this should not be an obstacle to learning the language.

That said, version "1" of the language is due for early next year, which should provide a long-term stable language and set of libraries, for details see:


> Is Go adoption happening?

Slowly (not surprising given the youth of the language), but it is happening; see: http://go-lang.cat-v.org/organizations-using-go

"If you've written a Haskell program that runs, it's highly likely to be a correct solution."

Having done a bit of work with Haskell myself, I'd say it's even better than that for Haskell programmers: if you're using type signatures and you've written a Haskell program that passes type-checking, it's highly likely to be a correct solution.

But just in case it isn't, remember that QuickCheck is a remarkably easy and powerful way to test weird edge cases:


First, the title annoys me: "Why not X?" is almost always the wrong question to ask, because it implies that X is so great that you don't even need a reason to use it. If you want me on board, the relevant question is "Why X?".

Second, "A resulting correct Haskell program is likely more reliable, maintainable, and perhaps faster" seems fallacious to me. It may well be that because Haskell is hard, only people with a deep understanding and appreciation of computer science ever venture into it. So, in other words, Haskell programs are good because they're written by very bright people thinking very hard about what they are doing. The same reason old mainframe code works well and average PHP doesn't.

I am guessing that you have never worked with Haskell?

Your hypothesis -- that difficult languages result in more bug-free code, due to selection bias -- is, to say the least, somewhat of a minority opinion.

But even if we accept this, Haskell is different. The difficulties of Haskell are not arbitrary. Nor are they related to the difficulty of understanding the machine at a low level. In fact, Haskell insulates you from all that; it's easily the highest level language in common use.

It's that Haskell insists on program correctness. You have to consider the entire range of values that any function could ever process. And due to its purity, Haskell applies a razor to your thought process, cutting out all unjustified or implicit assumptions.

Luckily, it also gives you tools of unparalleled flexibility when it comes to creating abstractions. Creating a new abstraction (like the composition of two functions) isn't just as easy as function application; it's written with the same lightweight syntax, typically with less syntactic fuss than it takes Java to initialize a variable.
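For instance, here is a throwaway sketch (the `shout` function is made up) of how little ceremony building a new function by composition takes; `(.)` is an ordinary operator, not special syntax:

```haskell
import Data.Char (toUpper)

-- Compose two existing functions into a new one, point-free:
-- first append "!", then uppercase the result.
shout :: String -> String
shout = map toUpper . (++ "!")

main :: IO ()
main = putStrLn (shout "hello")  -- HELLO!
```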

No, I have not worked with Haskell.

I agree that a good type system eliminates an entire class of bugs (being strongly typed is what makes Java bearable), but I rarely see bugs in well-written software caused by not having considered the full range of inputs to your function.

Really? Try harder :)

Remember that the "inputs" in non-pure languages are not only the function arguments, but can be global variables, objects (singleton or otherwise), external files, and so on. These are all inputs.

Plus, Java's type system is a toy compared to Haskell's.

Try harder what? Not seeing bugs I don't see?

Yes, I'm aware what "inputs" are. One property of what I consider well written software is the absence of global state. "External files, and so on." are, unfortunately, necessary for any program, even Haskell ones, to do anything useful.

> Plus, Java's type system is a toy compared to Haskell.

Plus, so what? I'm not even beginning to compare Java with Haskell.

'"External files, and so on." are, unfortunately, necessary for any program, even Haskell ones, to do anything useful.'

I would point out that Haskell can in fact do useful things, so these problems must have been solved. It is a common misconception that Haskell has "problems" with external state, when in fact it has a way of explicitly managing such state that almost any other language lacks. It isn't so much that Haskell lacks the ability to manage external state as that other languages do, and end up with a solution that, from the Haskell point of view, looks like a punt rather than a great solution.

But it's difficult to explain how that works in an HN comment; I'd suggest Learn You a Haskell up through the Monad chapter, then spending some time with STM until you grok how and why the type system actually manages to guarantee proper use of STM at compile time. This is interesting because STM has proved effectively intractable in languages that can't maintain the STM constraints at the type level. While the course I've just recommended may take a week of study, that's about all it would take; it's not months, and it's a very valuable perspective on the problem of creating quality code that I'd recommend to anyone.
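A small taste of what the type guarantee looks like, using the stm package that ships with GHC (the `transfer` example is mine): the `STM` return type means the transaction can only run inside `atomically`, and the type checker rejects any attempt to perform arbitrary IO mid-transaction.

```haskell
import Control.Concurrent.STM

-- Composed from two smaller TVar updates; because its type is STM (),
-- it can only execute atomically, and no IO can sneak inside.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amt = do
  modifyTVar' from (subtract amt)
  modifyTVar' to   (+ amt)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances  -- (70,30)
```

In a language without this separation, nothing stops a "transaction" from launching missiles halfway through and then retrying.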

Regarding Haskell's type system, it's not even remotely like any type system you're familiar with if you haven't used Haskell yet.

Haskell is 'meta typed' (not a real term; I just made that up). In Haskell, values have types, functions have type signatures, and types themselves are classified by kinds. A type signature specifies what types of parameters a function takes and what type of result it returns. A kind specifies how many type arguments a type constructor still needs before it becomes a concrete type.

And it gets more sophisticated from there. Languages like Java with just mere data types are baby typed at best. Not even remotely comparable.
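To illustrate (the `Labeled` type is a made-up example): in GHCi, `:kind Int` reports `*`, `:kind Maybe` reports `* -> *`, and `:kind Either` reports `* -> * -> *`, i.e. how many type arguments each constructor still needs. Defining and partially applying such constructors is routine:

```haskell
-- Labeled has kind * -> *: it needs one more type argument
-- (the payload type) to become a concrete type like Labeled Int.
newtype Labeled a = Labeled (String, a)

relabel :: String -> Labeled a -> Labeled a
relabel s (Labeled (_, x)) = Labeled (s, x)

main :: IO ()
main = case relabel "answer" (Labeled ("x", 42 :: Int)) of
  Labeled (s, n) -> putStrLn (s ++ " = " ++ show n)  -- answer = 42
```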

I'm learning Haskell right now. It's the most recent language I've decided to pick up since R back in 2006. Despite dabbling in Lisp about a decade ago, and the functional flavoring of R, it's been a pretty difficult ride. However, it's definitely teaching me to think about writing programs in a very different way.

I can attest to the painfulness of the IO monad. I still haven't gotten it fully. The error messages are also fairly cryptic. The first program I wrote was about 15 lines and it took me about 4 hours of time. But it worked flawlessly and efficiently. The difficulties notwithstanding, this is probably the most fun I've had learning a language ever.

I've found that the error messages quickly start making more sense (most of the time anyway...); in fact, now I parse the Haskell type error messages faster and more accurately than similar messages in Java or the like. I'm sufficiently spoiled now that error messages in languages like Python just throw me off completely...

For those interested in learning Haskell, I can recommend the lesser known "Haskell Road to Logic, Maths, and Programming":


It's great to go over old and new math concepts and do so while exploring Haskell.

I don't know. For me this sounds like a very bad idea.

What is putting me off (I've tried building small things in Haskell a couple of times) is the already heavily buzzword-compliant community, throwing not only a new language with new concepts at me, but adding words that at first seem to be made up on the spot.

Additionally, as others have said here, I have the most trouble interacting with the world (IO!). Pure functions (math!) are ~easy~ to represent.

You suggest starting with that language and learning 'new math concepts' on the go?

(I'm sure this works for some people and hats off to you guys, but for me this increases the mental complexity immensely)

I think you raise a good point and I wouldn't want to put others off if this isn't commensurate with their learning style. I suspect the overlap between math geeks and haskell learners is pretty big, but one should, as you are, evaluate one's own preferred learning style against this approach.

I find human languages are the same for me. I have always liked to approach them from a bottom up linguistic direction, but this means largely self-structured study.

I've read numerous Haskell docs online and they are pretty good. However, I felt the need for a more in-depth book, so I chose "Haskell: The Craft of Functional Programming" (3rd edition), and I'm LOVING it. It's excellent, and I recommend it for anyone starting on Haskell.

ps.: I'm not a professional coder, and my experience has been using Python/shell script for sysadmin stuff only. However, I constantly read C code to figure out issues, and the parallels the book draws between FP and imperative programming are quite nice for me.

I'm going through this now. I've no math/logic background but wish to learn, so it's perfect for where I'm at. Interactivity is a great learning tool. The book's focus isn't on real-world software engineering, though, so anyone expecting that should look elsewhere.

I've been spending time learning Haskell lately, as part of an investigation into tools which are amenable to static analysis at work.

To learn about the situation, I've put together similar programs in Lisp, OCaml, and Haskell, as well as installed compilers for Haskell & Ocaml on the PPC.

Well -

I've coded for over a decade, and I've never encountered such a difficult to use language and jargony community & documentation (including AWK & SED). The only reasons I have been able to do anything are Real World Haskell, Learn You A Haskell, and Stack Overflow.

I'm not going to say Haskell is useless, or has no libraries, etc. Those aren't true. It's also not a bad language because it's weirder than my Blub (Common Lisp). It's a really sweet language, and I think in the hands of an expert, Haskell can dance.

But, I'm going to say Haskell is nearly impossible for an experienced procedural programmer to pick up and go with on the fly. There are a few reasons for my opinion:

* Special operators out the wazzoo. >>= ` ++ :: etc. The wrong 'dialect' of Haskell leads you to believe it's Perl and APL's love child. It's just not clear what something does until you find a reference book. Google doesn't help here - I don't even know the verbal names for some of them. :)

* Monads & in particular, the IO Monad. The number of tutorials and explanations (and the number of new ones) suggest that this is not the most obvious concept in the land. It seems to be very simple if you know what you're doing (and what operators to use), though.

* The REPL is not identical to the compiler. This means that you can't trust the REPL. Coming from Python and Lisp, that is a pain.

* Type messages that are quite unclear, and probably require referring to the Haskell98 report to fully understand.

Regardless, the above are surmountable problems and reasonable when moving to a new paradigm (very frustrating, though).
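
For reference, a small sketch of a few of those operators in context (`greet` and `echo` are invented names for illustration): `::` is a type annotation, `++` is list append, and `>>=` is monadic bind.

```haskell
-- greet and echo are invented names, just to show the operators in use.
greet :: String -> String            -- (::) declares a type: read as "has type"
greet name = "Hello, " ++ name       -- (++) appends two lists (String is [Char])

-- (>>=) sequences monadic actions, feeding one result into the next:
echo :: IO ()
echo = getLine >>= \line -> putStrLn (greet line)
```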

However, there are two key issues that are close to deal-breakers, with a third more minor one.

* Time to put a small program together. Easily 3x-10x my time working on Ocaml, a language which I am less experienced in (in both languages, I am amazingly inexperienced).

* Building the compiler on PPC (business reasons why I would need to do this). Ocaml builds with the traditional ./configure && make. Very straightforward. GHC requires a cross compile with some funky source tweaks, or possibly a binary package (but the bin package dependency tree required replacing libc++, at which point I stopped). This is a dealbreaker unless I can straightforwardly guarantee my boss a good ROI with Haskell vs. (OCaml or other statically typed language).

* Human costs for my code. It's not professional to have a codebase only I can use in a team. Yes, the team could learn Haskell, but would it be a good ROI? If OCaml gets us there faster...

So Haskell is probably not going to work for me at work. :-( We'll see though.

The REPL is not identical to the compiler. This means that you can't trust the REPL. Coming from Python and Lisp, that is a pain.

I can't agree more. This is by far my biggest issue with Haskell. It would be so much easier to learn the language if you didn't have to learn the REPL separately from the language proper.

This problem with the repl is actually going to disappear in the next ghc release (7.4.*). In particular, as of that ghc release you'll be able to interactively define types and functions (both!) in the repl.

Help is on the way. Support for accepting all top-level declarations in GHCi has been recently added to GHC HEAD: http://www.reddit.com/r/haskell/comments/kmxf2/ghci_now_supp...

I found that one of the largest differences between Haskell and OCaml is the amount of time I spent figuring out how to make code go fast. Because OCaml is eagerly evaluated and its compiler is extremely simple, I find that the "tools" that I learned in school and in my time coding in other languages apply to making OCaml code fast (and predictably fast). With Haskell, you need to be more keenly aware of how evaluation is handled, and I guess I still have a lot of things to learn there, because I still have problems making really simple Haskell functions that don't crash.

>because I still have problems making really simple Haskell functions that don't crash.

Do those functions compile, and then crash anyway? I'd be interested to see examples. In my limited experience, if you can get your code to compile, it's pretty stable. Would be interesting to see counter examples.

Non-exhaustive patterns are one thing the compiler can't catch:

  fn 0 = return ()
  main = fn 1
Giving an empty list to head/tail is another:

  main = head []
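
One common workaround (a sketch, not from the thread; `safeHead` is an invented name): make the partiality visible in the type with Maybe, so callers are forced to handle the empty case.

```haskell
-- safeHead is an invented name for this sketch: a total version of head.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x
```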

Speaking of the first example, compiling it with -Wall provides some hints:

  $ ghc --make test.hs -Wall
      Warning: Pattern match(es) are non-exhaustive
               In an equation for `fn':
                   Patterns not matched: #x with #x `notElem` [0#]

Wow, that's cool. It even works for non-Bounded argument types, as in fn [0] = 0:

  Warning: Pattern match(es) are non-exhaustive
        In an equation for `fn':
            Patterns not matched:
                #x : _ with #x `notElem` [0#]
                0# : (_ : _)

I'm at school all day, but I'll post something tonight.

OK, so we were learning about greedy algorithms at school and I implemented a very naive implementation of a change making algorithm for Canadian coins. Here's the Haskell code:

    makeChange :: Int -> [Int]
    makeChange amount = loop 0 [200, 100, 25, 10, 5, 1] []
        where loop total coins@(c:cs) solution
                  | total == amount = solution
                  | null coins = error "no solution"
                  | otherwise = if total + c > amount then
                                    loop total cs solution
                                else
                                    loop (total + c) coins (c : solution)
(I could make this a lot better by returning an [(Int, Int)] and by using integer division, but I wanted to just follow the algorithm described in the textbook.)

To make sure that my code was correct, I wrote a QuickCheck property:

    quickCheck (\(Positive n) -> sum (makeChange n) == n)
However, running this after compiling my file with GHC causes a stack overflow and I need to Ctrl+C out of the process.

On the other hand, the exact same algorithm in OCaml runs extremely quickly and without a hiccup.

quickcheck is running makeChange with an arbitrary Int. maxBound :: Int here is 2147483647. When given a number that large, makeChange recurses a lot, subtracting one two-dollar coin at a time, so you blow the stack. This is where you need to consult a haskell guru to find a way to make your code tail-recursive -- or find a smarter algorithm (using mod c for example so it only needs to recurse 6 times total).
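
The mod-based approach suggested here might be sketched like this (`makeChange'` is a hypothetical name; it recurses once per denomination rather than once per coin):

```haskell
-- makeChange' is an invented name; the coin list matches the parent's code.
makeChange' :: Int -> [Int]
makeChange' amount = concatMap (uncurry replicate) (go amount coins)
  where
    coins = [200, 100, 25, 10, 5, 1]
    go _ []     = []
    go n (c:cs) = let (q, r) = n `divMod` c
                  in (q, c) : go r cs    -- one divMod per denomination
```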

Amusingly, if you simply change the type to Integer -> [Integer], it all works ok. I suspect that since Integers have unbounded size, quickcheck only tests with reasonably small ones.

It's even worse than that if he's on a 64-bit machine!

It runs just fine on my box with i = 2^32: 10s to completion or thereabouts.

However, the way this is written, the code has to construct the entire list in memory before it can print any of it out so for larger lists it is pretty much guaranteed to blow the stack and / or memory depending on the computational representation.

If it were using a snoclist or something then it could stream the output and perform the calculation in constant space; as it stands, it has to hold on to the whole list of integers before outputting any of them.

I'm surprised that the OCaml version 'just worked' frankly: either a) the OP didn't use QuickCheck with their OCaml code or b) the OCaml QuickCheck doesn't bother testing across the whole Int space.

Also, a quick test reveals that quickCheck on an Int will by default test 100 Ints across the entire range up to maxBound :: Int. If your Ints are 64-bit this really isn't going to work very well on this code, regardless of what language you write it in, unless you can stream the output. Any code that holds on to the list is going to fall over, since the size of the list is going to exceed physical memory for larger test values.

The code is already tail recursive, which is why it's doubly puzzling. Also, like you said, using an Integer instead fixes the problem. But I find that fixing these issues distracts me away from the main problem and that doesn't happen in OCaml.

Special operators out the wazzoo...Google doesn't help here - I don't even know the verbal names for some of them.

Operators are just functions, so try Hoogle:


FYI: Hoogle dies on :: and `, and gives a wrong result for =>.

Please note that this is just a simple problem, with a solution of printing out the right reference cheatsheet.

The real difficulty comes (IMO) when looking at piles of symbols in code and trying to determine what kind of meaning is coming from the symbol soup (C++ and Perl are notorious for this too).

Quite often (usually?), of course, public Haskell is written in a very clear and readable style. That's a major reason to use Haskell - to write in a readable language.

You won't find :: and => in Hoogle because they're keywords, not functions. For keywords, see the wiki page that gtani posted:


I recommend reading Learn You a Haskell instead; the keywords were second nature to me by the time I finished.

I agree that there's too much "symbol soup" Haskell out there that uses infix functions excessively. Even if you recognize all of the functions, you still need to have their precedences memorized to decode the soup.
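
For what it's worth, fixity declarations are where those precedences come from, and GHCi's `:info` will print them for any operator; a sketch with an invented operator:

```haskell
-- (+++) is an invented operator given the same fixity as (++), to show
-- where precedences come from. In GHCi, ":info ++" prints "infixr 5 ++",
-- which answers the precedence question without memorization.
infixr 5 +++
(+++) :: [a] -> [a] -> [a]
(+++) = (++)
```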

(too late to edit above)

The haskell wiki link is not adequate, relative to the RW Haskell hard copy index (not avail. online unfortunately), which starts with 1.3 pages of symbol function names (and is missing a few relatively common QuickCheck symbol names).

I was looking for something like the Scala staircase book, first edition freely available online, which has a complete list for that language.

Whatever symbols you're seeing in RWH that aren't on that wiki page are probably functions, not keywords. For functions, you should use Hoogle.

For better or worse, there will never be a complete list of infix functions for Haskell because new infix functions can be defined by the user.

I've had very similar experiences -- especially with the operators.

I'm also nervous about the "DLL Hell" the article mentioned. I want to be able to build my programs for the next 10 years without having to worry about dependencies going away.

I believe this is because cabal doesn't generate a manifest of the exact versions your library/application depends on (unlike Ruby's bundler's Gemfile.lock file).

Hopefully this gets resolved soon.

> would it be a good ROI?

I am only a haskell beginner, but I think we can break this question down more specifically by talking about the exact timespan of the projects. From what I understand, haskell's maintainability is a huge advantage in projects with never-ending specification changes and feature requests (ERP?). Has anyone tried haskell for something like that, or been stopped by the chicken-and-egg problem of finding people to maintain the code? It would be good to have some data points.

The key reason I am looking at this kind of technology is because I have a large block of code in a situation where it is very difficult to do functional or unit tests due to system design. Being able to write in a language that statically analyzes my code for all the errors it can before my poor customers encounter it sounds like a huge win. The maintenance goal is being able to catch my fat-fingers and design flaws prior to rollout.

* I suppose if I had to quantify the maintenance timespan, I'd make a WAG of 5-7 years, possibly 10.

* It's also probable that after its solid now, in 5-7 years I will be doing other things and unable to be reassigned to work on this full-time. So it has to be other-people-hackable.

It sounds like other-people-hackability is the real problem for you then. i guess it's hard to convince someone from other programming paradigms to spend the effort and time required to get proficient in haskell.

Does the lack of multithreading in Ocaml matter for your applications, I think that is the only thing that would push me towards Haskell over Ocaml.

In this particular domain, not very much, due to the amount of IO interaction. In my opinion, the Ocaml threads library would be sufficient, at least for a while.

I agree with the author, but still it reminds me of "Beating the average" > This is the same argument you tend to hear for learning Latin.

Agreed that Haskell might not be the perfect fit for most of the enterprise projects out there, BUT nevertheless it is worth learning and mastering. The perceived difficulty comes from decades of teaching imperative programming. I'm myself the product of C/C++/Java and it's amazing how I've managed to deal with so many peculiarities of these languages. After having learned Haskell, I realize that it's much more coherent than the vast majority of other programming languages (including the functional languages). I'm learning Scala today and frankly, Haskell is much simpler. It is just amazing what you can achieve with good abstractions (parallelism, performance, modularity).

You might not be tempted to use it yet, but I'd suggest you keep an eye on this language. The potential is great and Haskell is evolving quite fast.

Yes, I actually started learning Haskell recently b/c I thought it would help me better grok Scala, which seems to borrow mostly from the ML family.

Scala is definitely a better Java imho, and you get to keep the JVM and all its libraries, and the Lift web framework is superb, and you can even build Android apps with Scala.

But ironically I've found myself falling in love with Haskell and not wanting to use anything else. For the first time ever I know what a real type system is and what it's for, and Haskell basically pulls a Steve Jobs in completely rethinking how to do parallelism (strictly control global state and side effects by eliminating them by default, enabled only via monad).

Mind expanding indeed.

Haskell demonstrates the difference between theory and practice.

What does it matter if an arbitrary language X is beautiful in theory if nobody uses it in practice because they don't comprehend it?

Consider all those theoretically beautiful languages: APL, Ada, Lisp, Haskell, OCaml, ... How many developers use them? I think less than 1 percent in total. Why?

Because syntax really matters. For this reason imperative languages like C++, Java, Python, Perl, JavaScript, even PHP are so successful.

As a long experienced programmer I discovered that (at least for me) the best technique is not to be fixed on one language but to use metaprogramming. That means, use your current favorite language (or create your own DSL) and compile it to the platforms/languages of choice.

My current recommendation: shenlanguage.org

In the end the author seems to be using Go for a lot more development. Are people here finding it to be a good solution? It seems like it might be the language that could bridge the iOS and Android systems for app development.

>could bridge the iOS and Android systems for app development.

How would it do that? Can it compile to Dalvik and iOS or something? Not very familiar with Go yet...

Compile to native code. Android has the NDK; they support C/C++.

For me Go has made programming enjoyable again.

I like the shout out to go at the end. If you haven't tried that language yet I highly recommend it.

He kinda has to, being a Googler.

Nope. There is nothing that forces Googlers to either use or promote Go.

Well they ain't gonna use or promote C# are they?

If it's the best tool for the job, why not? They keep using and promoting Java despite Oracle suing their pants off over it.

Google is a very big and diverse organization, I'm sure there are some that do promote pretty much every language you ever heard of (and many you didn't).

Actually, I remember some years ago reading about some Googlers that used C#, no clue what for, and it was a while ago, but it is not unrealistic.

Jon Skeet?

My problem seems to be that despite desperately wanting to use Haskell for some decent sized project, it never seems to work out as my best option. Right now I'm interested in writing a metro styled app for win8, and even for the server side portions I feel like I'm better off just using CouchDB instead of any Haskell solution that I've seen so far. Before that it was yet-another-web-based-UI-library so my choices were pretty much coffeescript or javascript. Perhaps it's because I like to have fun with UI, but so far anything I'd like to actually spend time on -- Haskell just doesn't seem like the right tool for. My best use of it so far was solving project euler problems, and considering how much I liked it, it feels like a shame that I have nothing better to use it on.

That seems to be exactly what haskell is suited for: solving math problems. Which is great and wonderful when you are doing math-related problems or things that are heavy in math, but when you try to go outside those bounds, that is when haskell becomes much, much harder to write and work with. But alas, that is both what makes haskell nice and what makes it hard to work with.

I've been using Haskell on and off for a few years and have written a few (small) projects in it. Every time I end up turned off of it, though.

First, I don't like how it makes side effects such a PITA. Fact of the matter is, computing is only useful for the side effects. A computation is useless if the result isn't printed to the screen, saved to a file, sent over the network, or used in some other way. So why make side effects so difficult?

Second, the community, or at least a vocal minority, come off as very condescending. If I have to ask a question on IRC I probably already feel dumb, I don't need somebody treating me like a child because I don't understand something.

One of the reasons I'd suggest Haskell to someone is exactly the community. I cannot see a question going unanswered on #haskell or at least 2-3 trying to help (doesn't matter if it's beginner-level or advanced). My experience has been the opposite.

I don't know if this was the intent or a side effect, but one reason for strictly controlling side effects is that it improves parallelization of the code. Global state, mutable vars, other side effects are some of the biggest obstacles to code parallelization, and that's one of Haskell's solutions to it. Haskell and Erlang seem to be the only two languages that seem designed from the ground up for massive concurrency and parallelism.

Haskell is a hard language because (a) laziness is not intuitive, especially when space-performance matters (sadly, it does) and (b) pure functional programming is just not practical for most people. Like the OP said, it's awesome for brain-stretching, but not the easiest language to use.

I started writing a game in Haskell and found that the scaffolding necessary to do randomness in the "right" way was just too painful. It could be that I hadn't learned the idioms well-enough to see a better way of doing things; but if I had trouble, I think it's fair to say that most people would. Don't get me wrong: I'm a huge fan of functional programming. I just think pure functional programming is impractical. Mutable state is like radioactivity: it's necessary, powerful, and sometimes very useful, but must be handled with extreme care, not promiscuously thrown about.

My favored computation model is one in which the waterline between lambda and pi calculus is clear: message-passing between agents who should ideally be referentially transparent, unless referential non-transparency is part of their design. The upshot of this is that if an agent needs to be optimized using mutable state, none of the others care. That is, it's what OOP should have been.

Hmm, as long as you do randomness in the IO monad, it's not any harder to do random numbers than it is to print to the screen.


   getStdRandom :: (StdGen -> (a, StdGen)) -> IO a

   Uses the supplied function to get a value from the 
    current global random generator, and updates the global 
    generator with the new generator returned by the 
    function.  For example, rollDice gets a random integer 
    between 1 and 6:

     rollDice :: IO Int
     rollDice = getStdRandom (randomR (1,6))
Of course, printing to the screen can be a pain in the neck.

The cultural aversion to IO is Haskell's largest psychological problem. Newbies learn to avoid IO at all costs, and then never back off from the precipice and realize that sometimes IO is just the price you pay.

But yeah, it's harder to do IO in Haskell than non-pure languages, and you have to bend your brain to a different model.

In the worst case, though, you can just run your entire program in the IO monad, and then factor out your pure code bit by bit. None(?) of the Haskell tutorials will tell you to do this, but it is the gentlest way to get real work done as a new-to-intermediate Haskeller.

This is a fair point, but I'd generally prefer not to use the IO monad to generate random numbers because it feels "wrong", just like using IORef for mutable data when there are more "right" types feels wrong. Strictly speaking, pseudo-randomness isn't doing IO unless the source of randomness is considered something to which one is I/Oing.

Right, but pseudo-randomness is still monadic because you are mutating the state of the random number generator. That it is in IO is an implementation detail of where that generator's state lives.

It's been a little while since I worked with Haskell, but I'll be surprised if nobody has ported e.g. the Mersenne Twister to it, complete with its own Random monad. If not, maybe that could be a new project...

You have to get the randomness out of IO, because referential transparency and randomness of any kind (pseudo or otherwise) are fundamentally in opposition, but once you have the random source in hand, you can use it any way you like in otherwise pure code. The Random support in Haskell is set up to permit and to some extent support this usage, with the ability to "split" a random generator into something you use now and something you pass along. After that it's your responsibility to properly split & use, but I haven't yet found it to be a big problem in practice.
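
A minimal sketch of that split-and-pass-along style, assuming System.Random's StdGen (`rolls` and `twoAgents` are invented names):

```haskell
import System.Random

-- rolls and twoAgents are invented names; this assumes System.Random
-- from the random package. split yields two independent generators,
-- so each consumer stays pure.
rolls :: Int -> StdGen -> [Int]
rolls n g = take n (randomRs (1, 6) g)

twoAgents :: StdGen -> ([Int], [Int])
twoAgents g =
  let (g1, g2) = split g
  in (rolls 3 g1, rolls 3 g2)
```

Only the initial seed needs to come from IO (or a fixed `mkStdGen` seed for reproducibility); everything downstream is referentially transparent.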

> I started writing a game in Haskell and found that the scaffolding necessary to do randomness in the "right" way was just too painful. It could be that I hadn't learned the idioms well-enough to see a better way of doing things; but if I had trouble, I think it's fair to say that most people would.

This. My current opinion as to what would constitute a perfect language is: something built on top of Haskell which would, in some as-yet-unconceived-of way, make it easy to thread mutable state exactly where it needed to go in your code.

Like monads were supposed to do, except readable.

Frankly, the code contains all the information you need to do this already, so perhaps we just need a source-code übereditor that marks it up so that you can see the dataflow.

Can you explain a little more what you mean about the dataflow?

When I was writing some game simulations I ended up creating a few different monads for different execution contexts. I would have my main world monad which contained configuration data, world state, information about all the actors, the random number seed. Then I would have other contexts like AI which is where the AI would figure out what to do, having handy stuff like all the current actor's data easily accessible, and which would return actions on the World. Then there would be the monad for Actions themselves, which would have a source and a maybe a target.

In short, I found using a few monads to clearly segment how different things interacted with each other actually made the program fairly clear. The Monads were all just stacks of State, Reader, Writer, and Maybe monads. It seemed like a very natural way to write something where the primary goal is to iterate on a function of type "World -> World", i.e. "Game ()".
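
A rough sketch of that layered style using the transformers package (`Config`, `World`, `Game`, and `step` are invented names, not the parent's actual code): Reader for fixed configuration, State for the mutable world.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader
import Control.Monad.Trans.State

-- Invented types for illustration, assuming the transformers package.
data Config = Config { maxActors :: Int }
newtype World = World { tick :: Int }

type Game a = ReaderT Config (State World) a

step :: Game ()
step = do
  _cfg <- ask                                    -- configuration is read-only
  lift (modify (\w -> w { tick = tick w + 1 }))  -- world state is threaded

runGame :: Config -> World -> Game a -> World
runGame cfg w g = execState (runReaderT g cfg) w
```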

That sounds interesting. Any chance the code is open sourced somewhere? I would love to read it.

Frankly, the code contains all the information you need to do this already, so perhaps we just need a source-code übereditor that marks it up so that you can see the dataflow.

Yes please! I'd love an uber-editor in which I can visualize the dataflow, control flow, and other static analysis output on-the-fly (for any language). Computers are fast enough to do it. It would aid in understanding and be a code maintainer's dream. Imagine that you change something and get immediate feedback on changes in control flow and data flow, and directly see if it was correct without going through any test cycle.

> My current opinion as to what would constitute a perfect language is: something built on top of Haskell which would, in some as-yet-unconceived-of way, make it easy to thread mutable state exactly where it needed to go in your code.

What's the difference between that and an imperative language?

The difference is that it only makes well-defined, encapsulatable dependencies on your mutable state easy. It doesn't make totally screwing up easy.

I have a hard time imagining how such a thing would actually work. How would the compiler decide that mutable state somewhere is encapsulatable and somewhere else it is not? A compiler can (very easily in fact) monadify all code for you, but what you get is an inefficient imperative language.

It seems to me that a much better approach is to allow mutable state everywhere, but to inform the programmer of the consequences via the editor/IDE by showing the result of a static analysis that analyzes which functions are pure and which are not.

Not entirely the same, but check Mozilla's rust (still in progress) - https://github.com/graydon/rust/wiki

Monads are not unreadable. When you finally get do-notation and realize that you can use the same idioms for (a) list comprehensions, (b) imperative-style programming, and (c) error-handling and option-chaining, there's a "Wow" moment. However, we can give up on selling monads to the masses. The name alone...

The concept of the monad is beautiful and awesome, but it's too different to succeed with the masses.
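
The "same idioms" point can be sketched in a few lines: identical do-notation acts as list comprehension or as error chaining, depending only on which monad it runs in (`pairs`, `safeDiv`, and `chained` are invented names).

```haskell
-- pairs, safeDiv, and chained are invented names for this sketch.
pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]
  y <- [10, 20]
  return (x, y)                -- list monad: all combinations

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

chained :: Maybe Int
chained = do
  a <- safeDiv 100 5
  b <- safeDiv a 2
  return (b + 1)               -- Maybe monad: stops at the first Nothing
```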

>However, we can give up on selling monads to the masses. The name alone...

A monad is just a monoid in the category of endofunctors, what's the problem?

(for anyone not familiar with the in-joke: http://stackoverflow.com/questions/3870088/a-monad-is-just-a...)

> laziness is not intuitive

That's just, like, your opinion, man. In the context of a declarative programming language, laziness is so intuitive it hardly needs a name.

By the way, what concrete benefits do you think lazy evaluation offers in contrast to lazy data structures?

I've noticed that with Clojure I can enjoy most laziness just the way I want to enjoy it by using lazy sequences and data structures. There's no lazy evaluation but I haven't really bumped into any problems with eager evaluation in Clojure. But Haskell goes deeper and has lazy evaluation as well. What additional, further good does it bring? (Besides nearly impossible debug prints...)
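
One concrete benefit worth noting: with lazy evaluation, ordinary functions can act as control structures, since unused arguments are never evaluated. A sketch (`myIf` and `safePick` are invented names):

```haskell
-- myIf and safePick are invented names. Because Haskell only evaluates
-- what it needs, a plain function can serve as if/then/else.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

safePick :: Int
safePick = myIf (1 < 2) 42 (error "never evaluated")  -- the error branch is never forced
```

In an eager language (or with only lazy data structures) both branches would be evaluated before `myIf` ran, and the error branch would fire.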

With regards to generating random numbers, the Foreign Function Interface is also really useful. Here's a one-line wrapper around random(3).

  foreign import ccall "stdlib.h random" c_rand :: CLong

That will most likely bite you since you're giving a pure type to a non-referentially transparent function. There is a reason why random functions are in a state monad or IO.

In particular, while you don't really care what order the compiler makes the underlying calls to the C function, you do care that it actually makes all the calls rather than CSEing them away. As a pure function, the compiler has every right to optimize away multiple calls to your FFI-bound random function, in favor of a single call and multiple references to the value.
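
The fix implied here, sketched (`c_random` is an invented binding name): give the import an IO type, so the compiler must sequence every call and cannot merge repeated calls into one.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CLong)

-- c_random is an invented binding name. With an IO type, every call is
-- sequenced by the monad, so GHC cannot share or reorder the calls away.
foreign import ccall "stdlib.h random" c_random :: IO CLong
```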

Why haskell is a more appropriate question.

The intro read like it would be a much longer article.

tl;dr - Haskell isn't easy enough to use / is lacking good toolchains for author's use cases
