One (git-annex) is a large-ish, serious work, and I have been very pleased with how haskell has made it better, even though there was a learning curve (took me two weeks to write the first prototype, which I could have probably dashed off in perl in two days), and even though I have occasionally been blocked by the type system or something and had to do more work. One concrete thing I've noticed is that this is the only program where I have listed every single bug I fixed in the changelog -- because there have been so few, it's really a notable change to fix one!
My other haskell program was essentially a one-off piece of code, which converted ten years of usenet posts from the 80's into "modern" usenet posts. At that point I was over the learning curve, so I wrote it as fast as, or possibly faster than, I would have written the equivalent in perl, banging out 800 lines of code in 12 hours or so. And the code is clean, pure, and even has reusable modules, which would never have happened with any other language I've used. And it all worked the first time. Converting the entire known corpus of A news articles to B news, and from there to C news, with success on the first try is an amazing feeling.
I'm going to be sticking with haskell. I do worry that some of my haskell code may need fiddling to keep working for 5 or 10 years though.
Take a look at this stuff in your favorite flavor of the Blub programming language. Most of the same stuff that the Haskell community calls with strange names exists in the imperative/OO world. However, in the imperative/OO world these things tend to be ad-hoc constructions ("design patterns"?) that don't have any theoretical background and are difficult to reason about.
I'm finding more and more that the hardest part of haskell is understanding the syntax and the terminology.
Take the IO monad. Everyone who has written web/servlet code that needs to produce output across several classes (perhaps not the best design to begin with) uses a Writer (or some other class) to aggregate output. Since global vars are bad (and thread-locals are worse ;-), they end up adding a Writer parameter to every method.
public T doSomeStuff(T input, Writer io)
This is nasty and ugly, and the IO monad is in essence just another way to write this in a neat form, make it composable and control the IO environment.
public IO<T> doSomeStuff(IO<T> input)
But you would not get this insight from any of the monad tutorials written by Haskell people (except the link above).
Simple monad insight comes from other people outside the Haskell community, e.g. James Iry.
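To make the parallel concrete, here's a rough sketch of the same pattern in Haskell using the Writer monad, where the parameter-threading from the Java version disappears into the monad (the names here are made up for illustration):

```haskell
import Control.Monad.Writer

-- Hypothetical example: output is aggregated by the Writer monad
-- instead of a Writer object threaded through every method.
step :: Int -> Writer [String] Int
step x = do
  tell ["processed " ++ show x]  -- append to the accumulated output
  return (x * 2)

pipeline :: Int -> Writer [String] Int
pipeline x = step x >>= step     -- composes; no explicit Writer argument

main :: IO ()
main = do
  let (result, logLines) = runWriter (pipeline 3)
  print result
  mapM_ putStrLn logLines
```

The composability is the point: `step` never mentions the log in its argument list, yet the output from every call is collected in order.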
It was a really interesting read. I made a follow-up post that analyzed the free-form responses to the last question, "What do you think is Haskell's most glaring weakness / blind spot / problem?".
Number one by far was libraries (spread across: quality + quantity, library documentation, Hackage, cabal). There are a lot of people working on this though, and progress is being made.
The runners up were 'Tools', and 'Barrier To Entry'.
I think Haskell, at the very least, is a fantastic way to learn some different ways of thinking and good habits. The (little so far) coding in it I've done has been very enjoyable for me, but I don't expect everyone to feel the same way.
It has a very warm community, and a fair bit of momentum. As time goes on I predict (and hope) it will become more viable for a greater number of people to use.
 - http://blog.johantibell.com/2011/08/results-from-state-of-ha...
 - http://nickknowlson.com/blog/2011/09/12/haskell-survey-categ...
I find it hard to articulate the ideas that I came away with. Something like 'code should be built out of small, composable abstractions that obey simple algebraic laws'. It's incredible how powerful this is in combination with pure code (which enables easier composition and algebraic reasoning), typeclasses (which make it easy to express the interface to an abstraction) and quickcheck/smallcheck (which make it easy to express and test the laws which the abstraction should obey).
Others have written about this more clearly than I ever could. In particular, Conal Elliott's writing on denotational semantics and Chris Okasaki's Purely Functional Data Structures.
The Haskell community tends to be dominated by academic research, so it's easy to dismiss the typical examples as impractical. Right now I'm trying to apply the same ideas to Kademlia routing in erl-telehash. Hopefully I will eventually be able to demonstrate what I struggle to articulate.
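As a minimal illustration of the "simple algebraic laws" idea above: list append forms a monoid, and QuickCheck can state and test those laws directly (this sketch assumes the QuickCheck package is installed):

```haskell
import Test.QuickCheck

-- The monoid laws for (++), written as testable properties.
prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc xs ys zs = (xs ++ ys) ++ zs == xs ++ (ys ++ zs)

prop_identity :: [Int] -> Bool
prop_identity xs = xs ++ [] == xs && [] ++ xs == xs

main :: IO ()
main = do
  quickCheck prop_assoc     -- checks 100 random cases by default
  quickCheck prop_identity
```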
That is not, unfortunately, the world I live in. If it is the world you live in, more power to you. Count your blessings and live happily. But the rest of us need all the help we can get.
Network effect with programming languages is a difficult nut to crack. That's the reason most new languages build on the syntax and features of existing "successful" languages.
The toolchain aspect has been less of an issue for us since we're using it for pure data analysis rather than shoehorning it into building webapps etc.
On reflection, I love(d) Haskell, but the tradeoff that created the IO monad was too much for me. After 5 years using Haskell and a couple of years away from it, I've become convinced that the way Haskell isolates IO and state is not the correct solution. I'm not smart enough to know the correct solution, but creating monads, monad transformers and arrows and then providing sugar to get everything to look imperative again feels wrong. Uniqueness Types feel a bit healthier.
Now that I've said that, let me say that I'm most certainly impressed with the work around IO and state in Haskell because it was an amazing bit of intellectual horsepower and is an incredible step. But it doesn't feel like the final step/answer. Unfortunately, I had to get work done for clients and had to step away from Haskell, but I'm excited to see what the community brings forth. It's the most beautiful language on earth.
(There are a few array libraries, but I'm pretty sure nothing as complete as NumPy+SciPy+matplotlib exists for Haskell..)
Strong typing would be nice to have sometimes, but in practice I think it limits the amount of ad-hoc data analysis and exploration you can do while trying to piece a program together. I think the scientists in our company would murder the software engineering team if they had to deal with strong typing to get things done.
In either case you still have to think about those issues when correctness is a requirement. When it isn't (ad-hoc analysis, prototyping), dynamic languages are more pleasant to work with. When it is a requirement, Haskell is uniquely powerful.
Problem is, many projects are subject to both requirements at the different times. Exploratory at the start, strict once the problem to be solved is identified. Know the tradeoffs and pick your poison.
For what it's worth, of course I do rudimentary testing before running things for an hour, but there are often small things that are missed that don't show up on smaller datasets for one reason or another. Of course it couldn't catch every possible runtime error, but in my real-world experience I have definitely come across things that would have shown up during compilation with strong typing, _especially_ if the array sizes were encoded into the type system.
There's something practical about how Haskell makes you think about program structure and data representations ahead of time instead of it being an afterthought; but I agree, this can also be restrictive when you are being exploratory.
The potential is there, it just needs to be exploited.
I think that Haskell is doomed to a life of being a research toy language for the simple reason that that's what people are using it for, not because functional programming is necessarily much harder (because I don't think it is).
If you're interested but unsure, you might take a look around but I don't feel all that guilty suggesting waiting another year or two.
But they don't use Haskell. The ad basically asked if they could talk me down to OCaml. It came up for discussion on the list, and as I recall they preferred OCaml so they could write speedy code without getting hung up on lazy evaluation and its sometimes-difficult-to-predict execution time.
I still haven't looked at OCaml, although I had some exposure to SML in college. I'm looking at Scala these days (The type inference isn't even Hindley-Milner! Scandal!).
Don't get me wrong. I love Haskell, and am willing to put up with things I wouldn't put up with in other languages. But it makes it difficult for me to sell the language to others who will also have to work with it.
This isn't a challenge, but I'm curious what standout libraries that ruby or python have for which Haskell is lacking in a good alternative.
And, of course, you rarely hear Latin mispronounced. In fact, the only thing rarer than hearing Classical Latin being mispronounced is hearing it being pronounced correctly. E.g.:
In Greek, during Caesar's time, his family name was written Καίσαρ, reflecting its contemporary pronunciation. Thus, his name is pronounced in a similar way to the pronunciation of the German Kaiser.
On the bright side, Latin does embody a vast array of fascinating tidbits like that one. Plus you can read two-thousand-year-old poetry and cast spells like Harry Potter.
I discovered this when I looked at using Scala for a particular project. The ratio of good-to-unfinished/poor libraries was much lower than for Ruby. It was also evident that code style/programming conventions had not solidified yet; it looked like the early stages of Ruby, where people weren't yet quite sure how to write "rubyesque" code. (Take a look at Ruby's standard library; it's for the most part very much out of date with "modern style" Ruby.)
I see the same kind of uncertainty with some Haskell projects, like the Text.Regex package, which I tried to figure out how to use, and failed, even with repeated google searches to find examples. The author seems to be attempting an ambitious programming style based on typeclasses, but since its usage is undocumented (or was, at the time I tried it ~6 months ago) you have to be a level 15 Haskell wizard to untangle its API. Similar Ruby experiments exist, and are eventually deprecated because of "too much magic".
Sure, Ruby developers create way too many projects that are never finished (or equally bad, are abandoned). But the stable of good-or-great libraries is actually very solid. Probably more so for web development than other things (Ruby doesn't have anything like Python's NumPy, for example); if you start a web project with Ruby you can get absurdly productive by just harnessing a few existing gems.
That said, what really came out of left field was the OP saying that he and his friends are more and more using Go.
Is Go adoption happening? I'd love to have a systems programming language that isn't C or C++, but I'd always assumed that Go was dead-on-arrival specifically because it wasn't C or C++.
Heroku and Atlassian being the notable names.
Also interesting to note, the 15-440 Distributed Systems class at CMU is being taught in Go this semester...
That said, version "1" of the language is due for early next year, which should provide a long-term stable language and set of libraries, for details see:
Slowly (not surprising given the youth of the language), but it is happening, see: http://go-lang.cat-v.org/organizations-using-go
Having done a bit of work with Haskell myself, I'd say it's even better for this for Haskell programmers. If you're using type signatures and you've written a Haskell program that passes type-checking, it's highly likely to be a correct solution.
Second, "a resulting correct Haskell program is likely more reliable, maintainable, and perhaps faster" seems fallacious to me. It might as well be that because Haskell is hard, only people with a deep understanding and appreciation of computer science ever venture into it. So, in other words, Haskell programs are good because they're written by very bright people thinking very hard about what they are doing. The same reason old mainframe code works well and average PHP doesn't.
Your hypothesis -- that difficult languages result in more bug-free code, due to selection bias -- is, to say the least, somewhat of a minority opinion.
But even if we accept this, Haskell is different. The difficulties of Haskell are not arbitrary. Nor are they related to the difficulty of understanding the machine at a low level. In fact, Haskell insulates you from all that; it's easily the highest level language in common use.
It's that Haskell insists on program correctness. You have to consider the entire range of values that any function could ever process. And due to its purity, Haskell applies a razor to your thought process, cutting out all unjustified or implicit assumptions.
Luckily it also gives you tools of unparalleled flexibility when it comes to creating abstractions. Creating a new abstraction (like the composition of two functions) isn't just as easy as function application; it is function application in Haskell, typically with less syntactic fuss than it takes Java to initialize a variable.
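For instance, a trivial sketch: composing two Prelude functions into a new abstraction is a one-liner.

```haskell
import Data.Char (toLower)

-- (.) glues two existing functions into a new one with no
-- declaration ceremony; the result is itself a first-class value.
normalize :: String -> String
normalize = map toLower . filter (/= ' ')
```

Compare that to the Java boilerplate for wrapping the same two steps in a reusable method or functional interface.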
I agree that a good type system eliminates an entire class of bugs (being strongly typed is what makes Java bearable), but I rarely see bugs in well-written software caused by not having considered the full range of inputs to your function.
Remember that the "inputs" in non-pure languages are not only the function arguments, but can be global variables, objects (singleton or otherwise), external files, and so on. These are all inputs.
Plus, Java's type system is a toy compared to Haskell.
Yes, I'm aware what "inputs" are. One property of what I consider well written software is the absence of global state. "External files, and so on." are, unfortunately, necessary for any program, even Haskell ones, to do anything useful.
> Plus, Java's type system is a toy compared to Haskell.
Plus, so what? I'm not even beginning to compare Java with Haskell.
I would point out that Haskell can in fact do useful things, so these problems must have been solved. It is a common misconception that Haskell has "problems" with external state, when in fact what it has is a way of explicitly managing such state where almost any other language does not. It isn't so much that Haskell is lacking the ability to manage external state as that other languages lack the ability to manage it, and end up with a solution that from the Haskell point of view looks like a punt rather than a great solution.
But it's difficult to explain how that works in an HN comment; I'd suggest reading Learn You a Haskell up through the Monad chapter, then spending some time with STM until you grok how and why the type system actually manages to guarantee proper use of STM at compile time. This is interesting because STM has proved effectively intractable in languages that can't maintain the STM constraints at the type level. While the course I've just recommended may take a week of education, that's about all it would take; it's not months, and it's a very valuable perspective on the problem of creating quality code that I'd recommend to anyone.
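A tiny, hypothetical sketch of the guarantee being described: the `STM` type confines `TVar` access, so a transaction can only run via `atomically` and cannot perform arbitrary IO mid-transaction.

```haskell
import Control.Concurrent.STM

-- The STM return type means 'transfer' can only touch TVars;
-- the compiler rejects any attempt to do IO inside it.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)  -- the only way to run an STM action
  readTVarIO a >>= print
  readTVarIO b >>= print
```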
Haskell is 'meta typed' (not a real term, I just made that up). In Haskell, vars have data types, functions have type signatures, and even types themselves are classified by kinds. A type signature specifies what types of parameters a function takes, and what type of result it returns. A kind specifies how many type arguments a type constructor still needs before it becomes an ordinary type.
And it gets more sophisticated from there. Languages like Java with just mere data types are baby typed at best. Not even remotely comparable.
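A small sketch of the levels, using only Prelude types:

```haskell
-- Values have types; type constructors have kinds.
x :: Int                -- Int has kind *
x = 3

ys :: Maybe Int         -- Maybe has kind * -> *: it needs one type argument
ys = Just x

e :: Either String Int  -- Either has kind * -> * -> *: it needs two
e = Right x
```

In GHCi, `:type` reports a value's type and `:kind` reports a type's kind.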
I can attest to the painfulness of the IO monad. I still haven't gotten it fully. The error messages are also fairly cryptic. The first program I wrote was about 15 lines and it took me about 4 hours of time. But it worked flawlessly and efficiently. The difficulties notwithstanding, this is probably the most fun I've had learning a language ever.
It's great to go over old and new math concepts and do so while exploring Haskell.
What is putting me off (tried building small things in Haskell a couple of times) is the already heavily buzzword compliant community, throwing not only a new language with new concept at me, but adding words that at first seem to be made up on the spot.
Additionally, as others have said here, I'm having most troubles interacting with the world (IO!). Pure functions (Math!) are ~easy~ to represent.
You suggest starting with that language and learning 'new math concepts' on the go?
(I'm sure this works for some people and hats off to you guys, but for me this increases the mental complexity immensely)
I find human languages are the same for me. I have always liked to approach them from a bottom up linguistic direction, but this means largely self-structured study.
ps.: I'm not a professional coder and my experience has been on using Python/Shell Script for sysadmin stuff only. However, I constantly read C code to figure out issues and how the book tries to make a parallel between FP and Imperative is quite nice for me.
To learn about the situation, I've put together similar programs in Lisp, OCaml, and Haskell, as well as installed compilers for Haskell & Ocaml on the PPC.
I've coded for over a decade, and I've never encountered such a difficult to use language and jargony community & documentation (including AWK & SED). The only reasons I have been able to do anything are Real World Haskell, Learn You A Haskell, and Stack Overflow.
I'm not going to say Haskell is useless, or has no libraries, etc. Those aren't true. It's also not a bad language because it's weirder than my Blub (Common Lisp). It's a really sweet language, and I think in the hands of an expert, Haskell can dance.
But, I'm going to say Haskell is nearly impossible for an experienced procedural programmer to pick up and go with on the fly. There are a few reasons for my opinion:
* Special operators out the wazoo: >>=, `, ++, ::, etc. The wrong 'dialect' of Haskell leads you to believe it's Perl and APL's love child. It's just not clear what something does until you find a reference book. Google doesn't help here - I don't even know the verbal names for some of them. :)
* Monads & in particular, the IO Monad. The number of tutorials and explanations (and the number of new ones) suggest that this is not the most obvious concept in the land. It seems to be very simple if you know what you're doing (and what operators to use), though.
* The REPL is not identical to the compiler. This means that you can't trust the REPL. Coming from Python and Lisp, that is a pain.
* Type error messages that are quite unclear, and probably require referring to the Haskell 98 report to fully understand.
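For what it's worth, a minimal sketch decoding a few of the operators mentioned above, using only standard Prelude functions:

```haskell
-- (::) is a type annotation, (++) appends lists, (>>=) is monadic bind.
greeting :: String             -- (::) declares greeting's type
greeting = "Hello, " ++ "HN"   -- (++) joins the two strings

main :: IO ()
main = getLine >>= putStrLn    -- (>>=) feeds getLine's result to putStrLn
```

Their spoken names, for search purposes: "bind" (>>=), "append" (++), and "has type" (::).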
Regardless, the above are surmountable problems and reasonable when moving to a new paradigm (very frustrating, though).
However, there are two key issues that are close to deal-breakers, with a third more minor one.
* Time to put a small program together. Easily 3x-10x my time working on Ocaml, a language which I am less experienced in (in both languages, I am amazingly inexperienced).
* Building the compiler on PPC (business reasons why I would need to do this). Ocaml builds with the traditional ./configure && make. Very straightforward. GHC requires a cross compile with some funky source tweaks, or possibly a binary package (but the bin package dependency tree required replacing libc++, at which point I stopped). This is a dealbreaker unless I can straightforwardly guarantee my boss a good ROI with Haskell vs. (OCaml or other statically typed language).
* Human costs for my code. It's not professional to have a codebase only I can use in a team. Yes, the team could learn Haskell, but would it be a good ROI? If OCaml gets us there faster...
So Haskell is probably not going to work for me at work. :-( We'll see though.
I can't agree more. This is by far my biggest issue with Haskell. It would be so much easier to learn the language if you didn't have to learn the REPL separately from the language proper.
Do those functions compile, and then crash anyway? I'd be interested to see examples. In my limited experience, if you can get your code to compile, it's pretty stable. Would be interesting to see counter examples.
    fn 0 = return ()
    main = fn 1

    main = head

    $ ghc --make test.hs -Wall
    Warning: Pattern match(es) are non-exhaustive
             In an equation for `fn':
                 Patterns not matched: #x with #x `notElem` [0#]
    Warning: Pattern match(es) are non-exhaustive
             In an equation for `fn':
                 Patterns not matched:
                     #x : _ with #x `notElem` [0#]
                     0# : (_ : _)
    makeChange :: Int -> [Int]
    makeChange amount = loop 0 [200, 100, 25, 10, 5, 1] []
      where loop total coins@(c:cs) solution
              | total == amount = solution
              | null coins      = error "no solution"
              | otherwise       = if total + c > amount
                                    then loop total cs solution
                                    else loop (total + c) coins (c : solution)
To make sure that my code was correct, I wrote a QuickCheck property:
quickCheck (\(Positive n) -> sum (makeChange n) == n)
On the other hand, the exact same algorithm in OCaml runs extremely quickly and without a hiccup.
Amusingly, if you simply change the type to Integer -> [Integer], it all works ok. I suspect that since Integers have unbounded size, quickcheck only tests with reasonably small ones.
It runs just fine on my box with i = 2^32: 10s to completion or thereabouts.
However, the way this is written, the code has to construct the entire list in memory before it can print any of it out, so for larger lists it is pretty much guaranteed to blow the stack and/or memory, depending on the computational representation.
If it was using a snoc-list or something then it could stream the output and perform the calculation in constant space; as it stands it has to hold on to the whole list of integers before outputting any of them.
I'm surprised that the OCaml version 'just worked', frankly: either a) the OP didn't use QuickCheck with their OCaml code or b) the OCaml QuickCheck doesn't bother testing across the whole Int space.
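To illustrate the streaming point: a minimal sketch where output is printed as each element is produced, so the traversal runs in constant space.

```haskell
-- mapM_ consumes the lazy list one element at a time; nothing forces
-- the whole list into memory before output starts.
main :: IO ()
main = mapM_ print (take 5 [1 :: Int ..])
```

Contrast with `print someList`, which must realize the entire list to show it.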
Operators are just functions, so try Hoogle:
Please note that this is just a simple problem, whose solution is printing out the right reference cheat sheet.
The real difficulty comes (IMO) when looking at piles of symbols in code and trying to determine what kind of meaning is coming from the symbol soup (C++ and Perl are notorious for this too).
Quite often (usually?), of course, public Haskell is written in a very clear and readable style. That's a major reason to use Haskell - to write in a readable language.
I recommend reading Learn You a Haskell instead; the keywords were second nature to me by the time I finished.
I agree that there's too much "symbol soup" Haskell out there that uses infix functions excessively. Even if you recognize all of the functions, you still need to have their precedences memorized to decode the soup.
The Haskell wiki link is not adequate, relative to the RW Haskell hard copy index (not available online, unfortunately), which starts with 1.3 pages of symbol function names (and is missing a few relatively common QuickCheck symbol names).
I was looking for something like the Scala staircase book, first edition freely available online, which has a complete list for that language.
For better or worse, there will never be a complete list of infix functions for Haskell because new infix functions can be defined by the user.
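To see why: defining a new operator takes three lines. This pipeline operator is purely hypothetical, but any library author can mint one just like it.

```haskell
-- A user-defined infix operator: apply-forward, with low left-assoc
-- precedence so chains read left to right.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

result :: Int
result = 3 |> (+ 1) |> (* 2)   -- reads as: start with 3, add 1, double
```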
I'm also nervous about the "DLL Hell" the article mentioned. I want to be able to build my programs for the next 10 years without having to worry about dependencies going away.
I believe this is because cabal doesn't generate a manifest of the exact versions your library/application depends on (unlike Ruby's bundler's Gemfile.lock file).
Hopefully this gets resolved soon.
* I suppose if I had to quantify the maintenance timespan, I'd make a WAG of 5-7 years, possibly 10.
* It's also probable that after its solid now, in 5-7 years I will be doing other things and unable to be reassigned to work on this full-time. So it has to be other-people-hackable.
You might not be tempted to use it yet, but I'd suggest you keep an eye on this language. The potential is great and Haskell is evolving quite fast.
Scala is definitely a better Java imho, and you get to keep the JVM and all its libraries, and the Lift web framework is superb, and you can even build Android apps with Scala.
But ironically I've found myself falling in love with Haskell and not wanting to use anything else. For the first time ever I know what a real type system is and what it's for, and Haskell basically pulls a Steve Jobs in completely rethinking how to do parallelism (strictly control global state and side effects by eliminating them by default, allowing them only via monads).
Mind expanding indeed.
What does it matter if an arbitrary language X is beautiful in theory if nobody uses it in practice because they don't comprehend it?
Consider all those theoretically beautiful languages: APL, Ada, Lisp, Haskell, OCaml, ... How many developers use them? I think less than 1 percent in total. Why?
As a long experienced programmer I discovered that (at least for me) the best technique is not to be fixed on one language but to use metaprogramming. That means, use your current favorite language (or create your own DSL) and compile it to the platforms/languages of choice.
My current recommendation: shenlanguage.org
How would it do that? Can it compile to Dalvik and iOS or something? Not very familiar with Go yet...
Actually, I remember some years ago reading about some Googlers that used C#, no clue what for, and it was a while ago, but it is not unrealistic.
First, I don't like how it makes side effects such a PITA. Fact of the matter is, computing is only useful for the side effects. A computation is useless if the result isn't printed to the screen, saved to a file, sent over the network, or used in some other way. So why make side effects so difficult?
Second, the community, or at least a vocal minority, come off as very condescending. If I have to ask a question on IRC I probably already feel dumb, I don't need somebody treating me like a child because I don't understand something.
I started writing a game in Haskell and found that the scaffolding necessary to do randomness in the "right" way was just too painful. It could be that I hadn't learned the idioms well enough to see a better way of doing things; but if I had trouble, I think it's fair to say that most people would. Don't get me wrong: I'm a huge fan of functional programming. I just think pure functional programming is impractical. Mutable state is like radioactivity: it's necessary, powerful, and sometimes very useful, but must be handled with extreme care, not promiscuously thrown about.
My favored computation model is one in which the waterline between lambda and pi calculus is clear: message-passing between agents who should ideally be referentially transparent, unless referential non-transparency is part of their design. The upshot of this is that if an agent needs to be optimized using mutable state, none of the others care. That is, it's what OOP should have been.
    getStdRandom :: (StdGen -> (a, StdGen)) -> IO a

Uses the supplied function to get a value from the current global random generator, and updates the global generator with the new generator returned by the function. For example, rollDice gets a random integer between 1 and 6:

    rollDice :: IO Int
    rollDice = getStdRandom (randomR (1,6))
The cultural aversion to IO is Haskell's largest psychological problem. Newbies learn to avoid IO at all costs, and then never back off from the precipice and realize that sometimes IO is the price you pay.
But yeah, it's harder to do IO in Haskell than non-pure languages, and you have to bend your brain to a different model.
In the worst case, though, you can just run your entire program in the IO monad, and then factor out your pure code bit by bit. None(?) of the Haskell tutorials will tell you to do this, but it is the gentlest way to get real work done as a new-to-intermediate Haskeller.
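A hedged sketch of that workflow (the names are invented): keep the plumbing in main, and pull the logic out into pure functions as you go.

```haskell
-- The logic lives in a pure function that's trivial to test on its own.
summarize :: [Int] -> String
summarize xs = "count=" ++ show (length xs) ++ ", sum=" ++ show (sum xs)

-- All the IO stays at the edges, in main.
main :: IO ()
main = do
  input <- getContents   -- read numbers, one per line, from stdin
  putStrLn (summarize (map read (lines input)))
```

Each refactoring step shrinks the IO surface without requiring you to understand monad transformers up front.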
It's been a little while since I worked with Haskell, but I'll be surprised if nobody has ported e.g. the Mersenne Twister to it, complete with its own Random monad. If not, maybe that could be a new project...
This. My current opinion as to what would constitute a perfect language is: something built on top of Haskell which would, in some as-yet-unconceived-of way, make it easy to thread mutable state exactly where it needed to go in your code.
Like monads were supposed to do, except readable.
Frankly, the code contains all the information you need to do this already, so perhaps we just need a source-code übereditor that marks it up so that you can see the dataflow.
When I was writing some game simulations I ended up creating a few different monads for different execution contexts. I would have my main World monad, which contained configuration data, world state, information about all the actors, and the random number seed. Then I would have other contexts like AI, which is where the AI would figure out what to do, with handy stuff like the current actor's data easily accessible, and which would return actions on the World. Then there would be a monad for Actions themselves, which would have a source and maybe a target.
In short, I found using a few monads to clearly segment how different things interacted with each other actually made the program fairly clear. The Monads were all just stacks of State, Reader, Writer, and Maybe monads. It seemed like a very natural way to write something where the primary goal is to iterate on a function of type "World -> World", i.e. "Game ()".
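A stripped-down, hypothetical sketch of that structure, using just State to thread the random seed (a real version would stack Reader and Writer on top as described, and use a proper RNG rather than this toy linear congruential step):

```haskell
import Control.Monad (replicateM)
import Control.Monad.State

-- A Game monad that threads the RNG seed through every action.
type Seed = Int
type Game a = State Seed a

-- Toy LCG step, standing in for a real random number generator.
nextSeed :: Seed -> Seed
nextSeed s = (s * 1103515245 + 12345) `mod` 2147483648

roll :: Game Int          -- a d6 roll that updates the seed as it goes
roll = do
  s <- get
  let s' = nextSeed s
  put s'
  return (s' `mod` 6 + 1)

threeRolls :: Game [Int]  -- composition threads the seed automatically
threeRolls = replicateM 3 roll
```

The payoff is the same as the parent describes: `roll` never mentions seed plumbing at its call sites, yet every roll sees a fresh seed.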
Yes please! I'd love an uber-editor in which I can visualize the dataflow, control flow, and other static analysis output on-the-fly (for any language). Computers are fast enough to do it. It would aid in understanding and be a code maintainer's dream. Imagine that you change something and get immediate feedback on changes in control flow and data flow, and directly see if it was correct without going through any test cycle.
What's the difference between that and an imperative language?
It seems to me that a much better approach is to allow mutable state everywhere, but to inform the programmer of the consequences via the editor/IDE by showing the result of a static analysis that analyzes which functions are pure and which are not.
The concept of the monad is beautiful and awesome, but it's too different to succeed with the masses.
A monad is just a monoid in the category of endofunctors, what's the problem?
(for anyone not familiar with the in-joke: http://stackoverflow.com/questions/3870088/a-monad-is-just-a...)
That's just, like, your opinion, man. In the context of a declarative programming language, laziness is so intuitive it hardly needs a name.
I've noticed that with Clojure I can enjoy most laziness just the way I want to enjoy it by using lazy sequences and data structures. There's no lazy evaluation but I haven't really bumped into any problems with eager evaluation in Clojure. But Haskell goes deeper and has lazy evaluation as well. What additional, further good does it bring? (Besides nearly impossible debug prints...)
foreign import ccall "stdlib.h random" c_rand :: IO CLong
tl;dr - Haskell isn't easy enough to use / is lacking good toolchains for author's use cases