
What it's like to use Haskell - tikhonj
http://engineering.imvu.com/2014/03/24/what-its-like-to-use-haskell
======
exDM69
Thanks especially for the "Not All Sunshine and Rainbows" section. It is all
too easy to write about the positive things and leave the negative parts out.

Resource leaks in Haskell are perhaps a bit trickier to track down than in
other languages, and I would appreciate hearing more about the issues you were
experiencing and how you solved them. Many blog posts have warned against
long-running Haskell processes, but you guys seem to have been fairly
successful with it.

Also, the problems you were experiencing with Cabal might be fixed by the
sandboxes feature, which is built into later versions of Cabal.

~~~
implicit
We deal with Haskell resource leaks the same way you would in C++ or Java.

We have production monitors on every host that show basic metrics like memory,
disk, and CPU utilization. Atop that, we added a tracker for the number of
suspended Haskell threads. (that is, threads which are not blocked on I/O, but
are also not running)

We found that the machines are usually able to handle requests as soon as they
come in, so if the number of Haskell threads goes above 0 for any length of
time, the machine is about an hour away from melting down.

We can restart the process without losing any connections, so this leaves us a
very comfortable margin of error.

Once we know we have a problem, it's usually pretty simple to run the heap
profiler on the process and look at recent commits. We continuously deploy, so
there's only about a 10 minute delay before a particular commit is running in
front of customers. This makes tracking regressions down really fast.

Even in cases where we can't figure out why a bit of code is leaking, we can
almost always identify it and revert it until we understand what's going on.

~~~
pmahoney
> We can restart the process without losing any connections

Would you mind expanding on this a bit? I'm not too familiar with Haskell, but
I am familiar with various ways of blocking new connections while allowing
existing connections to complete, either at the load-balancer level or built
into each individual process.

What Haskell stack are you using, and how are graceful restarts accomplished?

Thanks.

~~~
implicit
One of my coworkers wrote a really cool bit of software to do this. I want him
to open source it.

Basically, you can share a single socket amongst many servers. The OS ensures
that just one process accepts each connection.

You can therefore have a manager process that owns the socket and passes it on
to application processes.

To update, start new processes, then politely tell the old ones to go away.

~~~
dllthomas
One really cool thing in Linux is that you can actually pass file descriptors
between processes over unix domain sockets.
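
As a minimal sketch of the mechanism (in Python for brevity, since
`socket.send_fds`/`socket.recv_fds` wrap the `SCM_RIGHTS` dance as of Python
3.9; the socket pair and the tag message here are just illustrative, not how
IMVU's tool works):

```python
import socket

# A manager process owns a listening socket and hands its file
# descriptor to a worker over a Unix-domain socket pair. Under the
# hood this uses sendmsg() with SCM_RIGHTS ancillary data.
manager_end, worker_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen()

# Manager side: send the listener's fd (plus a small tag) to the worker.
socket.send_fds(manager_end, [b"take-over"], [listener.fileno()])

# Worker side: receive the fd and wrap it in a new socket object.
msg, fds, _flags, _addr = socket.recv_fds(worker_end, 1024, maxfds=1)
inherited = socket.socket(fileno=fds[0])

# Both objects now refer to the same listening port, so the new process
# can accept() connections while the old one drains and exits.
print(inherited.getsockname() == listener.getsockname())  # True
```

The same idea works across real process boundaries: connect a Unix-domain
socket between the old and new server processes and ship the listener over it.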

~~~
enigmo
Windows has supported this for ~14 years too.

~~~
dllthomas
Good to know. Does it work for everything that's an fd in Linux? I know you've
got to treat sockets and files differently in some cases (or at least did
once)...

~~~
enigmo
It works for most kernel handles; sockets might have become a little more
normal starting with Win7, but I stopped doing Windows development around then.

Here are the official docs: [http://msdn.microsoft.com/en-
us/library/windows/desktop/ms72...](http://msdn.microsoft.com/en-
us/library/windows/desktop/ms724251\(v=vs.85\).aspx)

~~~
dllthomas
Looks like there's a separate function for sockets.

Still, cool stuff there too.

------
pron
Switching a programming language is a major decision for a company. It
requires changing the runtime library and possibly re-writing a lot of in-
house infrastructure. While personal affinity to a language is important,
making a decision for a company might have substantial business consequences.

For a company, overall language adoption, availability of libraries, tools and
talent (and I'm not talking about training someone to be productive; sooner or
later you need real experts and training an expert is expensive in any
language) are extremely important. That's why when choosing between two
languages, assuming both are well suited to the job at hand, a company should
always pick the one with the wider adoption unless the other is _so
significantly "better"_ to trump lesser adoption. And the smaller the
adoption, the "better" the language needs to be.

There's no doubt Haskell is an excellent programming language. I'm learning it
and enjoying it. But because it has such low adoption rates, it needs to
provide benefits that are orders of magnitude higher than other, more popular
languages. I guess that for some very particular use cases it does, but I just
can't see it in the vast majority of cases.

Hobbyists can afford to play with different languages, and can (and perhaps
should) jump from one to the next. Companies can't (or most certainly
shouldn't, except for toy projects); they should pick (wisely) a popular
language that's right for them, and just stick with it.

BTW, I think that if a company does want to try a good, though not-so-popular
language, it should pick a JVM language, as the interoperability among JVM
languages, and the availability of libraries that can be shared among the
popular and less-so JVM languages, reduces much of the risk.

~~~
boothead
I about 50% agree with you here. The factor that you're forgetting is the
filter effect: Haskell is rarely adopted; it's learned through a thankless
(there are no jobs) process of deep study. This means that the quality of
available talent is of a much higher standard. If you adopt a technology with
these characteristics, and you're one of the few who are hiring in this pool,
then you have a significant advantage!

The only risk in my mind is when the big boys figure it out and start gobbling
up Haskellers! :-)

~~~
pron
I think this is a bad filter. I'd rather filter prospective employees based on
an ambitious algorithm or a complex project they've done. Most of the people I
know that like solving hard problems with novel data structures, algorithms,
or mechanically-sympathetic implementations are averse to spending time on
learning new programming languages. It seems to them (and to me) like focusing
on bling rather than on substance.

Of course, this is a crude generalization, and obviously, anyone who's
interested in learning any new, perhaps unappreciated, and difficult skill
would make a good hire, and Haskellers are no exception. But I wouldn't filter
based on programming language preference.

~~~
nbouscal
I have to disagree on two points.

First, the implicit point you are making that ambitious algorithms or novel
data structures are useful in industry. For the vast majority of applications
this simply isn't the case.

Second, the argument that people interested in learning advanced computer
science aren't interested in learning new programming languages. This goes
against all of my experience; my friends who are most interested in advanced
computer science concepts are exactly the ones who spend the most time
exploring different programming paradigms as well.

I have one idea of what the distinction might be, though: learning different
programming paradigms is very different from learning different programming
languages. I certainly wouldn't want to waste time learning yet another Java
or yet another Python, since as you say that would be focusing on bling rather
than substance. If this is the distinction, Haskell comes out fine, as it is
clearly a different paradigm from the mainstream languages.

------
amirmc
It's good to hear stories about how FP is being used in industry and I'd like
to read more.

For those who have an interest in this, I recommend the Commercial Users of
Functional Programming workshop ([http://cufp.org](http://cufp.org)). It's
where I first heard about Facebook's use of OCaml to create Hack
([http://www.youtube.com/watch?v=gKWNjFagR9k](http://www.youtube.com/watch?v=gKWNjFagR9k))

~~~
teemo_cute
While visiting your link ([http://cufp.org](http://cufp.org)), I saw Chester
Bennin of Linkin Park fame there. I didn't know that he's also a programmer!
Programming in a functional manner. Another Wow!

~~~
omaranto
Isn't the Linkin Park vocalist named Chester Bennington? I think you're off by
a gton.

~~~
teemo_cute
Chester Bennin is what the website says. Click his picture in the upper-right
corner, you'll be then taken to the page where it says his name is 'chester
Bennin.'

~~~
omaranto
I agree there is a "chester Bennin" listed on CUFP, there is also a person
named "Chester Bennington" who was the lead singer of Linkin Park. What I
disagree with is your claim that these people with different last names are
the same person.

~~~
teemo_cute
Haven't you seen the picture? It looks like Chester. I was not the one who
claimed they are the same person. If they are indeed different then the CUFP
website is at fault for misinformation.

~~~
omaranto
Oh, sorry, I didn't realize that CUFP was saying this. I searched for "chester
bennin site:cufp.org" on Google and the only result returned was this page:
[http://cufp.org/users/genie](http://cufp.org/users/genie)

~~~
teemo_cute
Apology accepted :)

But you're kind of right. I can't really find information on Chester (from
Linkin Park) doing functional programming.

Either the guy in the picture really looks like Chester, or the person in
question (a programmer) used Chester's pic as his avatar (unlikely, though,
since they are a reputable site).

------
Pxtl
Wait, IMVU? As in those terrible banner-ads with the Bratz-dolls-looking
avatars? They use Haskell under the hood?

Seriously, this is like finding out that McDonalds' restaurants are powered by
nuclear fusion.

Although this does make me very curious about learning Haskell - I've been
pretty stagnant about learning new languages.

~~~
judk
McD's runs the most technically sophisticated operation in the industry.

------
kristianp
It always surprises me to see these imvu.com posts on HN. They are often
insightful and interesting, but when you see the website you wonder who pays
money for this. Do they sell virtual goods for their own virtual world,
similar to secondlife?

~~~
teemo_cute
They do. I was once an active IMVU user. Outward appearances can be deceiving,
but believe me, they are profitable. If you read Eric Ries's book 'The Lean
Startup' you can get a glimpse of IMVU's back story.

~~~
piokuc
Yeah, IMVU is the company behind a product which most people know about
because of the "Lean Startup" book, not because they used the product itself.
I must say I read the book with great interest, but I never found the product
itself interesting enough to even have a look at :)

~~~
Pxtl
The target market is pretty clearly teenaged/tweener girls. I'm not surprised
to learn that Hacker News doesn't have many of those.

------
noelwelsh
I can't like this post enough. It affirms everything I believe to be true
about developing software using modern languages. I wish more developers would
invest more time in learning better tools.

~~~
pekk
I guess saying "This." didn't seem verbose enough.

Let me propose an incredibly controversial idea: Haskell isn't a better tool,
it's just yet another tool. Which is aggressively promoted not so much with
actual benefits, but by trying to shame people who prefer other tools by
calling them dullards and telling them that they don't know what they are
doing.

~~~
davidw
"This." is an abomination, a pox upon the English language, and, what's more,
serves absolutely no purpose on a site with little arrows to vote for things
you like.

~~~
judk
Votes are for contributions to discussion, not expressing agreement. Please
stop abominating the arrows.

------
Kiro
The problem I had with Haskell when trying it out was that every time I wanted
to use a package with dependencies I ran into "cabal: Error: some packages
failed to install. The exception was: ExitFailure 1.". The solution was most
often to try earlier versions of the package until one worked, or not to use
the package at all. Cabal just seemed so broken.

~~~
chopin
Not to diminish the Cabal problems here, but you run into similar problems
with Maven as well. Version management is a really hard problem, especially
when there is little awareness of the need to produce compatible successors.
This problem in particular made my experience with Haskell somewhat bitter.
But I have the same problem (in real production code) with Maven. Writing a
working Maven plugin can be made pretty challenging by it, as exactly the
stuff you want to use in Maven plugins is burdened with incompatibilities.

So, Cabal is in good company in that regard.

~~~
bjourne
Haskell packages like to specify requirements using both a lower bound and an
_upper bound_. So package A depends on B being version 0.55 <= B < 0.99.
Package C depends on B being version 0.90 <= B <= 1.35. So if you installed C
before A you will get version 1.35 of B and will now need to either downgrade
B or have two versions installed. Chaos ensues.

Why Hackage packagers like to specify upper bounds I'm not sure of. Backwards
compatibility either is very hard to achieve in Haskell or maintainers just
don't care that much about it. I don't know about Maven so I can't say whether
that is as bad. But at least pip, npm and gems are a breeze compared to cabal.
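
Using the hypothetical package names from the example above, the conflicting
constraints would look something like this in the two .cabal files:

```
-- A.cabal
build-depends: B >= 0.55 && < 0.99

-- C.cabal
build-depends: B >= 0.90 && <= 1.35
```

Any plan that includes both A and C has to pick a single B in the narrow
intersection (>= 0.90 && < 0.99); if B-1.35 is already installed to satisfy C,
cabal has to downgrade it or keep two versions side by side.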

~~~
massysett
"Why Hackage packagers like to specify upper bounds I'm not sure of."

There is a policy that says you're supposed to do this:

[http://www.haskell.org/haskellwiki/Package_versioning_policy](http://www.haskell.org/haskellwiki/Package_versioning_policy)

The idea is that by specifying upper bounds, the package is more likely to
build after the dependencies change.

This idea is subject to considerable controversy.

[http://www.haskell.org/pipermail/haskell-
cafe/2012-August/10...](http://www.haskell.org/pipermail/haskell-
cafe/2012-August/102885.html)

There's also been more recent mailing list traffic on it, but my Google skills
are failing me here and the arguments haven't changed all that much.

My opinion is that we are relying too heavily on upper bounds to solve a whole
bevy of interrelated, but separate, problems. Other solutions are evolving to
help deal with some of the problems. cabal sandboxes allow you to build a
package separately from your others, reducing dependency issues. cabal freeze
will help you specify exact dependencies, including transitive ones. Another
project, Stackage, aims to maintain a whole slew of packages in a buildable
state:

[https://github.com/fpco/stackage](https://github.com/fpco/stackage)

~~~
bjourne
It's an incredibly stupid policy. As others have noted, other package managers
also have the capability of setting upper bounds on package versions, and pip
also has the ability to freeze your dependencies to specific versions.

However, those features aren't necessary to use pip productively, because
Python package maintainers are very good at maintaining backwards
compatibility (or using deprecation cycles when those are needed), since
people who depend on their packages get upset with them otherwise. The rules
are different for beta releases, because it's your own fault if you depend on
them and their API changes.

Precisely because specifying upper bounds is encouraged, Haskell maintainers
seem to act as if they have a free pass when it comes to backwards-
incompatible changes. In the long run that causes much more packaging trouble
than specifying upper bounds saves. Perhaps there's nothing wrong with Haskell
or Cabal, but the Haskell community's way of handling backwards compatibility
is wrong imo.

~~~
mightybyte
Upper bounds are _information_ about what you know your package works with. It
makes no sense to throw this away. What does make sense is improving our
tooling to allow this information to be ignored in certain circumstances.

I've been on the other side of the fence with a large app written a long time
ago that did not specify upper bounds. Now that app is essentially unbuildable
given the time I'm willing to put in. If I and all my dependencies had
followed the PVP and specified correct upper bounds everywhere, I would still
be able to build my app today. No, I wouldn't be able to build it with the
latest version of other libraries, but that's not what I want to do.

If you don't specify upper bounds, the probability of your package building
goes to ZERO in the long term. Not epsilon...zero. That's unacceptable for me.
If it works today, I want it to work for years to come.

~~~
massysett
"Upper bounds are information about what you know your package works with."

Not really; the upper bounds generally are speculative. They specify that the
package will not work with particular versions when typically those versions
have not yet been released. Often when those versions are released they work
fine, which prompts a useless flurry of dependency bumping.

"If I and all my dependencies had followed the PVP and specified correct upper
bounds everywhere, I would still be able to build my app today."

I now distribute my packages with the output of the ghc-pkg command so you can
see the exact versions the library builds with. That solves your problem
without using speculative upper bounds.

I wouldn't expect it to work for "years to come" in any event, though: Haskell
changes too much. You probably won't be able to get today's package to work
with the compiler and base package that will exist years hence. It's hard to
even get an old GHC to build: today's GHC won't build it because
of...dependency problems.

~~~
mightybyte
> Not really; the upper bounds generally are speculative.

Yes, an upper bound is still useful information. It says that a package is
known to work with a certain range of dependency versions. This is why I think
cabal should support something like < for speculative upper bounds and <! for
known failure upper bounds. Then we could have a flag for optionally ignoring
speculative upper bounds when desired.
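
Sketched as a hypothetical .cabal fragment (the <! operator does not exist in
cabal today, and the package names are just illustrative of the proposal):

```
-- Hypothetical syntax, not valid cabal today:
build-depends:
  text  >= 0.11 && <  0.12,  -- "<"  : speculative PVP upper bound
  aeson >= 0.6  && <! 0.7    -- "<!" : known to fail to build at 0.7
```

A solver flag could then relax the speculative "<" bounds when asked, while
still refusing to cross a known-failure "<!" bound.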

> I wouldn't expect it to work for "years to come" in any event, though

I can still download a binary of GHC 6.12.3. Hackage also keeps the entire
package history so there's no reason things shouldn't build for years to come.
I'm all for the fast moving nature of the Haskell ecosystem, but we also need
things to be able to build over the long term and there's no reason we
shouldn't be able to do that.

~~~
massysett
"I can still download a binary of GHC 6.12.3."

Which won't even work on current versions of Debian GNU/Linux because the
soname for libgmp changed.

"Hackage also keeps the entire package history so there's no reason things
shouldn't build for years to come."

No it doesn't. I had specified a dependency on a particular version of text,
and it summarily vanished from Hackage.

Even assuming that it's a worthwhile goal to make sure _old_ software builds
with _current_ tools, upper bounds do nothing to get you there.

~~~
mightybyte
> Which won't even work on current versions of Debian GNU/Linux because the
> soname for libgmp changed.

Again, my argument assumes that you're not upgrading to current versions of
things. Enterprise production-grade deployments often fix their OS version
because of this. There's a reason RHEL is so far behind the most recent
versions. If you are upgrading, then you'll be fixing these problems
incrementally as you go along and it won't get out of hand.

> No it doesn't. I had specified a dependency on a particular version of text

You're making my point for me. It looks like you were probably depending on
0.11.1.4, which doesn't exist now. Your problem would not have happened if you
had used an upper bound of "< 0.12" like the PVP says instead of locking down
to one specific set of versions. If you had done that, cabal would have picked
up subsequent 0.11 point releases which should have worked fine. Also, I can
still download text-0.1 which is more than 5 years old.

------
iaskwhy
I didn't know what Haskell code looked like so I went to the official website
and there was this code example[1]:

qsort [] = []
qsort (p:xs) = qsort [x | x<-xs, x<p] ++ [p] ++ qsort [x | x<-xs, x>=p]

Is this a good example of good Haskell code?

[1]
[http://www.haskell.org/haskellwiki/Introduction#Brevity](http://www.haskell.org/haskellwiki/Introduction#Brevity)

~~~
probably_wrong
I think that's a bit extreme for an example - I'd go for the following
version, which is less concise but easier to read:

    
    
      quicksort :: Ord a => [a] -> [a]
      quicksort []     = []
      quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
          where
              lesser  = filter (< p) xs
              greater = filter (>= p) xs
    

So, what's going on here? First you have the declaration of the function - in
this case, you take a list of items of some type "a" whose elements can be
compared to one another (Ord means "I can use <, <= and so on on these
elements"), and you return a list of the same type.

Then you have the base case - if the list is empty, you return an empty list.

After that you have the case "item : [list of items]" (note that the second
part can be an empty list), where you return [list of smaller items] ++ [item]
++ [list of greater items] (the two lists are defined under the "where" -
filter keeps the elements of a list that satisfy a predicate).

Note however that, instead of "lesser ++ [item] ++ greater" you have
"(quicksort lesser) ++ [item] ++ (quicksort greater)" - this calls the
algorithm recursively over the smaller lists, but I bet you already knew that.

Both pieces of code do the same thing, but I think mine is what you'd expect
from a typical piece of Haskell code - the one you have is closer to "see how
small I can make this function".

~~~
iaskwhy
Thanks for the insightful reply! I can understand most of the code, but I
still do not understand how I should know what "p" and "xs" are. Is it
something you get when you declare the function as being "possible" to order
(I believe that's what "Ord" means, based on your explanation)?

~~~
jerf
One last thing I'll add to the other replies, the example shown is a bit
unusual, like indexing a for loop with "q" instead of "i". The traditional
pattern is this:

    
    
        (x:xs)
    

And that reads "x and the rest of the x-es", that is, the "xs" is not "ex
ess", but "plural x", whatever remainder remains of the list of "things" in
that list. (p:xs) is a bit weird, but if I saw (p:ps) I'd understand.

Also, Haskellers often use "x" here because they write a lot of functions that
operate on "anything"; for instance, in this case, you don't really know
_what_ you're sorting, so, call it x because what other name is any better?
There's a set of about 10 of these conventional one-letter variable names. (A
few more than conventional imperative languages, but not obscenely so;
imperative has i, j, and k for iteration, a and b in sorting, sometimes f for
function, and p for pointer.)

~~~
dllthomas
I assume here it's p for pivot, xs because the rest of the list is just items,
not pivots (as ps would imply). I'm not sure I _love_ the naming choice, but
it has a logic to it.

------
auvrw
i'd like to hear more about employers' perspectives on hiring for "an uncommon
language with a comparatively sparse industrial track record." there was
some debate about it in the clojure 1.6 release announcement. the author of
the article didn't find it to be a problem, but then he was apparently only
hiring one person.

good post, Tikhon!

~~~
mightybyte
When I left my last Haskell job, my employer put out ads to the Haskell
community and quickly got a pool of 5-10 solid applications. A good number of
those were from people outside the U.S. which was a no-go for them. After
waiting a few weeks they decided to open it up and not require Haskell
experience. I looked through a number of these resumes and there was a
_massive_ drop in quality. It was like night and day. None of those applicants
were even remotely interesting IMO. They ended up waiting a few more months
and finally found a Haskell applicant in the U.S. that they were happy with.
If you're willing to accept people working remotely your chances of finding
someone are much better.

------
hyperpape
I'm curious about the claim that it only takes a few days to get people
productive working in Haskell. I like the language a lot, but learning it has
definitely been slow compared to any other language I've learned. It just
seems like there are a lot of changes to how you think.

Is this a feature of the way your code is structured that simplifies what
people have to understand about the language before they can contribute?

~~~
nbouscal
To be fair to the OP, the original claim was: "People who have prior
functional programming knowledge seem to find their stride in just a few
days." I would be somewhat surprised if someone with no prior functional
programming experience could be productive in a few days in Haskell, because
as you say, there are a lot of changes to how you think about programming.
It's a lot easier for people with experience in a Lisp, and even easier yet
for people with experience in another ML descendant.

All that said, the structure of the code definitely does help a lot in
lowering the bar for contributions. When Don Stewart was on the Haskell Cast
[1], he mentioned that the way they use Haskell allows contributions from
people who not only have no prior functional experience but who have little CS
background at all past basic programming literacy. Much of the ability to do
this comes from Haskell's rigorous isolation of side effects.

[1]: [http://www.haskellcast.com/episode/002-don-stewart-on-
real-w...](http://www.haskellcast.com/episode/002-don-stewart-on-real-world-
haskell/) around 42:40 or so

~~~
hyperpape
I probably should have been more explicit: I use higher order functions and
recursion all over the place in higher level languages that I use, so I'm
familiar with parts of functional programming. But structuring impure programs
in Haskell still has taken a long time to get used to (and I'm not great at it
yet).

------
AnimalMuppet
> I wouldn’t write an OS kernel in it, but Haskell is way better than PHP as a
> systems language.

What's the definition of "systems language"? If PHP could even remotely be
considered an option, the definition is clearly not what I think it is...

------
csense
Haskell is a terrible choice for a production system.

I know well over a dozen programming languages. I have a stronger theoretical
background in mathematics, computer science, and compiler design than most web
developers. But Haskell has always been utterly mystifying to me.

I've attempted to learn Haskell several times, and I've had monads explained
to me several times right here on HN -- but I simply don't seem to be able to
wrap my mind around the concept.

To be sure, if I had a year or so to devote to studying the theory behind
Haskell's type system, I might indeed become more productive writing web
applications in Haskell. And I don't doubt that there are certain problems in
logic programming or language theory for which Haskell provides useful
concepts and powerful tools, to the point where the solutions to important
problems may have trivial implementations.

I've always thought that a major problem with Haskell for web development is:
(A) You'll have difficulty finding people who both know Haskell and are
interested in web applications, and (B) you'll have difficulty training web
developers in Haskell. The author of the article apparently has a training
process that gets around (B), and I'd kind of like to know what it is -- a way
to learn Haskell without a ton of study.

~~~
devwebee
I've seen experienced programmers fail at learning Haskell for the same reason
experienced programmers fail at learning Vim: the initial learning curve is
very steep, and they don't see the value in climbing that wall. Now, if you
know Vim, ask yourself: was it worth it? I'm convinced 99% of people would say
yes. It is the same with Haskell; all those seemingly complex concepts come
with a big reward.

Learning Haskell is easier with a strong functional programming background;
sadly CS students don't learn much about FP, so having lots of experience in
CS and compilers won't necessarily put you in a better position than somebody
who has no formal background in CS but lots of programming experience.

A math background on the other hand should help you. Learn about how category
theory is applied in Haskell, and you're already half way there. Monads are
just a piece of the puzzle.

I do agree with your last three points. I haven't used Haskell in production,
so my experience in "real world Haskell" is limited, but it has helped me
immensely in my daily work with other languages. Haskell forever changes the
way you think and approach problems, just like Vim changes the way you write
and edit code; it will make you a better programmer.

~~~
csense
> the same reason experienced programmers fail at learning Vim: the initial
> learning curve is very steep, and they don't see the value in climbing that
> wall.

This is definitely me. I only got really into Linux with Debian-like systems
after Nano became those distributions' standard editor. I know the bare
minimum about vim for occasional, minimal editing of configuration files. I
only ever use it when I'm stuck with a rescue prompt or some third-rate distro
that has nothing else. Usually one of my first goals is to get either a GUI,
or file transfer capability, or a package manager up and running; then I stop
using it.

~~~
dllthomas
_" I only got really into Linux with Debian-like systems after Nano became
those distributions' standard editor."_

I'd recommend learning an editor that has any sort of power - vim, emacs, or
something lesser known. Trying to get anything done in nano is like pulling
teeth. Yeesh.

