
Why Haskell is Kinda Cool - icey
http://amtal.github.com/2011/08/25/why-haskell-is-kinda-cool.html
======
ionfish
There are a few missteps in this article. To begin with, the author claims
that a variable of type Double is "a function with no arguments". This is not
the case [1]; not everything in Haskell is a function. A Double is just that:
a value of the numeric type Double.

They also say that "[T]he Ord "typeclass" ... implements comparisons." But of
course the typeclass itself doesn't implement comparisons. A typeclass defines
an interface which its instances implement. Consider the Eq typeclass:

    
    
        class Eq a where
            (==) :: a -> a -> Bool
            (/=) :: a -> a -> Bool
    

So to make a type an instance of Eq, we need to implement (==) and (/=)
functions with the type signatures provided by the class. It's pretty obvious
how this should go for the Bool type, so let's define that and make it an
instance of Eq.

    
    
        data Bool = False | True
        
        instance Eq Bool where
            True  == True  = True
            False == False = True
            _     == _     = False
            
            a     /= b     = not (a == b)
    

This is all covered very well by Learn You a Haskell [2].

[1]: <http://conal.net/blog/posts/everything-is-a-function-in-haskell>

[2]: <http://learnyouahaskell.com/making-our-own-types-and-typeclasses>

~~~
sharkbot
The points you describe are valid, but there are a few nits:

Regarding your first point, your referenced article makes a misstep of its
own, conflating the implementation with the semantics. From the article: "Do
some folks believe we’re still doing what Church did, i.e., to encode all data
as functions and build all types out of ->?"

From a semantics perspective: Yes. If it makes it easier to reason about, then
treat variables as nullary functions. In the Spineless Tagless G-machine
paper [1], both values and thunks are represented as closures, which may or
may not be evaluated. For example:

    
    
      infiniteSeries :: [Integer]
      infiniteSeries = iterate (1+) 1
    

Is this a variable or a function? Having a special case for Double ("a closure
containing a value") versus [Integer] ("a closure containing a thunk") is a
distinction that only matters once the implementation becomes an issue (i.e.,
lazy evaluation affecting memory usage).

As for your second point, typeclasses can include partial implementations. For
example, the Eq typeclass [2] has a minimal definition required, either (==)
or (/=), as one can be defined in terms of the other.
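
To illustrate (a minimal sketch; the Color type is hypothetical, not from the thread): an instance can supply only (==), and the class's default definition of (/=) fills in the rest.

```haskell
-- Hypothetical Color type, used only for illustration.
data Color = Red | Green | Blue deriving (Show)

-- We implement only (==); Eq's default method
-- (x /= y = not (x == y)) is inherited from the class.
instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False
```

Both `Red == Red` and `Red /= Blue` now work, even though the instance never mentions (/=).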

[1] research.microsoft.com/pubs/67083/spineless-tagless-gmachine.ps.gz

[2]
[http://haskell.org/ghc/docs/latest/html/libraries/base/Prelu...](http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Eq)

~~~
ionfish
Great points! That we can treat infinite streams as having the same semantics
as lists is of course dependent on Haskell's non-strictness—it's not the same
in ML, for example.

For me one of the more convincing arguments that article puts forward is that
Haskell only has unary functions; the type Int -> Int -> Int is just a
shorthand for Int -> (Int -> Int), i.e. functions which appear to have more
than one argument are considered for the purposes of the semantics to be of
the form (λa.(λb.c)).
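
As a small sketch of this (identifiers are mine, not from the thread): applying a "two-argument" function to one argument just yields another function.

```haskell
-- plus has type Int -> (Int -> Int): a function returning a function.
plus :: Int -> Int -> Int
plus a b = a + b

-- Partial application: plus 2 is the function (λb. 2 + b).
addTwo :: Int -> Int
addTwo = plus 2
```

`addTwo 40` evaluates to 42, exactly as ((λa.(λb.a+b)) 2) 40 would.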

The core point that I think Conal Elliott tries to make is that as far as the
denotational semantics is concerned, Haskell has values of many types, some of
which (the ones of (abstract) type a -> a) are functions.

You are of course correct about Eq and partial implementations. For me the key
point is that where one function is defined entirely in terms of other
functions and class constraints which ultimately rely on the instance
implementing those other functions, they encapsulate the logic of the
typeclass—in other words, they define a property of the interface. So equality
for values of a given type is ultimately defined by the _instance_, but the
relationship of equality to inequality—a property of the concept of equality
in general—is defined by the _typeclass_.

~~~
masklinn
> For me one of the more convincing arguments that article puts forward is
> that Haskell only has unary functions

It's not specific to Haskell; you can trivially argue that any language with
functions has only unary functions, with most languages passing a tuple as the
function's sole parameter.

Haskell even lets you switch a function between the two systems via `curry`
and `uncurry`.
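
A quick sketch of the two styles side by side (identifiers are mine):

```haskell
-- Curried form: one argument at a time.
add :: Int -> Int -> Int
add x y = x + y

-- Tupled form, obtained mechanically with uncurry.
addT :: (Int, Int) -> Int
addT = uncurry add

-- And back again with curry.
add' :: Int -> Int -> Int
add' = curry addT
```

All three compute the same sum; `curry` and `uncurry` just move a function between the two calling conventions.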

------
baltcode
I liked Haskell when I tried it for all of these reasons. However, there were
a few problems I had:

1. Debugging logic. I LOVE its type system, which catches most of my errors.
However, sometimes I have algorithmic/logic errors. In a perfect world, I
would have worked those out well before coding. But once I have coded it, I
just don't know how to debug my code. I have gotten so used to gdb/pdb as my
last resort that I am lost without it. Does anybody have any ideas for
debugging Haskell programs?

2. Its lazy evaluation sometimes gets in the way of tail recursion
optimization unless you force it, and can lead to the stack filling up. But
this can be handled by forcing evaluation with seq.
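
A sketch of both the problem and the fix (function names are mine): a lazily accumulated loop builds a chain of unevaluated (+) thunks, while seq forces the accumulator at each step.

```haskell
-- Lazy accumulator: acc grows as a chain of unevaluated (+) thunks,
-- which can overflow the stack when it is finally forced.
lazySum :: [Integer] -> Integer
lazySum = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs

-- Strict accumulator: seq forces acc before each recursive call,
-- so the loop runs in constant space.
strictSum :: [Integer] -> Integer
strictSum = go 0
  where
    go acc []     = acc
    go acc (x:xs) = acc `seq` go (acc + x) xs
```

Data.List.foldl' packages the same trick behind a library function.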

~~~
Peaker
1. There's the ghci debugger, though I don't know if it is practical (I don't
use it myself).

Alternatively, you can use Debug.Trace (or the
[TraceUtils](http://hackage.haskell.org/packages/archive/TraceUtils/0.1.0.1/doc/html/Debug-TraceUtils.html)
package), which makes "print-based" debugging easier than in imperative
languages:

Say you have:

    
    
      processValue f =
        after .
        map f .
        before
    

If you want "print-based" debugging, you can use Trace.Utils.traceAround:

    
    
      processValue f =
        traceAround "processValue" $
        after .
        map f .
        before
    

Or, if you just want the results of "after":

    
    
      processValue f =
        traceId "Result of after: " .
        after .
        map f .
        before
    

Alternatively, there's also the FileLocation package, which gives you
filenames + line numbers in your debug prints via Template Haskell macros.
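
For reference, the plain Debug.Trace version of "print-based" debugging looks like this (a minimal sketch; the factorial is just a stand-in):

```haskell
import Debug.Trace (trace)

-- trace prints its message to stderr when the value is forced,
-- then behaves as the identity on its second argument.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = trace ("factorial " ++ show n) (n * factorial (n - 1))
```

Forcing `factorial 5` logs each call to stderr and still returns 120; remove the `trace` and the result is unchanged.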

2. Controlling evaluation is indeed tricky. Haskell programmers take
manipulation of infinite data structures (of any kind) for granted -- but then
if they want to control stack use, it gets a bit tricky. Other languages take
stack use management for granted, but if you want to have infinite arbitrary
data structures, it gets tricky.

I personally prefer Haskell's choice, because it means that having infinite
structures doesn't contaminate the code with effects (which explicit
suspensions/resumptions of computations would do), but it could probably be
done better (e.g: move laziness/strictness annotations mostly to the type-
level).

~~~
eru
Haskell doesn't even really have a stack. It's more like a spaghetti stack.

~~~
Peaker
Is that supposed to be derogatory? :-)

~~~
eru
No. It's a technical term that crops up naturally in discussions of e.g.
continuations and closures.

------
leif
Does anyone know how to turn off the extra mouse cursors? That is bugging the
hell out of me and I can't concentrate on the article enough to read it (but I
really want to).

~~~
domhofmann
Scroll to the bottom of the page and click the power icon on the enflock
widget.

~~~
mortice
Which still leaves mouse pointer images all over the document. Horrendous.

~~~
badhairday
Type javascript:$('.ms-mouse').hide(); into the address bar and hit enter
after you turn off Enflock.

~~~
leif
this is the dumbest thing in the world

------
pnathan
Here's a question for the Haskellers out there:

How does Haskell help you write less code? What mechanisms are in the language
that allow you to write higher level abstractions (as compared to, say, Ruby
or Python)?

How does Haskell help you to express concepts that are very difficult to
express in other languages?

~~~
andolanra
Haskell abstracts things at an even higher level than most programming
languages. For example, the infamous monads are a way of abstracting out the
details of how 'effectful' computations (i.e. computations that do something
other than simply taking arguments and returning values) should work. Monads
model effects like exceptions, mutable state, input/output, nondeterminism,
and so forth, and by abstracting them into the same pattern, it allows you to
write functions that work with _any_ effect in the same way. For example, if I
have a list of operations which have effects and all have the same result
type, I can sequence them together using the _sequence_ function, so for IO:

    
    
        sequence [putStrLn "foo", putStrLn "bar", putStrLn "baz"]
    

will print those three strings on their own lines. However, I can use the same
function for Haskell's equivalent of null:

    
    
        sequence [Just 3, Just 8, Just 5] -- results in Just [3, 8, 5]
        sequence [Just 3, Nothing, Just 5] -- results in Nothing, because Nothing is
                                           -- propagated if it occurs
    

or for nondeterminism

    
    
        sequence [[1, 2], [3, 4], [5, 6]] -- results in every possible list whose first
                                          -- element is drawn from the first list, whose
                                          -- second element is drawn from the second
                                          -- list, and whose third element is drawn from
                                          -- the third list.
    

or for any other effect that Haskell has. A lot of learning Haskell has to do
with looking at the abstractions Haskell offers (monads are only one; you also
get things like arrows, functors, applicative functors, &c) and understanding how
you can phrase your problems in terms of them and consequently use Haskell's
abstractly written functions to your favor. A great practical example here is
parsing; parser combinators can be expressed as monads or as applicative
functors, both of which make your parsing code very small and quite natural to
read. For example, the following is a function to parse strings like
"(552,864)" using the Parsec parser combinator library:

    
    
        import Text.ParserCombinators.Parsec
        
        parseDigit = oneOf ['0'..'9']
        parseInt = many1 parseDigit  -- many1: require at least one digit
        parsePairOfInts = do
            string "("
            p1 <- parseInt
            string ","
            p2 <- parseInt
            string ")"
            return (p1, p2)
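
Since Parsec is an external library, here is the same shape of parser written against base's Text.ParserCombinators.ReadP, purely as a sketch of the combinator style (names are mine):

```haskell
import Text.ParserCombinators.ReadP

-- Parse one or more digits and read them as an Int.
parseInt' :: ReadP Int
parseInt' = fmap read (munch1 (`elem` ['0' .. '9']))

-- Parse a string like "(552,864)" into a pair of Ints.
parsePair :: ReadP (Int, Int)
parsePair = do
  _  <- char '('
  p1 <- parseInt'
  _  <- char ','
  p2 <- parseInt'
  _  <- char ')'
  return (p1, p2)

-- Run the parser, keeping only parses that consume the whole input.
runPair :: String -> [(Int, Int)]
runPair s = [r | (r, "") <- readP_to_S parsePair s]
```

`runPair "(552,864)"` yields `[(552,864)]`.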

~~~
jeffdavis
Why can't you use monads in ruby? A monad is essentially just an interface,
right?

The original question was about the _mechanisms_, not the style or libraries.

I don't really know haskell, so correct me if I'm wrong.

~~~
Peaker
Type-classes are _like_ interfaces, but not quite the same.

They allow return-type polymorphism, which ordinary OO interfaces do not. And
the Monad type-class requires this feature.

Basically, when you call the "return" function in Haskell, (e.g: return 5) --
the code being called depends on the type of the result. The type of the
result is determined from the context (by type-inference or rarely, type
annotations). In Ruby or Python, there's no easy/direct way to encode
something like "return" such that it works with any Monad instance.

~~~
jeffdavis
Interesting. I see what you are saying, but could you provide an example (or a
link to an example) for clarity? Preferably something not easily accomplished
in ruby?

~~~
eru
I found an article about return-type polymorphism. See
<http://vpatryshev.blogspot.com/2010/01/dispatch-by-return-type-haskell-vs-java.html>

One thing is impossible in many languages, though: defining two methods that
differ only in their return type.

The most important point: something like

    
    
        int i = parse(String source);
        long l = parse(String source);
        boolean b = parse(String source);
    

is possible in Haskell, but not in Python, and barely imaginable in Java.
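
In stock Haskell, the closest analogue is Text.Read.readMaybe, where the same call site parses to whatever type is expected:

```haskell
import Text.Read (readMaybe)

-- One parser, three result types, selected by the type annotation.
asInt :: Maybe Int
asInt = readMaybe "42"

asDouble :: Maybe Double
asDouble = readMaybe "42"

asBool :: Maybe Bool
asBool = readMaybe "42"  -- Nothing: "42" is not a Bool
</imports>
```

The first two succeed (Just 42 and Just 42.0) and the third fails, all from the same string and the same function.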

~~~
Peaker
Elaboration:

    
    
      -- a simplified sketch; the real Prelude Read class is richer
      class Read r where
        read :: String -> Maybe r
    

Note that it's polymorphic in the result type _within_ the Maybe. It's
very flexible; the polymorphic type can appear anywhere within the type
signature.

Now you can write various functions that use "read" and they all remain
return-type polymorphic. For example, you can write one that loops, requesting
the user to repeat entry until parse-able data (of the wanted type) is given:

    
    
      repeatReadingUntilValid :: Read r => IO r
      repeatReadingUntilValid = do
        line <- getLine
        case read line of
          Nothing -> do
            putStrLn $ "Invalid input: " ++ show line
            repeatReadingUntilValid
          Just result ->
            return result
    

That's just a silly example, because read isn't that interesting.

Another example is QuickCheck, which uses type-classes to auto-generate fuzz-
testers for functions.

For example:

    
    
      import Test.QuickCheck
    
      pretty :: MyType -> String
      pretty = .. pretty print my type here ..
    
      unpretty :: String -> MyType
      unpretty = .. parse the pretty printing of my type here ..
    

Now I can test that unpretty is indeed the inverse of pretty:

    
    
      quickCheck (\x -> unpretty (pretty x) == x)
    

(for every x, the unpretty of pretty of x equals x).

I can generalize this property to:

    
    
      isInverse f g x = f (g x) == x
    

And then use:

    
    
      quickCheck (isInverse unpretty pretty)
    

Similarly you can define:

    
    
      commutative f x y   = x `f` y == y `f` x
      associative f x y z = (x `f` y) `f` z ==
                            x `f` (y `f` z)
      transitive f x y z = x `f` y && y `f` z ==> x `f` z
    

Which makes an important type of unit testing a breeze.

The type system is saving us from writing code here.
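
Without pulling in QuickCheck, the same properties can be checked exhaustively over a small domain using only base (a sketch; QuickCheck instead generates random cases and shrinks failures):

```haskell
-- The properties from above, unchanged.
commutative :: Eq b => (a -> a -> b) -> a -> a -> Bool
commutative f x y = x `f` y == y `f` x

associative :: Eq a => (a -> a -> a) -> a -> a -> a -> Bool
associative f x y z = (x `f` y) `f` z == x `f` (y `f` z)

-- Check a two-argument property on every pair drawn from a small domain.
holdsOn :: [a] -> (a -> a -> Bool) -> Bool
holdsOn dom p = and [p x y | x <- dom, y <- dom]
```

For instance, `holdsOn [-3 .. 3 :: Int] (commutative (+))` is True, while `holdsOn [-3 .. 3 :: Int] (commutative (-))` is False.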

------
davidhollander
> _Since most of my bugs in other languages stem from unwanted interactions
> between pieces of code via side effects, this is kind of a big deal._

Is this really the case for most people? Most of my errors are syntax, pattern
matching, not considering certain cases, unrefined logic, etc.

Creating external side effects such as writing to files or sockets is not that
big of a deal or as error prone, because after you learn that trick once, it
behaves in the same way. The POSIX standard does not change as frequently as
my own code :)

------
bonaldi
How awful is enflock? (NB: Very)

~~~
johanbev
Indeed. I refuse to read this. Haskell is probably awesome, but this site is
clearly not. Time to get noscript running again...

------
warmfuzzykitten
Good illustration of why unreadable text colors are kinda lame.

~~~
bcl
Perfectly readable here. Although not knowing Haskell some of the syntax is a
bit difficult to grok.

~~~
ionfish
Haskell syntax is actually pretty straightforward, once you know what you're
looking at. It's difficult at first blush because of the weird infix operator
symbols ($, ., <> etc.) but once you get used to them there's nothing too
scary.

That being said, why not post any problems you have here? I'm sure plenty of
people would be happy to help explain Haskell's syntax, and no doubt more
people than just you would benefit from that.

------
aklein
I'm thoroughly convinced Haskell is the String Theory of programming. You may
not be convinced by it, but you sure as hell have to be smart to grok it.

~~~
ionfish
Arguments of this kind are both wrong and dangerous. Becoming a good Haskell
programmer isn't easy, but then neither is becoming a good programmer
_simpliciter_.

The language itself is conceptually novel compared to imperative languages,
but the concepts themselves are neither difficult nor abstruse. Actually I
find Haskell refreshingly simple; it's a lot like Lisp that way.

I'm sure you'll agree that it's unfortunate that so many people are put off by
a subject's alleged difficulty, rather than encouraged by it. However, given
that this is in fact the case, wouldn't it be better if we concentrated on
saying how enjoyable, intellectually stimulating and useful learning certain
things was rather than discouraging people from even having a go?

------
ch0wn
You convinced me to give it a try. I'd still be interested in an opposite
view. Are there any non-ranty articles about the problems of the language?

~~~
andolanra
A sibling comment mentioned that mutation is difficult; another prominent
problem is that reasoning about time and space efficiency in Haskell is rather
unintuitive. It's easy to learn a little bit about optimizing functional
programs (e.g. through CPS-transforming your functions and taking advantage of
tail-call optimizations), and then find your Haskell program even slower
and more space-wasteful because it creates a huge number of thunks. (A "thunk" in
this context is a zero-argument function whose sole purpose is to be evaluated
later; in Haskell's case, to be evaluated when it's "needed" à la call-by-
need.) On the other hand, laziness also means you can write apparently
inefficient code that nevertheless performs incredibly well, e.g. constructing
infinite data structures.
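
The classic example of that last point (the standard fibs definition, not from this thread): an infinite list defined in terms of itself, of which only the demanded prefix is ever computed.

```haskell
-- Each element is built from the two before it; laziness means
-- only the elements we actually demand get evaluated, and sharing
-- means each one is computed at most once.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

`take 10 fibs` gives [0,1,1,2,3,5,8,13,21,34]; the rest of the list stays an unevaluated thunk.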

There are also both major and minor squabbles of various levels of importance
(e.g. Haskell got :: and : mixed up, you can't have two records with the same
field name, Bob Harper argues that laziness _itself_ is a mistake, various
people would prefer abstractions other than monads for dealing with state) but
I think Haskell's performance and Haskell's initial impedance mismatch with imperative
programming are the big problems—at least, those are the big problems I hear
people complaining about.

(Then again, I'm an enthusiastic Haskell person, so you probably should find
someone who likes the language less to give you a better opinion.)

------
alnayyir
>most classes of bugs are caught at compile time

This is false; can we stop pretending that Hindley-Milner solved the Halting
Problem?

~~~
Peaker
You don't need to solve the halting problem to solve most classes of bugs.

~~~
alnayyir
Most? No.

Some, a few? Yes.

Also, stop trolling ##C with your Haskell stuff, nobody cares.

~~~
Peaker
<http://ycombinator.com/newsguidelines.html>

~~~
alnayyir
I've been here much longer than you, don't hide behind the url.

~~~
pg
I've been here longer than either of you and he's right. You're being
needlessly abrasive.

