
Why the world needs Haskell - blandinw
http://www.devalot.com/articles/2013/07/why-haskell.html
======
codemac
Because how else would you implement Agda? Hah!

The idea of managing side effects as a type (monads) still hasn't seemed
compelling to me in terms of development time. Here I agree with Liskov in
saying that it's a bit over the top[0]. Most of the quality I see with Haskell
has to do with strict types, not the handling of I/O errors as types
themselves.

Not that I want to bash it. Learning Haskell has _actively_ changed how I
approach all my C/C++ development, and gotten me far far far into the weeds of
now learning Agda as a prototyping language for designs/semantics[1].

The world may or may not need Haskell. However it's certainly a better place
now that it has it.

[0]: She said this at a talk she gave at work. It was similar to or the same as
her "The Power of Abstraction" talk; I don't know if she makes the same comment
in every presentation though.

[1]:
[http://www.youtube.com/watch?v=vy5C-mlUQ1w](http://www.youtube.com/watch?v=vy5C-mlUQ1w)

~~~
ghswa
I don't agree that managing effects is over the top per se; however, the use
of monads feels over the top for pretty much everything!

It's always struck me as strange that values (as in a typical type system) and
effects would be controlled through the same system. The type signature of a
function and its effects seem very much orthogonal to me.

If we want to be controlling side effects then we really ought to be using a
separate effect system[1]. There's a scala plugin demonstrating this (although
I've not tried it)[2].

With separate type and effect systems I should be able to define a pure
function fib(n) and call it like fib(getValueFromUser()), that is, _without_
having to use special operators to get at a value which can only be used in
certain contexts, a la Haskell.

[1]
[http://en.wikipedia.org/wiki/Effect_system](http://en.wikipedia.org/wiki/Effect_system)

[2] [https://github.com/lrytz/efftp/wiki](https://github.com/lrytz/efftp/wiki)
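
To make the contrast concrete, here is a hedged Haskell sketch of what the
fib example above requires today (the fib definition and the readLn-based
getValueFromUser are illustrative assumptions, not from the comment):

```haskell
-- Illustrative sketch: a pure fib and an effectful argument source.
fib :: Int -> Integer
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

getValueFromUser :: IO Int
getValueFromUser = readLn  -- assumed stand-in for real user input

main :: IO ()
main = do
  n <- getValueFromUser    -- bind extracts the Int from IO
  print (fib n)            -- only now can the pure fib be applied
  -- or, point-free: print . fib =<< getValueFromUser
```

With a separate effect system, the claim goes, fib(getValueFromUser()) would
typecheck directly and the effect would simply propagate into the caller's
inferred effect.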

~~~
gohrt

        x = fib(getValueFromUser())
    

x is not pure, but fib is. OK, the compiler can figure that out without
requiring the programmer to write a special "bind" operator.

How about this:

    
    
        dofib(argumentProvider) = fib(argumentProvider())
        dofib(lambda: 1)  // pure
        dofib(getValueFromUser) // effectful
    

Is dofib pure or not? That depends on the value of argumentProvider, which
cannot be determined statically.

~~~
Silhouette
_Is dofib pure or not?_

Food for thought:

1\. The easy but limited solution is that this code doesn't compile, because
argumentProvider must have a single type/effect and you couldn't have both a
pure and an impure function with that type/effect.

2\. Is purity that important? Ultimately we care about avoiding our programs
doing unintended things, and often effects are just fine as long as they don't
misbehave in some way. Purity is a means to an end.

3\. It's fascinating to extend the ideas of generic programming from
mainstream type systems to effect systems. I suspect there is a lot of
potential benefit to be had if we can figure out how to do this without
introducing a lot of boilerplate code, in the same way that we can write code
using generic types to various degrees today but have type inference spare us
a lot of keyboard bashing.

~~~
ghswa
Effect inference has already been done:
[http://research.microsoft.com/apps/mobile/showpage.aspx?page...](http://research.microsoft.com/apps/mobile/showpage.aspx?page=/en-us/projects/koka/)

I doubt we'll see anything like this in a mainstream language anytime soon.

~~~
Silhouette
That's an interesting case I hadn't seen before, so thanks for the link.

Some of the related pages don't seem to be available at the moment, but from
what I could see it looks as if that approach still gets hung up on questions
of decidability, and doesn't have a very powerful concept of the regions where
effects apply, which has been another interesting aspect of the wider research
so far. It's good to see someone else working on the field, though.

------
tieTYT
I don't have much experience with Haskell beyond Learn You A Haskell (a book),
but I wonder how beneficial the absence of null pointer errors is in practice.

The way haskell handles this reminds me of checked exceptions in Java. In java
if you read from a file, your code won't compile unless you have a catch block
that handles the possibility of an IO exception. This is called a checked
exception because you have to check for the possibility or else your code
won't compile.

I know many Java programmers handle checked exceptions by wrapping the checked
exception in an unchecked exception so they don't have to deal with it. Don't
Haskell developers end up doing the same thing with their Maybe concept?

Haskell avoids null by using a Maybe type/class (I always forget the
terminology). A Maybe can evaluate to either a value or a marker that
represents the absence of a value. (This is an oversimplification for the
benefit of people who know nothing about Haskell.)

For example, you've got an associative map data type and you find an element
in there. At the time of writing the code you "knew" that the element "has to
be there". Haskell makes you deal with the possibility that it's not. Won't
most developers just end up throwing an exception in that case so they don't
have to deal with the impossible possibility? Then, x months from now when the
code gets changed so that the map won't have the element there, all of a sudden
your code gets an error. How is this different from a null pointer exception
in any other language?

(Part of me is ignorant and part of me is playing the devil's advocate.)

~~~
gizmo686
Generally, when you have null pointer errors, it is because the developer did
not realize that there was the possibility of having a null value at that
point in the code. Adding this information to the type system lets the
developer know exactly where the issue might occur (and have the compiler
complain if it is not handled). Also, in terms of use, Maybe has another major
advantage over null. Many times in languages with null you see the pattern

if (foo==null) {return null} else {...}

This pattern is handled automatically by Maybe: if you map a function over
Nothing (Maybe's version of null), then the result is also Nothing, even if
the function itself was not designed to handle Maybes.
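
A small sketch of that propagation (the lookup table and names here are made
up for illustration):

```haskell
import Data.Char (toUpper)

-- lookup returns Maybe String; fmap applies a plain function
-- to the result without any explicit null check.
capitalizedName :: [(Int, String)] -> Int -> Maybe String
capitalizedName users uid = fmap (map toUpper) (lookup uid users)

main :: IO ()
main = do
  let users = [(1, "alice"), (2, "bob")]
  print (capitalizedName users 1)  -- Just "ALICE"
  print (capitalizedName users 3)  -- Nothing: propagated, not crashed
```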

Additionally, as a matter of culture, Haskell programmers rarely throw
exceptions.

~~~
wereHamster
> Additionally, as a matter of culture, Haskell programmers rarely throw
> exceptions.

The writers of the unix package and other base libraries would disagree. See
for example `head`. Or most functions that call into C, where the C function
fails with an error. In that case Haskell blows up in your face, and you have
to wrap the function invocation in a catch.

~~~
Peaker
Or replace head with a better function.

~~~
epsylon
Or use pattern matching.

~~~
mercurial
Pattern matching is great, and you can usually use it instead of head, but you
want something like Control.Error.Safe.tryRead to replace other unsafe
functions like read.
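
For the curious, a sketch of total replacements using only base (readMaybe
from Text.Read stands in for tryRead here):

```haskell
import Text.Read (readMaybe)

-- A total head: the empty-list case is forced into the type.
safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead []    = Nothing

main :: IO ()
main = do
  print (safeHead [1, 2, 3 :: Int])     -- Just 1
  print (safeHead ([] :: [Int]))        -- Nothing
  print (readMaybe "42"   :: Maybe Int) -- Just 42
  print (readMaybe "oops" :: Maybe Int) -- Nothing, where read would crash
```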

------
k_bx
While I agree that Haskell is a great language and everyone should learn it
because it's fun to play with, I think the solution to the author's problems
(and a current evolutionary step) would be something like Kotlin.

It's a Java-like language (no need to manage pointers), and it doesn't have
the null-pointer problem, but it's still a high-level imperative OOP language.

While many people say that their code in Haskell is easier to read because of
its "side-effect-free" guarantees, that has seemed untrue to me for some time
now. In Haskell, when your code gets complicated (and starts to have some
patterns you want to avoid typing out), you start writing monads. And when you
start writing monads, your code gets _harder_ to read, since you need to not
only consider the code but also keep the (>>=) operator (all of them, if you
use multiple monads combined via transformers) in your head for every pair of
lines. Your code can suddenly have something like global variables (dynamic
scoping) hidden in a monad (as with the State monad), its flow can be changed
dramatically, and there are various other surprises.

~~~
mercurial
Haskell gives you referential transparency and immutability guarantees,
something you'll never get in Kotlin.

That said, I agree that Haskell code is typically dense, and suffers from
readability problems.

~~~
k_bx
Yeah, I'm just saying that referential transparency and immutability often
turn into a lie, since this code:

[https://gist.github.com/k-bx/594a415a06fdd0fc3841](https://gist.github.com/k-bx/594a415a06fdd0fc3841)

could easily put 2 different values into `a` on each line. Of course,
technically everything is still correct, and getA still returns the same
result for each call.

~~~
mercurial
Er, but that's not the same 'a', it's two different bindings with the same
name, and you should get:

    
    
      foo.hs:3:9: Warning:
        This binding for `a' shadows the existing binding
          bound at ...
    

when you compile with -Wall.

~~~
k_bx
Yes. You're free to rename the second a to b. The point I wanted to make is
that it's the same getA, but anyone reading the code should understand that
the results bound to a and b will be different.

~~~
mercurial
Depending on the monad. If you are in Reader and getA == ask, a and b will
have the same value. If you are in IO and getA == getLine, a and b may be
different.

~~~
k_bx
The whole point I was talking about is that as soon as you're in a monad,
referential transparency becomes a bit of a "lie", and a <- getA; b <- getA
becomes not just analogous to a = getA(); b = getA(); but even worse in the
"referential transparency" sense.

~~~
chongli
Referential transparency is not a lie; _getA_ is just a value. It could be
substituted for its meaning without altering the program. For example, the
following program:

    
    
        foo = do
          a <- getA
          putStrLn ("Hello, " ++ a ++ "!")
          where
            getA = do putStrLn "What is your name?"
                      getLine
    

Is equivalent to:

    
    
        bar = do
          a <- do putStrLn "What is your name?"
                  getLine
          putStrLn ("Hello, " ++ a ++ "!")

~~~
k_bx
Yes, that's exactly what I was saying two comments up this thread:

> The point I wanted to make is that it's the same getA, but anyone reading
> the code should understand that the results bound to a and b will be different.

So, while referential transparency is still in place, a programmer who reads
the code will most likely care not about getA's result, but rather about what
a and b get bound to, and in those terms it's just the same as good old

a = getA(); b = getA();

------
_pmf_
> It’s also why ruthless testing and 100% test coverage have become so
> important in mainstream languages. But even with 100% test coverage you
> can’t be sure that your code will work correctly if a function unexpectedly
> returns nil unless you’re also mocking things out or using fuzz testing.

You can write tests with 100% coverage that don't use a single assertion. Test
coverage is a completely useless metric, but it is an easy one to measure and
understand, which is why it is so popular in pseudo-QA and the management
tier.

~~~
ghswa
I frequently see people talking about an unqualified "100% test coverage". Is
it safe to assume in these cases that the author is referring to statement
coverage[1] rather than something more stringent like decision coverage or
even MC/DC?[2]

If we're talking about statement coverage, then I agree that achieving 100%
statement coverage and then calling it a day really isn't as helpful as it
might sound.

[1]
[https://en.wikipedia.org/wiki/Code_coverage#Basic_coverage_c...](https://en.wikipedia.org/wiki/Code_coverage#Basic_coverage_criteria)

[2]
[https://en.wikipedia.org/wiki/Code_coverage#Modified_conditi...](https://en.wikipedia.org/wiki/Code_coverage#Modified_condition.2Fdecision_coverage)

------
pspeter3
I think this can be generalized to functional languages as a whole. Scala
provides a lot of the same features.

~~~
tieTYT
Can you list those same features that it provides? My understanding of Scala's
philosophy is that it provides as many features as it can. A Maybe type,
immutable state, separation of IO, etc. aren't very useful if my coworker can
choose not to use them.

~~~
ionforce
Scala is like Perl in that there is a non-zero possibility of writing non-
native code. In Scala, you're supposed to favor immutability and functional
constructs, but there's nothing preventing you from treating Scala exactly
like Java. Same with Perl and C.

Also, if you need a language to dictate your coworker's behavior, that says
something more about your coworker than the language. Why doesn't he just
choose to write good code? (barring arguments regarding the activation energy
of good versus bad code)

~~~
ericssmith
I assume by "non-native", you mean "non-idiomatic"?

Since I recently went through coworkers writing "bad" (by which I mean non-
idiomatic) Scala, and repairing the situation, I can say that it was
surprisingly easy to lay down "functional constructs" and the result was
amazing. The code became reliable, well-structured, and easy to leverage.
Switching to Scala for our projects -- including introducing procedurally
minded programmers to it -- was a huge win for us.

~~~
pspeter3
Yeah, we discovered that for my team's projects as well. Having a Scala "guru"
define and teach idiomatic Scala has actually increased our productivity a
lot. Any non-idiomatic Scala is now purely for interacting with our existing
Java libraries.

~~~
ericssmith
FWIW, the constructs with the biggest initial win for us (in converting
procedural thinking) were pattern matching and option types. For pattern
matching, the guidelines were:

\- Don't do any processing after the pattern match within a function; break
into smaller functions

\- Reach for pattern matching before if/else

\- Avoid the use of var

\- Avoid dropping through to wildcard pattern. (This tip was from Yaron Minsky
of OCaml fame)

In conjunction with preferring the use of combinators over loops, and using
Option (and being forced to think about None), we got surprisingly functional
Scala code in a short time.

------
platz
Haskell is a puzzle language
([http://prog21.dadgum.com/38.html](http://prog21.dadgum.com/38.html)).

I intend to learn some Haskell, but I keep wondering if it might be more
useful to spend some time with Clojure or F#.

~~~
spullara
People who program because they like puzzles, puzzle me. I like things that
are straightforward and obvious when I program.

~~~
merijnv
> I like things that are straightforward and obvious when I program.

So do I, which is why I program in Haskell rather than other languages, which
make me worry about a ton of irrelevant things like the global state of my
program and don't let me enforce invariants of my code in the type system,
invariants I then can't violate without making the type checker complain.

------
vinkelhake

        boost::optional<User> get_user_by_name(string);
        boost::variant<string, User> get_user_by_name(string);
    

Optional is making its way into the standard library.

~~~
chongli
Right, but how does that give me any assurance that the value contained within
the optional is itself not null? Haskell gives me that assurance.

If all pointers can be null, then every pointer type is an implied optional
type. Haskell's advantage is that it allows us to define types which _cannot_
be null.
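
A minimal sketch of the point (the User type and findUser are invented for
illustration): every User is a real value, and possible absence lives only in
Maybe:

```haskell
-- A User can never be null; absence must be explicit in the type.
data User = User { name :: String } deriving (Show, Eq)

findUser :: String -> Maybe User
findUser "alice" = Just (User "alice")
findUser _       = Nothing

-- Total function: no null check is needed, or even expressible.
greet :: User -> String
greet u = "Hello, " ++ name u

main :: IO ()
main = do
  print (fmap greet (findUser "alice"))    -- Just "Hello, alice"
  print (fmap greet (findUser "mallory"))  -- Nothing
```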

~~~
vinkelhake
An optional<T> has two states: it either holds a T or nothing. There's no
third state.

All pointers can be null, but unlike (for example) Java, we don't have to deal
with pointers to objects all the time. C++ has value types and it's perfectly
possible to pass objects that cannot be null.

~~~
chongli
_An optional <T> has two states: it either holds a T or nothing._

But T could be a pointer to a valid object or a null pointer, or am I wrong?

 _C++ has value types and it 's perfectly possible to pass objects that cannot
be null._

Only if those values live on the stack. What about heap objects?

~~~
ajuc
It's possible, no matter where the object lives:

    
    
        #include <cstdio>
        
        class Type {
          public:
            Type(int a) : _a(a) {}
            int _a;
        };
        
        Type& getType(int a) {
          // heap-allocated, but returned by reference, so callers
          // never see a (possibly null) pointer
          Type* t = new Type(a);
          return *t;
        }
        
        Type& doSthWith(Type& t) {
          // no need (and no way) to check if t is a reference to a proper object
          t._a = t._a + 1;
          return t;
        }
        
        int main() {
          Type& t = doSthWith(getType(5));
          printf("%d\n", t._a);
          return 0;
        }
    

This can be broken of course, for example by

    
    
        Type& getType(int a) {
          Type* t = NULL;
          return (*t);
        }
    

But that's another matter.

~~~
chongli
Right. What makes Haskell special is _not_ that it has option types; it's the
fact that most types in Haskell cannot be null. This allows us to write pure
(and total) functions, knowing that the compiler will enforce it, giving us a
high degree of assurance that our function will not fail.

------
Confusion
Even in Haskell, you still need to test your code. The type system doesn't
prevent

    
    
      double = (3*)
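
Indeed; the definition is well-typed but wrong, which is exactly where a test
(here a hand-rolled property check, no extra libraries assumed) steps in:

```haskell
-- Well-typed but wrong, as in the comment above:
double :: Int -> Int
double = (3*)

-- The property the type checker cannot express:
prop_double :: Int -> Bool
prop_double n = double n == n + n

main :: IO ()
main = print (all prop_double [-10 .. 10])  -- False: the test finds the bug
```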

------
skybrian
I skimmed so perhaps I missed something, but it looks like this is just the
usual argument in favor of functional programming. I don't find it convincing
because there's more to programming than getting the right output. Performance
and memory usage are also important, and Haskell is extremely opaque in that
respect. A strict language with optional laziness makes it much easier to
reason about performance.

------
re_todd
Reading Ruby code makes me happy, reading Haskell code makes me want to pull
all my hair out. Enough said.

~~~
reitzensteinm
"I love playing the trumpet, but I grabbed a violin and it sounded like a cat
dying. Enough said."

------
dschiptsov
Haskell zealots use the same nonsensical arguments that Java used, claiming to
be a "safe" language where the compiler and static typing "eliminate common
bugs". This is a naive meme that almost everyone repeats to each other.

First of all, no static typing system, however sophisticated, can protect you
from incompetence, lack of knowledge of the underlying principles, or plain
stupidity. The dream that idiots will write decent code will never come true,
no matter how clever the tools, simply because it is _impossible_ to write any
respectable code without understanding the hows and whys.

But for those who have managed to understand the core ideas and concepts on
which programming languages are based (immutability, passing by reference, the
properties of essential data structures such as lists and hash tables), it is
possible to write sane and reasonable code even in C, let alone CL or Erlang,
and the type system becomes a burden rather than an advantage.

So, Haskell is really good for mastering Functional Programming (which is much
better learned through the old classic Scheme courses) and for understanding
the ideas it rests on: what function composition, environments, currying,
partial evaluation, and closures are, why and _when_ they are useful and
handy, and how clear and concise everything (syntax and semantics) can be if
you just stop there and skip the part about monads, which are just over-hyped
fanciness to show off.

Learning Haskell _after_ Scheme/CL really clarifies one's mind with
realizations of how the same foundational ideas work in an alien (statically
typed) world, and how everything is clean and concise until you start messing
everything up with "too advanced typing".

Again, it is much better to learn the underlying ideas (why it is good to
separate out and pay special attention to functions that perform IO, what
recursive data structures are, and _why_ null pointers exist in the first
place) than stupid memes like "monads are cool" or "Haskell prevents bugs".

The trick is that dynamic languages with proper testing (writing tests before
code) are _not_ worse than this "static typing safety", and that the very word
"safety" is just a meme.

~~~
tome
The benefit of Haskell for me is not that it allows "idiots" to write decent
code, but that it allows very smart people to write decent code.

It's much harder (in my experience) for very smart people to write decent code
in C, C++, Python, etc. than in Haskell, simply because so much of their
smartness is consumed by having to constantly think about what might break
their program.

~~~
dschiptsov
Thinking constantly is part of the craft.)

~~~
tome
Absolutely, which is why the compiler should remove as much thinking burden
from the programmer as possible. Then he/she has more free brainpower to spend
on other important things.

~~~
dschiptsov
Exactly the same rhetoric about evil pointers and the safety of static typing
and compiler technology is what gave rise to Java.

Haskell, it seems, while using the same slogans plus the FP and laziness
buzzwords, really does do the job when you follow its conventions.

Actually, the part which re-implements standard FP techniques with a very
clever and concise syntax based on familiar conventions (currying, laziness),
along with the ability to write "curried type signatures", is remarkable. It
feels much better than SML.

It is definitely a language worth learning _after_ Scheme/CL.)

~~~
dasil003
This is such a bone-dry straw man that it's liable to spontaneously combust in
the summer sun.

The fact that corporate overlords selling Java made certain overhyped promises
does not have any bearing whatsoever on the merits of Haskell.

Let's break it down: Java fixed memory management at a significant performance
cost, but it didn't solve null references, and in practice it was a big enough
advancement to actually get a foothold against C++.

Haskell, on the other hand, actually solves null references as well, so the
bugspace it eliminates is easily an order of magnitude bigger than Java's, and
it does so without being particularly slow or verbose.

Now, as to the rest of your argument: of course it's true that FP principles
can be applied anywhere, and given the right discipline you can achieve many
of the same benefits through careful architecture and coding practices.
However what you're dismissing is the value of the compiler guarantee, and
that should not be minimized. Of course we can look at any code sample and
reason about how it should be structured to minimize side effects. But the
problems with mutability and null references are endemic to large systems, not
isolated features. Where Haskell's guarantees start to shine is when your code
base goes over 100kloc and no one person understands the whole thing anymore
and the preconditions informing the design no longer apply and things have
been hacked from a dozen different perspectives. In this case those guarantees
have non-trivial value.

