

What was your "ah ha" moment with Haskell? - dons
http://www.reddit.com/comments/vi425

======
supersillyus
My "ah ha" moment with Haskell was after a few years of using it quite
regularly, I realized that it wasn't actually making me more productive in the
kind of code I actually write from day-to-day. It's a lovely language and I
wouldn't discourage anyone from using it, but for my purposes I realized it
was more exciting than useful at some point, and after that I haven't been
back to it as much.

~~~
CoffeeDregs
Agreed. It's so beautiful, but so constraining. I'm a much better developer
for having used it, but returned to Python/Ruby/Java[for-speed] after spending
lots of time with Haskell.

That said, I _desperately_ miss static typing and Hindley-Milner type
inference... I keep searching for the perfect language.

~~~
srean
Then I'd be interested in your opinion of OCaml/MLton, F#, and Scala. To
me they seem like a good balance. If you're more adventurous, try Felix.

EDIT: apparently someone did not like your comment. Some downvotes confound
me.

~~~
CoffeeDregs
Would that downvotes interested me...

To your questions:

OCaml: seems like a hackish [S]ML. There's a nice comparison between SML and
OCaml here: <http://adam.chlipala.net/mlcomp/>. I like SML's syntax, but
OCaml makes it too easy to be imperative and seems too hackish. Most of the
OCaml I've seen looks like weird C, but written in OCaml because it's F4ST3R.

F# : I run Linux... next!

Scala*: I can't stand it. I use Python for everyday coding and I really don't
like the philosophy behind str(), len(), and friends, but it's otherwise
straightforward. There's pretty much only one [reasonable] way to do things in
Python. Scala seems like the evil lovechild of Perl and SML. Classes and
"case classes"? WTF? Type inference, but not powerful type inference? If
you're going to move to Java++ without going too far toward Haskell, then Gosu
or Mirah seem like better compromises. That said, I've only investigated Scala
[not coded in it], so my griping is likely due to a lack of familiarity.

SML/MLton: the syntax is 95% good, but they should have embraced significant
whitespace wholeheartedly. Do they really need an "end"? But, generally, I
really like SML's thinking. In particular, I'm rooting for SML by following
Yeti (<https://github.com/mth/yeti>; but "case" is closed with "esac",
really?!) and Roy (<http://roy.brianmckenna.org/>). Oh, and I hate header
files. Sooooo 1995...

Clojure: static typing. I want to believe, but the lack of static typing
(including Hindley-Milner type inference) seems like a shortcut. I think the
static/dynamic typing argument is a relic of the pre-good-static-typing era,
and I don't think that big, server-side languages should be dynamically
and/or weakly typed. That said, Stuart, and his hair, are great.

Felix: interesting, but I see no mention of type inference, so have concerns
about the type system. Also, the wiki is broken and that makes me think "dead
project".

But, unfortunately, I want mature toolchains, libraries, etc., so, though I
wrote a mid-sized web framework in Haskell, I'm one of those who is waiting
for a functional language to emerge as the winner. Until then, I'll work in
Python and will support Yeti and Roy.

* I've forgotten where I saw it, but Scala also had some bizarre rules around interpreting variables in case statements [or something] involving the case of the argument. I closed the book at that point. Haskell has special notations for special features, not special assumptions for normal features.

~~~
mahmud
I don't think you understand either ML or Lisp. SML would never make
whitespace significant because: 1) that's a brain-dead choice, and 2) ML, much
like Lisp, is used as a language, as a notation, and as a "kernel" language.

Clojure without dynamic-typing is bath-water without baby. What is the point
of interactivity, homoiconicity and macros if the language is statically
typed?

~~~
hencq
Is there any specific reason why macros can't work in a statically typed
language? How does Typed Racket handle this? Or does it not have macros?

~~~
heretohelp
Typed Racket is more like type annotations with a checking routine than
anything; if you're writing a macro, you want generality.

You achieve this by omitting the type annotation and reverting to normal
Racket.

You can do macros in a statically typed language, but it would (and does) get
ugly fast.

------
lkrubner
Off topic, but my "ah ha" moment with both Clojure and Lisp was this blog post
by John Lawrence Aspden:

[http://www.learningclojure.com/2010/09/clojure-faster-than-m...](http://www.learningclojure.com/2010/09/clojure-faster-than-machine-code.html)

He managed to get a statement to go as fast as the JVM could possibly go, and
he did this by getting the code to write code (the code added type casts to
every variable, which apparently gave the JVM the info it needed to optimize
like crazy). And there is no way to do that without hard-coding, and if you
don't know what kind of data you are going to get, then obviously there is no
way to hard-code anything. In other words, this kind of stunt can only be done
in a language that allows this kind of code-that-writes-code.

~~~
batista
> _In other words, this kind of stunt can only be done in a language that
allows this kind of code-that-writes-code._

On the other hand, this kind of stunt will still be a stunt, not a common use
case of the language.

------
danieldk
There were far too many 'ah ha' moments. So, I'll take one that I don't hear
too often ;). Suppose that you are working on FFI code:

    
    
        do
          x <- malloc
          poke x myCDouble
          return x
    

My C reflex was: 'I have to specify how much memory I want to allocate, but
malloc doesn't take an argument, what the heck?'. Obviously, since Haskell has
proper type inference, it can deduce that _x_ is a pointer to a CDouble, and
has no trouble allocating the proper amount of memory. But for a moment I was
thinking it can read my mind :).
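
A self-contained version of the snippet above, compilable with only base's
Foreign modules (the value 3.14 stands in for myCDouble, which isn't defined
in the comment): the type at which `poke` is used fixes `x :: Ptr CDouble`,
and `malloc` allocates the right amount from that type alone.

```haskell
import Foreign.Marshal.Alloc (malloc, free)
import Foreign.Storable (peek, poke)
import Foreign.C.Types (CDouble)

main :: IO ()
main = do
  x <- malloc                -- allocation size inferred: x :: Ptr CDouble
  poke x (3.14 :: CDouble)   -- stands in for myCDouble from the snippet
  v <- peek x
  print v                    -- prints 3.14
  free x
```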

------
jiggy2011
I studied Haskell at university, but there was very little explanation of why
it would be useful, or of the fundamental difference between it and a standard
imperative language (or maybe there was and I skipped that day).

At that point I assumed it was simply a language invented by academics in
order to torture undergrads.

It wasn't until a bit later, playing around with things like Python and
JavaScript and using closures/lambdas, that I realized I could use some of the
functional ideas drilled into me when doing Haskell to write simpler code.

Now when I go back to Java I often get frustrated at the amount of code I
have to write simply to work around the fact that functions are not first-
class objects.

~~~
spacemanaki
> Now when I go back to Java I often get frustrated at the amount of code I
> have to write simply to work around the fact that functions are not first-
> class objects.

Frankly, I've mostly gotten over Java's lack of first-class functions and just
make do with the boilerplate of regular for-loops, or with anonymous
implementations of interfaces that are just stand-ins for functions, etc. What
I really struggle with nowadays is Java's (relative to Haskell's) weak type
system. I'm only a novice Haskell programmer, but even so I miss things like
Maybe, Either, tuples, etc., whose absence can be really painful to work
around in Java. I've realized I would rather work in a dynamically typed
language or a strongly statically typed language than the wretched mess that
is Java.

~~~
jonsterling
Remember, Maybe and Either are not language features in Haskell, but rather
library features! Java is totally capable of hosting them. (Though, there
isn't any special syntax for tuples in Java).

~~~
spacemanaki
But if you implemented Maybe in Java, there still wouldn't be any compile-time
guarantee that Nothing would be handled, right? At best you'd have something
in code that more strongly encourages a certain convention.
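
For contrast, a minimal sketch of what that guarantee looks like on the
Haskell side: compiling the definition below with GHC's -Wall (which includes
-Wincomplete-patterns) produces a warning at the definition site if the
Nothing case is left out.

```haskell
-- With -Wall, GHC warns here if the Nothing equation is omitted.
describe :: Maybe Int -> String
describe (Just n) = "got " ++ show n
describe Nothing  = "nothing here"

main :: IO ()
main = do
  putStrLn (describe (Just 42))
  putStrLn (describe Nothing)
```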

Just now I briefly looked at implementations of Maybe in Java and at least one
of them fakes this with checked exceptions, while another seems to lean
heavily on Guava's Function interface, which is just awkward to use. I'd
certainly be interested in seeing alternative implementations, if you know of
any that are decent.

Also, a perhaps more serious issue is that since it's not idiomatic Java,
you'd have to do a lot of wrapping around libraries, and a lot of hard-selling
to colleagues. Admittedly that's no longer a flaw in Java's type system, but a
big part of a language is the community and ecosystem, and in Haskell's case
the people using it have already bought into the advantages of stronger types.

~~~
jonsterling
There's also no compile-time guarantee that Nothing is handled in Haskell;
there are plenty of unsafe partial functions. In a non-total language, you
unfortunately have to avoid partial functions without help from the compiler.
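
A small sketch of the point: partial functions like head (or fromJust) fail
only at runtime, with no warning at the call site, whereas a total safeHead
pushes the empty case back to the caller through Maybe.

```haskell
import Data.Maybe (fromMaybe)

-- head [] crashes at runtime; this total variant cannot
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

main :: IO ()
main = do
  print (fromMaybe 0 (safeHead ([] :: [Int])))  -- caller handles the gap
  print (fromMaybe 0 (safeHead [7, 8]))
```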

You're quite right that programming with these kinds of types in Java
introduces you to a whole new kind of Hell! I'm not sure I'd recommend it.

However, I'm simply pointing out that types like these are _not_ language
features. Some languages may be better suited to them than others, but they
are definitely library features, and had best not be considered otherwise.

------
neutronicus
I have yet to have my big ah-ha moment for how to do what I do (scientific
computing) in Haskell.

I recently found myself needing a proof-of-concept implementation for solving
a bunch of big tridiagonal matrices in parallel using MPI. I thought to myself
"here's an opportunity to use Haskell!", but I must confess I'm rather stumped
as to how one goes about allocating some memory, banging on it, communicating
a subset of it to other processors, reading a buffer from another processor,
and then banging on the memory I allocated before some more, based on what I
got back from the other processors.

Does one actually attempt to control the machine with this level of
granularity with Haskell? Can one actually get any mileage out of the type
system doing this sort of thing? Or am I just trying to fit a square peg in a
round hole?

~~~
bos
In my experience, Haskell is not yet a great language for numerically
intensive computing.

I'll explain in a bit, but before I do, let me first address your question
about "can I bang on bits?". Yes, you can allocate memory and do all the low-
level hacking you please in Haskell. It's not really any harder than in C,
although the notation is different and that throws people. But because this is
a very imperative way of programming, it's also not going to be any faster
than C (typically it'll be a bit slower).

There are even MPI bindings for Haskell, and they look pretty much the same as
for other languages (i.e. very low level).

If you're just foontling around imperatively in big homogeneous arrays and
sending messages, then the type system really won't do you any good, and
you'll rightly find yourself wishing for the notational convenience and speed
of Fortran 95.

You could use immutable arrays (the Vector type is your friend) and higher
order functions instead, and thereby benefit from Haskell's rather nice
parallel evaluation support with only a little effort. This is quite
practical, and can lead to pretty code that runs quickly.

Where all the fancy type-related machinery comes into play for numeric code is
still largely a matter of research. There are interesting projects underway
for parallel programming (both on CPUs and GPUs) that rely heavily on the type
system. They're not obviously useful for real work yet, and since they rely on
advanced type system features, neither are they something you just pick up and
use as a newbie. Nevertheless, I think they're pretty cool projects, and I
have been watching them for a few years.

So while there's a lot of interesting stuff going on, the current state of
affairs is somewhat mixed. You might enjoy learning your way through it,
though; there are many rewards to the path.

~~~
dbaupp
_> There are interesting projects underway for parallel programming (both on
CPUs and GPUs) that rely heavily on the type system_

Are you referring to Accelerate[1]? And isn't Repa[2] stable and usable?

[1]: <https://github.com/AccelerateHS/accelerate/>

[2]: <http://repa.ouroborus.net/>

~~~
danieldk
Unfortunately, repa and vector are sometimes a few times slower than their
counterparts in C or C++ due to the lack of SIMD intrinsics. Thankfully,
things are moving forward fast on that front:

<http://ghc-simd.blogspot.com/>

~~~
neutronicus
repa also has some unfortunate lacunae, either in the API or in the
documentation.

For instance, one thing I tried to do but couldn't - apply `scanl` to an array
with a piece of intermediate state that is threaded through the computation,
save the piece of state at the end, and then apply `scanr` to the same array,
using the piece of state from the application of `scanl`.
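
In plain Haskell (lists rather than repa arrays), the pattern described
above, a scan that also hands back its threaded state at the end, is roughly
Data.List.mapAccumL. A minimal sketch with a running sum as the state:

```haskell
import Data.List (mapAccumL)

-- The accumulator is threaded left to right; the final accumulator is
-- returned alongside the transformed list, ready to seed a second pass.
scanWithState :: [Int] -> (Int, [Int])
scanWithState = mapAccumL (\acc x -> (acc + x, acc + x)) 0

main :: IO ()
main = print (scanWithState [1, 2, 3, 4])  -- prints (10,[1,3,6,10])
```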

------
opminion
The first example of parametric polymorphism, as introduced by a good teacher.

I wasn't even attending the class proper, but taking notes for a deaf student
as a paid job.

------
flink127
My "ah ha" moment with Haskell was when I ragequit for the 23rd time and
decided that Haskell is probably not for me.

~~~
ufo
That's also what I did when I realized that the "return" function isn't
actually for returning values.

But then I came back for the 24th time and got hooked. Now I'm doomed forever.

~~~
jonsterling
:) What an unfortunate function name. But "pure" isn't much better...

~~~
pja
I always thought it ought to be "inject" or something. "return" is just
outright confusing to novices & just encourages them to believe that do ...
return notation has something to do with imperative programming. Which it does
of course, but only by the most circuitous of routes.
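
A tiny sketch of why the name misleads: return just wraps a value in the
monad and has no control-flow effect, so a do block runs right past it.

```haskell
-- return does not exit the do block; it only injects a value
demo :: IO Int
demo = do
  _ <- return (1 :: Int)  -- no early exit happens here
  return 3                -- the block's result is its last expression

main :: IO ()
main = demo >>= print     -- prints 3
```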

My embarrassing Haskell moment was how long it took me to realise that I could
never get Arrow notation to work because the first argument to an arrow was
the "arrow type" bit, so I was always trying to pass the wrong number of
arguments to (***) and friends. Took me ages to get over that hump.

------
sordina
I didn't have a single "ah ha" moment, but things became much clearer when I
realised that although it was possible to have heterogeneous collections
through the use of type-classes, what I nearly always wanted was to create a
new data type with a constructor for each behaviour I was interested in
encompassing.

The fact that data-types are so cheap, both syntactically and computationally,
really frees you up from having to worry about not creating them. I did go too
far the other way for a while and created new types for everything. There
exists a happy middle ground, but it's hard to define where exactly it lies.
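
A minimal sketch of the idea: instead of a heterogeneous collection over a
type class, a single data type with one constructor per behaviour, which
pattern matching then dispatches on (Shape and area are illustrative names,
not from the comment).

```haskell
-- one constructor per behaviour we want to encompass
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = print (sum (map area [Circle 1, Rect 2 3]))
```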

------
jamesbritt
I was converting a somewhat complex Haskell example into Ruby for a
presentation and kept running into neat concise Haskell expressions that I
could not easily express in Ruby.

It got me thinking that Haskell might be a better Ruby.

------
clux
Reading the "Blow your mind" wiki page was ah-ha overload:
<http://www.haskell.org/haskellwiki/Blow_your_mind>

------
shriphani
I really like this:

    twoK = 1 : map (2*) twoK

I still have a long way to go though - I can't write a lot of it without
getting stuck and giving up. It would be really nice if I were comfortable
with Haskell.
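
For anyone puzzling over the one-liner: twoK is an infinite list defined in
terms of itself, and laziness means only the prefix you demand is ever
computed.

```haskell
-- the powers of two, defined corecursively: each element doubles the last
twoK :: [Integer]
twoK = 1 : map (2 *) twoK

main :: IO ()
main = print (take 6 twoK)  -- only six elements are forced
```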

------
vitomd
My ah ha moment was when I found this tutorial: <http://learnyouahaskell.com/>,
because it was really fun and I learned a lot. Before that, I was trying to
learn from <http://www.haskell.org/tutorial/> and it was so boring that I
almost quit.

------
meowzero
It wasn't Haskell but another functional programming language (Scala). I got
my major "ah ha" moment when I did a code review with another experienced
functional programmer. It showed me that functional programming is a
different paradigm.

------
tonetheman
It's already been said, but my ah ha came after reading quite a bit about it:
I realized I was incapable of understanding it and moved on. :)

~~~
djhworld
Reading won't get you anywhere, unfortunately.

The key to Haskell is doing: writing code. You'll have many 'ah-ha!'
moments after that.

Also, I think the major problem with Haskell is that A LOT of the online
content out there is aimed primarily at academia, and quite honestly it
baffles me too.

If you stick with LYAH and RWH and...just doing code, then it becomes a lot
more usable and fun to program in.

------
verroq
Learn Haskell to see how things could be done. Don't use Haskell for things
that should be done.

