Hacker News new | comments | show | ask | jobs | submit login

Whoa, as a Python guy, this makes me not want to learn Haskell at all. The syntax seems as bad as Perl's.

That's because this is a very rough, "top-down" description of the language. If, instead of trying to think of Haskell as an inferior Python--which it definitely isn't--you learned it from fundamentals up, all of the syntax would make much more sense.

It's like math: if you just saw something like a triple integral over a weird region with little fiddly symbols everywhere, it wouldn't make sense. If you learned about them starting with the basics, the logic would be clear and elegant.

The other thing is that all the "do" stuff and "fish operators" are actually really clever and really high-level abstractions--they are basically a unified language for describing all sorts of computations, not just ones with side effects. For just reading most code, this isn't important; for understanding the elegance of Haskell, it is.
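To make that concrete, here is a small sketch (my own example, not from the article) of the same `>>=` operator describing two very different computations: failure in `Maybe` and nondeterminism in lists.

```haskell
-- The same (>>=) works for failure (Maybe) and for
-- nondeterminism (lists), not just for IO side effects.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (Just 10 >>= safeDiv 100)        -- Just 10
  print (Just 0  >>= safeDiv 100)        -- Nothing
  print ([1,2,3] >>= \x -> [x, x * 10])  -- [1,10,2,20,3,30]
```

One operator, three meanings, all obeying the same laws -- that is the "unified language" being described.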

If, instead of trying to think of Haskell as an inferior Python--which it definitely isn't--you learned it from fundamentals up, all of the syntax would make much more sense.

This is essentially what I took from the article.

However, I still agree with the OP.

If you are already familiar with the language syntax, grammar, idioms, and so forth then it seems easy and trivial. I find a lot of Haskellers confused as to why anyone finds the language difficult and opaque. Many tutorials and articles are written to convince people that Haskell is easy to learn, I think, because they are familiar with it and desire others to be familiar with it too. Once one is familiar with the language it doesn't matter how dense and difficult it is.

The truth is that Haskell is difficult to learn. One does have to start learning from the fundamentals up because the fundamentals of Haskell are so different from everything else that it's difficult to relate to the previous experiences of most people.

Contrast this with someone who learned C or some dynamic scripting language like Python from the ground up: there are a lot of languages out there with enough shared grammar and vocabulary that they can reach for them without entirely abandoning their previous notions about how programming is done.

Haskell may not exactly ask for one to abandon their notions of computing conceptually but I think it is fair to say that it does ask you to give up all of your previous knowledge of syntax and grammar in order to learn it.

I think there are two different questions here. I agree that the semantics of operators such as >>= is a Good Thing and makes Haskell a Better Language Than The Competition, but I still think that the grammar and syntax (such as the significant whitespace rules, or the fact that <- and -> do totally different things) is poorly chosen.
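For readers unfamiliar with the distinction being drawn here, a minimal illustration (my own sketch): `->` appears in types and lambdas, while `<-` is do-notation syntax that binds the result of a monadic action.

```haskell
-- '->' appears in types and in lambdas:
double :: Int -> Int     -- a type
double = \x -> x * 2     -- a lambda

-- '<-' only exists inside do notation, binding an action's result:
main :: IO ()
main = do
  msg <- return (show (map double [1, 2, 3]))
  putStrLn msg   -- prints [2,4,6]
```

Two visually similar arrows, two unrelated jobs -- which is exactly the kind of choice the comment above is objecting to.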

Haskell is as hidebound to the ML school as Perl is to the curly-brace school, and they both suffer for it.

Trust me, it's better. Perl achieves conciseness by using global metavariables (e.g. $_, $/, etc.) which are silently updated by functions and that you have to keep track of. Everything here is explicit; there are just lots of symbols...

Could you give examples of these silent updates in Perl?

I'm by no means a Haskell expert, but one of the worst parts of learning Haskell (or reading arbitrary code) is that there's no special syntax for partial application. Certainly that's a benefit in many cases if you're familiar with the signatures of the functions you're using, but it's definitely not explicit.

Consider 'chomp', which to a first approximation removes trailing whitespace from a variable. If you don't pass it any variables, it modifies the $_ variable. Better hope you know what's in there. Furthermore, its behavior depends on what $/ is set to; if you set $/ to something else, it will do something different. Run 'perldoc perlvar' for more examples.

Lack of special syntax for partial application can be confusing for a lot of beginners; it certainly is very confusing if you're implementing, say, the continuation monad. But I think a lot of people overestimate the extent to which partial application appears in normal code: usually you fill up all the arguments except the last one, which is shuttled in via =<< or something similar. You don't have to think too hard about it, because the typechecker will make sure you've put all the functions together properly.

Partial application also tends to be very useful for writing functions in a clear "point-free" style--that is, without naming arguments; it is also very nice for higher-order functions like map and filter.
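A small sketch of what that looks like in practice (my own example): both functions below are defined point-free, by partially applying `filter` and `map`.

```haskell
-- Point-free definitions: no argument is ever named.
keepEven :: [Int] -> [Int]
keepEven = filter even     -- 'filter' partially applied

addTen :: [Int] -> [Int]
addTen = map (+ 10)        -- 'map' and '(+)' partially applied

main :: IO ()
main = print (addTen (keepEven [1 .. 6]))  -- [12,14,16]
```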

It's because you think of functions differently in Haskell. They're not running processes which are delayed by partial application but instead partially constrained unifications.

Which is sort of worse sounding, but it becomes quite natural once you start writing using types. I think of it like using legos.

    makeLegoman :: Legs -> Torso -> Head -> Legoman
    makeLegoman some_legs :: Torso -> Head -> Legoman
    makeLegoman some_legs some_torso some_head :: Legoman
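A runnable version of that sketch might look like this (the types here are placeholders I've invented to stand in for real ones):

```haskell
-- Placeholder pieces for the Lego-man example.
data Legs    = Legs    deriving Show
data Torso   = Torso   deriving Show
data Head    = Head    deriving Show
data Legoman = Legoman Legs Torso Head deriving Show

makeLegoman :: Legs -> Torso -> Head -> Legoman
makeLegoman = Legoman

-- A partially applied builder: legs and torso snapped on,
-- still waiting for a head.
needsHead :: Head -> Legoman
needsHead = makeLegoman Legs Torso

main :: IO ()
main = print (needsHead Head)   -- Legoman Legs Torso Head
```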

Heh, I really like the Lego man example. I'm going to have to use it in the future.

Regex capture is the most obvious one. The $N variables are updated silently. You just don't think about it because it's so central to Perl.

There are also many things that update $_.

There is no special syntax for partial application because in Haskell partial application is always possible.

You can call the function

   foo :: a -> b -> c
like this:

   bar = foo a
or like this:

   bar = foo a b
In the first case bar will be a partially applied function of type

   bar :: b -> c
and in the second case it will simply be a value of type c.

As far as I know, this is actually what happens in the background. I mean that calling

   bar = foo a b
actually gets converted to

   bar = (foo a) b
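That currying can be sketched as a runnable example (names and numbers are mine):

```haskell
-- Every function takes one argument at a time, so stopping
-- partway through just yields another function.
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

addFive :: Int -> Int    -- add3 applied to two of its three arguments
addFive = add3 2 3

main :: IO ()
main = do
  print (addFive 10)      -- 15
  print ((add3 2) 3 10)   -- 15: add3 2 3 10 parses as ((add3 2) 3) 10
```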

There is no special syntax for partial application because in Haskell partial application is always possible.

I am aware of this. This is the source of some of my frustration reading Haskell code. I know that you can think of functions as single-arity first-class entities, but I've never been a mathematician and I've never thought that way.

I don't have enough experience maintaining large Haskell programs over a long period of time to know, but I suspect that this layering approach might encourage the kind of convolution you occasionally find in CL or Smalltalk programs where it's easy to build a teetering tower of the wrong abstractions. Perhaps there's something different about Haskell that encourages a more careful code curation, but I don't know.

Alternately, perhaps it's just a flaw in the way I think about Haskell that I want a visual distinction between partial application and calling a function.

How about this:

      print $_
I don't remember the exact syntax, but the tr acts on $_ by default, and $_ starts out as each element of the array.

In map, $_ is a localised reference. Thus it doesn't clobber $_

    my @array = qw/one two three/;
    $_ = "Hello";
    map { tr/a-z/A-Z/ } @array;

However, remember it's a reference, so using tr// will update @array...

  say "@array";    # => ONE TWO THREE

It's not actually syntax. The language allows you to define infix operators, and they're just functions like any other with full documentation searchable on hoogle: http://www.haskell.org/hoogle/

> It's not actually syntax. The language allows you to define infix operators, and they're just functions like any other ….

`<-` really is syntax: http://www.haskell.org/hoogle/?hoogle=%3C-

Yup, I was only talking about the "fish operators" as he calls them.

As both a Python and Haskell guy, trust me, it's not. The syntax shown here is misleading - in practice, Haskell code is cleaner than presented, and some of the article's code is contrived for the sake of example. However, do be aware that languages like Python and Ruby are, by design, friendlier to easy reading. Haskell is purely functional, and most bindings are immutable. Definitions are also often combined into multiple clauses, which reduces the number of variables (as they're immutable, among other reasons). Don't get turned off because of the syntax. Haskell and Python are useful for different things.
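As a sketch of what "combined definition clauses" means (my own example): one function written as several equations with pattern matches and guards, with no variable ever reassigned.

```haskell
-- One definition, several clauses; no mutable state anywhere.
describe :: Int -> String
describe 0 = "zero"
describe n
  | even n    = "even"
  | otherwise = "odd"

main :: IO ()
main = mapM_ (putStrLn . describe) [0, 1, 2]  -- zero, odd, even
```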

Haskell and Perl are actually incredibly close in essential ways.

More literal and decoupled languages like Python or Lisp (or C) mostly work from the inside out in a predictable way where the results of a function don't depend on how or where you use it. Sometimes you couple to a global variable or something but usually that's bad practice and you can be careful to avoid it when it isn't necessary.

Haskell and Perl depend on type systems and context to determine what a function is supposed to do. The expression that receives the result of a function has to be interrogated to determine what the function might do and that can continue recursively. Perl uses $ and @ and Haskell gives you a lot more possibilities than that, but most Haskell depends on the local versions of @ and $ because those are the best contexts, as lwall knew. The context focus can be a nightmare or it can be a perfect concise crystal of exactly what you want but it isn't exactly the acme of separation of concerns. Haskell gives you a lot of rope to hang yourself here.

And then there's the way Haskell and Perl both love their proliferation of syntax, mostly sugar, and special symbols that make them look like line noise until you're an insider at the language club. And even then, they can still be line noise but you aren't supposed to admit it.

(Note: I've written Perl and Haskell programs but not used them for significant projects. Mostly I'm stuck with C++ and Javascript for the same reasons you probably are.)

There's an important difference you are missing here, though, which is that even if you don't know what specific type context a function is being used in, you still have some very precise information about what the function may do. For example, you may not know what 'mzero' is going to do in this type context, but you do know that it has some specific properties (e.g. mzero >>= f === mzero) and can extrapolate from there.

You'll have similar problems in dynamic dispatch systems: what does this object do? It could do so many things. In Haskell, we demand that the object always follow a certain set of rules. This is very useful.
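The law quoted above can actually be checked directly; here is a small sketch doing so in two different MonadPlus instances (the surrounding code is mine):

```haskell
import Control.Monad (mzero)

-- mzero >>= f == mzero, regardless of which instance you are in:
main :: IO ()
main = do
  print ((mzero >>= \x -> Just (x + 1)) == (Nothing :: Maybe Int))  -- True
  print ((mzero >>= \x -> [x + 1]) == ([] :: [Int]))                -- True
```

You don't know what `mzero` "is" without the type context, but you know how it must behave -- which is the point being made.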

There's not a whole lot of syntax in Haskell, e.g. no special statements for control flow. What may be a little confusing is the infix operators. Haskell lets you define your own infix operators (which is a good thing compared to only being able to overload a fixed set of built-in operators, as in C++ or Python) and there's a few of them you need to know to be efficient with the language (>>= being the most important, $ and . coming second). Exception: <- is not an infix operator but part of the do notation syntax.

Haskell's syntax is very clean and readable, the examples in this article are probably intentionally a bit confusing or at least artificial.

And besides, when looking at a programming language, syntax is the _last_ thing you should look at. The semantics is what is important, by which I mean function calling conventions (fixed vs. curry vs. varargs), type systems (dynamic vs. static vs. inferred), type of evaluation (strict vs. lazy vs. unification) and so on.

Choosing a programming language based on its syntax is like choosing a wife based only on her looks.

I would say that understanding composition (.) comes before understanding bind (>>=), as bind is like higher-order composition.
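A minimal sketch of that progression (my own example): `(.)` chains pure functions, `(>>=)` chains monadic steps, and `($)` is just low-precedence application.

```haskell
-- A step that can fail: halving only works on even numbers.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print $ ((+ 1) . (* 2)) 3         -- 7: apply (*2), then (+1)
  print (Just 8 >>= half >>= half)  -- Just 2: 8 -> 4 -> 2
  print (Just 6 >>= half >>= half)  -- Nothing: 3 is odd, chain stops
```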

All the notation is marginally useful when you're writing math on a chalkboard, but saving a few characters in source code isn't really worth it. I like languages that resist the urge to make use of those letters above the number keys.

The fact that Haskell allows libraries to define their own operators is a crime against humanity that justifies the sort of criticism usually leveled at Lisp macros, which is that it's impossible to guess what they mean without looking them up.

Operators are just names.

When you're writing very abstract code that operates on abstract structures -- there's often no name that conveys the meaning. English and maths don't have names for every useful abstraction and abstract concept.

Instead of inventing some arbitrary name in English, you invent some arbitrary name as an operator, and that gives you infix syntax (often desirable for binary operators) and a recognizable visual signature.

Also, operators can greatly enhance readability, if you know a certain convention. Consider for example a vector library:

There are various arithmetic operators -- and those can be applied on scalars and vectors. Ordinary arithmetic operators operate on scalars. If < is prepended on the left side of the operator, then the left argument is a vector, and similarly with > on the right side. This makes it easy to immediately know what <+> means, vs what *> means, and so forth.
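A hypothetical vector library following that convention might look like this (the operators, `Vec` type, and numbers are all my invention for illustration; note that real libraries must avoid clashing with Prelude's `*>`):

```haskell
import Prelude hiding ((*>))  -- our (*>) would clash with the Applicative export

type Vec = [Double]

-- '<' on a side of the operator marks a vector argument on that side:
(<+>) :: Vec -> Vec -> Vec    -- vector <+> vector
(<+>) = zipWith (+)

(*>) :: Double -> Vec -> Vec  -- scalar *> vector
s *> v = map (s *) v

main :: IO ()
main = print ((2 *> [1, 2, 3]) <+> [1, 1, 1])  -- [3.0,5.0,7.0]
```

Once you know the convention, `<+>` vs `*>` reads at a glance, which is the claim being made above.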

In other languages I agree, in Haskell it's perfectly normal and no problem. Most of the times, the Haskell code is very readable.

Nah, it's no biggie. What really is rough is the adherence to static types; it's very harsh and can really slow down a person hoping to build things.

And of course, if you're like me, then you don't want to go into a language where there's no Twisted equivalent.

I can't tell if you're trolling. GHC Haskell has first class pre-emptive concurrency support. You don't need Twisted because we give that all to you, out-of-the-box, without needing to do anything fancy.

I think he is complaining that he can write code in a natural form and then GHC's runtime makes the necessary calls to the event loop to make things happen as declared. This is annoying because all the hours you spent trying to work around Python's limitations turned out to be a big waste of time. Or something.

Twisted doesn't just provide concurrency. Twisted also provides tools for building network servers and protocols. Haskell doesn't have anything unified; the community can't even decide which kind of I/O (lazy, iteratee/enumerator, handles) is best, let alone recommend a simple way to sit down and build fancy stateful networking.

Twisted, EventMachine, and Node (yes, even Node!) provide this. Haskell should have a library which does, too.

I find the static typing actually started helping me once I got used to it. Now almost all of my type errors correspond to errors in logic that I would otherwise have had to find at runtime.

Is Twisted like Node.js? If so, look here: http://stackoverflow.com/questions/3847108/what-is-the-haske...

For a great counterexample, watch the first 10 minutes of this screencast of data-driven programming, in which the data types for OCR, like PixelData, are first written as essentially undefined, and then refined and changed as needed as the code develops. http://www.youtube.com/watch?v=045422s6xik
