
> Haskell has much better facilities for abstraction

I've been using Clojure for about 5 years now. I've actively worked for the past two years on core.logic, which has quite a few fancy abstractions (custom types, protocols, macros) that all interact quite nicely together. Now, I'm not a Haskell expert, but I do feel that some of the things I've built could be challenging to express in the Haskell context. I might very well be wrong about that.

That said, I'm actively learning Haskell as there are some aspects of Haskell I find quite attractive and in particular a few papers that are relevant to my Clojure work that require an understanding of Haskell. My sense is that even should I eventually become proficient at Haskell, my appreciation for Clojure's approach to software engineering would be little diminished and likely vice versa.

Although another responder floated Template Haskell as Haskell's alternative to macros, Haskell loses out in that comparison. TH is both harder to work with than Lisp macros and sacrifices type safety[1], so it is avoided where possible. TH is used to generate boilerplate, but for other purposes (especially creating DSLs), Haskell favors abuse of do-notation through something like Free Monads[2].

Of course, this is nothing at all like macros, but in practice we can achieve many of the same goals while maintaining type safety. So, win win win (the third win is for monads).
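
To make that concrete, here's a minimal sketch of the free-monad style, assuming the "free" package; the toy DSL, the operation names and the interpreter are all invented for illustration:

  {-# LANGUAGE DeriveFunctor #-}
  import Control.Monad.Free  -- from the "free" package

  -- a toy DSL with two primitive operations
  data CmdF next
    = Say String next
    | Ask (String -> next)
    deriving Functor

  type Cmd = Free CmdF

  say :: String -> Cmd ()
  say s = liftF (Say s ())

  ask :: Cmd String
  ask = liftF (Ask id)

  -- do-notation gives the embedded, "macro-like" language
  greet :: Cmd ()
  greet = do
    say "name?"
    n <- ask
    say ("hello, " ++ n)

  -- one interpreter; a pure one for testing could be written just as easily
  run :: Cmd a -> IO ()
  run (Pure _)         = return ()
  run (Free (Say s k)) = putStrLn s >> run k
  run (Free (Ask k))   = getLine >>= run . k

The point is that greet is an ordinary, fully type-checked value, and interpreters decide what its "statements" mean.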

[1] http://stackoverflow.com/a/10857227/1579612 [2] http://www.haskellforall.com/2012/06/you-could-have-invented...

EDIT: As dons points out, I was imprecise in my wording. I don't mean to say that TH leads to Haskell programs which are not type safe. The compiler will, of course, type check generated code. In general, given that dons has been programming Haskell for 14 years compared to myself who has been doing it for 1 year, prefer what he has to say on this subject.


> sacrifices type safety

How does TH sacrifice type safety? The generated code is type checked.

For actually customizing syntax via macros, quasi quotation is the popular approach, combined with TH. People can e.g. embed JS or ObjC syntax fragments directly into Haskell this way. http://www.haskell.org/ghc/docs/latest/html/users_guide/temp...
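
For a sense of what that looks like, here's a toy quasiquoter (the name "str" and its behaviour are invented; the real JS/ObjC embeddings parse the string and build a typed AST instead of splicing a literal):

  module Str where

  import Language.Haskell.TH
  import Language.Haskell.TH.Quote

  -- [str|...|] becomes a plain string literal at compile time
  str :: QuasiQuoter
  str = QuasiQuoter
    { quoteExp  = \s -> litE (stringL s)
    , quotePat  = \_ -> fail "str: no pattern quoting"
    , quoteType = \_ -> fail "str: no type quoting"
    , quoteDec  = \_ -> fail "str: no declaration quoting"
    }

  -- used from another module (TH stage restriction):
  --   {-# LANGUAGE QuasiQuotes #-}
  --   greeting = [str|hello, world|]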


I'll have a crack at some suggestions, as a Clojure user (nice work on core.logic!) with some Haskell familiarity.

For logic programming: you might find Control.Unification [1] interesting -- and of course if you don't need unification then you can go pretty far with the plain old List monad.
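
To illustrate the plain-List-monad point (a made-up example): each <- is a choice point and guard prunes branches, so it reads like a tiny logic program.

  import Control.Monad (guard)

  pythag :: [(Int, Int, Int)]
  pythag = do
    c <- [1 .. 20]
    b <- [1 .. c]
    a <- [1 .. b]
    guard (a * a + b * b == c * c)
    return (a, b, c)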

Protocols/multimethods: Haskell typeclasses are really really powerful. Probably my favourite of all the approaches to polymorphism I've seen anywhere, although YMMV (they do lean heavily on the type system.)
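
A rough sketch of the flavour (made-up types, not a faithful mapping of protocols):

  class Shape a where
    area :: a -> Double

  data Circle = Circle Double
  data Rect   = Rect Double Double

  instance Shape Circle where
    area (Circle r) = pi * r * r

  instance Shape Rect where
    area (Rect w h) = w * h

  -- works for any type with a Shape instance, checked at compile time
  totalArea :: Shape a => [a] -> Double
  totalArea = sum . map area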

Macros: there is Template Haskell [2], which is pretty interesting although I can't claim to've used it myself. It doesn't quite share the simplicity of macros in a homoiconic dynamic language, but at the same time in Haskell it feels like you don't need to lean on compile-time metaprogramming as much as you do in a Lisp to achieve power. (Can't quite put my finger on why).

[1] http://hackage.haskell.org/packages/archive/unification-fd/0... [2] http://www.haskell.org/haskellwiki/Template_Haskell


Kiselyov et al have an interesting paper on logic programming in Haskell [1]. They introduce a backtracking monad transformer with fair disjunctions, negation-as-failure and pruning operations. And the implementation is based on continuations, so it's more efficient than reification-based approaches such as Apfelmus's operational monad [2].

[1]: http://www.cs.rutgers.edu/~ccshan/logicprog/LogicT-icfp2005....

[2]: http://apfelmus.nfshost.com/articles/operational-monad.html
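
A small sketch of what that interface looks like in practice, assuming the logict package (which implements the paper's design); odds/evens are the usual illustrative infinite goals:

  import Control.Monad
  import Control.Monad.Logic

  odds, evens :: Logic Int
  odds  = return 1 `mplus` fmap (+ 2) odds
  evens = return 0 `mplus` fmap (+ 2) evens

  -- plain mplus would enumerate all of odds first and never reach evens;
  -- interleave alternates between the two answer streams fairly
  firstTen :: [Int]
  firstTen = observeMany 10 (odds `interleave` evens)
  -- [1,0,3,2,5,4,7,6,9,8]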


Is that Basque-Icelandic pidgin?


Heh, fair point. I guess it can read like that. I'll expand.

Monad transformer = a thing that can transform a monad into another one with additional primitive operations. Monads are more or less the Haskell way of achieving something akin to overloading the semicolon in imperative languages. In imperative languages the meaning of the semicolon is baked into the language; it seems so natural that thinking about what it means feels silly. Normally it's just "do this, changing the program state by assigning some values to some variables, then do this with the new state", but if you think harder, it can also mean "do this and skip the rest of the loop" (if the first statement is a break) or "do this and then skip a bunch of levels up the call stack until you find an appropriate handler, unwinding the stack in the process" (if it's a throw). Haskell doesn't have any of this baked in, but in a way lets you define the semantics of the semicolon yourself.
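
A tiny illustration of that "programmable semicolon" idea, using Maybe (names invented): each step either produces a value or aborts the rest of the block, a bit like an early return.

  safeDiv :: Int -> Int -> Maybe Int
  safeDiv _ 0 = Nothing
  safeDiv x y = Just (x `div` y)

  calc :: Int -> Int -> Int -> Maybe Int
  calc a b c = do
    x <- safeDiv a b   -- "do this, then..."
    y <- safeDiv x c   -- "...unless the previous step failed, in which case stop"
    return (x + y)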

So monad transformers then allow you to build up more complicated semicolon semantics by combining simpler ones (and getting the full set of their primitive operations, such as assignment, break or throw statements in the previous examples).
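
A sketch of that stacking, assuming the mtl library (tick/checked are invented names): State contributes the "assignment" primitives, Except contributes "throw", and the combined monad has both.

  import Control.Monad.State
  import Control.Monad.Except

  type M = StateT Int (Except String)

  tick :: M ()
  tick = modify (+ 1)                             -- from the State layer

  checked :: M Int
  checked = do
    tick
    n <- get
    when (n > 3) (throwError "too many ticks")    -- from the Except layer
    return n

  run :: Either String (Int, Int)
  run = runExcept (runStateT checked 0)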

Logic programming, in its simplest form, is concerned with deducing "goal" facts of the form C(X) from a bunch of rules of the form "A(X) and B(X) and ... imply C(X)" and other facts of the form "A(X)". One way you can do this is to look for all the rules with your goal fact as their conclusion, then look at how you can derive their premises, and so on. Which essentially boils down to a backtracking search.

So what Kiselyov et al did was to implement some primitive operations and overload the semicolon in a way which makes it easy to perform a backtracking search. Or more precisely, since it's a monad transformer, they figured out a way to add these primitives to any set of existing ones (such as assignment primitives for instance). Their implementation also provides some interesting backtracking operations which can be tricky to implement (the aforementioned fair disjunctions, negation as failure and pruning). And it is efficient since it's based on continuations (which are just functions of a certain format), as compared to other approaches which first have to represent the target program as data ("reify" it), then define an interpreter for that data and finally run the interpreter.
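
For reference, those operations are exposed by the logict package (based on the paper); a sketch with invented names:

  import Control.Monad
  import Control.Monad.Logic

  -- negation as failure: succeed exactly when the goal has no answers
  naf :: Logic a -> Logic ()
  naf goal = ifte goal (const mzero) (return ())

  -- pruning: commit to the first answer of a goal and discard the rest
  firstOnly :: Logic a -> Logic a
  firstOnly = once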




One of my frustrations with learning Haskell is that everybody assumes that I am coming from an imperative programming background. I've toyed with Python a wee bit, but I've really only ever used functional languages (Scheme, R, Clojure) and have only ever programmed in a functional style. Needless to say, I have no clue what the semi-colon is supposed to do in imperative languages. As I have been told many times, knowing functional programming lets you start at 2nd base with Haskell...but no further.

Thanks for the attempt. I'll try again after I'm done with RWH.


Sorry, I guess that it's the default assumption since most people do come from an imperative background. And you had to choose a language with significant whitespace as your first imperative language, that's just low ;) But essentially, in languages with Algol-inspired syntax the semicolon is an end-of-statement marker. In Python, this is normally (but not always) the newline character.

I hope that helps decrypt the monad explanation part of my post somewhat. I wish I could give one which better relates to your background, but of the three languages you mention, I've only had very superficial exposure to Clojure. And you've also probably heard it before, but I would recommend LYAH over RWH as the first Haskell text.


I also find it amusing that Haskellers always tell you "semicolon" when you normally won't ever see one in any code.

What they're talking about is statements. If you have a completely pure functional language, no function should ever have more than one expression. If the function had two expressions that would mean one of them did something and the result was thrown away. But if you don't have side effects, why would you have an expression that does something and then throws the result away?

In Haskell any function body is a single expression. So if you need to do several steps (i.e. you need side effects) you have to use do notation. Do notation is syntactic sugar for turning what appear to be several statements into one expression.

If you use the "curly brackets" way of coding then you would separate these statements with semicolons (tada!), but most code is whitespace-significant, so it just uses one line per statement.

So what do does is take the code you wrote on those lines and determine how to combine it. If you're using do then you're using a monad of some kind (a container, basically). Given the container and its contents, there are two kinds of lines: "side-effect-looking" ones that appear to perform a statement and ignore the result (these actually work on the container), and ordinary expressions.

The interesting thing about this kind of code is you don't keep track of the container. Sometimes you don't see any trace of it at all except in the type signature. There will be functions that talk to the container but they don't appear to take the container as a parameter (which is good since you don't have a handle to it anyway). Behind the scenes do is setting up the code in such a way that the container is actually passed to each of your expressions so your "side effects" aren't side effects at all, they're modifications to the invisible (to you) container.

And each line is fused together by using one of two functions the container defines. So this is where the power comes in. How the container chooses to put these lines together depends on what the container actually is. An exception container, for example, might provide a special function (called, say, "throw") that will call each expression (functions by the time the container sees them) unless one of them calls its special "throw" function, at which point it doesn't execute any further expressions and instead returns the exception it was passed.
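
Concretely, here's roughly what the fusing looks like: do-notation expands into calls of the container's combining function, (>>=). The Maybe example is just for illustration.

  prog :: Maybe Int
  prog = do
    x <- Just 1
    y <- Just 2
    return (x + y)

  -- roughly desugars to:
  prog' :: Maybe Int
  prog' = Just 1 >>= \x ->
          Just 2 >>= \y ->
          return (x + y)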

I don't know if that makes things better or worse. :)


That did make it a little better, thank you :)


> Better?

Much better! Thank you :-)

I think you've also managed to prod me a little closer to understanding monads -- I always feel most tutorials are too complex, or too theoretical. Relating monads and the semicolon gave my brain a nice "hint" towards understanding them better, I think.

It's exactly related to the problem(s) I have with Haskell -- magical syntactic sugar that for me doesn't really seem to simplify anything. Quite like how the semicolon works in JavaScript -- end-of-statement is needed, but in JavaScript its insertion is inconsistent. And like in other languages where the semicolon ends a statement but is something one normally doesn't really think about...


For me, monads made sense when I started thinking of them as an API for composing normally uncomposable things. I'll try to explain it without assuming much knowledge of Haskell.

Composition is key in functional programming, and when you have two functions a -> b and b -> c, there is no problem... You can just compose them and get a -> c.

Using the List monad as an example, consider the case when you have a -> [b] and b -> [c]... You have a function producing bs but you can't compose it with the other function directly because it produces a list. We need extra code to take each element of the source list, apply the function, and then concatenate the resulting lists.

That operation becomes the "bind"[0] (>>=) of our Monad instance. Its type signature in list's case is [a] -> (a -> [b]) -> [b], which basically says "Give me a list and a function that produces lists from the elements in that list, and I will give you a list", but that is not so important. The point is that you start with a list and end up with another list, which means that by using bind, you can compose monadic operations (a -> [b]) indefinitely.

The "return" operation completes the picture by letting you lift regular values into the monad, and when return is composed with a regular function a -> b it transforms it into a -> [b]

The more generic form of a monad in Haskell is "m a" which just means a type constructed from "a" using whatever "m" adds to it (nullability, database context, possible error conditions etc.)

As you can see from the type signature, "a" is still there. Monadic bind and return allow composing existing a -> m b and a -> b functions, and this abstraction is made available through the Monad typeclass.

[0] Note that for lists there's actually another sensible way to combine things: pairing elements up positionally (zipping) instead of taking all combinations. Since you can't have two instances of a typeclass for a single type, that behaviour lives on the wrapper type "ZipList" instead (and strictly speaking the zippy combination only forms an Applicative, not a lawful Monad).


You're welcome. But I'm now not sure if I got my point across correctly on the monad part of the post (but then again, smarter people than me have failed on that front). I'm confused about what you mean by syntactic sugar. I wouldn't call semicolons in JS syntactic sugar, as they are automatically inserted by the compiler. So I guess you could call the lack of semicolons syntactic sugar for semicolons. Personally, I would call it idiocy :) (not necessarily the lack of semicolons - I like the appearance of e.g. Python - but their automatic insertion).

In general, I'd even say that Haskell provides very little syntactic sugar, and the stuff that it does provide is both quite useful and rather understandable. Examples being list comprehensions, or even list syntax to begin with, where [1,2,3] is sugar for 1:(2:(3:[])). Yes, the do-notation (which is what you normally use with monads) is also sugar, but it's not the difficult part of understanding monads. The difficult part is understanding the various bind operators (or equivalently, join or Kleisli composition) and how exactly they model different computational effects, and lastly forming the "bigger picture" from the examples.
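
Just to make that sugar concrete (a made-up snippet):

  xs :: [Int]
  xs = [1, 2, 3]                         -- sugar for 1:(2:(3:[]))

  oddSquares :: [Int]
  oddSquares = [x * x | x <- xs, odd x]  -- a list comprehension...

  oddSquares' :: [Int]
  oddSquares' = xs >>= \x ->             -- ...is roughly bind plus a guard
                if odd x then [x * x] else []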


  > So I guess you could call lack of semicolons syntactic sugar for semicolons.
Yes, exactly. Inconsistent sugar. My early experience with Haskell was that a lot of the syntactic sugar was (or seemed) very brittle, combined with uninformative error messages -- much like the errors missing semicolons can lead to, even in Java iirc.

(I believe that is fixed now, however. I still remember it -- contributing to a somewhat irrational fear of Haskell :-).


That was a great explanation, thank you.


> (Can't quite put my finger on why).

Lazy evaluation. Much of what you use Lisp macros for is to control when/if arguments get evaluated. Haskell just works this way so you don't need macros for it.
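
For example (a toy sketch): because arguments aren't evaluated until needed, a control structure that a Lisp would need a macro for can be an ordinary Haskell function.

  -- only the chosen branch is ever evaluated
  myIf :: Bool -> a -> a -> a
  myIf True  t _ = t
  myIf False _ e = e

  answer :: Int
  answer = myIf True 42 (error "never evaluated")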


Another way to think about it is that every Haskell function is a macro.

