Hello Haskell, Goodbye Lisp (newartisans.com)
68 points by dons 3141 days ago | 47 comments



Well-written article, but it would be more relevant, IMHO, if it compared Haskell with Clojure, not CL.


Not really, the criticisms apply in both cases. The main area where Clojure wins against CL is that Clojure is not bloated, and that wasn't where the author of the article had major problems. To his points, I would also add the lack of decent data structures in Lisp. Cons cells are nice, but they are no match for algebraic data types and pattern matching.


Clojure completely erases the parallelism section as a complaint (although, to be fair, GHC is the unchallenged fastest parallel gun in the west; but contrary to the article, as far as I know even GHC never just guesses parallelism without being asked).

Data structures, you mean like hashmap, vector, list, sequence? Those are all basic in Clojure. You can de-structure them within the parameter list of a function or a let, which is about half of what pattern matching is used for in Haskell. (There are also streams, but those are a bit more fuss, and they're experimental.)


> Data structures, you mean like hashmap, vector, list, sequence?

No, algebraic data types: http://en.wikipedia.org/wiki/Algebraic_data_types


From the article.

data Tree = Empty | Leaf Int | Node Tree Tree

Here, Empty, Leaf and Node are the constructors. Somewhat similar to a function, a constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the constructor belongs. For instance, Leaf has something like a "functional type" Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive.

Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function to find the depth of a Tree, given here in Haskell:

    depth :: Tree -> Int
    depth Empty = 0
    depth (Leaf n) = 1
    depth (Node l r) = 1 + max (depth l) (depth r)

Thus, a Tree given to depth can be constructed using any of Empty, Leaf or Node, and we must match on each of them to deal with all cases. In the case of Node, the pattern extracts the subtrees l and r for further processing.
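For the curious, the snippet above runs as-is; here is a self-contained version with a sample tree (the sample values are mine, not from the article):

```haskell
-- Self-contained version of the Tree/depth example quoted above.
data Tree = Empty | Leaf Int | Node Tree Tree

depth :: Tree -> Int
depth Empty      = 0
depth (Leaf n)   = 1
depth (Node l r) = 1 + max (depth l) (depth r)

main :: IO ()
main = print (depth (Node (Leaf 1) (Node (Leaf 2) Empty)))  -- prints 3
```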

The explanation of how Leaf, Empty, and Node work make me wonder what happens if I want to use Leaf for some other purpose, outside the context of Tree. It looks like Tree necessarily "leaks", which is a bad thing.

The definition of depth doesn't give me any reason why I'd want to use such an inverted definition.

I like pattern matching as much as the next person, but it's a tool, not a goal.


Leaf is a data constructor for the type Tree, it's not supposed to be a generally reusable entity, and it wouldn't make sense to treat it as one, just like it wouldn't make sense to treat a constructor in Java as reusable across classes. There's not really even anything there to reuse - it's just a symbol attached to a very small bit of structure, linked to the data type Tree.


If trees are made up using Leaf, Node, and Empty, Tree "leaks" names, which is a disaster because lots of things have leaves, nodes, and empty. That's what the explanation seems to say.

It's unclear if calambrac's comment is meant to say that trees are actually made using something like tree.Leaf(int) or (more likely) tree() by itself and type disambiguation.

One reason why I've never looked seriously at Haskell is that all of the examples that I've seen are basically puns and there's no indication as to how one might build larger programs. I'm sure that there's some way to do so, but I assume that the advocates present the language in its best light.


Haskell has one of the best namespace/module systems of any language I've ever seen. It doesn't 'leak' these names; they're just the names of the data constructors, they're supposed to be visible. If you need to control access, use the module system.

I think it's perfectly legitimate for examples of particular features to omit other features (like modules, in this case).
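As a sketch of what "use the module system" looks like in practice (Data.Map, which ships with GHC, is used here purely as an illustration): a qualified import gives every imported name its own prefix, so same-named entities simply don't collide:

```haskell
-- Data.Map also exports names like lookup and filter that exist in the
-- Prelude; a qualified import keeps them behind the Map. prefix.
import qualified Data.Map as Map

main :: IO ()
main = do
  let m = Map.fromList [('a', 1 :: Int), ('b', 2)]
  print (Map.lookup 'a' m)   -- Map's lookup, not the Prelude's; prints Just 1
  print (lookup 1 [(1, "x")]) -- the Prelude's lookup is still available
```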


> Haskell has one of the best namespace/module systems of any language I've ever seen. It doesn't 'leak' these names; they're just the names of the data constructors, they're supposed to be visible. If you need to control access, use the module system.

If I have to use modules to distinguish tree leaves from other leaves, I'm using modules to do what I get from classes in other languages. Since those languages also benefit from modules ....

> I think it's perfectly legitimate for examples of particular features to omit other features (like modules, in this case).

I agree that any specific example should be targeted. My point is that none of the Haskell examples that I've seen address large programs.


I mean... you're right, you are using modules to do what you get from classes in other languages. So what? You say this like it's a bad thing; it's not, it's just how this particular problem is solved in Haskell. Different != Horrible. I would argue it's actually a little bit more elegant, in that it takes a problem (namespacing) and solves it one way (modules), rather than having some hybrid package/class system.

I'm interested to see the examples you've seen for other languages that do address large programs. I've personally not come across many "How to write Firefox in Python"-style tutorials.


> I mean... you're right, you are using modules to do what you get from classes in other languages. So what?

My point is that languages that have classes also seem to need modules. Do Haskell's modules address the problems that are addressed by the combination of classes and modules in other languages? Does Haskell somehow avoid those problems? Is there some other mechanism?

> I'm interested to see the examples you've seen for other languages that do address large programs.

Examples of modules can be trivial yet demonstrate why modules might be useful for large programs.


Haskell isn't object-oriented, so there aren't any little namespaces (classes) running around that themselves need to occupy a namespace (package). So modules are enough.

The Haskell wiki entry on modules is short, pretty complete, and has plenty of examples:

http://en.wikibooks.org/wiki/Haskell/Modules


'If trees are made us using Leaf, Node, and Empty, tree "leaks" names, which is a disaster because lots of things have leaves, nodes, and empty.'

Non sequitur? Where is the problem? Try

    data MyMaybe a = Just a | Nothing

and the compiler won't complain about a name clash with the built-in Maybe monad.
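To make that concrete, here is a minimal runnable sketch. The clashing definition itself compiles fine next to the Prelude's Maybe; the `hiding` clause is only there so the unqualified uses of Just and Nothing further down are unambiguous (safeHead is a made-up helper for illustration):

```haskell
-- Defining constructors named Just/Nothing is legal even with the
-- Prelude in scope; we hide the Prelude's versions only so the
-- unqualified uses below refer unambiguously to MyMaybe's.
import Prelude hiding (Maybe(..))

data MyMaybe a = Just a | Nothing deriving Show

safeHead :: [a] -> MyMaybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = do
  print (safeHead [1, 2, 3 :: Int])  -- Just 1
  print (safeHead ([] :: [Int]))     -- Nothing
```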

[Sorry, accidentally downmodded the parent-comment from 1 to 0. Slippery fingers.]


>> If trees are made up using Leaf, Node, and Empty, tree "leaks" names, which is a disaster because lots of things have leaves, nodes, and empty.

>Non sequitur? Where is the problem. Try

> data MyMaybe a = Just a | Nothing

Huh? How does that create a tree instead of a banana?

Actually, it's completely unclear what that example is doing. I'd guess that it's specifying a new name and a corresponding value, but I'm not sure which name and what kind of value.


The Just and Nothing that he's defining for his MyMaybe are completely different things from the Just and Nothing used for the Maybe that ships in the Prelude. MyMaybe may as well be a banana for all Prelude.Maybe cares.


Yes. And my code does 'nothing'. It just defines an algebraic datatype.


I know why I might want to define a tree.

I have no idea why I want to define an algebraic datatype.

Note "you can define trees using algebraic datatypes" doesn't tell me why I want to use algebraic datatypes.


I want to use them so I don't have to write very much code, so that I can easily reason about my program by directly substituting symbols with their definitions, so that I can use pattern matching against instances of that type, so that I can group them into type classes and know what operations I'll have available to work with them... etc.


It's more or less the Haskell version of a struct, or a public class with fields and no methods, except that you can specify more than one shape for the same type.

    data MyMaybe a = Just a | Nothing
...means in Java speak "MyMaybe<a> is an abstract generic superclass with a final singleton subclass Nothing<a> having no fields, and a final subclass Just<a> having one field of type a"

You then write functions that take that data and use it.


I thought trees were algebraic datatypes?


> Data structures, you mean like hashmap, vector, list, sequence? Those are all basic in Clojure.

And CL...


You can use the same car or map on any of them, as the functions operate on a sequence interface, not on cons lists.

Whereas in CL these functions are defined directly on the implementation (cons lists), so map can't map over a vector; you need to make a special vector-only map, and so on.


> Whereas in CL these functions are defined directly on the implementation (cons lists), so map can't map over a vector; you need to make a special vector-only map, and so on.

Wrong.

The Common Lisp list functions, such as mapcar, are restricted to lists, but Common Lisp also has sequence functions which take any Common Lisp sequence type.

Common Lisp's map happens to be a sequence function.

I forget if hashes are a Common Lisp sequence type, but vectors definitely are. IIRC, Common Lisp strings are sequences.


mapcar in Clojure wouldn't be limited to lists even if no one had thought about it at the time - this is the difference.

There's a ton of CL code out there that only works on cons lists; adding a sequence type doesn't solve this, nor does it force people to use it or change existing code.


I don't see the option of specialized functions as a bad thing. I also don't see why the existence of code that uses such functions is a bad thing.


> Cons cells are nice but they are no match for algebraic data-types and pattern matching

But of course, with a dynamic type system, objects are basically the same thing. In CL, you can encapsulate data inside its own type, and then match on that type. (It also has super/sub-typing relationships, which Haskell does not.)


Haskell does have interface inheritance with type classes.
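A small sketch of what that looks like (Container and Sized are made-up names for illustration): a superclass constraint on a type class makes one interface "inherit" another, in that every Sized instance is obliged to also be a Container:

```haskell
-- Container is a superclass of Sized: writing an instance of Sized
-- for a type requires a Container instance for it as well.
class Container f where
  emptyC :: f a

class Container f => Sized f where
  size :: f a -> Int

instance Container [] where
  emptyC = []

instance Sized [] where
  size = length

main :: IO ()
main = print (size [1, 2, 3 :: Int])  -- prints 3
```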


This is more like trait composition (1) than inheritance. I don't consider this a downside, though, as I almost always choose trait composition over sub-typing when I am using a more OO-ish language.

(1) http://www.iam.unibe.ch/~scg/Research/Traits/


Clojure has great support for parallelism, and Clojure's community is very friendly. Haskell has nothing on Clojure in these specific regards.


In support for parallelism or a friendly community? I'd have thought it has both in truckloads.

Well tested, fast parallelism, that's stood up to 15 years of testing...


As a longtime GHC user and sometime #haskell visitor, I find your troll HILARIOUS. One thing though, you forgot "Clojure has a better library of persistent data structures." That would have sent dons into an epic flameout.


I'm not sure if mdemare's post was a troll, or just poorly worded. Clojure has a friendly community and very good support for parallelism. Haskell does also.


I think my post was neither, and I'm confused at the reactions. You summarized it perfectly.


Why do you think this?

Haskell not only has a friendly IRC channel, it also has friendly tools for interacting with the community. For example, it is very nice to hear someone talking about Yi, type "cabal install yi", and then have it installed and running in just a few moments. (It is also very nice to pull the xmonad darcs and say "cabal install" to install the newest version of xmonad. I don't even bother with Debian packages for anything related to Haskell anymore.)

Anyway, my point is, I think you should try Haskell before you talk about it.


But I've tried Haskell, and I do like it!

"X has nothing on Y" means X is not superior to Y. Not "X couldn't tie Y's shoelaces."


The meaning is much closer to the latter in my experience.


Clojure's object system: http://clojure.org/runtime_polymorphism


You must be new here...


Is the "parallelism" section theoretical, or you can get the Haskell compiler to actually generate multi-core code automatically?


No, it's not theoretical. However, do not assume the Haskell compiler automatically parallelises arbitrary non-parallel code. That's simply not possible to do well, and is really a dead end.

What GHC does do is semi-implicit multicore parallelism for particular subsets of the language.

The GHC runtime is a parallel runtime supporting a range of parallel abstractions from semi-implicit parallelism to explicit task parallelism, in a three level hierarchy of OS threads -> Haskell lightweight threads -> fine grained thread sparks.

The most automated mechanisms are:

* thread sparks - you hint which code to run in parallel with `par`, and the runtime uses that hint to parallelise your code, distributing it across cores.

* data parallel arrays - if your algorithm is expressible as an array program, use DPH and the array operations will be automatically parallelised (alpha!)

For more explicit parallelism, there are:

* threads

* transactional memory

* MVars

* message passing

An example of the semi-implicit multicore parallelism is given in the "Haskell in 5 minutes" tutorial here:

http://haskell.org/haskellwiki/Haskell_in_5_steps#Write_your...
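For the explicit end of that spectrum, here is a minimal sketch using only the base library: a forked lightweight Haskell thread hands its result back to the main thread through an MVar (a one-slot mailbox); the workload is a made-up placeholder:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Explicit task parallelism: fork a lightweight thread, let it do
-- some work, and collect the result through an MVar. takeMVar blocks
-- until the forked thread has put its answer in the box.
main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box (sum [1 .. 100 :: Int]))
  result <- takeMVar box
  print result  -- prints 5050
```

Under GHC's threaded runtime (compile with -threaded) such lightweight threads are multiplexed across OS threads and cores.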


Is there an analog to Lisp macros in Haskell?

edit: yes, there is! http://www.haskell.org/th/


A lot of (but not all of) what you get from Lisp macros doesn't need to be implemented with macros in Haskell anyway.


Can you give an example of something you can implement with a Lisp macro, but can't implement in Haskell?


Sure. Consider the case of defining an instance of a type class:

     instance Foo Bar where
         quux = quuxifyBar -- terrible example

Now, you want to follow the same pattern to declare "Foo Baz", "Foo Quux" and "Foo CannedAir". I hope you like copying and pasting, because that's how you do it without Template Haskell.

In Lisp, it would be a simple matter of writing a macro, like this:

    (defmacro make-my-type-thing (type)
         `(instance Foo ,type
              ((quux (x) (,(intern (format nil "quuxify-~A" type)) x)))))

Then you can:

    (eval-when (:compile-toplevel)
        (loop for i in '(Bar Baz Quux CannedAir)
              do (make-my-type-thing i)))

Yes, the example is very contrived, and yes, Haskell ships with a lot of abstractions to make this case very uncommon. But in Haskell, you just can't treat your code as data -- it's code.


I've noticed the same thing with Clojure. I only dig into macros to remove boilerplate, like when using APIs provided in other JVM languages.


I've been using macros in Clojure to write my own object system. It seems to me that for serious meta-programming in Haskell, you have to use Template Haskell, and from what I've heard, it's not nearly as flexible.


> It seems to me that for serious meta-programming in Haskell, you have to use Template Haskell, and from what I've heard, it's not nearly as flexible.

"seems" and "heard" is the best you can do? Try it, and you'll find that it's very easy to meta-program in Haskell, even without macros.



