Data structures, you mean like hashmap, vector, list, sequence? Those are all basic in Clojure. You can destructure them within the parameter list of a function or a let, which is about half of what pattern matching is used for in Haskell. (There are also streams, but those are a bit more fuss, and they're experimental.)
No, algebraic data types: http://en.wikipedia.org/wiki/Algebraic_data_types
data Tree = Empty
| Leaf Int
| Node Tree Tree
Here, Empty, Leaf and Node are the constructors. Somewhat similar to a function, a constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the constructor belongs. For instance, Leaf has something like a “function type”, Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive.
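To make that concrete, here's a minimal sketch showing the constructors being applied like functions (the `example` value and the derived Show/Eq instances are my additions, included only so values can be printed and compared):

```haskell
-- The Tree type from above, with Show and Eq derived for convenience.
data Tree = Empty
          | Leaf Int
          | Node Tree Tree
  deriving (Show, Eq)

-- Constructors are applied like functions:
--   Leaf :: Int -> Tree
--   Node :: Tree -> Tree -> Tree
example :: Tree
example = Node (Leaf 1) (Node (Leaf 2) Empty)
```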
Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function to find the depth of a Tree, given here in Haskell:
depth :: Tree -> Int
depth Empty = 0
depth (Leaf n) = 1
depth (Node l r) = 1 + max (depth l) (depth r)
Thus, a Tree given to depth can be constructed using any of Empty, Leaf or Node, and we must match on each of them to deal with all cases. In the case of Node, the pattern extracts the subtrees l and r for further processing.
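For instance, walking one call through the definition (the declarations are repeated so the snippet stands alone; `Leaf n` is written `Leaf _` here since the value is unused):

```haskell
data Tree = Empty | Leaf Int | Node Tree Tree

depth :: Tree -> Int
depth Empty      = 0
depth (Leaf _)   = 1
depth (Node l r) = 1 + max (depth l) (depth r)

-- depth (Node (Leaf 3) Empty) matches the Node pattern,
-- binding l to Leaf 3 and r to Empty:
--   1 + max (depth (Leaf 3)) (depth Empty)
-- = 1 + max 1 0
-- = 2
```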
The explanation of how Leaf, Empty, and Node work makes me wonder what happens if I want to use Leaf for some other purpose, outside the context of Tree. It looks like Tree necessarily "leaks", which is a bad thing.
The definition of depth doesn't give me any reason why I'd want to use such an inverted definition.
I like pattern matching as much as the next person, but it's a tool, not a goal.
It's unclear if calambrac's comment is meant to say that trees are actually made using something like tree.Leaf(int) or (more likely) tree() by itself and type disambiguation.
One reason why I've never looked seriously at Haskell is that all of the examples that I've seen are basically puns and there's no indication as to how one might build larger programs. I'm sure that there's some way to do so, but I assume that the advocates present the language in its best light.
I think it's perfectly legitimate for examples of particular features to omit other features (like modules, in this case).
If I have to use modules to distinguish tree leaves from other leaves, I'm using modules to do what I get from classes in other languages. Since those languages also benefit from modules ....
> I think it's perfectly legitimate for examples of particular features to omit other features (like modules, in this case).
I agree that any specific example should be targeted. My point is that none of the Haskell examples that I've seen address large programs.
I'm interested to see the examples you've seen for other languages that do address large programs. I've personally not come across many "How to write Firefox in Python"-style tutorials.
My point is that languages that have classes also seem to need modules. Do Haskell's modules address the problems that are addressed by the combination of classes and modules in other languages? Does Haskell somehow avoid those problems? Is there some other mechanism?
> I'm interested to see the examples you've seen for other languages that do address large programs.
Examples of modules can be trivial yet demonstrate why modules might be useful for large programs.
The Haskell wiki entry on modules is short, pretty complete, and has plenty of examples:
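For what it's worth, here's a sketch of how the module system answers the "leak" worry upthread; the smart-constructor names (empty, leaf, node) are made up for illustration:

```haskell
-- In a real program this would sit in its own module with the header
--   module Tree (Tree, empty, leaf, node, depth) where
-- Exporting "Tree" rather than "Tree(..)" keeps the constructors
-- Empty, Leaf and Node private to the module, so those names can't
-- clash with (or leak into) anything outside it.
data Tree = Empty | Leaf Int | Node Tree Tree

empty :: Tree
empty = Empty

leaf :: Int -> Tree
leaf = Leaf

node :: Tree -> Tree -> Tree
node = Node

depth :: Tree -> Int
depth Empty      = 0
depth (Leaf _)   = 1
depth (Node l r) = 1 + max (depth l) (depth r)
```

Callers can still build and inspect trees through the exported functions, but they can never mention Empty, Leaf or Node directly.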
Non sequitur? Where is the problem? Try
> data MyMaybe a = Just a | Nothing
the compiler won't complain about a name clash with the built-in Maybe monad.
[Sorry, accidentally downmodded the parent-comment from 1 to 0. Slippery fingers.]
>Non sequitur? Where is the problem. Try
Huh? How does that create a tree instead of a banana?
Actually, it's completely unclear what that example is doing. I'd guess that it's specifying a new name and a corresponding value, but I'm not sure which name and what kind of value.
I have no idea why I want to define an algebraic datatype.
Note "you can define trees using algebraic datatypes" doesn't tell me why I want to use algebraic datatypes.
data MyMaybe a = Just a | Nothing
You then write functions that take that data and use it.
Whereas in CL these functions are defined directly on the implementation (cons lists), so map can't map over a vector; you need to make a special vector-only map, and so on.
The Common Lisp list functions, such as mapcar, are restricted to lists, but Common Lisp also has sequence functions which take any Common Lisp sequence type.
Common Lisp's map happens to be a sequence function.
I forget if hashes are a Common Lisp sequence type, but vectors definitely are. IIRC, Common Lisp strings are sequences.
There's a ton of CL code out there that only works on cons lists; adding a sequence type doesn't solve this, nor does it force people to use it or change existing code.
But of course, with a dynamic type system, objects are basically the same thing. In CL, you can encapsulate data inside its own type, and then match on that type. (It also has super/sub-typing relationships, which Haskell does not.)
Well-tested, fast parallelism that's stood up to 15 years of use...
Haskell not only has a friendly IRC channel, it also has friendly tools for interacting with the community. For example, it is very nice to hear someone talking about Yi, type "cabal install yi", and then have it installed and running in just a few moments. (It is also very nice to pull the xmonad darcs and say "cabal install" to install the newest version of xmonad. I don't even bother with Debian packages for anything related to Haskell anymore.)
Anyway, my point is, I think you should try Haskell before you talk about it.
"X has nothing on Y" means X is not superior to Y. Not "X couldn't tie Y's shoelaces."
What GHC does do is semi-implicit multicore parallelism for particular subsets of the language.
The GHC runtime is a parallel runtime supporting a range of parallel abstractions from semi-implicit parallelism to explicit task parallelism, in a three-level hierarchy of OS threads -> Haskell lightweight threads -> fine-grained thread sparks.
The most automated mechanisms are:
* thread sparks - you hint which code to run in parallel with `par`, and the runtime uses that hint to parallelise your code, distributing it across cores.
* data parallel arrays - if your algorithm is expressible as an array program, use DPH and the array operations will be automatically parallelised (alpha!)
There is also more explicit parallelism with:
* transactional memory
* message passing
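To illustrate the spark mechanism, here's a toy parallel Fibonacci using `par`/`pseq`, imported from GHC.Conc (which base provides; real code would typically use the parallel package's Control.Parallel instead):

```haskell
import GHC.Conc (par, pseq)

-- a `par` b sparks a for possible evaluation on another core,
-- while b is evaluated on the current thread; pseq then forces
-- b before the sum is demanded, so a's spark has a chance to run
-- in parallel.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = a `par` (b `pseq` (a + b))
  where
    a = pfib (n - 1)
    b = pfib (n - 2)
```

Compiled with -threaded and run with +RTS -N, the sparks can be picked up by idle cores; without that, they simply fizzle and the program still computes the same result sequentially.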
An example of the semi-implicit multicore parallelism is given in the "Haskell in 5 minutes" tutorial here:
edit: yes, there is! http://www.haskell.org/th/
instance Foo Bar where
quux = quuxifyBar -- terrible example
In Lisp, it would be a simple matter of writing a macro, like this:
(defmacro make-my-type-thing (type)
`(instance Foo ,type
((quux (x) (,(intern (format nil "quuxify-~A" type)) x)))))
(loop for i in '(Bar Baz Quux CannedAir)
do (make-my-type-thing i))
"seems" and "heard" is the best you can do? Try it, and you'll find that it's very easy to meta-program in Haskell, even without macros.