Clojure is nice, but the idea that it is somehow built for the real world where Haskell isn't is just patent nonsense that I wish people wouldn't repeat. You do not need to understand complex math to actually use Haskell! All the important ideas can be understood and used in Haskell terms alone--you can think of them just like the ideas and jargon introduced in other languages. Except more general, more elegant and more consistent because they have a unifying underlying theory which the people who designed them in the first place do understand.

The biggest conceptual shift is to thinking functionally rather than imperatively, so it's going to be similar in both languages. The difference is that Haskell is more thorough, but the fundamental ideas are very similar.

Haskell, of course, has many of its own advantages which are mostly detailed elsewhere. I'm merely going to give you my highest-level take-away from trying Clojure: Haskell has much better facilities for abstraction. Haskell allows you to use types and data representations specific to your domain while still allowing you to take advantage of some very generic libraries. And the custom types are not actually complex, unlike what Clojure leads you to believe with its preference for using the basic built-in types: you construct your types from basically three fundamental operations: combining multiple values into one (like a tuple or struct), giving a choice of values (like a variant or disjoint/tagged union) or creating functions. That's all there is to them!
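
To make that concrete, here is a quick sketch of all three operations (toy types of my own, not from any library):

    -- combining multiple values into one (like a tuple or struct)
    data Point = Point Double Double

    -- giving a choice of values (like a variant or tagged union)
    data Shape = Circle Point Double
               | Rectangle Point Point

    -- and creating functions, which consume and produce such values
    area :: Shape -> Double
    area (Circle _ r) = pi * r * r
    area (Rectangle (Point x1 y1) (Point x2 y2)) =
        abs (x2 - x1) * abs (y2 - y1)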

Basically, don't be afraid of Haskell, whatever the common wisdom is. As usual with these sorts of things, it's much more common than wise.




> Haskell has much better facilities for abstraction

I've been using Clojure for about 5 years now. I've actively worked for the past two years on core.logic which has quite a few fancy abstractions (custom types, protocols, macros) that all interact quite nicely together. Now, I'm not a Haskell expert, but I do feel that some of the things I've built could be challenging in the Haskell context. I might very well be wrong about that.

That said, I'm actively learning Haskell as there are some aspects of Haskell I find quite attractive and in particular a few papers that are relevant to my Clojure work that require an understanding of Haskell. My sense is that even should I eventually become proficient at Haskell, my appreciation for Clojure's approach to software engineering would be little diminished and likely vice versa.


Although another responder floated Template Haskell as Haskell's alternative to macros, Haskell loses out in that comparison. TH is harder to work with than Lisp macros and sacrifices type safety[1], so it is avoided where possible. TH is used to generate boilerplate, but for other purposes (especially creating DSLs), Haskell favors abuse of do-notation through something like Free Monads[2].

Of course, this is nothing at all like macros, but in practice we can achieve many of the same goals while maintaining type safety. So, win win win (the third win is for monads).
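
To sketch the Free monad approach (this uses the free package's Control.Monad.Free; the toy commands are modeled on the style of [2], not any real API):

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free

    -- one constructor per DSL command; 'next' is the rest of the program
    data CmdF next = Output String next | Bell next | Done
      deriving Functor

    type Cmd = Free CmdF

    output :: String -> Cmd ()
    output s = liftF (Output s ())

    bell :: Cmd ()
    bell = liftF (Bell ())

    done :: Cmd a
    done = liftF Done

    -- do-notation now builds a typed, inspectable program value
    program :: Cmd ()
    program = do
      output "hello"
      bell
      done

    -- interpreters are ordinary pattern matches over the program
    run :: Cmd a -> IO ()
    run (Free (Output s k)) = putStrLn s >> run k
    run (Free (Bell k))     = putStrLn "*ding*" >> run k
    run (Free Done)         = return ()
    run (Pure _)            = return ()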

[1] http://stackoverflow.com/a/10857227/1579612 [2] http://www.haskellforall.com/2012/06/you-could-have-invented...

EDIT: As dons points out, I was imprecise in my wording. I don't mean to say that TH leads to Haskell programs which are not type safe. The compiler will, of course, type check generated code. In general, given that dons has been programming Haskell for 14 years compared to myself who has been doing it for 1 year, prefer what he has to say on this subject.


> sacrifices type safety

How does TH sacrifice type safety? The generated code is type checked.

For actually customizing syntax via macros, quasi quotation is the popular approach, combined with TH. People can e.g. embed JS or ObjC syntax fragments directly into Haskell this way. http://www.haskell.org/ghc/docs/latest/html/users_guide/temp...
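
A minimal quasiquoter sketch (the classic raw-string quoter rather than a full JS/ObjC embedding; the names are mine):

    module Str (str) where

    import Language.Haskell.TH (stringE)
    import Language.Haskell.TH.Quote (QuasiQuoter(..))

    -- [str|...|] in expression position becomes a plain string literal
    str :: QuasiQuoter
    str = QuasiQuoter
      { quoteExp  = stringE
      , quotePat  = error "str: not usable in patterns"
      , quoteType = error "str: not usable in types"
      , quoteDec  = error "str: not usable in declarations"
      }

    -- at a use site (with {-# LANGUAGE QuasiQuotes #-}):
    -- greeting = [str|no "escaping" needed in here|]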


I'll have a crack at some suggestions, as a Clojure user (nice work on core.logic!) with some Haskell familiarity.

For logic programming: you might find Control.Unification [1] interesting -- and of course if you don't need unification then you can go pretty far with the plain old List monad.
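
For a taste of the latter, a throwaway sketch of backtracking search with nothing but the list monad:

    import Control.Monad (guard)

    -- Pythagorean triples up to n; each <- is a choice point,
    -- guard prunes branches, and the list monad backtracks for us
    triples :: Int -> [(Int, Int, Int)]
    triples n = do
      a <- [1 .. n]
      b <- [a .. n]
      c <- [b .. n]
      guard (a*a + b*b == c*c)
      return (a, b, c)

    -- triples 15  ==>  [(3,4,5),(5,12,13),(6,8,10),(9,12,15)]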

Protocols/multimethods: Haskell typeclasses are really really powerful. Probably my favourite of all the approaches to polymorphism I've seen anywhere, although YMMV (they do lean heavily on the type system.)
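
A tiny sketch of the flavour (names made up):

    -- a protocol-ish interface as a typeclass
    class Describable a where
      describe :: a -> String

    data Dog = Dog
    data Cat = Cat

    instance Describable Dog where
      describe _ = "woof"

    instance Describable Cat where
      describe _ = "meow"

    -- dispatch is resolved from the type, checked at compile time
    announce :: Describable a => a -> String
    announce x = "it says: " ++ describe x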

Macros: there is Template Haskell [2], which is pretty interesting although I can't claim to've used it myself. It doesn't quite share the simplicity of macros in a homoiconic dynamic language, but at the same time in Haskell it feels like you don't need to lean on compile-time metaprogramming as much as you do in a Lisp to achieve power. (Can't quite put my finger on why).

[1] http://hackage.haskell.org/packages/archive/unification-fd/0... [2] http://www.haskell.org/haskellwiki/Template_Haskell


Kiselyov et al have an interesting paper on logic programming in Haskell [1]. They introduce a backtracking monad transformer with fair disjunctions, negation-as-failure and pruning operations. And the implementation is based on continuations, so it's more efficient than reification-based approaches such as Apfelmus's operational monad [2].

[1]: http://www.cs.rutgers.edu/~ccshan/logicprog/LogicT-icfp2005....

[2]: http://apfelmus.nfshost.com/articles/operational-monad.html


Is that Basque-Icelandic pidgin?


Heh, fair point. I guess it can read like that. I'll expand.

Monad transformer = thing that can transform a monad into another one with additional primitive operations. Monads are more or less the Haskell way of achieving something akin to overloading the semicolon in imperative languages. In imperative languages the meaning of the semicolon is baked into the language. It seems so natural that it feels stupid to think about what it means: normally it's just "do this, changing the program state by assigning some values to some variables, then do this with this new state", but if you think harder, it can also mean "do this and skip the rest of the loop" (if the first statement is a break) or "do this and then skip a bunch of levels up the call stack until you find an appropriate handler, unwinding the stack in the process" (if it's a throw). Haskell doesn't have any of this baked in, but in a way allows you to define the semantics of the semicolon yourself.

So monad transformers then allow you to build up more complicated semicolon semantics by combining simpler ones (and getting the full set of their primitive operations, such as assignment, break or throw statements in the previous examples).
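
A tiny sketch with the standard transformers/mtl packages: stacking failure on top of state gives a "semicolon" with both sets of primitives (the pop example is my own):

    import Control.Monad.State        -- mtl
    import Control.Monad.Trans.Maybe  -- transformers

    -- state primitives (get/put) plus a failure primitive (mzero)
    pop :: MaybeT (State [Int]) Int
    pop = do
      xs <- get
      case xs of
        []     -> mzero              -- failure aborts the rest of the block
        (y:ys) -> put ys >> return y

    -- runState (runMaybeT (do a <- pop; b <- pop; return (a + b))) [1,2]
    --   ==> (Just 3, [])
    -- ...and on [1] it yields (Nothing, []) instead of crashing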

Logic programming, in its simplest form, is concerned with deducing "goal" facts of the form C(X), from a bunch of rules of the form "A(X) and B(X) and ... imply C(X)" and other facts of the form "A(X)". One way you can do this is look for all the rules with your goal fact as their conclusion, then look how you can derive their premises, and so on. Which essentially boils down to a backtracking search.

So what Kiselyov et al did was to implement some primitive operations and overload the semicolon in a way which makes it easy to perform a backtracking search. Or more precisely, since it's a monad transformer, they figured out a way to add these primitives to any set of existing ones (such as assignment primitives for instance). Their implementation also provides some interesting backtracking operations which can be tricky to implement (the aforementioned fair disjunctions, negation as failure and pruning). And it is efficient since it's based on continuations (which are just functions of a certain format), as compared to other approaches which first have to represent the target program as data ("reify" it), then define an interpreter for that data and finally run the interpreter.
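
If you want to play with this without reading the paper, the logict package implements the approach. A tiny sketch of fair disjunction (assuming logict's Control.Monad.Logic module):

    import Control.Monad (msum)
    import Control.Monad.Logic

    odds, evens :: Logic Int
    odds  = msum (map return [1,3..])
    evens = msum (map return [0,2..])

    -- plain mplus would keep producing odds and never reach evens;
    -- interleave alternates fairly between the two infinite branches:
    -- observeMany 6 (odds `interleave` evens)  ==>  [1,0,3,2,5,4]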

Better?


Almost.

One of my frustrations with learning Haskell is that everybody assumes that I am coming from an imperative programming background. I've toyed with Python a wee bit, but I've really only ever used functional languages (Scheme, R, Clojure) and have only ever programmed in a functional style. Needless to say, I have no clue what the semicolon is supposed to do in imperative languages. As I have been told many times, knowing functional programming lets you start at 2nd base with Haskell...but no further.

Thanks for the attempt. I'll try again after I'm done with RWH.


Sorry, I guess that it's the default assumption since most people do come from an imperative background. And you had to choose a language with significant whitespace as your first imperative language, that's just low ;) But essentially, in languages with Algol-inspired syntax the semicolon is an end-of-statement marker. In Python, this is normally (but not always) the newline character.

I hope that helps decrypt the monad explanation part of my post somewhat. I wish I could give one which better relates to your background, but of the three languages you mention, I've only had very superficial exposure to Clojure. And you've also probably heard it before, but I would recommend LYAH over RWH as the first Haskell text.


I also find it amusing that Haskellers always tell you "semicolon" when you normally won't ever see one in any code.

What they're talking about is statements. If you have a completely pure functional language, no function should ever have more than one expression. If the function had two expressions that would mean one of them did something and the result was thrown away. But if you don't have side effects, why would you have an expression that does something and then throws the result away?

In Haskell any function can only have one expression. So if you need to do several steps (i.e. you need side effects) you have to use do notation. Do notation is syntactic sugar for turning what appear to be several statements into a single expression.

If you use the "curly brackets" way of coding then you would separate these statements with semicolons (tada!), but most code is whitespace-significant so they just use one line per statement.

So what do does is take the code you wrote in those lines and determine how to combine it. If you're using do then you're using a monad of some kind (a container, basically). Given the container and the contained, you can do two kinds of operations: "side effect kinds" that appear to perform a statement and ignore the result (these actually work with the container), and ordinary expressions.

The interesting thing about this kind of code is you don't keep track of the container. Sometimes you don't see any trace of it at all except in the type signature. There will be functions that talk to the container but they don't appear to take the container as a parameter (which is good since you don't have a handle to it anyway). Behind the scenes do is setting up the code in such a way that the container is actually passed to each of your expressions so your "side effects" aren't side effects at all, they're modifications to the invisible (to you) container.

And each line is fused together with the next by using one of two functions the container defines. So this is where the power comes in. How the container chooses to put these lines together depends on what the container actually is. An exception container, for example, might provide a special function (called, say, "throw") that will call each expression (functions by the time the container sees them) unless one of them calls its special "throw" function, at which point it doesn't execute any further expressions and instead returns the exception it was passed.
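
Concretely, here's a small sketch of what the do sugar expands to (the two functions are >>=, which feeds a result onward, and >>, which ignores it):

    main :: IO ()
    main = do
      putStrLn "name?"               -- "statement", result ignored
      name <- getLine                -- "statement" whose result is bound
      putStrLn ("hi " ++ name)

    -- what the compiler turns it into:
    main' :: IO ()
    main' =
      putStrLn "name?" >>
      (getLine >>= \name ->
        putStrLn ("hi " ++ name))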

I don't know if that makes things better or worse. :)


That did make it a little better, thank you :)


> Better?

Much better! Thank you :-)

I think you've also managed to prod me a little closer to understanding monads -- I always feel most tutorials are too complex, or too theoretical. Relating monads and the semicolon gave my brain a nice "hint" towards understanding them better, I think.

It's exactly related to the problem(s) I have with Haskell -- magical syntactic sugar that for me doesn't really seem to simplify anything. Quite like how the semicolon works in JavaScript -- an end-of-statement marker is needed, but in JavaScript it is inconsistent. And like in other languages where the semicolon is a statement separator, but one that normally nobody really thinks about...


For me, monads made sense when I started thinking of them as an API for composing normally uncomposable things. I'll try to explain it without assuming much knowledge of Haskell.

Composition is key in functional programming, and when you have two functions a -> b and b -> c, there is no problem... You can just compose them and get a -> c.

Using the List monad as an example, consider the case when you have a -> [b] and b -> [c]... You have a function producing bs but you can't compose it with the other function directly because it produces a list. We need extra code to take each element of the source list, apply the function, and then concatenate the resulting lists.

That operation becomes the "bind"[0] (>>=) of our Monad instance. Its type signature in list's case is [a] -> (a -> [b]) -> [b], which basically says "Give me a list and a function that produces lists from the elements in that list, and I will give you a list", but that is not so important. The point is that you start with a list and end up with another list, which means that by using bind, you can compose monadic operations (a -> [b]) indefinitely.
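
In code, that extra plumbing is a one-liner (a sketch; the real Monad instance for lists is equivalent):

    -- bind for lists: apply the list-producing function, then flatten
    bindList :: [a] -> (a -> [b]) -> [b]
    bindList xs f = concat (map f xs)   -- i.e. concatMap f xs

    -- bindList [1,2,3] (\x -> [x, x*10])  ==>  [1,10,2,20,3,30]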

The "return" operation completes the picture by letting you lift regular values into the monad, and when return is composed with a regular function a -> b it transforms it into a -> [b]

The more generic form of a monad in Haskell is "m a" which just means a type constructed from "a" using whatever "m" adds to it (nullability, database context, possible error conditions etc.)

As you can see from the type signature, "a" is still there. Monadic bind and return allow composing existing a -> m b and a -> b functions, and this abstraction is made available through the Monad typeclass.

[0] Note that for lists, there's actually another sensible way of combining them (zipping) that behaves differently. Since you can't have two instances of a typeclass for a single type, it's defined for a wrapper type "ZipList" instead -- though as an Applicative rather than a Monad, since the zipping version doesn't satisfy the monad laws.


You're welcome. But I'm now not sure if I got my point across correctly on the monad part of the post (but then again, smarter people than me have failed on that front). I'm confused about what you mean by syntactic sugar. I wouldn't call semicolons in JS syntactic sugar, as they are automatically inserted by the compiler. So I guess you could call lack of semicolons syntactic sugar for semicolons. Personally, I would call it idiocy :) (not necessarily the lack of semicolons - I like the appearance of e.g. Python - but their automatic insertion).

In general, I'd even say that Haskell provides very little syntactic sugar, and the stuff that it does provide is both quite useful and rather understandable. Examples being list comprehensions, or even list syntax to begin with, where [1,2,3] is sugar for 1:(2:(3:[])). Yes, the do-notation (which is what you normally use with monads) is also sugar, but it's not the difficult part of understanding monads. The difficult part is understanding the various bind operators (or equivalently, join or Kleisli composition) and how exactly they model different computational effects, and lastly forming the "bigger picture" from the examples.


  > So I guess you could call lack of semicolons syntactic sugar for semicolons.
Yes, exactly. Inconsistent sugar. My early experience with Haskell was that a lot of the syntactic sugar was (or seemed) very brittle, combined with uninformative error messages -- much like what missing semicolons can lead to, even in Java iirc.

(I believe that is fixed now, however. I still remember it -- contributing to a somewhat irrational fear of Haskell :-).


That was a great explanation, thank you.


> (Can't quite put my finger on why).

Lazy evaluation. Much of what you use Lisp macros for is to control when/if arguments get evaluated. Haskell just works this way so you don't need macros for it.
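
For example, a short-circuiting conditional must be a macro in a strict Lisp, but in Haskell it's just an ordinary function (sketch):

    -- laziness means only the chosen branch is ever evaluated
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    -- myIf True 1 (error "never forced")  ==>  1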


Another way to think about it is that every Haskell function is a macro.


Yes, already having gotten "over the hump" on Haskell, the issue I have with Clojure is that I dig into it (finding it a truly appealing language, and wanting to achieve that sweet spot), but usually walk away thinking it really isn't giving me anything Haskell isn't, and is lacking many of the niceties I've come to rely on. I just have this feeling I would love it if I didn't already know my way around Haskell, but the "benefit-venn-diagram" feels like concentric circles.

If there are some people out there who are well versed in both Haskell + Clojure, I'd love to hear some insight into where Clojure shines, and where you find yourself saying "Clojure is better at this!" in some distinct and meaningful way.


Well, the Clojure repl is way better. Not being pervasively lazy makes it easier to reason about many things. Not strictly boxing IO in the IO monad makes it easier to debug (debug by println is still useful!). Macros are much easier to understand than Template Haskell, and since you don't have real typing you can have heterogeneous collections, or values where the type of one part depends on the value of another part, easily.

Also I think it really is easier to get started with Clojure than with Haskell, even for the (very well thought out!) concurrency primitives, though obviously that doesn't matter if you're already up and running with Haskell.

I use Clojure exclusively at my job, but for some things I really miss something like Haskell's type system, even though I freely admit I don't really understand the type system at its higher levels (specifically related to GADTs, existential types, type families, etc.). Applicative functors would make some things so much nicer.

And you don't have to understand dependent typing to do it, either. I know there are heterogeneous list implementations for Haskell, but I can never understand how they work.


>since you don't have real typing you can have heterogeneous collections

You can have heterogeneous collections in Haskell, you just have to mean to. :) Check out the "Obj" example here [1]

[1] http://www.haskell.org/haskellwiki/Existential_types
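
A condensed sketch of the same trick (my own names, same idea as the wiki's Obj):

    {-# LANGUAGE ExistentialQuantification #-}

    -- wrap any Show-able value behind one opaque type
    data Showable = forall a. Show a => MkShowable a

    instance Show Showable where
      show (MkShowable a) = show a

    hetero :: [Showable]
    hetero = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]

    -- map show hetero  ==>  ["1","\"two\"","True"]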


"I know there are heterogeneous list implementations for Haskell, but I can never understand how they work."


No, there are no heterogeneous list "implementations". There are ways to get the type system to basically use the type-class as the "type" instead of the concrete type (this is how you get heterogeneous lists in languages like C#/Java as well btw). That means the same normal list can hold this heterogeneous data, as could maps, sets, Maybe, anything.


http://hackage.haskell.org/package/HList-0.2.3 sure seems like an "implementation" of heterogeneous lists.


That looks like some kind of research thing, not something anyone is using. And as I say, there is no need to, since you can use existential types or GADTs to type the list on interface instead of representation. And that will give you heterogeneous containers of any kind, whereas the package you showed only works for that one kind of container.


Thanks for the answer, exactly what I was looking for.

A few comments:

> Well, the Clojure repl is way better.

I've messed around with it, not enough to know, but it wouldn't shock me. `ghci` is pretty decent, but it's not really world-class at anything. IPython is probably the best repl I've ever used for any language.

> Not being pervasively lazy makes it easier to reason about many things.

Agree. This is, indeed, problematic at times.

> Not strictly boxing IO in the IO monad makes it easier to debug (debug by println is still useful!).

Well, Debug.Trace gets you most of what you want there (using unsafePerformIO under the covers), so there is an "escape hatch" for doing printf-style debugging. But it's not quite as smooth as printf in languages that don't sandbox purity so much. On balance, I'd still take the purity (because I do a lot less debugging!), but point granted.
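
For anyone following along, the escape hatch looks like this (a sketch):

    import Debug.Trace (trace)

    -- println-style debugging from pure code;
    -- trace uses unsafePerformIO under the covers
    fib :: Int -> Int
    fib n | n < 2     = n
          | otherwise = trace ("fib " ++ show n)
                              (fib (n - 1) + fib (n - 2))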

> Macros are much easier to understand than Template Haskell

No experience with macros, but since lisp-like things are so big into macros, sounds plausible.

> and since you don't have real typing you can have heterogeneous collections, or values where the type of one part depends on the value of another part, easily.

I'm not sure I view this as a /virtue/, to be honest. I'd rather have the type checking, and use typeclasses or ADTs to put mixed types into a sequence.

> Also I think it really is easier to get started with Clojure than with Haskell,

Yeah, I think this is undoubtedly true. Haskell veterans often say "what's the big deal?", but the deal is big. And it's not so much because of "math", in the classical sense, as much as it's about very high, very new, abstractions. Many of them without analogies to things you've done before or things in the "real world". I did OCaml for a while before I did Haskell, and Haskell was still a pretty big leap.

Anyway, thanks again for the constructive feedback.


As someone who is trying to learn Haskell, my best guess would be: Clojure's "hump" is like 1/1000000 the size of Haskell's "hump".


Not well versed in both, but I think desktop apps are an area where Clojure shines. Let's say I want to write a cross platform desktop GUI app in Haskell. What do I use? wxHaskell seems to be actively maintained, but a lot of the projects listed on http://www.haskell.org/haskellwiki/Applications_and_librarie... appear to be inactive or dead.

With Clojure I have access to Swing (yeah I know...Swing isn't everyone's favorite) without installing any additional libraries. Then there is seesaw which is a very nice layer on top of Swing. To install seesaw I simply edit a text file in my Clojure project and the project management/automation system installs it for me. Very nice.


A meta-comment: I find it amazing that we are arguing whether Clojure or Haskell is better suited for real-world applications.

Discussions like this one convince me that we have advanced our programming techniques over the last decade or so. After all, we could be arguing about semicolons, the GOTO statement, or other similarly important issues.

It's a nice feeling.


Sure, arguing that a functional language is the solution to our problems hasn't been done since oooo.... LISP? We have come so far.


The arguing here is which functional language is the solution, not arguing that a functional one is!


> Clojure is nice, but the idea that it is somehow built for the real world where Haskell isn't is just patent nonsense that I wish people wouldn't repeat. You do not need to understand complex math to actually use Haskell!

In the abstract, possibly, but not in practice. You can't ignore the effect the general bent of the community has on the language and the code that is idiomatic in the language. People tend to describe Haskell code in mathematical terms, and libraries use operator overloading and the like in mathematical analogies.


All of the terminology used widely in the community can be--and almost always is--defined in terms of Haskell rather than math. You can pretend the math does not exist and treat it as language-specific jargon. You might miss out on the truly cutting-edge libraries until somebody writes a nice tutorial for them, but that would still be true even if the abstractions were not mathematical in nature.

Many of the people in the community do not know much of the math beyond the terminology they learned in Haskell. That's certainly what I gathered talking to people at a local Haskell meetup, and it matches my impression of many of the people online (e.g. on the mailing list or StackOverflow).


If it is language-specific jargon, then people's problem is that Haskell has way too much jargon for them to wade through. No matter how you phrase it, to many people, any real Haskell is an impenetrable morass of esoteric abstractions.

Basically, Haskell itself is pretty easy. The language is small, simple and elegant. But the libraries are massively, massively complex and do things in ways that are very often not at all intuitive if you're not familiar with the math behind them. That's what people mean when they say it's too heavily based in math.

This is not to say that you can't use Haskell in the real world, but it has some attributes that make it daunting for many people. I don't think this is a bad thing, either -- not everything should be for everyone. Variety is the spice of life.


> But the libraries are massively, massively complex and do things in ways that are very often not at all intuitive if you're not familiar with the math behind them.

I'm just a beginner Haskeller, but this seems like a pretty bold (and false) claim. I've used Parsec for writing a Scheme interpreter (while reading the book), Shelly and Shake for build and test automation, and some statistics libraries for, well, statistics. They were all intuitive to use, and there was no "impenetrable morass of esoteric abstractions". I'm not sure what you're on about. Maybe provide examples?


I will write this answer in the flavor of your average Haskell library, as read by somebody who has learned the core language very well but not that library:

  you <*$%^> everyone %%<#>%% experience
Are you really going to tell me you have never looked at code written with a package you haven't learned and initially found it to be unintelligible? If you really haven't, then I would guess that you are simply well-suited to Haskell. Again, I'm not saying Haskell is bad, but that it does have attributes that make it daunting to many people. The top-rated comment on here even talks about how easy Haskell is now that he's "over the hump." Even compared to a relatively esoteric language like Clojure, Haskell's hump is pretty big.

I believe the canonical example of a weird math-based library that makes beginners cry is Control.Arrow. Reading arrow-based code without having fully grasped both the idea of arrows and the API itself (do you think <&&&> could use a few more ampersands?) is an exercise in frustration. Even the humble monoid -- simple as it may be -- is hard for many people to grasp, because it's such a frustratingly generic idea with horribly unhelpful terminology attached to it.

Want more? Here's one of the top Haskell projects on Github — a library that does lenses, folds, traversals, getters and setters: https://github.com/ekmett/lens

With that library, you can write things like

  _1 .~ "hello" $ ((),"world")


You mean that the example would be more readable if it used proper function names instead of operators? You would still need to "learn the library", i.e. look-up what the functions do.


I mean that as a whole it's daunting and feels "unfriendly" to a lot of people. The heavy use of operators is part of it, but it's more than that. Haskell gets really abstract, which is cool, but it's hard to think about. Like I said, even monoids and monads seem hard to people learning Haskell despite being really simple. And look at the FAQ on that lens library -- it's an abstraction on top of things that are already pretty abstract in Haskell, offering "a vocabulary for composing them and working with their compositions along with specializations of these ideas to monomorphic 'containers'". That's cool and I really don't mean to criticize it, but it's undeniably esoteric and abstract.


Function names give you at least a vague idea of what the function does, but operators don't. I think Haskell's operators can make code more concise and elegant to someone who's familiar with them, but it does make it harder for a beginner to understand what's going on.


I'm pretty sure all the railing hard on the symbols above the number keys must be symptomatic of some sort of verbal learning disability in Haskell programmers...


I don't think that's fair. It allows for some really concise code that actually is quite readable once you're conversant. For example, I do not think Parsec would be better off if "<|>" were renamed to "parsecOR" or something like that. It's just, like I said, obscure and daunting.


I do think Parsec would be better off if "<|>" were something like "parsecOR". Good function names are good.


The idea is that Parsec represents a grammar, similar to BNF with <|> giving alternative rules. So something like:

    atom =  many digit
        <|> identifier
        <|> stringLiteral
        <|> between (char '(') (char ')') expression
I can't imagine any way this would look better with a name instead of the <|> operator. You could write it something like:

    atom = many digit
        `or` identifier
        `or` stringLiteral
        `or` between (char '(') (char ')') expression
but that's no clearer and a bit harder to follow. With a much longer identifier than "or", it would be completely unreadable. If you had to make "or" prefix (the backticks let you use a normal function in infix position), it would be even less readable. Combine the two (a longer, prefix identifier) and you have an even bigger mess!

The operators let you see the structure of an expression at a glance. You don't have to read the whole thing to figure out what's going on. Moreover, the <|> symbol is pretty intuitive--it's clearly based on | which usually represents some sort of alternation.


I find the second one much clearer than the first one, because there is no ambiguity to me whether '<|>' means the same thing as '|'. It's also less ugly.


> chc: I do not think Parsec would be better off if "<|>" were renamed to "parsecOR"

> sanderjd: I do think Parsec would be better off if "<|>" were something like "parsecOR"

Do you think perhaps some people think about functions visually while others think about them auditorily?

An IDE could easily render function names however the human reader wants. The option whether to render function names as names or symbols would be customizable, just like the display colors for various element types are.

Within each option, there's further choices. For names, there could be a choice between various natural languages. For symbols, it could be restricted to ASCII e.g. <|>, or full Unicode, e.g. | enclosed by a non-spacing diamond.


For what it's worth, Emacs already does that last thing. As a contrived example borrowed from a post I wrote a while back[1]:

    \ a b c -> a >= b && b <= c || a /= c
gets rendered as:

    λ a b c → a ≥ b ∧ b ≤ c ∨ a ≢ c
However, this is a bit of a pain to do in general. It works well for common functions and syntax elements, but has to be built into the editor. Doing it more generally would require the author of the initial function to come up with multiple different names for it, which sounds unlikely.

[1]: https://news.ycombinator.com/item?id=4742616


Your examples are terser, which many programmers would prefer.

> Doing it more generally would require the author of the initial function to come up with multiple different names for it, which sounds unlikely.

Unlikely for now, but could become more likely in the future.


If you're really interested in this, I'd encourage you to take some actual Parsec code and make this change, then compare the two for readability. My guess is that while it's conceptually nice, you'll find in practice it hampers readability more than it helps.


Thanks for the constructive suggestion - I don't have much invested in this particular little debate so I'm not sure I'll take the time to do that, but it's definitely a good idea.

I will say that in my opinion, the alternative in the reply above this (which I can't reply to) looks fairly nice, is clearer, and not even a little bit harder to follow. But I'll allow that as a very casual Haskell programmer, my opinions about Haskell are probably not particularly valid.


>you <*$%^> everyone %%<#>%% experience

Haskellers like to use symbols for two-argument functions because of precedence rules. You learn them for the modules you're using just as you would have had to learn the function names. But usually seeing how the symbol is named, combined with its type signature, is enough.

As far as lenses go, I personally don't like them so far. But not because of the (admittedly, questionably named) functions, but rather because doing anything with lenses seems to require Template Haskell magic.


Given a well-designed library that does something concrete related to a task you understand, I find that I generally don't have to learn the function names just to read code that uses it. Like, if see "HTTP.post(data, headers)" or "record.attach(file)", I have a pretty good idea what it's doing even if I don't know those particular libraries.


I think you are correct here. In my view, advanced languages facilitate the development of complex abstractions in the libraries. I find this to be the case in both Haskell and Scala. But I also don't think we will ever return to the innocent days of yore. Programming is growing up after six decades of trying to figure out what 'mainstream' ought to be. I suspect that the whole business will bifurcate into easy-to-use languages and increasingly complex libraries atop more sophisticated languages to meet the challenges of scale and speed.


This just isn't true. Everything in Haskell is about the types and that's what you have to come to grips with if you want to use the language. Have you looked at projects like Yesod or Snap? Not a lot of math there but a hell of a lot about types.


Even if Haskell could somehow be proven to be "the better language" (whatever that means), Clojure is still extremely elegant, simple and beautiful, and it has one huge advantage: it runs on the JVM, so it automatically harnesses the vast JVM ecosystem, as well as all of its (unmatched, IMO) monitoring, profiling and runtime-instrumentation tools (+ dynamic class-loading and hot code replacement).


Indeed, I think when most people say "suitable for real world projects" they mean "I can get away with using this at work because it's just a jar".


Tried Frege?

https://github.com/Frege/frege

"Frege is a non-strict, pure functional programming language in the spirit of Haskell ... Frege programs are compiled to Java and run in a JVM. Existing Java Classes and Methods can be used seamlessly from Frege."


" in the spirit of Haskell " means it isn't Haskell.


True, but it's in the same camp, where "elegant" and "I have to write parentheses all the time" are two different concepts, and, as was the point, it can use the JVM libraries (in a type-safe manner, btw).


I think you can use both, as Clojure is more dynamic in nature while Scala/Haskell are good compiled, static complements.

For large projects, I would go with Scala/Haskell, and for front-end systems, I would use Clojure.

Why do it this way? With Clojure, you can easily modify data or small pieces of logic: simply edit the script, no recompile needed. With the Scala/Haskell API or library, you probably want something that changes less frequently. That backend system may act as a core piece of your library. ...

And if you don't like that. You can do Python and Java/C# which can give you the same effect.


I think Haskell is great but it's weak in libraries.

This is where Clojure has a slight advantage because you can tap into the Java ecosystem for stuff that isn't currently in the Clojure world.

Database libraries are a good example of this; database support in Haskell is still pretty flaky and cumbersome to set up in comparison to JDBC.


+1. People get themselves worked up about words like "monads" and "algebraic data types," but using those things is not difficult, and the amount of guarantees they give you about your code is so far beyond anything else out there that IMO it's not worth doing functional programming unless you have a proper type system together with monads.


I'm curious what advantages you think Haskell has over ML/OCaml. I'm pretty familiar with OCaml and like it a lot, and Haskell seems to be very similar, but I haven't used it for anything substantive. What benefits, if any, does Haskell have over OCaml, and is it worth learning if one already knows OCaml? I know that Haskell has lazy evaluation while OCaml doesn't, but you can simulate lazy evaluation in OCaml.


I actually also really love OCaml. It has some truly brilliant features which Haskell lacks like polymorphic variants (and structural sub-typing in general) and a top-notch module system. (Much unlike Haskell's "worst-in-class" module system.)

Haskell does have a bunch of advantages though. A pretty petty one is syntax: Haskell's is simpler, prettier, more consistent and yet more flexible. They're very similar, of course, but I think Haskell gets all the little details right where OCaml often doesn't.

The single biggest advantage Haskell has--more than enough to offset the module system, I think--is typeclasses. Typeclasses subsume many of the uses of modules, but can be entirely inferred. This moves many libraries from being too awkward, inconvenient and ugly to use to being a breeze. A great example is QuickCheck: it's much more pleasant to write property-based tests in Haskell because the type system can infer how to generate inputs from the types. Being able to dispatch based on return type is also very useful for basic abstractions like monads. Beyond this, you can also do neat things like recursive typeclass instances and multi-parameter typeclasses.
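
The QuickCheck point in miniature (a sketch; note that no generator is written anywhere, it's inferred from the property's argument type):

    import Test.QuickCheck

    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    -- quickCheck prop_reverseTwice  ==>  +++ OK, passed 100 tests.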

Honestly, if I was forced to pick a single Haskell feature as the most important, I would probably choose typeclasses. They're extremely simple and a foundation for so much of Haskell's libraries and features. Most importantly, typeclasses are probably the most obvious way the type system goes from just preventing bugs to actually helping you write code and making the language more expressive, in a way that a dynamically typed language fundamentally cannot replicate.

Laziness in Haskell is not really a big deal. It can make much of your code simpler, but it also makes dealing with some performance issues a bit trickier. It also tends to make all your code more modular; take a look at "Why Functional Programming Matters"[1]. I am personally a fan of having laziness by default, but I think it's ultimately a matter of philosophy.

And philosophy, coincidentally, is another reason to learn Haskell: it takes the underlying ideas of functional programming further than OCaml. In Haskell, everything is functional and even the non-functional bits are defined in terms of functional programming. Many "functional" languages are actually imperative languages which support some functional programming features. OCaml is one of the few that goes beyond this, but there is really a qualitative difference when you go all the way.

Once you know that everything is functional by default, you can move code around with impunity. You no longer have to worry about the order code gets evaluated in or even if it gets evaluated at all. You also worry far less about proper tail calls, which mostly compensates for having to deal with some laziness issues.

Haskell also embraces the philosophy in another way: you tend to operate on functions like data even more than in OCaml. The immediately obvious example is that Haskell has a function composition operator in the Prelude. There is no reason for OCaml to not have this, but--as far as I know--it doesn't. It's an illustration of the philosophical differences between the two languages. On the same note, Haskell also tends to use a whole bunch of other higher-order abstractions like functors and applicatives.
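
A trivial sketch of the idiom (the function name is mine):

    -- (.) lives in the Prelude, so point-free pipelines are everyday code
    countWords :: String -> Int
    countWords = length . words

    -- countWords "to be or not"  ==>  4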

So I think the most compelling difference is ultimately fairly abstract: Haskell just has a different philosophy than OCaml. This ends up reflected in the libraries, language design and common idioms and thus in your code. I think that by itself is a very good reason to learn Haskell even if you know OCaml; and since you do, picking Haskell up will be relatively easy!


Thanks for the awesome reply! I think you've just convinced me to check out Haskell more thoroughly at the next chance I get.

Edit: It's great that you mention OCaml's module system too; my professor probably mentions the benefits of it in just about every lecture.


That was a great reply. I would personally like to hear more about the advantages OCaml has. OCaml was my first foray into functional programming but I was really put off by +., print_int and friends (which you obviously don't need in Haskell). I know about functors but what else is so bad about Haskell's module system? The biggest weakness of it I see is the inability to handle recursive imports.


I used to love OCaml, but it doesn't have the manpower/momentum/excitement that Haskell has (had?)

(IIUC because OCaml's licensing is not very welcoming to open-source efforts).

I'm very good at picking losers... (OCaml, D, Factor..)


Haskell has some sharp edges which make me hesitant to use it. I once had a Haskell program with a serious space leak that would cause it to exhaust RAM and crash. The culprit? Adding "+1" to a counter repeatedly and then never evaluating the counter. It took a very smart programmer (who happens to have a PhD in math) six hours of debugging to find it.
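
For the curious, the shape of that bug is roughly this (a reconstruction, not the actual code; compiled without optimizations, since GHC's strictness analysis can sometimes fix it for you at -O):

    {-# LANGUAGE BangPatterns #-}

    -- leaks: the accumulator builds a huge chain of unevaluated (+1) thunks
    countLeaky :: [a] -> Int
    countLeaky = go 0
      where go acc []     = acc
            go acc (_:xs) = go (acc + 1) xs

    -- the fix: bang patterns force the counter at each step
    countStrict :: [a] -> Int
    countStrict = go 0
      where go !acc []     = acc
            go !acc (_:xs) = go (acc + 1) xs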

Also I've never seen Haskell's FFI work - it's just too much of a pain. Clojure's Java integration works right out of the box.


> Also I've never seen Haskell's FFI work - it's just too much of a pain.

For what it's worth, I've found Haskell's FFI very pleasant for interfacing with C (and the reverse looks just the same, but I haven't tried). You then interface with any other language through C, since most can FFI through C anyway. I think it's a very reasonable solution!


> six hours of debugging

I guess this just shows that Math PhDs don't know how to use the fine profiler: http://stackoverflow.com/questions/15046547/stack-overflow-i...

Which would have saved about 5hrs 50mins...


> I'm merely going to give you my highest-level take-away from trying Clojure:

Then you go on to describe what you find good about Haskell? That sounded weird.


No, I was trying to explain the biggest difference I found between them when I tried Clojure. I agree that wasn't worded very clearly.


"And the custom types are not actually complex, unlike what Clojure leads you to believe with its preference to using the basic built-in types"

Custom data types are not always complex, but efficient and useful data types often are. Clojure's data structures have very good performance for immutable data structures. Clojure's maps, for instance, have what effectively amounts to an O(1) lookup, while Haskell's Data.Map is a size-balanced binary tree with only O(log n) performance.


> effectively amounts to a O(1) lookup

So O(log n) then. We obey gravity around these parts.

I presume you are referring to HAMT-like structures, such as found in http://hackage.haskell.org/packages/archive/unordered-contai... which are by no means unique to Clojure.

Besides simply having the data type, you still need a good allocator and GC optimized for immutable data, which is where GHC stands alone - http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...


> So O(log n) then. We obey gravity around these parts.

Which is of no practical difference to O(1) if log n is always very small.

> I presume you are referring to HAMT-like structures... which are by no means unique to Clojure.

Of course they're not. My point was that it is not trivial to derive an efficient and useful data structure like a HAMT from Haskell's type system. The data structures in a library like unordered-containers are in practice just as opaque to the developer as the core data structures in Clojure.


> as opaque to the developer

http://hackage.haskell.org/packages/archive/unordered-contai...

This highly optimized data type is defined in 6 lines, easily accessible from the docs.


No it isn't, at least not in any meaningful way. The type definition itself may be 6 lines, but that doesn't adequately describe the data structure, otherwise there'd be no need for the rest of the library.


Actually most Clojure maps are Hashtables, which do have O(1) amortized lookup and insert. Granted, the 'amortized' is important.


>Clojure is nice, but the idea that it is somehow built for the real world where Haskell isn't is just patent nonsense that I wish people wouldn't repeat.

Really? Because SPJ, for one, seems to agree with it.


Coldtea, could you provide a link where SPJ says this? I googled it but couldn't find anything.


I think he may be talking about this: https://www.youtube.com/watch?v=iSmkqocn0oQ


Yes. Plus the cheeky "avoid success at all costs" remarks etc. Of course he doesn't mean it's not usable (or used already) in the real world at all.

Just that it's not in the perfect practical form that a real world language would have.


This is just propaganda - repetitive reciting of slogans made of long words.)

It is, of course, difficult to argue with zealots, but I will try nevertheless.)

The cost of what is called an "advanced type system" is the inability to put elements of different types in the same list or tuple or whatever. It is not just a feature, it is a limitation. In some cases, when you are, for example, dealing only with numbers, say, positive integers, it is OK, but what then is the real advantage of such type checking?

In the other case, the concept of a pointer, which all those "packers" are trying to throw away, is very mind-natural. When we have a box we can put anything in it, as long as it fits. Imagine what a disaster it would be if, when moving from one flat to another, you had to pack your stuff only into special kinds of boxes. This one is only for such kind of shoes, this one is only for spoons, that one for forks. So, with a pointer we can "pick up" anything, and then decide what it is and where it goes.

Another mind-friendly concept is using symbols and synonyms to refer to things - symbolic pointers. It is how our minds work. Those who are capable of thinking in more than one language know that we can refer to a thing using different words, but the "inner representation" is one and the same.

These two simple ideas - using pointers (references) and having data ("representation") define itself (type-tagging is another great idea - it is labeling) - give you a very natural way of programming. It is a mix of the so-called "data-directed" and "declarative" and, as long as you describe transformation rules instead of imperative step-by-step processes, "functional" styles.

Of course, the code will look a certain way - there will be lots of case analysis, like unpacking things from a big box - oh, this is a book, it goes on a shelf; these are computer speakers, they go on the table, etc. But that's OK, it is natural.

The claim that "packing" is the best strategy is, of course, nonsense. Trying to mimic the natural processes of our minds (as lousily as we can manage it) has, in my opinion, some merit.

There are some good ideas behind each language and some not so good. Symbolic computation, pattern matching, data description (what an s-expression, or YAML, is) are good ones. Static typing, describing "properties" instead of "behavior" - not so much.)

Also it is good to remember that we're programming computers, not VMs. There is something called "machine representation" which is, well, just bits. That doesn't mean swinging to the other extreme and programming in assembly, but it is advisable to stay close to the hardware, especially when it is not that difficult.

Everything is built from pointers, whether you like it or not.) The idea of throwing them away is insane; the idea (a discipline) of not doing math on them is much better. The idea of avoiding over-writing is a great one - it is good even with paper and pencil, where everything becomes a mess very quickly - but avoiding all mutation is, of course, madness.

So, finding the balance is the difficult task, and Haskell is certainly not it. Classic Lisps came close, but it requires some skill to appreciate the beauty.) So, the most popular languages are the ugliest ones.


> The cost of what is called an "advanced type system" is the inability to put elements of different types in the same list or tuple or whatever. It is not just a feature, it is a limitation. In some cases, when you are, for example, dealing only with numbers, say, positive integers, it is OK, but what then is the real advantage of such type checking?

I can't say I managed to completely understand the argument you're making here, but doing mostly Java/Python for work, I don't remember the last time I had to write a heterogeneous list. At worst, you can always go for existential types.


A list that has a special marker at its end is a general concept. The-empty-list in Lisp, EOF in C, etc. With this you have streams and ports and all the nice things.)


Either today's tea was not strong enough, or you're answering the wrong person.


Yeah, I should change a tea-shop.)

In my opinion, the singly-linked-list data structure as the foundation of Lisp was not selected by accident. It is also the most basic and natural representation of many natural concepts - a chain, a list. You can ask for the next element, and find out that there is no more. Simple and natural. Because of its simplicity the code is also simple.

Such lists should be heterogeneous, because when all the elements are of the same type, it is more natural to represent them as an array - an ordered sequence. As far as I know, Python's lists actually are dynamic arrays.

Sequences of elements of the same type (same storage size and encoding) with a marker at the end could also be viewed as homogeneous lists. C strings processed as lists of characters are the canonical example.

Now consider the UTF-8 encoding. It is a variable-length encoding. A UTF-8 string is not an array, and because you cannot tell the boundaries between runes in advance, it is not a list either. But it can nevertheless be treated and processed as a stream, until the EOF marker is reached. This is why it was invented at Bell Labs: to keep things as simple as possible.

Now, you see, the concept of a homogeneous list from math is not enough for CS, and sometimes it is much better to keep it fuzzy. What a list is is a matter of point of view.

I think I will keep my dealer.)


> In my opinion, the singly-linked-list data structure as the foundation of Lisp was not selected by accident. It is also the most basic and natural representation of many natural concepts - a chain, a list. You can ask for the next element, and find out that there is no more. Simple and natural. Because of its simplicity the code is also simple.

Yes, that's a recursive data structure well-suited to functional languages, same as Haskell lists.
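
In fact, modulo syntax, Haskell's built-in list is exactly that recursive sum type:

    -- the Prelude's [a] is equivalent to:
    data List a = Nil | Cons a (List a)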

> Such lists should be heterogeneous, because when all the elements are of the same type, it is more natural to represent them as an array - an ordered sequence.

From a memory point of view, maybe, but that's really an implementation detail. If you take, eg, Perl arrays, which you can access by index, push, shift, unshift, and pop, the actual implementation is invisible to the programmer, just as it should be in this sort of language. Java will happily store cats and dogs in an array of Object, for instance.

> Now consider the UTF-8 encoding. It is a variable-length encoding. A UTF-8 string is not an array, and because you cannot tell the boundaries between runes in advance, it is not a list either. But it can nevertheless be treated and processed as a stream, until the EOF marker is reached. This is why it was invented at Bell Labs: to keep things as simple as possible.

You could very well store it as a list of bytes. This wouldn't be terribly efficient, but it's perfectly doable. Whether you process it as a stream or you have it stored in whatever array/collection is orthogonal to the fact that your language of choice supports heterogeneous lists. You can do stream processing in Haskell too (with, eg, pipes). You also have access to other data structures which are not lists, but which do expect to be homogeneous.


Yes, it is indeed a stream of bytes, with a zero byte as an EOF marker. That's why UTF-8 is good enough.

As for lists, as long as your next element is not always in the next chunk of memory, you need a pointer. A chain of pointers is a singly-linked list. This is the core of a Lisp, and it is not an accident. Together with type-tagging, you can have your lists heterogeneous, as simple as that.)

This is a part of the beauty and elegance of a Lisp, in my opinion - few selected ideas put together.



