
You know, this almost reminds me of the ML vs. Haskell debate. I remember fondly hearing the 15-312 kids talk about how ML is right and everything else is wrong. The CMU PL department was kind of adorable that way.

One of the classic examples is Haskell's typeclasses. They're kinda kludgy because there's no way to provide more than one instance of a typeclass for a given type. ML's functors are way better.

But as it turns out, most of the time we just need one instance. It's much simpler to invoke and much easier to understand. If you need more than one instance, wrap the type in a newtype.
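The usual workaround can be sketched with Monoid: Int has two lawful instances (addition and multiplication), so Data.Monoid distinguishes them with newtype wrappers instead of overlapping instances.

```haskell
import Data.Monoid (Sum(..), Product(..))

-- Int admits two lawful Monoid instances: (+, 0) and (*, 1).
-- Rather than allowing two overlapping instances on Int itself,
-- Haskell wraps each in a newtype carrying its own instance.
sumAll :: [Int] -> Int
sumAll = getSum . foldMap Sum

productAll :: [Int] -> Int
productAll = getProduct . foldMap Product
```

foldMap picks the instance from the wrapper type, so "which instance?" is answered at the type level rather than by an explicit functor application as in ML.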

In the end, we wouldn't be where we are in PL without the crazy folks at CMU and Bell Labs. So I feel no small amount of sadness that ML didn't win. I concede that ML may have made better design choices. But what matters more to me is that elements of FP make it into the mainstream, and here Haskell has done a better job of showing what is good about FP.




You have to concede it's a bit funny that you're pointing to Haskell as an example of the 'worse-is-better' approach that leads to viral adoption of dirty solutions over the endless pursuit of perfection.


If anything, you could turn it around and say that ML's approach to control and data effects exemplifies worse-is-better on the purity spectrum. It took Haskell a long time to solve these problems, culminating in Haskell 98, and it's only recently that Haskell has "caught on" relative to ML.


Haskell is much more "MIT approach" than "New Jersey approach". For a better example, consider how the quick-and-dirty hack that is JavaScript is now far, far more commercially important than the entire FP language family combined (although, it's worth noting, it was somewhat inspired by FP).


As for importance, I'd boldly compare JS to the Middle Ages' plague.

It is very important, but not in a way that helps.


So one of the first languages with closures to get huge mainstream popularity hasn't helped? Right...


Closures are too small an addition compared to the huge step backward in semantics. Side effects in JS are impossible to rein in.


It depends on what perspective you take, I guess. From one view ML is the worse language, but from other views it would be Haskell. Suppose you require a fully formal specification which has been verified in a mechanized way: you have that for Standard ML in Twelf, and Haskell has nothing of the sort.

If you look at the concept of being a purely functional language however, it is the opposite with Standard ML being the "worse" animal.

Module system: SML got it right and Haskell got it wrong.

Laziness/Strictness: This is a duality. There are advantages and disadvantages to both approaches, so there is no clear worse/better choice IMO. If you look at the recent work on polarity in proof theory, it becomes clear that when you latch onto a specific evaluation order, you make some things simple and other things hard.


Dude, you just DID NOT insult Haskell!

[Sort of relevant]

Apparently SPJ was once asked by a customs official in America whether an American citizen could do what he did. Someone with him announced that his work as the creator of ML made him irreplaceable.

So you see, ML's got fanboys in high places. Even close to SPJ apparently.

Source for the SPJ koan: http://www.youtube.com/watch?v=NWSZ4c9yqW8 (around the 4-minute mark).


FWIW you can link to a specific time in a YouTube video like this: http://www.youtube.com/watch?v=NWSZ4c9yqW8#t=3m0s


Perhaps Haskell compromised on first-class modules. But ML compromised on:

* Type-classes (which overlap but aren't really the same thing and are probably more important)

* Typed-effects (a.k.a. purity): this is a big one to lose.

* Laziness/Strictness control

I think ML is more of a compromise than Haskell.
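On the typed-effects point, a minimal sketch of what the types buy you (`shout` and `greet` are illustrative names, and `name.txt` is a hypothetical file):

```haskell
import Data.Char (toUpper)

-- Effects are visible in the types: a pure function cannot do I/O.
shout :: String -> String      -- pure by construction
shout = map toUpper

-- Anything that touches the outside world is marked with IO.
greet :: IO ()
greet = readFile "name.txt" >>= putStrLn . shout
```

In SML, `shout` and `greet` would both just be functions, and nothing in the type of `shout` rules out it printing or reading files.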


Actually, I was originally of the same opinion, but then I read some lecture slides from Simon PJ. He has suggested that laziness was perhaps not the best default, since it is hard to reason about space leaks with laziness; likewise that typeclasses are inferior to ML functors.

http://www.cs.nott.ac.uk/%7Egmh/appsem-slides/peytonjones.pp...

I personally disagree. I'm pretty sure laziness has made it much easier for me to try out an idea; if it works, then I'll test that I didn't introduce a space leak. The very notion almost makes me feel dirty. ;-) :-P


> He has suggested that laziness was perhaps not the best default. It is hard to reason about space leaks with laziness.

Aha, but it's perhaps not the best default for practical reasons, while being the best default for theoretical reasons. Doesn't that make ML's eagerness an example of worse-is-better, because the theoretically worse solution is simpler when the rubber meets the road and you just want to find out where you're leaking memory?


> Laziness/Strictness control

I don't think there was any compromise here, just a different choice of default. Both offer a great deal of control over what evaluation method to use.
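For concreteness, here's roughly what that control looks like on the Haskell side (a small sketch; `mean` is an illustrative helper):

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks over a long list;
-- foldl' forces the accumulator at each step, running in constant space.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0

-- The same per-binding control is available with bang patterns (or seq):
mean :: [Double] -> Double
mean = go 0 0
  where
    go :: Double -> Int -> [Double] -> Double
    go !s !n []     = s / fromIntegral n
    go !s !n (x:xs) = go (s + x) (n + 1) xs
```

SML goes the other way: eager by default, with explicit thunks (`fn () => ...`) or a lazy datatype when you want call-by-need.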

Haskell, for what it's worth, compromised on:

* Formal definition - this is an often-overlooked win for SML, and the sort of thing that lots of languages could use.

* Module system - type classes complicate this problem, but the lack of a decent module system hurts Haskell when building large systems.


Well, I already mentioned the Module system part.

Good point on the Formal definition compromise.

But I think laziness-by-default has some fundamental advantages that SML pretty much loses: http://augustss.blogspot.com/2011/05/more-points-for-lazy-ev...
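One of the points in that post is modularity: laziness lets a producer be "infinite" as long as the consumer only demands a prefix. The classic (inefficient, purely illustrative) trial-division sieve:

```haskell
-- An "infinite" list of primes; laziness means only the demanded
-- prefix is ever computed.
primes :: [Int]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- The consumer, not the producer, decides how much work happens:
firstPrimes :: [Int]
firstPrimes = take 5 primes
```

In an eager language you'd have to fuse the bound into the producer, or reach for an explicit stream/thunk library.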



