> Haskell has them, and Scala encodes them using a horrible hack
You can view it like that; my perspective is that in Scala the underlying mechanism (passing the dictionary around by means of implicit parameters) is more explicit. Much like how prototypes in JavaScript work versus class-based OOP: with prototypes you can have class-based design, but you can also go beyond that. And this is similar to how Scala deals with type-classes. This does have a cost in terms of learning curve, but then you've got extra flexibility. CanBuildFrom is not a type-class.
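To make that mechanism concrete, here is a minimal sketch of the encoding. `Show`, `describe`, and the instance are made-up illustrative names, not anything from the standard library: the type class is a trait, an "instance" is an implicit value, and the implicit parameter is the dictionary being threaded through the call.

```scala
// Hypothetical sketch of a type class encoded with implicit parameters.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // The Int "dictionary". It is found automatically by implicit
  // resolution because it lives in the companion object of Show.
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
}

object Demo {
  // The implicit parameter list makes the dictionary passing visible.
  def describe[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

  def main(args: Array[String]): Unit = {
    println(describe(42))               // dictionary resolved implicitly
    println(describe(42)(Show.intShow)) // or passed by hand
  }
}
```

The point is that nothing magical happens: the "instance" is an ordinary value that you can also pass explicitly.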
On implicit conversions and view bounds, I'm not fond of them either. I actually hope they'll go away completely. I think they made it into the language (and survive) only because Scala runs on the JVM and had to make the builtins (like Arrays and Strings) behave nicely without much overhead. In Haskell, Strings really are lists of chars. Scala couldn't do that unless it introduced its own String class, which would have been awful, or some sort of implicit conversion mechanism, which is what it did. This could have been some magic that Scala's compiler applies only to builtins, but personally I dislike compiler magic (that you can't hook into) more than I dislike hacks made for interoperability (if only they got rid of that implicit conversion from Any to String in Predef, which is so freaking annoying).
> You can view it like that, my perspective is that in Scala the underlying mechanism (passing the dictionary around by means of implicit parameters) is more explicit.
What you lose is compiler-enforced coherence. When you use type classes, certain functions come with the expectation that you will consistently pass the same dictionary around between calls. With actual type classes, this is a given: there can be at most one instance of a type class for a type (or tuple of types). With Scala's implicits, you can trick the compiler into passing different dictionaries between two calls that expect the same dictionary.
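A small sketch of how that can go wrong in practice. `smallest` is a hypothetical function that expects "the" `Ordering[Int]`; a local implicit silently swaps the dictionary between two otherwise identical call sites:

```scala
// Sketch of the coherence problem: two calls to the same function,
// with the same argument, can receive different dictionaries.
object CoherenceDemo {
  def smallest[A](xs: List[A])(implicit ord: Ordering[A]): A = xs.min

  def main(args: Array[String]): Unit = {
    val xs = List(1, 3, 2)

    println(smallest(xs)) // 1, via the default Ordering[Int]

    locally {
      // A local implicit takes precedence over the default instance.
      implicit val reversed: Ordering[Int] = Ordering[Int].reverse
      println(smallest(xs)) // 3: same call, different "instance"
    }
  }
}
```

Nothing in the types distinguishes the two calls; the difference is purely which implicit happens to be in scope.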
I am aware that this is a tradeoff: Haskell's type classes are antimodular (because a global set of instances must be kept in order to guarantee instance uniqueness per type) and Scala's implicits can be used in potentially incoherent ways.
(The only way to escape this tradeoff would be to use dependent types to explicitly establish a connection between values and the dictionaries that must be passed around.)
Personally, I find antimodularity annoying, but incorrectness freaking scary. So I prefer Haskell's type classes.
> In Haskell, Strings really are lists of chars. In Scala you can't do it, unless they introduced their own String class, which would have been awful, or if they introduced some sort of implicit conversion mechanism, which they did.
I do not find linked lists of Chars to be all that useful. In fact, most of the time they just get in the way. Fortunately, all I have to do is turn on the -XOverloadedStrings GHC extension.
I've only scratched the surface of Haskell, so I don't really know what's annoying and what isn't :-)
I think Arrays in Scala are more important, because Scala introduced its own collections library and you end up manipulating Arrays a lot, since they are so pervasive and it's rather nice for Arrays to be viewed as Seqs.
On the other hand, I would have been happy with just extension methods, and given that simple extension methods made their way into Scala 2.10, I hope they'll break backwards compatibility at some point and pull implicit conversions out of the language. That isn't possible without a source-code migration tool, a la Go, but I saw that people on Scala's mailing list dream of one, so there is hope :-)
> I think Arrays in Scala are more important, because Scala introduced its own collections library and you end up manipulating Arrays a lot, since they are so pervasive and it's rather nice for Arrays to be viewed as Seqs.
Java's collection library is not well suited for a language that claims to support functional programming, so this was a no-brainer.
> On the other hand I would have been happy with just extension methods and given that simple extension methods made their way in Scala 2.10
This is precisely what annoys me so much about Scala: so many features that overlap with each other in terms of the functionality they provide!
> I hope they'll break backwards compatibility at some point and pull implicit conversions out of the language
What Scala (or, most likely, a successor to Scala) really needs to do is: 1. define clearly what means of abstraction it wants to provide, 2. make sure these means of abstraction do not overlap with each other. For example, subtype polymorphism (inheritance) is a strict subset [1] of ad-hoc polymorphism (implicits), so a decision has to be made whether subtype polymorphism is good enough, in which case implicits must go away, or full ad-hoc polymorphism is necessary, in which case inheritance must go away.
[1] When the Liskov substitution principle is respected, at least.
I understand that Scala was designed to be compatible with Java, but ugly stuff that is there only for compatibility reasons can always be put into the standard library rather than the core language. In the former case, only those who need it pay the price. In the latter case, everybody pays the price.
Scala's features don't really overlap. Defining extension methods (with the new implicit class construct) is just syntactic sugar over implicit conversions.
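For illustration, a sketch of that desugaring (`RichWord` and `shout` are made-up names): the `implicit class` form and the hand-written class-plus-conversion form below it are, roughly, the same thing to the compiler.

```scala
// Sketch: `implicit class` is sugar over an implicit conversion.
object WordSyntax {
  implicit class RichWord(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"
  }

  // Roughly the desugaring the compiler performs:
  //   class RichWord(val s: String) { def shout: String = ... }
  //   implicit def RichWord(s: String): RichWord = new RichWord(s)
}

object SugarDemo {
  import WordSyntax._

  def main(args: Array[String]): Unit = {
    // The conversion String => RichWord is inserted implicitly here.
    println("hello".shout)
  }
}
```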
Also, I find implicit parameters in general to be awesome. Because of implicit parameters, Scala achieves the best available marriage between OOP and FP. Scala doesn't just pretend to be FP; Scala is FP. Think of languages like OCaml or F#, which have two type systems in the same language.
Scala also doesn't side-step the covariance/contravariance problems of generics the way other OOP languages do; it provides the tools to deal with both. We could agree that covariance/contravariance is a problem created by OOP subtyping, but if you've got OOP in the language, you can't really pretend that all generic types are covariant or invariant, and it's just awful to have two type systems in the same language.
> but ugly stuff that is there only for compatibility reasons can always be put into the standard library rather than the core language
That's what they are trying to do with SIP-18. Scala 2.10 emits warnings if you use implicit conversions without importing language.implicitConversions and they plan to transform those warnings into errors in future versions. Scala 2.11 went through a compiler refactoring phase, badly needed as the compiler's internals are messy, the plan being to make it more modular. Things are progressing in the right direction ... if only they got rid of the implicit conversions defined in Predef (and thus imported implicitly everywhere).
> Scala's features don't really overlap. Defining extension methods (with the new implicit class construct) is just syntactic sugar over implicit conversions.
Point taken, for this particular case. But subtyping and implicits still cover a large part of each other's functionality, and the former is a subset of the latter when you take into account the Liskov substitution principle.
> Also, I find implicit parameters in general to be awesome.
It is certainly better than having no way to do general ad-hoc polymorphism, but...
> Because of implicit parameters, Scala achieves the best marriage available between OOP and FP. Scala doesn't just pretend to be FP. Scala is FP. Think of languages like Ocaml or F#, which have 2 type-systems in the same language.
... I beg to differ here. Scala is biased towards OOP: you can do Java-style programming with no inconvenience and awkwardness, but Haskell- or even ML-style programming requires imperfect encodings in Scala. How do I do the equivalent of an ML signature ascription? (That is, assigning a signature to a module that possibly hides some members and/or makes some types externally opaque. This requires structural subtyping for modules.) Why is type inference unavailable precisely where you need it most?
On the other hand, OCaml is biased towards FP. I still dislike it for other reasons, though: functions being eqtypes makes no frigging sense; the syntax is heavily biased towards imperative programming, so I end up having to parenthesize a lot; applicative functors are leaky abstractions in an impure language. And, unlike Haskell, which may be huge but is ultimately built on top of a small core, OCaml strikes me as intrinsically huge (just huge, though, not full of warts like Scala). I dislike F# even more, because it ditches the ML module system. When I want to use an ML, I use Standard ML.
> Scala also doesn't side-step covariance/contravariance problems when dealing with generics, like other OOP languages do, providing the tools to deal with both. And we could agree that covariance/contravariance is a problem created by OOP subtyping, but if you've got OOP in the language, then you can't really pretend that all generic types are covariant or invariant and it's just awful to have two type systems in the same language.
I do appreciate that it provides tools for managing the mess, but I still prefer languages that do not create a mess.
On second thought: does it actually manage the mess, or does it just make it worse? When a language's object model makes C++'s look simple by comparison, that is a sign that something wrong is going on.
> That's what they are trying to do with SIP-18. Scala 2.10 emits warnings if you use implicit conversions without importing language.implicitConversions and they plan to transform those warnings into errors in future versions.
The whole subtyping-via-inheritance business is far uglier than that, and it is not going away anytime soon.
> When a language's object model makes C++'s look simple by comparison, that is a sign that there is something wrong going on.
Yeah, and when you want to make it excessively clear to everyone involved that you have not the slightest clue what you are talking about, then bringing up C++ is absolutely the best way to go.
> Correction: Haskell has them, Rust has a limited form of them, and Scala encodes them using a horrible hack.
Fun fact: The way Haskell encodes them is identical to the way Scala does.
> What you lose is compiler-enforced coherence. When you use type classes, certain functions come with the expectation that you will consistently pass the same dictionary around between calls.
Could you please show an example where this goes wrong?
> With actual type classes, this is a given: there can be at most one instance of a type class for a type (or tuple of types). With Scala's implicits, you can trick the compiler into passing different dictionaries between two calls that expect the same dictionary.
Limiting type classes to one instance per type ... that's what I would call completely pointless. With that kind of restriction, why have typeclasses in the first place? There is probably not much (anything?) that such a crippled feature could enable compared to dynamic dispatch/OO/subtyping.
> I am aware that this is a tradeoff: Haskell's type classes are antimodular (because a global set of instances must be kept in order to guarantee instance uniqueness per type) and Scala's implicits can be used in potentially incoherent ways.
> Fun fact: The way Haskell encodes them is identical to the way Scala does.
No. The way Haskell does type classes ensures coherence: you can never pass the "wrong" dictionary. There is at most one dictionary per type, globally, so it is always the right one.
> Limiting type classes to one instance per type ... that's what I would call completely pointless.
Ensuring instance coherence is not pointless. It helps in the correctness department.
> With that kind of restriction, why have typeclasses in the first place? There is probably not much (anything?) which could be enabled by such a crippled feature compared to dynamic dispatch/OO/subtyping.
>> Fun fact: The way Haskell encodes them is identical to the way Scala does.
> No. The way Haskell does type classes ensures coherence [...]
No one is disputing that Haskell layers additional restrictions on top of typeclasses, but as mentioned the encoding is the same in Haskell and Scala (look it up if you don't believe me). That's why it's kind of funny that you think Scala's encoding is terrible.
> Wrong.
I guess that's why all those GHC extensions to make Haskell's typeclasses less half-assed exist in the first place? :-)
> This abomination would not have happened in Haskell.
What abomination? This code does exactly what you told it to do. It says more about your inability to write code that doesn't look like a terrible translation of Haskell.
If you wanted the ordering to be a property of your type, then make it a property. But don't whine, when you write code that lets you switch typeclass instances, that it does in fact let you switch typeclass instances. :-)
> No one is disputing that Haskell layers additional restrictions on top of typeclasses
The definition of "type class" requires that there can be at most one instance per type. I know perfectly well that, operationally, Haskell just passes a dictionary around just like in Scala. But, denotationally, instance uniqueness causes a difference in semantics, namely, guaranteed coherence. And, to be frank, one of the reasons why I have adopted functional programming is that, most of the time, I do not want to be slowed down by operational concerns. Haskell allows me to write code I can reason about in an equational fashion. Scala does not.
> I guess that's why all those GHC extensions to make Haskell's typeclasses less half-assed exist in the first place? :-)
I was talking about Haskell, not GHC. Personally, when it comes to GHC's type system extensions, I am rather conservative: MultiParamTypeClasses, FunctionalDependencies, GADTs, TypeFamilies (only for associated families, never for "free-floating" families), RankNTypes. (That is, I may use other extensions, like OverloadedStrings, but they are not extensions of the type system itself.)
As I said above in this thread (though not in reply to you), I find antimodularity (due to instance uniqueness) annoying, but incorrectness freaking scary. Of course I would rather have both modularity and guaranteed coherence, but if one must absolutely go away, then it is modularity.
> What abomination? (...) If you wanted the ordering to be a property of your type, then make it a property.
I want the language to help me write correct code, and implicits do not help.
> Haskell allows me to write code I can reason about in an equational fashion. Scala does not.
That's just nonsense. In Scala, typeclass instances are just part of the equation. It's just not a big deal at all.
> Personally, when it comes to GHC's type system extensions, I am rather conservative: MultiParamTypeClasses, FunctionalDependencies, GADTs, TypeFamilies (only for associated families, never for "free-floating" families), RankNTypes.
LOL. I guess that's a joke, right?
> I find antimodularity (due to instance uniqueness) annoying, but incorrectness freaking scary.
There is no incorrectness. The code you have written does exactly what you have specified. Don't blame the tools for writing bad code. :-)
Haskell providing you the tools to easily corrupt your runtime, now that's what I call "freaking scary incorrectness".
> I want the language to help me write correct code, and implicits do not help.
Then why do you fight the language so much in your code example?
I can write terrible code in Haskell, too. Does that prove anything? No.
> That's just non-sense. In Scala typeclass instances are just part of the equation. That's just not a big deal at all.
The important part of the equation that Scala misses is that, when a value V has a type T that is an instance of a type class C, then V has to respect the laws associated with C. Granted, this is not enforced statically (because that would require dependent types), but instance uniqueness allows you to confine all the manual testing/enforcement to the point where the instance is defined.
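A sketch of what "confining the checking to the definition site" could look like. `Monoid`, `lawful`, and the instance are hypothetical, and the laws are checked at runtime over sample values (a test, not a proof), but the check lives next to the one place the instance is defined:

```scala
// Hypothetical type class with laws checked where the instance lives.
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

object Monoid {
  implicit val intAdd: Monoid[Int] = new Monoid[Int] {
    val empty: Int = 0
    def combine(x: Int, y: Int): Int = x + y
  }

  // Crude runtime law check: identity and associativity over samples.
  def lawful[A](m: Monoid[A], samples: List[A]): Boolean = {
    val identityOk = samples.forall { x =>
      m.combine(m.empty, x) == x && m.combine(x, m.empty) == x
    }
    val assocOk = (for {
      x <- samples; y <- samples; z <- samples
    } yield m.combine(m.combine(x, y), z) == m.combine(x, m.combine(y, z)))
      .forall(identity)
    identityOk && assocOk
  }
}

object LawDemo {
  def main(args: Array[String]): Unit = {
    println(Monoid.lawful(Monoid.intAdd, List(-2, 0, 5))) // true
  }
}
```

With a unique instance per type, passing this check once is enough; with swappable dictionaries, every call site could in principle receive an unchecked one.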
> LOL. I guess that's a joke, right?
No OverlappingInstances, no UndecidableInstances, no ImpredicativeTypes, no DataKinds, no PolyKinds... sounds fairly conservative to me.
> There is no incorrectness. The code you have written does exactly what you have specified.
Dereferencing a dangling pointer is undefined behavior, and in particular, it may cause a segfault. Just as specified.
> Then why do you fight the language so much in your code example? I can write terrible code in Haskell, too. Does that prove anything? No.
This kind of argument I expect from dynamic language folks, not from Scala proponents.
> when a value V has a type T that is an instance of a type class C, then V has to respect the laws associated to C
And that's exactly what happens in Scala if you cared to look at actual code.
The only difference is that one can associate different values with different typeclass instances.
If you decide not to associate values with specific typeclass instances and instead write code which accepts arbitrary instances, despite the fact that the values invisibly depend on a specific one ... sorry, that's just plain dumb.
That's like writing an API which accepts numbers and an arbitrary operation on those numbers, but only returns reasonable results for addition; and then complain that your code is fine as long as the "arbitrary operation" is addition.
No compiler on this planet can fix a lack of brain cells.
> Dereferencing a dangling pointer is undefined behavior, and in particular, it may cause a segfault. Just as specified.
If you can't see the difference yourself, I can't help you.
> This kind of argument I expect from dynamic language folks, not from Scala proponents.
Yeah, sorry. Haskell users know how to deal with different programming languages, but the latest influx of HN's "I-read-something-about-Haskell-on-the-internet-a-few-minutes-ago-let's-tell-everyone-how-dumb-they-are-to-show-my-new-intellectual-superiority-as-a-Haskell-expert" kids has muddied the waters and caught me a bit off-guard, because I usually deal with thinking people only.
> And that's exactly what happens in Scala if you cared to look at actual code.
My example notwithstanding? Heh.
> If you decide not to associate values with specific typeclass instances and instead write code which accepts arbitrary instances, despite the fact that the values invisibly depend on a specific one ... sorry, that's just plain dumb.
I am not disagreeing that it is dumb. I am complaining that the language allows me to do something dumb in the first place. This attitude towards correctness (the language washing its hands of having to deal with it) is what I dislike.
> If you can't see the difference yourself, I can't help you.
There is no difference. In both cases, an undesirable behavior is possible because of a hole in the language's design.
> Yeah, sorry. Haskell users know how to deal with different programming languages
Yes, exactly. The way I deal with Scala is to not use it.
>> And that's exactly what happens in Scala if you cared to look at actual code.
> My example notwithstanding? Heh.
Yes. Just go and read some code instead of using your imagination. Compare for instance TreeSet.apply and List#sorted.
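For the record, a sketch of that comparison, using the real `TreeSet.apply` and `List#sorted` (the reversed ordering is just for illustration): `TreeSet` captures its `Ordering` at construction, tying the value to one instance, while `List#sorted` re-resolves the dictionary on every call.

```scala
import scala.collection.immutable.TreeSet

object OrderingDemo {
  def main(args: Array[String]): Unit = {
    implicit val reversed: Ordering[Int] = Ordering[Int].reverse

    // TreeSet.apply consumes the implicit Ordering once and stores it:
    // the set permanently remembers its instance.
    val s = TreeSet(1, 2, 3)
    println(s.toList) // List(3, 2, 1)

    // List#sorted takes whatever Ordering is supplied at each call.
    println(List(1, 3, 2).sorted)               // List(3, 2, 1) via reversed
    println(List(1, 3, 2).sorted(Ordering.Int)) // List(1, 2, 3), explicit
  }
}
```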
> I am complaining that the language allows me to do something that is dumb in first place. This is attitude towards correctness (the language washing its hands of having to deal with it) is what I dislike.
Then you should dislike Haskell too, because that's exactly what Haskell allows you to do as well.
Prelude> let getLargestElement (x:xs) = x
Prelude> getLargestElement [3,2,1]
3
Prelude> getLargestElement [1,2,3]
1 -- OMG!!!
Haskell is sooo incoherent! I wrote a function that obviously relies on the input being sorted in a certain way, but Haskell lets me pass arbitrary lists to it! How dumb is that?! Haskell has a really terrible attitude towards correctness!
>> Yeah, sorry. Haskell users know how to deal with different programming languages
> Yes, exactly.
Sorry, but you've been sorted into the I-read-something-about-Haskell-on-the-internet-a-few-minutes-ago-let's-tell-everyone-how-dumb-they-are-to-show-my-new-intellectual-superiority-as-a-Haskell-expert category already, so this doesn't really apply to you.
> The way I deal with Scala is to not use it.
That's perfectly fine, but please stop commenting on things you don't understand. You are only embarrassing yourself.