Hacker News
True Scala complexity (yz.mit.edu)
182 points by plinkplonk on Jan 9, 2012 | 148 comments

Fantastic post. The most salient excerpt for me:

    def filterMap[B,D](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D]): D
    def filterMap[B,D <: GenTraversableOnce[B]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D]): D
    def filterMap[B,D <% GenTraversableOnce[B]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D]): D
    def filterMap[B,D[B]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D[B]]): D[B]
    def filterMap[B,D[B] <: GenTraversableOnce[B]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D[B]]): D[B]
    def filterMap[B,D[B] <% GenTraversableOnce[B]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D[B]]): D[B]
    def filterMap[B,D[_]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D[B]]): D[B]
    def filterMap[B,D[_]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D[B]], ev: D[B] <:< GenTraversableOnce[B]): D[B]
    def filterMap[B,D[_]](f: A => Option[B])(implicit b: CanBuildFrom[?,B,D[B]], ev: D[B] => GenTraversableOnce[B]): D[B]


    > The answer to our original question? It turns out none 
    > of these are correct. In fact, *it is impossible to insert 
    > a new method that behaves like a normal collection method.* 
    > This, despite the heavy advertising of enrich my library.
Stuff like this makes me think about how, despite all of the problems with using it in libraries, it's lovely that many dynamic languages can be extended in your application with little fuss.

    # Ruby.

    class Array
      def filter_map
        # ...
      end
    end

    // JavaScript.

    Array.prototype.filterMap = function() {
      // ...
    };

To me, a big advantage of Scala's "enrichment" over monkey-patching in Ruby or JS is that it isn't global. That is, you have to import the enrichment. Another code module in the program won't be unexpectedly affected by it.

In practice, I almost never use monkey-patching in dynamic languages because it's too dangerous. While in Scala there are cases where enrichment won't work, you can always just write a regular function in those cases, and there are lots of cases where enrichment _does_ work...
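A minimal sketch of that import scoping in Scala (the `Enrichments` object and the simple `List`-only `filterMap` are illustrative, not from the article):

```scala
object Enrichments {
  // The enrichment is invisible until a caller imports it, so other
  // modules in the program are never affected.
  implicit class RichList[A](xs: List[A]) {
    def filterMap[B](f: A => Option[B]): List[B] = xs.flatMap(f(_))
  }
}

import Enrichments._ // opt in explicitly

val doubledEvens =
  List(1, 2, 3, 4).filterMap(x => if (x % 2 == 0) Some(x * 2) else None)
// doubledEvens == List(4, 8)
```

Without the import, `filterMap` simply doesn't resolve; that locality is the contrast with patching `Array.prototype`.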

Adding and/or altering functionality at runtime isn't dangerous. Monkeypatching may be, so avoid that.

Also, an entirely too-little-used idiom (blame Rails programmers):

    module OverrideSomeMethod
      def some_method
        # ...
      end
    end

    s = SomeClass.new
    s.extend OverrideSomeMethod

This is an idiom I use regularly (in Perl, Ruby, Io & JavaScript), and I come across it often in the Perl world where Moose roles are used.

The only downside of this idiom is the extra runtime cost, which may be an issue for Rails?

Some languages make monkeypatching a lot less dangerous.

For example, in Perl you can use dynamic scoping to localise its effect:

    no warnings 'redefine';
    local *SomeModule::some_func = sub { say "MONKEYPATCHED!" };
    # now everything in this scope that uses or calls SomeModule->some_func
    # will use the monkeypatched version, whereas everything outside
    # this scope remains unaffected
In Ruby, Refinements, earmarked for Ruby 2.0, will provide something similar: http://www.rubyinside.com/ruby-refinements-an-overview-of-a-...


    public static class EnumerableExtensions
    {
        public static IEnumerable<R> FilterMap<T, R>(this IEnumerable<T> list, Func<T, Option<R>> callback)
        {
            // ...
        }
    }
There are some methods that need to be part of the class and carried around with the instance so that things like polymorphism work. But many operations work perfectly fine without. By making those lexically scoped, you avoid the problems of monkey-patching and method collisions. Extension methods are fantastic for this.

Also, for kicks, Magpie:

    def (items) filterMap(callback)
        var result = []
        for item in items do
            match callback(item)
            case true, mapped then result add(mapped)
                else nothing

    var result = [1, 2, 3, 4, 5, 6] filterMap with
        if it % 2 == 0 then (true, it * 2)

    print(result) // 4, 8, 12
Magpie is dynamically-typed, but methods are lexically-scoped (and are multimethods).

You have the same problem in C#. You can't take a Foo class and extend it from the outside to have all the IEnumerable methods and then let it have all the IEnumerable extension methods.

I'm sorry, but won't these two examples add the method just to arrays? The author is intentionally trying to add a single method that works for all collections, both Scala's and non-Scala's, and always returns a collection of the same type as the one the method was called on. Can you show how to easily do that in a dynamic language?

In Gosu, it's pretty much the same (though enhancements are statically dispatched and thus subject to a different set of limitations):

    enhancement MyEnhancement<T> : T[] {
      function filterMap() { . . . }
    }

Roughly the same in C# with extension methods, though with C# you explicitly import the extension method while in Gosu they're automatically always there (sort of good, sort of bad). Again, not exactly the same as the dynamic language examples, but enhancements/extension methods do allow you to extend existing classes in a reasonable fashion while still being amenable to all the other advantages static typing gives you.

Both you and jashkenas are missing the point. The examples you are giving are also pretty easy to do in Scala.

The more complex method he is proposing (filterMap) cannot be done at all in the languages you mention, though he uses it precisely because it is the most complex kind of method that Scala collections offer. It is possible in Scala, though, and I just blogged about it here: http://dcsobral.blogspot.com/2012/01/adding-methods-to-scala....

But, no, that is not what he wants. He wants to add this method not to the collections, but to something that isn't a collection. Well, Scala can do that too -- it added all the collections methods to String and Array, didn't it?

And here comes the twist: he wants to add filterMap not by adding it directly to them, like Scala does. He wants, instead, to go _through_ that code to get at them.

With extension methods, the logic would look like this:

* X adds extension methods to Y
* Z adds extension methods to X
* Therefore, Z's extension methods should be available on Y

And, in fact, it is even possible to do that in Scala for many methods, but not for the particular combination he chose, and while still inferring all types.
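The chaining failure is easy to demonstrate in Scala. This sketch uses made-up classes (`Y`, `X`, `Z`) and relies on the rule that the compiler applies at most one implicit conversion per expression:

```scala
class Y
class X(val y: Y)                                // X "enriches" Y
class Z(val x: X) { def extra: String = "from Z" } // Z "enriches" X

object Convs {
  import scala.language.implicitConversions
  implicit def yToX(y: Y): X = new X(y)
  implicit def xToZ(x: X): Z = new Z(x)
}

import Convs._

// One implicit hop (X => Z): compiles fine.
val z: String = (new X(new Y)).extra
// Two hops (Y => X => Z) would be needed here, so this does NOT compile:
// (new Y).extra
```

Because the compiler never composes two implicit conversions, Z's methods never become available on Y automatically.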

Instead of getting bogged down in his specific example (the tree), I think it's more helpful to focus on his larger points in "On Acknowledging Problems" (the forest).

I disagree. The problem here is that those claiming “it is easy in language A” haven't even understood the problem.

People should first actually understand the problem; only then does a discussion about solutions make sense.

That's not the problem, it's a problem. That is, it's a problem in this discussion, but I feel it has been dealt with well. The author's main point, though, was not about the specific example. That was to illustrate his larger point, which was about the complexity that arises when rich features interact.

The issue the author mentioned has already been solved in a much easier and more efficient way, without trying to use every feature of the type system.

Now the question remains: Should a language make almost-impossible and dangerous tasks easy or hard? I certainly prefer a language like Scala, which makes easy things easy, hard things possible and dangerous things hard, instead of the other way around.

That is a nice feature of dynamic languages, however you lose strong typing. I think the thing to take away is that Scala hasn't gotten the perfect blend of these two yet. You can't make that completely generic map-filtering extension yet. However, you can make a less portable alternative. So you make your decision based on what is more important.

> however you lose strong typing

You lose static typing; strong typing is different: http://en.wikipedia.org/wiki/Strong_typing

Just to be pedantic, you lose static typing. Ruby (and Python, etc.) are strongly, dynamically typed languages.

In F# this is approximated with:

  open System.Collections.Generic

  type 'a IEnumerable with
    member this.filterMap f =
       [for x in this do if f x <> None then yield (f x).Value]

  let j =  Map([("a",1) ; ("b",2) ]).filterMap(fun kv -> if kv.Value = 1 then Some(kv.Key,kv.Value) else None) |> Map.ofList

  let n = [1.;2.;4.].filterMap(fun x -> if x % 2. = 0. then Some(x**2.) else None) 
The extension works with any sequence (arrays, sequences, maps, etc.) with the caveat that it maps every enumerable to a list. You could also define an extension for a specific type where a broad definition does not make sense.

Personally, I lean more functional, and I have never run across a need for extending things in this manner.

Arrays are not sequences in Scala. They are Java arrays, not Scala sequences.

Yes, I picked this up, and was not making any statement against that. In fact I do not subscribe to the validity of his approach. But I gave a shot at seeing if I could give code that was succinct and matched.

The requirement was to add a method that works for all collections, whether platform or language specific, while preserving type. The code I gave is an approximation of a solution - to use a rough analogy: topologically speaking the code matches but loses the geometry. The code I gave works on basically all .NET collections, whether C# or F#, string or tree - as long as they implement the interface - they are matched. That it leverages the existing organization should not count against it. The failing is that although types are preserved it is under a new geometry or structure.

Common Lisp:

(defmethod filter-map (f (a array)) ...)

That was not the question. It is trivial to do that in Scala for the use case mentioned.

The problem is adding it as an "instance" method to a collection and expecting that it is usable by something not being a collection at all.

Well that is the beauty of it. In Scala methods/functions that come after the dot are privileged. In a language with multiple dispatch they are not.

So CL solves the problem without adding more complexity, whereas in Scala you have to contort yourself to shoehorn some functionality after the dot.

CL isn't even statically typed. Everything is easy if you don't expect that the language gives you any useful guarantees.

I have no idea what you mean by "adding it (which "it"?) as an "instance" method to a collection". You can do this:

(defmethod filter-map (f (c (eql some-particular-collection))) ...)

but I suspect that's not what you meant.

I also can't make heads or tails out of "expecting that it (which "it"?) is usable by something not being a collection at all." I'm not even sure that's proper English, let alone semantically meaningful.

Maybe you should try to understand the actual issue first, _before_ claiming that "but it is easy in my pet language" stuff ...

Additionally, last time I looked CL wasn't really statically typed. Has that changed recently?

Maybe if you stated what you think the actual issue is in clear, unambiguous terms, instead of being pissy and snarky about it, this discussion would not degenerate into chaos.

No, CL is not statically typed. And your point would be...?

The issue is with static typing. One can define a filterMap function in Scala much as you defined it in CL. The author's goal is to do that while also always statically knowing the most precise type of the returned collection. So, it's an issue that only comes up in a statically typed language.

Of course, I don't program in Scala, so my explanation may not be accurate.

> The issue is with static typing. One can define a filterMap function in Scala much as you defined it in CL. The author's goal is to do that while also always statically knowing the most precise type of the returned collection.

That's right.

> So, it's an issue that only comes up in a statically typed language.

No, that's wrong. You can do type inference in non-statically typed languages. Lisp compilers do this all the time.

If a compiler infers types at compile-time for a dynamically typed language, I still consider that "static typing" because it's statically inferring the types. If the term "static typing" is the problem, then I can rephrase: it only comes up when you try to determine all types before executing the program.

That's right.

At this point I would like to remind both you and soc88 of a parable:

Patient: Doctor, it hurts when I do this.

Doctor: Well, don't do that.

(Soc88's response, in the context of this parable, is something along the lines of, "But anyone who doesn't do this is a moron.")

Inferring types at compile time is necessarily hard. It is a corollary of the halting problem that no static type inference can be perfect. Therefore you have the following choices:

1. A simple compiler that sometimes fails to identify type errors at compile time

2. A simple compiler that sometimes produces false positives (i.e. signals a type error in a program that is in fact correct)

3. A complicated compiler. (Note that even a complicated compiler will also do 1 or 2 or both, but potentially less often than a simple compiler.)

Those are your only options. Reasonable people can disagree over which is preferable.

I don't disagree with your points, but now that we've established that, it should be clear why your original example does not solve the problem as presented in the post. The problem inherently has to do with statically inferring types.

That depends on what you consider to be "solving the problem." Do you want to "do this" or do you want to be free from pain? You can have one or the other, but not both.

OF COURSE static type inference is hard. That's a straightforward consequence of the halting problem. Pointing to defmethod is just an oblique way of making the point that perfect type inference is NOT NECESSARY for getting things done. You can choose to lament the complexity of Scala (and static type inferencing in general) or you can use Lisp or Python and trade certain compile-time guarantees for simplicity. Like I said, reasonable people can disagree over which is preferable.

What reasonable people cannot do is insist that there is a single perfect solution that is both simple and error-free. Anyone who believes that has not understood the implications of the halting problem.

Another thing reasonable people cannot do is frame the tradeoff as a binary choice: either you use static type inferencing, or you give up all compile-time guarantees. That is simply not true, as is amply demonstrated by e.g. the SBCL compiler. It's a complex, multi-dimensional space of tradeoffs in language design, compiler complexity, and different kinds of compile-time guarantees. It's INHERENTLY complicated. The best you can hope to do is find a reasonable point in the design space for your particular quality metric. For the OP, Scala isn't it.

Since the OP uses Scala to solve his problems, I have a feeling that Scala is, to him, a reasonable place in the design space. Scala allows a function much like your example. He used that example not to say "This is a failing of Scala, and why I will not use it," but to say "This example demonstrates a complexity that is a natural consequence of the design of Scala." In other words, he said something quite similar to what you said.

> Scala is, to him, a reasonable place in the design space

Reasonable perhaps, but manifestly not ideal or he would not be complaining about how complex it is.

> he said something quite similar to what you said

Well, I didn't actually say much, I just posted a snippet of code and left people to draw their own conclusions. Why soc88 chose to start a fight I can only guess, but it seems to be not uncommon behavior among people trying to defend untenable positions.

What you said two posts up about inherent complexities, not at the beginning.

4. A simple compiler that sometimes requires a type annotation.

> Those are your only options. Reasonable people can disagree over which is preferable.

Ah ok. Being right seems to be more important to you than having an honest discussion.

Have fun, I'm out.

> A simple compiler that sometimes requires a type annotation.

It is easy to show that that will not solve the problem. If your language is Turing-complete, then you can embed (say) a Lisp interpreter and arbitrary Lisp code within it. The only way your compiler can be complete and correct for your language is for it to be complete and correct for this embedded Lisp. This is a fundamental result. There is no way around it.

Well ... maybe just click the link and read the article? It is pretty clear.

Ahh, the famous behavior of Lisp fanatics. Tragic, how it is obvious to everyone – except themselves – why no one wants to use their language.

> No, CL is not statically typed. And your point would be...?

Uh ... what about

a) Author complains about the inability of the compiler to prove some property of his code.

b) Untyped languages – by definition – don't provide any substantial proving abilities based on types.

c) Therefore, you are completely missing the point.

Hey, you're both [edit: mistake, see below] new here and obviously knowledgeable about the topic at hand. At HN, we try to maintain civility - it's an explicit goal of the community. What this implies is that if you're in a discussion with someone, and you realize they don't understand an important point of the discussion, instead of using sarcasm, it's much better to say "Oh, I see, you're missing point x."

> you're both new here

My account was created 1458 days ago. Just how long does someone have to be here before you no longer consider them "new"?

Sorry, I said "both" by mistake. You are an active and well-known contributor to HN, and I know you're reasonable, which is part of why I felt soc88 was being unreasonable.

> Author complains about the inability of the compiler to prove some property of his code.

No, the author is complaining about the complexity of the language. Maybe you should go back and re-read the article. Start with the title.

> Untyped languages

Lisp is not untyped. "Not statically typed" is not synonymous with "untyped."

> Therefore, you are completely missing the point.

Which of us is missing the point remains to be seen.

I think it's a good post as well, but filterMap is a contrived example. It comes out of the box: flatMap can be used as a filterMap, because Option[T] is an acceptable substitute (through implicit conversions) for Iterable[T]. (An Option is a collection type; just one with 0 or 1 elements.)
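Concretely, assuming the standard implicit Option-to-Iterable conversion is in scope (it is by default), flatMap already does the filter-and-map in one pass:

```scala
// Returning None drops the element; Some(b) keeps the mapped value.
val evensDoubled =
  List(1, 2, 3, 4, 5, 6).flatMap(x => if (x % 2 == 0) Some(x * 2) else None)
// evensDoubled == List(4, 8, 12)
```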

The existing collections functions in Scala take you very, very far. If you need to write your own primitives for performance reasons, it can get tricky, but that's really uncommon. Odersky's book (chapter 25) explains how to do that.

Collections libraries are hard in Scala because of what you get. Once you define a few simple functions and possibly implicit conversions, you get all 50+ sequence methods "for free", and if you do it right, your map type functions will return collections of the same type (runtime and static) as the original-- without explicit typing. You get a lot of leverage, but you have to work a little bit for it.

And because Ruby devs think it is so great, they try to make it more Scala-like, right?

Or what am I missing?

After reading a bunch of these articles and walking away unimpressed, I think this one is fantastic. It does a great job of showing how cryptic you can make Scala code. It also shows that the problem half the article is spent explaining can be boiled down to four lines of less "Scala-like" code. As a pragmatic user of Scala I don't see why the fact that you _can_ do some amazing but cryptic things with the language is a detriment. It's like pointers in C, or templates and multiple inheritance in C++. Some people say that developers shouldn't be given such capabilities since they can lead to cryptic and possibly dangerous code. I am of the opinion that the additional flexibility these features provide is nothing but a benefit, and that it's up to the developer/organization to set sane limits on what can be used.

"As a pragmatic user of Scala I don't see why the fact that you _can_ do some amazing but cryptic things with the language is a detriment."

Because you hit the cryptic too soon, and you can't really avoid it. And I generalize this to any such language that has this characteristic, not just Scala.

You can't use C++ for very long without having to know huge swathes of it if you want to use the libraries created by others, and you want to be able to debug why they don't work perfectly. (Bearing in mind "working perfectly" also includes you having a correct mental model of how they work, which is tricky if you only have a subset of the language in your head!) Same for Haskell.

Compare with Python, where you basically can learn a reduced subset of the language, yet still use libraries fairly effectively. You need to know the basics of having objects and calling methods. You don't need to know how to write your own iterator, any of the double-underscore methods or the resulting protocols, you probably don't need to know decorators (and even if you do, probably just how to use them, because they are unlikely to bite you in odd ways if you just copy-paste instances of them), you don't need to know metaclasses, etc.

Type systems of almost any kind are very hard to satisfy without understanding. In some sense, this is one of the very constraints the stronger ones are trying to enforce.

This is not my experience.

I don't give a dime about higher-kinded types, type bounds, type views, type constraints, CanBuildFrom ... and I'm happily using the language.

You must be one of those lucky people that only write new code all the time and never have to maintain code written by someone else. For the rest of us poor suckers who have to do a mix of writing new code and maintaining existing code bases written by other sadistic (!) developers, it's a very valid concern.

Some languages show that you can do these things in a simple way. For example, I implemented a Persistent Vector for Dylan (a very Common Lisp-like language) in about three sessions. It was the first time I wrote Dylan code; before that I had only read about Dylan. It's working and feels like a "native" collection already.

I guess it would take me about the same amount of time to fully understand this article. Other people in this thread have shown how easy it is in other languages too.

I still have not found a single comment showing "how easy it is in other languages". Do you have a link?

I'm not Parent, but I think that this is his implementation: https://github.com/nickik/Persistent-Vector-in-Dylan/blob/ma... The last part (line 222+) is what defines it as a sequence in Dylan.

Dylan (and other multiple-dispatch languages) don't have the problem of "adding methods to objects/classes", because methods are standalone entities (first-class, whereas in Scala they are not) that exist independently of the data they operate on.

so you can just "add" a method to something by defining it:

    define method upcase(s :: String) => (ret :: String)
      // code here
    end method upcase;

    "abc".upcase // => "ABC"
    // is just sugar for
    upcase("abc") // => "ABC"
so you could define 'forward-iteration-protocol (a method that returns 8 values) on anything in Dylan (built-in or not) to make it a "Sequence".

That has nothing to do with the issues mentioned in the original article.

In fact, the thing you described is trivial in Scala.

    implicit def Upcase(s: String) = new { def upcase = s.toUpperCase }


I thought we were talking about "adding methods" to collection-like things so that they look as if they were built-in. And this was what the article was about: That it gets complex (and impossible) if you want to solve it for the general case.

It wasn't about type safety, just about complexity.

The problems in Scala arise because you have to contort yourself if you want to "add a method" in the privileged position after the dot. You have to resort to implicit conversions, which are a non-composable feature. To fake composability, the author has to wade through huge piles of complexity.

These problems don't arise in Dylan at all, because there is no privileged argument position (the receiver) and you can just define methods for anything without conversions to wrappers or monkey-patching. Namespacing is done via the module system and lexical scope.

A collection-like thing in Dylan is any object where someone has implemented the required methods (first and foremost: forward-iteration-protocol). All those methods can be implemented without having access to the definition of the object's class or type, so things like native arrays (with only .length, indexing and value setting as operations, akin to Java arrays) can be made collection-like.

Then you can just define your own methods for collections, like filter-map, which will then work for those native arrays.

If you only use the minimal collection protocol:

    define method filter-map (coll, pred :: <function>, transform :: <function>)
      let ret-type = type-for-copy(coll); // analogous to CanBuildFrom
      let new-coll = make(ret-type);      // analogous to Builder
      let (init, limit, next, end?, key, elt) =
        forward-iteration-protocol(coll);
      for (state = init then next(coll, state),
           until: end?(coll, state, limit))
        let e = elt(coll, state);
        if (pred(e))
          add!(new-coll, transform(e));
        end if;
      end for;
      new-coll
    end method filter-map;
If map and choose (filter) are already defined (and yes, they are, in terms of the collection protocol):

    define method filter-map(coll, pred :: <function>, transform :: <function>)
      map(transform, choose(pred, coll));
    end method filter-map;
Yes, I don't really like the API-design of forward-iteration-protocol. It works like iterators in Java, but is designed to not need allocation for simple indexable collections like lists and vectors etc.

Fully agree. I have not tried this, but I think the type system features Dylan has (limited types and unions) should work too.

The most important methods to implement are element (getting the nth object), forward-iteration-protocol and add. For an immutable collection that's all you really need.

This is, really, still missing the point: "adding methods" is trivial.

The real complexity is when you add method M to collection C and expect that class A, which does not have any relationship with C, also gets the method.

It has been shown that it is possible in Scala, without all the unnecessary complexity shown in the authors post.

Still, I fail to see any statically typed language even coming close to what is requested from Scala.

A lot of the items the post author describes as complex you can't do at all in other languages.

"Simple things should be simple, complex things should be possible." - Alan Kay

I don't think the blog author gives particularly great examples of simple things that are made complex by the language. He simply gives examples of things that are inherently complex that Scala at least makes possible.

I have to respectfully disagree. First of all, the things he shows aren't inherently complex: adding an additional function to the existing collections library is something that's possible in other languages in less confusing ways (see other examples in this post). The various concepts required to solve the problem in Scala like implicits and higher kinds might be inherently complex, but the problem he's trying to solve isn't inherently complex. And while the Scala solution might not be 100% identical to, say, just adding the method directly to a class in Ruby (since in Scala it's type-safe, etc. etc.) that's still missing the point. The point is that if you sit down and say "I want to add a method to all my collections that works like the existing methods," in Scala it requires an understanding of a huge number of complicated intersecting concepts, whereas in other languages it doesn't.

Secondly, excusing complexity by saying "it makes things possible that wouldn't otherwise be possible" isn't really enough of a justification. The question is: do those exact things need to be possible? Or is there some way that gets me 90% of the way there without the complexity, and that's good enough? More power isn't always better, which was a big part of the point of the article. Just dismissing it by saying "well, the complexity lets you do powerful things" doesn't really refute the point of the article, it totally misses it.

There's a tradeoff to be made. You might make those tradeoffs differently than I would, which is totally understandable, but we should at least be able to have a conversation about the fact that there is such a tradeoff without people dismissing statements like "Scala is complex" out-of-hand.

"adding an additional function to the existing collections library is something that's possible in other languages"

Is it possible in other statically compiled strongly typed languages? I don't think so?

The advantages/disadvantages of static vs. dynamic languages seem out of scope for the "is scala too complex?" question.

See my examples below for how to do it in Gosu (via enhancements). In C# it's pretty much the same (via extension methods). Both are statically typed languages. Again, not 100% the same as Scala, and each comes with a different set of tradeoffs than the Scala approach (i.e. they're statically dispatched in Gosu), but they're certainly both reasonable, and much "simpler", solutions to the problem of "how do I add a filterMap function to all arrays or collections that works pretty much how I want it to."

OK, I read up some on this, and neither Gosu enhancements nor C# extension methods come close to what the post author is attempting (adding methods to Seq and having them automatically attach to Array, String, and CharSequence) with a single method.

So it doesn't really seem like they solve the problem "how do I add a filterMap function to all arrays or collections that works pretty much how I want it to" any simpler than Scala. All the complexity that the author brings up in his post is because he was trying to add this functionality with a single method (and a bunch of implicits).

Up top (http://news.ycombinator.com/item?id=3444688) I gave working code that comes close. To be fair, the reason it gets closer (works with both F# and C# lists, arrays, sequences, dictionaries, maps, strings, etc.) is clerical. From this thread, I understand Scala and Java arrays are divorced.

But the problem is: put in a string and it gives back a list of characters. So I use the term "topologically correct", heh. To be honest, I do not think there is anything functional about what he is attempting, but perhaps the idioms in Scala are different? My knowledge of Scala is mostly horizontal (from OCaml, F#, Haskell).

Returning the most precise result types (both collection _and_ element type) is a requirement. If this wouldn't be required, the problem would be much easier.

Very interesting. I'm gonna go read more about this stuff now. Thanks!

You pretty much point to solutions which don't solve the problem mentioned in the blog post at all, and seeing how C# extension methods basically reintroduced instanceOf checks as an implementation necessity, I'm pretty sure that extension methods are the wrong way to go.

I don't remember C# extension methods being particularly prone to incur instanceOf checks, but I could be missing something; can you expand on this?

Have a look at how stuff like Count is implemented. It basically does instanceOf checks for ICollection to figure out if the underlying type supports a better way to compute the result (instead of iterating through the whole collection).

This is completely non-extensible, and people have to pay the price for this syntactic sugar (e.g. extension methods not being discoverable with Reflection).
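The dispatch pattern being criticized can be sketched in Scala. `count` here is a hypothetical helper (not any real library's implementation), doing a runtime type test to find a fast path, the way C#'s `Enumerable.Count` checks for `ICollection`:

```scala
// A generic "extension" that runtime-type-tests for a fast path. Nothing outside
// the match can teach `count` about a new collection with its own O(1) size,
// which is the non-extensibility complaint above.
object CountDemo {
  def count[A](xs: Iterable[A]): Int = xs match {
    case s: IndexedSeq[_] => s.length                            // O(1) fast path via an instanceOf-style check
    case other            => other.foldLeft(0)((n, _) => n + 1)  // O(n) fallback: iterate the whole collection
  }

  def main(args: Array[String]): Unit = {
    println(count(Vector(1, 2, 3))) // fast path
    println(count(List(1, 2, 3)))   // slow path
  }
}
```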

Other languages have a much cleaner approach: traits (in languages like Scala) and default methods (in Java) both solve the problem in a more straightforward and correct way.

> Is it possible in other statically compiled strongly typed languages? I don't think so?

Sure it is :) see C# http://msdn.microsoft.com/en-us/library/bb383977.aspx

I don't see how it solves the problem mentioned.

First of all, the things he shows aren't inherently complex: adding an additional function to the existing collections library is something that's possible in other languages in less confusing ways (see other examples in this post).

It's very easy to write a new function for your particular case. You can write an acceptable map function for your new collection, or a new list-munching function, just as easily as in ML. It will be a top-level function instead of a method, but that's usually just fine (if inelegant).

The long-term risk you are taking is that you might want this list-munching function to apply to other collection types and will now have code duplication.

What Scala offers is the ability to write new methods that apply to all collections (Strings, BitSets, Lists) if you wish... and to create new collection types (with a little bit of work and comprehension, but not as much work as is involved in writing 50+ library functions by hand) that automagically end up with all the collections functions.

This impressive and quite radical code reuse is the part that's hard about Scala, but that's only relevant if you want to write libraries at the L2/L3 level of quality and beauty.

But again, other languages do offer the ability to write methods that apply to all collections, without the "complexity." Sure, it's different, but in Gosu for example you can write an enhancement method on the Collection interface, or on the List interface, and now everything that implements Collection or List has that method. It's then a matter of whether the objects in question implement the appropriate interface. Your new functions apply to everything that implements those interfaces, and if you want a new thing to have all those methods then you just implement the interface and you get all 50+ functions for free.

Scala offers a particular way to solve that problem that has its own plusses and minuses, but it's not the only way for a language to let you do that, and it's certainly not the "simplest" way to accomplish that goal.
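For comparison, the closest Scala analog to a Gosu enhancement is to enrich one common supertype once; every real subtype then picks the method up. This is a minimal sketch (`FilterMapOps` and `enrich` are names I made up), and note that Array and String are not Iterable subtypes, which is exactly the blog's sticking point:

```scala
// Enrich Iterable once; List, Set, Vector, etc. all get filterMap.
object EnrichDemo {
  class FilterMapOps[A](xs: Iterable[A]) {
    def filterMap[B](f: A => Option[B]): Iterable[B] = xs.flatMap(a => f(a).toList)
  }
  implicit def enrich[A](xs: Iterable[A]): FilterMapOps[A] = new FilterMapOps(xs)

  def main(args: Array[String]): Unit = {
    // Works on any Iterable, but the static result type is Iterable[Int], not
    // List[Int] -- the precise-result-type problem the thread is discussing.
    val r = List(1, 2, 3, 4).filterMap(x => if (x % 2 == 0) Some(x * 10) else None)
    println(r)
  }
}
```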

Arrays are not collections, either in Scala, or in the blog's condensation of it. But the blog nevertheless tries to define a single extension method that works for both sequences and arrays. I do not think this problem has a simple solution in any language.

That's a fair point; there's no simple way that I know of to do that in any language (though many languages avoid having both constructs), and the general solution is to write parallel sets of methods to operate on the two types of data structures.

> First of all, the things he shows aren't inherently complex: adding an additional function to the existing collections library is something that's possible in other languages in less confusing ways

> "I want to add a method to all my collections that works like the existing methods," in Scala it requires an understanding of a huge number of complicated intersecting concepts, whereas in other languages it doesn't.

Sorry, but there is a bit more to it. I will gladly accept a link to a language which does all the stuff he was trying to do, but until then I'm not convinced at all.

"Notice how several of the problems I pinpoint above are of the form, “How do I accomplish this expression that isn’t even possible in many other languages?” And these are mostly static build issues, which are far from the worst fates imaginable. Certainly, if you don’t care at all about clumsy code or resorting to escape hatches or “writing Java in Scala,” it’s frequently possible to bang out mostly-type-safe Scala while side-stepping battles against the compiler straitjacket. Plus, let’s not lose sight of all the things Scala brings to the table, none of which I’ve described. If there’s one thing more time-consuming than illustrating the complexities of the Scala language, it’s illustrating all the advantages, and I’ve already spent more time on this post than I’d care to admit."

Yes, the post author says much the same thing as I just did. I just thought it was worth specifically mentioning in the discussion thread (in case some people didn't make it all the way through the blog post).

I think it would be great if we could replace implicits by:

1) a simple mechanism for "enrichment" (aka. retro-active extension, virtual classes)

2) functional type-level computation (as opposed to the mini-prolog engine that is implicit search + type inference).

Reducing complexity is complicated, unfortunately. We have been thinking about both of these alternative features, though.


ps: The following part of the article is inaccurate: "Turns out that Scala will search up to one level up the type hierarchy for a matching shape for the implicit."

Check out the "implicits without the import tax" part of http://eed3si9n.com/implicit-parameter-precedence-again. The implicit scope includes all the superclasses (and their companion objects) of all the parts of the type of the implicit value that's being resolved.

Regarding some of the "brainteasers" posted in the article, I am assuming that most of them are simply bugs that will get ironed out in version 2.9.2 or thereabouts?

E.g., the following fails:

    Set(1,2,3).toIndexedSeq sortBy (-_)

But doing the same in 2 steps, ie. after assignment, works

    val xs = Set(1,2,3).toIndexedSeq; xs sortBy (-_)

I have been bitten by this several times now, so I don't unnecessarily chain more than two functions even if the chain does compile (which also sidesteps another brainteaser posed in that article). I have also seen the problem with the add function, when I wrote a matrix manipulation library.
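For what it's worth, pinning the element type anywhere in the chain appears to let inference through; a hedged sketch of two such workarounds (`SortByDemo` is my name):

```scala
// Pin the element type so sortBy sees a concrete Int instead of an
// under-determined type variable.
object SortByDemo {
  val a = (Set(1, 2, 3).toIndexedSeq: IndexedSeq[Int]) sortBy (-_)
  val b = Set(1, 2, 3).toIndexedSeq sortBy ((i: Int) => -i)

  def main(args: Array[String]): Unit = {
    println(a) // descending order
    println(b)
  }
}
```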

    def add(x: Int, y: Int) = x + y

add(1,_) fails, but add(_,_) works. Even though the error message "missing parameter type for expanded function" seems reasonable, and providing the parameter type, i.e. add(1,_:Int), does compile, the behavior is hard to explain to newbies.
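A sketch of the workarounds, assuming the 2.9-era behavior described above (`PartialDemo` and the val names are mine):

```scala
object PartialDemo {
  def add(x: Int, y: Int): Int = x + y

  // val inc = add(1, _)          // fails: "missing parameter type for expanded function"
  val inc  = add(1, _: Int)       // annotating the placeholder fixes it
  val plus = add(_: Int, _: Int)  // annotated placeholders always work
  val curried = (add _).curried   // eta-expansion plus currying, another workaround

  def main(args: Array[String]): Unit = {
    println(inc(2))
    println(plus(2, 3))
    println(curried(1)(4))
  }
}
```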

His point about hanging your hat on asInstanceOf[...] and the resulting code being clumsy... OK, guilty as charged, but then you only have so many hours in a day, and mgmt is paying $$ to solve boring business problems (compute the asset quality of five million loans in thirty lines of business over twelve quarters using Scala) and not to muck around (adding a filterMap to the Scala collection library to get an idiomatic implementation).

I enjoyed the programming snippets in the article very much, but I still don't get what the point of the rant was. He goes on and on about "lack of acknowledgement of complexity", but what does that really mean? Does he want a gold star? Even if everybody acknowledges that Scala is complex, what then? Eventually some of it will get fixed, the rest will not, and life will go on. Why be a downer at such an early phase of the language's growth? All this talk of excess complexity will simply scare off the early adopters. Java has been around for 15 years and we still don't have decent generics. Scala is so far ahead in such a short time. Patience, etc.

> Even if everybody acknowledges that Scala is complex, what then? Eventually some of it will get fixed, the rest will not, and life will go on. Why be a downer at such an early phase of the language's growth?

Because things don't get fixed if people won't talk about them. Irrational defenses (like calling someone who points out legitimate problems a "downer") are a hindrance toward making Scala better than it is right now.

First, I think a blog post ending with "bring on the flames" should have comments enabled, otherwise it's not really fair. It comes across as whining without wanting to listen to advice or a response.

Second, I think the blog post is useful because it shows what's wrong with some (fortunately very small) part of the Scala ecosystem, and because it points to a way to fix it.

Here's a quick recap: The author tries to add arbitrary operations to Scala's Seq abstraction without changing its source code and wants them to work also on arrays (which are plain old Java arrays), without any extra work. Arrays in Java support: length, index, and update; that's it. There is as far as I know no language in existence that allows the precise thing the author wants to achieve. And there are many variations, such as adding only to Seq or only to Array that would be really easy in Scala but still impossible in most other mainstream languages. The author then throws all the machinery he can think of at the problem to still achieve the same non-result. Well, tough luck. He might have hit a thing that's simply impossible to do in a generic way, given the tools we currently have. In fact, I have not checked whether there would be a way to achieve the result that he wants because that's beside the point. There are always limits to a generic formulation that will force you at some point to treat things on a case by case basis.

The problem is that, in trying to achieve his impossible goal, the author (mis-)uses a lot of the most powerful features of Scala, and concludes that Scala is simply too complex for helping him achieve the result. I believe he wrote this blog post to prompt the maintainers of Scala to add even more power to the language so that he can achieve his goal (the only other motivation I can think of is that he's trying to actively damage the ecosystem he writes code in, but that would make no sense to me).

My response will probably not please him. I think that we need to take away sharp knives from people who have a tendency to cut themselves. I was always proud that in Scala you could do in a library what in other languages you had to change the compiler for. Inevitably, some of this is cutting-edge stuff. We have tried many times to clarify the boundaries, for instance when I defined the Scala levels.


But we can't prevent a developer who prides himself on "strolling right through level L3" from getting hurt.

So, I believe here is what we need to do: Truly advanced, and dangerously powerful, features such as implicit conversions and higher-kinded types will in the future be enabled only under a special compiler flag. The flag will come with documentation stating that if you enable it, you take the responsibility. Even with the flag disabled, Scala will be a more powerful language than any of the alternatives I can think of. And enabling the flag to do advanced stuff is in any case much easier than hacking your own compiler.

I would be interested to read your comments on this proposal.

Ugh, sorry about comments being disabled. No idea how that happened; should be fixed now. I do want to hear feedback, flames and all, so finding that comments were off this whole time was disappointing for me too.

I assure you I didn't stroll through L3. I've been using Scala for half a decade and I'm still learning it.

Your reply is a bit disheartening. I don't know how else to convince you, as a long-time user of your product, that every single one of the issues I listed has been a "real-life" encounter. Dismissing the ends as impossible and thus my path toward discovering this fact as mis-using the language suggests to me that you may be overly fixating on the specific example goal I used; it's only one sample point out of many I could have chosen. That it's impossible to add such a method is not even close to the main point I'm trying to make; it's everything that came before that.

This post was neither a request for more expressive power nor an attempt to sabotage the community. The goal was much more modest: to just get something off my chest and to hopefully nudge the discussion past "is Scala complex." I would further submit that if you think I'm advocating piling in even more features/power, then I've done a woefully inadequate job communicating my thoughts. My current belief is that the solution involves either less power or a simply different set of tools.

As for your proposal, I'm not sure how it solves the problems:

1) You can always restrict yourself to writing in only subsets of the language, but this falls apart as soon as you interact with any code that isn't yours.

2) With all due respect, the idea sounds to me like it's tucking a bunch of features away behind a flag and discouraging users from voicing concerns or thinking critically about these features, which is really all I'm trying to do here.

Sorry for my misunderstanding regarding comments. I got frustrated by some recent Scala bashing posts that had comments disabled, and mistakenly assumed there was a pattern. I meant to respond directly to your blog, but now that the discussion on HN is in full swing, we should keep it here.

As someone who struggled myself for a long time with the design of collections until the pieces fell into place, I can understand your struggles very well. In every language there is a limit of what can be achieved, and there is a grey zone before that where things get messy. I believe that Scala collections pushed the envelope in terms of flexibility and ease of use. But repeating this feat by extending all kinds of collections and collection-like structures generically with your own operations is not at all trivial.

You are absolutely right to point out when things get messy of course, even if it would be only to serve as a warning to others who might follow you.

The question is how to avoid similar experiences in the future. One can either change the language to make simple what you found hard. But before I buy into that, I would like to see a constructive and complete proposal what one should change.

Or, one could make a better effort to delineate the limits and the grey zone. That's what I proposed. Importantly, my proposal would apply only to definitions, not to usages. So I'd put implicit conversions and definitions with higher-kinded types behind a flag, but not their uses. I hope this would give pause to library designers that mix advanced features with too much abandon, but it would not hinder users of these libraries at all.
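For the record, this def-site-only design later shipped in Scala 2.10 as SIP-18 feature imports; a sketch of how it looks (`Meters` and the conversion are illustrative):

```scala
// Defining an implicit conversion without the import below draws a -feature
// warning, while code that merely *uses* the conversion compiles untouched.
import scala.language.implicitConversions

object FeatureFlagDemo {
  class Meters(val value: Double)

  // The definition site opts in via the feature import above.
  implicit def doubleToMeters(d: Double): Meters = new Meters(d)

  def main(args: Array[String]): Unit = {
    val m: Meters = 1.5 // the use site needs no import
    println(m.value)
  }
}
```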

> Truly advanced, and dangerously powerful, features such as implicit conversions and higher-kinded types will in the future be enabled only under a special compiler flag. The flag will come with documentation that if you enable it, you take the responsibility.

Haskell (well, the GHC compiler) has been doing this kind of thing for ages. They even take the further step (which I agree with) of requiring that "dangerously powerful" features be explicitly enabled on a per-case basis. They also have a syntax for enabling features on a per-file basis, which reduces the likelihood of advanced techniques creeping into other parts of your codebase.

I think your proposal is a good idea, and I heartily recommend a review of how the GHC team has addressed this problem.

Note that this is not perfect; frequently, in order to use a module using some language features, you need to enable these language features yourself (the interface may be in-expressible without the language extensions!) But they don't seem too obtrusive because most of the language features aren't actually that controversial. I've written about how the features interrelate here: http://blog.ezyang.com/2011/03/type-tech-tree/ One of the problems is that Scala really is more complex: many believe that's just what you get from mashing Java and functional programming together.

Yeah, I expect, if Scala were to go this route, much the same would happen in regards to "inheriting" language features. Sometimes you can neatly tuck them away in the library, sometimes they spill out into calling code. At the least they give you some indication of what you're getting yourself into. I don't really consider that aspect of it to be a problem.

I think Scala has a few tools that, like multiple inheritance in C++ or Haskell's MultiParamTypeClasses, need to have a bit of a warning label so people avoid them until the situation screams for them.

MultiParamTypeClasses are a fine feature! It's the ones like UndecidableInstances that you have to be careful about.

They are, until someone makes a five parameter monster because "someday I'll want that genericity." To be honest, though, I just googled the extensions I could remember until I hit one that hinted at creating a situation where the compiler simply doesn't have enough information to compile your code.

That said, yeah, UndecidableInstances look like a much better example. Especially when googling it turns up things like this: http://lukepalmer.wordpress.com/2008/04/08/stop-using-undeci...

Is MultiParamTypeClasses something like multiple dispatch from CLOS?

In that both embody the concept of using the types of multiple arguments to determine which instance of a function gets called, yes.

That's where the similarity ends, though. CLOS does this at runtime during method invocation, MPTC "dispatch" is determined at compile time. Multiple dispatch will generally have some overhead that MPTC won't, but it's more flexible in that the set of function implementations can be extended without recompiling.

Also, due to limitations of type analysis or type expressivity, it's possible to write something that would work for the specific instances where you are using MPTC but can't be proven to work for all instances of the types involved. Then again, this is almost always an issue with static typing. Whether the compiler is saving you from a bad decision or keeping you from doing your job is a matter of personal opinion.

Thanks, very interesting.

You might be interested in this paper I read just a couple of days ago: "Extending Dylan's type system for better type inference and error detection". (Dylan's object system is similar to CLOS.) A system like that helps get the dispatch overhead down while still being able to extend it at runtime, and you get a lot of the type safety.



In GHC the features are much more well-defined and bite-sized. If you ignore them you still have a very complete, usable and impressive language.

[edit s/less well/more well/]

First, I think a blog post ending with "bring on the flames" should have comments enabled, otherwise it's not really fair. It comes across as whining without wanting to listen to advice or a response.

Speaking as someone who sometimes writes controversial things and never enables comments on my posts...

I view collections of comments as discussions. Readers often form into communities, like Hacker News. If there are comments on a blog post, are you supposed to write your comment here in your community? Or there on the post? Well, a good question is, are you a member of the author’s community? Probably not! Communities have all sorts of standards for participation. What are the author’s standards? Why bother figuring them out for one comment?

Communities also have other tools like seeing a person's history of posts to get a sense of their viewpoint and biases. How do you do that on an author’s blog?

Ultimately, authors either have to get rid of comments or buy into a plug-in system like Disqus. In my own case, I usually just provide a link to HN for posts that I think are of interest to the folks here. Everyone else can use twitter, reddit, or their own blogs to continue the discussion.

That doesn’t mean I won’t read all the flames, there are no end of analytics tools for discovering what people are saying about my writing.

I don’t speak for this author, but considering he has a link to this discussion right at the top of his post, I’m guessing he’s listening to your thoughts and welcomes your feedback right here or wherever people congregate to discuss the news of the day.

Sometimes comments are just comments. If you need a link to figure out whether to take a person seriously (i.e. their comment has no merit free-standing - I'm not sure this is true), a link to a blog ought to be sufficient.

I know I frequently click on the links people provide when they make blog comments, but I don't think I've ever looked at someone's Disqus history. Disqus cuts across multiple sites in potentially drastically different arenas; it's too horizontal. But when someone provides a link, they usually mean it to be relevant.

I actually agree. I prefer to see discussions take place across blogs or in actual communities.

Turning off implicits is not exactly in the same category as removing debug symbols. One removes a convenience, the other disallows the compilation of a whole slew of existing (and future) programs.

My view is that Scala is what it is. It provides a set of powerful features to solve hard problems. Some features are conceptually weighty, but they are all there for a reason and constitute a logical whole.

Don't they?

Implicit conversions (and implicit parameter substitution) are just about the most important feature of the Scala language, from my perspective, and I don't think it makes any sense to hide this behind a compiler switch.

Just about every modern Scala library depends upon the "implicit typeclass pattern" - many require that you define your own typeclasses for your own code, or at least that you correctly create instances of framework-provided typeclasses in the appropriate implicit scope, if for no other reason than to help out the Scala compiler when it can't resolve an implicit on its own. If you start to push implicits behind a compiler flag, fewer people will be familiar with their subtleties, and they will consequently be utterly lost when the framework provided to them becomes insufficient for their goals.

Indeed, I'd suggest that the entire history of the advancement of the Scala library ecosystem could be described as the process of the community learning to use implicits to greater and greater effect. It makes no sense to me to hobble developers by marking a feature "off limits" by default - I know that for my own usage, Scala's benefits only really became evident once I understood implicits; before that, it was simply a slightly better Java. With implicits, it's a tool that makes Java look like COBOL.
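A minimal sketch of that typeclass pattern (`Show` and `describe` are illustrative names, not from any particular library):

```scala
// A typeclass is a trait parameterized on the type it describes.
trait Show[A] { def show(a: A): String }

object Show {
  // Instances live as implicit values, typically in the companion object,
  // so they are found via the implicit scope without an import:
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int) = "Int(" + a + ")"
  }
  // Instances can be derived from other instances:
  implicit def listShow[A](implicit s: Show[A]): Show[List[A]] = new Show[List[A]] {
    def show(as: List[A]) = as.map(s.show).mkString("[", ", ", "]")
  }
}

object ShowDemo {
  // Functions demand the capability through an implicit parameter:
  def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

  def main(args: Array[String]): Unit = {
    println(describe(42))
    println(describe(List(1, 2, 3)))
  }
}
```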

> I know that for my own usage, Scala's benefits only really became evident once I understood implicits; before that, it was simply a slightly better Java. With implicits, it's a tool that makes Java look like COBOL

I would be interested to read any examples you may have which demonstrate this contrast.

> So, I believe here is what we need to do: Truly advanced, and dangerously powerful, features such as implicit conversions and higher-kinded types will in the future be enabled only under a special compiler flag. The flag will come with documentation that if you enable it, you take the responsibility.

Yes, I love the idea. I've said so before on hackernews and on the scala-user mailing list; it would be a huge contribution to Scala.

Most of these features (implicits, higher kinded types, and hell, it's not complicated but ugly so I'd throw it in: using symbols as method names) are useful for library designers, but less so for your average project.

I've written a few thousand line Scala application that has so far made awesome use of first class functions, pattern matching, and actors. I can't think of another language that would work as well for me, and I haven't had to resort to any of the advanced features the author talks about.

What i'm terrified of is having patches/contributions that make use of these 'complex' features that might scare away newbies from hacking on the project.

I think having a 'version' of Scala (enforced by the compiler) that was as simple for beginners to use as Gosu, Kotlin, Ceylon etc. would go a long way in Scala evangelizing, and unlike the aforementioned languages, they wouldn't be as constrained as they grew as developers.

I also support the idea of putting known-dangerous features behind a compiler flag. Those powerful features are what draw me to Scala (and to C++), but where C++ failed, Scala can succeed by making it crystal-clear where the dragons lie.

Scala brings a lot to the table as a "Java with less painful syntax and some functional programming tools", i.e. level A1/A2/L1. That's enough to inherit a huge legacy from Java, and if there's a way to allow that to happen without killing off the more advanced features, it seems like a win-win.

> The author tries to add arbitrary operations to Scala's Seq abstraction without changing its source code and wants them to work also on arrays (which are plain old Java arrays)

Are we sure that's true? In the first half, with RichSeqs and isMajority() and so on, the author appears to define all her own types, and gives her examples entirely in terms of those types. For example:

    scala> class Seq[A] { def head = 0 }
    defined class Seq
The built-in collections library doesn't seem to come in until filterMap(), AFAICT.

The author tries to add "ident" to Seq, and have it work automatically on native Java arrays.

> have it work automatically on native Java arrays

Correct me if I'm wrong, but he is not operating on native Java arrays at all.

He redefines Array, redefines Seq, and redefines WrappedArray as a subtype of Seq. He adds ident to Seq, and since Array isn't a subtype of Seq, ident is not a member of Array (he points that out too, and provides a solution as well). He is operating in his own "tiny parallel universe", as he says in the post. He even redefines String and CharSequence (which threw me off, because aren't java.lang.String and java.lang.CharSequence already in the namespace, so wouldn't there be conflicts, etc.?). Then he wants a Seq[Char] (not the Scala Seq but his redefined Seq) to mimic the redefined String! The whole thing is quite crazy and terribly fascinating, imo.

When? The author appears to define her own Array type as well:

    scala> class Array
    defined class Array

    scala> class WrappedArray(xs: Array) extends Seq // a Seq adapter for Arrays
    defined class WrappedArray

    scala> implicit def array2seq(xs: Array): Seq = new WrappedArray(xs)
    array2seq: (xs: Array)WrappedArray

    scala> new Array().ident
    <console>:13: error: value ident is not a member of Array
              new Array().ident

Arrays are not sequences, in either the standard Scala library or his condensation of it. The goal is still to define a single extension method for two unrelated types.

Well, sorta; the Array -> WrappedArray conversion relates them. The intent is to let you use Seq methods on Arrays, and that part works. But you don't also get RichSeq methods, even though you can use them on Seqs. I'm sure that seems obvious to you, but do you see how the author's intent is misaligned with the result?

I think that's the key takeaway: implicits aren't really a way of extending a library type, they just have some of the same effects. You still have to care about the distinction between the original type and the enhanced one, and that's, well, complex.

I agree. Implicits don't compose transitively (and for good reason), and that makes them complex. That's why I suggested to think about putting implicit conversions behind a compiler switch. They are extremely useful for what they are but not a panacea.
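A sketch of that non-transitivity, with hypothetical classes:

```scala
// The compiler applies at most one implicit conversion per expression;
// it never chains A => B => C on its own.
object NoChaining {
  class A
  class B { def bMethod = "from B" }
  class C { def cMethod = "from C" }

  implicit def aToB(a: A): B = new B
  implicit def bToC(b: B): C = new C

  def main(args: Array[String]): Unit = {
    val a = new A
    println(a.bMethod)      // fine: one implicit hop, A => B
    // println(a.cMethod)   // does NOT compile: no chaining of A => B => C
    println((a: B).cMethod) // fine once the first hop is written explicitly
  }
}
```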

I think that if you put a flag, it effectively means removing those features from Scala, because it will declare them as dangerous. If that happens, then you have lost to Kotlin / Ceylon, because these languages have an edge on Scala in terms of ecosystem (IDE / Hibernate support). The only chance Scala has is in allowing more sophisticated language constructs.

I think the source of the problem, at least in this blog post, is that extension methods are limited today. They are done with implicit conversions, which are limited for obvious reasons, I guess (not allowing type A to arbitrarily become B). I think a feature focusing just on adding extension methods is required, even if it makes the language spec larger.

Please don't do that. I don't want to go work for some Scala shop or team in the future and find out that I'm limited to L1-only features because of management practice.

Instead you'd prefer the team to just pick Java or a lesser JVM language over Scala because they're cautious about some of the complexities implicit conversions bring?

If there was a safe, 'simpler' subset of Scala, I think you'd see a lot more adoption. And hell, once you get a job there, you can always make a good case for using some of the advanced stuff in certain parts of the code.

Unfortunately, in the world we live in, not every programmer should be given the tools to write their own DSL as a means of accomplishing their day-to-day tasks.

...what is to stop them from doing it now? I don't think you can fix management problems at the programming language level. Also, at least for the next five years, I doubt you'll have to worry much about those kinds of shops deciding to pick up Scala.

I understand the suggestion, and adding a guard to the knife makes some sense, but where's the line?

As a library developer, let's say I use a few implicit conversions to make my library easy to use. I sign off on the use with a compiler flag, no big deal, but now every user of my library has to do the same?

Perhaps I'm misunderstanding, but wouldn't this suggestion effectively ruin "ArrowAssoc"? Because I can't think of a single application I write that doesn't use one of those, honestly.

I am thinking only of putting (some) definitions of implicit conversions behind a compiler flag. Uses of such conversions would remain unaffected.

I don't like the idea of compiler flags; it's extra work that isn't needed.

If people/teams decide they don't want to use aspects of the language then they are free to do so by convention.

The absence/presence of compiler flags forms a kind of type system. It's a bit like the difference between Integer and IO Integer in Haskell.

I don't know Scala, but from a PR perspective it sounds like a quite bad idea.

One issue that Scala has been dealing with for ages is a "oh my god, look at these sharp edges, I'll hurt myself!" response. Even if an individual more or less trusts themselves to make good decisions about language constructs, there's the concern that an individual library can force (or already has forced) inappropriate complexity on them.

Giving people a mechanism to limit, or at least easily discover, the use of complicated language features sounds like a PR win for Scala.

Adding a "complexity safety switch" kind of defeats the claim that your language isn't complex.

Except that I'm pretty sure he isn't denying that the language is complex. From Martin's comment, it appears that he is saying that Scala provides powerful and complex language features to solve hard problems, but that it isn't Scala's fault that people are so persistent about abusing them.

Then it's probably a good thing that no one was making that claim.

You may instead find people claiming that the complexity "isn't a problem" or "can be avoided by convention." I don't fully agree with either of those statements. A "complexity safety switch" (nice term, by the way) helps to alert programmers that more difficult language constructs are at play in a given piece of code. Identifying and advertising the use of problematic-but-useful language features seems like a good compromise between omitting them and allowing people to naively wander into them.

> Even with the flag disabled, Scala will be a more powerful language than any of the alternatives I can think of.

How is Scala more powerful than Haskell, Agda, or Coq?

I guess it depends on the definition of "powerful". Scala, unlike Haskell, has first-class modules (aka OO), and unlike Coq (and Agda, I presume) is Turing-complete. (Not that those things really matter to make an awesome language :) ) But among mature statically typed languages running on the JVM, Scala is the most powerful ...

Well, Scala has advantages and disadvantages over Haskell, but I wouldn't say it is more "powerful" than Haskell.

Consider the implementation of a properly lazy "const" function in Haskell:

  const x _ = x
And in Scala:
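One way to write it (a sketch, with illustrative names; the by-name parameter supplies the laziness):

```scala
object ConstDemo {
  // The second parameter is by-name (=> B), so it is never evaluated.
  // This mirrors Haskell's laziness for this one argument, but note that
  // Scala makes you opt in at each parameter rather than getting laziness
  // by default.
  def const[A, B](x: A)(y: => B): A = x

  def main(args: Array[String]): Unit =
    println(const(42)(sys.error("never evaluated")))
}
```

The contrast with the one-line Haskell version is the point: in Haskell, `const x undefined` just works, while in Scala you must thread `=> B` annotations through by hand.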


Not sure about Coq, but Agda is Turing-complete (you just need to turn off termination checking), and Epigram also supports general recursion as well as structural recursion with total functions.

Coq is not Turing-complete, but IMHO, Turing-completeness is overrated :)

I was trying to point that the "more powerful" criteria is pretty subjective.

The OP said "more powerful than the alternatives", which usually means "runs on the JVM"... and AFAIK, Jaskell and CAL are far from mature.

I think you missed his real point, which was in the "Acknowledging Problems" section. Simply, what I got out of his post is that the individual features of the language interact in deep and complex ways. I don't use Scala, but that sounds like a fair and interesting point to make.

Do people really think that higher-kinded types are 'truly advanced and powerful features'?

unfortunate that there needs to be a --enable-flaming option :(

One source of Scala's design is Java interoperability, much as C++ has to live with its C legacy. This compromise may also be why people are able to use Scala in practice, though; they need to talk to Java libraries or need the performance of a language that maps straightforwardly to the JVM.

Scala is a functional/object-oriented hybrid, making it more complex than a purist language in either mold. It always lets you do things in a Java-like way - you can use it as "Java with less boilerplate" and touch almost nothing Scala-specific. Then it adds functional programming alongside.

To me this feels very natural; I like objects for the big-picture structure, but I like to write algorithms and manipulate data in functional style. If you need raw performance in some hotspot, write a Java-like while loop with mutable state; otherwise, write something nice and high-level (and the JVM will still be much faster than a "scripting language" however you define it, e.g. http://blog.j15r.com/2011/12/for-those-unfamiliar-with-it-bo...).
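To make that concrete, here is a hypothetical sum-of-squares in both styles; the while loop with a local var compiles to a plain JVM loop with no closure allocation:

```scala
object SumSquares {
  // High-level functional style: concise, immutable.
  def functional(xs: Array[Int]): Int = xs.map(x => x * x).sum

  // Java-like imperative style for hot paths: mutable locals, no allocation.
  def imperative(xs: Array[Int]): Int = {
    var total = 0
    var i = 0
    while (i < xs.length) {
      total += xs(i) * xs(i)
      i += 1
    }
    total
  }

  def main(args: Array[String]): Unit = {
    val xs = Array(1, 2, 3, 4)
    println(functional(xs)) // 30
    println(imperative(xs)) // 30
  }
}
```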

Scala does fix some Java warts that are legitimately complex or confusing in their own right. For example, primitive and boxed types are less strongly separated; there's no "static", just nice syntax for singleton objects; collections are 100x nicer with far less noise; a nice multiple inheritance design; covariance eliminates a bunch of nasty hacks; better ways to specify access controls; there's a decent way to factor out exception handling; case matching is _awesome_; and _so_ much less boilerplate in general.
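A small illustrative example (names made up) showing two of those points together: a singleton object in place of statics, and case classes with pattern matching replacing a pile of equals/hashCode/toString/visitor boilerplate:

```scala
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

// Singleton object instead of Java's "static" members.
object Shape {
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }
}

object ShapeDemo {
  def main(args: Array[String]): Unit =
    println(Shape.area(Rect(3, 4))) // 12.0
}
```

Because the trait is sealed, the compiler also warns if a match forgets a case, something Java's visitor-pattern encodings only approximate.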

I'm not sure the static vs. dynamic religious war can ever be resolved, but static types feel less broken and less verbose in Scala than in Java. Java makes you lie to the type system, or do something unnatural, much too often. Scala hasn't cured every such situation, but it's cured a lot of the most common ones, and greatly reduced the need for manual type annotations.

Scala gives you the conciseness of Ruby, but with static type checking, higher runtime performance, and interoperability with existing Java code.

Some tradeoffs of static type checking remain, such as compilation times.

People do go on wild goose chases trying to push the language farther than it's ready to go. I've done it myself. I agree with the article that there are lots of areas to improve and appreciate the constructive write-up.

But on the other hand, the perfect shouldn't be the enemy of the good. I certainly would not choose to go back to Java, even as I'd love to keep seeing Scala get even better.

If anyone is able, would you please explain to me these two questions from the quiz? I’m stumped.

Why does `toSeq` compile, but not `toIndexedSeq`?

    Set(1,2,3).toIndexedSeq sortBy (-_)
    Set(1,2,3).toSeq sortBy (-_)
Why does `h` compile, but `f` does not?

    def add(x: Int, y: Int) = x + y
    val f = add(1,_)
    val h = add(_,_)

toIndexedSeq takes a type parameter (I'm not sure why - it appears unnecessary), which means the result type of toIndexedSeq is not known when the compiler is attempting to infer the Ordering. toSeq doesn't take one, so its result is known to be Seq[Int].

The other looks like some quirk of partial application. It's unlikely there's any fundamental reason, only an implementation imperfection.
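For the record, annotating the placeholder's type sidesteps that quirk (a sketch): with all arguments as placeholders the compiler infers the function type from add's signature, but with a mixed call site you have to spell out the placeholder's type yourself.

```scala
object PartialDemo {
  def add(x: Int, y: Int) = x + y

  val h = add(_, _)      // compiles: all-placeholder, types inferred from add
  val f = add(1, _: Int) // compiles once the placeholder is annotated

  def main(args: Array[String]): Unit = {
    println(f(2))    // 3
    println(h(2, 3)) // 5
  }
}
```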

Good to know. Thanks Paul!

I have a confession to make. I voted this article up simply based on the title. Even if it turned out to be incredible link bait (which it didn't), I'm just happy that someone out there is at least recognizing the desire to have a reasonable discussion of this issue.

I wonder how much of an impact it would be if Scala simply had better documentation. I love the terseness of the code and the complex things you can do, but I'll be damned if I could ever glean ideas from looking at method definitions rather than code examples.

I found it essential to read Martin Odersky's Programming in Scala book - if you haven't then I recommend it. It makes many things clear that can be hard to pick up using only the free online resources.

There's also a new docs site and it's getting better all the time: http://docs.scala-lang.org/

There are a lot of good books on the language. This is the 1st edition of the staircase book, which covers 2.7; most of the rest were probably written for 2.8.


There's also a book manuscript (downloadable from typesafe.com), and Manning.com has 3 books in the pipeline, though the 3rd, on FP by Tony Morris, doesn't show there.

Just wow. My eyes started glazing over at "views don’t chain. Instead, use view bounds" and he completely lost me on the next slide with generics. I had Scala in my mental TODO/Maybe list but after skimming over this post I think I'll pass.

I'd have the opposite reaction, I think: I'd want to learn a language because it introduced new concepts (and thus, terminology). It seems to me that it's a bit pointless to learn a language if you already intimately understand its theoretical underpinnings.

Yeah, the post went steeply uphill there. But you can actually understand the gist of most of what he's saying without actually understanding what it means that "views don’t chain. Instead, use view bounds". And somewhat further on, it goes downhill again.

In the simplest form, you can use Scala as a better Java. It's worth exploring it just for that comparison. And then there's much, much more you can do, if you want to, but it's also a very nice language if you don't try to add methods to the collections library and don't use dependent types.

So, a guy tries to solve a hard problem by using very powerful but very complex features that 99.9999999% of scala devs would never touch, and you take that as a reason to avoid scala? And then you come here to tell everyone how absurd your decision making process is?

I think that the source of the problem is that Scala is simply not an opinionated language. You can go functional, you can go OO; you can go immutable, you can go with locks etc. And because it tries to do everything, it falls on its face; worse than that, its users don't really know HOW you're supposed to use it. I always like to contrast Scala with Clojure and Erlang. Both are very opinionated: they have a philosophy of tackling problems, not just a set of tools, and that's why they are so elegant and beautiful (well, at least Clojure is. Erlang is showing its age at times). Even languages more similar to Scala, like Groovy or Kotlin, are more opinionated and more focused (and thus more elegant and simpler) than Scala. They are trying to be a more modern Java, a blue-collar OO language with some modern fun. But it seems like Scala is trying to push not only tools but concepts, only it hasn't decided which concepts are best, so it's pushing all of them at the same time. The result is not only frustration but harmful education as to the best way to go forward.

Both "more opinionated" languages are considerably slower than Scala, so sometimes it is necessary to fall back to "ugly, imperative" code and those languages make sure you will hate that experience.

The difference imho is that Scala doesn't punish you for trying to be fast where necessary.

Apart from that, I would really like to know where Groovy or Kotlin are "more elegant or simpler". I would probably have looked into the specification, but something like that doesn't even exist for Groovy. From my last journey into Groovy I learned that the language is substantially underdocumented and buggy as hell. I prefer not to touch it anymore.

Actually falling back to ugly imperative and super-fast code in Clojure is extremely easy, and its integration with Java is a pleasure. But I don't want to pit a specific language against Scala. My point is that it isn't clear (to anyone possibly), what Scala IS, other than a kitchen-sink language for programming concepts.

Some things in Scala are easy and some things are hard, but it's not always clear why. I'd expect the "good" things (whatever is considered good by the language designers, their expert opinion, that is) to be easy and the "bad" things hard. But in Scala it seems that whether something is easy or not depends on how well it fits with the language's algebraic model, and not how well it fits with recommended practice. Immutable is easy (that must be good, no?), but mutable is just as easy (so, wait, I'm supposed to... what exactly?); functional is easy, but so is imperative; implicits are easy but extension methods, as the blog post shows, can be really hard; type inference is easy except when it's not. See the problem?
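The even-handedness is easy to demonstrate (a toy example): both styles below are idiomatic, and the language expresses no preference between them.

```scala
import scala.collection.mutable.ListBuffer

object BothWays {
  def main(args: Array[String]): Unit = {
    // Functional/immutable: one expression, no mutation.
    val functional = List(1, 2, 3).map(_ + 1)

    // Imperative/mutable: just as little friction.
    val buf = ListBuffer.empty[Int]
    for (x <- List(1, 2, 3)) buf += x + 1

    println(functional == buf.toList)
  }
}
```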

> See the problem?

No. Must be because I actually use the language instead of trying to bash it to advertise another language. :-)

I'm still excited to see another language solving the problem mentioned in the article. At the moment it really looks as if Scala gets bashed for the complexity of something that isn't possible in any other language out there.

Yeah, I'm sorry about that. That was not my intention. But you're right, I have grown disillusioned with Scala, partly because of stuff like the examples in the pop-quiz section of the post, but mostly because of the other things I mentioned.

But I don't agree with your analysis. I won't go into the question of whether or not the feat mentioned in the post is possible in other languages or not, but I do know that if I came to a rather experienced Scala programmer and asked him off-hand if the tasks attempted in the post are easy, I suspect that the answer would be "yes". Scala is just... surprising like that.

Very good post. One point with which I cannot agree is "In fact, it is impossible to insert a new method that behaves like a normal collection method."

Please see http://ideone.com/ePUHG An excerpt:

    import MyEnhancements._
    println("qwe".quickSort)
    println(Array(2,0).quickSort)
    println(Seq(2,0).quickSort)
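For readers who can't follow the link, the shape of the trick is roughly this (a simplified sketch, not the linked code: it uses a 2.10-style implicit class, covers only Seq rather than String and Array, and returns Seq instead of preserving the precise collection type):

```scala
object MyEnhancementsSketch {
  // "Enrich my library": the implicit class makes quickSort appear
  // as if it were a method on any Seq[A] with an Ordering.
  implicit class QuickSortOps[A](val xs: Seq[A]) {
    def quickSort(implicit ord: Ordering[A]): Seq[A] =
      if (xs.isEmpty) xs
      else {
        val pivot = xs.head
        val (lo, hi) = xs.tail.partition(ord.lt(_, pivot))
        (lo.quickSort :+ pivot) ++ hi.quickSort
      }
  }
}

object QuickSortDemo {
  import MyEnhancementsSketch._ // the enrichment must be imported into scope
  def main(args: Array[String]): Unit =
    println(Seq(2, 0, 1).quickSort)
}
```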

Miles Sabin's solution is also worth a look: https://gist.github.com/f83892f65f63b14a1f75

It uses dependent types which will be included by default in Scala 2.10. I don't consider this as "simple" but my point of view is that Computer Science is not "simple" :-). And having a language supporting that level of genericity is really helpful for type-safety and code reuse.

@OlegYch's solution also represents a good trade-off of complexity vs. convenience: his solution is simpler but doesn't get along so nicely with type inference; mine gets on just fine with type inference, but is more complex and depends (no pun intended) on dependent method types. As ever, YMMV.
