Scala turns 10 (gmane.org)
72 points by psuter on Jan 20, 2014 | 64 comments



However, certain parts of Scala are slightly older (16/07/2003) [1]

[1] - https://github.com/scala/scala/blob/master/src/library/scala...


It's been 10 years since the first Scala release, not since the inception of the language.



My "favorite" feature is the case class arity limit of 22. Is that ever a fun one to discover the hard way.


This is finally going away in the next version (2.11, which will hit RC1 next month); see [1].

[1]: https://github.com/scala/scala/pull/2305


What are the implications of not implementing FunctionN? Will the apply method still be there? Can I still pass the case class companion object to a higher order function? Will shapeless' Iso.hlist still work? (that's the big one really, as it allows dealing with case classes more-or-less generically)


Yes, the apply method is still there. But since it has more than 22 arguments, you can't lift it into a function object anymore.
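Roughly, the lifting in question (a minimal sketch with a made-up case class, assuming Scala 2.10):

    case class Small(a: Int, b: Int)

    // Eta-expansion lifts the companion's apply method into a Function2:
    val lifted: (Int, Int) => Small = Small.apply _

    // The standard library only defines Function1 through Function22,
    // so a 23-field apply would have no FunctionN to lift into.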

This will break some tools, like Play's forms, that need you to pass in a couple of serialization/deserialization functions.

Shapeless will probably be fine; or at least, since it is macro-based, it will be able to work around the missing unapply and extract the fields directly, like pattern matching does. (I don't know Shapeless well enough to go into more detail.)


Aaaaah, interesting. Spray uses Shapeless heavily, so this should make it a shoo-in for the Typesafe stack.


This will be an interesting side effect: "omit generation of unapply"


Yes, this means there won't be a 1-to-1 conversion between a case class and a tuple anymore, since tuples are still limited to 22. This will probably break a few things out there (for people crazy enough to use case classes with 23+ parameters).
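For reference, the conversion being lost looks something like this (a minimal sketch; Point is a made-up example):

    case class Point(x: Int, y: Int)

    // case class -> tuple, via the generated unapply:
    val t: (Int, Int) = Point.unapply(Point(1, 2)).get

    // tuple -> case class, via tupled on the companion object:
    val p: Point = Point.tupled((1, 2))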


The problem is the real world doesn't always leave you a choice. If you need to interface with a database that has a table with 43 columns, good luck using Slick. Need to serialize it to JSON? Play-Json isn't going to like that.

I'm very much looking forward to fixing this ridiculous situation. Onboarding a new hire to Scala with this being the first thing they run into isn't pretty.


Have to agree. I ran into this the first time I used Scala, when I thought it would be fun to learn the language and a framework (Play) at the same time. After crying for a bit I gave up trying to work out why this limit was put in place and just nested my data structure.


It's a problem that I contemplate every day. More often than not I have to resort to using a Map, in which case I have to use an external JSON library, because, like you mentioned, Play JSON is not that friendly with anything other than case classes.


We've resorted to an ugly nested case class structure. We like Play-Json overall; it works well even with large (>100 MB) JSON objects, but we'd love not to have to deal with that. TBH, it's a bigger problem with Slick, where we can't just mess with the schema since other systems use the same DB.


"We've resorted to an ugly nested case class structure."

Yes, same here, and I haven't found an approach that's better overall, taking everything into account.


I think the only place I've seen anyone wanting more than 22 parameters was people who had a database table with more than 22 columns (!) and were using the Slick library.


Yeah, we ran into this problem... Scala and the whole ecosystem can get annoying. In the real world, outside of the academic bubble Scala lives in, database tables often have more than 22 columns.


In Scala I find it more natural to group related columns together, i.e. to break the case class representing the table into multiple case classes. The problem is that Slick needs to serialize/deserialize those case classes into tuples.
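For example, something like this hypothetical split (names made up for illustration):

    // One wide row broken into logical groups of columns:
    case class Name(first: String, last: String)
    case class Contact(email: String, phone: String)
    case class Person(id: Long, name: Name, contact: Contact)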


I think the arity limit is more or less planned to be fixed in Scala > 2.11 anyway.



You can also run into it with forms in Play, although for a variety of reasons you hit it at 18.

The whole situation just drips with arrogance... "we have a magic number, and if you hit it, you suck"


I agree. I know people have posted comments saying it's fixed in 2.11, but if Scala wants to be a production-ready language it shouldn't take 10 years to address this.


> "we have a magic number, and if you hit it, you suck"

You mean like Java does since ... forever?


So basically, anyone with a legacy database :)


When I looked at Scala briefly in 2006 or so, built-in support for XML processing was a big selling point. Another one was being 'multiparadigm', like C++. It was clear at that time that Scala would not become a mainstream language.


I don't follow the connection between your comments.

XML Support as selling Point + Multiparadigm == !mainstream?

I agree that Scala has its quirks (IMO, the type system is way too complicated), but to have dismissed it based on two apparently unrelated features 8 years ago seems a little strange.


XML literals are on their way out of the default language, by the way. There's an ongoing effort to make the language smaller, and from 2.11 on, one has to include a separate module to support them.
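In sbt that looks roughly like this (these are the scala-xml module coordinates; the exact version is just for illustration):

    // XML literals and scala.xml move out of the core library in 2.11:
    libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.1"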


It appears they've changed the look of the scala-lang website to celebrate this:

http://www.scala-lang.org/

(This is what it looked like yesterday: https://web.archive.org/web/20140119030617/http://www.scala-... )


The 'current' design is actually the old design, whereas the archive.org link shows the real current design.

Either they're doing a throwback thing for the anniversary or that's a remarkably coincidental server issue.


Wow I wouldn't have guessed it was that old.


Hehe, I remember taking a compiler class taught by Martin Odersky when I was studying at EPFL 8 or 9 years ago. We had to implement a compiler for a mini-Scala.

After that class, I thought I would never see it again.


Was that mini-Scala any of the { Eins, Zwei, Drei, Vier } languages?


I honestly don't remember; I didn't pay too much attention in class. (Mistakenly believing I would never ever write a compiler...)


It only started getting a lot of visibility after Twitter announced they were using it about 5 years ago.


What ended up happening to the .NET target?


The CLR's type system is in the annoying position of being both too powerful and too weak to support Scala's type system.

If it were weaker (à la the JVM) it would be fine; if it were more powerful, it could represent the necessary things at runtime.

I'm trying to find a link to the paper which says there is no efficient one-to-one mapping from Scala to the CLR.


Hope you find the link; I'd like to read that paper.


It died because .NET supports reified types, which make interoperability pretty much impossible.

Contrary to popular opinion, erasure is a superior solution to reification for many reasons, and this is one of them.


Can you be more specific? I know that reification can bloat runtime memory requirements, but I'd like to know about other disadvantages.



Not really... this only mentions that reified generics would be incompatible with current Java collections, which is obviously a problem that Java has, not that reified generics have (C# does great!).


C# was designed for the CLR and the CLR was designed for C#. F#, on the other hand, has two generics systems in the same language, one inherited from C# and one for ML-style Hindley-Milner inference based on type erasure, and they don't interact well.

Java's generics can't be built on top of the CLR's reified generics, and this goes for other languages as well: Haskell, ML, Scala, you name it. Basically, reified generics work well if you design the language together with the VM; otherwise they become a huge PITA.

FYI, the CLR's reified generics are one reason why the JVM is a better target for other languages. For dynamic languages they are a problem because the bytecode generator has to work around them. For static languages, such as Scala or anything in the ML family, they're a problem because either (a) the language is designed around co/contra-variance, (b) the CLR doesn't support higher-kinded types, (c) it doesn't support rank-2 types, or (d) if you do need co/contra-variance rules, Microsoft for some reason chose to restrict variance declarations to generic interfaces and delegates, so you can't define a List<+T> class, for example (to be honest, I'm not sure whether this last one is a limitation of the language or of the CLR).
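As a concrete instance of (b), here is a higher-kinded type parameter, which Scala expresses directly but the CLR's generics cannot represent (a minimal sketch):

    // F[_] abstracts over the type constructor itself (List, Option, ...):
    trait Functor[F[_]] {
      def map[A, B](fa: F[A])(f: A => B): F[B]
    }

    implicit object ListFunctor extends Functor[List] {
      def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
    }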

Also, having reified generics is less important than you might think. Scala, for example, can do specialization for primitive types, and because the type system is much stronger, coupled with an awesome collections library and pattern matching, the flow of types is much better. For example, I can't remember any instance in which I felt the need to do an obj.isInstanceOf[List[Int]] check; immutable collections can be covariant without issues, and for everything else there are type classes, something C# sorely lacks.
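That specialization is just an annotation; the compiler then emits extra variants of the class for the listed primitives (a minimal sketch, assuming Scala 2.x):

    // The compiler generates specialized variants of Box that store
    // unboxed Int/Double values instead of boxed objects:
    class Box[@specialized(Int, Double) T](val value: T)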

IMHO, in spite of people's opinion on the matter, Java's type-erased generics are one of its best features.


> Also, having reified generics is less important than you might think. Scala, for example, can do specialization for primitive types, and because the type system is much stronger, coupled with an awesome collections library and pattern matching, the flow of types is much better. For example, I can't remember any instance in which I felt the need to do an obj.isInstanceOf[List[Int]] check; immutable collections can be covariant without issues, and for everything else there are type classes, something C# sorely lacks.

I was under the impression that Scala just used Manifests and type tokens (a la my colleague Neal Gafter's Super Type Tokens) to get around the problem of erasure. I don't see any evidence that this actually works better, though.


Yes, it also has TypeTags, allowing you to inspect in detail exactly what the compiler saw at compile time. That's not really what one uses in practice though, e.g.:

    scala> val list = List(null, 1, 2, "any")
    list: List[Any] = List(null, 1, 2, any)

    scala> list.collect { case x: Int => x }
    res0: List[Int] = List(1, 2)

    scala> Vector(null, "hello", 3).collectFirst { case i: Int => i * 2 }
    res1: Option[Int] = Some(6)

    scala> import collection.immutable.BitSet
    
    scala> BitSet(1,2,3).map(_ + 1)
    res2: immutable.BitSet = BitSet(2, 3, 4)

    scala> BitSet(1,2,3).map(_.toString)
    res3: immutable.SortedSet[String] = TreeSet(1, 2, 3)
To be honest, tags are indeed needed for creating arrays, since arrays are reified by the JVM:

    scala> import scala.reflect.ClassTag
    scala> def myAwesomeArray[T : ClassTag] = new Array[T](10)

    scala> myAwesomeArray[Int]
    res4: Array[Int] = Array(0, 0, 0, 0, 0, 0, 0, 0, 0, 0)

One common problem is that overloading doesn't work on the type parameters of containers: due to type erasure, the JVM gets in the way (i.e. the following does not compile):

    def sum(list: List[String]) = ???
    def sum(list: List[Int]) = ???
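    // After erasure both methods have the same JVM signature, sum(List),
    // which the compiler rejects as a double definition.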
However, Scala has something more powerful (type-classes):

    trait Monoid[T] { 
      def plus(x: T, y: T): T
      def zero: T 
    }

    implicit object IntMonoid extends Monoid[Int] { 
      def plus(x: Int, y: Int) = x + y
      val zero = 0 
    }

    implicit object StringMonoid extends Monoid[String] { 
      def plus(x: String, y: String) = x + y
      val zero = "" 
    }

    def sum[T : Monoid](list: List[T]) = {
      val monoid = implicitly[Monoid[T]]
      list.foldLeft(monoid.zero)(monoid.plus)
    }

    scala> sum(List("a", "b", "c", "d"))
    res5: String = abcd

    scala> sum(List(1,2,3,4))
    res6: Int = 10
But wait, we can go much further with implicits that actually dictate the return type:

    trait SetBuilder[T, R] {
      def empty: R
      def buildInstance(param: T): R
      def concat(s1: R, s2: R): R
    }

    implicit object IntSetBuilder extends SetBuilder[Int, BitSet] {
      def empty = BitSet.empty
      def buildInstance(param: Int) = BitSet(param)
      def concat(s1: BitSet, s2: BitSet) = 
        s1 ++ s2
    }

    implicit object StringSetBuilder extends SetBuilder[String, Set[String]] {
      def empty = Set.empty[String]
      def buildInstance(param: String) = Set(param)
      def concat(s1: Set[String], s2: Set[String]) =
        s1 ++ s2
    }

    def buildSetFrom[T,R](params: T*)(implicit builder: SetBuilder[T,R]): R = {
      if (params.nonEmpty)
        builder.concat(
          builder.buildInstance(params.head),
          buildSetFrom(params.tail : _*)
        )
      else
        builder.empty
    }  

    scala> buildSetFrom(1,2,3)
    res7: scala.collection.immutable.BitSet = BitSet(1, 2, 3)

    scala> buildSetFrom("a", "b", "c")
    res8: Set[String] = Set(a, b, c)
Scala in general allows for code that is much more generic than C#, without needing reified generics.


And F* has dependent typing. I was never claiming that reified generics are necessary for stronger typing, but I'm also not convinced that they're an impediment.

At worst I could see how it could be a trivial implementation detail, but the fact that Scala went out of its way to use tokens and manifests suggests that the functionality is desired. Whether it's provided by the runtime or by the compiler writer is only a difference in effort (or rather, whose effort).


This is the first time I've heard of F*. Interesting. Is it usable?


Not really. It basically generates a number of type (in)equalities and outsources finding the solution to Microsoft's Z3 theorem prover, which you cannot use for anything commercial.


This seems inaccurate - generics aren't a problem for F# at all (the integration between the .NET and ML type systems is seamless here, as far as I'm aware). Indeed the people behind the .NET generics design (Don Syme and Andrew Kennedy in MSR) were coming from an ML background, and Don is the primary researcher behind F#. The big ML-.NET type system mismatches have to do with things like pervasive overloading in .NET making type inference difficult, nothing to do with reified generics.


Don Syme, Andrew Kennedy and many others at Microsoft are awesome, but that doesn't make the appeal to authority valid. The reality is that their creativity was limited by the constraints of .NET.

> The big ML-.NET type system mismatches have to do with things like pervasive overloading in .NET making type inference difficult, nothing to do with reified generics

You've hinted at a side effect, but no, it has to do with subtyping [1], which naturally leads to co/contra-variance [2]. To put it bluntly: in a nominal type system with subtyping, Hindley-Milner type inference is not merely "difficult" but impossible. And because F# had to interoperate with .NET, it needed C#'s generics. A pity, because there are many things F# misses, like OCaml functors, structural typing, type classes and higher-kinded types, and the list goes on. And I just love how List.sum<^T> is defined [3] (static member, FTW).

[1] https://en.wikipedia.org/wiki/Subtyping

[2] https://en.wikipedia.org/wiki/Covariance_and_contravariance_...

[3] http://msdn.microsoft.com/en-us/library/ee353634.aspx
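A small REPL illustration of the subtyping point: instead of unifying Int and String, the compiler has to fall back to computing their least upper bound:

    scala> List(1, "a")
    res0: List[Any] = List(1, a)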


Sure, subtyping adds additional pain points (mostly separate from overloading, BTW). But I think this is mostly orthogonal to reified vs. erased generics. As it is, the .NET runtime supports reified generics and declaration-site variance (albeit limited to interfaces and delegates), and to my knowledge there's nothing stopping a different language/runtime from implementing reified generics even with usage-site variance (and Scala's type system only has declaration-site variance, anyway, AFAIK).


Generics are so "seamless" in F# that it in fact ships with two versions of generics.


What does this even mean?



I wouldn't call that "two systems"; statically resolved type parameters are an extension of .NET generic parameters, not a separate system. In particular, a single definition can use both statically resolved and non-statically resolved parameters (e.g. in `let inline f x y = x + x`), and statically resolved type parameters can be used freely where normal .NET generic type parameters are expected (e.g. in `let inline g x = x + id x`). And these distinctions are fairly transparent to users in most cases anyway, since type inference is usually capable of inferring the kind of parameter needed.

In any case, it is true that F#'s type system has concepts that the CLR doesn't natively support, but I don't see how this demonstrates any weakness in the CLR or F#. The exact same thing is true of Scala on the JVM, as far as I can tell; how are erased generics an improvement?


I'd love to see some evidence for these claims. Personally, I think that .NET's type system is quite elegant compared to the JVM's, and that .NET's combination of value types and reified generics gives several concrete advantages (better memory usage, avoids many Java pitfalls such as "why can't I use a T[]?", etc.).


> .NET's type system is quite elegant

What do you think about not being able to use Void in generics? Java's issue of not supporting primitive types in generics pales in comparison to that.

Also, while reified generics and structs are fine, most of the optimizations one would reasonably expect to be supported just aren't there. Some time ago it was still faster to pass value types by reference... that's just embarrassing. Maybe RyuJIT does better here?


There are lots of things that could be better, and I don't mean to imply that .NET's system is perfect. But the fact that List<byte> contains an array of bytes rather than an array of much larger objects is often important in practice, and dwarfs any complaints I might have. Not being able to use System.Void in .NET generics is an occasionally frustrating quirk, not a huge problem, in my experience. Indeed, it's quite easy to create a new type with no public constructors to achieve the equivalent result (and F# does this with the unit type).


I've seen a few people report running Scala on .NET successfully with the help of IKVM [1]

[1] http://www.ikvm.net/


I've seen this project mentioned before but have never used it. Can anyone with a decent amount of experience speak to its quality?


It hasn't been maintained so it's getting axed in 2.11.


I've heard the internals of the Scala compiler are really ugly, and that there isn't a true spec for why it does some of the things it does. So it might be hard to make anything else match what the existing compiler does, because the language is defined by its implementation, and the implementation is hard to understand?


Not really. Yes, some parts are a bit involved, but it's better than most compilers out there. People are complaining at a pretty high level here.

The death of the .NET port can directly be attributed to Microsoft's marketing of the platform. If they manage to make people question whether .NET will still be alive in 10 years, it's no surprise that people start dropping support for it.


My sample size is small, but the sentiment mainly comes from this: "Pacific Northwest Scala 2013 We're Doing It All Wrong by Paul Phillips - YouTube" [1]

[1] https://www.youtube.com/watch?v=TS1lpKBMkgg


While I usually agree with most things Paul Phillips says, that talk was mostly an exception.

There is nothing that can't be fixed by focused work on all the bits and pieces that need it, and that's exactly what the scalac developers spend most of their time doing.

So many improvements are coming in every week that I can hardly stand using an older compiler version, because I always know that so much stuff has already substantially improved.

Despite what some people say, life in Scala-land is pretty damn good.



