What are the implications of not implementing FunctionN? Will the apply method still be there? Can I still pass the case class companion object to a higher order function? Will shapeless' Iso.hlist still work? (that's the big one really, as it allows dealing with case classes more-or-less generically)
Yes, the apply method is still there. But since it has more than 22 arguments, you can't lift it into a function object anymore.
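To illustrate (the case class names here are made up): for 22 or fewer fields the compiler can eta-expand apply into a FunctionN value, but there is no Function23 to lift into.

case class Small(a: Int, b: Int)

// Eta-expansion works because Function2 exists:
val f: (Int, Int) => Small = Small.apply _

// A hypothetical case class with 23+ fields would still compile and
// still have an apply method, but `Big.apply _` would not, because
// there is no Function23 trait to lift it into.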
This will break some tools, like Play's forms, which need you to pass a couple of serialization/deserialization functions.
Shapeless will probably be fine, or at least, since it is macro based, it will be able to work around the missing unapply and extract the fields directly, like pattern matching does. (I don't know shapeless well enough to go into more details.)
Yes, this means there won't be a 1-to-1 conversion between a case class and a tuple anymore, since tuples are still limited to 22. This will probably break a few things out there (for people crazy enough to use case classes with 23+ parameters).
The problem is the real world doesn't always leave you with a choice. If you need to interface with a database that's got a table with 43 columns, good luck using Slick. Need to serialize it to JSON? Play-Json isn't going to like that.
I'm very much looking forward to fixing this ridiculous situation. Onboarding a new hire to Scala with this being the first thing they run into isn't pretty.
Have to agree. I ran into this the first time I used Scala, when I thought it would be fun to learn the language and a framework (Play) at the same time. After crying for a bit, I gave up trying to reason about why this limit was put in place and just nested my data structure.
It's a problem that I contemplate every day. More often than not I have to resort to using a Map, in which case I have to use an external JSON library, because like you mentioned, Play JSON is not that friendly with anything other than case classes.
We've resorted to an ugly nested case class structure. We like Play-Json overall, and it works well even with large (>100 MB) JSON objects, but we'd love to not deal with that. TBH, it's a bigger problem with Slick, where we can't just change the schema since other systems use the same DB.
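For what it's worth, the workaround looks roughly like this (field names made up; Json.format is Play's macro-based formatter): split the record into parts that each stay under the limit.

import play.api.libs.json._

case class Part1(name: String, age: Int)    // ...each part stays under 22 fields
case class Part2(city: String, zip: String)
case class Record(p1: Part1, p2: Part2)

// The macro works per case class, so each one must be under the limit:
implicit val part1Format: Format[Part1] = Json.format[Part1]
implicit val part2Format: Format[Part2] = Json.format[Part2]
implicit val recordFormat: Format[Record] = Json.format[Record]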
I think the only place I've seen anyone wanting more than 22 parameters was people who had a database table with more than 22 columns and were using the Slick library.
Yeah, we ran into this problem... Scala and the whole ecosystem can get annoying. In the real world, outside of the academic bubble Scala lives in, database tables routinely have more than 22 columns.
In Scala I find it more natural to group multiple columns together, i.e. break the case class representing the table into multiple case classes. The problem is that Slick needs to serialize/deserialize those case classes into tuples.
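Something like this, if I remember the Slick syntax right (table and column names made up; Slick 3-style API): nested mapped projections keep every tuple well under 22 elements.

import slick.jdbc.H2Profile.api._

case class Name(first: String, last: String)
case class Address(street: String, city: String)
case class User(id: Long, name: Name, address: Address)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id     = column[Long]("id", O.PrimaryKey)
  def first  = column[String]("first")
  def last   = column[String]("last")
  def street = column[String]("street")
  def city   = column[String]("city")

  // Each sub-projection maps a small tuple to its own case class:
  def name    = (first, last) <> (Name.tupled, Name.unapply)
  def address = (street, city) <> (Address.tupled, Address.unapply)

  // The top-level projection then combines the mapped pieces:
  def * = (id, name, address) <> (User.tupled, User.unapply)
}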
I agree. I know people posted a comment saying it's fixed in 2.11, but if Scala wants to be a production-ready language, it shouldn't take ten years to address this.
When I looked at Scala briefly in 2006 or so, built-in support for XML processing was a big selling point. Another one was being 'multiparadigm', like C++. It was clear at that time that Scala would not become a mainstream language.
I don't follow the connection between your comments.
XML support as a selling point + multiparadigm == !mainstream?
I agree that Scala has its quirks (IMO, the type system is way too complicated), but to have dismissed it based on two apparently unrelated features 8 years ago seems a little strange.
XML literals are on their way out of the core language, by the way. There's an ongoing effort to make the language smaller, and as of 2.11 you have to include a separate module (scala-xml) to support them.
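In sbt that's just one extra line (the version number here is illustrative):

libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.2"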
Hehe, I remember taking a compiler class taught by Martin Odersky when I was studying at EPFL 8 or 9 years ago. We had to implement a compiler for a mini-Scala.
After that class, I thought I would never see it again.
Not really... this only mentions that reified generics would be incompatible with current Java collections, which is obviously a problem with Java, not with reified generics (C# does great!).
C# was designed for the CLR and the CLR was designed for C#. F#, on the other hand, has two generics systems in the same language: one inherited from C# and one for ML-style Hindley–Milner inference based on type erasure, and they don't interact well.
Java's generics can't be built on top of the CLR's reified generics, but this goes for other languages as well - Haskell, ML, Scala, you name it. Basically, reified generics work well when you design the language together with the VM; otherwise they become a huge PITA.
FYI, the CLR's reified generics are one reason why the JVM is a better target for other languages. For dynamic languages, they are a problem because the bytecode generator has to work around them. For static languages, such as Scala or anything in the ML family, they are a problem because (a) those languages are designed around co/contra-variance, (b) CLR generics don't support higher-kinded types, (c) they don't support rank-2 types, and (d) when you do need co/contra-variance rules, Microsoft for some reason restricted variance declarations to generic interfaces and delegates, so you can't design a List<+T> class, for example (to be honest, I'm not sure if this last one is a limitation of the language or of the CLR).
Also, having reified generics is less important than you think. Scala, for example, can specialize for primitive types, and because the type system is much stronger, coupled with an awesome collections library and pattern matching, the flow of types is much better. For example, I can't remember any instance in which I felt the need to do an obj.isInstanceOf[List[Int]] check, immutable collections can be covariant without issues, and for everything else there are type classes, something C# sorely lacks.
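To illustrate the specialization bit (a minimal sketch):

// @specialized makes the compiler emit extra variants of the class for
// the listed primitives, so values are stored unboxed even though the
// generic type is erased at runtime.
class Box[@specialized(Int, Double) T](val value: T)

val b = new Box(42)  // picks the Int-specialized variant, no boxing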
IMHO, in spite of people's opinions on the matter, Java's type-erased generics are one of its best features.
I was under the impression that Scala just used Manifests and type tokens (a la my colleague Neal Gafter's Super Type Tokens) to get around the problem of erasure. I don't see any evidence that this actually works better, though.
Yes, it also has TypeTags, allowing you to inspect exactly what the compiler saw at compile-time in detail.
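For example (scala.reflect, available since 2.10; the helper is made up):

import scala.reflect.runtime.universe._

// The implicit TypeTag carries the full static type past erasure:
def describe[T: TypeTag](x: T): String = typeOf[T].toString

describe(List(1, 2, 3))  // "List[Int]"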
That's not really what one uses in practice, e.g.:
scala> val list = List(null, 1, 2, "any")
list: List[Any] = List(null, 1, 2, any)
scala> list.collect { case x: Int => x }
res0: List[Int] = List(1, 2)
scala> Vector(null, "hello", 3).collectFirst { case i: Int => i * 2 }
res1: Option[Int] = Some(6)
scala> import collection.immutable.BitSet
scala> BitSet(1,2,3).map(_ + 1)
res2: immutable.BitSet = BitSet(2, 3, 4)
scala> BitSet(1,2,3).map(_.toString)
res3: immutable.SortedSet[String] = TreeSet(1, 2, 3)
To be honest, ClassTags are indeed needed for creating arrays, since arrays are reified by the JVM:
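For example (a sketch; the helper is made up):

import scala.reflect.ClassTag

// JVM arrays are reified, so the element type must be known at runtime;
// the ClassTag supplies what erasure would otherwise discard.
def fill[T: ClassTag](n: Int, elem: T): Array[T] = Array.fill(n)(elem)

fill(3, "x")  // Array(x, x, x)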
One common problem is that overloading doesn't work for containers due to type erasure: you can't have two overloads that differ only in their type parameters, because the JVM gets in the way (i.e. the following does not work):
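Presumably something along these lines (names made up):

class Handler {
  def handle(xs: List[Int]): Unit    = println("ints")
  def handle(xs: List[String]): Unit = println("strings")
  // error: double definition - both methods have the same
  // type after erasure: handle(xs: List): Unit
}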
And F* has dependent typing. I was never claiming that reified generics were necessary for stronger typing, but I'm also not convinced that they're an impediment.
At worst I could see how it could be a trivial implementation detail, but the fact that Scala went out of its way to use tokens and manifests suggests that the functionality is desired. Whether that's provided by the runtime or by the compiler writer is only a difference in effort (or rather, whose effort).
Not really. It basically generates a number of type (in)equalities and outsources finding the solution to the Microsoft Z3 theorem prover, which you cannot use for anything commercial.
This seems inaccurate - generics aren't a problem for F# at all (the integration between the .NET and ML type systems is seamless here, as far as I'm aware). Indeed the people behind the .NET generics design (Don Syme and Andrew Kennedy in MSR) were coming from an ML background, and Don is the primary researcher behind F#. The big ML-.NET type system mismatches have to do with things like pervasive overloading in .NET making type inference difficult, nothing to do with reified generics.
Don Syme, Andrew Kennedy and many others who work for Microsoft are awesome. That doesn't make the appeal to authority valid, though. The reality is that their creativity was limited by the constraints of .NET.
> The big ML-.NET type system mismatches have to do with things like pervasive overloading in .NET making type inference difficult, nothing to do with reified generics
You've hinted at a side effect, but no, it has to do with subtyping [1], which naturally leads to co/contra-variance [2]. Or to put it bluntly, in a nominal type system that has subtyping, Hindley–Milner type inference is not only "difficult" but actually impossible. And because F# had to be interoperable with .NET, it needed C#'s generics. A pity, because there are many things that F# misses, like OCaml functors, structural typing, type classes, higher-kinded types, and the list goes on. And I just love how List.sum<^T> is defined [3] (static member, FTW).
Sure, subtyping adds additional pain points (mostly separate from overloading, BTW). But I think this is mostly orthogonal to reified vs. erased generics. As it is, the .NET runtime supports reified generics and declaration-site variance (albeit limited to interfaces and delegates), and to my knowledge there's nothing stopping a different language/runtime from implementing reified generics even with usage-site variance (and Scala's type system only has declaration-site variance, anyway, AFAIK).
I wouldn't call that "two systems"; statically resolved type parameters are an extension of .NET generic parameters, not a separate system. In particular, a single definition can use both statically resolved and non-statically resolved parameters (e.g. in `let inline f x y = x + x`), and statically resolved type parameters can be used freely where normal .NET generic type parameters are expected (e.g. in `let inline g x = x + id x`). And these distinctions are fairly transparent to users in most cases anyway, since type inference is usually capable of inferring the kind of parameter needed.
In any case, it is true that F#'s type system has concepts that the CLR doesn't natively support, but I don't see how this demonstrates any weakness in the CLR or F#. The exact same thing is true of Scala on the JVM, as far as I can tell - how are erased generics an improvement?
I'd love to see some evidence for these claims. Personally, I think that .NET's type system is quite elegant compared to the JVM's, and that .NET's combination of value types and reified generics gives several concrete advantages (better memory usage, avoids many Java pitfalls such as "why can't I use a T[]?", etc.).
What do you think about not being able to use Void in generics?
Java's issue of not supporting primitive types pales in comparison to that.
Also, while reified generics and structs are fine, most of the optimizations one would reasonably suspect to be supported are just not there. Some time ago, it was still faster to pass value types by reference ... that's just embarrassing. Maybe RyuJIT does better here?
There are lots of things that could be better, and I don't mean to imply that .NET's system is perfect. But the fact that List<byte> contains an array of bytes rather than an array of much larger objects is often important in practice, and dwarfs any complaints I might have. Not being able to use System.Void in .NET generics is an occasionally frustrating quirk, not a huge problem, in my experience. Indeed, it's quite easy to create a new type with no public constructors to achieve the equivalent result (and F# does this with the unit type).
I heard the internals of the Scala compiler are really ugly and there isn't a true spec for why it does some of the things it does. So it might be hard to just match what the existing compiler does, because it's defined by its implementation and it's hard to understand?
Not really. Yes, some parts are a bit involved, but it's better than most compilers out there. People are complaining at a pretty high level here.
The death of the .NET port can directly be attributed to Microsoft's marketing of the platform. If they manage to make people question whether .NET will still be alive in 10 years, it's no surprise that people start dropping support for it.
My sample size is small, but the sentiment mainly comes from this: "Pacific Northwest Scala 2013 We're Doing It All Wrong by Paul Phillips - YouTube" [1]
While I usually agree with most things Paul Phillips says, that talk was mostly an exception.
There is nothing that can't be fixed by focused work on all the bits and pieces that need it, and that's exactly what the scalac compiler developers spend most of their time doing.
So many improvements are coming in every week that I can hardly stand using an older compiler version, because I always know that so much stuff has already substantially improved.
Despite what some people say, life in Scala-land is pretty damn good.
[1] - https://github.com/scala/scala/blob/master/src/library/scala...