Scala Native (github.com)
466 points by virtualwhys on May 11, 2016 | 259 comments

Is this a "true" Scala? As in, if I write a Scala program that only uses the scala stdlib, will it run on both the JVM and Scala native with no modifications to the source? Or, is it more like "a Scala" in the same way you would say "a Lisp"? I think a lot of the design decisions for Scala were made so that Scala would work on the JVM and easily inter-op with Java. It might make more sense to modify the language slightly to better suit the native environment. I'm seeing some hints to that on the page with the "@struct" and "@extern" decorators.

Clearly they added a bunch of new stuff that the JVM can't understand.

So it is either "Scala like LISP"


It is a superset of the Scala language. Meaning all existing stuff will work with it, but if you use new secret keywords it will work "better".

Which seems not quite ideal, because that means most Scala libraries will not be "tuned", and you will need all new libraries, just like ScalaJS made you need entirely new Scala libraries that did not use reflection.

Worried the Scala ecosystem is turning into a nightmare ;)

> just like ScalaJS made you need entirely new Scala libraries that did not use reflection.

Most Scala libraries never used reflection, so most of the ecosystem "just worked" on Scala.js.

I think quite a few things in scala-native are there to show the possibilities of this platform, but in the mid- to long-term those improvements will be supported everywhere:

- @struct and AnyVals: As soon as AnyVals can support more than one value on the JVM (as soon as Oracle finally gets some things done) @struct can go away, because it's equivalent to AnyVal.

- @extern and @native could also end up being the same.

- @inline and @noinline are already supported across platforms.
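As a concrete sketch of the single-field restriction mentioned above (the `Meters` class is a made-up example): a value class wraps exactly one field and is usually erased to its underlying value, while @struct would hypothetically generalize this to several fields once the JVM itself supports multi-field values.

```scala
object AnyValDemo {
  // Single-field AnyVal: the wrapper is normally erased at compile time,
  // so no Meters object is allocated on the heap.
  final class Meters(val value: Double) extends AnyVal {
    def +(other: Meters): Meters = new Meters(value + other.value)
  }

  def main(args: Array[String]): Unit = {
    val total = new Meters(1.5) + new Meters(2.5)
    println(total.value) // 4.0
  }
}
```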

> Most Scala libraries never used reflection, so most of the ecosystem "just worked" on Scala.js.

Do you have personal experience with this? I don't have hard numbers, but for me, basically none of my stack worked.

I had to find a new JSON parser. I had to find new validation for my form stuff. Etc. etc.

It wasn't impossible, but it was a lot of work.

It's not because of reflection. Most Scala libraries are reflection-free. The exceptions are Scala libraries wrapping Java ones, or Scala libraries doing Java serialization/deserialization, which often depends on reflection. For example, JSON parsing in Scala often piggybacks on Jackson, THE Java library for JSON parsing. However, there are JSON parsing libraries that are pure Scala and do not require reflection the way Jackson does.

But reflection isn't usually an issue for Scala, as Scala code does most of the same tricks at compile time. The Scala libraries that have problems are those dealing with multi-threading. Do you block threads anywhere, waiting for a Future or on a CountDownLatch? Any wait/notify anywhere? Sorry, that won't work on top of Scala.js. That doesn't mean you can't work around it, though it does take effort on the part of library authors. My own library (sorry for the shameless plug :)) is completely cross-platform: https://monix.io
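A tiny sketch of the difference using plain scala.concurrent (nothing Monix-specific): composing a Future with map stays cross-platform, while blocking with Await only exists on the JVM.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object NonBlockingDemo {
  def main(args: Array[String]): Unit = {
    // Cross-platform style: keep composing instead of blocking.
    val greeting: Future[String] = Future(21).map(_ * 2).map(n => s"result: $n")

    // JVM-only: Await blocks the calling thread, which Scala.js cannot do
    // (there is only one thread to block). It is used here just so this demo
    // can print synchronously; on Scala.js you would end the chain with
    // foreach or return the Future to the caller.
    println(Await.result(greeting, 1.second))
  }
}
```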

BTW, Scala.js is very new, but the whole ecosystem wants to support it, every major library is being ported if not ported already and everybody is talking about it ;-)

You're talking about Play Framework, I assume. They did some fancy work with Macros (i.e. Scala Reflection) which causes a lot of headaches for Scala.js. The reality is most of the reflection it does is nice but not necessary and I really wish they offered a "switch" to turn it off.

Sorry I've never used Scala macros, beginner's question: are Scala macros not completely compile time? Why would they use reflection? Is it just syntactic sugar for dynamic type inference?

If they just use reflection during compilation: why would that not be compatible with whatever the runtime is? Shouldn't that be unaffected by runtime and purely depend on the compiler's support for reflect?

Or am I looking at this the wrong way?

Scala macros are compile-time. The Scala reflection library (which works at runtime) shares most of its API with macros.

Macros should always work, reflection is harder as some information needs to be retained at runtime (either via Java reflection or additional data – which can both be problematic in Scala.js).

And as the original author of Scala.js pointed out in a Scala Days talk today, unrestricted reflection means that it's impossible to do dead code elimination, which is a non-starter for real-world use.

I've got at least 30 enterprise applications that disagree with your "nonstarter" assertion. Dead code is exactly what you find in real-world code.

I'm talking about within the main use case, which is front-end web development. I guess some folks might be cool with 10MB+ JS apps, but I don't think that would be terribly popular, and certainly not popular enough to bother with imitating Java reflection in JavaScript.

Well, Proguard does whole program optimization.

You have to specify the symbols to preserve explicitly.

I believe that works against JVM bytecode, which you don't get for Scala.js. Instead the compiler has a JS-specific intermediate representation which the optimizations run on.

Play uses it specifically in its JSON module; you can define readers and writers that let you parse the structure of a JSON object. To avoid having to be very explicit, you can define a case class that directly mirrors the structure of that object. E.g. given some case class that looks like this:

    case class SomeObj(someInt: Int)

you could do:

    val theObj = SomeObj((json \ "someInt").as[Int])

or:

    implicit val reads: Reads[SomeObj] =
      (json \ "someInt").read[Int].map(SomeObj.apply)

With reflection, you get:

    implicit val reads: Reads[SomeObj] = Json.reads[SomeObj]
    val theObj = reads.validate(json)

Seems trivial with this small example but with large objects it keeps your code much more maintainable and consistent since reading and writing from that case class is the same.

As far as it not working in Scala.js, from my understanding it has to do with how the reflection library is shared between both runtime and compile time implementations.

You can read more about it here: http://docs.scala-lang.org/overviews/reflection/overview.htm...

The tricks that Play JSON does don't need runtime reflection. Macros don't use runtime capabilities at all and all macros work in Scala.js. I'm actually surprised by claims of Play JSON needing the runtime reflection of "scala.reflect".

No, Play JSON doesn't work because it wraps Jackson, a Java library.

You're correct that it probably doesn't use scala.reflect, but it still uses reflection.

Jackson works using reflection. Play JSON therefore depends on reflection at least transitively.

Yeah, Play is where I started, then I moved my app into Scala.js land.

Had some cool parts to it, but not sure I was sold on the overall experience. I hate JS so bad and want to avoid it, but I kind of felt like I was mostly trading evils. Maybe if I worked at it enough it would finally get better? Not sure

When did you try it out? What didn't you like?

I ported a site written in vanilla JavaScript. I decided to faithfully port it without any changes before introducing Scala-specific libraries. That was pretty painful: a bunch of manual casting for UndefOr and js.Function.

Since then I've started using some of the Scala.js libraries/frameworks straight away on projects. It has been a much more pleasant experience!

It's not necessarily a bad thing: you can think about the various Scala implementations as dialects in a language family, like Pascal in the old days. It may be difficult to share code between these dialects, but it allows people to write code for a wide variety of environments using the same syntax and nearly-identical semantics.

The language is already a nightmare compared to a nearly syntaxless lisp so I'm not surprised that the ecosystem follows suit.

It's funny to me to see "syntaxless" used as a good thing :)

For me, syntax--good syntax, anyway--makes code far more readable.

Maybe. Syntax imposes structural rules on your code. If your domain doesn't fit the syntax, then you'll end up with far _less_ readable code. The draw towards "syntaxless" lisp-y languages is that you can build the syntax to fit your domain, rather than vice versa. The result is a DSL that naturally models the domain, rather than an unintuitive mess of data transformations to force your domain model to fit the structure imposed by the syntax and type constraints.

This thing where each Lisp developer writes their own DSL library is one of the reasons why the enterprise isn't so fond of Lisp.

It is always a steep curve to dive into other developers' code.

I agree. There's a complexity tradeoff with either decision.

Having worked with both approaches, there's a time and place for both. Being able to quickly dive into a codebase is not always a good thing. One thing conservative enterprise developers should like about DSLs is that they require new developers to structure their code in a way that fits the intended domain model. I've seen my fair share of code written by developers who "quickly dove in" and they almost never fit the model and end up causing a huge mess which may be unrecoverable. Conversely, if you don't trust your developers to write decent code, trusting them inside a dynamically-typed DSL is probably not a good idea either.

Wouldn't LISP become more popular if students started with LISP at school/university rather than C/C++, Java or Python? How can a young developer compare or choose if he/she had never been exposed to LISP?

It would help, but it isn't sufficient.

I had a very good CS degree in the mid-90's.

We got to use Pascal, C, C++, Prolog, Caml Light, Smalltalk, Oberon(-2), Component Pascal, Lisp, SQL, PL/SQL, x86 ASM, MIPS ASM, Java, across the 5 years it used to take (nowadays thanks to Bologna is no longer the case).

It doesn't mean we got to use many of those languages afterwards.

But Lisp is a special case: if Lisp workstations hadn't failed in the market, or if Sun and others hadn't picked UNIX as their workstation OS, maybe the IT world would be different, in spite of all the DSLs I was referring to.

It's pretty light on syntax compared to C/ALGOL-family languages. Function bodies are just another block (or expression), as are things like try. Operators are just method calls (except for precedence, and the precedence table is much shorter than C/Java/etc.).
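To illustrate the operators-as-method-calls point (the `Vec` type here is a made-up example):

```scala
object OperatorDemo {
  // A user-defined type whose + is just a method named "+".
  final case class Vec(x: Int, y: Int) {
    def +(o: Vec): Vec = Vec(x + o.x, y + o.y)
  }

  def main(args: Array[String]): Unit = {
    println((1).+(2) == 1 + 2)     // operator syntax is sugar for a method call
    println(Vec(1, 2) + Vec(3, 4)) // same mechanism for user-defined types
  }
}
```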

The goal is to be as "true" as possible by default with extra flags to trade some exact semantic aspects for performance. E.g. overflow semantics, bounds checks, null safety etc.

Apart from the base language we'll also introduce some language/library extensions for lower-level programming that are going to be a limited Scala Native "dialect" of Scala. E.g. pointers, structs, stack allocation, extern objects etc.

Is there a difference between @extern and @native?

I could have imagined that it would have been more compatible if Scala-JVM and Scala-Native used the same annotations (using JNI behind the scenes on the JVM).

I would explain the difference as: @native says please implement this Scala method of this Scala class in C, respecting all the JVM calling conventions and memory model. @extern says call this C function that never knew about Scala or the JVM, respecting all the C calling conventions and memory model.

@native implies JNI-style definition to be available along the Scala method definition. @extern lets you call straight to C code without any additional ceremony, all you need is a single forward declaration.
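To illustrate, a hypothetical @extern binding in the style of the Scala Native examples (the package and type names follow the early scala-native sources and may differ between releases; this compiles only with the Scala Native toolchain, not on the JVM):

```scala
import scala.scalanative.native._

// One forward declaration is the whole binding: the native linker resolves
// strlen against libc, using C calling conventions, with no JNI stub needed.
@extern
object libc {
  def strlen(s: CString): CSize = extern
}
```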

Hi all, I'm the author of the project and would gladly answer any questions.

How's the managed runtime implemented (GC, memory model, etc.)? How did you get rid of the JVM and JRE class library dependencies of Scala? What's the debugging story? It'd be great if this kind of info found its way onto the website. Is this in any way related to vmkit?

Excited about this, keep up the great work!

> How's the managed runtime implemented (GC, memory model, etc.)?

In a first release we're going to use Boehm GC. That's just a stepping stone, not a long-term strategy. Stay tuned for news on this front.

> How did you get rid of JVM and JRE class library dependencies of Scala?

We're reimplementing / porting from Apache Harmony all the things we need.

> What's the debugging story?

We're going to integrate with lldb.

> Is this in any way related to vmkit?

Not related in any way, apart from being based on LLVM.

> We're reimplementing / porting from Apache Harmony all the things we need.

Doesn't that create licensing issues? You already seem to have code from Harmony in the project, but the project's license is neither Apache nor does it mention it.

Additionally, won't that cause trouble with Oracle, considering the lawsuit against Google?

We're going to have an official guide on licensing in the next few days.

> We're reimplementing / porting from Apache Harmony all the things we need.

Would GCJ be any use? They had native compilation over a decade ago.

(Although developers moved on to IcedTea, iirc)

Hans Boehm's GC is actually really, really good. Before you embark on a massive engineering effort that will likely lead to little gain and a lot of pain, I would measure the overheads very carefully. I am not at all convinced precise GCs buy you that much over the Boehm GC for most apps, and they fundamentally kill interop with C code.

(I can't recall for sure but I believe I had this conversation with Martin in a taxi ride not too long ago :)

Mono was a long time on Hans Boehm's GC and despite feeding it a lot of metadata never got satisfying results with it. Eventually they replaced it with a custom GC (SGen).

Looks like it's currently using the Boehm–Demers–Weiser conservative garbage collector. In scala-native/rtlib/src/main/resources/rt.cpp it includes gc.h and allocates memory using GC_malloc, i.e. not in precise mode.


The garbage collector is typically the most difficult part of a language implementation, so I'm not surprised. On the other hand, a simple GC can be sufficient for many projects.

They can still replace it later once they're further along with the rest of the implementation. I doubt they'll keep it.

This is exciting but I've been burned in the past. Your commit graph seems to show you're serious so I hope you stick with it.

Will probably have more questions after I browse through the code but some basics:

Q: How much of the collections library is covered with full support? Partial support? NYI w/ ETA?

Q: Cross-platform support? If I wanted to run this on iOS and Android today, what do I have to do? What about on Windows or OSX?

Q: How does GC work? Is there a way to take more control over GC for Scala objects?

Q: Is it possible to 'link' in external dependencies, i.e. Joda Time. How do I do this right now?

Q: Does the compiler use a translation layer? If so, is it consistent with Java 7 or 8?

On another note, the c extern stuff is excellent — exactly what I’d like to see for portability and performance (without the headache of something like the JNI).

> Q: How much of the collections library is covered with full support? Partial support? NYI w/ ETA?

Non-parallel collections should work in the first developer preview (to be announced). Parallel collections may take a few releases to get working.

> Q: Cross-platform support? If I wanted to run this on iOS and Android today, what do I have to do? What about on Windows or OSX?

If you can compile and run C code with Clang on some platform, it's reasonable to expect Scala Native to run there eventually, as we're based on the same underlying compiler toolchain. Currently we mostly aim at POSIX environments.

> Q: How does GC work? Is there a way to take more control over GC for Scala objects?

Boehm GC for now, but that's not our long-term strategy.

> Q: Is it possible to ‘link’ in external dependencies, i.e. Joda Time. How do I do this right now?

We can link with libraries via C ABI. Java libraries need to be ported over to Scala.

> Q: Does the compiler use a translation layer? If so, is it consistent with Java 7 or 8?

We strive to keep the same semantics as Scala/JVM.

Ah, so since Scala Native is not built on the JVM you can't just take a pre-existing Java library and make it interop. You need it to be real Scala. The only options then are to use a converter or manually port the library from Java to Scala?

Yep, or drop the dependency. E.g. for Joda specifically, you should probably move to the Java 8 date API, which is more likely to be ported.

Bear in mind that Scala.js has the same issue, and a lot of work has already been done!

Is this actually affiliated with the institution behind Scala for the JVM? I saw the banner of the university on the Scala Native site; is that for real?

I have been so extraordinarily impressed with Scala, and with the contributions it's made in the PLT field, I'm very excited for Scala Native to start picking up steam.

Can we expect the same level of excellence from Scala Native as we've received from Scala for JVM?

I'm currently employed as research assistant at EPFL at Martin Odersky's lab and closely collaborate with Martin and Sébastien Doeraene (the author of Scala.js) on technical design and implementation of Scala Native.

Yes. Scala Native is developed at EPFL for real.

Will Lightbend be supporting this project in any way?

The reason I ask is that I see on GitHub that you are the only contributor so far. This project is quite a large commitment for a single person, I'm hesitant to play around with Scala Native without knowing more about future plans for support, milestones, etc.

Having said that, this project looks awesome. Nice work so far!

There have been so many successful projects started by one person. Shooting it down before seeing what will be done is a bad idea.

I'm interested in it breaking away from the JVM to become more performant, since that has been one of my concerns about Scala; there is a lot of good about JVM-based languages, but when in the ring with other languages like Go, you need to be quick to win.

I hope that others join him, but there's no reason to believe he can't do it if he sets his mind to it.

Note that I am not concerned about the fact that the project was started by one person, nor am I dismissing it altogether.

The author of this project is most definitely a smart and capable individual. However, at some point this project will require more work than one individual can achieve, hence the question regarding official support: is there any financing, plans for new contributors, etc.?

I understand your concern, but, as I said, a number of authors have not had financing, plans for new contributors, etc. in the beginning and have been successful; these are not requirements for success in the beginning.

For example, here is the story of how Python got its start from Guido van Rossum (quote from 1996):

"Over six years ago, in December 1989, I was looking for a "hobby" programming project that would keep me occupied during the week around Christmas. My office ... would be closed, but I had a home computer, and not much else on my hands. I decided to write an interpreter for the new scripting language I had been thinking about lately: a descendant of ABC that would appeal to Unix/C hackers. I chose Python as a working title for the project, being in a slightly irreverent mood (and a big fan of Monty Python's Flying Circus)."

And the history of how Ruby got its start by Yukihiro Matsumoto (quote from 1999) is similar:

"I was talking with my colleague about the possibility of an object-oriented scripting language. I knew Perl (Perl4, not Perl5), but I didn't like it really, because it had the smell of a toy language (it still has). The object-oriented language seemed very promising. I knew Python then. But I didn't like it, because I didn't think it was a true object-oriented language — OO features appeared to be add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for but couldn't find one. So I decided to make it."

Denys Shabalin is working under the author of Scala and corresponding regularly with the author of Scala.js, so he has more support than either van Rossum or Matsumoto did, and neither of them had the plans you spoke of; they just wrote the language because it is what they wanted to do.

Well, Ruby or Python would not have been suitable for many use cases in those days, and probably weren't as widely announced as scala-native is being.

Agreed. I see the statement "It's being developed at EPFL by one engineer" as a very high risk that once this student graduates or moves on to a different university, the project will be abandoned and die.

It's worth noting that Scala.js started this way, and is now mature and used by thousands of developers. I foresee a similar path for Scala Native.

It's worth noting that there was an LLVM-backed Scala project that was abandoned and died, so I definitely hope this avoids the same fate.

There was also a Scala.net project that was abandoned and died.

That's very common for academic projects.

Scala.net didn't die because it was academic. It died due to lack of interest, as nobody cared about a Scala for .NET.

ISTR a significant reason it died was the .Net platform's reified generics, which made making a language with generics and a type system more expressive than the underlying platform problematic, and especially made it difficult to have something that would be fully compatible with Scala-on-the-JVM.

That's not really true. Yes there are challenges with reified generics and yes Scala's generics don't fit. However you can work around it by doing the type erasure yourself. You can always consider a List[Int] to be a List[Any] and be done with it. And yes that's going to generate inefficient code, but ClojureCLR doesn't seem to mind.

No, the reason for why Scala.Net didn't happen is because nobody cared. To find proof of this, you only need to look at Clojure. Its .NET implementation is well maintained by David Miller, yet it's very unpopular. And the reason for why .NET developers don't care is because they don't have an open source culture. Or in other words, if it doesn't come from Microsoft, then it doesn't exist.

If you do erasure, then you can't also have the level of platform integration Scala has on the JVM: Scala on .NET either ends up as a different-but-similar language that isn't 100% compatible with Scala-on-JVM on a language level (as well as the library differences), or it's a second-class citizen that doesn't integrate well with the .NET platform, in which case, why have it?

So, in a sense, no one cared, because neither of the available options was anything anyone wanted.


It died for technical reasons: .NET supports reified generics, which made it pretty much impossible to support Scala's type system.

People who keep saying that erased generics is a stupid idea have no idea what they're talking about.

scala-native is facing insurmountable difficulties, the kind that will certainly not be conquered by a single student who will stop working on it as soon as he graduates.

Can you please post your slides from Scala Days?

EDIT: Thank you!


Is there a video online for it?

In the past, videos from Scala Days talks have usually been posted online through Parleys; I expect that will happen again this year, though it is usually a few weeks to a month or two after.

What are your mobile plans? :) I realize this is probably a ways off, but I've dreamed of being able to write core logic in a language like Scala & then using that code from ios & android.

Will scala native generate some c-linkable files?

iOS/Android support is the most requested feature. Stay tuned for updates on that front.

Scala Native generates LLVM IR that can be compiled to C-linkable code, but for now we focus on the "one statically linked application at a time" use case.

How do you plan to deal with the large amount of metadata required for some stuff? Like the timezone database for calendars, the CLDR for formatting stuff, or various Unicode tables for handling case folding/comparisons/etc.?

Will you be using native libraries/the data blobs installed on the host system? Or will you ship these things with each binary?

What are the plans with tail call optimisation in this version? or do you get that for free from LLVM?

We get that for free from LLVM.

The slides mention that proper tail calls "just work", even mutual ones. :-)

Do you see any possibilities to undo/mitigate JVM-specific design decisions which handicap the language? For example, could you not erase generics, or would that cause too many problems and incompatibilities?

That's a very interesting aspect that's also being explored in our lab under the Dotty [1] project and its Linker [2] subproject. It's not a direct goal of Scala Native.

[1] https://github.com/lampepfl/dotty

[2] https://github.com/dotty-linker

Idiomatic Scala doesn't care at all about erased generics. With implicits and macros and libraries built upon them like shapeless, you can do everything you'd want to do with reified generics and more and it'll be done and statically checked at compile time instead of dynamically at run time.
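As a sketch of that style (the `Describe` typeclass is a made-up example): the compiler selects the instance from the static element type, so no runtime type test on the erased list is needed.

```scala
object TypeclassDemo {
  // Hypothetical typeclass: one instance per element type we care about.
  trait Describe[T] { def describe(xs: List[T]): String }

  implicit object IntDescribe extends Describe[Int] {
    def describe(xs: List[Int]): String = "Natural numbers"
  }
  implicit object DoubleDescribe extends Describe[Double] {
    def describe(xs: List[Double]): String = "Floating point numbers"
  }

  // Resolved at compile time from the static type of xs; erasure is irrelevant.
  def erased[T](xs: List[T])(implicit d: Describe[T]): String = d.describe(xs)

  def main(args: Array[String]): Unit = {
    println(erased(List(1, 2, 3)))
    println(erased(List(1.0, 2.0)))
  }
}
```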

I know there are ways to work around it, like TypeTags, but on the JVM this is never going to run without extra work:

    def erased[T](xs: List[T]): String = {
      xs match {
        case ns: List[Int] => "Natural numbers"
        case fs: List[Double] => "Floating point numbers"
        case _: List[T] => "Something else"
      }
    }
Which is purely because of a JVM runtime limitation. Will native scala behave identically to JVM scala in this case or will you be able to improve?

The part that you're missing is that code like this in Scala is totally unnecessary and it never happens:

    case ns : List[Int] =>
Worst case scenario, you can always do the following and it would be more idiomatic, no type erasure standing in the way:

    val listInt = listAny.collect { case x: Int => x }
    val listDouble = listAny.collect { case x: Double => x }
But even that is totally unnecessary. You know why? Because your function does not make sense. What could you possibly do with a function that takes a List[T] and returns a String?

The only way this would make sense would be if you are talking about a List[Any] deserialized from somewhere (like some shitty JSON library). But then in Scala we don't really work with Any, that should never happen and if you see libraries that use Any in their API, drop them like they are hot ;-)

> Which is purely because of a JVM runtime limitation

No it's not. Talk to compiler authors. This is actually a freedom, because with reified generics in the runtime, language designers either have to deal with inefficiencies due to boxing and extra conversions, or have to limit their type system; for example, you can say goodbye to higher-kinded types. A runtime that has reified generics is not a multi-language runtime. In fact, many languages do type erasure. Haskell does type erasure. And why shouldn't it? Type casting and isInstanceOf checks make no sense in Haskell.
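Erasure is easy to observe on the JVM: both lists below share one runtime class, which is exactly why the match in the grandparent comment can't work.

```scala
object ErasureDemo {
  def main(args: Array[String]): Unit = {
    val ints: List[Any] = List(1, 2)
    val strs: List[Any] = List("x", "y")
    // After erasure both are plain cons cells; the element type is gone,
    // so no runtime check can tell List[Int] from List[String].
    println(ints.getClass == strs.getClass)
  }
}
```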

In point of fact, I think it's not so much a handicap: http://stackoverflow.com/questions/20918650/what-are-the-ben...

On what OS does it run right now?

Scala Native is currently being developed on Darwin/Intel/x64.

If you can compile and run C code with Clang on some platform, it's reasonable to expect Scala Native to run there eventually, as we're based on the same underlying compiler toolchain. Currently we mostly aim at POSIX environments.

Mentions using LLVM, so all OS should work. Not sure about the stdlib.

All OSs which LLVM supports should work. Not quite "All OS".

Fantastic work!

* Any plans to strengthen the Scala core lib so one doesn't have to reach out to JVM for mundane tasks like I/O or threading?

* Is there a simple way to use raw C / C++ libraries from Scala native?

* Is there a http://search.maven.org/ equivalent?

1. That's our long-term goal, but don't expect that to happen overnight.

2. @extern objects are an easy way to call C code. There is no C++ interop at the moment.

3. Not yet.

Heya, your project looks interesting, I must say. I was curious: is this mainly a one-man gig, or do you have support from the existing community (e.g. Odersky)?

I heard some rumors that someone (you) was doing it as a graduation thesis under Odersky, is that true or complete bollocks?

Just wanted to say thank you for your amazing work!


What is the rationale for doing this, given that the Graal / JVM guys are already developing an AOT compiler which can compile Scala to native binaries?

Is this motivated by performance? The desire to write C libraries in Scala? What?

Given the high-level, dynamic nature of Scala, I am skeptical that LLVM + Boehm GC will result in faster code. I expect it would result in slower code.

EDIT: OK, I just saw the link to your talk below. I feel you should know a few things:

1) The Hotspot team are adding AOT compilation to the JVM. There is a talk on it here. This mostly eliminates warmup time:


You can't eliminate warmup time entirely because profile guided on-the-fly optimisation is actually quite powerful and an AOT compiler that doesn't have profile data to work with can't generate code that's as good. But they support tiered AOT, where the AOT compiled code does basic profiling of itself, and then it can still trigger profile-guided JIT compilation if you want to.

2) Interop with native is being heavily worked on in project Panama: another JVM upgrade project. With the current prototype you can feed it arbitrary header files that are parsed with clang, and it generates Java interfacing files that are efficiently compiled (no overhead). There's a Pointer<T> type and so on.

3) Stack allocation isn't so easy to do unless you are willing to pretty wildly violate the safety of the language. It's also been found to not help much when very adventurous JVM implementors tried it anyway (Azul), because a good generational GC makes short lived allocations so cheap. Valhalla is doing value types on the JVM but that's about more than just stack allocation.

4) You can do manual memory management on the JVM already, via the Unsafe class. I'm sure a Scala DSL would make it have a more convenient syntax.
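For reference, a minimal sketch of off-heap memory via sun.misc.Unsafe (the reflective lookup of theUnsafe is the usual workaround, since the class has no public accessor; this is an unsupported API and may warn or break on newer JDKs):

```scala
import sun.misc.Unsafe

object OffHeapDemo {
  def main(args: Array[String]): Unit = {
    // Unsafe has no public constructor; grab the singleton via reflection.
    val f = classOf[Unsafe].getDeclaredField("theUnsafe")
    f.setAccessible(true)
    val unsafe = f.get(null).asInstanceOf[Unsafe]

    val addr = unsafe.allocateMemory(8) // raw 8 bytes, invisible to the GC
    unsafe.putLong(addr, 42L)
    println(unsafe.getLong(addr))
    unsafe.freeMemory(addr)             // manual free, or it leaks
  }
}
```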

5) Java 9 will have a static linker equivalent that generates stripped, standalone, optimised application packages that don't depend on any separate JVM. The AOT compiler will be a plugin to this static linker.

So I don't mean to discourage you if this is a fun research project, but you should be aware that the daydream is not one that the actual Java team is ignoring for mysterious reasons. They're working on it too, and are better funded and taking a more general and compatible approach. Getting to a usefully complete tool will take years, and by then there might not be much difference between the approaches.

What Oracle does is cute, but I will put my bet on people who have a solid track record at shipping.

Really excited about this. This is another reason why Scala is an extremely valuable tool these days. With Scala JVM, Scala.js with React Native and now with Scala Native (LLVM), there'll be literally nothing you can't do well with it.

That's one of the things I like about Clojure too, i.e. that is a good tool for many jobs, in part because it runs on the JVM, the browser (ClojureScript), and can also be used to build React Native apps (http://cljsrn.org/).

Does Scala.js with React Native work well? I avoided it because I didn't want to complicate my stack or have to jump through hoops when reading the react docs. If it all works well I might make the jump to scala front & back...

Reportedly, it works really well. Have a look at https://github.com/chandu0101/sri

Just saw this on twitter. Scala, you have my attention. There were a lot of talks about "the tools of yesterday" and "the tools of the future" lately. Scala getting closer to the metal, without the JVM is a significant step toward "the tools of the future".

I would argue Scala's main selling point is making the JVM palatable to people who hate Java. However, without the JVM it's entirely unclear why I would choose it over, say, Rust, Haskell, Go, C++, etc.

Scala is a pretty great language on its own.

Have a look at the adoption of Scala.js. The ecosystem of Scala.js is larger than the "compile-to-js" ecosystems of "rust, Haskell, go, c++" combined.

One of the main strengths of Scala is that people get things done. It's the only language of the ones you mentioned with reasonable support across three vastly different platforms.

Who actually uses Scala.js, though? It still seems firmly in the hobbyist camp.

> One of the main strengths of Scala is that people get things done.

This is also true of rust, go, c++, and arguably haskell. I don't see anything about scala that inherently makes it easier to "get things done". If anything, I've spent a good 10% of my scala development time (over hundreds if not thousands of hours) debugging library, compiler, and documentation bugs and inconsistencies. This is not good for a language where you get things done.

> Who actually uses Scala.js, though? It still seems firmly in the hobbyist camp.

This is false. You can see a few commercial uses in "Built with Scala.js" at https://www.scala-js.org/community/

I have also talked to people out of a few tens of companies who have told me they were using Scala.js in production.

That is a very narrow view of Scala. It's a great and powerful language.

No-one denies Scala is a fine language, but I don't think that's a narrow view of it.

It's actually a pretty good question. A lot of Scala's design is full of compromises made for Java inter-op. A huge motivation for using Scala is "I want a modern language which allows me to use all these existing Java libraries and tools". If you remove the ecosystem and the inter-op, what is there left for Scala? Why not use a better language?

Because all the other languages mentioned are better only from very specific perspectives. Scala's strength is precisely that it's a mutt: Want to write imperative code? Sure. Functional? No problem. How about a type system that's far more featureful than Go's? That works too.
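As a tiny illustration of that flexibility, the same function written in both styles (a sketch):

```scala
// Imperative: mutable accumulator, explicit loop.
def sumImperative(xs: List[Int]): Int = {
  var total = 0
  for (x <- xs) total += x
  total
}

// Functional: no mutation, just a fold.
def sumFunctional(xs: List[Int]): Int =
  xs.foldLeft(0)(_ + _)
```

Both are idiomatic Scala; the language doesn't force a choice.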

Scala gets a bad rap precisely because people are big fans of their own way of writing code, call it better, and think that anyone that wants something else is deluded. The Scala way is to say 'sure, you can do that too, to hell with purity'. I'd rather have power over purity any day.

Note: I use Scala in my day job. I consider it better than Java, but worse than other languages. I definitely agree that all languages accumulate compromises and inelegant hacks as they evolve. It's unavoidable.

That said, in my opinion Scala's blunders are many. You can find out about many of them from Paul Phillips, an ex-committer of Scala, but if he sounds too bitter to you (he does to me; at this point he sounds like there is bad blood between him and Odersky), here are some of mine:

- Null. Null will come back to bite you in Scala code, whenever you call Java libraries, and the occasional Scala library that didn't get the memo. Sure, you can wrap every suspect value with Option(...), but why should you do this?
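For context, the wrapping pattern being complained about looks like this (a minimal sketch; `System.getenv` stands in for any Java API that may return null):

```scala
// Option.apply turns null into None, so the null never escapes.
val home: Option[String] = Option(System.getenv("HOME"))

// Downstream code maps or pattern-matches instead of null-checking.
val banner: String = home.map(h => s"home = $h").getOrElse("no HOME set")

// The complaint: every suspect call site needs this discipline,
// because the type system itself still allows this:
val sneaky: String = null // compiles fine
```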

- Type inference is not as powerful as in a language with a cleaner type system such as Haskell. I'm told this is because Scala tries to be both OOP (inheritance) and FP at the same time. Here is a clear case of Scala's attempt at being a "jack of all trades" resulting in it being inferior to the sum of its parts.

- Type signatures in Scala collections and core types are extremely hard to read. You do not need to read them, but if you try, you are in for a world of hurt. This in itself is not a mortal sin, but of course if you're willing to forgive Scala this much complexity, surely you're willing to forgive other languages as well?

- Tooling is bad. It's getting better, but it's still bad. SBT is painful to use. As for IDEs, IntelliJ is now the recommended choice; I've tried both Scala IDE and IntelliJ, and they both suck. Slow, spurious compilation errors, uncomfortable to use.

- And finally, telling bit of personal experience: I do NOT use Haskell professionally, yet whenever I want to try a quick idea and see if it typechecks, I find it easier to open ghci (the Haskell REPL) and try my idea than do the same with the Scala REPL. What does this say about Scala?

> all languages accumulate compromises and inelegant hacks as they evolve. It's unavoidable.

Scala devs deprecate and remove things that haven't worked out well. They have an established track record of making these migrations easier with each release.

> Null

Upcoming versions will make references non-nullable by default. If you want things to be nullable, you will have to opt-in explicitly with T|Null.

> - Type inference

Type inference is not a huge priority, because people think it's a good thing that declarations require type annotations (just as is recommended in Haskell, too). One of the last pain points, higher-kinded unification of type constructors, has been fixed recently, which will make type inference "just work" in exactly those cases where its absence was most annoying.
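The higher-kinded unification issue referred to is SI-2712; the class of code it fixes looks roughly like this (a sketch; requires `-Ypartial-unification` on 2.12, on by default in later versions):

```scala
// A function generic over any one-hole type constructor F[_]:
def wrap[F[_], A](fa: F[A]): List[F[A]] = List(fa)

// Either takes two type parameters, so unifying it with F[_] requires
// partially applying the left one: F = Either[String, _], A = Int.
// Before the fix, this call simply failed to compile.
val r: List[Either[String, Int]] = wrap(Right(42): Either[String, Int])
```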

> - Tooling

I think tooling is pretty amazing. Sure, Java has still some advantages in some places. But the tooling is superior to almost all other languages out there. SBT is amazing. IDE support is getting vastly better, not only have Eclipse and IntelliJ improved vastly, but also IDE support for Emacs, Vim, Sublime, etc. is coming along at a rapid rate, and it's just amazing to use. There are so many tools in the Scala ecosystem which just don't exist in this combination anywhere else.

> If you want things to be nullable, you will have to opt-in explicitly with T|Null

I can see how the opt-in null references might help prevent you from writing Scala code that uses null, but how does it help when interacting with Java code?

Thanks for the downvote. Let's close this discussion.

I didn't downvote you. In fact, I consider your post interesting, because (for example) I didn't know about the opt-in nullable references. Here's a +1 to counter the downvote.

Alright. Sorry.

Regarding your question about nulls and Java:

I think that there is not much Scala can do here. All existing evidence from other languages that tried this shows that there is a large semantic gap between "nullable values" and "values of unknown nullability".

As Scala is much more independent of Java it is a much smaller issue, but improvements with Java interop require action from Java library authors first.

The approach of supplying external nullability meta data has largely been a failure, because

a) authors are not aware of the stated, external constraints, so they could break these assumptions in future releases without even knowing

b) it's really really hard to retrofit nullability guarantees into libraries which have been designed without this requirement in mind

c) the existing ecosystem is not very amenable to these guarantees, as nullability metadata would be largely limited to final classes and members, because everything else could be subclassed or overridden in third-party code, breaking these assumptions

As soon as Java library authors themselves start using @Nullable/@NonNullable annotations everywhere, there is a good opportunity of reading these annotations and typing the values accordingly, but without it, benefits are very slim.

The planned changes are valuable for dealing with nullable types, but as mentioned "unknown nullability" needs a different solution, and I think it's currently not worth adding something to the languages as long as there is still hope for widespread adoption of nullability annotations.

I'm interested in exploring what makes a build tool good. Which design choices does SBT choose that makes it amazing?

- I've never experienced null problems in Scala. There's theory and there's practice. In practice, if an NPE does happen, you treat it as a bug and wrap it. I don't experience problems, because Scala libraries are well behaved and for Java libraries I read the docs.

- Being a "jack of all trades" means Scala has the superior module system. In Scala you can have abstract modules, the way you have in Ocaml. In Scala type-class instances are lexically scoped, whereas in Haskell they are global. Haskell's type-classes are anti-modular, which is why there are people avoiding type-classes. A big part of what makes Scala so good is OOP. Haskell needs extensions to achieve similar functionality in a half-baked way and modularity is the main complaint of people coming to Haskell from Ocaml.
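The lexical-scoping point can be made concrete with the standard `Ordering` type class (a sketch; in Haskell a type gets exactly one global `Ord` instance, whereas here a competing instance lives only in one scope):

```scala
val words = List("b", "C", "a")

// Default instance from implicit scope: ASCII order, uppercase sorts first.
val sensitive = words.sorted               // List("C", "a", "b")

// A competing instance, visible only inside this block.
val insensitive = {
  implicit val ci: Ordering[String] = Ordering.by(_.toLowerCase)
  words.sorted                             // List("a", "b", "C")
}
```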

Btw, if you ask a C++ developer to describe OOP, he'll say it's the ability of having structs with a VTable attached. Most people think OOP is about state. That's not true, OOP is about single (or multi) dispatch and subtyping (having a vtable), hence it's not at odds with FP, unless you want it to be ;-)

- SBT is amongst the best build tools ever available. I could rant all day about the clusterfuck of Javascript (npm, Bower, Grunt, Gulp, Brunch.io, etc.) or Python (easy_install, setuptools, virtualenv) or .NET (MSBuild, Nuget, dotnet) or Haskell (cabal). For all its quirks, SBT is by far the sanest dependency and build management tool I've worked with. In fact, amongst the best reasons for preferring Scala.js is being able to work with SBT and avoid Javascript's clusterfuck completely.

- I've been working with IntelliJ IDEA with the Scala plugin for the last 3 years. I've had some problems with it, but all minor and it's amongst the best IDEs available, giving you everything you expect out of an IDE. Other platforms either don't have an IDE (Haskell), or require extra commercial plugins to behave like an actual IDE (Visual Studio).

And let me give an example: in IntelliJ IDEA I can click on any identifier in any third-party dependency and it automatically downloads and shows me the source code and I can set breakpoints and debug any third-party dependencies that way. Such a simple thing, yet it's a tough act to beat. Try it out in Visual Studio sometimes.

> I've never experienced null problems in Scala. There's theory and there's practice.

Yes, and I'm speaking in practice! If your language allows nulls, they will be used. I've experienced plenty of NPEs in Scala code, both from Java libraries and from misbehaving Scala code, to know this is real. Lucky you if you haven't experienced them! (As an aside: avoiding NPE is almost never a matter of simply "reading the docs". Many times there aren't docs at all, and even when there are, nulls are seldom documented).

> Being a "jack of all trades" means Scala has the superior module system

Your comparison with Haskell modules is fair. But that's just one aspect. Being a jack of all trades means Scala has poorer type inference, way worse type signatures, and generally it feels less clean than both a purer OOP language and a FP one. Scala does fine in all fronts, but not great. If you want to do FP, there are far better languages. I assume it's the same with OOP.

> SBT is amongst the best build tools ever available

If true, that's... unfortunate. SBT is uncomfortable and bizarre. Before you mention it: Maven looks likewise bizarre to me. These are tools to suffer with resignation, not to celebrate. Talking about Haskell, I'm trying to learn stack, which some say makes cabal more bearable. Are you familiar with it?

I agree IntelliJ is now an acceptable IDE for Scala. It's still ages from the comfort of using Eclipse with Java (at least what Eclipse used to be, not the unbearable beast it is now), and to be honest, IntelliJ only recently became usable. A couple of years ago (definitely less than 3 years), the IDE choked on Scala code and highlighted compilation errors left and right where there were none -- and I'm talking about vanilla Scala code, nothing advanced.

I'm interested in exploring what makes a build tool good. What does SBT do right and what do the others you mentioned do wrong?

Couldn't agree more with all of this. In practice NPEs are extremely rare in decent Scala code, and very easy to fix. The tooling is great, sbt has a slow startup time but is otherwise good, and I too love the mix of OOP and functional programming in Scala, they're really not at odds with one another.

What languages would you say have definitely better tooling? I'd say for sure the .NET languages, C++, Java, and JavaScript, but I'd put Scala just a level below that, no? The tooling stories for Go, Rust, and Haskell aren't as strong (yet), but that's definitely going to change.

Not parent, but I would reduce it down to

- C# with VisualStudio + JetBrains addon

- Java with IntelliJ

Why not the others you mentioned?

- IDE support for F# is not very good.

- VB is too dynamically typed to be reliable.

- C++ IDEs seem to be constantly fighting with constructs that manage to break the IDE's understanding of the code. It's gotten better with LLVM, but even VisualStudio is far away from providing a reliable experience.

- JavaScript is dynamically typed, so IDE support is not reliable.

- VB.NET support is at the same level as C#

- F# does lack some Microsoft love, but Visual F# Power Tools does improve the experience a lot

- Netbeans and Visual Studio 2015 are quite good in JavaScript support, specially on code where JsDoc comments are available

I'm interested in exploring what makes a build tool bad or good. What do you dislike about SBT?

Sorry, I didn't see your question before. I don't know if you'll see my answer now, but here is what I dislike about SBT:

- It's unclear whether SBT wants you to write build.sbt or build.scala project files. It's recommended you use build.sbt by default, but I've seen plenty of "simple" projects, sometimes examples but often templates generated by tools, which default to build.scala. And now I've read build.scala is deprecated...

- The syntax of build.sbt is confusing. Is it Scala? Is it a DSL? Knowing Scala is certainly not enough to understand SBT (even for build.scala files!), but if I remember correctly one of its selling points was that it was "just Scala".

- Let's not get started about build files for build files...

- The structure of the file is confusing. I never know when defining a key is going to work or not. There seem to be tons of ways of doing something, which is fine in a general-purpose language but undesirable in a configuration format!

- The many third-party plugins are confusing to configure and are often poorly documented, which reminds me of Maven's discouraging "plugin hell"... I don't think I've ever seen a plugin for Maven which was adequately documented, which means you end up copying someone else's configuration without fully understanding it. I fear SBT is the same.

- Neither IntelliJ nor Scala IDE are entirely happy compiling SBT's project files. Thankfully the support has gotten better, but it's not 100% there yet. This used to be maddening in the recent past; a "compiled" project file which your IDE fails to properly understand is unusable!


> Scala's strength is precisely that it's a mutt: Want to write imperative code? Sure. Functional? No problem. How about a type system that far more featureful than Go? That works too.

This also essentially describes C++. This is also why both languages can be so terrible to use: every library has its own dialect.

The C++ comparison is great because it tells readers that the person making it has no idea about C++, Scala, or both.

I remember Martin Odersky explaining how the Scala compiler has to do a lot of work to compile FP code to a JVM that is mostly geared for the Java language only. Because of that the compiler is well positioned to compile to other targets as well. Like Javascript or LLVM.

Are you talking about compromises in the language or the standard library? In the language itself I can only think of type erasure for generics having to follow the JVM choice.

Snif. sbt demoNative/compile seems to work. But sbt demoNative/run produces a scala.scalanative.linker.LinkingError Unresolved dependencies: `#scala.Serializable`

I don't think the amount of compromises in Scala is greater than in any other language. No matter how clean a language starts out, after a certain time it always accumulates compromises with its own previous design decisions, unless it's kept in a never-ending alpha state.

Can you provide an example of what you think is a better language?

Over rust/go/c++: more powerful type system, more concise code. Over Haskell: explicit about lazy-or-not, support for traditional OO inheritance when you need it.
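The "explicit about lazy-or-not" point, in Scala terms (a sketch): evaluation is strict by default, and laziness is opt-in and visible at the declaration site:

```scala
var evaluated = false

// `lazy val`: the body runs at most once, on first access.
lazy val expensive: Int = { evaluated = true; 42 }

assert(!evaluated)        // nothing has run yet
val x = expensive         // forces evaluation
assert(evaluated && x == 42)

// By-name parameter (`=> A`): evaluated on each use inside the body,
// or never, if the body ignores it.
def unless(cond: Boolean)(body: => Unit): Unit = if (!cond) body
unless(cond = true) { sys.error("never evaluated") } // does not throw
```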

I've heard a lot of the early adopters of Scala are moving away, including LinkedIn, Twitter, etc.

No, not really.

Some people from losing ecosystems like to spin it that way¹, but the truth is people are happy, adoption is great, and companies see that pain-points get addressed.

Of course there are companies which drop Scala, but often like in LinkedIn's case it's not caused by a dissatisfaction with Scala, but new leaders making different decisions like "we are using 10 different languages, we should consolidate on X and Y!".

The last unhappy company which is commonly mentioned is this social-network company Yammer. That was half a decade ago, and most of the issues mentioned have been addressed in the mean time. (Yammer has been bought by Microsoft, so it also made some sense to use a language that makes getting bought out easier.)

¹ I remember there was some very very bitter Groovy evangelist a while ago.

I know for a fact LinkedIn dropped Scala because of dissatisfaction with Scala. Things like maintainability and the fact that it was just a single company developing Scala caused them to switch away.

Kafka is deprecating their Scala clients because of maintainability issues as well. Scala keeps making breaking changes on dot releases, making it impossible to maintain long term.

That's funny, when I talked to Neha (ex-Linkedin, cofounder of Confluent) she explicitly denied this meme about moving away from Scala.

The new Kafka client library is in Java just to reduce the number of dependencies; the server is still written in Scala. Strangely, using Java hasn't stopped Confluent from making breaking changes on dot releases of the client... which really puts the lie to your explanation.

I specifically said the Scala clients are being deprecated. Go ask Neha to confirm this.

No, you specifically said they're being deprecated because of maintainability issues inherent to scala, which is not true.

Nope. They were getting sick of having to maintain different versions of the Scala clients because of breaking changes, as per a Kafka committer.

Even if some random kafka committer that you can't source believes that, the reasoning doesn't add up:

SBT makes it fairly straightforward to cross publish for different dot releases of Scala, and libraries in the Scala ecosystem do it all the time.

Scala dot releases are far apart. It's been over 2 years since 2.11, and 2.10 released in 2012.

By contrast, the JAVA client that the Kafka project published in November is already undergoing breaking changes for the current release candidate. It's pretty clear that avoiding breaking changes is not an overriding concern for them, and even if it is, it's equally clear that Scala was not the inherent problem.

Well, then we got different sources.

Scala is developed by the EPFL, Lightbend, and ScalaCenter. That's three, not one.

> Scala keeps making breaking changes on dot releases, making it impossible to maintain long term.

That hasn't been true for more than half a decade.

Sorry, I meant minor releases, like 2.10 to 2.11, etc.

These are not minor releases.

By definition those are minor releases.

Scala's versioning scheme is publicized and well-understood. Are you trying to blame Scala for not adopting SemVer before SemVer even existed?

Very silly.

Note that the use of major.minor.revision actually predates the formal SemVer specification by something like 20 years.

That doesn't mean you're wrong; but it does explain the confusion. I'm a fan of Scala but I'll admit I was confused by this as well when getting started with it since I've been trained to see everything after the first decimal as "minor".

> I remember there was some very very bitter Groovy evangelist a while ago

Apache Groovy has always been good for scripting stuff on the JVM, including for Grails and as a DSL for Gradle. Unfortunately, one of its backers tried to retrofit it to do static compilation, and pitched it as an alternative to Java instead of simply an accompaniment. That's when the trouble started. Groovy's static compilation can't be as good as that of a language designed from the ground up to be statically compiled, e.g. Java, Scala, or Kotlin. To this day, Groovy's own codebase is still Java. Even though some people talk anonymously of a "1 million line Groovy codebase where static compilation is required", Groovy's own "million line codebase" isn't statically compiled Groovy, and until it is, any claim of such codebases is just a claim, nothing more.

I like Scala, and this is purely anecdotal: but with Java 8 becoming pervasive and large companies adopting Scala style guides that recommend most of the advanced features away [1], Scala becomes a less compelling choice.

Sure, companies aren't dropping existing code in Scala, but I would be surprised if large organization were to push for new code to be written in Scala instead of Java 8.

Edit: link

[1] https://github.com/twitter/scala_school

There are worlds between Java 8 and Scala and the gap is widening with every release.

Of course you can write Java in Scala and then the benefits over Java aren't that high (but are still there), but Scala is so far ahead in terms of language compared to Java that most current Scala libraries just _can't exist_ in Java.

Adoption these days seems to be great and accelerating, and given the growth of the library ecosystem there are more and more reasons to leverage the benefits of libraries written in Scala especially compared to legacy stuff written in Java.

A teaching page will naturally recommend away from advanced features during the teaching phase. Twitter's style guide for first-class scala code is http://twitter.github.io/effectivescala/ and code written in that style has huge advantages over Java.

Twitter is not moving away from Scala. Scala continues to grow at a faster rate internally than any other language, and is ~50% of Twitter's backend codebase.

Twitter is not moving away from Scala.

Source: I work there.

Hi, Some questions

- Can this reuse existing Scala code?

- How does it compare against Rust/GO/Swift? Why use this over them?

- How about libraries?

1. Yes, base language is the same.

2. It's mostly the question of what language you're most comfortable with. I personally love Scala sans the JVM and that's why I'm developing Scala Native.

3. Subset of java.* is going to be supported. Pure Scala code that uses that subset and other Scala libraries should just work on Scala Native.

A difference is that Scala can run on the JVM, the browser (and other JS ecosystems) via ScalaJS, and now native. It doesn't get more universal than that!

Native is all that it takes. Bytecodes are great for optimization passes and interactive development sessions.

Java should have gone the native route for deployment, instead of having a few architects at Sun being religious against AOT compilation.

The other two are important too: the JVM lets you leverage lots of existing libraries. The browser lets you reach more users.

Yes, but many of the use cases those libraries cover are also covered by other libraries.

The browser, well personally I am not a big fan for anything besides interactive hypertext documents. Even though I worked several years as web developer, I tend to favor native + network protocols instead.

Many researchers jumped into the JVM because it provided a fertile ground for language research, without having to build their own.

Apparently LLVM brought a change to that and now everyone is using it instead of the JVM for language research, with the benefit of always having JIT and AOT toolchains, with GCC trying to follow up on that.

What I dislike in the JVM is the religion against AOT compilation (only third-party commercial JVMs offer it) and missing out on value types, even though Eiffel, Oberon, and Modula-3 all had them.

Personally I think all language toolchains should offer JIT/AOT, with the developers using the best one for each deployment use case. Although for dynamic languages, AOT is probably not a good use case.

There are lots of existing libraries in PHP, why not target PHP?

So, is this "released" yet? There are no downloadable installers or even instructions to compile.

No, it's not released yet, just publicly announced. Developer preview is planned to be released in the future. Stay tuned for further updates.

Looks like it's based on Dotty, so Java 8 is a requirement.

It's currently based on Scala 2.11 with 2.12 and Dotty support coming in the future.

It is not released. But you can publish it locally. Compiling the demo-native subproject seems to work. When I try to run it I get a LinkingError though.

Demo reproduction guide to be published in next few days.

Given that Apache Harmony is no longer being developed, you might want to coordinate with the Avian people, since they also reuse Harmony.

Another thread covers in hands-on way Getting Started: https://news.ycombinator.com/item?id=11699897

What about GC?

In a first release we're going to use Boehm GC. That's just a stepping stone, not a long-term strategy. Stay tuned for news on this front.

True. I wonder if the "Extern objects" section mentioned malloc for a reason.

In a way, simple Scala code resembles Swift, as commented by this person: https://leverich.github.io/swiftislikescala/

Now the type system of course is entirely different.

    val label = "The width is "
    val width = 94
    val widthLabel = label + width
I really wish more languages would distinguish concatenation from addition, especially when they do type coercion. There's a very specific reason Perl opted for '.' to concatenate strings instead of +, which is that it disambiguates the following:

    val first = "2"
    val second = 3
    val together1 = first + second
    val together2 = second + first
Not to mention, addition is commutative and concatenation is not (1+2 == 2+1, but "one" + "two" != "two" + "one")

Hopefully Scala at least has some very well defined and easy rules for exactly how string concatenation works, but I think it's an uphill battle to argue that '+' is a good concatenation operator.
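For what it's worth, Scala's rules here are well defined, if arguably still a footgun (a quick sketch): any `+` with a String operand concatenates, converting the other side via `toString`:

```scala
val first  = "2"
val second = 3

val together1 = first + second // "23": String's + converts 3 to "3"
val together2 = second + first // "32": Int also has a +(String) overload

// Both compile, neither is arithmetic, and swapping the operands
// changes the result: exactly the commutativity complaint above.
```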

'+' works great as a concatenation operator, so long as you don't ever try to implicitly convert the values. Case in point - Python:

  >>> 1+2
  3
  >>> "1"+"2"
  '12'
  >>> "1"+2
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: Can't convert 'int' object to str implicitly

Those are literals, not variables. My whole argument is about the ambiguity introduced when you are implicitly coercing variables. But yes, if the language does not coerce between string and numeric types, it's a non-issue. I'm specifically talking about supporting coercion but continuing to use '+' for concatenation of strings (and ==/!= for equivalence testing of strings as well).

Maybe, but in 20 years of programming (and 10+ of them with dynamic languages) I don't think I have had that issue more than 2-3 times.

(1) Passing an int/string when a function expects the other type? Yes, I had that happen lots.

(2) Concatenating a string and a (coerced) int?

It's either a much narrower subproblem of (1), or exactly what I wanted to happen in the first place (e.g. print "Your score is: " + n + " points").

I've dealt with it lots. Concatenating variables and string literals like your example is very rarely ambiguous, but what can be ambiguous is using + on two variables whose values were returned from elsewhere.

Does that function that gets the user supplied value for height automatically convert to a numerical type, or is it a string?

What about the function that gets the user supplied value for width?

Is it stored in a back-end format that strictly types those fields to ensure that if the data is set from some less frequently used method it has the same type (e.g. are you storing as JSON, and can this JSON be created from multiple code paths?)

Given a simple statement like "foo = bar + baz", why should there be ambiguity when there doesn't need to be? The glib answer to the problem is to tell people to use better variable names, or document their functions better, or any number of suggestions that put the onus on the programmer, when this is a very simple problem to solve. Don't overload operators in the core language for conceptually different actions. We don't "add" strings, we concatenate them. "Add" is, at best, shorthand for "add to the end of", which means to append.

Coercion can be a useful tool, except for when it creates confusion. There are well known and tested ways to combat this confusion, so I'm not sure why we haven't adopted them widely where coercion is in use.

P.S. It's not just addition. Testing for equality is another common case where coercion can cause problems. The answer to that is equally as simple, use a different operator (such as "eq" and "ne" for string equality in Perl). Numeric equality is not the same as string equality (10.0 is equivalent to 10, but "foo." is not equivalent to "foo"), and making the programmer think (not not!) about what will happen when using an overloaded equality operator on two variables just shifts a small amount of up-front cost when learning the language (we have different operators for string concatenation and equivalence testing, learn them) to a cost imposed every time the programmer has to use those operators.

I think concats like 1 + "111" can be disabled with import scala.Predef.{any2stringadd => _, _}. Can't imagine how many things it will break though

> There's a very specific reason Perl opted for '.' to concatenate strings instead of +

I don't see making string concatenation ambiguous with property access being a real improvement. At least use `..` or some other unused punctuation.

In Perl, '.' is not property access. I'm not trying to imply '.' is the best solution for a language where it's already in use, but that a separate, unambiguous symbol would be better. For example, Perl 6 moved to using '.' for calling methods, and uses '~' for concatenation. It's not like just changing '+' is sufficient anyway, there is still string equivalence to consider (outlined in my other replies).

D does.

auto greeting = "Hello" ~ " World!";

immutable x = 9 + 99;


You can also use: s"The width is $width"

Which is good, but interpolation is really just a specialized case of concatenating string literals and variables, not variables to other variables, which is where some ambiguity is introduced.

You can also do s"$varOne$varTwo" and that concatenates two variables.

Yes, as an alternate syntax, interpolation works well to disambiguate your intent, which supports my point. It's really just a highly specialized form of a different operator, one that says: I work on strings, so anything I see that isn't a string gets coerced to a string. For example, in Perl

    $varOne . $varTwo
Is entirely equivalent to

    "$varOne$varTwo"

The question is why Scala chose to use + when it had other, non-ambiguous options, like above.

To be more familiar to Java developers.

There are some thoughts about migrating people away from + toward string interpolation. But automated conversion tools are required before this can happen.

D uses + for addition and ~ for concatenation

Otherwise known as "Haskell"?

Much to the Scalaz crowd's dismay, Scala always was an ML first and foremost. Any similarity to Haskell is incidental to Haskell/ML's shared background in strongly typed functional programming.

Does it matter much whether one replaces "Haskell" with "ML" in the parent post?

Incidental? It's pretty well known that Scala was heavily influenced by Haskell. In particular, do notation (for comprehensions), pattern matching, lots of standard lib classes (ex Maybe (Option)), many of the methods in the collections library (map, fold, take, etc)
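For what it's worth, the do-notation parallel is easy to see: a Scala for comprehension is sugar for flatMap/map calls, which is what lets it work over Option much like do works over Maybe. A minimal sketch:

```scala
// The comprehension...
val sum = for {
  a <- Option(1)
  b <- Option(2)
} yield a + b

// ...desugars to exactly this chain of flatMap/map:
val desugared = Option(1).flatMap(a => Option(2).map(b => a + b))

println(sum)              // Some(3)
println(sum == desugared) // true
```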

Anyway, I wasn't trying to disparage either of them (or this project). Just poking fun at the seemingly common "I want to use Haskell but I'm forced to deploy on the JVM" use case for Scala.

> It's pretty well known that Scala was heavily influenced by Haskell.

Except that Martin Odersky himself has explicitly said that the primary influences were SML/OCaml and Java.

> In particular, do notation (for comprehensions),

I'll give you that one

> pattern matching

Which pre-date Haskell by nearly 2 decades in ML

> lots of standard lib classes (ex Maybe (Option)),

Also pre-date Haskell by nearly 2 decades, in addition to being named exactly the same as in ML.

> many of the methods in the collections library (map, fold, take, etc)

Which pre-date Haskell by almost 4 decades with origins in LISP.

> Just poking fun at the seemingly common "I want to use Haskell but I'm forced to deploy on the JVM" use case for Scala.

Which is where the Scalaz folks come in. They want to piggyback off of something successful, as opposed to Frege, which is closer to what they actually want (avoiding success at all costs?). Who knows, maybe they actually like Scala/ML's strict-by-default evaluation, which is quite pragmatic, but definitely more in line with ML than Haskell.

Given the fact that OCaml seems to be adding type class support and has support for monadic comprehension (do/for), it seems like these ideas are getting adopted in multiple places.

As a Scalaz committer and as a developer working at one of the largest scala shops (that makes heavy use of Scalaz), we don't try to write Haskell on the JVM. We do try to write pure, functional code as much as we can, and so what we end up with falls somewhere in the middle between an OCaml and Haskell.

> OCaml seems to be adding type class support

Funnily enough, Scala's implicit-based type classes were the inspiration for the coming modular-implicits type classes in OCaml (as opposed to the Haskell approach). It wouldn't be the first time that two languages have mutually inspired each other (the Rust and Swift teams have both acknowledged certain design decisions inspired by their counterpart).
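The implicit encoding being referred to looks roughly like this (a standard Scala 2 pattern; `Show` is just the conventional example name, not any particular library's API):

```scala
// A type class is a plain trait...
trait Show[A] { def show(a: A): String }

// ...instances are implicit values (the companion object is searched
// automatically during implicit resolution)...
object Show {
  implicit val showInt: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
}

// ...and constraints are ordinary implicit parameters.
def display[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

println(display(42)) // Int(42)
```

Unlike Haskell's built-in instance resolution, instances here are first-class values you can pass, shadow, and scope, which is exactly the property modular implicits borrows.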

reference for ocaml adding type class support?

It's pretty well known that any post that starts with "It's pretty well known" is usually misleading

> It's pretty well known that Scala was heavily influenced by Haskell. In particular, do notation (for comprehensions),

Is it documented anywhere that this was inspired by Haskell? To me, Scala for comprehensions seem syntactically more like a generalization of Python comprehensions than Haskell's monadic do notation.

Python's list comprehensions are a specialization of Haskell's list comprehensions[0].

I think Scala was influenced by Haskell's list comprehensions.

0: needs citation, though I remember reading something semi-official about Python's comprehensions being influenced by Haskell. Maybe Python.org's page on Haskell.

Well at the very least, Haskell's came first; they were part of Haskell 98 (finalized in 1999) and Python 2.0 (the first version with comprehensions) was released in 2000.

Not to mention that you can get similar power to typeclasses in the language.

Yeah, think of it as practical Haskell that you can actually use in your day to day work ;-)

Haskell is a practical language, and one usual impediment for using it in your day to day work is "we must use the JVM". What happens if you remove that requirement? :)

Haskell seriously lacks mature tooling, like IDEs, and also libraries. It's still a niche language and it suffers from that. I think it's pretty clear it's not going anywhere close to mainstream any time soon.

That's debatable. Some tooling is needed, some (like IDEs) are mostly unnecessary; crutches we're used to because some languages are unbearable to use otherwise. IMO it's a niche language because, unlike Scala, it requires a clean break from the way the mainstream industry sees programming languages.

But don't mistake that for lack of practicality!

I don't subscribe to the point that IDEs are crutches any more. I don't rely on IDEs to generate code for me, I use them to explore code and refactor it efficiently.

Good refactoring support in IDE can save you a lot of time and errors, especially in the statically typed languages. It has nothing to do with language being unbearable, but a lot to do with the size and complexity of the code you have to work with.

I have nothing against Haskell, and before sticking with Scala I've seriously considered it. However, having things like Akka in Scala, an actor/OTP framework that is the only one that is even remotely comparable to what Erlang has, was a huge benefit.

Other tools like Play and Spark also made choosing Haskell impractical for me.

Besides, after a couple years of excursion into a pure FP-only approach to development, I've come to understand that OOP is not in any way in opposition to FP, but can be a rather welcome addition, especially when you work with big code bases that you plan to maintain for many years to come. And that is an area where Scala really shines.

Can you elaborate on this OOP for big code bases argument as I've seen it wheeled out in defence of Java and PHP5 many times but I just don't buy it. Often classes in OOP languages are used not for instantiating objects, which I would argue is their raison d'etre, but simply to encapsulate some data and a bunch of methods. What advantage does this have over simply using your language's namespacing properly? In Clojure and Python, for example, it's easy to place a bunch of functions in a single file, add a namespace and you can manage a codebase of any size. When PHP introduced namespaces in 5.3 I couldn't understand why its Java-esque OOP was still so idiomatic as they solved the main problem it was invented for.

I'd just say that objects are more powerful constructs than structs, records and simple shapeless datastructure. They enforce structure on your data, and give you at least some clues what you can do with that data, among other things.

After writing and maintaining a bunch of code in Clojure, Erlang and functional style Ruby, I feel like OOP+FP gives me greater flexibility in my design options and allows to architect better solutions to my problems.

Haskell doesn't seem to be a very good language for doing hard or soft realtime, or simulation (realtime games, flight simulation, robotics, industrial control etc.). On the other hand, given appropriate memory management which looks to be in the works, native Scala seems a pretty good fit.

It seems to me Scala is potentially much more "general purpose" than Haskell. In other words, "more practical".

I've no experience with hard realtime systems, so I won't debate about that (I'll just say I doubt Scala is suitable for that either), but Haskell can be used to write simulators and games.

Haskell is currently a practical general purpose language. I don't see any evidence Scala is any better at this. Are you speaking from experience?

"I've no experience with hard realtime systems, so I won't debate about that (I'll just say I doubt Scala is suitable for that either),"

Hard and soft realtime both require deadlines to be met, the difference being that in soft realtime a missed deadline is not considered a fatal error, just highly undesirable.

Examples: Soft realtime: A game where 60 FPS is desired for smooth animation. Lower frame rates degrade the experience but are tolerated.

Hard realtime: Flight surface control software in a fly-by-wire aircraft. Missed deadlines potentially result in a crashed plane - a true "fatal error".

Any ahead-of-time compiled language with deterministic memory behavior may be used for hard realtime. Absolute performance isn't required, although it is desirable. As long as memory primitives are available in native Scala that permit pre-allocation, and the GC can be turned off (trivial), it should work fine even for hard realtime.
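To illustrate the pre-allocation style this has in mind, here is a sketch in plain Scala (the names are made up; this shows the pattern, not Scala Native's actual memory API): allocate everything up front, then only reuse.

```scala
// A fixed pool allocated once, up front; the per-frame loop only mutates
// and recycles these objects, so the GC sees no new garbage.
final class Particle { var x = 0.0; var y = 0.0 }

final class ParticlePool(capacity: Int) {
  private val pool = Array.fill(capacity)(new Particle)
  private var next = 0
  def acquire(): Particle = {
    val p = pool(next)
    next = (next + 1) % pool.length
    p
  }
}

val pool  = new ParticlePool(2)
val first = pool.acquire()
pool.acquire()

// Capacity exhausted: the pool hands back the same object it gave out first.
val recycled = pool.acquire() eq first
println(recycled) // true
```

With all allocation done at startup, turning the GC off (or never triggering it) becomes feasible, which is the property hard-realtime code needs.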

As an aside, I expect that for many things native Scala will equal C++ in performance. On the JVM it is extremely close to Java in performance.

"but Haskell can be used to write simulators and games."

Realtime simulations and games? It seems hard to reconcile immutable state with time-based simulation in any efficient way. Then there are garbage collection cycles and laziness to deal with.

"Haskell is currently a practical general purpose language. I don't see any evidence Scala is any better at this. Are you speaking from experience?"

GC based languages in general aren't good choices for the types of systems discussed above. GC is also a problem on smaller hardware (embedded systems for instance) since there should be a much larger memory pool than actually used for good performance.

Haskell also throws in laziness, which makes execution timing hard to predict.

Scala also supports mutable state and the OO paradigm if they are better or more convenient for a particular system.

Thanks, I know what hard realtime means, I just meant I didn't have experience with those systems to really debate about them. Do you have practical experience building hard realtime systems?

> As an aside, I expect for many things native Scala will equal C++ in performance

Are you familiar with Haskell performance?

> [can Haskell be used for] realtime simulations and games?


As I said, I can't speak for hard realtime, but then again, most sims and games seldom require it.

> GC based languages in general aren't good choices for the types of systems discussed above.

You seem to be arguing out of theory (which is why I asked you if you were speaking from experience). We're also not discussing GC based languages "in general"; we are discussing Haskell, which is a very practical language which can and has been used successfully in a variety of real-world applications, and which qualifies as a general purpose language. If in doubt, talk to an actual Haskell practitioner instead of arguing out of theoretical positions, and you'll be surprised. Many such practitioners frequent HN, and I'm sure they are eager (no pun intended) to discuss the systems they've worked in.

For that matter, will Scala Native not be a GC language?

> Scala also supports mutable state and the OO paradigm if they are better or more convenient for a particular system.

As I argued before, in practice this means Scala is not a great language for OOP or FP. It's understandable that experienced practitioners of either style will prefer cleaner & better languages.

I don't predict a particularly great outcome for Scala outside the JVM, since I think the JVM is its main selling point. Once you remove this point, other languages become distractingly attractive :)

"Scala code is like Haskell fanfic."

Correction: Scalaz code is like Haskell fanfic.

Wow, they're even planning to provide a native version of the runtime library? That will be more than nice.

Not the author:

> - Can this reuse existing Scala code?

I think that's the plan. Would be a pretty pointless exercise without that, right? :-)

> - How does it compare against Rust/GO/Swift? Why use this over them?

Rust: Scala and Rust have different niches. Rust is more focused on low runtime overhead, while Scala is more focused on low development overhead. This means Rust programs can potentially run faster, while Scala programs are faster to develop.

Go: Go is utter shit that only survives due to the devs name-dropping "Google" every 5 minutes.

Swift: Scala is more mature, simpler, better designed. Not a knock against Swift, but there is a difference between a language where changes have been tried experimentally for years before either adopting or removing them, and Swift where things get added at a frightening rate.

> - How about libraries?

Libraries without Java dependencies should work, libraries with Java dependencies depend on whether their Java dependency is provided by Scala-Native (just like on Scala.js).

Please don't start programming language flamewars on HN ("Go is utter shit", etc.) Those are all the same, and we're hoping for thoughtful discussion.

We detached this subthread from https://news.ycombinator.com/item?id=11678317 and marked it off-topic.

Does anyone have a browser extension or similar that will reattach subthreads that get moved like this? Or maybe a user preference similar to showdead?

>Go: Go is utter shit that only survives due to the devs name-dropping "Google" every 5 minutes.

Making blanket statements like that without any substantiation makes me want to just write off everything else you have said.

To my mind writing off Go is table stakes for having an interesting discussion about modern programming language choice - anyone who takes Go seriously is coming from a position so different from mine that it would take many pages (all of which would be rehashing of old discussions) to bridge the gap, and I honestly can't be bothered. While I would try not to be so rude about it, I think GP is right to dismiss the language without going into detailed reasons; there are any number of threads about Go that cover the case for and against already.

Go is obviously pretty god awful when viewed strictly as a language design. The language is mind-numbingly boring and restrictive. The reason people are interested in Go (at least IMO) is stuff that isn't really about language design: they are interested in sub-second compile times out of the box, one of the best standard libraries out there, a static binary with sub-second startup times, a GC that doesn't require lots of tuning, a very simple language spec, a standard formatter so most code looks the same, etc.

The language itself has no beauty and is not fun or interesting like Scala. But I still love it, and I would totally base my company on it if I were planning to hire thousands of developers and train them to use it. I would be scared of doing that with Scala.

I guess I can see the case for using Go for something basic like CRUD that you just want to "crank out". But to my mind the kind of code you can program that way shouldn't be something you program at all - rather you write a system that handles it all generically (e.g. Rails/Django have generic model editor forms). Maybe in a business where programming is incidental it makes sense? But certainly it would destroy any kind of "hitting the high notes" business.

> But certainly it would destroy any kind of "hitting the high notes" business.

Not sure what you mean by that.

I still think that you are not really considering how many non-language pain points Go takes care of. You don't really worry about a library being the wrong version, you need extremely little special training for a new developer to start on a Go code base (compared to FP, which requires a paradigm shift for most people), and the people who have to set up the servers generally appreciate something that only requires a static binary to be dropped on the server and executed. Almost none of the things that make Go great are actually related to programming-language design, whereas some of the best things about Scala are all of the elegant ways you can express computation.

I agree with you that they are just 2 very different language in philosophy and usage. To my mind Go is about boring technology used to make interesting and very popular products while Scala is about exploring cutting edge in PL theory.

One can see that most work on Scala in the git commits is done either by a few people at a research institute (EPFL) or at a consulting company (Lightbend), whereas Go has quite varied committer profiles and far more committers.

> To my mind Go is about boring technology used to make interesting and very popular products while Scala is about exploring cutting edge in PL theory.

I would strongly disagree with the implication that Scala isn't used to make interesting or popular products (e.g. Spark or twitter), or indeed that it's "exploring" anything "cutting-edge"; it is very much a production-quality language (backed by theory sure, but theory that's been established for decades now), and I don't see that it contains anything more novel or experimental than e.g. Go's concurrency model. It's a decent platform for doing research in, but that's mostly just a question of being a good language.

> One can see most work on Scala in git commits is done either by few people at a research institute(EPFL) or at a consulting company(lightbend) whereas Go has quite varied committer profiles and there are far more committers.

We're talking about a factor of 2-3 in number of committers, not that I have any idea what that number is supposed to tell anyone. Certainly Scala has substantial industrial support and a wide base of contributors.

So what? If you can't tolerate an opinion I can imagine you are having a hard time on the internet. :-)

It does a discredit to any other better substantiated or more reasonable opinion you have stated.

For better or for worse when most people hear a new statement or opinion coming from someone they will trust it as much as they trust the least believable thing they have heard that person say.

This sounds like a very US-trigger-warning-safe-spacey stance.

I think most people on this planet are able to consider each point at its face value.

> I think most people on this planet are able to consider each point at its face value.

Humans have a pretty strong demonstrated tendency to develop positive or negative emotional attachments to pretty much everything, including sources of information/arguments, and see the world strongly filtered through those. This is, while it can be misleading, an important evolved survival mechanism in terms of attention/resource management.

Most people probably can, by expending effort, mitigate that to a certain extent, but that doesn't negate it completely.

People who can't assess statements objectively are probably exactly those people whose opinion I couldn't care less about.

Perhaps this is exactly the filter I want to have.

A few different people are trying to politely show you a way you're undercutting yourself.

And that's perfectly fine! I won't throw a tantrum because I might disagree (which I don't).

So you would not assess their other statements objectively?

This is not about safe spaces, this is about basic human psychology.

A small side note: Scala has several great features, but "low development overhead" just isn't one of them. Also why so harsh on Go?

If by low development overhead you mean the amount of crap you need to type in order to get things done, Scala absolutely has low development overhead.

As for Go, I'd say that's the most honest description of the language I've seen in a while.

"Low development overhead" was taken from parent comment, but I understood that it means how productive and fast it is to write code/develop something in Scala.

If your limiting factor is how fast you can type, you are either an amazing person who should be studied or you need to take a typing class.

My limiting factor is usually how fast I can comprehend other people's code, and the extraneous crap some languages require can often be the biggest contributor to that.

I suppose "amount of crap you have to type" means "low boilerplate", not "basically APL".

Scala is pretty good in the low boilerplate department. I'm not a fan of Scala in the slightest, but in terms of code more or less directly expressing ideas it isn't bad.
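A small sketch of that "code directly expressing ideas" point (type names are mine): a closed data type plus a total function over it, with exhaustiveness checked by the compiler, in a handful of lines.

```scala
sealed trait Shape
final case class Circle(r: Double) extends Shape
final case class Rect(w: Double, h: Double) extends Shape

// Because Shape is sealed, the compiler warns if a case is missed here.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

println(area(Rect(2, 3))) // 6.0
```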

Why do you say low development overhead isn't one of them? Outside of the one-time cost you pay to learn the language (which is higher than Go and most languages), Scala makes it super easy to write correct code quickly. The developer workflow of incremental compile + REPL + unit tests is great.

Probably depends on the code base, but having worked with Scala for a few years, there are a lot of things that causes development overhead. First things that come to mind: slow compile times, there are usually several ways to do one thing and also docs/guides are inconsistent, overused operator overloading in libraries (almost have to learn new DSL syntax for some libs), and it is not always easy to comprehend code by other developers.

Not bagging anything you're saying, but I found slow compile times more or less go away if I open an sbt session and type ~compile to have it auto-compile on save.

Regarding Go... perhaps the parent was harsh, but not wrong per se. I think a strong argument can be made for Go being a very poorly designed language with an excellent community.

Scala certainly doesn't have high development overhead either. Build system more or less just works, plenty of libraries, minimal ceremony to do much. Don't get me wrong, I kinda hate Scala¹, but compared to _many_ languages it does have low development overhead.

¹ Disclaimer: haven't used Scala since 2.8. Maybe it's better now. Don't get your panties in a wad, but Scala is basically really shitty OCaml on the JVM in my mind.

Scala has been vastly improved since 2.8. If the experience you stated comes from Scala 2.8 I think you will love the way things are working currently.

I wouldn't want to go back to 2.8 even if people paid me a ton of money.

I love ML, but think OCaml is a poor ML. That's why I love Scala. It's extremely consistent, and design decisions are very considerate.

It's an amazing language and I can express my intentions much better than in OCaml (no typeclasses, no higher-kinded types, etc.).

I'll check it out again!

Thankfully I'm no longer working with the JVM, so I doubt I'll use it much, but it could be fun to take another look at.

Oh, and OCaml is possibly getting typeclasses (via modular implicits—not unlike Scala actually, for better or worse). It's not very painful if you use an alternate standard library (i.e. Core) that actually takes advantage of OCaml's amazing module system.

> Swift where things get added at a frightening rate.

Things are always added at a frightening rate when a language is young. Happened for C#, happened for Scala, happens for all languages.

Not really something to worry about.

The difference is that Scala devs didn't push for immediate adoption while they were still working things out.

Apple's Swift message is more or less "Stop writing Objective-C if you can and start using Swift – oh and by the way – you will have to rewrite your code with every major Swift release which we will be releasing at a rapid schedule".

It will be very hard for Swift to get rid of all the cruft they accumulate.

In Scala every major release addresses one or two pain points and migration is very smooth – no "rewrite all your stuff".

> In Scala every major release addresses one or two pain points and migration is very smooth – no "rewrite all your stuff".

Yes, the Java way. Take ten years to implement basic stuff.

I much prefer Swift and C#'s approach to this: first major version bump, things get deprecated. Second bump, they get removed. It worked great for C#, it's going to work great for Swift, which has already seen incredible adoption on iOS.

Scala is moving at a glacial pace in comparison, really not worth bothering with.

Oh come on Cedric. You managed to troll better in the past.

As you surely know it already, the deprecate-remove cycle is pretty much how a) Scala works and b) C# doesn't work.

> Go: Go is utter shit that only survives due to the devs name-dropping "Google" every 5 minutes


Go is great if you don't mind writing thousands of lines of boilerplate for things one would expect the language to provide.

Agreed, but that said I'm a recent convert from Scala to Go. Not sure I'd go backwards (no pun intended), boilerplate aside.

Having standardization on things like channels in the language makes for a much nicer consistency in the language. Scala has 8000 ways to do anything, half of which involve an exotic DSL. I'll take plain and simple please over esoteric and unmaintainable.

'Plain and simple' languages encourage, and often necessitate, 'esoteric and unmaintainable' applications. Look at Java for examples of this, and Go is basically Java without generics and with a much worse garbage collector.

How so? You can find examples of poor engineering in any language. What's specific about go?

Go doesn't have the facilities that let you handle real-world concerns in a nice way. All the examples of bad, enterprisey Java - Spring AbstractSingletonProxyFactoryBean, XML config for everything, reflection-based JSON serialization, annotation-based transaction management, JavaSpaces - they weren't written by idiots, they were written because they were the best way to solve real business problems within the limits of the language - limits that go also has.

People don't like writing hundreds of lines of repetitive boilerplate the way go's lack of generics forces you to. They'll look for a better way, and they'll come up with hacks, just as they did in Java.
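For contrast, this is the kind of thing parametric polymorphism turns into a one-off definition in Scala (the helper below is mine, purely for illustration):

```scala
// One definition covers every element type; no per-type copies, no casts.
def safeMaxBy[A, B: Ordering](xs: List[A])(f: A => B): Option[A] =
  if (xs.isEmpty) None else Some(xs.maxBy(f))

println(safeMaxBy(List("a", "bbb", "cc"))(_.length)) // Some(bbb)
println(safeMaxBy(List.empty[Int])(identity))        // None
```

Without generics, each concrete element type needs its own copy of this, or an interface{}-style cast at every call site.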

Well, Java has had generics since 2004, and I have seen bloated enterprisey usage and patterns increase vastly since then. Spring became a major force after Java had generics. So your argument about generics does not sound realistic to me.

Java's type system is limited (no higher kinds) and methods that might throw exceptions can't be handled as values in a generic way.
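The "exceptions as values" point in Scala terms: a possibly-throwing call can be reified into a Try, which then composes generically like any other container. A minimal sketch:

```scala
import scala.util.{Failure, Success, Try}

// The throwing call is captured as a value; failure is data, not control flow.
def parsePort(s: String): Try[Int] = Try(s.toInt)

val ok  = parsePort("8080").map(_ + 1) // Success(8081)
val bad = parsePort("oops").map(_ + 1) // the NumberFormatException, as a Failure

println(ok)
println(bad.isFailure) // true
```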

Be fair now, you don't need to write them, you just copy/paste them.

Not sure if you meant that in jest, but it made me shudder a little.

Clearly people either love or hate Go. There is a lot of hate I've seen on HN. I personally love it, but who am I to say what is a good language and what isn't.

Honestly I'd expect to see the most Go hate on a thread about Scala. The design principles of the two languages are completely at odds with each other.

Clearly people have opinions that range across a wide spectrum on any subject.

To say interest is divided into either hate or love is disingenuous.

I would like to hear what makes Go utter shit

Oodles of boilerplate to do stuff that's a (simple and understandable) one-liner in several other languages.

It's a dumbed down language, not a simplified language.


Applications are open for YC Summer 2019

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact