Although I agree with most of what the author says here about scala and its benefits and features, this is hardly a debunking and more of a strongly opinionated defense. Some of the arguments are along the lines of "hey SBT isn't just a bunch of symbols" or "I upgraded my MacBook and compilation times are not a problem" or "tooling is better now". Reassurances yes, debunking hardly.
Scala compilation time is probably the worst problem I have with scala, and it still is frankly. I have seen no evidence that this has improved from personal experience and after talking with devs I know working in large scala code bases (Foursquare, Twitter).
Even if there are speed improvements in the scala compiler in future versions, the author doesn't address binary incompatibility between major scala versions either which, along with slow compilation times, makes it a serious PITA to upgrade to a scala version that might actually compile faster.
[edit] hit submit too early last time, re-edited after regathering my thoughts.
I'll address the binary compatibility concern. I think at this point for the average scala developer, binary compatibility isn't really a big deal anymore. At this point the releases require very few (if any) changes to the codebases and all the library maintainers have stepped up their game with releases corresponding with new versions.
Sure, it'd be great to have binary compatibility, but I think the ability to make changes to the language unhindered by supporting bytecode from previous versions is more valuable.
For example with the 2.11 release, I had three different non-trivial sized projects and for each I was able to simply change the scala version in my sbt file and everything worked just as it had before with no library or compile issues.
I will agree that it was a huge issue in the past with 2.7 - 2.9, but in my world it hasn't bitten me in a long time. Definitely not trying to say it isn't an issue or something that wouldn't be great to be fixed, but at this point I have a tough time seeing it being considered a barrier to entry in any way.
Also, I'll just throw in that compile time hasn't been an issue for me lately either. I work on reasonably sized projects, but definitely nothing on the twitter / foursquare scale.
Scala developers end up on the defensive because many of the claims either stem from misunderstandings or are fashion driven, repeated ad nauseam by people with only a superficial experience of the language.
For example I'm going to make the claim that SBT indeed isn't just a bunch of symbols, but rather one of the best build management tools in existence. For anybody going to challenge that I'm going to start mentioning the complete clusterfuck of other platforms and languages, like that of C/C++, or that of Python, or that of Javascript, or that of C#/.NET, or that of Haskell. SBT not only does demonstrably a better job than most, but works well out of the box and is relatively pain free. It has its own set of problems of course, but what tool doesn't.
Compatibility issues are real. But you know, the nice thing about those compatibility issues is that it forced the community to come up with solutions. At least Scala and SBT are addressing compatibility issues, whereas the transition from Java 6 to later versions is proving to be next to impossible and I don't think I have to mention Python 3 or ECMAScript 6.
Compilation times haven't been that much of an issue for us because our codebase is split into services and libraries that have a reasonable size. And on the switch from Scala 2.10 to 2.11 I definitely saw an improvement, with the claims of 30% being about right.
I do agree that they can do better, taking a look at similar languages with a slightly better story, like Haskell. However what people are usually missing is that this is usually a trade-off with compile-time safety.
The more static a language is, the harder it is to compile it. The compilation cost for a dynamic language like Javascript is zero, the cost being completely amortized with everything happening directly at runtime. When going from Javascript to Java or C#, for people that prefer Java or C#, people go from zero to a very noticeable and resource intensive AOT compilation step and many times this tradeoff is worth it (depending on whom you ask) for safety, performance or tooling reasons. Well, Scala is a more static language than Java, it infers a lot of things, it has better generics, it does implicits which can then model type-classes and even the infamous CanBuildFrom pattern and so on, giving you a nice balance between productivity and safety in a static language. And that doesn't come for free.
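To make the "implicits can model type-classes" point concrete, here is a minimal sketch. `Show` and `describe` are illustrative names, not standard library APIs; the pattern is what matters: instances are supplied as implicit values and resolved by the compiler at the call site.

```scala
// A type class: a trait parameterized on the type it describes.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // Instances live in implicit scope and are found by the compiler.
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
  // Instances can be derived inductively: a Show for List[A]
  // exists whenever a Show for A exists.
  implicit def listShow[A](implicit ev: Show[A]): Show[List[A]] =
    new Show[List[A]] {
      def show(as: List[A]): String = as.map(ev.show).mkString("[", ", ", "]")
    }
}

def describe[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

println(describe(List(1, 2, 3)))  // [Int(1), Int(2), Int(3)]
```

This kind of compile-time instance resolution is part of what the compiler is doing with the extra time.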
> The compilation cost for a dynamic language like Javascript is zero, the cost being completely amortized with everything happening directly at runtime.
This is one of the big points I always try to make to people. JS compilation is cheap and maybe it's faster to spit out some functional code. But all you've really done is pushed the rest of your work to tomorrow. Your type verification happens at runtime, and now instead of your bugs being compiler errors, your bugs are production errors that you've exposed your customers to.
With Scala I rarely need to use the debugger. With JS and PHP I have to constantly use the debugger to track down "what type does this variable actually have in this random edge case at runtime?"
I used to believe this, but what I discovered is that edit-time/compile-time/run-time phases are not as important as failing quickly in absolute time.
If I spot my error during compile-time in scala and only during run-time in javascript that is still a win for javascript if I found the error 1 second after writing it in javascript because that's how long the page takes to load as opposed to 5 seconds in scala, because that's how long sbt takes to compile.
Absolute time to discover the failure from writing it is important, not the phases that the error is discovered in, and despite this article, I still find that on complex project structures sbt/scala is horribly slow.
This is not an assumption, it's an experience. Code shipped to users. In production. In both Scala and Javascript bugs got through, both required specific practices to try to minimise them; with those practices, I found many bugs in the javascript code extremely quickly. None of the bugs in the scala code were found quickly because the iteration cycle is poison (bear in mind, our code base was probably a worst case and may not be representative of scala in general).
I'm now firmly of the opinion that static vs dynamic typed is a sideshow, the real value is shifting as many errors as possible to as early as possible (in terms of seconds from typing the error to finding it) by whatever means you can, and a powerful type system is one tool for doing this, but if it compromises the iteration cycle and actually shifts the discovery of errors later in time then it's undermined one of the major reasons for using it.
Yes, but is that because of Java compatibility (which has been generally exemplary) or maybe because of other issues (I'm thinking of Oracle vs Google)?
Second, at least on lambdas they could generate inner classes for older devices and update DEX format now that Dex is compiled AOT to native code. So only the compiler dex2aot would be required to be updated.
Currently there is zero public feedback on the new compiler chain (Jack & Jill), but from the sources it appears it is Java 6.5 all the way.
They have some unit tests that appear to be related to Java 8, but no one knows what is their idea.
In a few years (by the Java 10 timeframe), according to the current roadmap which might still change, Java will have modules, a JNI replacement, reified generics, value types, a new array type, GPGPU support, an AOT compiler....
If nothing changes, assuming Android 4.4 devices will be gone, the only change will be from Java 6.5 to Java 7, if Android's team attitude is to stay like it currently is.
But at least they spend time doing those stupid devbytes videos, doing continuous buggy releases of the support library and Google services.
Android Java is definitely the second coming of J++/J#.
Not if startup time matters or if you need to deploy into systems where dynamic linking is not possible.
The majority of the commercial third party JVM vendors do offer AOT as part of their toolchain.
Similarly .NET always had a JIT/AOT model. The novelty with .NET Native is static compilation and integration of the Visual C++ backend.
I don't know if this is true, but I read somewhere that AOT was taboo at Sun.
Oracle is now improving Java to cover the use cases where it still falls short vs other alternatives, so having an AOT compiler in the reference JDK instead of forcing devs to get it from a third party is part of their roadmap.
How it will look in the end is not 100% clear.
Lombok works perfectly with Java 8. Don't know why I am a "village idiot" for having used it for a couple of years.
Android is a special case, though. That's just Google being incapable of accepting a ruling and making a deal with Oracle. Instead they are pouting like little princesses and secretly developing their Swift copycat language.
I'm not quite sure what you are trying to say with this statement, "I don't think I have to mention Python 3 or ECMAScript 6".
ECMAScript 6 is finalized and many people are running it today in production environments either using Node on the backend or on the front-end using a build process that integrates with something like Babel.
The following quote implies that you believe JavaScript is runtime evaluated, when in fact the standards body does not specify runtime evaluation or compilation. That is a decision left up to the implementation. All major browsers today and Node.js use just in time compilation for JavaScript rather than runtime evaluation.
"The compilation cost for a dynamic language like Javascript is zero, the cost being completely amortized with everything happening directly at runtime."
[EDIT]
I have added a link to the JavaScript entry on Wikipedia where you can verify that modern browsers compile rather than interpret JavaScript.
The implementation technique is completely irrelevant. And the part of the comment that you quoted implies that the parent understands how it's implemented.
I'm not sure how implementation is "completely irrelevant"; if that were the case then the entire conversation is essentially irrelevant as we could potentially implement any compiled language as an interpreted language.
The piece I quoted indicates to me that the parent possibly does not understand this about JavaScript since they stated the compilation cost of JavaScript "is zero" that implies that JavaScript is not compiled. If there is a compilation step then some cost is incurred no matter how low.
It was further supported that the parent may not understand JavaScript is typically compiled when they followed on with "cost being completely amortized with everything happening directly at runtime". That would indicate that all of the cost is incurred/paid at runtime rather than in the compile step which is simply false.
The parent is talking about the costs and benefits of AOT compilation versus the lack. Whether the interpreted language uses a JIT has nothing to do with that trade-off. That's why it's irrelevant.
Also, that the parent bothered specifying "AOT" in the first place is proof enough the entire basis of your belief in their understanding of things is utterly groundless. This would be clearer to you if you understood that JIT compilation cost is amortized, as the parent alludes, and that it doesn't happen in "a" compilation step.
There is also a good transition story for porting code from Python 2 -> 3. The "problem" is largely a community issue: libraries with many dependents and a lack of authors to port them, institutions with large code-bases unwilling to port, and so forth.
The tooling hasn't been the problem. There are compatibility libraries (ie: six), source transformation tools (ie: 2to3), and plenty of books, blog posts, and talks on how to manage supporting both versions of Python or doing a full port.
How is it straightforward? I tried porting a very small codebase from python 2 to 3 and ran into a bunch of issues which required manual workarounds. If it were easy to port, most library owners would have done so already.
I guess it depends on your idea of straight-forward. If you mean, "press a button and everything is done for me," then it's not very straight-forward.
If you want to port there's a well-tested and clear path. There are enough big libraries out there that have managed it. There is no real barrier as the OP suggested in their article.
My belief is that the effort is a linear function of your code base size. This means the larger the code base, the more effort required to port.
Also, porting is made more difficult by the fact that Python is a dynamic language, so a lot of errors that could be caught at compile time will only be catchable through a thorough testing suite. I have seen little evidence to suggest that porting from Python 2 to 3 is actually straightforward in the sense of not requiring significant effort proportional to the size of the code base.
Believe what you will but the numbers suggest that it's not a technical problem to port a Python code base. It does take effort and work but it's "straight forward" in the sense that what is required is well known.
Aren't we past the tipping point? I think that at this point most old projects won't ever move to Python 3, but for new projects it's at least 50-50. If not >50 for Python 3.
Full-pass Scala compilation is slow, true. It takes my codebases at work a couple of minutes to compile currently. But
1) sbt incremental compilation is basically immediate. This facilitates tight feedback loops for running tests and using the repl
2) Scala's type system does so much more than fast-compiling languages (Go, Java, etc) that it's definitely worth it. What you lose in compile-time, you gain in speed to a correct and sound solution.
> Complexity or learning the language is still the biggest issue I see. That has not changed in 5 years.
That's not true. Most things are the same as, or easier than, in Java 8.
If you don't learn lambdas in Java 8 you are missing everything. And most things in Scala are lambdas: map/foreach all over the place, and for-comprehensions making things easier to read and easier to understand. Java 8 doesn't have that feature, so you end up writing flatmap(data -> return flatmap .. return flatmap return flatmap), which is awkward.
In Scala, if you don't use the hard things, the language is way easier than most languages outside the JVM ecosystem.
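For what it's worth, the readability difference being described looks roughly like this: nested flatMap/map calls versus a for-comprehension (Scala's "generator" syntax), which the compiler desugars into the very same flatMap/map chain.

```scala
val xs = List(1, 2)
val ys = List(10, 20)

// The nested style: every level of sequencing adds a level of nesting.
val nested = xs.flatMap(x => ys.map(y => x * y))

// The for-comprehension: the same computation, flattened for the reader.
val generated = for {
  x <- xs
  y <- ys
} yield x * y

println(nested)     // List(10, 20, 20, 40)
println(generated)  // List(10, 20, 20, 40)
```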
I've been using Scala primarily for a few years now after using Java since the 90's. Well written Scala code is much more pleasant to write and maintain than Java (true even for JDK 8). Scala's feature set is great, but you need to have good taste to use it effectively. It's a bit like the gang of four design patterns which some developers, especially Java developers, go looking for places to apply as many of them as possible. Rather consider where the features add value - they are very useful when used appropriately. When used sensibly Scala is amazing. You just have to keep an eye on the new guys so that they don't just use advanced features all over the place where they aren't appropriate. Fortunately that type of behaviour isn't that common but does feed the Scala is complex meme. After a few years with Scala I consider my time spent primarily with Java as lost years.
Agreed, love Scala, and I think it can lead to super maintainable code bases, but you do have to be careful. The worst offence is probably when people start to get comfortable with implicits, and start throwing them everywhere, definitely need a good code review process to stop that.
Also, the article left out probably my two favourite parts about Scala - awesome language level support for concurrency, and pattern matching. Whenever I go to languages without these features, I really miss them.
Scala is my favorite language at the moment. I've used (professionally) Java, C#, Perl, Python, Ruby, Objective-C, and Swift. I played with Go and started writing a simple library (https://github.com/joslinm/validate) to get acquainted with it.
Above all else, what always gets to me with Scala is its powerful generics. It truly is first-class. I'm excited for macros as well.
As someone interested in Scala but barely having used it, how do Scala's generics differ from C#'s reified generics? I've always found it difficult using generics on the JVM in comparison to the CLR because of boxing. I did a preliminary search but couldn't really find a good comparison between C# and Scala generics that was up to date.
Scala has higher kinded types (or, generics over generics). This is amazing (but can take a while to see why/get comfortable with). .NET doesn't have this feature.
Thanks, this is what I was hoping someone would write!
I searched around a bit about HKT's and although I can't say I understand them, I can see how the ability to do something like provide a constructor over a type could be useful. It feels like it has a lot of use in writing very general-purpose libraries, or perhaps a DSL that can map to a wide variety of different types. Interesting!
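To make "generics over generics" concrete, here is a minimal sketch. `Functor` here is illustrative, not a standard library trait; the point is that `F[_]` abstracts over the container itself, not just its element type, which plain C#/Java generics cannot express.

```scala
// F[_] is a type parameter that itself takes a type parameter.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

val listFunctor: Functor[List] = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}

val optionFunctor: Functor[Option] = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

// One function now works for ANY container that has a Functor instance.
def double[F[_]](fa: F[Int])(functor: Functor[F]): F[Int] =
  functor.map(fa)(_ * 2)

println(double(List(1, 2))(listFunctor))   // List(2, 4)
println(double(Option(3))(optionFunctor))  // Some(6)
```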
Generics in C# and generics in Scala have the same basic features (generic classes/interfaces/methods). When I say "powerful", the first thing that comes to mind is its ongoing reflection evolution. TypeTags give us access to class information in a functional manner (by passing in an implicit type info object) that we can take advantage of using Scala's (also powerful) matching mechanism.
The second thing that comes to mind is the ability to support variance. To quote Twitter Scala School: "A central question that comes up when mixing OO with polymorphism is: if T’ is a subclass of T, is Container[T’] considered a subclass of Container[T]?" Scala allows us to express this information using +/- as in `Container[+D]` allows for subclasses of `D`.
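A small sketch of that declaration-site variance, with illustrative class names: marking the type parameter `+A` makes `Container` covariant, so `Container[Dog]` is accepted where `Container[Animal]` is expected.

```scala
class Animal { def name: String = "animal" }
class Dog extends Animal { override def name: String = "dog" }

// +A declares covariance at the definition site.
class Container[+A](val value: A)

val dogs: Container[Dog] = new Container(new Dog)
val animals: Container[Animal] = dogs  // compiles only because of +A

println(animals.value.name)  // dog
```

Without the `+`, the last assignment would be a compile error, even though `Dog` is a subclass of `Animal`.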
Finally, though not so much related, an interesting development in the type space is type macros (http://docs.scala-lang.org/overviews/macros/typemacros.html). This will give us a stronger "generic" approach to programming that allows us to write code to construct type hierarchies. In general, Scala is moving fast and always evolving.
You'd have to be more specific as to what your problems are concerning boxing and the JVM. Lists are properly specialized such that `a = List(1)` is a `List[Int]` and `a(0)` is an `Int`. Of course, if you use a diverse set of types in the list, you'll have a `List[Any]`, which you would then `match` like so...
item match {
  case i: Int    => println("int " + i)
  case s: String => println("string " + s)
}
The traditional concept of generics you see in C# wouldn't be a selling point for Scala.
There are more powerful "generic" programming tool in the scala world.
See this article for an example.
https://meta.plasm.us/posts/2015/11/08/type-classes-and-gene...
That doesn't seem to have anything to do with generics; it looks to me like a great showcase of Traits and having an Optional type. Although I agree that those are more powerful constructs than what you can use in C#, that doesn't quite answer my question about how Scala generics themselves are such a great feature. Perhaps the OP was referring to the type system instead?
implicit parameters of generic traits (aka the typeclass pattern) have everything to do with how Scala's generics can be used in more powerful ways than C#'s. And they are 100% tied to Scala's generics. Generic types in Scala can be used in implicit search to kick off inductive generation of type-safe code.
If you literally just mean generics at runtime, they are no different than Java's, outside of support for specialization for primitive types.
> Perhaps the OP was referring to the type system instead?
Generics and the type system are pretty much the same thing? They only matter and have true power at compile time regardless of your language. So the fact that Scala's generics have absurd power at compile-time (way more than anything in C#) is 100% relevant.
Haha, testing automation framework. Test engineers took our framework and wrote tests with it. Yes Perl professionally is not much fun but I fell in love with Larry Wall after reading the Perl book. Though Perl isn't the best language, a lot of his philosophy shines through and makes it much more than just another programming language.
The problem is not "production code", but library code. I write a lot of business code in Scala, and a large majority of it is very easy to read for the newcomers we have. What is absolutely terrifying is what happens when you look under the covers: If you look at the implementations of libraries, as a newbie, you will want to run away, because you need to understand many language features before you can even understand a thing.
This is why I ended up writing a series of articles on practical implicits. Implicits are pretty much unavoidable, but the Scala website's documentation of them is appalling. If they are explained through examples that show how, and when, it makes sense to use them, we can deal with them reasonably.
Somebody recently made the astute observation that they love their own Scala code but always find other people's Scala code very difficult to read, and that it's probably the reason why Scala never really caught on.
I've been writing and maintaining scala code for about 4-5 years, and I found reading other people's code in our team wasn't that bad. Then again, we do code reviews and such to make sure the code is readable. Reading library code on the other hand...
Note that unconventional symbols are very rare in the standard library. Third-party libraries, especially those made by the scaskell community, are where the problem lies. If one uses scalaz one gets what one deserves.
I was going to mention Haskell as being similar. Lots of custom infix operators. On the one hand, once you know a library's custom operators it can make for very clean code, but it can be tough to read code for a library you don't know. I guess it's mitigated a bit by the fact that a lot of Haskell operators generalize well, so the same symbols tend to be used to do something similar, but it can be abused.
Lift is worse than Scalaz and barely maintained. I can't speak for Play; I use Spray for REST APIs (somewhat symbolic, but oh so worth it for the glorious routing DSL that combines the clarity of a config file with the refactorability and safety of code) and Wicket for HTML UIs.
Link didn't work for me, but I suspect Lift has been dead for a while. It was fine for the time, but if you want a full framework Play is pretty much better in every way, and if you want a light framework you have Akka-HTTP and others..
> You look at Scala code and first don't get anything because it's full of library specific symbols.
I've been using Scala for years and I've had minimal contact with libraries using symbols in any significant way (most libraries I commonly depend on use no operator overloading at all). The only ones which come to mind are some of the linked list operations and Map(foo -> bar, abc -> def) stuff.
No, Pimp my Library is not an anti-pattern. It even received dedicated syntax with `implicit class`es. Contrary to general implicit conversions, implicit classes are not dangerous because things are not actually converted as far as the developer can see (yes, under the hood, it's an implicit conversion, but it doesn't bite you). You're simply adding methods on existing classes, which is similar to extension methods in several other languages.
And in fact, if you combine implicit classes with value classes (e.g. implicit class RichFoo(val value: Foo) extends AnyVal { ... }) then you're effectively just adding extension methods, and there's not even an allocation involved.
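A small self-contained sketch of that pattern (names are illustrative): a value-class-based implicit class adds a method to `String` without the caller seeing a conversion, and in the common case without allocating a wrapper.

```scala
object Extensions {
  // A value class must be a member of an object/package, not a local class.
  implicit class RichString(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"
  }
}

import Extensions._

// Reads as if String always had this method.
println("hello".shout)  // HELLO!
```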
It would be more accurate to say that abuse of implicit conversions is now considered an anti-pattern. They are a core part of the language, e.g. many of the extra functions Scala adds to Java's String class.
This is such an interesting choice of example because this particular implicit is baked into the language, and the discomfort you feel is a fine measure of how good an idea this is. Last I checked there was no way to turn it off.
This is an anti-pattern IMO and requires an import to opt-in (by the author, not the caller IIRC). Instead you'd use an implicit value class which should be zero overhead.
But that's just about the definition, not what it's doing which is no good. Anyone can abuse a lot of these features, which depending upon who you'd ask, is better flexibility than not having them at all.
This is true, and in some ways I get annoyed by this saying, because it doesn't add much value.
I think there are two better ways of evaluating a language in terms of good code/bad code:
1. Does the language (and its idioms) encourage good or bad code?
2. Does the language support strong enough abstraction facilities to allow good code to be written?
Now, of course, the problem is that people will disagree about what counts as good code, but these at least make you think about what's important in a language.
I find that Scala is pretty successful if you are already a seasoned Java developer. Coming from other ecosystems, simple questions like "How do I use an execution context other than the global one?" are answered in terms of established Java practices. ("Just call this with an Executor.") And down the rabbit hole you must go. That plus a ton of leaky abstractions (also requiring a knowledge of standard Java practices) has been a big source of frustration for me. But I suppose that if you were already dealing with these quirks in Java, Scala is a nice improvement.
This was one of my largest barriers in learning Scala. It seems that one needs to learn Java first because you're going to have to understand how it and Scala interact at some point.
I really don't foresee any of these alternative JVM languages getting much traction in my industry (Defense/Aerospace).
I use Scala where I can at work (mostly smaller projects where I am the only developer and can thus get away with it), but every place I've worked over the last few years has been staffed with developers who are actively hostile to the thought of using any language other than Java. And right now, Java is king in the Defense industry. It's the new Ada, except I'd rather be coding in Ada than Java.
On my own side projects, I don't use Scala, because I would never choose to target the JVM for greenfield development. But that's just me.
The Defense industry is probably a very tough nut to crack for any language, but right now, I'm seeing Kotlin picking up some good momentum. I hope it can take off, it's a very decent alternative and improvement over Java.
If I'm doing some front-end stuff, I'll use Javascript. I most certainly will not use something like GWT because I want my entire codebase to be Java-only.
I've used Ruby, Python, and others. I'd like to get back into Haskell at some point when I find the time. My current little side project is written using (mostly) Go and Javascript (I'll have to post a "Show HN" sometime, maybe).
Right now I'm really digging Go for network services :)
Using Scala for about 5 years after being a Java dev for another 5 years. This post captures my view pretty accurately as well. Including book recommendations. I want to believe that most software engineers don't avoid checking scala just because of some "one star" post about it or other rants.
I would add 'Scala in depth' to the book recommendations, it's similar to 'Effective java', the book you need after you have used the language for more than one year.
But seriously now: don’t let yourself be scared off by “category type theory” and other fancy-sounding words. You do not need all of this theoretical background in order to learn and become a fluent Scala developer. I know, because I have not bothered to look at those things until 4 years with the language. And when I did it’s not like I had an epiphany, throwing out all the code I had written until then. In fact, I was monading all over the place, just had no idea how the things I was building and using were called.
Works for the author, I suppose. Me, I'd prefer to have a solid grasp of the abstractions I'm using -- and an equally solid understanding of how the programming constructs I'm using correspond to those abstractions -- before using them to crank out production code.
Here's an example. At my job, we make heavy use of scalaz Disjunctions and Validations along with Applicative/Monadic combinators to handle errors. Everyone on the team is capable of writing and modifying error-handling logic that uses these tools, even though not every developer has a strong working understanding of what an Applicative or Monad is.
great read! I'm 3 years into the language and brought it into my company. As part of mentoring other devs coming from a web background, I kept repeating the mantra, "make it work the simplest way first" then explore the options scala gives you. As long as there were unit tests for the logic, the details could change. The more I use the language, the more fascinated I become. Heck, I just learned about Bottom types last week!
I can see why you have this mantra, but doesn't this approach lead to programming in Scala as if it were Java (aka "you can program in Java in any language"), which results in unidiomatic code?
Good point! I absolutely agree with you, and it's very likely I've misread the parent post. It's just that sometimes I hear people at my workplace conflate "the simplest way" with "the way I'm more familiar with, coming from Java".
I should have put an example to help clarify. For example, instead of throwing implicits around, I'd recommend they manually declare arguments like ExecutionContexts. Also, sometimes the best way to teach someone from an OO background is to first let them write their code imperatively, then work with them gradually to move to more functional approaches.
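A sketch of what "manually declare arguments like ExecutionContexts" might look like (`fetchLength` is a hypothetical example, not a library API): the `ExecutionContext` is an ordinary explicit parameter, so the dependency is visible at every call site instead of being satisfied by an implicit import.

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// The caller must say which ExecutionContext runs the Future.
def fetchLength(s: String)(ec: ExecutionContext): Future[Int] =
  Future(s.length)(ec)

val result: Int =
  Await.result(fetchLength("hello")(ExecutionContext.global), 5.seconds)

println(result)  // 5
```

Once a newcomer is comfortable, the `ec` parameter can be made `implicit` with no change to the function body.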
exactly, if it's a Java dev you are re-training, their solutions will probably be Java in Scala; if it's a JavaScript or Haskell dev, their solutions would probably differ. What's interesting to me, at least, is someone with no previous experience.
Rust doesn't really have traditional OOP, so yeah, a lot of it does go out the window.
Rust doesn't have some Scala features like Higher Kinded Types that lead to this kind of perception, so it's not quite the same. Our type system can be hard for totally different reasons ;). The borrow checker stuff.
In Java it's such a pain to always account for possible exceptions. You have to explicitly catch them or explicitly pass them on. Scala frees you of this hassle by not enforcing anything. So handy.
And yet with the ease and availability of Try, Either and Option combined with fluent pattern matching I find myself writing much more robust code, especially when using legacy Java libraries.
(My real fear is that Scala is just a gateway drug and if I don't take it easy I'll be main lining Haskell eventually. )
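A sketch of that style, with illustrative names (`parsePort` stands in for a legacy, exception-throwing Java API): `Try` captures the exception, and pattern matching handles both outcomes in one expression.

```scala
import scala.util.{Failure, Success, Try}

// Stand-in for a Java API that throws on bad input.
def parsePort(raw: String): Int = raw.toInt

def portOrDefault(raw: String): Int =
  Try(parsePort(raw)) match {
    case Success(p) if p > 0 => p
    case Success(_)          => 80  // non-positive: fall back to a default
    case Failure(_)          => 80  // not a number: fall back to a default
  }

println(portOrDefault("8080"))  // 8080
println(portOrDefault("oops"))  // 80
```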
Explicit exceptions in method signatures are a language design mistake. If you need to handle multiple classes of return values, that's what sum types are for. Exceptions exist to communicate secret messages from a raiser to a handler, preventing anyone in the middle from intercepting them.
But then you are doing the bubbling manually and, on top of that, you pollute the return type of all the methods along the way. Suddenly, callers of such a function that just needed an Int now have to deal with an Either<Error, Int> and all the boilerplate that comes with it.
Exceptions fix that by not bothering any of these users because the return type of the methods in-between is not affected.
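A sketch of the trade-off being argued here, assuming Scala 2.12+ where `Either` is right-biased and works in for-comprehensions: with a sum type, every intermediate caller carries the error channel in its return type, though the for-comprehension keeps the manual bubbling tolerable.

```scala
// The error becomes part of the type instead of an exception.
def parse(s: String): Either[String, Int] =
  try Right(s.toInt)
  catch { case _: NumberFormatException => Left(s"not a number: $s") }

// An intermediate caller must thread the Either along; the
// for-comprehension short-circuits on the first Left.
def addParsed(a: String, b: String): Either[String, Int] =
  for {
    x <- parse(a)
    y <- parse(b)
  } yield x + y

println(addParsed("1", "2"))  // Right(3)
println(addParsed("1", "x"))  // Left(not a number: x)
```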
I had read somewhere that there were talks of creating a smaller subset of Scala, one that's easier to learn. I believe it wasn't going to have implicit arguments (or conversions?).
I think Odersky had mentioned something of this kind. Does anybody else remember something of this sort?
That would be Dotty[1]. It's the same language as Scala but with union and intersection types, and some extraneous features stripped out (e.g. procedure syntax, XML literals, and `forSome` existential types).
Under the hood it's a complete overhaul, based on the DOT calculus[2][3]. Already compile times are faster than current Scala (without any optimizations applied), and tooling will improve as well (Dotty fuses current Scala's type parameter and type member implementation into one, thus less work for IDE presentation compilers to do, for example).
Implicits aren't going anywhere, thankfully, how else could Spark et al offer such wonderful DSLs ;-)
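For the curious, the mechanism behind such DSLs is often nothing more than an implicit class grafting new syntax onto existing types. A toy (entirely made-up) example of the pattern:

```scala
object DurationDsl {
  // Implicit class: the compiler rewrites `5.seconds` to `new IntOps(5).seconds`.
  implicit class IntOps(val n: Int) extends AnyVal {
    def seconds: Long = n * 1000L // hypothetical helper returning milliseconds
  }
}

import DurationDsl._
val timeout: Long = 5.seconds // reads like a tiny DSL
```

Spark's column syntax and most query/routing DSLs are built from the same few ingredients: implicit conversions or classes plus infix method calls.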
> Play has both APIs in Java and Scala, so you can slowly transition from one to the other (provided that you architect your application correctly especially in terms of database access).
What does he mean by "architecting the application correctly"? I have a project in production which uses Play Framework in Java and I'd be interested in transitioning it to Scala. What should I look out for?
The problem with Scala is that it's too complicated. This results in slow compile times, poor IDE support, and on and on. I worked with Scala for about 10 months back in 2011. Never again.
I would probably be prone to agree with you in 2011. Things are very different in 2015 and it wouldn't hurt you to try. My devs all use IntelliJ with it and have never complained. I use vim personally :)
I've found IntelliJ support for Scala to be excellent. I can only guess that you were using Eclipse? IntelliJ does incremental compilation so most compiles take about 1 second typically.
I am curious to what extent Paul Phillips' lecture [https://www.youtube.com/watch?v=4jh94gowim0] is just a rant or an in-depth Scala criticism. If he's right, then it seems we should not expect a bright future for Scala.
I am wondering what Scala experts would say (I don't know Scala sufficiently to trust my judgment)?
The rant is mostly about the Scala collections API. I think the "expert" opinion is that yes, Scala collections are broken and incredibly complex, and there's a proposal for new implementations here:
More than the specific problems with collections that he talks about, I think the problem of allowing clearly broken methods into the standard collections is the real issue. Why would they accept an implementation of map where map isn't safely composable? Why would they use incorrect type signatures? Why would they allow so many casts? I haven't finished the video yet, but every issue he brings up seems to be about more than a mistake in a library.
It looks to me like you can't trust Scala because the people behind it are ok with accepting clearly broken or poor quality implementations.
You don't have to trust us (though you can -- trust me :-)). Take a look at the source, check some references of happy Scala users. Submit a PR to fix what you don't like. Ask a question on our forums/stack overflow.
Whatever you decide, I'd recommend rounding out your view of Scala by consulting more sources than Paul's rants. There's a lot of hyperbole there. This of course doesn't mean he's wrong, and we've certainly listened to him and fixed some of the issues he complained about (I was in the front row of that talk).
Paul's a perfectionist. Sure, the collections aren't perfect, and we will continue to improve them, but you should keep some background in mind when watching that video: he was pretty frustrated with issues in Scala development not directly related to (but, I guess, embodied by) the collections. Some of the stuff he pointed out in that video was already deprecated at the time of his presentation, and we've fixed other issues since. We have to strike a careful balance between fixing "obviously" wrong code and breaking code that had come to rely on the perhaps not-so-obvious wrongness...
To be fair, the only at all comparably production-ready languages that don't have substantially "weaker type systems and bad abstractions" compared to Scala are Haskell and to a lesser extent Ocaml and F#.
Scala is my favorite language. I use it every day. And yet, the language, and its ecosystem, have big problems nobody wants to tackle. They have nothing to do with the myths people talk about though.
For me, the biggest problem is how fractured the community is. Scala is not a very opinionated language at all, so work from different subcultures barely looks like the same language. Working with scalaz? You won't touch the standard library at all. Finagle? Twitter came up with a Futures library before one came standard with Scala, and there seems to be no interest in migrating out of that, so you have multiple libraries for basic concurrency (along with the scalaz version, of course). Those groups, which use the language so differently, do not share many interests when it comes to the evolution of the language and what parts are good and bad. This leads to much infighting, and an environment where instead of learning Scala, what people learn is that the community is divided, and often very rude. It makes learning a challenge. Each community also has a different idea of what is considered acceptable behavior toward other human beings. The Scala mailing lists have postings that make your average Linus Torvalds post seem sensible and polite.
Another part that makes learning a challenge is how different application-level Scala and library-level Scala are. In a language like Java, it's easy to go into the standard library, or into whatever popular library you want to use, look under the covers, even as a newbie, and learn how to write idiomatic Java (whether that's good or not). In Scala, the standard library, and pretty much every major library out there, use language features that are not needed at all for regular use, but are very valuable for building libraries, and yet make the code very hard to understand for someone new. For the worst case of all, look at the source code of Shapeless: even very seasoned Scala programmers find it difficult to understand, because it's attempting to do things that the designers never even expected people to do with the language.
Another problem, related to the previous one, is that since it's so easy to build DSLs in Scala, pretty much every major library does it. Internal DSLs, implemented around a bunch of implicit conversions that you need to understand pretty well to solve syntax errors. For instance, say you want to use json in your spray web service. There are two ways to do it: Follow a tutorial without understanding it and hope you do not have errors, or spend the time to understand the type class pattern.
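The type class pattern being referred to, hand-rolled in miniature (this is not spray-json's actual API, just the shape of the pattern):

```scala
// A type class: serialization logic lives outside the types it serializes.
trait JsonWriter[A] { def write(a: A): String }

object JsonWriter {
  implicit val intWriter: JsonWriter[Int] = new JsonWriter[Int] {
    def write(a: Int): String = a.toString
  }
  implicit val stringWriter: JsonWriter[String] = new JsonWriter[String] {
    def write(a: String): String = "\"" + a + "\""
  }
}

// The implicit parameter is what the library resolves at compile time;
// a missing instance surfaces as a "could not find implicit" error.
def toJson[A](a: A)(implicit w: JsonWriter[A]): String = w.write(a)
```

Until you recognize this shape, "could not find implicit value for parameter w" reads like gibberish; once you do, it just means "no JsonWriter instance for this type is in scope."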
And that leads to the last problem: there are basic intro articles out there. There's information for experts. But the route from basic knowledge to mastery is a big cliff with very little information, and it's easy for people to give up. As it is now, Scala is a language better learned tribally: you have a few gurus, and when complicated questions happen, you go ask one of the gurus to see if he can teach you about whatever problem you are having, or about how a feature really applies to your problem.
I love Scala, I really do, but if the community problems are not resolved, the rest of the problems will not get resolved. It will remain a hard language to learn, and other languages that right now do not offer all the rich features of Scala will just steal what they can and replace it instead, just because they have better, less fractured communities.
(Scala team lead @ Typesafe here). Thanks for the kind & insightful words!
We absolutely want to resolve the issues you're referring to, and we welcome everyone's help and input to speed up this process. We're working on refining our process and our plans regarding library evolution, especially in light of the recent controversy around adding a JSON module to our platform.
The first (baby) step I've taken towards bringing the communities closer was embarking on modularizing the standard library in 2.11, which will continue in 2.13. The goal was to create room for other modules to compete with the ones that were originally in scala-library and to allow community maintainers to step forward and own a module (every module now has a community maintainer, who is in charge of everything, short of promoting staged artifacts to maven). I'm very pleased with how this turned out, and we'll continue this effort in 2.13, where we'll focus on modularizing the collections library. (We alternate compiler and library-focussed releases for smoother upgrades, 2.12 is focussed on making the most of Java 8's new VM features.)
Two more examples, in which I was not involved: the design of the 2.10 Future API, which was done in collaboration with Twitter; I'm also very excited about the prospect of including Reactive Streams in JDK 9!
Personally, I prefer to write a bit of boilerplate over relying too much on extreme implicit magic to eradicate every last line of unneeded code, but I think it's cool that you can do that if you like to. (Disclaimer: I did a lot of work on enabling this implicit magic, between implementing support for higher-kinded types and improving the interaction of implicit search, type inference and dependent method types.)
The whole Scala Team is very concerned with the experience of our community in our forums, and we do our best to step in when things get out of hand, including banning repeat offenders (which we don't do lightly). Because Scala fuses so many different PL concepts, it's not possible to make everybody happy all the time.
I really like Martin's quote on using abstraction responsibly:
> [...] trading abstraction capabilities for basic features
> is exactly the opposite of my design philosophy.
> My design philosophy is more like free speech.
> We protect it even if it is abused. In fact free speech would be a
> fiction if there were no abuses. In the same way, abstraction
> capabilities would be a fiction if there was no over-abstraction.
> I completely agree this approach comes with challenges. It is our
> responsibility as good engineers to realize that abstractions have
> benefits as well as costs, and to collectively develop designs where
> the benefits exceed the costs.
Regarding learning the language, I believe we have a lot of excellent post-tutorial material out there, including many books and the Coursera classes. There are many stages of Scala mastery, but you don't have to get to the boss level in order to be productive. Part of the joy of Scala for me is that you can keep learning new features and improve your understanding of how it all fits together (quite nicely, if you ask me).
There's just one sad trouble with the current state of tooling: Scala and SBT in IntelliJ IDEA do not go together.
Each Scala run inside an SBT-enabled project takes >10 seconds minimum. Every single run. The SBT make step invokes the full SBT startup sequence and is unable to reuse the session. This comes up in the context of using Play Framework, where you can't substitute SBT with something else.
I do hope that SBT server becomes a reality at some point and IntelliJ Scala plugin is able to fully use it.
Totally agree. I don't understand what the Scala compile server does. It's far slower than using SBT (or Activator, when working on a Play project).
I always just keep a console open and compile from the command line, because IntelliJ compilation is a huge waste of time. I rarely debug because that means I'd have to wait 2 minutes for IntelliJ to do its thing before it launches.
Oh, and the test output is inconsistent. It often reports that all tests pass even when they are failing.
Other than that - and it is a huge inconvenience - IntelliJ is still awesome for Scala development. Everything else the editor is capable of easily justifies its place in Scala development.
I've been developing a Play app in IntelliJ for the past year. When run in development mode ($ activator run) it recompiles only the necessary files when you make a change, save, and reload the page in the web browser. I can even switch branches to another developer's work and don't have to restart the server. I'm not sure what you're talking about. Maybe you have something set up incorrectly.
SBT and Play work together great and activator run recompiles incrementally, this setup is OK.
The issue is with IntelliJ -- when I want to run Play app from IntelliJ, not from command prompt, to get easier debug support when running the app or junit tests. Running tests from IntelliJ is more convenient than using activator shell.
Yes, I could provide -jvm-debug flag to the activator command line but this works only when fork:=false, which can't be always used.
An interesting note: the new language Kotlin, which aims at a similar niche, has many of the things mentioned in the article. It handles NPEs a bit differently and I'm not sure about multiple inheritance, but compilation is definitely fast, IDE support is wonderful, there's no need to learn another build tool, and no monads allowed (joke, but the language is more practical than academic).
For me it's a strong competitor to Scala, if you want to use something better than Java.
What is Scala people's opinion of nested functions? Is it good practice to use them? I never liked them because they clutter up the functions and make them bigger. I like having short functions. And looking for them in the code isn't really a problem.
I started writing an explanation about how nested functions are great, but at some point you stop making a distinction between `val a: A` and `val a: A => B` in the scope of a def.
A bunch of my code is interspersed with these function vals. It's useful when, within a function, you repeatedly apply the same transformation in multiple places. It kinda makes the code easier to read once you stop making that distinction.
re: short functions - I am not as much in favor of them as I used to be. IMO, splitting functions into smaller ones is fine as long as there is some "meaning" to splitting them (mostly related to your domain logic). But I have stopped splitting functions just to make them smaller. When your code is referentially transparent, I think the size of your function matters less.
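For what it's worth, a typical use of a nested function: a helper that is only meaningful inside one function stays scoped to it instead of polluting the enclosing class or object.

```scala
// `normalize` has no meaning outside wordHistogram, so nesting it
// keeps the enclosing scope clean and makes the dependency obvious.
def wordHistogram(text: String): Map[String, Int] = {
  def normalize(w: String): String = w.toLowerCase.filter(_.isLetter)

  text.split("\\s+").toList
    .map(normalize)
    .filter(_.nonEmpty)
    .groupBy(identity)
    .map { case (w, ws) => (w, ws.size) }
}
```

The nested def also closes over the outer function's parameters when needed, which saves threading arguments through a top-level private helper.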
To put it simply - Scala puts together the best parts of Standard ML (functional programming, so one can reason about code as a form of calculus, plus pattern matching on type constructors with destructuring, partial application, etc.), Smalltalk (everything is an object and objects communicate via message passing by calling methods - the original concept of "pure OO" by Alan Kay and the best part of the Smalltalk implementation), plus foundational ideas from Erlang (an Actor and messaging DSL based on higher-order procedures and some syntactic sugar). Together it is already worth switching, especially after realizing what crap Java is compared to this concise and well-researched mix of complementing features. Scala is also famous for its emphasis on immutable collections, which makes GC less painful.
Scala was initially the product of a single remarkably well-trained mind, instead of a committee of mediocrity.
Clojure, BTW, was a solo remake of Scala's approach (Java interop, emphasis on collections, immutability, etc.) while starting from Common Lisp (strong, dynamic typing) instead of Standard ML (strong, static) as its basis.
Actually, the Scala course on Coursera is a very good start.
I really miss having proper type inference in Scala. I'll admit, the "we will infer the types as whatever javac would have said they should be, from left to right" approach is a lot more effective than one would think, but it still feels rather primitive.
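A concrete instance of that left-to-right behavior in Scala 2: inference flows across parameter lists but not backwards within one, so currying the function argument into its own list is the usual workaround (the names here are made up):

```scala
// One parameter list: the lambda is checked before A is pinned down, so
// `applyTwiceFlat(x => x + 1, 3)` fails with "missing parameter type"
// and the lambda needs an explicit annotation.
def applyTwiceFlat[A](f: A => A, a: A): A = f(f(a))
applyTwiceFlat((x: Int) => x + 1, 3)

// Two lists: the first list fixes A = Int, and the lambda then checks fine.
def applyTwiceCurried[A](a: A)(f: A => A): A = f(f(a))
applyTwiceCurried(3)(x => x + 1)
```

A Hindley-Milner style inferencer (as in ML or Haskell) would handle the flat version without annotations; Scala trades that away for subtyping and overloading.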