It's amazing how many people in this thread justify their own language choices by making negative, sweeping statements about another language (Scala in this case) that is successfully used by people other than themselves.
Yes, some people who previously used Scala, now use Kotlin. And some people who would've used Scala if Kotlin didn't exist, use Kotlin. Same probably goes for Rust.
But there is a big enough market for people that like the intricate and expressive type system that Scala gives you, in combination with the JVM ecosystem. People that think Kotlin is nice, but not expressive enough, for example. People who don't want to deal with Rust's memory management and/or don't have a use for that. People who think Go's simplicity is sometimes more of a burden.
Why are people so intent on bashing somebody else's language choices?! Go, Rust, Kotlin, and Scala are all great languages in different ways. They all cater to different needs, sometimes radically different (Go vs Scala), sometimes subtly different (Kotlin vs Scala). I think there is a market for all of them, and more. And the introduction of a new language (Kotlin for example) does not necessarily spell doom for another (Scala).
Let's all enjoy our own tastes and needs, and respect those of others.
The reality is that a language lives or dies by its ecosystem - particularly when it comes to a language like Scala that's in a tightly symbiotic relationship with its IDEs (the next time someone tries to sell you a "visual programming language", look at Scala for a language that makes really effective use of the GUI for programming without compromising the things that make textual programming languages good - see e.g. https://blog.jetbrains.com/scala/2018/03/27/intellij-scala-p... ). Only a few big players can afford the kind of investment it takes to make something like that. So much as I wish it were otherwise, I can't just sit on my island and use Scala because I think it's the best - if the language is to live, I have to convince other people it's the best. I don't begrudge other people feeling the same way about the languages they do like (provided that doesn't fall into dishonesty, as some of the claims from Kotlin advocates about e.g. null and Scala have).
As a fellow emigrant from the `Scala is the best` island, forced off for pragmatic reasons:
I've been around Scala long enough to see the rise and fall of multiple expeditions into the bowels of the OSGi Eclipse cave of horrors (Sean McDirmid / Miles Sabin, etc.), before switching horses and settling on IntelliJ for several years. So I understand where you are coming from.
I've since jumped ship again to VS Code (along with TypeScript), and in my humble experience/opinion, VS Code and its Language Server Protocol (LSP) alleviate quite a bit of this. The LSP pushes a lot of common operations out of the editor and into the language's own compiler, while providing a consistent front-end. I think it would be a good thing if Scala eagerly adopted it. Having a language server also applies a certain back-pressure on language features: if you are going to add them, then the tooling (and the LSP implementation) needs to support them.
It moves code completion and refactoring into a dedicated process behind the LSP API, which is maintained by the language devs. As a consequence the IDE automatically keeps up with language changes.
I understand your strategic motivation, but to convince people your language is best, saying "my language is better than X" can be a very bad tactic. The reason is that most programmers aren't big PL fans, and that, to be honest, language choice (among various more-or-less suitable options) has a negligible -- if any -- effect on a business's bottom line. The biggest effect is on programmer enjoyment, as there are (perhaps unfortunately) many programmers who feel that their programming language does affect their satisfaction from programming. Such a preference for a language is often largely personal, but you can make your language more or less attractive to certain crowds. Creating a combative, competitive atmosphere often results in a very fervent community, but usually not a large one, as such an atmosphere doesn't have a broad appeal. To increase the appeal of a language, the best thing you can do is create great tooling, lots of useful libraries, and a friendly community.
In addition, an argument for "my language is better than X" can usually be easily countered with an argument for the opposite, creating a net result that doesn't help your goals. Debates are sometimes interesting and informative, but they're not effective marketing.
> language choice (among various more-or-less suitable options) has a negligible -- if any -- effect on a business's bottom line.
That's just, like, your opinion, man.
(By which I mean: I disagree, and we both know there's no clear evidence one way or another)
> To increase the appeal of a language, the best thing you can do is create great tooling, lots of useful libraries, and a friendly community.
I do what I can, but a bad language can do those just as easily as a good language - tooling and libraries are much more a function of big-corp backing than they are of good language design. It's remarkable how much Scala has managed to achieve in those areas without having a big name behind it, but there's no competing with the amount of programmer-hours the likes of e.g. Google can pour in. The only way Scala can hope to win is on actual language design merit (and maybe winning popularity on that is impossible, but given how well Scala has managed to do so far, I remain hopeful).
> In addition, an argument for "my language is better than X" can usually be easily countered with an argument for the opposite, creating a net result that doesn't help your goals. Debates are sometimes interesting and informative, but they're not effective marketing.
A genuinely better language should have at least a slightly better chance of winning a debate over which language is better. If we don't believe that then we have no hope of ever learning truths, and may as well pick languages to use at random (or I guess go with whatever Google picked).
> I disagree, and we both know there's no clear evidence one way or another
I think that the lack of evidence in favor of a strong effect is normally counted in favor of the null hypothesis. It may not feel fair, but the burden of proof is solely on those who claim a strong effect exists, not on those who don't.
> but a bad language can do those just as easily as a good language
Again, the burden of proof is solely on those who claim there is such a thing as a good language with a strong effect. If you can't demonstrate an effect, or if others can induce the same effect by other means, that's a win for the null hypothesis. If you claim there is such a thing as a "bad" language, and there is no evidence this is true, doing so will harm your cause.
> A genuinely better language should have at least a slightly better chance of winning a debate over which language is better.
I just don't think that a debate is the right way to establish an empirical claim. It's ok to engage in a debate - even a heated one - but the lack of evidence should at least encourage humility. My point is just that an imagined "win" at a debate only harms your mission. If it convinces anyone, which is highly doubtful, it's probably not the people you want, anyway. An approach that says, "I think this is cool. I like it and it helps me and may help you, too" is so much more effective at marketing than "my way is best" especially if the evidence is not on your side.
JVM languages can usually use other JVM libraries, but they often aren't idiomatic to that language.
Scala can use JVM libraries, but it often feels wrong or cumbersome, e.g. Java is full of mutable builder classes, which I've never once encountered in "native" Scala.
The other way around, in Scala you have to be careful if you want your library to be usable from other JVM languages. There are certain Scala features you just cannot use.
This is a problem that Kotlin actively markets itself against: they promise 100% Java compatibility all the time, going as far as encouraging you to mix Kotlin and Java code in a single project.
Real example from my experience: you use a Scala Map and pass it to something like FreeMarker, expecting that it will work like Java Map there. It will not and depending on your test coverage and SQA capabilities, you may notice it very quickly or very late (you'll get no compile error, since template engines usually accept generic objects and then analyze their type via reflection).
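For anyone hitting the same thing, the usual fix is an explicit conversion at the boundary. A minimal sketch (using Scala 2.12's JavaConverters; 2.13+ calls it scala.jdk.CollectionConverters):

    import scala.collection.JavaConverters._

    val model: Map[String, Any] = Map("user" -> "alice", "visits" -> 3)

    // Passing `model` straight to a reflection-based API like FreeMarker
    // compiles (the parameter is just Object), but at runtime it is not a
    // java.util.Map, so the template engine won't treat it as one.
    val javaModel: java.util.Map[String, Any] = model.asJava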
Well, I wouldn't even try to have source code written in two different languages in the same component. But I've seen this problem in someone else's real code, probably caused by a not especially well-thought-out migration of Java code to Scala. Basically, a developer replaced one Map with another and fixed all the places where the API differed at the source level, plus a number of tests. However, interoperability issues like this one can be hard to catch in general (even with good test coverage and regular code reviews), and they show the complexity of the problem, which requires a certain level of development skill and discipline to get right.
You are right, this particular example is more of a case against reflection than anything else. Having said that, Scala maps actually are tricky to use from Java. In general, Scala does a better job of compatibility when it comes to reusing Java code from Scala than the other way around.
The bytecode is the same but not the standard library of classes. Scala uses different collection classes, Option instead of Optional, companion objects instead of static classes, etc...
In my experience mixing scala and java in a single codebase is inelegant, with plenty of conversion code at the boundaries.
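The kind of shim code I mean, sketched by hand (libraries like scala-java8-compat offer ready-made versions of these, but hand-rolled shims are common):

    import java.util.Optional

    def toJavaOptional[A](o: Option[A]): Optional[A] =
      o.fold(Optional.empty[A]())(Optional.of(_))

    def toScalaOption[A](o: Optional[A]): Option[A] =
      if (o.isPresent) Some(o.get) else None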
No single person can maintain an entire ecosystem around a language. And programming languages have strong network effects. If I think a language is dying then I won’t want to use it because I’ll worry about future support. It becomes a self fulfilling prophecy and a negative feedback loop.
> Aren't JVM languages interoperable - because they use the same classfile/byte code format?
Up to a point - libraries from another paradigm tend not to be idiomatic, but I could live with that. I'm more worried about tooling availability - I talked about IDE support in particular, and it's also things like profilers and monitoring/instrumentation.
I'm not sure I see anything in that post that's tied to Scala in particular, or even really visual programming. Are you referring to the hints? The parameter name hints seem to be a result of having functions in general, and the type hints look like a direct result of static typing; I like the idea, but it doesn't appear to me to be tied to Scala the language at all.
The macro expansion is more tightly tied to Scala itself, as are things like showing implicit parameters. To a certain extent the things you mention are generic things, but Scala pushes type inference further than other languages that that IDE supports and makes more use of static types in general, so the IDE might well not have bothered with the type hints without Scala. Likewise Scala makes fuller use of named parameters than many languages (in Java parameters have names but you can't pass them with name=value syntax).
> Why are people so intent on bashing somebody else's language choices?!
Partially it's simple tribalism. But part of it is also the rational awareness of the opportunity cost of investing in a language other than the author's preferred one. The more people using language X that I don't like, the fewer people using my preferred language Y. That means fewer libraries I can use, docs I can read, bugs that get fixed, etc.
Language ecosystems aren't entirely zero-sum, but they aren't totally orthogonal either.
That might be true to some extent, but many languages have completely different target audiences. People whose favourite language is Go will probably not move to Scala (and vice versa). Kotlin vs Scala is an easier to understand competition.
In any case, if people want other people to invest in "their language", they should focus on making that language and its ecosystem compelling to use, not bash other languages...
> People whose favourite language is Go will probably not move to Scala (and vice versa). Kotlin vs Scala is an easier to understand competition.
That said, in terms of language features and type system, Kotlin is arguably closer to Go than it is to Scala. Yes, Kotlin competes with Scala on the JVM, but it's a very different language. Scala is much closer to a language such as OCaml than it is to Kotlin.
I have seen two types of bashers. The first seems to be typical line-of-business app developers: they disparage other languages and praise theirs on basic things like IDEs, libs, etc. I don't mind these much.
The others approach from a position of authority: compiler hackers, language authors themselves, very senior developers, etc. Ideally criticism from them should carry more weight, but more often than not I have seen them make bad-faith arguments, justifying their dislike with precise technical points so that they can't be challenged on non-technical grounds. It makes me wary of their arguments even on topics other than their favorite programming language.
>That might be true to some extent, but many languages have completely different target audiences
That's somewhat by happenstance though, isn't it? Take Go, build a good enough scientific computing library, tease out good enough performance, get enough coworkers on it, and you'll probably have Go advertising itself as a scientific computing language (when talking to the relevant people).
And then you'll probably have Go language devs implementing features better targeting the scientific computing community.
And as the new people feed in, implementing their own needs and libraries, suddenly Go becomes good at ML...
And eventually we reach an 80s C-like status, advertised for nearly everything, because of its heavyweight ecosystem.
In terms of ecosystem, what a language is good at really isn't constant. What might be constant is the flavour of programming the language prefers; you're probably not getting rid of goroutines as a significant language feature no matter how big Go gets.
Here's my view on languages: They have strengths and weaknesses. They cater to different needs, as you say. If you have the need for what Rust, say, does, and Rust solves some real problems for you and makes your job a lot easier, then it's kind of natural that you think Rust is wonderful. In fact, what you found is that Rust is wonderful for that problem, not that it's wonderful in general. But it's real easy to think that your situation is more universal than it is, and therefore that Rust (in this example) is this wonderful language that makes all of programming so much better.
Once you've fallen into that flawed perspective, then it becomes easy to criticize other languages. Why would you ever want to use Scala? It doesn't have Rust's advantages. But people miss that, if you have a Scala problem rather than a Rust problem, and you pick Rust anyway, it's not going to go well...
Part of the issue with alternative JVM languages is that there isn't a good standard way to mix them within a single application. There are various compatibility layers for calling foreign functions but nothing baked into the lower-level platform. So that forces a degree of competition and mutual exclusion.
Whereas with Microsoft's .NET CLR there's a standard calling convention supported by all the languages. So it's easy to mix and match C#, F#, VB.NET, etc within a single application or reuse libraries. So developers have more freedom to pick the best language for each problem domain.
(It looks like Hacker News filters out the Unicode sharp symbol. Why?)
So many cool things already done and even more to come. Some parts that excite me as a Scala nerd (a rough sketch of a few of these follows the list):
* One can now use implicit function types to basically build your own table language syntax that is type-safe. [1]
* Multiversal Equality: you get to decide whether it makes any sense to compare an Apple and an Orange using "==" or "!=", as opposed to Java's forced requirement of allowing you to compare anything with anything. [2]
* Null safety checks! Quoting from [3], "Adding a null value to every type has been called a "Billion Dollar Mistake" by its inventor, Tony Hoare. With the introduction of union types, we can now do better. A type like String will not carry the null value. To express that a value can be null, one will use the union type String | Null instead."
* First-class enums, finally. [4]
* Erased parameters: you can declare variables specifically for type-safety that _don't exist_ during run-time, improving efficiency. [5]
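For the curious, here's a rough, hypothetical sketch of what a few of these could look like (based on the current Scala 3 proposals, so syntax may still change; assumes the strict-equality and explicit-nulls compiler options are enabled):

    import scala.language.strictEquality

    enum Fruit derives CanEqual:            // first-class enums; CanEqual opts into ==
      case Apple, Orange

    // Under explicit nulls, String no longer includes null; a nullable
    // string is written as the union type String | Null.
    def describe(name: String | Null): String = name match
      case null      => "unknown"
      case s: String => s

    @main def demo(): Unit =
      println(Fruit.Apple == Fruit.Orange)  // compiles: Fruit derives CanEqual
      // Fruit.Apple == "apple"             // rejected under strictEquality
      println(describe(null))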
> * Multiversal Equality: you get to decide whether it makes any sense to compare an Apple and an Orange using "==" or "!=", as opposed to Java's forced requirement of allowing you to compare anything with anything. [2]
Unfortunately it's being introduced in a fail-unsafe way that as far as I can see makes it virtually useless. If I see "x == y" in code, I have no way to be confident that this isn't an old-fashioned universal comparison without going into the details of x and y, so I'm no better off than I was without this feature.
I'm impressed with what they're doing (and love the language), and it's hard work, and their speed is faster than, say, Java's dead-slow evolution in the 2000s.
But, poking around at TypeScript, I've been blown away by the MS/TS speed of development. ~2-3 month release cycles, with non-trivial changes to the language (mapped types, conditional types, etc.) that are themselves unique/novel type system features, not like Java finally getting around to copying the obvious/best-practice approach to case/data classes.
Granted, I'm sure the MS/TypeScript budget is huge comparatively...
Seems like that's the "best" (realistic) way for dev tooling to evolve lately: come from, or attach yourself to, a ~top-5 tech company that makes their money somewhere else and can bankroll developer tools/languages (e.g. MS with TS, FB with React/Flow, Google with a myriad of things e.g. Dart & AdWords, Go).
Bringing it back to Scala, seems like they were close to this (flirted with adoption/sponsorship from a few startups like FourSquare, etc.) but, thinking about it now, "software development in the large" is a huge concern for those companies (e.g. anyone with enough extra money to actually pay for language/tooling improvements), and that's never really been Scala's strong suit (compile times, painless compiler upgrades).
TypeScript has a comparatively easier job because the type system is unsound. They can add new type features without needing to be paranoid about a hole in the system or a usability pitfall because the tools don't rely on types for correctness or optimization, and because users can easily cast to "any" or otherwise work around an annoyance.
When you want the whole system to actually deeply rely on the static type system, it needs to be much more robust, and that takes a lot of effort.
I'm not trying to minimize what the TypeScript folks are doing — it's a really nice, well-designed language. But the complexity required to evolve an optionally-typed language is closer to that of a dynamically-typed one than a statically-typed one.
Good point; I've mused before that dynamic languages will always be a ~generation ahead of static languages, due to what you point out, that type systems are hard, and very few people start working on a new type system as their hobby project (vs. hacking out a new syntax with an interpreter).
But I hadn't appreciated the difference in difficulty between an unsound type system like TS and Scala's type system...
I guess I've taken the formalism that underlies Scala's type system (e.g. the academic research behind dotty) as a given/for granted, but you're likely right, that TS can move faster because it doesn't need/have that level of rigor.
Is it actually unsound if you perform no casting? That seems to be a more interesting property, because then the additions they're making still have to mesh cohesively.
Contravariance then, right. Thanks! I haven't gotten into Typescript yet, and at this point it looks fairly large and a little daunting, so this is helpful.
The major tradeoff, IMO, is that it is even toggle-able at all, let alone being a feature switch that is enabled by default.
If you enter any random TS project today, odds are it isn't using that switch, and enabling the switch will lead you down a month long rabbit-hole of fixing cascading type errors.
Not only that, but since it isn't required many libraries in the ecosystem won't be using it. Comparing my experiences with TypeScript and Flow vs. my experiences with Haskell, any optional type system is going to leave a lot of gaps if you don't have a pretty strict not-invented-here culture.
For what it's worth, this isn't just an evolution of the language, but a complete rewrite of the compiler. I appreciate that they're taking the time to do it right. They probably could've pushed new, non-trivial features out faster if they weren't doing a total rewrite.
Pity that in this rewrite they were unable to improve compilation speed. From the latest reports, Dotty appears to compile at about the same speed as scalac (if not more slowly).
Where did you get that from? I've seen performance increases of 2-3x mentioned in conference talks, e.g. here: https://d-d.me/talks/scalaworld2015/ slide 50.
This is pure speculation on my part, but my guess would be that better, simpler foundations and a fresh start end up unlocking a lot of optimization opportunities in the long run. But it would seem foolish to invest too much in optimization while things are still in flux.
Scalac has evolved considerably, but it appears to be much harder to evolve. The Scalac team and contributors are backporting some of the Dotty features, but this is far from a trivial effort. Doing the same changes in Dotty is typically much easier in comparison.
The Scala compiler has to be cleaned up. Maybe rewritten, maybe not. Dotty started as a fork of Scala, and now Odersky decided/announced that it'll be merged back eventually. (And it should conform to whatever the Scala language is/will be after the SIP processes for 3.0.)
Correct, but between now and 2020 it's almost certain that, in aggregate, TypeScript will have significantly more `features` than Scala 3.0, and they will have had the benefit of being used and iterated upon for 2+ years.
Like many things, the utility of a language can actually decrease as you add more things on to it. Both Scala and Go take a principled approach of only accepting heavily vetted features.
The time it takes to develop a feature is very rarely the bottleneck.
That's my negative point about Kotlin (sorry, offtopic here, but still). It moved fast before release, but since then there have been only very minor releases (well, coroutines were a nice addition, and that's about it), despite the fact that there's a lot to improve in Kotlin.
There are a lot of comments pointing out pros and cons of Scala, comparing it to other languages using superficial proxies.
I'd like to share a different perspective based on my own work with OOP-centric and FP-centric languages.
If you take OOP to its logical conclusion, OOP is a slippery slope that eventually leads to Gang-Of-Four centric designs.
If you take FP to its logical conclusion, FP is a slippery slope that eventually leads to Monad-Transformer centric designs.
Both are equally valid ways to design complex applications, so it all comes down to individual taste.
Personally I have a taste for mathematical abstractions, so Scala is better suited for my way of thinking.
It's obvious that Scala was specifically designed to be a solid foundation on which one can implement the concepts of a little-known branch of mathematics called category theory, and almost every design choice in the language flows from there.
If we use this as the basis for comparison to other languages, it's very easy to understand that Scala has carved out a very powerful niche for itself and is here to stay.
> It's obvious that Scala was specifically designed to be a solid foundation on which one can implement the concepts of a little-known branch of mathematics called category theory, and almost every design choice in the language flows from there.
Nonsense. Scala was designed to make programming easier and safer than Java - XML literals and pattern matching certainly have nothing to do with category theory. Scala's for/yield has surprising behaviour when e.g. mixing lists and sets, precisely because it doesn't have a categorical grounding and was implemented in terms of what working programmers wanted to do with it rather than enforcing that you've formed a valid semigroupoid in the category of endofunctors.
Over time it's emerged that some categorical constructs provide useful ways of thinking about code and solving programming problems, but you're putting the cart before the horse if you think the language was designed for category theory.
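Edit: to make the list/set point concrete, here's the kind of surprise I mean (a small sketch; the result type follows the receiver, so the same for/yield body can silently drop results):

    val fromList = for { x <- List(1, 2); y <- Set(1, 2) } yield x * y
    // List(1, 2, 2, 4)  -- four results, duplicates kept

    val fromSet  = for { x <- Set(1, 2); y <- List(1, 2) } yield x * y
    // Set(1, 2, 4)      -- same products, but the Set receiver swallows a duplicate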
> Scala's for/yield has surprising behaviour ... because it doesn't have a categorical grounding
Not sure what you're talking about, but for/yield is syntactic sugar for map/flatMap, just like the "do notation" is in Haskell. Of course it has theoretical grounding, because it wouldn't work without flatMap being the monadic bind.
> mixing lists and sets
Again, not entirely sure what you're talking about, but if true, it's entirely unrelated to for/yield.
> If you take FP to its logical conclusion, FP is a slippery slope that eventually leads to Monad-Transformer centric designs.
I don't understand what you mean by this, and I write production haskell for a living. We don't have teetering towers of transformers, and the best advice I've seen is often "put away the shiny tools and just use functions",
https://lukepalmer.wordpress.com/2010/01/24/haskell-antipatt... . Similarly in http://www.parsonsmatt.org/2018/03/22/three_layer_haskell_ca... , which is like one real transformer layer and a way of claiming only the capabilities you need in your impure code. His "invert your mocks" article talks about this, too.
Some are fixed (Either, inference for many of the type lambda cases), some are being fixed in the next version (container overloads, CanBuildFrom), some are not really issues at all (free theorems are fine, an extra map on a monad is not a problem and sometimes more efficient, specific compiler bugs have been fixed but never said anything about the language in general, Haskell as used in the wild (with orphan instances) doesn't guarantee typeclass coherence either), some are real but exaggerated issues (subtyping and implicits, type inference for tricky recursion), a few are genuine issues that remain and probably always will (having to trampoline your monads, no kind system).
It's been a while since I lurked Haskell forums/haunts (was into it way back when), but at that time Lens was a big deal...if that's not a highfalutin library made for the most entrenched monad geeks (and for the purpose of making setters and getters, no less), I don't know what is. I guess I'm really shocked to hear that Haskell is all pragmatic and simple now with slim abstractions...but I'd love to be corrected.
I'd suggest that you're giving Lens pretty short shrift. It's not "just" getters and setters, as it works on immutable data. It also has a composition story that's better than normal getters and setters and _much_ better than other immutable data update stories.
I disagree with that. Haskell is category theory as a language. The Scala community got invaded by Haskellites trying to turn Scala into Haskell, but there’s a particular style of OOP/FP hybrid that requires a language like Scala to use/teach/explore. The problem is that in creating the tool to enable that, we ended up with a language that was too big to have a coherent style, which led to an incredibly fractured community.
> I disagree with that. Haskell is category theory as a language
No it is really not. Haskell only really has one category of any import, typically denoted 'Hask'. There is no builtin way to embed or express arbitrary categories in Haskell. It's not even clear what this would mean, given that Haskell only has one conception of arrow (the function type '->'). In fact, Haskell is so far removed from true category theory that we have to use a compiler plugin (CCC by Conal Elliot) to get anything approaching compilation to arbitrary categories.
Haskell is based on the polymorphically typed lambda calculus. Some decades after its design, some people realized they could use some abstract algebra concepts to help structure programs. You can use these concepts in any language with a sufficiently expressive type system. It is not specific to Haskell.
Designing good multi-paradigm languages is very hard. I'm a big fan of Mozart-Oz and Common Lisp, and I think Odersky did a really good job with Scala. In fact, given that Common Lisp is languishing and Mozart was never a serious real-world contender, Scala fills a niche where there are not many competitors: C++ for just some use cases, Julia for others, and .NET.
The fact that many organizations working on massive datasets have embraced Scala proves there are not many alternatives around, sadly.
I don't think CL is attracting a lot of new developers or that a lot of new libraries are getting developed. But I would be glad to be proven wrong. Perhaps Racket will eventually become a CL replacement, now that it's adopting a lot of Chez low-level stuff and its multiparadigm efforts keep growing.
My guess would be the JVM. Racket is an academic language, designed for academia. At least it markets itself as such. Made to explore the design space of programming languages.
Clojure was designed and marketed for the enterprise. Builds on the JVM, full Java interop, emphasis on pragmatism.
I disagree with this. Haskell's creation [1, 2] predates the realisation (by the FP community at large) of the close connection between parts of category theory and parts of pure functional programming, which happened in the 1990s, perhaps driven by Moggi's realisation that monads are a fundamental abstraction in computation that can reconcile effects with pure functional computation. More importantly, there is no category "Hask" of Haskell programs, and there isn't even a candidate category that's even close.
One of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world, although this is not the only motive (tight integration of OO and FP being another).
Haskell's key innovation over its predecessors (Miranda and ML) was the addition of higher-kinded types, which in turn made ad-hoc polymorphism digestible in a typed world (in the form of type classes). The combination of HKTs and type classes enables the monadic abstractions that contemporary Haskell programming is widely known for today.
Note that Scala has HKTs and type classes (via implicits), so Haskell programs can typically be transliterated into idiomatic Scala without major complications.
> One of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world, although this is not the only motive (tight integration of OO and FP being another).
I'm not sure why you would think that. Odersky has long claimed ML as his primary functional programming influence, not Haskell. It does have HKT, so there's that...but that's about it. Typeclasses aren't really a part of the language. They were made possible by implicits. They have since had some language level support (context bounds), but that is more of a nice to have than a primary influence.
Just because Scala can "do" Haskell doesn't mean Odersky was inspired by it any more than ML or Java.
From an abstract PL theory POV, Haskell's key innovation over ML was HKTs. It was an experiment at the time, but one that was successful beyond all expectation: I cannot imagine designing a new PL without HKTs. (Note that Rust is also trying to add HKTs, but has been running into difficulties with type-based lifetime tracking, IIRC.)
Implicits are a generalisation of default arguments (I don't know where they were pioneered, maybe C++). Implicits were first tried in Haskell [1]. Scala refined them in several steps, the last being [2], which represents implicits at the type level. Mimicking type classes via implicits is a well-established Scala idiom [3].
In fact, ML modules have had higher-kinded types [1] since before Haskell even existed. I guess Haskell's main innovation in this domain is really its very convenient higher-kinded parametric polymorphism with type classes.
[1] MacQueen, D B (1984). Modules for Standard ML. Conference Record of the 1984 ACM Symposium on LISP and Functional Programming Languages. 198-207.
Thanks. That's really interesting. I need to read MacQueen's paper. Is this full HKT, in the sense that all constructions with HKTs can be encoded in MacQueen's system?
If by "all constructions wiht HKTs", you mean what can be done with HKTs in Haskell, then I'd say yes. It is well-known that ML modules provide a very advanced level of expressiveness, especially since OCaml's introduction of first-class modules. Also, Scala took this idea further and provides principled recursive first-class modules (which Scala calls dependent object types).
The problem is that modules in ML have a verbose syntax and are clunky to use compared to type classes. OCaml's modular implicits aim to make this better (see https://arxiv.org/abs/1512.01895), taking inspiration from Scala's implicits.
I understand that making modules first-class gives them a lot of expressivity. But they are a fairly recent development: at least in OCaml they come much after Haskell. But are modules in MacQueen's sense first-class?
> Implicits are a generalisation of default arguments.
There is an extreme semantic difference between an implicit parameter and a default argument. Have you spent any time at all using them? That's like claiming HKT is just a generalization of type constructors. Would you claim that HKT isn't anything new or novel because constructors have been around forever?
Default arguments and implicits both have the same key idea: you can omit arguments to functions, and the compiler, guided by type information, synthesises the missing arguments during compilation. In order to understand the difference between the two, it is crucial to realise that this compile-time synthesis of missing arguments has two related but different dimensions:
- Declaration that an argument is allowed to be omitted (and hence synthesised automatically).
- Declaration of the missing argument that is used in this synthesis.
Default arguments merge these two into one, e.g. with

    def f(x: Int, b: Boolean = false) = ...

    f(2)
    f(2)

all calls f(2) become f(2, false). The problem with this is that the default value to be used in synthesis cannot be context dependent. Implicit arguments separate the two, enabling the programmer to make the default context dependent, e.g.

    def f(x: Int)(implicit b: Boolean) = ...

    { implicit val c = true;  f(2) }
    { implicit val c = false; f(2) }

(each call in its own scope so the two implicit vals don't clash). Now the first call f(2) is rewritten to f(2)(true), while the second becomes f(2)(false).
> Would you claim that HKT isn't anything new or novel because constructors have been around forever?
I'm not sure I see the connection: constructors are program constructs, while HKTs are "types for types".
> Default arguments and implicits both have the same key idea: you can omit arguments to functions, and the compiler, guided by type information, synthesises the missing arguments during compilation.
Saying implicits are a generalization of default arguments because they're both synthesized during compilation is like saying steam trains are a generalization of pipes because they're both made of metal. It elides too big a difference to be helpful, to the point that it's actually misleading.
I'm not saying implicits and default parameters are the same thing. I'm saying the key idea behind implicits is the realisation that default arguments merge two ideas that should be kept separate, if we want context dependent defaults. Some of implicits' other features can be (and have been) added to default arguments, see my other reply.
> I'm saying the key idea behind implicits is the realisation that default arguments merge two ideas that should be kept separate, if we want context dependent defaults.
This isn't true though - that's not what implicits are for and not how they're used. Indeed the clearest proof is that Scala still has default arguments, and they still have the behaviour given in your example. They're different things.
I did not say implicits and default arguments are the same thing. I said that implicits improve upon default arguments in various ways, the key realisation being the split between providing elided values in a context-dependent way, and permitting the elision of values.
I should not have said the key idea, but a key idea.
Default arguments are convenient if you don't need context dependence of elided arguments. I'd probably remove defaults in order to have a smaller and simpler language.
Maybe a disagreement here is fueled by the power level difference between these two features. Beyond the superficial injection examples, Scala implicits also support injecting dynamically constructed values, as well as recursive/derived implicits.
Your example of nested functions with default arguments does not correspond to what derived implicits do. As a result of implicit resolution, an expression with an arbitrary number of subexpressions may be synthesized, the shape of which depends on the types involved. Default parameters simply cannot do that.
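To illustrate with a minimal sketch (names made up): the compiler assembles the implicit argument recursively from whatever pieces are in scope.

    trait Show[A] { def show(a: A): String }

    implicit val showInt: Show[Int] = (a: Int) => a.toString

    // A derived implicit: a Show for any pair, built from Shows for its parts.
    implicit def showPair[A, B](implicit sa: Show[A], sb: Show[B]): Show[(A, B)] =
      (p: (A, B)) => s"(${sa.show(p._1)}, ${sb.show(p._2)})"

    def display[A](a: A)(implicit s: Show[A]): Unit = println(s.show(a))

    display((1, (2, 3)))
    // the compiler synthesises showPair(showInt, showPair(showInt, showInt))

No default-argument mechanism can build that nested term for you.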
The example you give to justify "recursion is quite restricted" is not a restriction on recursion at all, it's an ambiguity problem. Define `a` as `y` to shadow the function parameter, and it compiles.
Thanks for the Simplicity reference, hadn't seen that yet. Seems like it builds on the implicit calculus [1], which applies more generally to any lambda calculus than in the Scala context.
Would love to see a citation re: Odersky (or if he'd chime in here).
My general assumption is that if Scala was intended to be category theory, implemented, something along the lines of scalaz or cats would have been built into the language -- the philosophy of the language does not include a JS-esque philosophy of small stdlib, big library ecosystem.
What does this have to do with the original assertion that "one of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world"?
AFAIK, Odersky doesn't particularly like people trying to replicate Haskell patterns in Scala. For example, he thinks using monads for most effects is inappropriate.
Disclaimer: I'm nobody important, but I do have an opinion.
If the Scala team were to disavow sbt, that'd be the single best thing they could possibly do for the ecosystem.
I used to write a lot of scala, and working with sbt was enough to eventually get under my skin.
I really like some of the OOP aspects of Scala; the left-to-right style of thinking matches how my brain works. Scala having a lot to offer can distract new learners, especially with just how overwhelming all the different features can seem to someone not coming from both sides. I had OCaml and Ruby under my belt when I found Scala, so it was an easier curve, but lots of guys struggle with it. I don't mind showing them, and of course I learn a lot from guys with different experiences and different backgrounds, but that's sometimes a complaint I hear, also.
It's a language that doesn't compare very well. What is Scala like? We don't know; there's only one language like it. It's a departure from many different paradigms.
I'm back to writing ocaml, all said and done. I enjoyed my time with scala, and interacting with my guys on the other side of the office still doing scala is always enjoyable. Smart guys! I'm not academic enough to grasp some of the new DOT stuff, but they are excited about it.
Anyways, I gave up hope a while back on Scala getting rid of sbt. There are other build tools that exist (cbt, mill, maybe more), but without the blessing of Lightbend, or whatever they call themselves this week, it's not feasible, or responsible, to deliver a solution to a client with an unofficial build setup. Of course, after five years of doing sbt I decided it's not OK to deliver sbt either.
Thirded. There are some exciting things happening in the Scala build tool space right now. We're migrating everything to Bazel right now, including our Scala code. Bazel + Scala is not amazing yet, but so far I have enjoyed it much more than sbt. It's also very nice to use the same build tool across all our languages.
If you look at what they currently support in terms of plugin availability and stability, these are nowhere near being viable alternatives at the moment.
The only remotely viable alternative to SBT I've found is Pants. (I guess Maven and Gradle might qualify too, but it's been so long since I've used them with Scala that I can't be sure.)
The thing with Scala if you’ve come from OCaml is there’s no substitute for real Hindley-Milner, and the Scala object system isn’t good enough to compensate
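A small sketch of the gap, for anyone who hasn't felt it: Scala's local type inference needs parameter annotations where Hindley-Milner infers the most general type.

    // OCaml:  let compose f g x = f (g x)   (* fully inferred *)
    // Scala needs the types spelled out:
    def compose[A, B, C](f: B => C, g: A => B): A => C = a => f(g(a))

    // val compose = (f, g) => x => f(g(x))  // does not compile: missing parameter types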
I moved to Scala in 2012 (because I needed to work on the JVM) after over a decade of OCaml. I found that the absence of full type inference was not a major problem. The JVM library ecosystem, on the other hand... the lack of libraries (at the time) really hampered OCaml.
Mainly Spark, but it really depends on the domain and what libraries exactly you'll need; hell, I look at what kind of deployment routine the client wants and what their maintenance crew looks like compared to what they want from me.
There is plenty of scala in fintech, I can tell you that. But there's plenty of ocaml, f#, matlab, excel macros, r, kdb, cobol, fortran, c++, a vast array of different technologies.
Most places that choose Scala are choosing it over Erlang or Java or C#; they USUALLY only choose Scala over OCaml if they want to have some old OCaml code talk to new infrastructure that they want built in Scala, IME. Because of how performance works, and because multicore isn't there yet, OCaml is limited in application if they're willing to find, or already have, Scala programmers.
Up to a point. I had a look and an implementation of traverse is already getting pretty cumbersome, and something like recursion-schemes seems beyond what anyone's even attempted.
I don't think SBT is in any way officially endorsed in the first place? Many popular libraries use it, but there are plenty of other options. If you're not maintaining a public library that has to cross-build against multiple versions of Scala, IME there's no reason not to just use Maven, which gives you simple and well-documented builds.
What problems did you have with sbt? Depending on how long ago you used it, those problems might have been fixed. In the last couple of years, a lot of simplifications have been made to how sbt build scripts are written.
SBT is the worst build tool I've seen in the 20 years of programming across a dozen languages.
And by a very, very long margin. It has arcane syntax, is slow, is incredibly complex if you want to extend functionality, it has multiple configuration files, the ridiculous Ivy global lock and inability to fetch Ivy artefacts in parallel etc. And that's just the start.
> SBT is the worst build tool I've seen in the 20 years of programming across a dozen languages.
>
> And by a very, very long margin. It has arcane syntax, is slow, is incredibly complex if you want to extend functionality
This seems a bit hyperbolic and a little vague. So, I can't really address it directly. Interestingly, I find sbt much easier to extend than most build tools because sbt tasks generally don't rely on side effects to communicate. Obviously YMMV.
> it has multiple configuration files
Doesn't every build tool have this? Or do you mean multiple config file formats? If that is the case, newer versions of sbt have moved to a single file format.
> the ridiculous Ivy global lock and inability to fetch Ivy artefacts in parallel etc
I can understand this concern. Ivy is kind of annoying. Fortunately, a new artifact resolver called Coursier[1] has been written and it solves this problem. You can use it now and it is currently in the process of being integrated as part of the default SBT install[2].
Macros going wrong with complicated builds, implicits galore all over the place, just the usual complaints.
The thing that makes sbt difficult in practice for me is twofold. I don't want to deliver hackedy shit to a client that I then have to defend if their quality control comes back complaining about it. Being able to use regular Scala in sbt definitions is convenient, but at the cost of making things expressible that should wisely be done another way.
Which brings me to the second point, which is getting an sbt definition from a client: either they don't want to change it, or they don't have quality control so it's just awful, or it's so bad that someone has to rewrite it for the solution. The latter is the easiest to deal with, but that also costs time.
For those looking, what things do you feel have most improved the sbt experience?
> Macros going wrong with complicated builds, implicits galore all over the place, just the usual complaints.
I'm not sure I totally understand this. What do you mean by "macros going wrong"? It doesn't seem like that many implicits are used. They're mostly there to add methods to strings to make expressing things more concise. Do you think implicits shouldn't be used at all?
> The thing that makes sbt difficult in practice for me is twofold. I don't want to deliver hackedy shit to a client that I then have to defend if their quality control comes back complaining about it. Being able to use regular Scala in sbt definitions is convenient, but at the cost of making things expressible that should wisely be done another way.
> Which brings me to the second point, which is getting an sbt definition from a client: either they don't want to change it, or they don't have quality control so it's just awful, or it's so bad that someone has to rewrite it for the solution. The latter is the easiest to deal with, but that also costs time.
This seems contradictory. The first paragraph seems to be at odds with the second. In the first paragraph, it sounds like you are saying that you want to be able to write messy builds and not have anyone complain. In the second paragraph, it sounds like you are saying you don't want to deal with other people's messy builds. Am I understanding correctly? If so, do you want messy builds or not? How is this related to the build tool? What build tool prevents people from writing messy builds? Are you thinking of something like Maven that is very rigid?
> For those looking, what things do you feel have most improved the sbt experience?
Some things that may (since I don't know when you last looked) have improved:
* SBT 0.13 greatly improved the .sbt build file syntax (no more line breaks between settings, multi project in one file)
* SBT 0.13 also cleaned up the symbol soup. Now to define tasks/settings you only need to know 3 symbols that are fairly self-explanatory: ':=', '+=', and '++='. If you want to depend on another task/setting output you just have to do `taskName.value` in your task/setting definition. (A tiny sketch follows this list.)
* A new artifact resolver has been written[1] that fixes all of the issues with ivy. It can be used today and is currently in the process of being integrated in the default SBT install.
* SBT 1 introduced a build server concept with language server protocol support that is starting to improve IDE/editor integration
* This isn't strictly SBT, but IntelliJ now natively understands .sbt files, so you get autocompletion and code navigation.
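A tiny build.sbt sketch of those operators (illustrative values only):

    name := "demo"                                      // assign a setting
    libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test  // append one value
    scalacOptions ++= Seq("-deprecation", "-feature")   // append several

    // depending on another setting's output via .value:
    version := name.value + "-0.1.0"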
I was in a meeting where this just happened. And as liberal as I am, it is exactly PC nitpicking. You can also choose to lead by example rather than correcting people. Despite all the good intentions in the world, people may not react well to this style, making your contribution locally ineffective.
This is doubly true if the person you're correcting doesn't think what they're doing is "wrong".
Some of my biggest stressors in the past came from trying to change people or believing that I was entitled to have people act and react the way I wanted them to.
I don't give that privilege to others and I don't expect anybody else to.
I say guys because my developers and analysts and quality managers are all men. The only women I work with are two secretaries, and they're not even in my department.
What's too bad, friend, is that you have deluded yourself into believing you have the right of way over another person, for any reason whatsoever.
> Scala 3 code can use Scala 2 artifacts because the Scala 3 compiler understands the classfile format for sources compiled with Scala 2.12 and upwards.
I feel like this isn't getting much attention but it is a huge deal. A big part of the reason that Python 3 went the way it did was that users didn't want to upgrade until their libraries did, and libraries didn't want to upgrade until their users did. With Scala 3, users can upgrade pretty rapidly, freeing libraries to upgrade without fear of leaving their users behind.
Just curious, what companies (or kinds of companies) are betting on Scala nowadays? It seems that the people wanting "better Java" all decided they like Kotlin, and the functional programming people now gravitate more towards either F# (for .NET ecosystem) or OCaml/Reason (for the more unixy world). And the academic/research/learning crowd seems to like Haskell more.
Who's still in Scala boat? Are Google or Fb or other big cos known to give back to open-source growing any Scala codebases now?
I've been working for 3 years at a growing startup that has been hiring Scala engineers at least every half year since I started.
In the last ~18 months, the number of CVs we're getting has been constantly increasing. Part of that is probably that our startup's hiring is maturing, but the kind of CV is also changing.
I'd say that 3 years ago, there was an 80% chance that the applicant was highly self-motivated to learn Scala in their free time, and tried to introduce (or did introduce) it at their current workplace.
Today, there is an 80% chance that the applicant either "had to" learn it in their current workplace, or learned Scala when switching jobs. (Don't get me wrong, they're still motivated, and they took the chance when it was there!)
So there is a switch from the Early Adopters to the Early Majority (where the Early Majority now has worked 1 or 2 years with Scala at their current job, and is confident enough to look for a new one).
One driving force was definitely Spark, but there are a lot of Enterprise apps unrelated to ML (usually with higher traffic requirements): the sort that would most likely have been done in Java or C# 5 years ago. It seems a lot of Enterprises introduce Scala when they try to break up their (Java) monolith into microservices.
So it seems that Scala has been carving out its place in backend/microservices with scalability requirements, and is eating part of Java's cake there.
> It seems that the people wanting "better Java" all decided they like Kotlin
I've been using Java and Scala exclusively for the past 10 years at two large NY banks and two fintech start-ups, and I literally do not know anyone who has ever compiled a Kotlin program.
Simply extrapolating from one's own experience is fraught with peril.
F# only has access to a subset of .NET deployment scenarios, where Scala can be used pretty much everywhere there is a JVM.
So most companies actually are more keen to bet on Scala than F#.
OCaml/Reason world lacks the wealth of Java libraries, which are an import away on Scala.
Regarding Kotlin, so far its killer use case is targeting Android, where one is stuck with an ageing Java subset. Outside of Android, it remains to be seen how it will keep up with the Java improvements coming up every 6 months now.
Kotlin and Java 8. I started using Scala in the Java 7 era and loved it. When Java was stagnating, it seemed like an alternate language that targeted the JVM was the best option. Progress with the language has really picked up a lot since. There is a lot that Scala has that Java doesn't, but now it's not enough to make it worth it. Java is so much easier to work with on a large team.
The two languages though also have preferred libraries / frameworks. I code in Java again and the main thing that bugs me is being always in the Spring ecosystem. There is a mindset in javaland that every technology needs to be wrapped by Spring. Scala gave me the freedom to choose alternative libraries.
Oof. I agree with you there. I never enjoyed Spring. I don't get the appeal. With Scala I loved Play Framework. I know it has Java support, but I never tried it. It used so many Scala idioms and features that I'm not sure how well it would translate.
Java is a simple language: the syntax is easy to parse and logic is generally straightforward to reason about. As a programmer you have very little ability to break expectations of how the language itself behaves (no operator overloading, etc.).
Scala, on the other hand, gives you a huge toolbox down to some really complicated to reason about features like implicit parameters, creation of completely arbitrary operators, etc.
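For example, something like this is perfectly legal Scala (a contrived sketch):

    case class Vec(x: Double, y: Double) {
      // any symbolic method name becomes an infix operator...
      def +|(that: Vec)(implicit scale: Double): Vec =
        Vec((x + that.x) * scale, (y + that.y) * scale)
    }

    implicit val scale: Double = 0.5  // ...and this value is threaded in invisibly

    Vec(1, 2) +| Vec(3, 4)  // Vec(2.0, 3.0)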
[Insert some funny pun here about Java giving you a simple tool while Scala gives you an incredibly complicated one]. They're both great languages, but they serve very different purposes and audiences. Kotlin happens to fit Java's demographic better than Scala as a result; it doesn't have the magic and complexity to the same degree Scala does (the most confusing new constructs probably revolve around builders/lambdas with receiver types, which aren't needed by most developers not writing DSLs).
That assumes that Java is the only funnel for Scala. Sure, Kotlin likely now captures more of the people trying to get out of Java, but how many people does it capture with backgrounds in Haskell, OCaml, Lisp, R, Python/Pandas, or any other language for that matter?
No? I came to Scala from R and Clojure. The point is that Kotlin has pigeonholed itself into being a language for people who can't stand Java. Scala is partially that, but it is also vying for the attention of anybody who does object oriented programming, anybody who does functional programming, anybody that cares about type safety, etc.
The idea that Kotlin is hurting Scala adoption rests on the assumption that the only people who would ever adopt Scala are the people who are trying to get rid of Java.
Eh, it mostly works on .Net Core. There's still some bugs with the compiler around Generative Type Providers when running the compiler on the .Net Core runtime, forcing you to compile with a build of fsc running on mono or the full .Net Framework.
Still, for all intents and purposes you are correct - but there are still some growing pains in the tooling that are hard to ignore.
Meh, the reason many people weren't targeting .Net Core until recently was because of the huge pain in the ass it involved. .Net Standard 1.0/1.1 were missing a LOT of API surface area that many libraries needed. .Net Standard 2.0 fixes this, and makes life easier for library authors who already support multiple runtimes in the process by providing a sane upgrade path from the hell that is PCLs. It's all about tooling; there's no reason for library authors not to target .Net Standard in their projects now unless they really need some Windows/Full-specific assembly that isn't included.
> Meh, the reason many people weren't targeting .Net Core until recently was because of the huge pain in the ass it involved
+1000. It wasn't even library support, it was the tooling that was a huge pain in the ass. Half the time it didn't work right, builds broke unexpectedly, build file formats kept changing, cripes what a pain. I swore off it until the early .NET Standard 2.0 prereleases when things seemed more stable, and it's been much easier to port my libraries.
I've been using .NET Core since it was released. Sure in the beginning library support was spotty at best, but that was quite a while ago. I have yet to find a popular library that doesn't support .NET Core/Standard.
> F# only has access to a subset of .NET deployment scenarios
And what subset would that be? The only place you can't use F# IIRC is UWP application which is likely not the deciding factor in choosing F# or Scala.
- Windows Forms works, but isn't perfect. The designer works if you install the template for it, but not the auto-generation of code (double-click on a button -> handler generated); you write that code by hand. Still, it gets used a lot. I use it from the REPL (fsi) to generate charts and custom data visualizations.
- WPF is the same: it works, but there's no designer (codegen is less needed there, though).
So it depends on how much time you spend editing the view (and why), versus the gains in the logic behind the view.
For me the overall tradeoff is worth it, but it depends, so you are right.
The only time I have ever felt compelled to design a UI visually is when working with iOS or macOS because the framework is so centered around Interface Builder. When I write a WPF or JavaFX view I am not using Blend/Scene Builder to drag and drop controls, but to have a mostly-accurate preview of what crap looks like without having to build and run.
If you're waiting for feature parity with C# in VS, you'll never get it. There are millions of C# developers and F# devs are counted in the tens of thousands (sadly) so there's a reason for that.
I don't believe most organizations considering adopting F# or Scala are considering them for GUI development so I'm not sure why you'd rule out F# because of that
This is kinda true, but not in the sense that you mean it.
And maybe not across the board, but my clients are very comfortable with the (very few) F# solutions we've done. OCaml clients generally use it for stability or correctness; they certainly aren't competing in the same sphere that Kotlin, Scala, and Java really operate in. It's prominent in the financial space especially, but the trend is the same everywhere.
Reason may eventually help OCaml move closer into those spaces, as people are coming to ML with more openness and are more eager to learn and get involved, so that'll be cool to see in the future.
Anyway, don't count F# out either. I'm no expert, having only ever used it with other project leads and with smaller clients, but MS is doing impressive work lately, like .NET Core and such. At the least they can show that making .NET more visible and flexible is part of what they're after.
Java won't be killed. Ever.
But people may stop writing it eventually.
Python is king in analysis, but for big data engineering, most of the building blocks (as mentioned elsewhere in this thread: Akka, Kafka, Flink, Spark, etc.) are written in Scala.
pyspark might be the go-to language for data scientists playing with the spark repl, or MLLib, but for production data engineering, scala is still king.
Besides performance and the obvious fact that not knowing scala makes it difficult to understand the underlying Spark code, there are multiple ways in which scala is more natural to develop in (many libraries are for scala only, for example).
I don't think so. Python and data frames are arguably more natural to think about and reason with than Scala.
I have no doubt that Scala is more performant, and the "fat" jar mechanism makes dependency management and code shipping very easy (it's still tricky to install Python dependencies on your Spark nodes), but the pandas ecosystem is definitely more intuitive to understand.
I have the impression you are leaning towards data analytics (pandas, data frames, etc.), whereas I and some other commenters may be thinking more of data-pipelining kinds of architectures, where you can't afford wrong typing, the scale is quite large, and you are not even doing the kind of operations pandas dataframes are useful for.
It depends on the use case. Our work primarily revolves around extending Spark with custom pipelines, models, ensembles, etc. to be deployed into our production systems (petabyte scale). Scala was really the only way to go for us.
I can understand performance difference, but I have not generally seen a difference in building custom pipelines and ensembles .. although I grant I'm not at your scale yet.
What kind of specific pipelines did you have trouble with in PySpark?
Although we decided to start using Scala specifically because PySpark was not as performant (2.0 was not so long ago), a reasonable use case I always keep in mind is aggregation (and in general any API that is still experimental or under active work). Python bindings are always the last to become available (because all the groundwork is done in Scala). We have a relatively large-scale process that takes advantage of custom-built aggregation methods on top of grouped Datasets, where we can pack a good deal of logic into the merge and reduce steps of aggregation. We could replicate this in Python using reducers, but aggregating makes more sense semantically, which makes the code easier to understand. Also, the testing facilities for Spark code under Scala are a bit more advanced than under Python (they are not super-great, but they are better), even without considering that being strongly typed makes a whole class of errors impossible, right out of the compiler.
I very, very rarely think of using PySpark (and I have way more experience with Python than with Scala) when working with Spark. In a kitchen setting, it would be like having to prepare a cake and having to choose between a fork and a whisk. I can get it done with the fork, but I'll do a better and faster job with the whisk.
I only checked the implementation of the "Arrow UDFs" recently, because I'm curious about the Arrow interaction, so I still don't have a strong opinion. My main concern is that a lot of the PySpark work is playing with how to interact with and speed up the system while still sitting on top of the Scala base.
I'd recommend Dask (haven't tried it much but from all I've seen is top-notch) to anyone who wants Python all the way down (at least until you hit the C at the bottom) ;)
Well, we run a hundred-machine cluster on Dataproc for doing our stuff. Dask is still not battle-tested or cloud-ready (or available), and is generally harder to work with than PySpark.
In general, I will stay happily in the spark world using pyspark rather than go to Dask right now.
Being able to pass data through Arrow is a big improvement, but there's still a lot of serialisation overhead you pay in Python. Also, if you want to do anything in the fancy areas (like writing your own optimisation rule for the SparkSQL optimiser), it's Scala. Even something as simple as writing a custom aggregator is impossible in Python (at least it was in 2.2; I haven't checked in 2.3 or "current" 2.4).
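For what it's worth, here's a rough sketch (all names invented) of the kind of typed custom aggregator being described, using the Scala-only Aggregator API:

    import org.apache.spark.sql.expressions.Aggregator
    import org.apache.spark.sql.{Encoder, Encoders}

    // Hypothetical input type, just for shape.
    case class Purchase(user: String, amount: Double)

    // A typed aggregator: the reduce/merge steps can carry arbitrary logic,
    // which is what the PySpark API didn't expose as of Spark 2.x.
    object TotalSpend extends Aggregator[Purchase, Double, Double] {
      def zero: Double = 0.0
      def reduce(acc: Double, p: Purchase): Double = acc + p.amount
      def merge(a: Double, b: Double): Double = a + b
      def finish(acc: Double): Double = acc
      def bufferEncoder: Encoder[Double] = Encoders.scalaDouble
      def outputEncoder: Encoder[Double] = Encoders.scalaDouble
    }

    // Typed usage on a Dataset[Purchase]:
    // purchases.groupByKey(_.user).agg(TotalSpend.toColumn)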
Scala is still primarily used for data engineering workloads due to the fact it is a JVM language. (There's Java too, but no one wants to write Java code)
PySpark is often used for data science experimentation, but is not as frequently found in production pipelines due to the serialization/deserialization overhead between Python and the JVM. In recent years this problem is less pronounced due to the introduction of Spark dataframes which obviates the performance differences between PySpark and Scala Spark, but for UDFs, Scala Spark is still faster.
A newer development that may change all this is the introduction (in Spark 2.3) of Apache Arrow, an in-memory columnar format which lets Python UDFs work with the in-memory objects without serializing/deserializing. This is very exciting, as it lets Python get closer to the performance of JVM languages.
I've played around with it on Spark 2.3 -- the Arrow interface works but still not quite production-ready but I expect it will only get better.
Many folks are making strategic bets on Arrow technology due to the AI/GPU craze (and an in-memory standard enables multiple parties to build GPU-based analytics [1]), so there is tremendous momentum there.
At some point I expect the relative importance of Scala on Spark will decrease with respect to Python. (even though Spark APIs are Scala native)
Scala has an insanely productive and efficient streaming ecosystem with the likes of Kafka and Akka streams. You can use those with other languages but it's not nearly as nice.
Why isn't it nearly as nice? Is there something particular about Scala such that Kafka or Akka are best implemented in it? Could you give some concrete examples (e.g. compare it with Java, C, C++, Rust, Haskell)?
(Compared to Java, C or C++ there's all the usual ML-family goodness - first-class functions, algebraic data types with pattern matching, type inference).
Scala has a for/yield construct similar to Haskell's "do notation", which is the perfect way to work with async code - it strikes the right balance of avoiding an unreadable callback pyramid of doom while still keeping your async yield points visible (the difference between <- and = is about as concise as it gets, but still very visible). And having HKT, and allowing you to express concepts like monads, means that a whole library ecosystem can build up of standard operations (e.g. traverse, cataM) that work on async futures and also on other contextual/effecty types (e.g. audit logging, database transactions). That's the big advantage it has over Rust and most other competitors, and it particularly shows in things like Kafka/Akka that need to work with async a lot, but also in large programming projects generally. (What it shows up as in practice is that you can do everything in "plain old code" - all the things that need reflection/interceptors/macros/metaclasses/agents/... in other languages are just libraries in Scala - and that makes development so much easier and bugs so much rarer.)
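As a rough illustration (function names invented), the <- vs = distinction looks like this with plain standard-library Futures:

    import scala.concurrent.{ExecutionContext, Future}
    import ExecutionContext.Implicits.global

    // Hypothetical async calls, just for shape.
    def fetchUser(id: Long): Future[String]          = Future("alice")
    def fetchOrders(user: String): Future[List[Int]] = Future(List(1, 2, 3))

    val report: Future[String] =
      for {
        user   <- fetchUser(42L)     // <- is an async yield point
        orders <- fetchOrders(user)  // another suspension
        total   = orders.sum         // = is a plain synchronous binding
      } yield s"$user has ${orders.size} orders totalling $total"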
Haskell has all that - the trouble with Haskell is that "there's no way to get to there from here". I was able to go from writing Java on Friday to writing Scala on Monday - not great Scala, but working Scala, and I was just as productive as I was in Java. I couldn't've done that with Haskell.
I think the parent comment was talking more about using those libraries with Scala, not necessarily implementing them.
Can't speak too much about the others, but I recently looked into using Akka with Java. Akka itself had a good deal of documentation for Java users. However, other related products (like tools for monitoring Akka, etc.) were clearly treating Java as a second-class citizen. When there was documentation, it would be outdated and unmaintained. I ran into this over and over again, which weighed heavily on my decision not to use Akka. I'm not quite sure that there is anything about the language that would have made it particularly challenging (I doubt it), but it definitely seems like the community built around it is more Scala-centric. And that in itself makes it challenging to use with anything else.
I took the comment to mean that Scala is the best choice for having implemented, e.g. Kafka, and that other languages somehow aren't suited for it, and asked the question with that in mind.
I still think Scala is a better better-Java than Kotlin, but Kotlin captures that market better because of the much lower learning overhead.
I'm much more of an ML-style programmer, but the thing keeping me away from OCaml is the lack of a multicore runtime. Not only is Scala's concurrency mature and performant, implicits (like having an ExecutionContext in scope) make it extremely easy to use. Even if OCaml released its multicore runtime today, it would take years to reach the maturity of the Scala ecosystem.
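A minimal sketch of what that means in practice (names invented): the implicit ExecutionContext threads itself through, so only the edge of the program ever names one.

    import scala.concurrent.{ExecutionContext, Future}

    // Callers never mention the ExecutionContext as long as one is in scope.
    def fanOut(ids: Seq[Long])(implicit ec: ExecutionContext): Future[Seq[String]] =
      Future.traverse(ids)(id => Future(s"result-$id")) // hypothetical per-id work

    // At the program's edge, bring one into scope once:
    // import scala.concurrent.ExecutionContext.Implicits.global
    // fanOut(Seq(1L, 2L, 3L))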
Plus, occasionally I run into areas where OOP really blows away functional programming. It's nice to not have to learn a new language for those occasional times.
> Just curious, what companies (or kinds of companies) are betting on Scala nowadays?
LinkedIn, Twitter, The Guardian, Morgan Stanley, Barclays, Zalando. Generally speaking, Scala is used a lot by companies involved with Big Data (because they use Spark).
I’m not sure if your info is dated or mine is, but all the systems development at LNKD that was being done in Scala is now done in Java. There is some legacy Scala still but as of a couple years ago they decided no new development. Perhaps they are still using Scala in a data science context, since it is hard to avoid there, these days.
It's trivial to avoid it in data science, with even modest to large (although not "Big") data in the few TB range. My data science team's entire stack is basically Python, with a smidge of Java and Rust for infrastructure (soa, pipelines) development.
I almost added the clarifying statement "on the JVM" but got lazy. Yes, Python is the obvious choice for data science until you hit a certain scale.
It's not even clear to me that most companies doing data science in Scala have that scale -- they're just using tools and libraries companies at that scale have open sourced. You could call it cargo culting, but I think it's more nuanced than that. I think engineers can be separated into two camps roughly: those who are passionate about the language(s) they use, and those who are simply trying to get the result they need and don't care what language they use to get it. A lot of data science engineers naturally fall into the second bucket, so using Scala because a library they want to use is written in it comes naturally, even if they could get the job done in Python (possibly with a bit more wheel reinvention).
Python is a good choice for data science even at relatively large scale. I'd question its suitability for stable, scalable deployment in production (not to say it can't be used there, just that I wouldn't necessarily reach for it first, preferring either C++ or Rust for that).
Scala just doesn't figure into the picture at all. I consider that some "Big" data tools were written in it to be a matter of trivia and not essential to the work of data science.
I think your productivity in Scala would be quite a bit higher than in Rust. I've done reasonable amounts of Rust and quite a lot of Scala, and _given the current state_ Rust is simply slower to develop in. C++, well, you know what the downsides there are if you prefer it over Python.
I think for a particular data science mindset (the category theory toting, bijection loving person) Scala actually _is_ essential to the work of data science. But these people are in a minority.
Anyway, if you're truly in the second category, then the fact that the best library for doing X is in Scala would mean you're going to write some Scala, despite the fact that it's ~accidental that it was written in that vs Python.
This doesn't mean they're using Scala to develop, necessarily. Some architects at the company I work for decided we needed Kafka, but there isn't a single line of Scala being written by anyone here.
Twitter created a website "Scala School" a while ago, so I'm guessing they didn't jump ship yet (but I could be wrong). https://twitter.github.io/scala_school/
Interesting that you see it that way. I see Haskell and PureScript usage growing, and I don't know many people interested in OCaml / Reason. Not trying to say you're wrong - just commenting on the effect of our respective filter bubbles.
As for Scala usage. I'm a Scala consultant, so I'm biased, but I'm seeing a lot of adoption. Most finance companies and most media companies are using Scala.
Yeah, I don't think Scala 3 changes anything with regards to perception and/or adoption of Scala. That ship has pretty much sailed.
Martin Odersky seems like a really good language designer. I took a look at the Scala 3 language features, and a more radical departure from historic Scala probably would have helped Scala.
Three or four years ago the biggest criticism of Scala was that releases were breaking backwards compatibility too much. On this very thread you'll find people worrying that Scala 3 will create a Python 2/3-style endless awkward transition. At this stage it's a mature language with a big established ecosystem, it can't afford to break too radically.
(Did you have specific breaking changes in mind? The most important changes I'd want to make to the language - having a syntax for guaranteed-safe equality comparison and guaranteed-safe pattern matching - could be done in a backwards compatible way, and the only other thing I can think that I'd like would be Idris-style totality checking which could also be backwards compatible.)
I wouldn't have called it Scala 3. And if I had the big brain of Martin Odersky, 5 years ago I would've started thinking about an intermediate step up in programming productivity and correctness - instead of doing Dotty. I don't follow Scala that much anymore, but it seems that it's mostly a reworking of the internal consistency of Scala from an implementation perspective. At least that was the gist I got from watching Odersky give a talk about it a couple of years ago.
> 5 years ago I would've started thinking about an intermediate step up in programming productivity and correctness - instead of doing Dotty.
Wait what? In the last post you were complaining it was too close to existing scala, now you're saying it's too big a step? Again, what is the change you're actually advocating?
You misunderstand. The intermediate step up in programming productivity has nothing to do with any particular language. I would not have done "Scala 3", but something else that implements an "intermediate step up in programming productivity and correctness". Probably something a bit radical, but hey, development is so stagnant when it comes to the way we program that it would've been worth the risk.
Hint, the stuff that Chris Granger was trying to accomplish.
I find Scala is the thing that's actually advancing programming productivity - even when it comes specifically to IDEs/HCI/visual programming - whereas I expected Granger's effort to fail as it did. So I'm very glad Odersky's continuing to focus on Scala; if anything I was worried that his efforts on Dotty were detracting from maintenance and improvement of Scala proper.
Could you (and perhaps others) say more about this? I was excited to read that key goals were simplification, clarification, and consolidation. But my exact worry is that they didn't go far enough.
I liked the idea of Scala and took Odersky's course. But I built a few things in it and it was never not frustrating. What Bruce Eckel said resonates with me: "I've come to view Scala as a landscape of cliffs - you can start feeling pretty comfortable with the language and think that you have a reasonable grasp of it, then suddenly fall off a cliff that makes you realize that no, you still don't get it."
My hope with Dotty, etc, was that they had learned a little humility and were pruning back enough to make it a decent developer experience. But I could well believe that it was impossible to do enough and still end up with something that could be fairly called Scala.
We use Scala heavily. If kafka/Spark/Monoids/Semigroups/Cats/Algebird makes any sense to you and you are looking for a job, send your resume to mansur.ashraf@apple.com
Here's one datapoint: AutoScout24 (the vehicle vertical of Scout24, a Europe-based digital marketplace) fully committed to Scala a couple of years ago as part of a re-platforming (microservices on AWS). It's still the recommended default language.
Not a "big" company by global standards, but not small either, with around 120 engineers.
> functional programming people now gravitate more towards either F# (for .NET ecosystem) or OCaml/Reason (for the more unixy world). And the academic/research/learning crowd seems to like Haskell more
What if you want functional programming without having to go to the MS stack, and to stay on the JVM? You only have two real options: Clojure and Scala.
How have people fared with Scala integration with Java libraries? When Scala first came out, I liked the idea: if I wanted to use X library, great, I could. As I pondered it more, this seemed terrible. Say I embrace the functional side of Scala. I know what I'm doing. Then I hire a Jr Dev. She comes from the world of Java/Python, where functional is not as big of a thing (keeping in mind that while Python does support functional programming to a degree, many use it as a procedural/OO language in school). Suddenly I have to worry about the Jr Dev doing silly things like introducing mutable Java objects into functional code. Side effects could start to show at odd times, which makes debugging terrible in a production environment.
All that said, how have others dealt with this issue? Did you just not work directly with Java libraries? Did you use the OO side as a Better Java?
There are many answers, often related to company size and interest in teaching. Some large bay area companies think that training people into a FP style of Scala is too expensive/hard to hire for, and end up using it as a nicer Java. Smaller companies that do not hire 100 people to work in Scala every year just bite the bullet, expect FP, and end up wrapping a large majority of Java libraries with FP abstractions, some thin, some quite big.
I prefer an intermediate solution when I can get away with it: Localize the mutable Java objects as much as possible, and just make sure that I don't leave a team/teammate that has little Scala experience all alone for a while. This often leads to styles that might be frowned upon by both camps of programming, but in my experience, dropping to imperative code when FP solutions are harder to optimize, while making sure that mutability is well contained and doesn't cross interfaces is Scala's happy place. Depending on the work to be done, each codebase can be pretty FP heavy, or be mostly imperative with an FP facade.
The real trick with Scala is really library design, though: it's very easy to make a new library that has dozens of new concepts and is hard to learn and use, all while exposing things like Shapeless HLists to the outside world, while libraries that are easier to consume and don't crush compilation times through type magic are often tougher on the author, leading to more code generation and macros. Most library authors know so much Scala that they don't realize that just using their library well incurs quite a mental cost for new developers.
I find that once you're working in immutable/pure style it's worth having an explicit boundary between Scala and Java, a class that exists only to wrap the Java thing you're using, and really it's pretty easy to do so - Java-style code sticks out immediately in code review (though you can implement rules to flag it if you really need to).
I've done all of these approaches in various projects - using Scala just as a better Java, with Java-style code, using Java libraries, then writing code that was in a more immutable style that used wrappers around Java libraries, then writing very pure (i.e. no unmanaged effects) code that either used scala libraries, or used Java libraries only in an interpreter for a monad. All of them work. Mixing mutable objects and pure functional style is never going to go well, and that's as true in Python-only projects (say) as it is in Scala/Java hybrid projects; the language interop doesn't leave you any worse off than you were without it, if that makes sense.
> How have people fared with Scala integration with Java libraries?
It's useful because the JVM ecosystem is much larger than the Scala ecosystem. There's almost always a library for your purpose. If it isn't as idiomatic Scala as you'd want, you can often easily create a small wrapper.
> Suddenly I have to worry about the Jr Dev doing silly things like introducing mutable Java objects into functional code.
Code reviews. Let developers peer-review each other's code changes and share knowledge about the functional programming practices used in the codebase. This is not a technical problem and has nothing to do with Scala.
Language design can discourage those bad practices and lessen the workload of code reviews.
Catching bad practices and anti-patterns in code review sounds fine on paper, but I'm sure we've all been in the situation where there simply isn't enough time in the day to do intensive code reviews while still maintaining the pace needed for delivery.
I’ve programmed in Scala most days for the past 3 years, we use plenty of Java libraries, and it works well. Generally it’s as simple as a thin wrapper around the library, that converts mutable data classes to immutable versions (or vice versa), and wraps blocking methods in futures. Sometimes it’s more complex than that, but when I’m reaching for a Java lib, normally it’s just for a few methods that are easily wrapped.
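A sketch of that wrapping pattern, with all names invented: push the blocking call into a Future and copy the mutable Java result into an immutable case class, so the wrapper is the only code that ever touches the Java types.

    import scala.concurrent.{ExecutionContext, Future}

    // Immutable Scala-side representation.
    final case class UserRecord(id: Long, name: String)

    // Stand-in for a blocking, mutable Java library class.
    class LegacyUserClient {
      def lookup(id: Long): java.util.Map[String, String] = ??? // blocking Java call
    }

    class UserStore(client: LegacyUserClient)(implicit ec: ExecutionContext) {
      def find(id: Long): Future[UserRecord] = Future {
        val raw = client.lookup(id)      // the mutable map stays contained here
        UserRecord(id, raw.get("name"))
      }
    }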
Most Java libraries that would cause issues due to blocking or side effects are covered by the big Scala frameworks like Play/Akka or the large healthy ecosystem of alternatives such as those from Typelevel, 47Degs and many others. If I do need to pull in a Java library it's usually used for something minor and without side effects. On occasion I've had to write small wrappers around blocking or mutable Java API's but it's no big deal. wrt to junior devs they generally write code based on established patterns and we review everything in detail.
When interacting with a Java library, I use scala.collection.JavaConversions to convert any Java structures to the equivalent Scala collection. Once I'm done operating on the collection (map, filter, etc.), I convert it back to Java.
You should be using scala.collection.JavaConverters instead, as it makes the conversion explicit (by adding a ".asScala"/".asJava" method to collections).
Implicit conversions (i.e. where a type is silently converted to another type) is dangerous and can lead to very surprising problems.
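A quick sketch of the explicit style being recommended (this is the 2.12-era scala.collection.JavaConverters; 2.13 later moved it to scala.jdk.CollectionConverters):

    import scala.collection.JavaConverters._

    val javaList = new java.util.ArrayList[Int]()
    javaList.add(1)
    javaList.add(2)

    // The conversion is visible at the boundary, not silent:
    val doubled: List[Int] = javaList.asScala.toList.map(_ * 2)

    // ...and back to Java when handing results to Java code:
    val backToJava: java.util.List[Int] = doubled.asJava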
My big question (not answered in the FAQ) is will it be faster or slower than Scala 2.12? Having worked on a big Scala codebase, the slow compiler speed is one of the biggest downsides to the language.
It's currently about as fast as Scala 2.12, which means it compiles at roughly 3000 lines/sec. However, it supports more aggressive incremental compilation than Scala 2 so at least in my projects, which are typically 50K to 100K lines, most compiles are under a second, and I rarely see compile times over 5 seconds.
There's ongoing performance work to make both Scala 2 and Scala 3 faster than they are now. A lot of what people perceive as slow compile times are actually other factors (such as inefficient macros or slow build tools).
> However, it supports more aggressive incremental compilation than Scala 2
I'm not sure what you mean here... but Scala 2 and Dotty have the same incremental compiler, and their independent bridges feature overall the same quality (which is high and heavily battle tested, for those interested).
It has to do with what kind of dependencies the incremental compiler wrapper (e.g. zinc) can extract from the baseline compiler (e.g. nsc or dotc). The Scala 3 compiler dotc handles that itself, which has the potential to achieve better accuracy. But I am not sure how much it matters in practice. We'll find out.
> which has the potential to achieve better accuracy
The quality of the used names extraction affects accuracy, but not compile speed. I must say, Dotty and Scala are extracting about the same amount of dependencies, with some corner cases being covered better in Dotty (for example, changes in constant values).
I don't see a way Dotty can become a faster incremental compiler that doesn't involve adding the incrementality in the compiler.
There was experimental work to see how fast a Scala compiler could be, but I think it's still far from production-ready; it was more of an experiment: https://github.com/gkossakowski/kentuckymule
I saw a presentation recently on Mill[1] which looks extremely promising and looks poised to remove the giant Scala development headache of working with SBT.
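For reference, a Mill build is itself just Scala. A minimal sketch, assuming the early 0.x syntax (module name and dependency are made up):

    // build.sc
    import mill._, mill.scalalib._

    object app extends ScalaModule {
      def scalaVersion = "2.12.4"
      def ivyDeps = Agg(ivy"com.lihaoyi::upickle:0.6.6")
    }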
sbt is terrible, and I have a lot of respect for Li as "the scala.js guy", but I'm slightly baffled that YABT (yet another build tool) was really necessary vs. just making the existing (say) gradle experience super-polished.
Granted, it's probably more fun to start from scratch :-), just seems more realistic for long-term adoption/maintainability to leverage existing tools.
The models of SBT/Gradle/Maven are broken, or at least, that's the premise for writing these build tools.
The models of Mill, Bazel, Buck, and Pants are fundamentally different in very important ways which allow, e.g., using a shared build cache across all the developers in a team (or even all the developers in a whole company, if one takes the monorepo approach for absolutely everything). This is not something that can be retrofitted into, say, SBT or Gradle.
It is a newer feature, so that might be why you're unaware of it.
Also, I get that the models of Bazel/Buck/Pants are fundamentally different, because they are inherently cross-language, "sit on top of other compilers" systems. And they are best for monorepos.
Ah, right I wasn't aware of that. Still, AFAICT Gradle isn't nearly as 'ambitious' in tackling the hard problem of hermetic/reproducible builds as e.g. Bazel, so who knows how reliable that build cache actually is.
You may well be right about Mill, but it's probably too early to tell exactly where it'll land in the design space.
(Btw, I think it's entirely right to be wary of yet-another-build-system, especially when it doesn't seem to do anything that other systems don't already do.)
I've never understood what SBT was supposed to solve in the first place. A Maven plugin that supports cross-building, which we're finally now getting with Scalor, seems like what the language actually needed.
This major breaking change to the language reminds me a lot of the Py2-Py3 fiasco. I wonder how Scala's community will handle the transition given the previous issues with cross compilation in the 2.x series. Given the multitude of language alternatives in a similar space to Scala, I'm worried. Fingers crossed they can work through this.
The shared library we are talking about is the upcoming 2.13 library, which has the new collections design. We currently link against the 2.12 library but will switch to 2.13 once that comes out.
Just wanted to say- thanks for your great work in making Scala possible. And wanted to add that going through your Scala Functional Programming MOOC was one of the most rewarding experiences.
Last time I tried Scala I had to give up because I couldn't understand what a "trampoline" was, or how to use it to get multi-function tail recursion to work. Probably not Scala's fault.
What level of Scala were you trying? The things you're referring to don't sound pedestrian at all. You shouldn't let them turn you off from ALL of Scala.
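For anyone else who hit that wall: the standard library actually ships a trampoline. A minimal sketch with scala.util.control.TailCalls, since scalac only optimises direct self-recursion:

    import scala.util.control.TailCalls._

    object Parity {
      // Mutual recursion like this would overflow the stack without a trampoline.
      def isEven(n: Int): TailRec[Boolean] =
        if (n == 0) done(true) else tailcall(isOdd(n - 1))

      def isOdd(n: Int): TailRec[Boolean] =
        if (n == 0) done(false) else tailcall(isEven(n - 1))
    }

    Parity.isEven(1000000).result // evaluates in constant stack space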
I find async/await to be much better for conditional execution, especially with lots of branches. For example, I had a scenario where I tried one async call and if it didn't have the information I needed, I had to make two subsequent async calls to get the information I needed before proceeding with the final call. Monadic composition was a nightmare where async/await was perfectly suited to the task.
For comprehensions are currently tremendously limited vs async/await. They constrain the top-level of your code structure to a tiny subset of the language.
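A rough sketch of the shape of that complaint (all names invented): the branch forces you out of a single flat for-comprehension into nested combinators.

    import scala.concurrent.{ExecutionContext, Future}
    import ExecutionContext.Implicits.global

    // Hypothetical calls mirroring the scenario described above.
    def primary(): Future[Option[String]] = Future(None)
    def fallbackA(): Future[Int]          = Future(42)
    def fallbackB(n: Int): Future[String] = Future(s"resolved-$n")

    // The conditional can't live inside one flat for-comprehension:
    val result: Future[String] = primary().flatMap {
      case Some(v) => Future.successful(v)
      case None    => for { a <- fallbackA(); b <- fallbackB(a) } yield b
    }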
It's a fascinating process: if you're a hot new language, you attract a lot of people that like shiny new things. When the next hot new language comes along and those people move on, have you attracted enough of the secondary wave to stay alive? It's going to be interesting to watch Scala adoption in the next few years.
I don't think it has a bright future, since Rust and Kotlin share much of what brings people to Scala and are better in many ways, including, perhaps most importantly, the fact that they are improving faster than Scala.
That's true. More importantly, Rust and Kotlin work on improving things that matter to users, not on features that read nicely in a paper, but are largely irrelevant in the grand scheme of things.
Just compare Rust's announcement on tooling (https://www.ncameron.org/blog/dev-tools-in-2018/) with the state in Scala, where pretty much every promised improvement has been "around the corner" for the last 5 years. :-/
Not sure how much can or will change here, given the departures of experienced tooling contributors in 2017.
I challenge the claim that Rust and Kotlin are improving faster than Scala. Scala is not only more stable, but is under active development, and there are lots of discussions about improving the language over time (check out our SIP meetings).
Kotlin doesn't even come close to what Scala is, and Rust is for those folks with the mentality that memory management is something worth keeping track of.
They're not charging for debugger access. CLion is the only IDE where they have tie ins to GDB etc. so the Native plugin is only for that, but most Kotlin users are on the JVM, where the IDEA Community Edition is not only free, but open-source.
Because I was trying to explain that the only reason Kotlin/Native is tied to (paid) CLion is that CLion is the only IDE where JetBrains have native GDB/LLDB integration, not that they charge for a debugger. This is demonstrated by the fact that where there would be the most potential customers (JVM Kotlin users), they don't charge anything.
The only reason they charge here is that CLion is primarily a C/C++ IDE (which explains why they have the GDB integration there) and they want to sell that, the C/C++ IDE. But given the debugging integration, it's also the best IDE to integrate with Kotlin/Native (which is free; the C++ IDE is not). Where they have a free offering (IntelliJ CE on the JVM), they charge nothing, even for the debugging GUI for Kotlin.
This is further demonstrated by the fact that their Rust plugin doesn't require CLion for its IDE features (even open-source IDEA will do); only if you ALSO want the debugging GUI do you have to go with CLion, a paid C++ IDE.
In other words, I am trying to get this idea across that:
1. The Kotlin/Native plugin is free.
2. The only IDE Jetbrains have with GDB integration is CLion.
3. CLion is a paid IDE for C/C++, not Kotlin.
4. Kotlin/Native plugin can only work with CLion for technical reasons, see 2.
5. Because CLion is paid and the Kotlin/Native plugin works only with CLion, it happens to come out to you having to pay for CLion to get the Kotlin/Native plugin working.
6. That does not mean that Kotlin/Native or the Kotlin/Native plugin themselves are paid products, they're not, nor are Jetbrains charging for access to the debugger itself.
They are charging for the experience of using a GUI based debugger.
I don't remember when it was the last time I was forced to use a command line debugger even for C and C++, typing s, n, l, p all the time. Maybe around 2000 or so.
I would bet on Scala over Kotlin for the server side. Java is starting to incorporate Kotlin features, so in a few years there won't be that much difference. Kotlin reminds me of CoffeeScript: it was a much nicer JavaScript, but then JavaScript just incorporated CoffeeScript's features.
Neither of those has or intends to have HKT, and they'll never be an acceptable replacement for Scala without it. They're "improving faster" but only in that they're starting from further behind.
The reason why Kotlin is gaining marketshare is specifically because they aren’t chasing after things like HKTs. Kotlin is squarely going after the better Java market and not the advanced FP one.
I think Kotlin's design - adding lots of ad-hoc support for specific use cases but without the underlying general constructs that make them coherent - will come to bite them as and when the language needs to evolve over time. (Indeed to a certain extent it already has, as Java 8+ adopts Options which don't play nice with Kotlin's ?. etc.) I guess we'll see how things look in a few years.
We may never get HKT directly, but associated type constructors can give you similar powers and fit into the language more naturally, along with impl Trait in traits. We’ll see how it shakes out.
I thought this RFC was for HKT only: https://github.com/rust-lang/rfcs/issues/324 - this is the one I have been paying some attention to (not that I use HKT that often in Scala anyway).
BTW I really like playing with Scala, but I also really like Go, OCaml, Clojure, Kotlin, Elixir as well. As well as the traditional Java/Python/C++ I feel like I'm drowning in choice.
I didn't realize so much of the Apache ecosystem had moved to Scala, which is huge.
OCaml users tend to avoid its object system, so while it attempted to fuse object-oriented and functional I don't think we can say it succeeded. F# is, as you say, later.
OCamlers tend to avoid OOP mostly because we can get away with modules, functors, polymorphic variants to model subtyping, and now also GADTs and open variant types to precisely model and add cases.
When we do need to use it, it is actually pretty elegant and well-designed. It's just that since we've tasted the power of the more functional abstraction techniques, OOP loses its allure.
I mean, I can understand all the reasons not to; it's just a pain that there is no sane way to do it besides reimplementing two APIs or coming up with your own collections.
Is this really such a big problem? You can either work directly with the Java collections if the code is performance sensitive, or have a `.toScala` call at your Java -> Scala boundary and a `.toJava` call at your Scala -> Java boundary.
As a heavy Scala user I'm excited by this. But really, I don't think it will improve Scala adoption, because the blockers there are all around developer experience (the tooling - sbt - is terrible, and feedback cycles are very slow).
Personally, for a lot of my current use cases (APIs) I'm veering towards Go more and more as I find it a lot more productive.
At the Scala Center we're working on addressing these problems. We now have working groups devoted to easing coding in Scala and making it easier for any dev to use any build tool.
I know people on HN like Scala, but it's by far one of the languages I hated most when I worked with it. It's the complete opposite of Go: every time the Go team said no to a random feature, the Scala team said yes to that random feature. Scala feels to me like a kind of functional Perl.
The Scala team appear to have their hearts in the right place. They seek to empower their users and have respect for them. I can't really say the same about the Go team, based on at least one Google presentation I have seen. This isn't that surprising given the goals of the projects.
I did not want to start a flame-war Scala vs Go, it's just that I like Go simplicity & readability (I also like Python and Ruby for those reasons as well) whereas I'm not a huge fan of the Perl feeling of Scala.
When there's no ability in the language to abstract, there's no abstractions to learn, beyond what is built in. But this is not a blessing, it means we have to study every line of code and hope that we can uncover some emergent higher-level structure, if it exists.
> hope that we can uncover some emergent higher-level structure, if it exists.
That's exactly what I feel when I read Scala code: everything is a soup of custom operators, implicits, and objects disguised as functions (or the opposite: you think it's an object, but actually it's a function). It makes the code very hard to maintain and read. At least with Go, Python, or Ruby your logic is in plain English, which makes it much easier.
I find the opposite. With Scala my logic can be plain English written in the language of the domain, because Scala makes it very easy to push out secondary concerns like authentication or audit logging or error handling out into the type system where they're still there, still managed, still checked by the compiler, still visible on mouseover/click-through in my IDE, but they don't have to obscure the straight-through happy-path business logic. Whereas in Go, all the secondary logic has to be right there, which clutters up the code and makes it harder to see the primary thread, and in Python or Ruby either you do the same thing or else the secondary logic has to work by invisible magic (metaclasses, exceptions, method_missing...) and you have no hope of ever being able to understand what's actually going on.
This is analogous to the tension between "worse is better" and "perfect is the enemy of good." On the one hand, elegance is impossible and one must settle early on inelegance. On the other hand, elegance is possible, but one must be clever and spend a lot of time looking for it. These trade-offs are really apparent in competing language designs (Scala, Go), and it isn't clear which ones are the better ones to make.
Right: phrased differently, weaker abstraction power means recurring problems have to be addressed by documenting patterns with fill-in-the-blanks recipes, instead of reusable code libraries.
You said [Scala] is "by far one of the languages I hated most". If you don't want to start a flame-war, maybe you should change the tone of the comment.
I share the same opinion of Scala. It definitely feels like it has too many features and is too complicated.
Back in 2010 - 11, when I last worked with it, the compiler was also dog slow and IDE support was terrible. No doubt this has changed for the better, but it did ruin the language for me. I'll stick to Kotlin and Java, thanks.
Can you be specific? This is a popular meme but it doesn't match my experience of Scala's language features at all (indeed I'd say Scala is very good at pushing things out into libraries or composition of existing features rather than making them language features), and often I find people with this kind of complaint were actually using a library with poorly named functions and mistaking its features for language features.
I would start with operators and operator overrides: most of the time they don't make sense, and it's difficult to know what they do; that's why most other languages banned them.
Secondly, the implicit concept makes the code difficult to read. Then I would add that the differentiation between function and variable isn't always clear, and sometimes you don't know what you're calling.
> I would start with operators and operator overrides: most of the time they don't make sense, and it's difficult to know what they do; that's why most other languages banned them.
Yeah, that's what I was talking about - in Scala operators are just functions, and badly named operators are an issue with some libraries rather than an issue with the language. They're unfortunate, but all one can really do is avoid them - no language is impossible to write bad libraries in. Certainly I wouldn't ever want to go back to a language that uses magically-named methods for operator overloading, like Python and Kotlin do - I find that far more confusing, I can never remember whether a * b is calling a.__times__(b) or a.__star__(b) or something else. I could get behind a language that disallowed symbolic names entirely, but I don't think I've seen one of those since Java.
> Secondly, the implicit concept makes the code difficult to read.
IDE support has gotten a lot better now (green underline to highlight any implicit conversion, expand out implicit parameters) which makes them much easier to work with. If you only use implicits in cases where you otherwise wouldn't do anything in code at all, they enhance readability - but of course if you use them to replace things that you'd otherwise make explicit then they can reduce readability. I don't have a good answer, because I definitely want extension methods, typeclasses with derivation, and the magnet pattern, but I haven't yet seen a language design that makes those things possible while disallowing the bad uses of implicits.
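For instance, a minimal sketch of the "good" kind of implicit, an extension method that adds nothing visible at the call site:

    object StringSyntax {
      // An implicit value class: `wordCount` appears on String with no
      // machinery visible where it's used.
      implicit class RichWords(private val s: String) extends AnyVal {
        def wordCount: Int = s.split("\\s+").count(_.nonEmpty)
      }
    }

    import StringSyntax._
    "hello brave new world".wordCount // 4 - reads like a built-in method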
Most of these criticisms I agree with, but I don't blame Scala as a language for it. I write, and encourage my colleagues to write, code that is very easy to follow and reason about. Few obscure operators, almost zero implicits...
They don't upgrade the open-source version because their internal version depends on that open-source version, and they can't upgrade internally to 2.12.
They can't upgrade because of several dependencies of their own, the last one Finagle.
All of this despite "cross compiles are no problem" being the default answer from the Scala development team on these issues.
I assume that the "added feature" section of the Dotty documentation and this blog entry have been written by different persons, because it's utterly impossible to reconcile those documents:
More complicated features, more experiments, more NIH, more ad-hoc inventions, larger language footprint with more inconsistencies and surprising behaviors.
Completely agreed. Implicits are the one place where Scala genuinely pushes the art of language design a bit beyond what's really possible; I expect future languages to come up with a better solution to the same problem. But in the meantime the things they enable - typeclasses, extension methods, the magnet pattern - are too valuable to do without, and we can't just sit around and wait for the perfect language to arrive.
I wouldn't want to have any more language features that pushed as hard as implicits do, but one somewhat-experimental feature in a language is ok, just about. I don't think Scala is a thousand-year language, but it strikes the best balance given the current state of the art.
If you don't like them, why don't you add "Use of implicits forbidden" to your coding style guidelines, and check in code-reviews, and add a "grep implicit" check to your CI setup that refuses to commit Scala code with implicits?
> If you don't like them, why don't you add "Use of implicits forbidden" to your coding style guidelines, and check in code-reviews, and add a "grep implicit" check to your CI setup that refuses to commit Scala code with implicits?
Because Scala libraries that do useful stuff are infested with them. Akka loves the things.
> main complaint
They obscure complexity, pretending that complex stuff is simple. It makes simple stuff trivial but complex stuff harder.
"If you don't like them, why don't you add "Use of implicits forbidden" to your coding style guidelines, and check in code-reviews, and add a "grep implicit" check to your CI setup that refuses to commit Scala code with implicits?"
Only someone who has never worked in a team before would suggest this.
So your biggest complaint is "Scala doesn't look like Python"? I mean, I love Python, but indentation-based syntax isn't that great. I think Scala is fine as it is.
Being brutally honest, compared to Haskell, ML and to a lesser extent Python, Scala's syntax is rather ugly and unnecessarily verbose. I'm almost certain its creators chose the syntax in order to tempt curly-bracket programmers, not because they liked it personally. I really hope that future high-level languages will use better syntax and move on from the C/C++ influence.
The constructor syntax can be a source of confusion. People have to be taught that the class declaration parameter list is actually the primary constructor, and that the body of the class is also the body of the primary constructor, except for any method definitions, some of which may be secondary constructors!
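A small illustrative example of that:

    // The parameter list after the class name IS the primary constructor,
    // and the class body runs as that constructor's body.
    class Person(val name: String, age: Int) {
      println(s"constructing $name")             // runs on every `new Person(...)`

      def this(name: String) = this(name, 0)     // a secondary constructor

      def greet: String = s"Hello, $name ($age)" // ordinary method, not constructor code
    }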
I don't think it's fair to refer to Scala's syntax non-specifically like that. Scala allows you to write code that looks like Java or code that looks closer to Haskell. It's a very wide spectrum.
You're probably referring to the worst of Scala.
Personally, I don't find expression-centric, brace-less Scala to be "rather ugly".
> Also many open source libraries in Scala land have a very short life time and will not be upgraded to Scala3. No problem with Java, but it will take months for many larger projects to remove Scala2 library dependencies from their code base.
The announcement says Scala 3 will be able to call Scala 2 libraries, so even if you're right (which is not my experience of open-source Scala libraries at all FWIW) this won't be an issue.
What web framework are you using? Both Lift and Play had major migration problems over the last 5 years due to cutting off support for older Scala versions.
A mix of spray/akka-http for REST backends (looking to shift to rho) and Wicket for HTML UIs. I don't think Play makes good use of Scala's strengths (frankly it didn't seem like very good code), and I didn't even realise Lift still existed.
> what happens if the 2.12 class file depends on old Scala library code?
Not possible - 2.12 is source- but not binary-compatible with 2.11, any "2.12 class file" will only have 2.12 dependencies.
So we're talking about the release planned for 2020 having binary compatibility back to 2016 and source compatibility back to 2014. Not the same level of backwards compatibility as Java by any means, but better than many languages offer.